2023-01-11T21:14:25.1603730Z Requested labels: linux.2xlarge
2023-01-11T21:14:25.1603874Z Job defined at: pytorch/pytorch/.github/workflows/_linux-test.yml@refs/tags/ciflow/trunk/91627
2023-01-11T21:14:25.1604133Z Reusable workflow chain:
2023-01-11T21:14:25.1604181Z pytorch/pytorch/.github/workflows/trunk.yml@refs/tags/ciflow/trunk/91627 (8419ddda87c8a47eacc63b54bc7ec98c1f27c26e)
2023-01-11T21:14:25.1604256Z -> pytorch/pytorch/.github/workflows/_linux-test.yml@refs/tags/ciflow/trunk/91627 (8419ddda87c8a47eacc63b54bc7ec98c1f27c26e)
2023-01-11T21:14:25.1604278Z Waiting for a runner to pick up this job...
2023-01-11T21:14:25.8717641Z Job is about to start running on the runner: i-0c70567fa7dfd6397 (organization)
2023-01-11T21:14:29.4563735Z Current runner version: '2.300.2'
2023-01-11T21:14:29.4569071Z Runner name: 'i-0c70567fa7dfd6397'
2023-01-11T21:14:29.4569519Z Runner group name: 'Default'
2023-01-11T21:14:29.4570213Z Machine name: 'ip-10-0-0-26'
2023-01-11T21:14:29.4572140Z ##[group]GITHUB_TOKEN Permissions
2023-01-11T21:14:29.4572893Z Actions: write
2023-01-11T21:14:29.4573169Z Checks: write
2023-01-11T21:14:29.4573487Z Contents: write
2023-01-11T21:14:29.4573806Z Deployments: write
2023-01-11T21:14:29.4574090Z Discussions: write
2023-01-11T21:14:29.4574426Z Issues: write
2023-01-11T21:14:29.4574772Z Metadata: read
2023-01-11T21:14:29.4575049Z Packages: write
2023-01-11T21:14:29.4575357Z Pages: write
2023-01-11T21:14:29.4575702Z PullRequests: write
2023-01-11T21:14:29.4576021Z RepositoryProjects: write
2023-01-11T21:14:29.4576379Z SecurityEvents: write
2023-01-11T21:14:29.4576705Z Statuses: write
2023-01-11T21:14:29.4576972Z ##[endgroup]
2023-01-11T21:14:29.4580209Z Secret source: Actions
2023-01-11T21:14:29.4580894Z Prepare workflow directory
2023-01-11T21:14:29.7129396Z Prepare all required actions
2023-01-11T21:14:29.7298052Z Getting action download info
2023-01-11T21:14:30.0292834Z Download action repository 'pytorch/test-infra@main' (SHA:2c225610d00fb13c04fcd60389d3e4d8326167c3)
2023-01-11T21:14:30.3038169Z Download action repository 'pytorch/pytorch@master' (SHA:c5836153f5332ca83d5cacde38f2829a4d54793e)
2023-01-11T21:14:33.0847293Z Download action repository 'seemethere/upload-artifact-s3@v5' (SHA:baba72d0712b404f646cebe0730933554ebce96a)
2023-01-11T21:14:33.3677148Z Getting action download info
2023-01-11T21:14:33.5550710Z Download action repository 'malfet/checkout@silent-checkout' (SHA:c7b8fef48edfe1bca0044a44b1f7f7c4318a3076)
2023-01-11T21:14:33.7874082Z Getting action download info
2023-01-11T21:14:33.9554166Z Download action repository 'nick-fields/retry@3e91a01664abd3c5cd539100d10d33b9c5b68482' (SHA:3e91a01664abd3c5cd539100d10d33b9c5b68482)
2023-01-11T21:14:34.1005857Z Uses: pytorch/pytorch/.github/workflows/_linux-test.yml
2023-01-11T21:14:34.1007577Z ##[group] Inputs
2023-01-11T21:14:34.1008006Z build-environment: linux-bionic-cuda11.7-py3.10-gcc7
2023-01-11T21:14:34.1009248Z test-matrix: { include: [ { config: "default", shard: 1, num_shards: 4, runner: "linux.4xlarge.nvidia.gpu" }, { config: "default", shard: 2, num_shards: 4, runner: "linux.4xlarge.nvidia.gpu" }, { config: "default", shard: 3, num_shards: 4, runner: "linux.4xlarge.nvidia.gpu" }, { config: "default", shard: 4, num_shards: 4, runner: "linux.4xlarge.nvidia.gpu" }, { config: "functorch", shard: 1, num_shards: 1, runner: "linux.4xlarge.nvidia.gpu" }, { config: "nogpu_AVX512", shard: 1, num_shards: 1, runner: "linux.2xlarge" }, { config: "nogpu_NO_AVX2", shard: 1, num_shards: 1, runner: "linux.2xlarge" }, { config: "jit_legacy", shard: 1, num_shards: 1, runner: "linux.4xlarge.nvidia.gpu" }, { config: "distributed", shard: 1, num_shards: 3, runner: "linux.8xlarge.nvidia.gpu" }, { config: "distributed", shard: 2, num_shards: 3, runner: "linux.8xlarge.nvidia.gpu" }, { config: "distributed", shard: 3, num_shards: 3, runner: "linux.8xlarge.nvidia.gpu" }, ]}
2023-01-11T21:14:34.1010572Z docker-image: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.7-cudnn8-py3-gcc7:fd224c2e6c79d7fdec6408da598bf52bc5b201dd
2023-01-11T21:14:34.1011048Z sync-tag:
2023-01-11T21:14:34.1011827Z timeout-minutes: 240
2023-01-11T21:14:34.1012114Z use-gha:
2023-01-11T21:14:34.1012446Z ##[endgroup]
2023-01-11T21:14:34.1013126Z Complete job name: linux-bionic-cuda11.7-py3.10-gcc7 / test (nogpu_NO_AVX2, 1, 1, linux.2xlarge)
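(Editorial note: the docker-image input above is the exact container image the job runs in. A minimal sketch for pulling it locally, assuming you have the AWS CLI and Docker installed and AWS credentials with pull access to this ECR registry; the account ID, region, and tag are copied from the image URL in the log.)

# Log in to the ECR registry hosting the CI image, then pull the tag from the docker-image input.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 308535385114.dkr.ecr.us-east-1.amazonaws.com
docker pull 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.7-cudnn8-py3-gcc7:fd224c2e6c79d7fdec6408da598bf52bc5b201dd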
config: "jit_legacy", shard: 1, num_shards: 1, runner: "linux.4xlarge.nvidia.gpu" }, { config: "distributed", shard: 1, num_shards: 3, runner: "linux.8xlarge.nvidia.gpu" }, { config: "distributed", shard: 2, num_shards: 3, runner: "linux.8xlarge.nvidia.gpu" }, { config: "distributed", shard: 3, num_shards: 3, runner: "linux.8xlarge.nvidia.gpu" }, ]} 2023-01-11T21:14:34.1010572Z docker-image: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.7-cudnn8-py3-gcc7:fd224c2e6c79d7fdec6408da598bf52bc5b201dd 2023-01-11T21:14:34.1011048Z sync-tag: 2023-01-11T21:14:34.1011827Z timeout-minutes: 240 2023-01-11T21:14:34.1012114Z use-gha: 2023-01-11T21:14:34.1012446Z ##[endgroup] 2023-01-11T21:14:34.1013126Z Complete job name: linux-bionic-cuda11.7-py3.10-gcc7 / test (nogpu_NO_AVX2, 1, 1, linux.2xlarge) 2023-01-11T21:14:34.1733108Z ##[group]Run pytorch/test-infra/.github/actions/setup-ssh@main 2023-01-11T21:14:34.1733421Z with: 2023-01-11T21:14:34.1733962Z github-secret: *** 2023-01-11T21:14:34.1734322Z instructions: All testing is done inside the container, to start an interactive session run: docker exec -it $(docker container ps --format '{{.ID}}') bash 2023-01-11T21:14:34.1734689Z activate-with-label: false 2023-01-11T21:14:34.1735024Z label: with-ssh 2023-01-11T21:14:34.1735246Z remove-existing-keys: true 2023-01-11T21:14:34.1735454Z env: 2023-01-11T21:14:34.1735660Z GIT_DEFAULT_BRANCH: master 2023-01-11T21:14:34.1735880Z ##[endgroup] 2023-01-11T21:14:34.2505773Z ciflow reference detected, attempting to extract PR number 2023-01-11T21:14:34.6477848Z Grabbing public ssh keys from https://github.com/pytorch-bot[bot].keys 2023-01-11T21:14:34.7232933Z No SSH keys found for user pytorch-bot[bot] 2023-01-11T21:14:34.7233308Z Grabbing public ssh keys from https://github.com/LucaLumetti.keys 2023-01-11T21:14:34.8086980Z ~/.ssh/authorized_keys file found on node, removing ~/.ssh and starting fresh 2023-01-11T21:14:34.8099611Z Public keys pulled and installed to /home/ec2-user/.ssh/authorized_keys 2023-01-11T21:14:34.8124214Z Login using: ssh ec2-user@ec2-52-23-220-178.compute-1.amazonaws.com 2023-01-11T21:14:34.8124879Z All testing is done inside the container, to start an interactive session run: 2023-01-11T21:14:34.8125251Z docker exec -it $(docker container ps --format '{{.ID}}') bash 2023-01-11T21:14:34.8345242Z ##[group]Run pytorch/pytorch/.github/actions/checkout-pytorch@master 2023-01-11T21:14:34.8345511Z with: 2023-01-11T21:14:34.8345672Z submodules: recursive 2023-01-11T21:14:34.8345857Z fetch-depth: 0 2023-01-11T21:14:34.8346024Z env: 2023-01-11T21:14:34.8346180Z GIT_DEFAULT_BRANCH: master 2023-01-11T21:14:34.8346367Z ##[endgroup] 2023-01-11T21:14:34.8553689Z ##[group]Run retry () { 2023-01-11T21:14:34.8553954Z retry () { 2023-01-11T21:14:34.8554182Z  $* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*) 2023-01-11T21:14:34.8554401Z } 2023-01-11T21:14:34.8554578Z echo "${GITHUB_WORKSPACE}" 2023-01-11T21:14:34.8554804Z if [ -z "${NO_SUDO}" ]; then 2023-01-11T21:14:34.8555034Z  retry sudo rm -rf "${GITHUB_WORKSPACE}" 2023-01-11T21:14:34.8555228Z else 2023-01-11T21:14:34.8555432Z  retry rm -rf "${GITHUB_WORKSPACE}" 2023-01-11T21:14:34.8555633Z fi 2023-01-11T21:14:34.8555846Z mkdir "${GITHUB_WORKSPACE}" 2023-01-11T21:14:34.8571211Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-01-11T21:14:34.8571445Z env: 2023-01-11T21:14:34.8571628Z GIT_DEFAULT_BRANCH: master 2023-01-11T21:14:34.8571813Z NO_SUDO: 2023-01-11T21:14:34.8571971Z 
2023-01-11T21:14:34.8665354Z /home/ec2-user/actions-runner/_work/pytorch/pytorch
2023-01-11T21:14:37.1376015Z ##[group]Run malfet/checkout@silent-checkout
2023-01-11T21:14:37.1376271Z with:
2023-01-11T21:14:37.1376477Z ref: 8419ddda87c8a47eacc63b54bc7ec98c1f27c26e
2023-01-11T21:14:37.1376683Z fetch-depth: 0
2023-01-11T21:14:37.1376868Z submodules: recursive
2023-01-11T21:14:37.1377058Z quiet-checkout: true
2023-01-11T21:14:37.1377247Z repository: pytorch/pytorch
2023-01-11T21:14:37.1377578Z token: ***
2023-01-11T21:14:37.1377754Z ssh-strict: true
2023-01-11T21:14:37.1377951Z persist-credentials: true
2023-01-11T21:14:37.1378144Z clean: true
2023-01-11T21:14:37.1378314Z lfs: false
2023-01-11T21:14:37.1378500Z set-safe-directory: true
2023-01-11T21:14:37.1378669Z env:
2023-01-11T21:14:37.1378845Z GIT_DEFAULT_BRANCH: master
2023-01-11T21:14:37.1379037Z ##[endgroup]
2023-01-11T21:14:37.2469074Z Syncing repository: pytorch/pytorch
2023-01-11T21:14:37.2470650Z ##[group]Getting Git version info
2023-01-11T21:14:37.2471126Z Working directory is '/home/ec2-user/actions-runner/_work/pytorch/pytorch'
2023-01-11T21:14:37.2471595Z [command]/usr/bin/git version
2023-01-11T21:14:37.2471808Z git version 2.38.1
2023-01-11T21:14:37.2472386Z ##[endgroup]
2023-01-11T21:14:37.2483920Z Temporarily overriding HOME='/home/ec2-user/actions-runner/_work/_temp/e8b51ab1-7ca3-4fd4-bb05-0fa628c8aad6' before making global git config changes
2023-01-11T21:14:37.2484837Z Adding repository directory to the temporary git global config as a safe directory
2023-01-11T21:14:37.2489743Z [command]/usr/bin/git config --global --add safe.directory /home/ec2-user/actions-runner/_work/pytorch/pytorch
2023-01-11T21:14:37.2527915Z Deleting the contents of '/home/ec2-user/actions-runner/_work/pytorch/pytorch'
2023-01-11T21:14:37.2532603Z ##[group]Initializing the repository
2023-01-11T21:14:37.2535822Z [command]/usr/bin/git init /home/ec2-user/actions-runner/_work/pytorch/pytorch
2023-01-11T21:14:37.2564546Z hint: Using 'master' as the name for the initial branch. This default branch name
2023-01-11T21:14:37.2565086Z hint: is subject to change. To configure the initial branch name to use in all
2023-01-11T21:14:37.2565645Z hint: of your new repositories, which will suppress this warning, call:
2023-01-11T21:14:37.2566041Z hint:
2023-01-11T21:14:37.2566523Z hint: git config --global init.defaultBranch <name>
2023-01-11T21:14:37.2566847Z hint:
2023-01-11T21:14:37.2567313Z hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
2023-01-11T21:14:37.2567948Z hint: 'development'. The just-created branch can be renamed via this command:
2023-01-11T21:14:37.2568342Z hint:
2023-01-11T21:14:37.2568864Z hint: git branch -m <name>
2023-01-11T21:14:37.2569527Z Initialized empty Git repository in /home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/
2023-01-11T21:14:37.2575104Z [command]/usr/bin/git remote add origin https://github.com/pytorch/pytorch
2023-01-11T21:14:37.2604021Z ##[endgroup]
2023-01-11T21:14:37.2604650Z ##[group]Disabling automatic garbage collection
2023-01-11T21:14:37.2607885Z [command]/usr/bin/git config --local gc.auto 0
2023-01-11T21:14:37.2634806Z ##[endgroup]
2023-01-11T21:14:37.2635355Z ##[group]Setting up auth
2023-01-11T21:14:37.2642044Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
2023-01-11T21:14:37.2671610Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :
2023-01-11T21:14:37.2917222Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
2023-01-11T21:14:37.2948894Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :
2023-01-11T21:14:37.3191960Z [command]/usr/bin/git config --local http.https://github.com/.extraheader AUTHORIZATION: basic ***
2023-01-11T21:14:37.3239427Z ##[endgroup]
2023-01-11T21:14:37.3240056Z ##[group]Fetching the repository
2023-01-11T21:14:37.3246612Z [command]/usr/bin/git -c protocol.version=2 fetch --prune --quiet --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/tags/*:refs/tags/*
2023-01-11T21:15:30.9503955Z [command]/usr/bin/git rev-parse --verify --quiet 8419ddda87c8a47eacc63b54bc7ec98c1f27c26e^{object}
2023-01-11T21:15:30.9533015Z 8419ddda87c8a47eacc63b54bc7ec98c1f27c26e
2023-01-11T21:15:30.9539292Z ##[endgroup]
2023-01-11T21:15:30.9539899Z ##[group]Determining the checkout info
2023-01-11T21:15:30.9540776Z ##[endgroup]
2023-01-11T21:15:30.9541322Z ##[group]Checking out the ref
2023-01-11T21:15:30.9546334Z [command]/usr/bin/git checkout --quiet --force 8419ddda87c8a47eacc63b54bc7ec98c1f27c26e
2023-01-11T21:15:32.2858328Z ##[endgroup]
2023-01-11T21:15:32.2858762Z ##[group]Setting up auth for fetching submodules
2023-01-11T21:15:32.2864852Z [command]/usr/bin/git config --global http.https://github.com/.extraheader AUTHORIZATION: basic ***
2023-01-11T21:15:32.2962240Z [command]/usr/bin/git config --global --unset-all url.https://github.com/.insteadOf
2023-01-11T21:15:32.2999860Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf git@github.com:
2023-01-11T21:15:32.3035768Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf org-21003710@github.com:
2023-01-11T21:15:32.3070284Z ##[endgroup]
2023-01-11T21:15:32.3070969Z ##[group]Fetching submodules
2023-01-11T21:15:32.3075318Z [command]/usr/bin/git submodule sync --recursive
2023-01-11T21:15:32.3362482Z [command]/usr/bin/git -c protocol.version=2 submodule update --init --force --recursive
2023-01-11T21:15:32.3646015Z Submodule 'android/libs/fbjni' (https://github.com/facebookincubator/fbjni.git) registered for path 'android/libs/fbjni'
2023-01-11T21:15:32.3646693Z Submodule 'third_party/NNPACK_deps/FP16' (https://github.com/Maratyszcza/FP16.git) registered for path 'third_party/FP16'
2023-01-11T21:15:32.3647485Z Submodule
'third_party/NNPACK_deps/FXdiv' (https://github.com/Maratyszcza/FXdiv.git) registered for path 'third_party/FXdiv' 2023-01-11T21:15:32.3649870Z Submodule 'third_party/NNPACK' (https://github.com/Maratyszcza/NNPACK.git) registered for path 'third_party/NNPACK' 2023-01-11T21:15:32.3651910Z Submodule 'third_party/QNNPACK' (https://github.com/pytorch/QNNPACK) registered for path 'third_party/QNNPACK' 2023-01-11T21:15:32.3654368Z Submodule 'third_party/VulkanMemoryAllocator' (https://github.com/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator.git) registered for path 'third_party/VulkanMemoryAllocator' 2023-01-11T21:15:32.3656751Z Submodule 'third_party/XNNPACK' (https://github.com/google/XNNPACK.git) registered for path 'third_party/XNNPACK' 2023-01-11T21:15:32.3659334Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/benchmark' 2023-01-11T21:15:32.3661948Z Submodule 'third_party/cpuinfo' (https://github.com/pytorch/cpuinfo.git) registered for path 'third_party/cpuinfo' 2023-01-11T21:15:32.3665006Z Submodule 'third_party/cub' (https://github.com/NVlabs/cub.git) registered for path 'third_party/cub' 2023-01-11T21:15:32.3667618Z Submodule 'third_party/cudnn_frontend' (https://github.com/NVIDIA/cudnn-frontend.git) registered for path 'third_party/cudnn_frontend' 2023-01-11T21:15:32.3670954Z Submodule 'third_party/cutlass' (https://github.com/NVIDIA/cutlass.git) registered for path 'third_party/cutlass' 2023-01-11T21:15:32.3674089Z Submodule 'third_party/eigen' (https://gitlab.com/libeigen/eigen.git) registered for path 'third_party/eigen' 2023-01-11T21:15:32.3677260Z Submodule 'third_party/fbgemm' (https://github.com/pytorch/fbgemm) registered for path 'third_party/fbgemm' 2023-01-11T21:15:32.3680603Z Submodule 'third_party/flatbuffers' (https://github.com/google/flatbuffers.git) registered for path 'third_party/flatbuffers' 2023-01-11T21:15:32.3683931Z Submodule 'third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/fmt' 2023-01-11T21:15:32.3687488Z Submodule 'third_party/foxi' (https://github.com/houseroad/foxi.git) registered for path 'third_party/foxi' 2023-01-11T21:15:32.3691132Z Submodule 'third_party/gemmlowp/gemmlowp' (https://github.com/google/gemmlowp.git) registered for path 'third_party/gemmlowp/gemmlowp' 2023-01-11T21:15:32.3694714Z Submodule 'third_party/gloo' (https://github.com/facebookincubator/gloo) registered for path 'third_party/gloo' 2023-01-11T21:15:32.3698715Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/googletest' 2023-01-11T21:15:32.3703287Z Submodule 'third_party/ideep' (https://github.com/intel/ideep) registered for path 'third_party/ideep' 2023-01-11T21:15:32.3707189Z Submodule 'third_party/ios-cmake' (https://github.com/Yangqing/ios-cmake.git) registered for path 'third_party/ios-cmake' 2023-01-11T21:15:32.3710883Z Submodule 'third_party/ittapi' (https://github.com/intel/ittapi.git) registered for path 'third_party/ittapi' 2023-01-11T21:15:32.3715093Z Submodule 'third_party/kineto' (https://github.com/pytorch/kineto) registered for path 'third_party/kineto' 2023-01-11T21:15:32.3719388Z Submodule 'third_party/nccl/nccl' (https://github.com/NVIDIA/nccl) registered for path 'third_party/nccl/nccl' 2023-01-11T21:15:32.3723780Z Submodule 'third_party/neon2sse' (https://github.com/intel/ARM_NEON_2_x86_SSE.git) registered for path 'third_party/neon2sse' 2023-01-11T21:15:32.3728244Z Submodule 'third_party/nlohmann' 
(https://github.com/nlohmann/json.git) registered for path 'third_party/nlohmann' 2023-01-11T21:15:32.3732764Z Submodule 'third_party/onnx' (https://github.com/onnx/onnx.git) registered for path 'third_party/onnx' 2023-01-11T21:15:32.3737574Z Submodule 'third_party/onnx-tensorrt' (https://github.com/onnx/onnx-tensorrt) registered for path 'third_party/onnx-tensorrt' 2023-01-11T21:15:32.3742308Z Submodule 'third_party/pocketfft' (https://github.com/mreineck/pocketfft) registered for path 'third_party/pocketfft' 2023-01-11T21:15:32.3747634Z Submodule 'third_party/protobuf' (https://github.com/protocolbuffers/protobuf.git) registered for path 'third_party/protobuf' 2023-01-11T21:15:32.3752755Z Submodule 'third_party/NNPACK_deps/psimd' (https://github.com/Maratyszcza/psimd.git) registered for path 'third_party/psimd' 2023-01-11T21:15:32.3757891Z Submodule 'third_party/NNPACK_deps/pthreadpool' (https://github.com/Maratyszcza/pthreadpool.git) registered for path 'third_party/pthreadpool' 2023-01-11T21:15:32.3763034Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/pybind11' 2023-01-11T21:15:32.3768609Z Submodule 'third_party/python-enum' (https://github.com/PeachPy/enum34.git) registered for path 'third_party/python-enum' 2023-01-11T21:15:32.3773859Z Submodule 'third_party/python-peachpy' (https://github.com/malfet/PeachPy.git) registered for path 'third_party/python-peachpy' 2023-01-11T21:15:32.3779359Z Submodule 'third_party/python-six' (https://github.com/benjaminp/six.git) registered for path 'third_party/python-six' 2023-01-11T21:15:32.3785283Z Submodule 'third_party/sleef' (https://github.com/shibatch/sleef) registered for path 'third_party/sleef' 2023-01-11T21:15:32.3791188Z Submodule 'third_party/tbb' (https://github.com/01org/tbb) registered for path 'third_party/tbb' 2023-01-11T21:15:32.3797204Z Submodule 'third_party/tensorpipe' (https://github.com/pytorch/tensorpipe.git) registered for path 'third_party/tensorpipe' 2023-01-11T21:15:32.3803146Z Submodule 'third_party/zstd' (https://github.com/facebook/zstd.git) registered for path 'third_party/zstd' 2023-01-11T21:15:32.3833247Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/android/libs/fbjni'... 2023-01-11T21:15:32.7040188Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/FP16'... 2023-01-11T21:15:32.9455578Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/FXdiv'... 2023-01-11T21:15:33.1709838Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/NNPACK'... 2023-01-11T21:15:33.4708417Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/QNNPACK'... 2023-01-11T21:15:33.8311126Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/VulkanMemoryAllocator'... 2023-01-11T21:15:36.1760998Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/XNNPACK'... 2023-01-11T21:15:42.8178485Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/benchmark'... 2023-01-11T21:15:43.2592377Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cpuinfo'... 2023-01-11T21:15:43.7953915Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cub'... 2023-01-11T21:15:45.2016586Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cudnn_frontend'... 
2023-01-11T21:15:46.3261624Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cutlass'... 2023-01-11T21:15:47.5636586Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/eigen'... 2023-01-11T21:15:52.7492712Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm'... 2023-01-11T21:15:53.5369363Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/flatbuffers'... 2023-01-11T21:15:55.0491691Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fmt'... 2023-01-11T21:15:56.2782802Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/foxi'... 2023-01-11T21:15:56.4857787Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/gemmlowp/gemmlowp'... 2023-01-11T21:15:56.9924977Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/gloo'... 2023-01-11T21:15:57.3177407Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/googletest'... 2023-01-11T21:15:58.3486571Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ideep'... 2023-01-11T21:15:58.8221786Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ios-cmake'... 2023-01-11T21:15:59.0370703Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ittapi'... 2023-01-11T21:15:59.3544677Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto'... 2023-01-11T21:16:00.7265928Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/nccl/nccl'... 2023-01-11T21:16:01.1294145Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/neon2sse'... 2023-01-11T21:16:01.5884384Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/nlohmann'... 2023-01-11T21:16:07.7768324Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx'... 2023-01-11T21:16:09.7015995Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx-tensorrt'... 2023-01-11T21:16:10.1874361Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pocketfft'... 2023-01-11T21:16:10.4725017Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf'... 2023-01-11T21:16:16.4622252Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/psimd'... 2023-01-11T21:16:16.7059957Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pthreadpool'... 2023-01-11T21:16:17.0052109Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pybind11'... 2023-01-11T21:16:17.8761503Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/python-enum'... 2023-01-11T21:16:18.2348832Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/python-peachpy'... 2023-01-11T21:16:18.5808636Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/python-six'... 2023-01-11T21:16:18.8949889Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/sleef'... 2023-01-11T21:16:19.4952876Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tbb'... 2023-01-11T21:16:21.2910800Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe'... 
2023-01-11T21:16:21.7680834Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/zstd'... 2023-01-11T21:16:24.0712075Z Submodule path 'android/libs/fbjni': checked out '7e1e1fe3858c63c251c637ae41a20de425dde96f' 2023-01-11T21:16:24.0805303Z Submodule path 'third_party/FP16': checked out '4dfe081cf6bcd15db339cf2680b9281b8451eeb3' 2023-01-11T21:16:24.0880182Z Submodule path 'third_party/FXdiv': checked out 'b408327ac2a15ec3e43352421954f5b1967701d1' 2023-01-11T21:16:24.1079753Z Submodule path 'third_party/NNPACK': checked out 'c07e3a0400713d546e0dea2d5466dd22ea389c73' 2023-01-11T21:16:24.1277505Z Submodule path 'third_party/QNNPACK': checked out '7d2a4e9931a82adc3814275b6219a03e24e36b4c' 2023-01-11T21:16:24.1614153Z Submodule path 'third_party/VulkanMemoryAllocator': checked out 'a6bfc237255a6bac1513f7c1ebde6d8aed6b5191' 2023-01-11T21:16:24.7174124Z Submodule path 'third_party/XNNPACK': checked out 'ae108ef49aa5623b896fc93d4298c49d1750d9ba' 2023-01-11T21:16:24.7363352Z Submodule path 'third_party/benchmark': checked out '0d98dba29d66e93259db7daa53a9327df767a415' 2023-01-11T21:16:24.8268427Z Submodule path 'third_party/cpuinfo': checked out '8ec7bd91ad0470e61cf38f618cc1f270dede599c' 2023-01-11T21:16:24.8577687Z Submodule path 'third_party/cub': checked out 'd106ddb991a56c3df1b6d51b2409e36ba8181ce4' 2023-01-11T21:16:25.1211654Z Submodule path 'third_party/cudnn_frontend': checked out '171a7a986f7fbd9ed71bd0cf3c7ad4f55843d6b3' 2023-01-11T21:16:25.4944462Z Submodule path 'third_party/cutlass': checked out 'b72cbf957df8cf84a6d0ff91c190ad51a9c1d24a' 2023-01-11T21:16:25.7187607Z Submodule path 'third_party/eigen': checked out '3147391d946bb4b6c68edd901f2add6ac1f31f8c' 2023-01-11T21:16:25.7609042Z Submodule path 'third_party/fbgemm': checked out '80d64206c07879fd4683be66873de7cefa1a0a71' 2023-01-11T21:16:25.7622901Z Submodule 'third_party/asmjit' (https://github.com/asmjit/asmjit.git) registered for path 'third_party/fbgemm/third_party/asmjit' 2023-01-11T21:16:25.7624889Z Submodule 'third_party/cpuinfo' (https://github.com/pytorch/cpuinfo) registered for path 'third_party/fbgemm/third_party/cpuinfo' 2023-01-11T21:16:25.7627192Z Submodule 'third_party/googletest' (https://github.com/google/googletest) registered for path 'third_party/fbgemm/third_party/googletest' 2023-01-11T21:16:25.7629427Z Submodule 'third_party/hipify_torch' (https://github.com/ROCmSoftwarePlatform/hipify_torch.git) registered for path 'third_party/fbgemm/third_party/hipify_torch' 2023-01-11T21:16:25.7652058Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/asmjit'... 2023-01-11T21:16:26.7602831Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/cpuinfo'... 2023-01-11T21:16:27.3135539Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/googletest'... 2023-01-11T21:16:28.3216762Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/hipify_torch'... 
2023-01-11T21:16:28.6564912Z Submodule path 'third_party/fbgemm/third_party/asmjit': checked out 'd3fbf7c9bc7c1d1365a94a45614b91c5a3706b81' 2023-01-11T21:16:28.7481540Z Submodule path 'third_party/fbgemm/third_party/cpuinfo': checked out 'ed8b86a253800bafdb7b25c5c399f91bff9cb1f3' 2023-01-11T21:16:28.8009241Z Submodule path 'third_party/fbgemm/third_party/googletest': checked out 'cbf019de22c8dd37b2108da35b2748fd702d1796' 2023-01-11T21:16:28.8098497Z Submodule path 'third_party/fbgemm/third_party/hipify_torch': checked out '1840658c184f3eeba787dae0f06c45756c1daaf5' 2023-01-11T21:16:28.8865419Z Submodule path 'third_party/flatbuffers': checked out 'd0cede9c90c5257537c293517a21376408b549fa' 2023-01-11T21:16:28.9199678Z Submodule path 'third_party/fmt': checked out '7bdf0628b1276379886c7f6dda2cef2b3b374f0b' 2023-01-11T21:16:28.9279235Z Submodule path 'third_party/foxi': checked out 'c278588e34e535f0bb8f00df3880d26928038cad' 2023-01-11T21:16:28.9639669Z Submodule path 'third_party/gemmlowp/gemmlowp': checked out '3fb5c176c17c765a3492cd2f0321b0dab712f350' 2023-01-11T21:16:28.9853021Z Submodule path 'third_party/gloo': checked out '4a5e339b764261d20fc409071dc7a8b8989aa195' 2023-01-11T21:16:29.0265071Z Submodule path 'third_party/googletest': checked out 'e2239ee6043f73722e7aa812a459f54a28552929' 2023-01-11T21:16:29.0369624Z Submodule path 'third_party/ideep': checked out 'e533c771a1e75a1c225c14b2261eefa62681d9e6' 2023-01-11T21:16:29.0382241Z Submodule 'mkl-dnn' (https://github.com/intel/mkl-dnn.git) registered for path 'third_party/ideep/mkl-dnn' 2023-01-11T21:16:29.0402999Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ideep/mkl-dnn'... 2023-01-11T21:16:37.2981543Z Submodule path 'third_party/ideep/mkl-dnn': checked out '404ad76ee633c939d705eb583ffe50a806969d5e' 2023-01-11T21:16:37.2997835Z Submodule 'third_party/oneDNN' (https://github.com/oneapi-src/oneDNN.git) registered for path 'third_party/ideep/mkl-dnn/third_party/oneDNN' 2023-01-11T21:16:37.3021107Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ideep/mkl-dnn/third_party/oneDNN'... 2023-01-11T21:16:45.6491958Z Submodule path 'third_party/ideep/mkl-dnn/third_party/oneDNN': checked out 'fbec3e25a559ee252022ae066817b204e106a6ba' 2023-01-11T21:16:45.6590892Z Submodule path 'third_party/ios-cmake': checked out '8abaed637d56f1337d6e1d2c4026e25c1eade724' 2023-01-11T21:16:45.6728277Z Submodule path 'third_party/ittapi': checked out '5b8a7d7422611c3a0d799fb5fc5dd4abfae35b42' 2023-01-11T21:16:45.7592085Z Submodule path 'third_party/kineto': checked out '6c1629809068efd78a8d56b4aa479c7ec49ae562' 2023-01-11T21:16:45.7606128Z Submodule 'libkineto/third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/kineto/libkineto/third_party/fmt' 2023-01-11T21:16:45.7608163Z Submodule 'libkineto/third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/kineto/libkineto/third_party/googletest' 2023-01-11T21:16:45.7632012Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/fmt'... 2023-01-11T21:16:47.0000381Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/googletest'... 
2023-01-11T21:16:48.0466587Z Submodule path 'third_party/kineto/libkineto/third_party/fmt': checked out '2591ab91c3898c9f6544fff04660276537d32ffd' 2023-01-11T21:16:48.0975792Z Submodule path 'third_party/kineto/libkineto/third_party/googletest': checked out '7aca84427f224eeed3144123d5230d5871e93347' 2023-01-11T21:16:48.1175235Z Submodule path 'third_party/nccl/nccl': checked out 'f89fd4777d2ef9229c039ff750ae21da01626f52' 2023-01-11T21:16:48.1309751Z Submodule path 'third_party/neon2sse': checked out '97a126f08ce318023be604d03f88bf0820a9464a' 2023-01-11T21:16:48.2294956Z Submodule path 'third_party/nlohmann': checked out '87cda1d6646592ac5866dc703c8e1839046a6806' 2023-01-11T21:16:48.4560220Z Submodule path 'third_party/onnx': checked out 'f7ee1ac60d06abe8e26c9b6bbe1e3db5286b614b' 2023-01-11T21:16:48.4585589Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/onnx/third_party/benchmark' 2023-01-11T21:16:48.4587833Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/onnx/third_party/pybind11' 2023-01-11T21:16:48.4612189Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx/third_party/benchmark'... 2023-01-11T21:16:48.8542156Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx/third_party/pybind11'... 2023-01-11T21:16:49.7611455Z Submodule path 'third_party/onnx/third_party/benchmark': checked out '0d98dba29d66e93259db7daa53a9327df767a415' 2023-01-11T21:16:49.7889855Z Submodule path 'third_party/onnx/third_party/pybind11': checked out 'ffa346860b306c9bbfb341aed9c14c067751feb8' 2023-01-11T21:16:49.8023285Z Submodule path 'third_party/onnx-tensorrt': checked out 'c153211418a7c57ce071d9ce2a41f8d1c85a878f' 2023-01-11T21:16:49.8035494Z Submodule 'third_party/onnx' (https://github.com/onnx/onnx.git) registered for path 'third_party/onnx-tensorrt/third_party/onnx' 2023-01-11T21:16:49.8056856Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx-tensorrt/third_party/onnx'... 2023-01-11T21:16:51.6997196Z Submodule path 'third_party/onnx-tensorrt/third_party/onnx': checked out '765f5ee823a67a866f4bd28a9860e81f3c811ce8' 2023-01-11T21:16:51.7014944Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2023-01-11T21:16:51.7016935Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2023-01-11T21:16:51.7040689Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark'... 2023-01-11T21:16:52.1566179Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11'... 
2023-01-11T21:16:53.2419205Z Submodule path 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark': checked out 'e776aa0275e293707b6a0901e0e8d8a8a3679508' 2023-01-11T21:16:53.3025980Z Submodule path 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11': checked out 'a1041190c8b8ff0cd9e2f0752248ad5e3789ea0c' 2023-01-11T21:16:53.3038705Z Submodule 'tools/clang' (https://github.com/wjakob/clang-cindex-python3) registered for path 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2023-01-11T21:16:53.3060953Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang'... 2023-01-11T21:16:53.5640657Z Submodule path 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5' 2023-01-11T21:16:53.5720622Z Submodule path 'third_party/pocketfft': checked out 'ea778e37710c07723435b1be58235996d1d43a5a' 2023-01-11T21:16:53.8065644Z Submodule path 'third_party/protobuf': checked out 'd1eca4e4b421cd2997495c4b4e65cea6be4e9b8a' 2023-01-11T21:16:53.8082935Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/protobuf/third_party/benchmark' 2023-01-11T21:16:53.8085238Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/protobuf/third_party/googletest' 2023-01-11T21:16:53.8108776Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf/third_party/benchmark'... 2023-01-11T21:16:54.2622766Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf/third_party/googletest'... 2023-01-11T21:16:55.2497054Z Submodule path 'third_party/protobuf/third_party/benchmark': checked out '5b7683f49e1e9223cf9927b24f6fd3d6bd82e3f8' 2023-01-11T21:16:55.3109189Z Submodule path 'third_party/protobuf/third_party/googletest': checked out '5ec7f0c4a113e2f18ac2c6cc7df51ad6afc24081' 2023-01-11T21:16:55.3185691Z Submodule path 'third_party/psimd': checked out '072586a71b55b7f8c584153d223e95687148a900' 2023-01-11T21:16:55.3281244Z Submodule path 'third_party/pthreadpool': checked out 'a134dd5d4cee80cce15db81a72e7f929d71dd413' 2023-01-11T21:16:55.3573164Z Submodule path 'third_party/pybind11': checked out '80dc998efced8ceb2be59756668a7e90e8bef917' 2023-01-11T21:16:55.3652192Z Submodule path 'third_party/python-enum': checked out '4cfedc426c4e2fc52e3f5c2b4297e15ed8d6b8c7' 2023-01-11T21:16:55.3908509Z Submodule path 'third_party/python-peachpy': checked out 'f45429b087dd7d5bc78bb40dc7cf06425c252d67' 2023-01-11T21:16:55.3990652Z Submodule path 'third_party/python-six': checked out '15e31431af97e5e64b80af0a3f598d382bcdd49a' 2023-01-11T21:16:55.4385781Z Submodule path 'third_party/sleef': checked out 'e0a003ee838b75d11763aa9c3ef17bf71a725bff' 2023-01-11T21:16:55.5365654Z Submodule path 'third_party/tbb': checked out 'a51a90bc609bb73db8ea13841b5cf7aa4344d4a9' 2023-01-11T21:16:55.5595100Z Submodule path 'third_party/tensorpipe': checked out '52791a2fd214b2a9dc5759d36725909c1daa7f2e' 2023-01-11T21:16:55.5608109Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/tensorpipe/third_party/googletest' 2023-01-11T21:16:55.5610150Z Submodule 'third_party/libnop' (https://github.com/google/libnop.git) registered for path 'third_party/tensorpipe/third_party/libnop' 2023-01-11T21:16:55.5612336Z Submodule 
'third_party/libuv' (https://github.com/libuv/libuv.git) registered for path 'third_party/tensorpipe/third_party/libuv' 2023-01-11T21:16:55.5614650Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/tensorpipe/third_party/pybind11' 2023-01-11T21:16:55.5637361Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/googletest'... 2023-01-11T21:16:56.6191403Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/libnop'... 2023-01-11T21:16:56.9060975Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/libuv'... 2023-01-11T21:16:58.0577310Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/pybind11'... 2023-01-11T21:16:59.0322115Z Submodule path 'third_party/tensorpipe/third_party/googletest': checked out 'aee0f9d9b5b87796ee8a0ab26b7587ec30e8858e' 2023-01-11T21:16:59.0452155Z Submodule path 'third_party/tensorpipe/third_party/libnop': checked out '910b55815be16109f04f4180e9adee14fb4ce281' 2023-01-11T21:16:59.1027178Z Submodule path 'third_party/tensorpipe/third_party/libuv': checked out '1dff88e5161cba5c59276d2070d2e304e4dcb242' 2023-01-11T21:16:59.1273141Z Submodule path 'third_party/tensorpipe/third_party/pybind11': checked out 'a23996fce38ff6ccfbcdc09f1e63f2c4be5ea2ef' 2023-01-11T21:16:59.1286001Z Submodule 'tools/clang' (https://github.com/wjakob/clang-cindex-python3) registered for path 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2023-01-11T21:16:59.1313215Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/pybind11/tools/clang'... 2023-01-11T21:16:59.3675688Z Submodule path 'third_party/tensorpipe/third_party/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5' 2023-01-11T21:16:59.4906570Z Submodule path 'third_party/zstd': checked out 'aec56a52fbab207fc639a1937d1e708a282edca8' 2023-01-11T21:16:59.4934825Z [command]/usr/bin/git submodule foreach --recursive git config --local gc.auto 0 2023-01-11T21:16:59.5188385Z Entering 'android/libs/fbjni' 2023-01-11T21:16:59.5222817Z Entering 'third_party/FP16' 2023-01-11T21:16:59.5257190Z Entering 'third_party/FXdiv' 2023-01-11T21:16:59.5291063Z Entering 'third_party/NNPACK' 2023-01-11T21:16:59.5325642Z Entering 'third_party/QNNPACK' 2023-01-11T21:16:59.5359718Z Entering 'third_party/VulkanMemoryAllocator' 2023-01-11T21:16:59.5393063Z Entering 'third_party/XNNPACK' 2023-01-11T21:16:59.5436138Z Entering 'third_party/benchmark' 2023-01-11T21:16:59.5470112Z Entering 'third_party/cpuinfo' 2023-01-11T21:16:59.5504593Z Entering 'third_party/cub' 2023-01-11T21:16:59.5537954Z Entering 'third_party/cudnn_frontend' 2023-01-11T21:16:59.5575598Z Entering 'third_party/cutlass' 2023-01-11T21:16:59.5614032Z Entering 'third_party/eigen' 2023-01-11T21:16:59.5650144Z Entering 'third_party/fbgemm' 2023-01-11T21:16:59.5683496Z Entering 'third_party/fbgemm/third_party/asmjit' 2023-01-11T21:16:59.5717402Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2023-01-11T21:16:59.5751302Z Entering 'third_party/fbgemm/third_party/googletest' 2023-01-11T21:16:59.5786840Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2023-01-11T21:16:59.5819560Z Entering 'third_party/flatbuffers' 2023-01-11T21:16:59.5855276Z Entering 'third_party/fmt' 2023-01-11T21:16:59.5888627Z Entering 'third_party/foxi' 2023-01-11T21:16:59.5922463Z Entering 
'third_party/gemmlowp/gemmlowp' 2023-01-11T21:16:59.5958300Z Entering 'third_party/gloo' 2023-01-11T21:16:59.5992716Z Entering 'third_party/googletest' 2023-01-11T21:16:59.6028241Z Entering 'third_party/ideep' 2023-01-11T21:16:59.6061484Z Entering 'third_party/ideep/mkl-dnn' 2023-01-11T21:16:59.6097818Z Entering 'third_party/ideep/mkl-dnn/third_party/oneDNN' 2023-01-11T21:16:59.6137639Z Entering 'third_party/ios-cmake' 2023-01-11T21:16:59.6172270Z Entering 'third_party/ittapi' 2023-01-11T21:16:59.6206225Z Entering 'third_party/kineto' 2023-01-11T21:16:59.6242126Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2023-01-11T21:16:59.6275862Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2023-01-11T21:16:59.6310712Z Entering 'third_party/nccl/nccl' 2023-01-11T21:16:59.6345536Z Entering 'third_party/neon2sse' 2023-01-11T21:16:59.6379677Z Entering 'third_party/nlohmann' 2023-01-11T21:16:59.6414727Z Entering 'third_party/onnx' 2023-01-11T21:16:59.6459812Z Entering 'third_party/onnx/third_party/benchmark' 2023-01-11T21:16:59.6494674Z Entering 'third_party/onnx/third_party/pybind11' 2023-01-11T21:16:59.6529001Z Entering 'third_party/onnx-tensorrt' 2023-01-11T21:16:59.6563214Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2023-01-11T21:16:59.6600497Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2023-01-11T21:16:59.6633813Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2023-01-11T21:16:59.6667292Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2023-01-11T21:16:59.6704296Z Entering 'third_party/pocketfft' 2023-01-11T21:16:59.6738229Z Entering 'third_party/protobuf' 2023-01-11T21:16:59.6777081Z Entering 'third_party/protobuf/third_party/benchmark' 2023-01-11T21:16:59.6810161Z Entering 'third_party/protobuf/third_party/googletest' 2023-01-11T21:16:59.6846363Z Entering 'third_party/psimd' 2023-01-11T21:16:59.6879887Z Entering 'third_party/pthreadpool' 2023-01-11T21:16:59.6913337Z Entering 'third_party/pybind11' 2023-01-11T21:16:59.6947977Z Entering 'third_party/python-enum' 2023-01-11T21:16:59.6981280Z Entering 'third_party/python-peachpy' 2023-01-11T21:16:59.7014671Z Entering 'third_party/python-six' 2023-01-11T21:16:59.7048126Z Entering 'third_party/sleef' 2023-01-11T21:16:59.7081479Z Entering 'third_party/tbb' 2023-01-11T21:16:59.7116900Z Entering 'third_party/tensorpipe' 2023-01-11T21:16:59.7152770Z Entering 'third_party/tensorpipe/third_party/googletest' 2023-01-11T21:16:59.7187129Z Entering 'third_party/tensorpipe/third_party/libnop' 2023-01-11T21:16:59.7218926Z Entering 'third_party/tensorpipe/third_party/libuv' 2023-01-11T21:16:59.7252850Z Entering 'third_party/tensorpipe/third_party/pybind11' 2023-01-11T21:16:59.7285998Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2023-01-11T21:16:59.7323305Z Entering 'third_party/zstd' 2023-01-11T21:16:59.7364928Z ##[endgroup] 2023-01-11T21:16:59.7365525Z ##[group]Persisting credentials for submodules 2023-01-11T21:16:59.7371774Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'url\.https\:\/\/github\.com\/\.insteadOf' && git config --local --unset-all 'url.https://github.com/.insteadOf' || : 2023-01-11T21:16:59.7620551Z Entering 'android/libs/fbjni' 2023-01-11T21:16:59.7654759Z Entering 'third_party/FP16' 2023-01-11T21:16:59.7687856Z Entering 'third_party/FXdiv' 2023-01-11T21:16:59.7722911Z Entering 'third_party/NNPACK' 2023-01-11T21:16:59.7757536Z Entering 
'third_party/QNNPACK' 2023-01-11T21:16:59.7791504Z Entering 'third_party/VulkanMemoryAllocator' 2023-01-11T21:16:59.7825102Z Entering 'third_party/XNNPACK' 2023-01-11T21:16:59.7869513Z Entering 'third_party/benchmark' 2023-01-11T21:16:59.7903022Z Entering 'third_party/cpuinfo' 2023-01-11T21:16:59.7936804Z Entering 'third_party/cub' 2023-01-11T21:16:59.7971608Z Entering 'third_party/cudnn_frontend' 2023-01-11T21:16:59.8009866Z Entering 'third_party/cutlass' 2023-01-11T21:16:59.8050476Z Entering 'third_party/eigen' 2023-01-11T21:16:59.8087663Z Entering 'third_party/fbgemm' 2023-01-11T21:16:59.8122233Z Entering 'third_party/fbgemm/third_party/asmjit' 2023-01-11T21:16:59.8154462Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2023-01-11T21:16:59.8187555Z Entering 'third_party/fbgemm/third_party/googletest' 2023-01-11T21:16:59.8220765Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2023-01-11T21:16:59.8254829Z Entering 'third_party/flatbuffers' 2023-01-11T21:16:59.8290701Z Entering 'third_party/fmt' 2023-01-11T21:16:59.8324578Z Entering 'third_party/foxi' 2023-01-11T21:16:59.8359789Z Entering 'third_party/gemmlowp/gemmlowp' 2023-01-11T21:16:59.8393498Z Entering 'third_party/gloo' 2023-01-11T21:16:59.8427527Z Entering 'third_party/googletest' 2023-01-11T21:16:59.8462040Z Entering 'third_party/ideep' 2023-01-11T21:16:59.8495826Z Entering 'third_party/ideep/mkl-dnn' 2023-01-11T21:16:59.8531290Z Entering 'third_party/ideep/mkl-dnn/third_party/oneDNN' 2023-01-11T21:16:59.8571228Z Entering 'third_party/ios-cmake' 2023-01-11T21:16:59.8605850Z Entering 'third_party/ittapi' 2023-01-11T21:16:59.8638896Z Entering 'third_party/kineto' 2023-01-11T21:16:59.8672304Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2023-01-11T21:16:59.8705811Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2023-01-11T21:16:59.8740881Z Entering 'third_party/nccl/nccl' 2023-01-11T21:16:59.8774973Z Entering 'third_party/neon2sse' 2023-01-11T21:16:59.8809702Z Entering 'third_party/nlohmann' 2023-01-11T21:16:59.8844546Z Entering 'third_party/onnx' 2023-01-11T21:16:59.8889708Z Entering 'third_party/onnx/third_party/benchmark' 2023-01-11T21:16:59.8923540Z Entering 'third_party/onnx/third_party/pybind11' 2023-01-11T21:16:59.8959441Z Entering 'third_party/onnx-tensorrt' 2023-01-11T21:16:59.8992931Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2023-01-11T21:16:59.9030875Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2023-01-11T21:16:59.9065056Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2023-01-11T21:16:59.9097668Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2023-01-11T21:16:59.9135372Z Entering 'third_party/pocketfft' 2023-01-11T21:16:59.9169350Z Entering 'third_party/protobuf' 2023-01-11T21:16:59.9207456Z Entering 'third_party/protobuf/third_party/benchmark' 2023-01-11T21:16:59.9241106Z Entering 'third_party/protobuf/third_party/googletest' 2023-01-11T21:16:59.9276807Z Entering 'third_party/psimd' 2023-01-11T21:16:59.9312424Z Entering 'third_party/pthreadpool' 2023-01-11T21:16:59.9345648Z Entering 'third_party/pybind11' 2023-01-11T21:16:59.9380200Z Entering 'third_party/python-enum' 2023-01-11T21:16:59.9414398Z Entering 'third_party/python-peachpy' 2023-01-11T21:16:59.9448044Z Entering 'third_party/python-six' 2023-01-11T21:16:59.9481864Z Entering 'third_party/sleef' 2023-01-11T21:16:59.9515985Z Entering 'third_party/tbb' 2023-01-11T21:16:59.9553133Z Entering 'third_party/tensorpipe' 
2023-01-11T21:16:59.9587801Z Entering 'third_party/tensorpipe/third_party/googletest' 2023-01-11T21:16:59.9621765Z Entering 'third_party/tensorpipe/third_party/libnop' 2023-01-11T21:16:59.9655780Z Entering 'third_party/tensorpipe/third_party/libuv' 2023-01-11T21:16:59.9690335Z Entering 'third_party/tensorpipe/third_party/pybind11' 2023-01-11T21:16:59.9722498Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2023-01-11T21:16:59.9759090Z Entering 'third_party/zstd' 2023-01-11T21:16:59.9805048Z [command]/usr/bin/git submodule foreach --recursive git config --local 'http.https://github.com/.extraheader' 'AUTHORIZATION: basic ***' && git config --local --show-origin --name-only --get-regexp remote.origin.url 2023-01-11T21:17:00.0081170Z Entering 'android/libs/fbjni' 2023-01-11T21:17:00.0112955Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/android/libs/fbjni/config remote.origin.url 2023-01-11T21:17:00.0127667Z Entering 'third_party/FP16' 2023-01-11T21:17:00.0160076Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FP16/config remote.origin.url 2023-01-11T21:17:00.0174465Z Entering 'third_party/FXdiv' 2023-01-11T21:17:00.0209412Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FXdiv/config remote.origin.url 2023-01-11T21:17:00.0223539Z Entering 'third_party/NNPACK' 2023-01-11T21:17:00.0254518Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK/config remote.origin.url 2023-01-11T21:17:00.0269302Z Entering 'third_party/QNNPACK' 2023-01-11T21:17:00.0300800Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/QNNPACK/config remote.origin.url 2023-01-11T21:17:00.0315291Z Entering 'third_party/VulkanMemoryAllocator' 2023-01-11T21:17:00.0346417Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/VulkanMemoryAllocator/config remote.origin.url 2023-01-11T21:17:00.0360937Z Entering 'third_party/XNNPACK' 2023-01-11T21:17:00.0392923Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/XNNPACK/config remote.origin.url 2023-01-11T21:17:00.0417882Z Entering 'third_party/benchmark' 2023-01-11T21:17:00.0450379Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/benchmark/config remote.origin.url 2023-01-11T21:17:00.0466764Z Entering 'third_party/cpuinfo' 2023-01-11T21:17:00.0498595Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cpuinfo/config remote.origin.url 2023-01-11T21:17:00.0514614Z Entering 'third_party/cub' 2023-01-11T21:17:00.0546314Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cub/config remote.origin.url 2023-01-11T21:17:00.0560903Z Entering 'third_party/cudnn_frontend' 2023-01-11T21:17:00.0592796Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cudnn_frontend/config remote.origin.url 2023-01-11T21:17:00.0612519Z Entering 'third_party/cutlass' 2023-01-11T21:17:00.0644892Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cutlass/config remote.origin.url 2023-01-11T21:17:00.0665535Z Entering 'third_party/eigen' 2023-01-11T21:17:00.0697888Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/eigen/config remote.origin.url 2023-01-11T21:17:00.0714060Z Entering 'third_party/fbgemm' 2023-01-11T21:17:00.0746244Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/config remote.origin.url 2023-01-11T21:17:00.0760822Z Entering 'third_party/fbgemm/third_party/asmjit' 2023-01-11T21:17:00.0792552Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/asmjit/config remote.origin.url 2023-01-11T21:17:00.0806541Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2023-01-11T21:17:00.0838438Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/cpuinfo/config remote.origin.url 2023-01-11T21:17:00.0853357Z Entering 'third_party/fbgemm/third_party/googletest' 2023-01-11T21:17:00.0885333Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/googletest/config remote.origin.url 2023-01-11T21:17:00.0899395Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2023-01-11T21:17:00.0932295Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/hipify_torch/config remote.origin.url 2023-01-11T21:17:00.0947857Z Entering 'third_party/flatbuffers' 2023-01-11T21:17:00.0979362Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flatbuffers/config remote.origin.url 2023-01-11T21:17:00.0995153Z Entering 'third_party/fmt' 2023-01-11T21:17:00.1027326Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fmt/config remote.origin.url 2023-01-11T21:17:00.1041287Z Entering 'third_party/foxi' 2023-01-11T21:17:00.1074342Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/foxi/config remote.origin.url 2023-01-11T21:17:00.1089274Z Entering 'third_party/gemmlowp/gemmlowp' 2023-01-11T21:17:00.1123008Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/gemmlowp/gemmlowp/config remote.origin.url 2023-01-11T21:17:00.1136954Z Entering 'third_party/gloo' 2023-01-11T21:17:00.1170289Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/gloo/config remote.origin.url 2023-01-11T21:17:00.1185205Z Entering 'third_party/googletest' 2023-01-11T21:17:00.1216569Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/googletest/config remote.origin.url 2023-01-11T21:17:00.1231557Z Entering 'third_party/ideep' 2023-01-11T21:17:00.1263923Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/config remote.origin.url 2023-01-11T21:17:00.1277235Z Entering 'third_party/ideep/mkl-dnn' 2023-01-11T21:17:00.1309298Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/modules/mkl-dnn/config remote.origin.url 2023-01-11T21:17:00.1325797Z Entering 'third_party/ideep/mkl-dnn/third_party/oneDNN' 2023-01-11T21:17:00.1358265Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/modules/mkl-dnn/modules/third_party/oneDNN/config remote.origin.url 2023-01-11T21:17:00.1379878Z Entering 'third_party/ios-cmake' 2023-01-11T21:17:00.1412431Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ios-cmake/config remote.origin.url 2023-01-11T21:17:00.1427237Z Entering 'third_party/ittapi' 2023-01-11T21:17:00.1458631Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ittapi/config remote.origin.url 2023-01-11T21:17:00.1473007Z Entering 'third_party/kineto' 
2023-01-11T21:17:00.1505184Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/config remote.origin.url 2023-01-11T21:17:00.1518550Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2023-01-11T21:17:00.1551287Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/fmt/config remote.origin.url 2023-01-11T21:17:00.1565933Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2023-01-11T21:17:00.1598297Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/googletest/config remote.origin.url 2023-01-11T21:17:00.1612963Z Entering 'third_party/nccl/nccl' 2023-01-11T21:17:00.1645624Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/nccl/nccl/config remote.origin.url 2023-01-11T21:17:00.1660528Z Entering 'third_party/neon2sse' 2023-01-11T21:17:00.1693022Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/neon2sse/config remote.origin.url 2023-01-11T21:17:00.1707508Z Entering 'third_party/nlohmann' 2023-01-11T21:17:00.1739936Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/nlohmann/config remote.origin.url 2023-01-11T21:17:00.1755604Z Entering 'third_party/onnx' 2023-01-11T21:17:00.1787433Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/config remote.origin.url 2023-01-11T21:17:00.1812821Z Entering 'third_party/onnx/third_party/benchmark' 2023-01-11T21:17:00.1847225Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/modules/third_party/benchmark/config remote.origin.url 2023-01-11T21:17:00.1861310Z Entering 'third_party/onnx/third_party/pybind11' 2023-01-11T21:17:00.1894245Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/modules/third_party/pybind11/config remote.origin.url 2023-01-11T21:17:00.1911274Z Entering 'third_party/onnx-tensorrt' 2023-01-11T21:17:00.1944021Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx-tensorrt/config remote.origin.url 2023-01-11T21:17:00.1957734Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2023-01-11T21:17:00.1990696Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx-tensorrt/modules/third_party/onnx/config remote.origin.url 2023-01-11T21:17:00.2008883Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2023-01-11T21:17:00.2041471Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx-tensorrt/modules/third_party/onnx/modules/third_party/benchmark/config remote.origin.url 2023-01-11T21:17:00.2055741Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2023-01-11T21:17:00.2090112Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx-tensorrt/modules/third_party/onnx/modules/third_party/pybind11/config remote.origin.url 2023-01-11T21:17:00.2104387Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2023-01-11T21:17:00.2136132Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx-tensorrt/modules/third_party/onnx/modules/third_party/pybind11/modules/tools/clang/config remote.origin.url 2023-01-11T21:17:00.2155452Z Entering 'third_party/pocketfft' 2023-01-11T21:17:00.2186814Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/pocketfft/config remote.origin.url 2023-01-11T21:17:00.2218332Z Entering 'third_party/protobuf' 2023-01-11T21:17:00.2237295Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/config remote.origin.url 2023-01-11T21:17:00.2254851Z Entering 'third_party/protobuf/third_party/benchmark' 2023-01-11T21:17:00.2287238Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/benchmark/config remote.origin.url 2023-01-11T21:17:00.2302598Z Entering 'third_party/protobuf/third_party/googletest' 2023-01-11T21:17:00.2333823Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/googletest/config remote.origin.url 2023-01-11T21:17:00.2350877Z Entering 'third_party/psimd' 2023-01-11T21:17:00.2382859Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/psimd/config remote.origin.url 2023-01-11T21:17:00.2396592Z Entering 'third_party/pthreadpool' 2023-01-11T21:17:00.2428840Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/pthreadpool/config remote.origin.url 2023-01-11T21:17:00.2442423Z Entering 'third_party/pybind11' 2023-01-11T21:17:00.2474771Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/pybind11/config remote.origin.url 2023-01-11T21:17:00.2489227Z Entering 'third_party/python-enum' 2023-01-11T21:17:00.2521688Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/python-enum/config remote.origin.url 2023-01-11T21:17:00.2535957Z Entering 'third_party/python-peachpy' 2023-01-11T21:17:00.2568344Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/python-peachpy/config remote.origin.url 2023-01-11T21:17:00.2581921Z Entering 'third_party/python-six' 2023-01-11T21:17:00.2613945Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/python-six/config remote.origin.url 2023-01-11T21:17:00.2628253Z Entering 'third_party/sleef' 2023-01-11T21:17:00.2660275Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/sleef/config remote.origin.url 2023-01-11T21:17:00.2674719Z Entering 'third_party/tbb' 2023-01-11T21:17:00.2706900Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tbb/config remote.origin.url 2023-01-11T21:17:00.2722954Z Entering 'third_party/tensorpipe' 2023-01-11T21:17:00.2754517Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/config remote.origin.url 2023-01-11T21:17:00.2768811Z Entering 'third_party/tensorpipe/third_party/googletest' 2023-01-11T21:17:00.2800712Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/googletest/config remote.origin.url 2023-01-11T21:17:00.2816036Z Entering 'third_party/tensorpipe/third_party/libnop' 2023-01-11T21:17:00.2849498Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libnop/config remote.origin.url 2023-01-11T21:17:00.2864208Z Entering 'third_party/tensorpipe/third_party/libuv' 2023-01-11T21:17:00.2895637Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libuv/config remote.origin.url 2023-01-11T21:17:00.2911346Z Entering 
'third_party/tensorpipe/third_party/pybind11' 2023-01-11T21:17:00.2943529Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/config remote.origin.url 2023-01-11T21:17:00.2956789Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2023-01-11T21:17:00.2990885Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/modules/tools/clang/config remote.origin.url 2023-01-11T21:17:00.3007477Z Entering 'third_party/zstd' 2023-01-11T21:17:00.3040364Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/zstd/config remote.origin.url 2023-01-11T21:17:00.6118949Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'git@github.com:' 2023-01-11T21:17:00.6383579Z Entering 'android/libs/fbjni' 2023-01-11T21:17:00.6418436Z Entering 'third_party/FP16' 2023-01-11T21:17:00.6454161Z Entering 'third_party/FXdiv' 2023-01-11T21:17:00.6489476Z Entering 'third_party/NNPACK' 2023-01-11T21:17:00.6524051Z Entering 'third_party/QNNPACK' 2023-01-11T21:17:00.6559917Z Entering 'third_party/VulkanMemoryAllocator' 2023-01-11T21:17:00.6596019Z Entering 'third_party/XNNPACK' 2023-01-11T21:17:00.6642817Z Entering 'third_party/benchmark' 2023-01-11T21:17:00.6679832Z Entering 'third_party/cpuinfo' 2023-01-11T21:17:00.6717895Z Entering 'third_party/cub' 2023-01-11T21:17:00.6752861Z Entering 'third_party/cudnn_frontend' 2023-01-11T21:17:00.6794789Z Entering 'third_party/cutlass' 2023-01-11T21:17:00.6836693Z Entering 'third_party/eigen' 2023-01-11T21:17:00.6874096Z Entering 'third_party/fbgemm' 2023-01-11T21:17:00.6909561Z Entering 'third_party/fbgemm/third_party/asmjit' 2023-01-11T21:17:00.6942256Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2023-01-11T21:17:00.6977382Z Entering 'third_party/fbgemm/third_party/googletest' 2023-01-11T21:17:00.7013043Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2023-01-11T21:17:00.7047671Z Entering 'third_party/flatbuffers' 2023-01-11T21:17:00.7086018Z Entering 'third_party/fmt' 2023-01-11T21:17:00.7121223Z Entering 'third_party/foxi' 2023-01-11T21:17:00.7155312Z Entering 'third_party/gemmlowp/gemmlowp' 2023-01-11T21:17:00.7190411Z Entering 'third_party/gloo' 2023-01-11T21:17:00.7225630Z Entering 'third_party/googletest' 2023-01-11T21:17:00.7260719Z Entering 'third_party/ideep' 2023-01-11T21:17:00.7295536Z Entering 'third_party/ideep/mkl-dnn' 2023-01-11T21:17:00.7332523Z Entering 'third_party/ideep/mkl-dnn/third_party/oneDNN' 2023-01-11T21:17:00.7372277Z Entering 'third_party/ios-cmake' 2023-01-11T21:17:00.7406950Z Entering 'third_party/ittapi' 2023-01-11T21:17:00.7441744Z Entering 'third_party/kineto' 2023-01-11T21:17:00.7476205Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2023-01-11T21:17:00.7510688Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2023-01-11T21:17:00.7545267Z Entering 'third_party/nccl/nccl' 2023-01-11T21:17:00.7580679Z Entering 'third_party/neon2sse' 2023-01-11T21:17:00.7615297Z Entering 'third_party/nlohmann' 2023-01-11T21:17:00.7652480Z Entering 'third_party/onnx' 2023-01-11T21:17:00.7699956Z Entering 'third_party/onnx/third_party/benchmark' 2023-01-11T21:17:00.7735077Z Entering 'third_party/onnx/third_party/pybind11' 2023-01-11T21:17:00.7773012Z Entering 'third_party/onnx-tensorrt' 2023-01-11T21:17:00.7807456Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2023-01-11T21:17:00.7845887Z 
Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2023-01-11T21:17:00.7881270Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2023-01-11T21:17:00.7914859Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2023-01-11T21:17:00.7953907Z Entering 'third_party/pocketfft' 2023-01-11T21:17:00.7988674Z Entering 'third_party/protobuf' 2023-01-11T21:17:00.8027435Z Entering 'third_party/protobuf/third_party/benchmark' 2023-01-11T21:17:00.8061049Z Entering 'third_party/protobuf/third_party/googletest' 2023-01-11T21:17:00.8096446Z Entering 'third_party/psimd' 2023-01-11T21:17:00.8130380Z Entering 'third_party/pthreadpool' 2023-01-11T21:17:00.8165077Z Entering 'third_party/pybind11' 2023-01-11T21:17:00.8200246Z Entering 'third_party/python-enum' 2023-01-11T21:17:00.8234379Z Entering 'third_party/python-peachpy' 2023-01-11T21:17:00.8268392Z Entering 'third_party/python-six' 2023-01-11T21:17:00.8303223Z Entering 'third_party/sleef' 2023-01-11T21:17:00.8338358Z Entering 'third_party/tbb' 2023-01-11T21:17:00.8373904Z Entering 'third_party/tensorpipe' 2023-01-11T21:17:00.8407909Z Entering 'third_party/tensorpipe/third_party/googletest' 2023-01-11T21:17:00.8442116Z Entering 'third_party/tensorpipe/third_party/libnop' 2023-01-11T21:17:00.8475031Z Entering 'third_party/tensorpipe/third_party/libuv' 2023-01-11T21:17:00.8509387Z Entering 'third_party/tensorpipe/third_party/pybind11' 2023-01-11T21:17:00.8543415Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2023-01-11T21:17:00.8581519Z Entering 'third_party/zstd' 2023-01-11T21:17:00.8627130Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'org-21003710@github.com:' 2023-01-11T21:17:00.8880997Z Entering 'android/libs/fbjni' 2023-01-11T21:17:00.8915117Z Entering 'third_party/FP16' 2023-01-11T21:17:00.8949630Z Entering 'third_party/FXdiv' 2023-01-11T21:17:00.8983095Z Entering 'third_party/NNPACK' 2023-01-11T21:17:00.9017805Z Entering 'third_party/QNNPACK' 2023-01-11T21:17:00.9052290Z Entering 'third_party/VulkanMemoryAllocator' 2023-01-11T21:17:00.9089531Z Entering 'third_party/XNNPACK' 2023-01-11T21:17:00.9133358Z Entering 'third_party/benchmark' 2023-01-11T21:17:00.9167969Z Entering 'third_party/cpuinfo' 2023-01-11T21:17:00.9202911Z Entering 'third_party/cub' 2023-01-11T21:17:00.9237157Z Entering 'third_party/cudnn_frontend' 2023-01-11T21:17:00.9277535Z Entering 'third_party/cutlass' 2023-01-11T21:17:00.9316765Z Entering 'third_party/eigen' 2023-01-11T21:17:00.9352926Z Entering 'third_party/fbgemm' 2023-01-11T21:17:00.9388262Z Entering 'third_party/fbgemm/third_party/asmjit' 2023-01-11T21:17:00.9421250Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2023-01-11T21:17:00.9454319Z Entering 'third_party/fbgemm/third_party/googletest' 2023-01-11T21:17:00.9487422Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2023-01-11T21:17:00.9522173Z Entering 'third_party/flatbuffers' 2023-01-11T21:17:00.9557936Z Entering 'third_party/fmt' 2023-01-11T21:17:00.9592814Z Entering 'third_party/foxi' 2023-01-11T21:17:00.9627988Z Entering 'third_party/gemmlowp/gemmlowp' 2023-01-11T21:17:00.9662173Z Entering 'third_party/gloo' 2023-01-11T21:17:00.9696621Z Entering 'third_party/googletest' 2023-01-11T21:17:00.9731449Z Entering 'third_party/ideep' 2023-01-11T21:17:00.9766282Z Entering 'third_party/ideep/mkl-dnn' 2023-01-11T21:17:00.9802080Z Entering 'third_party/ideep/mkl-dnn/third_party/oneDNN' 
2023-01-11T21:17:00.9840712Z Entering 'third_party/ios-cmake' 2023-01-11T21:17:00.9874800Z Entering 'third_party/ittapi' 2023-01-11T21:17:00.9908684Z Entering 'third_party/kineto' 2023-01-11T21:17:00.9942156Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2023-01-11T21:17:00.9975594Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2023-01-11T21:17:01.0011090Z Entering 'third_party/nccl/nccl' 2023-01-11T21:17:01.0044522Z Entering 'third_party/neon2sse' 2023-01-11T21:17:01.0078671Z Entering 'third_party/nlohmann' 2023-01-11T21:17:01.0113894Z Entering 'third_party/onnx' 2023-01-11T21:17:01.0158935Z Entering 'third_party/onnx/third_party/benchmark' 2023-01-11T21:17:01.0194501Z Entering 'third_party/onnx/third_party/pybind11' 2023-01-11T21:17:01.0230355Z Entering 'third_party/onnx-tensorrt' 2023-01-11T21:17:01.0264057Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2023-01-11T21:17:01.0301161Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2023-01-11T21:17:01.0335198Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2023-01-11T21:17:01.0368502Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2023-01-11T21:17:01.0406659Z Entering 'third_party/pocketfft' 2023-01-11T21:17:01.0441981Z Entering 'third_party/protobuf' 2023-01-11T21:17:01.0480921Z Entering 'third_party/protobuf/third_party/benchmark' 2023-01-11T21:17:01.0514137Z Entering 'third_party/protobuf/third_party/googletest' 2023-01-11T21:17:01.0549143Z Entering 'third_party/psimd' 2023-01-11T21:17:01.0583486Z Entering 'third_party/pthreadpool' 2023-01-11T21:17:01.0617657Z Entering 'third_party/pybind11' 2023-01-11T21:17:01.0651459Z Entering 'third_party/python-enum' 2023-01-11T21:17:01.0685562Z Entering 'third_party/python-peachpy' 2023-01-11T21:17:01.0719602Z Entering 'third_party/python-six' 2023-01-11T21:17:01.0753886Z Entering 'third_party/sleef' 2023-01-11T21:17:01.0787790Z Entering 'third_party/tbb' 2023-01-11T21:17:01.0822872Z Entering 'third_party/tensorpipe' 2023-01-11T21:17:01.0856853Z Entering 'third_party/tensorpipe/third_party/googletest' 2023-01-11T21:17:01.0892229Z Entering 'third_party/tensorpipe/third_party/libnop' 2023-01-11T21:17:01.0927035Z Entering 'third_party/tensorpipe/third_party/libuv' 2023-01-11T21:17:01.0961432Z Entering 'third_party/tensorpipe/third_party/pybind11' 2023-01-11T21:17:01.0994079Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2023-01-11T21:17:01.1030378Z Entering 'third_party/zstd' 2023-01-11T21:17:01.1073475Z ##[endgroup] 2023-01-11T21:17:01.1111613Z [command]/usr/bin/git log -1 --format='%H' 2023-01-11T21:17:01.1136662Z '8419ddda87c8a47eacc63b54bc7ec98c1f27c26e' 2023-01-11T21:17:01.1256077Z Prepare all required actions 2023-01-11T21:17:01.1281595Z ##[group]Run ./.github/actions/setup-linux 2023-01-11T21:17:01.1281796Z env: 2023-01-11T21:17:01.1281972Z GIT_DEFAULT_BRANCH: master 2023-01-11T21:17:01.1282145Z ##[endgroup] 2023-01-11T21:17:01.1327962Z ##[group]Run set -euo pipefail 2023-01-11T21:17:01.1328193Z set -euo pipefail 2023-01-11T21:17:01.1328405Z function get_ec2_metadata() { 2023-01-11T21:17:01.1328675Z  # Pulled from instance metadata endpoint for EC2 2023-01-11T21:17:01.1329020Z  # see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html 2023-01-11T21:17:01.1329323Z  category=$1 2023-01-11T21:17:01.1329564Z  curl -fsSL "http://169.254.169.254/latest/meta-data/${category}" 2023-01-11T21:17:01.1329790Z } 
2023-01-11T21:17:01.1329982Z echo "ami-id: $(get_ec2_metadata ami-id)" 2023-01-11T21:17:01.1330314Z echo "instance-id: $(get_ec2_metadata instance-id)" 2023-01-11T21:17:01.1330736Z echo "instance-type: $(get_ec2_metadata instance-type)" 2023-01-11T21:17:01.1331128Z echo "system info $(uname -a)" 2023-01-11T21:17:01.1343652Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-01-11T21:17:01.1343880Z env: 2023-01-11T21:17:01.1344043Z GIT_DEFAULT_BRANCH: master 2023-01-11T21:17:01.1344232Z ##[endgroup] 2023-01-11T21:17:01.1421454Z ami-id: ami-096198a0bccc6bad4 2023-01-11T21:17:01.1469711Z instance-id: i-0c70567fa7dfd6397 2023-01-11T21:17:01.1515638Z instance-type: c5.2xlarge 2023-01-11T21:17:01.1521896Z system info Linux ip-10-0-0-26.ec2.internal 4.14.252-195.483.amzn2.x86_64 #1 SMP Mon Nov 1 20:58:46 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux 2023-01-11T21:17:01.1536638Z ##[group]Run if systemctl is-active --quiet docker; then 2023-01-11T21:17:01.1536908Z if systemctl is-active --quiet docker; then 2023-01-11T21:17:01.1537156Z  echo "Docker daemon is running..."; 2023-01-11T21:17:01.1537369Z else 2023-01-11T21:17:01.1537591Z  echo "Starting docker deamon..." && sudo systemctl start docker; 2023-01-11T21:17:01.1537816Z fi 2023-01-11T21:17:01.1548990Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-01-11T21:17:01.1549194Z env: 2023-01-11T21:17:01.1549368Z GIT_DEFAULT_BRANCH: master 2023-01-11T21:17:01.1549553Z ##[endgroup] 2023-01-11T21:17:01.1589972Z Docker daemon is running... 2023-01-11T21:17:01.1603313Z ##[group]Run AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\") 2023-01-11T21:17:01.1603653Z AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\") 2023-01-11T21:17:01.1603937Z retry () { "$@" || (sleep 1 && "$@") || (sleep 2 && "$@") } 2023-01-11T21:17:01.1604330Z retry aws ecr get-login*** "$AWS_DEFAULT_REGION" | docker login --username AWS \ 2023-01-11T21:17:01.1604684Z  --password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com" 2023-01-11T21:17:01.1615173Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-01-11T21:17:01.1615377Z env: 2023-01-11T21:17:01.1615555Z GIT_DEFAULT_BRANCH: master 2023-01-11T21:17:01.1615754Z AWS_RETRY_MODE: standard 2023-01-11T21:17:01.1615929Z AWS_MAX_ATTEMPTS: 5 2023-01-11T21:17:01.1616126Z AWS_DEFAULT_REGION: us-east-1 2023-01-11T21:17:01.1616317Z ##[endgroup] 2023-01-11T21:17:01.8979915Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2023-01-11T21:17:01.8980322Z Configure a credential helper to remove this warning. 
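The retry () { "$@" || (sleep 1 && "$@") || (sleep 2 && "$@") } helper used for the ECR login above is a plain Bash back-off idiom: run the command once and, on failure, try again after 1 s and once more after 2 s before letting the step fail. A minimal, self-contained sketch of the same pattern follows; the aws ecr get-login-password call is an assumption standing in for the command GitHub Actions masks as get-login*** in this log.

retry () { "$@" || (sleep 1 && "$@") || (sleep 2 && "$@"); }

AWS_DEFAULT_REGION=us-east-1
# Same account-ID extraction as in the step above.
AWS_ACCOUNT_ID=$(aws sts get-caller-identity | grep Account | cut -f4 -d\")

# Hypothetical login command: the real invocation is redacted in the log, but
# `aws ecr get-login-password` piped into `docker login --password-stdin` is the
# usual way to authenticate Docker against ECR.
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" \
  | docker login --username AWS --password-stdin \
      "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com"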
See 2023-01-11T21:17:01.8980789Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2023-01-11T21:17:01.8981089Z 2023-01-11T21:17:01.8981222Z Login Succeeded 2023-01-11T21:17:01.9008977Z ##[group]Run env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2023-01-11T21:17:01.9009268Z env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2023-01-11T21:17:01.9009611Z env | grep '^CI' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2023-01-11T21:17:01.9020871Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-01-11T21:17:01.9021079Z env: 2023-01-11T21:17:01.9021260Z GIT_DEFAULT_BRANCH: master 2023-01-11T21:17:01.9021451Z ##[endgroup] 2023-01-11T21:17:01.9091542Z ##[group]Run pytorch/test-infra/.github/actions/pull-docker-image@main 2023-01-11T21:17:01.9091788Z with: 2023-01-11T21:17:01.9092163Z docker-image: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.7-cudnn8-py3-gcc7:fd224c2e6c79d7fdec6408da598bf52bc5b201dd 2023-01-11T21:17:01.9092508Z env: 2023-01-11T21:17:01.9092684Z GIT_DEFAULT_BRANCH: master 2023-01-11T21:17:01.9092873Z ##[endgroup] 2023-01-11T21:17:01.9104337Z ##[group]Run retry () { "$@" || (sleep 1 && "$@") || (sleep 2 && "$@") } 2023-01-11T21:17:01.9104596Z retry () { "$@" || (sleep 1 && "$@") || (sleep 2 && "$@") } 2023-01-11T21:17:01.9104856Z # ignore output since only exit code is used for conditional 2023-01-11T21:17:01.9105142Z # only pull docker image if it's not available locally 2023-01-11T21:17:01.9105435Z if ! docker inspect --type=image "${DOCKER_IMAGE}" >/dev/null 2>/dev/null; then 2023-01-11T21:17:01.9105727Z  retry docker pull "${DOCKER_IMAGE}" 2023-01-11T21:17:01.9105926Z fi 2023-01-11T21:17:01.9116114Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-01-11T21:17:01.9116335Z env: 2023-01-11T21:17:01.9116513Z GIT_DEFAULT_BRANCH: master 2023-01-11T21:17:01.9116893Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.7-cudnn8-py3-gcc7:fd224c2e6c79d7fdec6408da598bf52bc5b201dd 2023-01-11T21:17:01.9117261Z ##[endgroup] 2023-01-11T21:17:02.2094644Z fd224c2e6c79d7fdec6408da598bf52bc5b201dd: Pulling from pytorch/pytorch-linux-bionic-cuda11.7-cudnn8-py3-gcc7 2023-01-11T21:17:02.2095252Z fb668870d8a7: Pulling fs layer 2023-01-11T21:17:02.2095554Z 4542784317be: Pulling fs layer 2023-01-11T21:17:02.2095915Z e0bec5df5af5: Pulling fs layer 2023-01-11T21:17:02.2096254Z 4053f75740ab: Pulling fs layer 2023-01-11T21:17:02.2096613Z 57e09105cdfd: Pulling fs layer 2023-01-11T21:17:02.2096956Z 606761d225e5: Pulling fs layer 2023-01-11T21:17:02.2097273Z 69473a703fb4: Pulling fs layer 2023-01-11T21:17:02.2097623Z a08ab4e0594b: Pulling fs layer 2023-01-11T21:17:02.2097963Z 4cd507bccac2: Pulling fs layer 2023-01-11T21:17:02.2098277Z fa92f16621a4: Pulling fs layer 2023-01-11T21:17:02.2098628Z 6dc2b05bd224: Pulling fs layer 2023-01-11T21:17:02.2098969Z ce4a87d45645: Pulling fs layer 2023-01-11T21:17:02.2099314Z 41860ea59b6c: Pulling fs layer 2023-01-11T21:17:02.2099627Z 87d0ffa55850: Pulling fs layer 2023-01-11T21:17:02.2099991Z f9f75aaba8d7: Pulling fs layer 2023-01-11T21:17:02.2100288Z 0c06be5c20e0: Pulling fs layer 2023-01-11T21:17:02.2100459Z 69473a703fb4: Waiting 2023-01-11T21:17:02.2100647Z d23c0a07b67c: Pulling fs layer 2023-01-11T21:17:02.2100842Z 1001f0d2f3d0: Pulling fs layer 2023-01-11T21:17:02.2101173Z a08ab4e0594b: Waiting 2023-01-11T21:17:02.2101361Z e1c655e7ec0e: Pulling fs layer 2023-01-11T21:17:02.2101555Z a11b4b5fd784: Pulling fs layer 
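The pull-docker-image step above pulls only when the image is missing from the local Docker cache: docker inspect --type=image is run purely for its exit code, and the pull itself is wrapped in the same retry helper. A short annotated sketch of that pattern, with the image tag reduced to a placeholder variable rather than the full ECR tag from this job:

retry () { "$@" || (sleep 1 && "$@") || (sleep 2 && "$@"); }

# DOCKER_IMAGE is supplied by the workflow; the ECR tag printed above is one example.
DOCKER_IMAGE="${DOCKER_IMAGE:?set to the image to pull}"

# `docker inspect` exits non-zero when the image is not cached locally; only then
# is the (retried) pull attempted, so warm runners skip the multi-GB download.
if ! docker inspect --type=image "${DOCKER_IMAGE}" >/dev/null 2>/dev/null; then
  retry docker pull "${DOCKER_IMAGE}"
fi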
2023-01-11T21:17:02.2101743Z bc41eab7f454: Pulling fs layer 2023-01-11T21:17:02.2101929Z 606761d225e5: Waiting 2023-01-11T21:17:02.2102119Z b8f759fd0191: Pulling fs layer 2023-01-11T21:17:02.2102293Z 4cd507bccac2: Waiting 2023-01-11T21:17:02.2102758Z f410dcc9d0be: Pulling fs layer 2023-01-11T21:17:02.2102945Z 57e09105cdfd: Waiting 2023-01-11T21:17:02.2103110Z fa92f16621a4: Waiting 2023-01-11T21:17:02.2103296Z 90d8f9bbe048: Pulling fs layer 2023-01-11T21:17:02.2103494Z eedfbaa04e4f: Pulling fs layer 2023-01-11T21:17:02.2103675Z 6dc2b05bd224: Waiting 2023-01-11T21:17:02.2103855Z 2f2308643d60: Pulling fs layer 2023-01-11T21:17:02.2104050Z c1a92fad2c2c: Pulling fs layer 2023-01-11T21:17:02.2104221Z ce4a87d45645: Waiting 2023-01-11T21:17:02.2104395Z 41860ea59b6c: Waiting 2023-01-11T21:17:02.2104620Z 47037a50f270: Pulling fs layer 2023-01-11T21:17:02.2104793Z 87d0ffa55850: Waiting 2023-01-11T21:17:02.2104974Z 1a2fd7b216d7: Pulling fs layer 2023-01-11T21:17:02.2105167Z 765839304d2e: Pulling fs layer 2023-01-11T21:17:02.2105341Z f9f75aaba8d7: Waiting 2023-01-11T21:17:02.2105530Z e51794baeb92: Pulling fs layer 2023-01-11T21:17:02.2105821Z ea4bfeaa0fc7: Pulling fs layer 2023-01-11T21:17:02.2105996Z 0c06be5c20e0: Waiting 2023-01-11T21:17:02.2106178Z d8065d17513d: Pulling fs layer 2023-01-11T21:17:02.2106374Z 6d83ca3dedf3: Pulling fs layer 2023-01-11T21:17:02.2106546Z d23c0a07b67c: Waiting 2023-01-11T21:17:02.2106734Z 12ddc57b99eb: Pulling fs layer 2023-01-11T21:17:02.2106946Z 1001f0d2f3d0: Waiting 2023-01-11T21:17:02.2107126Z b590670d273c: Pulling fs layer 2023-01-11T21:17:02.2107310Z 8afbc57dfec9: Pulling fs layer 2023-01-11T21:17:02.2107508Z 29a7c0d5fa4c: Pulling fs layer 2023-01-11T21:17:02.2107700Z 16825bb02017: Pulling fs layer 2023-01-11T21:17:02.2107880Z bdf297d7f88c: Pulling fs layer 2023-01-11T21:17:02.2108064Z 90d8f9bbe048: Waiting 2023-01-11T21:17:02.2108253Z 885c12efa4ae: Pulling fs layer 2023-01-11T21:17:02.2108433Z 28c5689cb975: Pulling fs layer 2023-01-11T21:17:02.2108699Z cca768f96df4: Pulling fs layer 2023-01-11T21:17:02.2108899Z 904b81494b5e: Pulling fs layer 2023-01-11T21:17:02.2109073Z e1c655e7ec0e: Waiting 2023-01-11T21:17:02.2109259Z eedfbaa04e4f: Waiting 2023-01-11T21:17:02.2109452Z 61eecfa8b34e: Pulling fs layer 2023-01-11T21:17:02.2109628Z a11b4b5fd784: Waiting 2023-01-11T21:17:02.2109812Z 95c1ac011645: Pulling fs layer 2023-01-11T21:17:02.2110053Z 07cee023724c: Pulling fs layer 2023-01-11T21:17:02.2110229Z c1a92fad2c2c: Waiting 2023-01-11T21:17:02.2110416Z 195d560d8cf6: Pulling fs layer 2023-01-11T21:17:02.2110642Z b8f759fd0191: Waiting 2023-01-11T21:17:02.2110802Z 47037a50f270: Waiting 2023-01-11T21:17:02.2110975Z f410dcc9d0be: Waiting 2023-01-11T21:17:02.2111148Z 4053f75740ab: Waiting 2023-01-11T21:17:02.2111342Z cca768f96df4: Waiting 2023-01-11T21:17:02.2111515Z 16825bb02017: Waiting 2023-01-11T21:17:02.2111698Z a399389c7f8e: Pulling fs layer 2023-01-11T21:17:02.2111885Z 1a2fd7b216d7: Waiting 2023-01-11T21:17:02.2112044Z 29a7c0d5fa4c: Waiting 2023-01-11T21:17:02.2112214Z 904b81494b5e: Waiting 2023-01-11T21:17:02.2112394Z 7447f84b33ef: Pulling fs layer 2023-01-11T21:17:02.2112567Z bdf297d7f88c: Waiting 2023-01-11T21:17:02.2112743Z 6d83ca3dedf3: Waiting 2023-01-11T21:17:02.2112914Z 765839304d2e: Waiting 2023-01-11T21:17:02.2113081Z 0d8aeb1421f9: Pulling fs layer 2023-01-11T21:17:02.2113275Z 02048a597c22: Pulling fs layer 2023-01-11T21:17:02.2113468Z 25d615d8a5e2: Pulling fs layer 2023-01-11T21:17:02.2113638Z 885c12efa4ae: Waiting 2023-01-11T21:17:02.2113814Z 12ddc57b99eb: 
Waiting 2023-01-11T21:17:02.2113991Z 09d400b86049: Pulling fs layer 2023-01-11T21:17:02.2114159Z 28c5689cb975: Waiting 2023-01-11T21:17:02.2114330Z b590670d273c: Waiting 2023-01-11T21:17:02.2114499Z 61eecfa8b34e: Waiting 2023-01-11T21:17:02.2114659Z 95c1ac011645: Waiting 2023-01-11T21:17:02.2114836Z ea4bfeaa0fc7: Waiting 2023-01-11T21:17:02.2115012Z 07cee023724c: Waiting 2023-01-11T21:17:02.2115238Z d8065d17513d: Waiting 2023-01-11T21:17:02.2115408Z 195d560d8cf6: Waiting 2023-01-11T21:17:02.2115577Z 09d400b86049: Waiting 2023-01-11T21:17:02.2115728Z 02048a597c22: Waiting 2023-01-11T21:17:02.2115896Z 25d615d8a5e2: Waiting 2023-01-11T21:17:02.2116067Z 0d8aeb1421f9: Waiting 2023-01-11T21:17:02.2116225Z 7447f84b33ef: Waiting 2023-01-11T21:17:02.2116398Z 8afbc57dfec9: Waiting 2023-01-11T21:17:02.3342872Z 4542784317be: Verifying Checksum 2023-01-11T21:17:02.3343221Z 4542784317be: Download complete 2023-01-11T21:17:02.4771636Z 4053f75740ab: Download complete 2023-01-11T21:17:02.5608013Z 57e09105cdfd: Download complete 2023-01-11T21:17:02.6253112Z fb668870d8a7: Verifying Checksum 2023-01-11T21:17:02.6253405Z fb668870d8a7: Download complete 2023-01-11T21:17:02.7353771Z 69473a703fb4: Download complete 2023-01-11T21:17:02.8284934Z a08ab4e0594b: Verifying Checksum 2023-01-11T21:17:02.8285327Z a08ab4e0594b: Download complete 2023-01-11T21:17:02.9004433Z 4cd507bccac2: Verifying Checksum 2023-01-11T21:17:02.9004722Z 4cd507bccac2: Download complete 2023-01-11T21:17:02.9716012Z e0bec5df5af5: Verifying Checksum 2023-01-11T21:17:02.9716413Z e0bec5df5af5: Download complete 2023-01-11T21:17:03.0545432Z 6dc2b05bd224: Verifying Checksum 2023-01-11T21:17:03.0546010Z 6dc2b05bd224: Download complete 2023-01-11T21:17:03.1495230Z ce4a87d45645: Verifying Checksum 2023-01-11T21:17:03.1496128Z ce4a87d45645: Download complete 2023-01-11T21:17:03.2958917Z fb668870d8a7: Pull complete 2023-01-11T21:17:03.5326484Z 4542784317be: Pull complete 2023-01-11T21:17:04.2848500Z e0bec5df5af5: Pull complete 2023-01-11T21:17:04.3798203Z 4053f75740ab: Pull complete 2023-01-11T21:17:04.4904180Z 57e09105cdfd: Pull complete 2023-01-11T21:17:12.5403155Z 41860ea59b6c: Verifying Checksum 2023-01-11T21:17:12.5403620Z 41860ea59b6c: Download complete 2023-01-11T21:17:12.6656972Z 87d0ffa55850: Verifying Checksum 2023-01-11T21:17:12.6657488Z 87d0ffa55850: Download complete 2023-01-11T21:17:12.7588587Z f9f75aaba8d7: Verifying Checksum 2023-01-11T21:17:12.7589086Z f9f75aaba8d7: Download complete 2023-01-11T21:17:12.8614712Z 0c06be5c20e0: Download complete 2023-01-11T21:17:15.7972827Z d23c0a07b67c: Verifying Checksum 2023-01-11T21:17:15.7973096Z d23c0a07b67c: Download complete 2023-01-11T21:17:15.9033325Z 1001f0d2f3d0: Verifying Checksum 2023-01-11T21:17:15.9033588Z 1001f0d2f3d0: Download complete 2023-01-11T21:17:16.0052945Z e1c655e7ec0e: Verifying Checksum 2023-01-11T21:17:16.0053238Z e1c655e7ec0e: Download complete 2023-01-11T21:17:16.1974258Z 606761d225e5: Verifying Checksum 2023-01-11T21:17:16.1974506Z 606761d225e5: Download complete 2023-01-11T21:17:16.2862260Z bc41eab7f454: Download complete 2023-01-11T21:17:16.4014417Z b8f759fd0191: Download complete 2023-01-11T21:17:16.4824482Z f410dcc9d0be: Verifying Checksum 2023-01-11T21:17:16.4824822Z f410dcc9d0be: Download complete 2023-01-11T21:17:16.5485047Z 90d8f9bbe048: Verifying Checksum 2023-01-11T21:17:16.5485429Z 90d8f9bbe048: Download complete 2023-01-11T21:17:16.6268165Z eedfbaa04e4f: Download complete 2023-01-11T21:17:16.7132775Z 2f2308643d60: Verifying Checksum 2023-01-11T21:17:16.7133217Z 
2f2308643d60: Download complete 2023-01-11T21:17:20.5380585Z c1a92fad2c2c: Download complete 2023-01-11T21:17:20.6425543Z 47037a50f270: Download complete 2023-01-11T21:17:20.7357786Z 1a2fd7b216d7: Download complete 2023-01-11T21:17:20.9163798Z 765839304d2e: Verifying Checksum 2023-01-11T21:17:20.9164231Z 765839304d2e: Download complete 2023-01-11T21:17:21.0718928Z e51794baeb92: Download complete 2023-01-11T21:17:21.1931972Z ea4bfeaa0fc7: Verifying Checksum 2023-01-11T21:17:22.1267989Z fa92f16621a4: Verifying Checksum 2023-01-11T21:17:22.1268338Z fa92f16621a4: Download complete 2023-01-11T21:17:22.2233700Z 6d83ca3dedf3: Download complete 2023-01-11T21:17:22.3018984Z 12ddc57b99eb: Verifying Checksum 2023-01-11T21:17:22.3019421Z 12ddc57b99eb: Download complete 2023-01-11T21:17:23.9161630Z b590670d273c: Verifying Checksum 2023-01-11T21:17:23.9161951Z b590670d273c: Download complete 2023-01-11T21:17:23.9902951Z 8afbc57dfec9: Verifying Checksum 2023-01-11T21:17:23.9903328Z 8afbc57dfec9: Download complete 2023-01-11T21:17:24.0946297Z 29a7c0d5fa4c: Verifying Checksum 2023-01-11T21:17:24.0946581Z 29a7c0d5fa4c: Download complete 2023-01-11T21:17:24.7935105Z 16825bb02017: Verifying Checksum 2023-01-11T21:17:24.7935501Z 16825bb02017: Download complete 2023-01-11T21:17:24.8794565Z bdf297d7f88c: Verifying Checksum 2023-01-11T21:17:24.8794879Z bdf297d7f88c: Download complete 2023-01-11T21:17:26.8210938Z 885c12efa4ae: Verifying Checksum 2023-01-11T21:17:26.8211451Z 885c12efa4ae: Download complete 2023-01-11T21:17:26.9502296Z 28c5689cb975: Verifying Checksum 2023-01-11T21:17:26.9502899Z 28c5689cb975: Download complete 2023-01-11T21:17:27.0539923Z cca768f96df4: Verifying Checksum 2023-01-11T21:17:27.0540248Z cca768f96df4: Download complete 2023-01-11T21:17:30.3284300Z d8065d17513d: Verifying Checksum 2023-01-11T21:17:30.3284556Z d8065d17513d: Download complete 2023-01-11T21:17:30.4133493Z 61eecfa8b34e: Verifying Checksum 2023-01-11T21:17:30.5006794Z 61eecfa8b34e: Download complete 2023-01-11T21:17:30.5007041Z 95c1ac011645: Download complete 2023-01-11T21:17:30.6001126Z 07cee023724c: Download complete 2023-01-11T21:17:30.7049381Z 195d560d8cf6: Verifying Checksum 2023-01-11T21:17:30.7049931Z 195d560d8cf6: Download complete 2023-01-11T21:17:31.4094192Z a399389c7f8e: Verifying Checksum 2023-01-11T21:17:31.4094555Z a399389c7f8e: Download complete 2023-01-11T21:17:31.5091736Z 7447f84b33ef: Verifying Checksum 2023-01-11T21:17:31.5092296Z 7447f84b33ef: Download complete 2023-01-11T21:17:32.1623961Z 606761d225e5: Pull complete 2023-01-11T21:17:32.6385506Z 69473a703fb4: Pull complete 2023-01-11T21:17:32.9449922Z a08ab4e0594b: Pull complete 2023-01-11T21:17:33.1397289Z 4cd507bccac2: Pull complete 2023-01-11T21:17:34.4632774Z 0d8aeb1421f9: Verifying Checksum 2023-01-11T21:17:34.4633180Z 0d8aeb1421f9: Download complete 2023-01-11T21:17:34.5399886Z 02048a597c22: Verifying Checksum 2023-01-11T21:17:34.5400296Z 02048a597c22: Download complete 2023-01-11T21:17:37.8446127Z 904b81494b5e: Verifying Checksum 2023-01-11T21:17:37.8446415Z 904b81494b5e: Download complete 2023-01-11T21:17:37.9084801Z 09d400b86049: Download complete 2023-01-11T21:18:12.7974032Z fa92f16621a4: Pull complete 2023-01-11T21:18:13.2730005Z 6dc2b05bd224: Pull complete 2023-01-11T21:18:13.4919678Z ce4a87d45645: Pull complete 2023-01-11T21:18:21.7077098Z 41860ea59b6c: Pull complete 2023-01-11T21:18:21.9886260Z 87d0ffa55850: Pull complete 2023-01-11T21:18:22.2430257Z f9f75aaba8d7: Pull complete 2023-01-11T21:18:22.4637221Z 0c06be5c20e0: Pull complete 
2023-01-11T21:18:25.1917158Z a11b4b5fd784: Verifying Checksum 2023-01-11T21:18:25.1917411Z a11b4b5fd784: Download complete 2023-01-11T21:18:25.4133558Z d23c0a07b67c: Pull complete 2023-01-11T21:18:25.9020981Z 1001f0d2f3d0: Pull complete 2023-01-11T21:18:26.1242346Z e1c655e7ec0e: Pull complete 2023-01-11T21:18:56.5584916Z 25d615d8a5e2: Verifying Checksum 2023-01-11T21:18:56.5585204Z 25d615d8a5e2: Download complete 2023-01-11T21:19:05.0675886Z a11b4b5fd784: Pull complete 2023-01-11T21:19:05.4222020Z bc41eab7f454: Pull complete 2023-01-11T21:19:05.5585388Z b8f759fd0191: Pull complete 2023-01-11T21:19:05.7720935Z f410dcc9d0be: Pull complete 2023-01-11T21:19:05.9962364Z 90d8f9bbe048: Pull complete 2023-01-11T21:19:06.2048613Z eedfbaa04e4f: Pull complete 2023-01-11T21:19:06.4136462Z 2f2308643d60: Pull complete 2023-01-11T21:19:08.3341884Z c1a92fad2c2c: Pull complete 2023-01-11T21:19:08.5649228Z 47037a50f270: Pull complete 2023-01-11T21:19:08.8007578Z 1a2fd7b216d7: Pull complete 2023-01-11T21:19:09.0797027Z 765839304d2e: Pull complete 2023-01-11T21:19:09.3245246Z e51794baeb92: Pull complete 2023-01-11T21:19:09.5556598Z ea4bfeaa0fc7: Pull complete 2023-01-11T21:19:13.7146481Z d8065d17513d: Pull complete 2023-01-11T21:19:13.9488194Z 6d83ca3dedf3: Pull complete 2023-01-11T21:19:14.1470630Z 12ddc57b99eb: Pull complete 2023-01-11T21:19:15.0145794Z b590670d273c: Pull complete 2023-01-11T21:19:15.2589804Z 8afbc57dfec9: Pull complete 2023-01-11T21:19:15.4797863Z 29a7c0d5fa4c: Pull complete 2023-01-11T21:19:15.9131787Z 16825bb02017: Pull complete 2023-01-11T21:19:16.1578794Z bdf297d7f88c: Pull complete 2023-01-11T21:19:17.4302928Z 885c12efa4ae: Pull complete 2023-01-11T21:19:17.6696733Z 28c5689cb975: Pull complete 2023-01-11T21:19:17.9065201Z cca768f96df4: Pull complete 2023-01-11T21:19:21.7346234Z 904b81494b5e: Pull complete 2023-01-11T21:19:21.8774917Z 61eecfa8b34e: Pull complete 2023-01-11T21:19:22.0913868Z 95c1ac011645: Pull complete 2023-01-11T21:19:22.2964296Z 07cee023724c: Pull complete 2023-01-11T21:19:22.4925892Z 195d560d8cf6: Pull complete 2023-01-11T21:19:23.1826351Z a399389c7f8e: Pull complete 2023-01-11T21:19:23.3255791Z 7447f84b33ef: Pull complete 2023-01-11T21:19:24.9512283Z 0d8aeb1421f9: Pull complete 2023-01-11T21:19:25.1812447Z 02048a597c22: Pull complete 2023-01-11T21:19:47.6313628Z 25d615d8a5e2: Pull complete 2023-01-11T21:19:47.8552266Z 09d400b86049: Pull complete 2023-01-11T21:19:47.9921012Z Digest: sha256:0da23f4faf0ce20770149c4a783e08eaa91c07112511dc5511c77937c66edb24 2023-01-11T21:19:48.0258451Z Status: Downloaded newer image for 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.7-cudnn8-py3-gcc7:fd224c2e6c79d7fdec6408da598bf52bc5b201dd 2023-01-11T21:19:48.0446702Z 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.7-cudnn8-py3-gcc7:fd224c2e6c79d7fdec6408da598bf52bc5b201dd 2023-01-11T21:19:48.0705215Z ##[group]Run python3 -m pip install psutil==5.9.1 2023-01-11T21:19:48.0705592Z python3 -m pip install psutil==5.9.1 2023-01-11T21:19:48.0705922Z python3 -m pip install pynvml==11.4.1 2023-01-11T21:19:48.0706273Z python3 -m tools.stats.monitor > usage_log.txt 2>&1 & 2023-01-11T21:19:48.0706627Z echo "monitor-script-pid=${!}" >> "${GITHUB_OUTPUT}" 2023-01-11T21:19:48.6345071Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-01-11T21:19:48.6345330Z env: 2023-01-11T21:19:48.6345540Z GIT_DEFAULT_BRANCH: master 2023-01-11T21:19:48.6345755Z ##[endgroup] 2023-01-11T21:19:52.0304391Z Defaulting to user installation 
because normal site-packages is not writeable 2023-01-11T21:19:52.0574583Z Requirement already satisfied: psutil==5.9.1 in /home/ec2-user/.local/lib/python3.7/site-packages (5.9.1) 2023-01-11T21:19:52.4949179Z Defaulting to user installation because normal site-packages is not writeable 2023-01-11T21:19:52.5127475Z Requirement already satisfied: pynvml==11.4.1 in /home/ec2-user/.local/lib/python3.7/site-packages (11.4.1) 2023-01-11T21:19:52.7223532Z Prepare all required actions 2023-01-11T21:19:52.7223798Z Getting action download info 2023-01-11T21:19:53.0782714Z Download action repository 'seemethere/download-artifact-s3@v4' (SHA:4a8bfae15cc25cc0785c1603ee87a9da8fd442ea) 2023-01-11T21:19:53.2793725Z Download action repository 'actions/download-artifact@v3' (SHA:9bc31d5ccc31df68ecc42ccf4149144866c47d8a) 2023-01-11T21:19:53.4003005Z ##[group]Run ./.github/actions/download-build-artifacts 2023-01-11T21:19:53.4003242Z with: 2023-01-11T21:19:53.4003441Z name: linux-bionic-cuda11.7-py3.10-gcc7 2023-01-11T21:19:53.4003647Z env: 2023-01-11T21:19:53.4003823Z GIT_DEFAULT_BRANCH: master 2023-01-11T21:19:53.4003997Z ##[endgroup] 2023-01-11T21:19:53.4025483Z ##[group]Run seemethere/download-artifact-s3@v4 2023-01-11T21:19:53.4025703Z with: 2023-01-11T21:19:53.4025895Z name: linux-bionic-cuda11.7-py3.10-gcc7 2023-01-11T21:19:53.4026123Z s3-bucket: gha-artifacts 2023-01-11T21:19:53.4026314Z region: us-east-1 2023-01-11T21:19:53.4026499Z env: 2023-01-11T21:19:53.4026657Z GIT_DEFAULT_BRANCH: master 2023-01-11T21:19:53.4026845Z ##[endgroup] 2023-01-11T21:19:53.8386781Z Found 1 objects with prefix pytorch/pytorch/3896346758/linux-bionic-cuda11.7-py3.10-gcc7/ 2023-01-11T21:19:53.8387310Z Starting download (1/1): /home/ec2-user/actions-runner/_work/pytorch/pytorch/artifacts.zip 2023-01-11T21:20:04.3430047Z Finished download (1/1): /home/ec2-user/actions-runner/_work/pytorch/pytorch/artifacts.zip 2023-01-11T21:20:04.3430292Z 2023-01-11T21:20:04.3443922Z ##[warning]The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. 
For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/ 2023-01-11T21:20:04.3451720Z Artifact download has finished successfully 2023-01-11T21:20:04.3578940Z ##[group]Run unzip -o artifacts.zip 2023-01-11T21:20:04.3579176Z unzip -o artifacts.zip 2023-01-11T21:20:04.3590758Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-01-11T21:20:04.3590982Z env: 2023-01-11T21:20:04.3591163Z GIT_DEFAULT_BRANCH: master 2023-01-11T21:20:04.3591342Z ##[endgroup] 2023-01-11T21:20:04.3660051Z Archive: artifacts.zip 2023-01-11T21:20:04.3661527Z creating: dist/ 2023-01-11T21:20:05.9915498Z inflating: dist/torch-2.0.0a0+git8419ddd-cp310-cp310-linux_x86_64.whl 2023-01-11T21:20:05.9915895Z creating: build/custom_test_artifacts/ 2023-01-11T21:20:05.9916248Z creating: build/custom_test_artifacts/custom-op-build/ 2023-01-11T21:20:05.9916651Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/ 2023-01-11T21:20:05.9921135Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeOutput.log 2023-01-11T21:20:05.9921566Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/ 2023-01-11T21:20:05.9922020Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CMakeSystem.cmake 2023-01-11T21:20:05.9922474Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdC/ 2023-01-11T21:20:05.9922914Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdC/tmp/ 2023-01-11T21:20:05.9924304Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdC/CMakeCCompilerId.c 2023-01-11T21:20:05.9925173Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdC/a.out 2023-01-11T21:20:05.9925626Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCXX/ 2023-01-11T21:20:05.9926076Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCXX/tmp/ 2023-01-11T21:20:05.9927772Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCXX/CMakeCXXCompilerId.cpp 2023-01-11T21:20:05.9928623Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCXX/a.out 2023-01-11T21:20:05.9929896Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CMakeDetermineCompilerABI_C.bin 2023-01-11T21:20:05.9930786Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CMakeCCompiler.cmake 2023-01-11T21:20:05.9931870Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CMakeDetermineCompilerABI_CXX.bin 2023-01-11T21:20:05.9932635Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CMakeCXXCompiler.cmake 2023-01-11T21:20:05.9933107Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/ 2023-01-11T21:20:05.9933558Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/ 2023-01-11T21:20:05.9974828Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cpp1.ii 2023-01-11T21:20:05.9975412Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.c 2023-01-11T21:20:05.9975996Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.gpu 2023-01-11T21:20:05.9976579Z inflating: 
build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.stub.c 2023-01-11T21:20:05.9977153Z extracting: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.module_id 2023-01-11T21:20:05.9977715Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.ptx 2023-01-11T21:20:05.9978386Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.sm_52.cubin 2023-01-11T21:20:05.9978949Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.fatbin 2023-01-11T21:20:05.9979495Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.fatbin.c 2023-01-11T21:20:06.0010759Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cpp4.ii 2023-01-11T21:20:06.0042535Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.cpp 2023-01-11T21:20:06.0043200Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.o 2023-01-11T21:20:06.0043725Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.sm_52.cubin 2023-01-11T21:20:06.0044241Z extracting: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.reg.c 2023-01-11T21:20:06.0044747Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.fatbin 2023-01-11T21:20:06.0045265Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.fatbin.c 2023-01-11T21:20:06.0045799Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.o 2023-01-11T21:20:06.0047130Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/CMakeCUDACompilerId.cu 2023-01-11T21:20:06.0104414Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/a.out 2023-01-11T21:20:06.0161383Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CMakeDetermineCompilerABI_CUDA.bin 2023-01-11T21:20:06.0161907Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CMakeCUDACompiler.cmake 2023-01-11T21:20:06.0162347Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeTmp/ 2023-01-11T21:20:06.0162784Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeError.log 2023-01-11T21:20:06.0163405Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/cmake.check_cache 2023-01-11T21:20:06.0163846Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/ 2023-01-11T21:20:06.0164303Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/compiler_depend.ts 2023-01-11T21:20:06.0164791Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/compiler_depend.make 2023-01-11T21:20:06.0165275Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/depend.make 2023-01-11T21:20:06.0165741Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/link.txt 2023-01-11T21:20:06.0166196Z inflating: 
build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/cmake_clean.cmake 2023-01-11T21:20:06.0166673Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/build.make 2023-01-11T21:20:06.0167150Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/DependInfo.cmake 2023-01-11T21:20:06.0167625Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/flags.make 2023-01-11T21:20:06.0168077Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/progress.make 2023-01-11T21:20:06.0183655Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/op.cpp.o.d 2023-01-11T21:20:06.0273996Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/op.cpp.o 2023-01-11T21:20:06.0274561Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/ 2023-01-11T21:20:06.0275035Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/compiler_depend.ts 2023-01-11T21:20:06.0275541Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/compiler_depend.make 2023-01-11T21:20:06.0276032Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/depend.make 2023-01-11T21:20:06.0276507Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/link.txt 2023-01-11T21:20:06.0276982Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/cmake_clean.cmake 2023-01-11T21:20:06.0277465Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/build.make 2023-01-11T21:20:06.0277948Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/DependInfo.cmake 2023-01-11T21:20:06.0278430Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/flags.make 2023-01-11T21:20:06.0278900Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/progress.make 2023-01-11T21:20:06.0293974Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/test_custom_ops.cpp.o.d 2023-01-11T21:20:06.0360483Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/test_custom_ops.cpp.o 2023-01-11T21:20:06.0360993Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeDirectoryInformation.cmake 2023-01-11T21:20:06.0361469Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/TargetDirectories.txt 2023-01-11T21:20:06.0361923Z extracting: build/custom_test_artifacts/custom-op-build/CMakeFiles/progress.marks 2023-01-11T21:20:06.0362725Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/Makefile2 2023-01-11T21:20:06.0363313Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/Makefile.cmake 2023-01-11T21:20:06.0363749Z inflating: build/custom_test_artifacts/custom-op-build/detect_cuda_version.cc 2023-01-11T21:20:06.0365537Z inflating: build/custom_test_artifacts/custom-op-build/CMakeCache.txt 2023-01-11T21:20:06.0366192Z inflating: build/custom_test_artifacts/custom-op-build/Makefile 2023-01-11T21:20:06.0366768Z inflating: build/custom_test_artifacts/custom-op-build/cmake_install.cmake 2023-01-11T21:20:06.0439836Z inflating: build/custom_test_artifacts/custom-op-build/libcustom_ops.so 2023-01-11T21:20:06.0489800Z inflating: build/custom_test_artifacts/custom-op-build/test_custom_ops 
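The archive unpacked here carries the build job's outputs that this test job reuses: the torch wheel under dist/ plus prebuilt C++ test artifacts such as libcustom_ops.so and the test_custom_ops binary under build/custom_test_artifacts/. As a rough sketch, the extracted wheel could be installed into a matching Python 3.10 environment as shown below; whether and where this particular job performs that install is not visible in this excerpt, so treat these lines as an assumption.

# Same extraction step as above.
unzip -o artifacts.zip

# Hypothetical follow-up: install the extracted wheel and confirm it imports.
python3 -m pip install dist/torch-*-cp310-cp310-linux_x86_64.whl
python3 -c "import torch; print(torch.__version__)"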
2023-01-11T21:20:06.0490191Z creating: build/custom_test_artifacts/jit-hook-build/ 2023-01-11T21:20:06.0490572Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/ 2023-01-11T21:20:06.0495500Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeOutput.log 2023-01-11T21:20:06.0495933Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/ 2023-01-11T21:20:06.0496358Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CMakeSystem.cmake 2023-01-11T21:20:06.0496807Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdC/ 2023-01-11T21:20:06.0497249Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdC/tmp/ 2023-01-11T21:20:06.0498520Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdC/CMakeCCompilerId.c 2023-01-11T21:20:06.0499403Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdC/a.out 2023-01-11T21:20:06.0499846Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCXX/ 2023-01-11T21:20:06.0500295Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCXX/tmp/ 2023-01-11T21:20:06.0501977Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCXX/CMakeCXXCompilerId.cpp 2023-01-11T21:20:06.0503579Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCXX/a.out 2023-01-11T21:20:06.0504579Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CMakeDetermineCompilerABI_C.bin 2023-01-11T21:20:06.0505284Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CMakeCCompiler.cmake 2023-01-11T21:20:06.0506207Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CMakeDetermineCompilerABI_CXX.bin 2023-01-11T21:20:06.0506942Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CMakeCXXCompiler.cmake 2023-01-11T21:20:06.0507458Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/ 2023-01-11T21:20:06.0507891Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/ 2023-01-11T21:20:06.0549240Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cpp1.ii 2023-01-11T21:20:06.0549921Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.c 2023-01-11T21:20:06.0550521Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.gpu 2023-01-11T21:20:06.0551148Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.stub.c 2023-01-11T21:20:06.0551742Z extracting: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.module_id 2023-01-11T21:20:06.0552294Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.ptx 2023-01-11T21:20:06.0552864Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.sm_52.cubin 2023-01-11T21:20:06.0553393Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.fatbin 2023-01-11T21:20:06.0553921Z inflating: 
build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.fatbin.c 2023-01-11T21:20:06.0585515Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cpp4.ii 2023-01-11T21:20:06.0616582Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.cpp 2023-01-11T21:20:06.0617414Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.o 2023-01-11T21:20:06.0618119Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.sm_52.cubin 2023-01-11T21:20:06.0618647Z extracting: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.reg.c 2023-01-11T21:20:06.0619138Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.fatbin 2023-01-11T21:20:06.0619644Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.fatbin.c 2023-01-11T21:20:06.0620146Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.o 2023-01-11T21:20:06.0621291Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/CMakeCUDACompilerId.cu 2023-01-11T21:20:06.0678567Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/a.out 2023-01-11T21:20:06.0735490Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CMakeDetermineCompilerABI_CUDA.bin 2023-01-11T21:20:06.0736006Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CMakeCUDACompiler.cmake 2023-01-11T21:20:06.0736555Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeTmp/ 2023-01-11T21:20:06.0736981Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeError.log 2023-01-11T21:20:06.0737425Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/cmake.check_cache 2023-01-11T21:20:06.0737867Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/ 2023-01-11T21:20:06.0738343Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/compiler_depend.ts 2023-01-11T21:20:06.0738827Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/compiler_depend.make 2023-01-11T21:20:06.0739302Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/depend.make 2023-01-11T21:20:06.0739779Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/link.txt 2023-01-11T21:20:06.0740254Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/cmake_clean.cmake 2023-01-11T21:20:06.0740838Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/build.make 2023-01-11T21:20:06.0741318Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/DependInfo.cmake 2023-01-11T21:20:06.0741779Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/flags.make 2023-01-11T21:20:06.0742255Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/progress.make 2023-01-11T21:20:06.0757381Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/test_jit_hooks.cpp.o.d 2023-01-11T21:20:06.0808704Z inflating: 
build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/test_jit_hooks.cpp.o 2023-01-11T21:20:06.0809249Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeDirectoryInformation.cmake 2023-01-11T21:20:06.0809730Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/TargetDirectories.txt 2023-01-11T21:20:06.0810183Z extracting: build/custom_test_artifacts/jit-hook-build/CMakeFiles/progress.marks 2023-01-11T21:20:06.0810614Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/Makefile2 2023-01-11T21:20:06.0811173Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/Makefile.cmake 2023-01-11T21:20:06.0811703Z inflating: build/custom_test_artifacts/jit-hook-build/detect_cuda_version.cc 2023-01-11T21:20:06.0813576Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeCache.txt 2023-01-11T21:20:06.0814124Z inflating: build/custom_test_artifacts/jit-hook-build/Makefile 2023-01-11T21:20:06.0814843Z inflating: build/custom_test_artifacts/jit-hook-build/cmake_install.cmake 2023-01-11T21:20:06.0854072Z inflating: build/custom_test_artifacts/jit-hook-build/test_jit_hooks 2023-01-11T21:20:06.0854518Z creating: build/custom_test_artifacts/custom-backend-build/ 2023-01-11T21:20:06.0854961Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/ 2023-01-11T21:20:06.0860366Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeOutput.log 2023-01-11T21:20:06.0860801Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/ 2023-01-11T21:20:06.0861232Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CMakeSystem.cmake 2023-01-11T21:20:06.0861677Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdC/ 2023-01-11T21:20:06.0862114Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdC/tmp/ 2023-01-11T21:20:06.0863488Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdC/CMakeCCompilerId.c 2023-01-11T21:20:06.0864400Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdC/a.out 2023-01-11T21:20:06.0864959Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCXX/ 2023-01-11T21:20:06.0865406Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCXX/tmp/ 2023-01-11T21:20:06.0866957Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCXX/CMakeCXXCompilerId.cpp 2023-01-11T21:20:06.0867851Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCXX/a.out 2023-01-11T21:20:06.0868936Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CMakeDetermineCompilerABI_C.bin 2023-01-11T21:20:06.0869607Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CMakeCCompiler.cmake 2023-01-11T21:20:06.0870634Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CMakeDetermineCompilerABI_CXX.bin 2023-01-11T21:20:06.0871382Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CMakeCXXCompiler.cmake 2023-01-11T21:20:06.0871885Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/ 2023-01-11T21:20:06.0872366Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/ 2023-01-11T21:20:06.0913546Z inflating: 
build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cpp1.ii 2023-01-11T21:20:06.0914155Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.c 2023-01-11T21:20:06.0914827Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.gpu 2023-01-11T21:20:06.0915440Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.stub.c 2023-01-11T21:20:06.0916026Z extracting: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.module_id 2023-01-11T21:20:06.0916614Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.ptx 2023-01-11T21:20:06.0917193Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.sm_52.cubin 2023-01-11T21:20:06.0917836Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.fatbin 2023-01-11T21:20:06.0918424Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.fatbin.c 2023-01-11T21:20:06.0949740Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cpp4.ii 2023-01-11T21:20:06.0981138Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.cpp 2023-01-11T21:20:06.0982031Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.o 2023-01-11T21:20:06.0983108Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.sm_52.cubin 2023-01-11T21:20:06.0983737Z extracting: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.reg.c 2023-01-11T21:20:06.0984381Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.fatbin 2023-01-11T21:20:06.0984980Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.fatbin.c 2023-01-11T21:20:06.0985472Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.o 2023-01-11T21:20:06.0986162Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/CMakeCUDACompilerId.cu 2023-01-11T21:20:06.1043307Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/a.out 2023-01-11T21:20:06.1100103Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CMakeDetermineCompilerABI_CUDA.bin 2023-01-11T21:20:06.1100772Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CMakeCUDACompiler.cmake 2023-01-11T21:20:06.1101317Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeTmp/ 2023-01-11T21:20:06.1101871Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeError.log 2023-01-11T21:20:06.1102520Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/cmake.check_cache 2023-01-11T21:20:06.1103123Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/ 
2023-01-11T21:20:06.1103691Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/compiler_depend.ts 2023-01-11T21:20:06.1104292Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/compiler_depend.make 2023-01-11T21:20:06.1104892Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/depend.make 2023-01-11T21:20:06.1105464Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/link.txt 2023-01-11T21:20:06.1106078Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/cmake_clean.cmake 2023-01-11T21:20:06.1106552Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/build.make 2023-01-11T21:20:06.1107030Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/DependInfo.cmake 2023-01-11T21:20:06.1107502Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/flags.make 2023-01-11T21:20:06.1107977Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/progress.make 2023-01-11T21:20:06.1109684Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/custom_backend.cpp.o.d 2023-01-11T21:20:06.1227769Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/custom_backend.cpp.o 2023-01-11T21:20:06.1228456Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/ 2023-01-11T21:20:06.1229007Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/compiler_depend.ts 2023-01-11T21:20:06.1229506Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/compiler_depend.make 2023-01-11T21:20:06.1230141Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/depend.make 2023-01-11T21:20:06.1230641Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/link.txt 2023-01-11T21:20:06.1231184Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/cmake_clean.cmake 2023-01-11T21:20:06.1231740Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/build.make 2023-01-11T21:20:06.1232302Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/DependInfo.cmake 2023-01-11T21:20:06.1232796Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/flags.make 2023-01-11T21:20:06.1233277Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/progress.make 2023-01-11T21:20:06.1248262Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/test_custom_backend.cpp.o.d 2023-01-11T21:20:06.1295305Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/test_custom_backend.cpp.o 2023-01-11T21:20:06.1296006Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeDirectoryInformation.cmake 2023-01-11T21:20:06.1296539Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/TargetDirectories.txt 2023-01-11T21:20:06.1296993Z extracting: build/custom_test_artifacts/custom-backend-build/CMakeFiles/progress.marks 
2023-01-11T21:20:06.1297448Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/Makefile2 2023-01-11T21:20:06.1297993Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/Makefile.cmake 2023-01-11T21:20:06.1298467Z inflating: build/custom_test_artifacts/custom-backend-build/detect_cuda_version.cc 2023-01-11T21:20:06.1300313Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeCache.txt 2023-01-11T21:20:06.1301086Z inflating: build/custom_test_artifacts/custom-backend-build/Makefile 2023-01-11T21:20:06.1301528Z inflating: build/custom_test_artifacts/custom-backend-build/cmake_install.cmake 2023-01-11T21:20:06.1396062Z inflating: build/custom_test_artifacts/custom-backend-build/libcustom_backend.so 2023-01-11T21:20:06.1432619Z inflating: build/custom_test_artifacts/custom-backend-build/test_custom_backend 2023-01-11T21:20:06.1432912Z creating: build/lib/ 2023-01-11T21:20:06.1433504Z inflating: build/lib/libclog.a 2023-01-11T21:20:06.1487404Z inflating: build/lib/libgtest.a 2023-01-11T21:20:06.1495252Z inflating: build/lib/libpthreadpool.a 2023-01-11T21:20:06.1570423Z inflating: build/lib/libbenchmark.a 2023-01-11T21:20:06.1577422Z inflating: build/lib/libittnotify.a 2023-01-11T21:20:06.1659820Z inflating: build/lib/libprotobuf-lite.a 2023-01-11T21:20:06.1684675Z inflating: build/lib/libtensorpipe_uv.a 2023-01-11T21:20:06.1744569Z inflating: build/lib/libasmjit.a 2023-01-11T21:20:06.2160735Z inflating: build/lib/libprotobuf.a 2023-01-11T21:20:06.2270187Z inflating: build/lib/libgloo.a 2023-01-11T21:20:06.2295222Z inflating: build/lib/libfmt.a 2023-01-11T21:20:06.2295745Z inflating: build/lib/libfoxi_loader.a 2023-01-11T21:20:06.2297235Z inflating: build/lib/libcaffe2_nvrtc.so 2023-01-11T21:20:06.2361828Z inflating: build/lib/libc10.so 2023-01-11T21:20:06.2362633Z inflating: build/lib/libtorch_global_deps.so 2023-01-11T21:20:06.2807438Z inflating: build/lib/libprotoc.a 2023-01-11T21:20:06.2814786Z inflating: build/lib/libcpuinfo.a 2023-01-11T21:20:06.2816499Z inflating: build/lib/libnnpack_reference_layers.a 2023-01-11T21:20:06.2823359Z inflating: build/lib/libcpuinfo_internals.a 2023-01-11T21:20:06.2836990Z inflating: build/lib/libgmock.a 2023-01-11T21:20:06.2837449Z inflating: build/lib/libgtest_main.a 2023-01-11T21:20:06.2838243Z inflating: build/lib/libbenchmark_main.a 2023-01-11T21:20:06.3346106Z inflating: build/lib/libtensorpipe.a 2023-01-11T21:20:07.0891580Z inflating: build/lib/libdnnl.a 2023-01-11T21:20:07.1001743Z inflating: build/lib/libXNNPACK.a 2023-01-11T21:20:07.1044269Z inflating: build/lib/libc10_cuda.so 2023-01-11T21:20:07.1056479Z inflating: build/lib/libqnnpack.a 2023-01-11T21:20:07.1056935Z inflating: build/lib/libgmock_main.a 2023-01-11T21:20:07.2247502Z inflating: build/lib/libfbgemm.a 2023-01-11T21:20:07.2265133Z inflating: build/lib/libpytorch_qnnpack.a 2023-01-11T21:20:07.3159561Z inflating: build/lib/libdnnl_graph.a 2023-01-11T21:20:07.3561846Z inflating: build/lib/libkineto.a 2023-01-11T21:20:07.3785939Z inflating: build/lib/libtensorpipe_cuda.a 2023-01-11T21:20:07.3820839Z inflating: build/lib/libcaffe2_protos.a 2023-01-11T21:20:07.3838095Z inflating: build/lib/libnnpack.a 2023-01-11T21:20:07.3875573Z inflating: build/lib/libonnx_proto.a 2023-01-11T21:20:07.4402145Z inflating: build/lib/libonnx.a 2023-01-11T21:20:07.4732478Z inflating: build/lib/libgloo_cuda.a 2023-01-11T21:20:09.3233729Z inflating: build/lib/libtorch_cpu.so 2023-01-11T21:20:09.3241477Z inflating: build/lib/libunbox_lib.a 
2023-01-11T21:20:10.9832305Z inflating: build/lib/libtorch_cuda.so 2023-01-11T21:20:10.9832679Z inflating: build/lib/libtorch.so 2023-01-11T21:20:10.9834559Z inflating: build/lib/libc10d_cuda_test.so 2023-01-11T21:20:11.7566942Z inflating: build/lib/libtorch_cuda_linalg.so 2023-01-11T21:20:11.7614185Z inflating: build/lib/libtorchbind_test.so 2023-01-11T21:20:11.7633183Z inflating: build/lib/libjitbackend_test.so 2023-01-11T21:20:11.7657264Z inflating: build/lib/libbackend_with_compiler.so 2023-01-11T21:20:11.7660990Z inflating: build/lib/libshm.so 2023-01-11T21:20:11.9104729Z inflating: build/lib/libtorch_python.so 2023-01-11T21:20:11.9135513Z inflating: build/lib/libnnapi_backend.so 2023-01-11T21:20:11.9135782Z creating: build/bin/ 2023-01-11T21:20:11.9178248Z inflating: build/bin/c10_CompileTimeFunctionPointer_test 2023-01-11T21:20:11.9223335Z inflating: build/bin/c10_DeviceGuard_test 2023-01-11T21:20:11.9267250Z inflating: build/bin/c10_Device_test 2023-01-11T21:20:11.9317812Z inflating: build/bin/c10_DispatchKeySet_test 2023-01-11T21:20:11.9359420Z inflating: build/bin/c10_StreamGuard_test 2023-01-11T21:20:11.9402153Z inflating: build/bin/c10_SymInt_test 2023-01-11T21:20:11.9450827Z inflating: build/bin/c10_InlineDeviceGuard_test 2023-01-11T21:20:11.9499205Z inflating: build/bin/c10_InlineStreamGuard_test 2023-01-11T21:20:11.9548374Z inflating: build/bin/c10_SizesAndStrides_test 2023-01-11T21:20:11.9589961Z inflating: build/bin/c10_Array_test 2023-01-11T21:20:11.9635864Z inflating: build/bin/c10_Bitset_test 2023-01-11T21:20:11.9680336Z inflating: build/bin/c10_C++17_test 2023-01-11T21:20:11.9722119Z inflating: build/bin/c10_ConstexprCrc_test 2023-01-11T21:20:11.9764463Z inflating: build/bin/c10_DeadlockDetection_test 2023-01-11T21:20:11.9807253Z inflating: build/bin/c10_Half_test 2023-01-11T21:20:11.9856274Z inflating: build/bin/c10_LeftRight_test 2023-01-11T21:20:11.9910227Z inflating: build/bin/c10_Metaprogramming_test 2023-01-11T21:20:11.9953603Z inflating: build/bin/c10_Synchronized_test 2023-01-11T21:20:12.0077762Z inflating: build/bin/c10_SmallVectorTest 2023-01-11T21:20:12.0126823Z inflating: build/bin/c10_ThreadLocal_test 2023-01-11T21:20:12.0172630Z inflating: build/bin/c10_TypeIndex_test 2023-01-11T21:20:12.0216422Z inflating: build/bin/c10_TypeList_test 2023-01-11T21:20:12.0257930Z inflating: build/bin/c10_TypeTraits_test 2023-01-11T21:20:12.0303346Z inflating: build/bin/c10_accumulate_test 2023-01-11T21:20:12.0351941Z inflating: build/bin/c10_bfloat16_test 2023-01-11T21:20:12.0399872Z inflating: build/bin/c10_complex_math_test 2023-01-11T21:20:12.0447516Z inflating: build/bin/c10_complex_test 2023-01-11T21:20:12.0540404Z inflating: build/bin/c10_either_test 2023-01-11T21:20:12.0586239Z inflating: build/bin/c10_exception_test 2023-01-11T21:20:12.0629366Z inflating: build/bin/c10_flags_test 2023-01-11T21:20:12.0771799Z inflating: build/bin/c10_intrusive_ptr_test 2023-01-11T21:20:12.0815377Z inflating: build/bin/c10_irange_test 2023-01-11T21:20:12.0864757Z inflating: build/bin/c10_logging_test 2023-01-11T21:20:12.0928243Z inflating: build/bin/c10_optional_test 2023-01-11T21:20:12.0981589Z inflating: build/bin/c10_ordered_preserving_dict_test 2023-01-11T21:20:12.1029514Z inflating: build/bin/c10_registry_test 2023-01-11T21:20:12.1079441Z inflating: build/bin/c10_string_view_test 2023-01-11T21:20:12.1124296Z inflating: build/bin/c10_tempfile_test 2023-01-11T21:20:12.1172697Z inflating: build/bin/c10_typeid_test 2023-01-11T21:20:12.1220497Z inflating: 
build/bin/c10_intrusive_ptr_benchmark 2023-01-11T21:20:12.1630043Z inflating: build/bin/protoc-3.13.0.0 2023-01-11T21:20:12.2038873Z inflating: build/bin/protoc 2023-01-11T21:20:12.2085164Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_catches_stream 2023-01-11T21:20:12.2131227Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_1_var_test 2023-01-11T21:20:12.2176995Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_catches_thread_and_block_and_device 2023-01-11T21:20:12.2222242Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_from_2_processes 2023-01-11T21:20:12.2268470Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_multiple_writes_from_blocks_and_threads 2023-01-11T21:20:12.2314625Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_multiple_writes_from_multiple_blocks 2023-01-11T21:20:12.2356502Z inflating: build/bin/c10_cuda_CUDATest 2023-01-11T21:20:12.2402476Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_multiple_writes_from_same_block 2023-01-11T21:20:12.2653280Z inflating: build/bin/vec_test_all_types_DEFAULT 2023-01-11T21:20:12.2932580Z inflating: build/bin/vec_test_all_types_AVX2 2023-01-11T21:20:12.2979451Z inflating: build/bin/HashStoreTest 2023-01-11T21:20:12.3031272Z inflating: build/bin/TCPStoreTest 2023-01-11T21:20:12.3077828Z inflating: build/bin/FileStoreTest 2023-01-11T21:20:12.3090305Z inflating: build/bin/ProcessGroupMPITest 2023-01-11T21:20:12.3139477Z inflating: build/bin/test_edge_op_registration 2023-01-11T21:20:12.3141872Z inflating: build/bin/example_allreduce 2023-01-11T21:20:12.3187978Z inflating: build/bin/Dimname_test 2023-01-11T21:20:12.3250507Z inflating: build/bin/Dict_test 2023-01-11T21:20:12.3305758Z inflating: build/bin/MaybeOwned_test 2023-01-11T21:20:12.3355051Z inflating: build/bin/NamedTensor_test 2023-01-11T21:20:12.3406249Z inflating: build/bin/apply_utils_test 2023-01-11T21:20:12.3458678Z inflating: build/bin/basic 2023-01-11T21:20:12.3509441Z inflating: build/bin/atest 2023-01-11T21:20:12.3556245Z inflating: build/bin/broadcast_test 2023-01-11T21:20:12.3606525Z inflating: build/bin/cpu_generator_test 2023-01-11T21:20:12.3651778Z inflating: build/bin/cpu_profiling_allocator_test 2023-01-11T21:20:12.3727299Z inflating: build/bin/cpu_rng_test 2023-01-11T21:20:12.3770422Z inflating: build/bin/dispatch_key_set_test 2023-01-11T21:20:12.3813738Z inflating: build/bin/dlconvertor_test 2023-01-11T21:20:12.3864305Z inflating: build/bin/extension_backend_test 2023-01-11T21:20:12.3912345Z inflating: build/bin/half_test 2023-01-11T21:20:12.3954832Z inflating: build/bin/lazy_tensor_test 2023-01-11T21:20:12.4001934Z inflating: build/bin/math_kernel_test 2023-01-11T21:20:12.4083487Z inflating: build/bin/ivalue_test 2023-01-11T21:20:12.4129994Z inflating: build/bin/memory_format_test 2023-01-11T21:20:12.4176616Z inflating: build/bin/memory_overlapping_test 2023-01-11T21:20:12.4220566Z inflating: build/bin/operator_name_test 2023-01-11T21:20:12.4266084Z inflating: build/bin/mobile_memory_cleanup 2023-01-11T21:20:12.4314299Z inflating: build/bin/native_test 2023-01-11T21:20:12.4357666Z inflating: build/bin/operators_test 2023-01-11T21:20:12.4403497Z inflating: build/bin/packedtensoraccessor_test 2023-01-11T21:20:12.4453291Z inflating: build/bin/quantized_test 2023-01-11T21:20:12.4510179Z inflating: build/bin/pow_test 2023-01-11T21:20:12.4552618Z inflating: build/bin/reduce_ops_test 2023-01-11T21:20:12.4596835Z inflating: build/bin/reportMemoryUsage_test 2023-01-11T21:20:12.4645743Z inflating: build/bin/scalar_tensor_test 2023-01-11T21:20:12.4695174Z 
inflating: build/bin/scalar_test 2023-01-11T21:20:12.4740667Z inflating: build/bin/stride_properties_test 2023-01-11T21:20:12.4808049Z inflating: build/bin/tensor_iterator_test 2023-01-11T21:20:12.4855564Z inflating: build/bin/type_ptr_test 2023-01-11T21:20:12.4858040Z inflating: build/bin/thread_init_test 2023-01-11T21:20:12.4906707Z inflating: build/bin/test_parallel 2023-01-11T21:20:12.4949574Z inflating: build/bin/variant_test 2023-01-11T21:20:12.4995349Z inflating: build/bin/undefined_tensor_test 2023-01-11T21:20:12.5047821Z inflating: build/bin/type_test 2023-01-11T21:20:12.5048731Z inflating: build/bin/verify_api_visibility 2023-01-11T21:20:12.5108605Z inflating: build/bin/legacy_vmap_test 2023-01-11T21:20:12.5152998Z inflating: build/bin/weakref_test 2023-01-11T21:20:12.5197148Z inflating: build/bin/wrapdim_test 2023-01-11T21:20:12.5290028Z inflating: build/bin/List_test 2023-01-11T21:20:12.5341742Z inflating: build/bin/IListRef_test 2023-01-11T21:20:12.5384158Z inflating: build/bin/xla_tensor_test 2023-01-11T21:20:12.5488446Z inflating: build/bin/kernel_function_legacy_test 2023-01-11T21:20:12.5544568Z inflating: build/bin/KernelFunction_test 2023-01-11T21:20:12.5626932Z inflating: build/bin/kernel_function_test 2023-01-11T21:20:12.5736752Z inflating: build/bin/kernel_lambda_legacy_test 2023-01-11T21:20:12.5788806Z inflating: build/bin/kernel_stackbased_test 2023-01-11T21:20:12.5877378Z inflating: build/bin/kernel_lambda_test 2023-01-11T21:20:12.5921631Z inflating: build/bin/CppSignature_test 2023-01-11T21:20:12.6003711Z inflating: build/bin/make_boxed_from_unboxed_functor_test 2023-01-11T21:20:12.6045448Z inflating: build/bin/op_allowlist_test 2023-01-11T21:20:12.6092165Z inflating: build/bin/inline_container_test 2023-01-11T21:20:12.6140968Z inflating: build/bin/backend_fallback_test 2023-01-11T21:20:12.6385499Z inflating: build/bin/op_registration_test 2023-01-11T21:20:12.6430949Z inflating: build/bin/cuda_apply_test 2023-01-11T21:20:12.6492247Z inflating: build/bin/cuda_complex_math_test 2023-01-11T21:20:12.6534885Z inflating: build/bin/cuda_device_test 2023-01-11T21:20:12.6581993Z inflating: build/bin/cuda_caching_host_allocator_test 2023-01-11T21:20:12.6633889Z inflating: build/bin/cuda_atomic_ops_test 2023-01-11T21:20:12.6677264Z inflating: build/bin/cuda_dlconvertor_test 2023-01-11T21:20:12.6727665Z inflating: build/bin/cuda_complex_test 2023-01-11T21:20:12.6778952Z inflating: build/bin/cuda_cub_test 2023-01-11T21:20:12.6822781Z inflating: build/bin/cuda_integer_divider_test 2023-01-11T21:20:12.6880469Z inflating: build/bin/cuda_distributions_test 2023-01-11T21:20:12.6926496Z inflating: build/bin/cuda_reportMemoryUsage_test 2023-01-11T21:20:12.6979058Z inflating: build/bin/cuda_stream_test 2023-01-11T21:20:12.7030110Z inflating: build/bin/cuda_generator_test 2023-01-11T21:20:12.7072294Z inflating: build/bin/cuda_optional_test 2023-01-11T21:20:12.7115126Z inflating: build/bin/cuda_half_test 2023-01-11T21:20:12.7159673Z inflating: build/bin/cuda_packedtensoraccessor_test 2023-01-11T21:20:12.7173481Z inflating: build/bin/tutorial_tensorexpr 2023-01-11T21:20:12.7230075Z inflating: build/bin/ProcessGroupGlooTest 2023-01-11T21:20:12.7272750Z inflating: build/bin/cuda_cudnn_test 2023-01-11T21:20:12.7319053Z inflating: build/bin/ProcessGroupUCCTest 2023-01-11T21:20:12.7365868Z inflating: build/bin/test_dist_autograd 2023-01-11T21:20:12.7416459Z inflating: build/bin/ProcessGroupGlooAsyncTest 2023-01-11T21:20:12.7466881Z inflating: build/bin/ProcessGroupNCCLErrorsTest 
2023-01-11T21:20:12.7520237Z inflating: build/bin/ProcessGroupNCCLTest 2023-01-11T21:20:12.7580460Z inflating: build/bin/test_cpp_rpc 2023-01-11T21:20:12.7582600Z inflating: build/bin/parallel_benchmark 2023-01-11T21:20:12.7641854Z inflating: build/bin/test_mobile_nnc 2023-01-11T21:20:12.7650743Z inflating: build/bin/aot_model_compiler_test 2023-01-11T21:20:12.7696383Z inflating: build/bin/cuda_vectorized_test 2023-01-11T21:20:12.7700891Z inflating: build/bin/torch_shm_manager 2023-01-11T21:20:12.8003432Z inflating: build/bin/test_lazy 2023-01-11T21:20:12.8713286Z inflating: build/bin/test_tensorexpr 2023-01-11T21:20:12.9742321Z inflating: build/bin/test_api 2023-01-11T21:20:13.0682186Z inflating: build/bin/test_jit 2023-01-11T21:20:13.0684166Z inflating: .pytorch-test-times.json 2023-01-11T21:20:13.0706091Z ##[group]Run df -H 2023-01-11T21:20:13.0706359Z df -H 2023-01-11T21:20:13.0717389Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-01-11T21:20:13.0717594Z env: 2023-01-11T21:20:13.0717772Z GIT_DEFAULT_BRANCH: master 2023-01-11T21:20:13.0717960Z ##[endgroup] 2023-01-11T21:20:13.0784116Z Filesystem Size Used Avail Use% Mounted on 2023-01-11T21:20:13.0784493Z devtmpfs 8.2G 0 8.2G 0% /dev 2023-01-11T21:20:13.0784805Z tmpfs 8.2G 12M 8.2G 1% /dev/shm 2023-01-11T21:20:13.0785156Z tmpfs 8.2G 431k 8.2G 1% /run 2023-01-11T21:20:13.0785436Z tmpfs 8.2G 0 8.2G 0% /sys/fs/cgroup 2023-01-11T21:20:13.0785937Z /dev/nvme0n1p1 162G 35G 127G 22% / 2023-01-11T21:20:13.0818512Z ##[group]Run .github/scripts/parse_ref.py 2023-01-11T21:20:13.0818748Z .github/scripts/parse_ref.py 2023-01-11T21:20:13.0830140Z shell: /usr/bin/bash -e {0} 2023-01-11T21:20:13.0830334Z env: 2023-01-11T21:20:13.0830499Z GIT_DEFAULT_BRANCH: master 2023-01-11T21:20:13.0830699Z ##[endgroup] 2023-01-11T21:20:13.1129614Z ##[group]Run set -x 2023-01-11T21:20:13.1129888Z set -x 2023-01-11T21:20:13.1130056Z  2023-01-11T21:20:13.1130250Z if [[ $TEST_CONFIG == 'multigpu' ]]; then 2023-01-11T21:20:13.1130494Z  TEST_COMMAND=.jenkins/pytorch/multigpu-test.sh 2023-01-11T21:20:13.1130757Z elif [[ $BUILD_ENVIRONMENT == *onnx* ]]; then 2023-01-11T21:20:13.1130994Z  TEST_COMMAND=.jenkins/onnx/test.sh 2023-01-11T21:20:13.1131178Z else 2023-01-11T21:20:13.1131378Z  TEST_COMMAND=.jenkins/pytorch/test.sh 2023-01-11T21:20:13.1131574Z fi 2023-01-11T21:20:13.1131716Z  2023-01-11T21:20:13.1131948Z COMMIT_MESSAGES=$(git cherry -v "origin/${GIT_DEFAULT_BRANCH:-master}") 2023-01-11T21:20:13.1132187Z  2023-01-11T21:20:13.1132404Z # sanitize the input commit message and PR body here: 2023-01-11T21:20:13.1132606Z # 2023-01-11T21:20:13.1132901Z # trim all new lines from commit messages + PR_BODY to avoid issues with batch environment 2023-01-11T21:20:13.1133282Z # variable copying. 
see https://github.com/pytorch/pytorch/pull/80043#issuecomment-1167796028 2023-01-11T21:20:13.1133583Z COMMIT_MESSAGES="${COMMIT_MESSAGES//[$'\n\r']}" 2023-01-11T21:20:13.1133814Z PR_BODY="${PR_BODY//[$'\n\r']}" 2023-01-11T21:20:13.1134118Z  2023-01-11T21:20:13.1134367Z # then trim all special characters like single and double quotes to avoid unescaped inputs to 2023-01-11T21:20:13.1134652Z # wreak havoc internally 2023-01-11T21:20:13.1134891Z export COMMIT_MESSAGES="${COMMIT_MESSAGES//[\'\"]}" 2023-01-11T21:20:13.1135134Z export PR_BODY="${PR_BODY//[\'\"]}" 2023-01-11T21:20:13.1135385Z  2023-01-11T21:20:13.1135616Z # detached container should get cleaned up by teardown_ec2_linux 2023-01-11T21:20:13.1135913Z # TODO: Stop building test binaries as part of the build phase 2023-01-11T21:20:13.1136178Z # Used for GPU_FLAG since that doesn't play nice 2023-01-11T21:20:13.1136420Z # shellcheck disable=SC2086,SC2090 2023-01-11T21:20:13.1136645Z container_name=$(docker run \ 2023-01-11T21:20:13.1136835Z  ${GPU_FLAG:-} \ 2023-01-11T21:20:13.1137037Z  -e BUILD_ENVIRONMENT \ 2023-01-11T21:20:13.1137238Z  -e PR_NUMBER \ 2023-01-11T21:20:13.1137433Z  -e GITHUB_ACTIONS \ 2023-01-11T21:20:13.1137612Z  -e BASE_SHA \ 2023-01-11T21:20:13.1137796Z  -e BRANCH \ 2023-01-11T21:20:13.1137976Z  -e SHA1 \ 2023-01-11T21:20:13.1138153Z  -e AWS_DEFAULT_REGION \ 2023-01-11T21:20:13.1138351Z  -e IN_WHEEL_TEST \ 2023-01-11T21:20:13.1138543Z  -e SHARD_NUMBER \ 2023-01-11T21:20:13.1138720Z  -e TEST_CONFIG \ 2023-01-11T21:20:13.1138911Z  -e NUM_TEST_SHARDS \ 2023-01-11T21:20:13.1139099Z  -e PR_BODY \ 2023-01-11T21:20:13.1139362Z  -e COMMIT_MESSAGES \ 2023-01-11T21:20:13.1139573Z  -e CONTINUE_THROUGH_ERROR \ 2023-01-11T21:20:13.1139788Z  -e PYTORCH_RETRY_TEST_CASES \ 2023-01-11T21:20:13.1140184Z  -e PYTORCH_OVERRIDE_FLAKY_SIGNAL \ 2023-01-11T21:20:13.1140391Z  -e PR_LABELS \ 2023-01-11T21:20:13.1140592Z  -e MAX_JOBS="$(nproc --ignore=2)" \ 2023-01-11T21:20:13.1140796Z  -e SCCACHE_BUCKET \ 2023-01-11T21:20:13.1140985Z  -e SCCACHE_S3_KEY_PREFIX \ 2023-01-11T21:20:13.1141179Z  -e XLA_CUDA \ 2023-01-11T21:20:13.1141387Z  -e XLA_CLANG_CACHE_S3_BUCKET_NAME \ 2023-01-11T21:20:13.1141609Z  -e PYTORCH_TEST_CUDA_MEM_LEAK_CHECK \ 2023-01-11T21:20:13.1141851Z  -e PYTORCH_TEST_RERUN_DISABLED_TESTS \ 2023-01-11T21:20:13.1142108Z  --env-file="/tmp/github_env_${GITHUB_RUN_ID}" \ 2023-01-11T21:20:13.1142329Z  --ulimit stack=10485760:83886080 \ 2023-01-11T21:20:13.1142798Z  --security-opt seccomp=unconfined \ 2023-01-11T21:20:13.1143032Z  --cap-add=SYS_PTRACE \ 2023-01-11T21:20:13.1143215Z  --ipc=host \ 2023-01-11T21:20:13.1143506Z  --shm-size="${SHM_SIZE}" \ 2023-01-11T21:20:13.1143692Z  --tty \ 2023-01-11T21:20:13.1143861Z  --detach \ 2023-01-11T21:20:13.1144047Z  --name="${container_name}" \ 2023-01-11T21:20:13.1144248Z  --user jenkins \ 2023-01-11T21:20:13.1144484Z  -v "${GITHUB_WORKSPACE}:/var/lib/jenkins/workspace" \ 2023-01-11T21:20:13.1144725Z  -w /var/lib/jenkins/workspace \ 2023-01-11T21:20:13.1144932Z  "${DOCKER_IMAGE}" 2023-01-11T21:20:13.1145105Z ) 2023-01-11T21:20:13.1145311Z echo "DOCKER_CONTAINER_ID=${container_name}" >> "${GITHUB_ENV}" 2023-01-11T21:20:13.1145641Z docker exec -t "${container_name}" sh -c "pip install $(echo dist/*.whl)[opt-einsum] && ${TEST_COMMAND}" 2023-01-11T21:20:13.1156883Z shell: /usr/bin/bash -e {0} 2023-01-11T21:20:13.1157054Z env: 2023-01-11T21:20:13.1157229Z GIT_DEFAULT_BRANCH: master 2023-01-11T21:20:13.1157481Z BUILD_ENVIRONMENT: linux-bionic-cuda11.7-py3.10-gcc7 2023-01-11T21:20:13.1157708Z 
PR_NUMBER: 2023-01-11T21:20:13.1157972Z BRANCH: 2023-01-11T21:20:13.1158181Z SHA1: 8419ddda87c8a47eacc63b54bc7ec98c1f27c26e 2023-01-11T21:20:13.1158432Z BASE_SHA: 8419ddda87c8a47eacc63b54bc7ec98c1f27c26e 2023-01-11T21:20:13.1158646Z PYTORCH_RETRY_TEST_CASES: 1 2023-01-11T21:20:13.1158856Z PYTORCH_OVERRIDE_FLAKY_SIGNAL: 1 2023-01-11T21:20:13.1159065Z TEST_CONFIG: nogpu_NO_AVX2 2023-01-11T21:20:13.1159241Z SHARD_NUMBER: 1 2023-01-11T21:20:13.1159408Z NUM_TEST_SHARDS: 1 2023-01-11T21:20:13.1159571Z PR_BODY: 2023-01-11T21:20:13.1159741Z CONTINUE_THROUGH_ERROR: False 2023-01-11T21:20:13.1159981Z SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2 2023-01-11T21:20:13.1160217Z SCCACHE_S3_KEY_PREFIX: trunk 2023-01-11T21:20:13.1160386Z SHM_SIZE: 2g 2023-01-11T21:20:13.1160761Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.7-cudnn8-py3-gcc7:fd224c2e6c79d7fdec6408da598bf52bc5b201dd 2023-01-11T21:20:13.1161226Z XLA_CUDA: 2023-01-11T21:20:13.1161488Z XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla 2023-01-11T21:20:13.1161762Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK: 0 2023-01-11T21:20:13.1161985Z PYTORCH_TEST_RERUN_DISABLED_TESTS: 0 2023-01-11T21:20:13.1162185Z ##[endgroup] 2023-01-11T21:20:13.1187026Z + [[ nogpu_NO_AVX2 == \m\u\l\t\i\g\p\u ]] 2023-01-11T21:20:13.1187492Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 == *onnx* ]] 2023-01-11T21:20:13.1187748Z + TEST_COMMAND=.jenkins/pytorch/test.sh 2023-01-11T21:20:13.1190101Z ++ git cherry -v origin/master 2023-01-11T21:20:13.7508222Z + COMMIT_MESSAGES='+ 52a16ce42647731c772e14e7175afa40fda07b3d make torchgen rename also Number arguments into '\''input'\'' 2023-01-11T21:20:13.7508641Z + 87db01a53ecb702267ec36787654e418a52f8e93 fix torch.where signature mismatch 2023-01-11T21:20:13.7509325Z + 8419ddda87c8a47eacc63b54bc7ec98c1f27c26e '\''other'\'' instead of '\''output'\'' in documentation' 2023-01-11T21:20:13.7510188Z + COMMIT_MESSAGES='+ 52a16ce42647731c772e14e7175afa40fda07b3d make torchgen rename also Number arguments into '\''input'\''+ 87db01a53ecb702267ec36787654e418a52f8e93 fix torch.where signature mismatch+ 8419ddda87c8a47eacc63b54bc7ec98c1f27c26e '\''other'\'' instead of '\''output'\'' in documentation' 2023-01-11T21:20:13.7510628Z + PR_BODY= 2023-01-11T21:20:13.7511272Z + export 'COMMIT_MESSAGES=+ 52a16ce42647731c772e14e7175afa40fda07b3d make torchgen rename also Number arguments into input+ 87db01a53ecb702267ec36787654e418a52f8e93 fix torch.where signature mismatch+ 8419ddda87c8a47eacc63b54bc7ec98c1f27c26e other instead of output in documentation' 2023-01-11T21:20:13.7512236Z + COMMIT_MESSAGES='+ 52a16ce42647731c772e14e7175afa40fda07b3d make torchgen rename also Number arguments into input+ 87db01a53ecb702267ec36787654e418a52f8e93 fix torch.where signature mismatch+ 8419ddda87c8a47eacc63b54bc7ec98c1f27c26e other instead of output in documentation' 2023-01-11T21:20:13.7512657Z + export PR_BODY= 2023-01-11T21:20:13.7512818Z + PR_BODY= 2023-01-11T21:20:13.7518591Z +++ nproc --ignore=2 2023-01-11T21:20:13.7551522Z ++ docker run -e BUILD_ENVIRONMENT -e PR_NUMBER -e GITHUB_ACTIONS -e BASE_SHA -e BRANCH -e SHA1 -e AWS_DEFAULT_REGION -e IN_WHEEL_TEST -e SHARD_NUMBER -e TEST_CONFIG -e NUM_TEST_SHARDS -e PR_BODY -e COMMIT_MESSAGES -e CONTINUE_THROUGH_ERROR -e PYTORCH_RETRY_TEST_CASES -e PYTORCH_OVERRIDE_FLAKY_SIGNAL -e PR_LABELS -e MAX_JOBS=6 -e SCCACHE_BUCKET -e SCCACHE_S3_KEY_PREFIX -e XLA_CUDA -e XLA_CLANG_CACHE_S3_BUCKET_NAME -e PYTORCH_TEST_CUDA_MEM_LEAK_CHECK -e 
PYTORCH_TEST_RERUN_DISABLED_TESTS --env-file=/tmp/github_env_3896346758 --ulimit stack=10485760:83886080 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --ipc=host --shm-size=2g --tty --detach --name= --user jenkins -v /home/ec2-user/actions-runner/_work/pytorch/pytorch:/var/lib/jenkins/workspace -w /var/lib/jenkins/workspace 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.7-cudnn8-py3-gcc7:fd224c2e6c79d7fdec6408da598bf52bc5b201dd 2023-01-11T21:20:24.0374992Z + container_name=a71a466242c1f37c1e0011471b01ee39ce7e013772770194af88dbad63e7c256 2023-01-11T21:20:24.0375544Z + echo DOCKER_CONTAINER_ID=a71a466242c1f37c1e0011471b01ee39ce7e013772770194af88dbad63e7c256 2023-01-11T21:20:24.0378813Z ++ echo dist/torch-2.0.0a0+git8419ddd-cp310-cp310-linux_x86_64.whl 2023-01-11T21:20:24.0379876Z + docker exec -t a71a466242c1f37c1e0011471b01ee39ce7e013772770194af88dbad63e7c256 sh -c 'pip install dist/torch-2.0.0a0+git8419ddd-cp310-cp310-linux_x86_64.whl[opt-einsum] && .jenkins/pytorch/test.sh' 2023-01-11T21:20:24.5825297Z Processing ./dist/torch-2.0.0a0+git8419ddd-cp310-cp310-linux_x86_64.whl 2023-01-11T21:20:25.2984093Z Requirement already satisfied: sympy in /opt/conda/lib/python3.10/site-packages (from torch==2.0.0a0+git8419ddd) (1.11.1) 2023-01-11T21:20:25.2987135Z Requirement already satisfied: networkx in /opt/conda/lib/python3.10/site-packages (from torch==2.0.0a0+git8419ddd) (2.6.3) 2023-01-11T21:20:25.2991066Z Requirement already satisfied: typing-extensions in /opt/conda/lib/python3.10/site-packages (from torch==2.0.0a0+git8419ddd) (4.4.0) 2023-01-11T21:20:25.3003161Z Requirement already satisfied: opt-einsum>=3.3 in /opt/conda/lib/python3.10/site-packages (from torch==2.0.0a0+git8419ddd) (3.3.0) 2023-01-11T21:20:25.3056258Z Requirement already satisfied: numpy>=1.7 in /opt/conda/lib/python3.10/site-packages (from opt-einsum>=3.3->torch==2.0.0a0+git8419ddd) (1.21.2) 2023-01-11T21:20:25.3198403Z Requirement already satisfied: mpmath>=0.19 in /opt/conda/lib/python3.10/site-packages (from sympy->torch==2.0.0a0+git8419ddd) (1.2.1) 2023-01-11T21:20:25.9243749Z Installing collected packages: torch 2023-01-11T21:20:32.8135000Z Successfully installed torch-2.0.0a0+git8419ddd 2023-01-11T21:20:32.9481269Z + echo 'Environment variables:' 2023-01-11T21:20:32.9481709Z Environment variables: 2023-01-11T21:20:32.9481884Z + env 2023-01-11T21:20:32.9499590Z SHARD_NUMBER=1 2023-01-11T21:20:32.9499997Z NV_LIBCUBLAS_DEV_VERSION=11.10.1.25-1 2023-01-11T21:20:32.9500352Z NV_CUDA_COMPAT_PACKAGE=cuda-compat-11-7 2023-01-11T21:20:32.9500715Z LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 2023-01-11T21:20:32.9501086Z NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.13.4-1+cuda11.7 2023-01-11T21:20:32.9501315Z UCC_HOME=/usr 2023-01-11T21:20:32.9502677Z BUILD_ENVIRONMENT=linux-bionic-cuda11.7-py3.10-gcc7 2023-01-11T21:20:32.9503101Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=0 2023-01-11T21:20:32.9503635Z NV_LIBNPP_DEV_PACKAGE=libnpp-dev-11-7=11.7.3.21-1 2023-01-11T21:20:32.9503927Z INSTALLED_DB=yes 2023-01-11T21:20:32.9504151Z HOSTNAME=a71a466242c1 2023-01-11T21:20:32.9504340Z GITHUB_REF_NAME=ciflow/trunk/91627 2023-01-11T21:20:32.9504581Z GITHUB_API_URL=https://api.github.com 2023-01-11T21:20:32.9504885Z GITHUB_REPOSITORY_OWNER_ID=21003710 2023-01-11T21:20:32.9505081Z OPENSSL_DIR=/opt/openssl 2023-01-11T21:20:32.9509216Z UCC_COMMIT=1c7a7127186e7836f73aafbd7697bbc274a77eee 2023-01-11T21:20:32.9510836Z 
GITHUB_STEP_SUMMARY=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/step_summary_b048ddbb-aaa9-4b9f-9784-3beb8ea60621 2023-01-11T21:20:32.9511396Z CUDA_PATH=/usr/local/cuda 2023-01-11T21:20:32.9512058Z GITHUB_ACTION_PATH=/home/ec2-user/actions-runner/_work/pytorch/pytorch/./.github/actions/setup-linux 2023-01-11T21:20:32.9512518Z GITHUB_RUN_ATTEMPT=1 2023-01-11T21:20:32.9512834Z TEST_CONFIG=nogpu_NO_AVX2 2023-01-11T21:20:32.9513207Z NV_LIBNPP_VERSION=11.7.3.21-1 2023-01-11T21:20:32.9513658Z NV_NVPROF_DEV_PACKAGE=cuda-nvprof-11-7=11.7.50-1 2023-01-11T21:20:32.9514049Z GITHUB_REPOSITORY_OWNER=pytorch 2023-01-11T21:20:32.9514381Z GITHUB_ACTIONS=true 2023-01-11T21:20:32.9514688Z NVIDIA_VISIBLE_DEVICES=all 2023-01-11T21:20:32.9515057Z NV_NVPROF_VERSION=11.7.50-1 2023-01-11T21:20:32.9515421Z NV_LIBCUSPARSE_VERSION=11.7.3.50-1 2023-01-11T21:20:32.9515920Z GITHUB_WORKFLOW_REF=pytorch/pytorch/.github/workflows/trunk.yml@refs/tags/ciflow/trunk/91627 2023-01-11T21:20:32.9516377Z NVIDIA_PRODUCT_NAME=CUDA 2023-01-11T21:20:32.9516677Z CI=true 2023-01-11T21:20:32.9516982Z PYTORCH_OVERRIDE_FLAKY_SIGNAL=1 2023-01-11T21:20:32.9517478Z NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-11-7=11.10.1.25-1 2023-01-11T21:20:32.9517819Z BRANCH= 2023-01-11T21:20:32.9518090Z GITHUB_HEAD_REF= 2023-01-11T21:20:32.9518454Z UCX_COMMIT=31e74cac7bee0ef66bef2af72e7d86d9c282e5ab 2023-01-11T21:20:32.9518947Z GITHUB_ACTOR=pytorch-bot[bot] 2023-01-11T21:20:32.9519357Z CMAKE_CUDA_COMPILER_LAUNCHER=/opt/cache/bin/sccache 2023-01-11T21:20:32.9519720Z GITHUB_ACTION_REF= 2023-01-11T21:20:32.9520065Z NCCL_VERSION=2.13.4-1 2023-01-11T21:20:32.9520392Z GITHUB_ACTION=__self 2023-01-11T21:20:32.9543970Z GITHUB_REF_PROTECTED=false 2023-01-11T21:20:32.9544620Z XLA_CLANG_CACHE_S3_BUCKET_NAME=ossci-compiler-clang-cache-circleci-xla 2023-01-11T21:20:32.9545138Z PYTORCH_TEST_RERUN_DISABLED_TESTS=0 2023-01-11T21:20:32.9546016Z *** 2023-01-11T21:20:32.9546335Z INSTALLED_VISION=yes 2023-01-11T21:20:32.9546663Z NVARCH=x86_64 2023-01-11T21:20:32.9546942Z NV_LIBCUSPARSE_DEV_VERSION=11.7.3.50-1 2023-01-11T21:20:32.9547132Z HOME=/var/lib/jenkins 2023-01-11T21:20:32.9547546Z GITHUB_STATE=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/save_state_b048ddbb-aaa9-4b9f-9784-3beb8ea60621 2023-01-11T21:20:32.9547856Z CARGO_NET_GIT_FETCH_WITH_CLI=true 2023-01-11T21:20:32.9548048Z NVIDIA_CUDA_END_OF_LIFE=1 2023-01-11T21:20:32.9548242Z GITHUB_ACTION_REPOSITORY= 2023-01-11T21:20:32.9548429Z GITHUB_REF_TYPE=tag 2023-01-11T21:20:32.9548636Z NV_LIBNCCL_PACKAGE_VERSION=2.13.4-1 2023-01-11T21:20:32.9548837Z GITHUB_RETENTION_DAYS=90 2023-01-11T21:20:32.9549122Z SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2 2023-01-11T21:20:32.9549426Z NV_LIBNCCL_PACKAGE=libnccl2=2.13.4-1+cuda11.7 2023-01-11T21:20:32.9549905Z GITHUB_ENV=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_env_b048ddbb-aaa9-4b9f-9784-3beb8ea60621 2023-01-11T21:20:32.9550365Z DEBIAN_FRONTEND=noninteractive 2023-01-11T21:20:32.9550627Z NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev 2023-01-11T21:20:32.9550838Z GITHUB_REF=refs/tags/ciflow/trunk/91627 2023-01-11T21:20:32.9551070Z NV_CUDA_LIB_VERSION=11.7.0-1 2023-01-11T21:20:32.9551308Z GITHUB_SHA=8419ddda87c8a47eacc63b54bc7ec98c1f27c26e 2023-01-11T21:20:32.9551522Z INSTALLED_PROTOBUF=yes 2023-01-11T21:20:32.9551717Z GITHUB_REPOSITORY_ID=65600975 2023-01-11T21:20:32.9551907Z GITHUB_RUN_ID=3896346758 2023-01-11T21:20:32.9552148Z NV_LIBNPP_PACKAGE=libnpp-11-7=11.7.3.21-1 2023-01-11T21:20:32.9552366Z 
NV_LIBNCCL_PACKAGE_NAME=libnccl2 2023-01-11T21:20:32.9552583Z LIBRARY_PATH=/usr/local/cuda/lib64/stubs 2023-01-11T21:20:32.9552794Z NV_NVTX_VERSION=11.7.50-1 2023-01-11T21:20:32.9552987Z CONTINUE_THROUGH_ERROR=False 2023-01-11T21:20:32.9553208Z GITHUB_SERVER_URL=https://github.com 2023-01-11T21:20:32.9553398Z MAX_JOBS=6 2023-01-11T21:20:32.9553574Z GITHUB_ACTOR_ID=54816060 2023-01-11T21:20:32.9553794Z NV_LIBCUBLAS_VERSION=11.10.1.25-1 2023-01-11T21:20:32.9554131Z NV_LIBCUBLAS_PACKAGE=libcublas-11-7=11.10.1.25-1 2023-01-11T21:20:32.9554506Z GITHUB_EVENT_PATH=/home/ec2-user/actions-runner/_work/_temp/_github_workflow/event.json 2023-01-11T21:20:32.9554762Z UCX_HOME=/usr 2023-01-11T21:20:32.9554951Z PYTORCH_RETRY_TEST_CASES=1 2023-01-11T21:20:32.9555181Z GITHUB_GRAPHQL_URL=https://api.github.com/graphql 2023-01-11T21:20:32.9555446Z BASE_SHA=8419ddda87c8a47eacc63b54bc7ec98c1f27c26e 2023-01-11T21:20:32.9555707Z NV_CUDA_CUDART_DEV_VERSION=11.7.60-1 2023-01-11T21:20:32.9555887Z PR_BODY= 2023-01-11T21:20:32.9556058Z GITHUB_BASE_REF= 2023-01-11T21:20:32.9556227Z TERM=xterm 2023-01-11T21:20:32.9556372Z XLA_CUDA= 2023-01-11T21:20:32.9556569Z NV_NVML_DEV_VERSION=11.7.50-1 2023-01-11T21:20:32.9556767Z TORCH_CUDA_ARCH_LIST=Maxwell 2023-01-11T21:20:32.9556942Z CUDA_VERSION=11.7.0 2023-01-11T21:20:32.9557194Z NV_LIBCUBLAS_PACKAGE_NAME=libcublas-11-7 2023-01-11T21:20:32.9557414Z OPENSSL_ROOT_DIR=/opt/openssl 2023-01-11T21:20:32.9557819Z GITHUB_PATH=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/add_path_b048ddbb-aaa9-4b9f-9784-3beb8ea60621 2023-01-11T21:20:32.9558115Z GITHUB_JOB=test 2023-01-11T21:20:32.9558307Z SCCACHE_S3_KEY_PREFIX=trunk 2023-01-11T21:20:32.9558781Z COMMIT_MESSAGES=+ 52a16ce42647731c772e14e7175afa40fda07b3d make torchgen rename also Number arguments into input+ 87db01a53ecb702267ec36787654e418a52f8e93 fix torch.where signature mismatch+ 8419ddda87c8a47eacc63b54bc7ec98c1f27c26e other instead of output in documentation 2023-01-11T21:20:32.9559232Z NVIDIA_DRIVER_CAPABILITIES=compute,utility 2023-01-11T21:20:32.9559443Z NUM_TEST_SHARDS=1 2023-01-11T21:20:32.9559616Z PR_NUMBER= 2023-01-11T21:20:32.9560001Z GITHUB_OUTPUT=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_output_b048ddbb-aaa9-4b9f-9784-3beb8ea60621 2023-01-11T21:20:32.9560285Z SHLVL=1 2023-01-11T21:20:32.9560541Z NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-11-7 2023-01-11T21:20:32.9562634Z GITHUB_REPOSITORY=pytorch/pytorch 2023-01-11T21:20:32.9563624Z NVIDIA_REQUIRE_CUDA=cuda>=11.7 brand=tesla,driver>=450,driver<451 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=510,driver<511 brand=unknown,driver>=510,driver<511 brand=nvidia,driver>=510,driver<511 brand=nvidiartx,driver>=510,driver<511 brand=quadro,driver>=510,driver<511 brand=quadrortx,driver>=510,driver<511 brand=titan,driver>=510,driver<511 brand=titanrtx,driver>=510,driver<511 brand=geforce,driver>=510,driver<511 brand=geforcertx,driver>=510,driver<511 2023-01-11T21:20:32.9565047Z NV_LIBNPP_DEV_VERSION=11.7.3.21-1 2023-01-11T21:20:32.9565319Z SHA1=8419ddda87c8a47eacc63b54bc7ec98c1f27c26e 2023-01-11T21:20:32.9565543Z GITHUB_EVENT_NAME=push 2023-01-11T21:20:32.9565772Z NV_CUDA_CUDART_VERSION=11.7.60-1 
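The flattened single-line COMMIT_MESSAGES value visible in the environment dump above is produced by the sanitization step in the test script traced earlier: bash pattern substitution first deletes every newline and carriage return, then every single and double quote, before the value is exported into the container. A minimal sketch of those expansions, using a hypothetical two-commit input rather than the real cherry output:

    # Sketch only: the ${var//pattern/} substitutions used by the step script, on made-up input.
    COMMIT_MESSAGES=$'+ abc123 first change\n+ def456 fix "quoted" text\n'   # hypothetical input
    COMMIT_MESSAGES="${COMMIT_MESSAGES//[$'\n\r']}"   # strip newlines / carriage returns
    COMMIT_MESSAGES="${COMMIT_MESSAGES//[\'\"]}"      # strip single and double quotes
    export COMMIT_MESSAGES
    echo "$COMMIT_MESSAGES"   # -> + abc123 first change+ def456 fix quoted text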
2023-01-11T21:20:32.9566110Z TORCH_NVCC_FLAGS=-Xfatbin -compress-all 2023-01-11T21:20:32.9566313Z GITHUB_RUN_NUMBER=22986 2023-01-11T21:20:32.9566508Z GITHUB_WORKFLOW=trunk 2023-01-11T21:20:32.9566823Z PATH=/opt/cache/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2023-01-11T21:20:32.9567147Z NV_LIBNCCL_DEV_PACKAGE_VERSION=2.13.4-1 2023-01-11T21:20:32.9567409Z GITHUB_WORKFLOW_SHA=8419ddda87c8a47eacc63b54bc7ec98c1f27c26e 2023-01-11T21:20:32.9567765Z GITHUB_WORKSPACE=/home/ec2-user/actions-runner/_work/pytorch/pytorch 2023-01-11T21:20:32.9568071Z GITHUB_TRIGGERING_ACTOR=pytorch-bot[bot] 2023-01-11T21:20:32.9568277Z _=/usr/bin/env 2023-01-11T21:20:32.9568563Z ++ python -c 'import site; print(site.getsitepackages()[0])' 2023-01-11T21:20:32.9670260Z + TORCH_INSTALL_DIR=/opt/conda/lib/python3.10/site-packages/torch 2023-01-11T21:20:32.9671281Z + TORCH_BIN_DIR=/opt/conda/lib/python3.10/site-packages/torch/bin 2023-01-11T21:20:32.9671950Z + TORCH_LIB_DIR=/opt/conda/lib/python3.10/site-packages/torch/lib 2023-01-11T21:20:32.9672375Z + TORCH_TEST_DIR=/opt/conda/lib/python3.10/site-packages/torch/test 2023-01-11T21:20:32.9672598Z + BUILD_DIR=build 2023-01-11T21:20:32.9672796Z + BUILD_RENAMED_DIR=build_renamed 2023-01-11T21:20:32.9673036Z + BUILD_BIN_DIR=build/bin 2023-01-11T21:20:32.9673226Z + export VALGRIND=ON 2023-01-11T21:20:32.9673406Z + VALGRIND=ON 2023-01-11T21:20:32.9673604Z + export TORCH_INDUCTOR_INSTALL_GXX=ON 2023-01-11T21:20:32.9673855Z + TORCH_INDUCTOR_INSTALL_GXX=ON 2023-01-11T21:20:32.9674168Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 == *clang9* ]] 2023-01-11T21:20:32.9674573Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *bazel* ]] 2023-01-11T21:20:32.9675452Z ++ realpath build/custom_test_artifacts 2023-01-11T21:20:32.9704070Z + CUSTOM_TEST_ARTIFACT_BUILD_DIR=/var/lib/jenkins/workspace/build/custom_test_artifacts 2023-01-11T21:20:32.9706542Z ++ dirname .jenkins/pytorch/test.sh 2023-01-11T21:20:32.9712801Z + source .jenkins/pytorch/common.sh 2023-01-11T21:20:32.9722015Z +++ dirname .jenkins/pytorch/common.sh 2023-01-11T21:20:32.9731361Z ++ source .jenkins/pytorch/common_utils.sh 2023-01-11T21:20:32.9738705Z +++ declare -f -t trap_add 2023-01-11T21:20:32.9743917Z ++ set -ex 2023-01-11T21:20:32.9744381Z ++ [[ linux-bionic-cuda11.7-py3.10-gcc7 == *rocm* ]] 2023-01-11T21:20:32.9744666Z ++ BUILD_TEST_LIBTORCH=0 2023-01-11T21:20:32.9744979Z + echo 'Environment variables' 2023-01-11T21:20:32.9745172Z Environment variables 2023-01-11T21:20:32.9745350Z + env 2023-01-11T21:20:32.9752398Z SHARD_NUMBER=1 2023-01-11T21:20:32.9752800Z NV_LIBCUBLAS_DEV_VERSION=11.10.1.25-1 2023-01-11T21:20:32.9753492Z NV_CUDA_COMPAT_PACKAGE=cuda-compat-11-7 2023-01-11T21:20:32.9753960Z LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 2023-01-11T21:20:32.9754512Z NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.13.4-1+cuda11.7 2023-01-11T21:20:32.9754871Z UCC_HOME=/usr 2023-01-11T21:20:32.9755366Z BUILD_ENVIRONMENT=linux-bionic-cuda11.7-py3.10-gcc7 2023-01-11T21:20:32.9755838Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=0 2023-01-11T21:20:32.9756334Z NV_LIBNPP_DEV_PACKAGE=libnpp-dev-11-7=11.7.3.21-1 2023-01-11T21:20:32.9756711Z INSTALLED_DB=yes 2023-01-11T21:20:32.9757023Z HOSTNAME=a71a466242c1 2023-01-11T21:20:32.9757355Z GITHUB_REF_NAME=ciflow/trunk/91627 2023-01-11T21:20:32.9757715Z GITHUB_API_URL=https://api.github.com 2023-01-11T21:20:32.9758125Z GITHUB_REPOSITORY_OWNER_ID=21003710 2023-01-11T21:20:32.9758493Z OPENSSL_DIR=/opt/openssl 
2023-01-11T21:20:32.9758877Z UCC_COMMIT=1c7a7127186e7836f73aafbd7697bbc274a77eee 2023-01-11T21:20:32.9759733Z GITHUB_STEP_SUMMARY=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/step_summary_b048ddbb-aaa9-4b9f-9784-3beb8ea60621 2023-01-11T21:20:32.9760132Z CUDA_PATH=/usr/local/cuda 2023-01-11T21:20:32.9760622Z GITHUB_ACTION_PATH=/home/ec2-user/actions-runner/_work/pytorch/pytorch/./.github/actions/setup-linux 2023-01-11T21:20:32.9761048Z GITHUB_RUN_ATTEMPT=1 2023-01-11T21:20:32.9761370Z TEST_CONFIG=nogpu_NO_AVX2 2023-01-11T21:20:32.9761942Z NV_LIBNPP_VERSION=11.7.3.21-1 2023-01-11T21:20:32.9762429Z NV_NVPROF_DEV_PACKAGE=cuda-nvprof-11-7=11.7.50-1 2023-01-11T21:20:32.9762849Z GITHUB_REPOSITORY_OWNER=pytorch 2023-01-11T21:20:32.9763208Z GITHUB_ACTIONS=true 2023-01-11T21:20:32.9763518Z NVIDIA_VISIBLE_DEVICES=all 2023-01-11T21:20:32.9763908Z NV_NVPROF_VERSION=11.7.50-1 2023-01-11T21:20:32.9764315Z NV_LIBCUSPARSE_VERSION=11.7.3.50-1 2023-01-11T21:20:32.9764828Z GITHUB_WORKFLOW_REF=pytorch/pytorch/.github/workflows/trunk.yml@refs/tags/ciflow/trunk/91627 2023-01-11T21:20:32.9765321Z NVIDIA_PRODUCT_NAME=CUDA 2023-01-11T21:20:32.9765608Z CI=true 2023-01-11T21:20:32.9765876Z PYTORCH_OVERRIDE_FLAKY_SIGNAL=1 2023-01-11T21:20:32.9766304Z NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-11-7=11.10.1.25-1 2023-01-11T21:20:32.9766695Z BRANCH= 2023-01-11T21:20:32.9766979Z GITHUB_HEAD_REF= 2023-01-11T21:20:32.9767243Z UCX_COMMIT=31e74cac7bee0ef66bef2af72e7d86d9c282e5ab 2023-01-11T21:20:32.9767521Z GITHUB_ACTOR=pytorch-bot[bot] 2023-01-11T21:20:32.9767974Z CMAKE_CUDA_COMPILER_LAUNCHER=/opt/cache/bin/sccache 2023-01-11T21:20:32.9768375Z GITHUB_ACTION_REF= 2023-01-11T21:20:32.9768708Z NCCL_VERSION=2.13.4-1 2023-01-11T21:20:32.9768894Z GITHUB_ACTION=__self 2023-01-11T21:20:32.9769060Z VALGRIND=ON 2023-01-11T21:20:32.9769243Z GITHUB_REF_PROTECTED=false 2023-01-11T21:20:32.9769631Z XLA_CLANG_CACHE_S3_BUCKET_NAME=ossci-compiler-clang-cache-circleci-xla 2023-01-11T21:20:32.9769920Z PYTORCH_TEST_RERUN_DISABLED_TESTS=0 2023-01-11T21:20:32.9770197Z *** 2023-01-11T21:20:32.9770364Z INSTALLED_VISION=yes 2023-01-11T21:20:32.9770545Z NVARCH=x86_64 2023-01-11T21:20:32.9770757Z NV_LIBCUSPARSE_DEV_VERSION=11.7.3.50-1 2023-01-11T21:20:32.9770963Z HOME=/var/lib/jenkins 2023-01-11T21:20:32.9771372Z GITHUB_STATE=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/save_state_b048ddbb-aaa9-4b9f-9784-3beb8ea60621 2023-01-11T21:20:32.9771672Z CARGO_NET_GIT_FETCH_WITH_CLI=true 2023-01-11T21:20:32.9771877Z NVIDIA_CUDA_END_OF_LIFE=1 2023-01-11T21:20:32.9772079Z GITHUB_ACTION_REPOSITORY= 2023-01-11T21:20:32.9772262Z GITHUB_REF_TYPE=tag 2023-01-11T21:20:32.9772482Z NV_LIBNCCL_PACKAGE_VERSION=2.13.4-1 2023-01-11T21:20:32.9772689Z GITHUB_RETENTION_DAYS=90 2023-01-11T21:20:32.9772964Z SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2 2023-01-11T21:20:32.9773275Z NV_LIBNCCL_PACKAGE=libnccl2=2.13.4-1+cuda11.7 2023-01-11T21:20:32.9773698Z GITHUB_ENV=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_env_b048ddbb-aaa9-4b9f-9784-3beb8ea60621 2023-01-11T21:20:32.9774004Z DEBIAN_FRONTEND=noninteractive 2023-01-11T21:20:32.9774257Z NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev 2023-01-11T21:20:32.9774484Z GITHUB_REF=refs/tags/ciflow/trunk/91627 2023-01-11T21:20:32.9774714Z NV_CUDA_LIB_VERSION=11.7.0-1 2023-01-11T21:20:32.9774938Z GITHUB_SHA=8419ddda87c8a47eacc63b54bc7ec98c1f27c26e 2023-01-11T21:20:32.9775165Z INSTALLED_PROTOBUF=yes 2023-01-11T21:20:32.9775362Z GITHUB_REPOSITORY_ID=65600975 2023-01-11T21:20:32.9775545Z 
GITHUB_RUN_ID=3896346758 2023-01-11T21:20:32.9775806Z NV_LIBNPP_PACKAGE=libnpp-11-7=11.7.3.21-1 2023-01-11T21:20:32.9776030Z NV_LIBNCCL_PACKAGE_NAME=libnccl2 2023-01-11T21:20:32.9776238Z LIBRARY_PATH=/usr/local/cuda/lib64/stubs 2023-01-11T21:20:32.9776464Z NV_NVTX_VERSION=11.7.50-1 2023-01-11T21:20:32.9776666Z CONTINUE_THROUGH_ERROR=False 2023-01-11T21:20:32.9776880Z GITHUB_SERVER_URL=https://github.com 2023-01-11T21:20:32.9777087Z MAX_JOBS=6 2023-01-11T21:20:32.9777268Z GITHUB_ACTOR_ID=54816060 2023-01-11T21:20:32.9777479Z NV_LIBCUBLAS_VERSION=11.10.1.25-1 2023-01-11T21:20:32.9777763Z NV_LIBCUBLAS_PACKAGE=libcublas-11-7=11.10.1.25-1 2023-01-11T21:20:32.9778134Z GITHUB_EVENT_PATH=/home/ec2-user/actions-runner/_work/_temp/_github_workflow/event.json 2023-01-11T21:20:32.9778382Z UCX_HOME=/usr 2023-01-11T21:20:32.9778570Z PYTORCH_RETRY_TEST_CASES=1 2023-01-11T21:20:32.9778816Z GITHUB_GRAPHQL_URL=https://api.github.com/graphql 2023-01-11T21:20:32.9779089Z BASE_SHA=8419ddda87c8a47eacc63b54bc7ec98c1f27c26e 2023-01-11T21:20:32.9779337Z NV_CUDA_CUDART_DEV_VERSION=11.7.60-1 2023-01-11T21:20:32.9779589Z PR_BODY= 2023-01-11T21:20:32.9779763Z GITHUB_BASE_REF= 2023-01-11T21:20:32.9779924Z TERM=xterm 2023-01-11T21:20:32.9780113Z TORCH_INDUCTOR_INSTALL_GXX=ON 2023-01-11T21:20:32.9780303Z XLA_CUDA= 2023-01-11T21:20:32.9780490Z NV_NVML_DEV_VERSION=11.7.50-1 2023-01-11T21:20:32.9780697Z TORCH_CUDA_ARCH_LIST=Maxwell 2023-01-11T21:20:32.9780893Z CUDA_VERSION=11.7.0 2023-01-11T21:20:32.9781138Z NV_LIBCUBLAS_PACKAGE_NAME=libcublas-11-7 2023-01-11T21:20:32.9781363Z OPENSSL_ROOT_DIR=/opt/openssl 2023-01-11T21:20:32.9781784Z GITHUB_PATH=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/add_path_b048ddbb-aaa9-4b9f-9784-3beb8ea60621 2023-01-11T21:20:32.9782070Z GITHUB_JOB=test 2023-01-11T21:20:32.9782264Z SCCACHE_S3_KEY_PREFIX=trunk 2023-01-11T21:20:32.9783058Z COMMIT_MESSAGES=+ 52a16ce42647731c772e14e7175afa40fda07b3d make torchgen rename also Number arguments into input+ 87db01a53ecb702267ec36787654e418a52f8e93 fix torch.where signature mismatch+ 8419ddda87c8a47eacc63b54bc7ec98c1f27c26e other instead of output in documentation 2023-01-11T21:20:32.9783535Z NVIDIA_DRIVER_CAPABILITIES=compute,utility 2023-01-11T21:20:32.9783738Z NUM_TEST_SHARDS=1 2023-01-11T21:20:32.9783913Z PR_NUMBER= 2023-01-11T21:20:32.9784325Z GITHUB_OUTPUT=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_output_b048ddbb-aaa9-4b9f-9784-3beb8ea60621 2023-01-11T21:20:32.9784603Z SHLVL=1 2023-01-11T21:20:32.9784862Z NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-11-7 2023-01-11T21:20:32.9785102Z GITHUB_REPOSITORY=pytorch/pytorch 2023-01-11T21:20:32.9785999Z NVIDIA_REQUIRE_CUDA=cuda>=11.7 brand=tesla,driver>=450,driver<451 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=510,driver<511 brand=unknown,driver>=510,driver<511 brand=nvidia,driver>=510,driver<511 brand=nvidiartx,driver>=510,driver<511 brand=quadro,driver>=510,driver<511 brand=quadrortx,driver>=510,driver<511 brand=titan,driver>=510,driver<511 brand=titanrtx,driver>=510,driver<511 brand=geforce,driver>=510,driver<511 brand=geforcertx,driver>=510,driver<511 2023-01-11T21:20:32.9786847Z NV_LIBNPP_DEV_VERSION=11.7.3.21-1 
2023-01-11T21:20:32.9787080Z SHA1=8419ddda87c8a47eacc63b54bc7ec98c1f27c26e 2023-01-11T21:20:32.9787288Z GITHUB_EVENT_NAME=push 2023-01-11T21:20:32.9787546Z NV_CUDA_CUDART_VERSION=11.7.60-1 2023-01-11T21:20:32.9787807Z TORCH_NVCC_FLAGS=-Xfatbin -compress-all 2023-01-11T21:20:32.9788008Z GITHUB_RUN_NUMBER=22986 2023-01-11T21:20:32.9788197Z GITHUB_WORKFLOW=trunk 2023-01-11T21:20:32.9788503Z PATH=/opt/cache/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2023-01-11T21:20:32.9788829Z NV_LIBNCCL_DEV_PACKAGE_VERSION=2.13.4-1 2023-01-11T21:20:32.9789093Z GITHUB_WORKFLOW_SHA=8419ddda87c8a47eacc63b54bc7ec98c1f27c26e 2023-01-11T21:20:32.9789451Z GITHUB_WORKSPACE=/home/ec2-user/actions-runner/_work/pytorch/pytorch 2023-01-11T21:20:32.9789816Z GITHUB_TRIGGERING_ACTOR=pytorch-bot[bot] 2023-01-11T21:20:32.9790023Z _=/usr/bin/env 2023-01-11T21:20:32.9790235Z + echo 'Testing pytorch' 2023-01-11T21:20:32.9790407Z Testing pytorch 2023-01-11T21:20:32.9790605Z + export LANG=C.UTF-8 2023-01-11T21:20:32.9790799Z + LANG=C.UTF-8 2023-01-11T21:20:32.9859023Z + PR_NUMBER= 2023-01-11T21:20:32.9859400Z + [[ nogpu_NO_AVX2 == \d\e\f\a\u\l\t ]] 2023-01-11T21:20:32.9859771Z + [[ nogpu_NO_AVX2 == \d\i\s\t\r\i\b\u\t\e\d ]] 2023-01-11T21:20:32.9860106Z + [[ nogpu_NO_AVX2 == \s\l\o\w ]] 2023-01-11T21:20:32.9860681Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 == *slow-gradcheck* ]] 2023-01-11T21:20:32.9861290Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 == *cuda* ]] 2023-01-11T21:20:32.9861758Z + export PYTORCH_TESTING_DEVICE_ONLY_FOR=cuda 2023-01-11T21:20:32.9862008Z + PYTORCH_TESTING_DEVICE_ONLY_FOR=cuda 2023-01-11T21:20:32.9862325Z + [[ nogpu_NO_AVX2 == *crossref* ]] 2023-01-11T21:20:32.9862667Z + [[ nogpu_NO_AVX2 == *dynamo* ]] 2023-01-11T21:20:32.9862856Z + [[ nogpu_NO_AVX2 == *inductor* ]] 2023-01-11T21:20:32.9863147Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 == *rocm* ]] 2023-01-11T21:20:32.9863477Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *-bazel-* ]] 2023-01-11T21:20:32.9863759Z + pip_install --user ninja==1.10.2 2023-01-11T21:20:32.9864039Z + pip install --progress-bar off --user ninja==1.10.2 2023-01-11T21:20:33.3740813Z Collecting ninja==1.10.2 2023-01-11T21:20:33.3917303Z Downloading ninja-1.10.2-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.whl (108 kB) 2023-01-11T21:20:33.9705544Z Installing collected packages: ninja 2023-01-11T21:20:33.9776768Z  WARNING: The script ninja is installed in '/var/lib/jenkins/.local/bin' which is not on PATH. 2023-01-11T21:20:33.9777721Z Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. 
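As the trace below shows, test.sh matches the nogpu_NO_AVX2 config and exports ATEN_CPU_CAPABILITY=default, which caps ATen's CPU kernel dispatch at the baseline (no-AVX) implementations; the torch.__config__.show() output that follows confirms this with "CPU capability usage: NO AVX". A small sketch of how the effective capability can be checked under the same cap (the grep filter is only an illustration):

    # Sketch: print which CPU kernel tier ATen reports with the capability capped to the baseline.
    export ATEN_CPU_CAPABILITY=default
    python -c "import torch; print(torch.__config__.show())" | grep "CPU capability"
    # on this job the matching line reads: - CPU capability usage: NO AVX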
2023-01-11T21:20:33.9826631Z Successfully installed ninja-1.10.2 2023-01-11T21:20:34.0468287Z + export PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2023-01-11T21:20:34.0468994Z + PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2023-01-11T21:20:34.0469839Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 == *asan* ]] 2023-01-11T21:20:34.0470243Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 == *-tsan* ]] 2023-01-11T21:20:34.0470512Z + [[ nogpu_NO_AVX2 == \n\o\g\p\u\_\N\O\_\A\V\X\2 ]] 2023-01-11T21:20:34.0470744Z + export ATEN_CPU_CAPABILITY=default 2023-01-11T21:20:34.0470946Z + ATEN_CPU_CAPABILITY=default 2023-01-11T21:20:34.0476886Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 == *tbb* ]] 2023-01-11T21:20:34.0488950Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 == *libtorch* ]] 2023-01-11T21:20:34.0489567Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 == *-bazel-* ]] 2023-01-11T21:20:34.0489914Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 == *-tsan* ]] 2023-01-11T21:20:34.0491937Z + cd test 2023-01-11T21:20:34.0508171Z + python -c 'import torch; print(torch.__config__.show())' 2023-01-11T21:20:35.3558287Z PyTorch built with: 2023-01-11T21:20:35.3558834Z - GCC 7.5 2023-01-11T21:20:35.3559233Z - C++ Version: 201703 2023-01-11T21:20:35.3559707Z - Intel(R) oneAPI Math Kernel Library Version 2022.0-Product Build 20211112 for Intel(R) 64 architecture applications 2023-01-11T21:20:35.3560123Z - Intel(R) MKL-DNN v2.7.2 (Git Hash fbec3e25a559ee252022ae066817b204e106a6ba) 2023-01-11T21:20:35.3560427Z - OpenMP 201511 (a.k.a. OpenMP 4.5) 2023-01-11T21:20:35.3560705Z - LAPACK is enabled (usually provided by MKL) 2023-01-11T21:20:35.3560949Z - NNPACK is enabled 2023-01-11T21:20:35.3561171Z - CPU capability usage: NO AVX 2023-01-11T21:20:35.3563702Z - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/cache/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Werror -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, FORCE_FALLBACK_CUDA_MPI=1, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=ON, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, 2023-01-11T21:20:35.3565840Z 2023-01-11T21:20:35.9374298Z + cd test 2023-01-11T21:20:35.9374753Z + python -c 'import torch; print(torch.__config__.parallel_info())' 2023-01-11T21:20:37.0657033Z ATen/Parallel: 2023-01-11T21:20:37.0668984Z at::get_num_threads() : 4 
2023-01-11T21:20:37.0669257Z at::get_num_interop_threads() : 4 2023-01-11T21:20:37.0669473Z OpenMP 201511 (a.k.a. OpenMP 4.5) 2023-01-11T21:20:37.0669771Z omp_get_max_threads() : 4 2023-01-11T21:20:37.0670290Z Intel(R) oneAPI Math Kernel Library Version 2022.0-Product Build 20211112 for Intel(R) 64 architecture applications 2023-01-11T21:20:37.0670578Z mkl_get_max_threads() : 4 2023-01-11T21:20:37.0670903Z Intel(R) MKL-DNN v2.7.2 (Git Hash fbec3e25a559ee252022ae066817b204e106a6ba) 2023-01-11T21:20:37.0671394Z std::thread::hardware_concurrency() : 8 2023-01-11T21:20:37.0671607Z Environment variables: 2023-01-11T21:20:37.0671801Z OMP_NUM_THREADS : [not set] 2023-01-11T21:20:37.0671982Z MKL_NUM_THREADS : [not set] 2023-01-11T21:20:37.0672182Z ATen parallel backend: OpenMP 2023-01-11T21:20:37.0672307Z 2023-01-11T21:20:37.2787515Z + [[ nogpu_NO_AVX2 == *backward* ]] 2023-01-11T21:20:37.2787867Z + [[ nogpu_NO_AVX2 == *xla* ]] 2023-01-11T21:20:37.2788226Z + [[ nogpu_NO_AVX2 == \j\i\t\_\l\e\g\a\c\y ]] 2023-01-11T21:20:37.2788901Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 == *libtorch* ]] 2023-01-11T21:20:37.2789144Z + [[ nogpu_NO_AVX2 == distributed ]] 2023-01-11T21:20:37.2789352Z + [[ nogpu_NO_AVX2 == deploy ]] 2023-01-11T21:20:37.2789569Z + [[ nogpu_NO_AVX2 == *inductor_distributed* ]] 2023-01-11T21:20:37.2789822Z + [[ nogpu_NO_AVX2 == *dynamo* ]] 2023-01-11T21:20:37.2790018Z + [[ nogpu_NO_AVX2 == *dynamo* ]] 2023-01-11T21:20:37.2790236Z + [[ nogpu_NO_AVX2 == *inductor_huggingface* ]] 2023-01-11T21:20:37.2790478Z + [[ nogpu_NO_AVX2 == *inductor_timm* ]] 2023-01-11T21:20:37.2790691Z + [[ nogpu_NO_AVX2 == *inductor_torchbench* ]] 2023-01-11T21:20:37.2791002Z + [[ nogpu_NO_AVX2 == *inductor* ]] 2023-01-11T21:20:37.2791183Z + [[ 1 == 1 ]] 2023-01-11T21:20:37.2791414Z + [[ 1 -gt 1 ]] 2023-01-11T21:20:37.2791670Z + [[ 1 == 2 ]] 2023-01-11T21:20:37.2791881Z + [[ 1 -gt 2 ]] 2023-01-11T21:20:37.2792141Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 == *vulkan* ]] 2023-01-11T21:20:37.2792473Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 == *-bazel-* ]] 2023-01-11T21:20:37.2792857Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 == *-mobile-lightweight-dispatch* ]] 2023-01-11T21:20:37.2793207Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 == *-tsan* ]] 2023-01-11T21:20:37.2793438Z + [[ nogpu_NO_AVX2 = docs_test ]] 2023-01-11T21:20:37.2793643Z + [[ nogpu_NO_AVX2 == *functorch* ]] 2023-01-11T21:20:37.2793832Z + install_torchvision 2023-01-11T21:20:37.2794008Z + local commit 2023-01-11T21:20:37.2794190Z ++ get_pinned_commit vision 2023-01-11T21:20:37.2794396Z ++ cat .github/ci_commit_pins/vision.txt 2023-01-11T21:20:37.2818656Z + commit=32d254bbfcf14975f846765775584e61ef25a5bc 2023-01-11T21:20:37.2819461Z + pip_install --no-use-pep517 --user git+https://github.com/pytorch/vision.git@32d254bbfcf14975f846765775584e61ef25a5bc 2023-01-11T21:20:37.2820036Z + pip install --progress-bar off --no-use-pep517 --user git+https://github.com/pytorch/vision.git@32d254bbfcf14975f846765775584e61ef25a5bc 2023-01-11T21:20:37.6142964Z Collecting git+https://github.com/pytorch/vision.git@32d254bbfcf14975f846765775584e61ef25a5bc 2023-01-11T21:20:37.6148906Z Cloning https://github.com/pytorch/vision.git (to revision 32d254bbfcf14975f846765775584e61ef25a5bc) to /tmp/pip-req-build-fx6dww2u 2023-01-11T21:20:37.6317253Z Running command git clone --filter=blob:none --quiet https://github.com/pytorch/vision.git /tmp/pip-req-build-fx6dww2u 2023-01-11T21:20:39.8634918Z Running command git rev-parse -q --verify 'sha^32d254bbfcf14975f846765775584e61ef25a5bc' 
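The install_torchvision helper traced above reads a pinned SHA from .github/ci_commit_pins/vision.txt and hands pip a git+ URL at that revision; pip's fetch and checkout of that commit continue below. A rough sketch of the pin-and-install pattern, with a hypothetical install_pinned helper rather than the CI's bash function:

```python
import subprocess
import sys
from pathlib import Path


def install_pinned(repo_url: str, pin_file: str, extra_args: tuple = ()) -> None:
    """Hypothetical sketch: read a pinned SHA and pip-install the repo at it."""
    commit = Path(pin_file).read_text().strip()
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "--progress-bar", "off",
         *extra_args, f"git+{repo_url}@{commit}"]
    )


# Mirroring the torchvision step above (run from the pytorch checkout root):
# install_pinned("https://github.com/pytorch/vision.git",
#                ".github/ci_commit_pins/vision.txt",
#                ("--no-use-pep517", "--user"))
```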
2023-01-11T21:20:39.8653255Z Running command git fetch -q https://github.com/pytorch/vision.git 32d254bbfcf14975f846765775584e61ef25a5bc 2023-01-11T21:20:40.8144553Z Running command git checkout -q 32d254bbfcf14975f846765775584e61ef25a5bc 2023-01-11T21:20:41.4138317Z Resolved https://github.com/pytorch/vision.git to commit 32d254bbfcf14975f846765775584e61ef25a5bc 2023-01-11T21:20:43.2827923Z Preparing metadata (setup.py) ... done 2023-01-11T21:20:43.2912257Z Requirement already satisfied: typing_extensions in /opt/conda/lib/python3.10/site-packages (from torchvision==0.15.0a0+32d254b) (4.4.0) 2023-01-11T21:20:43.2917626Z Requirement already satisfied: numpy in /opt/conda/lib/python3.10/site-packages (from torchvision==0.15.0a0+32d254b) (1.21.2) 2023-01-11T21:20:43.2921828Z Requirement already satisfied: requests in /opt/conda/lib/python3.10/site-packages (from torchvision==0.15.0a0+32d254b) (2.28.1) 2023-01-11T21:20:43.2928893Z Requirement already satisfied: torch in /opt/conda/lib/python3.10/site-packages (from torchvision==0.15.0a0+32d254b) (2.0.0a0+git8419ddd) 2023-01-11T21:20:43.2937549Z Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /opt/conda/lib/python3.10/site-packages (from torchvision==0.15.0a0+32d254b) (9.3.0) 2023-01-11T21:20:43.3112356Z Requirement already satisfied: charset-normalizer<3,>=2 in /opt/conda/lib/python3.10/site-packages (from requests->torchvision==0.15.0a0+32d254b) (2.0.4) 2023-01-11T21:20:43.3117690Z Requirement already satisfied: urllib3<1.27,>=1.21.1 in /opt/conda/lib/python3.10/site-packages (from requests->torchvision==0.15.0a0+32d254b) (1.26.13) 2023-01-11T21:20:43.3123445Z Requirement already satisfied: idna<4,>=2.5 in /opt/conda/lib/python3.10/site-packages (from requests->torchvision==0.15.0a0+32d254b) (3.4) 2023-01-11T21:20:43.3130432Z Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.10/site-packages (from requests->torchvision==0.15.0a0+32d254b) (2022.12.7) 2023-01-11T21:20:43.3170634Z Requirement already satisfied: sympy in /opt/conda/lib/python3.10/site-packages (from torch->torchvision==0.15.0a0+32d254b) (1.11.1) 2023-01-11T21:20:43.3174146Z Requirement already satisfied: networkx in /opt/conda/lib/python3.10/site-packages (from torch->torchvision==0.15.0a0+32d254b) (2.6.3) 2023-01-11T21:20:43.3462563Z Requirement already satisfied: mpmath>=0.19 in /opt/conda/lib/python3.10/site-packages (from sympy->torch->torchvision==0.15.0a0+32d254b) (1.2.1) 2023-01-11T21:20:43.3513967Z Building wheels for collected packages: torchvision 2023-01-11T21:21:36.2454400Z Building wheel for torchvision (setup.py) ...
done 2023-01-11T21:21:36.2487176Z Created wheel for torchvision: filename=torchvision-0.15.0a0+32d254b-cp310-cp310-linux_x86_64.whl size=1003187 sha256=cb53fcc86fcc15f7231c78e9a501fda3a377c068517f98a1c73f50f160945a01 2023-01-11T21:21:36.2487939Z Stored in directory: /var/lib/jenkins/.cache/pip/wheels/ca/33/ae/1f7c8972d058d079236e7ca0a30b53b050afb405820b9ed787 2023-01-11T21:21:36.2522966Z Successfully built torchvision 2023-01-11T21:21:36.7755660Z Installing collected packages: torchvision 2023-01-11T21:21:37.1560251Z Successfully installed torchvision-0.15.0a0+32d254b 2023-01-11T21:21:37.2614615Z + install_triton 2023-01-11T21:21:37.2614875Z + local commit 2023-01-11T21:21:37.2615069Z + [[ nogpu_NO_AVX2 == *rocm* ]] 2023-01-11T21:21:37.2617792Z ++ get_pinned_commit triton 2023-01-11T21:21:37.2617994Z ++ cat .github/ci_commit_pins/triton.txt 2023-01-11T21:21:37.2653721Z + commit=0d7e7532279e45672555e344646f5c19c3972331 2023-01-11T21:21:37.2654382Z + pip_install --user git+https://github.com/openai/triton@0d7e7532279e45672555e344646f5c19c3972331#subdirectory=python 2023-01-11T21:21:37.2654972Z + pip install --progress-bar off --user git+https://github.com/openai/triton@0d7e7532279e45672555e344646f5c19c3972331#subdirectory=python 2023-01-11T21:21:37.6099665Z Collecting git+https://github.com/openai/triton@0d7e7532279e45672555e344646f5c19c3972331#subdirectory=python 2023-01-11T21:21:37.6104474Z Cloning https://github.com/openai/triton (to revision 0d7e7532279e45672555e344646f5c19c3972331) to /tmp/pip-req-build-q7ry5r63 2023-01-11T21:21:37.6121323Z Running command git clone --filter=blob:none --quiet https://github.com/openai/triton /tmp/pip-req-build-q7ry5r63 2023-01-11T21:21:38.3260639Z Running command git rev-parse -q --verify 'sha^0d7e7532279e45672555e344646f5c19c3972331' 2023-01-11T21:21:38.3278917Z Running command git fetch -q https://github.com/openai/triton 0d7e7532279e45672555e344646f5c19c3972331 2023-01-11T21:21:38.6653042Z Running command git checkout -q 0d7e7532279e45672555e344646f5c19c3972331 2023-01-11T21:21:39.0000353Z Resolved https://github.com/openai/triton to commit 0d7e7532279e45672555e344646f5c19c3972331 2023-01-11T21:21:39.0001397Z Running command git submodule update --init --recursive -q 2023-01-11T21:21:39.5005870Z Preparing metadata (setup.py) ...
done 2023-01-11T21:21:39.6469957Z Collecting cmake 2023-01-11T21:21:39.6691355Z Downloading cmake-3.25.0-py2.py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23.7 MB) 2023-01-11T21:21:39.9501940Z Collecting filelock 2023-01-11T21:21:39.9548426Z Downloading filelock-3.9.0-py3-none-any.whl (9.7 kB) 2023-01-11T21:21:39.9587416Z Requirement already satisfied: torch in /opt/conda/lib/python3.10/site-packages (from triton==2.0.0) (2.0.0a0+git8419ddd) 2023-01-11T21:21:39.9761105Z Requirement already satisfied: sympy in /opt/conda/lib/python3.10/site-packages (from torch->triton==2.0.0) (1.11.1) 2023-01-11T21:21:39.9763978Z Requirement already satisfied: networkx in /opt/conda/lib/python3.10/site-packages (from torch->triton==2.0.0) (2.6.3) 2023-01-11T21:21:39.9767140Z Requirement already satisfied: typing-extensions in /opt/conda/lib/python3.10/site-packages (from torch->triton==2.0.0) (4.4.0) 2023-01-11T21:21:39.9907929Z Requirement already satisfied: mpmath>=0.19 in /opt/conda/lib/python3.10/site-packages (from sympy->torch->triton==2.0.0) (1.2.1) 2023-01-11T21:21:39.9955670Z Building wheels for collected packages: triton 2023-01-11T21:22:21.3448030Z Building wheel for triton (setup.py) ... done 2023-01-11T21:22:21.3813075Z Created wheel for triton: filename=triton-2.0.0-cp310-cp310-linux_x86_64.whl size=15377935 sha256=1afa21e5959155ed2bd815b9f7982252c103b5a50cf26a592ee40063a2f21ae8 2023-01-11T21:22:21.3813552Z Stored in directory: /var/lib/jenkins/.cache/pip/wheels/3f/1d/23/1c2bc47d618a44f9c949aea4b7e355e737a1f1ed208f009295 2023-01-11T21:22:21.3830265Z Successfully built triton 2023-01-11T21:22:21.9768130Z Installing collected packages: cmake, filelock, triton 2023-01-11T21:22:23.0445749Z Successfully installed cmake-3.25.0 filelock-3.9.0 triton-2.0.0 2023-01-11T21:22:23.1854107Z + pip_install --user jinja2 2023-01-11T21:22:23.1854621Z + pip install --progress-bar off --user jinja2 2023-01-11T21:22:23.6080539Z Collecting jinja2 2023-01-11T21:22:23.6280251Z Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB) 2023-01-11T21:22:23.6439124Z Requirement already satisfied: MarkupSafe>=2.0 in /opt/conda/lib/python3.10/site-packages (from jinja2) (2.1.1) 2023-01-11T21:22:24.2461970Z Installing collected packages: jinja2 2023-01-11T21:22:24.3269914Z Successfully installed jinja2-3.1.2 2023-01-11T21:22:24.4304798Z + install_monkeytype 2023-01-11T21:22:24.4305052Z + pip_install MonkeyType 2023-01-11T21:22:24.4305475Z + pip install --progress-bar off MonkeyType 2023-01-11T21:22:24.8190636Z Collecting MonkeyType 2023-01-11T21:22:24.8389865Z Downloading MonkeyType-22.2.0-py3-none-any.whl (37 kB) 2023-01-11T21:22:24.9005126Z Collecting libcst>=0.3.7 2023-01-11T21:22:24.9071371Z Downloading libcst-0.4.9-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.8 MB) 2023-01-11T21:22:24.9452578Z Requirement already satisfied: mypy-extensions in /opt/conda/lib/python3.10/site-packages (from MonkeyType) (0.4.3) 2023-01-11T21:22:24.9624800Z Requirement already satisfied: pyyaml>=5.2 in /opt/conda/lib/python3.10/site-packages (from libcst>=0.3.7->MonkeyType) (6.0) 2023-01-11T21:22:24.9631407Z Requirement already satisfied: typing-extensions>=3.7.4.2 in /opt/conda/lib/python3.10/site-packages (from libcst>=0.3.7->MonkeyType) (4.4.0) 2023-01-11T21:22:24.9778591Z Collecting typing-inspect>=0.4.0 2023-01-11T21:22:24.9837999Z Downloading typing_inspect-0.8.0-py3-none-any.whl (8.7 kB) 2023-01-11T21:22:25.6148815Z Installing collected packages: typing-inspect,
libcst, MonkeyType 2023-01-11T21:22:26.1451348Z Successfully installed MonkeyType-22.2.0 libcst-0.4.9 typing-inspect-0.8.0 2023-01-11T21:22:26.2466711Z + test_python 2023-01-11T21:22:26.2467499Z + python test/run_test.py --exclude-jit-executor --exclude-distributed-tests --verbose 2023-01-11T21:22:27.9231267Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:22:28.0857736Z Ignoring disabled issues: [] 2023-01-11T21:22:28.1717226Z /var/lib/jenkins/workspace/test/run_test.py:1169: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead. 2023-01-11T21:22:28.1717707Z if torch.version.cuda is not None and LooseVersion(torch.version.cuda) >= "11.6": 2023-01-11T21:22:28.1756220Z Selected tests: 2023-01-11T21:22:28.1756529Z backends/xeon/test_launch 2023-01-11T21:22:28.1756856Z benchmark_utils/test_benchmark_utils 2023-01-11T21:22:28.1758523Z distributions/test_distributions 2023-01-11T21:22:28.1758876Z dynamo/test_aot_autograd 2023-01-11T21:22:28.1759166Z dynamo/test_aot_cudagraphs 2023-01-11T21:22:28.1759459Z dynamo/test_comptime 2023-01-11T21:22:28.1759743Z dynamo/test_dynamic_shapes 2023-01-11T21:22:28.1760170Z dynamo/test_export 2023-01-11T21:22:28.1760467Z dynamo/test_export_mutations 2023-01-11T21:22:28.1760946Z dynamo/test_functions 2023-01-11T21:22:28.1761302Z dynamo/test_global 2023-01-11T21:22:28.1761650Z dynamo/test_global_declaration 2023-01-11T21:22:28.1762026Z dynamo/test_minifier 2023-01-11T21:22:28.1762326Z dynamo/test_misc 2023-01-11T21:22:28.1762643Z dynamo/test_model_output 2023-01-11T21:22:28.1762970Z dynamo/test_modules 2023-01-11T21:22:28.1763281Z dynamo/test_nops 2023-01-11T21:22:28.1763591Z dynamo/test_optimizations 2023-01-11T21:22:28.1763934Z dynamo/test_optimizers 2023-01-11T21:22:28.1764276Z dynamo/test_python_autograd 2023-01-11T21:22:28.1764592Z dynamo/test_recompile_ux 2023-01-11T21:22:28.1764883Z dynamo/test_replay_record 2023-01-11T21:22:28.1765194Z dynamo/test_repros 2023-01-11T21:22:28.1765519Z dynamo/test_skip_non_tensor 2023-01-11T21:22:28.1765839Z dynamo/test_subgraphs 2023-01-11T21:22:28.1766163Z dynamo/test_torchxla_integration 2023-01-11T21:22:28.1766523Z dynamo/test_torchxla_num_output 2023-01-11T21:22:28.1766818Z dynamo/test_torchxla_util 2023-01-11T21:22:28.1767128Z dynamo/test_unspec 2023-01-11T21:22:28.1767427Z dynamo/test_verify_correctness 2023-01-11T21:22:28.1783252Z inductor/test_minifier 2023-01-11T21:22:28.1783553Z inductor/test_perf 2023-01-11T21:22:28.1783884Z inductor/test_smoke 2023-01-11T21:22:28.1784131Z inductor/test_torchinductor 2023-01-11T21:22:28.1784342Z inductor/test_torchinductor_opinfo 2023-01-11T21:22:28.1784677Z lazy/test_bindings 2023-01-11T21:22:28.1784983Z lazy/test_debug_util 2023-01-11T21:22:28.1785274Z lazy/test_extract_compiled_graph 2023-01-11T21:22:28.1785603Z lazy/test_meta_kernel 2023-01-11T21:22:28.1785902Z lazy/test_reuse_ir 2023-01-11T21:22:28.1786142Z lazy/test_step_closures 2023-01-11T21:22:28.1786335Z lazy/test_ts_opinfo 2023-01-11T21:22:28.1786520Z nn/test_convolution 2023-01-11T21:22:28.1786687Z nn/test_dropout 2023-01-11T21:22:28.1786866Z nn/test_embedding 2023-01-11T21:22:28.1787039Z nn/test_init 2023-01-11T21:22:28.1787202Z nn/test_lazy_modules 2023-01-11T21:22:28.1787387Z nn/test_module_hooks 2023-01-11T21:22:28.1787580Z nn/test_multihead_attention 2023-01-11T21:22:28.1787765Z nn/test_packed_sequence 2023-01-11T21:22:28.1787965Z nn/test_parametrization 2023-01-11T21:22:28.1788155Z nn/test_pooling 2023-01-11T21:22:28.1788692Z nn/test_pruning 
2023-01-11T21:22:28.1789062Z profiler/test_memory_profiler 2023-01-11T21:22:28.1789396Z profiler/test_profiler 2023-01-11T21:22:28.1789587Z profiler/test_profiler_tree 2023-01-11T21:22:28.1789783Z test_ao_sparsity 2023-01-11T21:22:28.1790038Z test_autocast 2023-01-11T21:22:28.1790211Z test_autograd 2023-01-11T21:22:28.1790389Z test_binary_ufuncs 2023-01-11T21:22:28.1790575Z test_bundled_inputs 2023-01-11T21:22:28.1790757Z test_comparison_utils 2023-01-11T21:22:28.1791022Z test_complex 2023-01-11T21:22:28.1791304Z test_cpp_api_parity 2023-01-11T21:22:28.1791683Z test_cpp_extensions_aot_ninja 2023-01-11T21:22:28.1792050Z test_cpp_extensions_aot_no_ninja 2023-01-11T21:22:28.1792421Z test_cpp_extensions_jit 2023-01-11T21:22:28.1792820Z test_cpp_extensions_open_device_registration 2023-01-11T21:22:28.1793163Z test_cuda 2023-01-11T21:22:28.1793470Z test_cuda_nvml_based_avail 2023-01-11T21:22:28.1793697Z test_cuda_primary_ctx 2023-01-11T21:22:28.1793868Z test_cuda_sanitizer 2023-01-11T21:22:28.1794157Z test_cuda_trace 2023-01-11T21:22:28.1794339Z test_dataloader 2023-01-11T21:22:28.1794565Z test_datapipe 2023-01-11T21:22:28.1794848Z test_decomp 2023-01-11T21:22:28.1795143Z test_deploy 2023-01-11T21:22:28.1795417Z test_dispatch 2023-01-11T21:22:28.1795713Z test_dlpack 2023-01-11T21:22:28.1796020Z test_dynamic_shapes 2023-01-11T21:22:28.1796285Z test_expanded_weights 2023-01-11T21:22:28.1796602Z test_fake_tensor 2023-01-11T21:22:28.1796901Z test_foreach 2023-01-11T21:22:28.1797192Z test_function_schema 2023-01-11T21:22:28.1797523Z test_functional_autograd_benchmark 2023-01-11T21:22:28.1797847Z test_functional_optim 2023-01-11T21:22:28.1798169Z test_functionalization 2023-01-11T21:22:28.1798490Z test_futures 2023-01-11T21:22:28.1798778Z test_fx 2023-01-11T21:22:28.1799059Z test_fx_experimental 2023-01-11T21:22:28.1799383Z test_fx_passes 2023-01-11T21:22:28.1799713Z test_fx_reinplace_pass 2023-01-11T21:22:28.1799977Z test_hub 2023-01-11T21:22:28.1800147Z test_import_stats 2023-01-11T21:22:28.1800332Z test_indexing 2023-01-11T21:22:28.1800486Z test_itt 2023-01-11T21:22:28.1800643Z test_jit 2023-01-11T21:22:28.1800809Z test_jit_autocast 2023-01-11T21:22:28.1800974Z test_jit_cuda_fuser 2023-01-11T21:22:28.1801154Z test_jit_disabled 2023-01-11T21:22:28.1801331Z test_jit_fuser_te 2023-01-11T21:22:28.1801495Z test_jit_llga_fuser 2023-01-11T21:22:28.1801673Z test_jiterator 2023-01-11T21:22:28.1801849Z test_legacy_vmap 2023-01-11T21:22:28.1802008Z test_license 2023-01-11T21:22:28.1802174Z test_linalg 2023-01-11T21:22:28.1802341Z test_logging 2023-01-11T21:22:28.1802494Z test_masked 2023-01-11T21:22:28.1802665Z test_maskedtensor 2023-01-11T21:22:28.1802844Z test_matmul_cuda 2023-01-11T21:22:28.1803002Z test_meta 2023-01-11T21:22:28.1803170Z test_mkl_verbose 2023-01-11T21:22:28.1803338Z test_mkldnn 2023-01-11T21:22:28.1803501Z test_mkldnn_fusion 2023-01-11T21:22:28.1803683Z test_mkldnn_verbose 2023-01-11T21:22:28.1803867Z test_mobile_optimizer 2023-01-11T21:22:28.1804042Z test_model_dump 2023-01-11T21:22:28.1804220Z test_module_init 2023-01-11T21:22:28.1804390Z test_modules 2023-01-11T21:22:28.1804544Z test_monitor 2023-01-11T21:22:28.1804721Z test_multiprocessing 2023-01-11T21:22:28.1804922Z test_multiprocessing_spawn 2023-01-11T21:22:28.1805102Z test_namedtensor 2023-01-11T21:22:28.1805291Z test_namedtuple_return_api 2023-01-11T21:22:28.1805486Z test_native_functions 2023-01-11T21:22:28.1805654Z test_native_mha 2023-01-11T21:22:28.1805832Z test_nestedtensor 2023-01-11T21:22:28.1806002Z test_nn 
2023-01-11T21:22:28.1806163Z test_numba_integration 2023-01-11T21:22:28.1806350Z test_numpy_interop 2023-01-11T21:22:28.1806532Z test_nvfuser_dynamo 2023-01-11T21:22:28.1806715Z test_nvfuser_frontend 2023-01-11T21:22:28.1806880Z test_openmp 2023-01-11T21:22:28.1807069Z test_ops 2023-01-11T21:22:28.1807248Z test_ops_fwd_gradients 2023-01-11T21:22:28.1807421Z test_ops_gradients 2023-01-11T21:22:28.1807593Z test_ops_jit 2023-01-11T21:22:28.1807760Z test_optim 2023-01-11T21:22:28.1808007Z test_overrides 2023-01-11T21:22:28.1808184Z test_package 2023-01-11T21:22:28.1808368Z test_per_overload_api 2023-01-11T21:22:28.1808536Z test_prims 2023-01-11T21:22:28.1808711Z test_proxy_tensor 2023-01-11T21:22:28.1808896Z test_pruning_op 2023-01-11T21:22:28.1809068Z test_public_bindings 2023-01-11T21:22:28.1809261Z test_python_dispatch 2023-01-11T21:22:28.1809437Z test_pytree 2023-01-11T21:22:28.1809601Z test_quantization 2023-01-11T21:22:28.1809781Z test_reductions 2023-01-11T21:22:28.1809970Z test_scatter_gather_ops 2023-01-11T21:22:28.1810149Z test_schema_check 2023-01-11T21:22:28.1810339Z test_serialization 2023-01-11T21:22:28.1810549Z test_set_default_mobile_cpu_allocator 2023-01-11T21:22:28.1810740Z test_shape_ops 2023-01-11T21:22:28.1810924Z test_show_pickle 2023-01-11T21:22:28.1811110Z test_sort_and_select 2023-01-11T21:22:28.1811274Z test_sparse 2023-01-11T21:22:28.1811453Z test_sparse_csr 2023-01-11T21:22:28.1811640Z test_spectral_ops 2023-01-11T21:22:28.1811886Z test_stateless 2023-01-11T21:22:28.1812063Z test_subclass 2023-01-11T21:22:28.1812250Z test_tensor_creation_ops 2023-01-11T21:22:28.1812426Z test_tensorboard 2023-01-11T21:22:28.1812606Z test_tensorexpr 2023-01-11T21:22:28.1812795Z test_tensorexpr_pybind 2023-01-11T21:22:28.1812965Z test_testing 2023-01-11T21:22:28.1813137Z test_torch 2023-01-11T21:22:28.1813313Z test_transformers 2023-01-11T21:22:28.1813479Z test_type_hints 2023-01-11T21:22:28.1813656Z test_type_info 2023-01-11T21:22:28.1813840Z test_type_promotion 2023-01-11T21:22:28.1814012Z test_unary_ufuncs 2023-01-11T21:22:28.1814185Z test_utils 2023-01-11T21:22:28.1814356Z test_view_ops 2023-01-11T21:22:28.1814513Z test_vulkan 2023-01-11T21:22:28.1814676Z test_weak 2023-01-11T21:22:28.1814857Z test_xnnpack_integration 2023-01-11T21:22:28.1815026Z doctests 2023-01-11T21:22:28.2358761Z Prioritized test from test file changes. 
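run_test.py then reorders the selected files so that tests judged related to the PR's changed files run before everything else; the two buckets are printed below as "prioritized" and "the rest". A rough sketch of the idea, with a hypothetical reorder_for_pr helper rather than run_test.py's actual logic:

```python
def reorder_for_pr(selected_tests: list[str], changed_files: list[str]) -> list[str]:
    """Rough sketch: run test files touched by the PR before everything else."""
    touched = {
        f[len("test/"):-len(".py")]
        for f in changed_files
        if f.startswith("test/") and f.endswith(".py")
    }
    prioritized = [t for t in selected_tests if t in touched]
    the_rest = [t for t in selected_tests if t not in touched]
    return prioritized + the_rest


# reorder_for_pr(["test_torch", "test_nn", "test_weak"], ["test/test_nn.py"])
# -> ["test_nn", "test_torch", "test_weak"]
```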
2023-01-11T21:22:28.2359257Z reordering tests for PR: 2023-01-11T21:22:28.2360626Z prioritized: ['dynamo/test_export', 'dynamo/test_misc', 'dynamo/test_optimizations', 'dynamo/test_repros', 'dynamo/test_torchxla_integration', 'dynamo/test_unspec', 'inductor/test_torchinductor', 'inductor/test_torchinductor_opinfo', 'profiler/test_profiler_tree', 'test_ao_sparsity', 'test_autograd', 'test_cpp_extensions_jit', 'test_cuda', 'test_fake_tensor', 'test_foreach', 'test_function_schema', 'test_jit', 'test_masked', 'test_meta', 'test_nn', 'test_overrides', 'test_proxy_tensor', 'test_public_bindings', 'test_python_dispatch', 'test_scatter_gather_ops', 'test_sort_and_select', 'test_sparse', 'test_sparse_csr', 'test_stateless', 'test_testing', 'test_torch', 'test_transformers', 'test_utils'] 2023-01-11T21:22:28.2366731Z the rest: ['backends/xeon/test_launch', 'benchmark_utils/test_benchmark_utils', 'distributions/test_distributions', 'dynamo/test_aot_autograd', 'dynamo/test_aot_cudagraphs', 'dynamo/test_comptime', 'dynamo/test_dynamic_shapes', 'dynamo/test_export_mutations', 'dynamo/test_functions', 'dynamo/test_global', 'dynamo/test_global_declaration', 'dynamo/test_minifier', 'dynamo/test_model_output', 'dynamo/test_modules', 'dynamo/test_nops', 'dynamo/test_optimizers', 'dynamo/test_python_autograd', 'dynamo/test_recompile_ux', 'dynamo/test_replay_record', 'dynamo/test_skip_non_tensor', 'dynamo/test_subgraphs', 'dynamo/test_torchxla_num_output', 'dynamo/test_torchxla_util', 'dynamo/test_verify_correctness', 'inductor/test_minifier', 'inductor/test_perf', 'inductor/test_smoke', 'lazy/test_bindings', 'lazy/test_debug_util', 'lazy/test_extract_compiled_graph', 'lazy/test_meta_kernel', 'lazy/test_reuse_ir', 'lazy/test_step_closures', 'lazy/test_ts_opinfo', 'nn/test_convolution', 'nn/test_dropout', 'nn/test_embedding', 'nn/test_init', 'nn/test_lazy_modules', 'nn/test_module_hooks', 'nn/test_multihead_attention', 'nn/test_packed_sequence', 'nn/test_parametrization', 'nn/test_pooling', 'nn/test_pruning', 'profiler/test_memory_profiler', 'profiler/test_profiler', 'test_autocast', 'test_binary_ufuncs', 'test_bundled_inputs', 'test_comparison_utils', 'test_complex', 'test_cpp_api_parity', 'test_cpp_extensions_aot_ninja', 'test_cpp_extensions_aot_no_ninja', 'test_cpp_extensions_open_device_registration', 'test_cuda_nvml_based_avail', 'test_cuda_primary_ctx', 'test_cuda_sanitizer', 'test_cuda_trace', 'test_dataloader', 'test_datapipe', 'test_decomp', 'test_deploy', 'test_dispatch', 'test_dlpack', 'test_dynamic_shapes', 'test_expanded_weights', 'test_functional_autograd_benchmark', 'test_functional_optim', 'test_functionalization', 'test_futures', 'test_fx', 'test_fx_experimental', 'test_fx_passes', 'test_fx_reinplace_pass', 'test_hub', 'test_import_stats', 'test_indexing', 'test_itt', 'test_jit_autocast', 'test_jit_cuda_fuser', 'test_jit_disabled', 'test_jit_fuser_te', 'test_jit_llga_fuser', 'test_jiterator', 'test_legacy_vmap', 'test_license', 'test_linalg', 'test_logging', 'test_maskedtensor', 'test_matmul_cuda', 'test_mkl_verbose', 'test_mkldnn', 'test_mkldnn_fusion', 'test_mkldnn_verbose', 'test_mobile_optimizer', 'test_model_dump', 'test_module_init', 'test_modules', 'test_monitor', 'test_multiprocessing', 'test_multiprocessing_spawn', 'test_namedtensor', 'test_namedtuple_return_api', 'test_native_functions', 'test_native_mha', 'test_nestedtensor', 'test_numba_integration', 'test_numpy_interop', 'test_nvfuser_dynamo', 'test_nvfuser_frontend', 'test_openmp', 'test_ops', 'test_ops_fwd_gradients', 
'test_ops_gradients', 'test_ops_jit', 'test_optim', 'test_package', 'test_per_overload_api', 'test_prims', 'test_pruning_op', 'test_pytree', 'test_quantization', 'test_reductions', 'test_schema_check', 'test_serialization', 'test_set_default_mobile_cpu_allocator', 'test_shape_ops', 'test_show_pickle', 'test_spectral_ops', 'test_subclass', 'test_tensor_creation_ops', 'test_tensorboard', 'test_tensorexpr', 'test_tensorexpr_pybind', 'test_type_hints', 'test_type_info', 'test_type_promotion', 'test_unary_ufuncs', 'test_view_ops', 'test_vulkan', 'test_weak', 'test_xnnpack_integration', 'doctests'] 2023-01-11T21:22:28.2369982Z 2023-01-11T21:22:28.2370411Z Downloading https://raw.githubusercontent.com/pytorch/test-infra/generated-stats/stats/slow-tests.json to /var/lib/jenkins/workspace/test/.pytorch-slow-tests.json 2023-01-11T21:22:28.2553754Z Downloading https://raw.githubusercontent.com/pytorch/test-infra/generated-stats/stats/disabled-tests-condensed.json to /var/lib/jenkins/workspace/test/.pytorch-disabled-tests.json 2023-01-11T21:22:28.2732733Z parallel (file granularity) tests: 2023-01-11T21:22:28.2733066Z dynamo/test_export 2023-01-11T21:22:28.2733418Z dynamo/test_misc 2023-01-11T21:22:28.2733692Z dynamo/test_optimizations 2023-01-11T21:22:28.2734030Z dynamo/test_repros 2023-01-11T21:22:28.2734397Z dynamo/test_torchxla_integration 2023-01-11T21:22:28.2734668Z dynamo/test_unspec 2023-01-11T21:22:28.2734862Z inductor/test_torchinductor 2023-01-11T21:22:28.2735075Z profiler/test_profiler_tree 2023-01-11T21:22:28.2735255Z test_ao_sparsity 2023-01-11T21:22:28.2735427Z test_foreach 2023-01-11T21:22:28.2735605Z test_function_schema 2023-01-11T21:22:28.2735785Z test_jit 2023-01-11T21:22:28.2735945Z test_masked 2023-01-11T21:22:28.2736111Z test_meta 2023-01-11T21:22:28.2736264Z test_proxy_tensor 2023-01-11T21:22:28.2736446Z test_public_bindings 2023-01-11T21:22:28.2736631Z test_python_dispatch 2023-01-11T21:22:28.2736807Z test_scatter_gather_ops 2023-01-11T21:22:28.2736996Z test_sort_and_select 2023-01-11T21:22:28.2737168Z test_sparse 2023-01-11T21:22:28.2737322Z test_stateless 2023-01-11T21:22:28.2737492Z test_testing 2023-01-11T21:22:28.2737661Z test_transformers 2023-01-11T21:22:28.2737816Z test_utils 2023-01-11T21:22:28.2738001Z backends/xeon/test_launch 2023-01-11T21:22:28.2738208Z benchmark_utils/test_benchmark_utils 2023-01-11T21:22:28.2738399Z dynamo/test_aot_autograd 2023-01-11T21:22:28.2738593Z dynamo/test_aot_cudagraphs 2023-01-11T21:22:28.2738781Z dynamo/test_comptime 2023-01-11T21:22:28.2738957Z dynamo/test_dynamic_shapes 2023-01-11T21:22:28.2739156Z dynamo/test_export_mutations 2023-01-11T21:22:28.2739488Z dynamo/test_functions 2023-01-11T21:22:28.2739658Z dynamo/test_global 2023-01-11T21:22:28.2739849Z dynamo/test_global_declaration 2023-01-11T21:22:28.2740043Z dynamo/test_minifier 2023-01-11T21:22:28.2740216Z dynamo/test_model_output 2023-01-11T21:22:28.2740402Z dynamo/test_modules 2023-01-11T21:22:28.2740579Z dynamo/test_nops 2023-01-11T21:22:28.2740745Z dynamo/test_optimizers 2023-01-11T21:22:28.2740941Z dynamo/test_python_autograd 2023-01-11T21:22:28.2741138Z dynamo/test_recompile_ux 2023-01-11T21:22:28.2741320Z dynamo/test_replay_record 2023-01-11T21:22:28.2741515Z dynamo/test_skip_non_tensor 2023-01-11T21:22:28.2741702Z dynamo/test_subgraphs 2023-01-11T21:22:28.2741884Z dynamo/test_torchxla_num_output 2023-01-11T21:22:28.2742082Z dynamo/test_torchxla_util 2023-01-11T21:22:28.2742282Z dynamo/test_verify_correctness 2023-01-11T21:22:28.2742678Z inductor/test_minifier 
2023-01-11T21:22:28.2742867Z inductor/test_perf 2023-01-11T21:22:28.2743050Z inductor/test_smoke 2023-01-11T21:22:28.2743218Z lazy/test_bindings 2023-01-11T21:22:28.2743483Z lazy/test_debug_util 2023-01-11T21:22:28.2743682Z lazy/test_extract_compiled_graph 2023-01-11T21:22:28.2743879Z lazy/test_meta_kernel 2023-01-11T21:22:28.2744045Z lazy/test_reuse_ir 2023-01-11T21:22:28.2744225Z lazy/test_step_closures 2023-01-11T21:22:28.2744410Z lazy/test_ts_opinfo 2023-01-11T21:22:28.2744572Z nn/test_dropout 2023-01-11T21:22:28.2744745Z nn/test_embedding 2023-01-11T21:22:28.2744915Z nn/test_init 2023-01-11T21:22:28.2745072Z nn/test_lazy_modules 2023-01-11T21:22:28.2745255Z nn/test_module_hooks 2023-01-11T21:22:28.2745442Z nn/test_multihead_attention 2023-01-11T21:22:28.2745622Z nn/test_packed_sequence 2023-01-11T21:22:28.2745813Z nn/test_parametrization 2023-01-11T21:22:28.2745994Z nn/test_pruning 2023-01-11T21:22:28.2746171Z profiler/test_memory_profiler 2023-01-11T21:22:28.2746367Z profiler/test_profiler 2023-01-11T21:22:28.2746546Z test_autocast 2023-01-11T21:22:28.2746702Z test_binary_ufuncs 2023-01-11T21:22:28.2746889Z test_bundled_inputs 2023-01-11T21:22:28.2747071Z test_comparison_utils 2023-01-11T21:22:28.2747235Z test_complex 2023-01-11T21:22:28.2747402Z test_cuda_sanitizer 2023-01-11T21:22:28.2747574Z test_dataloader 2023-01-11T21:22:28.2747731Z test_datapipe 2023-01-11T21:22:28.2747895Z test_decomp 2023-01-11T21:22:28.2748056Z test_deploy 2023-01-11T21:22:28.2748203Z test_dlpack 2023-01-11T21:22:28.2748373Z test_dynamic_shapes 2023-01-11T21:22:28.2748555Z test_expanded_weights 2023-01-11T21:22:28.2748749Z test_functional_autograd_benchmark 2023-01-11T21:22:28.2748949Z test_functional_optim 2023-01-11T21:22:28.2749136Z test_functionalization 2023-01-11T21:22:28.2749305Z test_futures 2023-01-11T21:22:28.2749478Z test_fx_experimental 2023-01-11T21:22:28.2749654Z test_fx_passes 2023-01-11T21:22:28.2749818Z test_fx_reinplace_pass 2023-01-11T21:22:28.2750053Z test_hub 2023-01-11T21:22:28.2750226Z test_import_stats 2023-01-11T21:22:28.2750383Z test_itt 2023-01-11T21:22:28.2750548Z test_jit_autocast 2023-01-11T21:22:28.2750726Z test_jit_fuser_te 2023-01-11T21:22:28.2750886Z test_jit_llga_fuser 2023-01-11T21:22:28.2751064Z test_jiterator 2023-01-11T21:22:28.2751235Z test_legacy_vmap 2023-01-11T21:22:28.2751392Z test_license 2023-01-11T21:22:28.2751558Z test_logging 2023-01-11T21:22:28.2751727Z test_maskedtensor 2023-01-11T21:22:28.2751888Z test_matmul_cuda 2023-01-11T21:22:28.2752060Z test_mkl_verbose 2023-01-11T21:22:28.2752226Z test_mkldnn 2023-01-11T21:22:28.2752381Z test_mkldnn_fusion 2023-01-11T21:22:28.2752559Z test_mkldnn_verbose 2023-01-11T21:22:28.2752733Z test_model_dump 2023-01-11T21:22:28.2752947Z test_module_init 2023-01-11T21:22:28.2753100Z test_monitor 2023-01-11T21:22:28.2753267Z test_namedtensor 2023-01-11T21:22:28.2753440Z test_native_functions 2023-01-11T21:22:28.2753602Z test_native_mha 2023-01-11T21:22:28.2753779Z test_nestedtensor 2023-01-11T21:22:28.2753959Z test_numba_integration 2023-01-11T21:22:28.2754128Z test_numpy_interop 2023-01-11T21:22:28.2754388Z test_nvfuser_dynamo 2023-01-11T21:22:28.2754570Z test_nvfuser_frontend 2023-01-11T21:22:28.2754731Z test_openmp 2023-01-11T21:22:28.2754895Z test_optim 2023-01-11T21:22:28.2755057Z test_package 2023-01-11T21:22:28.2755219Z test_per_overload_api 2023-01-11T21:22:28.2755398Z test_pruning_op 2023-01-11T21:22:28.2755570Z test_pytree 2023-01-11T21:22:28.2755728Z test_quantization 2023-01-11T21:22:28.2755904Z test_schema_check 
2023-01-11T21:22:28.2756083Z test_serialization 2023-01-11T21:22:28.2756270Z test_set_default_mobile_cpu_allocator 2023-01-11T21:22:28.2756466Z test_shape_ops 2023-01-11T21:22:28.2756633Z test_subclass 2023-01-11T21:22:28.2756789Z test_tensorboard 2023-01-11T21:22:28.2756978Z test_tensorexpr_pybind 2023-01-11T21:22:28.2757157Z test_type_hints 2023-01-11T21:22:28.2757315Z test_type_info 2023-01-11T21:22:28.2757494Z test_type_promotion 2023-01-11T21:22:28.2757674Z test_unary_ufuncs 2023-01-11T21:22:28.2757831Z test_view_ops 2023-01-11T21:22:28.2757998Z test_vulkan 2023-01-11T21:22:28.2758234Z test_weak 2023-01-11T21:22:28.2758398Z test_xnnpack_integration 2023-01-11T21:22:28.2758609Z serial (file granularity) tests: 2023-01-11T21:22:28.2758825Z inductor/test_torchinductor_opinfo 2023-01-11T21:22:28.2759008Z test_autograd 2023-01-11T21:22:28.2759190Z test_cpp_extensions_jit 2023-01-11T21:22:28.2759368Z test_cuda 2023-01-11T21:22:28.2759522Z test_fake_tensor 2023-01-11T21:22:28.2759687Z test_nn 2023-01-11T21:22:28.2759847Z test_overrides 2023-01-11T21:22:28.2760004Z test_sparse_csr 2023-01-11T21:22:28.2760169Z test_torch 2023-01-11T21:22:28.2760363Z distributions/test_distributions 2023-01-11T21:22:28.2760550Z nn/test_convolution 2023-01-11T21:22:28.2760732Z nn/test_pooling 2023-01-11T21:22:28.2760912Z test_cpp_api_parity 2023-01-11T21:22:28.2761101Z test_cpp_extensions_aot_ninja 2023-01-11T21:22:28.2761312Z test_cpp_extensions_aot_no_ninja 2023-01-11T21:22:28.2761542Z test_cpp_extensions_open_device_registration 2023-01-11T21:22:28.2761748Z test_cuda_nvml_based_avail 2023-01-11T21:22:28.2761948Z test_cuda_primary_ctx 2023-01-11T21:22:28.2762133Z test_cuda_trace 2023-01-11T21:22:28.2762290Z test_dispatch 2023-01-11T21:22:28.2762456Z test_fx 2023-01-11T21:22:28.2762623Z test_indexing 2023-01-11T21:22:28.2762781Z test_jit_cuda_fuser 2023-01-11T21:22:28.2762962Z test_jit_disabled 2023-01-11T21:22:28.2763131Z test_linalg 2023-01-11T21:22:28.2763291Z test_mobile_optimizer 2023-01-11T21:22:28.2763468Z test_modules 2023-01-11T21:22:28.2763646Z test_multiprocessing 2023-01-11T21:22:28.2763833Z test_multiprocessing_spawn 2023-01-11T21:22:28.2764040Z test_namedtuple_return_api 2023-01-11T21:22:28.2764224Z test_ops 2023-01-11T21:22:28.2764384Z test_ops_fwd_gradients 2023-01-11T21:22:28.2764565Z test_ops_gradients 2023-01-11T21:22:28.2764736Z test_ops_jit 2023-01-11T21:22:28.2764890Z test_prims 2023-01-11T21:22:28.2765059Z test_reductions 2023-01-11T21:22:28.2765232Z test_show_pickle 2023-01-11T21:22:28.2765394Z test_spectral_ops 2023-01-11T21:22:28.2765590Z test_tensor_creation_ops 2023-01-11T21:22:28.2765775Z test_tensorexpr 2023-01-11T21:22:28.2765926Z doctests 2023-01-11T21:22:29.9623092Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:22:29.9751835Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:22:30.0457193Z Ignoring disabled issues: [] 2023-01-11T21:22:30.0562220Z Ignoring disabled issues: [] 2023-01-11T21:22:30.0595959Z Running dynamo/test_export ... [2023-01-11 21:22:30.059287] 2023-01-11T21:22:30.0598191Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_export.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:22:30.059579] 2023-01-11T21:22:30.0703143Z Running dynamo/test_misc ... [2023-01-11 21:22:30.069904] 2023-01-11T21:22:30.0704234Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_misc.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... 
[2023-01-11 21:22:30.070198] 2023-01-11T21:22:38.7001641Z 2023-01-11T21:22:38.7002469Z Expand the folded group to see the log file of dynamo/test_export 2023-01-11T21:22:38.7010885Z ##[group]PRINTING LOG FILE of dynamo/test_export (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_export_u8euuxo2) 2023-01-11T21:22:38.7011497Z 2023-01-11T21:22:38.7011781Z Running tests... 2023-01-11T21:22:38.7012300Z ---------------------------------------------------------------------- 2023-01-11T21:22:38.7012716Z Test results will be stored in test-reports/python-unittest/dynamo.test_export 2023-01-11T21:22:38.7013079Z test_dict_return (__main__.ExportTests) ... ok (3.920s) 2023-01-11T21:22:38.7013593Z test_dict_return_with_aten_graph (__main__.ExportTests) ... stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:38.7014110Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:38.7014367Z ok (0.076s) 2023-01-11T21:22:38.7014793Z test_dupes (__main__.ExportTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7015309Z ok (0.010s) 2023-01-11T21:22:38.7015848Z test_dupes_2 (__main__.ExportTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7016453Z ok (0.010s) 2023-01-11T21:22:38.7017079Z test_dupes_2_with_aten_graph (__main__.ExportTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7017420Z ok (0.023s) 2023-01-11T21:22:38.7017814Z test_dupes_and_bypass (__main__.ExportTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7018170Z ok (0.011s) 2023-01-11T21:22:38.7018645Z test_dupes_and_bypass_reorder_with_non_tensor_arg (__main__.ExportTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7018959Z ok (0.099s) 2023-01-11T21:22:38.7019480Z test_dupes_and_bypass_reorder_with_non_tensor_arg_with_aten_graph (__main__.ExportTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7019878Z ok (0.024s) 2023-01-11T21:22:38.7020335Z test_dupes_and_bypass_with_aten_graph (__main__.ExportTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7020662Z ok (0.024s) 2023-01-11T21:22:38.7021127Z test_dupes_and_bypass_with_non_tensor_arg (__main__.ExportTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7021471Z ok (0.012s) 2023-01-11T21:22:38.7021940Z test_dupes_and_bypass_with_non_tensor_arg_with_aten_graph (__main__.ExportTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7022300Z ok (0.024s) 2023-01-11T21:22:38.7023089Z test_dupes_and_bypass_with_non_tensor_output (__main__.ExportTests) ... stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:22:38.7023441Z ok (0.016s) 2023-01-11T21:22:38.7023904Z test_dupes_and_bypass_with_non_tensor_output_with_aten_graph (__main__.ExportTests) ... stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:22:38.7024319Z ok (0.015s) 2023-01-11T21:22:38.7024797Z test_dupes_with_aten_graph (__main__.ExportTests) ... 
stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7025091Z ok (0.022s) 2023-01-11T21:22:38.7025360Z test_export (__main__.ExportTests) ... inline_call [] 2023-01-11T21:22:38.7025803Z stats [('calls_captured', 80), ('fusions_possible', 78), ('unique_graphs', 2)] 2023-01-11T21:22:38.7026057Z ok (0.123s) 2023-01-11T21:22:38.7072724Z test_export_compare_optimize_with_make_fx (__main__.ExportTests) ... stats [('calls_captured', 8), ('fusions_possible', 6), ('unique_graphs', 2)] 2023-01-11T21:22:38.7073232Z ok (0.281s) 2023-01-11T21:22:38.7073722Z test_export_decomp (__main__.ExportTests) ... stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:22:38.7074440Z ok (0.081s) 2023-01-11T21:22:38.7074828Z test_export_decomp_asserts_bad_args (__main__.ExportTests) ... ok (0.001s) 2023-01-11T21:22:38.7075331Z test_export_decomp_asserts_bad_args_mode (__main__.ExportTests) ... ok (0.001s) 2023-01-11T21:22:38.7076114Z test_export_graph_bypass (__main__.ExportTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7076544Z ok (0.013s) 2023-01-11T21:22:38.7077244Z test_export_graph_bypass_with_aten_graph (__main__.ExportTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7077722Z ok (0.026s) 2023-01-11T21:22:38.7078269Z test_export_graph_with_complex_reorder (__main__.ExportTests) ... stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:38.7078544Z ok (0.018s) 2023-01-11T21:22:38.7079260Z test_export_graph_with_complex_reorder_with_aten_graph (__main__.ExportTests) ... stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:38.7079648Z ok (0.044s) 2023-01-11T21:22:38.7080209Z test_export_graph_with_list (__main__.ExportTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7080572Z ok (0.015s) 2023-01-11T21:22:38.7081133Z test_export_graph_with_list_with_aten_graph (__main__.ExportTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7081504Z ok (0.027s) 2023-01-11T21:22:38.7082033Z test_export_meta_val (__main__.ExportTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:38.7082382Z ok (0.224s) 2023-01-11T21:22:38.7082916Z test_export_mismatched_out (__main__.ExportTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7083326Z ok (0.011s) 2023-01-11T21:22:38.7083921Z test_export_mismatched_out_2 (__main__.ExportTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7084349Z ok (0.010s) 2023-01-11T21:22:38.7085036Z test_export_mismatched_out_2_with_aten_graph (__main__.ExportTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7085469Z ok (0.023s) 2023-01-11T21:22:38.7086108Z test_export_mismatched_out_with_aten_graph (__main__.ExportTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7086559Z ok (0.024s) 2023-01-11T21:22:38.7087103Z test_export_shape_control_flow_1 (__main__.ExportTests) ... 
frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:38.7087722Z stats [('calls_captured', 8), ('fusions_possible', 6), ('unique_graphs', 2)] 2023-01-11T21:22:38.7088088Z ok (0.067s) 2023-01-11T21:22:38.7088468Z test_export_with_aten_graph (__main__.ExportTests) ... inline_call [] 2023-01-11T21:22:38.7089054Z stats [('calls_captured', 80), ('fusions_possible', 78), ('unique_graphs', 2)] 2023-01-11T21:22:38.7089398Z ok (0.391s) 2023-01-11T21:22:38.7090046Z test_export_with_constant_dict_values (__main__.ExportTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:38.7090566Z ok (0.012s) 2023-01-11T21:22:38.7091256Z test_export_with_constant_free_function (__main__.ExportTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:38.7091731Z ok (0.017s) 2023-01-11T21:22:38.7092454Z test_export_with_constant_free_function_and_class_method (__main__.ExportTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:38.7092964Z ok (0.014s) 2023-01-11T21:22:38.7093681Z test_export_with_constant_free_function_and_class_method_multiarg (__main__.ExportTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:38.7094159Z ok (0.017s) 2023-01-11T21:22:38.7094694Z test_export_with_constant_free_function_and_class_method_multiarg_diff (__main__.ExportTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:38.7095195Z ok (0.009s) 2023-01-11T21:22:38.7095612Z test_export_with_constant_list_nonzero (__main__.ExportTests) ... stats [('calls_captured', 9), ('fusions_possible', 8), ('unique_graphs', 1)] 2023-01-11T21:22:38.7095889Z ok (0.020s) 2023-01-11T21:22:38.7096317Z test_export_with_constant_list_nonzero_free_function (__main__.ExportTests) ... stats [('calls_captured', 9), ('fusions_possible', 8), ('unique_graphs', 1)] 2023-01-11T21:22:38.7096610Z ok (0.019s) 2023-01-11T21:22:38.7097004Z test_export_with_constant_method_on_module (__main__.ExportTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:38.7097296Z ok (0.014s) 2023-01-11T21:22:38.7097723Z test_export_with_constant_method_on_module_invoke_twice (__main__.ExportTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:38.7098032Z ok (0.016s) 2023-01-11T21:22:38.7098482Z test_export_with_constant_none_control_flow (__main__.ExportTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:38.7098775Z ok (0.008s) 2023-01-11T21:22:38.7099195Z test_export_with_constant_none_control_flow_free_func (__main__.ExportTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:38.7099496Z ok (0.007s) 2023-01-11T21:22:38.7099899Z test_export_with_constant_not_none_control_flow (__main__.ExportTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:38.7100190Z ok (0.009s) 2023-01-11T21:22:38.7100626Z test_export_with_constant_not_none_control_flow_free_func (__main__.ExportTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:38.7100915Z ok (0.009s) 2023-01-11T21:22:38.7101341Z test_export_with_constant_not_none_control_flow_pos (__main__.ExportTests) ... 
stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:38.7101638Z ok (0.009s) 2023-01-11T21:22:38.7102041Z test_export_with_constant_not_return_const (__main__.ExportTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:38.7102315Z ok (0.007s) 2023-01-11T21:22:38.7103042Z test_export_with_constant_tuple_nonzero (__main__.ExportTests) ... stats [('calls_captured', 9), ('fusions_possible', 8), ('unique_graphs', 1)] 2023-01-11T21:22:38.7103337Z ok (0.020s) 2023-01-11T21:22:38.7103556Z test_export_with_module_layer (__main__.ExportTests) ... inline_call [] 2023-01-11T21:22:38.7103924Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:38.7104149Z ok (0.030s) 2023-01-11T21:22:38.7104525Z test_export_with_stack_trace (__main__.ExportTests) ... stats [('calls_captured', 8), ('fusions_possible', 6), ('unique_graphs', 2)] 2023-01-11T21:22:38.7104808Z ok (0.115s) 2023-01-11T21:22:38.7105044Z test_func_return (__main__.ExportTests) ... inline_call [] 2023-01-11T21:22:38.7105394Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:22:38.7105613Z ok (0.016s) 2023-01-11T21:22:38.7105854Z test_func_return_with_aten_graph (__main__.ExportTests) ... inline_call [] 2023-01-11T21:22:38.7106221Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:22:38.7106436Z ok (0.050s) 2023-01-11T21:22:38.7106825Z test_input_container_type (__main__.ExportTests) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:22:38.7107102Z ok (0.071s) 2023-01-11T21:22:38.7107456Z test_list_unpack (__main__.ExportTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7107727Z ok (0.015s) 2023-01-11T21:22:38.7108123Z test_list_unpack_with_aten_graph (__main__.ExportTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:38.7108549Z ok (0.027s) 2023-01-11T21:22:38.7108949Z test_zeroes_in_and_out_different_shape_on_test (__main__.ExportTests) ... stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:38.7109242Z ok (0.016s) 2023-01-11T21:22:38.7109679Z test_zeroes_in_and_out_different_shape_on_test_with_aten_graph (__main__.ExportTests) ... stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:38.7109968Z ok (0.041s) 2023-01-11T21:22:38.7110468Z test_zeroes_in_new_shape_scalar_out (__main__.ExportTests) ... stats [('calls_captured', 16), ('fusions_possible', 14), ('unique_graphs', 2)] 2023-01-11T21:22:38.7110756Z ok (0.023s) 2023-01-11T21:22:38.7111169Z test_zeroes_in_new_shape_scalar_out_permute (__main__.ExportTests) ... stats [('calls_captured', 22), ('fusions_possible', 20), ('unique_graphs', 2)] 2023-01-11T21:22:38.7111446Z ok (0.027s) 2023-01-11T21:22:38.7111934Z test_zeroes_in_new_shape_scalar_out_permute_dupe_and_bypass (__main__.ExportTests) ... stats [('calls_captured', 22), ('fusions_possible', 20), ('unique_graphs', 2)] 2023-01-11T21:22:38.7112246Z ok (0.028s) 2023-01-11T21:22:38.7112357Z 2023-01-11T21:22:38.7112555Z ---------------------------------------------------------------------- 2023-01-11T21:22:38.7112805Z Ran 60 tests in 6.335s 2023-01-11T21:22:38.7112920Z 2023-01-11T21:22:38.7112986Z OK 2023-01-11T21:22:38.7113079Z 2023-01-11T21:22:38.7113169Z Generating XML reports... 
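The dynamo/test_export group above ran 60 tests under PyTorch's unittest-based harness: each selected file is executed as its own process, its captured log is replayed inside a folded group, and the XML report path is printed just below. A sketch of reproducing a single such invocation from the repository's test/ directory; the flags are copied from the "Executing [...]" line earlier and are options of PyTorch's own test harness (they pull in the slow/disabled test JSON downloaded above), not standard unittest flags:

```python
import subprocess
import sys

# Re-run one shard file the way the 'Executing [...]' line shows it being invoked.
cmd = [
    sys.executable, "-bb", "dynamo/test_export.py", "-v",
    "--import-slow-tests", "--import-disabled-tests",
]
subprocess.run(cmd, cwd="test", check=True)
```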
2023-01-11T21:22:38.7113571Z Generated XML report: test-reports/python-unittest/dynamo.test_export/TEST-ExportTests-20230111212231.xml 2023-01-11T21:22:38.7113806Z 2023-01-11T21:22:38.7114212Z ##[endgroup] 2023-01-11T21:22:38.7114619Z FINISHED PRINTING LOG FILE of dynamo/test_export (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_export_u8euuxo2) 2023-01-11T21:22:38.7114832Z 2023-01-11T21:22:40.3333290Z 2023-01-11T21:22:40.3333727Z Expand the folded group to see the log file of dynamo/test_misc 2023-01-11T21:22:40.3336195Z ##[group]PRINTING LOG FILE of dynamo/test_misc (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_misc_a8pqkth5) 2023-01-11T21:22:40.3336622Z 2023-01-11T21:22:40.3336756Z Running tests... 2023-01-11T21:22:40.3337436Z ---------------------------------------------------------------------- 2023-01-11T21:22:40.3349358Z Test results will be stored in test-reports/python-unittest/dynamo.test_misc 2023-01-11T21:22:40.3352850Z test_allow_in_graph (__main__.MiscTests) ... ok (3.915s) 2023-01-11T21:22:40.3353257Z test_autocast (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3356807Z stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:22:40.3357052Z skip: requires cuda (0.001s) 2023-01-11T21:22:40.3357455Z test_autocast_cpu (__main__.MiscTests) ... stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:22:40.3357736Z ok (0.079s) 2023-01-11T21:22:40.3357984Z test_autocast_device (__main__.MiscTests) ... skip: requires cuda (0.001s) 2023-01-11T21:22:40.3358274Z test_autocast_float64 (__main__.MiscTests) ... skip: requires cuda (0.001s) 2023-01-11T21:22:40.3358577Z test_autograd_function_equivalence (__main__.MiscTests) ... inline_call [] 2023-01-11T21:22:40.3358944Z stats [('calls_captured', 4), ('unique_graphs', 4), ('fusions_possible', 0)] 2023-01-11T21:22:40.3359162Z ok (0.108s) 2023-01-11T21:22:40.3359600Z test_autograd_profiler (__main__.MiscTests) ... STAGE:2023-01-11 21:22:35 1293:1293 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:22:40.3360085Z STAGE:2023-01-11 21:22:35 1293:1293 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:22:40.3360542Z STAGE:2023-01-11 21:22:35 1293:1293 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:22:40.3360955Z [2023-01-11 21:22:35,944] torch._dynamo.variables.torch: [WARNING] Profiler will be ignored 2023-01-11T21:22:40.3361254Z frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:40.3361648Z unimplemented [] 2023-01-11T21:22:40.3361875Z graph_break [('Tensor.tolist', 1)] 2023-01-11T21:22:40.3362201Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:40.3362434Z ok (0.028s) 2023-01-11T21:22:40.3362855Z test_autograd_profiler_enabled (__main__.MiscTests) ... 
STAGE:2023-01-11 21:22:35 1293:1293 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:22:40.3363347Z STAGE:2023-01-11 21:22:35 1293:1293 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:22:40.3363796Z STAGE:2023-01-11 21:22:35 1293:1293 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:22:40.3364110Z frames [('total', 3), ('ok', 3)] 2023-01-11T21:22:40.3364291Z unimplemented [] 2023-01-11T21:22:40.3364604Z graph_break [('torch.autograd._profiler_enabled not supported yet', 1)] 2023-01-11T21:22:40.3364972Z stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:40.3365246Z ok (0.014s) 2023-01-11T21:22:40.3365609Z test_boolarg (__main__.MiscTests) ... stats [('calls_captured', 3), ('unique_graphs', 3), ('fusions_possible', 0)] 2023-01-11T21:22:40.3365870Z ok (0.013s) 2023-01-11T21:22:40.3366078Z test_build_tuple_unpack (__main__.MiscTests) ... inline_call [] 2023-01-11T21:22:40.3366430Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:40.3366659Z ok (0.015s) 2023-01-11T21:22:40.3366984Z test_builder_for_class_with_metaclass (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3367343Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3367566Z ok (0.006s) 2023-01-11T21:22:40.3367936Z test_builtin_isinstance (__main__.MiscTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3368187Z ok (0.013s) 2023-01-11T21:22:40.3368434Z test_builtin_subclasses_as_method_on_class_type (__main__.MiscTests) ... ok (0.003s) 2023-01-11T21:22:40.3368747Z test_builtin_subclasses_as_method_on_var (__main__.MiscTests) ... ok (0.004s) 2023-01-11T21:22:40.3369045Z test_call_parent_non_class_methods_from_child (__main__.MiscTests) ... inline_call [] 2023-01-11T21:22:40.3369413Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:40.3369644Z ok (0.010s) 2023-01-11T21:22:40.3370005Z test_callpacked (__main__.MiscTests) ... stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:40.3370253Z ok (0.012s) 2023-01-11T21:22:40.3370545Z test_cell_output1 (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3370893Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:40.3371104Z ok (0.006s) 2023-01-11T21:22:40.3371398Z test_cell_output2 (__main__.MiscTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:40.3371631Z unimplemented [] 2023-01-11T21:22:40.3372007Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:22:40.3372424Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:40.3372651Z ok (0.010s) 2023-01-11T21:22:40.3373379Z test_change_backends (__main__.MiscTests) ... /opt/conda/lib/python3.10/site-packages/torch/jit/_check.py:181: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in `__init__`. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in `torch.jit.Attribute`. 
2023-01-11T21:22:40.3373990Z warnings.warn("The TorchScript type system doesn't support " 2023-01-11T21:22:40.3374333Z stats [('calls_captured', 3), ('unique_graphs', 3), ('fusions_possible', 0)] 2023-01-11T21:22:40.3374614Z frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:40.3374795Z ok (0.063s) 2023-01-11T21:22:40.3375058Z test_cond (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3375331Z inline_call [] 2023-01-11T21:22:40.3375632Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3375843Z ok (0.015s) 2023-01-11T21:22:40.3376060Z test_cond_export (__main__.MiscTests) ... inline_call [] 2023-01-11T21:22:40.3376398Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3376608Z ok (0.018s) 2023-01-11T21:22:40.3376836Z test_cond_export_single_arg (__main__.MiscTests) ... inline_call [] 2023-01-11T21:22:40.3377188Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3377413Z ok (0.011s) 2023-01-11T21:22:40.3377687Z test_cond_nested (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3377918Z inline_call [] 2023-01-11T21:22:40.3378212Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3378418Z ok (0.017s) 2023-01-11T21:22:40.3378693Z test_cond_side_effects (__main__.MiscTests) ... expected failure (0.001s) 2023-01-11T21:22:40.3379066Z test_config_getattr_default (__main__.MiscTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:22:40.3379423Z stats [('calls_captured', 21), ('fusions_possible', 18), ('unique_graphs', 3)] 2023-01-11T21:22:40.3379654Z ok (0.038s) 2023-01-11T21:22:40.3379947Z test_config_log_level (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3380303Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3380516Z ok (0.008s) 2023-01-11T21:22:40.3380803Z test_config_obj (__main__.MiscTests) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:22:40.3381149Z stats [('calls_captured', 8), ('fusions_possible', 4), ('unique_graphs', 4)] 2023-01-11T21:22:40.3381362Z ok (0.025s) 2023-01-11T21:22:40.3381599Z test_const_dict_variable_python_type (__main__.MiscTests) ... ok (0.001s) 2023-01-11T21:22:40.3382205Z test_cross_entropy_loss_fancy_ctor (__main__.MiscTests) ... /opt/conda/lib/python3.10/site-packages/torch/nn/_reduction.py:42: UserWarning: size_average and reduce args will be deprecated, please use reduction='none' instead. 2023-01-11T21:22:40.3382936Z warnings.warn(warning.format(ret)) 2023-01-11T21:22:40.3383205Z ok (0.002s) 2023-01-11T21:22:40.3383567Z test_cross_entropy_loss_simple_ctor (__main__.MiscTests) ... ok (0.002s) 2023-01-11T21:22:40.3384097Z test_dataclass_fields (__main__.MiscTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:40.3384451Z inline_call [] 2023-01-11T21:22:40.3384811Z stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:22:40.3385038Z ok (0.038s) 2023-01-11T21:22:40.3385338Z test_dict_mutation_side_effect (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3385825Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3386171Z ok (0.006s) 2023-01-11T21:22:40.3386637Z test_dict_reconstruct_keeps_original_order (__main__.MiscTests) ... 
frames [('total', 13), ('ok', 12)] 2023-01-11T21:22:40.3387109Z unimplemented [("Guard setup for uninitialized class ", 1)] 2023-01-11T21:22:40.3388739Z graph_break [('UnspecializedNNModuleVariable missing add_module', 3), ('construct nn.Module: ReLU', 1), ('call_function in skip_files /opt/conda/lib/python3.10/collections/__init__.py', 1), ('construct nn.Module: ModuleDict', 1), ('Patched init cannot be inlined.', 1), ('construct nn.Module: Linear', 1), ('construct nn.Module: Sigmoid', 1), ('call_method ConstDictVariable() update [TupleVariable()] {}', 1)] 2023-01-11T21:22:40.3389443Z inline_call [('inline __setitem__', 2), ('Patched init cannot be inlined.', 1)] 2023-01-11T21:22:40.3389681Z ok (0.039s) 2023-01-11T21:22:40.3389953Z test_dictcomp (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3390267Z inline_call [] 2023-01-11T21:22:40.3390577Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:40.3391035Z ok (0.009s) 2023-01-11T21:22:40.3391385Z test_disable_flag (__main__.MiscTests) ... ok (0.002s) 2023-01-11T21:22:40.3391863Z test_disable_optimize (__main__.MiscTests) ... ok (0.002s) 2023-01-11T21:22:40.3392241Z test_disallow_in_graph (__main__.MiscTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:40.3392481Z unimplemented [] 2023-01-11T21:22:40.3392866Z graph_break [('call_function UserDefinedObjectVariable(sub) [TensorVariable(), ConstantVariable(int)] {}', 1)] 2023-01-11T21:22:40.3393264Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:40.3393492Z ok (0.013s) 2023-01-11T21:22:40.3393712Z test_dunder_methods (__main__.MiscTests) ... inline_call [] 2023-01-11T21:22:40.3394056Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:40.3394267Z ok (0.022s) 2023-01-11T21:22:40.3394494Z test_duplicate_graph_break_warning (__main__.MiscTests) ... break 2023-01-11T21:22:40.3394773Z break 2023-01-11T21:22:40.3394973Z frames [('total', 9), ('ok', 9)] 2023-01-11T21:22:40.3395313Z inline_call [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 2)] 2023-01-11T21:22:40.3395570Z unimplemented [] 2023-01-11T21:22:40.3395883Z graph_break [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 4)] 2023-01-11T21:22:40.3396257Z stats [('calls_captured', 6), ('unique_graphs', 4), ('fusions_possible', 2)] 2023-01-11T21:22:40.3396483Z ok (0.031s) 2023-01-11T21:22:40.3396703Z test_dynamo_min_operator_with_shape (__main__.MiscTests) ... ok (0.003s) 2023-01-11T21:22:40.3397051Z test_empty_list (__main__.MiscTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:40.3397601Z stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:40.3397828Z ok (0.009s) 2023-01-11T21:22:40.3398197Z test_enum_no_graphbreaks (__main__.MiscTests) ... stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:22:40.3398477Z ok (0.012s) 2023-01-11T21:22:40.3398783Z test_error_on_nested_fx_trace (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3399133Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3399358Z ok (0.006s) 2023-01-11T21:22:40.3399709Z test_fold (__main__.MiscTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3399962Z ok (0.005s) 2023-01-11T21:22:40.3400339Z test_frozenset_torch_func_contains (__main__.MiscTests) ... 
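The graph_break counters printed throughout this log come from torch._dynamo.utils.counters; a minimal sketch (assuming a build where torch._dynamo is importable, as in this job) of a function that records a "call_function BuiltinVariable(print) ..." break like the ones above:

    import torch
    import torch._dynamo as dynamo
    from torch._dynamo.utils import counters

    @dynamo.optimize("eager")
    def fn(x):
        x = x + 1
        print("break")  # builtin print is not traced, so Dynamo splits the frame here
        return x * 2

    fn(torch.randn(3))
    print(dict(counters["graph_break"]))  # same kind of entries as the log lines above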
stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:22:40.3400620Z ok (0.011s) 2023-01-11T21:22:40.3400994Z test_function_annotation (__main__.MiscTests) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:40.3401247Z ok (0.010s) 2023-01-11T21:22:40.3401600Z test_generate_tensor_from_list_of_numpy_primitive_type (__main__.MiscTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:40.3401873Z unimplemented [] 2023-01-11T21:22:40.3402080Z graph_break [('numpy', 1)] 2023-01-11T21:22:40.3402397Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3402620Z ok (0.007s) 2023-01-11T21:22:40.3402833Z test_get_device (__main__.MiscTests) ... skip: requires cuda (0.000s) 2023-01-11T21:22:40.3403168Z test_grad (__main__.MiscTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:40.3403394Z unimplemented [] 2023-01-11T21:22:40.3403637Z graph_break [('Tensor.backward', 1)] 2023-01-11T21:22:40.3403950Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:40.3404177Z ok (0.013s) 2023-01-11T21:22:40.3404471Z test_grad_mode_guard (__main__.MiscTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:40.3404692Z unimplemented [] 2023-01-11T21:22:40.3404930Z graph_break [('Tensor.tolist', 1)] 2023-01-11T21:22:40.3405252Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:40.3405508Z ok (0.014s) 2023-01-11T21:22:40.3405798Z test_graph_break (__main__.MiscTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:22:40.3406030Z unimplemented [] 2023-01-11T21:22:40.3406389Z graph_break [('call_function in skip_files /opt/conda/lib/python3.10/site-packages/torch/_dynamo/__init__.py', 2)] 2023-01-11T21:22:40.3406791Z stats [('calls_captured', 6), ('fusions_possible', 3), ('unique_graphs', 3)] 2023-01-11T21:22:40.3407018Z ok (0.022s) 2023-01-11T21:22:40.3407385Z test_guard_failure_fn (__main__.MiscTests) ... stats [('calls_captured', 8), ('fusions_possible', 6), ('unique_graphs', 2)] 2023-01-11T21:22:40.3407637Z ok (0.017s) 2023-01-11T21:22:40.3408003Z test_guard_failure_fn2 (__main__.MiscTests) ... stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:22:40.3408267Z ok (0.014s) 2023-01-11T21:22:40.3408671Z test_id_of_nn_module (__main__.MiscTests) ... stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:22:40.3408924Z ok (0.011s) 2023-01-11T21:22:40.3409287Z test_if_cond_nn_mod (__main__.MiscTests) ... stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:22:40.3409546Z ok (0.012s) 2023-01-11T21:22:40.3409827Z test_inference_mode (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3410184Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:40.3410409Z ok (0.006s) 2023-01-11T21:22:40.3410699Z test_inline_dict_mutation (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3410933Z inline_call [] 2023-01-11T21:22:40.3411233Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:40.3411460Z ok (0.019s) 2023-01-11T21:22:40.3411776Z test_inline_func_jump_on_tensor_condition (__main__.MiscTests) ... 
frames [('total', 4), ('ok', 4)] 2023-01-11T21:22:40.3412117Z inline_call [('generic_jump TensorVariable()', 1)] 2023-01-11T21:22:40.3412341Z unimplemented [] 2023-01-11T21:22:40.3412593Z graph_break [('generic_jump TensorVariable()', 1)] 2023-01-11T21:22:40.3412934Z stats [('calls_captured', 3), ('unique_graphs', 3), ('fusions_possible', 0)] 2023-01-11T21:22:40.3413159Z ok (0.014s) 2023-01-11T21:22:40.3413448Z test_inline_list_mutation (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3413680Z inline_call [] 2023-01-11T21:22:40.3413973Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:40.3414185Z ok (0.017s) 2023-01-11T21:22:40.3414538Z test_inplace (__main__.MiscTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:40.3414794Z ok (0.013s) 2023-01-11T21:22:40.3415168Z test_inplace_param_update (__main__.MiscTests) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:22:40.3415424Z ok (0.008s) 2023-01-11T21:22:40.3415721Z test_is_compiling (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3416075Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3416289Z ok (0.010s) 2023-01-11T21:22:40.3416655Z test_is_floating_point (__main__.MiscTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:40.3416920Z ok (0.008s) 2023-01-11T21:22:40.3417276Z test_is_floating_point2 (__main__.MiscTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:40.3417540Z ok (0.007s) 2023-01-11T21:22:40.3417893Z test_is_tensor (__main__.MiscTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:40.3418145Z ok (0.007s) 2023-01-11T21:22:40.3418420Z test_is_tensor2 (__main__.MiscTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:40.3418768Z stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:40.3419031Z ok (0.014s) 2023-01-11T21:22:40.3419374Z test_is_tensor_like (__main__.MiscTests) ... stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:22:40.3419636Z ok (0.015s) 2023-01-11T21:22:40.3419928Z test_is_tensor_like2 (__main__.MiscTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:22:40.3420148Z unimplemented [] 2023-01-11T21:22:40.3420472Z graph_break [('call_function args: UserDefinedObjectVariable(MyTensor) ', 1)] 2023-01-11T21:22:40.3420850Z stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:40.3421073Z ok (0.018s) 2023-01-11T21:22:40.3421411Z test_item (__main__.MiscTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:40.3421662Z ok (0.019s) 2023-01-11T21:22:40.3422020Z test_item_changes (__main__.MiscTests) ... stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:22:40.3422265Z ok (0.036s) 2023-01-11T21:22:40.3422964Z test_item_changes_new_shape (__main__.MiscTests) ... stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:22:40.3423241Z ok (0.036s) 2023-01-11T21:22:40.3423448Z test_large_reduction_list (__main__.MiscTests) ... ok (0.013s) 2023-01-11T21:22:40.3423714Z test_linetable_writer (__main__.MiscTests) ... 
ok (0.001s) 2023-01-11T21:22:40.3424071Z test_list_append_return_none (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3424436Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3424651Z ok (0.006s) 2023-01-11T21:22:40.3424933Z test_list_mul (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3425153Z ok (0.002s) 2023-01-11T21:22:40.3425421Z test_listcomp (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3425642Z inline_call [] 2023-01-11T21:22:40.3425942Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:40.3426158Z ok (0.010s) 2023-01-11T21:22:40.3426402Z test_lnotab_writer (__main__.MiscTests) ... skip: use lnotab when python < 3.10 (0.000s) 2023-01-11T21:22:40.3426837Z test_manual_seed (__main__.MiscTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:40.3427097Z ok (0.008s) 2023-01-11T21:22:40.3427436Z test_matmul1 (__main__.MiscTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3427688Z ok (0.005s) 2023-01-11T21:22:40.3427901Z test_module_complex_iter (__main__.MiscTests) ... ok (0.010s) 2023-01-11T21:22:40.3428238Z test_module_deepcopy (__main__.MiscTests) ... frames [('total', 6), ('ok', 6)] 2023-01-11T21:22:40.3428473Z unimplemented [] 2023-01-11T21:22:40.3428796Z graph_break [('call_function in skip_files /opt/conda/lib/python3.10/copy.py', 2)] 2023-01-11T21:22:40.3429027Z inline_call [] 2023-01-11T21:22:40.3429325Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:40.3429556Z ok (0.051s) 2023-01-11T21:22:40.3429755Z test_named_parameters (__main__.MiscTests) ... ok (0.023s) 2023-01-11T21:22:40.3430162Z test_namedtuple1 (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3430519Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:40.3430746Z ok (0.008s) 2023-01-11T21:22:40.3431023Z test_namedtuple2 (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3431380Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:40.3431602Z ok (0.008s) 2023-01-11T21:22:40.3431880Z test_namedtuple3 (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3432230Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3432454Z ok (0.006s) 2023-01-11T21:22:40.3432714Z test_nan (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3433126Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:40.3433349Z ok (0.005s) 2023-01-11T21:22:40.3433554Z test_nested_closure (__main__.MiscTests) ... inline_call [] 2023-01-11T21:22:40.3433900Z stats [('calls_captured', 9), ('fusions_possible', 7), ('unique_graphs', 2)] 2023-01-11T21:22:40.3434126Z ok (0.036s) 2023-01-11T21:22:40.3434353Z test_nested_closure_mutation (__main__.MiscTests) ... inline_call [] 2023-01-11T21:22:40.3434688Z stats [('calls_captured', 11), ('fusions_possible', 9), ('unique_graphs', 2)] 2023-01-11T21:22:40.3434917Z ok (0.028s) 2023-01-11T21:22:40.3435406Z test_nested_disable_decorator (__main__.MiscTests) ... 
[2023-01-11 21:22:37,082] torch._dynamo.convert_frame: [ERROR] WON'T CONVERT fn3 /var/lib/jenkins/workspace/test/dynamo/test_misc.py line 1197 2023-01-11T21:22:40.3435722Z due to: 2023-01-11T21:22:40.3435905Z Traceback (most recent call last): 2023-01-11T21:22:40.3436316Z File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 67, in unimplemented 2023-01-11T21:22:40.3436581Z raise Unsupported(msg) 2023-01-11T21:22:40.3436934Z torch._dynamo.exc.Unsupported: call torch._dynamo.disable() wrapped function .fn1 at 0x7f81b936c280> 2023-01-11T21:22:40.3437199Z 2023-01-11T21:22:40.3437270Z from user code: 2023-01-11T21:22:40.3437515Z File "/var/lib/jenkins/workspace/test/dynamo/test_misc.py", line 1199, in fn3 2023-01-11T21:22:40.3437739Z return fn2(x) 2023-01-11T21:22:40.3437980Z File "/var/lib/jenkins/workspace/test/dynamo/test_misc.py", line 1192, in fn2 2023-01-11T21:22:40.3438227Z x = fn1(x) # graph break 2023-01-11T21:22:40.3438345Z 2023-01-11T21:22:40.3438462Z Set torch._dynamo.config.verbose=True for more information 2023-01-11T21:22:40.3438625Z 2023-01-11T21:22:40.3438629Z 2023-01-11T21:22:40.3438756Z frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:40.3438946Z unimplemented [] 2023-01-11T21:22:40.3439381Z graph_break [('call torch._dynamo.disable() wrapped function .fn1 at 0x7f81b936c280>', 1)] 2023-01-11T21:22:40.3439805Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:40.3440290Z inline_call [('call torch._dynamo.disable() wrapped function .fn1 at 0x7f81b936c280>', 1)] 2023-01-11T21:22:40.3440588Z ok (0.015s) 2023-01-11T21:22:40.3440957Z test_nested_optimize (__main__.MiscTests) ... stats [('calls_captured', 8), ('fusions_possible', 6), ('unique_graphs', 2)] 2023-01-11T21:22:40.3441213Z ok (0.016s) 2023-01-11T21:22:40.3441444Z test_nested_optimize_decorator (__main__.MiscTests) ... inline_call [] 2023-01-11T21:22:40.3441797Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:40.3442010Z ok (0.009s) 2023-01-11T21:22:40.3442384Z test_nested_optimize_run (__main__.MiscTests) ... stats [('calls_captured', 8), ('fusions_possible', 6), ('unique_graphs', 2)] 2023-01-11T21:22:40.3442654Z ok (0.014s) 2023-01-11T21:22:40.3443019Z test_nn_functional_reduction (__main__.MiscTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3443291Z ok (0.007s) 2023-01-11T21:22:40.3443517Z test_nn_sequential_invocation (__main__.MiscTests) ... inline_call [] 2023-01-11T21:22:40.3443870Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:40.3444081Z ok (0.024s) 2023-01-11T21:22:40.3444333Z test_nn_sequential_invocation_reposition_indices (__main__.MiscTests) ... inline_call [] 2023-01-11T21:22:40.3444705Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:40.3444920Z ok (0.019s) 2023-01-11T21:22:40.3445231Z test_no_error_on_nested_fx_trace (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3445603Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3445866Z ok (0.007s) 2023-01-11T21:22:40.3446222Z test_no_grad (__main__.MiscTests) ... stats [('calls_captured', 40), ('fusions_possible', 32), ('unique_graphs', 8)] 2023-01-11T21:22:40.3446480Z ok (0.062s) 2023-01-11T21:22:40.3446767Z test_not_dynamic_scope (__main__.MiscTests) ... 
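The WON'T CONVERT traceback above comes from calling an explicitly disabled function inside optimized code; a minimal hedged sketch of that shape (function names mirror the fn1/fn2/fn3 in the error, the bodies are illustrative):

    import torch
    import torch._dynamo as dynamo

    def fn1(x):
        return x.sin()

    fn1 = dynamo.disable(fn1)  # opt fn1 out of compilation

    def fn2(x):
        x = fn1(x)  # graph break: calling a torch._dynamo.disable() wrapped function
        return x + 1

    @dynamo.optimize("eager")
    def fn3(x):
        return fn2(x)

    fn3(torch.randn(3))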
frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3446997Z inline_call [] 2023-01-11T21:22:40.3447291Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3447522Z ok (0.006s) 2023-01-11T21:22:40.3447858Z test_numel (__main__.MiscTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:40.3448115Z ok (0.007s) 2023-01-11T21:22:40.3448413Z test_numpy_int_constant (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3448756Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:40.3448985Z ok (0.007s) 2023-01-11T21:22:40.3449349Z test_numpy_variable_isinstance (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3449703Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3449928Z ok (0.005s) 2023-01-11T21:22:40.3450147Z test_object_classmethod (__main__.MiscTests) ... inline_call [] 2023-01-11T21:22:40.3450489Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:40.3450700Z ok (0.012s) 2023-01-11T21:22:40.3450919Z test_object_staticmethod (__main__.MiscTests) ... inline_call [] 2023-01-11T21:22:40.3451264Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:40.3451474Z ok (0.012s) 2023-01-11T21:22:40.3451859Z test_onnx_shape_as_tensor (__main__.MiscTests) ... stats [('calls_captured', 15), ('fusions_possible', 10), ('unique_graphs', 5)] 2023-01-11T21:22:40.3452128Z ok (0.077s) 2023-01-11T21:22:40.3452490Z test_optimize_on_module (__main__.MiscTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3452756Z ok (0.006s) 2023-01-11T21:22:40.3453100Z test_pair (__main__.MiscTests) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:22:40.3453352Z ok (0.021s) 2023-01-11T21:22:40.3453627Z test_python_slice (__main__.MiscTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:40.3453848Z ok (0.007s) 2023-01-11T21:22:40.3454137Z test_raise_on_backend_error (__main__.MiscTests) ... frames [('total', 1)] 2023-01-11T21:22:40.3454445Z stats [('calls_captured', 3), ('fusions_possible', 2)] 2023-01-11T21:22:40.3454648Z ok (0.006s) 2023-01-11T21:22:40.3454927Z test_raises (__main__.MiscTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:22:40.3455140Z unimplemented [] 2023-01-11T21:22:40.3455451Z graph_break [('call_function BuiltinVariable(str) [TensorVariable()] {}', 1)] 2023-01-11T21:22:40.3455819Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:40.3456035Z ok (0.010s) 2023-01-11T21:22:40.3456253Z test_rand (__main__.MiscTests) ... skip: requires cuda (0.001s) 2023-01-11T21:22:40.3456516Z test_range_input (__main__.MiscTests) ... inline_call [] 2023-01-11T21:22:40.3456855Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:40.3457066Z ok (0.008s) 2023-01-11T21:22:40.3457387Z test_recursive_inline_list_mutation (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3457631Z inline_call [] 2023-01-11T21:22:40.3457920Z stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:22:40.3458150Z ok (0.015s) 2023-01-11T21:22:40.3458452Z test_release_input_memory (__main__.MiscTests) ... 
frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3458795Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3459019Z ok (0.004s) 2023-01-11T21:22:40.3459326Z test_release_module_memory (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3459714Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3459938Z ok (0.009s) 2023-01-11T21:22:40.3460337Z test_repro_graph_breaks_in__get_item_by_idx (__main__.MiscTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3460625Z ok (0.011s) 2023-01-11T21:22:40.3460917Z test_restore_graphstate (__main__.MiscTests) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:22:40.3461244Z inline_call [('generic_jump TensorVariable()', 1)] 2023-01-11T21:22:40.3461466Z unimplemented [] 2023-01-11T21:22:40.3461716Z graph_break [('generic_jump TensorVariable()', 1)] 2023-01-11T21:22:40.3462056Z stats [('calls_captured', 6), ('unique_graphs', 4), ('fusions_possible', 2)] 2023-01-11T21:22:40.3462281Z ok (0.022s) 2023-01-11T21:22:40.3462942Z test_restore_graphstate_internals (__main__.MiscTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:40.3463231Z ok (0.009s) 2023-01-11T21:22:40.3463539Z test_return_nested_function (__main__.MiscTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:40.3463903Z stats [('calls_captured', 7), ('fusions_possible', 5), ('unique_graphs', 2)] 2023-01-11T21:22:40.3464122Z ok (0.016s) 2023-01-11T21:22:40.3464337Z test_sample_input (__main__.MiscTests) ... ok (1.315s) 2023-01-11T21:22:40.3464683Z test_setattr_mutation1 (__main__.MiscTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:40.3465116Z unimplemented [('call_method UserDefinedObjectVariable(member_descriptor) __mul__ [ConstantVariable(int)] {}', 1)] 2023-01-11T21:22:40.3465645Z graph_break [("isinstance called on UserDefinedClass UserDefinedObjectVariable(member_descriptor) ", 1)] 2023-01-11T21:22:40.3465998Z frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3466300Z stats [('calls_captured', 12), ('fusions_possible', 11), ('unique_graphs', 1)] 2023-01-11T21:22:40.3466527Z ok (0.019s) 2023-01-11T21:22:40.3466826Z test_setattr_mutation2 (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3467061Z inline_call [] 2023-01-11T21:22:40.3467341Z stats [('calls_captured', 9), ('fusions_possible', 8), ('unique_graphs', 1)] 2023-01-11T21:22:40.3467561Z ok (0.015s) 2023-01-11T21:22:40.3467858Z test_setattr_mutation3 (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3468071Z inline_call [] 2023-01-11T21:22:40.3468367Z stats [('calls_captured', 9), ('fusions_possible', 8), ('unique_graphs', 1)] 2023-01-11T21:22:40.3468589Z ok (0.016s) 2023-01-11T21:22:40.3468863Z test_shape_unpack (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3469218Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3469444Z ok (0.005s) 2023-01-11T21:22:40.3469764Z test_side_effects_codegen_update_mutated (__main__.MiscTests) ... 
frames [('total', 6), ('ok', 6)] 2023-01-11T21:22:40.3470094Z unimplemented [] 2023-01-11T21:22:40.3470333Z graph_break [('Tensor.item', 4)] 2023-01-11T21:22:40.3470652Z stats [('calls_captured', 8), ('fusions_possible', 4), ('unique_graphs', 4)] 2023-01-11T21:22:40.3470861Z ok (0.144s) 2023-01-11T21:22:40.3471150Z test_size_input (__main__.MiscTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:40.3471502Z stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:22:40.3471720Z ok (0.009s) 2023-01-11T21:22:40.3472080Z test_slice_input (__main__.MiscTests) ... stats [('calls_captured', 3), ('unique_graphs', 3), ('fusions_possible', 0)] 2023-01-11T21:22:40.3472340Z ok (0.035s) 2023-01-11T21:22:40.3472636Z test_tensor_build_list_unpack (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3473000Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:40.3473224Z ok (0.018s) 2023-01-11T21:22:40.3473570Z test_tensor_data (__main__.MiscTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:40.3473896Z ok (0.015s) 2023-01-11T21:22:40.3474185Z test_tensor_dict1 (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3474536Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:40.3474742Z ok (0.007s) 2023-01-11T21:22:40.3475028Z test_tensor_dict2 (__main__.MiscTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:22:40.3475379Z stats [('calls_captured', 9), ('fusions_possible', 6), ('unique_graphs', 3)] 2023-01-11T21:22:40.3475588Z ok (0.038s) 2023-01-11T21:22:40.3475907Z test_tensor_dot_grad_no_graph_break (__main__.MiscTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:40.3476151Z unimplemented [] 2023-01-11T21:22:40.3476381Z graph_break [('Tensor.backward', 1)] 2023-01-11T21:22:40.3476704Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:22:40.3476922Z ok (0.016s) 2023-01-11T21:22:40.3477256Z test_tensor_is_contiguous (__main__.MiscTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:40.3477604Z stats [('calls_captured', 8), ('fusions_possible', 6), ('unique_graphs', 2)] 2023-01-11T21:22:40.3477827Z ok (0.078s) 2023-01-11T21:22:40.3478129Z test_tensor_item_capture (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3478474Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:40.3478692Z ok (0.008s) 2023-01-11T21:22:40.3478995Z test_tensor_item_no_capture (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3479222Z unimplemented [] 2023-01-11T21:22:40.3479453Z graph_break [('Tensor.item', 1)] 2023-01-11T21:22:40.3479771Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:40.3479977Z ok (0.009s) 2023-01-11T21:22:40.3480338Z test_tensor_layout (__main__.MiscTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3480605Z ok (0.018s) 2023-01-11T21:22:40.3480896Z test_tensor_types (__main__.MiscTests) ... frames [('total', 10), ('ok', 10)] 2023-01-11T21:22:40.3481241Z stats [('calls_captured', 10), ('unique_graphs', 10), ('fusions_possible', 0)] 2023-01-11T21:22:40.3481465Z ok (0.095s) 2023-01-11T21:22:40.3481839Z test_top_package_import (__main__.MiscTests) ... 
stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3482096Z ok (0.006s) 2023-01-11T21:22:40.3482473Z test_torch_cuda_is_available (__main__.MiscTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3482741Z ok (0.006s) 2023-01-11T21:22:40.3482972Z test_torch_cudnn_is_acceptable (__main__.MiscTests) ... skip: requires cuda (0.001s) 2023-01-11T21:22:40.3483301Z test_torch_cudnn_is_acceptable_bad_inputs (__main__.MiscTests) ... skip: requires cuda (0.001s) 2023-01-11T21:22:40.3483705Z test_torch_nn_parameter_isinstance (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3484076Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:40.3484287Z ok (0.014s) 2023-01-11T21:22:40.3485051Z test_torch_profiler (__main__.MiscTests) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/91868 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.001s) 2023-01-11T21:22:40.3486130Z test_torch_seed (__main__.MiscTests) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/91867 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.001s) 2023-01-11T21:22:40.3487154Z test_torch_size (__main__.MiscTests) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/91866 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.001s) 2023-01-11T21:22:40.3487832Z test_type_copy (__main__.MiscTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:40.3488171Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:22:40.3488402Z ok (0.016s) 2023-01-11T21:22:40.3488720Z test_typing_variable_isinstance (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3489090Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3489301Z ok (0.006s) 2023-01-11T21:22:40.3489659Z test_unpack4 (__main__.MiscTests) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:22:40.3489921Z ok (0.017s) 2023-01-11T21:22:40.3490291Z test_unpack5 (__main__.MiscTests) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:22:40.3490547Z ok (0.016s) 2023-01-11T21:22:40.3490885Z test_update_locals_and_stack_uses_shared_cache (__main__.MiscTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:40.3491124Z inline_call [] 2023-01-11T21:22:40.3491304Z unimplemented [] 2023-01-11T21:22:40.3491634Z graph_break [('call_method ListVariable() extend [ListIteratorVariable()] {}', 1)] 2023-01-11T21:22:40.3491865Z ok (0.008s) 2023-01-11T21:22:40.3492247Z test_user_defined_class_name (__main__.MiscTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:40.3492519Z ok (0.010s) 2023-01-11T21:22:40.3492769Z test_user_function_variable_supports_enum_argument (__main__.MiscTests) ... 
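The skip messages above spell out how to run a disabled test locally: make sure CI is not set and do not pass --import-disabled-tests. A hedged sketch that mirrors the "Executing [...]" command lines in this log (the -k filter is an assumption about standard unittest filtering, not taken from this log):

    import os
    import subprocess

    env = {k: v for k, v in os.environ.items() if k != "CI"}  # per the skip message, CI must not be set
    subprocess.run(
        ["python", "-bb", "dynamo/test_misc.py", "-v", "-k", "test_torch_seed"],
        cwd="test",  # the log runs these files from the test/ directory
        env=env,
        check=False,
    )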
inline_call [] 2023-01-11T21:22:40.3492996Z ok (0.005s) 2023-01-11T21:22:40.3493252Z test_user_function_variable_supports_function_argument (__main__.MiscTests) ... inline_call [] 2023-01-11T21:22:40.3493637Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3493849Z ok (0.006s) 2023-01-11T21:22:40.3494109Z test_user_function_variable_supports_type_abcmeta_argument (__main__.MiscTests) ... inline_call [] 2023-01-11T21:22:40.3494491Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:40.3494703Z ok (0.007s) 2023-01-11T21:22:40.3494994Z test_user_getattr1 (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3495221Z inline_call [] 2023-01-11T21:22:40.3495516Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:40.3495725Z ok (0.007s) 2023-01-11T21:22:40.3496016Z test_user_getattr2 (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3496239Z inline_call [] 2023-01-11T21:22:40.3496520Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:40.3496747Z ok (0.009s) 2023-01-11T21:22:40.3497035Z test_user_property (__main__.MiscTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:40.3497245Z inline_call [] 2023-01-11T21:22:40.3497539Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:40.3497764Z ok (0.007s) 2023-01-11T21:22:40.3497972Z test_usr_cls_classmethod (__main__.MiscTests) ... inline_call [] 2023-01-11T21:22:40.3498318Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:40.3498540Z ok (0.009s) 2023-01-11T21:22:40.3498748Z test_usr_cls_staticmethod (__main__.MiscTests) ... inline_call [] 2023-01-11T21:22:40.3499091Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:40.3499317Z ok (0.009s) 2023-01-11T21:22:40.3499520Z test_version_ci (__main__.MiscTests) ... ok (0.001s) 2023-01-11T21:22:40.3499776Z test_write_to_closures_in_inlining (__main__.MiscTests) ... inline_call [] 2023-01-11T21:22:40.3500171Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:40.3500391Z ok (0.010s) 2023-01-11T21:22:40.3500579Z test_jit_save (__main__.TestTracer) ... ok (0.045s) 2023-01-11T21:22:40.3500726Z 2023-01-11T21:22:40.3500925Z ---------------------------------------------------------------------- 2023-01-11T21:22:40.3501168Z Ran 161 tests in 7.770s 2023-01-11T21:22:40.3501283Z 2023-01-11T21:22:40.3501363Z OK (skipped=11, expected failures=1) 2023-01-11T21:22:40.3501493Z 2023-01-11T21:22:40.3501576Z Generating XML reports... 2023-01-11T21:22:40.3501972Z Generated XML report: test-reports/python-unittest/dynamo.test_misc/TEST-MiscTests-20230111212231.xml 2023-01-11T21:22:40.3502684Z Generated XML report: test-reports/python-unittest/dynamo.test_misc/TEST-TestTracer-20230111212231.xml 2023-01-11T21:22:40.3502914Z 2023-01-11T21:22:40.3503285Z ##[endgroup] 2023-01-11T21:22:40.3503753Z FINISHED PRINTING LOG FILE of dynamo/test_misc (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_misc_a8pqkth5) 2023-01-11T21:22:40.3503978Z 2023-01-11T21:22:40.8367346Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:22:40.9309409Z Ignoring disabled issues: [] 2023-01-11T21:22:40.9450373Z Running dynamo/test_optimizations ... 
[2023-01-11 21:22:40.944652] 2023-01-11T21:22:40.9451750Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_optimizations.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:22:40.944958] 2023-01-11T21:22:42.4765685Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:22:42.5455767Z Ignoring disabled issues: [] 2023-01-11T21:22:42.5592327Z Running dynamo/test_repros ... [2023-01-11 21:22:42.558894] 2023-01-11T21:22:42.5593554Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_repros.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:22:42.559168] 2023-01-11T21:22:43.5577496Z 2023-01-11T21:22:43.5579560Z Expand the folded group to see the log file of dynamo/test_optimizations 2023-01-11T21:22:43.5580837Z ##[group]PRINTING LOG FILE of dynamo/test_optimizations (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_optimizations_fis_ekyn) 2023-01-11T21:22:43.5581369Z 2023-01-11T21:22:43.5581501Z Running tests... 2023-01-11T21:22:43.5582146Z ---------------------------------------------------------------------- 2023-01-11T21:22:43.5583042Z Test results will be stored in test-reports/python-unittest/dynamo.test_optimizations 2023-01-11T21:22:43.5583584Z test_inplace_normalize (__main__.NormalizeIRTests) ... ok (0.321s) 2023-01-11T21:22:43.5584047Z frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:43.5584552Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:43.5585032Z aot_autograd [('total', 1), ('ok', 1)] 2023-01-11T21:22:43.5585708Z test_example_inputs (__main__.TestOptimizations) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:43.5586199Z ok (0.092s) 2023-01-11T21:22:43.5586945Z test_example_inputs_runtime_use (__main__.TestOptimizations) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:43.5587433Z ok (0.009s) 2023-01-11T21:22:43.5587820Z test_has_mutation (__main__.TestOptimizations) ... ok (0.019s) 2023-01-11T21:22:43.5588522Z test_has_mutation_factory (__main__.TestOptimizations) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:43.5589178Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:43.5589567Z ok (0.018s) 2023-01-11T21:22:43.5590248Z test_inplacifier (__main__.TestOptimizations) ... optimizations [('out', 1), ('inplace', 1)] 2023-01-11T21:22:43.5590691Z ok (0.017s) 2023-01-11T21:22:43.5591078Z test_ipex_bf16 (__main__.TestOptimizations) ... skip: requires ipex (0.001s) 2023-01-11T21:22:43.5591612Z test_ipex_fp32 (__main__.TestOptimizations) ... skip: requires ipex (0.000s) 2023-01-11T21:22:43.5592311Z test_log_conv_args (__main__.TestOptimizations) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:43.5593186Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:43.5593573Z ok (0.153s) 2023-01-11T21:22:43.5593752Z 2023-01-11T21:22:43.5594115Z ---------------------------------------------------------------------- 2023-01-11T21:22:43.5594529Z Ran 9 tests in 0.631s 2023-01-11T21:22:43.5594725Z 2023-01-11T21:22:43.5594845Z OK (skipped=2) 2023-01-11T21:22:43.5595028Z 2023-01-11T21:22:43.5595174Z Generating XML reports... 
2023-01-11T21:22:43.5595972Z Generated XML report: test-reports/python-unittest/dynamo.test_optimizations/TEST-NormalizeIRTests-20230111212242.xml 2023-01-11T21:22:43.5596981Z Generated XML report: test-reports/python-unittest/dynamo.test_optimizations/TEST-TestOptimizations-20230111212242.xml 2023-01-11T21:22:43.5597435Z 2023-01-11T21:22:43.5597879Z ##[endgroup] 2023-01-11T21:22:43.5598803Z FINISHED PRINTING LOG FILE of dynamo/test_optimizations (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_optimizations_fis_ekyn) 2023-01-11T21:22:43.5599257Z 2023-01-11T21:22:45.5260526Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:22:45.6145867Z Ignoring disabled issues: [] 2023-01-11T21:22:45.6287693Z Running dynamo/test_torchxla_integration ... [2023-01-11 21:22:45.628456] 2023-01-11T21:22:45.6289515Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_torchxla_integration.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:22:45.628759] 2023-01-11T21:22:47.5376849Z 2023-01-11T21:22:47.5377591Z Expand the folded group to see the log file of dynamo/test_torchxla_integration 2023-01-11T21:22:47.5378850Z ##[group]PRINTING LOG FILE of dynamo/test_torchxla_integration (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_torchxla_integration_r08poec8) 2023-01-11T21:22:47.5379338Z 2023-01-11T21:22:47.5379474Z Running tests... 2023-01-11T21:22:47.5380163Z ---------------------------------------------------------------------- 2023-01-11T21:22:47.5380954Z Test results will be stored in test-reports/python-unittest/dynamo.test_torchxla_integration 2023-01-11T21:22:47.5381713Z test_basic (__main__.TorchXLAReuseGraphTest) ... skip: Skip the tests since torch_xla is not available or XLA devices are not specified (0.001s) 2023-01-11T21:22:47.5382493Z test_inplace_update (__main__.TorchXLAReuseGraphTest) ... skip: Skip the tests since torch_xla is not available or XLA devices are not specified (0.001s) 2023-01-11T21:22:47.5382971Z test_linear (__main__.TorchXLAReuseGraphTest) ... skip: Skip the tests since torch_xla is not available or XLA devices are not specified (0.001s) 2023-01-11T21:22:47.5383428Z test_matmul (__main__.TorchXLAReuseGraphTest) ... skip: Skip the tests since torch_xla is not available or XLA devices are not specified (0.001s) 2023-01-11T21:22:47.5383669Z 2023-01-11T21:22:47.5383862Z ---------------------------------------------------------------------- 2023-01-11T21:22:47.5384111Z Ran 4 tests in 0.003s 2023-01-11T21:22:47.5384232Z 2023-01-11T21:22:47.5384306Z OK (skipped=4) 2023-01-11T21:22:47.5384413Z 2023-01-11T21:22:47.5384499Z Generating XML reports... 2023-01-11T21:22:47.5384964Z Generated XML report: test-reports/python-unittest/dynamo.test_torchxla_integration/TEST-TorchXLAReuseGraphTest-20230111212247.xml 2023-01-11T21:22:47.5385234Z 2023-01-11T21:22:47.5385573Z ##[endgroup] 2023-01-11T21:22:47.5386032Z FINISHED PRINTING LOG FILE of dynamo/test_torchxla_integration (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_torchxla_integration_r08poec8) 2023-01-11T21:22:47.5386271Z 2023-01-11T21:22:49.6165125Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:22:49.7041159Z Ignoring disabled issues: [] 2023-01-11T21:22:49.7185171Z Running dynamo/test_unspec ... [2023-01-11 21:22:49.718198] 2023-01-11T21:22:49.7187373Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_unspec.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... 
[2023-01-11 21:22:49.718476] 2023-01-11T21:22:51.0829676Z 2023-01-11T21:22:51.0830583Z Expand the folded group to see the log file of dynamo/test_repros 2023-01-11T21:22:51.0832363Z ##[group]PRINTING LOG FILE of dynamo/test_repros (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_repros_55ysf7ip) 2023-01-11T21:22:51.0832621Z 2023-01-11T21:22:51.0832704Z Running tests... 2023-01-11T21:22:51.0833122Z ---------------------------------------------------------------------- 2023-01-11T21:22:51.0833616Z Test results will be stored in test-reports/python-unittest/dynamo.test_repros 2023-01-11T21:22:51.0834029Z test_Size (__main__.ReproTests) ... ok (0.353s) 2023-01-11T21:22:51.0834645Z test_abc_setattr (__main__.ReproTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:51.0835341Z inline_call [('inline in skipfiles: assertIsInstance /opt/conda/lib/python3.10/unittest/case.py', 1)] 2023-01-11T21:22:51.0835775Z unimplemented [] 2023-01-11T21:22:51.0836578Z graph_break [('inline in skipfiles: assertIsInstance /opt/conda/lib/python3.10/unittest/case.py', 1)] 2023-01-11T21:22:51.0837233Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:51.0837669Z frames [('total', 3), ('ok', 3)] 2023-01-11T21:22:51.0837955Z unimplemented [] 2023-01-11T21:22:51.0838715Z graph_break [('setattr(UserDefinedObjectVariable) .Derived.__setattr__ at 0x7efdcbc42290>', 1)] 2023-01-11T21:22:51.0839164Z inline_call [] 2023-01-11T21:22:51.0839475Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:51.0839694Z ok (0.011s) 2023-01-11T21:22:51.0840078Z test_avoid_dupe_specialization (__main__.ReproTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:51.0840647Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:51.0841159Z aot_autograd [('total', 2), ('ok', 2)] 2023-01-11T21:22:51.0841351Z ok (0.051s) 2023-01-11T21:22:51.0841576Z test_batch_norm_act (__main__.ReproTests) ... inline_call [] 2023-01-11T21:22:51.0841916Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:51.0842146Z ok (0.087s) 2023-01-11T21:22:51.0842496Z test_batchnorm_e2e (__main__.ReproTests) ... No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:22:51.0842843Z frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:51.0843032Z inline_call [] 2023-01-11T21:22:51.0843333Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:51.0843625Z aot_autograd [('total', 1), ('ok', 1)] 2023-01-11T21:22:51.0843802Z ok (1.284s) 2023-01-11T21:22:51.0844121Z test_bigbird_unsqueeze_inplace (__main__.ReproTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:51.0844493Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:51.0844769Z aot_autograd [('total', 1), ('ok', 1)] 2023-01-11T21:22:51.0844962Z ok (0.036s) 2023-01-11T21:22:51.0845174Z test_boxes_len (__main__.ReproTests) ... inline_call [] 2023-01-11T21:22:51.0845508Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:51.0845735Z ok (0.010s) 2023-01-11T21:22:51.0845956Z test_chunk_reformer_ff (__main__.ReproTests) ... inline_call [] 2023-01-11T21:22:51.0846400Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:51.0846629Z ok (0.077s) 2023-01-11T21:22:51.0847000Z test_class_member (__main__.ReproTests) ... 
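The aot_autograd [('total', n), ('ok', n)] counters above are incremented when frames are lowered through AOT Autograd; a minimal hedged sketch using the "aot_eager" backend (the function body is illustrative, not from the tests):

    import torch
    import torch._dynamo as dynamo

    @dynamo.optimize("aot_eager")
    def fn(x):
        return (x.sin() + 1).sum()

    fn(torch.randn(4, requires_grad=True)).backward()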
stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:51.0847268Z ok (0.011s) 2023-01-11T21:22:51.0847584Z test_convert_boxes_to_pooler_format (__main__.ReproTests) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:22:51.0847830Z inline_call [] 2023-01-11T21:22:51.0848009Z unimplemented [] 2023-01-11T21:22:51.0848277Z graph_break [('dynamic shapes: repeat_interleave', 2)] 2023-01-11T21:22:51.0848627Z stats [('calls_captured', 10), ('fusions_possible', 6), ('unique_graphs', 4)] 2023-01-11T21:22:51.0848968Z ok (0.062s) 2023-01-11T21:22:51.0849198Z test_create_rand_mask_from_inputs (__main__.ReproTests) ... inline_call [] 2023-01-11T21:22:51.0849565Z stats [('calls_captured', 8), ('fusions_possible', 7), ('unique_graphs', 1)] 2023-01-11T21:22:51.0849793Z ok (0.190s) 2023-01-11T21:22:51.0849993Z test_dict_iter (__main__.ReproTests) ... ok (0.005s) 2023-01-11T21:22:51.0850340Z test_dict_list_values (__main__.ReproTests) ... frames [('total', 7), ('ok', 7)] 2023-01-11T21:22:51.0850583Z unimplemented [] 2023-01-11T21:22:51.0851039Z graph_break [('call_function in skip_files Builtin count', 2), ('call_function BuiltinVariable(zip) [UserDefinedObjectVariable(count), ListVariable()] {}', 2)] 2023-01-11T21:22:51.0851346Z ok (0.018s) 2023-01-11T21:22:51.0851651Z test_do_paste_mask (__main__.ReproTests) ... frames [('total', 12), ('ok', 11)] 2023-01-11T21:22:51.0851980Z unimplemented [('Dynamic slicing not supported', 1)] 2023-01-11T21:22:51.0852329Z graph_break [('dynamic shapes: arange', 6), ('Dynamic slicing not supported', 4)] 2023-01-11T21:22:51.0852743Z stats [('calls_captured', 159), ('fusions_possible', 148), ('unique_graphs', 11)] 2023-01-11T21:22:51.0852979Z ok (1.524s) 2023-01-11T21:22:51.0853364Z test_dynamic_shapes_right_side (__main__.ReproTests) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:51.0853646Z ok (0.085s) 2023-01-11T21:22:51.0854005Z test_ellipsis (__main__.ReproTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:51.0854270Z ok (0.019s) 2023-01-11T21:22:51.0854554Z test_exec_import (__main__.ReproTests) ... frames [('total', 5), ('ok', 5)] 2023-01-11T21:22:51.0854936Z inline_call [('call_function BuiltinVariable(exec) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:22:51.0855191Z unimplemented [] 2023-01-11T21:22:51.0855505Z graph_break [('call_function BuiltinVariable(exec) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:22:51.0855752Z ok (0.004s) 2023-01-11T21:22:51.0856065Z test_exec_wildcard_import (__main__.ReproTests) ... frames [('total', 5), ('ok', 5)] 2023-01-11T21:22:51.0856438Z inline_call [('call_function BuiltinVariable(exec) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:22:51.0856693Z unimplemented [] 2023-01-11T21:22:51.0857019Z graph_break [('call_function BuiltinVariable(exec) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:22:51.0857402Z stats [('calls_captured', 6), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:22:51.0857622Z ok (0.016s) 2023-01-11T21:22:51.0857933Z test_for_loop_graph_break (__main__.ReproTests) ... 
frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:51.0858171Z inline_call [] 2023-01-11T21:22:51.0858539Z unimplemented [('call_function in skip_files /opt/conda/lib/python3.10/site-packages/torch/_dynamo/__init__.py', 1)] 2023-01-11T21:22:51.0858956Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:51.0859190Z ok (0.012s) 2023-01-11T21:22:51.0859500Z test_for_loop_graph_break_before (__main__.ReproTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:51.0859753Z unimplemented [] 2023-01-11T21:22:51.0860132Z graph_break [('call_function in skip_files /opt/conda/lib/python3.10/site-packages/torch/_dynamo/__init__.py', 1)] 2023-01-11T21:22:51.0860409Z inline_call [] 2023-01-11T21:22:51.0860706Z stats [('calls_captured', 100), ('fusions_possible', 99), ('unique_graphs', 1)] 2023-01-11T21:22:51.0860939Z ok (0.134s) 2023-01-11T21:22:51.0861178Z test_get_parameter_dtype (__main__.ReproTests) ... inline_call [] 2023-01-11T21:22:51.0861519Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:51.0861749Z ok (0.015s) 2023-01-11T21:22:51.0862008Z test_grad_mode_carrying_correct_state_after_graph_break (__main__.ReproTests) ... Break 2023-01-11T21:22:51.0862296Z frames [('total', 3), ('ok', 3)] 2023-01-11T21:22:51.0862731Z unimplemented [] 2023-01-11T21:22:51.0863075Z graph_break [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:22:51.0863524Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:22:51.0863754Z ok (0.011s) 2023-01-11T21:22:51.0864071Z test_guard_fail_nested_tuple (__main__.ReproTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:51.0864441Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:51.0864657Z ok (0.021s) 2023-01-11T21:22:51.0864977Z test_guard_fail_tensor_bool (__main__.ReproTests) ... frames [('total', 12), ('ok', 12)] 2023-01-11T21:22:51.0865353Z unimplemented [('FOR_ITER UserDefinedObjectVariable(product)', 1)] 2023-01-11T21:22:51.0865963Z graph_break [('call torch._dynamo.disable() wrapped function .fn..get_expected at 0x7efe5e4e9c60>', 5), ('data dependent operator: aten.allclose.default', 5)] 2023-01-11T21:22:51.0866477Z stats [('calls_captured', 5), ('unique_graphs', 5), ('fusions_possible', 0)] 2023-01-11T21:22:51.0866713Z ok (0.079s) 2023-01-11T21:22:51.0866998Z test_guard_ordering_shape_fail (__main__.ReproTests) ... ok (0.002s) 2023-01-11T21:22:51.0867275Z test_hf_model_output (__main__.ReproTests) ... inline_call [] 2023-01-11T21:22:51.0867625Z stats [('calls_captured', 4), ('unique_graphs', 4), ('fusions_possible', 0)] 2023-01-11T21:22:51.0867853Z ok (0.050s) 2023-01-11T21:22:51.0868062Z test_hf_t5_forward (__main__.ReproTests) ... inline_call [] 2023-01-11T21:22:51.0868411Z stats [('calls_captured', 11), ('fusions_possible', 10), ('unique_graphs', 1)] 2023-01-11T21:22:51.0868644Z ok (0.638s) 2023-01-11T21:22:51.0868939Z test_indexing_with_list (__main__.ReproTests) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:22:51.0869235Z inline_call [('Tensor.numpy', 1)] 2023-01-11T21:22:51.0869442Z unimplemented [] 2023-01-11T21:22:51.0869666Z graph_break [('Tensor.numpy', 1)] 2023-01-11T21:22:51.0869989Z stats [('unique_graphs', 2), ('calls_captured', 0), ('fusions_possible', -2)] 2023-01-11T21:22:51.0870309Z ok (0.022s) 2023-01-11T21:22:51.0870548Z test_is_symbolic_tracing (__main__.ReproTests) ... 
inline_call [] 2023-01-11T21:22:51.0870891Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:51.0871121Z ok (0.007s) 2023-01-11T21:22:51.0871345Z test_isinstance_dtype (__main__.ReproTests) ... ok (0.003s) 2023-01-11T21:22:51.0871951Z test_isinstance_storage (__main__.ReproTests) ... /var/lib/jenkins/workspace/test/dynamo/test_repros.py:1484: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:22:51.0872548Z bools = torch.BoolStorage.from_buffer(f, "big") 2023-01-11T21:22:51.0872816Z frames [('total', 9), ('ok', 9)] 2023-01-11T21:22:51.0873011Z unimplemented [] 2023-01-11T21:22:51.0873483Z graph_break [('call_function BuiltinVariable(bytearray) [ListVariable()] {}', 1), ('inline in skipfiles: from_buffer /opt/conda/lib/python3.10/site-packages/torch/storage.py', 1)] 2023-01-11T21:22:51.0874016Z inline_call [('inline in skipfiles: from_buffer /opt/conda/lib/python3.10/site-packages/torch/storage.py', 1)] 2023-01-11T21:22:51.0874424Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:51.0874676Z expected failure (0.012s) 2023-01-11T21:22:51.0874907Z test_issue1466_size_aot_autograd (__main__.ReproTests) ... arf 2023-01-11T21:22:51.0875123Z arf 2023-01-11T21:22:51.0875334Z frames [('total', 3), ('ok', 3)] 2023-01-11T21:22:51.0875516Z unimplemented [] 2023-01-11T21:22:51.0875850Z graph_break [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:22:51.0876230Z stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:22:51.0876510Z aot_autograd [('total', 2), ('ok', 2)] 2023-01-11T21:22:51.0876705Z ok (0.032s) 2023-01-11T21:22:51.0876960Z test_issue175 (__main__.ReproTests) ... inline_call [] 2023-01-11T21:22:51.0877306Z stats [('calls_captured', 12), ('fusions_possible', 11), ('unique_graphs', 1)] 2023-01-11T21:22:51.0877525Z ok (0.126s) 2023-01-11T21:22:51.0877905Z test_longformer_chunk (__main__.ReproTests) ... stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:51.0878179Z ok (0.161s) 2023-01-11T21:22:51.0878404Z test_maml_item_capture (__main__.ReproTests) ... expected failure (0.001s) 2023-01-11T21:22:51.0878780Z test_maml_no_item_capture (__main__.ReproTests) ... frames [('total', 5), ('ok', 5)] 2023-01-11T21:22:51.0879157Z inline_call [('inlining disallowed: ', 1)] 2023-01-11T21:22:51.0879387Z unimplemented [] 2023-01-11T21:22:51.0879840Z graph_break [('Tensor.item', 2), ('call_function in skip_files /opt/conda/lib/python3.10/copy.py', 1), ('inlining disallowed: ', 1)] 2023-01-11T21:22:51.0880317Z stats [('calls_captured', 29), ('fusions_possible', 25), ('unique_graphs', 4)] 2023-01-11T21:22:51.0880553Z ok (0.143s) 2023-01-11T21:22:51.0880899Z test_modules (__main__.ReproTests) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:22:51.0881167Z ok (0.022s) 2023-01-11T21:22:51.0881470Z test_multi_dot_import (__main__.ReproTests) ... 
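The TypedStorage deprecation warning above names its own replacement; a minimal sketch of the old and new accessors:

    import torch

    t = torch.arange(4, dtype=torch.float32)
    legacy = t.storage()             # deprecated TypedStorage access, as warned above
    preferred = t.untyped_storage()  # replacement named in the warning
    print(type(preferred))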
frames [('total', 4), ('ok', 4)] 2023-01-11T21:22:51.0881912Z inline_call [('inline in skipfiles: symbolic_trace /opt/conda/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py', 1)] 2023-01-11T21:22:51.0882208Z unimplemented [] 2023-01-11T21:22:51.0882600Z graph_break [('inline in skipfiles: symbolic_trace /opt/conda/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py', 1)] 2023-01-11T21:22:51.0883021Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:51.0883237Z ok (0.011s) 2023-01-11T21:22:51.0883481Z test_multi_import (__main__.ReproTests) ... skip: requires detectron2 (0.000s) 2023-01-11T21:22:51.0883927Z test_named_buffers (__main__.ReproTests) ... stats [('calls_captured', 6), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:22:51.0884189Z ok (0.011s) 2023-01-11T21:22:51.0884490Z test_nn_parameter (__main__.ReproTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:51.0884889Z inline_call [('inline in skipfiles: assertTrue /opt/conda/lib/python3.10/unittest/case.py', 1)] 2023-01-11T21:22:51.0885141Z unimplemented [] 2023-01-11T21:22:51.0885485Z graph_break [('inline in skipfiles: assertTrue /opt/conda/lib/python3.10/unittest/case.py', 1)] 2023-01-11T21:22:51.0885874Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:51.0886104Z ok (0.014s) 2023-01-11T21:22:51.0886322Z test_norm_dtype (__main__.ReproTests) ... skip: requires cuda (0.001s) 2023-01-11T21:22:51.0886734Z test_not_rewrite_assert (__main__.ReproTests) ... unimplemented [('generic_jump TensorVariable()', 1)] 2023-01-11T21:22:51.0886995Z ok (0.005s) 2023-01-11T21:22:51.0887326Z test_not_rewrite_assert_for_other_errors (__main__.ReproTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:51.0887708Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:51.0887938Z ok (0.008s) 2023-01-11T21:22:51.0888219Z test_numpy_list (__main__.ReproTests) ... frames [('total', 2), ('ok', 1)] 2023-01-11T21:22:51.0888448Z unimplemented [] 2023-01-11T21:22:51.0888883Z graph_break [('call torch._dynamo.disable() wrapped function .rand_gen at 0x7efe5e51ef80>', 1)] 2023-01-11T21:22:51.0889201Z expected failure (0.020s) 2023-01-11T21:22:51.0889517Z test_optimized_deepcopy (__main__.ReproTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:51.0889886Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:51.0890116Z ok (0.013s) 2023-01-11T21:22:51.0890396Z test_primtorch (__main__.ReproTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:22:51.0890877Z inline_call [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_prims_common/wrappers.py', 1)] 2023-01-11T21:22:51.0891169Z unimplemented [] 2023-01-11T21:22:51.0891547Z graph_break [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_prims_common/wrappers.py', 1)] 2023-01-11T21:22:51.0891825Z ok (0.005s) 2023-01-11T21:22:51.0892292Z test_primtorch_no_graph_break (__main__.ReproTests) ... inline_call [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_prims_common/wrappers.py', 1)] 2023-01-11T21:22:51.0892641Z expected failure (0.004s) 2023-01-11T21:22:51.0892945Z test_recursive_map (__main__.ReproTests) ... 
frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:51.0893175Z inline_call [] 2023-01-11T21:22:51.0893479Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:51.0893697Z ok (0.013s) 2023-01-11T21:22:51.0893917Z test_reformer_eval (__main__.ReproTests) ... inline_call [] 2023-01-11T21:22:51.0894297Z stats [('calls_captured', 10), ('fusions_possible', 9), ('unique_graphs', 1)] 2023-01-11T21:22:51.0894522Z ok (0.046s) 2023-01-11T21:22:51.0894751Z test_reformer_min_chunk_len (__main__.ReproTests) ... inline_call [] 2023-01-11T21:22:51.0895104Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:51.0895334Z ok (0.015s) 2023-01-11T21:22:51.0895548Z test_reformer_remove_unused_args (__main__.ReproTests) ... foo 2023-01-11T21:22:51.0895766Z foo 2023-01-11T21:22:51.0895976Z frames [('total', 4), ('ok', 4)] 2023-01-11T21:22:51.0896301Z inline_call [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:22:51.0896561Z unimplemented [] 2023-01-11T21:22:51.0896891Z graph_break [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 2)] 2023-01-11T21:22:51.0897258Z stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:22:51.0897549Z aot_autograd [('total', 2), ('ok', 2)] 2023-01-11T21:22:51.0897746Z ok (0.024s) 2023-01-11T21:22:51.0897955Z test_reformer_sorting (__main__.ReproTests) ... inline_call [] 2023-01-11T21:22:51.0898307Z stats [('calls_captured', 14), ('fusions_possible', 13), ('unique_graphs', 1)] 2023-01-11T21:22:51.0898537Z ok (0.039s) 2023-01-11T21:22:51.0898822Z test_reformer_train (__main__.ReproTests) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:22:51.0899274Z inline_call [('inline in skipfiles: save_for_backward /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py', 1)] 2023-01-11T21:22:51.0899573Z unimplemented [] 2023-01-11T21:22:51.0900048Z graph_break [('autograd.Function with requires_grad', 1), ('inline in skipfiles: save_for_backward /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py', 1)] 2023-01-11T21:22:51.0900499Z stats [('calls_captured', 10), ('fusions_possible', 6), ('unique_graphs', 4)] 2023-01-11T21:22:51.0900729Z ok (0.065s) 2023-01-11T21:22:51.0901023Z test_reinplacing (__main__.ReproTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:51.0901378Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:51.0901672Z aot_autograd [('total', 1), ('ok', 1)] 2023-01-11T21:22:51.0901865Z ok (0.329s) 2023-01-11T21:22:51.0902236Z test_relative_import (__main__.ReproTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:51.0902705Z ok (0.014s) 2023-01-11T21:22:51.0903122Z test_relative_import_no_modulename (__main__.ReproTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:51.0903417Z ok (0.006s) 2023-01-11T21:22:51.0903791Z test_rewrite_assert_noop (__main__.ReproTests) ... stats [('calls_captured', 9), ('fusions_possible', 6), ('unique_graphs', 3)] 2023-01-11T21:22:51.0904070Z ok (0.022s) 2023-01-11T21:22:51.0904455Z test_rewrite_assert_with_fstring_msg (__main__.ReproTests) ... unimplemented [('generic_jump TensorVariable()', 1)] 2023-01-11T21:22:51.0904734Z ok (0.005s) 2023-01-11T21:22:51.0905191Z test_rewrite_assert_with_msg (__main__.ReproTests) ... 
stats [('calls_captured', 18), ('fusions_possible', 15), ('unique_graphs', 3)] 2023-01-11T21:22:51.0905473Z ok (0.026s) 2023-01-11T21:22:51.0905870Z test_rewrite_assert_without_msg (__main__.ReproTests) ... stats [('calls_captured', 12), ('fusions_possible', 10), ('unique_graphs', 2)] 2023-01-11T21:22:51.0906139Z ok (0.019s) 2023-01-11T21:22:51.0906435Z test_rng_state (__main__.ReproTests) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:22:51.0906838Z unimplemented [('TODO: make torch.random.set_rng_state work with FakeTensor/aot_autograd', 1)] 2023-01-11T21:22:51.0907257Z graph_break [('TODO: make torch.random.set_rng_state work with FakeTensor/aot_autograd', 2)] 2023-01-11T21:22:51.0907646Z stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:22:51.0907878Z ok (0.031s) 2023-01-11T21:22:51.0908301Z test_seq_append_list (__main__.ReproTests) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:22:51.0908561Z ok (0.024s) 2023-01-11T21:22:51.0909263Z test_sigmoid_out (__main__.ReproTests) ... /var/lib/jenkins/workspace/test/dynamo/test_repros.py:1543: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [3, 5]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:22:51.0909958Z torch.sigmoid(inp, out=out1) 2023-01-11T21:22:51.0910940Z /opt/conda/lib/python3.10/site-packages/torch/_prims_common/wrappers.py:145: UserWarning: An output with one or more elements was resized since it had shape torch.Size([]) which does not match the required output shape {str(shape)}. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). 2023-01-11T21:22:51.0911537Z warnings.warn(msg) 2023-01-11T21:22:51.0912211Z /var/lib/jenkins/workspace/test/dynamo/test_repros.py:1543: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [3, 5]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:22:51.0912856Z torch.sigmoid(inp, out=out1) 2023-01-11T21:22:51.0913089Z frames [('total', 7), ('ok', 7)] 2023-01-11T21:22:51.0913412Z inline_call [('call_function UserDefinedClassVariable() [] {}', 1)] 2023-01-11T21:22:51.0913655Z ok (0.033s) 2023-01-11T21:22:51.0914037Z test_slice_into_list_mutable (__main__.ReproTests) ... stats [('calls_captured', 30), ('fusions_possible', 29), ('unique_graphs', 1)] 2023-01-11T21:22:51.0914316Z ok (0.082s) 2023-01-11T21:22:51.0914637Z test_slicing_dynamic_shape (__main__.ReproTests) ... 
frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:51.0914966Z unimplemented [('Dynamic slicing not supported', 2)] 2023-01-11T21:22:51.0915316Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:51.0915551Z ok (0.019s) 2023-01-11T21:22:51.0915878Z test_slicing_dynamic_shape_setitem (__main__.ReproTests) ... frames [('total', 2), ('ok', 1)] 2023-01-11T21:22:51.0916212Z unimplemented [('Dynamic slicing not supported', 1)] 2023-01-11T21:22:51.0916521Z graph_break [('Dynamic slicing not supported', 1)] 2023-01-11T21:22:51.0916859Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:51.0917141Z aot_autograd [('total', 1), ('ok', 1)] 2023-01-11T21:22:51.0917372Z ok (0.013s) 2023-01-11T21:22:51.0918073Z test_sort_out (__main__.ReproTests) ... /var/lib/jenkins/workspace/test/dynamo/test_repros.py:1527: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [3]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:22:51.0918766Z torch.sort(tensor, out=(values1, indices1)) 2023-01-11T21:22:51.0919820Z /opt/conda/lib/python3.10/site-packages/torch/_dynamo/utils.py:1052: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [3]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:22:51.0920483Z return node.target(*args, **kwargs) 2023-01-11T21:22:51.0921151Z /var/lib/jenkins/workspace/test/dynamo/test_repros.py:1527: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [3]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:22:51.0921807Z torch.sort(tensor, out=(values1, indices1)) 2023-01-11T21:22:51.0922489Z /var/lib/jenkins/workspace/test/dynamo/test_repros.py:1527: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [3]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:22:51.0923146Z torch.sort(tensor, out=(values1, indices1)) 2023-01-11T21:22:51.0923413Z frames [('total', 7), ('ok', 7)] 2023-01-11T21:22:51.0923720Z inline_call [('call_function UserDefinedClassVariable() [] {}', 1)] 2023-01-11T21:22:51.0923955Z ok (0.041s) 2023-01-11T21:22:51.0924268Z test_specialized_stride (__main__.ReproTests) ... 
frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:51.0924625Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:51.0924855Z ok (0.011s) 2023-01-11T21:22:51.0925168Z test_swin_base_tensor_attr (__main__.ReproTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:51.0925528Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:51.0925760Z ok (0.015s) 2023-01-11T21:22:51.0926074Z test_tensor_isinstance_tuple (__main__.ReproTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:51.0926439Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:51.0926654Z ok (0.010s) 2023-01-11T21:22:51.0926950Z test_tokenization (__main__.ReproTests) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:22:51.0927357Z inline_call [('inline in skipfiles: __init__ /opt/conda/lib/python3.10/collections/__init__.py', 2)] 2023-01-11T21:22:51.0927617Z unimplemented [] 2023-01-11T21:22:51.0927970Z graph_break [('inline in skipfiles: __init__ /opt/conda/lib/python3.10/collections/__init__.py', 2)] 2023-01-11T21:22:51.0928224Z ok (0.009s) 2023-01-11T21:22:51.0928584Z test_torch_ops_aten (__main__.ReproTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:51.0928890Z ok (0.012s) 2023-01-11T21:22:51.0929270Z test_vdd_duplicate_error (__main__.ReproTests) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:22:51.0929543Z ok (0.014s) 2023-01-11T21:22:51.0929842Z test_while_loop_graph_break (__main__.ReproTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:51.0930082Z inline_call [] 2023-01-11T21:22:51.0930466Z unimplemented [('call_function in skip_files /opt/conda/lib/python3.10/site-packages/torch/_dynamo/__init__.py', 1)] 2023-01-11T21:22:51.0930862Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:51.0931092Z ok (0.007s) 2023-01-11T21:22:51.0931320Z test_with_on_graph_break_inst (__main__.ReproTests) ... Hello world 2023-01-11T21:22:51.0931533Z Hello world 2023-01-11T21:22:51.0931753Z frames [('total', 6), ('ok', 6)] 2023-01-11T21:22:51.0932126Z inline_call [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:22:51.0932394Z unimplemented [] 2023-01-11T21:22:51.0932749Z graph_break [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 2), ('Tensor.backward', 1)] 2023-01-11T21:22:51.0933154Z stats [('calls_captured', 11), ('fusions_possible', 7), ('unique_graphs', 4)] 2023-01-11T21:22:51.0933387Z ok (0.029s) 2023-01-11T21:22:51.0933482Z 2023-01-11T21:22:51.0933684Z ---------------------------------------------------------------------- 2023-01-11T21:22:51.0933933Z Ran 76 tests in 6.405s 2023-01-11T21:22:51.0934048Z 2023-01-11T21:22:51.0934144Z OK (skipped=2, expected failures=4) 2023-01-11T21:22:51.0934275Z 2023-01-11T21:22:51.0934349Z Generating XML reports... 
2023-01-11T21:22:51.0934762Z Generated XML report: test-reports/python-unittest/dynamo.test_repros/TEST-ReproTests-20230111212244.xml 2023-01-11T21:22:51.0934992Z 2023-01-11T21:22:51.0935327Z ##[endgroup] 2023-01-11T21:22:51.0935724Z FINISHED PRINTING LOG FILE of dynamo/test_repros (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_repros_55ysf7ip) 2023-01-11T21:22:51.0935956Z 2023-01-11T21:22:52.9032229Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:22:52.9688600Z Ignoring disabled issues: [] 2023-01-11T21:22:52.9820891Z Running inductor/test_torchinductor ... [2023-01-11 21:22:52.981766] 2023-01-11T21:22:52.9822206Z Executing ['/opt/conda/bin/python', '-bb', 'inductor/test_torchinductor.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:22:52.982018] 2023-01-11T21:22:59.1007161Z 2023-01-11T21:22:59.1007918Z Expand the folded group to see the log file of dynamo/test_unspec 2023-01-11T21:22:59.1008979Z ##[group]PRINTING LOG FILE of dynamo/test_unspec (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_unspec_sghh16j8) 2023-01-11T21:22:59.1009486Z 2023-01-11T21:22:59.1009601Z Running tests... 2023-01-11T21:22:59.1010203Z ---------------------------------------------------------------------- 2023-01-11T21:22:59.1010877Z Test results will be stored in test-reports/python-unittest/dynamo.test_unspec 2023-01-11T21:22:59.1011509Z test_access_by_keys_unspec (__main__.make_unspec_cls..UnspecTest) ... ok (0.337s) 2023-01-11T21:22:59.1012068Z test_basicmodule1_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1012741Z stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:22:59.1013414Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:59.1013820Z ok (0.018s) 2023-01-11T21:22:59.1014580Z test_basicmodule2_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:59.1015121Z ok (0.013s) 2023-01-11T21:22:59.1015614Z test_call_fn_with_non_const_inputs_safe_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1016310Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1016806Z ok (0.023s) 2023-01-11T21:22:59.1017223Z test_cfgmod_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 6), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:22:59.1017512Z ok (0.027s) 2023-01-11T21:22:59.1017913Z test_children_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:59.1018206Z ok (0.018s) 2023-01-11T21:22:59.1018619Z test_constloop_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 6), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:22:59.1018900Z ok (0.026s) 2023-01-11T21:22:59.1019148Z test_densenet_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1019522Z stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:22:59.1019753Z ok (0.017s) 2023-01-11T21:22:59.1020069Z test_enumvalues_unspec (__main__.make_unspec_cls..UnspecTest) ... 
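Editor's note, not part of the captured log: the `test_isinstance_storage` entries above print a TypedStorage deprecation UserWarning whose text recommends `tensor.untyped_storage()` over the older `tensor.storage()` path. The snippet below is only an illustrative sketch of that suggestion; the tensor name `t` is invented for the example and the snippet is not taken from the test file referenced in the log.

```python
# Minimal sketch of the migration suggested by the UserWarning in the log:
# read raw storage via Tensor.untyped_storage() instead of the deprecated
# Tensor.storage() / torch.BoolStorage route used in the test.
import torch

t = torch.tensor([True, False, True])

# Deprecated style (this is what triggers the warning seen above):
# legacy_storage = t.storage()

# Style recommended by the warning text:
storage = t.untyped_storage()        # untyped, byte-level storage object
print(type(storage), storage.nbytes())
```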
inline_call [] 2023-01-11T21:22:59.1020452Z stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:22:59.1020680Z ok (0.016s) 2023-01-11T21:22:59.1021102Z test_fnmember_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:59.1021411Z ok (0.012s) 2023-01-11T21:22:59.1021961Z test_fnmembercmp1_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:59.1022244Z ok (0.012s) 2023-01-11T21:22:59.1022903Z test_fnmembercmp2_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:59.1023201Z ok (0.012s) 2023-01-11T21:22:59.1023462Z test_forward_directly_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1023840Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:59.1024070Z ok (0.020s) 2023-01-11T21:22:59.1024321Z test_generation_tag_unspec (__main__.make_unspec_cls..UnspecTest) ... ok (0.002s) 2023-01-11T21:22:59.1024797Z test_hasattr_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:59.1025085Z ok (0.009s) 2023-01-11T21:22:59.1025494Z test_intarg_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:59.1025783Z ok (0.013s) 2023-01-11T21:22:59.1026180Z test_iseval1_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:59.1026467Z ok (0.012s) 2023-01-11T21:22:59.1026868Z test_iseval2_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:59.1027135Z ok (0.012s) 2023-01-11T21:22:59.1027552Z test_isnonelayer_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1027846Z ok (0.010s) 2023-01-11T21:22:59.1028255Z test_istraining1_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:59.1028537Z ok (0.012s) 2023-01-11T21:22:59.1028946Z test_istraining2_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:59.1029234Z ok (0.012s) 2023-01-11T21:22:59.1029629Z test_layerlist_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:59.1029921Z ok (0.016s) 2023-01-11T21:22:59.1030735Z test_lazy_module_unspec (__main__.make_unspec_cls..UnspecTest) ... /opt/conda/lib/python3.10/site-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment. 
2023-01-11T21:22:59.1031306Z warnings.warn('Lazy modules are a new feature under heavy development ' 2023-01-11T21:22:59.1032039Z [2023-01-11 21:22:51,981] torch._dynamo.symbolic_convert: [WARNING] /opt/conda/lib/python3.10/site-packages/torch/nn/parameter.py [ShapeVariable()] {} missing a required argument: 'shape' 2023-01-11T21:22:59.1032799Z /opt/conda/lib/python3.10/site-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment. 2023-01-11T21:22:59.1033299Z warnings.warn('Lazy modules are a new feature under heavy development ' 2023-01-11T21:22:59.1033950Z /opt/conda/lib/python3.10/site-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment. 2023-01-11T21:22:59.1034442Z warnings.warn('Lazy modules are a new feature under heavy development ' 2023-01-11T21:22:59.1035097Z [2023-01-11 21:22:52,075] torch._dynamo.symbolic_convert: [WARNING] /opt/conda/lib/python3.10/site-packages/torch/nn/parameter.py [ShapeVariable()] {} missing a required argument: 'shape' 2023-01-11T21:22:59.1035850Z /opt/conda/lib/python3.10/site-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment. 2023-01-11T21:22:59.1036347Z warnings.warn('Lazy modules are a new feature under heavy development ' 2023-01-11T21:22:59.1036642Z frames [('total', 16), ('ok', 14)] 2023-01-11T21:22:59.1037387Z inline_call [('Patched init cannot be inlined.', 3), ('arg mismatch inlining', 2), ('call_function UserDefinedObjectVariable(_infer_parameters) [NNModuleVariable(), TupleVariable()] {}', 1), ('call_function UserDefinedObjectVariable(_infer_parameters) [UnspecializedNNModuleVariable(LazyModule), TupleVariable()] {}', 1)] 2023-01-11T21:22:59.1038071Z unimplemented [("Guard setup for uninitialized class ", 2)] 2023-01-11T21:22:59.1038481Z graph_break [('Patched init cannot be inlined.', 3), ('arg mismatch inlining', 2)] 2023-01-11T21:22:59.1038852Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:59.1039070Z ok (0.239s) 2023-01-11T21:22:59.1039350Z test_module_attribute_precedence_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1039748Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:59.1039967Z ok (0.011s) 2023-01-11T21:22:59.1040235Z test_module_class_method_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1040623Z stats [('calls_captured', 9), ('fusions_possible', 8), ('unique_graphs', 1)] 2023-01-11T21:22:59.1040853Z ok (0.028s) 2023-01-11T21:22:59.1041222Z test_module_forward_has_graph_break_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 3), ('ok', 2)] 2023-01-11T21:22:59.1041502Z inline_call [] 2023-01-11T21:22:59.1041787Z unimplemented [('reconstruct: ConstantVariable(dict)', 2)] 2023-01-11T21:22:59.1042236Z graph_break [('call_function BuiltinVariable(dict) [ListIteratorVariable()] {}', 1), ('call_method NNModuleVariable() buffers [] {}', 1)] 2023-01-11T21:22:59.1042666Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:22:59.1042897Z ok (0.075s) 2023-01-11T21:22:59.1043318Z test_module_name_string_unspec (__main__.make_unspec_cls..UnspecTest) ... 
stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:59.1043654Z ok (0.014s) 2023-01-11T21:22:59.1043913Z test_module_property_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1044294Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1044513Z ok (0.006s) 2023-01-11T21:22:59.1044777Z test_module_static_method_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1045159Z stats [('calls_captured', 9), ('fusions_possible', 8), ('unique_graphs', 1)] 2023-01-11T21:22:59.1045373Z ok (0.028s) 2023-01-11T21:22:59.1045793Z test_moduledict_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1046092Z ok (0.011s) 2023-01-11T21:22:59.1046506Z test_modulelist_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 40), ('fusions_possible', 39), ('unique_graphs', 1)] 2023-01-11T21:22:59.1046825Z ok (0.108s) 2023-01-11T21:22:59.1047083Z test_modulemethod1_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1047467Z stats [('calls_captured', 9), ('fusions_possible', 8), ('unique_graphs', 1)] 2023-01-11T21:22:59.1047687Z ok (0.028s) 2023-01-11T21:22:59.1047945Z test_modulemethod2_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1048325Z stats [('calls_captured', 9), ('fusions_possible', 8), ('unique_graphs', 1)] 2023-01-11T21:22:59.1048538Z ok (0.028s) 2023-01-11T21:22:59.1048973Z test_nn_moduledict_contains_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 4), ('unique_graphs', 3), ('fusions_possible', 1)] 2023-01-11T21:22:59.1049339Z frames [('total', 2), ('ok', 1)] 2023-01-11T21:22:59.1049619Z inline_call [('Patched init cannot be inlined.', 1)] 2023-01-11T21:22:59.1050072Z unimplemented [("Guard setup for uninitialized class .M'>", 1)] 2023-01-11T21:22:59.1050567Z graph_break [('Patched init cannot be inlined.', 1)] 2023-01-11T21:22:59.1050779Z ok (0.018s) 2023-01-11T21:22:59.1051021Z test_parameters1_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1051402Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1051632Z ok (0.009s) 2023-01-11T21:22:59.1051884Z test_parameters2_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1052248Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1052483Z ok (0.009s) 2023-01-11T21:22:59.1052901Z test_parameters3_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:22:59.1053181Z ok (0.023s) 2023-01-11T21:22:59.1053603Z test_self_mutating1_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 9), ('fusions_possible', 6), ('unique_graphs', 3)] 2023-01-11T21:22:59.1053900Z ok (0.038s) 2023-01-11T21:22:59.1054292Z test_seq_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:59.1054576Z ok (0.019s) 2023-01-11T21:22:59.1054839Z test_simple_torch_function_unspec (__main__.make_unspec_cls..UnspecTest) ... 
inline_call [] 2023-01-11T21:22:59.1055222Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:59.1055438Z ok (0.013s) 2023-01-11T21:22:59.1055857Z test_stringmember_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:59.1056157Z ok (0.012s) 2023-01-11T21:22:59.1056397Z test_submodules1_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1056820Z stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:22:59.1057047Z ok (0.024s) 2023-01-11T21:22:59.1057300Z test_submodules2_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1057660Z stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:22:59.1057891Z ok (0.024s) 2023-01-11T21:22:59.1058136Z test_super1_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1058495Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:59.1058722Z ok (0.015s) 2023-01-11T21:22:59.1058965Z test_super2_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1059322Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:59.1059555Z ok (0.014s) 2023-01-11T21:22:59.1059815Z test_super_class_method_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1060233Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1060451Z ok (0.007s) 2023-01-11T21:22:59.1060865Z test_tensorlist_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:59.1061158Z ok (0.011s) 2023-01-11T21:22:59.1061586Z test_torch_function_with_closure_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:59.1061892Z ok (0.008s) 2023-01-11T21:22:59.1062261Z test_unsupportedmethod_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:22:59.1062920Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:22:59.1063206Z unimplemented [] 2023-01-11T21:22:59.1063596Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:22:59.1064017Z stats [('calls_captured', 5), ('fusions_possible', 3), ('unique_graphs', 2)] 2023-01-11T21:22:59.1064235Z ok (0.028s) 2023-01-11T21:22:59.1064604Z test_unsupportedmodule_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:22:59.1065082Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:22:59.1065362Z unimplemented [] 2023-01-11T21:22:59.1065747Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:22:59.1066170Z stats [('calls_captured', 6), ('fusions_possible', 3), ('unique_graphs', 3)] 2023-01-11T21:22:59.1066399Z ok (0.030s) 2023-01-11T21:22:59.1066644Z test_viamodulecall_unspec (__main__.make_unspec_cls..UnspecTest) ... 
inline_call [] 2023-01-11T21:22:59.1067026Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:59.1067258Z ok (0.015s) 2023-01-11T21:22:59.1067582Z test_Size_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:59.1068016Z inline_call [('inline in skipfiles: assertIsInstance /opt/conda/lib/python3.10/unittest/case.py', 1)] 2023-01-11T21:22:59.1068292Z unimplemented [] 2023-01-11T21:22:59.1068648Z graph_break [('inline in skipfiles: assertIsInstance /opt/conda/lib/python3.10/unittest/case.py', 1)] 2023-01-11T21:22:59.1069030Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:59.1069263Z ok (0.014s) 2023-01-11T21:22:59.1069613Z test_abc_setattr_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:22:59.1069866Z unimplemented [] 2023-01-11T21:22:59.1070366Z graph_break [('setattr(UserDefinedObjectVariable) .Derived.__setattr__ at 0x7fa5ae2fba30>', 1)] 2023-01-11T21:22:59.1070769Z inline_call [] 2023-01-11T21:22:59.1071066Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1071295Z ok (0.009s) 2023-01-11T21:22:59.1071672Z test_avoid_dupe_specialization_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:59.1072078Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:59.1072359Z aot_autograd [('total', 2), ('ok', 2)] 2023-01-11T21:22:59.1072548Z ok (0.043s) 2023-01-11T21:22:59.1072902Z test_batch_norm_act_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:22:59.1073290Z inline_call [('data dependent operator: aten._local_scalar_dense.default', 2)] 2023-01-11T21:22:59.1073541Z unimplemented [] 2023-01-11T21:22:59.1073857Z graph_break [('data dependent operator: aten._local_scalar_dense.default', 2)] 2023-01-11T21:22:59.1074260Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1074493Z ok (0.029s) 2023-01-11T21:22:59.1074894Z test_batchnorm_e2e_unspec (__main__.make_unspec_cls..UnspecTest) ... No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:22:59.1075236Z frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:59.1075412Z inline_call [] 2023-01-11T21:22:59.1075714Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:59.1076005Z aot_autograd [('total', 1), ('ok', 1)] 2023-01-11T21:22:59.1076181Z ok (0.922s) 2023-01-11T21:22:59.1076556Z test_bigbird_unsqueeze_inplace_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:59.1076955Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:59.1077233Z aot_autograd [('total', 1), ('ok', 1)] 2023-01-11T21:22:59.1077425Z ok (0.035s) 2023-01-11T21:22:59.1077679Z test_boxes_len_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1078039Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1078272Z ok (0.009s) 2023-01-11T21:22:59.1078531Z test_chunk_reformer_ff_unspec (__main__.make_unspec_cls..UnspecTest) ... 
inline_call [] 2023-01-11T21:22:59.1078908Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:59.1079126Z ok (0.075s) 2023-01-11T21:22:59.1079546Z test_class_member_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:59.1079841Z ok (0.011s) 2023-01-11T21:22:59.1080206Z test_convert_boxes_to_pooler_format_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:22:59.1080480Z inline_call [] 2023-01-11T21:22:59.1080660Z unimplemented [] 2023-01-11T21:22:59.1080928Z graph_break [('dynamic shapes: repeat_interleave', 2)] 2023-01-11T21:22:59.1081282Z stats [('calls_captured', 10), ('fusions_possible', 6), ('unique_graphs', 4)] 2023-01-11T21:22:59.1081512Z ok (0.058s) 2023-01-11T21:22:59.1081781Z test_create_rand_mask_from_inputs_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1082161Z stats [('calls_captured', 8), ('fusions_possible', 7), ('unique_graphs', 1)] 2023-01-11T21:22:59.1082391Z ok (0.133s) 2023-01-11T21:22:59.1082635Z test_dict_iter_unspec (__main__.make_unspec_cls..UnspecTest) ... ok (0.005s) 2023-01-11T21:22:59.1083052Z test_dict_list_values_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 7), ('ok', 7)] 2023-01-11T21:22:59.1083319Z unimplemented [] 2023-01-11T21:22:59.1083773Z graph_break [('call_function in skip_files Builtin count', 2), ('call_function BuiltinVariable(zip) [UserDefinedObjectVariable(count), ListVariable()] {}', 2)] 2023-01-11T21:22:59.1084089Z ok (0.017s) 2023-01-11T21:22:59.1084432Z test_do_paste_mask_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 13), ('ok', 12)] 2023-01-11T21:22:59.1084862Z unimplemented [('Dynamic slicing not supported', 1)] 2023-01-11T21:22:59.1085225Z graph_break [('dynamic shapes: arange', 7), ('Dynamic slicing not supported', 4)] 2023-01-11T21:22:59.1085589Z stats [('calls_captured', 131), ('fusions_possible', 119), ('unique_graphs', 12)] 2023-01-11T21:22:59.1085822Z ok (1.444s) 2023-01-11T21:22:59.1086269Z test_dynamic_shapes_right_side_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:59.1086563Z ok (0.081s) 2023-01-11T21:22:59.1086976Z test_ellipsis_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:59.1087433Z ok (0.020s) 2023-01-11T21:22:59.1087782Z test_exec_import_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 5), ('ok', 5)] 2023-01-11T21:22:59.1088222Z inline_call [('call_function BuiltinVariable(exec) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:22:59.1088480Z unimplemented [] 2023-01-11T21:22:59.1088810Z graph_break [('call_function BuiltinVariable(exec) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:22:59.1089043Z ok (0.004s) 2023-01-11T21:22:59.1089406Z test_exec_wildcard_import_unspec (__main__.make_unspec_cls..UnspecTest) ... 
frames [('total', 5), ('ok', 5)] 2023-01-11T21:22:59.1089818Z inline_call [('call_function BuiltinVariable(exec) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:22:59.1090070Z unimplemented [] 2023-01-11T21:22:59.1090383Z graph_break [('call_function BuiltinVariable(exec) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:22:59.1090757Z stats [('calls_captured', 6), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:22:59.1090988Z ok (0.017s) 2023-01-11T21:22:59.1091349Z test_for_loop_graph_break_before_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:59.1091632Z unimplemented [] 2023-01-11T21:22:59.1092014Z graph_break [('call_function in skip_files /opt/conda/lib/python3.10/site-packages/torch/_dynamo/__init__.py', 1)] 2023-01-11T21:22:59.1092289Z inline_call [] 2023-01-11T21:22:59.1092584Z stats [('calls_captured', 100), ('fusions_possible', 99), ('unique_graphs', 1)] 2023-01-11T21:22:59.1092820Z ok (0.135s) 2023-01-11T21:22:59.1093185Z test_for_loop_graph_break_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:59.1093434Z inline_call [] 2023-01-11T21:22:59.1093814Z unimplemented [('call_function in skip_files /opt/conda/lib/python3.10/site-packages/torch/_dynamo/__init__.py', 1)] 2023-01-11T21:22:59.1094229Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1094445Z ok (0.013s) 2023-01-11T21:22:59.1094709Z test_get_parameter_dtype_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1095101Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:59.1095330Z ok (0.018s) 2023-01-11T21:22:59.1095610Z test_grad_mode_carrying_correct_state_after_graph_break_unspec (__main__.make_unspec_cls..UnspecTest) ... Break 2023-01-11T21:22:59.1095940Z frames [('total', 3), ('ok', 3)] 2023-01-11T21:22:59.1096133Z unimplemented [] 2023-01-11T21:22:59.1096451Z graph_break [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:22:59.1096831Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:22:59.1097060Z ok (0.016s) 2023-01-11T21:22:59.1097417Z test_guard_fail_nested_tuple_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:59.1097810Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:59.1098034Z ok (0.023s) 2023-01-11T21:22:59.1098392Z test_guard_fail_tensor_bool_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 12), ('ok', 12)] 2023-01-11T21:22:59.1098835Z unimplemented [('FOR_ITER UserDefinedObjectVariable(product)', 1)] 2023-01-11T21:22:59.1099455Z graph_break [('call torch._dynamo.disable() wrapped function .fn..get_expected at 0x7fa640f535b0>', 5), ('data dependent operator: aten.allclose.default', 5)] 2023-01-11T21:22:59.1099964Z stats [('calls_captured', 5), ('unique_graphs', 5), ('fusions_possible', 0)] 2023-01-11T21:22:59.1100184Z ok (0.097s) 2023-01-11T21:22:59.1100455Z test_guard_ordering_shape_fail_unspec (__main__.make_unspec_cls..UnspecTest) ... ok (0.001s) 2023-01-11T21:22:59.1100803Z test_hf_model_output_unspec (__main__.make_unspec_cls..UnspecTest) ... 
inline_call [] 2023-01-11T21:22:59.1101183Z stats [('calls_captured', 4), ('unique_graphs', 4), ('fusions_possible', 0)] 2023-01-11T21:22:59.1101400Z ok (0.052s) 2023-01-11T21:22:59.1101688Z test_hf_t5_forward_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1102076Z stats [('calls_captured', 11), ('fusions_possible', 10), ('unique_graphs', 1)] 2023-01-11T21:22:59.1102296Z ok (0.505s) 2023-01-11T21:22:59.1102852Z test_indexing_with_list_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:22:59.1103178Z inline_call [('Tensor.numpy', 1)] 2023-01-11T21:22:59.1103371Z unimplemented [] 2023-01-11T21:22:59.1103608Z graph_break [('Tensor.numpy', 1)] 2023-01-11T21:22:59.1103933Z stats [('unique_graphs', 2), ('calls_captured', 0), ('fusions_possible', -2)] 2023-01-11T21:22:59.1104164Z ok (0.022s) 2023-01-11T21:22:59.1104415Z test_is_symbolic_tracing_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1104797Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1105024Z ok (0.006s) 2023-01-11T21:22:59.1105269Z test_isinstance_dtype_unspec (__main__.make_unspec_cls..UnspecTest) ... ok (0.003s) 2023-01-11T21:22:59.1105951Z test_isinstance_storage_unspec (__main__.make_unspec_cls..UnspecTest) ... /var/lib/jenkins/workspace/test/dynamo/test_repros.py:1484: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:22:59.1106571Z bools = torch.BoolStorage.from_buffer(f, "big") 2023-01-11T21:22:59.1106842Z frames [('total', 9), ('ok', 9)] 2023-01-11T21:22:59.1107024Z unimplemented [] 2023-01-11T21:22:59.1107506Z graph_break [('call_function BuiltinVariable(bytearray) [ListVariable()] {}', 1), ('inline in skipfiles: from_buffer /opt/conda/lib/python3.10/site-packages/torch/storage.py', 1)] 2023-01-11T21:22:59.1108044Z inline_call [('inline in skipfiles: from_buffer /opt/conda/lib/python3.10/site-packages/torch/storage.py', 1)] 2023-01-11T21:22:59.1108451Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1108684Z expected failure (0.013s) 2023-01-11T21:22:59.1123330Z test_issue1466_size_aot_autograd_unspec (__main__.make_unspec_cls..UnspecTest) ... arf 2023-01-11T21:22:59.1123598Z arf 2023-01-11T21:22:59.1123875Z frames [('total', 3), ('ok', 3)] 2023-01-11T21:22:59.1124070Z unimplemented [] 2023-01-11T21:22:59.1124411Z graph_break [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:22:59.1124796Z stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:22:59.1125078Z aot_autograd [('total', 2), ('ok', 2)] 2023-01-11T21:22:59.1125273Z ok (0.033s) 2023-01-11T21:22:59.1125525Z test_issue175_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1125890Z stats [('calls_captured', 12), ('fusions_possible', 11), ('unique_graphs', 1)] 2023-01-11T21:22:59.1126302Z ok (0.127s) 2023-01-11T21:22:59.1126734Z test_longformer_chunk_unspec (__main__.make_unspec_cls..UnspecTest) ... 
stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:59.1127026Z ok (0.160s) 2023-01-11T21:22:59.1127299Z test_maml_item_capture_unspec (__main__.make_unspec_cls..UnspecTest) ... expected failure (0.002s) 2023-01-11T21:22:59.1127757Z test_maml_no_item_capture_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 5), ('ok', 5)] 2023-01-11T21:22:59.1128159Z inline_call [('inlining disallowed: ', 1)] 2023-01-11T21:22:59.1128391Z unimplemented [] 2023-01-11T21:22:59.1128848Z graph_break [('Tensor.item', 2), ('call_function in skip_files /opt/conda/lib/python3.10/copy.py', 1), ('inlining disallowed: ', 1)] 2023-01-11T21:22:59.1129292Z stats [('calls_captured', 29), ('fusions_possible', 25), ('unique_graphs', 4)] 2023-01-11T21:22:59.1129579Z ok (0.164s) 2023-01-11T21:22:59.1129993Z test_modules_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:22:59.1130287Z ok (0.025s) 2023-01-11T21:22:59.1130641Z test_multi_dot_import_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:22:59.1131112Z inline_call [('inline in skipfiles: symbolic_trace /opt/conda/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py', 1)] 2023-01-11T21:22:59.1131407Z unimplemented [] 2023-01-11T21:22:59.1131806Z graph_break [('inline in skipfiles: symbolic_trace /opt/conda/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py', 1)] 2023-01-11T21:22:59.1132209Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1132442Z ok (0.013s) 2023-01-11T21:22:59.1132722Z test_multi_import_unspec (__main__.make_unspec_cls..UnspecTest) ... skip: requires detectron2 (0.000s) 2023-01-11T21:22:59.1133249Z test_named_buffers_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 6), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:22:59.1133541Z ok (0.013s) 2023-01-11T21:22:59.1133893Z test_nn_parameter_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:59.1134320Z inline_call [('inline in skipfiles: assertTrue /opt/conda/lib/python3.10/unittest/case.py', 1)] 2023-01-11T21:22:59.1134572Z unimplemented [] 2023-01-11T21:22:59.1134916Z graph_break [('inline in skipfiles: assertTrue /opt/conda/lib/python3.10/unittest/case.py', 1)] 2023-01-11T21:22:59.1135305Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:59.1135536Z ok (0.015s) 2023-01-11T21:22:59.1135789Z test_norm_dtype_unspec (__main__.make_unspec_cls..UnspecTest) ... skip: requires cuda (0.001s) 2023-01-11T21:22:59.1136267Z test_not_rewrite_assert_for_other_errors_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:59.1136677Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:59.1136894Z ok (0.009s) 2023-01-11T21:22:59.1137297Z test_not_rewrite_assert_unspec (__main__.make_unspec_cls..UnspecTest) ... unimplemented [('generic_jump TensorVariable()', 1)] 2023-01-11T21:22:59.1137591Z ok (0.005s) 2023-01-11T21:22:59.1137925Z test_numpy_list_unspec (__main__.make_unspec_cls..UnspecTest) ... 
frames [('total', 2), ('ok', 1)] 2023-01-11T21:22:59.1138189Z unimplemented [] 2023-01-11T21:22:59.1138609Z graph_break [('call torch._dynamo.disable() wrapped function .rand_gen at 0x7fa640f52440>', 1)] 2023-01-11T21:22:59.1138922Z expected failure (0.030s) 2023-01-11T21:22:59.1139287Z test_optimized_deepcopy_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:59.1139726Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1139958Z ok (0.014s) 2023-01-11T21:22:59.1140460Z test_primtorch_no_graph_break_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_prims_common/wrappers.py', 1)] 2023-01-11T21:22:59.1140833Z expected failure (0.003s) 2023-01-11T21:22:59.1141197Z test_primtorch_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:22:59.1141658Z inline_call [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_prims_common/wrappers.py', 1)] 2023-01-11T21:22:59.1141933Z unimplemented [] 2023-01-11T21:22:59.1142319Z graph_break [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_prims_common/wrappers.py', 1)] 2023-01-11T21:22:59.1142760Z ok (0.005s) 2023-01-11T21:22:59.1143169Z test_recursive_map_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:59.1143435Z inline_call [] 2023-01-11T21:22:59.1143739Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:59.1143966Z ok (0.013s) 2023-01-11T21:22:59.1144207Z test_reformer_eval_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1144584Z stats [('calls_captured', 10), ('fusions_possible', 9), ('unique_graphs', 1)] 2023-01-11T21:22:59.1144816Z ok (0.046s) 2023-01-11T21:22:59.1145068Z test_reformer_min_chunk_len_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1145450Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:59.1145680Z ok (0.016s) 2023-01-11T21:22:59.1145927Z test_reformer_remove_unused_args_unspec (__main__.make_unspec_cls..UnspecTest) ... foo 2023-01-11T21:22:59.1146173Z foo 2023-01-11T21:22:59.1146382Z frames [('total', 4), ('ok', 4)] 2023-01-11T21:22:59.1146714Z inline_call [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:22:59.1146975Z unimplemented [] 2023-01-11T21:22:59.1147307Z graph_break [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 2)] 2023-01-11T21:22:59.1147687Z stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:22:59.1147965Z aot_autograd [('total', 2), ('ok', 2)] 2023-01-11T21:22:59.1148159Z ok (0.026s) 2023-01-11T21:22:59.1148421Z test_reformer_sorting_unspec (__main__.make_unspec_cls..UnspecTest) ... inline_call [] 2023-01-11T21:22:59.1148794Z stats [('calls_captured', 14), ('fusions_possible', 13), ('unique_graphs', 1)] 2023-01-11T21:22:59.1149025Z ok (0.040s) 2023-01-11T21:22:59.1149382Z test_reformer_train_unspec (__main__.make_unspec_cls..UnspecTest) ... 
frames [('total', 4), ('ok', 4)] 2023-01-11T21:22:59.1149860Z inline_call [('inline in skipfiles: save_for_backward /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py', 1)] 2023-01-11T21:22:59.1150232Z unimplemented [] 2023-01-11T21:22:59.1150713Z graph_break [('autograd.Function with requires_grad', 1), ('inline in skipfiles: save_for_backward /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py', 1)] 2023-01-11T21:22:59.1151177Z stats [('calls_captured', 10), ('fusions_possible', 6), ('unique_graphs', 4)] 2023-01-11T21:22:59.1151395Z ok (0.065s) 2023-01-11T21:22:59.1151752Z test_reinplacing_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:59.1152142Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:59.1152423Z aot_autograd [('total', 1), ('ok', 1)] 2023-01-11T21:22:59.1152614Z ok (0.297s) 2023-01-11T21:22:59.1153058Z test_relative_import_no_modulename_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:59.1153367Z ok (0.008s) 2023-01-11T21:22:59.1153832Z test_relative_import_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:59.1154129Z ok (0.007s) 2023-01-11T21:22:59.1154555Z test_rewrite_assert_noop_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 9), ('fusions_possible', 6), ('unique_graphs', 3)] 2023-01-11T21:22:59.1154843Z ok (0.022s) 2023-01-11T21:22:59.1155265Z test_rewrite_assert_with_fstring_msg_unspec (__main__.make_unspec_cls..UnspecTest) ... unimplemented [('generic_jump TensorVariable()', 1)] 2023-01-11T21:22:59.1155565Z ok (0.005s) 2023-01-11T21:22:59.1156006Z test_rewrite_assert_with_msg_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 18), ('fusions_possible', 15), ('unique_graphs', 3)] 2023-01-11T21:22:59.1156298Z ok (0.024s) 2023-01-11T21:22:59.1156778Z test_rewrite_assert_without_msg_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 12), ('fusions_possible', 10), ('unique_graphs', 2)] 2023-01-11T21:22:59.1157082Z ok (0.017s) 2023-01-11T21:22:59.1157413Z test_rng_state_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:22:59.1157839Z unimplemented [('TODO: make torch.random.set_rng_state work with FakeTensor/aot_autograd', 1)] 2023-01-11T21:22:59.1158264Z graph_break [('TODO: make torch.random.set_rng_state work with FakeTensor/aot_autograd', 2)] 2023-01-11T21:22:59.1158643Z stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:22:59.1158860Z ok (0.031s) 2023-01-11T21:22:59.1159280Z test_seq_append_list_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:22:59.1159583Z ok (0.025s) 2023-01-11T21:22:59.1160315Z test_sigmoid_out_unspec (__main__.make_unspec_cls..UnspecTest) ... /var/lib/jenkins/workspace/test/dynamo/test_repros.py:1543: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [3, 5]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). 
(Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:22:59.1161041Z torch.sigmoid(inp, out=out1) 2023-01-11T21:22:59.1161939Z /opt/conda/lib/python3.10/site-packages/torch/_prims_common/wrappers.py:145: UserWarning: An output with one or more elements was resized since it had shape torch.Size([]) which does not match the required output shape {str(shape)}. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). 2023-01-11T21:22:59.1162541Z warnings.warn(msg) 2023-01-11T21:22:59.1163214Z /var/lib/jenkins/workspace/test/dynamo/test_repros.py:1543: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [3, 5]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:22:59.1163856Z torch.sigmoid(inp, out=out1) 2023-01-11T21:22:59.1164103Z frames [('total', 7), ('ok', 7)] 2023-01-11T21:22:59.1164406Z inline_call [('call_function UserDefinedClassVariable() [] {}', 1)] 2023-01-11T21:22:59.1164640Z ok (0.035s) 2023-01-11T21:22:59.1165078Z test_slice_into_list_mutable_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 30), ('fusions_possible', 29), ('unique_graphs', 1)] 2023-01-11T21:22:59.1165402Z ok (0.087s) 2023-01-11T21:22:59.1165782Z test_slicing_dynamic_shape_setitem_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 2), ('ok', 1)] 2023-01-11T21:22:59.1166156Z unimplemented [('Dynamic slicing not supported', 1)] 2023-01-11T21:22:59.1166452Z graph_break [('Dynamic slicing not supported', 1)] 2023-01-11T21:22:59.1166798Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1167093Z aot_autograd [('total', 1), ('ok', 1)] 2023-01-11T21:22:59.1167283Z ok (0.013s) 2023-01-11T21:22:59.1167633Z test_slicing_dynamic_shape_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:59.1168000Z unimplemented [('Dynamic slicing not supported', 2)] 2023-01-11T21:22:59.1168349Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:59.1168572Z ok (0.020s) 2023-01-11T21:22:59.1169328Z test_sort_out_unspec (__main__.make_unspec_cls..UnspecTest) ... /var/lib/jenkins/workspace/test/dynamo/test_repros.py:1527: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [3]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:22:59.1170050Z torch.sort(tensor, out=(values1, indices1)) 2023-01-11T21:22:59.1171065Z /opt/conda/lib/python3.10/site-packages/torch/_dynamo/utils.py:1052: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [3]. 
This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:22:59.1171723Z return node.target(*args, **kwargs) 2023-01-11T21:22:59.1172409Z /var/lib/jenkins/workspace/test/dynamo/test_repros.py:1527: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [3]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:22:59.1173064Z torch.sort(tensor, out=(values1, indices1)) 2023-01-11T21:22:59.1173731Z /var/lib/jenkins/workspace/test/dynamo/test_repros.py:1527: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [3]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:22:59.1174384Z torch.sort(tensor, out=(values1, indices1)) 2023-01-11T21:22:59.1174644Z frames [('total', 7), ('ok', 7)] 2023-01-11T21:22:59.1174962Z inline_call [('call_function UserDefinedClassVariable() [] {}', 1)] 2023-01-11T21:22:59.1175184Z ok (0.043s) 2023-01-11T21:22:59.1175550Z test_specialized_stride_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:59.1175955Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1176175Z ok (0.012s) 2023-01-11T21:22:59.1176542Z test_swin_base_tensor_attr_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:59.1176966Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:22:59.1177195Z ok (0.016s) 2023-01-11T21:22:59.1177555Z test_tensor_isinstance_tuple_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:59.1177950Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1178178Z ok (0.011s) 2023-01-11T21:22:59.1178515Z test_tokenization_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:22:59.1178948Z inline_call [('inline in skipfiles: __init__ /opt/conda/lib/python3.10/collections/__init__.py', 2)] 2023-01-11T21:22:59.1179213Z unimplemented [] 2023-01-11T21:22:59.1179563Z graph_break [('inline in skipfiles: __init__ /opt/conda/lib/python3.10/collections/__init__.py', 2)] 2023-01-11T21:22:59.1179803Z ok (0.010s) 2023-01-11T21:22:59.1180264Z test_torch_ops_aten_unspec (__main__.make_unspec_cls..UnspecTest) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1180571Z ok (0.012s) 2023-01-11T21:22:59.1180989Z test_vdd_duplicate_error_unspec (__main__.make_unspec_cls..UnspecTest) ... 
stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:22:59.1181290Z ok (0.014s) 2023-01-11T21:22:59.1181658Z test_while_loop_graph_break_unspec (__main__.make_unspec_cls..UnspecTest) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:22:59.1181913Z inline_call [] 2023-01-11T21:22:59.1182291Z unimplemented [('call_function in skip_files /opt/conda/lib/python3.10/site-packages/torch/_dynamo/__init__.py', 1)] 2023-01-11T21:22:59.1182886Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1183118Z ok (0.008s) 2023-01-11T21:22:59.1183367Z test_with_on_graph_break_inst_unspec (__main__.make_unspec_cls..UnspecTest) ... Hello world 2023-01-11T21:22:59.1183629Z Hello world 2023-01-11T21:22:59.1183853Z frames [('total', 6), ('ok', 6)] 2023-01-11T21:22:59.1184181Z inline_call [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:22:59.1184445Z unimplemented [] 2023-01-11T21:22:59.1184817Z graph_break [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 2), ('Tensor.backward', 1)] 2023-01-11T21:22:59.1185205Z stats [('calls_captured', 11), ('fusions_possible', 7), ('unique_graphs', 4)] 2023-01-11T21:22:59.1185435Z ok (0.032s) 2023-01-11T21:22:59.1185688Z test_builtin_functions_on_cuda (__main__.UnspecTests) ... skip: requires cuda (0.001s) 2023-01-11T21:22:59.1186055Z test_builtin_getitem (__main__.UnspecTests) ... frames [('total', 1)] 2023-01-11T21:22:59.1186283Z expected failure (0.009s) 2023-01-11T21:22:59.1186601Z test_builtin_max_min (__main__.UnspecTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:59.1186961Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:22:59.1187181Z ok (0.011s) 2023-01-11T21:22:59.1187519Z test_feed_random_values_into_graph_only (__main__.UnspecTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:59.1187896Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:22:59.1188112Z ok (0.016s) 2023-01-11T21:22:59.1188469Z test_multiple_consecutive_random_calls_before_graph (__main__.UnspecTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:59.1188861Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:59.1189085Z ok (0.031s) 2023-01-11T21:22:59.1189379Z test_no_recompilations (__main__.UnspecTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:59.1189744Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:59.1189975Z ok (0.007s) 2023-01-11T21:22:59.1190330Z test_numpy_correctness (__main__.UnspecTests) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:22:59.1190764Z unimplemented [('reconstruct: ConstantVariable(float64)', 1)] 2023-01-11T21:22:59.1191078Z graph_break [('Tensor.numpy', 2), ('numpy', 2)] 2023-01-11T21:22:59.1191401Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1191632Z ok (0.025s) 2023-01-11T21:22:59.1191958Z test_random_call_with_while_loop (__main__.UnspecTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:59.1192331Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:22:59.1192545Z ok (0.008s) 2023-01-11T21:22:59.1192871Z test_random_values_with_graph_break (__main__.UnspecTests) ... 
frames [('total', 3), ('ok', 3)] 2023-01-11T21:22:59.1193126Z unimplemented [] 2023-01-11T21:22:59.1193350Z graph_break [('Tensor.item', 2)] 2023-01-11T21:22:59.1193674Z stats [('calls_captured', 4), ('unique_graphs', 3), ('fusions_possible', 1)] 2023-01-11T21:22:59.1193903Z ok (0.021s) 2023-01-11T21:22:59.1194251Z test_unspec_float_precision (__main__.UnspecTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:22:59.1194629Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:22:59.1194852Z ok (0.109s) 2023-01-11T21:22:59.1194958Z 2023-01-11T21:22:59.1195148Z ---------------------------------------------------------------------- 2023-01-11T21:22:59.1195401Z Ran 137 tests in 7.194s 2023-01-11T21:22:59.1195515Z 2023-01-11T21:22:59.1195613Z OK (skipped=3, expected failures=5) 2023-01-11T21:22:59.1195746Z 2023-01-11T21:22:59.1195834Z Generating XML reports... 2023-01-11T21:22:59.1196255Z Generated XML report: test-reports/python-unittest/dynamo.test_unspec/TEST-UnspecNNModuleTests-20230111212251.xml 2023-01-11T21:22:59.1196800Z Generated XML report: test-reports/python-unittest/dynamo.test_unspec/TEST-UnspecReproTests-20230111212251.xml 2023-01-11T21:22:59.1197316Z Generated XML report: test-reports/python-unittest/dynamo.test_unspec/TEST-UnspecTests-20230111212251.xml 2023-01-11T21:22:59.1197542Z 2023-01-11T21:22:59.1197888Z ##[endgroup] 2023-01-11T21:22:59.1198295Z FINISHED PRINTING LOG FILE of dynamo/test_unspec (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_unspec_sghh16j8) 2023-01-11T21:22:59.1198525Z 2023-01-11T21:23:01.1473391Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:23:01.2369600Z Ignoring disabled issues: [] 2023-01-11T21:23:01.2509711Z Running profiler/test_profiler_tree ... [2023-01-11 21:23:01.250586] 2023-01-11T21:23:01.2511054Z Executing ['/opt/conda/bin/python', '-bb', 'profiler/test_profiler_tree.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:23:01.250879] 2023-01-11T21:23:03.3241547Z 2023-01-11T21:23:03.3242194Z Expand the folded group to see the log file of profiler/test_profiler_tree 2023-01-11T21:23:03.3243394Z ##[group]PRINTING LOG FILE of profiler/test_profiler_tree (/var/lib/jenkins/workspace/test/test-reports/profiler-test_profiler_tree_ha4oif61) 2023-01-11T21:23:03.3243665Z 2023-01-11T21:23:03.3243742Z Running tests... 2023-01-11T21:23:03.3244323Z ---------------------------------------------------------------------- 2023-01-11T21:23:03.3245020Z Test results will be stored in test-reports/python-unittest/profiler.test_profiler_tree 2023-01-11T21:23:03.3246635Z test_profiler_experimental_tree (__main__.TestProfilerTree) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/82499 for platform(s) linux, rocm. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.244s) 2023-01-11T21:23:03.3247846Z test_profiler_experimental_tree_cuda (__main__.TestProfilerTree) ... skip: https://github.com/pytorch/pytorch/issues/83606 (0.001s) 2023-01-11T21:23:03.3248673Z test_profiler_experimental_tree_cuda_detailed (__main__.TestProfilerTree) ... skip: https://github.com/pytorch/pytorch/issues/83606 (0.001s) 2023-01-11T21:23:03.3249514Z test_profiler_experimental_tree_cuda_with_stream (__main__.TestProfilerTree) ... 
skip: https://github.com/pytorch/pytorch/issues/83606 (0.001s) 2023-01-11T21:23:03.3250728Z test_profiler_experimental_tree_with_memory (__main__.TestProfilerTree) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/82501 for platform(s) linux, rocm. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.001s) 2023-01-11T21:23:03.3251583Z test_profiler_experimental_tree_with_memory_and_stack (__main__.TestProfilerTree) ... STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:23:03.3252112Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:23:03.3252566Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:23:03.3253090Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:23:03.3253511Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:23:03.3253963Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:23:03.3254428Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:23:03.3254860Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:23:03.3255289Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:23:03.3255558Z ok (0.015s) 2023-01-11T21:23:03.3256376Z test_profiler_experimental_tree_with_record_function (__main__.TestProfilerTree) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/83246 for platform(s) linux, rocm. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.001s) 2023-01-11T21:23:03.3257236Z test_profiler_experimental_tree_with_stack_and_modules (__main__.TestProfilerTree) ... 
STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:23:03.3257740Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:23:03.3258189Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:23:03.3258628Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:23:03.3259055Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:23:03.3259488Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:23:03.3259936Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:23:03.3260353Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:23:03.3260795Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:23:03.3261058Z ok (0.014s) 2023-01-11T21:23:03.3261556Z test_profiler_experimental_tree_with_stack_and_torch_dispatch (__main__.TestProfilerTree) ... STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:23:03.3262064Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:23:03.3262698Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:23:03.3263143Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:23:03.3263640Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:23:03.3264072Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:23:03.3264506Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:23:03.3264934Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:23:03.3265362Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:23:03.3265625Z ok (0.006s) 2023-01-11T21:23:03.3266120Z test_profiler_experimental_tree_with_stack_and_torch_function (__main__.TestProfilerTree) ... 
STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:23:03.3266691Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:23:03.3267126Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:23:03.3267558Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:23:03.3267989Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:23:03.3268430Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:23:03.3268849Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:23:03.3269274Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:23:03.3269718Z STAGE:2023-01-11 21:23:02 1532:1532 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:23:03.3269969Z ok (0.005s) 2023-01-11T21:23:03.3270074Z 2023-01-11T21:23:03.3270348Z ---------------------------------------------------------------------- 2023-01-11T21:23:03.3270597Z Ran 10 tests in 0.289s 2023-01-11T21:23:03.3270712Z 2023-01-11T21:23:03.3270786Z OK (skipped=6) 2023-01-11T21:23:03.3270879Z 2023-01-11T21:23:03.3270964Z Generating XML reports... 2023-01-11T21:23:03.3271404Z Generated XML report: test-reports/python-unittest/profiler.test_profiler_tree/TEST-TestProfilerTree-20230111212302.xml 2023-01-11T21:23:03.3271656Z 2023-01-11T21:23:03.3271941Z ##[endgroup] 2023-01-11T21:23:03.3272359Z FINISHED PRINTING LOG FILE of profiler/test_profiler_tree (/var/lib/jenkins/workspace/test/test-reports/profiler-test_profiler_tree_ha4oif61) 2023-01-11T21:23:03.3272604Z 2023-01-11T21:23:05.3170166Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:23:05.4078730Z Ignoring disabled issues: [] 2023-01-11T21:23:05.4214392Z Running test_ao_sparsity ... [2023-01-11 21:23:05.421127] 2023-01-11T21:23:05.4215998Z Executing ['/opt/conda/bin/python', '-bb', 'test_ao_sparsity.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:23:05.421397] 2023-01-11T21:23:13.6317547Z 2023-01-11T21:23:13.6318545Z Expand the folded group to see the log file of test_ao_sparsity 2023-01-11T21:23:13.6319902Z ##[group]PRINTING LOG FILE of test_ao_sparsity (/var/lib/jenkins/workspace/test/test-reports/test_ao_sparsity_8pf9rcmq) 2023-01-11T21:23:13.6320272Z 2023-01-11T21:23:13.6320706Z Running tests... 2023-01-11T21:23:13.6321520Z ---------------------------------------------------------------------- 2023-01-11T21:23:13.6322151Z Test results will be stored in test-reports/python-unittest/test_ao_sparsity 2023-01-11T21:23:13.6322528Z test_activation_sparsifier (ao.sparsity.test_activation_sparsifier.TestActivationSparsifier) 2023-01-11T21:23:13.6322902Z Simulates the workflow of the activation sparsifier, starting from object creation ... ok (0.308s) 2023-01-11T21:23:13.6323427Z test_constructor (ao.sparsity.test_data_scheduler.TestBaseDataScheduler) 2023-01-11T21:23:13.6323759Z Checks if the warning is thrown if the scheduler step is called ... ok (0.002s) 2023-01-11T21:23:13.6324093Z test_order_of_steps (ao.sparsity.test_data_scheduler.TestBaseDataScheduler) ... 
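The repeated "STAGE: ... ActivityProfilerController.cpp" lines in the profiler/test_profiler_tree group above are emitted by the Kineto backend once per profiler run, in the order Warm Up -> Collection -> Post Processing. A generic sketch in Python of a CPU-only profiler run that goes through those stages (assumed toy model, not the actual test_profiler_tree fixtures):

import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(4, 4)
x = torch.randn(2, 4)

# Each profile() context triggers the three Kineto stages logged above.
with profile(activities=[ProfilerActivity.CPU], with_stack=True, profile_memory=True) as prof:
    model(x).sum().backward()

print(prof.key_averages(group_by_stack_n=5).table(sort_by="cpu_time_total", row_limit=5))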
ok (0.007s) 2023-01-11T21:23:13.6324468Z test_state_dict (ao.sparsity.test_data_scheduler.TestBaseDataScheduler) ... ok (0.003s) 2023-01-11T21:23:13.6324828Z test_step (ao.sparsity.test_data_scheduler.TestBaseDataScheduler) ... ok (0.008s) 2023-01-11T21:23:13.6325607Z test_nn_embeddings (ao.sparsity.test_data_sparsifier.TestBaseDataSparsifier) ... /opt/conda/lib/python3.10/site-packages/torch/ao/pruning/_experimental/data_sparsifier/base_data_sparsifier.py:104: UserWarning: Replacing existing data of the same name. - Did you mean a different name? 2023-01-11T21:23:13.6326231Z warnings.warn("Replacing existing data of the same name. - Did you mean a different name?") 2023-01-11T21:23:13.6326484Z ok (0.021s) 2023-01-11T21:23:13.6326839Z test_nn_parameters (ao.sparsity.test_data_sparsifier.TestBaseDataSparsifier) ... ok (0.015s) 2023-01-11T21:23:13.6327221Z test_tensors (ao.sparsity.test_data_sparsifier.TestBaseDataSparsifier) ... ok (0.015s) 2023-01-11T21:23:13.6327569Z test_constructor (ao.sparsity.test_sparsifier.TestBaseSparsifier) ... ok (0.002s) 2023-01-11T21:23:13.6327923Z test_mask_squash (ao.sparsity.test_sparsifier.TestBaseSparsifier) ... ok (0.001s) 2023-01-11T21:23:13.6328290Z test_mask_squash_with_params1 (ao.sparsity.test_sparsifier.TestBaseSparsifier) ... ok (0.002s) 2023-01-11T21:23:13.6328649Z test_mask_squash_with_params2 (ao.sparsity.test_sparsifier.TestBaseSparsifier) ... ok (0.002s) 2023-01-11T21:23:13.6329018Z test_mask_squash_with_params3 (ao.sparsity.test_sparsifier.TestBaseSparsifier) ... ok (0.002s) 2023-01-11T21:23:13.6329378Z test_prepare_config (ao.sparsity.test_sparsifier.TestBaseSparsifier) ... ok (0.001s) 2023-01-11T21:23:13.6329732Z test_state_dict (ao.sparsity.test_sparsifier.TestBaseSparsifier) ... ok (0.003s) 2023-01-11T21:23:13.6330064Z test_step (ao.sparsity.test_sparsifier.TestBaseSparsifier) ... ok (0.001s) 2023-01-11T21:23:13.6330435Z test_complex_conv2d (ao.sparsity.test_structured_sparsifier.TestBaseStructuredSparsifier) 2023-01-11T21:23:13.6330791Z Test fusion for models that contain Conv2d & Linear modules. ... ok (0.090s) 2023-01-11T21:23:13.6331144Z test_constructor (ao.sparsity.test_structured_sparsifier.TestBaseStructuredSparsifier) ... ok (0.004s) 2023-01-11T21:23:13.6331571Z test_prepare_conv2d (ao.sparsity.test_structured_sparsifier.TestBaseStructuredSparsifier) ... ok (0.028s) 2023-01-11T21:23:13.6332005Z test_prepare_linear (ao.sparsity.test_structured_sparsifier.TestBaseStructuredSparsifier) ... ok (0.012s) 2023-01-11T21:23:13.6332451Z test_prune_conv2d_activation_conv2d (ao.sparsity.test_structured_sparsifier.TestBaseStructuredSparsifier) ... ok (0.152s) 2023-01-11T21:23:13.6332890Z test_prune_conv2d_bias_conv2d (ao.sparsity.test_structured_sparsifier.TestBaseStructuredSparsifier) ... ok (0.092s) 2023-01-11T21:23:13.6333334Z test_prune_conv2d_conv2d (ao.sparsity.test_structured_sparsifier.TestBaseStructuredSparsifier) ... ok (0.050s) 2023-01-11T21:23:13.6333779Z test_prune_conv2d_padding_conv2d (ao.sparsity.test_structured_sparsifier.TestBaseStructuredSparsifier) ... ok (0.267s) 2023-01-11T21:23:13.6334226Z test_prune_conv2d_pool_conv2d (ao.sparsity.test_structured_sparsifier.TestBaseStructuredSparsifier) ... ok (0.043s) 2023-01-11T21:23:13.6334663Z test_prune_linear_activation_linear (ao.sparsity.test_structured_sparsifier.TestBaseStructuredSparsifier) ... ok (0.061s) 2023-01-11T21:23:13.6335114Z test_prune_linear_bias_linear (ao.sparsity.test_structured_sparsifier.TestBaseStructuredSparsifier) ... 
ok (0.061s) 2023-01-11T21:23:13.6335545Z test_prune_linear_linear (ao.sparsity.test_structured_sparsifier.TestBaseStructuredSparsifier) 2023-01-11T21:23:13.6335936Z test pruning linear-> linear modules ... ok (0.060s) 2023-01-11T21:23:13.6336327Z test_step_conv2d (ao.sparsity.test_structured_sparsifier.TestBaseStructuredSparsifier) ... ok (0.025s) 2023-01-11T21:23:13.6336841Z test_step_linear (ao.sparsity.test_structured_sparsifier.TestBaseStructuredSparsifier) ... ok (0.009s) 2023-01-11T21:23:13.6337691Z test_convert_without_squash_mask (ao.sparsity.test_composability.TestComposability) ... /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/observer.py:214: UserWarning: Please use quant_min and quant_max to specify the range for observers. reduce_range will be deprecated in a future release of PyTorch. 2023-01-11T21:23:13.6338183Z warnings.warn( 2023-01-11T21:23:13.6338344Z ok (0.288s) 2023-01-11T21:23:13.6338621Z test_fusion_before_s_prep (ao.sparsity.test_composability.TestComposability) ... ok (0.284s) 2023-01-11T21:23:13.6338994Z test_q_prep_before_s_prep (ao.sparsity.test_composability.TestComposability) ... ok (0.293s) 2023-01-11T21:23:13.6339401Z test_qat_prep_before_s_prep (ao.sparsity.test_composability.TestComposability) ... ok (0.011s) 2023-01-11T21:23:13.6339786Z test_s_prep_before_fusion (ao.sparsity.test_composability.TestComposability) ... ok (0.293s) 2023-01-11T21:23:13.6340155Z test_s_prep_before_q_prep (ao.sparsity.test_composability.TestComposability) ... ok (0.299s) 2023-01-11T21:23:13.6340532Z test_s_prep_before_qat_prep (ao.sparsity.test_composability.TestComposability) ... ok (0.012s) 2023-01-11T21:23:13.6340884Z test_constructor (ao.sparsity.test_scheduler.TestCubicScheduler) ... ok (0.002s) 2023-01-11T21:23:13.6341691Z test_step (ao.sparsity.test_scheduler.TestCubicScheduler) ... /opt/conda/lib/python3.10/site-packages/torch/ao/pruning/scheduler/base_scheduler.py:125: UserWarning: Detected call of `scheduler.step()` before `sparsifier.step()`. You have to make sure you run the sparsifier.step() BEFORE any calls to the scheduer.step(). 2023-01-11T21:23:13.6342264Z warnings.warn("Detected call of `scheduler.step()` before `sparsifier.step()`. " 2023-01-11T21:23:13.6342658Z ok (0.003s) 2023-01-11T21:23:13.6342919Z test_jit_trace (ao.sparsity.test_parametrization.TestFakeSparsity) ... ok (0.085s) 2023-01-11T21:23:13.6343283Z test_masking_logic (ao.sparsity.test_parametrization.TestFakeSparsity) ... ok (0.002s) 2023-01-11T21:23:13.6343658Z test_state_dict_preserved (ao.sparsity.test_parametrization.TestFakeSparsity) ... ok (0.005s) 2023-01-11T21:23:13.6344031Z test_weights_parametrized (ao.sparsity.test_parametrization.TestFakeSparsity) ... ok (0.002s) 2023-01-11T21:23:13.6344412Z test_q_prep_fx_before_s_prep (ao.sparsity.test_composability.TestFxComposability) 2023-01-11T21:23:13.6344860Z This test checks that the ordering of prepare_fx -> sparse prepare -> convert_fx ... ok (0.309s) 2023-01-11T21:23:13.6345220Z test_q_prep_fx_s_prep_ref_conv (ao.sparsity.test_composability.TestFxComposability) 2023-01-11T21:23:13.6345657Z This checks that the ordering: prepare_fx -> sparse prepare -> convert_to_reference_fx ... ok (0.295s) 2023-01-11T21:23:13.6346030Z test_s_prep_before_q_prep_fx (ao.sparsity.test_composability.TestFxComposability) 2023-01-11T21:23:13.6346462Z This test checks that the ordering of sparse prepare -> prepare_fx -> convert_fx ... 
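The TestFxComposability docstrings above describe an ordering contract between FX quantization and the sparsity flow: prepare_fx, then the sparsifier's prepare, then convert_fx (or convert_to_reference_fx). A rough Python sketch of that ordering under assumed toy inputs (hypothetical module and tensor_fqn, not the ao.sparsity test models):

import torch
from torch.ao.pruning import WeightNormSparsifier
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 8),)

# 1. prepare_fx inserts observers into the traced model
prepared = prepare_fx(model, get_default_qconfig_mapping(), example_inputs)
# 2. sparse prepare attaches a mask to the still-float Linear weight
sparsifier = WeightNormSparsifier()
sparsifier.prepare(prepared, config=[{"tensor_fqn": "0.weight"}])
sparsifier.step()
sparsifier.squash_mask()
# 3. calibrate, then convert_fx swaps in quantized modules
prepared(*example_inputs)
quantized = convert_fx(prepared)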
ok (0.321s) 2023-01-11T21:23:13.6346821Z test_s_prep_before_qat_prep_fx (ao.sparsity.test_composability.TestFxComposability) 2023-01-11T21:23:13.6347258Z This test checks that the ordering of sparse prepare -> prepare_qat_fx -> convert_fx ... ok (0.029s) 2023-01-11T21:23:13.6347618Z test_s_prep_q_prep_fx_ref (ao.sparsity.test_composability.TestFxComposability) 2023-01-11T21:23:13.6348058Z This checks that the ordering: sparse prepare -> prepare_fx -> convert_to_reference_fx ... ok (0.294s) 2023-01-11T21:23:13.6348429Z test_constructor (ao.sparsity.test_sparsifier.TestNearlyDiagonalSparsifier) ... ok (0.002s) 2023-01-11T21:23:13.6348827Z test_mask_squash (ao.sparsity.test_sparsifier.TestNearlyDiagonalSparsifier) ... ok (0.002s) 2023-01-11T21:23:13.6349222Z test_prepare (ao.sparsity.test_sparsifier.TestNearlyDiagonalSparsifier) ... ok (0.001s) 2023-01-11T21:23:13.6349683Z test_sparsity_levels (ao.sparsity.test_sparsifier.TestNearlyDiagonalSparsifier) ... ok (0.437s) 2023-01-11T21:23:13.6350063Z test_step (ao.sparsity.test_sparsifier.TestNearlyDiagonalSparsifier) ... ok (0.259s) 2023-01-11T21:23:13.6350508Z test_nn_embeddings (ao.sparsity.test_data_sparsifier.TestNormDataSparsifiers) ... ok (0.276s) 2023-01-11T21:23:13.6350895Z test_nn_parameters (ao.sparsity.test_data_sparsifier.TestNormDataSparsifiers) ... ok (0.225s) 2023-01-11T21:23:13.6351262Z test_tensors (ao.sparsity.test_data_sparsifier.TestNormDataSparsifiers) ... ok (0.245s) 2023-01-11T21:23:13.6351682Z test_ptq_quantize_first (ao.sparsity.test_data_sparsifier.TestQuantizationUtils) 2023-01-11T21:23:13.6352005Z The expectation is post_training_sparse_quantize function ... ok (0.019s) 2023-01-11T21:23:13.6352339Z test_ptq_sparsify_first (ao.sparsity.test_data_sparsifier.TestQuantizationUtils) 2023-01-11T21:23:13.6352716Z The expectation is post_training_sparse_quantize function ... ok (0.007s) 2023-01-11T21:23:13.6353256Z test_sparse_qlinear (ao.sparsity.test_kernels.TestQuantizedSparseKernels) ... 2023-01-11 21:23:12,642 - root - INFO - static sparse qlinear is only available in fbgemm 2023-01-11T21:23:13.6353740Z 2023-01-11 21:23:12,656 - root - INFO - static sparse qlinear is only available in fbgemm 2023-01-11T21:23:13.6354131Z 2023-01-11 21:23:12,666 - root - INFO - dynamic sparse qlinear is only available in qnnpack 2023-01-11T21:23:13.6354514Z 2023-01-11 21:23:12,669 - root - INFO - dynamic sparse qlinear is only available in qnnpack 2023-01-11T21:23:13.6354736Z ok (0.060s) 2023-01-11T21:23:13.6355333Z test_sparse_qlinear (ao.sparsity.test_kernels.TestQuantizedSparseLayers) ... /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/utils.py:310: UserWarning: must run observer before calling calculate_qparams. Returning default values. 2023-01-11T21:23:13.6355760Z warnings.warn( 2023-01-11T21:23:13.6356096Z [W qlinear_dynamic.cpp:247] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (function operator()) 2023-01-11T21:23:13.6356705Z /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/utils.py:302: UserWarning: must run observer before calling calculate_qparams. Returning default values. 2023-01-11T21:23:13.6357037Z warnings.warn( 2023-01-11T21:23:13.6357207Z ok (0.107s) 2023-01-11T21:23:13.6357479Z test_sparse_qlinear_serdes (ao.sparsity.test_kernels.TestQuantizedSparseLayers) ... ok (0.139s) 2023-01-11T21:23:13.6357840Z test_constructor (ao.sparsity.test_scheduler.TestScheduler) ... 
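The base_scheduler.py UserWarning captured above is an ordering hint: sparsifier.step() has to run before any call to the scheduler's step(). A minimal Python sketch of the intended loop (illustrative model and config, not the TestCubicScheduler code; the scheduler call is left as a comment):

import torch
from torch.ao.pruning import WeightNormSparsifier

model = torch.nn.Sequential(torch.nn.Linear(8, 8))
sparsifier = WeightNormSparsifier(sparsity_level=0.5)
sparsifier.prepare(model, config=[{"tensor_fqn": "0.weight"}])

for _ in range(3):
    # ... training step would go here ...
    sparsifier.step()  # update the masks first
    # scheduler.step() belongs here, only after sparsifier.step(), to avoid the warning

sparsifier.squash_mask()  # fold the final masks back into the weights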
ok (0.001s) 2023-01-11T21:23:13.6358174Z test_lambda_scheduler (ao.sparsity.test_scheduler.TestScheduler) ... ok (0.001s) 2023-01-11T21:23:13.6358490Z test_order_of_steps (ao.sparsity.test_scheduler.TestScheduler) 2023-01-11T21:23:13.6358781Z Checks if the warning is thrown if the scheduler step is called ... ok (0.005s) 2023-01-11T21:23:13.6359093Z test_step (ao.sparsity.test_scheduler.TestScheduler) ... ok (0.001s) 2023-01-11T21:23:13.6359432Z test_fqn_to_module (ao.sparsity.test_sparsity_utils.TestSparsityUtilFunctions) 2023-01-11T21:23:13.6359734Z Tests that fqn_to_module operates as inverse ... ok (0.004s) 2023-01-11T21:23:13.6360054Z test_fqn_to_module_fail (ao.sparsity.test_sparsity_utils.TestSparsityUtilFunctions) 2023-01-11T21:23:13.6360374Z Tests that fqn_to_module returns None when it tries to ... ok (0.002s) 2023-01-11T21:23:13.6360691Z test_fqn_to_module_for_tensors (ao.sparsity.test_sparsity_utils.TestSparsityUtilFunctions) 2023-01-11T21:23:13.6361036Z Tests that fqn_to_module works for tensors, actually all parameters ... ok (0.005s) 2023-01-11T21:23:13.6361390Z test_get_arg_info_from_tensor_fqn (ao.sparsity.test_sparsity_utils.TestSparsityUtilFunctions) 2023-01-11T21:23:13.6361745Z Tests that get_arg_info_from_tensor_fqn works for all parameters of the model. ... ok (0.004s) 2023-01-11T21:23:13.6362101Z test_get_arg_info_from_tensor_fqn_fail (ao.sparsity.test_sparsity_utils.TestSparsityUtilFunctions) 2023-01-11T21:23:13.6362584Z Tests that get_arg_info_from_tensor_fqn works as expected for invalid tensor_fqn ... ok (0.003s) 2023-01-11T21:23:13.6362930Z test_module_to_fqn (ao.sparsity.test_sparsity_utils.TestSparsityUtilFunctions) 2023-01-11T21:23:13.6363264Z Tests that module_to_fqn works as expected when compared to known good ... ok (0.003s) 2023-01-11T21:23:13.6363594Z test_module_to_fqn_fail (ao.sparsity.test_sparsity_utils.TestSparsityUtilFunctions) 2023-01-11T21:23:13.6363995Z Tests that module_to_fqn returns None when an fqn that doesn't ... ok (0.002s) 2023-01-11T21:23:13.6364326Z test_module_to_fqn_root (ao.sparsity.test_sparsity_utils.TestSparsityUtilFunctions) 2023-01-11T21:23:13.6364731Z Tests that module_to_fqn returns '' when model and target module are the same ... ok (0.002s) 2023-01-11T21:23:13.6365087Z test_constructor (ao.sparsity.test_sparsifier.TestWeightNormSparsifier) ... ok (0.001s) 2023-01-11T21:23:13.6365509Z test_mask_squash (ao.sparsity.test_sparsifier.TestWeightNormSparsifier) ... ok (0.001s) 2023-01-11T21:23:13.6365882Z test_prepare (ao.sparsity.test_sparsifier.TestWeightNormSparsifier) ... ok (0.001s) 2023-01-11T21:23:13.6366245Z test_sparsity_levels (ao.sparsity.test_sparsifier.TestWeightNormSparsifier) ... ok (0.028s) 2023-01-11T21:23:13.6366620Z test_step (ao.sparsity.test_sparsifier.TestWeightNormSparsifier) ... ok (0.228s) 2023-01-11T21:23:13.6366984Z test_step_2_of_4 (ao.sparsity.test_sparsifier.TestWeightNormSparsifier) ... ok (0.013s) 2023-01-11T21:23:13.6367183Z 2023-01-11T21:23:13.6367370Z ---------------------------------------------------------------------- 2023-01-11T21:23:13.6367613Z Ran 79 tests in 6.264s 2023-01-11T21:23:13.6367727Z 2023-01-11T21:23:13.6367790Z OK 2023-01-11T21:23:13.6367880Z 2023-01-11T21:23:13.6367965Z Generating XML reports... 
2023-01-11T21:23:13.6368469Z Generated XML report: test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_activation_sparsifier.TestActivationSparsifier-20230111212306.xml 2023-01-11T21:23:13.6369156Z Generated XML report: test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_data_scheduler.TestBaseDataScheduler-20230111212306.xml 2023-01-11T21:23:13.6369815Z Generated XML report: test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_data_sparsifier.TestBaseDataSparsifier-20230111212306.xml 2023-01-11T21:23:13.6370457Z Generated XML report: test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_sparsifier.TestBaseSparsifier-20230111212306.xml 2023-01-11T21:23:13.6371126Z Generated XML report: test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_structured_sparsifier.TestBaseStructuredSparsifier-20230111212306.xml 2023-01-11T21:23:13.6371796Z Generated XML report: test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_composability.TestComposability-20230111212306.xml 2023-01-11T21:23:13.6372479Z Generated XML report: test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_scheduler.TestCubicScheduler-20230111212306.xml 2023-01-11T21:23:13.6373115Z Generated XML report: test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_parametrization.TestFakeSparsity-20230111212306.xml 2023-01-11T21:23:13.6373751Z Generated XML report: test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_composability.TestFxComposability-20230111212306.xml 2023-01-11T21:23:13.6374417Z Generated XML report: test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_sparsifier.TestNearlyDiagonalSparsifier-20230111212306.xml 2023-01-11T21:23:13.6375093Z Generated XML report: test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_data_sparsifier.TestNormDataSparsifiers-20230111212306.xml 2023-01-11T21:23:13.6375757Z Generated XML report: test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_data_sparsifier.TestQuantizationUtils-20230111212306.xml 2023-01-11T21:23:13.6376401Z Generated XML report: test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_kernels.TestQuantizedSparseKernels-20230111212306.xml 2023-01-11T21:23:13.6377087Z Generated XML report: test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_kernels.TestQuantizedSparseLayers-20230111212306.xml 2023-01-11T21:23:13.6377703Z Generated XML report: test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_scheduler.TestScheduler-20230111212306.xml 2023-01-11T21:23:13.6378330Z Generated XML report: test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_sparsity_utils.TestSparsityUtilFunctions-20230111212306.xml 2023-01-11T21:23:13.6378972Z Generated XML report: test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_sparsifier.TestWeightNormSparsifier-20230111212306.xml 2023-01-11T21:23:13.6379264Z 2023-01-11T21:23:13.6379554Z ##[endgroup] 2023-01-11T21:23:13.6379948Z FINISHED PRINTING LOG FILE of test_ao_sparsity (/var/lib/jenkins/workspace/test/test-reports/test_ao_sparsity_8pf9rcmq) 2023-01-11T21:23:13.6380170Z 2023-01-11T21:23:15.5453168Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:23:15.6309806Z Ignoring disabled issues: [] 2023-01-11T21:23:15.6447541Z Running test_foreach ... 
[2023-01-11 21:23:15.644493] 2023-01-11T21:23:15.6449452Z Executing ['/opt/conda/bin/python', '-bb', 'test_foreach.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:23:15.644744] 2023-01-11T21:23:18.1671444Z 2023-01-11T21:23:18.1671995Z Expand the folded group to see the log file of test_foreach 2023-01-11T21:23:18.1672987Z ##[group]PRINTING LOG FILE of test_foreach (/var/lib/jenkins/workspace/test/test-reports/test_foreach_j11d3hc8) 2023-01-11T21:23:18.1673227Z 2023-01-11T21:23:18.1673328Z Running tests... 2023-01-11T21:23:18.1673729Z ---------------------------------------------------------------------- 2023-01-11T21:23:18.1674006Z 2023-01-11T21:23:18.1674220Z ---------------------------------------------------------------------- 2023-01-11T21:23:18.1674458Z Ran 0 tests in 0.000s 2023-01-11T21:23:18.1674602Z 2023-01-11T21:23:18.1674651Z OK 2023-01-11T21:23:18.1674741Z 2023-01-11T21:23:18.1674825Z Generating XML reports... 2023-01-11T21:23:18.1675150Z Test results will be stored in test-reports/python-unittest/test_foreach 2023-01-11T21:23:18.1675327Z 2023-01-11T21:23:18.1676836Z ##[endgroup] 2023-01-11T21:23:18.1677242Z FINISHED PRINTING LOG FILE of test_foreach (/var/lib/jenkins/workspace/test/test-reports/test_foreach_j11d3hc8) 2023-01-11T21:23:18.1677453Z 2023-01-11T21:23:20.0443319Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:23:20.1091514Z Ignoring disabled issues: [] 2023-01-11T21:23:20.1226044Z Running test_function_schema ... [2023-01-11 21:23:20.122355] 2023-01-11T21:23:20.1228585Z Executing ['/opt/conda/bin/python', '-bb', 'test_function_schema.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:23:20.122611] 2023-01-11T21:23:22.2027574Z 2023-01-11T21:23:22.2028260Z Expand the folded group to see the log file of test_function_schema 2023-01-11T21:23:22.2029110Z ##[group]PRINTING LOG FILE of test_function_schema (/var/lib/jenkins/workspace/test/test-reports/test_function_schema_jpsmuh2e) 2023-01-11T21:23:22.2029387Z 2023-01-11T21:23:22.2029484Z Running tests... 2023-01-11T21:23:22.2029883Z ---------------------------------------------------------------------- 2023-01-11T21:23:22.2030378Z Test results will be stored in test-reports/python-unittest/test_function_schema 2023-01-11T21:23:22.2030708Z test_backward_compatible_arguments (__main__.TestFunctionSchema) ... ok (0.227s) 2023-01-11T21:23:22.2031087Z test_backward_compatible_outputs (__main__.TestFunctionSchema) ... ok (0.001s) 2023-01-11T21:23:22.2031410Z test_backward_compatible_structure (__main__.TestFunctionSchema) ... ok (0.001s) 2023-01-11T21:23:22.2031788Z test_backward_compatible_with_smart_serialization (__main__.TestFunctionSchema) ... ok (0.001s) 2023-01-11T21:23:22.2032145Z test_forward_compatible_arguments_real_use_case (__main__.TestFunctionSchema) ... ok (0.001s) 2023-01-11T21:23:22.2032671Z test_forward_compatible_arguments_with_out (__main__.TestFunctionSchema) ... ok (0.001s) 2023-01-11T21:23:22.2033011Z test_forward_compatible_arguments_without_out (__main__.TestFunctionSchema) ... ok (0.002s) 2023-01-11T21:23:22.2033351Z test_out_schema (__main__.TestFunctionSchema) ... ok (0.001s) 2023-01-11T21:23:22.2033626Z test_schema_error (__main__.TestFunctionSchema) ... ok (0.001s) 2023-01-11T21:23:22.2033919Z test_serialize_and_deserialize (__main__.TestFunctionSchema) ... ok (0.212s) 2023-01-11T21:23:22.2034290Z test_string_optional_parameter_default_value (__main__.TestFunctionSchema) ... 
ok (0.001s) 2023-01-11T21:23:22.2034632Z test_tensor_list_alias_annotation_properly_parsed (__main__.TestFunctionSchema) ... ok (0.001s) 2023-01-11T21:23:22.2035024Z test_tensor_option_arguments_properly_parsed (__main__.TestFunctionSchema) ... ok (0.001s) 2023-01-11T21:23:22.2035214Z 2023-01-11T21:23:22.2035500Z ---------------------------------------------------------------------- 2023-01-11T21:23:22.2035771Z Ran 13 tests in 0.448s 2023-01-11T21:23:22.2035882Z 2023-01-11T21:23:22.2035944Z OK 2023-01-11T21:23:22.2036034Z 2023-01-11T21:23:22.2036116Z Generating XML reports... 2023-01-11T21:23:22.2036592Z Generated XML report: test-reports/python-unittest/test_function_schema/TEST-TestFunctionSchema-20230111212321.xml 2023-01-11T21:23:22.2036836Z 2023-01-11T21:23:22.2037125Z ##[endgroup] 2023-01-11T21:23:22.2037527Z FINISHED PRINTING LOG FILE of test_function_schema (/var/lib/jenkins/workspace/test/test-reports/test_function_schema_jpsmuh2e) 2023-01-11T21:23:22.2037755Z 2023-01-11T21:23:24.0476100Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:23:24.1114251Z Ignoring disabled issues: [] 2023-01-11T21:23:24.1246781Z Running test_jit ... [2023-01-11 21:23:24.124425] 2023-01-11T21:23:24.1249043Z Executing ['/opt/conda/bin/python', '-bb', 'test_jit.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:23:24.124687] 2023-01-11T21:25:38.7972898Z 2023-01-11T21:25:38.7973406Z Expand the folded group to see the log file of test_jit 2023-01-11T21:25:38.8024064Z ##[group]PRINTING LOG FILE of test_jit (/var/lib/jenkins/workspace/test/test-reports/test_jit_k9iebijm) 2023-01-11T21:25:38.8024424Z CUDA not available, skipping tests 2023-01-11T21:25:38.8024559Z 2023-01-11T21:25:38.8024634Z Running tests... 2023-01-11T21:25:38.8025074Z ---------------------------------------------------------------------- 2023-01-11T21:25:38.8025708Z Test results will be stored in test-reports/python-unittest/test_jit 2023-01-11T21:25:38.8065394Z test_becomes_wildcard_annotations (jit.test_alias_analysis.TestAliasAnalysis) ... ok (0.002s) 2023-01-11T21:25:38.8066088Z test_nested_list_construct_not_wildcard (jit.test_alias_analysis.TestAliasAnalysis) ... ok (0.010s) 2023-01-11T21:25:38.8066706Z test_recursive_calls (jit.test_alias_analysis.TestAliasAnalysis) ... ok (0.014s) 2023-01-11T21:25:38.8067295Z test_async_future_type_python (jit.test_async.TestAsync) ... ok (0.001s) 2023-01-11T21:25:38.8067804Z test_async_grad_guard_no_grad (jit.test_async.TestAsync) ... ok (0.043s) 2023-01-11T21:25:38.8068339Z test_async_grad_guard_with_grad (jit.test_async.TestAsync) ... ok (0.006s) 2023-01-11T21:25:38.8068838Z test_async_kwargs (jit.test_async.TestAsync) ... ok (0.192s) 2023-01-11T21:25:38.8069337Z test_async_parsing (jit.test_async.TestAsync) ... ok (0.008s) 2023-01-11T21:25:38.8069795Z test_async_python (jit.test_async.TestAsync) ... ok (0.002s) 2023-01-11T21:25:38.8072986Z test_async_script (jit.test_async.TestAsync) ... ok (0.006s) 2023-01-11T21:25:38.8073878Z test_async_script_capture (jit.test_async.TestAsync) ... ok (0.008s) 2023-01-11T21:25:38.8074844Z test_async_script_error (jit.test_async.TestAsync) ... ok (0.030s) 2023-01-11T21:25:38.8075388Z test_async_script_multi_forks (jit.test_async.TestAsync) ... ok (0.014s) 2023-01-11T21:25:38.8075934Z test_async_script_multi_waits (jit.test_async.TestAsync) ... ok (0.006s) 2023-01-11T21:25:38.8077492Z test_async_script_nested (jit.test_async.TestAsync) ... 
ok (0.009s) 2023-01-11T21:25:38.8077969Z test_async_script_no_script_mod (jit.test_async.TestAsync) ... ok (0.001s) 2023-01-11T21:25:38.8078419Z test_async_script_trace (jit.test_async.TestAsync) ... ok (0.032s) 2023-01-11T21:25:38.8078809Z test_future_subtyping (jit.test_async.TestAsync) 2023-01-11T21:25:38.8079210Z Test that futures subtype each other properly. ... ok (0.009s) 2023-01-11T21:25:38.8079617Z test_no_future_subtype_message (jit.test_async.TestAsync) ... ok (0.001s) 2023-01-11T21:25:38.8080033Z test_trace_fork_wait (jit.test_async.TestAsync) ... ok (0.010s) 2023-01-11T21:25:38.8080452Z test_trace_fork_wait_inline (jit.test_async.TestAsync) ... ok (0.007s) 2023-01-11T21:25:38.8080876Z test_trace_fork_wait_leaking (jit.test_async.TestAsync) ... ok (0.002s) 2023-01-11T21:25:38.8081346Z test_trace_fork_wait_list_modulecalls (jit.test_async.TestAsync) ... ok (0.024s) 2023-01-11T21:25:38.8134922Z test_trace_modulecalls_with_different_output_types (jit.test_async.TestAsync) ... ok (0.015s) 2023-01-11T21:25:38.8135559Z test_aten_pow_zero_negative_exponent (jit.test_aten_pow.TestAtenPow) 2023-01-11T21:25:38.8136019Z 1. Testing a = int, b = int ... ok (0.021s) 2023-01-11T21:25:38.8136513Z test_autodiff_requires_grad_nograd (jit.test_autodiff.TestAutodiffJit) ... ok (0.083s) 2023-01-11T21:25:38.8137091Z test_requires_grad_outputs (jit.test_autodiff.TestAutodiffJit) ... ok (0.051s) 2023-01-11T21:25:38.8137709Z test_requires_grad_outputs_profiled_twice (jit.test_autodiff.TestAutodiffJit) ... ok (0.118s) 2023-01-11T21:25:38.8138340Z test_requires_grad_outputs_side_effects (jit.test_autodiff.TestAutodiffJit) ... ok (0.116s) 2023-01-11T21:25:38.8138937Z test_undefined_tensor_lists (jit.test_autodiff.TestAutodiffJit) ... ok (0.116s) 2023-01-11T21:25:38.8139567Z test_aliased_outputs (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... ok (0.003s) 2023-01-11T21:25:38.8140285Z test_bias_as_arg (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... ok (0.365s) 2023-01-11T21:25:38.8141007Z test_bias_as_module_attr (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... ok (0.310s) 2023-01-11T21:25:38.8141734Z test_chunk_constant_script_ad (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... ok (0.004s) 2023-01-11T21:25:38.8142640Z test_constructed_bias (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... ok (0.023s) 2023-01-11T21:25:38.8143394Z test_diff_graph_inline_threshold (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... ok (0.007s) 2023-01-11T21:25:38.8144323Z test_differentiable_graph_ops_requires_grad (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... skip: disable until we property handle tensor lists with undefined gradients (0.001s) 2023-01-11T21:25:38.8145223Z test_does_not_create_cycles (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... ok (0.006s) 2023-01-11T21:25:38.8145965Z test_does_not_merge_unrelated (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... ok (0.006s) 2023-01-11T21:25:38.8146912Z test_has_profiled_info_aliasing_outputs (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... ok (0.001s) 2023-01-11T21:25:38.8147686Z test_merge_respects_aliasing (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... ok (0.009s) 2023-01-11T21:25:38.8148419Z test_merges_dense (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... 
ok (0.006s) 2023-01-11T21:25:38.8149109Z test_merges_down (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... ok (0.006s) 2023-01-11T21:25:38.8149814Z test_merges_up (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... ok (0.006s) 2023-01-11T21:25:38.8150515Z test_merges_without_cycles (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... ok (0.005s) 2023-01-11T21:25:38.8151710Z test_prune_grad (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... skip: Simple Executor doesn't support gradients (0.001s) 2023-01-11T21:25:38.8152677Z test_requires_grad_for_tensor_list (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... ok (0.005s) 2023-01-11T21:25:38.8153453Z test_respects_lexical_scoping (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... ok (0.007s) 2023-01-11T21:25:38.8154196Z test_simple_merge (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... ok (0.005s) 2023-01-11T21:25:38.8154904Z test_simple_no_merge (jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing) ... ok (0.005s) 2023-01-11T21:25:38.8155919Z test_errors (jit.test_backends.TestBackends) ... [W backend_detail.cpp:393] Warning: Backend [test_backend_unavailable] is not available. Execution of this Module is still possible by saving and loading on a device where the backend is available. (function codegen_backend_module) 2023-01-11T21:25:38.8156765Z ok (0.143s) 2023-01-11T21:25:38.8157750Z test_execution (jit.test_backends.TestBackends) ... [W backend_detail.cpp:393] Warning: Backend [test_backend_unavailable] is not available. Execution of this Module is still possible by saving and loading on a device where the backend is available. (function codegen_backend_module) 2023-01-11T21:25:38.8158505Z ok (0.136s) 2023-01-11T21:25:38.8159290Z test_save_load (jit.test_backends.TestBackends) ... [W backend_detail.cpp:393] Warning: Backend [test_backend_unavailable] is not available. Execution of this Module is still possible by saving and loading on a device where the backend is available. (function codegen_backend_module) 2023-01-11T21:25:38.8160026Z ok (0.245s) 2023-01-11T21:25:38.8160477Z test_errors (jit.test_backends.TestBackendsWithCompiler) ... ok (0.045s) 2023-01-11T21:25:38.8161081Z test_execution (jit.test_backends.TestBackendsWithCompiler) ... ok (0.067s) 2023-01-11T21:25:38.8161641Z test_batch_mm_no_mutation (jit.test_batch_mm.TestBatchMM) ... ok (0.006s) 2023-01-11T21:25:38.8162197Z test_batch_mm_permitted_mutation (jit.test_batch_mm.TestBatchMM) ... ok (0.006s) 2023-01-11T21:25:38.8162777Z test_batch_mm_prohibited_mutation (jit.test_batch_mm.TestBatchMM) ... ok (0.006s) 2023-01-11T21:25:38.8163354Z test_batch_mm_prohibited_mutation_if_node (jit.test_batch_mm.TestBatchMM) ... ok (0.008s) 2023-01-11T21:25:38.8163930Z test_batch_mm_prohibited_mutation_multiple_adds (jit.test_batch_mm.TestBatchMM) ... ok (0.008s) 2023-01-11T21:25:38.8164537Z test_batch_mm_side_permitted_mutation (jit.test_batch_mm.TestBatchMM) ... ok (0.008s) 2023-01-11T21:25:38.8165140Z test_batch_mm_side_prohibited_mutation_common_side (jit.test_batch_mm.TestBatchMM) ... ok (0.009s) 2023-01-11T21:25:38.8165747Z test_batch_mm_side_prohibited_mutation_uncommon_side (jit.test_batch_mm.TestBatchMM) ... ok (0.009s) 2023-01-11T21:25:38.8166245Z test_del (jit.test_builtins.TestBuiltins) ... ok (0.006s) 2023-01-11T21:25:38.8166773Z test_del_multiple_operands (jit.test_builtins.TestBuiltins) ... 
ok (0.008s) 2023-01-11T21:25:38.8167305Z test_has_attr (jit.test_builtins.TestBuiltins) ... ok (0.018s) 2023-01-11T21:25:38.8167806Z test_has_attr_invalid_args (jit.test_builtins.TestBuiltins) ... ok (0.008s) 2023-01-11T21:25:38.8168353Z test_cast_overloads (jit.test_class_type.TestClassType) ... ok (0.036s) 2023-01-11T21:25:38.8168888Z test_class_attribute_wrong_type (jit.test_class_type.TestClassType) 2023-01-11T21:25:38.8169480Z Test that the error message displayed when convering a class type ... ok (0.019s) 2023-01-11T21:25:38.8170003Z test_class_constant (jit.test_class_type.TestClassType) ... ok (0.032s) 2023-01-11T21:25:38.8170553Z test_class_constructs_itself (jit.test_class_type.TestClassType) ... ok (0.017s) 2023-01-11T21:25:38.8171109Z test_class_inheritance (jit.test_class_type.TestClassType) ... ok (0.123s) 2023-01-11T21:25:38.8171637Z test_class_inheritance_implicit (jit.test_class_type.TestClassType) 2023-01-11T21:25:38.8172143Z Test that inheritance is detected in ... ok (0.031s) 2023-01-11T21:25:38.8172742Z test_class_sorting (jit.test_class_type.TestClassType) ... ok (0.085s) 2023-01-11T21:25:38.8173295Z test_class_specialization (jit.test_class_type.TestClassType) ... ok (0.079s) 2023-01-11T21:25:38.8173841Z test_class_type_as_param (jit.test_class_type.TestClassType) ... ok (0.017s) 2023-01-11T21:25:38.8174360Z test_classmethod (jit.test_class_type.TestClassType) 2023-01-11T21:25:38.8174812Z Test classmethods on class types. ... ok (0.023s) 2023-01-11T21:25:38.8175282Z test_conditional_set_attr (jit.test_class_type.TestClassType) ... ok (0.134s) 2023-01-11T21:25:38.8175802Z test_custom_delete (jit.test_class_type.TestClassType) 2023-01-11T21:25:38.8176306Z Test that del can be called on an instance of a class that ... ok (0.044s) 2023-01-11T21:25:38.8176804Z test_default_args (jit.test_class_type.TestClassType) 2023-01-11T21:25:38.8177298Z Test that methods on class types can have default arguments. ... ok (0.099s) 2023-01-11T21:25:38.8177924Z test_get_attr (jit.test_class_type.TestClassType) ... ok (0.014s) 2023-01-11T21:25:38.8178463Z test_get_attr_not_initialized (jit.test_class_type.TestClassType) ... ok (0.012s) 2023-01-11T21:25:38.8178992Z test_get_with_method (jit.test_class_type.TestClassType) ... ok (0.001s) 2023-01-11T21:25:38.8179517Z test_imported_classes (jit.test_class_type.TestClassType) ... ok (0.031s) 2023-01-11T21:25:38.8180036Z test_in (jit.test_class_type.TestClassType) ... ok (0.016s) 2023-01-11T21:25:38.8180541Z test_init_compiled_first (jit.test_class_type.TestClassType) ... ok (0.016s) 2023-01-11T21:25:38.8181059Z test_interface (jit.test_class_type.TestClassType) ... ok (0.269s) 2023-01-11T21:25:38.8181588Z test_optional_type_promotion (jit.test_class_type.TestClassType) ... ok (0.030s) 2023-01-11T21:25:38.8182131Z test_out_of_order_methods (jit.test_class_type.TestClassType) ... ok (0.138s) 2023-01-11T21:25:38.8182787Z test_overloaded_fn (jit.test_class_type.TestClassType) ... ok (0.148s) 2023-01-11T21:25:38.8183314Z test_properties (jit.test_class_type.TestClassType) 2023-01-11T21:25:38.8183821Z Test that a scripted class can make use of the @property decorator. ... ok (0.072s) 2023-01-11T21:25:38.8184415Z test_py_class_to_ivalue_missing_attribute (jit.test_class_type.TestClassType) ... ok (0.021s) 2023-01-11T21:25:38.8184976Z test_python_interop (jit.test_class_type.TestClassType) ... 
ok (0.016s) 2023-01-11T21:25:38.8185502Z test_recursive_class (jit.test_class_type.TestClassType) 2023-01-11T21:25:38.8186071Z Recursive class types not yet supported. We should give a good error message. ... ok (0.137s) 2023-01-11T21:25:38.8186651Z test_recursive_script_builtin_type_resolution (jit.test_class_type.TestClassType) 2023-01-11T21:25:38.8187533Z Test resolution of built-in torch types(e.g. torch.Tensor, torch.device) when a class is recursively compiled. ... ok (0.053s) 2023-01-11T21:25:38.8191702Z test_recursive_script_module_builtin_type_resolution (jit.test_class_type.TestClassType) 2023-01-11T21:25:38.8192614Z Test resolution of built-in torch types(e.g. torch.Tensor, torch.device) when a class is recursively compiled ... ok (0.028s) 2023-01-11T21:25:38.8193251Z test_recursive_scripting (jit.test_class_type.TestClassType) 2023-01-11T21:25:38.8193887Z Test that class types are recursively scripted when an Python instance of one ... ok (0.025s) 2023-01-11T21:25:38.8194485Z test_recursive_scripting_failed (jit.test_class_type.TestClassType) 2023-01-11T21:25:38.8195008Z Test that class types module attributes that fail to script ... ok (0.045s) 2023-01-11T21:25:38.8195546Z test_reference_semantics (jit.test_class_type.TestClassType) 2023-01-11T21:25:38.8196106Z Test that modifications made to a class instance in TorchScript ... ok (0.016s) 2023-01-11T21:25:38.8196650Z test_save_load_with_classes (jit.test_class_type.TestClassType) ... ok (0.018s) 2023-01-11T21:25:38.8197229Z test_save_load_with_classes_nested (jit.test_class_type.TestClassType) ... ok (0.162s) 2023-01-11T21:25:38.8197991Z test_save_load_with_classes_returned (jit.test_class_type.TestClassType) ... ok (0.019s) 2023-01-11T21:25:38.8198529Z test_schema_human_readable (jit.test_class_type.TestClassType) 2023-01-11T21:25:38.8199170Z Make sure that the schema is human readable, ie the mode parameter should read "nearest" instead of being displayed in octal ... ok (0.232s) 2023-01-11T21:25:38.8199832Z test_self_referential_method (jit.test_class_type.TestClassType) 2023-01-11T21:25:38.8200399Z Test that a scripted class can have a method that refers to the class itself ... ok (0.026s) 2023-01-11T21:25:38.8200955Z test_set_attr_in_method (jit.test_class_type.TestClassType) ... ok (0.016s) 2023-01-11T21:25:38.8201487Z test_set_attr_non_initialized (jit.test_class_type.TestClassType) ... ok (0.012s) 2023-01-11T21:25:38.8202062Z test_set_attr_type_mismatch (jit.test_class_type.TestClassType) ... ok (0.011s) 2023-01-11T21:25:38.8202602Z test_staticmethod (jit.test_class_type.TestClassType) 2023-01-11T21:25:38.8203163Z Test static methods on class types. ... ok (0.032s) 2023-01-11T21:25:38.8203615Z test_type_annotation (jit.test_class_type.TestClassType) 2023-01-11T21:25:38.8204157Z Test that annotating container attributes with types works correctly ... ok (0.148s) 2023-01-11T21:25:38.8204697Z test_type_annotations (jit.test_class_type.TestClassType) ... ok (0.012s) 2023-01-11T21:25:38.8205264Z test_unresolved_class_attributes (jit.test_class_type.TestClassType) ... ok (0.022s) 2023-01-11T21:25:38.8205789Z test_unused_method (jit.test_class_type.TestClassType) 2023-01-11T21:25:38.8206254Z Test unused methods on scripted classes. ... ok (0.026s) 2023-01-11T21:25:38.8206740Z test_binary_op_complex_tensor (jit.test_complex.TestComplex) ... ok (0.018s) 2023-01-11T21:25:38.8207281Z test_comparison_ops (jit.test_complex.TestComplex) ... 
ok (0.013s) 2023-01-11T21:25:38.8207802Z test_complex_constants_and_ops (jit.test_complex.TestComplex) ... ok (0.449s) 2023-01-11T21:25:38.8208329Z test_complex_constructor (jit.test_complex.TestComplex) ... ok (0.048s) 2023-01-11T21:25:38.8208867Z test_complex_list_sum (jit.test_complex.TestComplex) ... ok (0.003s) 2023-01-11T21:25:38.8209376Z test_complex_parse (jit.test_complex.TestComplex) ... ok (0.007s) 2023-01-11T21:25:38.8209863Z test_complexdict (jit.test_complex.TestComplex) ... ok (0.004s) 2023-01-11T21:25:38.8210362Z test_complexlist (jit.test_complex.TestComplex) ... ok (0.003s) 2023-01-11T21:25:38.8210847Z test_div (jit.test_complex.TestComplex) ... ok (0.003s) 2023-01-11T21:25:38.8211330Z test_infj_nanj_pickle (jit.test_complex.TestComplex) ... ok (0.007s) 2023-01-11T21:25:38.8211811Z test_pickle (jit.test_complex.TestComplex) ... ok (0.007s) 2023-01-11T21:25:38.8212287Z test_script (jit.test_complex.TestComplex) ... ok (0.003s) 2023-01-11T21:25:38.8212787Z test_tensor_attributes (jit.test_complex.TestComplex) ... ok (0.007s) 2023-01-11T21:25:38.8213346Z test_torch_complex_constructor_with_tensor (jit.test_complex.TestComplex) ... ok (0.039s) 2023-01-11T21:25:38.8213999Z test_calling_scripted_custom_op (jit.test_custom_operators.TestCustomOperators) ... ok (0.004s) 2023-01-11T21:25:38.8214669Z test_calling_traced_custom_op (jit.test_custom_operators.TestCustomOperators) ... ok (0.005s) 2023-01-11T21:25:38.8215334Z test_default_arguments_are_used (jit.test_custom_operators.TestCustomOperators) ... ok (0.001s) 2023-01-11T21:25:38.8215972Z test_dynamic_op_registry (jit.test_custom_operators.TestCustomOperators) ... ok (0.002s) 2023-01-11T21:25:38.8216596Z test_generic_list (jit.test_custom_operators.TestCustomOperators) ... ok (0.001s) 2023-01-11T21:25:38.8217230Z test_passing_and_returning_lists (jit.test_custom_operators.TestCustomOperators) ... ok (0.001s) 2023-01-11T21:25:38.8217913Z test_passing_one_positional_but_not_the_second (jit.test_custom_operators.TestCustomOperators) ... ok (0.001s) 2023-01-11T21:25:38.8218561Z test_passing_too_few_args (jit.test_custom_operators.TestCustomOperators) ... ok (0.001s) 2023-01-11T21:25:38.8219332Z test_passing_too_many_args (jit.test_custom_operators.TestCustomOperators) ... ok (0.001s) 2023-01-11T21:25:38.8219967Z test_passing_unknown_kwargs (jit.test_custom_operators.TestCustomOperators) ... ok (0.001s) 2023-01-11T21:25:38.8220628Z test_script_graph_contains_custom_op (jit.test_custom_operators.TestCustomOperators) ... ok (0.002s) 2023-01-11T21:25:38.8221471Z test_script_graph_for_custom_ops_matches_traced_graph (jit.test_custom_operators.TestCustomOperators) ... skip: Need to figure out default dtype differences between fbcode and oss (0.000s) 2023-01-11T21:25:38.8222286Z test_simply_calling_an_operator (jit.test_custom_operators.TestCustomOperators) ... ok (0.001s) 2023-01-11T21:25:38.8223136Z test_where_no_scalar (jit.test_custom_operators.TestCustomOperators) ... ok (0.003s) 2023-01-11T21:25:38.8223698Z test_setattr_no_aliasdb (jit.test_dce.TestDCE) ... ok (0.005s) 2023-01-11T21:25:38.8224174Z test_setattr_removed (jit.test_dce.TestDCE) ... ok (0.008s) 2023-01-11T21:25:38.8225139Z test_python_submodule_script (jit.test_data_parallel.TestDataParallel) ... skip: multi-GPU not supported (0.000s) 2023-01-11T21:25:38.8226033Z test_shared_module (jit.test_data_parallel.TestDataParallel) ... skip: multi-GPU not supported (0.000s) 2023-01-11T21:25:38.8226973Z test_tensor_sharing (jit.test_data_parallel.TestDataParallel) ... 
skip: multi-GPU not supported (0.001s) 2023-01-11T21:25:38.8227855Z test_tensor_sharing_with_forward (jit.test_data_parallel.TestDataParallel) ... skip: multi-GPU not supported (0.001s) 2023-01-11T21:25:38.8228743Z test_traced_module (jit.test_data_parallel.TestDataParallel) ... skip: multi-GPU not supported (0.000s) 2023-01-11T21:25:38.8229353Z test__post_init__ (jit.test_dataclasses.TestDataclasses) ... ok (3.396s) 2023-01-11T21:25:38.8229921Z test_comparators (jit.test_dataclasses.TestDataclasses) ... ok (4.813s) 2023-01-11T21:25:38.8230461Z test_custom__eq__ (jit.test_dataclasses.TestDataclasses) ... ok (0.018s) 2023-01-11T21:25:38.8231106Z test_default_factories (jit.test_dataclasses.TestDataclasses) ... ok (0.005s) 2023-01-11T21:25:38.8231683Z test_init_vars (jit.test_dataclasses.TestDataclasses) ... ok (0.084s) 2023-01-11T21:25:38.8232194Z test_no_source (jit.test_dataclasses.TestDataclasses) ... ok (0.018s) 2023-01-11T21:25:38.8232799Z test_use_unregistered_dataclass_raises (jit.test_dataclasses.TestDataclasses) ... ok (0.002s) 2023-01-11T21:25:38.8233435Z test_custom_device_op (jit.test_device_analysis.TestDeviceAnalysis) ... ok (0.008s) 2023-01-11T21:25:38.8234041Z test_device_apply (jit.test_device_analysis.TestDeviceAnalysis) ... ok (0.002s) 2023-01-11T21:25:38.8234614Z test_device_arg (jit.test_device_analysis.TestDeviceAnalysis) ... ok (0.003s) 2023-01-11T21:25:38.8235223Z test_device_if_propagation (jit.test_device_analysis.TestDeviceAnalysis) ... ok (0.004s) 2023-01-11T21:25:38.8235840Z test_if_loop_mix (jit.test_device_analysis.TestDeviceAnalysis) ... ok (0.004s) 2023-01-11T21:25:38.8236430Z test_loop_device_change (jit.test_device_analysis.TestDeviceAnalysis) ... ok (0.003s) 2023-01-11T21:25:38.8237022Z test_loop_simple (jit.test_device_analysis.TestDeviceAnalysis) ... ok (0.003s) 2023-01-11T21:25:38.8237595Z test_mobilenet (jit.test_device_analysis.TestDeviceAnalysis) ... ok (3.484s) 2023-01-11T21:25:38.8238181Z test_nested_loops (jit.test_device_analysis.TestDeviceAnalysis) ... ok (0.008s) 2023-01-11T21:25:38.8238745Z test_set_dtype (jit.test_device_analysis.TestDeviceAnalysis) ... ok (0.003s) 2023-01-11T21:25:38.8239310Z test_simple (jit.test_device_analysis.TestDeviceAnalysis) ... ok (0.005s) 2023-01-11T21:25:38.8239899Z test_tensor_as_fns (jit.test_device_analysis.TestDeviceAnalysis) ... ok (0.009s) 2023-01-11T21:25:38.8240461Z test_while_change (jit.test_device_analysis.TestDeviceAnalysis) ... ok (0.003s) 2023-01-11T21:25:38.8241052Z test_zerodim_cpu (jit.test_device_analysis.TestDeviceAnalysis) ... ok (0.012s) 2023-01-11T21:25:38.8241661Z test_zerodim_gpu (jit.test_device_analysis.TestDeviceAnalysis) ... skip: No CUDA (0.000s) 2023-01-11T21:25:38.8242386Z test_zerodim_no_device (jit.test_device_analysis.TestDeviceAnalysis) ... ok (0.023s) 2023-01-11T21:25:38.8242909Z test_aug_assign (jit.test_list_dict.TestDict) ... ok (0.019s) 2023-01-11T21:25:38.8243375Z test_basic (jit.test_list_dict.TestDict) ... ok (0.023s) 2023-01-11T21:25:38.8243827Z test_clear (jit.test_list_dict.TestDict) ... ok (0.005s) 2023-01-11T21:25:38.8244249Z test_copy (jit.test_list_dict.TestDict) ... ok (0.006s) 2023-01-11T21:25:38.8244701Z test_del (jit.test_list_dict.TestDict) ... ok (0.009s) 2023-01-11T21:25:38.8245180Z test_dict_bool_conversion (jit.test_list_dict.TestDict) ... ok (0.027s) 2023-01-11T21:25:38.8245673Z test_dict_preserves_order (jit.test_list_dict.TestDict) ... ok (0.077s) 2023-01-11T21:25:38.8246189Z test_dict_to_python (jit.test_list_dict.TestDict) ... 
ok (0.024s) 2023-01-11T21:25:38.8246646Z test_dict_variance (jit.test_list_dict.TestDict) 2023-01-11T21:25:38.8247195Z `Dict[T1, _]` is not a subtype of `Dict[T2, _]`, even if `T1` is ... ok (0.013s) 2023-01-11T21:25:38.8247641Z test_get (jit.test_list_dict.TestDict) ... ok (0.009s) 2023-01-11T21:25:38.8248094Z test_get_boolkey (jit.test_list_dict.TestDict) ... ok (0.010s) 2023-01-11T21:25:38.8248549Z test_items (jit.test_list_dict.TestDict) ... ok (0.003s) 2023-01-11T21:25:38.8248988Z test_key_type (jit.test_list_dict.TestDict) ... ok (0.002s) 2023-01-11T21:25:38.8249434Z test_keys (jit.test_list_dict.TestDict) ... ok (0.006s) 2023-01-11T21:25:38.8249866Z test_len (jit.test_list_dict.TestDict) ... ok (0.004s) 2023-01-11T21:25:38.8250286Z test_loop (jit.test_list_dict.TestDict) ... ok (0.003s) 2023-01-11T21:25:38.8250744Z test_membership (jit.test_list_dict.TestDict) ... ok (0.008s) 2023-01-11T21:25:38.8251222Z test_mutability (jit.test_list_dict.TestDict) ... ok (0.003s) 2023-01-11T21:25:38.8251720Z test_optional_dict_construct (jit.test_list_dict.TestDict) ... ok (0.012s) 2023-01-11T21:25:38.8252216Z test_ordered_dict (jit.test_list_dict.TestDict) ... ok (0.014s) 2023-01-11T21:25:38.8252683Z test_pop (jit.test_list_dict.TestDict) ... ok (0.012s) 2023-01-11T21:25:38.8253164Z test_popitem (jit.test_list_dict.TestDict) ... ok (0.004s) 2023-01-11T21:25:38.8253602Z test_setdefault (jit.test_list_dict.TestDict) ... ok (0.009s) 2023-01-11T21:25:38.8254115Z test_type_annotation_missing_contained_type (jit.test_list_dict.TestDict) 2023-01-11T21:25:38.8254667Z Test that the use of a Dict type annotation without contained ... ok (0.003s) 2023-01-11T21:25:38.8255169Z test_update (jit.test_list_dict.TestDict) ... ok (0.011s) 2023-01-11T21:25:38.8255643Z test_update_existing_key (jit.test_list_dict.TestDict) ... ok (0.006s) 2023-01-11T21:25:38.8256125Z test_values (jit.test_list_dict.TestDict) ... ok (0.003s) 2023-01-11T21:25:38.8256577Z test_view (jit.test_list_dict.TestDict) ... ok (0.067s) 2023-01-11T21:25:38.8257079Z test_binary_scalar (jit.test_dtype_analysis.TestDtypeAnalysis) ... ok (0.010s) 2023-01-11T21:25:38.8258536Z test_binary_tensors (jit.test_dtype_analysis.TestDtypeAnalysis) ... /var/lib/jenkins/workspace/test/jit/test_dtype_analysis.py:165: UserWarning: ComplexHalf support is experimental and many operators don't support it yet. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/EmptyTensor.cpp:32.) 2023-01-11T21:25:38.8259523Z rand_tensor = torch.rand(shape, dtype=dtype) 2023-01-11T21:25:38.8259880Z ok (0.077s) 2023-01-11T21:25:38.8260297Z test_combined (jit.test_dtype_analysis.TestDtypeAnalysis) ... ok (0.008s) 2023-01-11T21:25:38.8260882Z test_conv_no_mixed_args (jit.test_dtype_analysis.TestDtypeAnalysis) ... ok (0.016s) 2023-01-11T21:25:38.8261470Z test_custom_rules (jit.test_dtype_analysis.TestDtypeAnalysis) ... ok (0.016s) 2023-01-11T21:25:38.8262033Z test_unary (jit.test_dtype_analysis.TestDtypeAnalysis) ... ok (0.034s) 2023-01-11T21:25:38.8262715Z test_closed_over_enum_constant (jit.test_enum.TestEnum) ... ok (0.010s) 2023-01-11T21:25:38.8263215Z test_enum_as_const (jit.test_enum.TestEnum) ... ok (0.008s) 2023-01-11T21:25:38.8263845Z test_enum_as_module_attribute (jit.test_enum.TestEnum) ... ok (0.009s) 2023-01-11T21:25:38.8264306Z test_enum_comp (jit.test_enum.TestEnum) ... ok (0.007s) 2023-01-11T21:25:38.8264769Z test_enum_comp_diff_classes (jit.test_enum.TestEnum) ... 
ok (0.010s) 2023-01-11T21:25:38.8265272Z test_enum_explicit_script (jit.test_enum.TestEnum) ... ok (0.001s) 2023-01-11T21:25:38.8265756Z test_enum_iterate (jit.test_enum.TestEnum) ... ok (0.011s) 2023-01-11T21:25:38.8266205Z test_enum_ivalue_type (jit.test_enum.TestEnum) ... ok (0.008s) 2023-01-11T21:25:38.8266695Z test_enum_module_return (jit.test_enum.TestEnum) ... ok (0.010s) 2023-01-11T21:25:38.8267155Z test_enum_name (jit.test_enum.TestEnum) ... ok (0.008s) 2023-01-11T21:25:38.8267585Z test_enum_return (jit.test_enum.TestEnum) ... ok (0.009s) 2023-01-11T21:25:38.8268038Z test_enum_value (jit.test_enum.TestEnum) ... ok (0.007s) 2023-01-11T21:25:38.8268614Z test_enum_value_types (jit.test_enum.TestEnum) ... ok (0.024s) 2023-01-11T21:25:38.8269128Z test_heterogenous_value_type_enum_error (jit.test_enum.TestEnum) ... ok (0.005s) 2023-01-11T21:25:38.8269656Z test_non_existent_enum_value (jit.test_enum.TestEnum) ... ok (0.006s) 2023-01-11T21:25:38.8270176Z test_string_enum_as_module_attribute (jit.test_enum.TestEnum) ... ok (0.009s) 2023-01-11T21:25:38.8270838Z test_freeze_interface_swapping_two_methods (jit.test_freezing.TestFreezing) ... ok (0.198s) 2023-01-11T21:25:38.8271471Z test_freeze_interface_within_object (jit.test_freezing.TestFreezing) ... expected failure (0.041s) 2023-01-11T21:25:38.8272061Z test_freeze_module (jit.test_freezing.TestFreezing) ... ok (0.014s) 2023-01-11T21:25:38.8272625Z test_freeze_module_detach_gradient (jit.test_freezing.TestFreezing) ... ok (0.012s) 2023-01-11T21:25:38.8273203Z test_freeze_module_in_training_mode (jit.test_freezing.TestFreezing) ... ok (0.243s) 2023-01-11T21:25:38.8273766Z test_freeze_module_inlining (jit.test_freezing.TestFreezing) ... ok (0.040s) 2023-01-11T21:25:38.8274332Z test_freeze_module_no_forward (jit.test_freezing.TestFreezing) ... ok (0.011s) 2023-01-11T21:25:38.8274902Z test_freeze_module_return_self (jit.test_freezing.TestFreezing) ... ok (0.007s) 2023-01-11T21:25:38.8275459Z test_freeze_module_return_sub_module (jit.test_freezing.TestFreezing) ... ok (0.014s) 2023-01-11T21:25:38.8276041Z test_freeze_module_with_aliased_attr (jit.test_freezing.TestFreezing) ... ok (0.007s) 2023-01-11T21:25:38.8276632Z test_freeze_module_with_aliased_attr2 (jit.test_freezing.TestFreezing) ... ok (0.007s) 2023-01-11T21:25:38.8277196Z test_freeze_module_with_aliased_attr3 (jit.test_freezing.TestFreezing) ... ok (0.006s) 2023-01-11T21:25:38.8277806Z test_freeze_module_with_aliased_tensor_attr (jit.test_freezing.TestFreezing) ... ok (0.006s) 2023-01-11T21:25:38.8278423Z test_freeze_module_with_aliased_tensor_attr2 (jit.test_freezing.TestFreezing) ... ok (0.010s) 2023-01-11T21:25:38.8279018Z test_freeze_module_with_aliased_tensor_attr3 (jit.test_freezing.TestFreezing) ... ok (0.007s) 2023-01-11T21:25:38.8279617Z test_freeze_module_with_aliased_tensor_attr4 (jit.test_freezing.TestFreezing) ... ok (0.009s) 2023-01-11T21:25:38.8280216Z test_freeze_module_with_call_method (jit.test_freezing.TestFreezing) ... ok (0.007s) 2023-01-11T21:25:38.8280780Z test_freeze_module_with_fork (jit.test_freezing.TestFreezing) ... ok (0.012s) 2023-01-11T21:25:38.8281332Z test_freeze_module_with_fork2 (jit.test_freezing.TestFreezing) ... ok (0.010s) 2023-01-11T21:25:38.8281919Z test_freeze_module_with_fork_calling_module_method (jit.test_freezing.TestFreezing) ... ok (0.011s) 2023-01-11T21:25:38.8282531Z test_freeze_module_with_helperfunction (jit.test_freezing.TestFreezing) ... 
ok (0.011s) 2023-01-11T21:25:38.8283142Z test_freeze_module_with_inplace_mutable (jit.test_freezing.TestFreezing) ... ok (0.006s) 2023-01-11T21:25:38.8283710Z test_freeze_module_with_list (jit.test_freezing.TestFreezing) ... ok (0.007s) 2023-01-11T21:25:38.8284658Z test_freeze_module_with_mutable_dict (jit.test_freezing.TestFreezing) ... ok (0.007s) 2023-01-11T21:25:38.8285249Z test_freeze_module_with_mutable_list (jit.test_freezing.TestFreezing) ... ok (0.005s) 2023-01-11T21:25:38.8285840Z test_freeze_module_with_mutable_tensor (jit.test_freezing.TestFreezing) ... ok (0.005s) 2023-01-11T21:25:38.8286427Z test_freeze_module_with_nested_fork (jit.test_freezing.TestFreezing) ... ok (0.020s) 2023-01-11T21:25:38.8287025Z test_freeze_module_with_nestedaliasing (jit.test_freezing.TestFreezing) ... ok (0.023s) 2023-01-11T21:25:38.8287645Z test_freeze_module_with_nestedaliasingscalar (jit.test_freezing.TestFreezing) ... ok (0.019s) 2023-01-11T21:25:38.8288285Z test_freeze_module_with_non_static_module_container_index (jit.test_freezing.TestFreezing) 2023-01-11T21:25:38.8289047Z Test that Modules containing non-static ModuleDict or ModuleList ... ok (0.183s) 2023-01-11T21:25:38.8289642Z test_freeze_module_with_overlapping_attrs (jit.test_freezing.TestFreezing) ... ok (0.009s) 2023-01-11T21:25:38.8290705Z test_freeze_module_with_preserve_sub_module (jit.test_freezing.TestFreezing) ... ok (0.011s) 2023-01-11T21:25:38.8343049Z test_freeze_module_with_preserve_sub_module_and_mutation (jit.test_freezing.TestFreezing) ... ok (0.014s) 2023-01-11T21:25:38.8343729Z test_freeze_module_with_sharedclasstype (jit.test_freezing.TestFreezing) ... ok (0.024s) 2023-01-11T21:25:38.8344339Z test_freeze_module_with_submodule (jit.test_freezing.TestFreezing) ... ok (0.014s) 2023-01-11T21:25:38.8344903Z test_freeze_module_with_tensor (jit.test_freezing.TestFreezing) ... ok (0.007s) 2023-01-11T21:25:38.8345456Z test_freeze_module_with_tuple (jit.test_freezing.TestFreezing) ... ok (0.007s) 2023-01-11T21:25:38.8346063Z test_freeze_module_with_tupleoutput_submodule (jit.test_freezing.TestFreezing) ... ok (0.010s) 2023-01-11T21:25:38.8346690Z test_freeze_module_with_user_preserved_attr (jit.test_freezing.TestFreezing) ... ok (0.005s) 2023-01-11T21:25:38.8347361Z test_freeze_module_with_user_preserved_attribute_on_submodule (jit.test_freezing.TestFreezing) ... ok (0.011s) 2023-01-11T21:25:38.8348069Z test_freeze_module_with_user_preserved_attribute_on_unused_submodule (jit.test_freezing.TestFreezing) ... ok (0.008s) 2023-01-11T21:25:38.8348736Z test_freeze_module_with_user_preserved_method (jit.test_freezing.TestFreezing) ... ok (0.008s) 2023-01-11T21:25:38.8349364Z test_freeze_module_with_user_preserved_method2 (jit.test_freezing.TestFreezing) ... ok (0.007s) 2023-01-11T21:25:38.8349999Z test_freeze_module_with_user_preserved_method_on_submodule (jit.test_freezing.TestFreezing) ... ok (0.011s) 2023-01-11T21:25:38.8350607Z test_freeze_no_forward (jit.test_freezing.TestFreezing) ... ok (0.010s) 2023-01-11T21:25:38.8351246Z test_freeze_non_interface_module_swap (jit.test_freezing.TestFreezing) ... ok (0.015s) 2023-01-11T21:25:38.8351823Z test_freeze_non_module_class_getattr (jit.test_freezing.TestFreezing) ... ok (0.045s) 2023-01-11T21:25:38.8352401Z test_freeze_recursive_interfaces (jit.test_freezing.TestFreezing) ... ok (0.209s) 2023-01-11T21:25:38.8353012Z test_freeze_recursive_interfaces_same_name (jit.test_freezing.TestFreezing) ... 
ok (0.090s) 2023-01-11T21:25:38.8353644Z test_freeze_recursive_interfaces_with_reassignment (jit.test_freezing.TestFreezing) ... ok (0.215s) 2023-01-11T21:25:38.8354258Z test_freeze_with_interface_mutable (jit.test_freezing.TestFreezing) ... ok (0.161s) 2023-01-11T21:25:38.8354832Z test_freeze_with_swapping_interfaces (jit.test_freezing.TestFreezing) ... ok (0.053s) 2023-01-11T21:25:38.8355438Z test_module_getattr_indirection (jit.test_freezing.TestFreezing) ... ok (0.040s) 2023-01-11T21:25:38.8356024Z test_module_with_shared_type_instances (jit.test_freezing.TestFreezing) ... ok (0.084s) 2023-01-11T21:25:38.8356581Z test_dictionary_as_example_inputs_for_jit_trace (__main__.TestFrontend) ... ok (0.071s) 2023-01-11T21:25:38.8357095Z test_instancing_error (__main__.TestFrontend) ... ok (0.406s) 2023-01-11T21:25:38.8357831Z test_collapse_adjacent_conversions (jit.test_freezing.TestFrozenOptimizations) ... ok (0.015s) 2023-01-11T21:25:38.8358499Z test_conv_add_folding (jit.test_freezing.TestFrozenOptimizations) ... ok (1.337s) 2023-01-11T21:25:38.8359101Z test_conv_bn_folding (jit.test_freezing.TestFrozenOptimizations) ... ok (0.365s) 2023-01-11T21:25:38.8359857Z test_conv_bn_folding_autocast_scenario_cuda (jit.test_freezing.TestFrozenOptimizations) ... skip: Optimization currently only run for GPU (0.001s) 2023-01-11T21:25:38.8360615Z test_conv_bn_folding_not_forward (jit.test_freezing.TestFrozenOptimizations) ... ok (0.012s) 2023-01-11T21:25:38.8361231Z test_conv_hardswish (jit.test_freezing.TestFrozenOptimizations) ... ok (3.241s) 2023-01-11T21:25:38.8361849Z test_conv_mul_add_bn (jit.test_freezing.TestFrozenOptimizations) ... ok (0.060s) 2023-01-11T21:25:38.8362457Z test_conv_to_mkldnn (jit.test_freezing.TestFrozenOptimizations) ... ok (0.038s) 2023-01-11T21:25:38.8363243Z test_conv_to_mkldnn_no_mkldnn (jit.test_freezing.TestFrozenOptimizations) ... skip: Testing no mkldnn (0.001s) 2023-01-11T21:25:38.8363950Z test_freeze_conv_relu_fusion (jit.test_freezing.TestFrozenOptimizations) ... skip: requires CUDNN (0.001s) 2023-01-11T21:25:38.8364696Z test_freeze_conv_relu_fusion_not_forward (jit.test_freezing.TestFrozenOptimizations) ... skip: requires CUDNN (0.001s) 2023-01-11T21:25:38.8365386Z test_freeze_mkdlnn (jit.test_freezing.TestFrozenOptimizations) ... ok (0.006s) 2023-01-11T21:25:38.8366019Z test_freeze_remove_dropout (jit.test_freezing.TestFrozenOptimizations) ... ok (0.005s) 2023-01-11T21:25:38.8366658Z test_freeze_remove_feature_dropout (jit.test_freezing.TestFrozenOptimizations) ... ok (0.006s) 2023-01-11T21:25:38.8367329Z test_hardswish_hardsigmoid (jit.test_freezing.TestFrozenOptimizations) ... ok (0.008s) 2023-01-11T21:25:38.8367998Z test_incompatible_perf_formats (jit.test_freezing.TestFrozenOptimizations) ... ok (0.191s) 2023-01-11T21:25:38.8368626Z test_linear_bn_folding (jit.test_freezing.TestFrozenOptimizations) ... ok (1.032s) 2023-01-11T21:25:38.8369395Z test_linear_bn_folding_autocast_scenario_cuda (jit.test_freezing.TestFrozenOptimizations) ... skip: Optimization currently only run for GPU (0.003s) 2023-01-11T21:25:38.8370225Z test_linear_concat (jit.test_freezing.TestFrozenOptimizations) ... skip: Optimization currently only run for GPU (0.001s) 2023-01-11T21:25:38.8370923Z test_linear_concat_complex (jit.test_freezing.TestFrozenOptimizations) 2023-01-11T21:25:38.8371600Z Testing that the interleaving of multiple optimizations does not ... 
skip: Optimization currently only run for GPU (0.001s) 2023-01-11T21:25:38.8372282Z test_linear_concat_different_input (jit.test_freezing.TestFrozenOptimizations) 2023-01-11T21:25:38.8372966Z There should be no change to the graph due to the optimization pass ... skip: Optimization currently only run for GPU (0.001s) 2023-01-11T21:25:38.8373758Z test_linear_multiple_blocks (jit.test_freezing.TestFrozenOptimizations) ... skip: Optimization currently only run for GPU (0.002s) 2023-01-11T21:25:38.8374485Z test_linear_non_constant_weight (jit.test_freezing.TestFrozenOptimizations) ... ok (0.007s) 2023-01-11T21:25:38.8375135Z test_linear_transpose (jit.test_freezing.TestFrozenOptimizations) ... ok (0.007s) 2023-01-11T21:25:38.8375767Z test_maxpool_mkldnn (jit.test_freezing.TestFrozenOptimizations) ... ok (0.338s) 2023-01-11T21:25:38.8376391Z test_mkldnn_fuser_broadcasting (jit.test_freezing.TestFrozenOptimizations) ... ok (0.023s) 2023-01-11T21:25:38.8377055Z test_mkldnn_inplace_removal (jit.test_freezing.TestFrozenOptimizations) ... ok (0.015s) 2023-01-11T21:25:38.8377731Z test_numel_less_than_size_with_padding (jit.test_freezing.TestFrozenOptimizations) ... ok (0.019s) 2023-01-11T21:25:38.8378396Z test_optimize_freeze_module (jit.test_freezing.TestFrozenOptimizations) ... ok (0.019s) 2023-01-11T21:25:38.8379006Z test_pool2d_batchnorm (jit.test_freezing.TestFrozenOptimizations) ... ok (0.402s) 2023-01-11T21:25:38.8379748Z test_pool3d_batchnorm (jit.test_freezing.TestFrozenOptimizations) ... ok (1.305s) 2023-01-11T21:25:38.8380365Z test_remove_detach (jit.test_freezing.TestFrozenOptimizations) ... ok (0.004s) 2023-01-11T21:25:38.8380982Z test_remove_detach_not_applied (jit.test_freezing.TestFrozenOptimizations) ... ok (0.003s) 2023-01-11T21:25:38.8381620Z test_scalar_mul (jit.test_freezing.TestFrozenOptimizations) ... ok (0.049s) 2023-01-11T21:25:38.8382234Z test_subgraph_creation (jit.test_functional_blocks.TestFunctionalBlocks) ... ok (0.077s) 2023-01-11T21:25:38.8383075Z test_check_no_type_promotion (jit.test_convert_activation.TestFunctionalToInplaceActivation) ... ok (0.226s) 2023-01-11T21:25:38.8383873Z test_functional_to_inplace_activation (jit.test_convert_activation.TestFunctionalToInplaceActivation) ... ok (0.056s) 2023-01-11T21:25:38.8384691Z test_no_functional_to_inplace (jit.test_convert_activation.TestFunctionalToInplaceActivation) ... ok (0.008s) 2023-01-11T21:25:38.8385614Z test_resnet18_correctness (jit.test_convert_activation.TestFunctionalToInplaceActivation) ... ok (3.442s) 2023-01-11T21:25:38.8386299Z test_getattr_with_default (jit.test_attr.TestGetDefaultAttr) ... ok (0.007s) 2023-01-11T21:25:38.8386886Z test_fuse_linear (jit.test_graph_rewrite_passes.TestGraphRewritePasses) ... ok (0.029s) 2023-01-11T21:25:38.8387431Z test_hash_bool (jit.test_hash.TestHash) ... ok (0.007s) 2023-01-11T21:25:38.8387890Z test_hash_device (jit.test_hash.TestHash) ... ok (0.007s) 2023-01-11T21:25:38.8388347Z test_hash_float (jit.test_hash.TestHash) ... ok (0.009s) 2023-01-11T21:25:38.8388765Z test_hash_int (jit.test_hash.TestHash) ... ok (0.007s) 2023-01-11T21:25:38.8389199Z test_hash_none (jit.test_hash.TestHash) ... ok (0.003s) 2023-01-11T21:25:38.8389645Z test_hash_string (jit.test_hash.TestHash) ... ok (0.006s) 2023-01-11T21:25:38.8390049Z test_hash_tensor (jit.test_hash.TestHash) 2023-01-11T21:25:38.8390453Z Tensors should hash by identity ... ok (0.005s) 2023-01-11T21:25:38.8390978Z test_hash_tuple (jit.test_hash.TestHash) ... 
ok (0.007s) 2023-01-11T21:25:38.8391466Z test_hash_tuple_nested_unhashable_type (jit.test_hash.TestHash) ... ok (0.003s) 2023-01-11T21:25:38.8392007Z test_forward_tuple_input (jit.test_hooks.TestHooks) ... ok (0.017s) 2023-01-11T21:25:38.8392535Z test_hook_compilation_hint (jit.test_hooks.TestHooks) ... skip: (0.000s) 2023-01-11T21:25:38.8393048Z test_hook_hook_name_collision (jit.test_hooks.TestHooks) ... ok (0.008s) 2023-01-11T21:25:38.8393577Z test_hook_method_name_collision (jit.test_hooks.TestHooks) ... ok (0.007s) 2023-01-11T21:25:38.8394112Z test_module_direct_forward_invocation (jit.test_hooks.TestHooks) ... ok (0.012s) 2023-01-11T21:25:38.8394663Z test_module_forward_multiple_inputs (jit.test_hooks.TestHooks) ... ok (0.022s) 2023-01-11T21:25:38.8395203Z test_module_forward_single_input (jit.test_hooks.TestHooks) ... ok (0.019s) 2023-01-11T21:25:38.8395732Z test_module_hook_return_nothing (jit.test_hooks.TestHooks) ... ok (0.018s) 2023-01-11T21:25:38.8396298Z test_module_multiple_hooks_multiple_inputs (jit.test_hooks.TestHooks) ... ok (0.031s) 2023-01-11T21:25:38.8396852Z test_module_multiple_hooks_single_input (jit.test_hooks.TestHooks) ... ok (0.028s) 2023-01-11T21:25:38.8397405Z test_module_no_forward_input (jit.test_hooks.TestHooks) ... ok (0.016s) 2023-01-11T21:25:38.8397921Z test_module_same_hook_repeated (jit.test_hooks.TestHooks) ... ok (0.021s) 2023-01-11T21:25:38.8398487Z test_submodule_called_directly_with_hooks (jit.test_hooks.TestHooks) ... ok (0.014s) 2023-01-11T21:25:38.8399054Z test_submodule_direct_forward_invocation (jit.test_hooks.TestHooks) ... ok (0.020s) 2023-01-11T21:25:38.8399633Z test_submodule_forward_multiple_inputs (jit.test_hooks.TestHooks) ... ok (0.033s) 2023-01-11T21:25:38.8400183Z test_submodule_forward_single_input (jit.test_hooks.TestHooks) ... ok (0.021s) 2023-01-11T21:25:38.8400753Z test_submodule_forward_single_input_return_not_tupled (jit.test_hooks.TestHooks) ... ok (0.021s) 2023-01-11T21:25:38.8401480Z test_submodule_hook_return_nothing (jit.test_hooks.TestHooks) ... ok (0.021s) 2023-01-11T21:25:38.8402062Z test_submodule_multiple_hooks_multiple_inputs (jit.test_hooks.TestHooks) ... ok (0.034s) 2023-01-11T21:25:38.8402642Z test_submodule_multiple_hooks_single_input (jit.test_hooks.TestHooks) ... ok (0.030s) 2023-01-11T21:25:38.8403179Z test_submodule_no_forward_input (jit.test_hooks.TestHooks) ... ok (0.016s) 2023-01-11T21:25:38.8403712Z test_submodule_same_hook_repeated (jit.test_hooks.TestHooks) ... ok (0.023s) 2023-01-11T21:25:38.8404229Z test_wrong_hook_signatures (jit.test_hooks.TestHooks) ... ok (0.035s) 2023-01-11T21:25:38.8404734Z test_wrong_pre_hook_signatures (jit.test_hooks.TestHooks) ... ok (0.042s) 2023-01-11T21:25:38.8405294Z test_add_out_ignorable_args (jit.test_ignorable_args.TestIgnorableArgs) ... ok (0.003s) 2023-01-11T21:25:38.8405911Z test_slice_ignorable_args_for_slice (jit.test_ignorable_args.TestIgnorableArgs) ... ok (0.004s) 2023-01-11T21:25:38.8406695Z test_with_ignore_context_manager_with_inp_out (jit.test_ignore_context_manager.TestIgnoreContextManager) ... ok (0.013s) 2023-01-11T21:25:38.8407453Z test_with_ignore_context_manager_with_just_inp (jit.test_ignore_context_manager.TestIgnoreContextManager) ... ok (0.004s) 2023-01-11T21:25:38.8408191Z test_with_ignore_context_manager_with_just_out (jit.test_ignore_context_manager.TestIgnoreContextManager) ... ok (0.005s) 2023-01-11T21:25:38.8408988Z test_inplace_to_functional_activation (jit.test_convert_activation.TestInplaceToFunctionalActivation) ... 
ok (0.039s) 2023-01-11T21:25:38.8409771Z test_resnet18_correctness (jit.test_convert_activation.TestInplaceToFunctionalActivation) ... ok (3.320s) 2023-01-11T21:25:38.8410399Z test_bool (jit.test_isinstance.TestIsinstance) ... ok (0.006s) 2023-01-11T21:25:38.8410910Z test_dict (jit.test_isinstance.TestIsinstance) ... ok (0.006s) 2023-01-11T21:25:38.8411444Z test_dict_nested (jit.test_isinstance.TestIsinstance) ... ok (0.006s) 2023-01-11T21:25:38.8412006Z test_dict_no_contained_type (jit.test_isinstance.TestIsinstance) ... ok (0.002s) 2023-01-11T21:25:38.8412558Z test_dict_tensor (jit.test_isinstance.TestIsinstance) ... ok (0.005s) 2023-01-11T21:25:38.8413137Z test_empty_container_special_cases (jit.test_isinstance.TestIsinstance) ... ok (0.008s) 2023-01-11T21:25:38.8413768Z test_empty_container_throws_warning_in_eager (jit.test_isinstance.TestIsinstance) ... ok (0.001s) 2023-01-11T21:25:38.8414323Z test_float (jit.test_isinstance.TestIsinstance) ... ok (0.004s) 2023-01-11T21:25:38.8414833Z test_if_else (jit.test_isinstance.TestIsinstance) ... ok (0.004s) 2023-01-11T21:25:38.8415354Z test_in_if (jit.test_isinstance.TestIsinstance) ... ok (0.005s) 2023-01-11T21:25:38.8415858Z test_in_while_loop (jit.test_isinstance.TestIsinstance) ... ok (0.007s) 2023-01-11T21:25:38.8416384Z test_int (jit.test_isinstance.TestIsinstance) ... ok (0.005s) 2023-01-11T21:25:38.8416884Z test_list (jit.test_isinstance.TestIsinstance) ... ok (0.005s) 2023-01-11T21:25:38.8417406Z test_list_nested (jit.test_isinstance.TestIsinstance) ... ok (0.005s) 2023-01-11T21:25:38.8417973Z test_list_no_contained_type (jit.test_isinstance.TestIsinstance) ... ok (0.002s) 2023-01-11T21:25:38.8418530Z test_list_tensor (jit.test_isinstance.TestIsinstance) ... ok (0.005s) 2023-01-11T21:25:38.8419075Z test_list_tensor_type_true (jit.test_isinstance.TestIsinstance) ... ok (0.004s) 2023-01-11T21:25:38.8419668Z test_nontuple_container_rhs_throws_in_eager (jit.test_isinstance.TestIsinstance) ... ok (0.001s) 2023-01-11T21:25:38.8420254Z test_optional (jit.test_isinstance.TestIsinstance) ... ok (0.006s) 2023-01-11T21:25:38.8420802Z test_optional_nested (jit.test_isinstance.TestIsinstance) ... ok (0.004s) 2023-01-11T21:25:38.8421369Z test_optional_no_contained_type (jit.test_isinstance.TestIsinstance) ... ok (0.002s) 2023-01-11T21:25:38.8421938Z test_optional_none (jit.test_isinstance.TestIsinstance) ... ok (0.004s) 2023-01-11T21:25:38.8422656Z test_tensor_type_false (jit.test_isinstance.TestIsinstance) ... ok (0.004s) 2023-01-11T21:25:38.8423309Z test_tuple (jit.test_isinstance.TestIsinstance) ... ok (0.006s) 2023-01-11T21:25:38.8423808Z test_tuple_nested (jit.test_isinstance.TestIsinstance) ... ok (0.007s) 2023-01-11T21:25:38.8424360Z test_tuple_no_contained_type (jit.test_isinstance.TestIsinstance) ... ok (0.002s) 2023-01-11T21:25:38.8424917Z test_tuple_rhs (jit.test_isinstance.TestIsinstance) ... ok (0.007s) 2023-01-11T21:25:38.8425432Z test_tuple_tensor (jit.test_isinstance.TestIsinstance) ... ok (0.004s) 2023-01-11T21:25:38.8425965Z test_type_refinement (jit.test_isinstance.TestIsinstance) ... ok (0.016s) 2023-01-11T21:25:38.8426431Z test_ModuleList (__main__.TestJit) ... ok (0.054s) 2023-01-11T21:25:38.8426839Z test_Sequential (__main__.TestJit) ... ok (0.023s) 2023-01-11T21:25:38.8427216Z test_T_mT_H_mH (__main__.TestJit) ... ok (0.020s) 2023-01-11T21:25:38.8427641Z test_add_relu_fusion (__main__.TestJit) ... 
ok (0.026s) 2023-01-11T21:25:38.8428211Z test_arg_configurations (__main__.TestJit) 2023-01-11T21:25:38.8428789Z Different arg configurations should trigger different traces ... skip: Need to be adjusted to Graph Executor (0.001s) 2023-01-11T21:25:38.8429309Z test_attrs (__main__.TestJit) ... ok (0.005s) 2023-01-11T21:25:38.8429684Z test_batchnorm (__main__.TestJit) ... ok (0.003s) 2023-01-11T21:25:38.8430095Z test_big (__main__.TestJit) ... skip: Requires a lot of RAM (0.001s) 2023-01-11T21:25:38.8430551Z test_conj_transpose (__main__.TestJit) ... ok (0.004s) 2023-01-11T21:25:38.8431062Z test_constant_insertion (__main__.TestJit) ... ok (3.635s) 2023-01-11T21:25:38.8431535Z test_constant_prop_aliasing_type (__main__.TestJit) ... ok (0.006s) 2023-01-11T21:25:38.8432003Z test_constant_prop_exception (__main__.TestJit) ... ok (0.005s) 2023-01-11T21:25:38.8432470Z test_constant_prop_if_constant (__main__.TestJit) ... ok (0.005s) 2023-01-11T21:25:38.8432936Z test_constant_prop_if_inline (__main__.TestJit) ... ok (0.003s) 2023-01-11T21:25:38.8433410Z test_constant_prop_loop_constant (__main__.TestJit) ... ok (0.004s) 2023-01-11T21:25:38.8433893Z test_constant_prop_nested (__main__.TestJit) ... ok (0.004s) 2023-01-11T21:25:38.8434346Z test_constant_prop_none (__main__.TestJit) ... ok (0.005s) 2023-01-11T21:25:38.8434779Z test_constant_prop_print (__main__.TestJit) ... ok (0.003s) 2023-01-11T21:25:38.8435233Z test_constant_prop_rand (__main__.TestJit) ... ok (0.003s) 2023-01-11T21:25:38.8435698Z test_constant_prop_remove_output (__main__.TestJit) ... ok (0.004s) 2023-01-11T21:25:38.8436172Z test_constant_prop_simple (__main__.TestJit) ... ok (0.003s) 2023-01-11T21:25:38.8436593Z test_constants_pkl (__main__.TestJit) ... ok (0.002s) 2023-01-11T21:25:38.8436992Z test_cpp (__main__.TestJit) ... ok (0.029s) 2023-01-11T21:25:38.8437372Z test_cse (__main__.TestJit) ... ok (0.007s) 2023-01-11T21:25:38.8437792Z test_cse_not_introduce_aliasing (__main__.TestJit) ... ok (0.006s) 2023-01-11T21:25:38.8438258Z test_cu_escaped_number (__main__.TestJit) ... ok (0.003s) 2023-01-11T21:25:38.8438751Z test_cuda_export_restore (__main__.TestJit) ... skip: requires CUDA (0.001s) 2023-01-11T21:25:38.8439246Z test_debug_flush_compilation_cache (__main__.TestJit) ... ok (0.014s) 2023-01-11T21:25:38.8439717Z test_decompose_addmm (__main__.TestJit) ... ok (0.037s) 2023-01-11T21:25:38.8440179Z test_device_not_equal (__main__.TestJit) ... skip: requires CUDA (0.001s) 2023-01-11T21:25:38.8440686Z test_diff_subgraph_clones_constants (__main__.TestJit) ... ok (0.004s) 2023-01-11T21:25:38.8441116Z test_disabled (__main__.TestJit) ... ok (0.001s) 2023-01-11T21:25:38.8441520Z test_dropout (__main__.TestJit) ... ok (0.002s) 2023-01-11T21:25:38.8441995Z test_dropout_cuda (__main__.TestJit) ... skip: test_dropout_cuda require CUDA (0.001s) 2023-01-11T21:25:38.8442938Z test_dropout_func_requires_grad (__main__.TestJit) ... 
STAGE:2023-01-11 21:24:08 1925:1925 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:25:38.8443940Z STAGE:2023-01-11 21:24:08 1925:1925 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:25:38.8444756Z STAGE:2023-01-11 21:24:08 1925:1925 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:25:38.8445537Z STAGE:2023-01-11 21:24:08 1925:1925 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:25:38.8446304Z STAGE:2023-01-11 21:24:08 1925:1925 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:25:38.8447109Z STAGE:2023-01-11 21:24:08 1925:1925 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:25:38.8447886Z STAGE:2023-01-11 21:24:08 1925:1925 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:25:38.8448659Z STAGE:2023-01-11 21:24:08 1925:1925 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:25:38.8449443Z STAGE:2023-01-11 21:24:08 1925:1925 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:25:38.8450318Z STAGE:2023-01-11 21:24:08 1925:1925 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:25:38.8451085Z STAGE:2023-01-11 21:24:08 1925:1925 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:25:38.8451868Z STAGE:2023-01-11 21:24:08 1925:1925 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:25:38.8452331Z ok (0.101s) 2023-01-11T21:25:38.8452793Z test_dropout_module_requires_grad (__main__.TestJit) ... skip: Testing differentiable graph (0.001s) 2023-01-11T21:25:38.8453296Z test_einsum (__main__.TestJit) ... ok (0.025s) 2023-01-11T21:25:38.8453691Z test_element_size (__main__.TestJit) ... ok (0.003s) 2023-01-11T21:25:38.8454139Z test_expand_fold_quant_inputs (__main__.TestJit) ... ok (0.001s) 2023-01-11T21:25:38.8454606Z test_expand_quantlint (__main__.TestJit) ... ok (0.001s) 2023-01-11T21:25:38.8455163Z test_export_batchnorm (__main__.TestJit) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s) 2023-01-11T21:25:38.8455711Z test_export_dropout (__main__.TestJit) ... ok (0.006s) 2023-01-11T21:25:38.8456132Z test_export_lstm (__main__.TestJit) ... ok (0.041s) 2023-01-11T21:25:38.8456541Z test_export_opnames (__main__.TestJit) ... ok (0.012s) 2023-01-11T21:25:38.8456968Z test_export_rnn (__main__.TestJit) ... ok (0.070s) 2023-01-11T21:25:38.8457483Z test_flags (__main__.TestJit) ... skip: Need to instrument GraphExecutors a bit more (0.001s) 2023-01-11T21:25:38.8458013Z test_function_default_values (__main__.TestJit) ... ok (0.039s) 2023-01-11T21:25:38.8458478Z test_hide_source_ranges_context_manager (__main__.TestJit) ... ok (0.003s) 2023-01-11T21:25:38.8458945Z test_import_method (__main__.TestJit) ... ok (0.005s) 2023-01-11T21:25:38.8459385Z test_inferred_as_tensor (__main__.TestJit) ... ok (0.002s) 2023-01-11T21:25:38.8459785Z test_layout (__main__.TestJit) ... ok (0.003s) 2023-01-11T21:25:38.8460207Z test_matrix_conj_transpose (__main__.TestJit) ... ok (0.004s) 2023-01-11T21:25:38.8460667Z test_matrix_transpose (__main__.TestJit) ... ok (0.003s) 2023-01-11T21:25:38.8461107Z test_module_default_values (__main__.TestJit) ... ok (0.005s) 2023-01-11T21:25:38.8461571Z test_mutable_default_values (__main__.TestJit) ... ok (0.005s) 2023-01-11T21:25:38.8462086Z test_native_dropout_corner_case (__main__.TestJit) ... 
skip: test requires CUDA (0.001s) 2023-01-11T21:25:38.8462703Z test_nn_conv (__main__.TestJit) ... ok (0.242s) 2023-01-11T21:25:38.8463096Z test_nn_lp_pool1d (__main__.TestJit) ... ok (0.102s) 2023-01-11T21:25:38.8463505Z test_nn_lp_pool2d (__main__.TestJit) ... ok (0.139s) 2023-01-11T21:25:38.8463917Z test_nn_padding (__main__.TestJit) ... ok (0.265s) 2023-01-11T21:25:38.8464343Z test_nn_padding_functional (__main__.TestJit) ... ok (0.024s) 2023-01-11T21:25:38.8464805Z test_no_erroneous_warnings (__main__.TestJit) ... ok (0.004s) 2023-01-11T21:25:38.8465381Z test_non_ascii_string (__main__.TestJit) ... skip: temporarily disable the test for fwd compatibility (0.001s) 2023-01-11T21:25:38.8466037Z test_numel (__main__.TestJit) ... ok (0.004s) 2023-01-11T21:25:38.8466461Z test_pattern_based_module_rewrite (__main__.TestJit) ... ok (0.036s) 2023-01-11T21:25:38.8466925Z test_pattern_based_rewrite (__main__.TestJit) ... ok (0.002s) 2023-01-11T21:25:38.8467444Z test_pattern_based_rewrite_with_source_range_preserved (__main__.TestJit) ... ok (0.019s) 2023-01-11T21:25:38.8468272Z test_peephole_optimize_shape_ops (__main__.TestJit) ... skip: Simple executor doesn't have shape information (0.002s) 2023-01-11T21:25:38.8468848Z test_permute_inputs_binding (__main__.TestJit) ... ok (0.003s) 2023-01-11T21:25:38.8469301Z test_pretty_printer (__main__.TestJit) ... ok (0.038s) 2023-01-11T21:25:38.8469726Z test_print_classes_module (__main__.TestJit) ... ok (0.001s) 2023-01-11T21:25:38.8470156Z test_print_op_module (__main__.TestJit) ... ok (0.001s) 2023-01-11T21:25:38.8470598Z test_print_torch_ops_modules (__main__.TestJit) ... ok (0.001s) 2023-01-11T21:25:38.8472225Z test_profiler (__main__.TestJit) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/65521 for allplatform(s) . If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.001s) 2023-01-11T21:25:38.8473167Z test_python_bindings (__main__.TestJit) ... ok (0.225s) 2023-01-11T21:25:38.8473588Z test_python_ir (__main__.TestJit) ... ok (0.003s) 2023-01-11T21:25:38.8474000Z test_python_ir_utils (__main__.TestJit) ... ok (0.004s) 2023-01-11T21:25:38.8474443Z test_python_ir_utils_graph (__main__.TestJit) ... ok (0.006s) 2023-01-11T21:25:38.8474935Z test_python_ivalue (__main__.TestJit) ... ok (0.002s) 2023-01-11T21:25:38.8475376Z test_pytorch_jit_env_off (__main__.TestJit) ... ok (1.575s) 2023-01-11T21:25:38.8475806Z test_recursive_cse (__main__.TestJit) ... ok (0.001s) 2023-01-11T21:25:38.8476263Z test_repeat_interleave_script (__main__.TestJit) ... ok (0.004s) 2023-01-11T21:25:38.8476715Z test_restore_device (__main__.TestJit) ... ok (0.005s) 2023-01-11T21:25:38.8477225Z test_restore_device_cuda (__main__.TestJit) ... skip: restore device requires CUDA (0.001s) 2023-01-11T21:25:38.8477832Z test_restore_shared_storage_on_cuda (__main__.TestJit) ... skip: restore device requires CUDA (0.001s) 2023-01-11T21:25:38.8478345Z test_script_autograd_grad (__main__.TestJit) ... ok (0.373s) 2023-01-11T21:25:38.8478784Z test_script_backward (__main__.TestJit) ... ok (0.011s) 2023-01-11T21:25:38.8479852Z test_script_backward_twice (__main__.TestJit) ... /opt/conda/lib/python3.10/site-packages/torch/jit/_script.py:1243: UserWarning: `optimize` is deprecated and has no effect. 
Use `with torch.jit.optimized_execution() instead 2023-01-11T21:25:38.8480515Z warnings.warn( 2023-01-11T21:25:38.8480813Z ok (0.014s) 2023-01-11T21:25:38.8481161Z test_script_fn_pkl (__main__.TestJit) ... ok (0.003s) 2023-01-11T21:25:38.8481598Z test_script_tensor_type (__main__.TestJit) ... ok (0.004s) 2023-01-11T21:25:38.8482063Z test_shape_analysis_broadcast (__main__.TestJit) ... ok (0.003s) 2023-01-11T21:25:38.8482556Z test_shape_analysis_masked_select (__main__.TestJit) ... ok (0.001s) 2023-01-11T21:25:38.8483068Z test_shape_analysis_unsqueeze_in_loop (__main__.TestJit) ... ok (0.001s) 2023-01-11T21:25:38.8483531Z test_sparse_csr_tensors (__main__.TestJit) ... ok (0.003s) 2023-01-11T21:25:38.8483969Z test_sparse_tensors (__main__.TestJit) ... ok (0.032s) 2023-01-11T21:25:38.8486026Z test_torch_complex (__main__.TestJit) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:471: UserWarning: An output with one or more elements was resized since it had shape [3, 4], which does not match the required output shape [2]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:25:38.8487395Z return callable(*args, **kwargs) 2023-01-11T21:25:38.8489280Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:471: UserWarning: An output with one or more elements was resized since it had shape [5, 2], which does not match the required output shape [2]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:25:38.8490483Z return callable(*args, **kwargs) 2023-01-11T21:25:38.8490785Z ok (0.026s) 2023-01-11T21:25:38.8491494Z test_torch_load_error (__main__.TestJit) ... skip: TODO: re-enable with https://github.com/pytorch/pytorch/pull/29339 (0.001s) 2023-01-11T21:25:38.8492191Z test_torch_load_zipfile_check (__main__.TestJit) ... ok (0.003s) 2023-01-11T21:25:38.8492661Z test_torch_ops_kwonly (__main__.TestJit) ... ok (0.003s) 2023-01-11T21:25:38.8493088Z test_torch_ops_overloaded (__main__.TestJit) ... ok (0.002s) 2023-01-11T21:25:38.8493519Z test_torch_sum (__main__.TestJit) ... ok (0.010s) 2023-01-11T21:25:38.8493945Z test_trace_retains_train (__main__.TestJit) ... ok (0.006s) 2023-01-11T21:25:38.8494352Z test_train_eval (__main__.TestJit) ... ok (0.044s) 2023-01-11T21:25:38.8494763Z test_transpose (__main__.TestJit) ... ok (0.004s) 2023-01-11T21:25:38.8495185Z test_unchecked_cast (__main__.TestJit) ... ok (0.011s) 2023-01-11T21:25:38.8495604Z test_unique_state_dict (__main__.TestJit) ... ok (0.001s) 2023-01-11T21:25:38.8496149Z test_verify (__main__.TestJit) ... skip: verify needs to be updated to work with GraphExecutors (0.001s) 2023-01-11T21:25:38.8496667Z test_warnings (__main__.TestJit) ... ok (0.009s) 2023-01-11T21:25:38.8497168Z test_nn_AdaptiveAvgPool1d (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8497743Z test_nn_AdaptiveAvgPool1d_no_batch_dim (__main__.TestJitGeneratedModule) ... 
ok (0.038s) 2023-01-11T21:25:38.8498362Z test_nn_AdaptiveAvgPool1d_one_output (__main__.TestJitGeneratedModule) ... ok (0.034s) 2023-01-11T21:25:38.8498975Z test_nn_AdaptiveAvgPool2d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.047s) 2023-01-11T21:25:38.8499549Z test_nn_AdaptiveAvgPool2d_single (__main__.TestJitGeneratedModule) ... ok (0.048s) 2023-01-11T21:25:38.8500164Z test_nn_AdaptiveAvgPool2d_single_1x1output (__main__.TestJitGeneratedModule) ... ok (0.045s) 2023-01-11T21:25:38.8500771Z test_nn_AdaptiveAvgPool2d_tuple (__main__.TestJitGeneratedModule) ... ok (0.047s) 2023-01-11T21:25:38.8501375Z test_nn_AdaptiveAvgPool2d_tuple_none (__main__.TestJitGeneratedModule) ... ok (0.003s) 2023-01-11T21:25:38.8501970Z test_nn_AdaptiveAvgPool3d_last_dim (__main__.TestJitGeneratedModule) ... ok (0.044s) 2023-01-11T21:25:38.8502884Z test_nn_AdaptiveAvgPool3d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.045s) 2023-01-11T21:25:38.8503306Z test_nn_AdaptiveAvgPool3d_single (__main__.TestJitGeneratedModule) ... ok (0.046s) 2023-01-11T21:25:38.8503637Z test_nn_AdaptiveAvgPool3d_tuple (__main__.TestJitGeneratedModule) ... ok (0.048s) 2023-01-11T21:25:38.8503988Z test_nn_AdaptiveAvgPool3d_tuple_none (__main__.TestJitGeneratedModule) ... ok (0.003s) 2023-01-11T21:25:38.8504330Z test_nn_AdaptiveMaxPool1d (__main__.TestJitGeneratedModule) ... ok (0.051s) 2023-01-11T21:25:38.8504673Z test_nn_AdaptiveMaxPool1d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.048s) 2023-01-11T21:25:38.8505008Z test_nn_AdaptiveMaxPool2d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.062s) 2023-01-11T21:25:38.8505357Z test_nn_AdaptiveMaxPool2d_single (__main__.TestJitGeneratedModule) ... ok (0.053s) 2023-01-11T21:25:38.8505705Z test_nn_AdaptiveMaxPool2d_tuple (__main__.TestJitGeneratedModule) ... ok (0.052s) 2023-01-11T21:25:38.8506171Z test_nn_AdaptiveMaxPool2d_tuple_none (__main__.TestJitGeneratedModule) ... ok (0.003s) 2023-01-11T21:25:38.8506501Z test_nn_AdaptiveMaxPool3d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.058s) 2023-01-11T21:25:38.8506842Z test_nn_AdaptiveMaxPool3d_single (__main__.TestJitGeneratedModule) ... ok (0.052s) 2023-01-11T21:25:38.8507191Z test_nn_AdaptiveMaxPool3d_single_nonatomic (__main__.TestJitGeneratedModule) ... ok (0.051s) 2023-01-11T21:25:38.8507523Z test_nn_AdaptiveMaxPool3d_tuple (__main__.TestJitGeneratedModule) ... ok (0.053s) 2023-01-11T21:25:38.8507873Z test_nn_AdaptiveMaxPool3d_tuple_nonatomic (__main__.TestJitGeneratedModule) ... ok (0.053s) 2023-01-11T21:25:38.8508222Z test_nn_AdaptiveMaxPool3d_tuple_none (__main__.TestJitGeneratedModule) ... ok (0.003s) 2023-01-11T21:25:38.8508543Z test_nn_AvgPool1d (__main__.TestJitGeneratedModule) ... ok (0.043s) 2023-01-11T21:25:38.8508887Z test_nn_AvgPool1d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.043s) 2023-01-11T21:25:38.8509209Z test_nn_AvgPool1d_stride (__main__.TestJitGeneratedModule) ... ok (0.044s) 2023-01-11T21:25:38.8509525Z test_nn_AvgPool1d_stride_pad (__main__.TestJitGeneratedModule) ... ok (0.044s) 2023-01-11T21:25:38.8509818Z test_nn_AvgPool2d (__main__.TestJitGeneratedModule) ... ok (0.044s) 2023-01-11T21:25:38.8510123Z test_nn_AvgPool2d_divisor (__main__.TestJitGeneratedModule) ... ok (0.044s) 2023-01-11T21:25:38.8510442Z test_nn_AvgPool2d_divisor_stride (__main__.TestJitGeneratedModule) ... ok (0.045s) 2023-01-11T21:25:38.8510815Z test_nn_AvgPool2d_divisor_stride_pad (__main__.TestJitGeneratedModule) ... 
ok (0.045s) 2023-01-11T21:25:38.8511131Z test_nn_AvgPool2d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.045s) 2023-01-11T21:25:38.8511446Z test_nn_AvgPool2d_stride (__main__.TestJitGeneratedModule) ... ok (0.046s) 2023-01-11T21:25:38.8511763Z test_nn_AvgPool2d_stride_pad (__main__.TestJitGeneratedModule) ... ok (0.046s) 2023-01-11T21:25:38.8512066Z test_nn_AvgPool3d (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8512373Z test_nn_AvgPool3d_divisor (__main__.TestJitGeneratedModule) ... ok (0.041s) 2023-01-11T21:25:38.8512695Z test_nn_AvgPool3d_divisor_stride (__main__.TestJitGeneratedModule) ... ok (0.041s) 2023-01-11T21:25:38.8513039Z test_nn_AvgPool3d_divisor_stride1_pad0_gpu_input (__main__.TestJitGeneratedModule) ... ok (0.052s) 2023-01-11T21:25:38.8513378Z test_nn_AvgPool3d_divisor_stride_pad (__main__.TestJitGeneratedModule) ... ok (0.041s) 2023-01-11T21:25:38.8513741Z test_nn_AvgPool3d_divisor_stride_pad_gpu_fixedkw_output (__main__.TestJitGeneratedModule) ... ok (0.040s) 2023-01-11T21:25:38.8514124Z test_nn_AvgPool3d_divisor_stride_pad_gpu_general_output (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8514496Z test_nn_AvgPool3d_divisor_stride_pad_gpu_input_nooverlap (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8514848Z test_nn_AvgPool3d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8515173Z test_nn_AvgPool3d_stride (__main__.TestJitGeneratedModule) ... ok (0.041s) 2023-01-11T21:25:38.8515502Z test_nn_AvgPool3d_stride1_pad0_gpu_input (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8515822Z test_nn_AvgPool3d_stride_pad (__main__.TestJitGeneratedModule) ... ok (0.040s) 2023-01-11T21:25:38.8516164Z test_nn_AvgPool3d_stride_pad_gpu_fixedkw_output (__main__.TestJitGeneratedModule) ... ok (0.040s) 2023-01-11T21:25:38.8516525Z test_nn_AvgPool3d_stride_pad_gpu_general_output (__main__.TestJitGeneratedModule) ... ok (0.047s) 2023-01-11T21:25:38.8516877Z test_nn_AvgPool3d_stride_pad_gpu_input_nooverlap (__main__.TestJitGeneratedModule) ... ok (0.041s) 2023-01-11T21:25:38.8517202Z test_nn_BCELoss (__main__.TestJitGeneratedModule) ... ok (0.053s) 2023-01-11T21:25:38.8517511Z test_nn_BCELoss_no_batch_dim_mean (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8517875Z test_nn_BCELoss_no_batch_dim_none (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8518187Z test_nn_BCELoss_no_batch_dim_sum (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8518503Z test_nn_BCELoss_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8518820Z test_nn_BCELoss_no_reduce_scalar (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8519136Z test_nn_BCELoss_scalar_weights (__main__.TestJitGeneratedModule) ... ok (0.032s) 2023-01-11T21:25:38.8519450Z test_nn_BCELoss_weights (__main__.TestJitGeneratedModule) ... ok (0.032s) 2023-01-11T21:25:38.8519770Z test_nn_BCELoss_weights_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8520105Z test_nn_BCELoss_weights_no_reduce_scalar (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8520422Z test_nn_BCEWithLogitsLoss (__main__.TestJitGeneratedModule) ... ok (0.041s) 2023-01-11T21:25:38.8520808Z test_nn_BCEWithLogitsLoss_legacy_enum (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8521161Z test_nn_BCEWithLogitsLoss_no_batch_dim_mean (__main__.TestJitGeneratedModule) ... 
ok (0.032s) 2023-01-11T21:25:38.8521510Z test_nn_BCEWithLogitsLoss_no_batch_dim_none (__main__.TestJitGeneratedModule) ... ok (0.032s) 2023-01-11T21:25:38.8521849Z test_nn_BCEWithLogitsLoss_no_batch_dim_sum (__main__.TestJitGeneratedModule) ... ok (0.033s) 2023-01-11T21:25:38.8522192Z test_nn_BCEWithLogitsLoss_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8522544Z test_nn_BCEWithLogitsLoss_no_reduce_scalar (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8522885Z test_nn_BCEWithLogitsLoss_scalar_weights (__main__.TestJitGeneratedModule) ... ok (0.033s) 2023-01-11T21:25:38.8523234Z test_nn_BCEWithLogitsLoss_weights (__main__.TestJitGeneratedModule) ... ok (0.034s) 2023-01-11T21:25:38.8523566Z test_nn_BatchNorm1d_3d_input (__main__.TestJitGeneratedModule) ... ok (0.088s) 2023-01-11T21:25:38.8523898Z test_nn_BatchNorm1d_3d_input_not_affine (__main__.TestJitGeneratedModule) ... ok (0.089s) 2023-01-11T21:25:38.8524211Z test_nn_BatchNorm1d_affine (__main__.TestJitGeneratedModule) ... ok (0.090s) 2023-01-11T21:25:38.8524544Z test_nn_BatchNorm1d_affine_simple_average (__main__.TestJitGeneratedModule) ... ok (0.092s) 2023-01-11T21:25:38.8524877Z test_nn_BatchNorm1d_not_affine (__main__.TestJitGeneratedModule) ... ok (0.087s) 2023-01-11T21:25:38.8525191Z test_nn_BatchNorm1d_not_tracking_stats (__main__.TestJitGeneratedModule) ... ok (0.085s) 2023-01-11T21:25:38.8525519Z test_nn_BatchNorm1d_zero_batch (__main__.TestJitGeneratedModule) ... ok (0.089s) 2023-01-11T21:25:38.8525830Z test_nn_BatchNorm2d (__main__.TestJitGeneratedModule) ... ok (0.086s) 2023-01-11T21:25:38.8526158Z test_nn_BatchNorm2d_2d_simple_average (__main__.TestJitGeneratedModule) ... ok (0.091s) 2023-01-11T21:25:38.8526472Z test_nn_BatchNorm2d_momentum (__main__.TestJitGeneratedModule) ... ok (0.087s) 2023-01-11T21:25:38.8526796Z test_nn_BatchNorm2d_not_affine (__main__.TestJitGeneratedModule) ... ok (0.086s) 2023-01-11T21:25:38.8527125Z test_nn_BatchNorm2d_not_tracking_stats (__main__.TestJitGeneratedModule) ... ok (0.083s) 2023-01-11T21:25:38.8527443Z test_nn_BatchNorm2d_zero_batch (__main__.TestJitGeneratedModule) ... ok (0.087s) 2023-01-11T21:25:38.8527751Z test_nn_BatchNorm3d (__main__.TestJitGeneratedModule) ... ok (0.090s) 2023-01-11T21:25:38.8528070Z test_nn_BatchNorm3d_3d_simple_average (__main__.TestJitGeneratedModule) ... ok (0.090s) 2023-01-11T21:25:38.8528398Z test_nn_BatchNorm3d_momentum (__main__.TestJitGeneratedModule) ... ok (0.087s) 2023-01-11T21:25:38.8528703Z test_nn_BatchNorm3d_not_affine (__main__.TestJitGeneratedModule) ... ok (0.085s) 2023-01-11T21:25:38.8529036Z test_nn_BatchNorm3d_not_tracking_stats (__main__.TestJitGeneratedModule) ... ok (0.081s) 2023-01-11T21:25:38.8529366Z test_nn_BatchNorm3d_zero_batch (__main__.TestJitGeneratedModule) ... ok (0.084s) 2023-01-11T21:25:38.8529697Z test_nn_Bilinear (__main__.TestJitGeneratedModule) ... ok (0.044s) 2023-01-11T21:25:38.8553893Z test_nn_CELU (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8554223Z test_nn_CELU_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8554527Z test_nn_CELU_scalar (__main__.TestJitGeneratedModule) ... ok (0.038s) 2023-01-11T21:25:38.8554898Z test_nn_CTCLoss_2d_int_target_lengths_intlists (__main__.TestJitGeneratedModule) ... skip: module test skipped on JIT (0.002s) 2023-01-11T21:25:38.8555279Z test_nn_CTCLoss_2d_int_target_lengths_tensors (__main__.TestJitGeneratedModule) ... 
ok (0.028s) 2023-01-11T21:25:38.8555620Z test_nn_CTCLoss_2d_lengths_tensors (__main__.TestJitGeneratedModule) ... ok (0.024s) 2023-01-11T21:25:38.8555969Z test_nn_CTCLoss_lengths_intlists (__main__.TestJitGeneratedModule) ... skip: module test skipped on JIT (0.002s) 2023-01-11T21:25:38.8556443Z test_nn_CTCLoss_lengths_tensors (__main__.TestJitGeneratedModule) ... ok (0.024s) 2023-01-11T21:25:38.8556770Z test_nn_ConstantPad1d (__main__.TestJitGeneratedModule) ... ok (0.036s) 2023-01-11T21:25:38.8557075Z test_nn_ConstantPad1d_batch (__main__.TestJitGeneratedModule) ... ok (0.035s) 2023-01-11T21:25:38.8557404Z test_nn_ConstantPad1d_complex (__main__.TestJitGeneratedModule) ... ok (0.036s) 2023-01-11T21:25:38.8557724Z test_nn_ConstantPad2d (__main__.TestJitGeneratedModule) ... ok (0.036s) 2023-01-11T21:25:38.8559064Z test_nn_ConstantPad2d_complex (__main__.TestJitGeneratedModule) ... ok (0.036s) 2023-01-11T21:25:38.8559381Z test_nn_ConstantPad2d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.043s) 2023-01-11T21:25:38.8559698Z test_nn_ConstantPad3d (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8560014Z test_nn_ConstantPad3d_complex (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8560343Z test_nn_ConstantPad3d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.038s) 2023-01-11T21:25:38.8560639Z test_nn_Conv1d (__main__.TestJitGeneratedModule) ... ok (0.062s) 2023-01-11T21:25:38.8560949Z test_nn_Conv1d_circular_stride2_pad2 (__main__.TestJitGeneratedModule) ... ok (0.059s) 2023-01-11T21:25:38.8561263Z test_nn_Conv1d_dilated (__main__.TestJitGeneratedModule) ... ok (0.054s) 2023-01-11T21:25:38.8561552Z test_nn_Conv1d_groups (__main__.TestJitGeneratedModule) ... ok (0.053s) 2023-01-11T21:25:38.8561849Z test_nn_Conv1d_pad1 (__main__.TestJitGeneratedModule) ... ok (0.052s) 2023-01-11T21:25:38.8562150Z test_nn_Conv1d_pad1size1 (__main__.TestJitGeneratedModule) ... ok (0.052s) 2023-01-11T21:25:38.8562438Z test_nn_Conv1d_pad2 (__main__.TestJitGeneratedModule) ... ok (0.053s) 2023-01-11T21:25:38.8562739Z test_nn_Conv1d_pad2size1 (__main__.TestJitGeneratedModule) ... ok (0.053s) 2023-01-11T21:25:38.8563043Z test_nn_Conv1d_pad_same (__main__.TestJitGeneratedModule) ... ok (0.052s) 2023-01-11T21:25:38.8563350Z test_nn_Conv1d_pad_same2 (__main__.TestJitGeneratedModule) ... ok (0.053s) 2023-01-11T21:25:38.8563648Z test_nn_Conv1d_pad_same_dilated (__main__.TestJitGeneratedModule) ... ok (0.055s) 2023-01-11T21:25:38.8563960Z test_nn_Conv1d_pad_valid (__main__.TestJitGeneratedModule) ... ok (0.054s) 2023-01-11T21:25:38.8564279Z test_nn_Conv1d_reflect_stride2_pad2 (__main__.TestJitGeneratedModule) ... ok (0.057s) 2023-01-11T21:25:38.8564602Z test_nn_Conv1d_replicate_stride2_pad2 (__main__.TestJitGeneratedModule) ... ok (0.057s) 2023-01-11T21:25:38.8564914Z test_nn_Conv1d_stride (__main__.TestJitGeneratedModule) ... ok (0.055s) 2023-01-11T21:25:38.8565214Z test_nn_Conv1d_zero_batch (__main__.TestJitGeneratedModule) ... ok (0.051s) 2023-01-11T21:25:38.8565527Z test_nn_Conv1d_zeros_stride2_pad2 (__main__.TestJitGeneratedModule) ... ok (0.051s) 2023-01-11T21:25:38.8565817Z test_nn_Conv2d (__main__.TestJitGeneratedModule) ... ok (0.053s) 2023-01-11T21:25:38.8566124Z test_nn_Conv2d_circular_stride2_pad2 (__main__.TestJitGeneratedModule) ... ok (0.058s) 2023-01-11T21:25:38.8566516Z test_nn_Conv2d_depthwise (__main__.TestJitGeneratedModule) ... ok (0.054s) 2023-01-11T21:25:38.8566820Z test_nn_Conv2d_depthwise_dilated (__main__.TestJitGeneratedModule) ... 
ok (0.055s) 2023-01-11T21:25:38.8567152Z test_nn_Conv2d_depthwise_padded (__main__.TestJitGeneratedModule) ... ok (0.054s) 2023-01-11T21:25:38.8567481Z test_nn_Conv2d_depthwise_strided (__main__.TestJitGeneratedModule) ... ok (0.055s) 2023-01-11T21:25:38.8567814Z test_nn_Conv2d_depthwise_with_multiplier (__main__.TestJitGeneratedModule) ... ok (0.055s) 2023-01-11T21:25:38.8568121Z test_nn_Conv2d_dilated (__main__.TestJitGeneratedModule) ... ok (0.055s) 2023-01-11T21:25:38.8568422Z test_nn_Conv2d_groups (__main__.TestJitGeneratedModule) ... ok (0.053s) 2023-01-11T21:25:38.8568733Z test_nn_Conv2d_groups_thnn (__main__.TestJitGeneratedModule) ... ok (0.054s) 2023-01-11T21:25:38.8569022Z test_nn_Conv2d_no_bias (__main__.TestJitGeneratedModule) ... ok (0.054s) 2023-01-11T21:25:38.8569370Z test_nn_Conv2d_pad_same (__main__.TestJitGeneratedModule) ... ok (0.065s) 2023-01-11T21:25:38.8569686Z test_nn_Conv2d_pad_same_dilated (__main__.TestJitGeneratedModule) ... ok (0.055s) 2023-01-11T21:25:38.8569998Z test_nn_Conv2d_pad_valid (__main__.TestJitGeneratedModule) ... ok (0.054s) 2023-01-11T21:25:38.8570290Z test_nn_Conv2d_padding (__main__.TestJitGeneratedModule) ... ok (0.056s) 2023-01-11T21:25:38.8570605Z test_nn_Conv2d_reflect_stride2_pad2 (__main__.TestJitGeneratedModule) ... ok (0.058s) 2023-01-11T21:25:38.8570938Z test_nn_Conv2d_replicate_stride2_pad2 (__main__.TestJitGeneratedModule) ... ok (0.060s) 2023-01-11T21:25:38.8571240Z test_nn_Conv2d_strided (__main__.TestJitGeneratedModule) ... ok (0.057s) 2023-01-11T21:25:38.8571542Z test_nn_Conv2d_zero_batch (__main__.TestJitGeneratedModule) ... ok (0.051s) 2023-01-11T21:25:38.8571855Z test_nn_Conv2d_zeros_stride2_pad2 (__main__.TestJitGeneratedModule) ... ok (0.058s) 2023-01-11T21:25:38.8572162Z test_nn_Conv3d (__main__.TestJitGeneratedModule) ... ok (0.059s) 2023-01-11T21:25:38.8572447Z test_nn_Conv3d_1x1x1_no_bias (__main__.TestJitGeneratedModule) ... ok (0.056s) 2023-01-11T21:25:38.8572763Z test_nn_Conv3d_circular_stride2_pad2 (__main__.TestJitGeneratedModule) ... ok (0.062s) 2023-01-11T21:25:38.8573078Z test_nn_Conv3d_dilated (__main__.TestJitGeneratedModule) ... ok (0.056s) 2023-01-11T21:25:38.8573375Z test_nn_Conv3d_dilated_strided (__main__.TestJitGeneratedModule) ... ok (0.055s) 2023-01-11T21:25:38.8573683Z test_nn_Conv3d_groups (__main__.TestJitGeneratedModule) ... ok (0.055s) 2023-01-11T21:25:38.8573979Z test_nn_Conv3d_no_bias (__main__.TestJitGeneratedModule) ... ok (0.053s) 2023-01-11T21:25:38.8574279Z test_nn_Conv3d_pad_same (__main__.TestJitGeneratedModule) ... ok (0.055s) 2023-01-11T21:25:38.8574581Z test_nn_Conv3d_pad_same_dilated (__main__.TestJitGeneratedModule) ... ok (0.055s) 2023-01-11T21:25:38.8574890Z test_nn_Conv3d_pad_valid (__main__.TestJitGeneratedModule) ... ok (0.053s) 2023-01-11T21:25:38.8575217Z test_nn_Conv3d_replicate_stride2_pad2 (__main__.TestJitGeneratedModule) ... ok (0.057s) 2023-01-11T21:25:38.8575519Z test_nn_Conv3d_stride (__main__.TestJitGeneratedModule) ... ok (0.063s) 2023-01-11T21:25:38.8575832Z test_nn_Conv3d_stride_padding (__main__.TestJitGeneratedModule) ... ok (0.054s) 2023-01-11T21:25:38.8576142Z test_nn_Conv3d_zero_batch (__main__.TestJitGeneratedModule) ... ok (0.052s) 2023-01-11T21:25:38.8576453Z test_nn_Conv3d_zeros_stride2_pad2 (__main__.TestJitGeneratedModule) ... ok (0.055s) 2023-01-11T21:25:38.8576754Z test_nn_ConvTranspose1d (__main__.TestJitGeneratedModule) ... ok (0.101s) 2023-01-11T21:25:38.8577074Z test_nn_ConvTranspose1d_dilated (__main__.TestJitGeneratedModule) ... 
ok (0.103s) 2023-01-11T21:25:38.8577402Z test_nn_ConvTranspose1d_groups (__main__.TestJitGeneratedModule) ... ok (0.106s) 2023-01-11T21:25:38.8577714Z test_nn_ConvTranspose1d_no_bias (__main__.TestJitGeneratedModule) ... ok (0.104s) 2023-01-11T21:25:38.8578063Z test_nn_ConvTranspose2d (__main__.TestJitGeneratedModule) ... ok (0.120s) 2023-01-11T21:25:38.8578380Z test_nn_ConvTranspose2d_dilated (__main__.TestJitGeneratedModule) ... ok (0.113s) 2023-01-11T21:25:38.8578704Z test_nn_ConvTranspose2d_groups (__main__.TestJitGeneratedModule) ... ok (0.109s) 2023-01-11T21:25:38.8579017Z test_nn_ConvTranspose2d_no_bias (__main__.TestJitGeneratedModule) ... ok (0.106s) 2023-01-11T21:25:38.8579331Z test_nn_ConvTranspose3d (__main__.TestJitGeneratedModule) ... ok (0.105s) 2023-01-11T21:25:38.8579646Z test_nn_ConvTranspose3d_dilated (__main__.TestJitGeneratedModule) ... ok (0.104s) 2023-01-11T21:25:38.8579958Z test_nn_CosineEmbeddingLoss (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8580290Z test_nn_CosineEmbeddingLoss_margin (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8580643Z test_nn_CosineEmbeddingLoss_no_batch_dim_mean (__main__.TestJitGeneratedModule) ... ok (0.028s) 2023-01-11T21:25:38.8581037Z test_nn_CosineEmbeddingLoss_no_batch_dim_none (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8581389Z test_nn_CosineEmbeddingLoss_no_batch_dim_sum (__main__.TestJitGeneratedModule) ... ok (0.028s) 2023-01-11T21:25:38.8581725Z test_nn_CrossEntropyLoss (__main__.TestJitGeneratedModule) ... ok (0.036s) 2023-01-11T21:25:38.8582046Z test_nn_CrossEntropyLoss_2d (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8582515Z test_nn_CrossEntropyLoss_2d_ignore_index (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8582912Z test_nn_CrossEntropyLoss_2d_indices_target_smoothing (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8583302Z test_nn_CrossEntropyLoss_2d_indices_target_smoothing_ignore_index (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8583704Z test_nn_CrossEntropyLoss_2d_indices_target_smoothing_sum_reduction (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8584092Z test_nn_CrossEntropyLoss_2d_indices_target_smoothing_weight (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8584460Z test_nn_CrossEntropyLoss_2d_prob_target (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8584818Z test_nn_CrossEntropyLoss_2d_prob_target_smoothing (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8585195Z test_nn_CrossEntropyLoss_2d_prob_target_smoothing_sum_reduction (__main__.TestJitGeneratedModule) ... ok (0.028s) 2023-01-11T21:25:38.8585571Z test_nn_CrossEntropyLoss_2d_prob_target_smoothing_weight (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8585943Z test_nn_CrossEntropyLoss_2d_prob_target_weights (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8586291Z test_nn_CrossEntropyLoss_2d_weights (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8586650Z test_nn_CrossEntropyLoss_3d_indices_target_smoothing (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8587027Z test_nn_CrossEntropyLoss_3d_indices_target_smoothing_ignore_index (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8587425Z test_nn_CrossEntropyLoss_3d_indices_target_smoothing_sum_reduction (__main__.TestJitGeneratedModule) ... 
ok (0.030s) 2023-01-11T21:25:38.8587837Z test_nn_CrossEntropyLoss_3d_indices_target_smoothing_sum_reduction_ignore_index (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8588205Z test_nn_CrossEntropyLoss_3d_prob_target (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8588564Z test_nn_CrossEntropyLoss_3d_prob_target_smoothing (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8588940Z test_nn_CrossEntropyLoss_3d_prob_target_smoothing_sum_reduction (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8589319Z test_nn_CrossEntropyLoss_3d_prob_target_weights (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8589660Z test_nn_CrossEntropyLoss_4d_prob_target (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8590079Z test_nn_CrossEntropyLoss_4d_prob_target_weights (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8590421Z test_nn_CrossEntropyLoss_dim_is_3 (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8590813Z test_nn_CrossEntropyLoss_higher_dim (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8591145Z test_nn_CrossEntropyLoss_weights (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8591469Z test_nn_CrossMapLRN2d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8591763Z test_nn_ELU (__main__.TestJitGeneratedModule) ... ok (0.038s) 2023-01-11T21:25:38.8592044Z test_nn_ELU_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8592343Z test_nn_ELU_scalar (__main__.TestJitGeneratedModule) ... ok (0.038s) 2023-01-11T21:25:38.8592637Z test_nn_Embedding (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8592999Z test_nn_EmbeddingBag_discontiguous (__main__.TestJitGeneratedModule) ... ok (0.063s) 2023-01-11T21:25:38.8593312Z test_nn_EmbeddingBag_max (__main__.TestJitGeneratedModule) ... ok (0.047s) 2023-01-11T21:25:38.8593637Z test_nn_EmbeddingBag_max_padding_idx (__main__.TestJitGeneratedModule) ... ok (0.047s) 2023-01-11T21:25:38.8593965Z test_nn_EmbeddingBag_mean (__main__.TestJitGeneratedModule) ... ok (0.048s) 2023-01-11T21:25:38.8594278Z test_nn_EmbeddingBag_mean_padding_idx (__main__.TestJitGeneratedModule) ... ok (0.049s) 2023-01-11T21:25:38.8594605Z test_nn_EmbeddingBag_sparse (__main__.TestJitGeneratedModule) ... ok (0.049s) 2023-01-11T21:25:38.8594916Z test_nn_EmbeddingBag_sum (__main__.TestJitGeneratedModule) ... ok (0.049s) 2023-01-11T21:25:38.8595240Z test_nn_EmbeddingBag_sum_padding_idx (__main__.TestJitGeneratedModule) ... ok (0.047s) 2023-01-11T21:25:38.8595561Z test_nn_Embedding_discontiguous (__main__.TestJitGeneratedModule) ... ok (0.025s) 2023-01-11T21:25:38.8595885Z test_nn_Embedding_sparse (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8596183Z test_nn_Flatten (__main__.TestJitGeneratedModule) ... ok (0.036s) 2023-01-11T21:25:38.8596471Z test_nn_Flatten_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.033s) 2023-01-11T21:25:38.8596768Z test_nn_Fold (__main__.TestJitGeneratedModule) ... ok (0.050s) 2023-01-11T21:25:38.8597056Z test_nn_Fold_int_input (__main__.TestJitGeneratedModule) ... ok (0.047s) 2023-01-11T21:25:38.8597366Z test_nn_Fold_no_batch_dim_input (__main__.TestJitGeneratedModule) ... ok (0.048s) 2023-01-11T21:25:38.8597676Z test_nn_Fold_no_batch_dim_int_input (__main__.TestJitGeneratedModule) ... 
ok (0.048s) 2023-01-11T21:25:38.8598012Z test_nn_FractionalMaxPool2d_ratio (__main__.TestJitGeneratedModule) ... ok (0.093s) 2023-01-11T21:25:38.8598367Z test_nn_FractionalMaxPool2d_ratio_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.078s) 2023-01-11T21:25:38.8598743Z test_nn_FractionalMaxPool2d_ratio_no_batch_dim_no_random_samples (__main__.TestJitGeneratedModule) ... ok (0.081s) 2023-01-11T21:25:38.8599129Z test_nn_FractionalMaxPool2d_ratio_return_indices (__main__.TestJitGeneratedModule) ... ok (0.074s) 2023-01-11T21:25:38.8599479Z test_nn_FractionalMaxPool2d_size (__main__.TestJitGeneratedModule) ... ok (0.075s) 2023-01-11T21:25:38.8599833Z test_nn_FractionalMaxPool2d_size_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.075s) 2023-01-11T21:25:38.8600203Z test_nn_FractionalMaxPool2d_size_no_batch_dim_no_random_samples (__main__.TestJitGeneratedModule) ... ok (0.078s) 2023-01-11T21:25:38.8600575Z test_nn_FractionalMaxPool3d_asymsize (__main__.TestJitGeneratedModule) ... ok (0.090s) 2023-01-11T21:25:38.8600920Z test_nn_FractionalMaxPool3d_ratio (__main__.TestJitGeneratedModule) ... ok (0.077s) 2023-01-11T21:25:38.8601258Z test_nn_FractionalMaxPool3d_ratio_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.089s) 2023-01-11T21:25:38.8601677Z test_nn_FractionalMaxPool3d_ratio_no_batch_dim_no_random_samples (__main__.TestJitGeneratedModule) ... ok (0.088s) 2023-01-11T21:25:38.8602062Z test_nn_FractionalMaxPool3d_ratio_return_indices (__main__.TestJitGeneratedModule) ... ok (0.073s) 2023-01-11T21:25:38.8602411Z test_nn_FractionalMaxPool3d_size (__main__.TestJitGeneratedModule) ... ok (0.074s) 2023-01-11T21:25:38.8602747Z test_nn_FractionalMaxPool3d_size_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.075s) 2023-01-11T21:25:38.8603126Z test_nn_FractionalMaxPool3d_size_no_batch_dim_no_random_samples (__main__.TestJitGeneratedModule) ... ok (0.078s) 2023-01-11T21:25:38.8603462Z test_nn_GELU (__main__.TestJitGeneratedModule) ... ok (0.034s) 2023-01-11T21:25:38.8603756Z test_nn_GELU_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.033s) 2023-01-11T21:25:38.8604042Z test_nn_GELU_scalar (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8604329Z test_nn_GLU (__main__.TestJitGeneratedModule) ... ok (0.043s) 2023-01-11T21:25:38.8604643Z test_nn_GLU_dim (__main__.TestJitGeneratedModule) ... ok (0.041s) 2023-01-11T21:25:38.8604930Z test_nn_GLU_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.043s) 2023-01-11T21:25:38.8605225Z test_nn_GRUCell (__main__.TestJitGeneratedModule) ... ok (0.071s) 2023-01-11T21:25:38.8605525Z test_nn_GroupNorm_1d_affine (__main__.TestJitGeneratedModule) ... ok (0.074s) 2023-01-11T21:25:38.8605840Z test_nn_GroupNorm_1d_affine_GN (__main__.TestJitGeneratedModule) ... ok (0.068s) 2023-01-11T21:25:38.8606154Z test_nn_GroupNorm_1d_affine_large_batch (__main__.TestJitGeneratedModule) ... ok (0.069s) 2023-01-11T21:25:38.8606482Z test_nn_GroupNorm_1d_no_affine_IN (__main__.TestJitGeneratedModule) ... ok (0.067s) 2023-01-11T21:25:38.8606802Z test_nn_GroupNorm_1d_no_affine_LN (__main__.TestJitGeneratedModule) ... ok (0.067s) 2023-01-11T21:25:38.8607103Z test_nn_GroupNorm_2d_affine (__main__.TestJitGeneratedModule) ... ok (0.069s) 2023-01-11T21:25:38.8607436Z test_nn_GroupNorm_2d_affine_large_feature (__main__.TestJitGeneratedModule) ... ok (0.075s) 2023-01-11T21:25:38.8607767Z test_nn_GroupNorm_2d_no_affine_IN (__main__.TestJitGeneratedModule) ... 
ok (0.068s) 2023-01-11T21:25:38.8608085Z test_nn_GroupNorm_2d_no_affine_LN (__main__.TestJitGeneratedModule) ... ok (0.067s) 2023-01-11T21:25:38.8608403Z test_nn_GroupNorm_2d_no_affine_large_feature (__main__.TestJitGeneratedModule) ... ok (0.080s) 2023-01-11T21:25:38.8608722Z test_nn_Hardshrink (__main__.TestJitGeneratedModule) ... ok (0.033s) 2023-01-11T21:25:38.8609056Z test_nn_Hardshrink_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.032s) 2023-01-11T21:25:38.8609363Z test_nn_Hardshrink_scalar (__main__.TestJitGeneratedModule) ... ok (0.032s) 2023-01-11T21:25:38.8609713Z test_nn_Hardsigmoid_no_batch_dim (__main__.TestJitGeneratedModule) ... skip: module test skipped on JIT (0.002s) 2023-01-11T21:25:38.8610099Z test_nn_Hardswish_no_batch_dim (__main__.TestJitGeneratedModule) ... skip: module test skipped on JIT (0.002s) 2023-01-11T21:25:38.8610437Z test_nn_Hardtanh (__main__.TestJitGeneratedModule) ... ok (0.041s) 2023-01-11T21:25:38.8610734Z test_nn_Hardtanh_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.041s) 2023-01-11T21:25:38.8611042Z test_nn_Hardtanh_scalar (__main__.TestJitGeneratedModule) ... ok (0.042s) 2023-01-11T21:25:38.8611355Z test_nn_HingeEmbeddingLoss (__main__.TestJitGeneratedModule) ... ok (0.034s) 2023-01-11T21:25:38.8611674Z test_nn_HingeEmbeddingLoss_margin (__main__.TestJitGeneratedModule) ... ok (0.027s) 2023-01-11T21:25:38.8612024Z test_nn_HingeEmbeddingLoss_margin_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8612387Z test_nn_HingeEmbeddingLoss_no_batch_dim_mean (__main__.TestJitGeneratedModule) ... ok (0.028s) 2023-01-11T21:25:38.8612750Z test_nn_HingeEmbeddingLoss_no_batch_dim_none (__main__.TestJitGeneratedModule) ... ok (0.034s) 2023-01-11T21:25:38.8613095Z test_nn_HingeEmbeddingLoss_no_batch_dim_sum (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8613472Z test_nn_HingeEmbeddingLoss_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8613825Z test_nn_HingeEmbeddingLoss_scalar_margin (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8614135Z test_nn_HuberLoss (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8614436Z test_nn_HuberLoss_delta (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8614753Z test_nn_HuberLoss_no_batch_dim_mean (__main__.TestJitGeneratedModule) ... ok (0.024s) 2023-01-11T21:25:38.8615081Z test_nn_HuberLoss_no_batch_dim_none (__main__.TestJitGeneratedModule) ... ok (0.024s) 2023-01-11T21:25:38.8615397Z test_nn_HuberLoss_no_batch_dim_sum (__main__.TestJitGeneratedModule) ... ok (0.023s) 2023-01-11T21:25:38.8615718Z test_nn_InstanceNorm1d (__main__.TestJitGeneratedModule) ... ok (0.095s) 2023-01-11T21:25:38.8616121Z test_nn_InstanceNorm1d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.095s) 2023-01-11T21:25:38.8616463Z test_nn_InstanceNorm1d_tracking_stats (__main__.TestJitGeneratedModule) ... ok (0.102s) 2023-01-11T21:25:38.8616806Z test_nn_InstanceNorm1d_tracking_stats_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.103s) 2023-01-11T21:25:38.8617142Z test_nn_InstanceNorm2d (__main__.TestJitGeneratedModule) ... ok (0.109s) 2023-01-11T21:25:38.8617465Z test_nn_InstanceNorm2d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.094s) 2023-01-11T21:25:38.8617788Z test_nn_InstanceNorm2d_tracking_stats (__main__.TestJitGeneratedModule) ... 
ok (0.100s) 2023-01-11T21:25:38.8618145Z test_nn_InstanceNorm2d_tracking_stats_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.100s) 2023-01-11T21:25:38.8618481Z test_nn_InstanceNorm3d (__main__.TestJitGeneratedModule) ... ok (0.098s) 2023-01-11T21:25:38.8618806Z test_nn_InstanceNorm3d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.095s) 2023-01-11T21:25:38.8619127Z test_nn_InstanceNorm3d_tracking_stats (__main__.TestJitGeneratedModule) ... ok (0.102s) 2023-01-11T21:25:38.8619479Z test_nn_InstanceNorm3d_tracking_stats_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.101s) 2023-01-11T21:25:38.8619807Z test_nn_KLDivLoss (__main__.TestJitGeneratedModule) ... ok (0.042s) 2023-01-11T21:25:38.8620101Z test_nn_KLDivLoss_log_target (__main__.TestJitGeneratedModule) ... ok (0.032s) 2023-01-11T21:25:38.8620430Z test_nn_KLDivLoss_no_batch_dim_mean (__main__.TestJitGeneratedModule) ... ok (0.036s) 2023-01-11T21:25:38.8620758Z test_nn_KLDivLoss_no_batch_dim_none (__main__.TestJitGeneratedModule) ... ok (0.035s) 2023-01-11T21:25:38.8621089Z test_nn_KLDivLoss_no_batch_dim_sum (__main__.TestJitGeneratedModule) ... ok (0.032s) 2023-01-11T21:25:38.8621394Z test_nn_KLDivLoss_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8621720Z test_nn_KLDivLoss_no_reduce_log_target (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8622052Z test_nn_KLDivLoss_no_reduce_scalar (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8622573Z test_nn_KLDivLoss_no_reduce_scalar_log_target (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8622931Z test_nn_KLDivLoss_scalar (__main__.TestJitGeneratedModule) ... ok (0.032s) 2023-01-11T21:25:38.8623256Z test_nn_KLDivLoss_scalar_log_target (__main__.TestJitGeneratedModule) ... ok (0.032s) 2023-01-11T21:25:38.8623597Z test_nn_KLDivLoss_with_log_target_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8623928Z test_nn_KLDivLoss_with_target_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8624238Z test_nn_L1Loss (__main__.TestJitGeneratedModule) ... ok (0.035s) 2023-01-11T21:25:38.8624544Z test_nn_L1Loss_no_batch_dim_mean (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8624853Z test_nn_L1Loss_no_batch_dim_none (__main__.TestJitGeneratedModule) ... ok (0.028s) 2023-01-11T21:25:38.8625233Z test_nn_L1Loss_no_batch_dim_sum (__main__.TestJitGeneratedModule) ... ok (0.028s) 2023-01-11T21:25:38.8625546Z test_nn_L1Loss_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8625862Z test_nn_L1Loss_no_reduce_complex (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8626169Z test_nn_L1Loss_no_reduce_scalar (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8626478Z test_nn_L1Loss_scalar (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8626778Z test_nn_LPPool1d (__main__.TestJitGeneratedModule) ... ok (0.109s) 2023-01-11T21:25:38.8627072Z test_nn_LPPool1d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.109s) 2023-01-11T21:25:38.8627385Z test_nn_LPPool1d_norm (__main__.TestJitGeneratedModule) ... ok (0.113s) 2023-01-11T21:25:38.8627817Z test_nn_LPPool2d (__main__.TestJitGeneratedModule) ... ok (0.136s) 2023-01-11T21:25:38.8628116Z test_nn_LPPool2d_norm (__main__.TestJitGeneratedModule) ... ok (0.142s) 2023-01-11T21:25:38.8628442Z test_nn_LSTMCell (__main__.TestJitGeneratedModule) ... 
ok (0.087s) 2023-01-11T21:25:38.8628763Z test_nn_LayerNorm_1d_elementwise_affine (__main__.TestJitGeneratedModule) ... ok (0.050s) 2023-01-11T21:25:38.8629113Z test_nn_LayerNorm_1d_empty_elementwise_affine (__main__.TestJitGeneratedModule) ... ok (0.047s) 2023-01-11T21:25:38.8629453Z test_nn_LayerNorm_1d_no_elementwise_affine (__main__.TestJitGeneratedModule) ... ok (0.044s) 2023-01-11T21:25:38.8629794Z test_nn_LayerNorm_3d_elementwise_affine (__main__.TestJitGeneratedModule) ... ok (0.049s) 2023-01-11T21:25:38.8630137Z test_nn_LayerNorm_3d_no_affine_large_feature (__main__.TestJitGeneratedModule) ... ok (0.156s) 2023-01-11T21:25:38.8630480Z test_nn_LayerNorm_3d_no_elementwise_affine (__main__.TestJitGeneratedModule) ... ok (0.045s) 2023-01-11T21:25:38.8630863Z test_nn_LeakyReLU (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8631180Z test_nn_LeakyReLU_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8631507Z test_nn_LeakyReLU_with_negval (__main__.TestJitGeneratedModule) ... ok (0.040s) 2023-01-11T21:25:38.8631823Z test_nn_LeakyReLU_with_negval_scalar (__main__.TestJitGeneratedModule) ... ok (0.040s) 2023-01-11T21:25:38.8632151Z test_nn_LeakyReLU_with_zero_negval (__main__.TestJitGeneratedModule) ... ok (0.041s) 2023-01-11T21:25:38.8632459Z test_nn_Linear (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8632757Z test_nn_Linear_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.038s) 2023-01-11T21:25:38.8633051Z test_nn_Linear_no_bias (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8633406Z test_nn_LocalResponseNorm_1d (__main__.TestJitGeneratedModule) ... ok (0.166s) 2023-01-11T21:25:38.8633744Z test_nn_LocalResponseNorm_2d_uneven_pad (__main__.TestJitGeneratedModule) ... ok (0.182s) 2023-01-11T21:25:38.8634084Z test_nn_LocalResponseNorm_3d_custom_params (__main__.TestJitGeneratedModule) ... ok (0.203s) 2023-01-11T21:25:38.8634410Z test_nn_LogSigmoid (__main__.TestJitGeneratedModule) ... ok (0.036s) 2023-01-11T21:25:38.8634723Z test_nn_LogSigmoid_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.035s) 2023-01-11T21:25:38.8635042Z test_nn_LogSigmoid_scalar (__main__.TestJitGeneratedModule) ... ok (0.032s) 2023-01-11T21:25:38.8635331Z test_nn_LogSoftmax (__main__.TestJitGeneratedModule) ... ok (0.052s) 2023-01-11T21:25:38.8635640Z test_nn_LogSoftmax_multiparam (__main__.TestJitGeneratedModule) ... ok (0.051s) 2023-01-11T21:25:38.8635974Z test_nn_LogSoftmax_multiparam_scalar (__main__.TestJitGeneratedModule) ... ok (0.050s) 2023-01-11T21:25:38.8636302Z test_nn_LogSoftmax_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.049s) 2023-01-11T21:25:38.8636594Z test_nn_MSELoss (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8636904Z test_nn_MSELoss_no_batch_dim_mean (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8637266Z test_nn_MSELoss_no_batch_dim_none (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8637573Z test_nn_MSELoss_no_batch_dim_sum (__main__.TestJitGeneratedModule) ... ok (0.028s) 2023-01-11T21:25:38.8637890Z test_nn_MSELoss_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8638204Z test_nn_MSELoss_no_reduce_scalar (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8638517Z test_nn_MSELoss_prec (__main__.TestJitGeneratedModule) ... ok (0.046s) 2023-01-11T21:25:38.8638804Z test_nn_MSELoss_scalar (__main__.TestJitGeneratedModule) ... 
ok (0.029s) 2023-01-11T21:25:38.8639115Z test_nn_MarginRankingLoss (__main__.TestJitGeneratedModule) ... ok (0.043s) 2023-01-11T21:25:38.8639446Z test_nn_MarginRankingLoss_margin (__main__.TestJitGeneratedModule) ... ok (0.034s) 2023-01-11T21:25:38.8639780Z test_nn_MarginRankingLoss_no_batch_dim_mean (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8640165Z test_nn_MarginRankingLoss_no_batch_dim_none (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8640523Z test_nn_MarginRankingLoss_no_batch_dim_sum (__main__.TestJitGeneratedModule) ... ok (0.032s) 2023-01-11T21:25:38.8640845Z test_nn_MaxPool1d (__main__.TestJitGeneratedModule) ... ok (0.056s) 2023-01-11T21:25:38.8641147Z test_nn_MaxPool1d_return_indices (__main__.TestJitGeneratedModule) ... ok (0.061s) 2023-01-11T21:25:38.8641469Z test_nn_MaxPool1d_stride (__main__.TestJitGeneratedModule) ... ok (0.053s) 2023-01-11T21:25:38.8641779Z test_nn_MaxPool2d_3d_input (__main__.TestJitGeneratedModule) ... ok (0.053s) 2023-01-11T21:25:38.8642076Z test_nn_MaxPool2d_4d_input (__main__.TestJitGeneratedModule) ... ok (0.053s) 2023-01-11T21:25:38.8642396Z test_nn_MaxPool2d_return_indices (__main__.TestJitGeneratedModule) ... ok (0.061s) 2023-01-11T21:25:38.8642705Z test_nn_MaxPool3d (__main__.TestJitGeneratedModule) ... ok (0.054s) 2023-01-11T21:25:38.8643022Z test_nn_MaxPool3d_return_indices (__main__.TestJitGeneratedModule) ... ok (0.061s) 2023-01-11T21:25:38.8643328Z test_nn_MaxPool3d_stride (__main__.TestJitGeneratedModule) ... ok (0.054s) 2023-01-11T21:25:38.8643644Z test_nn_MaxPool3d_stride_padding (__main__.TestJitGeneratedModule) ... ok (0.055s) 2023-01-11T21:25:38.8643944Z test_nn_Mish (__main__.TestJitGeneratedModule) ... ok (0.043s) 2023-01-11T21:25:38.8644225Z test_nn_Mish_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8644526Z test_nn_Mish_scalar (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8644839Z test_nn_MultiLabelMarginLoss (__main__.TestJitGeneratedModule) ... ok (0.034s) 2023-01-11T21:25:38.8645183Z test_nn_MultiLabelMarginLoss_0d_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8645514Z test_nn_MultiLabelMarginLoss_1d (__main__.TestJitGeneratedModule) ... ok (0.027s) 2023-01-11T21:25:38.8645860Z test_nn_MultiLabelMarginLoss_1d_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8646223Z test_nn_MultiLabelMarginLoss_index_neg (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8646569Z test_nn_MultiLabelMarginLoss_no_batch_dim_mean (__main__.TestJitGeneratedModule) ... ok (0.027s) 2023-01-11T21:25:38.8646935Z test_nn_MultiLabelMarginLoss_no_batch_dim_none (__main__.TestJitGeneratedModule) ... ok (0.028s) 2023-01-11T21:25:38.8647299Z test_nn_MultiLabelMarginLoss_no_batch_dim_sum (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8647656Z test_nn_MultiLabelMarginLoss_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8647993Z test_nn_MultiLabelSoftMarginLoss (__main__.TestJitGeneratedModule) ... ok (0.085s) 2023-01-11T21:25:38.8648357Z test_nn_MultiLabelSoftMarginLoss_no_batch_dim_mean (__main__.TestJitGeneratedModule) ... ok (0.064s) 2023-01-11T21:25:38.8648735Z test_nn_MultiLabelSoftMarginLoss_no_batch_dim_none (__main__.TestJitGeneratedModule) ... ok (0.064s) 2023-01-11T21:25:38.8649152Z test_nn_MultiLabelSoftMarginLoss_no_batch_dim_sum (__main__.TestJitGeneratedModule) ... 
ok (0.065s) 2023-01-11T21:25:38.8649508Z test_nn_MultiLabelSoftMarginLoss_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.003s) 2023-01-11T21:25:38.8649874Z test_nn_MultiLabelSoftMarginLoss_weights (__main__.TestJitGeneratedModule) ... ok (0.081s) 2023-01-11T21:25:38.8650248Z test_nn_MultiLabelSoftMarginLoss_weights_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.003s) 2023-01-11T21:25:38.8650579Z test_nn_MultiMarginLoss (__main__.TestJitGeneratedModule) ... ok (0.043s) 2023-01-11T21:25:38.8650899Z test_nn_MultiMarginLoss_1d (__main__.TestJitGeneratedModule) ... ok (0.034s) 2023-01-11T21:25:38.8651229Z test_nn_MultiMarginLoss_1d_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8651561Z test_nn_MultiMarginLoss_margin (__main__.TestJitGeneratedModule) ... ok (0.033s) 2023-01-11T21:25:38.8651909Z test_nn_MultiMarginLoss_margin_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8652253Z test_nn_MultiMarginLoss_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8652578Z test_nn_MultiMarginLoss_p (__main__.TestJitGeneratedModule) ... ok (0.033s) 2023-01-11T21:25:38.8652892Z test_nn_MultiMarginLoss_p_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8653224Z test_nn_MultiMarginLoss_weights (__main__.TestJitGeneratedModule) ... ok (0.034s) 2023-01-11T21:25:38.8653564Z test_nn_MultiMarginLoss_weights_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8653974Z test_nn_MultiheadAttention (__main__.TestJitGeneratedModule) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.002s) 2023-01-11T21:25:38.8654332Z test_nn_NLLLoss (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8654639Z test_nn_NLLLoss2d_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8654972Z test_nn_NLLLoss2d_no_reduce_ignore_index (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8655307Z test_nn_NLLLoss2d_no_reduce_weights (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8655620Z test_nn_NLLLossNd_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8655948Z test_nn_NLLLossNd_no_reduce_ignore_index (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8656284Z test_nn_NLLLossNd_no_reduce_weights (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8656581Z test_nn_NLLLoss_2d (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8656894Z test_nn_NLLLoss_2d_ignore_index (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8657210Z test_nn_NLLLoss_2d_weights (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8657515Z test_nn_NLLLoss_dim_is_3 (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8657810Z test_nn_NLLLoss_higher_dim (__main__.TestJitGeneratedModule) ... ok (0.035s) 2023-01-11T21:25:38.8658255Z test_nn_NLLLoss_ignore_index (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8658583Z test_nn_NLLLoss_no_batch_dim_mean (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8658893Z test_nn_NLLLoss_no_batch_dim_none (__main__.TestJitGeneratedModule) ... ok (0.028s) 2023-01-11T21:25:38.8659275Z test_nn_NLLLoss_no_batch_dim_sum (__main__.TestJitGeneratedModule) ... ok (0.028s) 2023-01-11T21:25:38.8659674Z test_nn_NLLLoss_no_reduce (__main__.TestJitGeneratedModule) ... 
ok (0.002s) 2023-01-11T21:25:38.8660001Z test_nn_NLLLoss_no_reduce_ignore_index (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8660320Z test_nn_NLLLoss_no_reduce_weights (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8660660Z test_nn_NLLLoss_no_reduce_weights_ignore_index (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8661019Z test_nn_NLLLoss_no_reduce_weights_ignore_index_neg (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8661383Z test_nn_NLLLoss_weights (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8661707Z test_nn_NLLLoss_weights_ignore_index (__main__.TestJitGeneratedModule) ... ok (0.028s) 2023-01-11T21:25:38.8662047Z test_nn_NLLLoss_weights_ignore_index_neg (__main__.TestJitGeneratedModule) ... ok (0.028s) 2023-01-11T21:25:38.8662478Z test_nn_PReLU_1d (__main__.TestJitGeneratedModule) ... ok (0.035s) 2023-01-11T21:25:38.8662774Z test_nn_PReLU_1d_multiparam (__main__.TestJitGeneratedModule) ... ok (0.035s) 2023-01-11T21:25:38.8663081Z test_nn_PReLU_2d (__main__.TestJitGeneratedModule) ... ok (0.036s) 2023-01-11T21:25:38.8663388Z test_nn_PReLU_2d_multiparam (__main__.TestJitGeneratedModule) ... ok (0.035s) 2023-01-11T21:25:38.8663679Z test_nn_PReLU_3d (__main__.TestJitGeneratedModule) ... ok (0.035s) 2023-01-11T21:25:38.8664039Z test_nn_PReLU_3d_multiparam (__main__.TestJitGeneratedModule) ... ok (0.035s) 2023-01-11T21:25:38.8664362Z test_nn_PReLU_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.034s) 2023-01-11T21:25:38.8664674Z test_nn_PReLU_scalar (__main__.TestJitGeneratedModule) ... ok (0.033s) 2023-01-11T21:25:38.8664980Z test_nn_Padding122112_3dcircular (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8665309Z test_nn_Padding1221_2dcircular (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8665633Z test_nn_Padding12_1dcircular (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8665940Z test_nn_Padding2322_2dcircular (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8666264Z test_nn_Padding31_1dcircular (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8666588Z test_nn_Padding322112_3dcircular (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8666914Z test_nn_Padding332122_3dcircular (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8667229Z test_nn_Padding3331_2dcircular (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8667545Z test_nn_Padding33_1dcircular (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8667863Z test_nn_PairwiseDistance (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8668183Z test_nn_PairwiseDistance_broadcast_lhs (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8668530Z test_nn_PairwiseDistance_broadcast_rhs (__main__.TestJitGeneratedModule) ... ok (0.036s) 2023-01-11T21:25:38.8668873Z test_nn_PairwiseDistance_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.035s) 2023-01-11T21:25:38.8669224Z test_nn_PairwiseDistance_with_non_default_args (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8669545Z test_nn_PixelShuffle (__main__.TestJitGeneratedModule) ... ok (0.036s) 2023-01-11T21:25:38.8669853Z test_nn_PixelUnshuffle (__main__.TestJitGeneratedModule) ... ok (0.036s) 2023-01-11T21:25:38.8670179Z test_nn_PoissonNLLLoss_full_loss (__main__.TestJitGeneratedModule) ... 
ok (0.038s) 2023-01-11T21:25:38.8670507Z test_nn_PoissonNLLLoss_full_loss_no_log_input (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8670913Z test_nn_PoissonNLLLoss_no_batch_dim_mean (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8671264Z test_nn_PoissonNLLLoss_no_batch_dim_none (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8671614Z test_nn_PoissonNLLLoss_no_batch_dim_sum (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8671937Z test_nn_PoissonNLLLoss_no_full_loss (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8672288Z test_nn_PoissonNLLLoss_no_full_loss_no_log_input (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8672636Z test_nn_PoissonNLLLoss_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8672934Z test_nn_RNNCell (__main__.TestJitGeneratedModule) ... ok (0.071s) 2023-01-11T21:25:38.8673270Z test_nn_RReLU (__main__.TestJitGeneratedModule) ... ok (0.043s) 2023-01-11T21:25:38.8673566Z test_nn_RReLU_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.042s) 2023-01-11T21:25:38.8673876Z test_nn_RReLU_with_up_down (__main__.TestJitGeneratedModule) ... ok (0.043s) 2023-01-11T21:25:38.8674185Z test_nn_RReLU_with_up_down_scalar (__main__.TestJitGeneratedModule) ... ok (0.044s) 2023-01-11T21:25:38.8674485Z test_nn_ReLU (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8674766Z test_nn_ReLU6 (__main__.TestJitGeneratedModule) ... ok (0.040s) 2023-01-11T21:25:38.8675051Z test_nn_ReLU6_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.040s) 2023-01-11T21:25:38.8675352Z test_nn_ReLU6_scalar (__main__.TestJitGeneratedModule) ... ok (0.041s) 2023-01-11T21:25:38.8675654Z test_nn_ReLU_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8675986Z test_nn_ReLU_scalar (__main__.TestJitGeneratedModule) ... ok (0.038s) 2023-01-11T21:25:38.8676283Z test_nn_ReflectionPad1d (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8676606Z test_nn_ReflectionPad1d_batch (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8676941Z test_nn_ReflectionPad1d_complex (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8677251Z test_nn_ReflectionPad2d (__main__.TestJitGeneratedModule) ... ok (0.040s) 2023-01-11T21:25:38.8677571Z test_nn_ReflectionPad2d_complex (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8677906Z test_nn_ReflectionPad2d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8678229Z test_nn_ReflectionPad3d (__main__.TestJitGeneratedModule) ... ok (0.042s) 2023-01-11T21:25:38.8678541Z test_nn_ReflectionPad3d_complex (__main__.TestJitGeneratedModule) ... ok (0.044s) 2023-01-11T21:25:38.8678876Z test_nn_ReflectionPad3d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.040s) 2023-01-11T21:25:38.8679203Z test_nn_ReplicationPad1d (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8679518Z test_nn_ReplicationPad1d_batch (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8679849Z test_nn_ReplicationPad1d_complex (__main__.TestJitGeneratedModule) ... ok (0.045s) 2023-01-11T21:25:38.8680176Z test_nn_ReplicationPad2d (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8680503Z test_nn_ReplicationPad2d_complex (__main__.TestJitGeneratedModule) ... 
ok (0.039s) 2023-01-11T21:25:38.8680833Z test_nn_ReplicationPad2d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8681167Z test_nn_ReplicationPad3d (__main__.TestJitGeneratedModule) ... ok (0.040s) 2023-01-11T21:25:38.8681495Z test_nn_ReplicationPad3d_complex (__main__.TestJitGeneratedModule) ... ok (0.038s) 2023-01-11T21:25:38.8681817Z test_nn_ReplicationPad3d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.038s) 2023-01-11T21:25:38.8682132Z test_nn_SELU (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8682274Z test_nn_SELU_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8682412Z test_nn_SELU_scalar (__main__.TestJitGeneratedModule) ... ok (0.038s) 2023-01-11T21:25:38.8682542Z test_nn_SiLU (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8682685Z test_nn_SiLU_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.036s) 2023-01-11T21:25:38.8682812Z test_nn_SiLU_scalar (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8682945Z test_nn_Sigmoid (__main__.TestJitGeneratedModule) ... ok (0.032s) 2023-01-11T21:25:38.8683090Z test_nn_Sigmoid_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8683231Z test_nn_Sigmoid_scalar (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8683369Z test_nn_SmoothL1Loss (__main__.TestJitGeneratedModule) ... ok (0.036s) 2023-01-11T21:25:38.8683554Z test_nn_SmoothL1Loss_beta (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8683715Z test_nn_SmoothL1Loss_no_batch_dim_mean (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8683872Z test_nn_SmoothL1Loss_no_batch_dim_none (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8684014Z test_nn_SmoothL1Loss_no_batch_dim_sum (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8684165Z test_nn_SmoothL1Loss_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8684325Z test_nn_SmoothL1Loss_no_reduce_scalar (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8684473Z test_nn_SmoothL1Loss_scalar (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8684619Z test_nn_SmoothL1Loss_zero_beta (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8684762Z test_nn_SoftMarginLoss (__main__.TestJitGeneratedModule) ... ok (0.035s) 2023-01-11T21:25:38.8684958Z test_nn_SoftMarginLoss_no_batch_dim_mean (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8685122Z test_nn_SoftMarginLoss_no_batch_dim_none (__main__.TestJitGeneratedModule) ... ok (0.028s) 2023-01-11T21:25:38.8685282Z test_nn_SoftMarginLoss_no_batch_dim_sum (__main__.TestJitGeneratedModule) ... ok (0.028s) 2023-01-11T21:25:38.8685423Z test_nn_SoftMarginLoss_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8685556Z test_nn_Softmax (__main__.TestJitGeneratedModule) ... ok (0.056s) 2023-01-11T21:25:38.8685695Z test_nn_Softmax2d (__main__.TestJitGeneratedModule) ... ok (0.058s) 2023-01-11T21:25:38.8685843Z test_nn_Softmax2d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.054s) 2023-01-11T21:25:38.8685988Z test_nn_Softmax_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.049s) 2023-01-11T21:25:38.8686127Z test_nn_Softmax_scalar (__main__.TestJitGeneratedModule) ... ok (0.049s) 2023-01-11T21:25:38.8686265Z test_nn_Softmin (__main__.TestJitGeneratedModule) ... 
ok (0.058s) 2023-01-11T21:25:38.8686405Z test_nn_Softmin_multidim (__main__.TestJitGeneratedModule) ... ok (0.057s) 2023-01-11T21:25:38.8686535Z test_nn_Softmin_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.055s) 2023-01-11T21:25:38.8686674Z test_nn_Softmin_scalar (__main__.TestJitGeneratedModule) ... ok (0.054s) 2023-01-11T21:25:38.8686810Z test_nn_Softplus (__main__.TestJitGeneratedModule) ... ok (0.034s) 2023-01-11T21:25:38.8686951Z test_nn_Softplus_beta (__main__.TestJitGeneratedModule) ... ok (0.034s) 2023-01-11T21:25:38.8687105Z test_nn_Softplus_beta_threshold (__main__.TestJitGeneratedModule) ... ok (0.033s) 2023-01-11T21:25:38.8687265Z test_nn_Softplus_beta_threshold_scalar (__main__.TestJitGeneratedModule) ... ok (0.034s) 2023-01-11T21:25:38.8687413Z test_nn_Softplus_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.033s) 2023-01-11T21:25:38.8687549Z test_nn_Softshrink (__main__.TestJitGeneratedModule) ... ok (0.032s) 2023-01-11T21:25:38.8687684Z test_nn_Softshrink_lambda (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8687835Z test_nn_Softshrink_lambda_scalar (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8687986Z test_nn_Softshrink_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.032s) 2023-01-11T21:25:38.8688119Z test_nn_Softsign (__main__.TestJitGeneratedModule) ... ok (0.087s) 2023-01-11T21:25:38.8688265Z test_nn_Softsign_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.073s) 2023-01-11T21:25:38.8688409Z test_nn_Softsign_scalar (__main__.TestJitGeneratedModule) ... ok (0.071s) 2023-01-11T21:25:38.8688539Z test_nn_Tanh (__main__.TestJitGeneratedModule) ... ok (0.033s) 2023-01-11T21:25:38.8688682Z test_nn_Tanh_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8688806Z test_nn_Tanh_scalar (__main__.TestJitGeneratedModule) ... ok (0.031s) 2023-01-11T21:25:38.8688947Z test_nn_Tanhshrink (__main__.TestJitGeneratedModule) ... ok (0.124s) 2023-01-11T21:25:38.8689133Z test_nn_Tanhshrink_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.078s) 2023-01-11T21:25:38.8689277Z test_nn_Tanhshrink_scalar (__main__.TestJitGeneratedModule) ... ok (0.068s) 2023-01-11T21:25:38.8689425Z test_nn_Threshold_large_value (__main__.TestJitGeneratedModule) ... ok (0.044s) 2023-01-11T21:25:38.8689575Z test_nn_Threshold_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.039s) 2023-01-11T21:25:38.8689730Z test_nn_Threshold_threshold_value (__main__.TestJitGeneratedModule) ... ok (0.041s) 2023-01-11T21:25:38.8689892Z test_nn_Threshold_threshold_value_scalar (__main__.TestJitGeneratedModule) ... ok (0.040s) 2023-01-11T21:25:38.8690091Z test_nn_Transformer (__main__.TestJitGeneratedModule) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.002s) 2023-01-11T21:25:38.8690274Z test_nn_TransformerDecoderLayer_gelu_activation (__main__.TestJitGeneratedModule) ... ok (1.953s) 2023-01-11T21:25:38.8690496Z test_nn_TransformerDecoderLayer_relu_activation (__main__.TestJitGeneratedModule) ... ok (1.854s) 2023-01-11T21:25:38.8690670Z test_nn_TransformerEncoderLayer_gelu_activation (__main__.TestJitGeneratedModule) ... ok (1.717s) 2023-01-11T21:25:38.8690842Z test_nn_TransformerEncoderLayer_relu_activation (__main__.TestJitGeneratedModule) ... ok (1.710s) 2023-01-11T21:25:38.8690999Z test_nn_Transformer_multilayer_coder (__main__.TestJitGeneratedModule) ... ok (5.719s) 2023-01-11T21:25:38.8691170Z test_nn_TripletMarginLoss_no_batch_dim_mean (__main__.TestJitGeneratedModule) ... 
ok (0.039s) 2023-01-11T21:25:38.8691338Z test_nn_TripletMarginLoss_no_batch_dim_none (__main__.TestJitGeneratedModule) ... ok (0.029s) 2023-01-11T21:25:38.8691504Z test_nn_TripletMarginLoss_no_batch_dim_sum (__main__.TestJitGeneratedModule) ... ok (0.030s) 2023-01-11T21:25:38.8691640Z test_nn_Unflatten_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.034s) 2023-01-11T21:25:38.8691772Z test_nn_Unfold (__main__.TestJitGeneratedModule) ... ok (0.047s) 2023-01-11T21:25:38.8691917Z test_nn_Unfold_int_input (__main__.TestJitGeneratedModule) ... ok (0.044s) 2023-01-11T21:25:38.8692055Z test_nn_ZeroPad2d (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8692202Z test_nn_ZeroPad2d_complex (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8692354Z test_nn_ZeroPad2d_negative_dims (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8692505Z test_nn_ZeroPad2d_no_batch_dim (__main__.TestJitGeneratedModule) ... ok (0.037s) 2023-01-11T21:25:38.8692654Z test_nn_interpolate_bicubic_2d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8692801Z test_nn_interpolate_bicubic_2d_zero_dim (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8692956Z test_nn_interpolate_bicubic_scale_2d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8693131Z test_nn_interpolate_bicubic_scale_tuple_shared_2d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8693303Z test_nn_interpolate_bicubic_scale_tuple_skewed_2d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8693493Z test_nn_interpolate_bicubic_scale_tuple_skewed_2d_align_corners (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8693649Z test_nn_interpolate_bicubic_tuple_2d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8693822Z test_nn_interpolate_bicubic_tuple_2d_align_corners (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8693974Z test_nn_interpolate_bilinear_2d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8694133Z test_nn_interpolate_bilinear_2d_zero_dim (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8694276Z test_nn_interpolate_bilinear_scale_2d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8694452Z test_nn_interpolate_bilinear_scale_tuple_shared_2d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8694654Z test_nn_interpolate_bilinear_scale_tuple_skewed_2d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8694845Z test_nn_interpolate_bilinear_scale_tuple_skewed_2d_align_corners (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8695001Z test_nn_interpolate_bilinear_tuple_2d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8695176Z test_nn_interpolate_bilinear_tuple_2d_align_corners (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8695326Z test_nn_interpolate_linear_1d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8695492Z test_nn_interpolate_linear_1d_align_corners (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8695638Z test_nn_interpolate_linear_1d_zero_dim (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8695792Z test_nn_interpolate_linear_scale_1d (__main__.TestJitGeneratedModule) ... 
ok (0.002s) 2023-01-11T21:25:38.8695994Z test_nn_interpolate_linear_scale_1d_align_corners (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8696148Z test_nn_interpolate_linear_tuple_1d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8696298Z test_nn_interpolate_nearest_1d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8696455Z test_nn_interpolate_nearest_1d_zero_dim (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8696608Z test_nn_interpolate_nearest_2d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8696783Z test_nn_interpolate_nearest_2d_launch_configs (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8696942Z test_nn_interpolate_nearest_2d_zero_dim (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8697078Z test_nn_interpolate_nearest_3d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8697237Z test_nn_interpolate_nearest_3d_zero_dim (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8697394Z test_nn_interpolate_nearest_scale_1d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8697550Z test_nn_interpolate_nearest_scale_2d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8697701Z test_nn_interpolate_nearest_scale_3d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8697860Z test_nn_interpolate_nearest_tuple_1d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8698015Z test_nn_interpolate_nearest_tuple_2d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8698169Z test_nn_interpolate_nearest_tuple_3d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8698309Z test_nn_interpolate_trilinear_3d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8698473Z test_nn_interpolate_trilinear_3d_zero_dim (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8698635Z test_nn_interpolate_trilinear_scale_3d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8698812Z test_nn_interpolate_trilinear_scale_3d_align_corners (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8698971Z test_nn_interpolate_trilinear_tuple_3d (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8699148Z test_nn_interpolate_trilinear_tuple_3d_align_corners (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8699292Z test_nn_log_softmax_dim0 (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8699438Z test_nn_log_softmax_dim3 (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8699582Z test_nn_log_softmax_lastdim (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8699716Z test_nn_log_softmax_scalar (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8699866Z test_nn_log_softmax_spatial (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8700056Z test_nn_log_softmax_spatial_special (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8700231Z test_nn_multimarginloss_1d_input_0d_target_no_reduce (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8700380Z test_nn_softmax_functional_dim0 (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8700529Z test_nn_softmax_functional_dim3 (__main__.TestJitGeneratedModule) ... 
ok (0.002s) 2023-01-11T21:25:38.8700682Z test_nn_softmax_functional_scalar (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8700823Z test_nn_softmax_lastdim (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8700959Z test_nn_softmax_lastdim_dtype (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8701101Z test_nn_softmax_spatial (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8701252Z test_nn_softmax_spatial_dtype (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8701438Z test_nn_softmax_spatial_special (__main__.TestJitGeneratedModule) ... ok (0.002s) 2023-01-11T21:25:38.8701606Z test_checkscriptassertraisesregex (jit.test_jit_utils.TestJitUtils) ... ok (0.002s) 2023-01-11T21:25:38.8701766Z test_get_callable_argument_names_hybrid (jit.test_jit_utils.TestJitUtils) ... ok (0.004s) 2023-01-11T21:25:38.8701931Z test_get_callable_argument_names_keyword_only (jit.test_jit_utils.TestJitUtils) ... ok (0.001s) 2023-01-11T21:25:38.8702109Z test_get_callable_argument_names_positional_only (jit.test_jit_utils.TestJitUtils) ... ok (0.003s) 2023-01-11T21:25:38.8702272Z test_get_callable_argument_names_positional_or_keyword (jit.test_jit_utils.TestJitUtils) ... ok (0.001s) 2023-01-11T21:25:38.8702557Z test_get_callable_argument_names_var_keyword (jit.test_jit_utils.TestJitUtils) ... ok (0.001s) 2023-01-11T21:25:38.8702732Z test_get_callable_argument_names_var_positional (jit.test_jit_utils.TestJitUtils) ... ok (0.001s) 2023-01-11T21:25:38.8702894Z test_no_tracer_warn_context_manager (jit.test_jit_utils.TestJitUtils) ... ok (0.001s) 2023-01-11T21:25:38.8703035Z test_comprehension_iterable (jit.test_list_dict.TestList) ... ok (0.018s) 2023-01-11T21:25:38.8703187Z test_comprehension_out_type_not_in_type (jit.test_list_dict.TestList) ... ok (0.007s) 2023-01-11T21:25:38.8703328Z test_comprehensions_basic (jit.test_list_dict.TestList) ... ok (0.007s) 2023-01-11T21:25:38.8703473Z test_comprehensions_basic_float (jit.test_list_dict.TestList) ... ok (0.007s) 2023-01-11T21:25:38.8703615Z test_comprehensions_two_comps (jit.test_list_dict.TestList) ... ok (0.006s) 2023-01-11T21:25:38.8703734Z test_copy_list_immutable (jit.test_list_dict.TestList) ... ok (0.003s) 2023-01-11T21:25:38.8703867Z test_copy_list_mutable (jit.test_list_dict.TestList) ... ok (0.003s) 2023-01-11T21:25:38.8703985Z test_del (jit.test_list_dict.TestList) ... ok (0.010s) 2023-01-11T21:25:38.8704138Z test_dict_keyword_is_correctly_typed (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8704299Z test_dict_keyword_with_dict_comprehension (jit.test_list_dict.TestList) ... ok (0.005s) 2023-01-11T21:25:38.8704472Z test_dict_keyword_with_dict_comprehension_and_kwargs (jit.test_list_dict.TestList) ... ok (0.006s) 2023-01-11T21:25:38.8704636Z test_dict_keyword_with_empty_dict_comprehension (jit.test_list_dict.TestList) ... ok (0.003s) 2023-01-11T21:25:38.8704785Z test_dict_keyword_with_empty_iterable (jit.test_list_dict.TestList) ... ok (0.003s) 2023-01-11T21:25:38.8704939Z test_dict_keyword_with_internal_aggregate_function (jit.test_list_dict.TestList) ... ok (0.005s) 2023-01-11T21:25:38.8705081Z test_dict_keyword_with_iterable (jit.test_list_dict.TestList) ... ok (0.005s) 2023-01-11T21:25:38.8705219Z test_dict_keyword_with_kwargs (jit.test_list_dict.TestList) ... ok (0.005s) 2023-01-11T21:25:38.8705386Z test_dict_keyword_with_kwargs_using_container_values (jit.test_list_dict.TestList) ... 
ok (0.007s) 2023-01-11T21:25:38.8705533Z test_dict_keyword_with_mapping (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8705771Z test_dict_keyword_with_mapping_and_kwargs (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8705931Z test_dict_keyword_with_mismatched_annotations (jit.test_list_dict.TestList) ... ok (0.002s) 2023-01-11T21:25:38.8706079Z test_dict_keyword_with_nested_call (jit.test_list_dict.TestList) ... ok (0.005s) 2023-01-11T21:25:38.8706235Z test_dict_keyword_with_previously_declared_variable (jit.test_list_dict.TestList) ... ok (0.003s) 2023-01-11T21:25:38.8706415Z test_dict_keyword_with_previously_declared_variable_and_kwargs (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8706555Z test_extend_list_immutable (jit.test_list_dict.TestList) ... ok (0.003s) 2023-01-11T21:25:38.8706688Z test_extend_list_mutable (jit.test_list_dict.TestList) ... ok (0.005s) 2023-01-11T21:25:38.8706813Z test_in_check (jit.test_list_dict.TestList) ... ok (0.013s) 2023-01-11T21:25:38.8706988Z test_list_bool_conversion (jit.test_list_dict.TestList) ... ok (0.019s) 2023-01-11T21:25:38.8707114Z test_list_count (jit.test_list_dict.TestList) ... ok (0.007s) 2023-01-11T21:25:38.8707252Z test_list_count_not_existing (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8707363Z test_list_gather (jit.test_list_dict.TestList) ... ok (0.011s) 2023-01-11T21:25:38.8707487Z test_list_index (jit.test_list_dict.TestList) ... ok (0.006s) 2023-01-11T21:25:38.8707627Z test_list_index_not_existing (jit.test_list_dict.TestList) ... ok (0.017s) 2023-01-11T21:25:38.8707753Z test_list_keyword (jit.test_list_dict.TestList) ... ok (0.014s) 2023-01-11T21:25:38.8707874Z test_list_len (jit.test_list_dict.TestList) ... ok (0.006s) 2023-01-11T21:25:38.8707998Z test_list_literal (jit.test_list_dict.TestList) ... ok (0.012s) 2023-01-11T21:25:38.8708118Z test_list_none (jit.test_list_dict.TestList) ... ok (0.001s) 2023-01-11T21:25:38.8708241Z test_list_ops (jit.test_list_dict.TestList) ... ok (0.056s) 2023-01-11T21:25:38.8708354Z test_list_slice (jit.test_list_dict.TestList) ... ok (0.021s) 2023-01-11T21:25:38.8708473Z test_list_sort (jit.test_list_dict.TestList) ... ok (0.035s) 2023-01-11T21:25:38.8708609Z test_list_unification_hint (jit.test_list_dict.TestList) ... ok (0.001s) 2023-01-11T21:25:38.8708719Z test_list_variance (jit.test_list_dict.TestList) 2023-01-11T21:25:38.8708849Z `List[T1]` is not a subtype of `List[T2]`, even if `T1` is a ... ok (0.009s) 2023-01-11T21:25:38.8708975Z test_min_bool_list (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8709096Z test_min_max_list (jit.test_list_dict.TestList) ... ok (0.059s) 2023-01-11T21:25:38.8709230Z test_min_max_single_list (jit.test_list_dict.TestList) ... ok (0.041s) 2023-01-11T21:25:38.8709350Z test_mutable_list_append (jit.test_list_dict.TestList) ... ok (0.005s) 2023-01-11T21:25:38.8709484Z test_mutable_list_append_2 (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8709622Z test_mutable_list_append_if (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8709770Z test_mutable_list_append_if_else (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8709910Z test_mutable_list_append_loop (jit.test_list_dict.TestList) ... ok (0.005s) 2023-01-11T21:25:38.8710055Z test_mutable_list_append_loop_if (jit.test_list_dict.TestList) ... ok (0.007s) 2023-01-11T21:25:38.8710186Z test_mutable_list_clear (jit.test_list_dict.TestList) ... 
ok (0.004s) 2023-01-11T21:25:38.8710324Z test_mutable_list_clear_empty (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8710455Z test_mutable_list_function_inline (jit.test_list_dict.TestList) ... ok (0.005s) 2023-01-11T21:25:38.8710589Z test_mutable_list_insert (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8710797Z test_mutable_list_insert_neg_out_of_bounds (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8710943Z test_mutable_list_insert_negative (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8711227Z test_mutable_list_insert_out_of_bounds (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8711371Z test_mutable_list_nested_loop (jit.test_list_dict.TestList) ... ok (0.006s) 2023-01-11T21:25:38.8711500Z test_mutable_list_pop (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8711633Z test_mutable_list_pop2 (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8711752Z test_mutable_list_pop_at (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8711890Z test_mutable_list_pop_at2 (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8712038Z test_mutable_list_pop_at_negative (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8712182Z test_mutable_list_pop_at_negative2 (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8712321Z test_mutable_list_pop_empty (jit.test_list_dict.TestList) ... ok (0.006s) 2023-01-11T21:25:38.8712489Z test_mutable_list_pop_slice (jit.test_list_dict.TestList) ... ok (0.005s) 2023-01-11T21:25:38.8712628Z test_mutable_list_remove (jit.test_list_dict.TestList) ... ok (0.007s) 2023-01-11T21:25:38.8712761Z test_mutable_list_remove2 (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8712895Z test_mutable_list_remove_not_existing (jit.test_list_dict.TestList) ... ok (0.006s) 2023-01-11T21:25:38.8713037Z test_mutable_list_remove_tensor (jit.test_list_dict.TestList) ... ok (0.006s) 2023-01-11T21:25:38.8713169Z test_mutable_list_reverse (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8713310Z test_mutable_list_reverse_empty (jit.test_list_dict.TestList) ... ok (0.004s) 2023-01-11T21:25:38.8713453Z test_mutable_tensor_list_reverse (jit.test_list_dict.TestList) ... ok (0.006s) 2023-01-11T21:25:38.8713595Z test_no_element_type_annotation (jit.test_list_dict.TestList) ... ok (0.003s) 2023-01-11T21:25:38.8713718Z test_slice_index (jit.test_list_dict.TestList) ... ok (0.028s) 2023-01-11T21:25:38.8713854Z test_tensor_list_count (jit.test_list_dict.TestList) ... ok (0.005s) 2023-01-11T21:25:38.8713987Z test_tensor_list_count_not_existing (jit.test_list_dict.TestList) ... ok (0.005s) 2023-01-11T21:25:38.8714114Z test_tensor_list_index (jit.test_list_dict.TestList) ... ok (0.005s) 2023-01-11T21:25:38.8714255Z test_tensor_list_index_not_existing (jit.test_list_dict.TestList) ... ok (0.018s) 2023-01-11T21:25:38.8714360Z test_to_list (jit.test_list_dict.TestList) 2023-01-11T21:25:38.8714479Z Unit tests for Tensor.tolist() function. ... ok (0.229s) 2023-01-11T21:25:38.8714589Z test_to_list_gpu (jit.test_list_dict.TestList) 2023-01-11T21:25:38.8714734Z GPU tests for Tensor.tolist() function. ... skip: CUDA is not available (0.001s) 2023-01-11T21:25:38.8714874Z test_bump_numeric_counter (jit.test_logging.TestLogging) ... ok (0.008s) 2023-01-11T21:25:38.8715003Z test_counter_aggregation (jit.test_logging.TestLogging) ... 
ok (0.007s) 2023-01-11T21:25:38.8715144Z test_logging_levels_set (jit.test_logging.TestLogging) ... ok (0.001s) 2023-01-11T21:25:38.8715291Z test_time_measurement_counter (jit.test_logging.TestLogging) ... ok (0.002s) 2023-01-11T21:25:38.8715443Z test_time_measurement_counter_script (jit.test_logging.TestLogging) ... ok (0.006s) 2023-01-11T21:25:38.8715583Z test_trace_numeric_counter (jit.test_logging.TestLogging) ... ok (0.006s) 2023-01-11T21:25:38.8715747Z test_always_alive_values (jit.test_freezing.TestMKLDNNReinplacing) ... ok (0.020s) 2023-01-11T21:25:38.8715903Z test_merge_liveness (jit.test_freezing.TestMKLDNNReinplacing) ... ok (0.011s) 2023-01-11T21:25:38.8716054Z test_successful (jit.test_freezing.TestMKLDNNReinplacing) ... ok (0.012s) 2023-01-11T21:25:38.8716206Z test_switch_inputs_to_inplace (jit.test_freezing.TestMKLDNNReinplacing) ... ok (0.011s) 2023-01-11T21:25:38.8716320Z test_broadcasting_list (jit.test_misc.TestMisc) 2023-01-11T21:25:38.8716452Z Test BroadcastingList and torch.nn._size_N_t alias ... ok (0.006s) 2023-01-11T21:25:38.8716588Z test_export_opnames_interface (jit.test_misc.TestMisc) ... ok (0.024s) 2023-01-11T21:25:38.8716749Z test_future_isinstance (jit.test_misc.TestMisc) ... ok (0.003s) 2023-01-11T21:25:38.8716869Z test_hacked_twin (jit.test_misc.TestMisc) ... ok (0.002s) 2023-01-11T21:25:38.8716982Z test_if_returning_any (jit.test_misc.TestMisc) 2023-01-11T21:25:38.8717093Z Check that an if statement can return different ... ok (0.005s) 2023-01-11T21:25:38.8717211Z test_joined_str (jit.test_misc.TestMisc) ... ok (0.004s) 2023-01-11T21:25:38.8717335Z test_kwarg_support (jit.test_misc.TestMisc) ... ok (0.008s) 2023-01-11T21:25:38.8717470Z test_legacy_tensor_constructor (jit.test_misc.TestMisc) ... ok (0.016s) 2023-01-11T21:25:38.8717594Z test_list_literal_infer (jit.test_misc.TestMisc) ... ok (0.008s) 2023-01-11T21:25:38.8717709Z test_math_inf (jit.test_misc.TestMisc) ... ok (0.003s) 2023-01-11T21:25:38.8717832Z test_parse_ir_annotate (jit.test_misc.TestMisc) ... ok (0.001s) 2023-01-11T21:25:38.8718015Z test_parse_ir_single_element_tensor_negative (jit.test_misc.TestMisc) ... ok (0.001s) 2023-01-11T21:25:38.8718157Z test_parse_ir_single_element_tensor_positive (jit.test_misc.TestMisc) ... ok (0.001s) 2023-01-11T21:25:38.8718291Z test_script_many_decorators (jit.test_misc.TestMisc) ... ok (0.003s) 2023-01-11T21:25:38.8718412Z test_str_refine_any (jit.test_misc.TestMisc) ... ok (0.003s) 2023-01-11T21:25:38.8718551Z test_subexpression_Dict_int_Future (jit.test_misc.TestMisc) ... ok (0.003s) 2023-01-11T21:25:38.8718691Z test_subexpression_Future_annotate (jit.test_misc.TestMisc) ... ok (0.003s) 2023-01-11T21:25:38.8718826Z test_subexpression_List_Future (jit.test_misc.TestMisc) ... ok (0.002s) 2023-01-11T21:25:38.8718958Z test_subexpression_Optional (jit.test_misc.TestMisc) ... ok (0.003s) 2023-01-11T21:25:38.8719105Z test_subexpression_Tuple_int_int_Future (jit.test_misc.TestMisc) ... ok (0.003s) 2023-01-11T21:25:38.8719228Z test_tuple_subscripted_assign (jit.test_misc.TestMisc) ... ok (0.002s) 2023-01-11T21:25:38.8719415Z test_call_script_fn_from_traced_module (jit.test_tracer.TestMixTracingScripting) ... ok (0.011s) 2023-01-11T21:25:38.8719601Z test_call_script_module_from_traced_module (jit.test_tracer.TestMixTracingScripting) ... ok (0.016s) 2023-01-11T21:25:38.8719775Z test_call_traced_fn_from_script_fn (jit.test_tracer.TestMixTracingScripting) ... 
ok (0.006s) 2023-01-11T21:25:38.8719948Z test_call_traced_mod_from_script_fn (jit.test_tracer.TestMixTracingScripting) ... ok (0.010s) 2023-01-11T21:25:38.8720126Z test_call_tracing_fn_from_script_module (jit.test_tracer.TestMixTracingScripting) ... ok (0.008s) 2023-01-11T21:25:38.8720310Z test_call_tracing_mod_from_script_module (jit.test_tracer.TestMixTracingScripting) ... ok (0.012s) 2023-01-11T21:25:38.8720489Z test_script_inline_trace_multiple_args (jit.test_tracer.TestMixTracingScripting) ... ok (0.011s) 2023-01-11T21:25:38.8720652Z test_trace_dict_mix_script (jit.test_tracer.TestMixTracingScripting) ... ok (0.043s) 2023-01-11T21:25:38.8720804Z test_trace_hierarchy (jit.test_tracer.TestMixTracingScripting) ... ok (0.024s) 2023-01-11T21:25:38.8720960Z test_trace_linear (jit.test_tracer.TestMixTracingScripting) ... ok (0.020s) 2023-01-11T21:25:38.8721146Z test_trace_mixed_by_script_with_dict_output (jit.test_tracer.TestMixTracingScripting) ... ok (0.011s) 2023-01-11T21:25:38.8721303Z test_trace_of_script (jit.test_tracer.TestMixTracingScripting) ... ok (0.011s) 2023-01-11T21:25:38.8721463Z test_trace_parameter (jit.test_tracer.TestMixTracingScripting) ... ok (0.022s) 2023-01-11T21:25:38.8721635Z test_trace_returning_dict_with_tensor_tuples (jit.test_tracer.TestMixTracingScripting) 2023-01-11T21:25:38.8721798Z Tracing over a module returning a dictionary whose values are tuples of tensors ... ok (0.012s) 2023-01-11T21:25:38.8721950Z test_trace_script (jit.test_tracer.TestMixTracingScripting) ... ok (0.153s) 2023-01-11T21:25:38.8722108Z test_trace_script_returning_complex_dict (jit.test_tracer.TestMixTracingScripting) 2023-01-11T21:25:38.8722296Z Tracing over a script function returning a dictionary should work. ... ok (0.025s) 2023-01-11T21:25:38.8722450Z test_trace_with_size (jit.test_tracer.TestMixTracingScripting) ... ok (0.008s) 2023-01-11T21:25:38.8722651Z test_traced_module_contains_scripted_interface_types (jit.test_tracer.TestMixTracingScripting) ... ok (0.067s) 2023-01-11T21:25:38.8722834Z test_traced_module_implements_interface (jit.test_tracer.TestMixTracingScripting) ... ok (0.186s) 2023-01-11T21:25:38.8722994Z test_tracing_indexing (jit.test_tracer.TestMixTracingScripting) ... ok (0.008s) 2023-01-11T21:25:38.8723149Z test_tracing_slicing (jit.test_tracer.TestMixTracingScripting) ... ok (0.008s) 2023-01-11T21:25:38.8723270Z test_alexnet (jit.test_models.TestModels) ... ok (0.579s) 2023-01-11T21:25:38.8723385Z test_dcgan_models (jit.test_models.TestModels) ... ok (0.304s) 2023-01-11T21:25:38.8723530Z test_dcgan_models_cuda (jit.test_models.TestModels) ... skip: no CUDA (0.001s) 2023-01-11T21:25:38.8723682Z test_mnist (jit.test_models.TestModels) ... ok (0.340s) 2023-01-11T21:25:38.8723818Z test_mnist_cuda (jit.test_models.TestModels) ... skip: no CUDA (0.001s) 2023-01-11T21:25:38.8723985Z test_mnist_training_leaks_no_memory_cuda (jit.test_models.TestModels) ... skip: no CUDA (0.001s) 2023-01-11T21:25:38.8724185Z test_neural_style (jit.test_models.TestModels) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s) 2023-01-11T21:25:38.8724330Z test_neural_style_cuda (jit.test_models.TestModels) ... skip: no CUDA (0.000s) 2023-01-11T21:25:38.8724471Z test_reinforcement_learning (jit.test_models.TestModels) ... ok (0.039s) 2023-01-11T21:25:38.8724629Z test_reinforcement_learning_cuda (jit.test_models.TestModels) ... skip: no CUDA (0.000s) 2023-01-11T21:25:38.8724831Z test_script_module_script_resnet (jit.test_models.TestModels) ... 
skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.003s) 2023-01-11T21:25:38.8725048Z test_script_module_trace_resnet18 (jit.test_models.TestModels) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s) 2023-01-11T21:25:38.8725237Z test_snli (jit.test_models.TestModels) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.000s) 2023-01-11T21:25:38.8725378Z test_snli_cuda (jit.test_models.TestModels) ... skip: no CUDA (0.000s) 2023-01-11T21:25:38.8725511Z test_snli_quantized (jit.test_models.TestModels) ... ok (0.663s) 2023-01-11T21:25:38.8725713Z test_super_resolution (jit.test_models.TestModels) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s) 2023-01-11T21:25:38.8725865Z test_super_resolution_cuda (jit.test_models.TestModels) ... skip: no CUDA (0.000s) 2023-01-11T21:25:38.8726008Z test_time_sequence_prediction (jit.test_models.TestModels) ... ok (0.131s) 2023-01-11T21:25:38.8726115Z test_vae (jit.test_models.TestModels) ... ok (0.172s) 2023-01-11T21:25:38.8726250Z test_vae_cuda (jit.test_models.TestModels) ... skip: no CUDA (0.001s) 2023-01-11T21:25:38.8726381Z test_vae_quantized (jit.test_models.TestModels) ... ok (0.090s) 2023-01-11T21:25:38.8726530Z test_customized_state_dict_methods (jit.test_module_apis.TestModuleAPIs) 2023-01-11T21:25:38.8726665Z Tests that customized state dict methods are in effect ... ok (0.032s) 2023-01-11T21:25:38.8726812Z test_default_state_dict_methods (jit.test_module_apis.TestModuleAPIs) 2023-01-11T21:25:38.8726961Z Tests that default state dict methods are automatically available ... ok (0.025s) 2023-01-11T21:25:38.8727122Z test_submodule_customized_state_dict_methods (jit.test_module_apis.TestModuleAPIs) 2023-01-11T21:25:38.8727256Z Tests that customized state dict methods on submodules are in effect ... ok (0.040s) 2023-01-11T21:25:38.8727438Z test_custom_container_forward (jit.test_module_containers.TestModuleContainers) ... ok (0.070s) 2023-01-11T21:25:38.8727625Z test_empty_dict_override_contains (jit.test_module_containers.TestModuleContainers) ... ok (0.020s) 2023-01-11T21:25:38.8727842Z test_module_inplace_construct (jit.test_module_containers.TestModuleContainers) ... ok (0.018s) 2023-01-11T21:25:38.8728033Z test_module_interface_special_methods (jit.test_module_containers.TestModuleContainers) ... ok (0.051s) 2023-01-11T21:25:38.8728208Z test_module_properties (jit.test_module_containers.TestModuleContainers) ... ok (0.030s) 2023-01-11T21:25:38.8728372Z test_moduledict (jit.test_module_containers.TestModuleContainers) ... ok (0.188s) 2023-01-11T21:25:38.8728545Z test_moduledict_getitem (jit.test_module_containers.TestModuleContainers) ... ok (0.033s) 2023-01-11T21:25:38.8728721Z test_moduledict_keyerror (jit.test_module_containers.TestModuleContainers) ... ok (0.011s) 2023-01-11T21:25:38.8728892Z test_normal_list_attribute_with_modules_error (jit.test_module_containers.TestModuleContainers) 2023-01-11T21:25:38.8729045Z Test that an attempt to script a module with a regular list attribute ... ok (0.003s) 2023-01-11T21:25:38.8729274Z test_parameterdict_script_getitem (jit.test_module_containers.TestModuleContainers) ... ok (0.014s) 2023-01-11T21:25:38.8729459Z test_parameterlist_script_getitem (jit.test_module_containers.TestModuleContainers) ... ok (0.042s) 2023-01-11T21:25:38.8729642Z test_parameterlist_script_iter (jit.test_module_containers.TestModuleContainers) ... 
ok (0.060s) 2023-01-11T21:25:38.8729826Z test_script_module_list_sequential (jit.test_module_containers.TestModuleContainers) ... ok (0.023s) 2023-01-11T21:25:38.8730006Z test_script_modulelist_index (jit.test_module_containers.TestModuleContainers) ... ok (0.103s) 2023-01-11T21:25:38.8730193Z test_sequential_intermediary_types (jit.test_module_containers.TestModuleContainers) ... ok (0.026s) 2023-01-11T21:25:38.8730378Z test_special_method_with_override (jit.test_module_containers.TestModuleContainers) ... ok (0.023s) 2023-01-11T21:25:38.8730525Z test_typed_module_dict (jit.test_module_containers.TestModuleContainers) 2023-01-11T21:25:38.8730682Z Test that a type annotation can be provided for a ModuleDict that allows ... ok (0.060s) 2023-01-11T21:25:38.8730843Z test_typed_module_list (jit.test_module_containers.TestModuleContainers) 2023-01-11T21:25:38.8731000Z Test that a type annotation can be provided for a ModuleList that allows ... ok (0.051s) 2023-01-11T21:25:38.8731202Z test_freeze_module_with_inplace_mutation_in_interface (jit.test_module_interface.TestModuleInterface) ... ok (0.022s) 2023-01-11T21:25:38.8731382Z test_freeze_module_with_interface (jit.test_module_interface.TestModuleInterface) ... ok (0.019s) 2023-01-11T21:25:38.8731573Z test_freeze_module_with_interface_and_fork (jit.test_module_interface.TestModuleInterface) ... ok (0.034s) 2023-01-11T21:25:38.8731761Z test_freeze_module_with_mutated_interface (jit.test_module_interface.TestModuleInterface) ... ok (0.027s) 2023-01-11T21:25:38.8731949Z test_freeze_module_with_setattr_in_interface (jit.test_module_interface.TestModuleInterface) ... ok (0.020s) 2023-01-11T21:25:38.8732109Z test_module_apis_interface (jit.test_module_interface.TestModuleInterface) ... ok (0.015s) 2023-01-11T21:25:38.8732276Z test_module_doc_string (jit.test_module_interface.TestModuleInterface) ... ok (0.022s) 2023-01-11T21:25:38.8732444Z test_module_interface (jit.test_module_interface.TestModuleInterface) ... ok (0.094s) 2023-01-11T21:25:38.8732624Z test_module_interface_inheritance (jit.test_module_interface.TestModuleInterface) ... ok (0.001s) 2023-01-11T21:25:38.8732801Z test_module_interface_subtype (jit.test_module_interface.TestModuleInterface) ... ok (0.051s) 2023-01-11T21:25:38.8732961Z test_module_swap (jit.test_module_interface.TestModuleInterface) ... ok (0.022s) 2023-01-11T21:25:38.8733141Z test_module_swap_no_lazy_compile (jit.test_module_interface.TestModuleInterface) ... ok (0.025s) 2023-01-11T21:25:38.8733323Z test_module_swap_no_module_interface (jit.test_module_interface.TestModuleInterface) ... ok (0.016s) 2023-01-11T21:25:38.8733483Z test_module_swap_wrong_module (jit.test_module_interface.TestModuleInterface) ... ok (0.019s) 2023-01-11T21:25:38.8733697Z test_not_submodule_interface_call (jit.test_module_interface.TestModuleInterface) ... ok (0.011s) 2023-01-11T21:25:38.8733878Z test_script_module_as_interface_swap (jit.test_module_interface.TestModuleInterface) ... ok (0.023s) 2023-01-11T21:25:38.8734024Z test_script_module_with_constants_list (jit.test_modules.TestModules) 2023-01-11T21:25:38.8734159Z Test that a module that has __constants__ set to something ... ok (0.009s) 2023-01-11T21:25:38.8734299Z test_namedtuple (jit.test_list_dict.TestNamedTuple) ... ok (0.004s) 2023-01-11T21:25:38.8734446Z test_namedtuple_as_attr (jit.test_list_dict.TestNamedTuple) ... ok (0.005s) 2023-01-11T21:25:38.8734595Z test_namedtuple_constant (jit.test_list_dict.TestNamedTuple) ... 
ok (0.003s) 2023-01-11T21:25:38.8734736Z test_namedtuple_kwarg_construct (jit.test_list_dict.TestNamedTuple) ... ok (0.003s) 2023-01-11T21:25:38.8734877Z test_namedtuple_lower (jit.test_list_dict.TestNamedTuple) ... ok (0.003s) 2023-01-11T21:25:38.8735060Z test_namedtuple_resolution (jit.test_list_dict.TestNamedTuple) ... ok (0.004s) 2023-01-11T21:25:38.8735260Z test_namedtuple_serialization (jit.test_list_dict.TestNamedTuple) ... skip: broken while these tests were not in CI (0.001s) 2023-01-11T21:25:38.8735413Z test_namedtuple_slice_unpack (jit.test_list_dict.TestNamedTuple) ... ok (0.003s) 2023-01-11T21:25:38.8735569Z test_namedtuple_type_annotation (jit.test_list_dict.TestNamedTuple) ... ok (0.003s) 2023-01-11T21:25:38.8735717Z test_namedtuple_wrong_types (jit.test_list_dict.TestNamedTuple) ... ok (0.002s) 2023-01-11T21:25:38.8735861Z test_return_named_tuple (jit.test_list_dict.TestNamedTuple) ... ok (0.003s) 2023-01-11T21:25:38.8736266Z test_adaptive_avg_pool2d (jit.test_backend_nnapi.TestNnapiBackend) ... /var/lib/jenkins/workspace/test/test_nnapi.py:14: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:25:38.8736348Z t = torch.tensor(t) 2023-01-11T21:25:38.8736403Z ok (0.332s) 2023-01-11T21:25:38.8736549Z test_avg_pool2d (jit.test_backend_nnapi.TestNnapiBackend) ... ok (1.069s) 2023-01-11T21:25:38.8736686Z test_cat (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.047s) 2023-01-11T21:25:38.8736841Z test_compile_spec_santiy (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.008s) 2023-01-11T21:25:38.8736980Z test_conv2d (jit.test_backend_nnapi.TestNnapiBackend) ... ok (3.351s) 2023-01-11T21:25:38.8737131Z test_conv2d_transpose (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.616s) 2023-01-11T21:25:38.8737276Z test_dequantize (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.011s) 2023-01-11T21:25:38.8737401Z test_detach (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.022s) 2023-01-11T21:25:38.8737544Z test_flatten (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.096s) 2023-01-11T21:25:38.8737689Z test_hardtanh (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.026s) 2023-01-11T21:25:38.8737831Z test_linear (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.024s) 2023-01-11T21:25:38.8738450Z test_log_softmax (jit.test_backend_nnapi.TestNnapiBackend) ... /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1482: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument. 2023-01-11T21:25:38.8738546Z return forward_call(*args, **kwargs) 2023-01-11T21:25:38.8738946Z /opt/conda/lib/python3.10/site-packages/torch/jit/_trace.py:460: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument. 2023-01-11T21:25:38.8739058Z outs = wrap_retval(mod(*_clone_inputs(inputs))) 2023-01-11T21:25:38.8739127Z ok (0.022s) 2023-01-11T21:25:38.8739259Z test_max_pool2d (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.484s) 2023-01-11T21:25:38.8739398Z test_mean (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.067s) 2023-01-11T21:25:38.8739617Z test_multi_output (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.013s) 2023-01-11T21:25:38.8739766Z test_pointwise_binary (jit.test_backend_nnapi.TestNnapiBackend) ... 
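The UserWarning above from test/test_nnapi.py flags the deprecated copy-construct pattern torch.tensor(sourceTensor). A minimal sketch of the replacement the warning itself recommends (the tensor name is illustrative, not taken from the test):

    import torch

    t = torch.ones(3)
    # Deprecated, triggers the warning above: torch.tensor(t)
    copy = t.clone().detach()                             # plain detached copy
    copy_grad = t.clone().detach().requires_grad_(True)   # copy that tracks gradients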
ok (0.101s) 2023-01-11T21:25:38.8739922Z test_pointwise_binary_const (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.043s) 2023-01-11T21:25:38.8740075Z test_pointwise_unary (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.041s) 2023-01-11T21:25:38.8740214Z test_prelu (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.032s) 2023-01-11T21:25:38.8740353Z test_qadd (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.130s) 2023-01-11T21:25:38.8740478Z test_qlinear (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.093s) 2023-01-11T21:25:38.8740620Z test_quantize (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.016s) 2023-01-11T21:25:38.8740766Z test_reshape (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.029s) 2023-01-11T21:25:38.8740945Z test_seblock_mul (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.011s) 2023-01-11T21:25:38.8741088Z test_slice (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.040s) 2023-01-11T21:25:38.8741595Z test_softmax (jit.test_backend_nnapi.TestNnapiBackend) ... /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1482: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument. 2023-01-11T21:25:38.8741690Z return forward_call(*args, **kwargs) 2023-01-11T21:25:38.8742078Z /opt/conda/lib/python3.10/site-packages/torch/jit/_trace.py:460: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument. 2023-01-11T21:25:38.8742189Z outs = wrap_retval(mod(*_clone_inputs(inputs))) 2023-01-11T21:25:38.8742244Z ok (0.032s) 2023-01-11T21:25:38.8742513Z test_tensor_input (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.018s) 2023-01-11T21:25:38.8742657Z test_to (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.029s) 2023-01-11T21:25:38.8742806Z test_unsqueeze (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.050s) 2023-01-11T21:25:38.8742963Z test_upsample_nearest2d (jit.test_backend_nnapi.TestNnapiBackend) ... ok (0.367s) 2023-01-11T21:25:38.8743143Z test_op_decomposition (jit.test_op_decompositions.TestOpDecompositions) ... ok (0.072s) 2023-01-11T21:25:38.8743337Z test_registered_decomposition (jit.test_op_decompositions.TestOpDecompositions) ... ok (0.006s) 2023-01-11T21:25:38.8743601Z test_fuse_activation_with_pack_ops_linear_conv2d_1 (jit.test_optimize_for_mobile_preserve_debug_info.TestOptimizeForMobilePreserveDebugInfo) ... ok (0.026s) 2023-01-11T21:25:38.8743853Z test_fuse_activation_with_pack_ops_linear_conv2d_2 (jit.test_optimize_for_mobile_preserve_debug_info.TestOptimizeForMobilePreserveDebugInfo) ... ok (0.026s) 2023-01-11T21:25:38.8744113Z test_fuse_activation_with_pack_ops_linear_conv2d_3 (jit.test_optimize_for_mobile_preserve_debug_info.TestOptimizeForMobilePreserveDebugInfo) ... ok (0.025s) 2023-01-11T21:25:38.8744374Z test_fuse_activation_with_pack_ops_linear_conv2d_4 (jit.test_optimize_for_mobile_preserve_debug_info.TestOptimizeForMobilePreserveDebugInfo) ... ok (0.031s) 2023-01-11T21:25:38.8744642Z test_insert_pre_packed_linear_before_inline_and_conv_2d_op (jit.test_optimize_for_mobile_preserve_debug_info.TestOptimizeForMobilePreserveDebugInfo) ... ok (0.016s) 2023-01-11T21:25:38.8744893Z test_insert_pre_packed_linear_op (jit.test_optimize_for_mobile_preserve_debug_info.TestOptimizeForMobilePreserveDebugInfo) ... ok (0.010s) 2023-01-11T21:25:38.8745137Z test_replace_conv1d_with_conv2d (jit.test_optimize_for_mobile_preserve_debug_info.TestOptimizeForMobilePreserveDebugInfo) ... 
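The log_softmax/softmax deprecation warnings above ask for an explicit dim argument. A minimal sketch of the updated call, assuming a 2-D input whose last dimension holds the classes:

    import torch
    import torch.nn.functional as F

    x = torch.randn(4, 10)
    # Deprecated: F.softmax(x) / F.log_softmax(x) with an implicit dimension
    probs = F.softmax(x, dim=-1)          # explicit dim silences the warning
    log_probs = F.log_softmax(x, dim=-1)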
ok (0.006s) 2023-01-11T21:25:38.8745247Z test_any (jit.test_pdt.TestPDT) ... ok (0.011s) 2023-01-11T21:25:38.8745381Z test_class_as_profiled_types (jit.test_pdt.TestPDT) ... ok (0.017s) 2023-01-11T21:25:38.8745503Z test_class_methods (jit.test_pdt.TestPDT) ... ok (0.008s) 2023-01-11T21:25:38.8745712Z test_class_with_args_as_profiled_types (jit.test_pdt.TestPDT) ... ok (0.015s) 2023-01-11T21:25:38.8745834Z test_class_with_multiple_methods (jit.test_pdt.TestPDT) ... ok (0.010s) 2023-01-11T21:25:38.8745973Z test_fx_tracing_with_typing (jit.test_pdt.TestPDT) ... ok (0.007s) 2023-01-11T21:25:38.8746115Z test_multiple_class_with_same_method (jit.test_pdt.TestPDT) ... ok (0.015s) 2023-01-11T21:25:38.8746250Z test_nested_function_in_forward (jit.test_pdt.TestPDT) ... ok (0.008s) 2023-01-11T21:25:38.8746380Z test_nested_list_and_tuple (jit.test_pdt.TestPDT) ... ok (0.015s) 2023-01-11T21:25:38.8746510Z test_nested_nn_module_class (jit.test_pdt.TestPDT) ... ok (0.010s) 2023-01-11T21:25:38.8746652Z test_nested_nn_module_class_with_args (jit.test_pdt.TestPDT) ... ok (0.011s) 2023-01-11T21:25:38.8746766Z test_nn_module (jit.test_pdt.TestPDT) ... ok (0.007s) 2023-01-11T21:25:38.8746891Z test_nn_module_with_export_function (jit.test_pdt.TestPDT) ... ok (0.009s) 2023-01-11T21:25:38.8747055Z test_nn_parameter_as_arg (jit.test_pdt.TestPDT) ... ok (0.007s) 2023-01-11T21:25:38.8747197Z test_nonetype_as_optional_of_type (jit.test_pdt.TestPDT) ... ok (0.007s) 2023-01-11T21:25:38.8747307Z test_pdt (jit.test_pdt.TestPDT) ... ok (0.017s) 2023-01-11T21:25:38.8747421Z test_pdt_dict (jit.test_pdt.TestPDT) ... ok (0.006s) 2023-01-11T21:25:38.8747546Z test_pdt_list_and_tuple (jit.test_pdt.TestPDT) ... ok (0.008s) 2023-01-11T21:25:38.8747715Z test_scriptable (jit.test_parametrization.TestParametrization) ... ok (0.030s) 2023-01-11T21:25:38.8747855Z test_traceable (jit.test_parametrization.TestParametrization) 2023-01-11T21:25:38.8747997Z Test the jit scripting and tracing of a parametrized model. ... ok (0.030s) 2023-01-11T21:25:38.8748139Z test_conv_dim_folding (jit.test_peephole.TestPeephole) ... ok (0.061s) 2023-01-11T21:25:38.8748286Z test_integer_refinement (jit.test_peephole.TestPeephole) ... ok (0.017s) 2023-01-11T21:25:38.8748425Z test_noop_peephole (jit.test_peephole.TestPeephole) ... ok (0.015s) 2023-01-11T21:25:38.8748573Z test_normalized_is_op (jit.test_peephole.TestPeephole) ... ok (0.004s) 2023-01-11T21:25:38.8748718Z test_normalized_isnot_op (jit.test_peephole.TestPeephole) ... ok (0.004s) 2023-01-11T21:25:38.8748858Z test_normalized_rsub (jit.test_peephole.TestPeephole) ... ok (0.005s) 2023-01-11T21:25:38.8749009Z test_optimize_out_comparison_same_value (jit.test_peephole.TestPeephole) ... ok (0.005s) 2023-01-11T21:25:38.8749143Z test_peephole (jit.test_peephole.TestPeephole) ... ok (0.009s) 2023-01-11T21:25:38.8749286Z test_peephole_add_zero (jit.test_peephole.TestPeephole) ... ok (0.003s) 2023-01-11T21:25:38.8749426Z test_peephole_arith (jit.test_peephole.TestPeephole) ... ok (0.005s) 2023-01-11T21:25:38.8749598Z test_peephole_cuda (jit.test_peephole.TestPeephole) ... skip: cpp tests require CUDA (0.001s) 2023-01-11T21:25:38.8749786Z test_peephole_dict_getitem_no_optimization_dict_modified (jit.test_peephole.TestPeephole) ... ok (0.003s) 2023-01-11T21:25:38.8749979Z test_peephole_dict_getitem_no_optimization_get_input_arg (jit.test_peephole.TestPeephole) ... ok (0.003s) 2023-01-11T21:25:38.8750172Z test_peephole_dict_getitem_no_optimization_keys_might_overlap (jit.test_peephole.TestPeephole) ... 
ok (0.002s) 2023-01-11T21:25:38.8750356Z test_peephole_dict_getitem_no_optimization_missing_key (jit.test_peephole.TestPeephole) ... ok (0.002s) 2023-01-11T21:25:38.8750532Z test_peephole_dict_getitem_no_optimization_overlapping_keys (jit.test_peephole.TestPeephole) ... ok (0.002s) 2023-01-11T21:25:38.8750773Z test_peephole_dict_getitem_no_optimization_unsupported_type (jit.test_peephole.TestPeephole) ... ok (0.003s) 2023-01-11T21:25:38.8750933Z test_peephole_dict_getitem_simple (jit.test_peephole.TestPeephole) ... ok (0.008s) 2023-01-11T21:25:38.8751078Z test_peephole_dict_len (jit.test_peephole.TestPeephole) ... ok (0.002s) 2023-01-11T21:25:38.8751266Z test_peephole_dict_len_no_optimization_keys_might_overlap (jit.test_peephole.TestPeephole) ... ok (0.002s) 2023-01-11T21:25:38.8751490Z test_peephole_dict_len_no_optimization_overlapping_keys (jit.test_peephole.TestPeephole) ... ok (0.003s) 2023-01-11T21:25:38.8751679Z test_peephole_dict_len_no_optimization_unsupported_type (jit.test_peephole.TestPeephole) ... ok (0.003s) 2023-01-11T21:25:38.8751824Z test_peephole_dynamic (jit.test_peephole.TestPeephole) ... ok (0.002s) 2023-01-11T21:25:38.8751948Z test_peephole_int (jit.test_peephole.TestPeephole) ... ok (0.002s) 2023-01-11T21:25:38.8752090Z test_peephole_len_list (jit.test_peephole.TestPeephole) ... ok (0.005s) 2023-01-11T21:25:38.8752233Z test_peephole_list_len (jit.test_peephole.TestPeephole) ... ok (0.039s) 2023-01-11T21:25:38.8752374Z test_peephole_list_ops (jit.test_peephole.TestPeephole) ... ok (0.012s) 2023-01-11T21:25:38.8752531Z test_peephole_no_output_aliasing (jit.test_peephole.TestPeephole) ... ok (0.005s) 2023-01-11T21:25:38.8752685Z test_peephole_optional_refine (jit.test_peephole.TestPeephole) ... ok (0.003s) 2023-01-11T21:25:38.8752878Z test_peephole_slice_all_three_args (jit.test_peephole.TestPeephole) ... ok (0.004s) 2023-01-11T21:25:38.8753039Z test_peephole_slice_one_empty_arg (jit.test_peephole.TestPeephole) ... ok (0.011s) 2023-01-11T21:25:38.8753214Z test_peephole_slice_optimization_not_applied_list_modified (jit.test_peephole.TestPeephole) ... ok (0.003s) 2023-01-11T21:25:38.8753403Z test_peephole_slice_optimization_not_applied_non_const_args (jit.test_peephole.TestPeephole) ... ok (0.003s) 2023-01-11T21:25:38.8753558Z test_peephole_slice_two_empty_args (jit.test_peephole.TestPeephole) ... ok (0.011s) 2023-01-11T21:25:38.8753712Z test_peephole_type_refinements (jit.test_peephole.TestPeephole) ... ok (0.010s) 2023-01-11T21:25:38.8753870Z test_peephole_with_non_output_writes (jit.test_peephole.TestPeephole) ... ok (0.006s) 2023-01-11T21:25:38.8754016Z test_peephole_with_writes (jit.test_peephole.TestPeephole) ... ok (0.004s) 2023-01-11T21:25:38.8754166Z test_refine_integer_values (jit.test_peephole.TestPeephole) ... ok (0.003s) 2023-01-11T21:25:38.8754326Z test_short_circuit_optimization (jit.test_peephole.TestPeephole) ... ok (0.005s) 2023-01-11T21:25:38.8754454Z test_version (__main__.TestProducerVersion) ... ok (0.000s) 2023-01-11T21:25:38.8754580Z test_aliasing_merge (jit.test_profiler.TestProfiler) ... ok (0.045s) 2023-01-11T21:25:38.8754728Z test_autograd_fallback_graph (jit.test_profiler.TestProfiler) ... ok (0.040s) 2023-01-11T21:25:38.8754888Z test_fallback_graph_not_specialized (jit.test_profiler.TestProfiler) ... ok (0.024s) 2023-01-11T21:25:38.8755028Z test_iterative_fusion (jit.test_profiler.TestProfiler) ... ok (0.044s) 2023-01-11T21:25:38.8755173Z test_local_fusion_strategy (jit.test_profiler.TestProfiler) ... 
ok (0.025s) 2023-01-11T21:25:38.8755317Z test_not_fusing_scalar_ops (jit.test_profiler.TestProfiler) ... ok (0.002s) 2023-01-11T21:25:38.8755466Z test_not_optimizing_property (jit.test_profiler.TestProfiler) ... ok (0.024s) 2023-01-11T21:25:38.8755616Z test_specialize_backward (jit.test_profiler.TestProfiler) ... ok (0.094s) 2023-01-11T21:25:38.8755750Z test_specialized_types (jit.test_profiler.TestProfiler) ... ok (0.023s) 2023-01-11T21:25:38.8755887Z test_tensor_constant (jit.test_profiler.TestProfiler) ... ok (0.024s) 2023-01-11T21:25:38.8756055Z test_tensor_type_not_determined_by_inputs (jit.test_profiler.TestProfiler) ... ok (0.052s) 2023-01-11T21:25:38.8756194Z test_use_not_profiled (jit.test_profiler.TestProfiler) ... ok (0.025s) 2023-01-11T21:25:38.8756349Z test_add_input (jit.test_python_bindings.TestPythonBindings) ... ok (0.001s) 2023-01-11T21:25:38.8756500Z test_aliasdb (jit.test_python_bindings.TestPythonBindings) ... ok (0.003s) 2023-01-11T21:25:38.8756658Z test_canonicalize (jit.test_python_bindings.TestPythonBindings) ... ok (0.001s) 2023-01-11T21:25:38.8756821Z test_cu_create_function (jit.test_python_bindings.TestPythonBindings) ... ok (0.004s) 2023-01-11T21:25:38.8756967Z test_cu_get_functions (jit.test_python_bindings.TestPythonBindings) ... ok (0.004s) 2023-01-11T21:25:38.8757160Z test_graph_create (jit.test_python_bindings.TestPythonBindings) ... ok (0.002s) 2023-01-11T21:25:38.8757331Z test_graph_iterator_keepalive (jit.test_python_bindings.TestPythonBindings) ... ok (0.002s) 2023-01-11T21:25:38.8757486Z test_invalidation (jit.test_python_bindings.TestPythonBindings) ... ok (0.003s) 2023-01-11T21:25:38.8757634Z test_add (jit.test_python_builtins.TestPythonBuiltinOP) ... ok (0.007s) 2023-01-11T21:25:38.8757799Z test_adv_indexing_list (jit.test_python_builtins.TestPythonBuiltinOP) ... ok (0.022s) 2023-01-11T21:25:38.8757961Z test_advancedindex (jit.test_python_builtins.TestPythonBuiltinOP) ... ok (0.022s) 2023-01-11T21:25:38.8758113Z test_gather (jit.test_python_builtins.TestPythonBuiltinOP) ... ok (0.004s) 2023-01-11T21:25:38.8758249Z test_index (jit.test_python_builtins.TestPythonBuiltinOP) ... ok (0.027s) 2023-01-11T21:25:38.8758410Z test_index_ellipses (jit.test_python_builtins.TestPythonBuiltinOP) ... ok (0.093s) 2023-01-11T21:25:38.8758590Z test_inf (jit.test_python_builtins.TestPythonBuiltinOP) ... ok (0.006s) 2023-01-11T21:25:38.8758748Z test_matmul_py3 (jit.test_python_builtins.TestPythonBuiltinOP) ... ok (0.008s) 2023-01-11T21:25:38.8758893Z test_mul (jit.test_python_builtins.TestPythonBuiltinOP) ... ok (0.006s) 2023-01-11T21:25:38.8759038Z test_pow (jit.test_python_builtins.TestPythonBuiltinOP) ... ok (0.033s) 2023-01-11T21:25:38.8759191Z test_random (jit.test_python_builtins.TestPythonBuiltinOP) ... ok (0.003s) 2023-01-11T21:25:38.8759339Z test_slice (jit.test_python_builtins.TestPythonBuiltinOP) ... ok (0.014s) 2023-01-11T21:25:38.8759495Z test_stepped_tuple_slicing (jit.test_python_builtins.TestPythonBuiltinOP) ... ok (0.006s) 2023-01-11T21:25:38.8759652Z test_str_to_float (jit.test_python_builtins.TestPythonBuiltinOP) ... ok (0.006s) 2023-01-11T21:25:38.8759804Z test_triple (jit.test_python_builtins.TestPythonBuiltinOP) ... ok (0.005s) 2023-01-11T21:25:38.8759939Z test_param_strides (jit.test_python_ir.TestPythonIr) ... ok (0.004s) 2023-01-11T21:25:38.8760098Z test_attributes (jit.test_recursive_script.TestRecursiveScript) ... ok (0.044s) 2023-01-11T21:25:38.8760257Z test_class_compile (jit.test_recursive_script.TestRecursiveScript) ... 
ok (0.018s) 2023-01-11T21:25:38.8760423Z test_constants_with_final (jit.test_recursive_script.TestRecursiveScript) ... ok (0.018s) 2023-01-11T21:25:38.8760568Z test_dir (jit.test_recursive_script.TestRecursiveScript) ... ok (0.044s) 2023-01-11T21:25:38.8760708Z test_error_stack (jit.test_recursive_script.TestRecursiveScript) ... ok (0.004s) 2023-01-11T21:25:38.8760875Z test_error_stack_annotation (jit.test_recursive_script.TestRecursiveScript) ... ok (0.009s) 2023-01-11T21:25:38.8761035Z test_error_stack_class (jit.test_recursive_script.TestRecursiveScript) ... ok (0.008s) 2023-01-11T21:25:38.8761197Z test_error_stack_module (jit.test_recursive_script.TestRecursiveScript) ... ok (0.007s) 2023-01-11T21:25:38.8761378Z test_failed_function_compilation (jit.test_recursive_script.TestRecursiveScript) ... ok (0.003s) 2023-01-11T21:25:38.8761561Z test_function_attribute_in_submodule (jit.test_recursive_script.TestRecursiveScript) ... ok (0.019s) 2023-01-11T21:25:38.8761719Z test_ignore_class (jit.test_recursive_script.TestRecursiveScript) ... ok (0.009s) 2023-01-11T21:25:38.8761881Z test_inferred_nonetype (jit.test_recursive_script.TestRecursiveScript) ... ok (0.007s) 2023-01-11T21:25:38.8762034Z test_init_error (jit.test_recursive_script.TestRecursiveScript) ... ok (0.001s) 2023-01-11T21:25:38.8762190Z test_inner_traced_module (jit.test_recursive_script.TestRecursiveScript) ... ok (0.017s) 2023-01-11T21:25:38.8762352Z test_iterable_modules (jit.test_recursive_script.TestRecursiveScript) ... ok (0.047s) 2023-01-11T21:25:38.8762507Z test_method_call (jit.test_recursive_script.TestRecursiveScript) ... ok (0.009s) 2023-01-11T21:25:38.8762663Z test_module_basic (jit.test_recursive_script.TestRecursiveScript) ... ok (0.014s) 2023-01-11T21:25:38.8762867Z test_module_function_export (jit.test_recursive_script.TestRecursiveScript) ... ok (0.015s) 2023-01-11T21:25:38.8763025Z test_module_name (jit.test_recursive_script.TestRecursiveScript) ... ok (0.004s) 2023-01-11T21:25:38.8763178Z test_module_repr (jit.test_recursive_script.TestRecursiveScript) ... ok (0.020s) 2023-01-11T21:25:38.8763339Z test_optional_module (jit.test_recursive_script.TestRecursiveScript) ... ok (0.019s) 2023-01-11T21:25:38.8763505Z test_override_instance_method_ignore (jit.test_recursive_script.TestRecursiveScript) ... ok (0.003s) 2023-01-11T21:25:38.8763680Z test_prepare_scriptable_basic (jit.test_recursive_script.TestRecursiveScript) ... ok (0.005s) 2023-01-11T21:25:38.8763853Z test_prepare_scriptable_cycle (jit.test_recursive_script.TestRecursiveScript) ... ok (0.003s) 2023-01-11T21:25:38.8764041Z test_prepare_scriptable_iterable_modules (jit.test_recursive_script.TestRecursiveScript) ... ok (0.049s) 2023-01-11T21:25:38.8764241Z test_python_function_attribute (jit.test_recursive_script.TestRecursiveScript) ... ok (0.006s) 2023-01-11T21:25:38.8764410Z test_repeated_error_stack (jit.test_recursive_script.TestRecursiveScript) ... ok (0.005s) 2023-01-11T21:25:38.8764575Z test_script_after_eval (jit.test_recursive_script.TestRecursiveScript) ... ok (0.008s) 2023-01-11T21:25:38.8764731Z test_script_basic (jit.test_recursive_script.TestRecursiveScript) ... ok (0.005s) 2023-01-11T21:25:38.8764902Z test_script_function_attribute (jit.test_recursive_script.TestRecursiveScript) ... ok (0.016s) 2023-01-11T21:25:38.8765044Z test_script_loaded_module (jit.test_recursive_script.TestRecursiveScript) 2023-01-11T21:25:38.8765184Z Test that we can hold a loaded ScriptModule as a submodule. ... 
ok (0.012s) 2023-01-11T21:25:38.8765339Z test_aten_inplace (jit.test_remove_mutation.TestRemoveMutation) ... ok (0.015s) 2023-01-11T21:25:38.8765505Z test_common_pytorch_list_ops (jit.test_remove_mutation.TestRemoveMutation) ... ok (0.027s) 2023-01-11T21:25:38.8765657Z test_if_output (jit.test_remove_mutation.TestRemoveMutation) ... ok (0.004s) 2023-01-11T21:25:38.8765814Z test_if_output_fail (jit.test_remove_mutation.TestRemoveMutation) ... ok (0.006s) 2023-01-11T21:25:38.8765980Z test_list_indexing_removal (jit.test_remove_mutation.TestRemoveMutation) ... ok (0.014s) 2023-01-11T21:25:38.8766138Z test_lists_append (jit.test_remove_mutation.TestRemoveMutation) ... ok (0.005s) 2023-01-11T21:25:38.8766274Z test_lists_insert (jit.test_remove_mutation.TestRemoveMutation) ... ok (0.003s) 2023-01-11T21:25:38.8766439Z test_special_mapped_op (jit.test_remove_mutation.TestRemoveMutation) ... ok (0.009s) 2023-01-11T21:25:38.8766569Z test_different_functions (jit.test_save_load.TestSaveLoad) 2023-01-11T21:25:38.8766708Z Exercise the situation where we have the same qualified name ... ok (0.022s) 2023-01-11T21:25:38.8766840Z test_different_interfaces (jit.test_save_load.TestSaveLoad) 2023-01-11T21:25:38.8766978Z Exercise the situation where we have the same qualified name ... ok (0.047s) 2023-01-11T21:25:38.8767108Z test_different_modules (jit.test_save_load.TestSaveLoad) 2023-01-11T21:25:38.8767245Z Exercise the situation where we have the same qualified name ... ok (0.034s) 2023-01-11T21:25:38.8767368Z test_many_collisions (jit.test_save_load.TestSaveLoad) ... ok (0.183s) 2023-01-11T21:25:38.8767496Z test_save_load_meta_tensors (jit.test_save_load.TestSaveLoad) 2023-01-11T21:25:38.8767653Z Check that parameters, buffers, and submodules are the same after loading ... ok (0.019s) 2023-01-11T21:25:38.8767802Z test_save_load_params_buffers_submodules (jit.test_save_load.TestSaveLoad) 2023-01-11T21:25:38.8767959Z Check that parameters, buffers, and submodules are the same after loading. ... ok (0.008s) 2023-01-11T21:25:38.8768104Z test_save_load_using_pathlib (jit.test_save_load.TestSaveLoad) ... ok (0.006s) 2023-01-11T21:25:38.8768252Z test_save_load_with_extra_files (jit.test_save_load.TestSaveLoad) ... ok (0.012s) 2023-01-11T21:25:38.8768393Z test_save_namedtuple_input_only (jit.test_save_load.TestSaveLoad) 2023-01-11T21:25:38.8768569Z Even if a NamedTuple is only used as an input argument, saving and ... ok (0.007s) 2023-01-11T21:25:38.8768705Z test_save_namedtuple_output_only (jit.test_save_load.TestSaveLoad) 2023-01-11T21:25:38.8768852Z Even if a NamedTuple is only used as an output argument, saving and ... ok (0.007s) 2023-01-11T21:25:38.8768989Z test_save_nonexit_file (jit.test_save_load.TestSaveLoad) ... ok (0.008s) 2023-01-11T21:25:38.8769138Z test_different_functions (jit.test_save_load.TestSaveLoadFlatbuffer) 2023-01-11T21:25:38.8769337Z Exercise the situation where we have the same qualified name ... skip: Need to enable flatbuffer to run the below tests (0.001s) 2023-01-11T21:25:38.8769488Z test_different_interfaces (jit.test_save_load.TestSaveLoadFlatbuffer) 2023-01-11T21:25:38.8769677Z Exercise the situation where we have the same qualified name ... skip: Need to enable flatbuffer to run the below tests (0.001s) 2023-01-11T21:25:38.8769853Z test_different_modules (jit.test_save_load.TestSaveLoadFlatbuffer) 2023-01-11T21:25:38.8770052Z Exercise the situation where we have the same qualified name ... 
skip: Need to enable flatbuffer to run the below tests (0.001s) 2023-01-11T21:25:38.8770267Z test_many_collisions (jit.test_save_load.TestSaveLoadFlatbuffer) ... skip: Need to enable flatbuffer to run the below tests (0.002s) 2023-01-11T21:25:38.8770489Z test_module_info_flatbuffer (jit.test_save_load.TestSaveLoadFlatbuffer) ... skip: Need to enable flatbuffer to run the below tests (0.001s) 2023-01-11T21:25:38.8770656Z test_save_load_params_buffers_submodules (jit.test_save_load.TestSaveLoadFlatbuffer) 2023-01-11T21:25:38.8770875Z Check that parameters, buffers, and submodules are the same after loading. ... skip: Need to enable flatbuffer to run the below tests (0.001s) 2023-01-11T21:25:38.8771095Z test_save_load_using_pathlib (jit.test_save_load.TestSaveLoadFlatbuffer) ... skip: Need to enable flatbuffer to run the below tests (0.000s) 2023-01-11T21:25:38.8771255Z test_save_load_with_extra_files (jit.test_save_load.TestSaveLoadFlatbuffer) 2023-01-11T21:25:38.8771466Z Check that parameters, buffers, and submodules are the same after loading. ... skip: Need to enable flatbuffer to run the below tests (0.000s) 2023-01-11T21:25:38.8771622Z test_save_namedtuple_input_only (jit.test_save_load.TestSaveLoadFlatbuffer) 2023-01-11T21:25:38.8771809Z Even if a NamedTuple is only used as an input argument, saving and ... skip: Need to enable flatbuffer to run the below tests (0.000s) 2023-01-11T21:25:38.8771969Z test_save_namedtuple_output_only (jit.test_save_load.TestSaveLoadFlatbuffer) 2023-01-11T21:25:38.8772166Z Even if a NamedTuple is only used as an output argument, saving and ... skip: Need to enable flatbuffer to run the below tests (0.000s) 2023-01-11T21:25:38.8772408Z test_versioned_div_scalar (jit.test_save_load_for_op_version.TestSaveLoadForOpVersion) ... Falsifying explicit example: test_versioned_div_scalar( 2023-01-11T21:25:38.8772608Z self=, 2023-01-11T21:25:38.8772697Z sample_input=(2, 3, 2.0, 3.0), 2023-01-11T21:25:38.8772758Z ) 2023-01-11T21:25:38.8772854Z skip: Failed to load fixture! (0.121s) 2023-01-11T21:25:38.8773041Z test_versioned_div_scalar_inplace (jit.test_save_load_for_op_version.TestSaveLoadForOpVersion) ... ok (0.418s) 2023-01-11T21:25:38.8773244Z test_versioned_div_scalar_reciprocal (jit.test_save_load_for_op_version.TestSaveLoadForOpVersion) ... ok (0.102s) 2023-01-11T21:25:38.8773440Z test_versioned_div_scalar_scalar (jit.test_save_load_for_op_version.TestSaveLoadForOpVersion) ... ok (0.006s) 2023-01-11T21:25:38.8773625Z test_versioned_div_tensor (jit.test_save_load_for_op_version.TestSaveLoadForOpVersion) ... ok (0.089s) 2023-01-11T21:25:38.8773822Z test_versioned_div_tensor_inplace (jit.test_save_load_for_op_version.TestSaveLoadForOpVersion) ... ok (0.536s) 2023-01-11T21:25:38.8774016Z test_versioned_div_tensor_out (jit.test_save_load_for_op_version.TestSaveLoadForOpVersion) ... ok (1.165s) 2023-01-11T21:25:38.8774232Z test_versioned_linspace (jit.test_save_load_for_op_version.TestSaveLoadForOpVersion) ... ok (0.014s) 2023-01-11T21:25:38.8774424Z test_versioned_linspace_out (jit.test_save_load_for_op_version.TestSaveLoadForOpVersion) ... ok (0.011s) 2023-01-11T21:25:38.8774611Z test_versioned_logspace (jit.test_save_load_for_op_version.TestSaveLoadForOpVersion) ... ok (0.013s) 2023-01-11T21:25:38.8774788Z test_versioned_logspace_out (jit.test_save_load_for_op_version.TestSaveLoadForOpVersion) ... ok (0.010s) 2023-01-11T21:25:38.8774899Z test_add_out (__main__.TestScript) ... 
ok (0.009s) 2023-01-11T21:25:38.8775033Z test_add_tuple_different_types (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8775160Z test_add_tuple_non_optional (__main__.TestScript) ... ok (0.009s) 2023-01-11T21:25:38.8775282Z test_add_tuple_optional (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8775444Z test_add_tuple_same_types (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8775543Z test_addmm_grad (__main__.TestScript) 2023-01-11T21:25:38.8775650Z This test checks several things: ... ok (0.005s) 2023-01-11T21:25:38.8775777Z test_alias_covariant_type_containers (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8775882Z test_all (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8776005Z test_annot_ast_mypy_fn (__main__.TestScript) ... ok (0.021s) 2023-01-11T21:25:38.8776131Z test_annot_ast_mypy_method (__main__.TestScript) ... ok (0.034s) 2023-01-11T21:25:38.8776249Z test_annot_ast_py3_fn (__main__.TestScript) ... ok (0.021s) 2023-01-11T21:25:38.8776372Z test_annot_ast_py3_method (__main__.TestScript) ... ok (0.036s) 2023-01-11T21:25:38.8776493Z test_annot_string_mypy_fn (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8776620Z test_annot_string_mypy_method (__main__.TestScript) ... ok (0.024s) 2023-01-11T21:25:38.8776727Z test_annot_string_py3_fn (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8776859Z test_annot_string_py3_method (__main__.TestScript) ... ok (0.025s) 2023-01-11T21:25:38.8776981Z test_annotated_script_fn (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8777121Z test_annotated_script_fn_arg_mismatch (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8777267Z test_annotated_script_fn_return_mismatch (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8777395Z test_annotated_script_method (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8777515Z test_annoying_doubles (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8777603Z test_any (__main__.TestScript) ... ok (0.020s) 2023-01-11T21:25:38.8777743Z test_assert_is_scripting_metacompile (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8777879Z test_assertion_optional_refinement (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8778005Z test_attr_module_constants (__main__.TestScript) ... ok (0.021s) 2023-01-11T21:25:38.8778130Z test_attr_qscheme_script (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8778248Z test_attribute_in_init (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8778383Z test_attribute_serialization (__main__.TestScript) ... ok (0.011s) 2023-01-11T21:25:38.8778506Z test_attribute_unpickling (__main__.TestScript) ... ok (0.009s) 2023-01-11T21:25:38.8778612Z test_augmented_assign (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8778746Z test_autodiff_complex (__main__.TestScript) ... skip: no CUDA (0.001s) 2023-01-11T21:25:38.8778872Z test_backend_cudnn_enabled (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8779002Z test_bad_multiline_annotations (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8779147Z test_bailout_loop_carried_deps_name_clash (__main__.TestScript) ... ok (0.009s) 2023-01-11T21:25:38.8779285Z test_bailout_loop_counter_transition (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8779478Z test_batch_norm_inference_backward_cuda (__main__.TestScript) ... skip: running tests on cuda to verify cudnn fix (0.002s) 2023-01-11T21:25:38.8779643Z test_batchnorm_fuser_cpu (__main__.TestScript) ... 
ok (0.303s) 2023-01-11T21:25:38.8779752Z test_big_float_literals (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8779871Z test_big_int_literals (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8779992Z test_binary_op_shape (__main__.TestScript) ... ok (0.218s) 2023-01-11T21:25:38.8780105Z test_bitwise_ops (__main__.TestScript) ... ok (0.021s) 2023-01-11T21:25:38.8780234Z test_block_input_grad_in_loop (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8780364Z test_bool_augassign_bitwise_and (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8780492Z test_bool_augassign_bitwise_or (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8780620Z test_bool_augassign_bitwise_xor (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8780726Z test_bool_dispatch (__main__.TestScript) ... ok (0.025s) 2023-01-11T21:25:38.8780909Z test_boolean_literal_constant_metacompile (__main__.TestScript) ... ok (0.013s) 2023-01-11T21:25:38.8781031Z test_break_continue_error (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8781149Z test_breaks_continues (__main__.TestScript) ... ok (0.117s) 2023-01-11T21:25:38.8781261Z test_builtin_args (__main__.TestScript) ... ok (0.013s) 2023-01-11T21:25:38.8781421Z test_builtin_args_fails (__main__.TestScript) ... You have not run this instance of FileCheck! 2023-01-11T21:25:38.8781495Z FileCheck checks: 2023-01-11T21:25:38.8781844Z [W ir_emitter.cpp:4385] Warning: List consists of heterogeneous types, which means that it has been typed as containing Union[List[int], int]. To use any of the values in this List, it will be necessary to add an `assert isinstance` statement before first use to trigger type refinement. 2023-01-11T21:25:38.8781961Z File "/var/lib/jenkins/workspace/test/test_jit.py", line 10812 2023-01-11T21:25:38.8782044Z @torch.jit.script 2023-01-11T21:25:38.8782122Z def f6(a): 2023-01-11T21:25:38.8782210Z a.expand(size=[3, [4]]) 2023-01-11T21:25:38.8782532Z ~~~~~~ <--- HERE 2023-01-11T21:25:38.8782623Z (function emitListLiteral) 2023-01-11T21:25:38.8782693Z ok (0.006s) 2023-01-11T21:25:38.8782816Z test_builtin_function_attributes (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8782938Z test_builtin_use_as_value (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8783045Z test_call_ge (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8783181Z test_call_python_fn_from_script_fn (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8783323Z test_call_python_fn_from_script_module (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8783460Z test_call_python_fn_from_traced_module (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8783595Z test_call_python_fn_from_tracing_fn (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8783735Z test_call_python_mod_from_script_fn (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8783862Z test_call_python_mod_from_script_module (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8784002Z test_call_python_mod_from_traced_module (__main__.TestScript) ... ok (0.014s) 2023-01-11T21:25:38.8784141Z test_call_python_mod_from_tracing_fn (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8784276Z test_call_script_fn_from_script_fn (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8784414Z test_call_script_fn_from_script_module (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8784548Z test_call_script_fn_from_tracing_fn (__main__.TestScript) ... 
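The ir_emitter.cpp warning above notes that a heterogeneous list literal gets typed as Union[List[int], int] and that an `assert isinstance` statement is needed before first use to trigger type refinement. A minimal sketch of that refinement pattern, using a standalone scripted function with a Union parameter rather than the failing expand call (the function and variable names are illustrative):

    import torch
    from typing import List, Union

    @torch.jit.script
    def use_union(x: Union[List[int], int]) -> int:
        # assert isinstance refines the Union before the value is used
        assert isinstance(x, int)
        return x + 1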
ok (0.007s) 2023-01-11T21:25:38.8784681Z test_call_script_mod_from_script_fn (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8784819Z test_call_script_mod_from_script_module (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8784976Z test_call_script_mod_from_tracing_fn (__main__.TestScript) ... skip: error in first class mode (0.001s) 2023-01-11T21:25:38.8785181Z test_call_traced_fn_from_tracing_fn (__main__.TestScript) ... ok (0.009s) 2023-01-11T21:25:38.8785349Z test_call_traced_mod_from_tracing_fn (__main__.TestScript) ... skip: error in first class mode (0.001s) 2023-01-11T21:25:38.8785644Z test_calls_in_type_annotations (__main__.TestScript) ... /opt/conda/lib/python3.10/site-packages/torch/__init__.py 2023-01-11T21:25:38.8785713Z ok (0.001s) 2023-01-11T21:25:38.8785850Z test_canonicalize_control_outputs (__main__.TestScript) ... ok (0.013s) 2023-01-11T21:25:38.8786047Z test_cast (__main__.TestScript) ... skip: RuntimeError: VariableType::ID() not implemented (0.000s) 2023-01-11T21:25:38.8786151Z test_cat (__main__.TestScript) ... ok (0.014s) 2023-01-11T21:25:38.8786248Z test_cat_lifts (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8786349Z test_chr (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8786459Z test_circular_dependency (__main__.TestScript) 2023-01-11T21:25:38.8786649Z https://github.com/pytorch/pytorch/issues/25871 ... ok (0.028s) 2023-01-11T21:25:38.8786773Z test_class_as_attribute (__main__.TestScript) ... ok (0.321s) 2023-01-11T21:25:38.8786893Z test_class_attribute (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8787024Z test_class_attribute_in_script (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8787170Z test_class_with_comment_at_lower_indentation (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8787265Z test_code_with_constants (__main__.TestScript) 2023-01-11T21:25:38.8787437Z Check that the `code_with_constants` property correctly returns graph CONSTANTS in the ... ok (0.006s) 2023-01-11T21:25:38.8787554Z test_code_with_constants_restore (__main__.TestScript) 2023-01-11T21:25:38.8787734Z Check that the `code_with_constants` property correctly works on restoration after save() + load() ... ok (0.011s) 2023-01-11T21:25:38.8787860Z test_comment_ignore_indent (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8787991Z test_compare_two_bool_inputs (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8788192Z test_compile_module_with_constant (__main__.TestScript) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.002s) 2023-01-11T21:25:38.8788317Z test_conditional_casting (__main__.TestScript) ... ok (0.043s) 2023-01-11T21:25:38.8788422Z test_constant_as_attr (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8788567Z test_constant_pooling_introduce_aliasing (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8788692Z test_constant_pooling_none (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8788828Z test_constant_pooling_same_identity (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8788950Z test_context_manager (__main__.TestScript) ... ok (0.015s) 2023-01-11T21:25:38.8789064Z test_conv_error (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8789179Z test_convert_base (__main__.TestScript) ... ok (0.020s) 2023-01-11T21:25:38.8789311Z test_cpp_function_tensor_str (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8789419Z test_cpp_module_iterator (__main__.TestScript) ... 
ok (0.010s) 2023-01-11T21:25:38.8789537Z test_desugar_module (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8789651Z test_device_kwarg (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8789765Z test_device_type (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8789904Z test_device_type_cuda (__main__.TestScript) ... skip: Requires CUDA (0.000s) 2023-01-11T21:25:38.8790008Z test_dir (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8790115Z test_divmod (__main__.TestScript) ... ok (0.029s) 2023-01-11T21:25:38.8790263Z test_dominated_bailout (__main__.TestScript) ... skip: bailouts are being deprecated (0.002s) 2023-01-11T21:25:38.8790378Z test_dropout_eval (__main__.TestScript) ... ok (0.238s) 2023-01-11T21:25:38.8790488Z test_dtype_attr (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8790643Z test_dtype_op_shape (__main__.TestScript) ... ok (0.036s) 2023-01-11T21:25:38.8790825Z test_dtype_op_shape2 (__main__.TestScript) ... ok (0.048s) 2023-01-11T21:25:38.8790953Z test_early_return_closure (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8791080Z test_early_return_fork_join (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8791203Z test_early_return_rewrite (__main__.TestScript) ... ok (0.020s) 2023-01-11T21:25:38.8791322Z test_early_return_type_refinement (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8791445Z test_early_returns_loops (__main__.TestScript) ... ok (0.035s) 2023-01-11T21:25:38.8791567Z test_ellipsis_const_end (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8791689Z test_ellipsis_const_mid (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8791822Z test_ellipsis_const_mid_select (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8791984Z test_ellipsis_const_start (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8792103Z test_ellipsis_end (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8792216Z test_ellipsis_mid (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8792323Z test_ellipsis_mid_select (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8792441Z test_ellipsis_start (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8792576Z test_embedding_renorm_grad_error (__main__.TestScript) ... ok (0.016s) 2023-01-11T21:25:38.8792706Z test_empty_like_memory_format_bc (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8792825Z test_empty_tuple_str (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8792952Z test_enumerate_modlist_range (__main__.TestScript) ... ok (0.032s) 2023-01-11T21:25:38.8793077Z test_erase_number_types (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8793168Z test_error (__main__.TestScript) ... ok (0.019s) 2023-01-11T21:25:38.8793286Z test_error_stacktrace (__main__.TestScript) ... ok (0.014s) 2023-01-11T21:25:38.8793422Z test_error_stacktrace_interface (__main__.TestScript) ... ok (0.729s) 2023-01-11T21:25:38.8793534Z test_eval_python (__main__.TestScript) ... ok (0.009s) 2023-01-11T21:25:38.8793662Z test_exception_exits_closure (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8793797Z test_exceptions_with_control_flow (__main__.TestScript) ... ok (0.035s) 2023-01-11T21:25:38.8793902Z test_expand (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8794007Z test_fibb (__main__.TestScript) ... ok (0.011s) 2023-01-11T21:25:38.8794114Z test_fibb_totally_better (__main__.TestScript) ... ok (0.009s) 2023-01-11T21:25:38.8794246Z test_file_format_serialization (__main__.TestScript) ... 
ok (0.002s) 2023-01-11T21:25:38.8794362Z test_file_line_error (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8794488Z test_file_line_error_class_defn (__main__.TestScript) ... ok (0.698s) 2023-01-11T21:25:38.8794605Z test_file_line_graph (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8794728Z test_file_line_save_load (__main__.TestScript) ... ok (0.268s) 2023-01-11T21:25:38.8794845Z test_file_line_string (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8794961Z test_file_line_trace (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8795058Z test_filecheck (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8795176Z test_filecheck_parse (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8795295Z test_first_class_calls (__main__.TestScript) ... ok (0.400s) 2023-01-11T21:25:38.8795416Z test_first_class_module (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8795525Z test_floor_div (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8795635Z test_floordiv (__main__.TestScript) ... ok (0.021s) 2023-01-11T21:25:38.8795742Z test_for_else (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8795839Z test_for_in_dict (__main__.TestScript) ... ok (0.009s) 2023-01-11T21:25:38.8795989Z test_for_in_enumerate (__main__.TestScript) ... ok (0.029s) 2023-01-11T21:25:38.8796100Z test_for_in_range (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8796214Z test_for_in_range_ast (__main__.TestScript) ... ok (0.018s) 2023-01-11T21:25:38.8796336Z test_for_in_range_dynamic (__main__.TestScript) ... ok (0.019s) 2023-01-11T21:25:38.8796455Z test_for_in_range_if_ast (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8796576Z test_for_in_range_start_end (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8796710Z test_for_in_range_start_end_step (__main__.TestScript) ... ok (0.016s) 2023-01-11T21:25:38.8796820Z test_for_in_range_zero_step (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8796933Z test_for_in_string (__main__.TestScript) ... ok (0.015s) 2023-01-11T21:25:38.8797046Z test_for_in_tensors (__main__.TestScript) ... ok (0.009s) 2023-01-11T21:25:38.8797172Z test_for_in_tensors_fail_scalar (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8797330Z test_for_in_tensors_nested (__main__.TestScript) ... ok (0.011s) 2023-01-11T21:25:38.8797450Z test_for_in_tensors_rank0 (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8797561Z test_for_in_zip (__main__.TestScript) ... ok (0.022s) 2023-01-11T21:25:38.8797670Z test_for_in_zip_enumerate (__main__.TestScript) ... ok (0.017s) 2023-01-11T21:25:38.8797788Z test_for_tuple_assign (__main__.TestScript) ... ok (0.009s) 2023-01-11T21:25:38.8797904Z test_for_tuple_unpack (__main__.TestScript) ... ok (0.016s) 2023-01-11T21:25:38.8798009Z test_format (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8798117Z test_func_call (__main__.TestScript) ... ok (0.016s) 2023-01-11T21:25:38.8798250Z test_function_compilation_caching (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8798379Z test_function_overload_misuse (__main__.TestScript) ... ok (0.797s) 2023-01-11T21:25:38.8798519Z test_function_overloading_isinstance (__main__.TestScript) ... ok (0.012s) 2023-01-11T21:25:38.8798630Z test_function_overloads (__main__.TestScript) ... ok (0.050s) 2023-01-11T21:25:38.8798834Z test_fuser_double_float_codegen (__main__.TestScript) ... 
skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s) 2023-01-11T21:25:38.8798969Z test_fuser_double_literal_precision (__main__.TestScript) ... ok (0.240s) 2023-01-11T21:25:38.8799094Z test_fuser_multiple_blocks (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8799217Z test_gather_dynamic_index (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8799337Z test_generic_list_errors (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8799452Z test_get_set_state (__main__.TestScript) ... ok (0.028s) 2023-01-11T21:25:38.8799581Z test_get_set_state_with_tensors (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8799687Z test_grad_from_script (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8799791Z test_hash (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8799907Z test_hex_literals (__main__.TestScript) ... ok (0.011s) 2023-01-11T21:25:38.8800010Z test_id (__main__.TestScript) ... ok (0.363s) 2023-01-11T21:25:38.8800111Z test_if (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8800220Z test_if_define (__main__.TestScript) ... ok (0.012s) 2023-01-11T21:25:38.8800341Z test_if_different_type (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8800443Z test_if_for_in_range (__main__.TestScript) ... ok (0.013s) 2023-01-11T21:25:38.8800565Z test_if_is_none_dispatch (__main__.TestScript) ... ok (0.017s) 2023-01-11T21:25:38.8800675Z test_if_list_cat (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8800789Z test_if_nest_while (__main__.TestScript) ... ok (0.054s) 2023-01-11T21:25:38.8800896Z test_if_noelse (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8801017Z test_if_not_defined_error (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8801130Z test_if_supertype (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8801281Z test_ignore_decorator (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8801385Z test_ignored_as_value (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8801511Z test_ignored_method_binding (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8801626Z test_ignored_props (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8801765Z test_import_constants_not_specialized (__main__.TestScript) ... ok (0.010s) 2023-01-11T21:25:38.8801889Z test_in_for_and_comp_expr (__main__.TestScript) ... ok (0.010s) 2023-01-11T21:25:38.8802020Z test_in_operator_with_two_strings (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8802125Z test_index (__main__.TestScript) ... ok (0.081s) 2023-01-11T21:25:38.8802254Z test_index_select_shape_prop (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8802358Z test_index_with_tuple (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8802476Z test_indexing_error (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8802617Z test_infer_size (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8802727Z test_inferred_error_msg (__main__.TestScript) 2023-01-11T21:25:38.8802875Z Test that when we get a type mismatch on a function where we inferred ... ok (0.002s) 2023-01-11T21:25:38.8802994Z test_inherit_method (__main__.TestScript) ... ok (0.011s) 2023-01-11T21:25:38.8803200Z test_inline_and_run_annotated_script_fn (__main__.TestScript) ... skip: https://github.com/pytorch/pytorch/issues/9595 (0.000s) 2023-01-11T21:25:38.8803303Z test_inlined_graph (__main__.TestScript) 2023-01-11T21:25:38.8803443Z Check that the `inlined_graph` property correctly returns an inlined ... 
ok (0.010s) 2023-01-11T21:25:38.8803563Z test_inlining_cleanup (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8803676Z test_inplace_add (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8803801Z test_inplace_copy_script (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8803930Z test_input_keyword_in_schema (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8804042Z test_int_cast (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8804173Z test_integral_shape_inference (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8804278Z test_interpret_graph (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8804398Z test_interpreter_fuzz (__main__.TestScript) ... ok (0.289s) 2023-01-11T21:25:38.8804510Z test_intlist_args (__main__.TestScript) ... ok (0.127s) 2023-01-11T21:25:38.8804636Z test_invalid_call_arguments (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8804759Z test_invalid_lhs_assignment (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8804890Z test_invalid_prefix_annotation (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8805001Z test_irparser (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8805112Z test_is_after_use (__main__.TestScript) ... ok (0.010s) 2023-01-11T21:25:38.8805271Z test_is_isnot (__main__.TestScript) ... :4: SyntaxWarning: "is" with a literal. Did you mean "=="?
2023-01-11T21:25:38.8811117Z :4: SyntaxWarning: "is not" with a literal. Did you mean "!="?
2023-01-11T21:25:38.8817147Z ok (0.045s) 2023-01-11T21:25:38.8817261Z test_is_optional (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8817377Z test_is_scripting (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8817510Z test_is_scripting_metacompile (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8817620Z test_isinstance (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8817729Z test_isinstance_dynamic (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8817858Z test_isinstance_metacompile (__main__.TestScript) ...
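The SyntaxWarning lines attached to test_is_isnot are emitted by CPython itself (Python 3.8+) whenever compiled code compares against a literal with is / is not; the test compiles many such snippets. A small stand-alone reproduction, with made-up function names:

def is_two(x: int) -> bool:
    # Compiling this line emits: SyntaxWarning: "is" with a literal. Did you mean "=="?
    return x is 2

def equals_two(x: int) -> bool:
    # "is" checks object identity; "==" checks value equality, which is what is meant here.
    return x == 2

print(is_two(2), equals_two(2))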
ok (0.006s) 2023-01-11T21:25:38.8817983Z test_isinstance_refinement (__main__.TestScript) ... ok (0.013s) 2023-01-11T21:25:38.8818092Z test_jitter_bug (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8818201Z test_keyword (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8818358Z test_kwarg_expansion_error (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8818483Z test_kwargs_error_msg (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8818582Z test_lazy_script (__main__.TestScript) ... ok (0.018s) 2023-01-11T21:25:38.8818722Z test_lhs_advanced_indexing_assignment (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8818878Z test_lhs_advanced_indexing_augmented_assignment (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8818995Z test_lhs_indexing (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8819120Z test_lhs_indexing_increment (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8819252Z test_lhs_indexing_increment_list (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8819389Z test_lhs_indexing_increment_list_prim (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8819507Z test_lhs_indexing_list (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8819616Z test_lhs_indexing_multi (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8819733Z test_linear_grad (__main__.TestScript) ... ok (0.013s) 2023-01-11T21:25:38.8819870Z test_list_comprehension_modulelist (__main__.TestScript) ... ok (0.045s) 2023-01-11T21:25:38.8820011Z test_list_comprehension_variable_write (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8820133Z test_list_iterables (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8820249Z test_list_python_op (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8820358Z test_list_unify (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8820466Z test_literal (__main__.TestScript) ... ok (0.017s) 2023-01-11T21:25:38.8820562Z test_literals (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8820686Z test_logical_short_circuit (__main__.TestScript) ... ok (0.021s) 2023-01-11T21:25:38.8820800Z test_loop_liveness (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8820922Z test_loop_unroll_negative (__main__.TestScript) ... ok (0.011s) 2023-01-11T21:25:38.8821057Z test_loop_unroll_unused_counter (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8821174Z test_loop_unrolling (__main__.TestScript) ... ok (0.009s) 2023-01-11T21:25:38.8821297Z test_loop_unrolling_const (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8821421Z test_loop_unrolling_nested (__main__.TestScript) ... ok (0.052s) 2023-01-11T21:25:38.8821530Z test_lower_nested_tuples (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8821639Z test_math_ops (__main__.TestScript) ... ok (0.208s) 2023-01-11T21:25:38.8821812Z test_maxpool_guard_elimination (__main__.TestScript) ... skip: bailouts are being deprecated (0.001s) 2023-01-11T21:25:38.8822018Z test_meshgrid (__main__.TestScript) ... skip: Profiling executor fails to recognize that tensors in a list require gradients (0.001s) 2023-01-11T21:25:38.8822141Z test_method_casts_script (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8822256Z test_method_no_self (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8822538Z test_method_overloading (__main__.TestScript) ... ok (7.707s) 2023-01-11T21:25:38.8822662Z test_missing_getstate (__main__.TestScript) ... 
ok (0.004s) 2023-01-11T21:25:38.8822760Z test_mm_batching (__main__.TestScript) ... ok (0.519s) 2023-01-11T21:25:38.8822873Z test_module_apis (__main__.TestScript) ... ok (0.049s) 2023-01-11T21:25:38.8822988Z test_module_attrs (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8823122Z test_module_copy_with_attributes (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8823240Z test_module_copying (__main__.TestScript) ... ok (0.016s) 2023-01-11T21:25:38.8823354Z test_module_error (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8823488Z test_module_method_reassignment (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8823596Z test_module_none_attrs (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8823821Z test_module_parameters_and_buffers (__main__.TestScript) ... ok (0.025s) 2023-01-11T21:25:38.8823937Z test_module_str (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8824074Z test_module_with_params_called_fails (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8824192Z test_multi_reduction (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8824319Z test_multi_starred_expr_lhs (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8824448Z test_multiline_annot_ast_py3_fn (__main__.TestScript) ... ok (0.022s) 2023-01-11T21:25:38.8824591Z test_multiline_optional_future_refinement (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8824708Z test_multiline_string_dedents (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8824831Z test_multiple_assign (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8824956Z test_multiple_assignment (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8825069Z test_mutable_dce (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8825195Z test_mutable_dce_block (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8825325Z test_mutable_dce_graph_input (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8825469Z test_mutable_dce_indirect_wildcard_write (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8825605Z test_mutable_dce_indirect_wildcards (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8825711Z test_mutable_dce_list (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8825831Z test_mutable_dce_loop (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8825956Z test_mutable_dce_wildcards (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8826076Z test_mutate_constant (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8826194Z test_mypy_type_ignore (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8826329Z test_named_buffers_are_iterable (__main__.TestScript) ... ok (0.043s) 2023-01-11T21:25:38.8826449Z test_namedtuple_attr (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8826586Z test_namedtuple_default_values_Tensor_type (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8826734Z test_namedtuple_default_values_container_type (__main__.TestScript) ... ok (0.012s) 2023-01-11T21:25:38.8826872Z test_namedtuple_default_values_missing (__main__.TestScript) ... ok (0.011s) 2023-01-11T21:25:38.8827015Z test_namedtuple_default_values_simple_type (__main__.TestScript) ... ok (0.011s) 2023-01-11T21:25:38.8827183Z test_namedtuple_default_values_using_factory_constructor (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8827307Z test_namedtuple_python (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8827437Z test_namedtuple_type_inference (__main__.TestScript) ... 
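The test_namedtuple_* entries above exercise TorchScript's support for typing.NamedTuple arguments, return values, and default field values. A minimal sketch under those APIs; the Point class and the shift function are made up:

from typing import NamedTuple

import torch

class Point(NamedTuple):
    x: torch.Tensor
    y: torch.Tensor

@torch.jit.script
def shift(p: Point, d: torch.Tensor) -> Point:
    # NamedTuples can be taken as arguments, constructed, and returned inside TorchScript.
    return Point(p.x + d, p.y + d)

p = shift(Point(torch.zeros(2), torch.ones(2)), torch.tensor(1.0))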
ok (0.004s) 2023-01-11T21:25:38.8827549Z test_narrow_copy (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8827656Z test_nested_aug_assign (__main__.TestScript) ... ok (1.097s) 2023-01-11T21:25:38.8827775Z test_nested_bailouts (__main__.TestScript) ... ok (0.009s) 2023-01-11T21:25:38.8827926Z test_nested_breaks (__main__.TestScript) ... ok (0.026s) 2023-01-11T21:25:38.8828049Z test_nested_list_construct (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8828170Z test_nested_select_assign (__main__.TestScript) ... ok (0.017s) 2023-01-11T21:25:38.8828276Z test_nn_GRU (__main__.TestScript) ... ok (0.168s) 2023-01-11T21:25:38.8828385Z test_nn_LSTM (__main__.TestScript) ... ok (0.110s) 2023-01-11T21:25:38.8828503Z test_nn_LSTM_with_layers (__main__.TestScript) ... ok (0.109s) 2023-01-11T21:25:38.8828596Z test_nn_init (__main__.TestScript) ... ok (0.032s) 2023-01-11T21:25:38.8828711Z test_no_dtype_shape (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8828841Z test_no_self_arg_ignore_function (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8828956Z test_non_final_return (__main__.TestScript) ... ok (0.079s) 2023-01-11T21:25:38.8829068Z test_none_type_str (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8829201Z test_not (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8829323Z test_not_initialized_err (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8829430Z test_ntuple_builtins (__main__.TestScript) ... ok (0.009s) 2023-01-11T21:25:38.8829538Z test_number_abs (__main__.TestScript) ... ok (0.012s) 2023-01-11T21:25:38.8829656Z test_number_augassign (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8829793Z test_number_augassign_bitwise_lshift (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8829925Z test_number_augassign_bitwise_pow (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8830059Z test_number_augassign_bitwise_rshift (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8830170Z test_number_div (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8830282Z test_number_math (__main__.TestScript) ... ok (1.892s) 2023-01-11T21:25:38.8830377Z test_number_neg (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8830548Z test_old_models_bc (__main__.TestScript) ... skip: PyTorch is build without Caffe2 support (0.001s) 2023-01-11T21:25:38.8830665Z test_oneline_func (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8830831Z test_op_dtype (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8830957Z test_operator_precedence (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8831305Z test_optional_list (__main__.TestScript) ... skip: the current version of Profiler doesn't profile/specialize Optionals (0.001s) 2023-01-11T21:25:38.8831604Z test_optional_tensor (__main__.TestScript) ... skip: the current version of Profiler doesn't profile/specialize Optionals (0.001s) 2023-01-11T21:25:38.8831709Z test_ord (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8831814Z test_override_magic (__main__.TestScript) ... ok (0.010s) 2023-01-11T21:25:38.8831941Z test_pack_tuple_into_non_var (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8832063Z test_pack_unpack_nested (__main__.TestScript) ... ok (0.027s) 2023-01-11T21:25:38.8832191Z test_pack_unpack_state (__main__.TestScript) ... ok (0.013s) 2023-01-11T21:25:38.8832346Z test_parameter_order (__main__.TestScript) ... 
tensor([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 2023-01-11T21:25:38.8832445Z 14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27., 2023-01-11T21:25:38.8832544Z 28., 29., 30., 31., 32., 33., 34., 35., 36., 37., 38., 39., 40., 41., 2023-01-11T21:25:38.8832633Z 42., 43., 44., 45., 46., 47., 48., 49., 50., 51.], 2023-01-11T21:25:38.8832705Z grad_fn=) 2023-01-11T21:25:38.8832811Z tensor([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 2023-01-11T21:25:38.8832911Z 14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27., 2023-01-11T21:25:38.8833009Z 28., 29., 30., 31., 32., 33., 34., 35., 36., 37., 38., 39., 40., 41., 2023-01-11T21:25:38.8833099Z 42., 43., 44., 45., 46., 47., 48., 49., 50., 51.], 2023-01-11T21:25:38.8833222Z grad_fn=) 2023-01-11T21:25:38.8833288Z ok (0.010s) 2023-01-11T21:25:38.8833412Z test_parse_empty_tuple_annotation (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8833562Z test_parse_empty_tuple_annotation_element_error (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8833683Z test_parse_nested_names (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8833813Z test_parse_none_type_annotation (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8833939Z test_parse_tensor_constants (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8834058Z test_parser_kwargonly (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8834188Z test_parser_type_annotations (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8834325Z test_parser_type_annotations_comment (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8834469Z test_parser_type_annotations_incompatible_expression (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8834655Z test_parser_type_annotations_subscript_non_ident (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8834805Z test_parser_type_annotations_subscript_tensor (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8834947Z test_parser_type_annotations_unknown_type (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8835066Z test_partial_returns (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8835168Z test_pass (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8835290Z test_pickle_checkpoint (__main__.TestScript) ... ok (0.011s) 2023-01-11T21:25:38.8835432Z test_pickle_checkpoint_cuda (__main__.TestScript) ... skip: no CUDA (0.000s) 2023-01-11T21:25:38.8835543Z test_pickle_checkpoint_tup (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8835708Z test_pow_scalar_backward_cuda (__main__.TestScript) ... skip: device tests require CUDA (0.001s) 2023-01-11T21:25:38.8835833Z test_pretty_print_function (__main__.TestScript) ... ok (0.028s) 2023-01-11T21:25:38.8836011Z test_prim_grad_undefined (__main__.TestScript) ... skip: shape analysis is only enabled in Legacy (0.001s) 2023-01-11T21:25:38.8836116Z test_print (__main__.TestScript) ... ok (0.012s) 2023-01-11T21:25:38.8836228Z test_print_kwargs (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8836398Z test_profiling_graph_executor (__main__.TestScript) ... skip: bailouts are being deprecated (0.001s) 2023-01-11T21:25:38.8836518Z test_profiling_merge (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8836636Z test_pybind_type_comparisons (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8836747Z test_python_call (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8836872Z test_python_call_annotation (__main__.TestScript) ... 
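test_parameter_order prints two matching tensors built from a module's parameters; the PyTorch property such a check leans on is that nn.Module.named_parameters() yields parameters in registration order. A minimal sketch with made-up parameter names:

import torch

m = torch.nn.Module()
for i in range(10):
    # register_parameter preserves insertion order, like a plain dict.
    m.register_parameter("p%d" % i, torch.nn.Parameter(torch.full((1,), float(i))))

names = [name for name, _ in m.named_parameters()]
assert names == ["p%d" % i for i in range(10)]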
ok (0.005s) 2023-01-11T21:25:38.8837007Z test_python_call_annoytation_failure (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8837130Z test_python_call_failure (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8837259Z test_python_call_non_tensor (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8837390Z test_python_call_non_tensor_wrong (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8837507Z test_python_frontend (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8837616Z test_python_frontend_py3 (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8837747Z test_python_frontend_source_range (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8837869Z test_python_op_builtins (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8837984Z test_python_op_name (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8838113Z test_python_val_doesnt_have_attr (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8838260Z test_rand (__main__.TestScript) ... skip: the original version of test_rand (0.001s) 2023-01-11T21:25:38.8838377Z test_rand_profiling (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8838487Z test_range_args (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8838626Z test_reassign_module_lhs (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8838748Z test_reassign_module_rhs (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8838872Z test_refine_tuple_types (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8838988Z test_remove_dropout (__main__.TestScript) ... ok (0.012s) 2023-01-11T21:25:38.8839120Z test_repeated_script_on_function (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8839235Z test_request_bailout (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8839388Z test_requires_grad_loop (__main__.TestScript) ... skip: Peeling is now disabled (0.001s) 2023-01-11T21:25:38.8839507Z test_rescripting_loaded_modules (__main__.TestScript) ... ok (0.026s) 2023-01-11T21:25:38.8839627Z test_resize_input_ops (__main__.TestScript) ... ok (0.009s) 2023-01-11T21:25:38.8839733Z test_return (__main__.TestScript) ... ok (0.014s) 2023-01-11T21:25:38.8839887Z test_return_stmt_not_at_end (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8840001Z test_return_tuple (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8840126Z test_robust_op_resolution (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8840230Z test_round (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8840351Z test_save_load_attr_error (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8840459Z test_script_annotation (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8840580Z test_script_bool_constant (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8840691Z test_script_chunk (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8840809Z test_script_clamp_none (__main__.TestScript) ... ok (0.014s) 2023-01-11T21:25:38.8840921Z test_script_copy (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8841030Z test_script_cu (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8841155Z test_script_define_order (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8841296Z test_script_define_order_recursive_fail (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8841403Z test_script_docstring (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8841542Z test_script_forward_method_replacement (__main__.TestScript) ... 
ok (0.006s) 2023-01-11T21:25:38.8841689Z test_script_get_device_cuda (__main__.TestScript) ... skip: requires CUDA (0.000s) 2023-01-11T21:25:38.8841816Z test_script_get_tracing_state (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8841933Z test_script_is_tracing (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8842057Z test_script_kwargs_fn_call (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8842187Z test_script_method_docstring (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8842319Z test_script_method_torch_function_overload (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8842434Z test_script_module (__main__.TestScript) ... ok (0.017s) 2023-01-11T21:25:38.8842568Z test_script_module_call_noscript (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8842688Z test_script_module_const (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8842829Z test_script_module_const_submodule_fail (__main__.TestScript) ... ok (0.020s) 2023-01-11T21:25:38.8842961Z test_script_module_export_blocks (__main__.TestScript) ... ok (0.011s) 2023-01-11T21:25:38.8843441Z test_script_module_export_shared_storage (__main__.TestScript) ... /var/lib/jenkins/workspace/test/test_jit.py:10624: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:25:38.8843623Z self.assertTrue(m_import.param1.storage().data_ptr() == m_import.param2.storage().data_ptr()) 2023-01-11T21:25:38.8844034Z /var/lib/jenkins/workspace/test/test_jit.py:10625: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:25:38.8844243Z self.assertTrue(m_import.param1.storage().data_ptr() != m_import.param3.storage().data_ptr()) 2023-01-11T21:25:38.8844311Z ok (0.008s) 2023-01-11T21:25:38.8844434Z test_script_module_export_submodule (__main__.TestScript) ... ok (0.019s) 2023-01-11T21:25:38.8844613Z test_script_module_export_tensor_cuda (__main__.TestScript) ... skip: testing cuda tensors require CUDA (0.001s) 2023-01-11T21:25:38.8845121Z test_script_module_export_tensor_type (__main__.TestScript) ... /var/lib/jenkins/workspace/test/test_jit.py:10560: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:25:38.8845250Z self.assertTrue(m_orig.param.storage().size() == 25) 2023-01-11T21:25:38.8845317Z ok (0.010s) 2023-01-11T21:25:38.8845446Z test_script_module_fail_exist (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8845567Z test_script_module_for (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8845688Z test_script_module_for2 (__main__.TestScript) ... ok (0.032s) 2023-01-11T21:25:38.8845820Z test_script_module_invalid_consts (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8845946Z test_script_module_nochange_submodule (__main__.TestScript) ... ok (0.009s) 2023-01-11T21:25:38.8846300Z test_script_module_none_exist_fail (__main__.TestScript) ... 
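The TypedStorage deprecation warnings above are triggered by test_jit.py comparing param.storage().data_ptr() values; the warning text itself names the replacement. A minimal sketch of that replacement, assuming a build new enough to have Tensor.untyped_storage (tensor names are made up):

import torch

a = torch.zeros(5)
b = a.view(5)  # a view shares the same underlying storage

# Deprecated spelling used by the test: a.storage().data_ptr() == b.storage().data_ptr()
# Replacement suggested by the warning:
assert a.untyped_storage().data_ptr() == b.untyped_storage().data_ptr()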
skip: [module dedupe] currently NoneType refinement on optional attributes doesn't work. (0.001s) 2023-01-11T21:25:38.8846428Z test_script_module_not_tuple (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8846574Z test_script_module_param_buffer_mutation (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8846704Z test_script_module_star_assign2 (__main__.TestScript) ... ok (0.013s) 2023-01-11T21:25:38.8846843Z test_script_module_star_assign2_inplace (__main__.TestScript) ... ok (0.013s) 2023-01-11T21:25:38.8846986Z test_script_module_star_assign_fail_builtin (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8847134Z test_script_module_star_assign_fail_pythonop (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8847263Z test_script_module_tensor_subclass_argument (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8847388Z test_script_nested_mod_list (__main__.TestScript) ... ok (0.035s) 2023-01-11T21:25:38.8847522Z test_script_non_tensor_args_outputs (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8847643Z test_script_optional_none (__main__.TestScript) ... ok (0.012s) 2023-01-11T21:25:38.8847761Z test_script_outputs (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8847897Z test_script_pack_padded_sequence (__main__.TestScript) ... ok (0.036s) 2023-01-11T21:25:38.8848039Z test_script_pad_sequence_pack_sequence (__main__.TestScript) ... ok (0.040s) 2023-01-11T21:25:38.8848153Z test_script_scope (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8848265Z test_script_sequential_for (__main__.TestScript) ... ok (0.025s) 2023-01-11T21:25:38.8848399Z test_script_sequential_in_mod_list (__main__.TestScript) ... ok (0.041s) 2023-01-11T21:25:38.8848539Z test_script_sequential_multi_output_fail (__main__.TestScript) ... ok (0.014s) 2023-01-11T21:25:38.8848670Z test_script_sequential_orderdict (__main__.TestScript) ... ok (0.029s) 2023-01-11T21:25:38.8848811Z test_script_sequential_sliced_iteration (__main__.TestScript) ... ok (0.030s) 2023-01-11T21:25:38.8848932Z test_script_star_assign (__main__.TestScript) ... ok (0.011s) 2023-01-11T21:25:38.8849053Z test_script_star_expr (__main__.TestScript) ... ok (0.019s) 2023-01-11T21:25:38.8849214Z test_script_star_expr_string (__main__.TestScript) ... ok (0.019s) 2023-01-11T21:25:38.8849326Z test_scriptable_fn_as_attr (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8849478Z test_scriptmodule_multi_head_attn_cuda (__main__.TestScript) ... skip: no CUDA (0.001s) 2023-01-11T21:25:38.8849635Z test_scriptmodule_releases_tensors_cuda (__main__.TestScript) ... skip: no CUDA (0.001s) 2023-01-11T21:25:38.8849784Z test_scriptmodule_transformer_cuda (__main__.TestScript) ... skip: no CUDA (0.001s) 2023-01-11T21:25:38.8849906Z test_select_after_chunk (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8850026Z test_sequence_parsing (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8850163Z test_sequential_intermediary_types (__main__.TestScript) ... ok (0.025s) 2023-01-11T21:25:38.8850289Z test_serialization_big_ints (__main__.TestScript) ... ok (0.009s) 2023-01-11T21:25:38.8850431Z test_serialization_sharing (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8850556Z test_serialize_long_lines (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8850685Z test_serialized_source_ranges (__main__.TestScript) ... ok (0.277s) 2023-01-11T21:25:38.8850816Z test_serialized_source_ranges2 (__main__.TestScript) ... 
ok (0.404s) 2023-01-11T21:25:38.8850958Z test_serialized_source_ranges_dont_jitter (__main__.TestScript) ... ok (0.069s) 2023-01-11T21:25:38.8851093Z test_serialized_source_ranges_graph (__main__.TestScript) ... ok (0.275s) 2023-01-11T21:25:38.8851231Z test_serialized_source_ranges_no_dups (__main__.TestScript) ... ok (0.011s) 2023-01-11T21:25:38.8851353Z test_set_attribute_through_optional (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8851486Z test_shape_analysis_grad_property (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8851609Z test_shape_analysis_loop (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8851746Z test_shape_prop_promote_scalar_arg (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8851872Z test_shape_prop_promotion (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8851993Z test_signed_float_zero (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8852126Z test_single_starred_expr_for_loop (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8852245Z test_single_starred_lhs (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8852359Z test_singleton_tuple_unpack (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8852527Z test_slice_guard_elimination (__main__.TestScript) ... skip: bailouts are being deprecated (0.000s) 2023-01-11T21:25:38.8852633Z test_split (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8852734Z test_stack (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8852852Z test_static_if_prop (__main__.TestScript) ... ok (2.003s) 2023-01-11T21:25:38.8852965Z test_static_method_on_module (__main__.TestScript) 2023-01-11T21:25:38.8853123Z Check that the `@staticmethod` annotation on a function on a module works. ... ok (0.008s) 2023-01-11T21:25:38.8853240Z test_static_methods (__main__.TestScript) ... ok (0.020s) 2023-01-11T21:25:38.8853336Z test_str_cast (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8853445Z test_string_cu (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8853585Z test_string_device_implicit_conversion (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8853707Z test_string_frontend_elif (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8853821Z test_string_index (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8853931Z test_string_len (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8854043Z test_string_list (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8854160Z test_string_new_line (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8854258Z test_string_ops (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8854371Z test_string_print (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8854535Z test_string_single_escape (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8854652Z test_string_slicing (__main__.TestScript) ... ok (0.012s) 2023-01-11T21:25:38.8854764Z test_string_sort (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8854878Z test_string_sorted (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8855019Z test_submodule_attribute_serialization (__main__.TestScript) ... ok (0.016s) 2023-01-11T21:25:38.8855126Z test_submodule_twice (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8855229Z test_sum (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8855350Z test_sum_list_diff_elms (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8855464Z test_sum_list_empty (__main__.TestScript) ... 
ok (0.006s) 2023-01-11T21:25:38.8855580Z test_sum_list_literal (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8855690Z test_sum_list_one (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8855840Z test_sum_list_wrong_type (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8855960Z test_sys_stdout_override (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8856249Z test_tensor_as_tensor_shape_prop (__main__.TestScript) ... skip: Simple Executor doesn't have any shapes to propagate (0.001s) 2023-01-11T21:25:38.8856363Z test_tensor_data (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8856514Z test_tensor_device (__main__.TestScript) ... skip: device tests require CUDA (0.001s) 2023-01-11T21:25:38.8856629Z test_tensor_dtype (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8856743Z test_tensor_grad (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8856864Z test_tensor_import_export (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8856974Z test_tensor_len (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8857092Z test_tensor_number_math (__main__.TestScript) ... ok (0.398s) 2023-01-11T21:25:38.8857219Z test_tensor_number_math_cuda (__main__.TestScript) ... skip: No CUDA (0.000s) 2023-01-11T21:25:38.8857376Z test_tensor_requires_grad (__main__.TestScript) ... skip: testing legacy behavior (0.001s) 2023-01-11T21:25:38.8857489Z test_tensor_shape (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8858059Z test_tensor_subclasses (__main__.TestScript) ... /opt/conda/lib/python3.10/site-packages/torch/jit/annotations.py:309: UserWarning: TorchScript will treat type annotations of Tensor dtype-specific subtypes as if they are normal Tensors. dtype constraints are not enforced in compilation either. 2023-01-11T21:25:38.8858202Z warnings.warn("TorchScript will treat type annotations of Tensor " 2023-01-11T21:25:38.8858269Z ok (0.007s) 2023-01-11T21:25:38.8858381Z test_tensor_to (__main__.TestScript) ... ok (0.020s) 2023-01-11T21:25:38.8858494Z test_tensor_to_cpu (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8858630Z test_tensor_to_cuda (__main__.TestScript) ... skip: device tests require CUDA (0.000s) 2023-01-11T21:25:38.8858785Z test_tensor_to_device (__main__.TestScript) ... skip: device tests require CUDA (0.000s) 2023-01-11T21:25:38.8858893Z test_ternary (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8859022Z test_ternary_module_type_hint (__main__.TestScript) ... ok (0.035s) 2023-01-11T21:25:38.8859152Z test_ternary_right_associative (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8859271Z test_ternary_static_if (__main__.TestScript) ... ok (0.009s) 2023-01-11T21:25:38.8859379Z test_torch_any (__main__.TestScript) ... ok (0.015s) 2023-01-11T21:25:38.8859540Z test_torch_functional (__main__.TestScript) ... skip: Skipping while landing PR stack (0.002s) 2023-01-11T21:25:38.8859663Z test_torch_functional_tensordot_int (__main__.TestScript) ... ok (0.042s) 2023-01-11T21:25:38.8859800Z test_torch_functional_tensordot_list (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8859938Z test_torch_functional_tensordot_tensor (__main__.TestScript) ... ok (0.021s) 2023-01-11T21:25:38.8860110Z test_torch_functional_tensordot_tuple (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8860247Z test_torch_ignore_conversion_to_none (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8860364Z test_torch_manual_seed (__main__.TestScript) ... 
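The UserWarning printed for test_tensor_subclasses comes from torch/jit/annotations.py: dtype-specific annotations such as torch.LongTensor are accepted but compiled as plain Tensor, so the dtype is not checked. A hedged sketch of an annotation that should trigger it; the function itself is made up:

import torch

@torch.jit.script
def add_one(x: torch.LongTensor) -> torch.Tensor:
    # TorchScript treats the torch.LongTensor annotation as torch.Tensor;
    # nothing enforces that x actually has dtype int64.
    return x + 1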
ok (0.006s) 2023-01-11T21:25:38.8860475Z test_torch_pow (__main__.TestScript) ... ok (0.031s) 2023-01-11T21:25:38.8861041Z test_torch_tensor_as_tensor (__main__.TestScript) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:471: UserWarning: Casting complex values to real discards the imaginary part (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Copy.cpp:276.) 2023-01-11T21:25:38.8861131Z return callable(*args, **kwargs) 2023-01-11T21:25:38.8861185Z ok (0.513s) 2023-01-11T21:25:38.8861321Z test_torch_tensor_as_tensor_empty_list (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8861445Z test_torch_tensor_bad_input (__main__.TestScript) ... ok (0.027s) 2023-01-11T21:25:38.8861597Z test_torch_tensor_dtype (__main__.TestScript) ... ok (0.010s) 2023-01-11T21:25:38.8861730Z test_torchscript_memoryformat (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8861860Z test_torchscript_multi_head_attn (__main__.TestScript) ... ok (0.075s) 2023-01-11T21:25:38.8862006Z test_torchscript_multi_head_attn_fast_path (__main__.TestScript) ... ok (0.155s) 2023-01-11T21:25:38.8862111Z test_training_param (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8862233Z test_tuple_assignments (__main__.TestScript) ... ok (0.345s) 2023-01-11T21:25:38.8862470Z test_tuple_error_msg (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8862593Z test_tuple_index_to_list (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8862714Z test_tuple_indexing (__main__.TestScript) ... ok (0.019s) 2023-01-11T21:25:38.8862825Z test_tuple_len (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8862947Z test_tuple_nested_sort (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8863064Z test_tuple_sort (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8863172Z test_tuple_sort_reverse (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8863289Z test_tuple_sorted (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8863408Z test_tuple_to_opt_list (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8863541Z test_tuple_unsortable_diff_type (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8863678Z test_tuple_unsortable_element_type (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8863820Z test_tuple_unsortable_nested_diff_type (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8863937Z test_type_annotate (__main__.TestScript) ... ok (0.014s) 2023-01-11T21:25:38.8864049Z test_type_annotation_module (__main__.TestScript) ... ok (0.012s) 2023-01-11T21:25:38.8864172Z test_type_annotation_py3 (__main__.TestScript) ... ok (0.010s) 2023-01-11T21:25:38.8864295Z test_type_annotations (__main__.TestScript) ... ok (0.016s) 2023-01-11T21:25:38.8864439Z test_type_annotations_repeated_list (__main__.TestScript) ... ok (0.012s) 2023-01-11T21:25:38.8864571Z test_type_annotations_varargs (__main__.TestScript) ... ok (0.012s) 2023-01-11T21:25:38.8864693Z test_type_call_in_script (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8864802Z test_type_cast (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8864925Z test_type_comments_in_body (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8865040Z test_type_inferred_from_empty_annotation (__main__.TestScript) 2023-01-11T21:25:38.8865228Z Test that the type inferred from an empty or missing annotation is Torch.Tensor wtih `inferred=true` ... ok (0.002s) 2023-01-11T21:25:38.8865451Z test_unbind (__main__.TestScript) ... 
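The UserWarning under test_torch_tensor_as_tensor is PyTorch's standard complex-to-real cast warning, raised whenever a complex tensor is converted to a real dtype. A small reproduction sketch with made-up values:

import torch

z = torch.tensor([1 + 2j, 3 - 4j])           # complex input
r = torch.as_tensor(z, dtype=torch.float32)   # UserWarning: Casting complex values to real discards the imaginary part
print(r)                                      # tensor([1., 3.]) -- only the real parts survive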
skip: Profiling executor will be using different heuristics for constructing differentiable graphs (0.001s) 2023-01-11T21:25:38.8865568Z test_unfold_zero_dim (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8865742Z test_unicode_comments (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8865860Z test_uninitialized (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8865977Z test_union_to_number (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8866097Z test_unknown_builtin (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8866214Z test_unmatched_type_annotation (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8866347Z test_unspecialized_any_binding (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8866526Z test_unsqueeze_guard_elimination (__main__.TestScript) ... skip: bailouts are being deprecated (0.000s) 2023-01-11T21:25:38.8866660Z test_unsupported_builtin_error (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8866783Z test_unused_decorator (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8866958Z test_unwrap_optional_builtin (__main__.TestScript) ... ok (0.009s) 2023-01-11T21:25:38.8867183Z test_var_aug_assign (__main__.TestScript) ... ok (0.940s) 2023-01-11T21:25:38.8867306Z test_vararg_zeros (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8867429Z test_view_listconstruct_shape_prop (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8867548Z test_view_shape_prop (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8867659Z test_view_write (__main__.TestScript) ... ok (0.006s) 2023-01-11T21:25:38.8867782Z test_weak_cuda (__main__.TestScript) ... skip: no CUDA (0.001s) 2023-01-11T21:25:38.8867889Z test_where (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8868003Z test_where_method (__main__.TestScript) ... ok (0.004s) 2023-01-11T21:25:38.8868107Z test_while (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8868221Z test_while_nest_if (__main__.TestScript) ... ok (0.042s) 2023-01-11T21:25:38.8868340Z test_while_nonexistent_cond_value (__main__.TestScript) ... ok (0.025s) 2023-01-11T21:25:38.8868467Z test_while_nonexistent_value (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8868604Z test_while_write_outer_then_read (__main__.TestScript) ... ok (0.007s) 2023-01-11T21:25:38.8868725Z test_wrong_attr_lookup (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8868851Z test_wrong_implicit_expand (__main__.TestScript) ... ok (0.005s) 2023-01-11T21:25:38.8868979Z test_wrong_method_call_inputs (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8869104Z test_wrong_module_attr_lookup (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8869209Z test_wrong_return_type (__main__.TestScript) ... ok (0.003s) 2023-01-11T21:25:38.8869332Z test_wrong_use_as_callable (__main__.TestScript) ... ok (0.001s) 2023-01-11T21:25:38.8869451Z test_wrong_use_as_tuple (__main__.TestScript) ... ok (0.002s) 2023-01-11T21:25:38.8869556Z test_zeros (__main__.TestScript) ... ok (0.008s) 2023-01-11T21:25:38.8869684Z test_zip_enumerate_modulelist (__main__.TestScript) ... ok (0.412s) 2023-01-11T21:25:38.8869800Z test_bool (jit.test_list_dict.TestScriptDict) 2023-01-11T21:25:38.8869928Z Test the __bool__ method. This should return True ... ok (0.002s) 2023-01-11T21:25:38.8870049Z test_contains (jit.test_list_dict.TestScriptDict) 2023-01-11T21:25:38.8870155Z Test membership checks (x in y, x not in y). ... 
ok (0.001s) 2023-01-11T21:25:38.8870270Z test_delitem (jit.test_list_dict.TestScriptDict) 2023-01-11T21:25:38.8870353Z Test deletion. ... ok (0.001s) 2023-01-11T21:25:38.8870469Z test_getitem (jit.test_list_dict.TestScriptDict) 2023-01-11T21:25:38.8870606Z Test accessing dictionary values using the [] operator. ... ok (0.001s) 2023-01-11T21:25:38.8870772Z test_items (jit.test_list_dict.TestScriptDict) 2023-01-11T21:25:38.8870857Z Test .items(). ... ok (0.001s) 2023-01-11T21:25:38.8870955Z test_iter (jit.test_list_dict.TestScriptDict) 2023-01-11T21:25:38.8871150Z Test iteration over a dictionary's keys. ... ok (0.001s) 2023-01-11T21:25:38.8871266Z test_len (jit.test_list_dict.TestScriptDict) 2023-01-11T21:25:38.8871371Z Test len() builtin function. ... ok (0.001s) 2023-01-11T21:25:38.8871521Z test_nested (jit.test_list_dict.TestScriptDict) 2023-01-11T21:25:38.8871759Z Test that reference semantics are honoured when the ScriptDict that is ... skip: Cannot pass until all dicts returned from TorchScript are ScriptDicts (0.001s) 2023-01-11T21:25:38.8871892Z test_reference_semantics (jit.test_list_dict.TestScriptDict) 2023-01-11T21:25:38.8872043Z Test that reference semantics are honoured; that modifications made ... ok (0.003s) 2023-01-11T21:25:38.8872141Z test_repr (jit.test_list_dict.TestScriptDict) 2023-01-11T21:25:38.8872238Z Test the __repr__ method. ... ok (0.001s) 2023-01-11T21:25:38.8872351Z test_setitem (jit.test_list_dict.TestScriptDict) 2023-01-11T21:25:38.8872483Z Test setting dictionary values using the [] operator. ... ok (0.001s) 2023-01-11T21:25:38.8872596Z test_append (jit.test_list_dict.TestScriptList) 2023-01-11T21:25:38.8872686Z Test append method. ... ok (0.001s) 2023-01-11T21:25:38.8872824Z test_bool (jit.test_list_dict.TestScriptList) 2023-01-11T21:25:38.8872937Z Test the __bool__ method. This should return True ... ok (0.001s) 2023-01-11T21:25:38.8873051Z test_clear (jit.test_list_dict.TestScriptList) 2023-01-11T21:25:38.8873129Z Test clear. ... ok (0.001s) 2023-01-11T21:25:38.8873244Z test_contains (jit.test_list_dict.TestScriptList) 2023-01-11T21:25:38.8873362Z Test membership checks (x in y, x not in y). ... ok (0.001s) 2023-01-11T21:25:38.8873476Z test_count (jit.test_list_dict.TestScriptList) 2023-01-11T21:25:38.8873562Z Test count method. ... ok (0.001s) 2023-01-11T21:25:38.8873664Z test_delitem (jit.test_list_dict.TestScriptList) 2023-01-11T21:25:38.8873747Z Test deletion. ... ok (0.001s) 2023-01-11T21:25:38.8873858Z test_extend (jit.test_list_dict.TestScriptList) 2023-01-11T21:25:38.8873939Z Test extend. ... ok (0.002s) 2023-01-11T21:25:38.8874053Z test_getitem (jit.test_list_dict.TestScriptList) 2023-01-11T21:25:38.8874181Z Test accessing list elements using the [] operator. ... ok (0.003s) 2023-01-11T21:25:38.8874299Z test_insert (jit.test_list_dict.TestScriptList) 2023-01-11T21:25:38.8874367Z Test insert. ... ok (0.001s) 2023-01-11T21:25:38.8874477Z test_iter (jit.test_list_dict.TestScriptList) 2023-01-11T21:25:38.8874642Z Test iteration over a list's elements. ... ok (0.001s) 2023-01-11T21:25:38.8874754Z test_len (jit.test_list_dict.TestScriptList) 2023-01-11T21:25:38.8874853Z Test len() builtin function. ... ok (0.001s) 2023-01-11T21:25:38.8874965Z test_nested (jit.test_list_dict.TestScriptList) 2023-01-11T21:25:38.8875202Z Test that reference semantics are honoured when the ScriptList that is ... 
skip: Cannot pass until all list returned from TorchScript are ScriptLists (0.001s) 2023-01-11T21:25:38.8875313Z test_pop (jit.test_list_dict.TestScriptList) 2023-01-11T21:25:38.8875378Z Test pop. ... ok (0.001s) 2023-01-11T21:25:38.8875509Z test_reference_semantics (jit.test_list_dict.TestScriptList) 2023-01-11T21:25:38.8875665Z Test that reference semantics are honoured; that modifications made ... ok (0.003s) 2023-01-11T21:25:38.8875781Z test_remove (jit.test_list_dict.TestScriptList) 2023-01-11T21:25:38.8875869Z Test remove method. ... ok (0.001s) 2023-01-11T21:25:38.8875979Z test_repr (jit.test_list_dict.TestScriptList) 2023-01-11T21:25:38.8876073Z Test the __repr__ method. ... ok (0.001s) 2023-01-11T21:25:38.8876174Z test_setitem (jit.test_list_dict.TestScriptList) 2023-01-11T21:25:38.8876298Z Test setting list elements using the [] operator. ... ok (0.002s) 2023-01-11T21:25:38.8876571Z test_annotated_class_level_annotation_and_init_annotation (jit.test_scriptmod_ann.TestScriptModuleInstanceAttributeTypeAnnotation) ... ok (0.006s) 2023-01-11T21:25:38.8876823Z test_annotated_class_level_annotation_only (jit.test_scriptmod_ann.TestScriptModuleInstanceAttributeTypeAnnotation) ... ok (0.006s) 2023-01-11T21:25:38.8877077Z test_annotated_class_level_jit_annotation (jit.test_scriptmod_ann.TestScriptModuleInstanceAttributeTypeAnnotation) ... ok (0.006s) 2023-01-11T21:25:38.8877346Z test_annotated_empty_dict (jit.test_scriptmod_ann.TestScriptModuleInstanceAttributeTypeAnnotation) ... ok (0.008s) 2023-01-11T21:25:38.8877575Z test_annotated_empty_list (jit.test_scriptmod_ann.TestScriptModuleInstanceAttributeTypeAnnotation) ... ok (0.007s) 2023-01-11T21:25:38.8877815Z test_annotated_empty_optional (jit.test_scriptmod_ann.TestScriptModuleInstanceAttributeTypeAnnotation) ... ok (0.006s) 2023-01-11T21:25:38.8878051Z test_annotated_empty_tensor (jit.test_scriptmod_ann.TestScriptModuleInstanceAttributeTypeAnnotation) ... ok (0.006s) 2023-01-11T21:25:38.8878291Z test_annotated_falsy_base_type (jit.test_scriptmod_ann.TestScriptModuleInstanceAttributeTypeAnnotation) ... ok (0.006s) 2023-01-11T21:25:38.8878524Z test_annotated_nonempty_container (jit.test_scriptmod_ann.TestScriptModuleInstanceAttributeTypeAnnotation) ... ok (0.006s) 2023-01-11T21:25:38.8878767Z test_annotated_with_jit_attribute (jit.test_scriptmod_ann.TestScriptModuleInstanceAttributeTypeAnnotation) ... ok (0.006s) 2023-01-11T21:25:38.8879041Z test_annotated_with_jit_empty_dict (jit.test_scriptmod_ann.TestScriptModuleInstanceAttributeTypeAnnotation) ... ok (0.007s) 2023-01-11T21:25:38.8879284Z test_annotated_with_jit_empty_list (jit.test_scriptmod_ann.TestScriptModuleInstanceAttributeTypeAnnotation) ... ok (0.006s) 2023-01-11T21:25:38.8879530Z test_annotated_with_jit_empty_optional (jit.test_scriptmod_ann.TestScriptModuleInstanceAttributeTypeAnnotation) ... ok (0.006s) 2023-01-11T21:25:38.8879773Z test_annotated_with_torch_jit_import (jit.test_scriptmod_ann.TestScriptModuleInstanceAttributeTypeAnnotation) ... ok (0.005s) 2023-01-11T21:25:38.8879919Z test_basic (jit.test_script_profile.TestScriptProfile) ... ok (0.071s) 2023-01-11T21:25:38.8880066Z test_empty (jit.test_script_profile.TestScriptProfile) ... ok (0.001s) 2023-01-11T21:25:38.8880211Z test_multi (jit.test_script_profile.TestScriptProfile) ... ok (0.078s) 2023-01-11T21:25:38.8880344Z test_script (jit.test_script_profile.TestScriptProfile) ... ok (0.209s) 2023-01-11T21:25:38.8880496Z test_section (jit.test_script_profile.TestScriptProfile) ... 
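The TestScriptModuleInstanceAttributeTypeAnnotation entries above cover the ways an instance attribute's type can be declared so that torch.jit.script does not fall back to its default inference (an empty list, for example, is otherwise inferred as List[Tensor]). A minimal sketch using a class-level annotation; module M and the history attribute are made up:

from typing import List

import torch

class M(torch.nn.Module):
    history: List[int]  # class-level annotation read by torch.jit.script

    def __init__(self):
        super().__init__()
        self.history = []

    def forward(self, x: int) -> int:
        self.history.append(x)
        return x

scripted = torch.jit.script(M())
scripted(3)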
ok (0.246s) 2023-01-11T21:25:38.8880629Z test_module_list_slicing (jit.test_slice.TestSlice) ... ok (0.020s) 2023-01-11T21:25:38.8880760Z test_slice_as_variable (jit.test_slice.TestSlice) ... ok (0.004s) 2023-01-11T21:25:38.8880890Z test_slice_dynamic_index (jit.test_slice.TestSlice) ... ok (0.005s) 2023-01-11T21:25:38.8881011Z test_slice_kwarg (jit.test_slice.TestSlice) ... ok (0.001s) 2023-01-11T21:25:38.8881136Z test_slice_one_none (jit.test_slice.TestSlice) ... ok (0.004s) 2023-01-11T21:25:38.8881262Z test_slice_start_stop (jit.test_slice.TestSlice) ... ok (0.004s) 2023-01-11T21:25:38.8881384Z test_slice_start_stop_step (jit.test_slice.TestSlice) ... ok (0.004s) 2023-01-11T21:25:38.8881522Z test_slice_start_stop_with_none (jit.test_slice.TestSlice) ... ok (0.004s) 2023-01-11T21:25:38.8881655Z test_slice_stop_clipped (jit.test_slice.TestSlice) ... ok (0.004s) 2023-01-11T21:25:38.8881785Z test_slice_stop_only (jit.test_slice.TestSlice) ... ok (0.004s) 2023-01-11T21:25:38.8881926Z test_slice_stop_only_with_nones (jit.test_slice.TestSlice) ... ok (0.004s) 2023-01-11T21:25:38.8882048Z test_slice_string (jit.test_slice.TestSlice) ... ok (0.004s) 2023-01-11T21:25:38.8882169Z test_slice_tensor (jit.test_slice.TestSlice) ... ok (0.004s) 2023-01-11T21:25:38.8882303Z test_slice_tensor_multidim (jit.test_slice.TestSlice) ... ok (0.005s) 2023-01-11T21:25:38.8882432Z test_slice_tensor_multidim_with_dots (jit.test_slice.TestSlice) ... ok (0.004s) 2023-01-11T21:25:38.8882558Z test_slice_three_nones (jit.test_slice.TestSlice) ... ok (0.004s) 2023-01-11T21:25:38.8882681Z test_slice_two_nones (jit.test_slice.TestSlice) ... ok (0.004s) 2023-01-11T21:25:38.8882807Z test_tuple_slicing (jit.test_slice.TestSlice) ... ok (0.005s) 2023-01-11T21:25:38.8882946Z test_freeze_sparse_coo (jit.test_sparse.TestSparse) ... ok (0.007s) 2023-01-11T21:25:38.8883079Z test_freeze_sparse_csr (jit.test_sparse.TestSparse) ... ok (0.008s) 2023-01-11T21:25:38.8883245Z test_serialize_sparse_coo (jit.test_sparse.TestSparse) ... ok (0.006s) 2023-01-11T21:25:38.8883367Z test_serialize_sparse_csr (jit.test_sparse.TestSparse) ... ok (0.006s) 2023-01-11T21:25:38.8883538Z test_modulo_operator (jit.test_string_formatting.TestStringFormatting) ... ok (0.003s) 2023-01-11T21:25:38.8883756Z test_string_interpolation_with_alternate_digit_placeholder (jit.test_string_formatting.TestStringFormatting) ... ok (0.003s) 2023-01-11T21:25:38.8883995Z test_string_interpolation_with_capital_exponent_placeholder_and_digit_variable (jit.test_string_formatting.TestStringFormatting) ... ok (0.003s) 2023-01-11T21:25:38.8884213Z test_string_interpolation_with_char_placeholder_and_char_variable (jit.test_string_formatting.TestStringFormatting) ... ok (0.003s) 2023-01-11T21:25:38.8884436Z test_string_interpolation_with_char_placeholder_and_digit_variable (jit.test_string_formatting.TestStringFormatting) ... ok (0.003s) 2023-01-11T21:25:38.8884709Z test_string_interpolation_with_char_placeholder_and_true_string_variable (jit.test_string_formatting.TestStringFormatting) ... ok (0.006s) 2023-01-11T21:25:38.8884934Z test_string_interpolation_with_digit_placeholder_and_digit_variable (jit.test_string_formatting.TestStringFormatting) ... ok (0.003s) 2023-01-11T21:25:38.8885157Z test_string_interpolation_with_digit_placeholder_and_string_variable (jit.test_string_formatting.TestStringFormatting) ... ok (0.006s) 2023-01-11T21:25:38.8885364Z test_string_interpolation_with_double_percent_in_string (jit.test_string_formatting.TestStringFormatting) ... 
ok (0.003s) 2023-01-11T21:25:38.8885579Z test_string_interpolation_with_exponent_placeholder_and_string_variable (jit.test_string_formatting.TestStringFormatting) ... ok (0.006s) 2023-01-11T21:25:38.8885799Z test_string_interpolation_with_float_placeholder_and_digit_variable (jit.test_string_formatting.TestStringFormatting) ... ok (0.003s) 2023-01-11T21:25:38.8886022Z test_string_interpolation_with_float_placeholder_and_float_variable (jit.test_string_formatting.TestStringFormatting) ... ok (0.003s) 2023-01-11T21:25:38.8886262Z test_string_interpolation_with_lowercase_exponent_placeholder_and_digit_variable (jit.test_string_formatting.TestStringFormatting) ... ok (0.003s) 2023-01-11T21:25:38.8886469Z test_string_interpolation_with_multiple_placeholders (jit.test_string_formatting.TestStringFormatting) ... ok (0.003s) 2023-01-11T21:25:38.8886668Z test_string_interpolation_with_percent_in_string (jit.test_string_formatting.TestStringFormatting) ... ok (0.006s) 2023-01-11T21:25:38.8886892Z test_string_interpolation_with_string_placeholder_and_digit_variable (jit.test_string_formatting.TestStringFormatting) ... ok (0.003s) 2023-01-11T21:25:38.8887121Z test_string_interpolation_with_string_placeholder_and_format_string_variable (jit.test_string_formatting.TestStringFormatting) ... ok (0.003s) 2023-01-11T21:25:38.8887348Z test_string_interpolation_with_string_placeholder_and_string_variable (jit.test_string_formatting.TestStringFormatting) ... ok (0.003s) 2023-01-11T21:25:38.8887542Z test_string_interpolation_with_subscript (jit.test_string_formatting.TestStringFormatting) ... ok (0.004s) 2023-01-11T21:25:38.8887728Z test_string_interpolation_with_too_few_arguments (jit.test_string_formatting.TestStringFormatting) ... ok (0.006s) 2023-01-11T21:25:38.8887927Z test_string_interpolation_with_too_many_arguments (jit.test_string_formatting.TestStringFormatting) ... ok (0.006s) 2023-01-11T21:25:38.8888134Z test_string_interpolation_with_unknown_format_specifier (jit.test_string_formatting.TestStringFormatting) ... ok (0.006s) 2023-01-11T21:25:38.8888330Z test_adaptive_avg_pool2d (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.023s) 2023-01-11T21:25:38.8888516Z test_arange_shape (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.017s) 2023-01-11T21:25:38.8888722Z test_binary_shape_fns_inplace (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.005s) 2023-01-11T21:25:38.8888947Z test_binary_shape_functions (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.004s) 2023-01-11T21:25:38.8889140Z test_convolution_backward (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.015s) 2023-01-11T21:25:38.8889326Z test_if_propagation (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.003s) 2023-01-11T21:25:38.8889518Z test_partial_eval_graph_conv (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.011s) 2023-01-11T21:25:38.8889697Z test_partial_eval_stitching (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.033s) 2023-01-11T21:25:38.8889907Z test_refinement_through_graph_stitching (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.020s) 2023-01-11T21:25:38.8890113Z test_register_function_error_checking (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.011s) 2023-01-11T21:25:38.8890347Z test_returning_input_symbolic_shapes (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... 
ok (0.008s) 2023-01-11T21:25:38.8890534Z test_shape_analysis (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.006s) 2023-01-11T21:25:38.8890715Z test_shape_concat (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.084s) 2023-01-11T21:25:38.8890906Z test_shape_embedding_bag (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.015s) 2023-01-11T21:25:38.8891147Z test_shape_function_includes (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... skip: shape functions not loaded in python (0.001s) 2023-01-11T21:25:38.8891336Z test_shared_shape_graph (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.001s) 2023-01-11T21:25:38.8891504Z test_size_and_sizes (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.005s) 2023-01-11T21:25:38.8891722Z test_stitching_concat (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.021s) 2023-01-11T21:25:38.8892006Z test_stitching_multi_output (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.017s) 2023-01-11T21:25:38.8892228Z test_sym_ir_parsing (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.001s) 2023-01-11T21:25:38.8892424Z test_unary_shape_fns_inplace (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.002s) 2023-01-11T21:25:38.8892614Z test_unary_shape_functions (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.002s) 2023-01-11T21:25:38.8892790Z test_write (jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis) ... ok (0.003s) 2023-01-11T21:25:38.8892943Z test_method_on_number (jit.test_builtins.TestTensorBuiltins) ... ok (0.002s) 2023-01-11T21:25:38.8893107Z test_scalar_to_num_conversions (jit.test_builtins.TestTensorBuiltins) ... ok (0.015s) 2023-01-11T21:25:38.8893238Z test_tensor_item (jit.test_builtins.TestTensorBuiltins) ... ok (0.005s) 2023-01-11T21:25:38.8893397Z test_tensor_properties (jit.test_builtins.TestTensorBuiltins) ... ok (0.012s) 2023-01-11T21:25:38.8893561Z test_tensor_subscript_assign (jit.test_builtins.TestTensorBuiltins) ... ok (0.027s) 2023-01-11T21:25:38.8893751Z test_tensor_subscript_assign_device (jit.test_builtins.TestTensorBuiltins) ... skip: requires CUDA (0.000s) 2023-01-11T21:25:38.8893930Z test_randperm_default_dtype (jit.test_tensor_creation_ops.TestTensorCreationOps) ... ok (0.005s) 2023-01-11T21:25:38.8894111Z test_randperm_specifed_dtype (jit.test_tensor_creation_ops.TestTensorCreationOps) ... ok (0.005s) 2023-01-11T21:25:38.8894292Z test_tril_indices_default_dtype (jit.test_tensor_creation_ops.TestTensorCreationOps) ... ok (0.005s) 2023-01-11T21:25:38.8894475Z test_tril_indices_specified_dtype (jit.test_tensor_creation_ops.TestTensorCreationOps) ... ok (0.005s) 2023-01-11T21:25:38.8894638Z test_triu_indices_default_dtype (jit.test_tensor_creation_ops.TestTensorCreationOps) ... ok (0.005s) 2023-01-11T21:25:38.8894863Z test_triu_indices_specified_dtype (jit.test_tensor_creation_ops.TestTensorCreationOps) ... ok (0.005s) 2023-01-11T21:25:38.8895007Z test_getitem (jit.test_tensor_methods.TestTensorMethods) ... ok (0.005s) 2023-01-11T21:25:38.8895162Z test_getitem_invalid (jit.test_tensor_methods.TestTensorMethods) ... ok (0.001s) 2023-01-11T21:25:38.8895303Z test_default_args (jit.test_torchbind.TestTorchbind) ... ok (0.009s) 2023-01-11T21:25:38.8895455Z test_lambda_as_constructor (jit.test_torchbind.TestTorchbind) ... 
ok (0.001s) 2023-01-11T21:25:38.8895849Z test_profiler_custom_op (jit.test_torchbind.TestTorchbind) ... STAGE:2023-01-11 21:25:35 1925:1925 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:25:38.8896107Z STAGE:2023-01-11 21:25:35 1925:1925 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:25:38.8896369Z STAGE:2023-01-11 21:25:35 1925:1925 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:25:38.8896466Z ok (0.003s) 2023-01-11T21:25:38.8896610Z test_staticmethod (jit.test_torchbind.TestTorchbind) ... ok (0.002s) 2023-01-11T21:25:38.8896748Z test_torchbind (jit.test_torchbind.TestTorchbind) ... ok (0.008s) 2023-01-11T21:25:38.8896906Z test_torchbind_attr_exception (jit.test_torchbind.TestTorchbind) ... ok (0.002s) 2023-01-11T21:25:38.8897067Z test_torchbind_class_attr_recursive (jit.test_torchbind.TestTorchbind) ... ok (0.003s) 2023-01-11T21:25:38.8897226Z test_torchbind_class_attribute (jit.test_torchbind.TestTorchbind) ... ok (0.005s) 2023-01-11T21:25:38.8897374Z test_torchbind_deepcopy (jit.test_torchbind.TestTorchbind) ... ok (0.003s) 2023-01-11T21:25:38.8897543Z test_torchbind_def_property_getter_setter (jit.test_torchbind.TestTorchbind) ... ok (0.007s) 2023-01-11T21:25:38.8897693Z test_torchbind_def_property_just_getter (jit.test_torchbind.TestTorchbind) ... ok (0.004s) 2023-01-11T21:25:38.8897858Z test_torchbind_def_property_readwrite (jit.test_torchbind.TestTorchbind) ... ok (0.004s) 2023-01-11T21:25:38.8898009Z test_torchbind_getattr (jit.test_torchbind.TestTorchbind) ... ok (0.002s) 2023-01-11T21:25:38.8898155Z test_torchbind_getstate (jit.test_torchbind.TestTorchbind) ... ok (0.005s) 2023-01-11T21:25:38.8898323Z test_torchbind_instantiate_missing_class (jit.test_torchbind.TestTorchbind) ... ok (0.002s) 2023-01-11T21:25:38.8898476Z test_torchbind_lambda_method (jit.test_torchbind.TestTorchbind) ... ok (0.001s) 2023-01-11T21:25:38.8898621Z test_torchbind_no_init (jit.test_torchbind.TestTorchbind) ... ok (0.002s) 2023-01-11T21:25:38.8898785Z test_torchbind_optional_explicit_attr (jit.test_torchbind.TestTorchbind) ... ok (0.005s) 2023-01-11T21:25:38.8898926Z test_torchbind_pass_wrong_type (jit.test_torchbind.TestTorchbind) ... ok (0.001s) 2023-01-11T21:25:38.8899088Z test_torchbind_pickle_serialization (jit.test_torchbind.TestTorchbind) ... ok (0.001s) 2023-01-11T21:25:38.8899247Z test_torchbind_python_deepcopy (jit.test_torchbind.TestTorchbind) ... ok (0.001s) 2023-01-11T21:25:38.8899404Z test_torchbind_return_instance (jit.test_torchbind.TestTorchbind) ... ok (0.002s) 2023-01-11T21:25:38.8899572Z test_torchbind_return_instance_from_method (jit.test_torchbind.TestTorchbind) ... ok (0.002s) 2023-01-11T21:25:38.8899726Z test_torchbind_return_tuple (jit.test_torchbind.TestTorchbind) ... ok (0.001s) 2023-01-11T21:25:38.8899874Z test_torchbind_save_load (jit.test_torchbind.TestTorchbind) ... ok (0.003s) 2023-01-11T21:25:38.8900023Z test_torchbind_take_as_arg (jit.test_torchbind.TestTorchbind) ... ok (0.012s) 2023-01-11T21:25:38.8900177Z test_torchbind_take_instance_as_method_arg (jit.test_torchbind.TestTorchbind) ... ok (0.002s) 2023-01-11T21:25:38.8900321Z test_torchbind_tracing (jit.test_torchbind.TestTorchbind) ... ok (0.011s) 2023-01-11T21:25:38.8900473Z test_torchbind_tracing_nested (jit.test_torchbind.TestTorchbind) ... ok (0.012s) 2023-01-11T21:25:38.8900623Z test_call_traced_fn_from_traced_module (jit.test_tracer.TestTracer) ... 
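The STAGE messages interleaved above (Completed Stage: Warm Up / Collection / Post Processing) come from the Kineto activity profiler that test_profiler_custom_op exercises. A minimal sketch of driving that profiler from Python, assuming a generic op rather than the test's custom torchbind op (the region label and tensor shapes below are illustrative only):

    import torch
    from torch.profiler import profile, record_function

    # Run some work under the Kineto-backed profiler; builds with Kineto enabled
    # log the same "Completed Stage: ..." messages seen in the output above.
    with profile() as prof:
        with record_function("illustrative_region"):
            torch.relu(torch.randn(8, 8))

    # Summarize the recorded events.
    print(prof.key_averages().table(row_limit=5))
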
ok (0.013s) 2023-01-11T21:25:38.8900812Z test_call_traced_module_from_traced_module (jit.test_tracer.TestTracer) ... ok (0.018s) 2023-01-11T21:25:38.8900962Z test_canonicalize_tensor_iterator (jit.test_tracer.TestTracer) ... ok (0.010s) 2023-01-11T21:25:38.8901084Z test_constant (jit.test_tracer.TestTracer) ... ok (0.008s) 2023-01-11T21:25:38.8901202Z test_conv (jit.test_tracer.TestTracer) ... ok (0.022s) 2023-01-11T21:25:38.8901325Z test_export_no_reorder (jit.test_tracer.TestTracer) ... ok (0.016s) 2023-01-11T21:25:38.8901465Z test_force_outplace_check_fill (jit.test_tracer.TestTracer) ... ok (0.008s) 2023-01-11T21:25:38.8901603Z test_force_outplace_check_zero (jit.test_tracer.TestTracer) ... ok (0.008s) 2023-01-11T21:25:38.8901717Z test_ge (jit.test_tracer.TestTracer) ... ok (0.013s) 2023-01-11T21:25:38.8901858Z test_ge_cuda (jit.test_tracer.TestTracer) ... skip: requires CUDA (0.000s) 2023-01-11T21:25:38.8901986Z test_ge_optimized (jit.test_tracer.TestTracer) ... ok (0.178s) 2023-01-11T21:25:38.8902149Z test_ge_unoptimized (jit.test_tracer.TestTracer) ... ok (0.036s) 2023-01-11T21:25:38.8902273Z test_index_put (jit.test_tracer.TestTracer) ... ok (0.007s) 2023-01-11T21:25:38.8902539Z test_index_put_trace_with_view (jit.test_tracer.TestTracer) ... ok (0.006s) 2023-01-11T21:25:38.8902686Z test_index_put_trace_without_view (jit.test_tracer.TestTracer) ... ok (0.005s) 2023-01-11T21:25:38.8902817Z test_inplace_check (jit.test_tracer.TestTracer) ... ok (0.002s) 2023-01-11T21:25:38.8902946Z test_inplace_copy (jit.test_tracer.TestTracer) ... ok (0.006s) 2023-01-11T21:25:38.8903093Z test_inplace_copy_force_outplace (jit.test_tracer.TestTracer) ... ok (0.005s) 2023-01-11T21:25:38.8903222Z test_inplace_flags (jit.test_tracer.TestTracer) ... ok (0.002s) 2023-01-11T21:25:38.8903365Z test_inplace_transplant (jit.test_tracer.TestTracer) ... ok (0.004s) 2023-01-11T21:25:38.8903493Z test_inplace_warn (jit.test_tracer.TestTracer) ... ok (0.008s) 2023-01-11T21:25:38.8903624Z test_input_dict_checkTrace_mut (jit.test_tracer.TestTracer) ... ok (0.005s) 2023-01-11T21:25:38.8903758Z test_input_dict_empty (jit.test_tracer.TestTracer) ... ok (0.002s) 2023-01-11T21:25:38.8903897Z test_input_dict_empty_list (jit.test_tracer.TestTracer) ... ok (0.002s) 2023-01-11T21:25:38.8904030Z test_input_dict_insertion_order (jit.test_tracer.TestTracer) 2023-01-11T21:25:38.8904250Z Check that dictionary access doesn't care about insertion order ... ok (0.009s) 2023-01-11T21:25:38.8904386Z test_input_dict_of_dicts (jit.test_tracer.TestTracer) ... ok (0.006s) 2023-01-11T21:25:38.8904520Z test_input_dict_of_lists (jit.test_tracer.TestTracer) ... ok (0.006s) 2023-01-11T21:25:38.8904660Z test_input_dict_recursive (jit.test_tracer.TestTracer) ... ok (0.008s) 2023-01-11T21:25:38.8904776Z test_input_dict_remembers_keys (jit.test_tracer.TestTracer) 2023-01-11T21:25:38.8904920Z Check that the trace remembers which keys were in a dict input ... ok (0.011s) 2023-01-11T21:25:38.8905053Z test_input_dict_unify (jit.test_tracer.TestTracer) ... ok (0.005s) 2023-01-11T21:25:38.8905171Z test_input_flatten (jit.test_tracer.TestTracer) 2023-01-11T21:25:38.8905306Z Check that inputs to traced functions are flattened ... ok (0.011s) 2023-01-11T21:25:38.8905442Z test_input_list_mixed_type (jit.test_tracer.TestTracer) ... ok (0.003s) 2023-01-11T21:25:38.8905575Z test_input_list_of_tuples (jit.test_tracer.TestTracer) ... ok (0.005s) 2023-01-11T21:25:38.8905718Z test_input_list_toplevel_flatten (jit.test_tracer.TestTracer) ... 
ok (0.006s) 2023-01-11T21:25:38.8905857Z test_input_list_toplevel_flatten_direct (jit.test_tracer.TestTracer) ... ok (0.007s) 2023-01-11T21:25:38.8905991Z test_input_tuple_of_dicts (jit.test_tracer.TestTracer) ... ok (0.007s) 2023-01-11T21:25:38.8906126Z test_interpolate_trace (jit.test_tracer.TestTracer) ... ok (0.028s) 2023-01-11T21:25:38.8906282Z test_large_nbr_kernel_args (jit.test_tracer.TestTracer) ... skip: requires CUDA (0.001s) 2023-01-11T21:25:38.8906411Z test_lhs_index_fails (jit.test_tracer.TestTracer) ... ok (0.012s) 2023-01-11T21:25:38.8906594Z test_lhs_index_trivial (jit.test_tracer.TestTracer) ... ok (0.006s) 2023-01-11T21:25:38.8906717Z test_max_pool (jit.test_tracer.TestTracer) ... ok (0.013s) 2023-01-11T21:25:38.8906846Z test_nested_inplace (jit.test_tracer.TestTracer) ... ok (0.004s) 2023-01-11T21:25:38.8906964Z test_non_tensor_tracing (jit.test_tracer.TestTracer) ... ok (0.002s) 2023-01-11T21:25:38.8907087Z test_output_unflatten (jit.test_tracer.TestTracer) 2023-01-11T21:25:38.8907272Z Check that outputs of traced functions retain the original structure and nesting ... expected failure (0.007s) 2023-01-11T21:25:38.8907401Z test_python_function (jit.test_tracer.TestTracer) ... ok (0.004s) 2023-01-11T21:25:38.8907540Z test_python_function_tup (jit.test_tracer.TestTracer) ... ok (0.006s) 2023-01-11T21:25:38.8907672Z test_repeated_input (jit.test_tracer.TestTracer) ... ok (0.006s) 2023-01-11T21:25:38.8907804Z test_repeated_output (jit.test_tracer.TestTracer) ... ok (0.007s) 2023-01-11T21:25:38.8907970Z test_shared_param (jit.test_tracer.TestTracer) ... ok (0.002s) 2023-01-11T21:25:38.8908078Z test_simple (jit.test_tracer.TestTracer) ... ok (0.015s) 2023-01-11T21:25:38.8908224Z test_tensor_with_grad_as_constant (jit.test_tracer.TestTracer) ... ok (0.001s) 2023-01-11T21:25:38.8908371Z test_trace_aliased_parameter (jit.test_tracer.TestTracer) ... ok (0.008s) 2023-01-11T21:25:38.8908504Z test_trace_annotation (jit.test_tracer.TestTracer) ... ok (0.005s) 2023-01-11T21:25:38.8908631Z test_trace_arange (jit.test_tracer.TestTracer) ... ok (0.017s) 2023-01-11T21:25:38.8908769Z test_trace_arange_with_grad (jit.test_tracer.TestTracer) ... ok (0.018s) 2023-01-11T21:25:38.8908907Z test_trace_autograd_function (jit.test_tracer.TestTracer) ... ok (0.008s) 2023-01-11T21:25:38.8909078Z test_trace_c10_ops (jit.test_tracer.TestTracer) ... skip: Skip the test since c2 ops are not registered. (0.002s) 2023-01-11T21:25:38.8909202Z test_trace_casts (jit.test_tracer.TestTracer) ... ok (0.036s) 2023-01-11T21:25:38.8909349Z test_trace_checker_control_flow (jit.test_tracer.TestTracer) ... ok (0.006s) 2023-01-11T21:25:38.8909487Z test_trace_checker_dot_data (jit.test_tracer.TestTracer) ... ok (0.005s) 2023-01-11T21:25:38.8909635Z test_trace_checker_dropout_notrain (jit.test_tracer.TestTracer) ... ok (0.005s) 2023-01-11T21:25:38.8909779Z test_trace_checker_dropout_train (jit.test_tracer.TestTracer) ... ok (0.017s) 2023-01-11T21:25:38.8909927Z test_trace_checker_inplace_on_view (jit.test_tracer.TestTracer) ... ok (0.006s) 2023-01-11T21:25:38.8910072Z test_trace_checker_memoization (jit.test_tracer.TestTracer) ... ok (0.005s) 2023-01-11T21:25:38.8910212Z test_trace_checker_slice_lhs (jit.test_tracer.TestTracer) ... ok (0.011s) 2023-01-11T21:25:38.8910349Z test_trace_checking_with_global_name (jit.test_tracer.TestTracer) ... ok (0.007s) 2023-01-11T21:25:38.8910880Z test_trace_contiguous (jit.test_tracer.TestTracer) ... 
/var/lib/jenkins/workspace/test/jit/test_tracer.py:1673: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:25:38.8911029Z self.assertNotEqual(x.storage().data_ptr(), y.storage().data_ptr()) 2023-01-11T21:25:38.8911097Z ok (0.008s) 2023-01-11T21:25:38.8911247Z test_trace_contiguous_short_circuit (jit.test_tracer.TestTracer) ... ok (0.004s) 2023-01-11T21:25:38.8911376Z test_trace_detach (jit.test_tracer.TestTracer) ... ok (0.005s) 2023-01-11T21:25:38.8911516Z test_trace_detach_inplace (jit.test_tracer.TestTracer) ... ok (0.006s) 2023-01-11T21:25:38.8911670Z test_trace_detach_inplace_redispatch (jit.test_tracer.TestTracer) ... ok (0.002s) 2023-01-11T21:25:38.8911813Z test_trace_detach_redispatch (jit.test_tracer.TestTracer) ... ok (0.002s) 2023-01-11T21:25:38.8911936Z test_trace_dict_input (jit.test_tracer.TestTracer) ... ok (0.010s) 2023-01-11T21:25:38.8912137Z test_trace_dict_output (jit.test_tracer.TestTracer) ... ok (0.016s) 2023-01-11T21:25:38.8912267Z test_trace_export_fns (jit.test_tracer.TestTracer) ... ok (0.018s) 2023-01-11T21:25:38.8912410Z test_trace_export_fns_recursive (jit.test_tracer.TestTracer) ... ok (0.041s) 2023-01-11T21:25:38.8912553Z test_trace_fork_join_and_module (jit.test_tracer.TestTracer) ... ok (0.030s) 2023-01-11T21:25:38.8912697Z test_trace_full_dynamic_shape (jit.test_tracer.TestTracer) ... ok (0.007s) 2023-01-11T21:25:38.8912851Z test_trace_func_argument_names_captured (jit.test_tracer.TestTracer) ... ok (0.004s) 2023-01-11T21:25:38.8912978Z test_trace_index (jit.test_tracer.TestTracer) ... ok (0.005s) 2023-01-11T21:25:38.8913101Z test_trace_index_constant (jit.test_tracer.TestTracer) ... ok (0.005s) 2023-01-11T21:25:38.8913244Z test_trace_indexed_assignment (jit.test_tracer.TestTracer) ... ok (0.007s) 2023-01-11T21:25:38.8913408Z test_trace_inline_shape (jit.test_tracer.TestTracer) ... ok (0.014s) 2023-01-11T21:25:38.8913541Z test_trace_inverse (jit.test_tracer.TestTracer) ... ok (0.004s) 2023-01-11T21:25:38.8913689Z test_trace_invert_module_hierarchy (jit.test_tracer.TestTracer) ... ok (0.019s) 2023-01-11T21:25:38.8913822Z test_trace_legacy_ctor (jit.test_tracer.TestTracer) ... ok (0.009s) 2023-01-11T21:25:38.8913980Z test_trace_module_argument_names_captured (jit.test_tracer.TestTracer) ... ok (0.036s) 2023-01-11T21:25:38.8914114Z test_trace_modulelist (jit.test_tracer.TestTracer) ... ok (0.024s) 2023-01-11T21:25:38.8914281Z test_trace_multi_output_function (jit.test_tracer.TestTracer) ... 
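The UserWarning above is raised because the test compares storages via tensor.storage(); the warning text itself points at tensor.untyped_storage() as the replacement. A minimal sketch of that suggested migration (illustrative code, not taken from test_tracer.py):

    import torch

    x = torch.randn(3, 2)
    y = x.clone()

    # tensor.storage() triggers the TypedStorage deprecation warning quoted above;
    # tensor.untyped_storage() gives direct access to the underlying allocation.
    assert x.untyped_storage().data_ptr() != y.untyped_storage().data_ptr()
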
graph(%self : __torch__.jit.test_tracer.Bar, 2023-01-11T21:25:38.8914403Z %x : Double(3, 2, strides=[2, 1], requires_grad=0, device=cpu), 2023-01-11T21:25:38.8914526Z %y : Double(1, 2, strides=[2, 1], requires_grad=0, device=cpu)): 2023-01-11T21:25:38.8914753Z %5 : Double(3, 2, strides=[2, 1], requires_grad=0, device=cpu) = aten::relu(%x) # /var/lib/jenkins/workspace/test/jit/test_tracer.py:1429:0 2023-01-11T21:25:38.8914955Z %6 : Double(1, 2, strides=[2, 1], requires_grad=0, device=cpu) = aten::relu(%y) # /var/lib/jenkins/workspace/test/jit/test_tracer.py:1430:0 2023-01-11T21:25:38.8915474Z %9 : (Double(1, 2, strides=[2, 1], requires_grad=0, device=cpu), Double(3, 2, strides=[2, 1], requires_grad=0, device=cpu)) = ^Foo[inplace=0, module="jit.test_tracer", Subgraph=]()(%5, %6) # /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py:429:0 2023-01-11T21:25:38.8915680Z %10 : Double(1, 2, strides=[2, 1], requires_grad=0, device=cpu), %11 : Double(3, 2, strides=[2, 1], requires_grad=0, device=cpu) = prim::TupleUnpack(%9) 2023-01-11T21:25:38.8915890Z %12 : (Double(1, 2, strides=[2, 1], requires_grad=0, device=cpu), Double(3, 2, strides=[2, 1], requires_grad=0, device=cpu)) = prim::TupleConstruct(%10, %11) 2023-01-11T21:25:38.8915962Z return (%12) 2023-01-11T21:25:38.8915973Z 2023-01-11T21:25:38.8916039Z ok (0.007s) 2023-01-11T21:25:38.8916180Z test_trace_namedtuple (jit.test_tracer.TestTracer) ... ok (0.005s) 2023-01-11T21:25:38.8916307Z test_trace_nested_datatypes (jit.test_tracer.TestTracer) ... ok (0.009s) 2023-01-11T21:25:38.8916442Z test_trace_nested_fn (jit.test_tracer.TestTracer) ... ok (0.015s) 2023-01-11T21:25:38.8916567Z test_trace_numel (jit.test_tracer.TestTracer) ... ok (0.004s) 2023-01-11T21:25:38.8916704Z test_trace_optioanl_dtype (jit.test_tracer.TestTracer) ... ok (0.007s) 2023-01-11T21:25:38.8916837Z test_trace_optional (jit.test_tracer.TestTracer) ... ok (0.014s) 2023-01-11T21:25:38.8917000Z test_trace_partial_func_argument_names_captured (jit.test_tracer.TestTracer) ... ok (0.005s) 2023-01-11T21:25:38.8917126Z test_trace_random (jit.test_tracer.TestTracer) ... ok (0.003s) 2023-01-11T21:25:38.8917246Z test_trace_records_names (jit.test_tracer.TestTracer) ... ok (0.009s) 2023-01-11T21:25:38.8917370Z test_trace_save (jit.test_tracer.TestTracer) ... ok (0.006s) 2023-01-11T21:25:38.8917538Z test_trace_save_load_copy (jit.test_tracer.TestTracer) ... ok (0.031s) 2023-01-11T21:25:38.8917675Z test_trace_single_tuple (jit.test_tracer.TestTracer) ... ok (0.004s) 2023-01-11T21:25:38.8917797Z test_trace_size (jit.test_tracer.TestTracer) ... ok (0.007s) 2023-01-11T21:25:38.8917932Z test_trace_size_with_grad (jit.test_tracer.TestTracer) ... ok (0.008s) 2023-01-11T21:25:38.8918074Z test_trace_skip_none_submodule (jit.test_tracer.TestTracer) ... ok (0.006s) 2023-01-11T21:25:38.8918197Z test_trace_slice (jit.test_tracer.TestTracer) ... ok (0.042s) 2023-01-11T21:25:38.8918332Z test_trace_slice_expr_complete_type (jit.test_tracer.TestTracer) ... ok (0.008s) 2023-01-11T21:25:38.8918465Z test_trace_slice_full_dim (jit.test_tracer.TestTracer) ... ok (0.007s) 2023-01-11T21:25:38.8918617Z test_trace_slice_setitem_dynamic_shape (jit.test_tracer.TestTracer) ... ok (0.007s) 2023-01-11T21:25:38.8918751Z test_trace_slice_with_grad (jit.test_tracer.TestTracer) ... ok (0.045s) 2023-01-11T21:25:38.8918921Z test_trace_tensor_factory (jit.test_tracer.TestTracer) ... ok (0.035s) 2023-01-11T21:25:38.8919046Z test_trace_topk (jit.test_tracer.TestTracer) ... 
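The graph(...) listing above is the trace IR printed by test_trace_multi_output_function; the Bar module and Foo autograd function it references live in test/jit/test_tracer.py and are not reproduced here. A hedged sketch of how a similar dump can be obtained for any module (the module below is an illustrative stand-in, much simpler than the test's):

    import torch

    class TwoOutputs(torch.nn.Module):  # illustrative stand-in, not the test's Bar
        def forward(self, x, y):
            return torch.relu(x), torch.relu(y)

    traced = torch.jit.trace(TwoOutputs(), (torch.randn(3, 2), torch.randn(1, 2)))
    print(traced.graph)  # prints a graph(...) IR listing in the same format as above
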
ok (0.009s) 2023-01-11T21:25:38.8919172Z test_trace_tuple (jit.test_tracer.TestTracer) ... ok (0.009s) 2023-01-11T21:25:38.8919320Z test_trace_variable_instantiation (jit.test_tracer.TestTracer) ... ok (0.005s) 2023-01-11T21:25:38.8919429Z test_trace_warn (jit.test_tracer.TestTracer) ... ok (0.008s) 2023-01-11T21:25:38.8919582Z test_trace_with_conditional_property (jit.test_tracer.TestTracer) ... ok (0.006s) 2023-01-11T21:25:38.8919737Z test_trace_with_nested_tensor_list_output (jit.test_tracer.TestTracer) ... ok (0.003s) 2023-01-11T21:25:38.8919880Z test_trace_with_number_list_output (jit.test_tracer.TestTracer) ... ok (0.003s) 2023-01-11T21:25:38.8920025Z test_trace_with_tensor_list_output (jit.test_tracer.TestTracer) ... ok (0.015s) 2023-01-11T21:25:38.8920179Z test_traced_module_cuda (jit.test_tracer.TestTracer) ... skip: calls .cuda() (0.001s) 2023-01-11T21:25:38.8920328Z test_tracing_backward_hook_error (jit.test_tracer.TestTracer) ... ok (0.001s) 2023-01-11T21:25:38.8920454Z test_tracing_hooks (jit.test_tracer.TestTracer) ... ok (0.040s) 2023-01-11T21:25:38.8920584Z test_tracing_multiple_methods (jit.test_tracer.TestTracer) ... ok (0.128s) 2023-01-11T21:25:38.8920717Z test_typeas_trace_check (jit.test_tracer.TestTracer) ... ok (0.005s) 2023-01-11T21:25:38.8920848Z test_wrapped_number (jit.test_tracer.TestTracer) ... ok (0.006s) 2023-01-11T21:25:38.8920988Z test_assign_python_attr (jit.test_type_sharing.TestTypeSharing) 2023-01-11T21:25:38.8921214Z Assigning a new (python-only) attribute should not change type sharing ... ok (0.031s) 2023-01-11T21:25:38.8921355Z test_basic (jit.test_type_sharing.TestTypeSharing) ... ok (0.006s) 2023-01-11T21:25:38.8921516Z test_builtin_function_different (jit.test_type_sharing.TestTypeSharing) ... ok (0.007s) 2023-01-11T21:25:38.8921673Z test_builtin_function_same (jit.test_type_sharing.TestTypeSharing) ... ok (0.005s) 2023-01-11T21:25:38.8921793Z test_constants (jit.test_type_sharing.TestTypeSharing) 2023-01-11T21:25:38.8921979Z Types should be shared for identical constant values, and different for different constant values ... ok (0.010s) 2023-01-11T21:25:38.8922117Z test_diff_attr_values (jit.test_type_sharing.TestTypeSharing) 2023-01-11T21:25:38.8922252Z Types should be shared even if attribute values differ ... ok (0.005s) 2023-01-11T21:25:38.8922403Z test_failed_attribute_compilation (jit.test_type_sharing.TestTypeSharing) 2023-01-11T21:25:38.8922559Z Attributes whose type cannot be inferred should fail cleanly with nice hints ... ok (0.002s) 2023-01-11T21:25:38.8922705Z test_ignored_fns (jit.test_type_sharing.TestTypeSharing) ... ok (0.005s) 2023-01-11T21:25:38.8922831Z test_linear (jit.test_type_sharing.TestTypeSharing) 2023-01-11T21:25:38.8922929Z Simple example with a real nn Module ... ok (0.011s) 2023-01-11T21:25:38.8923084Z test_loaded_modules_work (jit.test_type_sharing.TestTypeSharing) ... ok (0.015s) 2023-01-11T21:25:38.8923275Z test_module_dict_same_type_different_name (jit.test_type_sharing.TestTypeSharing) 2023-01-11T21:25:38.8923428Z We should be able to differentiate between two ModuleDict instances ... ok (0.030s) 2023-01-11T21:25:38.8923563Z test_mutate_attr_value (jit.test_type_sharing.TestTypeSharing) 2023-01-11T21:25:38.8923710Z Mutating the value of an attribute should not change type sharing ... ok (0.017s) 2023-01-11T21:25:38.8923847Z test_param_vs_attribute (jit.test_type_sharing.TestTypeSharing) 2023-01-11T21:25:38.8924067Z The same module with an `foo` as a parameter vs. attribute shouldn't ... 
ok (0.007s) 2023-01-11T21:25:38.8924216Z test_python_function_attribute_different (jit.test_type_sharing.TestTypeSharing) 2023-01-11T21:25:38.8924360Z Different functions passed in should lead to different types ... ok (0.010s) 2023-01-11T21:25:38.8924510Z test_python_function_attribute_same (jit.test_type_sharing.TestTypeSharing) 2023-01-11T21:25:38.8924680Z Same functions passed in should lead to same types ... ok (0.007s) 2023-01-11T21:25:38.8924828Z test_same_but_different_classes (jit.test_type_sharing.TestTypeSharing) 2023-01-11T21:25:38.8924980Z Even if everything about the module is the same, different originating ... ok (0.021s) 2023-01-11T21:25:38.8925138Z test_script_function_attribute_different (jit.test_type_sharing.TestTypeSharing) 2023-01-11T21:25:38.8925281Z Different functions passed in should lead to different types ... ok (0.010s) 2023-01-11T21:25:38.8925419Z test_script_function_attribute_same (jit.test_type_sharing.TestTypeSharing) 2023-01-11T21:25:38.8925549Z Same functions passed in should lead to same types ... ok (0.007s) 2023-01-11T21:25:38.8925730Z test_script_module_containing_traced_module (jit.test_type_sharing.TestTypeSharing) ... ok (0.020s) 2023-01-11T21:25:38.8925861Z test_submodules (jit.test_type_sharing.TestTypeSharing) 2023-01-11T21:25:38.8925985Z If submodules differ, the types should differ. ... ok (0.042s) 2023-01-11T21:25:38.8926142Z test_tracing_gives_different_types (jit.test_type_sharing.TestTypeSharing) 2023-01-11T21:25:38.8926359Z Since we can't guarantee that methods are the same between different ... ok (0.012s) 2023-01-11T21:25:38.8926519Z test_type_not_shared_ignored_attributes (jit.test_type_sharing.TestTypeSharing) 2023-01-11T21:25:38.8926640Z Test that types are not shared if the exclusion of their ... ok (0.007s) 2023-01-11T21:25:38.8926792Z test_type_shared_ignored_attributes (jit.test_type_sharing.TestTypeSharing) 2023-01-11T21:25:38.8926922Z Test that types are shared if the exclusion of their ... ok (0.005s) 2023-01-11T21:25:38.8927070Z test_type_sharing_define_in_init (jit.test_type_sharing.TestTypeSharing) 2023-01-11T21:25:38.8927203Z Tests that types between instances of a ScriptModule ... ok (0.006s) 2023-01-11T21:25:38.8927344Z test_type_sharing_disabled (jit.test_type_sharing.TestTypeSharing) 2023-01-11T21:25:38.8927459Z Test that type sharing can be disabled. ... ok (0.019s) 2023-01-11T21:25:38.8927626Z test_annotate_outside_init (jit.test_types.TestTypesAndAnnotation) ... ok (0.015s) 2023-01-11T21:25:38.8927761Z test_bad_types (jit.test_types.TestTypesAndAnnotation) ... ok (0.002s) 2023-01-11T21:25:38.8927917Z test_ignore_with_types (jit.test_types.TestTypesAndAnnotation) ... ok (0.004s) 2023-01-11T21:25:38.8928071Z test_ignoring_module_attributes (jit.test_types.TestTypesAndAnnotation) 2023-01-11T21:25:38.8928192Z Test that module attributes can be ignored. ... ok (0.007s) 2023-01-11T21:25:38.8928360Z test_inferred_type_error_message (jit.test_types.TestTypesAndAnnotation) ... ok (0.003s) 2023-01-11T21:25:38.8928523Z test_mismatched_annotation (jit.test_types.TestTypesAndAnnotation) ... ok (0.001s) 2023-01-11T21:25:38.8928687Z test_optional_no_element_type_annotation (jit.test_types.TestTypesAndAnnotation) 2023-01-11T21:25:38.8928839Z Test that using an optional with no contained types produces an error. ... ok (0.002s) 2023-01-11T21:25:38.8928975Z test_parser_bug (jit.test_types.TestTypesAndAnnotation) ... ok (0.000s) 2023-01-11T21:25:38.8929158Z test_pep585_type (jit.test_types.TestTypesAndAnnotation) ... 
ok (0.008s) 2023-01-11T21:25:38.8929311Z test_python_callable (jit.test_types.TestTypesAndAnnotation) ... ok (0.002s) 2023-01-11T21:25:38.8929461Z test_reannotate (jit.test_types.TestTypesAndAnnotation) ... ok (0.001s) 2023-01-11T21:25:38.8929622Z test_tuple_no_element_type_annotation (jit.test_types.TestTypesAndAnnotation) 2023-01-11T21:25:38.8929770Z Test that using a tuple with no contained types produces an error. ... ok (0.002s) 2023-01-11T21:25:38.8929927Z test_type_annotate_py3 (jit.test_types.TestTypesAndAnnotation) ... ok (0.010s) 2023-01-11T21:25:38.8930082Z test_types_as_values (jit.test_types.TestTypesAndAnnotation) ... ok (0.010s) 2023-01-11T21:25:38.8930234Z test_unimported_type_resolution (jit.test_types.TestTypesAndAnnotation) ... ok (0.002s) 2023-01-11T21:25:38.8930362Z test_bool_list_io (jit.test_typing.TestTyping) ... ok (0.003s) 2023-01-11T21:25:38.8930531Z test_dict_comprehension (jit.test_typing.TestTyping) ... ok (0.005s) 2023-01-11T21:25:38.8930677Z test_dict_comprehension_scope (jit.test_typing.TestTyping) ... ok (0.006s) 2023-01-11T21:25:38.8930836Z test_dict_comprehension_with_type_annotation (jit.test_typing.TestTyping) ... ok (0.006s) 2023-01-11T21:25:38.8930963Z test_dict_in_not_in (jit.test_typing.TestTyping) ... ok (0.021s) 2023-01-11T21:25:38.8931105Z test_dict_invalid_annotations (jit.test_typing.TestTyping) ... ok (0.003s) 2023-01-11T21:25:38.8931536Z test_dict_type_refinement_annotation_key_mismatch (jit.test_typing.TestTyping) ... [W ir_emitter.cpp:4385] Warning: List consists of heterogeneous types, which means that it has been typed as containing Union[int, str]. To use any of the values in this List, it will be necessary to add an `assert isinstance` statement before first use to trigger type refinement. 2023-01-11T21:25:38.8931672Z File "/var/lib/jenkins/workspace/test/jit/test_typing.py", line 90 2023-01-11T21:25:38.8931731Z def fn(): 2023-01-11T21:25:38.8931810Z l1 = [1, 2, "foo", 3] 2023-01-11T21:25:38.8931937Z ~~~~~~~~~~~~~~~ <--- HERE 2023-01-11T21:25:38.8932029Z l2 = ["foo", "bar", "baz", "qux"] 2023-01-11T21:25:38.8932143Z d: Dict[int, str] = {k : v for k, v in zip(l1, l2)} 2023-01-11T21:25:38.8932233Z (function emitListLiteral) 2023-01-11T21:25:38.8932303Z ok (0.002s) 2023-01-11T21:25:38.8932731Z test_dict_type_refinement_annotation_value_mismatch (jit.test_typing.TestTyping) ... [W ir_emitter.cpp:4385] Warning: List consists of heterogeneous types, which means that it has been typed as containing Union[int, str]. To use any of the values in this List, it will be necessary to add an `assert isinstance` statement before first use to trigger type refinement. 2023-01-11T21:25:38.8932855Z File "/var/lib/jenkins/workspace/test/jit/test_typing.py", line 104 2023-01-11T21:25:38.8932924Z def fn(): 2023-01-11T21:25:38.8933017Z l1 = ["foo", "bar", "baz", "qux"] 2023-01-11T21:25:38.8933092Z l2 = [1, 2, "foo", 3] 2023-01-11T21:25:38.8933216Z ~~~~~~~~~~~~~~~ <--- HERE 2023-01-11T21:25:38.8933331Z d: Dict[str, int] = {k : v for k, v in zip(l1, l2)} 2023-01-11T21:25:38.8933400Z return d 2023-01-11T21:25:38.8933477Z (function emitListLiteral) 2023-01-11T21:25:38.8933543Z ok (0.002s) 2023-01-11T21:25:38.8933668Z test_for_in_dict (jit.test_typing.TestTyping) ... ok (0.009s) 2023-01-11T21:25:38.8933796Z test_for_in_string (jit.test_typing.TestTyping) ... ok (0.014s) 2023-01-11T21:25:38.8933930Z test_for_tuple_assign (jit.test_typing.TestTyping) ... ok (0.008s) 2023-01-11T21:25:38.8934061Z test_for_tuple_unpack (jit.test_typing.TestTyping) ... 
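The two ir_emitter.cpp warnings above explain that a heterogeneous list literal is typed as List[Union[int, str]] and that an isinstance check is needed before its elements can be used at a narrower type. A minimal sketch of that refinement pattern (illustrative code, not the snippets from test_typing.py; an `assert isinstance` works the same way when every element is known to have the asserted type):

    import torch
    from typing import List, Union

    @torch.jit.script
    def sum_ints() -> int:
        xs: List[Union[int, str]] = [1, 2, "foo", 3]  # heterogeneous literal
        total = 0
        for x in xs:
            if isinstance(x, int):  # refinement: x is an int inside this branch
                total += x
        return total

    print(sum_ints())  # 6
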
ok (0.015s) 2023-01-11T21:25:38.8934181Z test_list_io (jit.test_typing.TestTyping) ... ok (0.004s) 2023-01-11T21:25:38.8934312Z test_list_iterables (jit.test_typing.TestTyping) ... ok (0.001s) 2023-01-11T21:25:38.8934454Z test_list_sum (jit.test_typing.TestTyping) ... ok (0.010s) 2023-01-11T21:25:38.8934630Z test_list_type_refinement_annotation_element_mismatch (jit.test_typing.TestTyping) ... ok (0.001s) 2023-01-11T21:25:38.8934765Z test_list_unification (jit.test_typing.TestTyping) ... ok (0.007s) 2023-01-11T21:25:38.8934895Z test_multiple_assign (jit.test_typing.TestTyping) ... ok (0.006s) 2023-01-11T21:25:38.8935034Z test_namedtuple_good_error (jit.test_typing.TestTyping) ... ok (0.003s) 2023-01-11T21:25:38.8935163Z test_namedtuple_py2 (jit.test_typing.TestTyping) ... ok (0.003s) 2023-01-11T21:25:38.8935300Z test_namedtuple_redefine (jit.test_typing.TestTyping) ... ok (0.007s) 2023-01-11T21:25:38.8935425Z test_nested_list (jit.test_typing.TestTyping) ... ok (0.004s) 2023-01-11T21:25:38.8935548Z test_opt_opt_refinement (jit.test_typing.TestTyping) ... ok (0.003s) 2023-01-11T21:25:38.8935682Z test_optional_conversion (jit.test_typing.TestTyping) ... ok (0.012s) 2023-01-11T21:25:38.8935847Z test_optional_refinement (jit.test_typing.TestTyping) ... ok (0.003s) 2023-01-11T21:25:38.8935984Z test_optional_tuple (jit.test_typing.TestTyping) ... ok (0.006s) 2023-01-11T21:25:38.8936124Z test_singleton_tuple_unpack (jit.test_typing.TestTyping) ... ok (0.004s) 2023-01-11T21:25:38.8936256Z test_sum_list_diff_elms (jit.test_typing.TestTyping) ... ok (0.005s) 2023-01-11T21:25:38.8936382Z test_sum_list_empty (jit.test_typing.TestTyping) ... ok (0.005s) 2023-01-11T21:25:38.8936514Z test_sum_list_literal (jit.test_typing.TestTyping) ... ok (0.005s) 2023-01-11T21:25:38.8936625Z test_sum_list_one (jit.test_typing.TestTyping) ... ok (0.005s) 2023-01-11T21:25:38.8936757Z test_sum_list_wrong_type (jit.test_typing.TestTyping) ... ok (0.001s) 2023-01-11T21:25:38.8936893Z test_tuple_assignments (jit.test_typing.TestTyping) ... ok (0.020s) 2023-01-11T21:25:38.8937025Z test_tuple_create_return (jit.test_typing.TestTyping) ... ok (0.005s) 2023-01-11T21:25:38.8937149Z test_tuple_io (jit.test_typing.TestTyping) ... ok (0.004s) 2023-01-11T21:25:38.8937284Z test_tuple_keyword (jit.test_typing.TestTyping) ... ok (0.004s) 2023-01-11T21:25:38.8937422Z test_tuple_specialization (jit.test_typing.TestTyping) ... ok (0.005s) 2023-01-11T21:25:38.8937544Z test_check_union_annotation (jit.test_union.TestUnion) ... ok (0.004s) 2023-01-11T21:25:38.8937698Z test_union_T_None_is_equivalent_to_optional_T (jit.test_union.TestUnion) ... ok (0.010s) 2023-01-11T21:25:38.8937843Z test_union_argument_order_is_ignored (jit.test_union.TestUnion) ... ok (0.004s) 2023-01-11T21:25:38.8938000Z test_union_argument_order_is_ignored_container (jit.test_union.TestUnion) ... ok (0.004s) 2023-01-11T21:25:38.8938132Z test_union_as_annotation (jit.test_union.TestUnion) ... ok (0.003s) 2023-01-11T21:25:38.8938283Z test_union_as_annotation_in_typed_container (jit.test_union.TestUnion) ... ok (0.004s) 2023-01-11T21:25:38.8938419Z test_union_as_annotation_py2 (jit.test_union.TestUnion) ... ok (0.003s) 2023-01-11T21:25:38.8938548Z test_union_as_dict_key (jit.test_union.TestUnion) ... ok (0.008s) 2023-01-11T21:25:38.8938664Z test_union_as_dict_value (jit.test_union.TestUnion) ... ok (0.005s) 2023-01-11T21:25:38.8938803Z test_union_as_internal_tuple_type (jit.test_union.TestUnion) ... 
ok (0.004s) 2023-01-11T21:25:38.8938974Z test_union_branching_does_not_autoinfer_undeclared_union (jit.test_union.TestUnion) ... ok (0.002s) 2023-01-11T21:25:38.8939145Z test_union_branching_does_not_widen_existing_inferred_type (jit.test_union.TestUnion) ... ok (0.002s) 2023-01-11T21:25:38.8939315Z test_union_branching_with_union_return_and_homogenous_types (jit.test_union.TestUnion) ... ok (0.005s) 2023-01-11T21:25:38.8939477Z test_union_does_not_replace_existing_annotated_type (jit.test_union.TestUnion) ... ok (0.001s) 2023-01-11T21:25:38.8939662Z test_union_does_not_replace_existing_annotated_type_empty_container (jit.test_union.TestUnion) ... ok (0.001s) 2023-01-11T21:25:38.8939834Z test_union_does_not_replace_existing_annotated_type_union (jit.test_union.TestUnion) ... ok (0.001s) 2023-01-11T21:25:38.8940006Z test_union_in_class_constructor (jit.test_union.TestUnion) ... ok (0.009s) 2023-01-11T21:25:38.8940618Z test_union_memory_aliasing (jit.test_union.TestUnion) ... /opt/conda/lib/python3.10/site-packages/torch/_jit_internal.py:1282: UserWarning: The inner type of a container is lost when calling torch.jit.isinstance in eager mode. For example, List[int] would become list and therefore falsely return True for List[float] or List[str]. 2023-01-11T21:25:38.8940682Z warnings.warn( 2023-01-11T21:25:38.8940750Z ok (0.009s) 2023-01-11T21:25:38.8940900Z test_union_module_with_union_class_variable (jit.test_union.TestUnion) ... ok (0.005s) 2023-01-11T21:25:38.8941054Z test_union_module_with_union_instance_variable (jit.test_union.TestUnion) ... ok (0.009s) 2023-01-11T21:25:38.8941204Z test_union_optional_of_union_is_flattened (jit.test_union.TestUnion) ... ok (0.005s) 2023-01-11T21:25:38.8941386Z test_union_redundant_arguments_are_skipped (jit.test_union.TestUnion) ... ok (0.002s) 2023-01-11T21:25:38.8941557Z test_union_redundant_arguments_are_skipped_container (jit.test_union.TestUnion) ... ok (0.002s) 2023-01-11T21:25:38.8941727Z test_union_redundant_arguments_are_skipped_optional (jit.test_union.TestUnion) ... ok (0.002s) 2023-01-11T21:25:38.8941879Z test_union_redundant_arguments_are_skipped_subtyping (jit.test_union.TestUnion) ... ok (0.002s) 2023-01-11T21:25:38.8942010Z test_union_return_type (jit.test_union.TestUnion) ... ok (0.003s) 2023-01-11T21:25:38.8942164Z test_union_schema_matching_on_internal_type (jit.test_union.TestUnion) ... ok (0.008s) 2023-01-11T21:25:38.8942460Z test_union_serialization_preserves_type_annotations (jit.test_union.TestUnion) ... ok (0.007s) 2023-01-11T21:25:38.8942605Z test_union_subclasses_larger_union (jit.test_union.TestUnion) ... ok (0.003s) 2023-01-11T21:25:38.8942749Z test_union_subtractive_refinement (jit.test_union.TestUnion) ... ok (0.007s) 2023-01-11T21:25:38.8942912Z test_union_subtractive_refinement_with_container (jit.test_union.TestUnion) ... ok (0.007s) 2023-01-11T21:25:38.8943050Z test_union_type_refinement (jit.test_union.TestUnion) ... ok (0.005s) 2023-01-11T21:25:38.8943198Z test_union_type_refinement_internal_declaration (jit.test_union.TestUnion) ... ok (0.005s) 2023-01-11T21:25:38.8943376Z test_union_type_refinement_partial_static_refinement_tuple_rhs (jit.test_union.TestUnion) ... ok (0.007s) 2023-01-11T21:25:38.8943551Z test_union_type_refinement_partial_static_refinement_union_rhs (jit.test_union.TestUnion) ... ok (0.006s) 2023-01-11T21:25:38.8943706Z test_union_type_refinement_statically_false (jit.test_union.TestUnion) ... 
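The _jit_internal.py UserWarning above (emitted during test_union_memory_aliasing) says that torch.jit.isinstance cannot enforce a container's element type in eager mode. A minimal sketch of the situation it describes, assuming the warning fires when the elements cannot be inspected (for example, an empty container):

    import torch
    from typing import List

    xs: List[int] = []  # no elements whose type could be checked

    # In eager mode this emits the UserWarning quoted above; with nothing to
    # inspect, the check degrades to isinstance(xs, list), so a mismatched
    # element type such as List[float] is not caught here.
    print(torch.jit.isinstance(xs, List[float]))
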
ok (0.003s) 2023-01-11T21:25:38.8943858Z test_union_type_refinement_statically_true (jit.test_union.TestUnion) ... ok (0.003s) 2023-01-11T21:25:38.8944002Z test_union_type_refinement_tuple_rhs (jit.test_union.TestUnion) ... ok (0.010s) 2023-01-11T21:25:38.8944171Z test_union_type_refinement_tuple_rhs_noncontained_type (jit.test_union.TestUnion) ... ok (0.008s) 2023-01-11T21:25:38.8944328Z test_union_type_refinement_tuple_rhs_union (jit.test_union.TestUnion) ... ok (0.003s) 2023-01-11T21:25:38.8944474Z test_union_type_refinement_union_rhs (jit.test_union.TestUnion) ... ok (0.003s) 2023-01-11T21:25:38.8944608Z test_union_variable_can_be_reassigned (jit.test_union.TestUnion) ... ok (0.008s) 2023-01-11T21:25:38.8944743Z test_union_with_collections (jit.test_union.TestUnion) ... ok (0.006s) 2023-01-11T21:25:38.8944884Z test_union_with_dict_assignment (jit.test_union.TestUnion) ... ok (0.027s) 2023-01-11T21:25:38.8945011Z test_union_with_enum (jit.test_union.TestUnion) ... ok (0.011s) 2023-01-11T21:25:38.8945417Z test_union_with_list_assignment (jit.test_union.TestUnion) ... [W ir_emitter.cpp:4385] Warning: List consists of heterogeneous types, which means that it has been typed as containing Union[Tensor, int]. To use any of the values in this List, it will be necessary to add an `assert isinstance` statement before first use to trigger type refinement. 2023-01-11T21:25:38.8945560Z File "", line 3 2023-01-11T21:25:38.8945569Z 2023-01-11T21:25:38.8945636Z def fn(): 2023-01-11T21:25:38.8945791Z x: Union[List[str], List[torch.Tensor]] = [torch.add(1, x) for x in [torch.arange(5), 1]] 2023-01-11T21:25:38.8945991Z ~~~~~~~~~~~~~~~~~~~ <--- HERE 2023-01-11T21:25:38.8946095Z if torch.jit.isinstance(x, List[torch.Tensor]): 2023-01-11T21:25:38.8946185Z x.append(torch.tensor(3)) 2023-01-11T21:25:38.8946274Z (function emitListLiteral) 2023-01-11T21:25:38.8946622Z [W ir_emitter.cpp:4385] Warning: List consists of heterogeneous types, which means that it has been typed as containing Union[Tensor, int]. To use any of the values in this List, it will be necessary to add an `assert isinstance` statement before first use to trigger type refinement. 2023-01-11T21:25:38.8946701Z File "", line 3 2023-01-11T21:25:38.8946706Z 2023-01-11T21:25:38.8946776Z def fn(): 2023-01-11T21:25:38.8946964Z x: Union[List[torch.Tensor], int] = [torch.add(1, x) for x in [torch.arange(5), 1]] 2023-01-11T21:25:38.8947150Z ~~~~~~~~~~~~~~~~~~~ <--- HERE 2023-01-11T21:25:38.8947255Z if torch.jit.isinstance(x, List[torch.Tensor]): 2023-01-11T21:25:38.8947342Z x.append(torch.tensor(3)) 2023-01-11T21:25:38.8947430Z (function emitListLiteral) 2023-01-11T21:25:38.8947496Z ok (0.021s) 2023-01-11T21:25:38.8947637Z test_union_with_scalar_values (jit.test_union.TestUnion) ... ok (0.004s) 2023-01-11T21:25:38.8947786Z test_unions_of_a_single_argument_vanish (jit.test_union.TestUnion) ... ok (0.002s) 2023-01-11T21:25:38.8947929Z test_unions_of_unions_are_flattened (jit.test_union.TestUnion) ... ok (0.002s) 2023-01-11T21:25:38.8948107Z test_factory_ops_requires_grad_fail (jit.test_unsupported_ops.TestUnsupportedOps) ... ok (0.003s) 2023-01-11T21:25:38.8948246Z test_init_ops (jit.test_unsupported_ops.TestUnsupportedOps) ... ok (0.016s) 2023-01-11T21:25:38.8948404Z test_add_value_to_version_map (jit.test_upgraders.TestUpgraders) ... ok (0.001s) 2023-01-11T21:25:38.8948554Z test_aten_div_scalar_at_3 (jit.test_upgraders.TestUpgraders) ... ok (0.004s) 2023-01-11T21:25:38.8948701Z test_aten_div_tensor_at_3 (jit.test_upgraders.TestUpgraders) ... 
ok (0.004s) 2023-01-11T21:25:38.8948852Z test_aten_div_tensor_out_at_3 (jit.test_upgraders.TestUpgraders) ... ok (0.003s) 2023-01-11T21:25:38.8948990Z test_aten_full_at_4 (jit.test_upgraders.TestUpgraders) ... ok (0.003s) 2023-01-11T21:25:38.8949143Z test_aten_full_other_variants (jit.test_upgraders.TestUpgraders) ... ok (0.005s) 2023-01-11T21:25:38.8949287Z test_aten_full_out_at_4 (jit.test_upgraders.TestUpgraders) ... ok (0.003s) 2023-01-11T21:25:38.8949415Z test_aten_linspace (jit.test_upgraders.TestUpgraders) ... ok (0.002s) 2023-01-11T21:25:38.8949560Z test_aten_linspace_out (jit.test_upgraders.TestUpgraders) ... ok (0.002s) 2023-01-11T21:25:38.8949707Z test_aten_logspace (jit.test_upgraders.TestUpgraders) ... ok (0.002s) 2023-01-11T21:25:38.8949855Z test_aten_logspace_out (jit.test_upgraders.TestUpgraders) ... ok (0.002s) 2023-01-11T21:25:38.8950007Z test_aten_test_serialization (jit.test_upgraders.TestUpgraders) ... ok (0.006s) 2023-01-11T21:25:38.8950171Z test_populated_test_upgrader_graph (jit.test_upgraders.TestUpgraders) ... ok (0.003s) 2023-01-11T21:25:38.8960386Z test_populated_upgrader_graph (jit.test_upgraders.TestUpgraders) ... ok (0.003s) 2023-01-11T21:25:38.8960598Z test_warn (jit.test_warn.TestWarn) ... ok (0.002s) 2023-01-11T21:25:38.8960761Z test_warn_multiple_calls_multiple_warnings (jit.test_warn.TestWarn) ... ok (0.002s) 2023-01-11T21:25:38.8960918Z test_warn_multiple_calls_same_func_diff_stack (jit.test_warn.TestWarn) ... ok (0.007s) 2023-01-11T21:25:38.8961048Z test_warn_once_per_func (jit.test_warn.TestWarn) ... ok (0.006s) 2023-01-11T21:25:38.8961187Z test_warn_once_per_func_in_loop (jit.test_warn.TestWarn) ... ok (0.006s) 2023-01-11T21:25:38.8961429Z test_warn_only_once (jit.test_warn.TestWarn) ... ok (0.003s) 2023-01-11T21:25:38.8961566Z test_warn_only_once_in_loop_func (jit.test_warn.TestWarn) ... ok (0.004s) 2023-01-11T21:25:38.8961668Z test_with_as (jit.test_with.TestWith) 2023-01-11T21:25:38.8961936Z Check that with statements that use the 'as' keyword to bind expressions ... ok (0.065s) 2023-01-11T21:25:38.8962042Z test_with_errors (jit.test_with.TestWith) 2023-01-11T21:25:38.8962300Z Check that errors related to with-statements are detected and reported correctly. ... ok (0.027s) 2023-01-11T21:25:38.8962414Z test_with_exceptions (jit.test_with.TestWith) 2023-01-11T21:25:38.8962633Z Check that exceptions thrown in the bodies of with-statements are ... ok (0.025s) 2023-01-11T21:25:38.8962736Z test_with_no_as (jit.test_with.TestWith) 2023-01-11T21:25:38.8962974Z Check that with statements that do not use the 'as' keyword to bind expressions ... ok (0.067s) 2023-01-11T21:25:38.8963120Z test_with_no_grad (jit.test_with.TestWith) 2023-01-11T21:25:38.8963258Z Check that torch.no_grad() works. Most of these are adapted from ... ok (0.017s) 2023-01-11T21:25:38.8963373Z test_with_record_function (jit.test_with.TestWith) 2023-01-11T21:25:38.8963761Z Check that torch.autograd.profiler.record_function context manager is ... 
STAGE:2023-01-11 21:25:37 1925:1925 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:25:38.8964016Z STAGE:2023-01-11 21:25:37 1925:1925 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:25:38.8964276Z STAGE:2023-01-11 21:25:37 1925:1925 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:25:38.8964343Z ok (0.025s) 2023-01-11T21:25:38.8964350Z 2023-01-11T21:25:38.8964545Z ---------------------------------------------------------------------- 2023-01-11T21:25:38.8964615Z Ran 2544 tests in 130.977s 2023-01-11T21:25:38.8964621Z 2023-01-11T21:25:38.8964703Z OK (skipped=114, expected failures=2) 2023-01-11T21:25:38.8964715Z 2023-01-11T21:25:38.8964789Z Generating XML reports... 2023-01-11T21:25:38.8965112Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_alias_analysis.TestAliasAnalysis-20230111212326.xml 2023-01-11T21:25:38.8965397Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_async.TestAsync-20230111212326.xml 2023-01-11T21:25:38.8965691Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_aten_pow.TestAtenPow-20230111212326.xml 2023-01-11T21:25:38.8966000Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_autodiff.TestAutodiffJit-20230111212326.xml 2023-01-11T21:25:38.8966374Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing-20230111212326.xml 2023-01-11T21:25:38.8966673Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_backends.TestBackends-20230111212326.xml 2023-01-11T21:25:38.8967015Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_backends.TestBackendsWithCompiler-20230111212326.xml 2023-01-11T21:25:38.8967302Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_batch_mm.TestBatchMM-20230111212326.xml 2023-01-11T21:25:38.8967585Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_builtins.TestBuiltins-20230111212326.xml 2023-01-11T21:25:38.8967879Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_class_type.TestClassType-20230111212326.xml 2023-01-11T21:25:38.8968166Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_complex.TestComplex-20230111212326.xml 2023-01-11T21:25:38.8968499Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_custom_operators.TestCustomOperators-20230111212326.xml 2023-01-11T21:25:38.8968767Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_dce.TestDCE-20230111212326.xml 2023-01-11T21:25:38.8969084Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_dataclasses.TestDataclasses-20230111212326.xml 2023-01-11T21:25:38.8969442Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_device_analysis.TestDeviceAnalysis-20230111212326.xml 2023-01-11T21:25:38.8969720Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_list_dict.TestDict-20230111212326.xml 2023-01-11T21:25:38.8970047Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_dtype_analysis.TestDtypeAnalysis-20230111212326.xml 2023-01-11T21:25:38.8970307Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_enum.TestEnum-20230111212326.xml 2023-01-11T21:25:38.8970602Z Generated XML report: 
test-reports/python-unittest/test_jit/TEST-jit.test_freezing.TestFreezing-20230111212326.xml 2023-01-11T21:25:38.8970861Z Generated XML report: test-reports/python-unittest/test_jit/TEST-TestFrontend-20230111212326.xml 2023-01-11T21:25:38.8971222Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_freezing.TestFrozenOptimizations-20230111212326.xml 2023-01-11T21:25:38.8971559Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_functional_blocks.TestFunctionalBlocks-20230111212326.xml 2023-01-11T21:25:38.8971945Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_convert_activation.TestFunctionalToInplaceActivation-20230111212326.xml 2023-01-11T21:25:38.8972246Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_attr.TestGetDefaultAttr-20230111212326.xml 2023-01-11T21:25:38.8972593Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_graph_rewrite_passes.TestGraphRewritePasses-20230111212326.xml 2023-01-11T21:25:38.8972868Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_hash.TestHash-20230111212326.xml 2023-01-11T21:25:38.8973148Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_hooks.TestHooks-20230111212326.xml 2023-01-11T21:25:38.8973454Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_ignorable_args.TestIgnorableArgs-20230111212326.xml 2023-01-11T21:25:38.8973811Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_ignore_context_manager.TestIgnoreContextManager-20230111212326.xml 2023-01-11T21:25:38.8974200Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_convert_activation.TestInplaceToFunctionalActivation-20230111212326.xml 2023-01-11T21:25:38.8974507Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_isinstance.TestIsinstance-20230111212326.xml 2023-01-11T21:25:38.8974755Z Generated XML report: test-reports/python-unittest/test_jit/TEST-TestJit-20230111212326.xml 2023-01-11T21:25:38.8975048Z Generated XML report: test-reports/python-unittest/test_jit/TEST-TestJitGeneratedModule-20230111212326.xml 2023-01-11T21:25:38.8975341Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_jit_utils.TestJitUtils-20230111212326.xml 2023-01-11T21:25:38.8975625Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_list_dict.TestList-20230111212326.xml 2023-01-11T21:25:38.8975914Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_logging.TestLogging-20230111212326.xml 2023-01-11T21:25:38.8976242Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_freezing.TestMKLDNNReinplacing-20230111212326.xml 2023-01-11T21:25:38.8976498Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_misc.TestMisc-20230111212326.xml 2023-01-11T21:25:38.8976822Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_tracer.TestMixTracingScripting-20230111212326.xml 2023-01-11T21:25:38.8977103Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_models.TestModels-20230111212326.xml 2023-01-11T21:25:38.8977402Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_module_apis.TestModuleAPIs-20230111212326.xml 2023-01-11T21:25:38.8977776Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_module_containers.TestModuleContainers-20230111212326.xml 2023-01-11T21:25:38.8978107Z 
Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_module_interface.TestModuleInterface-20230111212326.xml 2023-01-11T21:25:38.8978399Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_modules.TestModules-20230111212326.xml 2023-01-11T21:25:38.8978698Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_list_dict.TestNamedTuple-20230111212326.xml 2023-01-11T21:25:38.8979007Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_backend_nnapi.TestNnapiBackend-20230111212326.xml 2023-01-11T21:25:38.8979343Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_op_decompositions.TestOpDecompositions-20230111212326.xml 2023-01-11T21:25:38.8979818Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_optimize_for_mobile_preserve_debug_info.TestOptimizeForMobilePreserveDebugInfo-20230111212326.xml 2023-01-11T21:25:38.8980089Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_pdt.TestPDT-20230111212326.xml 2023-01-11T21:25:38.8980435Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_parametrization.TestParametrization-20230111212326.xml 2023-01-11T21:25:38.8980734Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_peephole.TestPeephole-20230111212326.xml 2023-01-11T21:25:38.8981016Z Generated XML report: test-reports/python-unittest/test_jit/TEST-TestProducerVersion-20230111212326.xml 2023-01-11T21:25:38.8981311Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_profiler.TestProfiler-20230111212326.xml 2023-01-11T21:25:38.8981637Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_python_bindings.TestPythonBindings-20230111212326.xml 2023-01-11T21:25:38.8981970Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_python_builtins.TestPythonBuiltinOP-20230111212326.xml 2023-01-11T21:25:38.8982261Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_python_ir.TestPythonIr-20230111212326.xml 2023-01-11T21:25:38.8982720Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_recursive_script.TestRecursiveScript-20230111212326.xml 2023-01-11T21:25:38.8983045Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_remove_mutation.TestRemoveMutation-20230111212326.xml 2023-01-11T21:25:38.8983334Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_save_load.TestSaveLoad-20230111212326.xml 2023-01-11T21:25:38.8983692Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_save_load_for_op_version.TestSaveLoadForOpVersion-20230111212326.xml 2023-01-11T21:25:38.8983950Z Generated XML report: test-reports/python-unittest/test_jit/TEST-TestScript-20230111212326.xml 2023-01-11T21:25:38.8984255Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_list_dict.TestScriptDict-20230111212326.xml 2023-01-11T21:25:38.8984550Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_list_dict.TestScriptList-20230111212326.xml 2023-01-11T21:25:38.8984975Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_scriptmod_ann.TestScriptModuleInstanceAttributeTypeAnnotation-20230111212326.xml 2023-01-11T21:25:38.8985296Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_script_profile.TestScriptProfile-20230111212326.xml 2023-01-11T21:25:38.8985575Z Generated XML report: 
test-reports/python-unittest/test_jit/TEST-jit.test_slice.TestSlice-20230111212326.xml 2023-01-11T21:25:38.8985846Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_sparse.TestSparse-20230111212326.xml 2023-01-11T21:25:38.8986188Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_string_formatting.TestStringFormatting-20230111212326.xml 2023-01-11T21:25:38.8986608Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis-20230111212326.xml 2023-01-11T21:25:38.8986924Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_builtins.TestTensorBuiltins-20230111212326.xml 2023-01-11T21:25:38.8987259Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_tensor_creation_ops.TestTensorCreationOps-20230111212326.xml 2023-01-11T21:25:38.8987577Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_tensor_methods.TestTensorMethods-20230111212326.xml 2023-01-11T21:25:38.8987882Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_torchbind.TestTorchbind-20230111212326.xml 2023-01-11T21:25:38.8988167Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_tracer.TestTracer-20230111212326.xml 2023-01-11T21:25:38.8988513Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_type_sharing.TestTypeSharing-20230111212326.xml 2023-01-11T21:25:38.8988834Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_types.TestTypesAndAnnotation-20230111212326.xml 2023-01-11T21:25:38.8989110Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_typing.TestTyping-20230111212326.xml 2023-01-11T21:25:38.8989390Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_union.TestUnion-20230111212326.xml 2023-01-11T21:25:38.8989714Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_unsupported_ops.TestUnsupportedOps-20230111212326.xml 2023-01-11T21:25:38.8990019Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_upgraders.TestUpgraders-20230111212326.xml 2023-01-11T21:25:38.8990291Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_warn.TestWarn-20230111212326.xml 2023-01-11T21:25:38.8990563Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_with.TestWith-20230111212326.xml 2023-01-11T21:25:38.8990942Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_data_parallel.TestDataParallel-20230111212326.xml 2023-01-11T21:25:38.8991267Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_save_load.TestSaveLoadFlatbuffer-20230111212326.xml 2023-01-11T21:25:38.8991273Z 2023-01-11T21:25:38.8991653Z ##[endgroup] 2023-01-11T21:25:38.8991912Z FINISHED PRINTING LOG FILE of test_jit (/var/lib/jenkins/workspace/test/test-reports/test_jit_k9iebijm) 2023-01-11T21:25:38.8991917Z 2023-01-11T21:25:40.7216433Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:25:40.8154894Z Ignoring disabled issues: [] 2023-01-11T21:25:40.8963235Z Running test_masked ... [2023-01-11 21:25:40.895935] 2023-01-11T21:25:40.8964463Z Executing ['/opt/conda/bin/python', '-bb', 'test_masked.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... 
[2023-01-11 21:25:40.896214] 2023-01-11T21:25:43.5545772Z 2023-01-11T21:25:43.5547321Z Expand the folded group to see the log file of test_masked 2023-01-11T21:25:43.5548117Z ##[group]PRINTING LOG FILE of test_masked (/var/lib/jenkins/workspace/test/test-reports/test_masked_2slvrldx) 2023-01-11T21:25:43.5548357Z 2023-01-11T21:25:43.5548433Z Running tests... 2023-01-11T21:25:43.5548834Z ---------------------------------------------------------------------- 2023-01-11T21:25:43.5549009Z 2023-01-11T21:25:43.5549205Z ---------------------------------------------------------------------- 2023-01-11T21:25:43.5549434Z Ran 0 tests in 0.000s 2023-01-11T21:25:43.5549548Z 2023-01-11T21:25:43.5549610Z OK 2023-01-11T21:25:43.5549704Z 2023-01-11T21:25:43.5549791Z Generating XML reports... 2023-01-11T21:25:43.5550105Z Test results will be stored in test-reports/python-unittest/test_masked 2023-01-11T21:25:43.5550286Z 2023-01-11T21:25:43.5550512Z ##[endgroup] 2023-01-11T21:25:43.5550962Z FINISHED PRINTING LOG FILE of test_masked (/var/lib/jenkins/workspace/test/test-reports/test_masked_2slvrldx) 2023-01-11T21:25:43.5551380Z 2023-01-11T21:25:45.4233039Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:25:45.4894640Z Ignoring disabled issues: [] 2023-01-11T21:25:45.5035239Z Running test_meta ... [2023-01-11 21:25:45.503200] 2023-01-11T21:25:45.5036534Z Executing ['/opt/conda/bin/python', '-bb', 'test_meta.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:25:45.503467] 2023-01-11T21:25:48.4139399Z 2023-01-11T21:25:48.4139958Z Expand the folded group to see the log file of test_meta 2023-01-11T21:25:48.4140939Z ##[group]PRINTING LOG FILE of test_meta (/var/lib/jenkins/workspace/test/test-reports/test_meta_wkf1cvw6) 2023-01-11T21:25:48.4141335Z 2023-01-11T21:25:48.4141462Z Running tests... 2023-01-11T21:25:48.4142060Z ---------------------------------------------------------------------- 2023-01-11T21:25:48.4142827Z Test results will be stored in test-reports/python-unittest/test_meta 2023-01-11T21:25:48.4143701Z test_channels_last (__main__.TestMetaConverter) ... ok (0.114s) 2023-01-11T21:25:48.4144214Z test_channels_last_leaf (__main__.TestMetaConverter) ... ok (0.001s) 2023-01-11T21:25:48.4144673Z test_channels_last_non_leaf (__main__.TestMetaConverter) ... ok (0.002s) 2023-01-11T21:25:48.4145780Z test_complex_noncontiguous_bug (__main__.TestMetaConverter) ... /var/lib/jenkins/workspace/test/test_meta.py:233: UserWarning: ComplexHalf support is experimental and many operators don't support it yet. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/EmptyTensor.cpp:32.) 2023-01-11T21:25:48.4146289Z x = torch.randn((2, 2, 4, 9), dtype=torch.complex32)[:, 0, :, :] 2023-01-11T21:25:48.4146556Z ok (0.002s) 2023-01-11T21:25:48.4146786Z test_empty_strided_non_dense_leaf (__main__.TestMetaConverter) ... ok (0.001s) 2023-01-11T21:25:48.4147071Z test_imag (__main__.TestMetaConverter) ... ok (0.001s) 2023-01-11T21:25:48.4147379Z test_leaf (__main__.TestMetaConverter) ... ok (0.001s) 2023-01-11T21:25:48.4147651Z test_non_leaf (__main__.TestMetaConverter) ... ok (0.001s) 2023-01-11T21:25:48.4148301Z test_non_leaf_torture (__main__.TestMetaConverter) ... /var/lib/jenkins/workspace/test/test_meta.py:212: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:25:48.4148896Z x.set_(x.storage(), 10, (2,), (2,)) 2023-01-11T21:25:48.4149082Z ok (0.001s) 2023-01-11T21:25:48.4149299Z test_requires_grad_false (__main__.TestMetaConverter) ... ok (0.001s) 2023-01-11T21:25:48.4149646Z test_tensor_outlives_converter (__main__.TestMetaConverter) ... ok (0.001s) 2023-01-11T21:25:48.4149940Z test_view_as_complex (__main__.TestMetaConverter) ... ok (0.001s) 2023-01-11T21:25:48.4150248Z test_view_as_real (__main__.TestMetaConverter) ... ok (0.001s) 2023-01-11T21:25:48.4150543Z test_view_dtype (__main__.TestMetaConverter) ... ok (0.001s) 2023-01-11T21:25:48.4150879Z test_view_of_leaf (__main__.TestMetaConverter) ... ok (0.003s) 2023-01-11T21:25:48.4151205Z test_view_of_non_leaf (__main__.TestMetaConverter) ... ok (0.003s) 2023-01-11T21:25:48.4151495Z test_view_of_view_of_leaf (__main__.TestMetaConverter) ... ok (0.002s) 2023-01-11T21:25:48.4151792Z test_weakref (__main__.TestMetaConverter) ... ok (0.002s) 2023-01-11T21:25:48.4151953Z 2023-01-11T21:25:48.4152170Z ---------------------------------------------------------------------- 2023-01-11T21:25:48.4152417Z Ran 18 tests in 0.141s 2023-01-11T21:25:48.4152567Z 2023-01-11T21:25:48.4152626Z OK 2023-01-11T21:25:48.4152718Z 2023-01-11T21:25:48.4152805Z Generating XML reports... 2023-01-11T21:25:48.4153221Z Generated XML report: test-reports/python-unittest/test_meta/TEST-TestMetaConverter-20230111212547.xml 2023-01-11T21:25:48.4153500Z 2023-01-11T21:25:48.4153744Z ##[endgroup] 2023-01-11T21:25:48.4154297Z FINISHED PRINTING LOG FILE of test_meta (/var/lib/jenkins/workspace/test/test-reports/test_meta_wkf1cvw6) 2023-01-11T21:25:48.4154505Z 2023-01-11T21:25:50.3845377Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:25:50.4675445Z Ignoring disabled issues: [] 2023-01-11T21:25:50.4814659Z Running test_proxy_tensor ... [2023-01-11 21:25:50.481213] 2023-01-11T21:25:50.4817673Z Executing ['/opt/conda/bin/python', '-bb', 'test_proxy_tensor.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:25:50.481490] 2023-01-11T21:26:30.4011104Z 2023-01-11T21:26:30.4012393Z Expand the folded group to see the log file of test_proxy_tensor 2023-01-11T21:26:30.4015931Z ##[group]PRINTING LOG FILE of test_proxy_tensor (/var/lib/jenkins/workspace/test/test-reports/test_proxy_tensor_5ujmqt7b) 2023-01-11T21:26:30.4016593Z 2023-01-11T21:26:30.4016974Z Running tests... 2023-01-11T21:26:30.4018128Z ---------------------------------------------------------------------- 2023-01-11T21:26:30.4019356Z Test results will be stored in test-reports/python-unittest/test_proxy_tensor 2023-01-11T21:26:30.4020104Z test_alias (__main__.TestFakeProxyTensor) ... ok (0.100s) 2023-01-11T21:26:30.4020748Z test_issue82547 (__main__.TestFakeProxyTensor) ... ok (0.005s) 2023-01-11T21:26:30.4021402Z test_meta (__main__.TestFakeProxyTensor) ... ok (0.006s) 2023-01-11T21:26:30.4067536Z test_use_fake_and_tensor (__main__.TestFakeProxyTensor) ... ok (0.006s) 2023-01-11T21:26:30.4068040Z test_allclose (__main__.TestGenericProxyTensorFake) ... ok (0.003s) 2023-01-11T21:26:30.4068707Z test_amp_cache (__main__.TestGenericProxyTensorFake) ... skip: CUDA-only test (0.001s) 2023-01-11T21:26:30.4069146Z test_constant_blowup (__main__.TestGenericProxyTensorFake) ... ok (0.004s) 2023-01-11T21:26:30.4069564Z test_constant_proxy_tensor_mut (__main__.TestGenericProxyTensorFake) ... 
ok (0.010s) 2023-01-11T21:26:30.4069989Z test_constant_random (__main__.TestGenericProxyTensorFake) ... ok (0.003s) 2023-01-11T21:26:30.4070426Z test_constant_unbind (__main__.TestGenericProxyTensorFake) ... ok (0.004s) 2023-01-11T21:26:30.4070839Z test_decomp_of_capture (__main__.TestGenericProxyTensorFake) ... ok (0.031s) 2023-01-11T21:26:30.4071391Z test_decomposition_interpreter (__main__.TestGenericProxyTensorFake) ... ok (0.024s) 2023-01-11T21:26:30.4071830Z test_inplace_metadata (__main__.TestGenericProxyTensorFake) ... ok (0.006s) 2023-01-11T21:26:30.4074626Z test_isolated_graphmodule (__main__.TestGenericProxyTensorFake) ... ok (0.232s) 2023-01-11T21:26:30.4075126Z test_make_fx_model_double_param (__main__.TestGenericProxyTensorFake) ... ok (0.057s) 2023-01-11T21:26:30.4075566Z test_make_fx_model_fwd_bwd (__main__.TestGenericProxyTensorFake) ... ok (0.039s) 2023-01-11T21:26:30.4076010Z test_make_fx_model_fwd_bwd_wgtupdate (__main__.TestGenericProxyTensorFake) ... ok (0.042s) 2023-01-11T21:26:30.4076436Z test_make_fx_overloads (__main__.TestGenericProxyTensorFake) ... ok (0.010s) 2023-01-11T21:26:30.4076890Z test_make_fx_reentrant_dispatch (__main__.TestGenericProxyTensorFake) ... ok (0.006s) 2023-01-11T21:26:30.4077317Z test_make_fx_simple (__main__.TestGenericProxyTensorFake) ... ok (0.004s) 2023-01-11T21:26:30.4077755Z test_mode_tracing_factory_function (__main__.TestGenericProxyTensorFake) ... ok (0.009s) 2023-01-11T21:26:30.4078169Z test_partial_decomp (__main__.TestGenericProxyTensorFake) ... ok (0.088s) 2023-01-11T21:26:30.4078581Z test_pickle_issue89626 (__main__.TestGenericProxyTensorFake) ... ok (0.004s) 2023-01-11T21:26:30.4081983Z test_pr_86917 (__main__.TestGenericProxyTensorFake) ... ok (0.022s) 2023-01-11T21:26:30.4082648Z test_proxy_tensor (__main__.TestGenericProxyTensorFake) ... ok (0.031s) 2023-01-11T21:26:30.4083307Z test_proxy_tensor_mode_with_decomp_table_preserves_proxy (__main__.TestGenericProxyTensorFake) ... ok (0.020s) 2023-01-11T21:26:30.4083819Z test_resnet18_backward_trace (__main__.TestGenericProxyTensorFake) ... ok (2.519s) 2023-01-11T21:26:30.4084700Z test_scalar_device (__main__.TestGenericProxyTensorFake) ... ok (0.006s) 2023-01-11T21:26:30.4085127Z test_strides (__main__.TestGenericProxyTensorFake) ... ok (0.064s) 2023-01-11T21:26:30.4085645Z test_tensor_constants (__main__.TestGenericProxyTensorFake) ... ok (0.009s) 2023-01-11T21:26:30.4086178Z test_trace_subclasses (__main__.TestGenericProxyTensorFake) ... ok (0.009s) 2023-01-11T21:26:30.4087739Z test_val_metadata_mutation (__main__.TestGenericProxyTensorFake) ... ok (0.004s) 2023-01-11T21:26:30.4088331Z test_varargs (__main__.TestGenericProxyTensorFake) ... ok (0.005s) 2023-01-11T21:26:30.4088869Z test_allclose (__main__.TestGenericProxyTensorReal) ... ok (0.009s) 2023-01-11T21:26:30.4089607Z test_amp_cache (__main__.TestGenericProxyTensorReal) ... skip: CUDA-only test (0.001s) 2023-01-11T21:26:30.4089933Z test_constant_blowup (__main__.TestGenericProxyTensorReal) ... ok (0.020s) 2023-01-11T21:26:30.4090263Z test_constant_proxy_tensor_mut (__main__.TestGenericProxyTensorReal) ... ok (0.020s) 2023-01-11T21:26:30.4090711Z test_constant_random (__main__.TestGenericProxyTensorReal) ... ok (0.015s) 2023-01-11T21:26:30.4091022Z test_constant_unbind (__main__.TestGenericProxyTensorReal) ... ok (0.016s) 2023-01-11T21:26:30.4091343Z test_decomp_of_capture (__main__.TestGenericProxyTensorReal) ... 
ok (0.031s) 2023-01-11T21:26:30.4091684Z test_decomposition_interpreter (__main__.TestGenericProxyTensorReal) ... ok (0.031s) 2023-01-11T21:26:30.4092020Z test_inplace_metadata (__main__.TestGenericProxyTensorReal) ... ok (0.019s) 2023-01-11T21:26:30.4092338Z test_isolated_graphmodule (__main__.TestGenericProxyTensorReal) ... ok (0.235s) 2023-01-11T21:26:30.4092680Z test_make_fx_model_double_param (__main__.TestGenericProxyTensorReal) ... ok (0.057s) 2023-01-11T21:26:30.4093016Z test_make_fx_model_fwd_bwd (__main__.TestGenericProxyTensorReal) ... ok (0.254s) 2023-01-11T21:26:30.4093345Z test_make_fx_model_fwd_bwd_wgtupdate (__main__.TestGenericProxyTensorReal) ... ok (0.328s) 2023-01-11T21:26:30.4093686Z test_make_fx_overloads (__main__.TestGenericProxyTensorReal) ... ok (0.033s) 2023-01-11T21:26:30.4094020Z test_make_fx_reentrant_dispatch (__main__.TestGenericProxyTensorReal) ... ok (0.042s) 2023-01-11T21:26:30.4094352Z test_make_fx_simple (__main__.TestGenericProxyTensorReal) ... ok (0.020s) 2023-01-11T21:26:30.4094675Z test_mode_tracing_factory_function (__main__.TestGenericProxyTensorReal) ... ok (0.027s) 2023-01-11T21:26:30.4095006Z test_partial_decomp (__main__.TestGenericProxyTensorReal) ... ok (0.120s) 2023-01-11T21:26:30.4095325Z test_pickle_issue89626 (__main__.TestGenericProxyTensorReal) ... ok (0.017s) 2023-01-11T21:26:30.4095620Z test_pr_86917 (__main__.TestGenericProxyTensorReal) ... ok (0.029s) 2023-01-11T21:26:30.4095921Z test_proxy_tensor (__main__.TestGenericProxyTensorReal) ... ok (0.249s) 2023-01-11T21:26:30.4096278Z test_proxy_tensor_mode_with_decomp_table_preserves_proxy (__main__.TestGenericProxyTensorReal) ... ok (0.031s) 2023-01-11T21:26:30.4096648Z test_resnet18_backward_trace (__main__.TestGenericProxyTensorReal) ... ok (6.637s) 2023-01-11T21:26:30.4096960Z test_scalar_device (__main__.TestGenericProxyTensorReal) ... ok (0.019s) 2023-01-11T21:26:30.4097271Z test_strides (__main__.TestGenericProxyTensorReal) ... ok (0.064s) 2023-01-11T21:26:30.4097581Z test_tensor_constants (__main__.TestGenericProxyTensorReal) ... ok (0.013s) 2023-01-11T21:26:30.4097889Z test_trace_subclasses (__main__.TestGenericProxyTensorReal) ... ok (0.043s) 2023-01-11T21:26:30.4098215Z test_val_metadata_mutation (__main__.TestGenericProxyTensorReal) ... ok (0.020s) 2023-01-11T21:26:30.4098532Z test_varargs (__main__.TestGenericProxyTensorReal) ... ok (0.023s) 2023-01-11T21:26:30.4098853Z test_allclose (__main__.TestGenericProxyTensorSymbolic) ... ok (0.039s) 2023-01-11T21:26:30.4099277Z test_amp_cache (__main__.TestGenericProxyTensorSymbolic) ... skip: CUDA-only test (0.001s) 2023-01-11T21:26:30.4099633Z test_constant_blowup (__main__.TestGenericProxyTensorSymbolic) ... ok (0.010s) 2023-01-11T21:26:30.4100040Z test_constant_proxy_tensor_mut (__main__.TestGenericProxyTensorSymbolic) ... ok (0.013s) 2023-01-11T21:26:30.4100511Z test_constant_random (__main__.TestGenericProxyTensorSymbolic) ... ok (0.008s) 2023-01-11T21:26:30.4100850Z test_constant_unbind (__main__.TestGenericProxyTensorSymbolic) ... ok (0.005s) 2023-01-11T21:26:30.4101195Z test_decomp_of_capture (__main__.TestGenericProxyTensorSymbolic) ... ok (0.032s) 2023-01-11T21:26:30.4101551Z test_decomposition_interpreter (__main__.TestGenericProxyTensorSymbolic) ... ok (0.069s) 2023-01-11T21:26:30.4101895Z test_inplace_metadata (__main__.TestGenericProxyTensorSymbolic) ... ok (0.035s) 2023-01-11T21:26:30.4102255Z test_isolated_graphmodule (__main__.TestGenericProxyTensorSymbolic) ... 
ok (0.241s) 2023-01-11T21:26:30.4102931Z test_make_fx_model_double_param (__main__.TestGenericProxyTensorSymbolic) ... ok (0.058s) 2023-01-11T21:26:30.4103491Z test_make_fx_model_fwd_bwd (__main__.TestGenericProxyTensorSymbolic) ... ok (0.243s) 2023-01-11T21:26:30.4105034Z test_make_fx_model_fwd_bwd_wgtupdate (__main__.TestGenericProxyTensorSymbolic) ... ok (0.198s) 2023-01-11T21:26:30.4105734Z test_make_fx_overloads (__main__.TestGenericProxyTensorSymbolic) ... expected failure (0.031s) 2023-01-11T21:26:30.4106315Z test_make_fx_reentrant_dispatch (__main__.TestGenericProxyTensorSymbolic) ... ok (0.024s) 2023-01-11T21:26:30.4108224Z test_make_fx_simple (__main__.TestGenericProxyTensorSymbolic) ... ok (0.016s) 2023-01-11T21:26:30.4108847Z test_mode_tracing_factory_function (__main__.TestGenericProxyTensorSymbolic) ... ok (0.022s) 2023-01-11T21:26:30.4109205Z test_partial_decomp (__main__.TestGenericProxyTensorSymbolic) ... ok (0.086s) 2023-01-11T21:26:30.4109549Z test_pickle_issue89626 (__main__.TestGenericProxyTensorSymbolic) ... ok (0.015s) 2023-01-11T21:26:30.4109861Z test_pr_86917 (__main__.TestGenericProxyTensorSymbolic) ... ok (0.068s) 2023-01-11T21:26:30.4110192Z test_proxy_tensor (__main__.TestGenericProxyTensorSymbolic) ... ok (0.167s) 2023-01-11T21:26:30.4110577Z test_proxy_tensor_mode_with_decomp_table_preserves_proxy (__main__.TestGenericProxyTensorSymbolic) ... ok (0.021s) 2023-01-11T21:26:30.4111014Z test_resnet18_backward_trace (__main__.TestGenericProxyTensorSymbolic) ... ok (21.911s) 2023-01-11T21:26:30.4111365Z test_scalar_device (__main__.TestGenericProxyTensorSymbolic) ... ok (0.024s) 2023-01-11T21:26:30.4111714Z test_strides (__main__.TestGenericProxyTensorSymbolic) ... ok (0.063s) 2023-01-11T21:26:30.4112048Z test_tensor_constants (__main__.TestGenericProxyTensorSymbolic) ... ok (0.011s) 2023-01-11T21:26:30.4112394Z test_trace_subclasses (__main__.TestGenericProxyTensorSymbolic) ... expected failure (0.009s) 2023-01-11T21:26:30.4112758Z test_val_metadata_mutation (__main__.TestGenericProxyTensorSymbolic) ... ok (0.022s) 2023-01-11T21:26:30.4113094Z test_varargs (__main__.TestGenericProxyTensorSymbolic) ... ok (0.027s) 2023-01-11T21:26:30.4113403Z test_binary_broadcast (__main__.TestSymbolicTracing) ... ok (0.033s) 2023-01-11T21:26:30.4113669Z test_cat (__main__.TestSymbolicTracing) ... ok (0.066s) 2023-01-11T21:26:30.4113959Z test_debug_interpreter (__main__.TestSymbolicTracing) ... ok (0.025s) 2023-01-11T21:26:30.4114271Z test_elementwise_meta_with_sym_numbers (__main__.TestSymbolicTracing) ... ok (0.049s) 2023-01-11T21:26:30.4114552Z test_expand (__main__.TestSymbolicTracing) ... ok (0.036s) 2023-01-11T21:26:30.4114826Z test_guards_equal (__main__.TestSymbolicTracing) ... ok (0.504s) 2023-01-11T21:26:30.4115101Z test_item (__main__.TestSymbolicTracing) ... ok (0.012s) 2023-01-11T21:26:30.4115356Z test_mega_guard (__main__.TestSymbolicTracing) ... ok (0.029s) 2023-01-11T21:26:30.4115629Z test_metadata (__main__.TestSymbolicTracing) ... ok (0.025s) 2023-01-11T21:26:30.4115912Z test_metadata_fresh (__main__.TestSymbolicTracing) ... ok (0.018s) 2023-01-11T21:26:30.4116200Z test_multiply_shape (__main__.TestSymbolicTracing) ... ok (0.017s) 2023-01-11T21:26:30.4116562Z test_neg_shape (__main__.TestSymbolicTracing) ... ok (0.030s) 2023-01-11T21:26:30.4116835Z test_new_empty (__main__.TestSymbolicTracing) ... ok (0.034s) 2023-01-11T21:26:30.4117138Z test_nonidentity_transitive_guards (__main__.TestSymbolicTracing) ... 
ok (0.149s) 2023-01-11T21:26:30.4117432Z test_resize_from_zero (__main__.TestSymbolicTracing) ... ok (0.012s) 2023-01-11T21:26:30.4117714Z test_return_symint (__main__.TestSymbolicTracing) ... ok (0.027s) 2023-01-11T21:26:30.4117992Z test_rmethod (__main__.TestSymbolicTracing) ... ok (0.015s) 2023-01-11T21:26:30.4118269Z test_size_with_tensor (__main__.TestSymbolicTracing) ... ok (0.065s) 2023-01-11T21:26:30.4118536Z test_sqrt_size (__main__.TestSymbolicTracing) ... ok (0.015s) 2023-01-11T21:26:30.4118818Z test_sym_storage_offset (__main__.TestSymbolicTracing) ... ok (0.020s) 2023-01-11T21:26:30.4119107Z test_symint_to_tensor (__main__.TestSymbolicTracing) ... ok (0.029s) 2023-01-11T21:26:30.4119414Z test_unary (__main__.TestSymbolicTracing) ... ok (0.024s) 2023-01-11T21:26:30.4119568Z 2023-01-11T21:26:30.4119804Z ---------------------------------------------------------------------- 2023-01-11T21:26:30.4120050Z Ran 113 tests in 36.517s 2023-01-11T21:26:30.4120166Z 2023-01-11T21:26:30.4120249Z OK (skipped=3, expected failures=2) 2023-01-11T21:26:30.4120376Z 2023-01-11T21:26:30.4120463Z Generating XML reports... 2023-01-11T21:26:30.4120907Z Generated XML report: test-reports/python-unittest/test_proxy_tensor/TEST-TestFakeProxyTensor-20230111212552.xml 2023-01-11T21:26:30.4124754Z Generated XML report: test-reports/python-unittest/test_proxy_tensor/TEST-TestGenericProxyTensorFake-20230111212552.xml 2023-01-11T21:26:30.4125760Z Generated XML report: test-reports/python-unittest/test_proxy_tensor/TEST-TestGenericProxyTensorReal-20230111212552.xml 2023-01-11T21:26:30.4126787Z Generated XML report: test-reports/python-unittest/test_proxy_tensor/TEST-TestGenericProxyTensorSymbolic-20230111212552.xml 2023-01-11T21:26:30.4127751Z Generated XML report: test-reports/python-unittest/test_proxy_tensor/TEST-TestSymbolicTracing-20230111212552.xml 2023-01-11T21:26:30.4128137Z 2023-01-11T21:26:30.4128703Z ##[endgroup] 2023-01-11T21:26:30.4129325Z FINISHED PRINTING LOG FILE of test_proxy_tensor (/var/lib/jenkins/workspace/test/test-reports/test_proxy_tensor_5ujmqt7b) 2023-01-11T21:26:30.4129652Z 2023-01-11T21:26:32.5022898Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:26:32.5896581Z Ignoring disabled issues: [] 2023-01-11T21:26:32.6040629Z Running test_public_bindings ... [2023-01-11 21:26:32.603722] 2023-01-11T21:26:32.6042717Z Executing ['/opt/conda/bin/python', '-bb', 'test_public_bindings.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:26:32.604002] 2023-01-11T21:26:36.7910702Z 2023-01-11T21:26:36.7916939Z Expand the folded group to see the log file of test_public_bindings 2023-01-11T21:26:36.7917962Z ##[group]PRINTING LOG FILE of test_public_bindings (/var/lib/jenkins/workspace/test/test-reports/test_public_bindings_jnijx7zv) 2023-01-11T21:26:36.7918335Z 2023-01-11T21:26:36.7918437Z Running tests... 2023-01-11T21:26:36.7918981Z ---------------------------------------------------------------------- 2023-01-11T21:26:36.7919511Z Test results will be stored in test-reports/python-unittest/test_public_bindings 2023-01-11T21:26:36.7919929Z test_correct_module_names (__main__.TestPublicBindings) 2023-01-11T21:26:36.7920554Z An API is considered public, if its `__module__` starts with `torch.` ... 
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:26:36.7920911Z ok (2.075s) 2023-01-11T21:26:36.7921203Z test_no_new_bindings (__main__.TestPublicBindings) 2023-01-11T21:26:36.7921608Z This test aims to stop the introduction of new JIT bindings into torch._C ... ok (0.003s) 2023-01-11T21:26:36.7921857Z 2023-01-11T21:26:36.7922135Z ---------------------------------------------------------------------- 2023-01-11T21:26:36.7922447Z Ran 2 tests in 2.078s 2023-01-11T21:26:36.7922875Z 2023-01-11T21:26:36.7922962Z OK 2023-01-11T21:26:36.7923089Z 2023-01-11T21:26:36.7923205Z Generating XML reports... 2023-01-11T21:26:36.7923806Z Generated XML report: test-reports/python-unittest/test_public_bindings/TEST-TestPublicBindings-20230111212634.xml 2023-01-11T21:26:36.7924141Z 2023-01-11T21:26:36.7924483Z ##[endgroup] 2023-01-11T21:26:36.7925048Z FINISHED PRINTING LOG FILE of test_public_bindings (/var/lib/jenkins/workspace/test/test-reports/test_public_bindings_jnijx7zv) 2023-01-11T21:26:36.7925359Z 2023-01-11T21:26:38.9053638Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:26:38.9935650Z Ignoring disabled issues: [] 2023-01-11T21:26:39.0078985Z Running test_python_dispatch ... [2023-01-11 21:26:39.007592] 2023-01-11T21:26:39.0081031Z Executing ['/opt/conda/bin/python', '-bb', 'test_python_dispatch.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:26:39.007874] 2023-01-11T21:26:41.1319238Z 2023-01-11T21:26:41.1320068Z Expand the folded group to see the log file of test_python_dispatch 2023-01-11T21:26:41.1321171Z ##[group]PRINTING LOG FILE of test_python_dispatch (/var/lib/jenkins/workspace/test/test-reports/test_python_dispatch_hr9qzlvq) 2023-01-11T21:26:41.1321491Z 2023-01-11T21:26:41.1321593Z Running tests... 2023-01-11T21:26:41.1322146Z ---------------------------------------------------------------------- 2023-01-11T21:26:41.1322836Z Test results will be stored in test-reports/python-unittest/test_python_dispatch 2023-01-11T21:26:41.1323393Z test_all_same_mode (__main__.TestPythonDispatch) ... ok (0.235s) 2023-01-11T21:26:41.1323898Z test_autograd_in_attr (__main__.TestPythonDispatch) ... ok (0.002s) 2023-01-11T21:26:41.1324417Z test_basic (__main__.TestPythonDispatch) ... ok (0.006s) 2023-01-11T21:26:41.1324919Z test_capture_logs_with_torch_dispatch_mode (__main__.TestPythonDispatch) ... ok (0.002s) 2023-01-11T21:26:41.1325426Z test_construct_int_tensor (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1325855Z test_custom_autograd (__main__.TestPythonDispatch) ... ok (0.004s) 2023-01-11T21:26:41.1326286Z test_deepcopy_non_wrapper_subclass (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1326764Z test_deepcopy_wrapper_subclass (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1327442Z test_deepcopy_wrapper_subclass_with_clone_returning_different_type (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1328007Z test_detach_appears_twice_when_called_once (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1328594Z test_device_slowpath (__main__.TestPythonDispatch) ... ok (0.002s) 2023-01-11T21:26:41.1329272Z test_dim_slowpath (__main__.TestPythonDispatch) ... ok (0.002s) 2023-01-11T21:26:41.1329981Z test_dispatch_super_call (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1330707Z test_dispatch_super_call_list_arg (__main__.TestPythonDispatch) ... 
ok (0.001s) 2023-01-11T21:26:41.1331493Z test_dispatch_super_dont_autograd (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1332277Z test_error_using_class_method_on_mode (__main__.TestPythonDispatch) ... ok (0.005s) 2023-01-11T21:26:41.1333035Z test_exception_handling (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1333723Z test_fancy_strides (__main__.TestPythonDispatch) ... ok (0.002s) 2023-01-11T21:26:41.1334378Z test_format (__main__.TestPythonDispatch) ... ok (0.002s) 2023-01-11T21:26:41.1335096Z test_get_cur_mode (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1335772Z test_get_mode_stack (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1336495Z test_index_put_where_only_index_is_subclass (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1337565Z test_invalid_ret (__main__.TestPythonDispatch) ... /var/lib/jenkins/workspace/test/test_python_dispatch.py:447: DeprecationWarning: Please use assertRaisesRegex instead. 2023-01-11T21:26:41.1338462Z self.assertRaisesRegexp( 2023-01-11T21:26:41.1339125Z ok (0.001s) 2023-01-11T21:26:41.1339663Z test_is_contiguous_slow_path (__main__.TestPythonDispatch) ... ok (0.003s) 2023-01-11T21:26:41.1340361Z test_kwarg_only (__main__.TestPythonDispatch) ... ok (0.002s) 2023-01-11T21:26:41.1341104Z test_kwarg_only_and_positional_default (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1341808Z test_layout_slow_path (__main__.TestPythonDispatch) ... ok (0.002s) 2023-01-11T21:26:41.1342677Z test_like (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1343291Z test_list_ret (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1343988Z test_make_subclass_with_modes (__main__.TestPythonDispatch) ... ok (0.002s) 2023-01-11T21:26:41.1344744Z test_make_wrapper_subclass_noalloc (__main__.TestPythonDispatch) ... ok (0.000s) 2023-01-11T21:26:41.1345574Z test_make_wrapper_subclass_propagates_metadata (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1346467Z test_maybe_tuple_bug (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1347173Z test_mode_with_make_subclass (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1347893Z test_multiple_ops_subclass (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1348671Z test_nested_push_logging_tensor_mode (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1349383Z test_nesting_same_mode (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1350038Z test_new_ones (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1350704Z test_none_wrapping (__main__.TestPythonDispatch) ... ok (0.008s) 2023-01-11T21:26:41.1351500Z test_notimplemented_mode (__main__.TestPythonDispatch) ... ok (0.002s) 2023-01-11T21:26:41.1352200Z test_optional_tensor_list (__main__.TestPythonDispatch) ... woof 2023-01-11T21:26:41.1352738Z ok (0.002s) 2023-01-11T21:26:41.1353238Z test_out (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1353889Z test_produce_real_type (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1354518Z test_set_data (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1355208Z test_shallow_copy_and_detach (__main__.TestPythonDispatch) ... ok (0.002s) 2023-01-11T21:26:41.1355880Z test_sizes_slow_path (__main__.TestPythonDispatch) ... ok (0.003s) 2023-01-11T21:26:41.1356588Z test_standard_is_not_subclass (__main__.TestPythonDispatch) ... 
ok (0.000s) 2023-01-11T21:26:41.1358255Z test_storage (__main__.TestPythonDispatch) ... /var/lib/jenkins/workspace/test/test_python_dispatch.py:469: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:26:41.1359832Z self.assertRaises(RuntimeError, lambda: x.storage()) 2023-01-11T21:26:41.1360341Z ok (0.002s) 2023-01-11T21:26:41.1361852Z test_storage_can_be_converted_to_python_object (__main__.TestPythonDispatch) ... /var/lib/jenkins/workspace/test/test_python_dispatch.py:1197: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:26:41.1363322Z s = torch.Storage() 2023-01-11T21:26:41.1363723Z ok (0.001s) 2023-01-11T21:26:41.1364266Z test_strides_slow_path (__main__.TestPythonDispatch) ... ok (0.002s) 2023-01-11T21:26:41.1364985Z test_subclass_autograd_device_check (__main__.TestPythonDispatch) ... ok (0.002s) 2023-01-11T21:26:41.1365697Z test_subclass_creation (__main__.TestPythonDispatch) ... ok (0.005s) 2023-01-11T21:26:41.1366394Z test_subclass_priority (__main__.TestPythonDispatch) ... ok (0.002s) 2023-01-11T21:26:41.1367145Z test_tolist_numpy_with_torch_dispatch_mode (__main__.TestPythonDispatch) ... ok (0.003s) 2023-01-11T21:26:41.1368075Z test_torch_dispatch_mode_basic (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1368876Z test_torch_dispatch_mode_respects_no_dispatch (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1369704Z test_torch_dispatch_mode_subclass_priority (__main__.TestPythonDispatch) ... ok (0.002s) 2023-01-11T21:26:41.1370504Z test_torch_dispatch_mode_unrelated_tensors (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1371228Z test_version (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1371937Z test_with_mode_created_separately (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1372655Z test_with_nested_modes (__main__.TestPythonDispatch) ... ok (0.001s) 2023-01-11T21:26:41.1373390Z test_wrapper_subclass_serializes (__main__.TestPythonDispatch) ... ok (0.002s) 2023-01-11T21:26:41.1374098Z test_basic (__main__.TestPythonDispatcher) ... ok (0.001s) 2023-01-11T21:26:41.1374816Z test_lstsq (__main__.TestPythonDispatcher) ... ok (0.006s) 2023-01-11T21:26:41.1375537Z test_alias_analysis (__main__.TestPythonRegistration) ... ok (0.006s) 2023-01-11T21:26:41.1376268Z test_create_new_library (__main__.TestPythonRegistration) ... ok (0.002s) 2023-01-11T21:26:41.1377048Z test_error_for_unsupported_ns_or_kind (__main__.TestPythonRegistration) ... ok (0.001s) 2023-01-11T21:26:41.1377822Z test_error_if_fn_not_callable (__main__.TestPythonRegistration) ... ok (0.001s) 2023-01-11T21:26:41.1378629Z test_extend_library_with_dispatch_key_arg (__main__.TestPythonRegistration) ... ok (0.002s) 2023-01-11T21:26:41.1380450Z test_override_aten_ops_with_multiple_libraries (__main__.TestPythonRegistration) ... 
/opt/conda/lib/python3.10/site-packages/torch/library.py:126: UserWarning: Overriding a previously registered kernel for the same operator and the same dispatch key 2023-01-11T21:26:41.1381821Z operator: aten::mul.Tensor(Tensor self, Tensor other) -> Tensor 2023-01-11T21:26:41.1382761Z registered at /var/lib/jenkins/workspace/build/aten/src/ATen/RegisterSchema.cpp:6 2023-01-11T21:26:41.1383387Z dispatch key: ZeroTensor 2023-01-11T21:26:41.1384153Z previous kernel: registered at /var/lib/jenkins/workspace/aten/src/ATen/LegacyBatchingRegistrations.cpp:1070 2023-01-11T21:26:41.1385239Z new kernel: registered at /dev/null:549 (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/core/dispatch/OperatorEntry.cpp:159.) 2023-01-11T21:26:41.1386232Z self.m.impl(name, dispatch_key if dispatch_key != "" else "CompositeImplicitAutograd", fn) 2023-01-11T21:26:41.1386852Z ok (0.004s) 2023-01-11T21:26:41.1387407Z test_override_cpu_sum (__main__.TestPythonRegistration) ... ok (0.001s) 2023-01-11T21:26:41.1388181Z test_override_cuda_with_jiterator (__main__.TestPythonRegistration) ... ok (0.002s) 2023-01-11T21:26:41.1388626Z 2023-01-11T21:26:41.1389123Z ---------------------------------------------------------------------- 2023-01-11T21:26:41.1389707Z Ran 72 tests in 0.365s 2023-01-11T21:26:41.1389975Z 2023-01-11T21:26:41.1390101Z OK 2023-01-11T21:26:41.1390320Z 2023-01-11T21:26:41.1390517Z Generating XML reports... 2023-01-11T21:26:41.1391624Z Generated XML report: test-reports/python-unittest/test_python_dispatch/TEST-TestPythonDispatch-20230111212640.xml 2023-01-11T21:26:41.1393001Z Generated XML report: test-reports/python-unittest/test_python_dispatch/TEST-TestPythonDispatcher-20230111212640.xml 2023-01-11T21:26:41.1394422Z Generated XML report: test-reports/python-unittest/test_python_dispatch/TEST-TestPythonRegistration-20230111212640.xml 2023-01-11T21:26:41.1395068Z 2023-01-11T21:26:41.1395647Z ##[endgroup] 2023-01-11T21:26:41.1396610Z FINISHED PRINTING LOG FILE of test_python_dispatch (/var/lib/jenkins/workspace/test/test-reports/test_python_dispatch_hr9qzlvq) 2023-01-11T21:26:41.1397165Z 2023-01-11T21:26:43.0285581Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:26:43.0932533Z Ignoring disabled issues: [] 2023-01-11T21:26:43.1074103Z Running test_scatter_gather_ops ... [2023-01-11 21:26:43.107077] 2023-01-11T21:26:43.1075474Z Executing ['/opt/conda/bin/python', '-bb', 'test_scatter_gather_ops.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:26:43.107336] 2023-01-11T21:26:45.0588315Z 2023-01-11T21:26:45.0588913Z Expand the folded group to see the log file of test_scatter_gather_ops 2023-01-11T21:26:45.0589883Z ##[group]PRINTING LOG FILE of test_scatter_gather_ops (/var/lib/jenkins/workspace/test/test-reports/test_scatter_gather_ops_3lbxs6ip) 2023-01-11T21:26:45.0590123Z 2023-01-11T21:26:45.0590199Z Running tests... 2023-01-11T21:26:45.0590600Z ---------------------------------------------------------------------- 2023-01-11T21:26:45.0590771Z 2023-01-11T21:26:45.0591019Z ---------------------------------------------------------------------- 2023-01-11T21:26:45.0591243Z Ran 0 tests in 0.000s 2023-01-11T21:26:45.0591361Z 2023-01-11T21:26:45.0591422Z OK 2023-01-11T21:26:45.0591534Z 2023-01-11T21:26:45.0591824Z Generating XML reports... 
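Editor's note: the "Overriding a previously registered kernel" UserWarning at the top of this group is emitted by the dispatcher when torch.library registers an implementation for an operator/dispatch-key pair that already has one. A minimal sketch of that registration pattern, assuming a recent PyTorch; the kernel body and the CPU dispatch key here are illustrative and are not the test's actual code:

    import torch
    from torch.library import Library

    # Open the existing "aten" namespace for implementations and register a Python
    # kernel for aten::mul.Tensor on the CPU dispatch key.  aten::mul.Tensor already
    # ships a native CPU kernel, so a registration like this is the kind of thing
    # that produces the "Overriding a previously registered kernel" warning above.
    lib = Library("aten", "IMPL")

    def traced_mul(self, other):
        # Illustrative kernel only: delegate to add so we do not re-dispatch into mul.
        return torch.add(self, other)

    lib.impl("mul.Tensor", traced_mul, "CPU")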
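Editor's note: the TypedStorage deprecation UserWarnings earlier in this log (in test_meta and in test_python_dispatch's test_storage) point at Tensor.untyped_storage() as the replacement for Tensor.storage(). A small sketch of that migration, assuming a PyTorch build where untyped_storage() is available, as the warning text recommends:

    import torch

    x = torch.randn(4)

    # Deprecated path: returns a TypedStorage and emits the UserWarning quoted above.
    legacy_storage = x.storage()

    # Replacement suggested by the warning: work with the untyped storage directly.
    storage = x.untyped_storage()
    print(storage.nbytes(), storage.device)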
2023-01-11T21:26:45.0592175Z Test results will be stored in test-reports/python-unittest/test_scatter_gather_ops 2023-01-11T21:26:45.0592352Z 2023-01-11T21:26:45.0592578Z ##[endgroup] 2023-01-11T21:26:45.0592993Z FINISHED PRINTING LOG FILE of test_scatter_gather_ops (/var/lib/jenkins/workspace/test/test-reports/test_scatter_gather_ops_3lbxs6ip) 2023-01-11T21:26:45.0593221Z 2023-01-11T21:26:46.9903727Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:26:47.0754566Z Ignoring disabled issues: [] 2023-01-11T21:26:47.0894884Z Running test_sort_and_select ... [2023-01-11 21:26:47.089149] 2023-01-11T21:26:47.0896020Z Executing ['/opt/conda/bin/python', '-bb', 'test_sort_and_select.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:26:47.089401] 2023-01-11T21:26:49.0010072Z 2023-01-11T21:26:49.0010638Z Expand the folded group to see the log file of test_sort_and_select 2023-01-11T21:26:49.0011996Z ##[group]PRINTING LOG FILE of test_sort_and_select (/var/lib/jenkins/workspace/test/test-reports/test_sort_and_select_r6czj8ih) 2023-01-11T21:26:49.0012516Z 2023-01-11T21:26:49.0012654Z Running tests... 2023-01-11T21:26:49.0013341Z ---------------------------------------------------------------------- 2023-01-11T21:26:49.0013592Z 2023-01-11T21:26:49.0013930Z ---------------------------------------------------------------------- 2023-01-11T21:26:49.0014305Z Ran 0 tests in 0.000s 2023-01-11T21:26:49.0014482Z 2023-01-11T21:26:49.0014571Z OK 2023-01-11T21:26:49.0014712Z 2023-01-11T21:26:49.0014833Z Generating XML reports... 2023-01-11T21:26:49.0015425Z Test results will be stored in test-reports/python-unittest/test_sort_and_select 2023-01-11T21:26:49.0015728Z 2023-01-11T21:26:49.0016130Z ##[endgroup] 2023-01-11T21:26:49.0016780Z FINISHED PRINTING LOG FILE of test_sort_and_select (/var/lib/jenkins/workspace/test/test-reports/test_sort_and_select_r6czj8ih) 2023-01-11T21:26:49.0017161Z 2023-01-11T21:26:50.8836181Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:26:50.9517895Z Ignoring disabled issues: [] 2023-01-11T21:26:50.9659788Z Running test_sparse ... [2023-01-11 21:26:50.965688] 2023-01-11T21:26:50.9662069Z Executing ['/opt/conda/bin/python', '-bb', 'test_sparse.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:26:50.965969] 2023-01-11T21:26:53.7607721Z 2023-01-11T21:26:53.7608251Z Expand the folded group to see the log file of test_sparse 2023-01-11T21:26:53.7609413Z ##[group]PRINTING LOG FILE of test_sparse (/var/lib/jenkins/workspace/test/test-reports/test_sparse_5v068md3) 2023-01-11T21:26:53.7609814Z 2023-01-11T21:26:53.7609940Z Running tests... 2023-01-11T21:26:53.7610573Z ---------------------------------------------------------------------- 2023-01-11T21:26:53.7611219Z Test results will be stored in test-reports/python-unittest/test_sparse 2023-01-11T21:26:53.7611716Z test_basic (__main__.TestSparseMeta) ... ok (0.004s) 2023-01-11T21:26:53.7612486Z test_cuda_from_cpu (__main__.TestSparseOneOff) ... skip: CUDA not available (0.001s) 2023-01-11T21:26:53.7613043Z test_cuda_sparse_cpu_dense_add (__main__.TestSparseOneOff) ... skip: CUDA not available (0.001s) 2023-01-11T21:26:53.7613351Z 2023-01-11T21:26:53.7613702Z ---------------------------------------------------------------------- 2023-01-11T21:26:53.7614084Z Ran 3 tests in 0.005s 2023-01-11T21:26:53.7614261Z 2023-01-11T21:26:53.7614379Z OK (skipped=2) 2023-01-11T21:26:53.7614552Z 2023-01-11T21:26:53.7614686Z Generating XML reports... 
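Editor's note: each "Running test_X ..." / "Executing [...]" pair in this log shows the harness launching one test file in a fresh interpreter with python -bb and verbose unittest output. A rough reconstruction of that launch step, not the harness's real implementation; the file name is a placeholder and the two --import-* flags are simply the ones recorded in the command lines above:

    import subprocess
    import sys

    test_file = "test_sort_and_select.py"  # placeholder; the harness loops over many test files

    # Mirror the command line recorded in the log above.
    cmd = [
        sys.executable,
        "-bb",                       # raise an error on str/bytes comparisons
        test_file,
        "-v",                        # verbose unittest output, one line per test
        "--import-slow-tests",
        "--import-disabled-tests",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.returncode)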
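Editor's note: the test_public_bindings docstring printed above states that an API is considered public if its __module__ starts with torch.; a toy version of that check, simplified and not the test's actual logic:

    import torch

    def looks_public(obj) -> bool:
        # Simplified form of the rule quoted in the test_correct_module_names docstring:
        # treat an object as public API when the module defining it lives under "torch.".
        module = getattr(obj, "__module__", "") or ""
        return module.startswith("torch.")

    print(looks_public(torch.nn.Linear))  # defined in torch.nn.modules.linear -> True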
2023-01-11T21:26:53.7615366Z Generated XML report: test-reports/python-unittest/test_sparse/TEST-TestSparseMeta-20230111212653.xml 2023-01-11T21:26:53.7616258Z Generated XML report: test-reports/python-unittest/test_sparse/TEST-TestSparseOneOff-20230111212653.xml 2023-01-11T21:26:53.7616650Z 2023-01-11T21:26:53.7617067Z ##[endgroup] 2023-01-11T21:26:53.7617715Z FINISHED PRINTING LOG FILE of test_sparse (/var/lib/jenkins/workspace/test/test-reports/test_sparse_5v068md3) 2023-01-11T21:26:53.7618266Z 2023-01-11T21:26:55.7672448Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:26:55.8532978Z Ignoring disabled issues: [] 2023-01-11T21:26:55.8679553Z Running test_stateless ... [2023-01-11 21:26:55.867560] 2023-01-11T21:26:55.8680774Z Executing ['/opt/conda/bin/python', '-bb', 'test_stateless.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:26:55.867863] 2023-01-11T21:27:03.7789386Z 2023-01-11T21:27:03.7790224Z Expand the folded group to see the log file of test_stateless 2023-01-11T21:27:03.7791486Z ##[group]PRINTING LOG FILE of test_stateless (/var/lib/jenkins/workspace/test/test-reports/test_stateless_s5o9c9bb) 2023-01-11T21:27:03.7791891Z 2023-01-11T21:27:03.7792009Z Running tests... 2023-01-11T21:27:03.7792429Z ---------------------------------------------------------------------- 2023-01-11T21:27:03.7792802Z Test results will be stored in test-reports/python-unittest/test_stateless 2023-01-11T21:27:03.7793173Z test_runs_with_optimize_flag (__main__.TestPythonOptimizeMode) ... ok (4.227s) 2023-01-11T21:27:03.7793505Z test_private_stateless_warns (__main__.TestStatelessDeprecation) ... ok (1.497s) 2023-01-11T21:27:03.7793839Z test_circular_references (__main__.TestStatelessFunctionalAPI) ... ok (0.003s) 2023-01-11T21:27:03.7794157Z test_functional_batch_norm (__main__.TestStatelessFunctionalAPI) ... ok (0.002s) 2023-01-11T21:27:03.7794483Z test_functional_call (__main__.TestStatelessFunctionalAPI) ... ok (0.002s) 2023-01-11T21:27:03.7794955Z test_functional_call_with_data_parallel (__main__.TestStatelessFunctionalAPI) ... skip: multi-GPU not supported (0.000s) 2023-01-11T21:27:03.7795337Z test_functional_call_with_gradient (__main__.TestStatelessFunctionalAPI) ... ok (0.002s) 2023-01-11T21:27:03.7795665Z test_functional_call_with_jit (__main__.TestStatelessFunctionalAPI) ... ok (0.056s) 2023-01-11T21:27:03.7796029Z test_reparamertize_module_fail_reset_to_original (__main__.TestStatelessFunctionalAPI) ... ok (0.034s) 2023-01-11T21:27:03.7796432Z test_reparametrized_module_change_parametrization_original (__main__.TestStatelessFunctionalAPI) ... ok (0.004s) 2023-01-11T21:27:03.7796789Z test_setattr (__main__.TestStatelessFunctionalAPI) ... ok (0.001s) 2023-01-11T21:27:03.7797095Z test_tied_weights_warns (__main__.TestStatelessFunctionalAPI) ... ok (0.002s) 2023-01-11T21:27:03.7797278Z 2023-01-11T21:27:03.7797477Z ---------------------------------------------------------------------- 2023-01-11T21:27:03.7797727Z Ran 12 tests in 5.831s 2023-01-11T21:27:03.7797841Z 2023-01-11T21:27:03.7797899Z OK (skipped=1) 2023-01-11T21:27:03.7798004Z 2023-01-11T21:27:03.7798090Z Generating XML reports... 
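Editor's note: the DeprecationWarning in the test_python_dispatch group above ("Please use assertRaisesRegex instead") concerns the legacy camel-case unittest alias assertRaisesRegexp. A minimal sketch of the rename with a toy PyTorch-flavoured assertion; the test class and tensor shapes are illustrative:

    import unittest
    import torch

    class ExampleTest(unittest.TestCase):
        def test_shape_mismatch_message(self):
            # Legacy spelling (triggers the DeprecationWarning seen in the log):
            #   self.assertRaisesRegexp(RuntimeError, "shape", fn)
            # Current spelling:
            with self.assertRaisesRegex(RuntimeError, "shape"):
                torch.ones(2, 3) @ torch.ones(2, 3)  # incompatible matmul shapes raise RuntimeError

    if __name__ == "__main__":
        unittest.main()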
2023-01-11T21:27:03.7798528Z Generated XML report: test-reports/python-unittest/test_stateless/TEST-TestPythonOptimizeMode-20230111212657.xml 2023-01-11T21:27:03.7799085Z Generated XML report: test-reports/python-unittest/test_stateless/TEST-TestStatelessDeprecation-20230111212657.xml 2023-01-11T21:27:03.7799853Z Generated XML report: test-reports/python-unittest/test_stateless/TEST-TestStatelessFunctionalAPI-20230111212657.xml 2023-01-11T21:27:03.7800109Z 2023-01-11T21:27:03.7800357Z ##[endgroup] 2023-01-11T21:27:03.7800742Z FINISHED PRINTING LOG FILE of test_stateless (/var/lib/jenkins/workspace/test/test-reports/test_stateless_s5o9c9bb) 2023-01-11T21:27:03.7800945Z 2023-01-11T21:27:05.8309177Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:27:05.9163369Z Ignoring disabled issues: [] 2023-01-11T21:27:05.9306786Z Running test_testing ... [2023-01-11 21:27:05.930371] 2023-01-11T21:27:05.9308593Z Executing ['/opt/conda/bin/python', '-bb', 'test_testing.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:27:05.930650] 2023-01-11T21:27:30.6993419Z 2023-01-11T21:27:30.6993872Z Expand the folded group to see the log file of test_testing 2023-01-11T21:27:30.7025561Z ##[group]PRINTING LOG FILE of test_testing (/var/lib/jenkins/workspace/test/test-reports/test_testing_hmmsnhea) 2023-01-11T21:27:30.7026123Z 2023-01-11T21:27:30.7026298Z Running tests... 2023-01-11T21:27:30.7027257Z ---------------------------------------------------------------------- 2023-01-11T21:27:30.7028126Z Test results will be stored in test-reports/python-unittest/test_testing 2023-01-11T21:27:30.7028802Z test_bool (__main__.TestAssertClose) ... ok (0.003s) 2023-01-11T21:27:30.7029535Z test_default_tolerance_selection_mismatching_dtypes (__main__.TestAssertClose) ... ok (0.002s) 2023-01-11T21:27:30.7030317Z test_docstring_examples (__main__.TestAssertClose) ... ok (0.032s) 2023-01-11T21:27:30.7030966Z test_matching (__main__.TestAssertClose) ... ok (0.001s) 2023-01-11T21:27:30.7031688Z test_matching_atol (__main__.TestAssertClose) ... ok (0.001s) 2023-01-11T21:27:30.7032334Z test_matching_conjugate_bit (__main__.TestAssertClose) ... ok (0.002s) 2023-01-11T21:27:30.7032994Z test_matching_nan (__main__.TestAssertClose) ... ok (0.002s) 2023-01-11T21:27:30.7033658Z test_matching_nan_with_equal_nan (__main__.TestAssertClose) ... ok (0.002s) 2023-01-11T21:27:30.7034364Z test_matching_rtol (__main__.TestAssertClose) ... ok (0.001s) 2023-01-11T21:27:30.7034955Z test_meta (__main__.TestAssertClose) ... ok (0.001s) 2023-01-11T21:27:30.7035602Z test_mismatching_dtype (__main__.TestAssertClose) ... ok (0.001s) 2023-01-11T21:27:30.7036329Z test_mismatching_dtype_no_check (__main__.TestAssertClose) ... ok (0.001s) 2023-01-11T21:27:30.7037972Z test_mismatching_layout (__main__.TestAssertClose) ... /var/lib/jenkins/workspace/test/test_testing.py:619: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/SparseCsrTensorImpl.cpp:56.) 2023-01-11T21:27:30.7039498Z sparse_csr = strided.to_sparse_csr() 2023-01-11T21:27:30.7039970Z ok (0.002s) 2023-01-11T21:27:30.7040561Z test_mismatching_layout_no_check (__main__.TestAssertClose) ... ok (0.005s) 2023-01-11T21:27:30.7041280Z test_mismatching_shape (__main__.TestAssertClose) ... 
ok (0.002s) 2023-01-11T21:27:30.7041946Z test_mismatching_stride (__main__.TestAssertClose) ... ok (0.002s) 2023-01-11T21:27:30.7042669Z test_mismatching_stride_no_check (__main__.TestAssertClose) ... ok (0.002s) 2023-01-11T21:27:30.7043397Z test_mismatching_types (__main__.TestAssertClose) ... ok (0.002s) 2023-01-11T21:27:30.7044079Z test_mismatching_types_subclasses (__main__.TestAssertClose) ... ok (0.002s) 2023-01-11T21:27:30.7044846Z test_mismatching_types_type_equality (__main__.TestAssertClose) ... ok (0.001s) 2023-01-11T21:27:30.7045578Z test_mismatching_values (__main__.TestAssertClose) ... ok (0.001s) 2023-01-11T21:27:30.7046265Z test_mismatching_values_atol (__main__.TestAssertClose) ... ok (0.002s) 2023-01-11T21:27:30.7046979Z test_mismatching_values_rtol (__main__.TestAssertClose) ... ok (0.002s) 2023-01-11T21:27:30.7047851Z test_none (__main__.TestAssertClose) ... ok (0.001s) 2023-01-11T21:27:30.7048463Z test_none_mismatch (__main__.TestAssertClose) ... ok (0.001s) 2023-01-11T21:27:30.7049076Z test_numpy (__main__.TestAssertClose) ... ok (0.002s) 2023-01-11T21:27:30.7049667Z test_only_atol (__main__.TestAssertClose) ... ok (0.001s) 2023-01-11T21:27:30.7050270Z test_only_rtol (__main__.TestAssertClose) ... ok (0.001s) 2023-01-11T21:27:30.7050866Z test_scalar (__main__.TestAssertClose) ... ok (0.002s) 2023-01-11T21:27:30.7051528Z test_unexpected_error_compare (__main__.TestAssertClose) ... ok (0.002s) 2023-01-11T21:27:30.7052281Z test_unexpected_error_originate (__main__.TestAssertClose) ... ok (0.001s) 2023-01-11T21:27:30.7052965Z test_unknown_layout (__main__.TestAssertClose) ... ok (0.001s) 2023-01-11T21:27:30.7053613Z test_unknown_type (__main__.TestAssertClose) ... ok (0.011s) 2023-01-11T21:27:30.7054353Z test_mapping_mismatching_keys (__main__.TestAssertCloseContainer) ... ok (0.001s) 2023-01-11T21:27:30.7055334Z test_mapping_mismatching_values_msg (__main__.TestAssertCloseContainer) ... ok (0.001s) 2023-01-11T21:27:30.7056165Z test_sequence_mismatching_len (__main__.TestAssertCloseContainer) ... ok (0.001s) 2023-01-11T21:27:30.7056996Z test_sequence_mismatching_values_msg (__main__.TestAssertCloseContainer) ... ok (0.001s) 2023-01-11T21:27:30.7057791Z test_abs_diff (__main__.TestAssertCloseErrorMessage) ... ok (0.003s) 2023-01-11T21:27:30.7058538Z test_abs_diff_scalar (__main__.TestAssertCloseErrorMessage) ... ok (0.001s) 2023-01-11T21:27:30.7059299Z test_atol (__main__.TestAssertCloseErrorMessage) ... ok (0.003s) 2023-01-11T21:27:30.7060074Z test_identifier_scalars (__main__.TestAssertCloseErrorMessage) ... ok (0.001s) 2023-01-11T21:27:30.7060891Z test_identifier_tensor_likes (__main__.TestAssertCloseErrorMessage) ... ok (0.002s) 2023-01-11T21:27:30.7061717Z test_mismatched_elements (__main__.TestAssertCloseErrorMessage) ... ok (0.003s) 2023-01-11T21:27:30.7062769Z test_msg_callable (__main__.TestAssertCloseErrorMessage) ... ok (0.001s) 2023-01-11T21:27:30.7063531Z test_msg_str (__main__.TestAssertCloseErrorMessage) ... ok (0.001s) 2023-01-11T21:27:30.7064258Z test_not_close (__main__.TestAssertCloseErrorMessage) ... ok (0.007s) 2023-01-11T21:27:30.7065012Z test_not_equal (__main__.TestAssertCloseErrorMessage) ... ok (0.003s) 2023-01-11T21:27:30.7065736Z test_rel_diff (__main__.TestAssertCloseErrorMessage) ... ok (0.003s) 2023-01-11T21:27:30.7066489Z test_rel_diff_scalar (__main__.TestAssertCloseErrorMessage) ... ok (0.001s) 2023-01-11T21:27:30.7067212Z test_rtol (__main__.TestAssertCloseErrorMessage) ... 
ok (0.003s) 2023-01-11T21:27:30.7067967Z test_zero_div_zero (__main__.TestAssertCloseErrorMessage) ... ok (0.003s) 2023-01-11T21:27:30.7068750Z test_matching_per_channel (__main__.TestAssertCloseQuantized) ... ok (0.002s) 2023-01-11T21:27:30.7069500Z test_matching_per_tensor (__main__.TestAssertCloseQuantized) ... ok (0.002s) 2023-01-11T21:27:30.7070265Z test_mismatching_is_quantized (__main__.TestAssertCloseQuantized) ... ok (0.001s) 2023-01-11T21:27:30.7071139Z test_mismatching_qscheme (__main__.TestAssertCloseQuantized) ... ok (0.001s) 2023-01-11T21:27:30.7071876Z test_matching (__main__.TestAssertCloseSparseBSC) ... ok (0.002s) 2023-01-11T21:27:30.7072626Z test_mismatching_ccol_indices_msg (__main__.TestAssertCloseSparseBSC) ... ok (0.003s) 2023-01-11T21:27:30.7073448Z test_mismatching_row_indices_msg (__main__.TestAssertCloseSparseBSC) ... ok (0.003s) 2023-01-11T21:27:30.7074222Z test_mismatching_values_msg (__main__.TestAssertCloseSparseBSC) ... ok (0.003s) 2023-01-11T21:27:30.7074901Z test_matching (__main__.TestAssertCloseSparseBSR) ... ok (0.002s) 2023-01-11T21:27:30.7075618Z test_mismatching_col_indices_msg (__main__.TestAssertCloseSparseBSR) ... ok (0.003s) 2023-01-11T21:27:30.7076389Z test_mismatching_crow_indices_msg (__main__.TestAssertCloseSparseBSR) ... ok (0.003s) 2023-01-11T21:27:30.7077156Z test_mismatching_values_msg (__main__.TestAssertCloseSparseBSR) ... ok (0.003s) 2023-01-11T21:27:30.7078088Z test_matching_coalesced (__main__.TestAssertCloseSparseCOO) ... ok (0.002s) 2023-01-11T21:27:30.7078871Z test_matching_uncoalesced (__main__.TestAssertCloseSparseCOO) ... ok (0.001s) 2023-01-11T21:27:30.7079668Z test_mismatching_indices_msg (__main__.TestAssertCloseSparseCOO) ... ok (0.003s) 2023-01-11T21:27:30.7080467Z test_mismatching_nnz (__main__.TestAssertCloseSparseCOO) ... ok (0.002s) 2023-01-11T21:27:30.7081269Z test_mismatching_sparse_dims (__main__.TestAssertCloseSparseCOO) ... ok (0.002s) 2023-01-11T21:27:30.7082076Z test_mismatching_values_msg (__main__.TestAssertCloseSparseCOO) ... ok (0.003s) 2023-01-11T21:27:30.7082825Z test_matching (__main__.TestAssertCloseSparseCSC) ... ok (0.002s) 2023-01-11T21:27:30.7083584Z test_mismatching_ccol_indices_msg (__main__.TestAssertCloseSparseCSC) ... ok (0.003s) 2023-01-11T21:27:30.7084413Z test_mismatching_row_indices_msg (__main__.TestAssertCloseSparseCSC) ... ok (0.003s) 2023-01-11T21:27:30.7085373Z test_mismatching_values_msg (__main__.TestAssertCloseSparseCSC) ... ok (0.003s) 2023-01-11T21:27:30.7085810Z test_hybrid_support (__main__.TestAssertCloseSparseCSR) ... expected failure (0.007s) 2023-01-11T21:27:30.7086256Z test_matching (__main__.TestAssertCloseSparseCSR) ... ok (0.002s) 2023-01-11T21:27:30.7086718Z test_mismatching_col_indices_msg (__main__.TestAssertCloseSparseCSR) ... ok (0.003s) 2023-01-11T21:27:30.7087209Z test_mismatching_crow_indices_msg (__main__.TestAssertCloseSparseCSR) ... ok (0.003s) 2023-01-11T21:27:30.7087680Z test_mismatching_values_msg (__main__.TestAssertCloseSparseCSR) ... ok (0.003s) 2023-01-11T21:27:30.7088140Z test_filtering_env_var (__main__.TestFrameworkUtils) ... ok (8.213s) 2023-01-11T21:27:30.7088540Z test_circular_dependencies (__main__.TestImports) 2023-01-11T21:27:30.7089250Z Checks that all modules inside torch can be imported ... 
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:27:30.7090011Z 2023-01-11 21:27:24,141 - torch.distributed.nn.jit.instantiator - INFO - Created a temporary directory at /tmp/tmpeo02lgi9 2023-01-11T21:27:30.7090790Z 2023-01-11 21:27:24,142 - torch.distributed.nn.jit.instantiator - INFO - Writing /tmp/tmpeo02lgi9/_remote_module_non_scriptable.py 2023-01-11T21:27:30.7091225Z ok (8.694s) 2023-01-11T21:27:30.7091578Z test_no_mutate_global_logging_on_import_path_functorch (__main__.TestImports) ... ok (1.512s) 2023-01-11T21:27:30.7092049Z test_no_mutate_global_logging_on_import_path_torch (__main__.TestImports) ... ok (1.507s) 2023-01-11T21:27:30.7092654Z test_no_warning_on_import (__main__.TestImports) ... /var/lib/jenkins/workspace/test/test_testing.py:1836: DeprecationWarning: Please use assertEqual instead. 2023-01-11T21:27:30.7093134Z self.assertEquals(out, "") 2023-01-11T21:27:30.7093373Z ok (1.528s) 2023-01-11T21:27:30.7093685Z test_sample_input (__main__.TestOpInfos) ... ok (0.002s) 2023-01-11T21:27:30.7094066Z test_sample_input_metadata (__main__.TestOpInfos) ... ok (0.001s) 2023-01-11T21:27:30.7094481Z test_default_names (__main__.TestTestParametrization) ... ok (0.001s) 2023-01-11T21:27:30.7094961Z test_modules_decorator_misuse_error (__main__.TestTestParametrization) ... ok (0.001s) 2023-01-11T21:27:30.7095451Z test_multiple_handling_of_same_param_error (__main__.TestTestParametrization) ... ok (0.001s) 2023-01-11T21:27:30.7095898Z test_name_fn (__main__.TestTestParametrization) ... ok (0.002s) 2023-01-11T21:27:30.7096330Z test_ops_decorator_misuse_error (__main__.TestTestParametrization) ... ok (0.001s) 2023-01-11T21:27:30.7096797Z test_subtest_expected_failure_x_1 (__main__.TestTestParametrization) ... ok (0.001s) 2023-01-11T21:27:30.7097290Z test_subtest_expected_failure_x_2 (__main__.TestTestParametrization) ... expected failure (0.001s) 2023-01-11T21:27:30.7097797Z test_subtest_expected_failure_x_3 (__main__.TestTestParametrization) ... ok (0.000s) 2023-01-11T21:27:30.7098254Z test_subtest_names (__main__.TestTestParametrization) ... ok (0.001s) 2023-01-11T21:27:30.7098882Z test_two_things_subtest_expected_failure_x_1_y_4 (__main__.TestTestParametrization) ... expected failure (0.001s) 2023-01-11T21:27:30.7099422Z test_two_things_subtest_expected_failure_x_1_y_5 (__main__.TestTestParametrization) ... expected failure (0.001s) 2023-01-11T21:27:30.7099985Z test_two_things_subtest_expected_failure_x_1_y_6 (__main__.TestTestParametrization) ... expected failure (0.001s) 2023-01-11T21:27:30.7100512Z test_two_things_subtest_expected_failure_x_2_y_4 (__main__.TestTestParametrization) ... ok (0.001s) 2023-01-11T21:27:30.7101051Z test_two_things_subtest_expected_failure_x_2_y_5 (__main__.TestTestParametrization) ... ok (0.001s) 2023-01-11T21:27:30.7101564Z test_two_things_subtest_expected_failure_x_2_y_6 (__main__.TestTestParametrization) ... expected failure (0.001s) 2023-01-11T21:27:30.7102089Z test_two_things_subtest_expected_failure_x_3_y_4 (__main__.TestTestParametrization) ... ok (0.001s) 2023-01-11T21:27:30.7102872Z test_two_things_subtest_expected_failure_x_3_y_5 (__main__.TestTestParametrization) ... ok (0.001s) 2023-01-11T21:27:30.7103430Z test_two_things_subtest_expected_failure_x_3_y_6 (__main__.TestTestParametrization) ... 
expected failure (0.001s) 2023-01-11T21:27:30.7103731Z 2023-01-11T21:27:30.7104061Z ---------------------------------------------------------------------- 2023-01-11T21:27:30.7104393Z Ran 103 tests in 21.677s 2023-01-11T21:27:30.7104558Z 2023-01-11T21:27:30.7104683Z OK (expected failures=7) 2023-01-11T21:27:30.7104844Z 2023-01-11T21:27:30.7104953Z Generating XML reports... 2023-01-11T21:27:30.7105562Z Generated XML report: test-reports/python-unittest/test_testing/TEST-TestAssertClose-20230111212708.xml 2023-01-11T21:27:30.7106359Z Generated XML report: test-reports/python-unittest/test_testing/TEST-TestAssertCloseContainer-20230111212708.xml 2023-01-11T21:27:30.7107165Z Generated XML report: test-reports/python-unittest/test_testing/TEST-TestAssertCloseErrorMessage-20230111212708.xml 2023-01-11T21:27:30.7107968Z Generated XML report: test-reports/python-unittest/test_testing/TEST-TestAssertCloseQuantized-20230111212708.xml 2023-01-11T21:27:30.7108717Z Generated XML report: test-reports/python-unittest/test_testing/TEST-TestAssertCloseSparseBSC-20230111212708.xml 2023-01-11T21:27:30.7109478Z Generated XML report: test-reports/python-unittest/test_testing/TEST-TestAssertCloseSparseBSR-20230111212708.xml 2023-01-11T21:27:30.7110263Z Generated XML report: test-reports/python-unittest/test_testing/TEST-TestAssertCloseSparseCOO-20230111212708.xml 2023-01-11T21:27:30.7111141Z Generated XML report: test-reports/python-unittest/test_testing/TEST-TestAssertCloseSparseCSC-20230111212708.xml 2023-01-11T21:27:30.7111954Z Generated XML report: test-reports/python-unittest/test_testing/TEST-TestAssertCloseSparseCSR-20230111212708.xml 2023-01-11T21:27:30.7112729Z Generated XML report: test-reports/python-unittest/test_testing/TEST-TestFrameworkUtils-20230111212708.xml 2023-01-11T21:27:30.7113426Z Generated XML report: test-reports/python-unittest/test_testing/TEST-TestImports-20230111212708.xml 2023-01-11T21:27:30.7114100Z Generated XML report: test-reports/python-unittest/test_testing/TEST-TestOpInfos-20230111212708.xml 2023-01-11T21:27:30.7114844Z Generated XML report: test-reports/python-unittest/test_testing/TEST-TestTestParametrization-20230111212708.xml 2023-01-11T21:27:30.7115196Z 2023-01-11T21:27:30.7115623Z ##[endgroup] 2023-01-11T21:27:30.7116167Z FINISHED PRINTING LOG FILE of test_testing (/var/lib/jenkins/workspace/test/test-reports/test_testing_hmmsnhea) 2023-01-11T21:27:30.7116473Z 2023-01-11T21:27:32.7437301Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:27:32.8280251Z Ignoring disabled issues: [] 2023-01-11T21:27:32.8423556Z Running test_transformers ... [2023-01-11 21:27:32.842040] 2023-01-11T21:27:32.8425523Z Executing ['/opt/conda/bin/python', '-bb', 'test_transformers.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:27:32.842338] 2023-01-11T21:27:41.9101071Z 2023-01-11T21:27:41.9101620Z Expand the folded group to see the log file of test_transformers 2023-01-11T21:27:41.9102722Z ##[group]PRINTING LOG FILE of test_transformers (/var/lib/jenkins/workspace/test/test-reports/test_transformers_ohr3p3nk) 2023-01-11T21:27:41.9103069Z 2023-01-11T21:27:41.9103379Z Running tests... 2023-01-11T21:27:41.9103945Z ---------------------------------------------------------------------- 2023-01-11T21:27:41.9104685Z Test results will be stored in test-reports/python-unittest/test_transformers 2023-01-11T21:27:41.9105413Z test_bias_is_none (__main__.TestTransformers) ... 
ok (0.003s) 2023-01-11T21:27:41.9106837Z test_decoder_only_layer (__main__.TestTransformers) ... skip: Fairseq not found (0.005s) 2023-01-11T21:27:41.9107544Z test_flash_autocast_fp32_bfloat16 (__main__.TestTransformers) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:27:41.9108192Z test_flash_autocast_fp32_float16 (__main__.TestTransformers) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:27:41.9108709Z test_flash_fail_fp32t (__main__.TestTransformers) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:27:41.9109040Z test_fused_sdp_choice_type_dense (__main__.TestTransformers) ... ok (0.002s) 2023-01-11T21:27:41.9109338Z test_fused_sdp_choice_type_nested (__main__.TestTransformers) ... ok (0.002s) 2023-01-11T21:27:41.9109628Z test_mask_check_fastpath (__main__.TestTransformers) 2023-01-11T21:27:41.9112979Z Test that fastpath is executed independently of the masks that are passed. ... /var/lib/jenkins/workspace/test/test_transformers.py:916: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/NestedTensorImpl.cpp:179.) 2023-01-11T21:27:41.9114442Z nested_tensor_return_value = torch.nested.nested_tensor([torch.ones((2, 2), dtype=torch.float)]) 2023-01-11T21:27:41.9115059Z ok (0.011s) 2023-01-11T21:27:41.9115748Z test_scaled_dot_product_attention_3D_input_dim_2D_attn_mask_dropout_p_0_0_device_cpu (__main__.TestTransformers) ... ok (0.003s) 2023-01-11T21:27:41.9116741Z test_scaled_dot_product_attention_3D_input_dim_2D_attn_mask_dropout_p_0_2_device_cpu (__main__.TestTransformers) ... ok (0.002s) 2023-01-11T21:27:41.9117742Z test_scaled_dot_product_attention_3D_input_dim_2D_attn_mask_dropout_p_0_5_device_cpu (__main__.TestTransformers) ... ok (0.002s) 2023-01-11T21:27:41.9118711Z test_scaled_dot_product_attention_3D_input_dim_2D_causal_attn_mask_dropout_p_0_0_device_cpu (__main__.TestTransformers) ... ok (0.010s) 2023-01-11T21:27:41.9119716Z test_scaled_dot_product_attention_3D_input_dim_2D_causal_attn_mask_dropout_p_0_2_device_cpu (__main__.TestTransformers) ... ok (0.010s) 2023-01-11T21:27:41.9120728Z test_scaled_dot_product_attention_3D_input_dim_2D_causal_attn_mask_dropout_p_0_5_device_cpu (__main__.TestTransformers) ... ok (0.010s) 2023-01-11T21:27:41.9121751Z test_scaled_dot_product_attention_3D_input_dim_3D_attn_mask_dropout_p_0_0_device_cpu (__main__.TestTransformers) ... ok (0.002s) 2023-01-11T21:27:41.9122705Z test_scaled_dot_product_attention_3D_input_dim_3D_attn_mask_dropout_p_0_2_device_cpu (__main__.TestTransformers) ... ok (0.002s) 2023-01-11T21:27:41.9123662Z test_scaled_dot_product_attention_3D_input_dim_3D_attn_mask_dropout_p_0_5_device_cpu (__main__.TestTransformers) ... ok (0.002s) 2023-01-11T21:27:41.9143516Z test_scaled_dot_product_attention_3D_input_dim_3D_causal_attn_mask_dropout_p_0_0_device_cpu (__main__.TestTransformers) ... ok (0.009s) 2023-01-11T21:27:41.9144068Z test_scaled_dot_product_attention_3D_input_dim_3D_causal_attn_mask_dropout_p_0_2_device_cpu (__main__.TestTransformers) ... ok (0.010s) 2023-01-11T21:27:41.9144584Z test_scaled_dot_product_attention_3D_input_dim_3D_causal_attn_mask_dropout_p_0_5_device_cpu (__main__.TestTransformers) ... ok (0.010s) 2023-01-11T21:27:41.9145222Z test_scaled_dot_product_attention_3D_input_dim_no_attn_mask_dropout_p_0_0_device_cpu (__main__.TestTransformers) ... 
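The nested-tensor UserWarning above refers to the prototype torch.nested API; a minimal sketch of constructing one from ragged components (illustrative shapes only):

    import torch

    # Prototype API, as the UserWarning notes; the component tensors may differ in length.
    nt = torch.nested.nested_tensor([
        torch.ones((2, 2), dtype=torch.float),
        torch.zeros((3, 2), dtype=torch.float),
    ])
    print(nt.is_nested)   # True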
ok (0.020s) 2023-01-11T21:27:41.9146042Z test_scaled_dot_product_attention_3D_input_dim_no_attn_mask_dropout_p_0_2_device_cpu (__main__.TestTransformers) ... ok (0.015s) 2023-01-11T21:27:41.9146433Z test_scaled_dot_product_attention_3D_input_dim_no_attn_mask_dropout_p_0_5_device_cpu (__main__.TestTransformers) ... ok (0.017s) 2023-01-11T21:27:41.9146830Z test_scaled_dot_product_attention_4D_input_dim_2D_attn_mask_dropout_p_0_0_device_cpu (__main__.TestTransformers) ... ok (0.002s) 2023-01-11T21:27:41.9147232Z test_scaled_dot_product_attention_4D_input_dim_2D_attn_mask_dropout_p_0_2_device_cpu (__main__.TestTransformers) ... ok (0.002s) 2023-01-11T21:27:41.9147633Z test_scaled_dot_product_attention_4D_input_dim_2D_attn_mask_dropout_p_0_5_device_cpu (__main__.TestTransformers) ... ok (0.002s) 2023-01-11T21:27:41.9148027Z test_scaled_dot_product_attention_4D_input_dim_2D_causal_attn_mask_dropout_p_0_0_device_cpu (__main__.TestTransformers) ... ok (0.009s) 2023-01-11T21:27:41.9148694Z test_scaled_dot_product_attention_4D_input_dim_2D_causal_attn_mask_dropout_p_0_2_device_cpu (__main__.TestTransformers) ... ok (0.010s) 2023-01-11T21:27:41.9149435Z test_scaled_dot_product_attention_4D_input_dim_2D_causal_attn_mask_dropout_p_0_5_device_cpu (__main__.TestTransformers) ... ok (0.010s) 2023-01-11T21:27:41.9150031Z test_scaled_dot_product_attention_4D_input_dim_4D_attn_mask_dropout_p_0_0_device_cpu (__main__.TestTransformers) ... ok (0.002s) 2023-01-11T21:27:41.9150765Z test_scaled_dot_product_attention_4D_input_dim_4D_attn_mask_dropout_p_0_2_device_cpu (__main__.TestTransformers) ... ok (0.002s) 2023-01-11T21:27:41.9151386Z test_scaled_dot_product_attention_4D_input_dim_4D_attn_mask_dropout_p_0_5_device_cpu (__main__.TestTransformers) ... ok (0.002s) 2023-01-11T21:27:41.9151797Z test_scaled_dot_product_attention_4D_input_dim_4D_causal_attn_mask_dropout_p_0_0_device_cpu (__main__.TestTransformers) ... ok (0.009s) 2023-01-11T21:27:41.9152223Z test_scaled_dot_product_attention_4D_input_dim_4D_causal_attn_mask_dropout_p_0_2_device_cpu (__main__.TestTransformers) ... ok (0.010s) 2023-01-11T21:27:41.9152759Z test_scaled_dot_product_attention_4D_input_dim_4D_causal_attn_mask_dropout_p_0_5_device_cpu (__main__.TestTransformers) ... ok (0.010s) 2023-01-11T21:27:41.9153143Z test_scaled_dot_product_attention_4D_input_dim_no_attn_mask_dropout_p_0_0_device_cpu (__main__.TestTransformers) ... ok (0.014s) 2023-01-11T21:27:41.9153569Z test_scaled_dot_product_attention_4D_input_dim_no_attn_mask_dropout_p_0_2_device_cpu (__main__.TestTransformers) ... ok (0.016s) 2023-01-11T21:27:41.9154218Z test_scaled_dot_product_attention_4D_input_dim_no_attn_mask_dropout_p_0_5_device_cpu (__main__.TestTransformers) ... ok (0.016s) 2023-01-11T21:27:41.9154990Z test_scaled_dot_product_attention_fused_kernels_packed_accuracy_type_dense_fused_kernel_flash (__main__.TestTransformers) ... skip: Flash Attention was not built for this system (0.002s) 2023-01-11T21:27:41.9155959Z test_scaled_dot_product_attention_fused_kernels_packed_accuracy_type_dense_fused_kernel_mem_efficient (__main__.TestTransformers) ... skip: Flash Attention was not built for this system (0.002s) 2023-01-11T21:27:41.9156493Z test_scaled_dot_product_attention_fused_kernels_packed_accuracy_type_nested_fused_kernel_flash (__main__.TestTransformers) ... 
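The test_scaled_dot_product_attention_* parametrizations above exercise torch.nn.functional.scaled_dot_product_attention on CPU; a minimal sketch of the call with illustrative shapes (not the tests' own inputs):

    import torch
    import torch.nn.functional as F

    # Shapes: (batch, num_heads, seq_len, head_dim); is_causal applies a lower-triangular mask.
    q = torch.randn(2, 4, 8, 16)
    k = torch.randn(2, 4, 8, 16)
    v = torch.randn(2, 4, 8, 16)
    out = F.scaled_dot_product_attention(q, k, v, dropout_p=0.0, is_causal=True)
    print(out.shape)   # torch.Size([2, 4, 8, 16])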
skip: Flash Attention was not built for this system (0.002s) 2023-01-11T21:27:41.9157027Z test_scaled_dot_product_attention_fused_kernels_packed_accuracy_type_nested_fused_kernel_mem_efficient (__main__.TestTransformers) ... skip: Flash Attention was not built for this system (0.002s) 2023-01-11T21:27:41.9157554Z test_scaled_dot_product_attention_fused_kernels_packed_type_dense_is_contiguous_False (__main__.TestTransformers) ... skip: Flash Attention was not built for this system (0.001s) 2023-01-11T21:27:41.9158063Z test_scaled_dot_product_attention_fused_kernels_packed_type_dense_is_contiguous_True (__main__.TestTransformers) ... skip: Flash Attention was not built for this system (0.001s) 2023-01-11T21:27:41.9158654Z test_scaled_dot_product_attention_fused_kernels_packed_type_nested_is_contiguous_False (__main__.TestTransformers) ... skip: Flash Attention was not built for this system (0.001s) 2023-01-11T21:27:41.9159612Z test_scaled_dot_product_attention_fused_kernels_packed_type_nested_is_contiguous_True (__main__.TestTransformers) ... skip: Flash Attention was not built for this system (0.001s) 2023-01-11T21:27:41.9160229Z test_scaled_dot_product_attention_fused_kernels_type_dense_is_contiguous_False (__main__.TestTransformers) ... skip: Flash Attention was not built for this system (0.001s) 2023-01-11T21:27:41.9161086Z test_scaled_dot_product_attention_fused_kernels_type_dense_is_contiguous_True (__main__.TestTransformers) ... skip: Flash Attention was not built for this system (0.001s) 2023-01-11T21:27:41.9161575Z test_scaled_dot_product_attention_fused_kernels_type_nested_is_contiguous_False (__main__.TestTransformers) ... skip: Flash Attention was not built for this system (0.001s) 2023-01-11T21:27:41.9162109Z test_scaled_dot_product_attention_fused_kernels_type_nested_is_contiguous_True (__main__.TestTransformers) ... skip: Flash Attention was not built for this system (0.001s) 2023-01-11T21:27:41.9162573Z test_sdp_fused_grad_against_math_contiguous_inputs_False (__main__.TestTransformers) ... skip: Flash Attention was not built for this system (0.001s) 2023-01-11T21:27:41.9163017Z test_sdp_fused_grad_against_math_contiguous_inputs_True (__main__.TestTransformers) ... skip: Flash Attention was not built for this system (0.001s) 2023-01-11T21:27:41.9163453Z test_sdp_math_gradcheck_contiguous_inputs_False (__main__.TestTransformers) ... skip: Flash Attention was not built for this system (0.001s) 2023-01-11T21:27:41.9163882Z test_sdp_math_gradcheck_contiguous_inputs_True (__main__.TestTransformers) ... skip: Flash Attention was not built for this system (0.001s) 2023-01-11T21:27:41.9164247Z test_sdp_runtime_dispatch (__main__.TestTransformers) ... skip: CUDA unavailable (0.002s) 2023-01-11T21:27:41.9164894Z test_self_attn_TxT_attn_mask (__main__.TestTransformers) ... skip: 4D mask not supported yet - activate when 4D mask supported (0.001s) 2023-01-11T21:27:41.9165329Z test_train_with_is_causal_device_cpu (__main__.TestTransformers) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s) 2023-01-11T21:27:41.9165775Z test_train_with_pad_and_catch_error_device_cpu (__main__.TestTransformers) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.002s) 2023-01-11T21:27:41.9166213Z test_transformerencoder_batch_first_False_training_False_enable_nested_tensor_False_device_cpu (__main__.TestTransformers) ... 
ok (0.031s) 2023-01-11T21:27:41.9166658Z test_transformerencoder_batch_first_False_training_False_enable_nested_tensor_True_device_cpu (__main__.TestTransformers) ... ok (0.030s) 2023-01-11T21:27:41.9167093Z test_transformerencoder_batch_first_False_training_True_enable_nested_tensor_False_device_cpu (__main__.TestTransformers) ... ok (0.033s) 2023-01-11T21:27:41.9167530Z test_transformerencoder_batch_first_False_training_True_enable_nested_tensor_True_device_cpu (__main__.TestTransformers) ... ok (0.033s) 2023-01-11T21:27:41.9167949Z test_transformerencoder_batch_first_True_training_False_enable_nested_tensor_False_device_cpu (__main__.TestTransformers) ... ok (0.027s) 2023-01-11T21:27:41.9168385Z test_transformerencoder_batch_first_True_training_False_enable_nested_tensor_True_device_cpu (__main__.TestTransformers) ... ok (0.027s) 2023-01-11T21:27:41.9168812Z test_transformerencoder_batch_first_True_training_True_enable_nested_tensor_False_device_cpu (__main__.TestTransformers) ... ok (0.033s) 2023-01-11T21:27:41.9169239Z test_transformerencoder_batch_first_True_training_True_enable_nested_tensor_True_device_cpu (__main__.TestTransformers) ... ok (0.033s) 2023-01-11T21:27:41.9169679Z test_transformerencoder_fastpath_device_cpu_use_torchscript_False_enable_nested_tensor_False_use_autocast_False_d_model_12 (__main__.TestTransformers) 2023-01-11T21:27:41.9170374Z Test TransformerEncoder fastpath output matches slowpath output ... /var/lib/jenkins/workspace/test/test_transformers.py:215: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:27:41.9170989Z torch.tensor(pair[0], device=device, dtype=torch.get_default_dtype()), # float input 2023-01-11T21:27:41.9171237Z ok (0.565s) 2023-01-11T21:27:41.9171569Z test_transformerencoder_fastpath_device_cpu_use_torchscript_False_enable_nested_tensor_False_use_autocast_False_d_model_256 (__main__.TestTransformers) 2023-01-11T21:27:41.9171961Z Test TransformerEncoder fastpath output matches slowpath output ... ok (0.807s) 2023-01-11T21:27:41.9172374Z test_transformerencoder_fastpath_device_cpu_use_torchscript_False_enable_nested_tensor_False_use_autocast_True_d_model_12 (__main__.TestTransformers) 2023-01-11T21:27:41.9173129Z Test TransformerEncoder fastpath output matches slowpath output ... /opt/conda/lib/python3.10/site-packages/torch/amp/autocast_mode.py:204: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling 2023-01-11T21:27:41.9173697Z warnings.warn('User provided device_type of \'cuda\', but CUDA is not available. Disabling') 2023-01-11T21:27:41.9173940Z ok (0.543s) 2023-01-11T21:27:41.9174268Z test_transformerencoder_fastpath_device_cpu_use_torchscript_False_enable_nested_tensor_False_use_autocast_True_d_model_256 (__main__.TestTransformers) 2023-01-11T21:27:41.9174669Z Test TransformerEncoder fastpath output matches slowpath output ... ok (0.835s) 2023-01-11T21:27:41.9175074Z test_transformerencoder_fastpath_device_cpu_use_torchscript_False_enable_nested_tensor_True_use_autocast_False_d_model_12 (__main__.TestTransformers) 2023-01-11T21:27:41.9175452Z Test TransformerEncoder fastpath output matches slowpath output ... 
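The UserWarning above recommends clone().detach() when copy-constructing from an existing tensor; a minimal sketch of that pattern (hypothetical variable names):

    import torch

    src = torch.randn(3)
    # torch.tensor(src, ...) copies but triggers the UserWarning seen above;
    # the recommended copy-construction is:
    copy = src.clone().detach().to(dtype=torch.get_default_dtype())
    # append .requires_grad_(True) if the copy should track gradients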
ok (0.600s) 2023-01-11T21:27:41.9175858Z test_transformerencoder_fastpath_device_cpu_use_torchscript_False_enable_nested_tensor_True_use_autocast_False_d_model_256 (__main__.TestTransformers) 2023-01-11T21:27:41.9176256Z Test TransformerEncoder fastpath output matches slowpath output ... ok (0.932s) 2023-01-11T21:27:41.9176639Z test_transformerencoder_fastpath_device_cpu_use_torchscript_False_enable_nested_tensor_True_use_autocast_True_d_model_12 (__main__.TestTransformers) 2023-01-11T21:27:41.9177028Z Test TransformerEncoder fastpath output matches slowpath output ... ok (0.532s) 2023-01-11T21:27:41.9177431Z test_transformerencoder_fastpath_device_cpu_use_torchscript_False_enable_nested_tensor_True_use_autocast_True_d_model_256 (__main__.TestTransformers) 2023-01-11T21:27:41.9177822Z Test TransformerEncoder fastpath output matches slowpath output ... ok (0.898s) 2023-01-11T21:27:41.9178199Z test_transformerencoder_square_input_with_no_grad_False_training_False_enable_nested_tensor_False_device_cpu (__main__.TestTransformers) 2023-01-11T21:27:41.9178610Z Test for edge cases when input of shape (batch size, sequence length, embedding dimension) has ... ok (0.007s) 2023-01-11T21:27:41.9179026Z test_transformerencoder_square_input_with_no_grad_False_training_True_enable_nested_tensor_False_device_cpu (__main__.TestTransformers) 2023-01-11T21:27:41.9179435Z Test for edge cases when input of shape (batch size, sequence length, embedding dimension) has ... ok (0.006s) 2023-01-11T21:27:41.9179832Z test_transformerencoder_square_input_with_no_grad_True_training_False_enable_nested_tensor_False_device_cpu (__main__.TestTransformers) 2023-01-11T21:27:41.9180238Z Test for edge cases when input of shape (batch size, sequence length, embedding dimension) has ... ok (0.005s) 2023-01-11T21:27:41.9180645Z test_transformerencoder_square_input_with_no_grad_True_training_True_enable_nested_tensor_False_device_cpu (__main__.TestTransformers) 2023-01-11T21:27:41.9181049Z Test for edge cases when input of shape (batch size, sequence length, embedding dimension) has ... ok (0.005s) 2023-01-11T21:27:41.9181447Z test_transformerencoderlayer_src_mask_device_cpu_nhead_1 (__main__.TestTransformers) ... ok (0.003s) 2023-01-11T21:27:41.9181824Z test_transformerencoderlayer_src_mask_device_cpu_nhead_4 (__main__.TestTransformers) ... ok (0.002s) 2023-01-11T21:27:41.9182193Z test_transformerencoderlayer_src_mask_device_cpu_nhead_8 (__main__.TestTransformers) ... ok (0.002s) 2023-01-11T21:27:41.9182723Z test_unaligned_tensors (__main__.TestTransformers) ... skip: CUDA unavailable (0.001s) 2023-01-11T21:27:41.9182897Z 2023-01-11T21:27:41.9183120Z ---------------------------------------------------------------------- 2023-01-11T21:27:41.9183367Z Ran 82 tests in 6.285s 2023-01-11T21:27:41.9183486Z 2023-01-11T21:27:41.9183562Z OK (skipped=25) 2023-01-11T21:27:41.9183674Z 2023-01-11T21:27:41.9183746Z Generating XML reports... 2023-01-11T21:27:41.9184169Z Generated XML report: test-reports/python-unittest/test_transformers/TEST-TestTransformers-20230111212735.xml 2023-01-11T21:27:41.9184403Z 2023-01-11T21:27:41.9184765Z ##[endgroup] 2023-01-11T21:27:41.9185233Z FINISHED PRINTING LOG FILE of test_transformers (/var/lib/jenkins/workspace/test/test-reports/test_transformers_ohr3p3nk) 2023-01-11T21:27:41.9185461Z 2023-01-11T21:27:43.9281997Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:27:44.0159733Z Ignoring disabled issues: [] 2023-01-11T21:27:44.0301828Z Running test_utils ... 
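The test_transformerencoder_fastpath_* cases above sweep batch_first / enable_nested_tensor / autocast combinations; a minimal sketch of the module configuration involved, assuming only the documented nn.TransformerEncoder API (illustrative sizes):

    import torch
    import torch.nn as nn

    layer = nn.TransformerEncoderLayer(d_model=12, nhead=4, batch_first=True)
    encoder = nn.TransformerEncoder(layer, num_layers=2, enable_nested_tensor=True).eval()
    src = torch.randn(2, 5, 12)                          # (batch, seq, d_model) with batch_first=True
    padding_mask = torch.zeros(2, 5, dtype=torch.bool)   # True would mark padded positions
    with torch.no_grad():                                # eval + no_grad makes the fastpath eligible
        out = encoder(src, src_key_padding_mask=padding_mask)
    print(out.shape)   # torch.Size([2, 5, 12])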
[2023-01-11 21:27:44.029901] 2023-01-11T21:27:44.0304332Z Executing ['/opt/conda/bin/python', '-bb', 'test_utils.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:27:44.030161] 2023-01-11T21:27:48.6727735Z 2023-01-11T21:27:48.6728269Z Expand the folded group to see the log file of test_utils 2023-01-11T21:27:48.6729399Z ##[group]PRINTING LOG FILE of test_utils (/var/lib/jenkins/workspace/test/test-reports/test_utils_3ng1pffq) 2023-01-11T21:27:48.6730255Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:27:48.6730534Z 2023-01-11T21:27:48.6730661Z Running tests... 2023-01-11T21:27:48.6731249Z ---------------------------------------------------------------------- 2023-01-11T21:27:48.6731833Z Test results will be stored in test-reports/python-unittest/test_utils 2023-01-11T21:27:48.6732244Z test_assert_scriptable (__main__.TestAssert) ... ok (0.243s) 2023-01-11T21:27:48.6732627Z test_assert_true (__main__.TestAssert) ... ok (0.001s) 2023-01-11T21:27:48.6733470Z test_bottleneck_cpu_only (__main__.TestBottleneck) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/68433 for allplatform(s) . If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.000s) 2023-01-11T21:27:48.6734050Z test_bottleneck_cuda (__main__.TestBottleneck) ... skip: No CUDA (0.000s) 2023-01-11T21:27:48.6734321Z test_checkpoint (__main__.TestCheckpoint) ... ok (0.011s) 2023-01-11T21:27:48.6734603Z test_checkpoint_module_list (__main__.TestCheckpoint) ... ok (0.010s) 2023-01-11T21:27:48.6735187Z test_checkpoint_no_tensors (__main__.TestCheckpoint) ... /opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py:31: UserWarning: None of the inputs have requires_grad=True. Gradients will be None 2023-01-11T21:27:48.6735634Z warnings.warn("None of the inputs have requires_grad=True. Gradients will be None") 2023-01-11T21:27:48.6735861Z ok (0.003s) 2023-01-11T21:27:48.6736088Z test_checkpoint_non_tensor (__main__.TestCheckpoint) ... ok (0.001s) 2023-01-11T21:27:48.6736396Z test_checkpoint_non_tensor_inputs_outputs (__main__.TestCheckpoint) ... ok (0.003s) 2023-01-11T21:27:48.6736741Z test_checkpoint_not_preserve_rng_state_and_without_reentrant (__main__.TestCheckpoint) ... skip: No CUDA (0.000s) 2023-01-11T21:27:48.6737078Z test_checkpoint_partial_grad (__main__.TestCheckpoint) ... ok (0.001s) 2023-01-11T21:27:48.6737366Z test_checkpoint_rng_cpu (__main__.TestCheckpoint) ... ok (0.011s) 2023-01-11T21:27:48.6737878Z test_checkpoint_rng_cuda (__main__.TestCheckpoint) ... skip: No CUDA (0.001s) 2023-01-11T21:27:48.6738192Z test_checkpoint_sequential_deprecated_multiple_args (__main__.TestCheckpoint) ... ok (0.001s) 2023-01-11T21:27:48.6738527Z test_checkpoint_sequential_deprecated_no_args (__main__.TestCheckpoint) ... ok (0.001s) 2023-01-11T21:27:48.6738835Z test_checkpoint_trigger (__main__.TestCheckpoint) ... ok (0.004s) 2023-01-11T21:27:48.6739099Z test_checkpoint_valid (__main__.TestCheckpoint) ... ok (0.003s) 2023-01-11T21:27:48.6739428Z test_checkpointing_without_reentrant_early_free (__main__.TestCheckpoint) ... skip: Test requires CUDA (0.001s) 2023-01-11T21:27:48.6740266Z test_smoke (__main__.TestCollectEnv) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/77345 for platform(s) linux. 
If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.000s) 2023-01-11T21:27:48.6740900Z test_cc_compiler_is_ok (__main__.TestCppExtensionUtils) ... ok (0.010s) 2023-01-11T21:27:48.6741208Z test_cpp_compiler_is_ok (__main__.TestCppExtensionUtils) ... ok (0.009s) 2023-01-11T21:27:48.6742014Z test_multi_drop (__main__.TestDataLoaderUtils) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/82865 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.000s) 2023-01-11T21:27:48.6743058Z test_multi_keep (__main__.TestDataLoaderUtils) ... skip: FIXME: Intermittent CUDA out-of-memory error on Windows and time-out under ASAN (0.000s) 2023-01-11T21:27:48.6743429Z test_random_seed (__main__.TestDataLoaderUtils) ... ok (0.133s) 2023-01-11T21:27:48.6743709Z test_single_drop (__main__.TestDataLoaderUtils) ... ok (0.002s) 2023-01-11T21:27:48.6743973Z test_single_keep (__main__.TestDataLoaderUtils) ... ok (0.001s) 2023-01-11T21:27:48.6744267Z test_external_module_register (__main__.TestExtensionUtils) ... ok (0.002s) 2023-01-11T21:27:48.6744549Z test_import_hipify (__main__.TestHipify) ... ok (0.000s) 2023-01-11T21:27:48.6744806Z test_check_onnx_broadcast (__main__.TestONNXUtils) ... ok (0.001s) 2023-01-11T21:27:48.6745092Z test_prepare_onnx_paddings (__main__.TestONNXUtils) ... ok (0.001s) 2023-01-11T21:27:48.6745384Z test_load_standalone (__main__.TestStandaloneCPPJIT) ... ok (2.070s) 2023-01-11T21:27:48.6745554Z 2023-01-11T21:27:48.6745756Z ---------------------------------------------------------------------- 2023-01-11T21:27:48.6745981Z Ran 31 tests in 2.528s 2023-01-11T21:27:48.6746093Z 2023-01-11T21:27:48.6746167Z OK (skipped=8) 2023-01-11T21:27:48.6746273Z 2023-01-11T21:27:48.6746357Z Generating XML reports... 
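The torch.utils.checkpoint UserWarning earlier in this test_utils log ("None of the inputs have requires_grad=True") goes away once some checkpointed input tracks gradients; a minimal sketch with a hypothetical function:

    import torch
    from torch.utils.checkpoint import checkpoint

    def block(x):
        return torch.relu(x * 2)

    x = torch.randn(4, requires_grad=True)   # without requires_grad=True, the warning above fires
    y = checkpoint(block, x)
    y.sum().backward()
    print(x.grad is not None)   # True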
2023-01-11T21:27:48.6746729Z Generated XML report: test-reports/python-unittest/test_utils/TEST-TestAssert-20230111212745.xml 2023-01-11T21:27:48.6747224Z Generated XML report: test-reports/python-unittest/test_utils/TEST-TestCheckpoint-20230111212745.xml 2023-01-11T21:27:48.6747742Z Generated XML report: test-reports/python-unittest/test_utils/TEST-TestCppExtensionUtils-20230111212745.xml 2023-01-11T21:27:48.6748266Z Generated XML report: test-reports/python-unittest/test_utils/TEST-TestDataLoaderUtils-20230111212745.xml 2023-01-11T21:27:48.6748756Z Generated XML report: test-reports/python-unittest/test_utils/TEST-TestExtensionUtils-20230111212745.xml 2023-01-11T21:27:48.6749237Z Generated XML report: test-reports/python-unittest/test_utils/TEST-TestHipify-20230111212745.xml 2023-01-11T21:27:48.6749711Z Generated XML report: test-reports/python-unittest/test_utils/TEST-TestONNXUtils-20230111212745.xml 2023-01-11T21:27:48.6750213Z Generated XML report: test-reports/python-unittest/test_utils/TEST-TestStandaloneCPPJIT-20230111212745.xml 2023-01-11T21:27:48.6750696Z Generated XML report: test-reports/python-unittest/test_utils/TEST-TestBottleneck-20230111212745.xml 2023-01-11T21:27:48.6751310Z Generated XML report: test-reports/python-unittest/test_utils/TEST-TestCollectEnv-20230111212745.xml 2023-01-11T21:27:48.6751527Z 2023-01-11T21:27:48.6751829Z ##[endgroup] 2023-01-11T21:27:48.6752186Z FINISHED PRINTING LOG FILE of test_utils (/var/lib/jenkins/workspace/test/test-reports/test_utils_3ng1pffq) 2023-01-11T21:27:48.6752392Z 2023-01-11T21:27:50.5671561Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:27:50.6343857Z Ignoring disabled issues: [] 2023-01-11T21:27:50.6484852Z Running backends/xeon/test_launch ... [2023-01-11 21:27:50.648244] 2023-01-11T21:27:50.6487589Z Executing ['/opt/conda/bin/python', '-bb', 'backends/xeon/test_launch.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:27:50.648486] 2023-01-11T21:27:54.0805991Z 2023-01-11T21:27:54.0806562Z Expand the folded group to see the log file of backends/xeon/test_launch 2023-01-11T21:27:54.0808050Z ##[group]PRINTING LOG FILE of backends/xeon/test_launch (/var/lib/jenkins/workspace/test/test-reports/backends-xeon-test_launch_vgmgy27h) 2023-01-11T21:27:54.0808526Z 2023-01-11T21:27:54.0808654Z Running tests... 2023-01-11T21:27:54.0809319Z ---------------------------------------------------------------------- 2023-01-11T21:27:54.0810054Z Test results will be stored in test-reports/python-unittest/backends.xeon.test_launch 2023-01-11T21:27:54.0811570Z test_cpu_info (__main__.TestTorchrun) ... 2023-01-11 21:27:52,117 - torch.backends.xeon.run_cpu - WARNING - Numa Aware: cores:[2, 3, 4, 5] on different NUMA nodes:[0, 1]. To avoid this behavior, please use --ncores_per_instance knob to make sure number of cores is divisible by --ncores_per_instance. Alternatively, please use --skip_cross_node_cores knob. 2023-01-11T21:27:54.0812418Z ok (0.137s) 2023-01-11T21:27:54.0812794Z test_multi_threads (__main__.TestTorchrun) ... ok (1.699s) 2023-01-11T21:27:54.0813015Z 2023-01-11T21:27:54.0813226Z ---------------------------------------------------------------------- 2023-01-11T21:27:54.0813585Z Ran 2 tests in 1.836s 2023-01-11T21:27:54.0813712Z 2023-01-11T21:27:54.0813775Z OK 2023-01-11T21:27:54.0813868Z 2023-01-11T21:27:54.0813953Z Generating XML reports... 
2023-01-11T21:27:54.0814391Z Generated XML report: test-reports/python-unittest/backends.xeon.test_launch/TEST-TestTorchrun-20230111212751.xml 2023-01-11T21:27:54.0814623Z 2023-01-11T21:27:54.0814874Z ##[endgroup] 2023-01-11T21:27:54.0815347Z FINISHED PRINTING LOG FILE of backends/xeon/test_launch (/var/lib/jenkins/workspace/test/test-reports/backends-xeon-test_launch_vgmgy27h) 2023-01-11T21:27:54.0815590Z 2023-01-11T21:27:55.9220311Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:27:55.9862309Z Ignoring disabled issues: [] 2023-01-11T21:27:56.0003333Z Running benchmark_utils/test_benchmark_utils ... [2023-01-11 21:27:56.000005] 2023-01-11T21:27:56.0004797Z Executing ['/opt/conda/bin/python', '-bb', 'benchmark_utils/test_benchmark_utils.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:27:56.000251] 2023-01-11T21:27:58.5770561Z 2023-01-11T21:27:58.5771484Z Expand the folded group to see the log file of benchmark_utils/test_benchmark_utils 2023-01-11T21:27:58.5773138Z ##[group]PRINTING LOG FILE of benchmark_utils/test_benchmark_utils (/var/lib/jenkins/workspace/test/test-reports/benchmark_utils-test_benchmark_utils__beoo97f) 2023-01-11T21:27:58.5774294Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:27:58.5774684Z 2023-01-11T21:27:58.5774843Z Running tests... 2023-01-11T21:27:58.5775552Z ---------------------------------------------------------------------- 2023-01-11T21:27:58.5776551Z Test results will be stored in test-reports/python-unittest/benchmark_utils.test_benchmark_utils 2023-01-11T21:27:58.5777361Z test_adaptive_timer (__main__.TestBenchmarkUtils) ... ok (0.312s) 2023-01-11T21:27:58.5778237Z test_collect_callgrind (__main__.TestBenchmarkUtils) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s) 2023-01-11T21:27:58.5779474Z test_collect_cpp_callgrind (__main__.TestBenchmarkUtils) ... skip: Failing on clang, see 74398 (0.001s) 2023-01-11T21:27:58.5780197Z test_compare (__main__.TestBenchmarkUtils) ... ok (0.108s) 2023-01-11T21:27:58.5780907Z test_cpp_timer (__main__.TestBenchmarkUtils) ... skip: Failing on clang, see 74398 (0.000s) 2023-01-11T21:27:58.5781597Z test_fuzzer (__main__.TestBenchmarkUtils) ... ok (0.002s) 2023-01-11T21:27:58.5782281Z test_manipulate_callgrind_stats (__main__.TestBenchmarkUtils) ... ok (0.037s) 2023-01-11T21:27:58.5783164Z test_timer (__main__.TestBenchmarkUtils) ... ok (0.029s) 2023-01-11T21:27:58.5783911Z test_timer_tiny_fast_snippet (__main__.TestBenchmarkUtils) ... skip: Failing on clang, see 74398 (0.000s) 2023-01-11T21:27:58.5784378Z 2023-01-11T21:27:58.5784860Z ---------------------------------------------------------------------- 2023-01-11T21:27:58.5785403Z Ran 9 tests in 0.492s 2023-01-11T21:27:58.5785652Z 2023-01-11T21:27:58.5785823Z OK (skipped=4) 2023-01-11T21:27:58.5786244Z 2023-01-11T21:27:58.5786443Z Generating XML reports... 
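TestBenchmarkUtils above exercises torch.utils.benchmark; a minimal Timer sketch with an illustrative statement, assuming only the documented Timer/Measurement API:

    import torch
    from torch.utils.benchmark import Timer

    timer = Timer(
        stmt="x.mul(2.0)",
        globals={"x": torch.ones(64, 64)},
        label="mul",                      # optional metadata, used when comparing measurements
    )
    measurement = timer.timeit(100)       # run the statement 100 times
    print(measurement.median)             # median time per run, in seconds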
2023-01-11T21:27:58.5787566Z Generated XML report: test-reports/python-unittest/benchmark_utils.test_benchmark_utils/TEST-TestBenchmarkUtils-20230111212757.xml 2023-01-11T21:27:58.5788214Z 2023-01-11T21:27:58.5788742Z ##[endgroup] 2023-01-11T21:27:58.5789842Z FINISHED PRINTING LOG FILE of benchmark_utils/test_benchmark_utils (/var/lib/jenkins/workspace/test/test-reports/benchmark_utils-test_benchmark_utils__beoo97f) 2023-01-11T21:27:58.5790500Z 2023-01-11T21:28:00.4830597Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:28:00.5668814Z Ignoring disabled issues: [] 2023-01-11T21:28:00.5811606Z Running dynamo/test_aot_autograd ... [2023-01-11 21:28:00.580867] 2023-01-11T21:28:00.5813323Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_aot_autograd.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:28:00.581117] 2023-01-11T21:28:06.5918470Z 2023-01-11T21:28:06.5919088Z Expand the folded group to see the log file of dynamo/test_aot_autograd 2023-01-11T21:28:06.5920157Z ##[group]PRINTING LOG FILE of dynamo/test_aot_autograd (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_aot_autograd_0yukwqll) 2023-01-11T21:28:06.5920543Z 2023-01-11T21:28:06.5920661Z Running tests... 2023-01-11T21:28:06.5921300Z ---------------------------------------------------------------------- 2023-01-11T21:28:06.5921954Z Test results will be stored in test-reports/python-unittest/dynamo.test_aot_autograd 2023-01-11T21:28:06.5922958Z test_LSTM (__main__.AotAutogradFallbackTests) ... [2023-01-11 21:28:05,125] torch._dynamo.optimizations.training: [WARNING] Unable to use Aot Autograd because of presence of LSTM 2023-01-11T21:28:06.5923525Z ok (3.184s) 2023-01-11T21:28:06.5924177Z test_arg_dupe_via_dynamo_recompiles (__main__.AotAutogradFallbackTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:06.5924928Z stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:28:06.5925424Z frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:06.5925942Z stats [('calls_captured', 9), ('fusions_possible', 6), ('unique_graphs', 3)] 2023-01-11T21:28:06.5926513Z aot_autograd [('total', 3), ('ok', 3)] 2023-01-11T21:28:06.5926840Z ok (0.094s) 2023-01-11T21:28:06.5927490Z test_arg_dupe_via_dynamo_recompiles_many_args (__main__.AotAutogradFallbackTests) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:28:06.5928220Z stats [('calls_captured', 28), ('fusions_possible', 24), ('unique_graphs', 4)] 2023-01-11T21:28:06.5928691Z aot_autograd [('total', 4), ('ok', 4)] 2023-01-11T21:28:06.5929026Z ok (0.169s) 2023-01-11T21:28:06.5929946Z test_call_fn_with_non_const_inputs_aot_safe (__main__.AotAutogradFallbackTests) ... inline_call [('inline in skipfiles: tree_flatten_spec /opt/conda/lib/python3.10/site-packages/torch/fx/_pytree.py', 1)] 2023-01-11T21:28:06.5930760Z stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:06.5931535Z frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:06.5931903Z unimplemented [] 2023-01-11T21:28:06.5932952Z graph_break [('inline in skipfiles: tree_flatten_spec /opt/conda/lib/python3.10/site-packages/torch/fx/_pytree.py', 1), ('call_function args: ListVariable() UserDefinedObjectVariable(LeafSpec) ', 1)] 2023-01-11T21:28:06.5933544Z ok (0.045s) 2023-01-11T21:28:06.5934520Z test_call_fn_with_non_const_inputs_aot_unsafe (__main__.AotAutogradFallbackTests) ... 
inline_call [('inline in skipfiles: tree_flatten_spec /opt/conda/lib/python3.10/site-packages/torch/fx/_pytree.py', 1)] 2023-01-11T21:28:06.5935449Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:28:06.5935912Z frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:06.5936184Z unimplemented [] 2023-01-11T21:28:06.5937094Z graph_break [('inline in skipfiles: tree_flatten_spec /opt/conda/lib/python3.10/site-packages/torch/fx/_pytree.py', 1), ('call_function args: ListVariable() UserDefinedObjectVariable(LeafSpec) ', 1)] 2023-01-11T21:28:06.5937906Z ok (0.021s) 2023-01-11T21:28:06.5938584Z test_call_fn_with_non_const_inputs_aot_unsafe_control_flow (__main__.AotAutogradFallbackTests) ... frames [('total', 8), ('ok', 8)] 2023-01-11T21:28:06.5939245Z inline_call [('generic_jump TensorVariable()', 2)] 2023-01-11T21:28:06.5939634Z unimplemented [] 2023-01-11T21:28:06.5940076Z graph_break [('generic_jump TensorVariable()', 2)] 2023-01-11T21:28:06.5940664Z stats [('calls_captured', 14), ('unique_graphs', 8), ('fusions_possible', 6)] 2023-01-11T21:28:06.5941046Z ok (0.045s) 2023-01-11T21:28:06.5941652Z test_mutation (__main__.AotAutogradFallbackTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:06.5942591Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:06.5942977Z ok (0.006s) 2023-01-11T21:28:06.5943576Z test_mutation1 (__main__.AotAutogradFallbackTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:06.5944200Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:06.5944580Z ok (0.264s) 2023-01-11T21:28:06.5945186Z test_negative_testing (__main__.AotAutogradFallbackTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:06.5945816Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:06.5946188Z ok (0.006s) 2023-01-11T21:28:06.5946778Z test_negative_testing_mutation (__main__.AotAutogradFallbackTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:06.5947500Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:06.5947926Z ok (0.271s) 2023-01-11T21:28:06.5948608Z test_requires_grad_fake_via_dynamo_recompiles (__main__.AotAutogradFallbackTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:06.5949320Z stats [('calls_captured', 3), ('unique_graphs', 3), ('fusions_possible', 0)] 2023-01-11T21:28:06.5949797Z aot_autograd [('total', 3), ('ok', 3)] 2023-01-11T21:28:06.5950151Z ok (0.043s) 2023-01-11T21:28:06.5950354Z 2023-01-11T21:28:06.5950725Z ---------------------------------------------------------------------- 2023-01-11T21:28:06.5951193Z Ran 11 tests in 4.147s 2023-01-11T21:28:06.5951387Z 2023-01-11T21:28:06.5951490Z OK 2023-01-11T21:28:06.5951638Z 2023-01-11T21:28:06.5969149Z Generating XML reports... 2023-01-11T21:28:06.5970176Z Generated XML report: test-reports/python-unittest/dynamo.test_aot_autograd/TEST-AotAutogradFallbackTests-20230111212802.xml 2023-01-11T21:28:06.5970639Z 2023-01-11T21:28:06.5971096Z ##[endgroup] 2023-01-11T21:28:06.5971827Z FINISHED PRINTING LOG FILE of dynamo/test_aot_autograd (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_aot_autograd_0yukwqll) 2023-01-11T21:28:06.5972216Z 2023-01-11T21:28:08.4945090Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:28:08.5595619Z Ignoring disabled issues: [] 2023-01-11T21:28:08.5737573Z Running dynamo/test_aot_cudagraphs ... 
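The frames/stats/graph_break lines above are torch._dynamo counters printed by the test harness; a minimal sketch of compiling a hypothetical function and reading those counters, assuming the torch._dynamo.optimize entry point and utils.counters of this era of the codebase:

    import torch
    import torch._dynamo as dynamo
    from torch._dynamo.utils import counters

    def f(x):
        return torch.relu(x) + 1

    compiled = dynamo.optimize("aot_eager")(f)   # same aot-backed path these tests use
    compiled(torch.randn(4))
    print(dict(counters["frames"]))              # e.g. {'total': 1, 'ok': 1}
    # other keys such as "stats" and "graph_break" mirror the lines printed in this log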
[2023-01-11 21:28:08.573472] 2023-01-11T21:28:08.5739665Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_aot_cudagraphs.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:28:08.573731] 2023-01-11T21:28:10.3179681Z 2023-01-11T21:28:10.3180298Z Expand the folded group to see the log file of dynamo/test_aot_cudagraphs 2023-01-11T21:28:10.3181463Z ##[group]PRINTING LOG FILE of dynamo/test_aot_cudagraphs (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_aot_cudagraphs_0b9tux2z) 2023-01-11T21:28:10.3181912Z 2023-01-11T21:28:10.3182044Z Running tests... 2023-01-11T21:28:10.3182855Z ---------------------------------------------------------------------- 2023-01-11T21:28:10.3183680Z test_basic (__main__.TestAotCudagraphs) ... Test results will be stored in test-reports/python-unittest/dynamo.test_aot_cudagraphs 2023-01-11T21:28:10.3184203Z skip: these tests require cuda (0.000s) 2023-01-11T21:28:10.3184690Z test_dead_fill (__main__.TestAotCudagraphs) ... skip: these tests require cuda (0.001s) 2023-01-11T21:28:10.3185501Z test_dtoh (__main__.TestAotCudagraphs) ... skip: these tests require cuda (0.000s) 2023-01-11T21:28:10.3186096Z test_factory (__main__.TestAotCudagraphs) ... skip: these tests require cuda (0.000s) 2023-01-11T21:28:10.3186635Z test_htod (__main__.TestAotCudagraphs) ... skip: these tests require cuda (0.000s) 2023-01-11T21:28:10.3187194Z test_mutate_constant (__main__.TestAotCudagraphs) ... skip: these tests require cuda (0.000s) 2023-01-11T21:28:10.3187789Z test_mutate_input (__main__.TestAotCudagraphs) ... skip: these tests require cuda (0.000s) 2023-01-11T21:28:10.3188337Z test_mutated_metadata (__main__.TestAotCudagraphs) ... skip: these tests require cuda (0.000s) 2023-01-11T21:28:10.3188672Z 2023-01-11T21:28:10.3189076Z ---------------------------------------------------------------------- 2023-01-11T21:28:10.3189496Z Ran 8 tests in 0.004s 2023-01-11T21:28:10.3189691Z 2023-01-11T21:28:10.3189815Z OK (skipped=8) 2023-01-11T21:28:10.3189992Z 2023-01-11T21:28:10.3190121Z Generating XML reports... 2023-01-11T21:28:10.3190905Z Generated XML report: test-reports/python-unittest/dynamo.test_aot_cudagraphs/TEST-TestAotCudagraphs-20230111212810.xml 2023-01-11T21:28:10.3191406Z 2023-01-11T21:28:10.3191837Z ##[endgroup] 2023-01-11T21:28:10.3192517Z FINISHED PRINTING LOG FILE of dynamo/test_aot_cudagraphs (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_aot_cudagraphs_0b9tux2z) 2023-01-11T21:28:10.3192882Z 2023-01-11T21:28:12.2309348Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:28:12.2990711Z Ignoring disabled issues: [] 2023-01-11T21:28:12.3135907Z Running dynamo/test_comptime ... [2023-01-11 21:28:12.313260] 2023-01-11T21:28:12.3137342Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_comptime.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:28:12.313510] 2023-01-11T21:28:14.5121754Z 2023-01-11T21:28:14.5122303Z Expand the folded group to see the log file of dynamo/test_comptime 2023-01-11T21:28:14.5123311Z ##[group]PRINTING LOG FILE of dynamo/test_comptime (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_comptime_pfy01_f9) 2023-01-11T21:28:14.5123725Z 2023-01-11T21:28:14.5123830Z Running tests... 
2023-01-11T21:28:14.5124428Z ---------------------------------------------------------------------- 2023-01-11T21:28:14.5125088Z Test results will be stored in test-reports/python-unittest/dynamo.test_comptime 2023-01-11T21:28:14.5125587Z test_get_local (__main__.ComptimeTests) ... ok (0.250s) 2023-01-11T21:28:14.5126183Z test_graph_break (__main__.ComptimeTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:14.5127807Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:14.5144614Z frames [('total', 6), ('ok', 6)] 2023-01-11T21:28:14.5145186Z stats [('calls_captured', 5), ('unique_graphs', 4), ('fusions_possible', 1)] 2023-01-11T21:28:14.5145608Z unimplemented [] 2023-01-11T21:28:14.5146108Z graph_break [('ComptimeContext.graph_break', 2)] 2023-01-11T21:28:14.5146645Z inline_call [('ComptimeContext.graph_break', 1)] 2023-01-11T21:28:14.5147236Z ok (0.019s) 2023-01-11T21:28:14.5147734Z test_print_bt (__main__.ComptimeTests) ... File "/var/lib/jenkins/workspace/test/dynamo/test_comptime.py", line 152, in f 2023-01-11T21:28:14.5148280Z y = g(y) 2023-01-11T21:28:14.5148702Z File "/var/lib/jenkins/workspace/test/dynamo/test_comptime.py", line 145, in g 2023-01-11T21:28:14.5149137Z comptime.print_bt() 2023-01-11T21:28:14.5149331Z 2023-01-11T21:28:14.5149538Z frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:14.5149842Z inline_call [] 2023-01-11T21:28:14.5150367Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:14.5150758Z ok (0.089s) 2023-01-11T21:28:14.5151242Z test_print_disas (__main__.ComptimeTests) ... 54 0 LOAD_FAST 0 (x) 2023-01-11T21:28:14.5151663Z 2 LOAD_CONST 1 (2) 2023-01-11T21:28:14.5151999Z 4 BINARY_MULTIPLY 2023-01-11T21:28:14.5152335Z 6 STORE_FAST 1 (y) 2023-01-11T21:28:14.5152711Z 2023-01-11T21:28:14.5152862Z 56 8 LOAD_GLOBAL 0 (comptime) 2023-01-11T21:28:14.5153081Z 2023-01-11T21:28:14.5153406Z 57 10 LOAD_CONST 2 () 2023-01-11T21:28:14.5154224Z 12 LOAD_CONST 3 ('ComptimeTests.test_print_disas..f.._') 2023-01-11T21:28:14.5154698Z 14 MAKE_FUNCTION 0 2023-01-11T21:28:14.5155028Z 16 CALL_FUNCTION 1 2023-01-11T21:28:14.5155370Z 18 STORE_FAST 2 (_) 2023-01-11T21:28:14.5155576Z 2023-01-11T21:28:14.5155727Z 60 20 LOAD_GLOBAL 0 (comptime) 2023-01-11T21:28:14.5156081Z 22 LOAD_METHOD 1 (print_disas) 2023-01-11T21:28:14.5156424Z 24 CALL_METHOD 0 2023-01-11T21:28:14.5156791Z --> 26 POP_TOP 2023-01-11T21:28:14.5156983Z 2023-01-11T21:28:14.5157134Z 62 28 LOAD_FAST 1 (y) 2023-01-11T21:28:14.5157456Z 30 LOAD_CONST 4 (3) 2023-01-11T21:28:14.5157764Z 32 BINARY_ADD 2023-01-11T21:28:14.5158062Z 34 RETURN_VALUE 2023-01-11T21:28:14.5158253Z 2023-01-11T21:28:14.5158450Z frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:14.5158751Z inline_call [] 2023-01-11T21:28:14.5159286Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:14.5159658Z ok (0.007s) 2023-01-11T21:28:14.5160002Z test_print_graph (__main__.ComptimeTests) ... 
2023-01-11T21:28:14.5160247Z 2023-01-11T21:28:14.5160255Z 2023-01-11T21:28:14.5160424Z def forward(self, x : torch.Tensor): 2023-01-11T21:28:14.5160894Z # File: /var/lib/jenkins/workspace/test/dynamo/test_comptime.py:26, code: y = x * 2 2023-01-11T21:28:14.5161323Z mul = x * 2; x = None 2023-01-11T21:28:14.5161605Z 2023-01-11T21:28:14.5161970Z frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:14.5162273Z inline_call [] 2023-01-11T21:28:14.5162807Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:14.5163195Z ok (0.006s) 2023-01-11T21:28:14.5163542Z test_print_guards (__main__.ComptimeTests) ... - 2023-01-11T21:28:14.5164000Z local 'x' TENSOR_MATCH 2023-01-11T21:28:14.5164303Z { 2023-01-11T21:28:14.5164657Z 'guard_types': None, 2023-01-11T21:28:14.5165035Z 'code': None, 2023-01-11T21:28:14.5165411Z 'obj_weakref': None 2023-01-11T21:28:14.5165805Z 'guarded_class': None 2023-01-11T21:28:14.5166097Z } 2023-01-11T21:28:14.5166355Z 2023-01-11T21:28:14.5166713Z frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:14.5167016Z inline_call [] 2023-01-11T21:28:14.5167536Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:14.5167894Z ok (0.006s) 2023-01-11T21:28:14.5168439Z test_print_locals (__main__.ComptimeTests) ... x = TensorVariable() 2023-01-11T21:28:14.5168854Z y = TensorVariable() 2023-01-11T21:28:14.5169187Z _ = ConstantVariable(NoneType) 2023-01-11T21:28:14.5169619Z frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:14.5169928Z inline_call [] 2023-01-11T21:28:14.5170423Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:14.5170773Z ok (0.006s) 2023-01-11T21:28:14.5171275Z test_print_value_stack (__main__.ComptimeTests) ... - TensorVariable() 2023-01-11T21:28:14.5171679Z frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:14.5171941Z inline_call [] 2023-01-11T21:28:14.5172413Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:14.5172765Z ok (0.007s) 2023-01-11T21:28:14.5172911Z 2023-01-11T21:28:14.5173252Z ---------------------------------------------------------------------- 2023-01-11T21:28:14.5173647Z Ran 8 tests in 0.391s 2023-01-11T21:28:14.5173832Z 2023-01-11T21:28:14.5173931Z OK 2023-01-11T21:28:14.5174194Z 2023-01-11T21:28:14.5174322Z Generating XML reports... 2023-01-11T21:28:14.5175036Z Generated XML report: test-reports/python-unittest/dynamo.test_comptime/TEST-ComptimeTests-20230111212813.xml 2023-01-11T21:28:14.5175444Z 2023-01-11T21:28:14.5175901Z ##[endgroup] 2023-01-11T21:28:14.5176609Z FINISHED PRINTING LOG FILE of dynamo/test_comptime (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_comptime_pfy01_f9) 2023-01-11T21:28:14.5177000Z 2023-01-11T21:28:16.4527504Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:28:16.5413309Z Ignoring disabled issues: [] 2023-01-11T21:28:16.5556819Z Running dynamo/test_dynamic_shapes ... [2023-01-11 21:28:16.555371] 2023-01-11T21:28:16.5558762Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_dynamic_shapes.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... 
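The dynamo/test_comptime output above comes from compile-time hooks that run while dynamo traces a function, not at Python runtime; a minimal sketch of invoking one inside a compiled function (hypothetical function, assuming the torch._dynamo.comptime calls shown in the log):

    import torch
    import torch._dynamo as dynamo
    from torch._dynamo.comptime import comptime

    def f(x):
        y = x * 2
        comptime.print_bt()   # prints the compile-time backtrace while dynamo traces f
        return y + 3

    dynamo.optimize("eager")(f)(torch.randn(3))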
[2023-01-11 21:28:16.555650] 2023-01-11T21:28:59.0910521Z 2023-01-11T21:28:59.0911097Z Expand the folded group to see the log file of dynamo/test_dynamic_shapes 2023-01-11T21:28:59.0915404Z ##[group]PRINTING LOG FILE of dynamo/test_dynamic_shapes (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_dynamic_shapes_kfuvrwhx) 2023-01-11T21:28:59.0915943Z 2023-01-11T21:28:59.0916154Z Running tests... 2023-01-11T21:28:59.0916768Z ---------------------------------------------------------------------- 2023-01-11T21:28:59.0917387Z Test results will be stored in test-reports/python-unittest/dynamo.test_dynamic_shapes 2023-01-11T21:28:59.0918197Z test_dict_return_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... ok (0.390s) 2023-01-11T21:28:59.0919116Z test_dict_return_with_aten_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.0919752Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.0920000Z ok (0.072s) 2023-01-11T21:28:59.0920637Z test_dupes_2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0921108Z ok (0.027s) 2023-01-11T21:28:59.0921827Z test_dupes_2_with_aten_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0922293Z ok (0.039s) 2023-01-11T21:28:59.0922976Z test_dupes_and_bypass_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0923354Z ok (0.030s) 2023-01-11T21:28:59.0924089Z test_dupes_and_bypass_reorder_with_non_tensor_arg_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0924862Z ok (0.030s) 2023-01-11T21:28:59.0925666Z test_dupes_and_bypass_reorder_with_non_tensor_arg_with_aten_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0926164Z ok (0.044s) 2023-01-11T21:28:59.0926818Z test_dupes_and_bypass_with_aten_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0927222Z ok (0.043s) 2023-01-11T21:28:59.0927954Z test_dupes_and_bypass_with_non_tensor_arg_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0928349Z ok (0.031s) 2023-01-11T21:28:59.0929215Z test_dupes_and_bypass_with_non_tensor_arg_with_aten_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0929777Z ok (0.044s) 2023-01-11T21:28:59.0930566Z test_dupes_and_bypass_with_non_tensor_output_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:28:59.0931073Z ok (0.036s) 2023-01-11T21:28:59.0931876Z test_dupes_and_bypass_with_non_tensor_output_with_aten_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:28:59.0932422Z ok (0.034s) 2023-01-11T21:28:59.0932976Z test_dupes_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0933358Z ok (0.027s) 2023-01-11T21:28:59.0933914Z test_dupes_with_aten_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0934259Z ok (0.039s) 2023-01-11T21:28:59.0934830Z test_export_compare_optimize_with_make_fx_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 8), ('fusions_possible', 6), ('unique_graphs', 2)] 2023-01-11T21:28:59.0935204Z ok (0.328s) 2023-01-11T21:28:59.0935550Z test_export_decomp_asserts_bad_args_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... ok (0.001s) 2023-01-11T21:28:59.0936106Z test_export_decomp_asserts_bad_args_mode_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... ok (0.001s) 2023-01-11T21:28:59.0937019Z test_export_decomp_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:28:59.0937516Z ok (0.093s) 2023-01-11T21:28:59.0937936Z test_export_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.0938531Z stats [('calls_captured', 80), ('fusions_possible', 78), ('unique_graphs', 2)] 2023-01-11T21:28:59.0938875Z ok (0.115s) 2023-01-11T21:28:59.0939555Z test_export_graph_bypass_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0939896Z ok (0.034s) 2023-01-11T21:28:59.0940396Z test_export_graph_bypass_with_aten_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0940920Z ok (0.045s) 2023-01-11T21:28:59.0941690Z test_export_graph_with_complex_reorder_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.0942219Z ok (0.054s) 2023-01-11T21:28:59.0943182Z test_export_graph_with_complex_reorder_with_aten_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.0943734Z ok (0.079s) 2023-01-11T21:28:59.0944425Z test_export_graph_with_list_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0944751Z ok (0.038s) 2023-01-11T21:28:59.0945353Z test_export_graph_with_list_with_aten_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0945717Z ok (0.049s) 2023-01-11T21:28:59.0946315Z test_export_meta_val_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.0946777Z ok (0.072s) 2023-01-11T21:28:59.0947501Z test_export_mismatched_out_2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0948008Z ok (0.034s) 2023-01-11T21:28:59.0948792Z test_export_mismatched_out_2_with_aten_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0949287Z ok (0.046s) 2023-01-11T21:28:59.0949807Z test_export_mismatched_out_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0950156Z ok (0.035s) 2023-01-11T21:28:59.0950670Z test_export_mismatched_out_with_aten_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0951014Z ok (0.046s) 2023-01-11T21:28:59.0951515Z test_export_shape_control_flow_1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.0951965Z stats [('calls_captured', 8), ('fusions_possible', 6), ('unique_graphs', 2)] 2023-01-11T21:28:59.0952181Z ok (0.048s) 2023-01-11T21:28:59.0952493Z test_export_with_aten_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.0952932Z stats [('calls_captured', 80), ('fusions_possible', 78), ('unique_graphs', 2)] 2023-01-11T21:28:59.0953163Z ok (0.361s) 2023-01-11T21:28:59.0953781Z test_export_with_constant_dict_values_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.0954267Z ok (0.030s) 2023-01-11T21:28:59.0955031Z test_export_with_constant_free_function_and_class_method_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.0955565Z ok (0.048s) 2023-01-11T21:28:59.0956363Z test_export_with_constant_free_function_and_class_method_multiarg_diff_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.0956876Z ok (0.023s) 2023-01-11T21:28:59.0957539Z test_export_with_constant_free_function_and_class_method_multiarg_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.0957919Z ok (0.062s) 2023-01-11T21:28:59.0958421Z test_export_with_constant_free_function_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.0958773Z ok (0.060s) 2023-01-11T21:28:59.0959351Z test_export_with_constant_list_nonzero_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... unimplemented [('call_function BuiltinVariable(iter) [TensorVariable()] {}', 1)] 2023-01-11T21:28:59.0959902Z expected failure (0.014s) 2023-01-11T21:28:59.0960781Z test_export_with_constant_list_nonzero_free_function_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... unimplemented [('call_function BuiltinVariable(iter) [TensorVariable()] {}', 1)] 2023-01-11T21:28:59.0961375Z expected failure (0.011s) 2023-01-11T21:28:59.0962170Z test_export_with_constant_method_on_module_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.0962636Z ok (0.046s) 2023-01-11T21:28:59.0963177Z test_export_with_constant_method_on_module_invoke_twice_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.0963547Z ok (0.058s) 2023-01-11T21:28:59.0964065Z test_export_with_constant_none_control_flow_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.0964410Z ok (0.008s) 2023-01-11T21:28:59.0964944Z test_export_with_constant_none_control_flow_free_func_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.0965471Z ok (0.007s) 2023-01-11T21:28:59.0966230Z test_export_with_constant_not_none_control_flow_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.0966721Z ok (0.020s) 2023-01-11T21:28:59.0967496Z test_export_with_constant_not_none_control_flow_free_func_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.0968031Z ok (0.020s) 2023-01-11T21:28:59.0968700Z test_export_with_constant_not_none_control_flow_pos_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.0969056Z ok (0.021s) 2023-01-11T21:28:59.0969570Z test_export_with_constant_not_return_const_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.0969923Z ok (0.008s) 2023-01-11T21:28:59.0970452Z test_export_with_constant_tuple_nonzero_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
unimplemented [('call_function BuiltinVariable(iter) [TensorVariable()] {}', 1)] 2023-01-11T21:28:59.0970830Z expected failure (0.094s) 2023-01-11T21:28:59.0971155Z test_export_with_module_layer_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.0971581Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.0971855Z ok (0.075s) 2023-01-11T21:28:59.0972354Z test_export_with_stack_trace_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 8), ('fusions_possible', 6), ('unique_graphs', 2)] 2023-01-11T21:28:59.0972693Z ok (0.160s) 2023-01-11T21:28:59.0972995Z test_func_return_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.0973613Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:28:59.0973936Z ok (0.063s) 2023-01-11T21:28:59.0974402Z test_func_return_with_aten_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.0975001Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:28:59.0975358Z ok (0.098s) 2023-01-11T21:28:59.0976171Z test_input_container_type_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:28:59.0976607Z ok (0.100s) 2023-01-11T21:28:59.0977087Z test_list_unpack_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0977418Z ok (0.036s) 2023-01-11T21:28:59.0977924Z test_list_unpack_with_aten_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.0978267Z ok (0.051s) 2023-01-11T21:28:59.0978776Z test_zeroes_in_and_out_different_shape_on_test_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.0979145Z ok (0.047s) 2023-01-11T21:28:59.0979694Z test_zeroes_in_and_out_different_shape_on_test_with_aten_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.0980068Z ok (0.079s) 2023-01-11T21:28:59.0980564Z test_zeroes_in_new_shape_scalar_out_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 16), ('fusions_possible', 14), ('unique_graphs', 2)] 2023-01-11T21:28:59.0980913Z ok (0.057s) 2023-01-11T21:28:59.0981456Z test_zeroes_in_new_shape_scalar_out_permute_dupe_and_bypass_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 22), ('fusions_possible', 20), ('unique_graphs', 2)] 2023-01-11T21:28:59.0981818Z ok (0.072s) 2023-01-11T21:28:59.0982511Z test_zeroes_in_new_shape_scalar_out_permute_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
stats [('calls_captured', 22), ('fusions_possible', 20), ('unique_graphs', 2)] 2023-01-11T21:28:59.0983052Z ok (0.072s) 2023-01-11T21:28:59.0983826Z test_T_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.0984141Z ok (0.033s) 2023-01-11T21:28:59.0984678Z test_add__dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... /var/lib/jenkins/workspace/test/dynamo/test_functions.py:73: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:28:59.0985201Z a_copy = torch.tensor(a) 2023-01-11T21:28:59.0985836Z /opt/conda/lib/python3.10/site-packages/torch/_dynamo/utils.py:1052: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:28:59.0986388Z return node.target(*args, **kwargs) 2023-01-11T21:28:59.0986793Z .331:5: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:28:59.0987188Z tensor = torch.tensor(a); a = None 2023-01-11T21:28:59.0987521Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.0987735Z ok (0.041s) 2023-01-11T21:28:59.0988208Z test_add_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.0988537Z ok (0.021s) 2023-01-11T21:28:59.0989122Z test_addcdiv__dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... /var/lib/jenkins/workspace/test/dynamo/test_functions.py:84: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:28:59.0989642Z a_copy = torch.tensor(a) 2023-01-11T21:28:59.0990267Z /opt/conda/lib/python3.10/site-packages/torch/_dynamo/utils.py:1052: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:28:59.0990707Z return node.target(*args, **kwargs) 2023-01-11T21:28:59.0991120Z .336:5: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:28:59.0991561Z tensor = torch.tensor(a); a = None 2023-01-11T21:28:59.0991979Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.0992342Z ok (0.050s) 2023-01-11T21:28:59.0993039Z test_addcdiv_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.0993494Z ok (0.039s) 2023-01-11T21:28:59.0993955Z test_build_list_unpack_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
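(Aside: the UserWarning repeated above recommends cloning an existing tensor rather than re-wrapping it with torch.tensor(). A minimal sketch of the suggested replacement; the variable name mirrors the a_copy seen in the warning and is otherwise illustrative.)

    import torch

    a = torch.randn(3, requires_grad=True)

    a_copy = torch.tensor(a)                                    # warns: copy-constructing from a tensor
    a_copy = a.clone().detach()                                 # recommended copy, detached from autograd
    a_copy_tracked = a.clone().detach().requires_grad_(True)   # same, but tracking gradients again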
inline_call [] 2023-01-11T21:28:59.0994572Z stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:28:59.0994914Z ok (0.065s) 2023-01-11T21:28:59.0995375Z test_chunks1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires static shapes (0.001s) 2023-01-11T21:28:59.0996281Z test_const_tuple_add1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.0996750Z ok (0.041s) 2023-01-11T21:28:59.0997219Z test_const_tuple_add2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.0997554Z ok (0.041s) 2023-01-11T21:28:59.0998227Z test_constant1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.0998693Z ok (0.038s) 2023-01-11T21:28:59.0999363Z test_constant2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.0999838Z ok (0.036s) 2023-01-11T21:28:59.1000526Z test_constant3_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1001103Z ok (0.018s) 2023-01-11T21:28:59.1001615Z test_constant4_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1002102Z ok (0.020s) 2023-01-11T21:28:59.1002808Z test_default_dict_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1003255Z ok (0.032s) 2023-01-11T21:28:59.1003950Z test_del_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1004428Z ok (0.035s) 2023-01-11T21:28:59.1005122Z test_device_constant_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1005449Z ok (0.028s) 2023-01-11T21:28:59.1005928Z test_device_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1006255Z ok (0.019s) 2023-01-11T21:28:59.1006715Z test_dict_copy_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1007043Z ok (0.019s) 2023-01-11T21:28:59.1007560Z test_dict_ops_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 8), ('fusions_possible', 7), ('unique_graphs', 1)] 2023-01-11T21:28:59.1008065Z ok (0.073s) 2023-01-11T21:28:59.1008764Z test_dict_param_keys_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1009242Z ok (0.020s) 2023-01-11T21:28:59.1009984Z test_distributed_is_available_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1010483Z ok (0.019s) 2023-01-11T21:28:59.1011082Z test_distributed_is_initialized_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1011433Z ok (0.019s) 2023-01-11T21:28:59.1011914Z test_dtype_compare_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1012247Z ok (0.028s) 2023-01-11T21:28:59.1012703Z test_dtype_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1013026Z ok (0.018s) 2023-01-11T21:28:59.1013491Z test_finfo_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1013801Z ok (0.023s) 2023-01-11T21:28:59.1014256Z test_float_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1014577Z ok (0.019s) 2023-01-11T21:28:59.1015057Z test_fn_with_self_set_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1015373Z ok (0.103s) 2023-01-11T21:28:59.1015840Z test_fstrings1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1016235Z ok (0.021s) 2023-01-11T21:28:59.1016541Z test_fstrings2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires static shapes (0.001s) 2023-01-11T21:28:59.1017165Z test_fstrings3_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1017499Z ok (0.019s) 2023-01-11T21:28:59.1017801Z test_funcdef_closure_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1018208Z stats [('calls_captured', 10), ('fusions_possible', 9), ('unique_graphs', 1)] 2023-01-11T21:28:59.1018435Z ok (0.086s) 2023-01-11T21:28:59.1018963Z test_get_default_dtype_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1019305Z ok (0.020s) 2023-01-11T21:28:59.1019762Z test_globalfn_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1020088Z ok (0.021s) 2023-01-11T21:28:59.1020563Z test_globalmodule_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1020895Z ok (0.033s) 2023-01-11T21:28:59.1021495Z test_globalvar_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1021951Z ok (0.031s) 2023-01-11T21:28:59.1022784Z test_import1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1023235Z ok (0.030s) 2023-01-11T21:28:59.1023934Z test_indirect1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1024403Z ok (0.021s) 2023-01-11T21:28:59.1025033Z test_indirect2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1025342Z ok (0.021s) 2023-01-11T21:28:59.1025815Z test_indirect3_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1026146Z ok (0.021s) 2023-01-11T21:28:59.1026443Z test_inline_jit_annotations_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1027097Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1027415Z ok (0.034s) 2023-01-11T21:28:59.1028123Z test_inline_softmax_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1028584Z ok (0.073s) 2023-01-11T21:28:59.1029034Z test_inline_with_default_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1029656Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1029949Z ok (0.031s) 2023-01-11T21:28:59.1030237Z test_inner_function_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1030767Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1030995Z ok (0.019s) 2023-01-11T21:28:59.1031382Z test_is_contiguous_memory_format_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires static shapes (0.001s) 2023-01-11T21:28:59.1032033Z test_is_fx_tracing_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1032365Z ok (0.021s) 2023-01-11T21:28:59.1032849Z test_is_in_onnx_export_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1033172Z ok (0.020s) 2023-01-11T21:28:59.1033699Z test_is_not_null_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1034033Z ok (0.021s) 2023-01-11T21:28:59.1034496Z test_is_quantized_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1034825Z ok (0.020s) 2023-01-11T21:28:59.1035294Z test_is_sparse_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1035620Z ok (0.018s) 2023-01-11T21:28:59.1036082Z test_islice_chain_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 6), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:28:59.1036410Z ok (0.058s) 2023-01-11T21:28:59.1037077Z test_jit_annotate_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1037546Z ok (0.027s) 2023-01-11T21:28:59.1038242Z test_len_constant_dict_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1038724Z ok (0.020s) 2023-01-11T21:28:59.1039430Z test_len_constant_list_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1039905Z ok (0.020s) 2023-01-11T21:28:59.1040403Z test_len_constant_misc_iterables_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1040747Z ok (0.020s) 2023-01-11T21:28:59.1041057Z test_len_tensor_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... expected failure (0.008s) 2023-01-11T21:28:59.1041649Z test_list_add_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1041977Z ok (0.021s) 2023-01-11T21:28:59.1042450Z test_list_clear_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1042775Z ok (0.037s) 2023-01-11T21:28:59.1043236Z test_list_convert_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1043566Z ok (0.043s) 2023-01-11T21:28:59.1044044Z test_list_reversed_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:28:59.1044437Z ok (0.052s) 2023-01-11T21:28:59.1044916Z test_list_slice_assignment_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1045261Z ok (0.021s) 2023-01-11T21:28:59.1045738Z test_list_truth_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1046066Z ok (0.021s) 2023-01-11T21:28:59.1046524Z test_listarg1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1046856Z ok (0.030s) 2023-01-11T21:28:59.1047385Z test_listarg2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1047700Z ok (0.030s) 2023-01-11T21:28:59.1048172Z test_listarg3_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1048498Z ok (0.031s) 2023-01-11T21:28:59.1048959Z test_listarg4_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1049265Z ok (0.031s) 2023-01-11T21:28:59.1049727Z test_listarg5_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1050057Z ok (0.030s) 2023-01-11T21:28:59.1050536Z test_load_global_bool_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1050856Z ok (0.019s) 2023-01-11T21:28:59.1051145Z test_map_sum_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1051556Z stats [('calls_captured', 8), ('fusions_possible', 7), ('unique_graphs', 1)] 2023-01-11T21:28:59.1051767Z ok (0.077s) 2023-01-11T21:28:59.1052063Z test_methodcall1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1052477Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1052692Z ok (0.039s) 2023-01-11T21:28:59.1052987Z test_methodcall2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1053398Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1053620Z ok (0.035s) 2023-01-11T21:28:59.1054045Z test_methodcall3_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1054638Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1054980Z ok (0.036s) 2023-01-11T21:28:59.1055645Z test_min_max_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 11), ('fusions_possible', 10), ('unique_graphs', 1)] 2023-01-11T21:28:59.1056120Z ok (0.051s) 2023-01-11T21:28:59.1056806Z test_module_constant_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1057265Z ok (0.045s) 2023-01-11T21:28:59.1057718Z test_ndim_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1058106Z ok (0.019s) 2023-01-11T21:28:59.1058576Z test_pop_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1058907Z ok (0.045s) 2023-01-11T21:28:59.1059378Z test_range1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1059862Z ok (0.015s) 2023-01-11T21:28:59.1061853Z test_range2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 15), ('fusions_possible', 14), ('unique_graphs', 1)] 2023-01-11T21:28:59.1062463Z ok (0.114s) 2023-01-11T21:28:59.1063136Z test_reduce_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1063633Z ok (0.040s) 2023-01-11T21:28:59.1064354Z test_return_dict2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1064823Z ok (0.023s) 2023-01-11T21:28:59.1065520Z test_return_dict_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1065983Z ok (0.023s) 2023-01-11T21:28:59.1066679Z test_return_tuple1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1067037Z ok (0.030s) 2023-01-11T21:28:59.1067626Z test_return_tuple2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1068105Z ok (0.021s) 2023-01-11T21:28:59.1068552Z test_shape1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires static shapes (0.001s) 2023-01-11T21:28:59.1069196Z test_shape2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires static shapes (0.001s) 2023-01-11T21:28:59.1070074Z test_slice1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1070550Z ok (0.016s) 2023-01-11T21:28:59.1071015Z test_slice2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1071404Z ok (0.014s) 2023-01-11T21:28:59.1071881Z test_slice3_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1072209Z ok (0.043s) 2023-01-11T21:28:59.1072663Z test_slice4_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1072988Z ok (0.018s) 2023-01-11T21:28:59.1073456Z test_slice5_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1073785Z ok (0.033s) 2023-01-11T21:28:59.1074236Z test_slice6_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1074560Z ok (0.040s) 2023-01-11T21:28:59.1075031Z test_startswith_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1075436Z ok (0.029s) 2023-01-11T21:28:59.1075748Z test_tensor_len_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... expected failure (0.019s) 2023-01-11T21:28:59.1076214Z test_tensor_new_with_shape_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires static shapes (0.001s) 2023-01-11T21:28:59.1076694Z test_tensor_new_with_size_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires static shapes (0.001s) 2023-01-11T21:28:59.1077145Z test_tensor_type2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires cuda (0.000s) 2023-01-11T21:28:59.1077797Z test_tensor_type_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1078134Z ok (0.053s) 2023-01-11T21:28:59.1078627Z test_transpose_for_scores_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:28:59.1078955Z ok (0.036s) 2023-01-11T21:28:59.1079423Z test_tuple1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1079751Z ok (0.021s) 2023-01-11T21:28:59.1080222Z test_tuple2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1080533Z ok (0.021s) 2023-01-11T21:28:59.1081013Z test_tuple_contains_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1081354Z ok (0.021s) 2023-01-11T21:28:59.1081827Z test_tuple_iadd_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1082140Z ok (0.029s) 2023-01-11T21:28:59.1082610Z test_unpack1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1082934Z ok (0.036s) 2023-01-11T21:28:59.1083386Z test_unpack2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1083756Z ok (0.034s) 2023-01-11T21:28:59.1084440Z test_unpack3_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1084907Z ok (0.033s) 2023-01-11T21:28:59.1085578Z test_unpack_ex1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:28:59.1086050Z ok (0.048s) 2023-01-11T21:28:59.1086733Z test_unpack_ex2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:28:59.1087191Z ok (0.049s) 2023-01-11T21:28:59.1087723Z test_unpack_ex3_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:28:59.1088056Z ok (0.049s) 2023-01-11T21:28:59.1088530Z test_viamethod_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1088900Z ok (0.021s) 2023-01-11T21:28:59.1089487Z test_viatorch_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1089958Z ok (0.020s) 2023-01-11T21:28:59.1090580Z test_allow_in_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1091196Z stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:28:59.1091553Z ok (0.042s) 2023-01-11T21:28:59.1092256Z test_autocast_cpu_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:28:59.1092682Z ok (0.034s) 2023-01-11T21:28:59.1093032Z test_autocast_device_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires cuda (0.001s) 2023-01-11T21:28:59.1093667Z test_autocast_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires cuda (0.001s) 2023-01-11T21:28:59.1094301Z test_autocast_float64_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires cuda (0.001s) 2023-01-11T21:28:59.1094944Z test_autograd_function_equivalence_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1095593Z stats [('calls_captured', 4), ('unique_graphs', 4), ('fusions_possible', 0)] 2023-01-11T21:28:59.1095948Z ok (0.084s) 2023-01-11T21:28:59.1096669Z test_autograd_profiler_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
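(Aside: the test_autocast_cpu_dynamic_shapes entry above exercises a CPU autocast region under Dynamo. A minimal sketch of such a region, assuming torch.compile is available in this build; shapes and dtype are illustrative.)

    import torch

    def fn(x, w):
        # CPU autocast: eligible ops such as matmul run in bfloat16 inside the block
        with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
            return x @ w

    compiled = torch.compile(fn)
    out = compiled(torch.randn(4, 4), torch.randn(4, 4))
    print(out.dtype)  # torch.bfloat16 inside the autocast region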
STAGE:2023-01-11 21:28:25 3597:3597 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:28:59.1097219Z STAGE:2023-01-11 21:28:25 3597:3597 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:28:59.1097699Z STAGE:2023-01-11 21:28:25 3597:3597 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:28:59.1098326Z [2023-01-11 21:28:25,422] torch._dynamo.variables.torch: [WARNING] Profiler will be ignored 2023-01-11T21:28:59.1098783Z frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1099091Z unimplemented [] 2023-01-11T21:28:59.1099459Z graph_break [('Tensor.tolist', 1)] 2023-01-11T21:28:59.1099997Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.1100332Z ok (0.065s) 2023-01-11T21:28:59.1101089Z test_autograd_profiler_enabled_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... STAGE:2023-01-11 21:28:25 3597:3597 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:28:59.1101658Z STAGE:2023-01-11 21:28:25 3597:3597 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:28:59.1102109Z STAGE:2023-01-11 21:28:25 3597:3597 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:28:59.1102582Z frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1102779Z unimplemented [] 2023-01-11T21:28:59.1103092Z graph_break [('torch.autograd._profiler_enabled not supported yet', 1)] 2023-01-11T21:28:59.1103448Z stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.1103679Z ok (0.044s) 2023-01-11T21:28:59.1104155Z test_boolarg_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('unique_graphs', 3), ('fusions_possible', 0)] 2023-01-11T21:28:59.1104468Z ok (0.054s) 2023-01-11T21:28:59.1104776Z test_build_tuple_unpack_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1105204Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.1105546Z ok (0.059s) 2023-01-11T21:28:59.1105980Z test_builder_for_class_with_metaclass_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1106421Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1106650Z ok (0.019s) 2023-01-11T21:28:59.1107125Z test_builtin_isinstance_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1107466Z ok (0.018s) 2023-01-11T21:28:59.1107792Z test_builtin_subclasses_as_method_on_class_type_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... ok (0.003s) 2023-01-11T21:28:59.1108298Z test_builtin_subclasses_as_method_on_var_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... ok (0.004s) 2023-01-11T21:28:59.1108749Z test_call_parent_non_class_methods_from_child_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
inline_call [] 2023-01-11T21:28:59.1109195Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1109422Z ok (0.031s) 2023-01-11T21:28:59.1109898Z test_callpacked_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.1110211Z ok (0.059s) 2023-01-11T21:28:59.1110626Z test_cell_output1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1111053Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1111336Z ok (0.018s) 2023-01-11T21:28:59.1111742Z test_cell_output2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1112050Z unimplemented [] 2023-01-11T21:28:59.1112444Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1112848Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1113073Z ok (0.028s) 2023-01-11T21:28:59.1113911Z test_change_backends_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... /opt/conda/lib/python3.10/site-packages/torch/jit/_check.py:181: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in `__init__`. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in `torch.jit.Attribute`. 2023-01-11T21:28:59.1114582Z warnings.warn("The TorchScript type system doesn't support " 2023-01-11T21:28:59.1114928Z stats [('calls_captured', 3), ('unique_graphs', 3), ('fusions_possible', 0)] 2023-01-11T21:28:59.1115211Z frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1115390Z ok (0.064s) 2023-01-11T21:28:59.1115774Z test_cond_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1116066Z inline_call [] 2023-01-11T21:28:59.1116377Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1116725Z ok (0.026s) 2023-01-11T21:28:59.1117150Z test_cond_export_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1117758Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1118102Z ok (0.045s) 2023-01-11T21:28:59.1119326Z test_cond_export_single_arg_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/91143 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.001s) 2023-01-11T21:28:59.1120544Z test_cond_nested_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1120992Z inline_call [] 2023-01-11T21:28:59.1121460Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1121797Z ok (0.043s) 2023-01-11T21:28:59.1122264Z test_cond_side_effects_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
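(Aside: the TorchScript warning above suggests two fixes for empty, un-annotated containers assigned in __init__. A minimal sketch of both, assuming typing.List and torch.jit.Attribute; the class and attribute names are illustrative.)

    from typing import List
    import torch

    class M(torch.nn.Module):
        names: List[str]  # fix 1: class-body annotation supplies the element type

        def __init__(self):
            super().__init__()
            self.names = []
            # fix 2: wrap the value so the declared type travels with it
            self.scores = torch.jit.Attribute([], List[float])

        def forward(self, x):
            return x

    scripted = torch.jit.script(M())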
expected failure (0.001s) 2023-01-11T21:28:59.1123088Z test_config_getattr_default_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1123742Z stats [('calls_captured', 21), ('fusions_possible', 18), ('unique_graphs', 3)] 2023-01-11T21:28:59.1123960Z ok (0.158s) 2023-01-11T21:28:59.1124510Z test_config_log_level_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1125146Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1125501Z ok (0.016s) 2023-01-11T21:28:59.1126109Z test_config_obj_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:28:59.1126719Z stats [('calls_captured', 8), ('fusions_possible', 4), ('unique_graphs', 4)] 2023-01-11T21:28:59.1127091Z ok (0.074s) 2023-01-11T21:28:59.1127499Z test_const_dict_variable_python_type_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... ok (0.001s) 2023-01-11T21:28:59.1128437Z test_cross_entropy_loss_fancy_ctor_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... /opt/conda/lib/python3.10/site-packages/torch/nn/_reduction.py:42: UserWarning: size_average and reduce args will be deprecated, please use reduction='none' instead. 2023-01-11T21:28:59.1129132Z warnings.warn(warning.format(ret)) 2023-01-11T21:28:59.1129453Z ok (0.002s) 2023-01-11T21:28:59.1129890Z test_cross_entropy_loss_simple_ctor_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... ok (0.001s) 2023-01-11T21:28:59.1130712Z test_dataclass_fields_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1131158Z inline_call [] 2023-01-11T21:28:59.1131468Z stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:28:59.1131698Z ok (0.047s) 2023-01-11T21:28:59.1132248Z test_dict_mutation_side_effect_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1132903Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1133250Z ok (0.017s) 2023-01-11T21:28:59.1133931Z test_dict_reconstruct_keeps_original_order_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 13), ('ok', 12)] 2023-01-11T21:28:59.1134705Z unimplemented [("Guard setup for uninitialized class ", 1)] 2023-01-11T21:28:59.1135795Z graph_break [('UnspecializedNNModuleVariable missing add_module', 3), ('construct nn.Module: ReLU', 1), ('call_function in skip_files /opt/conda/lib/python3.10/collections/__init__.py', 1), ('construct nn.Module: ModuleDict', 1), ('Patched init cannot be inlined.', 1), ('construct nn.Module: Linear', 1), ('construct nn.Module: Sigmoid', 1), ('call_method ConstDictVariable() update [TupleVariable()] {}', 1)] 2023-01-11T21:28:59.1136770Z inline_call [('inline __setitem__', 2), ('Patched init cannot be inlined.', 1)] 2023-01-11T21:28:59.1137157Z ok (0.040s) 2023-01-11T21:28:59.1137782Z test_dictcomp_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
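(Aside: the UserWarning above, raised by the "fancy ctor" cross-entropy test, notes that the size_average/reduce arguments are deprecated in favor of a single reduction= argument. A minimal sketch of the replacement; the tensors are illustrative.)

    import torch

    loss_old = torch.nn.CrossEntropyLoss(size_average=False, reduce=True)  # warns: deprecated args
    loss_new = torch.nn.CrossEntropyLoss(reduction="sum")                  # equivalent modern spelling

    logits = torch.randn(8, 5)
    target = torch.randint(0, 5, (8,))
    print(loss_new(logits, target))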
frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1138227Z inline_call [] 2023-01-11T21:28:59.1138671Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1139020Z ok (0.022s) 2023-01-11T21:28:59.1139338Z test_disable_flag_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... ok (0.002s) 2023-01-11T21:28:59.1139742Z test_disable_optimize_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... ok (0.002s) 2023-01-11T21:28:59.1140289Z test_disallow_in_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1140596Z unimplemented [] 2023-01-11T21:28:59.1141036Z graph_break [('call_function UserDefinedObjectVariable(sub) [TensorVariable(), ConstantVariable(int)] {}', 1)] 2023-01-11T21:28:59.1141669Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.1142003Z ok (0.044s) 2023-01-11T21:28:59.1142619Z test_dunder_methods_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1143242Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1143595Z ok (0.060s) 2023-01-11T21:28:59.1144050Z test_duplicate_graph_break_warning_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... break 2023-01-11T21:28:59.1144479Z break 2023-01-11T21:28:59.1144698Z frames [('total', 9), ('ok', 9)] 2023-01-11T21:28:59.1145038Z inline_call [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 2)] 2023-01-11T21:28:59.1145300Z unimplemented [] 2023-01-11T21:28:59.1145625Z graph_break [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 4)] 2023-01-11T21:28:59.1146001Z stats [('calls_captured', 6), ('unique_graphs', 4), ('fusions_possible', 2)] 2023-01-11T21:28:59.1146309Z ok (0.118s) 2023-01-11T21:28:59.1147049Z test_dynamo_min_operator_with_shape_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1147569Z ok (0.012s) 2023-01-11T21:28:59.1148180Z test_empty_list_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1148799Z stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.1149147Z ok (0.040s) 2023-01-11T21:28:59.1149714Z test_enum_no_graphbreaks_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:28:59.1150059Z ok (0.032s) 2023-01-11T21:28:59.1150475Z test_error_on_nested_fx_trace_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1150915Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1151137Z ok (0.022s) 2023-01-11T21:28:59.1151667Z test_fold_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1151981Z ok (0.019s) 2023-01-11T21:28:59.1152487Z test_frozenset_torch_func_contains_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
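(Aside: the test_disallow_in_graph entry above shows Dynamo breaking the graph around a call it was told not to capture. A minimal sketch of the two knobs, assuming torch._dynamo.allow_in_graph and torch._dynamo.disallow_in_graph; the helper function is illustrative.)

    import torch
    import torch._dynamo

    def helper(x):
        return x.sin() + 1

    torch._dynamo.allow_in_graph(helper)        # capture helper as a single call, without tracing into it
    torch._dynamo.disallow_in_graph(torch.sub)  # force a graph break whenever torch.sub is hit

    @torch._dynamo.optimize("eager")
    def fn(x):
        y = helper(x)
        return torch.sub(y, 1)  # graph break here

    print(fn(torch.randn(3)))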
stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:28:59.1152836Z ok (0.030s) 2023-01-11T21:28:59.1153326Z test_function_annotation_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.1153761Z ok (0.030s) 2023-01-11T21:28:59.1154238Z test_generate_tensor_from_list_of_numpy_primitive_type_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1154589Z unimplemented [] 2023-01-11T21:28:59.1154945Z graph_break [('numpy', 1)] 2023-01-11T21:28:59.1155414Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1155771Z ok (0.009s) 2023-01-11T21:28:59.1156218Z test_get_device_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires cuda (0.001s) 2023-01-11T21:28:59.1156978Z test_grad_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1157505Z unimplemented [] 2023-01-11T21:28:59.1157894Z graph_break [('Tensor.backward', 1)] 2023-01-11T21:28:59.1158220Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.1158459Z ok (0.068s) 2023-01-11T21:28:59.1158876Z test_grad_mode_guard_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1159165Z unimplemented [] 2023-01-11T21:28:59.1159403Z graph_break [('Tensor.tolist', 1)] 2023-01-11T21:28:59.1159724Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.1159990Z ok (0.049s) 2023-01-11T21:28:59.1160612Z test_graph_break_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1161070Z unimplemented [] 2023-01-11T21:28:59.1161642Z graph_break [('call_function in skip_files /opt/conda/lib/python3.10/site-packages/torch/_dynamo/__init__.py', 2)] 2023-01-11T21:28:59.1162240Z stats [('calls_captured', 6), ('fusions_possible', 3), ('unique_graphs', 3)] 2023-01-11T21:28:59.1162581Z ok (0.085s) 2023-01-11T21:28:59.1163268Z test_guard_failure_fn2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1163608Z ok (0.030s) 2023-01-11T21:28:59.1164088Z test_guard_failure_fn_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 8), ('fusions_possible', 6), ('unique_graphs', 2)] 2023-01-11T21:28:59.1164422Z ok (0.071s) 2023-01-11T21:28:59.1164899Z test_id_of_nn_module_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:28:59.1165220Z ok (0.031s) 2023-01-11T21:28:59.1165704Z test_if_cond_nn_mod_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:28:59.1166040Z ok (0.045s) 2023-01-11T21:28:59.1166458Z test_inference_mode_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
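(Aside: several entries above report graph_break reasons such as 'numpy', 'Tensor.backward', and 'Tensor.tolist'. A minimal sketch of surfacing such breaks, assuming torch._dynamo.explain is present in this build; its return value has changed shape across versions, so the sketch simply prints the whole report.)

    import torch
    import torch._dynamo

    def fn(x):
        y = x + 1
        vals = y.tolist()           # Tensor.tolist is not captured, so Dynamo breaks the graph here
        return torch.tensor(vals) * 2

    print(torch._dynamo.explain(fn, torch.randn(4)))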
frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1166875Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1167101Z ok (0.024s) 2023-01-11T21:28:59.1167526Z test_inline_dict_mutation_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1167822Z inline_call [] 2023-01-11T21:28:59.1168120Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1168344Z ok (0.018s) 2023-01-11T21:28:59.1168792Z test_inline_func_jump_on_tensor_condition_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:28:59.1169261Z inline_call [('generic_jump TensorVariable()', 1)] 2023-01-11T21:28:59.1169482Z unimplemented [] 2023-01-11T21:28:59.1169749Z graph_break [('generic_jump TensorVariable()', 1)] 2023-01-11T21:28:59.1170077Z stats [('calls_captured', 3), ('unique_graphs', 3), ('fusions_possible', 0)] 2023-01-11T21:28:59.1170305Z ok (0.042s) 2023-01-11T21:28:59.1170819Z test_inline_list_mutation_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1171246Z inline_call [] 2023-01-11T21:28:59.1171682Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1172031Z ok (0.016s) 2023-01-11T21:28:59.1172697Z test_inplace_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1173200Z ok (0.019s) 2023-01-11T21:28:59.1173906Z test_inplace_param_update_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:28:59.1174314Z ok (0.011s) 2023-01-11T21:28:59.1174714Z test_is_compiling_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1175143Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1175370Z ok (0.010s) 2023-01-11T21:28:59.1175857Z test_is_floating_point2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1176178Z ok (0.036s) 2023-01-11T21:28:59.1176671Z test_is_floating_point_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1177017Z ok (0.035s) 2023-01-11T21:28:59.1177427Z test_is_tensor2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1177837Z stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.1178062Z ok (0.037s) 2023-01-11T21:28:59.1178530Z test_is_tensor_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1178845Z ok (0.034s) 2023-01-11T21:28:59.1179386Z test_is_tensor_like2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
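(Aside: the test_is_compiling entry above exercises the flag that lets code detect whether Dynamo is tracing it. A minimal sketch, assuming torch._dynamo.is_compiling(); the branch bodies are illustrative.)

    import torch
    import torch._dynamo

    def fn(x):
        if torch._dynamo.is_compiling():
            return x + 1   # taken when Dynamo traces the frame
        return x - 1       # taken in plain eager execution

    print(fn(torch.zeros(3)))                                    # eager path
    print(torch._dynamo.optimize("eager")(fn)(torch.zeros(3)))   # compiled path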
frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1179822Z unimplemented [] 2023-01-11T21:28:59.1180333Z graph_break [('call_function args: UserDefinedObjectVariable(MyTensor) ', 1)] 2023-01-11T21:28:59.1180878Z stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.1181240Z ok (0.017s) 2023-01-11T21:28:59.1181938Z test_is_tensor_like_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:28:59.1182525Z ok (0.024s) 2023-01-11T21:28:59.1183033Z test_item_changes_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:28:59.1183367Z ok (0.041s) 2023-01-11T21:28:59.1184083Z test_item_changes_new_shape_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:28:59.1184570Z ok (0.044s) 2023-01-11T21:28:59.1185275Z test_item_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1185882Z ok (0.023s) 2023-01-11T21:28:59.1186315Z test_large_reduction_list_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... ok (0.013s) 2023-01-11T21:28:59.1186877Z test_linetable_writer_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... ok (0.001s) 2023-01-11T21:28:59.1187439Z test_list_append_return_none_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1187875Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1188088Z ok (0.012s) 2023-01-11T21:28:59.1188641Z test_list_mul_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1189149Z ok (0.003s) 2023-01-11T21:28:59.1189779Z test_listcomp_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1190210Z inline_call [] 2023-01-11T21:28:59.1190677Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1191041Z ok (0.042s) 2023-01-11T21:28:59.1191636Z test_lnotab_writer_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: use lnotab when python < 3.10 (0.000s) 2023-01-11T21:28:59.1192272Z test_manual_seed_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1192784Z ok (0.030s) 2023-01-11T21:28:59.1193483Z test_matmul1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1193942Z ok (0.012s) 2023-01-11T21:28:59.1194394Z test_module_complex_iter_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... ok (0.011s) 2023-01-11T21:28:59.1195190Z test_module_deepcopy_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
frames [('total', 6), ('ok', 6)] 2023-01-11T21:28:59.1195607Z unimplemented [] 2023-01-11T21:28:59.1195938Z graph_break [('call_function in skip_files /opt/conda/lib/python3.10/copy.py', 2)] 2023-01-11T21:28:59.1196180Z inline_call [] 2023-01-11T21:28:59.1196481Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1196747Z ok (0.064s) 2023-01-11T21:28:59.1197217Z test_named_parameters_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... ok (0.024s) 2023-01-11T21:28:59.1198010Z test_namedtuple1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1198628Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1198980Z ok (0.021s) 2023-01-11T21:28:59.1199595Z test_namedtuple2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1200133Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1200348Z ok (0.028s) 2023-01-11T21:28:59.1200758Z test_namedtuple3_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1201179Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1201391Z ok (0.023s) 2023-01-11T21:28:59.1201789Z test_nan_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1202486Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1202820Z ok (0.021s) 2023-01-11T21:28:59.1203268Z test_nested_closure_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1203890Z stats [('calls_captured', 9), ('fusions_possible', 7), ('unique_graphs', 2)] 2023-01-11T21:28:59.1204228Z ok (0.071s) 2023-01-11T21:28:59.1204644Z test_nested_closure_mutation_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1205232Z stats [('calls_captured', 11), ('fusions_possible', 9), ('unique_graphs', 2)] 2023-01-11T21:28:59.1205460Z ok (0.039s) 2023-01-11T21:28:59.1206355Z test_nested_disable_decorator_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
[2023-01-11 21:28:28,088] torch._dynamo.convert_frame: [ERROR] WON'T CONVERT fn3 /var/lib/jenkins/workspace/test/dynamo/test_misc.py line 1197 2023-01-11T21:28:59.1206935Z due to: 2023-01-11T21:28:59.1207231Z Traceback (most recent call last): 2023-01-11T21:28:59.1207813Z File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 67, in unimplemented 2023-01-11T21:28:59.1208200Z raise Unsupported(msg) 2023-01-11T21:28:59.1208739Z torch._dynamo.exc.Unsupported: call torch._dynamo.disable() wrapped function .fn1 at 0x7f8c4c165000> 2023-01-11T21:28:59.1209085Z 2023-01-11T21:28:59.1209159Z from user code: 2023-01-11T21:28:59.1209403Z File "/var/lib/jenkins/workspace/test/dynamo/test_misc.py", line 1199, in fn3 2023-01-11T21:28:59.1209665Z return fn2(x) 2023-01-11T21:28:59.1210051Z File "/var/lib/jenkins/workspace/test/dynamo/test_misc.py", line 1192, in fn2 2023-01-11T21:28:59.1210413Z x = fn1(x) # graph break 2023-01-11T21:28:59.1210571Z 2023-01-11T21:28:59.1210783Z Set torch._dynamo.config.verbose=True for more information 2023-01-11T21:28:59.1211026Z 2023-01-11T21:28:59.1211032Z 2023-01-11T21:28:59.1211224Z frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1211529Z unimplemented [] 2023-01-11T21:28:59.1212146Z graph_break [('call torch._dynamo.disable() wrapped function .fn1 at 0x7f8c4c165000>', 1)] 2023-01-11T21:28:59.1212790Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.1213303Z inline_call [('call torch._dynamo.disable() wrapped function .fn1 at 0x7f8c4c165000>', 1)] 2023-01-11T21:28:59.1213766Z ok (0.081s) 2023-01-11T21:28:59.1214219Z test_nested_optimize_decorator_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1214871Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1215214Z ok (0.052s) 2023-01-11T21:28:59.1215955Z test_nested_optimize_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 8), ('fusions_possible', 6), ('unique_graphs', 2)] 2023-01-11T21:28:59.1216432Z ok (0.086s) 2023-01-11T21:28:59.1216999Z test_nested_optimize_run_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 8), ('fusions_possible', 6), ('unique_graphs', 2)] 2023-01-11T21:28:59.1217336Z ok (0.089s) 2023-01-11T21:28:59.1218013Z test_nn_functional_reduction_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1218517Z ok (0.026s) 2023-01-11T21:28:59.1218981Z test_nn_sequential_invocation_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1219628Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1220089Z ok (0.174s) 2023-01-11T21:28:59.1220597Z test_nn_sequential_invocation_reposition_indices_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1221085Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1221307Z ok (0.052s) 2023-01-11T21:28:59.1221962Z test_no_error_on_nested_fx_trace_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1222746Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1223085Z ok (0.022s) 2023-01-11T21:28:59.1223774Z test_no_grad_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 40), ('fusions_possible', 32), ('unique_graphs', 8)] 2023-01-11T21:28:59.1224266Z ok (0.275s) 2023-01-11T21:28:59.1224955Z test_not_dynamic_scope_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1225272Z inline_call [] 2023-01-11T21:28:59.1225563Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1225797Z ok (0.012s) 2023-01-11T21:28:59.1226263Z test_numel_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 6), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:28:59.1226577Z ok (0.035s) 2023-01-11T21:28:59.1226997Z test_numpy_int_constant_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1227429Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1227640Z ok (0.022s) 2023-01-11T21:28:59.1228077Z test_numpy_variable_isinstance_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1228513Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1228736Z ok (0.012s) 2023-01-11T21:28:59.1229034Z test_object_classmethod_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1229455Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1229681Z ok (0.012s) 2023-01-11T21:28:59.1229977Z test_object_staticmethod_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1230402Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1230628Z ok (0.011s) 2023-01-11T21:28:59.1231128Z test_onnx_shape_as_tensor_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 15), ('fusions_possible', 10), ('unique_graphs', 5)] 2023-01-11T21:28:59.1231521Z ok (0.049s) 2023-01-11T21:28:59.1232013Z test_optimize_on_module_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1232350Z ok (0.022s) 2023-01-11T21:28:59.1233016Z test_pair_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 8), ('fusions_possible', 7), ('unique_graphs', 1)] 2023-01-11T21:28:59.1233461Z ok (0.044s) 2023-01-11T21:28:59.1234067Z test_python_slice_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1234692Z stats [('calls_captured', 6), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:28:59.1235002Z ok (0.039s) 2023-01-11T21:28:59.1235620Z test_raise_on_backend_error_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
frames [('total', 1)] 2023-01-11T21:28:59.1236179Z stats [('calls_captured', 3), ('fusions_possible', 2)] 2023-01-11T21:28:59.1236390Z ok (0.028s) 2023-01-11T21:28:59.1236781Z test_raises_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1237084Z unimplemented [] 2023-01-11T21:28:59.1237517Z graph_break [('call_function BuiltinVariable(str) [TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1238043Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1238407Z ok (0.041s) 2023-01-11T21:28:59.1238859Z test_rand_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires cuda (0.001s) 2023-01-11T21:28:59.1239452Z test_range_input_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1240146Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1240486Z ok (0.035s) 2023-01-11T21:28:59.1241143Z test_recursive_inline_list_mutation_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1241592Z inline_call [] 2023-01-11T21:28:59.1242069Z stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:28:59.1242429Z ok (0.016s) 2023-01-11T21:28:59.1243039Z test_release_input_memory_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1243707Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1243951Z ok (0.014s) 2023-01-11T21:28:59.1244501Z test_release_module_memory_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1245124Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1245490Z ok (0.033s) 2023-01-11T21:28:59.1246248Z test_repro_graph_breaks_in__get_item_by_idx_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1246740Z ok (0.034s) 2023-01-11T21:28:59.1247398Z test_restore_graphstate_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:28:59.1247802Z inline_call [('generic_jump TensorVariable()', 1)] 2023-01-11T21:28:59.1248019Z unimplemented [] 2023-01-11T21:28:59.1248328Z graph_break [('generic_jump TensorVariable()', 1)] 2023-01-11T21:28:59.1248856Z stats [('calls_captured', 6), ('unique_graphs', 4), ('fusions_possible', 2)] 2023-01-11T21:28:59.1249214Z ok (0.075s) 2023-01-11T21:28:59.1249957Z test_restore_graphstate_internals_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1250473Z ok (0.031s) 2023-01-11T21:28:59.1251114Z test_return_nested_function_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1251663Z stats [('calls_captured', 7), ('fusions_possible', 5), ('unique_graphs', 2)] 2023-01-11T21:28:59.1251876Z ok (0.059s) 2023-01-11T21:28:59.1252170Z test_sample_input_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... ok (0.620s) 2023-01-11T21:28:59.1252908Z test_setattr_mutation1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1253644Z unimplemented [('call_method UserDefinedObjectVariable(member_descriptor) __mul__ [ConstantVariable(int)] {}', 1)] 2023-01-11T21:28:59.1254520Z graph_break [("isinstance called on UserDefinedClass UserDefinedObjectVariable(member_descriptor) ", 1)] 2023-01-11T21:28:59.1255056Z frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1255550Z stats [('calls_captured', 12), ('fusions_possible', 11), ('unique_graphs', 1)] 2023-01-11T21:28:59.1255773Z ok (0.097s) 2023-01-11T21:28:59.1256372Z test_setattr_mutation2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1256819Z inline_call [] 2023-01-11T21:28:59.1257284Z stats [('calls_captured', 9), ('fusions_possible', 8), ('unique_graphs', 1)] 2023-01-11T21:28:59.1257617Z ok (0.074s) 2023-01-11T21:28:59.1258261Z test_setattr_mutation3_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1258723Z inline_call [] 2023-01-11T21:28:59.1259213Z stats [('calls_captured', 9), ('fusions_possible', 8), ('unique_graphs', 1)] 2023-01-11T21:28:59.1259444Z ok (0.073s) 2023-01-11T21:28:59.1259984Z test_shape_unpack_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1260595Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1260967Z ok (0.021s) 2023-01-11T21:28:59.1261642Z test_side_effects_codegen_update_mutated_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 6), ('ok', 6)] 2023-01-11T21:28:59.1262113Z unimplemented [] 2023-01-11T21:28:59.1262585Z graph_break [('Tensor.item', 4)] 2023-01-11T21:28:59.1263083Z stats [('calls_captured', 8), ('fusions_possible', 4), ('unique_graphs', 4)] 2023-01-11T21:28:59.1263313Z ok (0.132s) 2023-01-11T21:28:59.1263717Z test_size_input_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1264310Z stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.1264643Z ok (0.040s) 2023-01-11T21:28:59.1265324Z test_slice_input_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('unique_graphs', 3), ('fusions_possible', 0)] 2023-01-11T21:28:59.1265817Z ok (0.033s) 2023-01-11T21:28:59.1266309Z test_tensor_build_list_unpack_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires static shapes (0.001s) 2023-01-11T21:28:59.1267188Z test_tensor_data_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1267508Z ok (0.020s) 2023-01-11T21:28:59.1267925Z test_tensor_dict1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1268455Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1268815Z ok (0.019s) 2023-01-11T21:28:59.1269413Z test_tensor_dict2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1270030Z stats [('calls_captured', 9), ('fusions_possible', 6), ('unique_graphs', 3)] 2023-01-11T21:28:59.1270398Z ok (0.039s) 2023-01-11T21:28:59.1271024Z test_tensor_dot_grad_no_graph_break_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1271533Z unimplemented [] 2023-01-11T21:28:59.1271784Z graph_break [('Tensor.backward', 1)] 2023-01-11T21:28:59.1272284Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:28:59.1272615Z ok (0.052s) 2023-01-11T21:28:59.1273432Z test_tensor_is_contiguous_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1274077Z stats [('calls_captured', 10), ('fusions_possible', 8), ('unique_graphs', 2)] 2023-01-11T21:28:59.1274388Z ok (0.077s) 2023-01-11T21:28:59.1275060Z test_tensor_item_capture_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1275506Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1275719Z ok (0.036s) 2023-01-11T21:28:59.1276267Z test_tensor_item_no_capture_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1276715Z unimplemented [] 2023-01-11T21:28:59.1277105Z graph_break [('Tensor.item', 1)] 2023-01-11T21:28:59.1277562Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1278050Z ok (0.029s) 2023-01-11T21:28:59.1278757Z test_tensor_layout_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:28:59.1279209Z ok (0.021s) 2023-01-11T21:28:59.1279780Z test_tensor_types_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 10), ('ok', 10)] 2023-01-11T21:28:59.1280426Z stats [('calls_captured', 10), ('unique_graphs', 10), ('fusions_possible', 0)] 2023-01-11T21:28:59.1280787Z ok (0.087s) 2023-01-11T21:28:59.1281493Z test_top_package_import_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1281995Z ok (0.022s) 2023-01-11T21:28:59.1282690Z test_torch_cuda_is_available_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1283024Z ok (0.015s) 2023-01-11T21:28:59.1283361Z test_torch_cudnn_is_acceptable_bad_inputs_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
skip: requires cuda (0.001s) 2023-01-11T21:28:59.1283844Z test_torch_cudnn_is_acceptable_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires cuda (0.000s) 2023-01-11T21:28:59.1284562Z test_torch_nn_parameter_isinstance_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1285194Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1285555Z ok (0.021s) 2023-01-11T21:28:59.1286343Z test_torch_profiler_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... STAGE:2023-01-11 21:28:31 3597:3597 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:28:59.1287148Z STAGE:2023-01-11 21:28:31 3597:3597 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:28:59.1287670Z STAGE:2023-01-11 21:28:31 3597:3597 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:28:59.1288153Z [2023-01-11 21:28:31,040] torch._dynamo.variables.torch: [WARNING] Profiler will be ignored 2023-01-11T21:28:59.1288753Z [2023-01-11 21:28:31,049] torch._dynamo.variables.torch: [WARNING] Profiler will be ignored 2023-01-11T21:28:59.1289212Z frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1289486Z unimplemented [] 2023-01-11T21:28:59.1289854Z graph_break [('Tensor.tolist', 1)] 2023-01-11T21:28:59.1290357Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.1290684Z ok (0.067s) 2023-01-11T21:28:59.1291373Z test_torch_seed_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1291790Z ok (0.011s) 2023-01-11T21:28:59.1292257Z test_torch_size_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1292586Z ok (0.024s) 2023-01-11T21:28:59.1293200Z test_type_copy_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1293831Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:28:59.1294149Z ok (0.061s) 2023-01-11T21:28:59.1294804Z test_typing_variable_isinstance_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1295453Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1295833Z ok (0.023s) 2023-01-11T21:28:59.1296310Z test_unpack4_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 8), ('fusions_possible', 7), ('unique_graphs', 1)] 2023-01-11T21:28:59.1296635Z ok (0.051s) 2023-01-11T21:28:59.1297176Z test_unpack5_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 8), ('fusions_possible', 7), ('unique_graphs', 1)] 2023-01-11T21:28:59.1297637Z ok (0.049s) 2023-01-11T21:28:59.1298317Z test_update_locals_and_stack_uses_shared_cache_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1298795Z inline_call [] 2023-01-11T21:28:59.1299081Z unimplemented [] 2023-01-11T21:28:59.1299577Z graph_break [('call_method ListVariable() extend [ListIteratorVariable()] {}', 1)] 2023-01-11T21:28:59.1299967Z ok (0.021s) 2023-01-11T21:28:59.1300539Z test_user_defined_class_name_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1300864Z ok (0.034s) 2023-01-11T21:28:59.1301194Z test_user_function_variable_supports_enum_argument_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1301554Z ok (0.020s) 2023-01-11T21:28:59.1302031Z test_user_function_variable_supports_function_argument_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1302821Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1303169Z ok (0.021s) 2023-01-11T21:28:59.1303663Z test_user_function_variable_supports_type_abcmeta_argument_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1304302Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1304666Z ok (0.022s) 2023-01-11T21:28:59.1305175Z test_user_getattr1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1305476Z inline_call [] 2023-01-11T21:28:59.1305931Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1306265Z ok (0.024s) 2023-01-11T21:28:59.1306885Z test_user_getattr2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1307308Z inline_call [] 2023-01-11T21:28:59.1307783Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1308129Z ok (0.029s) 2023-01-11T21:28:59.1308755Z test_user_property_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1309172Z inline_call [] 2023-01-11T21:28:59.1309474Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1309687Z ok (0.022s) 2023-01-11T21:28:59.1310037Z test_usr_cls_classmethod_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1310656Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1311004Z ok (0.030s) 2023-01-11T21:28:59.1311511Z test_usr_cls_staticmethod_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1312154Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1312479Z ok (0.030s) 2023-01-11T21:28:59.1312908Z test_version_ci_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... ok (0.001s) 2023-01-11T21:28:59.1313546Z test_write_to_closures_in_inlining_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
inline_call [] 2023-01-11T21:28:59.1313990Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1314216Z ok (0.040s) 2023-01-11T21:28:59.1314509Z test_access_by_keys_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1314929Z stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:28:59.1315154Z ok (0.117s) 2023-01-11T21:28:59.1315620Z test_basicmodule1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1315950Z ok (0.059s) 2023-01-11T21:28:59.1316435Z test_basicmodule2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1316761Z ok (0.057s) 2023-01-11T21:28:59.1317061Z test_call_fn_with_non_const_inputs_safe_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1317493Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1317719Z ok (0.212s) 2023-01-11T21:28:59.1318189Z test_cfgmod_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 6), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:28:59.1318501Z ok (0.096s) 2023-01-11T21:28:59.1318969Z test_children_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1319298Z ok (0.079s) 2023-01-11T21:28:59.1319759Z test_constloop_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 6), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:28:59.1320085Z ok (0.148s) 2023-01-11T21:28:59.1320381Z test_densenet_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1320791Z stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:28:59.1321002Z ok (0.116s) 2023-01-11T21:28:59.1321299Z test_enumvalues_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1321710Z stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:28:59.1321918Z ok (0.113s) 2023-01-11T21:28:59.1322392Z test_fnmember_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1322769Z ok (0.048s) 2023-01-11T21:28:59.1323242Z test_fnmembercmp1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1323563Z ok (0.049s) 2023-01-11T21:28:59.1324038Z test_fnmembercmp2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1324371Z ok (0.060s) 2023-01-11T21:28:59.1324657Z test_forward_directly_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
inline_call [] 2023-01-11T21:28:59.1325079Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1325305Z ok (0.077s) 2023-01-11T21:28:59.1325604Z test_generation_tag_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... ok (0.002s) 2023-01-11T21:28:59.1326218Z test_hasattr_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1326550Z ok (0.037s) 2023-01-11T21:28:59.1327017Z test_intarg_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1327343Z ok (0.052s) 2023-01-11T21:28:59.1327793Z test_iseval1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1328123Z ok (0.046s) 2023-01-11T21:28:59.1328592Z test_iseval2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1328901Z ok (0.047s) 2023-01-11T21:28:59.1329382Z test_isnonelayer_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1329716Z ok (0.034s) 2023-01-11T21:28:59.1330188Z test_istraining1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1330503Z ok (0.047s) 2023-01-11T21:28:59.1330972Z test_istraining2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1331299Z ok (0.046s) 2023-01-11T21:28:59.1331776Z test_layerlist_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1332093Z ok (0.065s) 2023-01-11T21:28:59.1332786Z test_lazy_module_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... /opt/conda/lib/python3.10/site-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment. 2023-01-11T21:28:59.1333396Z warnings.warn('Lazy modules are a new feature under heavy development ' 2023-01-11T21:28:59.1334065Z [2023-01-11 21:28:33,313] torch._dynamo.symbolic_convert: [WARNING] /opt/conda/lib/python3.10/site-packages/torch/nn/parameter.py [SizeVariable()] {} missing a required argument: 'shape' 2023-01-11T21:28:59.1334799Z /opt/conda/lib/python3.10/site-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment. 2023-01-11T21:28:59.1335289Z warnings.warn('Lazy modules are a new feature under heavy development ' 2023-01-11T21:28:59.1335922Z /opt/conda/lib/python3.10/site-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment. 
2023-01-11T21:28:59.1336400Z warnings.warn('Lazy modules are a new feature under heavy development ' 2023-01-11T21:28:59.1337048Z [2023-01-11 21:28:33,442] torch._dynamo.symbolic_convert: [WARNING] /opt/conda/lib/python3.10/site-packages/torch/nn/parameter.py [SizeVariable()] {} missing a required argument: 'shape' 2023-01-11T21:28:59.1337791Z /opt/conda/lib/python3.10/site-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment. 2023-01-11T21:28:59.1338273Z warnings.warn('Lazy modules are a new feature under heavy development ' 2023-01-11T21:28:59.1338598Z frames [('total', 16), ('ok', 14)] 2023-01-11T21:28:59.1339353Z inline_call [('Patched init cannot be inlined.', 3), ('arg mismatch inlining', 2), ('call_function UserDefinedObjectVariable(_infer_parameters) [NNModuleVariable(), TupleVariable()] {}', 1), ('call_function UserDefinedObjectVariable(_infer_parameters) [UnspecializedNNModuleVariable(LazyModule), TupleVariable()] {}', 1)] 2023-01-11T21:28:59.1340017Z unimplemented [("Guard setup for uninitialized class ", 2)] 2023-01-11T21:28:59.1340420Z graph_break [('Patched init cannot be inlined.', 3), ('arg mismatch inlining', 2)] 2023-01-11T21:28:59.1340787Z stats [('calls_captured', 4), ('unique_graphs', 3), ('fusions_possible', 1)] 2023-01-11T21:28:59.1341003Z ok (0.305s) 2023-01-11T21:28:59.1341327Z test_module_attribute_precedence_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1341765Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1341994Z ok (0.045s) 2023-01-11T21:28:59.1342288Z test_module_class_method_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1342862Z stats [('calls_captured', 9), ('fusions_possible', 8), ('unique_graphs', 1)] 2023-01-11T21:28:59.1343091Z ok (0.128s) 2023-01-11T21:28:59.1343520Z test_module_forward_has_graph_break_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 2)] 2023-01-11T21:28:59.1343838Z inline_call [] 2023-01-11T21:28:59.1344124Z unimplemented [('reconstruct: ConstantVariable(dict)', 2)] 2023-01-11T21:28:59.1344589Z graph_break [('call_function BuiltinVariable(dict) [ListIteratorVariable()] {}', 1), ('call_method NNModuleVariable() buffers [] {}', 1)] 2023-01-11T21:28:59.1345001Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:28:59.1345231Z ok (0.184s) 2023-01-11T21:28:59.1345728Z test_module_name_string_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1346055Z ok (0.055s) 2023-01-11T21:28:59.1346364Z test_module_property_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1346784Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1347010Z ok (0.027s) 2023-01-11T21:28:59.1347302Z test_module_static_method_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
inline_call [] 2023-01-11T21:28:59.1347723Z stats [('calls_captured', 9), ('fusions_possible', 8), ('unique_graphs', 1)] 2023-01-11T21:28:59.1347948Z ok (0.126s) 2023-01-11T21:28:59.1348419Z test_moduledict_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1348813Z ok (0.032s) 2023-01-11T21:28:59.1349292Z test_modulelist_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 40), ('fusions_possible', 39), ('unique_graphs', 1)] 2023-01-11T21:28:59.1349625Z ok (0.522s) 2023-01-11T21:28:59.1349916Z test_modulemethod1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1350339Z stats [('calls_captured', 9), ('fusions_possible', 8), ('unique_graphs', 1)] 2023-01-11T21:28:59.1350569Z ok (0.128s) 2023-01-11T21:28:59.1350856Z test_modulemethod2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1351335Z stats [('calls_captured', 9), ('fusions_possible', 8), ('unique_graphs', 1)] 2023-01-11T21:28:59.1351567Z ok (0.127s) 2023-01-11T21:28:59.1352117Z test_nn_moduledict_contains_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('unique_graphs', 3), ('fusions_possible', 1)] 2023-01-11T21:28:59.1352503Z frames [('total', 2), ('ok', 1)] 2023-01-11T21:28:59.1352782Z inline_call [('Patched init cannot be inlined.', 1)] 2023-01-11T21:28:59.1353243Z unimplemented [("Guard setup for uninitialized class .M'>", 1)] 2023-01-11T21:28:59.1353626Z graph_break [('Patched init cannot be inlined.', 1)] 2023-01-11T21:28:59.1353836Z ok (0.046s) 2023-01-11T21:28:59.1354137Z test_parameters1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1354555Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1354773Z ok (0.023s) 2023-01-11T21:28:59.1355072Z test_parameters2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1355487Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1355697Z ok (0.024s) 2023-01-11T21:28:59.1356174Z test_parameters3_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:28:59.1356508Z ok (0.073s) 2023-01-11T21:28:59.1356987Z test_self_mutating1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 9), ('fusions_possible', 6), ('unique_graphs', 3)] 2023-01-11T21:28:59.1357308Z ok (0.068s) 2023-01-11T21:28:59.1357769Z test_seq_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1358090Z ok (0.078s) 2023-01-11T21:28:59.1358384Z test_simple_torch_function_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
inline_call [] 2023-01-11T21:28:59.1358811Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1359036Z ok (0.109s) 2023-01-11T21:28:59.1359515Z test_stringmember_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1359831Z ok (0.048s) 2023-01-11T21:28:59.1360127Z test_submodules1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1360540Z stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:28:59.1360752Z ok (0.111s) 2023-01-11T21:28:59.1361047Z test_submodules2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1361503Z stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:28:59.1361726Z ok (0.111s) 2023-01-11T21:28:59.1362003Z test_super1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1362412Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1362638Z ok (0.066s) 2023-01-11T21:28:59.1362916Z test_super2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1363324Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1363553Z ok (0.057s) 2023-01-11T21:28:59.1363856Z test_super_class_method_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1364263Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1364490Z ok (0.021s) 2023-01-11T21:28:59.1364994Z test_tensorlist_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1365308Z ok (0.059s) 2023-01-11T21:28:59.1365811Z test_torch_function_with_closure_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1366159Z ok (0.102s) 2023-01-11T21:28:59.1366588Z test_unsupportedmethod_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1367099Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1367395Z unimplemented [] 2023-01-11T21:28:59.1367787Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:28:59.1368192Z stats [('calls_captured', 5), ('fusions_possible', 3), ('unique_graphs', 2)] 2023-01-11T21:28:59.1368421Z ok (0.095s) 2023-01-11T21:28:59.1368850Z test_unsupportedmodule_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1369367Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1369647Z unimplemented [] 2023-01-11T21:28:59.1370028Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:28:59.1370440Z stats [('calls_captured', 6), ('fusions_possible', 3), ('unique_graphs', 3)] 2023-01-11T21:28:59.1370651Z ok (0.097s) 2023-01-11T21:28:59.1370956Z test_viamodulecall_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1371383Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1371609Z ok (0.072s) 2023-01-11T21:28:59.1371993Z test_Size_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1372469Z inline_call [('inline in skipfiles: assertIsInstance /opt/conda/lib/python3.10/unittest/case.py', 1)] 2023-01-11T21:28:59.1372742Z unimplemented [] 2023-01-11T21:28:59.1373084Z graph_break [('inline in skipfiles: assertIsInstance /opt/conda/lib/python3.10/unittest/case.py', 1)] 2023-01-11T21:28:59.1373476Z stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:28:59.1373702Z ok (0.028s) 2023-01-11T21:28:59.1374113Z test_abc_setattr_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1374406Z unimplemented [] 2023-01-11T21:28:59.1374839Z graph_break [('setattr(UserDefinedObjectVariable) .Derived.__setattr__ at 0x7f8c1d39be20>', 1)] 2023-01-11T21:28:59.1375186Z inline_call [] 2023-01-11T21:28:59.1375473Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1375700Z ok (0.025s) 2023-01-11T21:28:59.1376141Z test_avoid_dupe_specialization_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1376588Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.1376863Z aot_autograd [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1377049Z ok (0.155s) 2023-01-11T21:28:59.1377352Z test_batch_norm_act_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1377754Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1377978Z ok (0.172s) 2023-01-11T21:28:59.1378480Z test_batchnorm_e2e_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:28:59.1378848Z frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1379036Z inline_call [] 2023-01-11T21:28:59.1379332Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1379618Z aot_autograd [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1379788Z ok (2.052s) 2023-01-11T21:28:59.1380224Z test_bigbird_unsqueeze_inplace_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1380664Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1380938Z aot_autograd [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1381122Z ok (0.438s) 2023-01-11T21:28:59.1381421Z test_boxes_len_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1381825Z stats [('calls_captured', 6), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:28:59.1382046Z ok (0.027s) 2023-01-11T21:28:59.1382494Z test_chunk_reformer_ff_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires static shapes (0.001s) 2023-01-11T21:28:59.1383129Z test_class_member_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1383451Z ok (0.045s) 2023-01-11T21:28:59.1383898Z test_convert_boxes_to_pooler_format_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:28:59.1383971Z inline_call [] 2023-01-11T21:28:59.1384046Z unimplemented [] 2023-01-11T21:28:59.1384259Z graph_break [('dynamic shape operator: aten.repeat_interleave.Tensor', 2)] 2023-01-11T21:28:59.1384464Z stats [('calls_captured', 18), ('fusions_possible', 14), ('unique_graphs', 4)] 2023-01-11T21:28:59.1384533Z expected failure (0.180s) 2023-01-11T21:28:59.1384777Z test_create_rand_mask_from_inputs_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires static shapes (0.001s) 2023-01-11T21:28:59.1384974Z test_dict_iter_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... ok (0.005s) 2023-01-11T21:28:59.1385298Z test_dict_list_values_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 7), ('ok', 7)] 2023-01-11T21:28:59.1385378Z unimplemented [] 2023-01-11T21:28:59.1385723Z graph_break [('call_function in skip_files Builtin count', 2), ('call_function BuiltinVariable(zip) [UserDefinedObjectVariable(count), ListVariable()] {}', 2)] 2023-01-11T21:28:59.1385791Z ok (0.059s) 2023-01-11T21:28:59.1386102Z test_do_paste_mask_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1)] 2023-01-11T21:28:59.1386227Z expected failure (0.032s) 2023-01-11T21:28:59.1386635Z test_dynamic_shapes_right_side_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1386705Z ok (0.050s) 2023-01-11T21:28:59.1387085Z test_ellipsis_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1387153Z ok (0.143s) 2023-01-11T21:28:59.1387472Z test_exec_import_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
frames [('total', 5), ('ok', 5)] 2023-01-11T21:28:59.1387693Z inline_call [('call_function BuiltinVariable(exec) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:28:59.1387773Z unimplemented [] 2023-01-11T21:28:59.1388045Z graph_break [('call_function BuiltinVariable(exec) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:28:59.1388099Z ok (0.004s) 2023-01-11T21:28:59.1388432Z test_exec_wildcard_import_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 5), ('ok', 5)] 2023-01-11T21:28:59.1388647Z inline_call [('call_function BuiltinVariable(exec) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:28:59.1388728Z unimplemented [] 2023-01-11T21:28:59.1388941Z graph_break [('call_function BuiltinVariable(exec) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:28:59.1389138Z stats [('calls_captured', 6), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:28:59.1389205Z ok (0.016s) 2023-01-11T21:28:59.1389547Z test_for_loop_graph_break_before_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1389610Z unimplemented [] 2023-01-11T21:28:59.1389887Z graph_break [('call_function in skip_files /opt/conda/lib/python3.10/site-packages/torch/_dynamo/__init__.py', 1)] 2023-01-11T21:28:59.1389959Z inline_call [] 2023-01-11T21:28:59.1390163Z stats [('calls_captured', 100), ('fusions_possible', 99), ('unique_graphs', 1)] 2023-01-11T21:28:59.1390230Z ok (0.720s) 2023-01-11T21:28:59.1390559Z test_for_loop_graph_break_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1390634Z inline_call [] 2023-01-11T21:28:59.1390896Z unimplemented [('call_function in skip_files /opt/conda/lib/python3.10/site-packages/torch/_dynamo/__init__.py', 1)] 2023-01-11T21:28:59.1391094Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1391160Z ok (0.032s) 2023-01-11T21:28:59.1391434Z test_get_parameter_dtype_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1391641Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1391709Z ok (0.030s) 2023-01-11T21:28:59.1391945Z test_grad_mode_carrying_correct_state_after_graph_break_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... Break 2023-01-11T21:28:59.1392066Z frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1392130Z unimplemented [] 2023-01-11T21:28:59.1392354Z graph_break [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:28:59.1392549Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:28:59.1392615Z ok (0.033s) 2023-01-11T21:28:59.1392951Z test_guard_fail_nested_tuple_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1393149Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.1393215Z ok (0.037s) 2023-01-11T21:28:59.1393600Z test_guard_fail_tensor_bool_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
frames [('total', 12), ('ok', 12)] 2023-01-11T21:28:59.1393791Z unimplemented [('FOR_ITER UserDefinedObjectVariable(product)', 1)] 2023-01-11T21:28:59.1394232Z graph_break [('call torch._dynamo.disable() wrapped function .fn..get_expected at 0x7f8c8f784670>', 5), ('data dependent operator: aten.allclose.default', 5)] 2023-01-11T21:28:59.1394429Z stats [('calls_captured', 5), ('unique_graphs', 5), ('fusions_possible', 0)] 2023-01-11T21:28:59.1394495Z ok (0.211s) 2023-01-11T21:28:59.1394707Z test_guard_ordering_shape_fail_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... ok (0.002s) 2023-01-11T21:28:59.1394915Z test_hf_model_output_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1395147Z stats [('calls_captured', 4), ('unique_graphs', 4), ('fusions_possible', 0)] 2023-01-11T21:28:59.1395213Z ok (0.104s) 2023-01-11T21:28:59.1395415Z test_hf_t5_forward_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1395606Z stats [('calls_captured', 14), ('fusions_possible', 13), ('unique_graphs', 1)] 2023-01-11T21:28:59.1395688Z expected failure (0.663s) 2023-01-11T21:28:59.1396016Z test_indexing_with_list_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1396144Z inline_call [('Tensor.numpy', 1)] 2023-01-11T21:28:59.1396351Z unimplemented [('COMPARE_OP ConstantVariable(tuple) == SizeVariable()', 1)] 2023-01-11T21:28:59.1396479Z graph_break [('Tensor.numpy', 1)] 2023-01-11T21:28:59.1396545Z ok (0.036s) 2023-01-11T21:28:59.1396760Z test_is_symbolic_tracing_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1396945Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1397010Z ok (0.016s) 2023-01-11T21:28:59.1397213Z test_isinstance_dtype_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... ok (0.007s) 2023-01-11T21:28:59.1397758Z test_isinstance_storage_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... /var/lib/jenkins/workspace/test/dynamo/test_repros.py:1484: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:28:59.1397880Z bools = torch.BoolStorage.from_buffer(f, "big") 2023-01-11T21:28:59.1398000Z frames [('total', 9), ('ok', 9)] 2023-01-11T21:28:59.1398079Z unimplemented [] 2023-01-11T21:28:59.1398454Z graph_break [('call_function BuiltinVariable(bytearray) [ListVariable()] {}', 1), ('inline in skipfiles: from_buffer /opt/conda/lib/python3.10/site-packages/torch/storage.py', 1)] 2023-01-11T21:28:59.1398718Z inline_call [('inline in skipfiles: from_buffer /opt/conda/lib/python3.10/site-packages/torch/storage.py', 1)] 2023-01-11T21:28:59.1398900Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1398983Z expected failure (0.025s) 2023-01-11T21:28:59.1399191Z test_issue1466_size_aot_autograd_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
arf 2023-01-11T21:28:59.1399255Z arf 2023-01-11T21:28:59.1399373Z frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1399447Z unimplemented [] 2023-01-11T21:28:59.1399671Z graph_break [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:28:59.1399864Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:28:59.1400020Z aot_autograd [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1400087Z ok (0.172s) 2023-01-11T21:28:59.1400286Z test_issue175_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1400489Z stats [('calls_captured', 12), ('fusions_possible', 11), ('unique_graphs', 1)] 2023-01-11T21:28:59.1400557Z ok (0.467s) 2023-01-11T21:28:59.1400948Z test_longformer_chunk_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 38), ('fusions_possible', 36), ('unique_graphs', 2)] 2023-01-11T21:28:59.1401014Z ok (1.390s) 2023-01-11T21:28:59.1401233Z test_maml_item_capture_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... expected failure (0.002s) 2023-01-11T21:28:59.1401579Z test_maml_no_item_capture_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 5), ('ok', 5)] 2023-01-11T21:28:59.1401782Z inline_call [('inlining disallowed: ', 1)] 2023-01-11T21:28:59.1401859Z unimplemented [] 2023-01-11T21:28:59.1402196Z graph_break [('Tensor.item', 2), ('call_function in skip_files /opt/conda/lib/python3.10/copy.py', 1), ('inlining disallowed: ', 1)] 2023-01-11T21:28:59.1402397Z stats [('calls_captured', 36), ('fusions_possible', 31), ('unique_graphs', 5)] 2023-01-11T21:28:59.1402464Z ok (0.681s) 2023-01-11T21:28:59.1402839Z test_modules_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:28:59.1402905Z ok (0.069s) 2023-01-11T21:28:59.1403214Z test_multi_dot_import_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:28:59.1403507Z inline_call [('inline in skipfiles: symbolic_trace /opt/conda/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py', 1)] 2023-01-11T21:28:59.1403585Z unimplemented [] 2023-01-11T21:28:59.1403870Z graph_break [('inline in skipfiles: symbolic_trace /opt/conda/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py', 1)] 2023-01-11T21:28:59.1404068Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1404135Z ok (0.027s) 2023-01-11T21:28:59.1404361Z test_multi_import_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires detectron2 (0.000s) 2023-01-11T21:28:59.1404746Z test_named_buffers_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 6), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:28:59.1404812Z ok (0.051s) 2023-01-11T21:28:59.1405120Z test_nn_parameter_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1405362Z inline_call [('inline in skipfiles: assertTrue /opt/conda/lib/python3.10/unittest/case.py', 1)] 2023-01-11T21:28:59.1405438Z unimplemented [] 2023-01-11T21:28:59.1405673Z graph_break [('inline in skipfiles: assertTrue /opt/conda/lib/python3.10/unittest/case.py', 1)] 2023-01-11T21:28:59.1405869Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1405935Z ok (0.016s) 2023-01-11T21:28:59.1406152Z test_norm_dtype_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires cuda (0.001s) 2023-01-11T21:28:59.1406525Z test_not_rewrite_assert_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... unimplemented [('generic_jump TensorVariable()', 1)] 2023-01-11T21:28:59.1406578Z ok (0.021s) 2023-01-11T21:28:59.1406933Z test_not_rewrite_assert_for_other_errors_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1407185Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1407251Z ok (0.025s) 2023-01-11T21:28:59.1407565Z test_numpy_list_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 1)] 2023-01-11T21:28:59.1407640Z unimplemented [] 2023-01-11T21:28:59.1407951Z graph_break [('call torch._dynamo.disable() wrapped function .rand_gen at 0x7f8c876a2f80>', 1)] 2023-01-11T21:28:59.1408035Z expected failure (0.032s) 2023-01-11T21:28:59.1408352Z test_optimized_deepcopy_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1408551Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1408618Z ok (0.037s) 2023-01-11T21:28:59.1408960Z test_primtorch_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1409241Z inline_call [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_prims_common/wrappers.py', 1)] 2023-01-11T21:28:59.1409320Z unimplemented [] 2023-01-11T21:28:59.1409600Z graph_break [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_prims_common/wrappers.py', 1)] 2023-01-11T21:28:59.1409665Z ok (0.013s) 2023-01-11T21:28:59.1410143Z test_primtorch_no_graph_break_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_prims_common/wrappers.py', 1)] 2023-01-11T21:28:59.1410213Z expected failure (0.006s) 2023-01-11T21:28:59.1410537Z test_recursive_map_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1410614Z inline_call [] 2023-01-11T21:28:59.1410812Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1410879Z ok (0.044s) 2023-01-11T21:28:59.1411084Z test_reformer_eval_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
inline_call [] 2023-01-11T21:28:59.1411284Z stats [('calls_captured', 10), ('fusions_possible', 9), ('unique_graphs', 1)] 2023-01-11T21:28:59.1411350Z ok (0.426s) 2023-01-11T21:28:59.1411548Z test_reformer_min_chunk_len_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1411748Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1411813Z ok (0.018s) 2023-01-11T21:28:59.1412025Z test_reformer_remove_unused_args_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... foo 2023-01-11T21:28:59.1412087Z foo 2023-01-11T21:28:59.1412212Z frames [('total', 4), ('ok', 4)] 2023-01-11T21:28:59.1412434Z inline_call [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:28:59.1412496Z unimplemented [] 2023-01-11T21:28:59.1412720Z graph_break [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 2)] 2023-01-11T21:28:59.1412917Z stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:28:59.1413049Z aot_autograd [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1413113Z ok (0.168s) 2023-01-11T21:28:59.1413321Z test_reformer_sorting_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... inline_call [] 2023-01-11T21:28:59.1413521Z stats [('calls_captured', 28), ('fusions_possible', 27), ('unique_graphs', 1)] 2023-01-11T21:28:59.1413586Z ok (0.098s) 2023-01-11T21:28:59.1413893Z test_reformer_train_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:28:59.1414220Z inline_call [('inline in skipfiles: save_for_backward /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py', 1)] 2023-01-11T21:28:59.1414296Z unimplemented [] 2023-01-11T21:28:59.1414661Z graph_break [('autograd.Function with requires_grad', 1), ('inline in skipfiles: save_for_backward /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py', 1)] 2023-01-11T21:28:59.1414860Z stats [('calls_captured', 10), ('fusions_possible', 6), ('unique_graphs', 4)] 2023-01-11T21:28:59.1414927Z ok (0.429s) 2023-01-11T21:28:59.1415248Z test_reinplacing_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1415445Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1415559Z aot_autograd [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1415624Z ok (0.488s) 2023-01-11T21:28:59.1416045Z test_relative_import_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1416114Z ok (0.034s) 2023-01-11T21:28:59.1416523Z test_relative_import_no_modulename_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1416592Z ok (0.033s) 2023-01-11T21:28:59.1416989Z test_rewrite_assert_noop_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 9), ('fusions_possible', 6), ('unique_graphs', 3)] 2023-01-11T21:28:59.1417055Z ok (0.091s) 2023-01-11T21:28:59.1417448Z test_rewrite_assert_with_fstring_msg_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
unimplemented [('generic_jump TensorVariable()', 1)] 2023-01-11T21:28:59.1417500Z ok (0.022s) 2023-01-11T21:28:59.1417905Z test_rewrite_assert_with_msg_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 18), ('fusions_possible', 15), ('unique_graphs', 3)] 2023-01-11T21:28:59.1417973Z ok (0.113s) 2023-01-11T21:28:59.1418382Z test_rewrite_assert_without_msg_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 12), ('fusions_possible', 10), ('unique_graphs', 2)] 2023-01-11T21:28:59.1418448Z ok (0.076s) 2023-01-11T21:28:59.1418767Z test_rng_state_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:28:59.1419012Z unimplemented [('TODO: make torch.random.set_rng_state work with FakeTensor/aot_autograd', 1)] 2023-01-11T21:28:59.1419245Z graph_break [('TODO: make torch.random.set_rng_state work with FakeTensor/aot_autograd', 2)] 2023-01-11T21:28:59.1419442Z stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:28:59.1419499Z ok (0.039s) 2023-01-11T21:28:59.1419887Z test_seq_append_list_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:28:59.1419956Z ok (0.115s) 2023-01-11T21:28:59.1420632Z test_sigmoid_out_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... /var/lib/jenkins/workspace/test/dynamo/test_repros.py:1543: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [3, 5]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:28:59.1436199Z torch.sigmoid(inp, out=out1) 2023-01-11T21:28:59.1437447Z /opt/conda/lib/python3.10/site-packages/torch/_prims_common/wrappers.py:145: UserWarning: An output with one or more elements was resized since it had shape torch.Size([]) which does not match the required output shape {str(shape)}. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). 2023-01-11T21:28:59.1437531Z warnings.warn(msg) 2023-01-11T21:28:59.1438149Z /var/lib/jenkins/workspace/test/dynamo/test_repros.py:1543: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [3, 5]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 
2023-01-11T21:28:59.1438244Z torch.sigmoid(inp, out=out1) 2023-01-11T21:28:59.1438354Z frames [('total', 7), ('ok', 7)] 2023-01-11T21:28:59.1438555Z inline_call [('call_function UserDefinedClassVariable() [] {}', 1)] 2023-01-11T21:28:59.1438628Z ok (0.035s) 2023-01-11T21:28:59.1439036Z test_slice_into_list_mutable_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 30), ('fusions_possible', 29), ('unique_graphs', 1)] 2023-01-11T21:28:59.1439102Z ok (0.091s) 2023-01-11T21:28:59.1439443Z test_slicing_dynamic_shape_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1439613Z unimplemented [('Dynamic slicing not supported', 2)] 2023-01-11T21:28:59.1439814Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.1439867Z ok (0.028s) 2023-01-11T21:28:59.1440219Z test_slicing_dynamic_shape_setitem_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 1)] 2023-01-11T21:28:59.1440385Z unimplemented [('Dynamic slicing not supported', 1)] 2023-01-11T21:28:59.1440550Z graph_break [('Dynamic slicing not supported', 1)] 2023-01-11T21:28:59.1440748Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1440873Z aot_autograd [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1440937Z ok (0.040s) 2023-01-11T21:28:59.1441608Z test_sort_out_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... /var/lib/jenkins/workspace/test/dynamo/test_repros.py:1527: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [3]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:28:59.1441717Z torch.sort(tensor, out=(values1, indices1)) 2023-01-11T21:28:59.1442596Z /opt/conda/lib/python3.10/site-packages/torch/_dynamo/utils.py:1052: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [3]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:28:59.1442691Z return node.target(*args, **kwargs) 2023-01-11T21:28:59.1443251Z /var/lib/jenkins/workspace/test/dynamo/test_repros.py:1527: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [3]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 
2023-01-11T21:28:59.1443396Z torch.sort(tensor, out=(values1, indices1)) 2023-01-11T21:28:59.1443956Z /var/lib/jenkins/workspace/test/dynamo/test_repros.py:1527: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [3]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:28:59.1444090Z torch.sort(tensor, out=(values1, indices1)) 2023-01-11T21:28:59.1444203Z frames [('total', 7), ('ok', 7)] 2023-01-11T21:28:59.1444401Z inline_call [('call_function UserDefinedClassVariable() [] {}', 1)] 2023-01-11T21:28:59.1444468Z ok (0.046s) 2023-01-11T21:28:59.1444802Z test_specialized_stride_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1445000Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1445068Z ok (0.012s) 2023-01-11T21:28:59.1445401Z test_swin_base_tensor_attr_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1445601Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.1445653Z ok (0.090s) 2023-01-11T21:28:59.1445993Z test_tensor_isinstance_tuple_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1446195Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1446262Z ok (0.011s) 2023-01-11T21:28:59.1446581Z test_tokenization_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:28:59.1446821Z inline_call [('inline in skipfiles: __init__ /opt/conda/lib/python3.10/collections/__init__.py', 2)] 2023-01-11T21:28:59.1446898Z unimplemented [] 2023-01-11T21:28:59.1447141Z graph_break [('inline in skipfiles: __init__ /opt/conda/lib/python3.10/collections/__init__.py', 2)] 2023-01-11T21:28:59.1447193Z ok (0.019s) 2023-01-11T21:28:59.1447579Z test_torch_ops_aten_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1447646Z ok (0.022s) 2023-01-11T21:28:59.1448044Z test_vdd_duplicate_error_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:28:59.1448109Z ok (0.071s) 2023-01-11T21:28:59.1448444Z test_while_loop_graph_break_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1448514Z inline_call [] 2023-01-11T21:28:59.1448788Z unimplemented [('call_function in skip_files /opt/conda/lib/python3.10/site-packages/torch/_dynamo/__init__.py', 1)] 2023-01-11T21:28:59.1448974Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1449038Z ok (0.029s) 2023-01-11T21:28:59.1449248Z test_with_on_graph_break_inst_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
Hello world 2023-01-11T21:28:59.1449313Z Hello world 2023-01-11T21:28:59.1449435Z frames [('total', 6), ('ok', 6)] 2023-01-11T21:28:59.1449689Z inline_call [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 1)] 2023-01-11T21:28:59.1449764Z unimplemented [] 2023-01-11T21:28:59.1450011Z graph_break [('call_function BuiltinVariable(print) [ConstantVariable(str)] {}', 2), ('Tensor.backward', 1)] 2023-01-11T21:28:59.1450214Z stats [('calls_captured', 11), ('fusions_possible', 7), ('unique_graphs', 4)] 2023-01-11T21:28:59.1450280Z ok (0.118s) 2023-01-11T21:28:59.1450597Z test_capi_call1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1450673Z unimplemented [] 2023-01-11T21:28:59.1450955Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1451152Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1451218Z ok (0.023s) 2023-01-11T21:28:59.1451544Z test_capi_call2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1451623Z unimplemented [] 2023-01-11T21:28:59.1451899Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1452098Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.1452163Z ok (0.046s) 2023-01-11T21:28:59.1452473Z test_capi_call3_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1452548Z unimplemented [] 2023-01-11T21:28:59.1452820Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1453000Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1453065Z ok (0.024s) 2023-01-11T21:28:59.1453384Z test_control_flow1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1453582Z stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:28:59.1453650Z ok (0.047s) 2023-01-11T21:28:59.1453967Z test_control_flow2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1454166Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1454230Z ok (0.024s) 2023-01-11T21:28:59.1454527Z test_control_flow3_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1454720Z stats [('calls_captured', 7), ('fusions_possible', 4), ('unique_graphs', 3)] 2023-01-11T21:28:59.1454783Z ok (0.077s) 2023-01-11T21:28:59.1455104Z test_control_flow4_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 5), ('ok', 5)] 2023-01-11T21:28:59.1455303Z stats [('calls_captured', 5), ('unique_graphs', 3), ('fusions_possible', 2)] 2023-01-11T21:28:59.1455369Z ok (0.041s) 2023-01-11T21:28:59.1455677Z test_control_flow5_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
frames [('total', 7), ('ok', 7)] 2023-01-11T21:28:59.1455876Z stats [('calls_captured', 13), ('fusions_possible', 7), ('unique_graphs', 6)] 2023-01-11T21:28:59.1455927Z ok (0.101s) 2023-01-11T21:28:59.1456249Z test_dynamic_duck_size_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1456443Z stats [('calls_captured', 10), ('fusions_possible', 8), ('unique_graphs', 2)] 2023-01-11T21:28:59.1456508Z ok (0.042s) 2023-01-11T21:28:59.1456820Z test_dynamic_kwarg_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1457053Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1457118Z ok (0.023s) 2023-01-11T21:28:59.1457458Z test_dynamic_order_dependence_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1457641Z stats [('calls_captured', 9), ('fusions_possible', 6), ('unique_graphs', 3)] 2023-01-11T21:28:59.1457707Z ok (0.069s) 2023-01-11T21:28:59.1458034Z test_dynamic_shapes_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 11), ('ok', 11)] 2023-01-11T21:28:59.1458235Z stats [('calls_captured', 22), ('fusions_possible', 11), ('unique_graphs', 11)] 2023-01-11T21:28:59.1458301Z ok (0.062s) 2023-01-11T21:28:59.1458637Z test_dynamic_zero_inference_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1458871Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:28:59.1458938Z ok (0.025s) 2023-01-11T21:28:59.1459276Z test_enumerate_not_break_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1459460Z stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:28:59.1459542Z expected failure (0.027s) 2023-01-11T21:28:59.1459863Z test_extended_args_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1460073Z stats [('calls_captured', 1026), ('fusions_possible', 1023), ('unique_graphs', 3)] 2023-01-11T21:28:59.1460138Z ok (7.169s) 2023-01-11T21:28:59.1460463Z test_graph_break_on_item_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1460542Z unimplemented [] 2023-01-11T21:28:59.1460671Z graph_break [('Tensor.item', 1)] 2023-01-11T21:28:59.1460858Z stats [('calls_captured', 5), ('fusions_possible', 3), ('unique_graphs', 2)] 2023-01-11T21:28:59.1460923Z ok (0.052s) 2023-01-11T21:28:59.1461257Z test_indirect_unsupported1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1461537Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1461613Z unimplemented [] 2023-01-11T21:28:59.1461892Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:28:59.1462087Z stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:28:59.1462139Z ok (0.045s) 2023-01-11T21:28:59.1462616Z test_indirect_unsupported2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1462906Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1462982Z unimplemented [] 2023-01-11T21:28:59.1463261Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:28:59.1463457Z stats [('calls_captured', 5), ('unique_graphs', 3), ('fusions_possible', 2)] 2023-01-11T21:28:59.1463523Z ok (0.070s) 2023-01-11T21:28:59.1463857Z test_indirect_unsupported3_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1464136Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1464199Z unimplemented [] 2023-01-11T21:28:59.1464479Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:28:59.1464734Z stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:28:59.1464801Z ok (0.045s) 2023-01-11T21:28:59.1465116Z test_multigraph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1465315Z stats [('calls_captured', 5), ('fusions_possible', 3), ('unique_graphs', 2)] 2023-01-11T21:28:59.1465379Z ok (0.044s) 2023-01-11T21:28:59.1465715Z test_no_graph_break_on_item_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1465897Z stats [('calls_captured', 6), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:28:59.1465964Z ok (0.048s) 2023-01-11T21:28:59.1466285Z test_pop_after_resume_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1466418Z unimplemented [] 2023-01-11T21:28:59.1466701Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1466899Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:28:59.1466976Z ok (0.044s) 2023-01-11T21:28:59.1467412Z test_restore_range_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1467535Z unimplemented [] 2023-01-11T21:28:59.1468018Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1468387Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.1468544Z ok (0.044s) 2023-01-11T21:28:59.1469052Z test_restore_range_iter_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1469219Z unimplemented [] 2023-01-11T21:28:59.1469647Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1469995Z stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:28:59.1470119Z ok (0.025s) 2023-01-11T21:28:59.1470638Z test_restore_state_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 1)] 2023-01-11T21:28:59.1470816Z unimplemented [] 2023-01-11T21:28:59.1471348Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1471716Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:28:59.1471881Z expected failure (0.037s) 2023-01-11T21:28:59.1472364Z test_resume1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1472532Z unimplemented [] 2023-01-11T21:28:59.1472941Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1473308Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:28:59.1473465Z ok (0.057s) 2023-01-11T21:28:59.1473984Z test_resume2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1474454Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1474628Z unimplemented [] 2023-01-11T21:28:59.1475063Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:28:59.1475390Z stats [('calls_captured', 7), ('fusions_possible', 4), ('unique_graphs', 3)] 2023-01-11T21:28:59.1475500Z ok (0.079s) 2023-01-11T21:28:59.1475994Z test_resume3_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1476600Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1476777Z unimplemented [] 2023-01-11T21:28:59.1477252Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:28:59.1477626Z stats [('calls_captured', 7), ('fusions_possible', 4), ('unique_graphs', 3)] 2023-01-11T21:28:59.1477734Z ok (0.078s) 2023-01-11T21:28:59.1478166Z test_resume4_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1478431Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1478509Z unimplemented [] 2023-01-11T21:28:59.1478829Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:28:59.1479152Z stats [('calls_captured', 7), ('fusions_possible', 4), ('unique_graphs', 3)] 2023-01-11T21:28:59.1479276Z ok (0.082s) 2023-01-11T21:28:59.1479594Z test_resume5_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... tensor([1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 2023-01-11T21:28:59.1479701Z 1.5000]) 2023-01-11T21:28:59.1479857Z tensor([1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 2023-01-11T21:28:59.1479949Z 1.5000]) 2023-01-11T21:28:59.1480115Z tensor([1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 2023-01-11T21:28:59.1480236Z 1.5000]) 2023-01-11T21:28:59.1480410Z tensor([1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 2023-01-11T21:28:59.1480527Z 1.5000]) 2023-01-11T21:28:59.1480737Z frames [('total', 4), ('ok', 4)] 2023-01-11T21:28:59.1480881Z unimplemented [] 2023-01-11T21:28:59.1481165Z graph_break [('call_function BuiltinVariable(print) [TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1481454Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:28:59.1481574Z ok (0.062s) 2023-01-11T21:28:59.1482046Z test_resume_freevars_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1482163Z unimplemented [] 2023-01-11T21:28:59.1482563Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1482868Z stats [('calls_captured', 5), ('fusions_possible', 3), ('unique_graphs', 2)] 2023-01-11T21:28:59.1482984Z ok (0.061s) 2023-01-11T21:28:59.1483386Z test_resume_paths_join_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 7), ('ok', 7)] 2023-01-11T21:28:59.1483593Z stats [('calls_captured', 10), ('unique_graphs', 7), ('fusions_possible', 3)] 2023-01-11T21:28:59.1483661Z ok (0.135s) 2023-01-11T21:28:59.1484114Z test_resume_tuple_iterator_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1484244Z unimplemented [] 2023-01-11T21:28:59.1484642Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1484959Z stats [('calls_captured', 8), ('fusions_possible', 6), ('unique_graphs', 2)] 2023-01-11T21:28:59.1485073Z ok (0.081s) 2023-01-11T21:28:59.1485524Z test_resume_with_no_grad1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:28:59.1485667Z unimplemented [] 2023-01-11T21:28:59.1485894Z graph_break [('Tensor.tolist', 2)] 2023-01-11T21:28:59.1486201Z stats [('calls_captured', 18), ('fusions_possible', 14), ('unique_graphs', 4)] 2023-01-11T21:28:59.1486398Z ok (0.108s) 2023-01-11T21:28:59.1486889Z test_resume_with_no_grad2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1487007Z unimplemented [] 2023-01-11T21:28:59.1487134Z graph_break [('Tensor.tolist', 2)] 2023-01-11T21:28:59.1487337Z stats [('calls_captured', 13), ('fusions_possible', 10), ('unique_graphs', 3)] 2023-01-11T21:28:59.1487402Z ok (0.080s) 2023-01-11T21:28:59.1487730Z test_resume_with_no_grad3_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1487807Z unimplemented [] 2023-01-11T21:28:59.1487937Z graph_break [('Tensor.tolist', 1)] 2023-01-11T21:28:59.1488136Z stats [('calls_captured', 19), ('fusions_possible', 17), ('unique_graphs', 2)] 2023-01-11T21:28:59.1488224Z ok (0.056s) 2023-01-11T21:28:59.1488746Z test_stack_state1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1488867Z unimplemented [] 2023-01-11T21:28:59.1489289Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1489592Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:28:59.1489693Z ok (0.064s) 2023-01-11T21:28:59.1490164Z test_stack_state2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1490570Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1490685Z unimplemented [] 2023-01-11T21:28:59.1491080Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:28:59.1491385Z stats [('calls_captured', 7), ('fusions_possible', 4), ('unique_graphs', 3)] 2023-01-11T21:28:59.1491457Z ok (0.087s) 2023-01-11T21:28:59.1491667Z test_start1_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]) 2023-01-11T21:28:59.1491831Z tensor([-2., -2., -2., -2., -2., -2., -2., -2., -2., -2.]) 2023-01-11T21:28:59.1491922Z tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]) 2023-01-11T21:28:59.1492080Z tensor([-2., -2., -2., -2., -2., -2., -2., -2., -2., -2.]) 2023-01-11T21:28:59.1492200Z frames [('total', 4), ('ok', 4)] 2023-01-11T21:28:59.1492266Z unimplemented [] 2023-01-11T21:28:59.1492492Z graph_break [('call_function BuiltinVariable(print) [TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1492813Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1492935Z ok (0.041s) 2023-01-11T21:28:59.1493394Z test_start2_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1493822Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1493939Z unimplemented [] 2023-01-11T21:28:59.1494318Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:28:59.1494636Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:28:59.1494739Z ok (0.070s) 2023-01-11T21:28:59.1495186Z test_start3_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
frames [('total', 2), ('ok', 2)] 2023-01-11T21:28:59.1495323Z unimplemented [] 2023-01-11T21:28:59.1495723Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:28:59.1495927Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1495993Z ok (0.037s) 2023-01-11T21:28:59.1496354Z test_start4_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1496552Z stats [('calls_captured', 4), ('unique_graphs', 3), ('fusions_possible', 1)] 2023-01-11T21:28:59.1496617Z ok (0.053s) 2023-01-11T21:28:59.1496851Z test_tuple_iterator_mutate_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: not working yet (0.001s) 2023-01-11T21:28:59.1497319Z test_tuple_iterator_return_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:28:59.1497448Z unimplemented [] 2023-01-11T21:28:59.1497852Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:28:59.1498170Z stats [('calls_captured', 6), ('fusions_possible', 3), ('unique_graphs', 3)] 2023-01-11T21:28:59.1498265Z ok (0.073s) 2023-01-11T21:28:59.1498665Z test_builtin_functions_on_cuda_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... skip: requires cuda (0.001s) 2023-01-11T21:28:59.1499139Z test_builtin_getitem_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1)] 2023-01-11T21:28:59.1499264Z expected failure (0.015s) 2023-01-11T21:28:59.1499744Z test_builtin_max_min_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1500065Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:28:59.1500137Z ok (0.024s) 2023-01-11T21:28:59.1500494Z test_feed_random_values_into_graph_only_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1500689Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:28:59.1500747Z ok (0.020s) 2023-01-11T21:28:59.1501138Z test_multiple_consecutive_random_calls_before_graph_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1501452Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1501571Z ok (0.030s) 2023-01-11T21:28:59.1502044Z test_no_recompilations_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:28:59.1502469Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:28:59.1502582Z ok (0.021s) 2023-01-11T21:28:59.1503054Z test_numpy_correctness_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... 
frames [('total', 4), ('ok', 4)]
2023-01-11T21:28:59.1503339Z unimplemented [('reconstruct: ConstantVariable(float64)', 1)]
2023-01-11T21:28:59.1503578Z graph_break [('Tensor.numpy', 2), ('numpy', 2)]
2023-01-11T21:28:59.1503885Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)]
2023-01-11T21:28:59.1504001Z ok (0.080s)
2023-01-11T21:28:59.1504513Z test_random_call_with_while_loop_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)]
2023-01-11T21:28:59.1504713Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)]
2023-01-11T21:28:59.1504779Z ok (0.018s)
2023-01-11T21:28:59.1505122Z test_random_values_with_graph_break_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 3), ('ok', 3)]
2023-01-11T21:28:59.1505185Z unimplemented []
2023-01-11T21:28:59.1505310Z graph_break [('Tensor.item', 2)]
2023-01-11T21:28:59.1505590Z stats [('calls_captured', 4), ('unique_graphs', 3), ('fusions_possible', 1)]
2023-01-11T21:28:59.1505704Z ok (0.071s)
2023-01-11T21:28:59.1506194Z test_unspec_float_precision_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches..DummyTestClass) ... frames [('total', 1), ('ok', 1)]
2023-01-11T21:28:59.1506641Z stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)]
2023-01-11T21:28:59.1506762Z ok (0.716s)
2023-01-11T21:28:59.1506774Z 
2023-01-11T21:28:59.1507071Z ----------------------------------------------------------------------
2023-01-11T21:28:59.1507194Z Ran 510 tests in 39.763s
2023-01-11T21:28:59.1507202Z 
2023-01-11T21:28:59.1507366Z OK (skipped=24, expected failures=16)
2023-01-11T21:28:59.1507376Z 
2023-01-11T21:28:59.1507530Z Generating XML reports...
2023-01-11T21:28:59.1508087Z Generated XML report: test-reports/python-unittest/dynamo.test_dynamic_shapes/TEST-torch._dynamo.testing.DynamicShapesExportTests-20230111212818.xml
2023-01-11T21:28:59.1508659Z Generated XML report: test-reports/python-unittest/dynamo.test_dynamic_shapes/TEST-torch._dynamo.testing.DynamicShapesFunctionTests-20230111212818.xml
2023-01-11T21:28:59.1509099Z Generated XML report: test-reports/python-unittest/dynamo.test_dynamic_shapes/TEST-torch._dynamo.testing.DynamicShapesMiscTests-20230111212818.xml
2023-01-11T21:28:59.1509617Z Generated XML report: test-reports/python-unittest/dynamo.test_dynamic_shapes/TEST-torch._dynamo.testing.DynamicShapesNNModuleTests-20230111212818.xml
2023-01-11T21:28:59.1510168Z Generated XML report: test-reports/python-unittest/dynamo.test_dynamic_shapes/TEST-torch._dynamo.testing.DynamicShapesReproTests-20230111212818.xml
2023-01-11T21:28:59.1510715Z Generated XML report: test-reports/python-unittest/dynamo.test_dynamic_shapes/TEST-torch._dynamo.testing.DynamicShapesSubGraphTests-20230111212818.xml
2023-01-11T21:28:59.1511314Z Generated XML report: test-reports/python-unittest/dynamo.test_dynamic_shapes/TEST-torch._dynamo.testing.DynamicShapesUnspecTests-20230111212818.xml
2023-01-11T21:28:59.1511327Z 
2023-01-11T21:28:59.1511805Z ##[endgroup]
2023-01-11T21:28:59.1512294Z FINISHED PRINTING LOG FILE of dynamo/test_dynamic_shapes (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_dynamic_shapes_kfuvrwhx)
2023-01-11T21:28:59.1512306Z 
2023-01-11T21:29:01.0842591Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
2023-01-11T21:29:01.1717196Z Ignoring disabled issues: []
2023-01-11T21:29:01.1864945Z Running dynamo/test_export_mutations ... [2023-01-11 21:29:01.186193]
2023-01-11T21:29:01.1867093Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_export_mutations.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:29:01.186478]
2023-01-11T21:29:03.3440742Z 
2023-01-11T21:29:03.3441282Z Expand the folded group to see the log file of dynamo/test_export_mutations
2023-01-11T21:29:03.3442219Z ##[group]PRINTING LOG FILE of dynamo/test_export_mutations (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_export_mutations_u16fd47j)
2023-01-11T21:29:03.3442908Z 
2023-01-11T21:29:03.3443154Z Running tests...
2023-01-11T21:29:03.3443853Z ----------------------------------------------------------------------
2023-01-11T21:29:03.3444497Z Test results will be stored in test-reports/python-unittest/dynamo.test_export_mutations
2023-01-11T21:29:03.3445765Z test_module_attribute_mutation_violation_negative_1 (__main__.MutationExportTests) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/88468 for platform(s) linux, macos, mac. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.235s)
2023-01-11T21:29:03.3447643Z test_module_attribute_mutation_violation_negative_2 (__main__.MutationExportTests) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/88475 for platform(s) linux, mac, macos. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.001s)
2023-01-11T21:29:03.3450078Z test_module_attribute_mutation_violation_negative_3 (__main__.MutationExportTests) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/88466 for platform(s) linux, mac, macos. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.000s)
2023-01-11T21:29:03.3452737Z test_module_attribute_mutation_violation_negative_4 (__main__.MutationExportTests) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/88467 for platform(s) linux, macos. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.001s)
2023-01-11T21:29:03.3454115Z test_module_attribute_mutation_violation_positive_1 (__main__.MutationExportTests) ... ok (0.027s)
2023-01-11T21:29:03.3455040Z test_module_attribute_mutation_violation_positive_2 (__main__.MutationExportTests) ... ok (0.003s)
2023-01-11T21:29:03.3455834Z test_module_attribute_mutation_violation_positive_3 (__main__.MutationExportTests) ... ok (0.002s)
2023-01-11T21:29:03.3456631Z test_module_attribute_mutation_violation_positive_4 (__main__.MutationExportTests) ... inline_call []
2023-01-11T21:29:03.3457173Z ok (0.004s)
2023-01-11T21:29:03.3457394Z 
2023-01-11T21:29:03.3457837Z ----------------------------------------------------------------------
2023-01-11T21:29:03.3458349Z Ran 8 tests in 0.273s
2023-01-11T21:29:03.3501037Z 
2023-01-11T21:29:03.3501360Z OK (skipped=4)
2023-01-11T21:29:03.3501560Z 
2023-01-11T21:29:03.3501723Z Generating XML reports...
2023-01-11T21:29:03.3504375Z Generated XML report: test-reports/python-unittest/dynamo.test_export_mutations/TEST-MutationExportTests-20230111212902.xml 2023-01-11T21:29:03.3504840Z 2023-01-11T21:29:03.3505366Z ##[endgroup] 2023-01-11T21:29:03.3506197Z FINISHED PRINTING LOG FILE of dynamo/test_export_mutations (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_export_mutations_u16fd47j) 2023-01-11T21:29:03.3506585Z 2023-01-11T21:29:05.3034573Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:29:05.3880994Z Ignoring disabled issues: [] 2023-01-11T21:29:05.4025802Z Running dynamo/test_functions ... [2023-01-11 21:29:05.402287] 2023-01-11T21:29:05.4027374Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_functions.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:29:05.402536] 2023-01-11T21:29:08.5404848Z 2023-01-11T21:29:08.5405593Z Expand the folded group to see the log file of dynamo/test_functions 2023-01-11T21:29:08.5406694Z ##[group]PRINTING LOG FILE of dynamo/test_functions (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_functions_lnsbaz_r) 2023-01-11T21:29:08.5407122Z 2023-01-11T21:29:08.5407269Z Running tests... 2023-01-11T21:29:08.5407958Z ---------------------------------------------------------------------- 2023-01-11T21:29:08.5408682Z Test results will be stored in test-reports/python-unittest/dynamo.test_functions 2023-01-11T21:29:08.5409047Z test_T (__main__.FunctionTests) ... ok (0.348s) 2023-01-11T21:29:08.5409457Z test_add (__main__.FunctionTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:29:08.5409852Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5410068Z ok (0.007s) 2023-01-11T21:29:08.5410535Z test_add_ (__main__.FunctionTests) ... /var/lib/jenkins/workspace/test/dynamo/test_functions.py:73: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:29:08.5410998Z a_copy = torch.tensor(a) 2023-01-11T21:29:08.5464450Z /opt/conda/lib/python3.10/site-packages/torch/_dynamo/utils.py:1052: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:29:08.5465598Z return node.target(*args, **kwargs) 2023-01-11T21:29:08.5466341Z .7:5: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:29:08.5466750Z tensor = torch.tensor(a); a = None 2023-01-11T21:29:08.5467095Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:29:08.5467328Z ok (0.016s) 2023-01-11T21:29:08.5467683Z test_addcdiv (__main__.FunctionTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:29:08.5467987Z ok (0.009s) 2023-01-11T21:29:08.5468736Z test_addcdiv_ (__main__.FunctionTests) ... /var/lib/jenkins/workspace/test/dynamo/test_functions.py:84: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 
2023-01-11T21:29:08.5469555Z a_copy = torch.tensor(a) 2023-01-11T21:29:08.5470620Z /opt/conda/lib/python3.10/site-packages/torch/_dynamo/utils.py:1052: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:29:08.5471160Z return node.target(*args, **kwargs) 2023-01-11T21:29:08.5472156Z .12:5: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:29:08.5472558Z tensor = torch.tensor(a); a = None 2023-01-11T21:29:08.5473081Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:29:08.5473481Z ok (0.016s) 2023-01-11T21:29:08.5473889Z test_build_list_unpack (__main__.FunctionTests) ... inline_call [] 2023-01-11T21:29:08.5474390Z stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:29:08.5474713Z ok (0.013s) 2023-01-11T21:29:08.5475077Z test_chunks1 (__main__.FunctionTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:29:08.5475325Z ok (0.008s) 2023-01-11T21:29:08.5475700Z test_const_tuple_add1 (__main__.FunctionTests) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:29:08.5475970Z ok (0.008s) 2023-01-11T21:29:08.5476331Z test_const_tuple_add2 (__main__.FunctionTests) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:29:08.5476598Z ok (0.008s) 2023-01-11T21:29:08.5477047Z test_constant1 (__main__.FunctionTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:29:08.5477415Z ok (0.007s) 2023-01-11T21:29:08.5477771Z test_constant2 (__main__.FunctionTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:29:08.5478066Z ok (0.007s) 2023-01-11T21:29:08.5478428Z test_constant3 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5478686Z ok (0.006s) 2023-01-11T21:29:08.5479031Z test_constant4 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5479287Z ok (0.005s) 2023-01-11T21:29:08.5479652Z test_default_dict (__main__.FunctionTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:29:08.5479916Z ok (0.010s) 2023-01-11T21:29:08.5480256Z test_del (__main__.FunctionTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:29:08.5480626Z ok (0.007s) 2023-01-11T21:29:08.5481439Z test_device (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5481752Z ok (0.005s) 2023-01-11T21:29:08.5482127Z test_device_constant (__main__.FunctionTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:29:08.5482394Z ok (0.011s) 2023-01-11T21:29:08.5482742Z test_dict_copy (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5483001Z ok (0.005s) 2023-01-11T21:29:08.5483365Z test_dict_ops (__main__.FunctionTests) ... 
stats [('calls_captured', 8), ('fusions_possible', 7), ('unique_graphs', 1)] 2023-01-11T21:29:08.5483621Z ok (0.014s) 2023-01-11T21:29:08.5483982Z test_dict_param_keys (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5484247Z ok (0.006s) 2023-01-11T21:29:08.5484682Z test_distributed_is_available (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5484954Z ok (0.005s) 2023-01-11T21:29:08.5485347Z test_distributed_is_initialized (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5485621Z ok (0.005s) 2023-01-11T21:29:08.5485980Z test_dtype (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5486225Z ok (0.005s) 2023-01-11T21:29:08.5486591Z test_dtype_compare (__main__.FunctionTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:29:08.5486851Z ok (0.007s) 2023-01-11T21:29:08.5487193Z test_finfo (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5487446Z ok (0.008s) 2023-01-11T21:29:08.5487807Z test_float (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5488057Z ok (0.006s) 2023-01-11T21:29:08.5488426Z test_fn_with_self_set (__main__.FunctionTests) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:29:08.5488691Z ok (0.010s) 2023-01-11T21:29:08.5489057Z test_fstrings1 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5489303Z ok (0.006s) 2023-01-11T21:29:08.5489661Z test_fstrings2 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5489921Z ok (0.006s) 2023-01-11T21:29:08.5490270Z test_fstrings3 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5490528Z ok (0.005s) 2023-01-11T21:29:08.5490749Z test_funcdef_closure (__main__.FunctionTests) ... inline_call [] 2023-01-11T21:29:08.5491104Z stats [('calls_captured', 10), ('fusions_possible', 9), ('unique_graphs', 1)] 2023-01-11T21:29:08.5491320Z ok (0.015s) 2023-01-11T21:29:08.5491696Z test_get_default_dtype (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5491960Z ok (0.005s) 2023-01-11T21:29:08.5492314Z test_globalfn (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5492574Z ok (0.006s) 2023-01-11T21:29:08.5492941Z test_globalmodule (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5493195Z ok (0.011s) 2023-01-11T21:29:08.5493554Z test_globalvar (__main__.FunctionTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:29:08.5493813Z ok (0.007s) 2023-01-11T21:29:08.5494168Z test_import1 (__main__.FunctionTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:29:08.5494458Z ok (0.007s) 2023-01-11T21:29:08.5494821Z test_indirect1 (__main__.FunctionTests) ... 
stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5495082Z ok (0.006s) 2023-01-11T21:29:08.5495434Z test_indirect2 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5495691Z ok (0.006s) 2023-01-11T21:29:08.5496047Z test_indirect3 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5496288Z ok (0.006s) 2023-01-11T21:29:08.5496518Z test_inline_jit_annotations (__main__.FunctionTests) ... inline_call [] 2023-01-11T21:29:08.5496880Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:29:08.5497107Z ok (0.008s) 2023-01-11T21:29:08.5497467Z test_inline_softmax (__main__.FunctionTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:29:08.5497772Z ok (0.011s) 2023-01-11T21:29:08.5497996Z test_inline_with_default (__main__.FunctionTests) ... inline_call [] 2023-01-11T21:29:08.5498482Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:29:08.5498707Z ok (0.008s) 2023-01-11T21:29:08.5498927Z test_inner_function (__main__.FunctionTests) ... inline_call [] 2023-01-11T21:29:08.5499265Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5499489Z ok (0.006s) 2023-01-11T21:29:08.5499883Z test_is_contiguous_memory_format (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5500160Z ok (0.007s) 2023-01-11T21:29:08.5500514Z test_is_fx_tracing (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5500776Z ok (0.006s) 2023-01-11T21:29:08.5501150Z test_is_in_onnx_export (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5501409Z ok (0.006s) 2023-01-11T21:29:08.5501774Z test_is_not_null (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5502031Z ok (0.006s) 2023-01-11T21:29:08.5504285Z test_is_quantized (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5504696Z ok (0.005s) 2023-01-11T21:29:08.5505275Z test_is_sparse (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5505648Z ok (0.005s) 2023-01-11T21:29:08.5506177Z test_islice_chain (__main__.FunctionTests) ... stats [('calls_captured', 6), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:29:08.5506573Z ok (0.012s) 2023-01-11T21:29:08.5507117Z test_jit_annotate (__main__.FunctionTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:29:08.5507513Z ok (0.007s) 2023-01-11T21:29:08.5508037Z test_len_constant_dict (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5508416Z ok (0.006s) 2023-01-11T21:29:08.5508957Z test_len_constant_list (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5509348Z ok (0.006s) 2023-01-11T21:29:08.5509937Z test_len_constant_misc_iterables (__main__.FunctionTests) ... 
stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5510341Z ok (0.006s) 2023-01-11T21:29:08.5510869Z test_len_tensor (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5511225Z ok (0.005s) 2023-01-11T21:29:08.5511839Z test_list_add (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5512419Z ok (0.006s) 2023-01-11T21:29:08.5512951Z test_list_clear (__main__.FunctionTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:29:08.5513329Z ok (0.008s) 2023-01-11T21:29:08.5513855Z test_list_convert (__main__.FunctionTests) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:29:08.5514221Z ok (0.009s) 2023-01-11T21:29:08.5514755Z test_list_reversed (__main__.FunctionTests) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:29:08.5515143Z ok (0.010s) 2023-01-11T21:29:08.5515684Z test_list_slice_assignment (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5516071Z ok (0.007s) 2023-01-11T21:29:08.5516606Z test_list_truth (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5516985Z ok (0.006s) 2023-01-11T21:29:08.5517655Z test_listarg1 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5518036Z ok (0.006s) 2023-01-11T21:29:08.5518566Z test_listarg2 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5518934Z ok (0.006s) 2023-01-11T21:29:08.5519512Z test_listarg3 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5519837Z ok (0.007s) 2023-01-11T21:29:08.5520351Z test_listarg4 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5520685Z ok (0.007s) 2023-01-11T21:29:08.5521191Z test_listarg5 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5521573Z ok (0.007s) 2023-01-11T21:29:08.5522143Z test_load_global_bool (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5522535Z ok (0.006s) 2023-01-11T21:29:08.5522854Z test_map_sum (__main__.FunctionTests) ... inline_call [] 2023-01-11T21:29:08.5523363Z stats [('calls_captured', 8), ('fusions_possible', 7), ('unique_graphs', 1)] 2023-01-11T21:29:08.5523695Z ok (0.015s) 2023-01-11T21:29:08.5523991Z test_methodcall1 (__main__.FunctionTests) ... inline_call [] 2023-01-11T21:29:08.5524470Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:29:08.5524786Z ok (0.009s) 2023-01-11T21:29:08.5525099Z test_methodcall2 (__main__.FunctionTests) ... inline_call [] 2023-01-11T21:29:08.5525597Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:29:08.5525919Z ok (0.008s) 2023-01-11T21:29:08.5526223Z test_methodcall3 (__main__.FunctionTests) ... 
inline_call [] 2023-01-11T21:29:08.5526717Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:29:08.5527070Z ok (0.008s) 2023-01-11T21:29:08.5527601Z test_min_max (__main__.FunctionTests) ... stats [('calls_captured', 11), ('fusions_possible', 10), ('unique_graphs', 1)] 2023-01-11T21:29:08.5527982Z ok (0.019s) 2023-01-11T21:29:08.5528525Z test_module_constant (__main__.FunctionTests) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:29:08.5528893Z ok (0.011s) 2023-01-11T21:29:08.5529358Z test_ndim (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5529709Z ok (0.007s) 2023-01-11T21:29:08.5530237Z test_pop (__main__.FunctionTests) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:29:08.5530607Z ok (0.012s) 2023-01-11T21:29:08.5531154Z test_range1 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5531548Z ok (0.007s) 2023-01-11T21:29:08.5532210Z test_range2 (__main__.FunctionTests) ... stats [('calls_captured', 13), ('fusions_possible', 12), ('unique_graphs', 1)] 2023-01-11T21:29:08.5532583Z ok (0.018s) 2023-01-11T21:29:08.5533119Z test_reduce (__main__.FunctionTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:29:08.5533498Z ok (0.010s) 2023-01-11T21:29:08.5534042Z test_return_dict (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5534427Z ok (0.007s) 2023-01-11T21:29:08.5534976Z test_return_dict2 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5535351Z ok (0.007s) 2023-01-11T21:29:08.5535896Z test_return_tuple1 (__main__.FunctionTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:29:08.5536278Z ok (0.007s) 2023-01-11T21:29:08.5536895Z test_return_tuple2 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5537280Z ok (0.007s) 2023-01-11T21:29:08.5537819Z test_shape1 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5538219Z ok (0.006s) 2023-01-11T21:29:08.5538777Z test_shape2 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5539171Z ok (0.006s) 2023-01-11T21:29:08.5539699Z test_slice1 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5540048Z ok (0.005s) 2023-01-11T21:29:08.5540574Z test_slice2 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5540948Z ok (0.005s) 2023-01-11T21:29:08.5541471Z test_slice3 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5541820Z ok (0.006s) 2023-01-11T21:29:08.5542530Z test_slice4 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5542900Z ok (0.005s) 2023-01-11T21:29:08.5543407Z test_slice5 (__main__.FunctionTests) ... 
stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5543764Z ok (0.006s) 2023-01-11T21:29:08.5544310Z test_slice6 (__main__.FunctionTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:29:08.5544690Z ok (0.007s) 2023-01-11T21:29:08.5545206Z test_startswith (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5545581Z ok (0.007s) 2023-01-11T21:29:08.5546107Z test_tensor_len (__main__.FunctionTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:29:08.5546463Z ok (0.008s) 2023-01-11T21:29:08.5547028Z test_tensor_new_with_shape (__main__.FunctionTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:29:08.5547405Z ok (0.018s) 2023-01-11T21:29:08.5547950Z test_tensor_new_with_size (__main__.FunctionTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:29:08.5548362Z ok (0.018s) 2023-01-11T21:29:08.5548915Z test_tensor_type (__main__.FunctionTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:29:08.5549307Z ok (0.040s) 2023-01-11T21:29:08.5549641Z test_tensor_type2 (__main__.FunctionTests) ... skip: requires cuda (0.000s) 2023-01-11T21:29:08.5550286Z test_transpose_for_scores (__main__.FunctionTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:29:08.5550676Z ok (0.007s) 2023-01-11T21:29:08.5551206Z test_tuple1 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5551875Z ok (0.006s) 2023-01-11T21:29:08.5552519Z test_tuple2 (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5552915Z ok (0.006s) 2023-01-11T21:29:08.5553420Z test_tuple_contains (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5553801Z ok (0.006s) 2023-01-11T21:29:08.5554350Z test_tuple_iadd (__main__.FunctionTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:29:08.5554729Z ok (0.007s) 2023-01-11T21:29:08.5555297Z test_unpack1 (__main__.FunctionTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:29:08.5555715Z ok (0.007s) 2023-01-11T21:29:08.5556279Z test_unpack2 (__main__.FunctionTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:29:08.5556692Z ok (0.007s) 2023-01-11T21:29:08.5557407Z test_unpack3 (__main__.FunctionTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:29:08.5557802Z ok (0.007s) 2023-01-11T21:29:08.5558327Z test_unpack_ex1 (__main__.FunctionTests) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:29:08.5558708Z ok (0.009s) 2023-01-11T21:29:08.5559237Z test_unpack_ex2 (__main__.FunctionTests) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:29:08.5559615Z ok (0.009s) 2023-01-11T21:29:08.5560140Z test_unpack_ex3 (__main__.FunctionTests) ... 
stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:29:08.5560537Z ok (0.009s) 2023-01-11T21:29:08.5561082Z test_viamethod (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5561456Z ok (0.005s) 2023-01-11T21:29:08.5562001Z test_viatorch (__main__.FunctionTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:08.5562385Z ok (0.005s) 2023-01-11T21:29:08.5562550Z 2023-01-11T21:29:08.5562836Z ---------------------------------------------------------------------- 2023-01-11T21:29:08.5563173Z Ran 109 tests in 1.228s 2023-01-11T21:29:08.5563350Z 2023-01-11T21:29:08.5563453Z OK (skipped=1) 2023-01-11T21:29:08.5563606Z 2023-01-11T21:29:08.5563735Z Generating XML reports... 2023-01-11T21:29:08.5564347Z Generated XML report: test-reports/python-unittest/dynamo.test_functions/TEST-FunctionTests-20230111212906.xml 2023-01-11T21:29:08.5564646Z 2023-01-11T21:29:08.5565174Z ##[endgroup] 2023-01-11T21:29:08.5565788Z FINISHED PRINTING LOG FILE of dynamo/test_functions (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_functions_lnsbaz_r) 2023-01-11T21:29:08.5566128Z 2023-01-11T21:29:10.4886553Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:29:10.5708567Z Ignoring disabled issues: [] 2023-01-11T21:29:10.5854089Z Running dynamo/test_global ... [2023-01-11 21:29:10.585118] 2023-01-11T21:29:10.5856179Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_global.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:29:10.585365] 2023-01-11T21:29:12.8665219Z 2023-01-11T21:29:12.8665780Z Expand the folded group to see the log file of dynamo/test_global 2023-01-11T21:29:12.8666853Z ##[group]PRINTING LOG FILE of dynamo/test_global (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_global_sbipu6f9) 2023-01-11T21:29:12.8667221Z 2023-01-11T21:29:12.8667316Z Running tests... 2023-01-11T21:29:12.8667968Z ---------------------------------------------------------------------- 2023-01-11T21:29:12.8668688Z Test results will be stored in test-reports/python-unittest/dynamo.test_global 2023-01-11T21:29:12.8669221Z test_store_global_1 (__main__.TestGlobals) ... ok (0.333s) 2023-01-11T21:29:12.8669854Z test_store_global_2 (__main__.TestGlobals) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:29:12.8670655Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:12.8671169Z frames [('total', 2), ('ok', 2)] 2023-01-11T21:29:12.8671785Z stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:29:12.8672167Z ok (0.010s) 2023-01-11T21:29:12.8672603Z test_store_global_cross_file (__main__.TestGlobals) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:29:12.8672837Z unimplemented [] 2023-01-11T21:29:12.8673250Z graph_break [('call_function BuiltinVariable(setattr) [PythonModuleVariable(), ConstantVariable(str), TensorVariable()] {}', 1)] 2023-01-11T21:29:12.8673675Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:29:12.8673887Z ok (0.008s) 2023-01-11T21:29:12.8674187Z test_store_global_dict (__main__.TestGlobals) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:29:12.8674541Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:12.8674854Z ok (0.005s) 2023-01-11T21:29:12.8675149Z test_store_global_dict_2 (__main__.TestGlobals) ... 
frames [('total', 1), ('ok', 1)] 2023-01-11T21:29:12.8675386Z inline_call [] 2023-01-11T21:29:12.8675686Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:12.8675900Z ok (0.008s) 2023-01-11T21:29:12.8676208Z test_store_global_inline_1 (__main__.TestGlobals) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:29:12.8676444Z inline_call [] 2023-01-11T21:29:12.8676727Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:12.8676953Z ok (0.009s) 2023-01-11T21:29:12.8677258Z test_store_global_inline_2 (__main__.TestGlobals) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:29:12.8677479Z inline_call [] 2023-01-11T21:29:12.8677770Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:12.8678015Z ok (0.010s) 2023-01-11T21:29:12.8678321Z test_store_global_list (__main__.TestGlobals) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:29:12.8678667Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:12.8678888Z ok (0.005s) 2023-01-11T21:29:12.8679191Z test_store_global_list_2 (__main__.TestGlobals) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:29:12.8679406Z inline_call [] 2023-01-11T21:29:12.8679698Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:12.8679924Z ok (0.007s) 2023-01-11T21:29:12.8680214Z test_store_global_new (__main__.TestGlobals) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:29:12.8680569Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:29:12.8680790Z ok (0.005s) 2023-01-11T21:29:12.8681091Z test_store_global_object (__main__.TestGlobals) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:29:12.8681436Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:29:12.8681663Z ok (0.005s) 2023-01-11T21:29:12.8681763Z 2023-01-11T21:29:12.8681962Z ---------------------------------------------------------------------- 2023-01-11T21:29:12.8682192Z Ran 11 tests in 0.406s 2023-01-11T21:29:12.8682304Z 2023-01-11T21:29:12.8682364Z OK 2023-01-11T21:29:12.8682454Z 2023-01-11T21:29:12.8682539Z Generating XML reports... 2023-01-11T21:29:12.8682935Z Generated XML report: test-reports/python-unittest/dynamo.test_global/TEST-TestGlobals-20230111212912.xml 2023-01-11T21:29:12.8683164Z 2023-01-11T21:29:12.8683442Z ##[endgroup] 2023-01-11T21:29:12.8683844Z FINISHED PRINTING LOG FILE of dynamo/test_global (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_global_sbipu6f9) 2023-01-11T21:29:12.8684067Z 2023-01-11T21:29:14.8543199Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:29:14.9451715Z Ignoring disabled issues: [] 2023-01-11T21:29:14.9597535Z Running dynamo/test_global_declaration ... [2023-01-11 21:29:14.959432] 2023-01-11T21:29:14.9599305Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_global_declaration.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... 
[2023-01-11 21:29:14.959693] 2023-01-11T21:29:16.4133400Z 2023-01-11T21:29:16.4134206Z Expand the folded group to see the log file of dynamo/test_global_declaration 2023-01-11T21:29:16.4134966Z ##[group]PRINTING LOG FILE of dynamo/test_global_declaration (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_global_declaration_mfubw6qh) 2023-01-11T21:29:16.4135277Z 2023-01-11T21:29:16.4135505Z ##[endgroup] 2023-01-11T21:29:16.4136103Z FINISHED PRINTING LOG FILE of dynamo/test_global_declaration (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_global_declaration_mfubw6qh) 2023-01-11T21:29:16.4136358Z 2023-01-11T21:29:18.3589194Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:29:18.4244923Z Ignoring disabled issues: [] 2023-01-11T21:29:18.4395137Z Running dynamo/test_minifier ... [2023-01-11 21:29:18.439150] 2023-01-11T21:29:18.4396937Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_minifier.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:29:18.439450] 2023-01-11T21:30:01.7141150Z 2023-01-11T21:30:01.7141600Z Expand the folded group to see the log file of dynamo/test_minifier 2023-01-11T21:30:01.7142653Z ##[group]PRINTING LOG FILE of dynamo/test_minifier (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_minifier_m_7uzj__) 2023-01-11T21:30:01.7144470Z 2023-01-11T21:30:01.7145848Z Running tests... 2023-01-11T21:30:01.7146373Z ---------------------------------------------------------------------- 2023-01-11T21:30:01.7146789Z Test results will be stored in test-reports/python-unittest/dynamo.test_minifier 2023-01-11T21:30:01.7147241Z test_after_dynamo_cpu_accuracy_backend_passes (__main__.MinifierTests) ... ok (1.935s) 2023-01-11T21:30:01.7147837Z test_after_dynamo_cpu_accuracy_error (__main__.MinifierTests) ... ok (5.241s) 2023-01-11T21:30:01.7148238Z test_after_dynamo_cpu_compile_backend_passes (__main__.MinifierTests) ... ok (1.687s) 2023-01-11T21:30:01.7148568Z test_after_dynamo_cpu_compile_error (__main__.MinifierTests) ... ok (5.160s) 2023-01-11T21:30:01.7152798Z test_after_dynamo_cpu_runtime_backend_passes (__main__.MinifierTests) ... ok (1.655s) 2023-01-11T21:30:01.7153437Z test_after_dynamo_cpu_runtime_error (__main__.MinifierTests) ... ok (5.125s) 2023-01-11T21:30:01.7154015Z test_after_dynamo_cuda_accuracy_backend_passes (__main__.MinifierTests) ... skip: requires cuda (0.000s) 2023-01-11T21:30:01.7154656Z test_after_dynamo_cuda_accuracy_error (__main__.MinifierTests) ... skip: requires cuda (0.000s) 2023-01-11T21:30:01.7155246Z test_after_dynamo_cuda_compile_backend_passes (__main__.MinifierTests) ... skip: requires cuda (0.000s) 2023-01-11T21:30:01.7155862Z test_after_dynamo_cuda_compile_error (__main__.MinifierTests) ... skip: requires cuda (0.000s) 2023-01-11T21:30:01.7156469Z test_after_dynamo_cuda_runtime_backend_passes (__main__.MinifierTests) ... skip: requires cuda (0.000s) 2023-01-11T21:30:01.7224321Z test_after_dynamo_cuda_runtime_error (__main__.MinifierTests) ... skip: requires cuda (0.000s) 2023-01-11T21:30:01.7225139Z test_after_dynamo_custom_backend (__main__.MinifierTests) ... ok (3.316s) 2023-01-11T21:30:01.7225884Z test_after_dynamo_with_modified_config_cpu_accuracy_error (__main__.MinifierTests) ... ok (5.271s) 2023-01-11T21:30:01.7226663Z test_after_dynamo_with_modified_config_cpu_compile_error (__main__.MinifierTests) ... ok (5.134s) 2023-01-11T21:30:01.7227423Z test_cpu_cuda_module_after_dynamo (__main__.MinifierTests) ... 
skip: requires cuda (0.001s) 2023-01-11T21:30:01.7228112Z test_dynamo_config_serialization (__main__.MinifierTests) ... ok (1.568s) 2023-01-11T21:30:01.7228786Z test_if_graph_minified (__main__.MinifierTests) ... ok (5.261s) 2023-01-11T21:30:01.7229167Z 2023-01-11T21:30:01.7229710Z ---------------------------------------------------------------------- 2023-01-11T21:30:01.7230278Z Ran 18 tests in 41.360s 2023-01-11T21:30:01.7230792Z 2023-01-11T21:30:01.7230967Z OK (skipped=7) 2023-01-11T21:30:01.7231221Z 2023-01-11T21:30:01.7231426Z Generating XML reports... 2023-01-11T21:30:01.7232522Z Generated XML report: test-reports/python-unittest/dynamo.test_minifier/TEST-MinifierTests-20230111212919.xml 2023-01-11T21:30:01.7233082Z 2023-01-11T21:30:01.7233763Z ##[endgroup] 2023-01-11T21:30:01.7234740Z FINISHED PRINTING LOG FILE of dynamo/test_minifier (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_minifier_m_7uzj__) 2023-01-11T21:30:01.7235304Z 2023-01-11T21:30:03.6771632Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:30:03.7638242Z Ignoring disabled issues: [] 2023-01-11T21:30:03.7783119Z Running dynamo/test_model_output ... [2023-01-11 21:30:03.777987] 2023-01-11T21:30:03.7785627Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_model_output.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:30:03.778253] 2023-01-11T21:30:05.5479500Z 2023-01-11T21:30:05.5480364Z Expand the folded group to see the log file of dynamo/test_model_output 2023-01-11T21:30:05.5481343Z ##[group]PRINTING LOG FILE of dynamo/test_model_output (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_model_output_3tbp2nt7) 2023-01-11T21:30:05.5481595Z 2023-01-11T21:30:05.5481668Z Running tests... 2023-01-11T21:30:05.5482088Z ---------------------------------------------------------------------- 2023-01-11T21:30:05.5482476Z Test results will be stored in test-reports/python-unittest/dynamo.test_model_output 2023-01-11T21:30:05.5482902Z test_pretrained (__main__.TestHFPretrained) ... skip: requires HuggingFace (0.000s) 2023-01-11T21:30:05.5483436Z test_mo_assign (__main__.TestModelOutput) ... skip: requires HuggingFace (0.001s) 2023-01-11T21:30:05.5483950Z test_mo_create (__main__.TestModelOutput) ... skip: requires HuggingFace (0.000s) 2023-01-11T21:30:05.5484517Z test_mo_getattr (__main__.TestModelOutput) ... skip: requires HuggingFace (0.000s) 2023-01-11T21:30:05.5485118Z test_mo_getitem (__main__.TestModelOutput) ... skip: requires HuggingFace (0.000s) 2023-01-11T21:30:05.5485703Z test_mo_index (__main__.TestModelOutput) ... skip: requires HuggingFace (0.000s) 2023-01-11T21:30:05.5486261Z test_mo_init (__main__.TestModelOutput) ... skip: requires HuggingFace (0.001s) 2023-01-11T21:30:05.5486819Z test_mo_tuple (__main__.TestModelOutput) ... skip: requires HuggingFace (0.000s) 2023-01-11T21:30:05.5486995Z 2023-01-11T21:30:05.5487197Z ---------------------------------------------------------------------- 2023-01-11T21:30:05.5487439Z Ran 8 tests in 0.003s 2023-01-11T21:30:05.5487551Z 2023-01-11T21:30:05.5487609Z OK (skipped=8) 2023-01-11T21:30:05.5487715Z 2023-01-11T21:30:05.5487798Z Generating XML reports... 
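Note: many of the results above and below are conditional skips ("skip: requires cuda", "skip: requires HuggingFace", "skip: requires dill") rather than failures; the suites gate tests on whether a CUDA device or an optional dependency is present. A minimal sketch of that guard pattern, using only standard unittest and torch APIs and hypothetical test names; it is illustrative of the skip behaviour seen in the log, not the exact decorators used in these files:

    import importlib.util
    import unittest

    import torch

    # Assumed availability checks; the real test files may compute these differently.
    HAS_CUDA = torch.cuda.is_available()
    HAS_DILL = importlib.util.find_spec("dill") is not None

    class ExampleTests(unittest.TestCase):
        @unittest.skipIf(not HAS_CUDA, "requires cuda")
        def test_needs_gpu(self):
            self.assertTrue(torch.randn(2, device="cuda").is_cuda)

        @unittest.skipIf(not HAS_DILL, "requires dill")
        def test_needs_dill(self):
            import dill  # only imported when the skip condition is false
            self.assertTrue(callable(dill.dumps))

    if __name__ == "__main__":
        unittest.main()

When the condition is false, the verbose runner reports the test as "skip: <reason>" with a near-zero duration, matching the 0.000s/0.001s timings printed above.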
2023-01-11T21:30:05.5488234Z Generated XML report: test-reports/python-unittest/dynamo.test_model_output/TEST-TestHFPretrained-20230111213005.xml 2023-01-11T21:30:05.5488776Z Generated XML report: test-reports/python-unittest/dynamo.test_model_output/TEST-TestModelOutput-20230111213005.xml 2023-01-11T21:30:05.5489020Z 2023-01-11T21:30:05.5489235Z ##[endgroup] 2023-01-11T21:30:05.5489650Z FINISHED PRINTING LOG FILE of dynamo/test_model_output (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_model_output_3tbp2nt7) 2023-01-11T21:30:05.5489882Z 2023-01-11T21:30:07.4863529Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:30:07.5772889Z Ignoring disabled issues: [] 2023-01-11T21:30:07.5921944Z Running dynamo/test_modules ... [2023-01-11 21:30:07.591827] 2023-01-11T21:30:07.5923522Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_modules.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:30:07.592134] 2023-01-11T21:30:11.1065156Z 2023-01-11T21:30:11.1065872Z Expand the folded group to see the log file of dynamo/test_modules 2023-01-11T21:30:11.1066851Z ##[group]PRINTING LOG FILE of dynamo/test_modules (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_modules_bv9g5cs9) 2023-01-11T21:30:11.1067364Z 2023-01-11T21:30:11.1067447Z Running tests... 2023-01-11T21:30:11.1067967Z ---------------------------------------------------------------------- 2023-01-11T21:30:11.1068467Z Test results will be stored in test-reports/python-unittest/dynamo.test_modules 2023-01-11T21:30:11.1068825Z test_access_by_keys (__main__.NNModuleTests) ... ok (0.360s) 2023-01-11T21:30:11.1069085Z test_basicmodule1 (__main__.NNModuleTests) ... inline_call [] 2023-01-11T21:30:11.1069501Z stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:30:11.1069867Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:30:11.1070087Z ok (0.016s) 2023-01-11T21:30:11.1070502Z test_basicmodule2 (__main__.NNModuleTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:30:11.1070785Z ok (0.013s) 2023-01-11T21:30:11.1071080Z test_call_fn_with_non_const_inputs_safe (__main__.NNModuleTests) ... inline_call [] 2023-01-11T21:30:11.1071606Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:30:11.1071900Z ok (0.023s) 2023-01-11T21:30:11.1072411Z test_cfgmod (__main__.NNModuleTests) ... stats [('calls_captured', 6), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:30:11.1072879Z ok (0.028s) 2023-01-11T21:30:11.1073442Z test_children (__main__.NNModuleTests) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:30:11.1073892Z ok (0.019s) 2023-01-11T21:30:11.1074559Z test_constloop (__main__.NNModuleTests) ... stats [('calls_captured', 6), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:30:11.1074866Z ok (0.026s) 2023-01-11T21:30:11.1075085Z test_densenet (__main__.NNModuleTests) ... inline_call [] 2023-01-11T21:30:11.1075444Z stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:30:11.1075724Z ok (0.017s) 2023-01-11T21:30:11.1075929Z test_enumvalues (__main__.NNModuleTests) ... inline_call [] 2023-01-11T21:30:11.1076326Z stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:30:11.1076686Z ok (0.016s) 2023-01-11T21:30:11.1077090Z test_fnmember (__main__.NNModuleTests) ... 
stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:30:11.1077376Z ok (0.012s) 2023-01-11T21:30:11.1077746Z test_fnmembercmp1 (__main__.NNModuleTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:30:11.1078060Z ok (0.012s) 2023-01-11T21:30:11.1078431Z test_fnmembercmp2 (__main__.NNModuleTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:30:11.1078740Z ok (0.012s) 2023-01-11T21:30:11.1078963Z test_forward_directly (__main__.NNModuleTests) ... inline_call [] 2023-01-11T21:30:11.1079307Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:30:11.1079587Z ok (0.020s) 2023-01-11T21:30:11.1079817Z test_generation_tag (__main__.NNModuleTests) ... ok (0.002s) 2023-01-11T21:30:11.1080270Z test_hasattr (__main__.NNModuleTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:30:11.1080535Z ok (0.009s) 2023-01-11T21:30:11.1080923Z test_intarg (__main__.NNModuleTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:30:11.1081184Z ok (0.013s) 2023-01-11T21:30:11.1081544Z test_iseval1 (__main__.NNModuleTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:30:11.1081855Z ok (0.012s) 2023-01-11T21:30:11.1082216Z test_iseval2 (__main__.NNModuleTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:30:11.1082522Z ok (0.012s) 2023-01-11T21:30:11.1082892Z test_isnonelayer (__main__.NNModuleTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:30:11.1083172Z ok (0.010s) 2023-01-11T21:30:11.1083630Z test_istraining1 (__main__.NNModuleTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:30:11.1083901Z ok (0.012s) 2023-01-11T21:30:11.1084312Z test_istraining2 (__main__.NNModuleTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:30:11.1084573Z ok (0.012s) 2023-01-11T21:30:11.1084977Z test_layerlist (__main__.NNModuleTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:30:11.1085239Z ok (0.016s) 2023-01-11T21:30:11.1085880Z test_lazy_module (__main__.NNModuleTests) ... /opt/conda/lib/python3.10/site-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment. 2023-01-11T21:30:11.1086466Z warnings.warn('Lazy modules are a new feature under heavy development ' 2023-01-11T21:30:11.1087228Z [2023-01-11 21:30:09,810] torch._dynamo.symbolic_convert: [WARNING] /opt/conda/lib/python3.10/site-packages/torch/nn/parameter.py [ShapeVariable()] {} missing a required argument: 'shape' 2023-01-11T21:30:11.1088032Z /opt/conda/lib/python3.10/site-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment. 2023-01-11T21:30:11.1088574Z warnings.warn('Lazy modules are a new feature under heavy development ' 2023-01-11T21:30:11.1089147Z /opt/conda/lib/python3.10/site-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment. 
2023-01-11T21:30:11.1089679Z warnings.warn('Lazy modules are a new feature under heavy development ' 2023-01-11T21:30:11.1090403Z [2023-01-11 21:30:09,882] torch._dynamo.symbolic_convert: [WARNING] /opt/conda/lib/python3.10/site-packages/torch/nn/parameter.py [ShapeVariable()] {} missing a required argument: 'shape' 2023-01-11T21:30:11.1091205Z /opt/conda/lib/python3.10/site-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment. 2023-01-11T21:30:11.1091746Z warnings.warn('Lazy modules are a new feature under heavy development ' 2023-01-11T21:30:11.1092034Z frames [('total', 16), ('ok', 14)] 2023-01-11T21:30:11.1106416Z inline_call [('Patched init cannot be inlined.', 3), ('arg mismatch inlining', 2), ('call_function UserDefinedObjectVariable(_infer_parameters) [NNModuleVariable(), TupleVariable()] {}', 1), ('call_function UserDefinedObjectVariable(_infer_parameters) [UnspecializedNNModuleVariable(LazyModule), TupleVariable()] {}', 1)] 2023-01-11T21:30:11.1107560Z unimplemented [("Guard setup for uninitialized class ", 2)] 2023-01-11T21:30:11.1108249Z graph_break [('Patched init cannot be inlined.', 3), ('arg mismatch inlining', 2)] 2023-01-11T21:30:11.1108923Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:30:11.1109336Z ok (0.206s) 2023-01-11T21:30:11.1109764Z test_module_attribute_precedence (__main__.NNModuleTests) ... inline_call [] 2023-01-11T21:30:11.1110382Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:30:11.1110753Z ok (0.011s) 2023-01-11T21:30:11.1111177Z test_module_class_method (__main__.NNModuleTests) ... inline_call [] 2023-01-11T21:30:11.1111910Z stats [('calls_captured', 9), ('fusions_possible', 8), ('unique_graphs', 1)] 2023-01-11T21:30:11.1112290Z ok (0.030s) 2023-01-11T21:30:11.1112853Z test_module_forward_has_graph_break (__main__.NNModuleTests) ... frames [('total', 3), ('ok', 2)] 2023-01-11T21:30:11.1113297Z inline_call [] 2023-01-11T21:30:11.1113776Z unimplemented [('reconstruct: ConstantVariable(dict)', 2)] 2023-01-11T21:30:11.1114720Z graph_break [('call_function BuiltinVariable(dict) [ListIteratorVariable()] {}', 1), ('call_method NNModuleVariable() buffers [] {}', 1)] 2023-01-11T21:30:11.1115448Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:30:11.1115855Z ok (0.077s) 2023-01-11T21:30:11.1116273Z test_module_name_string (__main__.NNModuleTests) ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:30:11.1116544Z ok (0.014s) 2023-01-11T21:30:11.1116755Z test_module_property (__main__.NNModuleTests) ... inline_call [] 2023-01-11T21:30:11.1117106Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:30:11.1117331Z ok (0.007s) 2023-01-11T21:30:11.1117545Z test_module_static_method (__main__.NNModuleTests) ... inline_call [] 2023-01-11T21:30:11.1117942Z stats [('calls_captured', 9), ('fusions_possible', 8), ('unique_graphs', 1)] 2023-01-11T21:30:11.1118164Z ok (0.029s) 2023-01-11T21:30:11.1118986Z test_moduledict (__main__.NNModuleTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:30:11.1119419Z ok (0.010s) 2023-01-11T21:30:11.1119777Z test_modulelist (__main__.NNModuleTests) ... 
stats [('calls_captured', 40), ('fusions_possible', 39), ('unique_graphs', 1)] 2023-01-11T21:30:11.1120041Z ok (0.106s) 2023-01-11T21:30:11.1120263Z test_modulemethod1 (__main__.NNModuleTests) ... inline_call [] 2023-01-11T21:30:11.1120597Z stats [('calls_captured', 9), ('fusions_possible', 8), ('unique_graphs', 1)] 2023-01-11T21:30:11.1120819Z ok (0.030s) 2023-01-11T21:30:11.1121038Z test_modulemethod2 (__main__.NNModuleTests) ... inline_call [] 2023-01-11T21:30:11.1121369Z stats [('calls_captured', 9), ('fusions_possible', 8), ('unique_graphs', 1)] 2023-01-11T21:30:11.1121590Z ok (0.030s) 2023-01-11T21:30:11.1121978Z test_nn_moduledict_contains (__main__.NNModuleTests) ... stats [('calls_captured', 4), ('unique_graphs', 3), ('fusions_possible', 1)] 2023-01-11T21:30:11.1122314Z frames [('total', 2), ('ok', 1)] 2023-01-11T21:30:11.1122581Z inline_call [('Patched init cannot be inlined.', 1)] 2023-01-11T21:30:11.1123019Z unimplemented [("Guard setup for uninitialized class .M'>", 1)] 2023-01-11T21:30:11.1123401Z graph_break [('Patched init cannot be inlined.', 1)] 2023-01-11T21:30:11.1123593Z ok (0.020s) 2023-01-11T21:30:11.1123811Z test_parameters1 (__main__.NNModuleTests) ... inline_call [] 2023-01-11T21:30:11.1124157Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:30:11.1124368Z ok (0.010s) 2023-01-11T21:30:11.1124584Z test_parameters2 (__main__.NNModuleTests) ... inline_call [] 2023-01-11T21:30:11.1124922Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:30:11.1125141Z ok (0.009s) 2023-01-11T21:30:11.1125499Z test_parameters3 (__main__.NNModuleTests) ... stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:30:11.1125767Z ok (0.024s) 2023-01-11T21:30:11.1126133Z test_self_mutating1 (__main__.NNModuleTests) ... stats [('calls_captured', 9), ('fusions_possible', 6), ('unique_graphs', 3)] 2023-01-11T21:30:11.1126386Z ok (0.039s) 2023-01-11T21:30:11.1126733Z test_seq (__main__.NNModuleTests) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:30:11.1126986Z ok (0.018s) 2023-01-11T21:30:11.1127199Z test_simple_torch_function (__main__.NNModuleTests) ... inline_call [] 2023-01-11T21:30:11.1127552Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:30:11.1127772Z ok (0.013s) 2023-01-11T21:30:11.1128138Z test_stringmember (__main__.NNModuleTests) ... stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:30:11.1128404Z ok (0.013s) 2023-01-11T21:30:11.1128669Z test_submodules1 (__main__.NNModuleTests) ... inline_call [] 2023-01-11T21:30:11.1129016Z stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:30:11.1129287Z ok (0.026s) 2023-01-11T21:30:11.1129507Z test_submodules2 (__main__.NNModuleTests) ... inline_call [] 2023-01-11T21:30:11.1129846Z stats [('calls_captured', 7), ('fusions_possible', 6), ('unique_graphs', 1)] 2023-01-11T21:30:11.1130052Z ok (0.026s) 2023-01-11T21:30:11.1130262Z test_super1 (__main__.NNModuleTests) ... inline_call [] 2023-01-11T21:30:11.1130598Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:30:11.1130803Z ok (0.016s) 2023-01-11T21:30:11.1131008Z test_super2 (__main__.NNModuleTests) ... 
inline_call [] 2023-01-11T21:30:11.1131337Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:30:11.1131554Z ok (0.015s) 2023-01-11T21:30:11.1131767Z test_super_class_method (__main__.NNModuleTests) ... inline_call [] 2023-01-11T21:30:11.1132115Z stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:30:11.1132376Z ok (0.008s) 2023-01-11T21:30:11.1132726Z test_tensorlist (__main__.NNModuleTests) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:30:11.1132989Z ok (0.011s) 2023-01-11T21:30:11.1133376Z test_torch_function_with_closure (__main__.NNModuleTests) ... stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:30:11.1133639Z ok (0.009s) 2023-01-11T21:30:11.1133948Z test_unsupportedmethod (__main__.NNModuleTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:11.1134396Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:11.1134689Z unimplemented [] 2023-01-11T21:30:11.1135053Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:30:11.1135463Z stats [('calls_captured', 5), ('fusions_possible', 3), ('unique_graphs', 2)] 2023-01-11T21:30:11.1135689Z ok (0.029s) 2023-01-11T21:30:11.1135990Z test_unsupportedmodule (__main__.NNModuleTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:11.1136433Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:11.1136720Z unimplemented [] 2023-01-11T21:30:11.1137083Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:30:11.1137489Z stats [('calls_captured', 6), ('fusions_possible', 3), ('unique_graphs', 3)] 2023-01-11T21:30:11.1137711Z ok (0.031s) 2023-01-11T21:30:11.1137931Z test_viamodulecall (__main__.NNModuleTests) ... inline_call [] 2023-01-11T21:30:11.1138267Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:30:11.1138486Z ok (0.015s) 2023-01-11T21:30:11.1138699Z test_attr (__main__.OptimizedModuleTest) ... ok (0.002s) 2023-01-11T21:30:11.1139051Z test_composition (__main__.OptimizedModuleTest) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:30:11.1139293Z inline_call [] 2023-01-11T21:30:11.1139587Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:30:11.1139798Z ok (0.009s) 2023-01-11T21:30:11.1140132Z test_composition_with_opt_mod (__main__.OptimizedModuleTest) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:30:11.1140511Z stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:30:11.1140943Z inline_call [('inline in skipfiles: forward /opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py', 1)] 2023-01-11T21:30:11.1141213Z unimplemented [] 2023-01-11T21:30:11.1141591Z graph_break [('inline in skipfiles: forward /opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py', 1)] 2023-01-11T21:30:11.1141858Z ok (0.010s) 2023-01-11T21:30:11.1142156Z test_nn_module (__main__.OptimizedModuleTest) ... 
frames [('total', 1), ('ok', 1)] 2023-01-11T21:30:11.1142715Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:30:11.1143020Z ok (0.012s) 2023-01-11T21:30:11.1143312Z test_recursion (__main__.OptimizedModuleTest) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:30:11.1143674Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:30:11.1143900Z ok (0.013s) 2023-01-11T21:30:11.1144193Z test_to (__main__.OptimizedModuleTest) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:11.1144535Z stats [('calls_captured', 9), ('fusions_possible', 6), ('unique_graphs', 3)] 2023-01-11T21:30:11.1144755Z ok (0.033s) 2023-01-11T21:30:11.1144856Z 2023-01-11T21:30:11.1145058Z ---------------------------------------------------------------------- 2023-01-11T21:30:11.1145289Z Ran 57 tests in 1.634s 2023-01-11T21:30:11.1145399Z 2023-01-11T21:30:11.1145463Z OK 2023-01-11T21:30:11.1145553Z 2023-01-11T21:30:11.1145638Z Generating XML reports... 2023-01-11T21:30:11.1146087Z Generated XML report: test-reports/python-unittest/dynamo.test_modules/TEST-NNModuleTests-20230111213009.xml 2023-01-11T21:30:11.1146634Z Generated XML report: test-reports/python-unittest/dynamo.test_modules/TEST-OptimizedModuleTest-20230111213009.xml 2023-01-11T21:30:11.1146880Z 2023-01-11T21:30:11.1147213Z ##[endgroup] 2023-01-11T21:30:11.1147611Z FINISHED PRINTING LOG FILE of dynamo/test_modules (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_modules_bv9g5cs9) 2023-01-11T21:30:11.1147822Z 2023-01-11T21:30:13.0267209Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:30:13.0939208Z Ignoring disabled issues: [] 2023-01-11T21:30:13.1087528Z Running dynamo/test_nops ... [2023-01-11 21:30:13.108388] 2023-01-11T21:30:13.1088368Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_nops.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:30:13.108638] 2023-01-11T21:30:15.2979648Z 2023-01-11T21:30:15.2980225Z Expand the folded group to see the log file of dynamo/test_nops 2023-01-11T21:30:15.2981220Z ##[group]PRINTING LOG FILE of dynamo/test_nops (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_nops_z393rkaq) 2023-01-11T21:30:15.2981469Z 2023-01-11T21:30:15.2981532Z Running tests... 2023-01-11T21:30:15.2981947Z ---------------------------------------------------------------------- 2023-01-11T21:30:15.2982558Z Test results will be stored in test-reports/python-unittest/dynamo.test_nops 2023-01-11T21:30:15.2982829Z test1 (__main__.NopTests) ... ok (0.241s) 2023-01-11T21:30:15.2983048Z test2 (__main__.NopTests) ... ok (0.002s) 2023-01-11T21:30:15.2983265Z test3 (__main__.NopTests) ... ok (0.002s) 2023-01-11T21:30:15.2983498Z test_extended_args (__main__.NopTests) ... ok (0.032s) 2023-01-11T21:30:15.2983629Z 2023-01-11T21:30:15.2983831Z ---------------------------------------------------------------------- 2023-01-11T21:30:15.2984073Z Ran 4 tests in 0.277s 2023-01-11T21:30:15.2984185Z 2023-01-11T21:30:15.2984244Z OK 2023-01-11T21:30:15.2984334Z 2023-01-11T21:30:15.2984405Z Generating XML reports... 
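Note: each suite in this shard is launched as a separate interpreter, as the "Executing [...]" lines show (python -bb <file> -v --import-slow-tests --import-disabled-tests, run from the workspace's test/ directory). A minimal sketch of reproducing one such invocation locally with subprocess, assuming the repository's test/ directory exists relative to the working directory and substituting sys.executable for the runner's /opt/conda/bin/python:

    import subprocess
    import sys

    # argv mirrors the "Executing [...]" lines in the log above;
    # the interpreter path is swapped for the local one.
    cmd = [
        sys.executable, "-bb", "dynamo/test_nops.py", "-v",
        "--import-slow-tests", "--import-disabled-tests",
    ]

    # Run from the repository's test/ directory (the CI runner uses
    # /var/lib/jenkins/workspace/test); check=True raises on a non-zero exit.
    subprocess.run(cmd, cwd="test", check=True)

The -v flag produces the per-test "ok/skip" lines seen in the folded groups, and the XML reports land under test-reports/python-unittest/, as in the "Generated XML report" lines.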
2023-01-11T21:30:15.2984813Z Generated XML report: test-reports/python-unittest/dynamo.test_nops/TEST-NopTests-20230111213014.xml 2023-01-11T21:30:15.2985033Z 2023-01-11T21:30:15.2985267Z ##[endgroup] 2023-01-11T21:30:15.2985646Z FINISHED PRINTING LOG FILE of dynamo/test_nops (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_nops_z393rkaq) 2023-01-11T21:30:15.2985862Z 2023-01-11T21:30:17.2531390Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:30:17.3411631Z Ignoring disabled issues: [] 2023-01-11T21:30:17.3561093Z Running dynamo/test_optimizers ... [2023-01-11 21:30:17.355766] 2023-01-11T21:30:17.3562807Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_optimizers.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:30:17.356057] 2023-01-11T21:30:20.9131824Z 2023-01-11T21:30:20.9132405Z Expand the folded group to see the log file of dynamo/test_optimizers 2023-01-11T21:30:20.9133524Z ##[group]PRINTING LOG FILE of dynamo/test_optimizers (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_optimizers_0uizc92k) 2023-01-11T21:30:20.9134023Z 2023-01-11T21:30:20.9134087Z Running tests... 2023-01-11T21:30:20.9134489Z ---------------------------------------------------------------------- 2023-01-11T21:30:20.9134890Z Test results will be stored in test-reports/python-unittest/dynamo.test_optimizers 2023-01-11T21:30:20.9135212Z test_optimizing_over_tensor_with_requires_grad (__main__.End2EndTests) ... ok (0.445s) 2023-01-11T21:30:20.9135513Z frames [('total', 8), ('ok', 8)] 2023-01-11T21:30:20.9135954Z inline_call [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py', 4), ('inline with __closure__', 1)] 2023-01-11T21:30:20.9136261Z unimplemented [] 2023-01-11T21:30:20.9136701Z graph_break [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py', 2), ('Tensor.backward', 1), ('inline with __closure__', 1)] 2023-01-11T21:30:20.9137215Z stats [('calls_captured', 24), ('fusions_possible', 22), ('unique_graphs', 2)] 2023-01-11T21:30:20.9137587Z test_adadelta (__main__.OptimizerTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:20.9138012Z inline_call [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py', 2)] 2023-01-11T21:30:20.9138412Z stats [('calls_captured', 48), ('fusions_possible', 47), ('unique_graphs', 1)] 2023-01-11T21:30:20.9138639Z ok (0.088s) 2023-01-11T21:30:20.9138935Z test_adagrad (__main__.OptimizerTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:20.9139344Z inline_call [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py', 2)] 2023-01-11T21:30:20.9139745Z stats [('calls_captured', 48), ('fusions_possible', 47), ('unique_graphs', 1)] 2023-01-11T21:30:20.9139968Z ok (0.090s) 2023-01-11T21:30:20.9140242Z test_adam (__main__.OptimizerTests) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:30:20.9140664Z inline_call [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py', 3)] 2023-01-11T21:30:20.9140944Z unimplemented [] 2023-01-11T21:30:20.9141315Z graph_break [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py', 1)] 2023-01-11T21:30:20.9141702Z stats [('calls_captured', 80), ('fusions_possible', 79), ('unique_graphs', 1)] 2023-01-11T21:30:20.9141925Z ok (0.141s) 2023-01-11T21:30:20.9142218Z test_adamax (__main__.OptimizerTests) ... 
frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:20.9142825Z inline_call [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py', 2)] 2023-01-11T21:30:20.9143231Z stats [('calls_captured', 72), ('fusions_possible', 71), ('unique_graphs', 1)] 2023-01-11T21:30:20.9143456Z ok (0.118s) 2023-01-11T21:30:20.9143741Z test_adamw (__main__.OptimizerTests) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:30:20.9144146Z inline_call [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py', 3)] 2023-01-11T21:30:20.9144426Z unimplemented [] 2023-01-11T21:30:20.9144796Z graph_break [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py', 1)] 2023-01-11T21:30:20.9145181Z stats [('calls_captured', 80), ('fusions_possible', 79), ('unique_graphs', 1)] 2023-01-11T21:30:20.9145407Z ok (0.130s) 2023-01-11T21:30:20.9145694Z test_asgd (__main__.OptimizerTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:20.9146100Z inline_call [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py', 2)] 2023-01-11T21:30:20.9146507Z stats [('calls_captured', 76), ('fusions_possible', 75), ('unique_graphs', 1)] 2023-01-11T21:30:20.9146732Z ok (0.125s) 2023-01-11T21:30:20.9147017Z test_nadam (__main__.OptimizerTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:20.9147426Z inline_call [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py', 2)] 2023-01-11T21:30:20.9147917Z stats [('calls_captured', 152), ('fusions_possible', 151), ('unique_graphs', 1)] 2023-01-11T21:30:20.9148147Z ok (0.194s) 2023-01-11T21:30:20.9148526Z test_radam (__main__.OptimizerTests) ... [2023-01-11 21:30:20,189] torch._dynamo.variables.torch: [WARNING] Profiler will be ignored 2023-01-11T21:30:20.9148860Z frames [('total', 8), ('ok', 8)] 2023-01-11T21:30:20.9149299Z inline_call [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py', 3), ('generic_jump TensorVariable()', 1)] 2023-01-11T21:30:20.9149612Z unimplemented [] 2023-01-11T21:30:20.9149861Z graph_break [('generic_jump TensorVariable()', 1)] 2023-01-11T21:30:20.9150068Z ok (0.105s) 2023-01-11T21:30:20.9150361Z test_rmsprop (__main__.OptimizerTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:20.9150772Z inline_call [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py', 2)] 2023-01-11T21:30:20.9151225Z stats [('calls_captured', 28), ('fusions_possible', 27), ('unique_graphs', 1)] 2023-01-11T21:30:20.9151453Z ok (0.063s) 2023-01-11T21:30:20.9151797Z test_rprop (__main__.OptimizerTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:20.9152220Z inline_call [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py', 2)] 2023-01-11T21:30:20.9152621Z stats [('calls_captured', 64), ('fusions_possible', 63), ('unique_graphs', 1)] 2023-01-11T21:30:20.9152845Z ok (0.094s) 2023-01-11T21:30:20.9153115Z test_sgd (__main__.OptimizerTests) ... 
frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:20.9153533Z inline_call [('inline in skipfiles: _fn /opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py', 2)] 2023-01-11T21:30:20.9153929Z stats [('calls_captured', 8), ('fusions_possible', 7), ('unique_graphs', 1)] 2023-01-11T21:30:20.9154143Z ok (0.039s) 2023-01-11T21:30:20.9154245Z 2023-01-11T21:30:20.9154444Z ---------------------------------------------------------------------- 2023-01-11T21:30:20.9154693Z Ran 12 tests in 1.633s 2023-01-11T21:30:20.9154806Z 2023-01-11T21:30:20.9154867Z OK 2023-01-11T21:30:20.9154943Z 2023-01-11T21:30:20.9155027Z Generating XML reports... 2023-01-11T21:30:20.9155439Z Generated XML report: test-reports/python-unittest/dynamo.test_optimizers/TEST-End2EndTests-20230111213018.xml 2023-01-11T21:30:20.9155965Z Generated XML report: test-reports/python-unittest/dynamo.test_optimizers/TEST-OptimizerTests-20230111213018.xml 2023-01-11T21:30:20.9156201Z 2023-01-11T21:30:20.9156470Z ##[endgroup] 2023-01-11T21:30:20.9156879Z FINISHED PRINTING LOG FILE of dynamo/test_optimizers (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_optimizers_0uizc92k) 2023-01-11T21:30:20.9157111Z 2023-01-11T21:30:22.9773595Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:30:23.0634878Z Ignoring disabled issues: [] 2023-01-11T21:30:23.0786486Z Running dynamo/test_python_autograd ... [2023-01-11 21:30:23.078311] 2023-01-11T21:30:23.0789174Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_python_autograd.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:30:23.078663] 2023-01-11T21:30:25.6664147Z 2023-01-11T21:30:25.6664723Z Expand the folded group to see the log file of dynamo/test_python_autograd 2023-01-11T21:30:25.6665903Z ##[group]PRINTING LOG FILE of dynamo/test_python_autograd (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_python_autograd_s19r7hj1) 2023-01-11T21:30:25.6666287Z 2023-01-11T21:30:25.6666389Z Running tests... 2023-01-11T21:30:25.6666921Z ---------------------------------------------------------------------- 2023-01-11T21:30:25.6667624Z Test results will be stored in test-reports/python-unittest/dynamo.test_python_autograd 2023-01-11T21:30:25.6668039Z test_backwards1 (__main__.TestPythonAutograd) ... ok (0.437s) 2023-01-11T21:30:25.6668329Z test_backwards2 (__main__.TestPythonAutograd) ... inline_call [] 2023-01-11T21:30:25.6668690Z stats [('calls_captured', 8), ('fusions_possible', 7), ('unique_graphs', 1)] 2023-01-11T21:30:25.6669138Z inline_call [] 2023-01-11T21:30:25.6669444Z stats [('calls_captured', 8), ('fusions_possible', 7), ('unique_graphs', 1)] 2023-01-11T21:30:25.6669661Z ok (0.103s) 2023-01-11T21:30:25.6669892Z test_forwards1 (__main__.TestPythonAutograd) ... inline_call [] 2023-01-11T21:30:25.6670249Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:30:25.6670479Z ok (0.031s) 2023-01-11T21:30:25.6670693Z test_forwards2 (__main__.TestPythonAutograd) ... inline_call [] 2023-01-11T21:30:25.6671050Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:30:25.6671274Z ok (0.030s) 2023-01-11T21:30:25.6671483Z test_split (__main__.TestPythonAutograd) ... 
inline_call [] 2023-01-11T21:30:25.6671903Z stats [('calls_captured', 8), ('fusions_possible', 6), ('unique_graphs', 2)] 2023-01-11T21:30:25.6672146Z ok (0.097s) 2023-01-11T21:30:25.6672248Z 2023-01-11T21:30:25.6672516Z ---------------------------------------------------------------------- 2023-01-11T21:30:25.6672769Z Ran 5 tests in 0.699s 2023-01-11T21:30:25.6672882Z 2023-01-11T21:30:25.6672944Z OK 2023-01-11T21:30:25.6673037Z 2023-01-11T21:30:25.6673124Z Generating XML reports... 2023-01-11T21:30:25.6673564Z Generated XML report: test-reports/python-unittest/dynamo.test_python_autograd/TEST-TestPythonAutograd-20230111213024.xml 2023-01-11T21:30:25.6673818Z 2023-01-11T21:30:25.6674081Z ##[endgroup] 2023-01-11T21:30:25.6674512Z FINISHED PRINTING LOG FILE of dynamo/test_python_autograd (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_python_autograd_s19r7hj1) 2023-01-11T21:30:25.6674742Z 2023-01-11T21:30:27.7137208Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:30:27.8006273Z Ignoring disabled issues: [] 2023-01-11T21:30:27.8156706Z Running dynamo/test_recompile_ux ... [2023-01-11 21:30:27.815249] 2023-01-11T21:30:27.8157603Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_recompile_ux.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:30:27.815519] 2023-01-11T21:30:29.5979671Z 2023-01-11T21:30:29.5980275Z Expand the folded group to see the log file of dynamo/test_recompile_ux 2023-01-11T21:30:29.5981705Z ##[group]PRINTING LOG FILE of dynamo/test_recompile_ux (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_recompile_ux_xiwn7yge) 2023-01-11T21:30:29.5982324Z 2023-01-11T21:30:29.5982995Z ##[endgroup] 2023-01-11T21:30:29.5984143Z FINISHED PRINTING LOG FILE of dynamo/test_recompile_ux (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_recompile_ux_xiwn7yge) 2023-01-11T21:30:29.5984744Z 2023-01-11T21:30:31.6068264Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:30:31.6963570Z Ignoring disabled issues: [] 2023-01-11T21:30:31.7115568Z Running dynamo/test_replay_record ... [2023-01-11 21:30:31.711184] 2023-01-11T21:30:31.7116557Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_replay_record.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:30:31.711451] 2023-01-11T21:30:33.5365220Z 2023-01-11T21:30:33.5365867Z Expand the folded group to see the log file of dynamo/test_replay_record 2023-01-11T21:30:33.5367418Z ##[group]PRINTING LOG FILE of dynamo/test_replay_record (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_replay_record_kxslac6b) 2023-01-11T21:30:33.5368041Z 2023-01-11T21:30:33.5368208Z Running tests... 2023-01-11T21:30:33.5368928Z ---------------------------------------------------------------------- 2023-01-11T21:30:33.5369889Z Test results will be stored in test-reports/python-unittest/dynamo.test_replay_record 2023-01-11T21:30:33.5370707Z test_fn_call_args (__main__.ReplayRecordTests) ... skip: requires dill (0.000s) 2023-01-11T21:30:33.5371480Z test_local_module (__main__.ReplayRecordTests) ... skip: requires dill (0.000s) 2023-01-11T21:30:33.5372261Z test_nonlocal_fn_call (__main__.ReplayRecordTests) ... skip: requires dill (0.000s) 2023-01-11T21:30:33.5373077Z test_nonlocal_module_class (__main__.ReplayRecordTests) ... skip: requires dill (0.000s) 2023-01-11T21:30:33.5374171Z test_nonlocal_module_fn_call (__main__.ReplayRecordTests) ... 
skip: requires dill (0.000s) 2023-01-11T21:30:33.5374947Z test_successful_inline (__main__.ReplayRecordTests) ... skip: requires dill (0.000s) 2023-01-11T21:30:33.5375761Z test_unsuccessful_inline (__main__.ReplayRecordTests) ... skip: requires dill (0.000s) 2023-01-11T21:30:33.5376196Z 2023-01-11T21:30:33.5376690Z ---------------------------------------------------------------------- 2023-01-11T21:30:33.5377266Z Ran 7 tests in 0.003s 2023-01-11T21:30:33.5377541Z 2023-01-11T21:30:33.5377687Z OK (skipped=7) 2023-01-11T21:30:33.5377933Z 2023-01-11T21:30:33.5378134Z Generating XML reports... 2023-01-11T21:30:33.5379194Z Generated XML report: test-reports/python-unittest/dynamo.test_replay_record/TEST-ReplayRecordTests-20230111213033.xml 2023-01-11T21:30:33.5379791Z 2023-01-11T21:30:33.5380333Z ##[endgroup] 2023-01-11T21:30:33.5381473Z FINISHED PRINTING LOG FILE of dynamo/test_replay_record (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_replay_record_kxslac6b) 2023-01-11T21:30:33.5382078Z 2023-01-11T21:30:35.5192969Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:30:35.6033121Z Ignoring disabled issues: [] 2023-01-11T21:30:35.6180501Z Running dynamo/test_skip_non_tensor ... [2023-01-11 21:30:35.617665] 2023-01-11T21:30:35.6181723Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_skip_non_tensor.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:30:35.617940] 2023-01-11T21:30:37.9647896Z 2023-01-11T21:30:37.9648468Z Expand the folded group to see the log file of dynamo/test_skip_non_tensor 2023-01-11T21:30:37.9649372Z ##[group]PRINTING LOG FILE of dynamo/test_skip_non_tensor (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_skip_non_tensor_gn0673yk) 2023-01-11T21:30:37.9649640Z 2023-01-11T21:30:37.9649718Z Running tests... 2023-01-11T21:30:37.9650158Z ---------------------------------------------------------------------- 2023-01-11T21:30:37.9650568Z Test results will be stored in test-reports/python-unittest/dynamo.test_skip_non_tensor 2023-01-11T21:30:37.9650892Z test_add_skip (__main__.SkipNonTensorTests) ... ok (0.269s) 2023-01-11T21:30:37.9651342Z test_add_tensor1 (__main__.SkipNonTensorTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:30:37.9651605Z ok (0.086s) 2023-01-11T21:30:37.9651991Z test_add_tensor2 (__main__.SkipNonTensorTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:30:37.9652267Z ok (0.005s) 2023-01-11T21:30:37.9652653Z test_add_tensor_dict (__main__.SkipNonTensorTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:30:37.9652921Z ok (0.005s) 2023-01-11T21:30:37.9653300Z test_add_tensor_list (__main__.SkipNonTensorTests) ... stats [('calls_captured', 1), ('unique_graphs', 1), ('fusions_possible', 0)] 2023-01-11T21:30:37.9653584Z ok (0.005s) 2023-01-11T21:30:37.9653795Z test_custom_list (__main__.SkipNonTensorTests) ... ok (0.001s) 2023-01-11T21:30:37.9654078Z test_recursive_list (__main__.SkipNonTensorTests) ... ok (0.001s) 2023-01-11T21:30:37.9654240Z 2023-01-11T21:30:37.9654440Z ---------------------------------------------------------------------- 2023-01-11T21:30:37.9654673Z Ran 7 tests in 0.374s 2023-01-11T21:30:37.9654788Z 2023-01-11T21:30:37.9654850Z OK 2023-01-11T21:30:37.9654942Z 2023-01-11T21:30:37.9655030Z Generating XML reports... 
2023-01-11T21:30:37.9655471Z Generated XML report: test-reports/python-unittest/dynamo.test_skip_non_tensor/TEST-SkipNonTensorTests-20230111213037.xml 2023-01-11T21:30:37.9655722Z 2023-01-11T21:30:37.9655928Z ##[endgroup] 2023-01-11T21:30:37.9656356Z FINISHED PRINTING LOG FILE of dynamo/test_skip_non_tensor (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_skip_non_tensor_gn0673yk) 2023-01-11T21:30:37.9656602Z 2023-01-11T21:30:39.9706231Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:30:40.0573451Z Ignoring disabled issues: [] 2023-01-11T21:30:40.0723874Z Running dynamo/test_subgraphs ... [2023-01-11 21:30:40.072065] 2023-01-11T21:30:40.0726860Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_subgraphs.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:30:40.072361] 2023-01-11T21:30:44.0662055Z 2023-01-11T21:30:44.0662729Z Expand the folded group to see the log file of dynamo/test_subgraphs 2023-01-11T21:30:44.0663815Z ##[group]PRINTING LOG FILE of dynamo/test_subgraphs (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_subgraphs_v9l8y_qo) 2023-01-11T21:30:44.0664265Z 2023-01-11T21:30:44.0664396Z Running tests... 2023-01-11T21:30:44.0665076Z ---------------------------------------------------------------------- 2023-01-11T21:30:44.0665771Z Test results will be stored in test-reports/python-unittest/dynamo.test_subgraphs 2023-01-11T21:30:44.0666319Z test_capi_call1 (__main__.SubGraphTests) ... ok (0.354s) 2023-01-11T21:30:44.0667151Z test_capi_call2 (__main__.SubGraphTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:30:44.0667583Z unimplemented [] 2023-01-11T21:30:44.0668294Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0669104Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:30:44.0669603Z frames [('total', 2), ('ok', 2)] 2023-01-11T21:30:44.0669934Z unimplemented [] 2023-01-11T21:30:44.0670552Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0671109Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:30:44.0671378Z ok (0.013s) 2023-01-11T21:30:44.0671987Z test_capi_call3 (__main__.SubGraphTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:30:44.0672372Z unimplemented [] 2023-01-11T21:30:44.0673033Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0673795Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:30:44.0674209Z ok (0.007s) 2023-01-11T21:30:44.0674763Z test_control_flow1 (__main__.SubGraphTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:44.0675314Z stats [('calls_captured', 5), ('fusions_possible', 4), ('unique_graphs', 1)] 2023-01-11T21:30:44.0675692Z ok (0.012s) 2023-01-11T21:30:44.0676243Z test_control_flow2 (__main__.SubGraphTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:44.0676941Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:30:44.0677325Z ok (0.007s) 2023-01-11T21:30:44.0677756Z test_control_flow3 (__main__.SubGraphTests) ... 
frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:44.0678119Z stats [('calls_captured', 7), ('fusions_possible', 4), ('unique_graphs', 3)] 2023-01-11T21:30:44.0678332Z ok (0.016s) 2023-01-11T21:30:44.0678641Z test_control_flow4 (__main__.SubGraphTests) ... frames [('total', 5), ('ok', 5)] 2023-01-11T21:30:44.0679001Z stats [('calls_captured', 5), ('unique_graphs', 3), ('fusions_possible', 2)] 2023-01-11T21:30:44.0679227Z ok (0.014s) 2023-01-11T21:30:44.0679517Z test_control_flow5 (__main__.SubGraphTests) ... frames [('total', 7), ('ok', 7)] 2023-01-11T21:30:44.0679876Z stats [('calls_captured', 13), ('fusions_possible', 7), ('unique_graphs', 6)] 2023-01-11T21:30:44.0680101Z ok (0.038s) 2023-01-11T21:30:44.0680395Z test_dynamic_duck_size (__main__.SubGraphTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:30:44.0680762Z stats [('calls_captured', 10), ('fusions_possible', 8), ('unique_graphs', 2)] 2023-01-11T21:30:44.0680991Z ok (0.073s) 2023-01-11T21:30:44.0681280Z test_dynamic_kwarg (__main__.SubGraphTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:30:44.0681642Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:30:44.0682008Z ok (0.023s) 2023-01-11T21:30:44.0682326Z test_dynamic_order_dependence (__main__.SubGraphTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:44.0682708Z stats [('calls_captured', 9), ('fusions_possible', 6), ('unique_graphs', 3)] 2023-01-11T21:30:44.0682937Z ok (0.068s) 2023-01-11T21:30:44.0683247Z test_dynamic_shapes (__main__.SubGraphTests) ... frames [('total', 11), ('ok', 11)] 2023-01-11T21:30:44.0683604Z stats [('calls_captured', 22), ('fusions_possible', 11), ('unique_graphs', 11)] 2023-01-11T21:30:44.0683840Z ok (0.062s) 2023-01-11T21:30:44.0684160Z test_dynamic_zero_inference (__main__.SubGraphTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:30:44.0684519Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:30:44.0684747Z ok (0.026s) 2023-01-11T21:30:44.0685078Z test_enumerate_not_break_graph (__main__.SubGraphTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:30:44.0685484Z stats [('calls_captured', 2), ('fusions_possible', 1), ('unique_graphs', 1)] 2023-01-11T21:30:44.0685717Z ok (0.008s) 2023-01-11T21:30:44.0686020Z test_extended_args (__main__.SubGraphTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:44.0686395Z stats [('calls_captured', 1026), ('fusions_possible', 1023), ('unique_graphs', 3)] 2023-01-11T21:30:44.0686615Z ok (0.832s) 2023-01-11T21:30:44.0686928Z test_graph_break_on_item (__main__.SubGraphTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:30:44.0687172Z unimplemented [] 2023-01-11T21:30:44.0687401Z graph_break [('Tensor.item', 1)] 2023-01-11T21:30:44.0687723Z stats [('calls_captured', 5), ('fusions_possible', 3), ('unique_graphs', 2)] 2023-01-11T21:30:44.0687954Z ok (0.014s) 2023-01-11T21:30:44.0688257Z test_indirect_unsupported1 (__main__.SubGraphTests) ... 
frames [('total', 2), ('ok', 2)] 2023-01-11T21:30:44.0688716Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0689014Z unimplemented [] 2023-01-11T21:30:44.0689407Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:30:44.0689810Z stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:30:44.0690041Z ok (0.011s) 2023-01-11T21:30:44.0690359Z test_indirect_unsupported2 (__main__.SubGraphTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:44.0690798Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0691095Z unimplemented [] 2023-01-11T21:30:44.0691484Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:30:44.0691885Z stats [('calls_captured', 5), ('unique_graphs', 3), ('fusions_possible', 2)] 2023-01-11T21:30:44.0692116Z ok (0.016s) 2023-01-11T21:30:44.0692435Z test_indirect_unsupported3 (__main__.SubGraphTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:30:44.0695026Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0715488Z unimplemented [] 2023-01-11T21:30:44.0716294Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:30:44.0717111Z stats [('calls_captured', 3), ('unique_graphs', 2), ('fusions_possible', 1)] 2023-01-11T21:30:44.0717531Z ok (0.011s) 2023-01-11T21:30:44.0718063Z test_multigraph (__main__.SubGraphTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:30:44.0718681Z stats [('calls_captured', 5), ('fusions_possible', 3), ('unique_graphs', 2)] 2023-01-11T21:30:44.0718903Z ok (0.011s) 2023-01-11T21:30:44.0719203Z test_no_graph_break_on_item (__main__.SubGraphTests) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:30:44.0719664Z stats [('calls_captured', 6), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:30:44.0720009Z ok (0.009s) 2023-01-11T21:30:44.0720713Z test_pop_after_resume (__main__.SubGraphTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:30:44.0721514Z unimplemented [] 2023-01-11T21:30:44.0722248Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0723022Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:30:44.0723368Z ok (0.016s) 2023-01-11T21:30:44.0723788Z test_restore_range (__main__.SubGraphTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:30:44.0724190Z unimplemented [] 2023-01-11T21:30:44.0724784Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0725200Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:30:44.0725425Z ok (0.012s) 2023-01-11T21:30:44.0725731Z test_restore_range_iter (__main__.SubGraphTests) ... 
frames [('total', 2), ('ok', 2)] 2023-01-11T21:30:44.0725959Z unimplemented [] 2023-01-11T21:30:44.0726442Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0726867Z stats [('calls_captured', 2), ('unique_graphs', 2), ('fusions_possible', 0)] 2023-01-11T21:30:44.0727078Z ok (0.011s) 2023-01-11T21:30:44.0727382Z test_restore_state (__main__.SubGraphTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:30:44.0727616Z unimplemented [] 2023-01-11T21:30:44.0727986Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0728398Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:30:44.0728619Z ok (0.011s) 2023-01-11T21:30:44.0728905Z test_resume1 (__main__.SubGraphTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:30:44.0729115Z unimplemented [] 2023-01-11T21:30:44.0729496Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0729911Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:30:44.0730124Z ok (0.011s) 2023-01-11T21:30:44.0730410Z test_resume2 (__main__.SubGraphTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:44.0730840Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0731115Z unimplemented [] 2023-01-11T21:30:44.0731497Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:30:44.0731907Z stats [('calls_captured', 7), ('fusions_possible', 4), ('unique_graphs', 3)] 2023-01-11T21:30:44.0732130Z ok (0.016s) 2023-01-11T21:30:44.0732405Z test_resume3 (__main__.SubGraphTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:44.0732829Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0733120Z unimplemented [] 2023-01-11T21:30:44.0733494Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:30:44.0733899Z stats [('calls_captured', 7), ('fusions_possible', 4), ('unique_graphs', 3)] 2023-01-11T21:30:44.0734126Z ok (0.016s) 2023-01-11T21:30:44.0734414Z test_resume4 (__main__.SubGraphTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:44.0734830Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0735117Z unimplemented [] 2023-01-11T21:30:44.0735493Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:30:44.0735885Z stats [('calls_captured', 7), ('fusions_possible', 4), ('unique_graphs', 3)] 2023-01-11T21:30:44.0736103Z ok (0.016s) 2023-01-11T21:30:44.0736346Z test_resume5 (__main__.SubGraphTests) ... 
tensor([1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 2023-01-11T21:30:44.0736612Z 1.5000]) 2023-01-11T21:30:44.0736808Z tensor([1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 2023-01-11T21:30:44.0736993Z 1.5000]) 2023-01-11T21:30:44.0737189Z tensor([1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 2023-01-11T21:30:44.0737366Z 1.5000]) 2023-01-11T21:30:44.0737560Z tensor([1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 2023-01-11T21:30:44.0737757Z 1.5000]) 2023-01-11T21:30:44.0737961Z frames [('total', 4), ('ok', 4)] 2023-01-11T21:30:44.0738150Z unimplemented [] 2023-01-11T21:30:44.0738469Z graph_break [('call_function BuiltinVariable(print) [TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0738819Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:30:44.0739042Z ok (0.013s) 2023-01-11T21:30:44.0739344Z test_resume_freevars (__main__.SubGraphTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:30:44.0739570Z unimplemented [] 2023-01-11T21:30:44.0740023Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0740434Z stats [('calls_captured', 5), ('fusions_possible', 3), ('unique_graphs', 2)] 2023-01-11T21:30:44.0740661Z ok (0.012s) 2023-01-11T21:30:44.0740956Z test_resume_paths_join (__main__.SubGraphTests) ... frames [('total', 7), ('ok', 7)] 2023-01-11T21:30:44.0741320Z stats [('calls_captured', 10), ('unique_graphs', 7), ('fusions_possible', 3)] 2023-01-11T21:30:44.0741546Z ok (0.033s) 2023-01-11T21:30:44.0741847Z test_resume_tuple_iterator (__main__.SubGraphTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:30:44.0742090Z unimplemented [] 2023-01-11T21:30:44.0742701Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0743131Z stats [('calls_captured', 8), ('fusions_possible', 6), ('unique_graphs', 2)] 2023-01-11T21:30:44.0743346Z ok (0.018s) 2023-01-11T21:30:44.0743668Z test_resume_with_no_grad1 (__main__.SubGraphTests) ... frames [('total', 4), ('ok', 4)] 2023-01-11T21:30:44.0743973Z unimplemented [] 2023-01-11T21:30:44.0744200Z graph_break [('Tensor.tolist', 2)] 2023-01-11T21:30:44.0744594Z stats [('calls_captured', 18), ('fusions_possible', 14), ('unique_graphs', 4)] 2023-01-11T21:30:44.0744940Z ok (0.025s) 2023-01-11T21:30:44.0745487Z test_resume_with_no_grad2 (__main__.SubGraphTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:44.0745923Z unimplemented [] 2023-01-11T21:30:44.0746351Z graph_break [('Tensor.tolist', 2)] 2023-01-11T21:30:44.0746937Z stats [('calls_captured', 13), ('fusions_possible', 10), ('unique_graphs', 3)] 2023-01-11T21:30:44.0747350Z ok (0.020s) 2023-01-11T21:30:44.0747919Z test_resume_with_no_grad3 (__main__.SubGraphTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:30:44.0748313Z unimplemented [] 2023-01-11T21:30:44.0748535Z graph_break [('Tensor.tolist', 1)] 2023-01-11T21:30:44.0748861Z stats [('calls_captured', 19), ('fusions_possible', 17), ('unique_graphs', 2)] 2023-01-11T21:30:44.0749096Z ok (0.018s) 2023-01-11T21:30:44.0749384Z test_stack_state1 (__main__.SubGraphTests) ... 
frames [('total', 2), ('ok', 2)] 2023-01-11T21:30:44.0749619Z unimplemented [] 2023-01-11T21:30:44.0750006Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0750408Z stats [('calls_captured', 6), ('fusions_possible', 4), ('unique_graphs', 2)] 2023-01-11T21:30:44.0750632Z ok (0.012s) 2023-01-11T21:30:44.0750931Z test_stack_state2 (__main__.SubGraphTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:44.0751368Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0751746Z unimplemented [] 2023-01-11T21:30:44.0752138Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:30:44.0752651Z stats [('calls_captured', 7), ('fusions_possible', 4), ('unique_graphs', 3)] 2023-01-11T21:30:44.0752866Z ok (0.017s) 2023-01-11T21:30:44.0753106Z test_start1 (__main__.SubGraphTests) ... tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]) 2023-01-11T21:30:44.0753564Z tensor([-2., -2., -2., -2., -2., -2., -2., -2., -2., -2.]) 2023-01-11T21:30:44.0753882Z tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]) 2023-01-11T21:30:44.0754275Z tensor([-2., -2., -2., -2., -2., -2., -2., -2., -2., -2.]) 2023-01-11T21:30:44.0754633Z frames [('total', 4), ('ok', 4)] 2023-01-11T21:30:44.0754967Z unimplemented [] 2023-01-11T21:30:44.0755416Z graph_break [('call_function BuiltinVariable(print) [TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0755925Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:30:44.0756262Z ok (0.010s) 2023-01-11T21:30:44.0756732Z test_start2 (__main__.SubGraphTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:44.0757619Z inline_call [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0758145Z unimplemented [] 2023-01-11T21:30:44.0758765Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:30:44.0759419Z stats [('calls_captured', 4), ('fusions_possible', 2), ('unique_graphs', 2)] 2023-01-11T21:30:44.0759672Z ok (0.013s) 2023-01-11T21:30:44.0759960Z test_start3 (__main__.SubGraphTests) ... frames [('total', 2), ('ok', 2)] 2023-01-11T21:30:44.0760178Z unimplemented [] 2023-01-11T21:30:44.0760567Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 1)] 2023-01-11T21:30:44.0760983Z stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:30:44.0761196Z ok (0.008s) 2023-01-11T21:30:44.0761483Z test_start4 (__main__.SubGraphTests) ... frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:44.0761838Z stats [('calls_captured', 4), ('unique_graphs', 3), ('fusions_possible', 1)] 2023-01-11T21:30:44.0762067Z ok (0.012s) 2023-01-11T21:30:44.0762300Z test_tuple_iterator_mutate (__main__.SubGraphTests) ... skip: not working yet (0.001s) 2023-01-11T21:30:44.0762689Z test_tuple_iterator_return (__main__.SubGraphTests) ... 
frames [('total', 3), ('ok', 3)] 2023-01-11T21:30:44.0762936Z unimplemented [] 2023-01-11T21:30:44.0763307Z graph_break [('call_function UserDefinedObjectVariable(unsupported) [TensorVariable(), TensorVariable()] {}', 2)] 2023-01-11T21:30:44.0763723Z stats [('calls_captured', 6), ('fusions_possible', 3), ('unique_graphs', 3)] 2023-01-11T21:30:44.0763949Z ok (0.023s) 2023-01-11T21:30:44.0764050Z 2023-01-11T21:30:44.0764237Z ---------------------------------------------------------------------- 2023-01-11T21:30:44.0764479Z Ran 44 tests in 1.972s 2023-01-11T21:30:44.0764591Z 2023-01-11T21:30:44.0764662Z OK (skipped=1) 2023-01-11T21:30:44.0764768Z 2023-01-11T21:30:44.0764851Z Generating XML reports... 2023-01-11T21:30:44.0765262Z Generated XML report: test-reports/python-unittest/dynamo.test_subgraphs/TEST-SubGraphTests-20230111213041.xml 2023-01-11T21:30:44.0765500Z 2023-01-11T21:30:44.0765890Z ##[endgroup] 2023-01-11T21:30:44.0766302Z FINISHED PRINTING LOG FILE of dynamo/test_subgraphs (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_subgraphs_v9l8y_qo) 2023-01-11T21:30:44.0766534Z 2023-01-11T21:30:46.1606493Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:30:46.2493224Z Ignoring disabled issues: [] 2023-01-11T21:30:46.2647993Z Running dynamo/test_torchxla_num_output ... [2023-01-11 21:30:46.264414] 2023-01-11T21:30:46.2649323Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_torchxla_num_output.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:30:46.264697] 2023-01-11T21:30:47.9090135Z 2023-01-11T21:30:47.9090735Z Expand the folded group to see the log file of dynamo/test_torchxla_num_output 2023-01-11T21:30:47.9091768Z ##[group]PRINTING LOG FILE of dynamo/test_torchxla_num_output (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_torchxla_num_output_7jmnji7l) 2023-01-11T21:30:47.9092234Z 2023-01-11T21:30:47.9092483Z ##[endgroup] 2023-01-11T21:30:47.9093080Z FINISHED PRINTING LOG FILE of dynamo/test_torchxla_num_output (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_torchxla_num_output_7jmnji7l) 2023-01-11T21:30:47.9093437Z 2023-01-11T21:30:49.9425457Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:30:50.0386112Z Ignoring disabled issues: [] 2023-01-11T21:30:50.0536013Z Running dynamo/test_torchxla_util ... [2023-01-11 21:30:50.053200] 2023-01-11T21:30:50.0537136Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_torchxla_util.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:30:50.053494] 2023-01-11T21:30:50.0900389Z 2023-01-11T21:30:50.0900835Z Expand the folded group to see the log file of dynamo/test_torchxla_util 2023-01-11T21:30:50.0902221Z ##[group]PRINTING LOG FILE of dynamo/test_torchxla_util (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_torchxla_util_i3q0g9n8) 2023-01-11T21:30:50.0902700Z 2023-01-11T21:30:50.0902947Z ##[endgroup] 2023-01-11T21:30:50.0903472Z FINISHED PRINTING LOG FILE of dynamo/test_torchxla_util (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_torchxla_util_i3q0g9n8) 2023-01-11T21:30:50.0903708Z 2023-01-11T21:30:52.1241873Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:30:52.2190672Z Ignoring disabled issues: [] 2023-01-11T21:30:52.2336919Z Running dynamo/test_verify_correctness ... 
[2023-01-11 21:30:52.233351] 2023-01-11T21:30:52.2338405Z Executing ['/opt/conda/bin/python', '-bb', 'dynamo/test_verify_correctness.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:30:52.233627] 2023-01-11T21:30:54.5654324Z 2023-01-11T21:30:54.5654918Z Expand the folded group to see the log file of dynamo/test_verify_correctness 2023-01-11T21:30:54.5656009Z ##[group]PRINTING LOG FILE of dynamo/test_verify_correctness (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_verify_correctness_8fofd13v) 2023-01-11T21:30:54.5656430Z 2023-01-11T21:30:54.5656594Z Running tests... 2023-01-11T21:30:54.5657218Z ---------------------------------------------------------------------- 2023-01-11T21:30:54.5657944Z Test results will be stored in test-reports/python-unittest/dynamo.test_verify_correctness 2023-01-11T21:30:54.5658522Z test_example_inputs (__main__.TestVerifyCorrectness) ... ok (0.337s) 2023-01-11T21:30:54.5659039Z test_incorrect_verify_false (__main__.TestVerifyCorrectness) 2023-01-11T21:30:54.5659764Z The bad optimization return a graph that ... stats [('calls_captured', 3), ('fusions_possible', 2), ('unique_graphs', 1)] 2023-01-11T21:30:54.5660315Z frames [('total', 2), ('ok', 2)] 2023-01-11T21:30:54.5660877Z stats [('calls_captured', 7), ('fusions_possible', 5), ('unique_graphs', 2)] 2023-01-11T21:30:54.5661276Z ok (0.018s) 2023-01-11T21:30:54.5661698Z test_incorrect_verify_true (__main__.TestVerifyCorrectness) 2023-01-11T21:30:54.5662717Z If a bad optimization return a graph that ... [2023-01-11 21:30:54,072] torch._dynamo.output_graph: [ERROR] error in verify_correctness 2023-01-11T21:30:54.5663212Z Traceback (most recent call last): 2023-01-11T21:30:54.5663850Z File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 173, in __call__ 2023-01-11T21:30:54.5664375Z raise RuntimeError(f"incorrect results of backend {self}") 2023-01-11T21:30:54.5664970Z RuntimeError: incorrect results of backend 2023-01-11T21:30:54.5665539Z frames [('total', 2), ('ok', 1)] 2023-01-11T21:30:54.5666089Z stats [('calls_captured', 7), ('fusions_possible', 5), ('unique_graphs', 1)] 2023-01-11T21:30:54.5666437Z ok (0.016s) 2023-01-11T21:30:54.5666851Z test_ipex_fp32 (__main__.TestVerifyCorrectness) ... skip: requires ipex (0.001s) 2023-01-11T21:30:54.5667503Z test_nnc (__main__.TestVerifyCorrectness) ... frames [('total', 1), ('ok', 1)] 2023-01-11T21:30:54.5668404Z stats [('calls_captured', 4), ('fusions_possible', 3), ('unique_graphs', 1)] 2023-01-11T21:30:54.5668791Z ok (0.089s) 2023-01-11T21:30:54.5668970Z 2023-01-11T21:30:54.5669330Z ---------------------------------------------------------------------- 2023-01-11T21:30:54.5669718Z Ran 5 tests in 0.461s 2023-01-11T21:30:54.5669908Z 2023-01-11T21:30:54.5670023Z OK (skipped=1) 2023-01-11T21:30:54.5670201Z 2023-01-11T21:30:54.5670353Z Generating XML reports... 2023-01-11T21:30:54.5671086Z Generated XML report: test-reports/python-unittest/dynamo.test_verify_correctness/TEST-TestVerifyCorrectness-20230111213053.xml 2023-01-11T21:30:54.5671488Z 2023-01-11T21:30:54.5671986Z ##[endgroup] 2023-01-11T21:30:54.5672755Z FINISHED PRINTING LOG FILE of dynamo/test_verify_correctness (/var/lib/jenkins/workspace/test/test-reports/dynamo-test_verify_correctness_8fofd13v) 2023-01-11T21:30:54.5673186Z 2023-01-11T21:30:56.5820463Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:30:56.6732205Z Ignoring disabled issues: [] 2023-01-11T21:30:56.6882013Z Running inductor/test_minifier ... 
[2023-01-11 21:30:56.687793] 2023-01-11T21:30:56.6883353Z Executing ['/opt/conda/bin/python', '-bb', 'inductor/test_minifier.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:30:56.688069] 2023-01-11T21:32:34.0706431Z 2023-01-11T21:32:34.0707009Z Expand the folded group to see the log file of inductor/test_minifier 2023-01-11T21:32:34.0708146Z ##[group]PRINTING LOG FILE of inductor/test_minifier (/var/lib/jenkins/workspace/test/test-reports/inductor-test_minifier_3y8_71cj) 2023-01-11T21:32:34.0708573Z 2023-01-11T21:32:34.0744367Z Running tests... 2023-01-11T21:32:34.0745208Z ---------------------------------------------------------------------- 2023-01-11T21:32:34.0746168Z Test results will be stored in test-reports/python-unittest/inductor.test_minifier 2023-01-11T21:32:34.0747010Z test_after_aot_cpu_accuracy_backend_passes (__main__.MinifierTests) ... ok (10.074s) 2023-01-11T21:32:34.0747937Z test_after_aot_cpu_accuracy_error (__main__.MinifierTests) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.000s) 2023-01-11T21:32:34.0748843Z test_after_aot_cpu_compile_backend_passes (__main__.MinifierTests) ... ok (8.301s) 2023-01-11T21:32:34.0749736Z test_after_aot_cpu_compile_error (__main__.MinifierTests) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.000s) 2023-01-11T21:32:34.0750622Z test_after_aot_cpu_runtime_backend_passes (__main__.MinifierTests) ... ok (8.333s) 2023-01-11T21:32:34.0751519Z test_after_aot_cpu_runtime_error (__main__.MinifierTests) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.000s) 2023-01-11T21:32:34.0762106Z test_after_aot_cuda_accuracy_backend_passes (__main__.MinifierTests) ... skip: requires cuda (0.000s) 2023-01-11T21:32:34.0762675Z test_after_aot_cuda_accuracy_error (__main__.MinifierTests) ... skip: requires cuda (0.000s) 2023-01-11T21:32:34.0763243Z test_after_aot_cuda_compile_backend_passes (__main__.MinifierTests) ... skip: requires cuda (0.000s) 2023-01-11T21:32:34.0763746Z test_after_aot_cuda_compile_error (__main__.MinifierTests) ... skip: requires cuda (0.000s) 2023-01-11T21:32:34.0764255Z test_after_aot_cuda_runtime_backend_passes (__main__.MinifierTests) ... skip: (0.000s) 2023-01-11T21:32:34.0764711Z test_after_aot_cuda_runtime_error (__main__.MinifierTests) ... skip: (0.000s) 2023-01-11T21:32:34.0765353Z test_after_aot_with_modified_config_accuracy_error (__main__.MinifierTests) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.000s) 2023-01-11T21:32:34.0766077Z test_after_aot_with_modified_config_compile_error (__main__.MinifierTests) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.000s) 2023-01-11T21:32:34.0766667Z test_inductor_config_serialization (__main__.MinifierTests) ... ok (1.569s) 2023-01-11T21:32:34.0767330Z test_torch_compile_after_aot_accuracy_error (__main__.MinifierTests) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.000s) 2023-01-11T21:32:34.0768290Z test_torch_compile_after_aot_compile_error (__main__.MinifierTests) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.000s) 2023-01-11T21:32:34.0768934Z test_torch_compile_after_dynamo_accuracy_error (__main__.MinifierTests) ... ok (29.803s) 2023-01-11T21:32:34.0769464Z test_torch_compile_after_dynamo_compile_error (__main__.MinifierTests) ... 
ok (37.338s) 2023-01-11T21:32:34.0769745Z 2023-01-11T21:32:34.0770145Z ---------------------------------------------------------------------- 2023-01-11T21:32:34.0770541Z Ran 19 tests in 95.423s 2023-01-11T21:32:34.0805824Z 2023-01-11T21:32:34.0806054Z OK (skipped=13) 2023-01-11T21:32:34.0806239Z 2023-01-11T21:32:34.0806381Z Generating XML reports... 2023-01-11T21:32:34.0807254Z Generated XML report: test-reports/python-unittest/inductor.test_minifier/TEST-MinifierTests-20230111213058.xml 2023-01-11T21:32:34.0807860Z 2023-01-11T21:32:34.0808370Z ##[endgroup] 2023-01-11T21:32:34.0808841Z FINISHED PRINTING LOG FILE of inductor/test_minifier (/var/lib/jenkins/workspace/test/test-reports/inductor-test_minifier_3y8_71cj) 2023-01-11T21:32:34.0809977Z 2023-01-11T21:32:35.9825660Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:32:36.0713605Z Ignoring disabled issues: [] 2023-01-11T21:32:36.0859658Z Running inductor/test_perf ... [2023-01-11 21:32:36.085622] 2023-01-11T21:32:36.0861058Z Executing ['/opt/conda/bin/python', '-bb', 'inductor/test_perf.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:32:36.085868] 2023-01-11T21:32:44.3424199Z 2023-01-11T21:32:44.3424773Z Expand the folded group to see the log file of inductor/test_perf 2023-01-11T21:32:44.3425696Z ##[group]PRINTING LOG FILE of inductor/test_perf (/var/lib/jenkins/workspace/test/test-reports/inductor-test_perf_wvrgf29e) 2023-01-11T21:32:44.3426443Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:32:44.3426686Z 2023-01-11T21:32:44.3426994Z ##[endgroup] 2023-01-11T21:32:44.3427576Z FINISHED PRINTING LOG FILE of inductor/test_perf (/var/lib/jenkins/workspace/test/test-reports/inductor-test_perf_wvrgf29e) 2023-01-11T21:32:44.3427911Z 2023-01-11T21:32:46.2521993Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:32:46.3346083Z Ignoring disabled issues: [] 2023-01-11T21:32:46.3492487Z Running inductor/test_smoke ... [2023-01-11 21:32:46.348843] 2023-01-11T21:32:46.3493676Z Executing ['/opt/conda/bin/python', '-bb', 'inductor/test_smoke.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:32:46.349096] 2023-01-11T21:32:48.1734116Z 2023-01-11T21:32:48.1734882Z Expand the folded group to see the log file of inductor/test_smoke 2023-01-11T21:32:48.1735718Z ##[group]PRINTING LOG FILE of inductor/test_smoke (/var/lib/jenkins/workspace/test/test-reports/inductor-test_smoke_q262m6v1) 2023-01-11T21:32:48.1735986Z 2023-01-11T21:32:48.1736211Z ##[endgroup] 2023-01-11T21:32:48.1736716Z FINISHED PRINTING LOG FILE of inductor/test_smoke (/var/lib/jenkins/workspace/test/test-reports/inductor-test_smoke_q262m6v1) 2023-01-11T21:32:48.1736944Z 2023-01-11T21:32:50.0678806Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:32:50.1564468Z Ignoring disabled issues: [] 2023-01-11T21:32:50.1713968Z Running lazy/test_bindings ... [2023-01-11 21:32:50.171078] 2023-01-11T21:32:50.1715684Z Executing ['/opt/conda/bin/python', '-bb', 'lazy/test_bindings.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... 
[2023-01-11 21:32:50.171359] 2023-01-11T21:32:51.5790229Z 2023-01-11T21:32:51.5790817Z Expand the folded group to see the log file of lazy/test_bindings 2023-01-11T21:32:51.5791709Z ##[group]PRINTING LOG FILE of lazy/test_bindings (/var/lib/jenkins/workspace/test/test-reports/lazy-test_bindings_jgkdvxws) 2023-01-11T21:32:51.5791941Z 2023-01-11T21:32:51.5792237Z ##[endgroup] 2023-01-11T21:32:51.5792954Z FINISHED PRINTING LOG FILE of lazy/test_bindings (/var/lib/jenkins/workspace/test/test-reports/lazy-test_bindings_jgkdvxws) 2023-01-11T21:32:51.5793173Z 2023-01-11T21:32:53.5205952Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:32:53.5852280Z Ignoring disabled issues: [] 2023-01-11T21:32:53.5999096Z Running lazy/test_debug_util ... [2023-01-11 21:32:53.599624] 2023-01-11T21:32:53.6001581Z Executing ['/opt/conda/bin/python', '-bb', 'lazy/test_debug_util.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:32:53.599869] 2023-01-11T21:32:55.6364309Z 2023-01-11T21:32:55.6364926Z Expand the folded group to see the log file of lazy/test_debug_util 2023-01-11T21:32:55.6366019Z ##[group]PRINTING LOG FILE of lazy/test_debug_util (/var/lib/jenkins/workspace/test/test-reports/lazy-test_debug_util_f1fxs1su) 2023-01-11T21:32:55.6366434Z 2023-01-11T21:32:55.6366581Z Running tests... 2023-01-11T21:32:55.6367522Z ---------------------------------------------------------------------- 2023-01-11T21:32:55.6368242Z Test results will be stored in test-reports/python-unittest/lazy.test_debug_util 2023-01-11T21:32:55.6368794Z test_get_python_frames (__main__.DebugUtilTest) ... ok (0.286s) 2023-01-11T21:32:55.6369078Z 2023-01-11T21:32:55.6369290Z ---------------------------------------------------------------------- 2023-01-11T21:32:55.6369532Z Ran 1 test in 0.286s 2023-01-11T21:32:55.6369646Z 2023-01-11T21:32:55.6369707Z OK 2023-01-11T21:32:55.6369798Z 2023-01-11T21:32:55.6369882Z Generating XML reports... 2023-01-11T21:32:55.6370290Z Generated XML report: test-reports/python-unittest/lazy.test_debug_util/TEST-DebugUtilTest-20230111213254.xml 2023-01-11T21:32:55.6370524Z 2023-01-11T21:32:55.6370752Z ##[endgroup] 2023-01-11T21:32:55.6371148Z FINISHED PRINTING LOG FILE of lazy/test_debug_util (/var/lib/jenkins/workspace/test/test-reports/lazy-test_debug_util_f1fxs1su) 2023-01-11T21:32:55.6371355Z 2023-01-11T21:32:57.5716275Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:32:57.6547816Z Ignoring disabled issues: [] 2023-01-11T21:32:57.6695595Z Running lazy/test_extract_compiled_graph ... [2023-01-11 21:32:57.669183] 2023-01-11T21:32:57.6696656Z Executing ['/opt/conda/bin/python', '-bb', 'lazy/test_extract_compiled_graph.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... 
[2023-01-11 21:32:57.669452] 2023-01-11T21:32:59.1016211Z 2023-01-11T21:32:59.1018612Z Expand the folded group to see the log file of lazy/test_extract_compiled_graph 2023-01-11T21:32:59.1019381Z ##[group]PRINTING LOG FILE of lazy/test_extract_compiled_graph (/var/lib/jenkins/workspace/test/test-reports/lazy-test_extract_compiled_graph_ix5ue00z) 2023-01-11T21:32:59.1019640Z 2023-01-11T21:32:59.1019844Z ##[endgroup] 2023-01-11T21:32:59.1020519Z FINISHED PRINTING LOG FILE of lazy/test_extract_compiled_graph (/var/lib/jenkins/workspace/test/test-reports/lazy-test_extract_compiled_graph_ix5ue00z) 2023-01-11T21:32:59.1020774Z 2023-01-11T21:33:01.0637953Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:33:01.1461654Z Ignoring disabled issues: [] 2023-01-11T21:33:01.1608317Z Running lazy/test_meta_kernel ... [2023-01-11 21:33:01.160510] 2023-01-11T21:33:01.1610066Z Executing ['/opt/conda/bin/python', '-bb', 'lazy/test_meta_kernel.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:33:01.160764] 2023-01-11T21:33:02.8503123Z 2023-01-11T21:33:02.8503615Z Expand the folded group to see the log file of lazy/test_meta_kernel 2023-01-11T21:33:02.8504390Z ##[group]PRINTING LOG FILE of lazy/test_meta_kernel (/var/lib/jenkins/workspace/test/test-reports/lazy-test_meta_kernel_kd0mxbk5) 2023-01-11T21:33:02.8504633Z 2023-01-11T21:33:02.8504888Z ##[endgroup] 2023-01-11T21:33:02.8505409Z FINISHED PRINTING LOG FILE of lazy/test_meta_kernel (/var/lib/jenkins/workspace/test/test-reports/lazy-test_meta_kernel_kd0mxbk5) 2023-01-11T21:33:02.8505685Z 2023-01-11T21:33:04.7812573Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:33:04.8662896Z Ignoring disabled issues: [] 2023-01-11T21:33:04.8809938Z Running lazy/test_reuse_ir ... [2023-01-11 21:33:04.880649] 2023-01-11T21:33:04.8811170Z Executing ['/opt/conda/bin/python', '-bb', 'lazy/test_reuse_ir.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:33:04.880917] 2023-01-11T21:33:07.2339078Z 2023-01-11T21:33:07.2339674Z Expand the folded group to see the log file of lazy/test_reuse_ir 2023-01-11T21:33:07.2340562Z ##[group]PRINTING LOG FILE of lazy/test_reuse_ir (/var/lib/jenkins/workspace/test/test-reports/lazy-test_reuse_ir_i_vjtzrn) 2023-01-11T21:33:07.2340788Z 2023-01-11T21:33:07.2340861Z Running tests... 2023-01-11T21:33:07.2341255Z ---------------------------------------------------------------------- 2023-01-11T21:33:07.2341641Z Test results will be stored in test-reports/python-unittest/lazy.test_reuse_ir 2023-01-11T21:33:07.2341933Z testAdd (__main__.TestLazyReuseIr) ... ok (0.268s) 2023-01-11T21:33:07.2342531Z testAddSub (__main__.TestLazyReuseIr) ... ok (0.068s) 2023-01-11T21:33:07.2342825Z testAddSubFallback (__main__.TestLazyReuseIr) ... ok (0.003s) 2023-01-11T21:33:07.2343101Z testBatchNorm (__main__.TestLazyReuseIr) ... ok (0.289s) 2023-01-11T21:33:07.2343248Z 2023-01-11T21:33:07.2343455Z ---------------------------------------------------------------------- 2023-01-11T21:33:07.2343683Z Ran 4 tests in 0.628s 2023-01-11T21:33:07.2343796Z 2023-01-11T21:33:07.2343857Z OK 2023-01-11T21:33:07.2343946Z 2023-01-11T21:33:07.2344031Z Generating XML reports... 
2023-01-11T21:33:07.2344460Z Generated XML report: test-reports/python-unittest/lazy.test_reuse_ir/TEST-TestLazyReuseIr-20230111213306.xml 2023-01-11T21:33:07.2344681Z 2023-01-11T21:33:07.2344918Z ##[endgroup] 2023-01-11T21:33:07.2345305Z FINISHED PRINTING LOG FILE of lazy/test_reuse_ir (/var/lib/jenkins/workspace/test/test-reports/lazy-test_reuse_ir_i_vjtzrn) 2023-01-11T21:33:07.2345522Z 2023-01-11T21:33:09.1813114Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:33:09.2674203Z Ignoring disabled issues: [] 2023-01-11T21:33:09.2822214Z Running lazy/test_step_closures ... [2023-01-11 21:33:09.281885] 2023-01-11T21:33:09.2823566Z Executing ['/opt/conda/bin/python', '-bb', 'lazy/test_step_closures.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:33:09.282138] 2023-01-11T21:33:13.3077768Z 2023-01-11T21:33:13.3078394Z Expand the folded group to see the log file of lazy/test_step_closures 2023-01-11T21:33:13.3079328Z ##[group]PRINTING LOG FILE of lazy/test_step_closures (/var/lib/jenkins/workspace/test/test-reports/lazy-test_step_closures_75rxie_y) 2023-01-11T21:33:13.3079588Z 2023-01-11T21:33:13.3079649Z Running tests... 2023-01-11T21:33:13.3080113Z ---------------------------------------------------------------------- 2023-01-11T21:33:13.3080550Z Test results will be stored in test-reports/python-unittest/lazy.test_step_closures 2023-01-11T21:33:13.3080868Z test_asynchronous (__main__.ClosuresTest) ... ok (0.247s) 2023-01-11T21:33:13.3081153Z test_asynchronous_exception (__main__.ClosuresTest) ... ok (1.001s) 2023-01-11T21:33:13.3081482Z test_synchronous (__main__.ClosuresTest) ... ok (1.002s) 2023-01-11T21:33:13.3081752Z test_synchronous_exception (__main__.ClosuresTest) ... ok (0.001s) 2023-01-11T21:33:13.3081911Z 2023-01-11T21:33:13.3082152Z ---------------------------------------------------------------------- 2023-01-11T21:33:13.3082395Z Ran 4 tests in 2.252s 2023-01-11T21:33:13.3082509Z 2023-01-11T21:33:13.3082569Z OK 2023-01-11T21:33:13.3082658Z 2023-01-11T21:33:13.3082746Z Generating XML reports... 2023-01-11T21:33:13.3083205Z Generated XML report: test-reports/python-unittest/lazy.test_step_closures/TEST-ClosuresTest-20230111213310.xml 2023-01-11T21:33:13.3083439Z 2023-01-11T21:33:13.3083710Z ##[endgroup] 2023-01-11T21:33:13.3084109Z FINISHED PRINTING LOG FILE of lazy/test_step_closures (/var/lib/jenkins/workspace/test/test-reports/lazy-test_step_closures_75rxie_y) 2023-01-11T21:33:13.3084555Z 2023-01-11T21:33:15.1965916Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:33:15.2872230Z Ignoring disabled issues: [] 2023-01-11T21:33:15.3019890Z Running lazy/test_ts_opinfo ... [2023-01-11 21:33:15.301686] 2023-01-11T21:33:15.3022608Z Executing ['/opt/conda/bin/python', '-bb', 'lazy/test_ts_opinfo.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:33:15.301955] 2023-01-11T21:33:18.1714354Z 2023-01-11T21:33:18.1714927Z Expand the folded group to see the log file of lazy/test_ts_opinfo 2023-01-11T21:33:18.1715995Z ##[group]PRINTING LOG FILE of lazy/test_ts_opinfo (/var/lib/jenkins/workspace/test/test-reports/lazy-test_ts_opinfo_tbq39ol5) 2023-01-11T21:33:18.1716311Z 2023-01-11T21:33:18.1716375Z Running tests... 
2023-01-11T21:33:18.1716935Z ---------------------------------------------------------------------- 2023-01-11T21:33:18.1717423Z Test results will be stored in test-reports/python-unittest/lazy.test_ts_opinfo 2023-01-11T21:33:18.1717928Z test_nonzero_dynamic (__main__.TestLazyDynamicOps) ... ok (0.167s) 2023-01-11T21:33:18.1718340Z testConvolutionBackward (__main__.TestLazyTensor) ... skip: Disable until autograd supports symints (0.001s) 2023-01-11T21:33:18.1718667Z test_tensor_ctr (__main__.TestLazyTensor) ... ok (0.002s) 2023-01-11T21:33:18.1719018Z test_view_mark_step_preserved (__main__.TestLazyTensor) ... ok (0.003s) 2023-01-11T21:33:18.1719186Z 2023-01-11T21:33:18.1719378Z ---------------------------------------------------------------------- 2023-01-11T21:33:18.1719679Z Ran 4 tests in 0.173s 2023-01-11T21:33:18.1719796Z 2023-01-11T21:33:18.1719908Z OK (skipped=1) 2023-01-11T21:33:18.1720080Z 2023-01-11T21:33:18.1720170Z Generating XML reports... 2023-01-11T21:33:18.1720660Z Generated XML report: test-reports/python-unittest/lazy.test_ts_opinfo/TEST-TestLazyDynamicOps-20230111213317.xml 2023-01-11T21:33:18.1721357Z Generated XML report: test-reports/python-unittest/lazy.test_ts_opinfo/TEST-TestLazyTensor-20230111213317.xml 2023-01-11T21:33:18.1721777Z 2023-01-11T21:33:18.1722212Z ##[endgroup] 2023-01-11T21:33:18.1722609Z FINISHED PRINTING LOG FILE of lazy/test_ts_opinfo (/var/lib/jenkins/workspace/test/test-reports/lazy-test_ts_opinfo_tbq39ol5) 2023-01-11T21:33:18.1722831Z 2023-01-11T21:33:20.0861164Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:33:20.1732868Z Ignoring disabled issues: [] 2023-01-11T21:33:20.1886608Z Running nn/test_dropout ... [2023-01-11 21:33:20.188211] 2023-01-11T21:33:20.1887159Z Executing ['/opt/conda/bin/python', '-bb', 'nn/test_dropout.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:33:20.188482] 2023-01-11T21:33:22.3667910Z 2023-01-11T21:33:22.3672352Z Expand the folded group to see the log file of nn/test_dropout 2023-01-11T21:33:22.3677062Z ##[group]PRINTING LOG FILE of nn/test_dropout (/var/lib/jenkins/workspace/test/test-reports/nn-test_dropout_bmle1pyu) 2023-01-11T21:33:22.3677315Z 2023-01-11T21:33:22.3677391Z Running tests... 2023-01-11T21:33:22.3677868Z ---------------------------------------------------------------------- 2023-01-11T21:33:22.3678241Z Test results will be stored in test-reports/python-unittest/nn.test_dropout 2023-01-11T21:33:22.3678602Z test_AlphaDropout (__main__.TestDropoutNN) ... ok (0.003s) 2023-01-11T21:33:22.3678895Z test_FeatureAlphaDropout (__main__.TestDropoutNN) ... ok (0.124s) 2023-01-11T21:33:22.3679219Z test_invalid_dropout_p (__main__.TestDropoutNN) ... ok (0.001s) 2023-01-11T21:33:22.3679534Z test_native_dropout_corner_case (__main__.TestDropoutNN) ... skip: CUDA unavailable (0.001s) 2023-01-11T21:33:22.3679723Z 2023-01-11T21:33:22.3679973Z ---------------------------------------------------------------------- 2023-01-11T21:33:22.3680215Z Ran 4 tests in 0.129s 2023-01-11T21:33:22.3680326Z 2023-01-11T21:33:22.3680384Z OK (skipped=1) 2023-01-11T21:33:22.3680489Z 2023-01-11T21:33:22.3680575Z Generating XML reports... 
2023-01-11T21:33:22.3681052Z Generated XML report: test-reports/python-unittest/nn.test_dropout/TEST-TestDropoutNN-20230111213321.xml 2023-01-11T21:33:22.3681519Z 2023-01-11T21:33:22.3812256Z ##[endgroup] 2023-01-11T21:33:22.3812805Z FINISHED PRINTING LOG FILE of nn/test_dropout (/var/lib/jenkins/workspace/test/test-reports/nn-test_dropout_bmle1pyu) 2023-01-11T21:33:22.3813028Z 2023-01-11T21:33:24.2658406Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:33:24.3349618Z Ignoring disabled issues: [] 2023-01-11T21:33:24.3501638Z Running nn/test_embedding ... [2023-01-11 21:33:24.349812] 2023-01-11T21:33:24.3503316Z Executing ['/opt/conda/bin/python', '-bb', 'nn/test_embedding.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:33:24.350098] 2023-01-11T21:33:26.3802534Z 2023-01-11T21:33:26.3803242Z Expand the folded group to see the log file of nn/test_embedding 2023-01-11T21:33:26.3804293Z ##[group]PRINTING LOG FILE of nn/test_embedding (/var/lib/jenkins/workspace/test/test-reports/nn-test_embedding_3n1jpjm3) 2023-01-11T21:33:26.3804689Z 2023-01-11T21:33:26.3805057Z Running tests... 2023-01-11T21:33:26.3805634Z ---------------------------------------------------------------------- 2023-01-11T21:33:26.3806213Z Test results will be stored in test-reports/python-unittest/nn.test_embedding 2023-01-11T21:33:26.3806668Z test_embedding_bag_from_pretrained (__main__.TestEmbeddingNN) ... ok (0.003s) 2023-01-11T21:33:26.3807154Z test_embedding_bag_from_pretrained_padding_idx (__main__.TestEmbeddingNN) ... ok (0.001s) 2023-01-11T21:33:26.3807618Z test_embedding_bag_functional (__main__.TestEmbeddingNN) ... ok (0.002s) 2023-01-11T21:33:26.3808073Z test_embedding_bag_padding_idx_error (__main__.TestEmbeddingNN) ... ok (0.011s) 2023-01-11T21:33:26.3808529Z test_embedding_from_pretrained_float32 (__main__.TestEmbeddingNN) ... ok (0.001s) 2023-01-11T21:33:26.3808989Z test_embedding_from_pretrained_float64 (__main__.TestEmbeddingNN) ... ok (0.001s) 2023-01-11T21:33:26.3809449Z test_embedding_from_pretrained_int16 (__main__.TestEmbeddingNN) ... ok (0.001s) 2023-01-11T21:33:26.3809898Z test_embedding_from_pretrained_int32 (__main__.TestEmbeddingNN) ... ok (0.001s) 2023-01-11T21:33:26.3810351Z test_embedding_from_pretrained_int64 (__main__.TestEmbeddingNN) ... ok (0.001s) 2023-01-11T21:33:26.3810808Z test_embedding_from_pretrained_int8 (__main__.TestEmbeddingNN) ... ok (0.001s) 2023-01-11T21:33:26.3811268Z test_embedding_from_pretrained_options (__main__.TestEmbeddingNN) ... ok (0.001s) 2023-01-11T21:33:26.3811728Z test_embedding_from_pretrained_padding_idx (__main__.TestEmbeddingNN) ... ok (0.001s) 2023-01-11T21:33:26.3812200Z test_embedding_from_pretrained_uint8 (__main__.TestEmbeddingNN) ... ok (0.001s) 2023-01-11T21:33:26.3812637Z test_embedding_functional (__main__.TestEmbeddingNN) ... ok (0.001s) 2023-01-11T21:33:26.3813032Z test_embedding_max_norm (__main__.TestEmbeddingNN) ... ok (0.001s) 2023-01-11T21:33:26.3813534Z test_embedding_max_norm_unsorted_repeating_indices (__main__.TestEmbeddingNN) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:33:26.3814053Z test_embedding_sparse_basic (__main__.TestEmbeddingNN) ... ok (0.001s) 2023-01-11T21:33:26.3814487Z test_embedding_sparse_empty_tensor (__main__.TestEmbeddingNN) ... ok (0.001s) 2023-01-11T21:33:26.3814918Z test_embeddingbag_from_pretrained (__main__.TestEmbeddingNN) ... ok (0.001s) 2023-01-11T21:33:26.3815389Z test_embeddingbag_from_pretrained_options (__main__.TestEmbeddingNN) ... 
ok (0.001s) 2023-01-11T21:33:26.3815868Z test_embeddingbag_include_last_offset (__main__.TestEmbeddingNN) ... ok (0.001s) 2023-01-11T21:33:26.3816311Z test_move_sparse_half_embedding (__main__.TestEmbeddingNN) ... ok (0.001s) 2023-01-11T21:33:26.3816556Z 2023-01-11T21:33:26.3816921Z ---------------------------------------------------------------------- 2023-01-11T21:33:26.3817276Z Ran 22 tests in 0.038s 2023-01-11T21:33:26.3817436Z 2023-01-11T21:33:26.3817537Z OK (skipped=1) 2023-01-11T21:33:26.3817670Z 2023-01-11T21:33:26.3817792Z Generating XML reports... 2023-01-11T21:33:26.3818646Z Generated XML report: test-reports/python-unittest/nn.test_embedding/TEST-TestEmbeddingNN-20230111213325.xml 2023-01-11T21:33:26.3819011Z 2023-01-11T21:33:26.3819386Z ##[endgroup] 2023-01-11T21:33:26.3819948Z FINISHED PRINTING LOG FILE of nn/test_embedding (/var/lib/jenkins/workspace/test/test-reports/nn-test_embedding_3n1jpjm3) 2023-01-11T21:33:26.3820272Z 2023-01-11T21:33:28.3173429Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:33:28.4052196Z Ignoring disabled issues: [] 2023-01-11T21:33:28.4203685Z Running nn/test_init ... [2023-01-11 21:33:28.420020] 2023-01-11T21:33:28.4205086Z Executing ['/opt/conda/bin/python', '-bb', 'nn/test_init.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:33:28.420300] 2023-01-11T21:33:30.3130045Z 2023-01-11T21:33:30.3130602Z Expand the folded group to see the log file of nn/test_init 2023-01-11T21:33:30.3131622Z ##[group]PRINTING LOG FILE of nn/test_init (/var/lib/jenkins/workspace/test/test-reports/nn-test_init_8shnnm85) 2023-01-11T21:33:30.3132165Z 2023-01-11T21:33:30.3132388Z ##[endgroup] 2023-01-11T21:33:30.3132924Z FINISHED PRINTING LOG FILE of nn/test_init (/var/lib/jenkins/workspace/test/test-reports/nn-test_init_8shnnm85) 2023-01-11T21:33:30.3133133Z 2023-01-11T21:33:32.3075176Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:33:32.3910817Z Ignoring disabled issues: [] 2023-01-11T21:33:32.4060347Z Running nn/test_lazy_modules ... [2023-01-11 21:33:32.405721] 2023-01-11T21:33:32.4061727Z Executing ['/opt/conda/bin/python', '-bb', 'nn/test_lazy_modules.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:33:32.405957] 2023-01-11T21:33:34.8905214Z 2023-01-11T21:33:34.8905722Z Expand the folded group to see the log file of nn/test_lazy_modules 2023-01-11T21:33:34.8906767Z ##[group]PRINTING LOG FILE of nn/test_lazy_modules (/var/lib/jenkins/workspace/test/test-reports/nn-test_lazy_modules_0hh1l97v) 2023-01-11T21:33:34.8907178Z 2023-01-11T21:33:34.8907285Z Running tests... 2023-01-11T21:33:34.8907788Z ---------------------------------------------------------------------- 2023-01-11T21:33:34.8908189Z Test results will be stored in test-reports/python-unittest/nn.test_lazy_modules 2023-01-11T21:33:34.8908630Z test_chained_initialization (__main__.TestLazyModules) ... ok (0.010s) 2023-01-11T21:33:34.8909153Z test_invalid_functions (__main__.TestLazyModules) ... ok (0.001s) 2023-01-11T21:33:34.8910281Z test_lazy_batchnorm1d (__main__.TestLazyModules) ... /opt/conda/lib/python3.10/site-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment. 
2023-01-11T21:33:34.8910974Z warnings.warn('Lazy modules are a new feature under heavy development ' 2023-01-11T21:33:34.8911207Z ok (0.009s) 2023-01-11T21:33:34.8911502Z test_lazy_batchnorm1d_pickle (__main__.TestLazyModules) ... ok (0.008s) 2023-01-11T21:33:34.8912037Z test_lazy_batchnorm1d_state (__main__.TestLazyModules) ... ok (0.002s) 2023-01-11T21:33:34.8912600Z test_lazy_batchnorm2d (__main__.TestLazyModules) ... ok (0.004s) 2023-01-11T21:33:34.8913057Z test_lazy_batchnorm2d_pickle (__main__.TestLazyModules) ... ok (0.004s) 2023-01-11T21:33:34.8913348Z test_lazy_batchnorm2d_state (__main__.TestLazyModules) ... ok (0.002s) 2023-01-11T21:33:34.8913621Z test_lazy_batchnorm3d (__main__.TestLazyModules) ... ok (0.006s) 2023-01-11T21:33:34.8914011Z test_lazy_batchnorm3d_pickle (__main__.TestLazyModules) ... ok (0.004s) 2023-01-11T21:33:34.8914300Z test_lazy_batchnorm3d_state (__main__.TestLazyModules) ... ok (0.002s) 2023-01-11T21:33:34.8914561Z test_lazy_conv1d (__main__.TestLazyModules) ... ok (0.005s) 2023-01-11T21:33:34.8914833Z test_lazy_conv1d_pickle (__main__.TestLazyModules) ... ok (0.003s) 2023-01-11T21:33:34.8915110Z test_lazy_conv1d_state (__main__.TestLazyModules) ... ok (0.001s) 2023-01-11T21:33:34.8915377Z test_lazy_conv2d (__main__.TestLazyModules) ... ok (0.004s) 2023-01-11T21:33:34.8915910Z test_lazy_conv2d_pickle (__main__.TestLazyModules) ... ok (0.002s) 2023-01-11T21:33:34.8916189Z test_lazy_conv2d_state (__main__.TestLazyModules) ... ok (0.002s) 2023-01-11T21:33:34.8916460Z test_lazy_conv3d (__main__.TestLazyModules) ... ok (0.016s) 2023-01-11T21:33:34.8916716Z test_lazy_conv3d_pickle (__main__.TestLazyModules) ... ok (0.008s) 2023-01-11T21:33:34.8916992Z test_lazy_conv3d_state (__main__.TestLazyModules) ... ok (0.002s) 2023-01-11T21:33:34.8917281Z test_lazy_conv_transpose1d_pickle (__main__.TestLazyModules) ... ok (0.004s) 2023-01-11T21:33:34.8917572Z test_lazy_conv_transpose1d_state (__main__.TestLazyModules) ... ok (0.001s) 2023-01-11T21:33:34.8917865Z test_lazy_conv_transpose2d (__main__.TestLazyModules) ... ok (0.014s) 2023-01-11T21:33:34.8918158Z test_lazy_conv_transpose2d_pickle (__main__.TestLazyModules) ... ok (0.006s) 2023-01-11T21:33:34.8918464Z test_lazy_conv_transpose2d_state (__main__.TestLazyModules) ... ok (0.001s) 2023-01-11T21:33:34.8918808Z test_lazy_conv_transpose3d (__main__.TestLazyModules) ... ok (0.175s) 2023-01-11T21:33:34.8919110Z test_lazy_conv_transpose3d_pickle (__main__.TestLazyModules) ... ok (0.060s) 2023-01-11T21:33:34.8919410Z test_lazy_conv_transpose3d_state (__main__.TestLazyModules) ... ok (0.002s) 2023-01-11T21:33:34.8919693Z test_lazy_conv_transposed1d (__main__.TestLazyModules) ... ok (0.008s) 2023-01-11T21:33:34.8919963Z test_lazy_forward_hook (__main__.TestLazyModules) 2023-01-11T21:33:34.8920664Z This test is to test whether lazymodule can register other forward hook ... /opt/conda/lib/python3.10/site-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment. 2023-01-11T21:33:34.8921225Z warnings.warn('Lazy modules are a new feature under heavy development ' 2023-01-11T21:33:34.8921446Z ok (0.001s) 2023-01-11T21:33:34.8921674Z test_lazy_instancenorm1d (__main__.TestLazyModules) ... ok (0.004s) 2023-01-11T21:33:34.8921975Z test_lazy_instancenorm1d_pickle (__main__.TestLazyModules) ... ok (0.004s) 2023-01-11T21:33:34.8925309Z test_lazy_instancenorm1d_state (__main__.TestLazyModules) ... 
ok (0.004s) 2023-01-11T21:33:34.8925786Z test_lazy_instancenorm2d (__main__.TestLazyModules) ... ok (0.004s) 2023-01-11T21:33:34.8926308Z test_lazy_instancenorm2d_pickle (__main__.TestLazyModules) ... ok (0.004s) 2023-01-11T21:33:34.8926863Z test_lazy_instancenorm2d_state (__main__.TestLazyModules) ... ok (0.004s) 2023-01-11T21:33:34.8927222Z test_lazy_instancenorm3d (__main__.TestLazyModules) ... ok (0.006s) 2023-01-11T21:33:34.8927682Z test_lazy_instancenorm3d_pickle (__main__.TestLazyModules) ... ok (0.004s) 2023-01-11T21:33:34.8928182Z test_lazy_instancenorm3d_state (__main__.TestLazyModules) ... ok (0.004s) 2023-01-11T21:33:34.8928702Z test_lazy_linear_pickle (__main__.TestLazyModules) ... ok (0.002s) 2023-01-11T21:33:34.8929192Z test_lazy_module_buffer (__main__.TestLazyModules) ... ok (0.002s) 2023-01-11T21:33:34.8929719Z test_lazy_module_jit_buffer (__main__.TestLazyModules) ... ok (0.001s) 2023-01-11T21:33:34.8930159Z test_lazy_module_jit_param (__main__.TestLazyModules) ... ok (0.001s) 2023-01-11T21:33:34.8930429Z test_lazy_module_parameter (__main__.TestLazyModules) ... ok (0.001s) 2023-01-11T21:33:34.8930702Z test_lazy_pre_forward_hook (__main__.TestLazyModules) 2023-01-11T21:33:34.8931423Z This test is to test whether lazymodule can register other pre-forward hook ... /opt/conda/lib/python3.10/site-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment. 2023-01-11T21:33:34.8932469Z warnings.warn('Lazy modules are a new feature under heavy development ' 2023-01-11T21:33:34.8932704Z ok (0.001s) 2023-01-11T21:33:34.8932937Z test_lazy_share_memory_buffer (__main__.TestLazyModules) ... ok (0.001s) 2023-01-11T21:33:34.8933234Z test_lazy_share_memory_param (__main__.TestLazyModules) ... ok (0.001s) 2023-01-11T21:33:34.8933586Z test_linear (__main__.TestLazyModules) ... ok (0.001s) 2023-01-11T21:33:34.8933833Z test_linear_state (__main__.TestLazyModules) ... ok (0.001s) 2023-01-11T21:33:34.8934133Z test_materialize_device (__main__.TestLazyModules) ... skip: CUDA not available (0.000s) 2023-01-11T21:33:34.8934442Z test_materialize_dtype (__main__.TestLazyModules) ... ok (0.001s) 2023-01-11T21:33:34.8934719Z test_optimizer_pass (__main__.TestLazyModules) ... ok (0.005s) 2023-01-11T21:33:34.8934975Z test_spectral_norm (__main__.TestLazyModules) ... ok (0.001s) 2023-01-11T21:33:34.8935243Z test_weight_norm (__main__.TestLazyModules) ... ok (0.001s) 2023-01-11T21:33:34.8935394Z 2023-01-11T21:33:34.8935597Z ---------------------------------------------------------------------- 2023-01-11T21:33:34.8935826Z Ran 54 tests in 0.428s 2023-01-11T21:33:34.8935942Z 2023-01-11T21:33:34.8936014Z OK (skipped=1) 2023-01-11T21:33:34.8936123Z 2023-01-11T21:33:34.8936251Z Generating XML reports... 2023-01-11T21:33:34.8936673Z Generated XML report: test-reports/python-unittest/nn.test_lazy_modules/TEST-TestLazyModules-20230111213334.xml 2023-01-11T21:33:34.8936899Z 2023-01-11T21:33:34.8937203Z ##[endgroup] 2023-01-11T21:33:34.8937602Z FINISHED PRINTING LOG FILE of nn/test_lazy_modules (/var/lib/jenkins/workspace/test/test-reports/nn-test_lazy_modules_0hh1l97v) 2023-01-11T21:33:34.8937824Z 2023-01-11T21:33:36.7843193Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:33:36.8513917Z Ignoring disabled issues: [] 2023-01-11T21:33:36.8660777Z Running nn/test_module_hooks ... 
[2023-01-11 21:33:36.865828] 2023-01-11T21:33:36.8662975Z Executing ['/opt/conda/bin/python', '-bb', 'nn/test_module_hooks.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:33:36.866074] 2023-01-11T21:33:39.0303899Z 2023-01-11T21:33:39.0304457Z Expand the folded group to see the log file of nn/test_module_hooks 2023-01-11T21:33:39.0305552Z ##[group]PRINTING LOG FILE of nn/test_module_hooks (/var/lib/jenkins/workspace/test/test-reports/nn-test_module_hooks_l0byf1fd) 2023-01-11T21:33:39.0306036Z 2023-01-11T21:33:39.0306168Z Running tests... 2023-01-11T21:33:39.0306836Z ---------------------------------------------------------------------- 2023-01-11T21:33:39.0307569Z Test results will be stored in test-reports/python-unittest/nn.test_module_hooks 2023-01-11T21:33:39.0309283Z test_global_and_local_hooks_order (__main__.TestModuleGlobalHooks) ... /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1331: UserWarning: Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior. 2023-01-11T21:33:39.0310676Z warnings.warn("Using a non-full backward hook when the forward contains multiple autograd Nodes " 2023-01-11T21:33:39.0311159Z ok (0.011s) 2023-01-11T21:33:39.0311640Z test_module_backward_global_hook_writeable (__main__.TestModuleGlobalHooks) ... ok (0.002s) 2023-01-11T21:33:39.0312326Z test_module_forward_forward_hook_removable (__main__.TestModuleGlobalHooks) 2023-01-11T21:33:39.0312912Z This test is to test when multiple forward hook functions can be registered ... ok (0.002s) 2023-01-11T21:33:39.0313537Z test_module_forward_preforward_hook_removable (__main__.TestModuleGlobalHooks) 2023-01-11T21:33:39.0314317Z This test is to test when multiple pre-forward hook functions can be ... ok (0.001s) 2023-01-11T21:33:39.0314936Z test_module_global_forward_preforward_hook_writeable (__main__.TestModuleGlobalHooks) ... ok (0.001s) 2023-01-11T21:33:39.0315593Z test_module_global_hook_invalid_outputs (__main__.TestModuleGlobalHooks) ... ok (0.001s) 2023-01-11T21:33:39.0316183Z test_module_global_hooks (__main__.TestModuleGlobalHooks) ... ok (0.009s) 2023-01-11T21:33:39.0316751Z test_backward_hooks_interaction (__main__.TestModuleHookNN) ... ok (0.001s) 2023-01-11T21:33:39.0317570Z test_hook_backward_size (__main__.TestModuleHookNN) ... ok (0.002s) 2023-01-11T21:33:39.0318106Z test_hook_backward_writeable (__main__.TestModuleHookNN) ... ok (0.001s) 2023-01-11T21:33:39.0318666Z test_hook_buffer_registration (__main__.TestModuleHookNN) ... ok (0.001s) 2023-01-11T21:33:39.0319172Z test_hook_cpp (__main__.TestModuleHookNN) ... ok (0.002s) 2023-01-11T21:33:39.0319663Z test_hook_extra_input (__main__.TestModuleHookNN) ... ok (0.001s) 2023-01-11T21:33:39.0320205Z test_hook_forward_preforward_writable (__main__.TestModuleHookNN) ... ok (0.001s) 2023-01-11T21:33:39.0320731Z test_hook_inplace (__main__.TestModuleHookNN) ... ok (0.024s) 2023-01-11T21:33:39.0321254Z test_hook_invalid_outputs (__main__.TestModuleHookNN) ... ok (0.002s) 2023-01-11T21:33:39.0321797Z test_hook_last_arg_requires_grad (__main__.TestModuleHookNN) ... ok (0.001s) 2023-01-11T21:33:39.0322335Z test_hook_no_requires_grad (__main__.TestModuleHookNN) ... ok (0.002s) 2023-01-11T21:33:39.0323030Z test_hook_non_full_warning (__main__.TestModuleHookNN) ... 
ok (0.010s) 2023-01-11T21:33:39.0323600Z test_hook_parameter_registration (__main__.TestModuleHookNN) ... ok (0.001s) 2023-01-11T21:33:39.0324158Z test_hook_requires_grad (__main__.TestModuleHookNN) ... ok (0.001s) 2023-01-11T21:33:39.0324684Z test_hook_submodule_registration (__main__.TestModuleHookNN) ... ok (0.001s) 2023-01-11T21:33:39.0326314Z test_hooks (__main__.TestModuleHookNN) ... /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1331: UserWarning: Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior. 2023-01-11T21:33:39.0327646Z warnings.warn("Using a non-full backward hook when the forward contains multiple autograd Nodes " 2023-01-11T21:33:39.0328122Z ok (0.014s) 2023-01-11T21:33:39.0328501Z test_forward_hooks (__main__.TestModuleHooks) ... ok (0.004s) 2023-01-11T21:33:39.0329004Z test_forward_pre_hooks (__main__.TestModuleHooks) ... ok (0.004s) 2023-01-11T21:33:39.0329514Z test_full_backward_hooks (__main__.TestModuleHooks) ... ok (0.004s) 2023-01-11T21:33:39.0330035Z test_full_backward_pre_hooks (__main__.TestModuleHooks) ... ok (0.004s) 2023-01-11T21:33:39.0330529Z test_kwarg_hooks (__main__.TestModuleHooks) ... ok (0.004s) 2023-01-11T21:33:39.0331005Z test_mixed_hooks (__main__.TestModuleHooks) ... ok (0.004s) 2023-01-11T21:33:39.0331485Z test_remove_kwarg_hooks (__main__.TestModuleHooks) ... ok (0.003s) 2023-01-11T21:33:39.0332010Z test_load_state_dict_module_pre_hook (__main__.TestStateDictHooks) ... ok (0.002s) 2023-01-11T21:33:39.0332586Z test_load_state_dict_post_hook (__main__.TestStateDictHooks) ... ok (0.002s) 2023-01-11T21:33:39.0333204Z test_load_state_dict_post_hook_backward_compatibility (__main__.TestStateDictHooks) ... ok (0.011s) 2023-01-11T21:33:39.0333796Z test_load_state_dict_pre_hook (__main__.TestStateDictHooks) ... ok (0.002s) 2023-01-11T21:33:39.0334316Z test_no_extra_ref_to_module (__main__.TestStateDictHooks) ... ok (0.001s) 2023-01-11T21:33:39.0334818Z test_pickled_hook (__main__.TestStateDictHooks) ... ok (0.001s) 2023-01-11T21:33:39.0335102Z 2023-01-11T21:33:39.0335486Z ---------------------------------------------------------------------- 2023-01-11T21:33:39.0335894Z Ran 36 tests in 0.142s 2023-01-11T21:33:39.0336091Z 2023-01-11T21:33:39.0336196Z OK 2023-01-11T21:33:39.0336359Z 2023-01-11T21:33:39.0336506Z Generating XML reports... 
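The deprecation warning captured above recommends register_full_backward_hook over the older backward hook. A minimal sketch of that replacement, assuming a small throwaway model rather than anything from the test file:

```python
import torch
import torch.nn as nn

# register_full_backward_hook reports complete grad_input/grad_output even when
# the forward pass contains several autograd nodes, which is what the warning
# says the non-full hook cannot guarantee.
model = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 1))

def full_hook(module, grad_input, grad_output):
    # Both arguments are tuples of Tensors (entries may be None).
    print(module.__class__.__name__, [g.shape for g in grad_output if g is not None])

handle = model.register_full_backward_hook(full_hook)
x = torch.randn(2, 4, requires_grad=True)  # requires_grad so grad_input is populated
model(x).sum().backward()
handle.remove()  # hooks hand back a handle that can be removed after use
```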
2023-01-11T21:33:39.0337310Z Generated XML report: test-reports/python-unittest/nn.test_module_hooks/TEST-TestModuleGlobalHooks-20230111213338.xml 2023-01-11T21:33:39.0338271Z Generated XML report: test-reports/python-unittest/nn.test_module_hooks/TEST-TestModuleHookNN-20230111213338.xml 2023-01-11T21:33:39.0339232Z Generated XML report: test-reports/python-unittest/nn.test_module_hooks/TEST-TestModuleHooks-20230111213338.xml 2023-01-11T21:33:39.0340339Z Generated XML report: test-reports/python-unittest/nn.test_module_hooks/TEST-TestStateDictHooks-20230111213338.xml 2023-01-11T21:33:39.0340778Z 2023-01-11T21:33:39.0341199Z ##[endgroup] 2023-01-11T21:33:39.0341909Z FINISHED PRINTING LOG FILE of nn/test_module_hooks (/var/lib/jenkins/workspace/test/test-reports/nn-test_module_hooks_l0byf1fd) 2023-01-11T21:33:39.0342311Z 2023-01-11T21:33:40.9411234Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:33:41.0059645Z Ignoring disabled issues: [] 2023-01-11T21:33:41.0206473Z Running nn/test_multihead_attention ... [2023-01-11 21:33:41.020370] 2023-01-11T21:33:41.0208261Z Executing ['/opt/conda/bin/python', '-bb', 'nn/test_multihead_attention.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:33:41.020615] 2023-01-11T21:33:47.4481271Z 2023-01-11T21:33:47.4481788Z Expand the folded group to see the log file of nn/test_multihead_attention 2023-01-11T21:33:47.4483609Z ##[group]PRINTING LOG FILE of nn/test_multihead_attention (/var/lib/jenkins/workspace/test/test-reports/nn-test_multihead_attention_lx0axkkl) 2023-01-11T21:33:47.4484085Z 2023-01-11T21:33:47.4484264Z Running tests... 2023-01-11T21:33:47.4484954Z ---------------------------------------------------------------------- 2023-01-11T21:33:47.4485772Z Test results will be stored in test-reports/python-unittest/nn.test_multihead_attention 2023-01-11T21:33:47.4486712Z test_multihead_attention_average_attn_weights_False (__main__.TestMultiheadAttentionNN) ... ok (2.246s) 2023-01-11T21:33:47.4487655Z test_multihead_attention_average_attn_weights_True (__main__.TestMultiheadAttentionNN) ... ok (2.223s) 2023-01-11T21:33:47.4488552Z test_multihead_attn_3d_attn_mask (__main__.TestMultiheadAttentionNN) ... ok (0.007s) 2023-01-11T21:33:47.4489392Z test_multihead_attn_fast_path_invalid_shape (__main__.TestMultiheadAttentionNN) ... ok (0.004s) 2023-01-11T21:33:47.4490256Z test_multihead_attn_invalid_shape (__main__.TestMultiheadAttentionNN) ... ok (0.002s) 2023-01-11T21:33:47.4491897Z test_multihead_attn_nested_tensor_outside_fast_path (__main__.TestMultiheadAttentionNN) ... /var/lib/jenkins/workspace/test/nn/test_multihead_attention.py:494: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/NestedTensorImpl.cpp:179.) 2023-01-11T21:33:47.4493300Z nt = torch.nested.nested_tensor([torch.randn(4, 4)]) 2023-01-11T21:33:47.4493767Z ok (0.003s) 2023-01-11T21:33:47.4494373Z test_multihead_attn_no_bias (__main__.TestMultiheadAttentionNN) ... ok (0.001s) 2023-01-11T21:33:47.4494798Z 2023-01-11T21:33:47.4495306Z ---------------------------------------------------------------------- 2023-01-11T21:33:47.4495884Z Ran 7 tests in 4.486s 2023-01-11T21:33:47.4496156Z 2023-01-11T21:33:47.4496299Z OK 2023-01-11T21:33:47.4496511Z 2023-01-11T21:33:47.4496712Z Generating XML reports... 
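The nn/test_multihead_attention run above touches both the average_attn_weights flag and the prototype nested-tensor constructor that triggered the UserWarning. A hedged sketch of both, with assumed sizes (embed_dim=8, num_heads=2 are illustrative, not values from the test):

```python
import torch
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=8, num_heads=2, batch_first=True)
x = torch.randn(2, 5, 8)  # (batch, seq, embed)

# average_attn_weights toggles per-head attention maps vs. their mean, which is
# what the average_attn_weights_{True,False} tests above vary.
out, attn = mha(x, x, x, need_weights=True, average_attn_weights=False)
print(out.shape, attn.shape)  # attn keeps a separate map per head when not averaged

# The prototype constructor that triggered the nested-tensor UserWarning in the log:
nt = torch.nested.nested_tensor([torch.randn(4, 8), torch.randn(3, 8)])
print(nt.is_nested)
```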
2023-01-11T21:33:47.4497863Z Generated XML report: test-reports/python-unittest/nn.test_multihead_attention/TEST-TestMultiheadAttentionNN-20230111213342.xml 2023-01-11T21:33:47.4498559Z 2023-01-11T21:33:47.4499085Z ##[endgroup] 2023-01-11T21:33:47.4500127Z FINISHED PRINTING LOG FILE of nn/test_multihead_attention (/var/lib/jenkins/workspace/test/test-reports/nn-test_multihead_attention_lx0axkkl) 2023-01-11T21:33:47.4500713Z 2023-01-11T21:33:49.3814652Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:33:49.4669143Z Ignoring disabled issues: [] 2023-01-11T21:33:49.4817109Z Running nn/test_packed_sequence ... [2023-01-11 21:33:49.481366] 2023-01-11T21:33:49.4818237Z Executing ['/opt/conda/bin/python', '-bb', 'nn/test_packed_sequence.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:33:49.481623] 2023-01-11T21:33:51.6588995Z 2023-01-11T21:33:51.6590580Z Expand the folded group to see the log file of nn/test_packed_sequence 2023-01-11T21:33:51.6591794Z ##[group]PRINTING LOG FILE of nn/test_packed_sequence (/var/lib/jenkins/workspace/test/test-reports/nn-test_packed_sequence_i3w_vdz0) 2023-01-11T21:33:51.6592501Z 2023-01-11T21:33:51.6592621Z Running tests... 2023-01-11T21:33:51.6593221Z ---------------------------------------------------------------------- 2023-01-11T21:33:51.6593868Z Test results will be stored in test-reports/python-unittest/nn.test_packed_sequence 2023-01-11T21:33:51.6594405Z test_pack_padded_sequence (__main__.PackedSequenceTest) ... ok (0.264s) 2023-01-11T21:33:51.6594883Z test_pack_sequence (__main__.PackedSequenceTest) ... ok (0.089s) 2023-01-11T21:33:51.6595487Z test_pad_sequence (__main__.PackedSequenceTest) ... ok (0.017s) 2023-01-11T21:33:51.6596012Z test_pad_sequence_with_non_iterable_sequences (__main__.PackedSequenceTest) ... ok (0.001s) 2023-01-11T21:33:51.6596547Z test_pad_sequence_with_tensor_sequences (__main__.PackedSequenceTest) ... ok (0.001s) 2023-01-11T21:33:51.6597027Z test_to (__main__.PackedSequenceTest) ... ok (0.002s) 2023-01-11T21:33:51.6642207Z test_to_memory_format (__main__.PackedSequenceTest) ... ok (0.001s) 2023-01-11T21:33:51.6642703Z test_total_length (__main__.PackedSequenceTest) ... ok (0.003s) 2023-01-11T21:33:51.6643138Z test_type_casts (__main__.PackedSequenceTest) 2023-01-11T21:33:51.6643605Z Test type casting of `PackedSequence` against type casting of tensor ... ok (0.024s) 2023-01-11T21:33:51.6644096Z test_unpack_sequence (__main__.PackedSequenceTest) ... ok (0.012s) 2023-01-11T21:33:51.6644559Z test_unpad_sequence (__main__.PackedSequenceTest) ... ok (0.009s) 2023-01-11T21:33:51.6644996Z test_wrong_order (__main__.PackedSequenceTest) ... ok (0.004s) 2023-01-11T21:33:51.6645253Z 2023-01-11T21:33:51.6645648Z ---------------------------------------------------------------------- 2023-01-11T21:33:51.6646045Z Ran 12 tests in 0.428s 2023-01-11T21:33:51.6646221Z 2023-01-11T21:33:51.6646302Z OK 2023-01-11T21:33:51.6646444Z 2023-01-11T21:33:51.6646577Z Generating XML reports... 
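The PackedSequenceTest names above map onto the torch.nn.utils.rnn helpers; a minimal, self-contained sketch of that pack/pad round trip (sequence lengths here are arbitrary assumptions, not taken from the test):

```python
import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence

# Variable-length sequences, already sorted by length descending so
# enforce_sorted can stay at its default.
seqs = [torch.randn(5, 3), torch.randn(3, 3), torch.randn(2, 3)]
lengths = torch.tensor([s.size(0) for s in seqs])

padded = pad_sequence(seqs, batch_first=True)                     # (batch, max_len, feat)
packed = pack_padded_sequence(padded, lengths, batch_first=True)  # test_pack_padded_sequence
unpacked, out_lengths = pad_packed_sequence(packed, batch_first=True)

assert torch.equal(out_lengths, lengths)
assert torch.equal(unpacked, padded)
```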
2023-01-11T21:33:51.6647304Z Generated XML report: test-reports/python-unittest/nn.test_packed_sequence/TEST-PackedSequenceTest-20230111213350.xml 2023-01-11T21:33:51.6647708Z 2023-01-11T21:33:51.6648107Z ##[endgroup] 2023-01-11T21:33:51.6648811Z FINISHED PRINTING LOG FILE of nn/test_packed_sequence (/var/lib/jenkins/workspace/test/test-reports/nn-test_packed_sequence_i3w_vdz0) 2023-01-11T21:33:51.6649204Z 2023-01-11T21:33:53.5959178Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:33:53.6785400Z Ignoring disabled issues: [] 2023-01-11T21:33:53.6934449Z Running nn/test_parametrization ... [2023-01-11 21:33:53.693082] 2023-01-11T21:33:53.6935663Z Executing ['/opt/conda/bin/python', '-bb', 'nn/test_parametrization.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:33:53.693350] 2023-01-11T21:33:56.0521401Z 2023-01-11T21:33:56.0521951Z Expand the folded group to see the log file of nn/test_parametrization 2023-01-11T21:33:56.0523347Z ##[group]PRINTING LOG FILE of nn/test_parametrization (/var/lib/jenkins/workspace/test/test-reports/nn-test_parametrization_y1d91i7c) 2023-01-11T21:33:56.0524036Z 2023-01-11T21:33:56.0524202Z Running tests... 2023-01-11T21:33:56.0525007Z ---------------------------------------------------------------------- 2023-01-11T21:33:56.0525954Z Test results will be stored in test-reports/python-unittest/nn.test_parametrization 2023-01-11T21:33:56.0526743Z test_caching_parametrization (__main__.TestNNParametrization) 2023-01-11T21:33:56.0527417Z Test the caching system of a parametrization ... ok (0.012s) 2023-01-11T21:33:56.0528264Z test_caching_parametrization_with_transfer_parametrizations_and_params (__main__.TestNNParametrization) 2023-01-11T21:33:56.0529354Z Test that transferring parametrizations doesn't cause issues with caching ... ok (0.003s) 2023-01-11T21:33:56.0530167Z test_deepcopy_after_parametrization (__main__.TestNNParametrization) 2023-01-11T21:33:56.0531132Z Test that we are able to create a deepcopy of the module when it's parametrized. ... ok (0.008s) 2023-01-11T21:33:56.0532298Z test_errors_parametrized_tensor_parametrization (__main__.TestNNParametrization) ... ok (0.004s) 2023-01-11T21:33:56.0533229Z test_errors_unparametrized_tensor_parametrization (__main__.TestNNParametrization) ... ok (0.005s) 2023-01-11T21:33:56.0534067Z test_initialization_parametrization (__main__.TestNNParametrization) 2023-01-11T21:33:56.0534843Z Test that it is possible to initialize a parametrization when it ... ok (0.005s) 2023-01-11T21:33:56.0535631Z test_multiple_inputs_parametrization (__main__.TestNNParametrization) ... ok (0.008s) 2023-01-11T21:33:56.0536413Z test_new_spectral_norm (__main__.TestNNParametrization) ... ok (0.041s) 2023-01-11T21:33:56.0537113Z test_new_spectral_norm_dim (__main__.TestNNParametrization) ... ok (0.003s) 2023-01-11T21:33:56.0537874Z test_new_spectral_norm_forward (__main__.TestNNParametrization) ... ok (0.003s) 2023-01-11T21:33:56.0538673Z test_new_spectral_norm_load_state_dict (__main__.TestNNParametrization) ... ok (0.016s) 2023-01-11T21:33:56.0539613Z test_orthogonal_errors (__main__.TestNNParametrization) ... ok (0.005s) 2023-01-11T21:33:56.0540359Z test_orthogonal_parametrization (__main__.TestNNParametrization) ... ok (0.181s) 2023-01-11T21:33:56.0541156Z test_parametrization_same_training_mode (__main__.TestNNParametrization) 2023-01-11T21:33:56.0541928Z Test training mode updated on parametrization registration ... 
ok (0.001s) 2023-01-11T21:33:56.0542946Z test_register_and_remove_buffer_parametrization (__main__.TestNNParametrization) 2023-01-11T21:33:56.0543769Z Test that it is possible to add and remove parametrizations on buffers ... ok (0.002s) 2023-01-11T21:33:56.0544573Z test_register_and_remove_nested_parametrization (__main__.TestNNParametrization) 2023-01-11T21:33:56.0545338Z Test that it is possible to nest the parametrizations ... ok (0.002s) 2023-01-11T21:33:56.0546077Z test_register_and_remove_parametrization (__main__.TestNNParametrization) 2023-01-11T21:33:56.0546852Z Test that it is possible to add a few parametrizations ... ok (0.018s) 2023-01-11T21:33:56.0547579Z test_serialization_parametrization (__main__.TestNNParametrization) 2023-01-11T21:33:56.0548301Z Test that it is possible to serialize a parametrized model via state_dict ... ok (0.005s) 2023-01-11T21:33:56.0549103Z test_transfer_parametrizations_and_params (__main__.TestNNParametrization) 2023-01-11T21:33:56.0549969Z Test that all parametrizations and their associated parameters are transferred. ... ok (0.004s) 2023-01-11T21:33:56.0550900Z test_transfer_parametrizations_and_params_many_to_one (__main__.TestNNParametrization) ... ok (0.005s) 2023-01-11T21:33:56.0551828Z test_transfer_parametrizations_and_params_right_inverse (__main__.TestNNParametrization) 2023-01-11T21:33:56.0552784Z Test that all parametrizations and their associated parameters are transferred. ... ok (0.002s) 2023-01-11T21:33:56.0553678Z test_transfer_parametrizations_and_params_single_param (__main__.TestNNParametrization) 2023-01-11T21:33:56.0554587Z Test that all parametrizations and their associated parameters are transferred. ... ok (0.003s) 2023-01-11T21:33:56.0555406Z test_type_before_parametrizations (__main__.TestNNParametrization) 2023-01-11T21:33:56.0556211Z Test that type_before_parametrizations always retrieves original type ... ok (0.001s) 2023-01-11T21:33:56.0556654Z 2023-01-11T21:33:56.0557171Z ---------------------------------------------------------------------- 2023-01-11T21:33:56.0557718Z Ran 23 tests in 0.337s 2023-01-11T21:33:56.0557987Z 2023-01-11T21:33:56.0558122Z OK 2023-01-11T21:33:56.0558335Z 2023-01-11T21:33:56.0558522Z Generating XML reports... 2023-01-11T21:33:56.0559563Z Generated XML report: test-reports/python-unittest/nn.test_parametrization/TEST-TestNNParametrization-20230111213355.xml 2023-01-11T21:33:56.0560196Z 2023-01-11T21:33:56.0560781Z ##[endgroup] 2023-01-11T21:33:56.0561799Z FINISHED PRINTING LOG FILE of nn/test_parametrization (/var/lib/jenkins/workspace/test/test-reports/nn-test_parametrization_y1d91i7c) 2023-01-11T21:33:56.0562569Z 2023-01-11T21:33:57.9911437Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:33:58.0562310Z Ignoring disabled issues: [] 2023-01-11T21:33:58.0710566Z Running nn/test_pruning ... [2023-01-11 21:33:58.070744] 2023-01-11T21:33:58.0712134Z Executing ['/opt/conda/bin/python', '-bb', 'nn/test_pruning.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:33:58.070998] 2023-01-11T21:34:00.1352270Z 2023-01-11T21:34:00.1352821Z Expand the folded group to see the log file of nn/test_pruning 2023-01-11T21:34:00.1353911Z ##[group]PRINTING LOG FILE of nn/test_pruning (/var/lib/jenkins/workspace/test/test-reports/nn-test_pruning_doeazfs5) 2023-01-11T21:34:00.1354289Z 2023-01-11T21:34:00.1354416Z Running tests... 
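The nn/test_pruning group that follows exercises the torch.nn.utils.prune API; before the listing, a minimal sketch of the l1_unstructured / remove workflow those test names describe (the Linear layer and amount are assumptions for illustration):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# l1_unstructured zeroes the fraction `amount` of entries with smallest |w|,
# keeping the original tensor as `weight_orig` plus a `weight_mask` buffer.
layer = nn.Linear(8, 4)
prune.l1_unstructured(layer, name="weight", amount=0.5)
print(hasattr(layer, "weight_orig"), float((layer.weight == 0).float().mean()))  # True, ~0.5

# prune.remove makes the pruning permanent: the reparametrization is dropped and
# `weight` is an ordinary parameter again (what test_remove_pruning describes).
prune.remove(layer, "weight")
print(hasattr(layer, "weight_orig"))  # False
```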
2023-01-11T21:34:00.1355069Z ---------------------------------------------------------------------- 2023-01-11T21:34:00.1355742Z Test results will be stored in test-reports/python-unittest/nn.test_pruning 2023-01-11T21:34:00.1356514Z test_compute_nparams_to_prune (__main__.TestPruningNN) 2023-01-11T21:34:00.1357044Z Test that requested pruning `amount` gets translated into the ... ok (0.010s) 2023-01-11T21:34:00.1357524Z test_custom_from_mask_pruning (__main__.TestPruningNN) 2023-01-11T21:34:00.1357980Z Test that the CustomFromMask is capable of receiving ... ok (0.001s) 2023-01-11T21:34:00.1358444Z test_global_pruning (__main__.TestPruningNN) 2023-01-11T21:34:00.1358930Z Test that global l1 unstructured pruning over 2 parameters removes ... ok (0.002s) 2023-01-11T21:34:00.1359462Z test_global_pruning_importance_scores (__main__.TestPruningNN) 2023-01-11T21:34:00.1359988Z Test that global l1 unstructured pruning over 2 parameters removes ... ok (0.002s) 2023-01-11T21:34:00.1360479Z test_identity_pruning (__main__.TestPruningNN) 2023-01-11T21:34:00.1360941Z Test that a mask of 1s does not change forward or backward. ... ok (0.003s) 2023-01-11T21:34:00.1361404Z test_l1_unstructured_pruning (__main__.TestPruningNN) 2023-01-11T21:34:00.1361990Z Test that l1 unstructured pruning actually removes the lowest ... ok (0.002s) 2023-01-11T21:34:00.1362540Z test_l1_unstructured_pruning_with_importance_scores (__main__.TestPruningNN) 2023-01-11T21:34:00.1363072Z Test that l1 unstructured pruning actually removes the lowest ... ok (0.002s) 2023-01-11T21:34:00.1363565Z test_ln_structured_pruning (__main__.TestPruningNN) 2023-01-11T21:34:00.1363983Z Check Ln structured pruning by hand. ... ok (0.002s) 2023-01-11T21:34:00.1364471Z test_ln_structured_pruning_importance_scores (__main__.TestPruningNN) 2023-01-11T21:34:00.1364936Z Check Ln structured pruning by hand. ... ok (0.002s) 2023-01-11T21:34:00.1365391Z test_multiple_pruning_calls (__main__.TestPruningNN) ... ok (0.002s) 2023-01-11T21:34:00.1365851Z test_prune (__main__.TestPruningNN) ... ok (0.001s) 2023-01-11T21:34:00.1366301Z test_prune_importance_scores (__main__.TestPruningNN) ... ok (0.001s) 2023-01-11T21:34:00.1366842Z test_prune_importance_scores_mimic_default (__main__.TestPruningNN) ... ok (0.001s) 2023-01-11T21:34:00.1367379Z test_pruning_container (__main__.TestPruningNN) ... ok (0.001s) 2023-01-11T21:34:00.1367859Z test_pruning_container_compute_mask (__main__.TestPruningNN) 2023-01-11T21:34:00.1368351Z Test `compute_mask` of pruning container with a known `t` and ... ok (0.002s) 2023-01-11T21:34:00.1368846Z test_pruning_id_consistency (__main__.TestPruningNN) 2023-01-11T21:34:00.1369538Z Test that pruning doesn't change the id of the parameters, which ... ok (0.001s) 2023-01-11T21:34:00.1370016Z test_pruning_rollback (__main__.TestPruningNN) 2023-01-11T21:34:00.1370502Z Test that if something fails when the we try to compute the mask, ... ok (0.002s) 2023-01-11T21:34:00.1371043Z test_pruning_serialization_model (__main__.TestPruningNN) ... ok (0.003s) 2023-01-11T21:34:00.1371563Z test_pruning_serialization_state_dict (__main__.TestPruningNN) ... ok (0.003s) 2023-01-11T21:34:00.1372069Z test_random_pruning (__main__.TestPruningNN) ... ok (0.002s) 2023-01-11T21:34:00.1372733Z test_random_pruning_0perc (__main__.TestPruningNN) 2023-01-11T21:34:00.1373219Z Test that a mask of 1s does not change forward or backward. ... 
ok (0.003s) 2023-01-11T21:34:00.1373695Z test_random_pruning_forward (__main__.TestPruningNN) 2023-01-11T21:34:00.1374129Z check forward with mask (by hand). ... ok (0.002s) 2023-01-11T21:34:00.1374577Z test_random_pruning_new_weight (__main__.TestPruningNN) 2023-01-11T21:34:00.1375050Z Test that module.name now contains a pruned version of ... ok (0.002s) 2023-01-11T21:34:00.1375534Z test_random_pruning_orig (__main__.TestPruningNN) 2023-01-11T21:34:00.1376154Z Test that original tensor is correctly stored in 'orig' ... ok (0.002s) 2023-01-11T21:34:00.1376647Z test_random_pruning_pickle (__main__.TestPruningNN) ... ok (0.004s) 2023-01-11T21:34:00.1377122Z test_random_pruning_sizes (__main__.TestPruningNN) 2023-01-11T21:34:00.1377625Z Test that the new parameters and buffers created by the pruning ... ok (0.002s) 2023-01-11T21:34:00.1378281Z test_random_structured_pruning_amount (__main__.TestPruningNN) ... ok (0.001s) 2023-01-11T21:34:00.1378762Z test_remove_pruning (__main__.TestPruningNN) 2023-01-11T21:34:00.1379253Z `prune.remove` removes the hook and the reparametrization ... ok (0.002s) 2023-01-11T21:34:00.1379769Z test_remove_pruning_exception (__main__.TestPruningNN) 2023-01-11T21:34:00.1380247Z Removing from an unpruned tensor throws an assertion error ... ok (0.001s) 2023-01-11T21:34:00.1380751Z test_remove_pruning_forward (__main__.TestPruningNN) 2023-01-11T21:34:00.1381248Z Remove pruning and check forward is unchanged from previous ... ok (0.001s) 2023-01-11T21:34:00.1381732Z test_rnn_pruning (__main__.TestPruningNN) ... ok (0.002s) 2023-01-11T21:34:00.1382219Z test_unstructured_pruning_same_magnitude (__main__.TestPruningNN) 2023-01-11T21:34:00.1383033Z Since it may happen that the tensor to prune has entries with the ... ok (0.001s) 2023-01-11T21:34:00.1383545Z test_validate_pruning_amount (__main__.TestPruningNN) 2023-01-11T21:34:00.1384037Z Tests the second util function that validates the pruning ... ok (0.001s) 2023-01-11T21:34:00.1384543Z test_validate_pruning_amount_init (__main__.TestPruningNN) 2023-01-11T21:34:00.1385037Z Test the first util function that validates the pruning ... ok (0.001s) 2023-01-11T21:34:00.1385335Z 2023-01-11T21:34:00.1385711Z ---------------------------------------------------------------------- 2023-01-11T21:34:00.1386141Z Ran 34 tests in 0.073s 2023-01-11T21:34:00.1386337Z 2023-01-11T21:34:00.1386446Z OK 2023-01-11T21:34:00.1386604Z 2023-01-11T21:34:00.1386748Z Generating XML reports... 2023-01-11T21:34:00.1387473Z Generated XML report: test-reports/python-unittest/nn.test_pruning/TEST-TestPruningNN-20230111213359.xml 2023-01-11T21:34:00.1387888Z 2023-01-11T21:34:00.1388334Z ##[endgroup] 2023-01-11T21:34:00.1389017Z FINISHED PRINTING LOG FILE of nn/test_pruning (/var/lib/jenkins/workspace/test/test-reports/nn-test_pruning_doeazfs5) 2023-01-11T21:34:00.1389400Z 2023-01-11T21:34:02.1262904Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:34:02.2118938Z Ignoring disabled issues: [] 2023-01-11T21:34:02.2270557Z Running profiler/test_memory_profiler ... [2023-01-11 21:34:02.226700] 2023-01-11T21:34:02.2271496Z Executing ['/opt/conda/bin/python', '-bb', 'profiler/test_memory_profiler.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... 
[2023-01-11 21:34:02.226966] 2023-01-11T21:34:07.2513401Z 2023-01-11T21:34:07.2514125Z Expand the folded group to see the log file of profiler/test_memory_profiler 2023-01-11T21:34:07.2515094Z ##[group]PRINTING LOG FILE of profiler/test_memory_profiler (/var/lib/jenkins/workspace/test/test-reports/profiler-test_memory_profiler_5mscg61c) 2023-01-11T21:34:07.2515577Z 2023-01-11T21:34:07.2515694Z Running tests... 2023-01-11T21:34:07.2516213Z ---------------------------------------------------------------------- 2023-01-11T21:34:07.2516629Z Test results will be stored in test-reports/python-unittest/profiler.test_memory_profiler 2023-01-11T21:34:07.2517385Z test_data_flow_graph_complicated (__main__.TestDataFlow) ... STAGE:2023-01-11 21:34:03 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2517871Z STAGE:2023-01-11 21:34:03 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2518325Z STAGE:2023-01-11 21:34:03 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2519106Z /var/lib/jenkins/workspace/test/profiler/test_memory_profiler.py:336: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:07.2520144Z lines.append(f"{name + ':':<8} T{storage_to_id[t.storage().data_ptr()]}") 2023-01-11T21:34:07.2520619Z STAGE:2023-01-11 21:34:03 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2521061Z STAGE:2023-01-11 21:34:03 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2521509Z STAGE:2023-01-11 21:34:03 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2521768Z ok (0.298s) 2023-01-11T21:34:07.2522205Z test_data_flow_graph_non_op_allocations (__main__.TestDataFlow) ... STAGE:2023-01-11 21:34:03 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2522702Z STAGE:2023-01-11 21:34:03 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2523148Z STAGE:2023-01-11 21:34:03 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2523396Z ok (0.061s) 2023-01-11T21:34:07.2523828Z test_data_flow_graph_simple (__main__.TestDataFlow) ... STAGE:2023-01-11 21:34:03 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2524315Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2524757Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2525942Z /var/lib/jenkins/workspace/test/profiler/test_memory_profiler.py:337: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. 
(Triggered internally at /var/lib/jenkins/workspace/build/aten/src/ATen/core/TensorBody.h:485.) 2023-01-11T21:34:07.2526678Z if t.grad is not None: 2023-01-11T21:34:07.2527024Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2527457Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2527905Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2528165Z ok (0.110s) 2023-01-11T21:34:07.2528598Z test_data_flow_graph_simple_backward (__main__.TestDataFlow) ... STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2543809Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2544551Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2545348Z /var/lib/jenkins/workspace/test/profiler/test_memory_profiler.py:338: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:07.2546532Z grad_id = storage_to_id[t.grad.storage().data_ptr()] 2023-01-11T21:34:07.2546748Z ok (0.061s) 2023-01-11T21:34:07.2547266Z test_data_flow_graph_simple_inplace (__main__.TestDataFlow) ... STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2547780Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2548553Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2549230Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2549981Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2550920Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2551400Z ok (0.126s) 2023-01-11T21:34:07.2552052Z test_data_flow_graph_stacked (__main__.TestDataFlow) ... 
STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2552598Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2553048Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2553484Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2553919Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2554350Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2554815Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2555403Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2555848Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2556096Z ok (0.204s) 2023-01-11T21:34:07.2556544Z test_data_flow_graph_with_annotations (__main__.TestDataFlow) ... STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2557035Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2557465Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2557721Z ok (0.087s) 2023-01-11T21:34:07.2558139Z test_match_schemas (__main__.TestDataFlow) ... STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2558637Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2559068Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2559326Z ok (0.007s) 2023-01-11T21:34:07.2559758Z test_match_schemas_backward (__main__.TestDataFlow) ... STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2560237Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2560664Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2560923Z ok (0.005s) 2023-01-11T21:34:07.2561354Z test_match_schemas_tensorlist (__main__.TestDataFlow) ... STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2561918Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2562348Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2562611Z ok (0.002s) 2023-01-11T21:34:07.2563073Z test_extract_gradients_from_module (__main__.TestIdentifyGradients) ... 
STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2563564Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2564010Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2564679Z /var/lib/jenkins/workspace/test/profiler/test_memory_profiler.py:117: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:07.2565236Z return tensor.storage().data_ptr() == key.storage.ptr 2023-01-11T21:34:07.2565807Z /var/lib/jenkins/workspace/test/profiler/test_memory_profiler.py:147: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:07.2566337Z allowed_set = {t.storage().data_ptr() for t in tensors} 2023-01-11T21:34:07.2566724Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2567160Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2567606Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2568031Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2568461Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2568899Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2569334Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2569747Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2570187Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2570445Z ok (0.037s) 2023-01-11T21:34:07.2570918Z test_extract_gradients_from_module_and_optimizer (__main__.TestIdentifyGradients) ... STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2571436Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2571880Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2572138Z ok (0.008s) 2023-01-11T21:34:07.2572590Z test_extract_gradients_from_optimizer (__main__.TestIdentifyGradients) ... 
STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2573092Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2573534Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2573955Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2574388Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2574866Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2575301Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2585244Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2586383Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2586919Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2587383Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2588129Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2588540Z ok (0.040s) 2023-01-11T21:34:07.2589535Z test_extract_gradients_from_optimizer_set_to_none (__main__.TestIdentifyGradients) ... STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2590501Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2591323Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2592101Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2592970Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2593787Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2594233Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2594654Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2595101Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2595534Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2595947Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2596387Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2596646Z ok (0.047s) 2023-01-11T21:34:07.2597106Z test_extract_gradients_low_level (__main__.TestIdentifyGradients) ... 
STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2597590Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2598030Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2598460Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2598870Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2599309Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2599564Z ok (0.011s) 2023-01-11T21:34:07.2599991Z test_config_check (__main__.TestMemoryProfiler) ... STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2600452Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2600890Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2601322Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2601830Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2602259Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2602694Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2603122Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2603549Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2603806Z ok (0.004s) 2023-01-11T21:34:07.2604274Z test_categories_e2e_sequential_fwd (__main__.TestMemoryProfilerE2E) ... STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2605006Z /var/lib/jenkins/workspace/test/profiler/test_memory_profiler.py:81: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:07.2605601Z if isinstance(t, torch.Tensor) and t.storage() 2023-01-11T21:34:07.2606165Z /var/lib/jenkins/workspace/test/profiler/test_memory_profiler.py:78: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:07.2606651Z return tuple( 2023-01-11T21:34:07.2607174Z /var/lib/jenkins/workspace/test/profiler/test_memory_profiler.py:79: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:07.2607689Z (t._cdata, t.storage().data_ptr()) 2023-01-11T21:34:07.2608070Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2608504Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2608764Z ok (0.072s) 2023-01-11T21:34:07.2609233Z test_categories_e2e_sequential_fwd_bwd (__main__.TestMemoryProfilerE2E) ... STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2609851Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2624426Z STAGE:2023-01-11 21:34:04 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2624722Z ok (0.322s) 2023-01-11T21:34:07.2625197Z test_categories_e2e_simple_fwd (__main__.TestMemoryProfilerE2E) ... STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2625693Z STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2626139Z STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2626395Z ok (0.037s) 2023-01-11T21:34:07.2626849Z test_categories_e2e_simple_fwd_bwd (__main__.TestMemoryProfilerE2E) ... STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2627348Z STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2627788Z STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2628165Z ok (0.191s) 2023-01-11T21:34:07.2628624Z test_categories_e2e_simple_fwd_bwd_step (__main__.TestMemoryProfilerE2E) ... STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2629132Z STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2629574Z STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2629836Z ok (0.231s) 2023-01-11T21:34:07.2630336Z test_categories_e2e_simple_module_fwd (__main__.TestMemoryProfilerE2E) ... STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2630827Z STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2631266Z STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2631524Z ok (0.029s) 2023-01-11T21:34:07.2632045Z test_categories_e2e_simple_module_fwd_bwd (__main__.TestMemoryProfilerE2E) ... 
STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2632628Z STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2633077Z STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2633337Z ok (0.126s) 2023-01-11T21:34:07.2633804Z test_categories_e2e_simple_module_fwd_bwd_step (__main__.TestMemoryProfilerE2E) ... STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2634315Z STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2634760Z STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2635021Z ok (0.203s) 2023-01-11T21:34:07.2635441Z test_inputs_fwd (__main__.TestMemoryProfilerE2E) ... STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2635922Z STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2636362Z STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2636986Z /var/lib/jenkins/workspace/test/profiler/test_memory_profiler.py:828: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:07.2637486Z storage = t.storage() 2023-01-11T21:34:07.2638020Z /var/lib/jenkins/workspace/test/profiler/test_memory_profiler.py:836: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:07.2638589Z if key.storage.ptr == storage.data_ptr() and key.device == storage.device 2023-01-11T21:34:07.2638822Z ok (0.025s) 2023-01-11T21:34:07.2639249Z test_inputs_fwd_bwd (__main__.TestMemoryProfilerE2E) ... STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2639734Z STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2640179Z STAGE:2023-01-11 21:34:05 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2640439Z ok (0.148s) 2023-01-11T21:34:07.2640867Z test_inputs_fwd_lazy (__main__.TestMemoryProfilerE2E) ... STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2641397Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2641834Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2642077Z ok (0.030s) 2023-01-11T21:34:07.2642516Z test_lazily_initialized (__main__.TestMemoryProfilerE2E) ... 
STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2643006Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2643442Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2643687Z ok (0.077s) 2023-01-11T21:34:07.2644132Z test_manual_optimizer_step (__main__.TestMemoryProfilerE2E) ... STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2644674Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2645116Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2645355Z ok (0.040s) 2023-01-11T21:34:07.2645790Z test_memory_timeline (__main__.TestMemoryProfilerE2E) ... STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2646273Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2646700Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2646957Z ok (0.183s) 2023-01-11T21:34:07.2647410Z test_parameters_and_gradients (__main__.TestMemoryProfilerE2E) ... STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2647909Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2648332Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2648760Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2649187Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2649615Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2650042Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2650467Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2650902Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2651322Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2651748Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2652183Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2652435Z ok (0.154s) 2023-01-11T21:34:07.2652897Z test_parameters_and_gradients_set_to_none (__main__.TestMemoryProfilerE2E) ... 
STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2653393Z [W CPUAllocator.cpp:231] Memory block of unknown size was allocated before the profiling started, profiler results will not include the deallocation event 2023-01-11T21:34:07.2653879Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2654318Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2654768Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:07.2655194Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:07.2655629Z STAGE:2023-01-11 21:34:06 5825:5825 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:07.2655871Z ok (0.236s) 2023-01-11T21:34:07.2655975Z 2023-01-11T21:34:07.2656172Z ---------------------------------------------------------------------- 2023-01-11T21:34:07.2656414Z Ran 32 tests in 3.214s 2023-01-11T21:34:07.2656528Z 2023-01-11T21:34:07.2656589Z OK 2023-01-11T21:34:07.2656667Z 2023-01-11T21:34:07.2656752Z Generating XML reports... 2023-01-11T21:34:07.2657172Z Generated XML report: test-reports/python-unittest/profiler.test_memory_profiler/TEST-TestDataFlow-20230111213403.xml 2023-01-11T21:34:07.2657788Z Generated XML report: test-reports/python-unittest/profiler.test_memory_profiler/TEST-TestIdentifyGradients-20230111213403.xml 2023-01-11T21:34:07.2658347Z Generated XML report: test-reports/python-unittest/profiler.test_memory_profiler/TEST-TestMemoryProfiler-20230111213403.xml 2023-01-11T21:34:07.2658919Z Generated XML report: test-reports/python-unittest/profiler.test_memory_profiler/TEST-TestMemoryProfilerE2E-20230111213403.xml 2023-01-11T21:34:07.2659171Z 2023-01-11T21:34:07.2659511Z ##[endgroup] 2023-01-11T21:34:07.2659948Z FINISHED PRINTING LOG FILE of profiler/test_memory_profiler (/var/lib/jenkins/workspace/test/test-reports/profiler-test_memory_profiler_5mscg61c) 2023-01-11T21:34:07.2660181Z 2023-01-11T21:34:09.1586697Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:34:09.2426580Z Ignoring disabled issues: [] 2023-01-11T21:34:09.2577735Z Running profiler/test_profiler ... [2023-01-11 21:34:09.257474] 2023-01-11T21:34:09.2579062Z Executing ['/opt/conda/bin/python', '-bb', 'profiler/test_profiler.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:34:09.257719] 2023-01-11T21:34:18.2347886Z 2023-01-11T21:34:18.2348456Z Expand the folded group to see the log file of profiler/test_profiler 2023-01-11T21:34:18.2350658Z ##[group]PRINTING LOG FILE of profiler/test_profiler (/var/lib/jenkins/workspace/test/test-reports/profiler-test_profiler_fqkfqero) 2023-01-11T21:34:18.2351549Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:34:18.2351807Z 2023-01-11T21:34:18.2351897Z Running tests... 2023-01-11T21:34:18.2352476Z ---------------------------------------------------------------------- 2023-01-11T21:34:18.2353146Z Test results will be stored in test-reports/python-unittest/profiler.test_profiler 2023-01-11T21:34:18.2353697Z test_execution_graph_alone (__main__.TestExecutionGraph) ... ok (0.010s) 2023-01-11T21:34:18.2354187Z test_execution_graph_no_capture (__main__.TestExecutionGraph) ... 
ok (0.001s) 2023-01-11T21:34:18.2354741Z test_execution_graph_repeat_in_loop (__main__.TestExecutionGraph) ... ok (0.012s) 2023-01-11T21:34:18.2355305Z test_execution_graph_start_stop (__main__.TestExecutionGraph) ... ok (0.010s) 2023-01-11T21:34:18.2356006Z test_execution_graph_with_kineto (__main__.TestExecutionGraph) ... [W kineto_shim.cpp:330] Profiler is not initialized: skipping step() invocation 2023-01-11T21:34:18.2356700Z [W kineto_shim.cpp:330] Profiler is not initialized: skipping step() invocation 2023-01-11T21:34:18.2357275Z [W kineto_shim.cpp:330] Profiler is not initialized: skipping step() invocation 2023-01-11T21:34:18.2358062Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2358847Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2359374Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2359814Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2360463Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2360902Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2361167Z ok (0.030s) 2023-01-11T21:34:18.2361395Z test_bfs (__main__.TestExperimentalUtils) ... ok (0.001s) 2023-01-11T21:34:18.2361655Z test_dfs (__main__.TestExperimentalUtils) ... ok (0.001s) 2023-01-11T21:34:18.2362223Z test_profiler_conv2d_bias_followed_by_batchnorm2d_pattern (__main__.TestExperimentalUtils) ... STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2362752Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2363199Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2363711Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2364149Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2364589Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2365028Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2365446Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2365890Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2366149Z ok (0.008s) 2023-01-11T21:34:18.2366420Z test_profiler_extra_cuda_copy_pattern (__main__.TestExperimentalUtils) ... skip: CUDA is required (0.001s) 2023-01-11T21:34:18.2366816Z test_profiler_extra_cuda_copy_pattern_benchmark (__main__.TestExperimentalUtils) ... skip: CUDA is required (0.000s) 2023-01-11T21:34:18.2367408Z test_profiler_for_loop_indexing_pattern (__main__.TestExperimentalUtils) ... 
STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2367922Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2368504Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2368945Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2369378Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2369823Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2370247Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2370681Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2371126Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2371567Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2371982Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2372428Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2372867Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2373286Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2373732Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2374034Z ok (0.084s) 2023-01-11T21:34:18.2374313Z test_profiler_fp32_matmul_pattern (__main__.TestExperimentalUtils) ... skip: CUDA is required (0.000s) 2023-01-11T21:34:18.2374871Z test_profiler_grad_not_set_to_none_pattern (__main__.TestExperimentalUtils) ... 
STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2375380Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2375827Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2376260Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2376681Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2377158Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2377593Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2378005Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2378449Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2378890Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2379320Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2379748Z STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2380011Z ok (0.045s) 2023-01-11T21:34:18.2380294Z test_profiler_matmul_dim_fp16_pattern (__main__.TestExperimentalUtils) ... skip: CUDA is required (0.001s) 2023-01-11T21:34:18.2380839Z test_profiler_name_pattern (__main__.TestExperimentalUtils) ... STAGE:2023-01-11 21:34:11 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2381333Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2381779Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2382038Z ok (2.164s) 2023-01-11T21:34:18.2382764Z test_profiler_optimizer_single_tensor_pattern (__main__.TestExperimentalUtils) ... 
STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2383311Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2383767Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2384222Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2384650Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2385092Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2385529Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2385956Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2386384Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2386821Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2387251Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2387688Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2388232Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2388685Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2389135Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2389561Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2389989Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2390436Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2390697Z ok (0.167s) 2023-01-11T21:34:18.2391196Z test_profiler_pattern_match_helper (__main__.TestExperimentalUtils) ... STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2391709Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2392154Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2392463Z ok (0.004s) 2023-01-11T21:34:18.2392943Z test_profiler_pattern_matcher_json_report (__main__.TestExperimentalUtils) ... STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2393461Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2393911Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2394156Z ok (0.025s) 2023-01-11T21:34:18.2394643Z test_profiler_synchronized_dataloader_pattern (__main__.TestExperimentalUtils) ... 
STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2395158Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2395605Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2395851Z ok (0.090s) 2023-01-11T21:34:18.2396094Z test_utils_compute_idle_time (__main__.TestExperimentalUtils) ... ok (0.002s) 2023-01-11T21:34:18.2396412Z test_utils_compute_queue_depth (__main__.TestExperimentalUtils) ... ok (0.001s) 2023-01-11T21:34:18.2396963Z test_utils_compute_queue_depth_when_no_cuda_events (__main__.TestExperimentalUtils) ... STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2397488Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2397934Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2398202Z ok (0.044s) 2023-01-11T21:34:18.2398642Z test_utils_compute_self_time (__main__.TestExperimentalUtils) ... STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2399136Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2399581Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2399825Z ok (0.007s) 2023-01-11T21:34:18.2400081Z test_utils_get_optimizable_events (__main__.TestExperimentalUtils) ... ok (0.007s) 2023-01-11T21:34:18.2400398Z test_utils_intervals_overlap (__main__.TestExperimentalUtils) ... 5 2023-01-11T21:34:18.2400630Z ok (0.001s) 2023-01-11T21:34:18.2401035Z test_export_stacks (__main__.TestProfiler) ... STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2401576Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2402024Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2402270Z ok (0.005s) 2023-01-11T21:34:18.2402675Z test_flops (__main__.TestProfiler) ... STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2403141Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2403593Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2403838Z ok (0.080s) 2023-01-11T21:34:18.2404045Z test_high_level_trace (__main__.TestProfiler) 2023-01-11T21:34:18.2404522Z Checks that python side high level events are recorded. ... 
STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2405032Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2405476Z STAGE:2023-01-11 21:34:13 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2405911Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2406342Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2406772Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2407200Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2407630Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2408075Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2408322Z ok (0.121s) 2023-01-11T21:34:18.2409082Z test_kineto (__main__.TestProfiler) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/88377 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.001s) 2023-01-11T21:34:18.2409667Z test_kineto_multigpu (__main__.TestProfiler) ... skip: Multiple GPUs needed (0.001s) 2023-01-11T21:34:18.2410173Z test_kineto_profiler_api (__main__.TestProfiler) ... STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2410638Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2411082Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2411524Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2411954Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2412388Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2412824Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2413253Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2413681Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2414112Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2414542Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2415029Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2415271Z ok (0.010s) 2023-01-11T21:34:18.2415725Z test_kineto_profiler_multiple_steppers (__main__.TestProfiler) ... 
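The repeated Warm Up / Collection / Post Processing cycles logged by the Kineto profiler tests come from scheduled profiling; the following is a minimal sketch of the public torch.profiler scheduling API that produces such cycles (the model, sizes, and schedule values are illustrative, not taken from the test):

    import torch
    from torch.profiler import ProfilerActivity, profile, schedule

    model = torch.nn.Linear(8, 8)
    inputs = torch.randn(4, 8)

    # Each warmup/active window roughly corresponds to one
    # "Warm Up -> Collection -> Post Processing" sequence in the log.
    with profile(
        activities=[ProfilerActivity.CPU],
        schedule=schedule(wait=0, warmup=1, active=1, repeat=2),
    ) as prof:
        for _ in range(4):
            model(inputs)
            prof.step()  # advance the profiler schedule by one step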
STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2416219Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2416658Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2417073Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2417498Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2417936Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2418388Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2418814Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2419254Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2419512Z ok (0.031s) 2023-01-11T21:34:18.2420260Z test_memory_profiler (__main__.TestProfiler) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/72280 for platform(s) linux, mac. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.002s) 2023-01-11T21:34:18.2421013Z test_module_hierarchy (__main__.TestProfiler) ... STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2421492Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2421932Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2422493Z ERROR:2023-01-11 21:34:14 5872:5872 CudaDeviceProperties.cpp:27] cudaGetDeviceCount failed with code 35 2023-01-11T21:34:18.2422758Z ok (0.019s) 2023-01-11T21:34:18.2423265Z test_nested_tensor_with_shapes (__main__.TestProfiler) ... /var/lib/jenkins/workspace/test/profiler/test_profiler.py:1318: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/NestedTensorImpl.cpp:179.) 2023-01-11T21:34:18.2423780Z inp = torch.nested.nested_tensor([a, b]) 2023-01-11T21:34:18.2424150Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2424587Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2425037Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2425298Z ok (0.003s) 2023-01-11T21:34:18.2425500Z test_oom_tracing (__main__.TestProfiler) ... ok (0.002s) 2023-01-11T21:34:18.2425760Z test_profiler_correlation_id (__main__.TestProfiler) 2023-01-11T21:34:18.2426296Z We expect the correlation_id to be unique across multiple invokation of the profiler, ... 
STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2426797Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2427239Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2427669Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2428166Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2428601Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2429045Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2429475Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2429919Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2430338Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2430767Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2431213Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2431678Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2432107Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2432630Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2432898Z ok (0.159s) 2023-01-11T21:34:18.2433280Z test_profiler_fwd_bwd_link (__main__.TestProfiler) ... skip: Disable forward->backward link to workaround profiler crash (0.002s) 2023-01-11T21:34:18.2433830Z test_profiler_metadata (__main__.TestProfiler) ... STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2434314Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2434759Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2435011Z ok (0.003s) 2023-01-11T21:34:18.2435434Z test_profiler_tracing (__main__.TestProfiler) ... 
STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2435909Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2436340Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2436772Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2437196Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2437639Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2438056Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2438485Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2438926Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2439341Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2439767Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2440206Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2440462Z ok (0.008s) 2023-01-11T21:34:18.2440867Z test_profiler_type (__main__.TestProfiler) ... STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2441342Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2441831Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2442089Z ok (0.002s) 2023-01-11T21:34:18.2442269Z test_source (__main__.TestProfiler) 2023-01-11T21:34:18.2442765Z Checks that source code attribution works for eager, TS and autograd mode ... STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2443266Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2443697Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2443954Z ok (0.057s) 2023-01-11T21:34:18.2444393Z test_tensorboard_trace_handler (__main__.TestProfiler) ... 
STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2444879Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2445346Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2445791Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2446220Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2446647Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2447077Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2447510Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2447951Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2448370Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2448801Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2449240Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2449672Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2450085Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2450522Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2450953Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2451379Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2451807Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2452239Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2452669Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2453095Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2453353Z ok (0.018s) 2023-01-11T21:34:18.2453614Z test_custom_module_input_op_ids (__main__.TestProfilerCUDA) ... skip: CUDA is required (0.001s) 2023-01-11T21:34:18.2453901Z test_mem_leak (__main__.TestProfilerCUDA) 2023-01-11T21:34:18.2454265Z Checks that there's no memory leak when using profiler with CUDA ... skip: CUDA is required (0.001s) 2023-01-11T21:34:18.2454590Z test_custom_module_input_op_ids (__main__.TestProfilerITT) ... ok (0.001s) 2023-01-11T21:34:18.2454909Z test_datapipe_delegation_with_profiler (__main__.TestRecordFunction) ... ok (0.002s) 2023-01-11T21:34:18.2455481Z test_datapipe_with_record_function (__main__.TestRecordFunction) ... 
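test_tensorboard_trace_handler above exercises trace export; a minimal sketch of the public torch.profiler.tensorboard_trace_handler usage follows (the output directory is an illustrative placeholder, not a path from this run):

    import torch
    from torch.profiler import profile, schedule, tensorboard_trace_handler

    model = torch.nn.Linear(4, 4)

    with profile(
        schedule=schedule(wait=1, warmup=1, active=2),
        # Hypothetical directory; each completed cycle writes a trace file here
        # that TensorBoard's profiler plugin can load.
        on_trace_ready=tensorboard_trace_handler("./log/example_run"),
    ) as prof:
        for _ in range(6):
            model(torch.randn(2, 4))
            prof.step()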
STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2455987Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2456432Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2456691Z ok (0.004s) 2023-01-11T21:34:18.2457139Z test_datapipe_with_record_function_fork (__main__.TestRecordFunction) ... STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2457640Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2458084Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2458331Z ok (0.004s) 2023-01-11T21:34:18.2458808Z test_record_function (__main__.TestRecordFunction) ... STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2459298Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2459743Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2459988Z ok (0.002s) 2023-01-11T21:34:18.2460445Z test_allocation_id_uniqueness (__main__.TestTorchTidyProfiler) ... STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2460944Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2461386Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2461633Z ok (0.246s) 2023-01-11T21:34:18.2462077Z test_allocation_ids (__main__.TestTorchTidyProfiler) ... STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2462698Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2463128Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2463386Z ok (0.180s) 2023-01-11T21:34:18.2463853Z test_allocation_ids_with_other_ops (__main__.TestTorchTidyProfiler) ... STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2464357Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2464788Z STAGE:2023-01-11 21:34:14 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2465055Z ok (0.160s) 2023-01-11T21:34:18.2465494Z test_allocations (__main__.TestTorchTidyProfiler) ... 
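The TestRecordFunction cases above cover user-defined profiler annotations; a minimal sketch of torch.profiler.record_function (the label is illustrative):

    import torch
    from torch.profiler import profile, record_function

    with profile() as prof:
        # The labelled block shows up as a user annotation in the collected events.
        with record_function("my_label"):
            torch.ones(16).sum()

    # The annotation is visible in the aggregated summary.
    print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))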
STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2465986Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2466415Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2466848Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2467277Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2467700Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2467960Z ok (0.187s) 2023-01-11T21:34:18.2468393Z test_extra_fields (__main__.TestTorchTidyProfiler) ... STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2468880Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2469425Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2469684Z ok (0.002s) 2023-01-11T21:34:18.2470116Z test_impl_reuse (__main__.TestTorchTidyProfiler) ... STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2470584Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2471025Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2471284Z ok (0.237s) 2023-01-11T21:34:18.2471722Z test_mkldnn_tensors (__main__.TestTorchTidyProfiler) ... STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2472297Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2472761Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2473023Z ok (0.003s) 2023-01-11T21:34:18.2473475Z test_module_and_optimizer_ids (__main__.TestTorchTidyProfiler) ... STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2473955Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2474395Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2474828Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2475243Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2475699Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2475958Z ok (0.014s) 2023-01-11T21:34:18.2476400Z test_nnmodule_params (__main__.TestTorchTidyProfiler) ... 
STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2476872Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2477313Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2477940Z /var/lib/jenkins/workspace/test/profiler/test_profiler.py:1985: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2478580Z expected = [(name, val.storage().data_ptr(), val.grad.storage().data_ptr()) for name, val in net.fc1._parameters.items()] 2023-01-11T21:34:18.2479213Z /var/lib/jenkins/workspace/test/profiler/test_profiler.py:1986: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2479807Z expected += [(name, val.storage().data_ptr(), val.grad.storage().data_ptr()) for name, val in net.fc2._parameters.items()] 2023-01-11T21:34:18.2480082Z ok (0.004s) 2023-01-11T21:34:18.2480512Z test_optimizer (__main__.TestTorchTidyProfiler) ... STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2480998Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2481432Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2481754Z ok (0.012s) 2023-01-11T21:34:18.2482216Z test_optimizer_parameters_adam (__main__.TestTorchTidyProfiler) ... STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2482706Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2483152Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2483766Z /var/lib/jenkins/workspace/test/profiler/test_profiler.py:2009: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2484319Z [(v.storage().data_ptr()) for v in group.get("params", [])], 2023-01-11T21:34:18.2484923Z /var/lib/jenkins/workspace/test/profiler/test_profiler.py:2022: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2485477Z {name: value.storage().data_ptr() for name, value in parameter_state.items()}, 2023-01-11T21:34:18.2486063Z /var/lib/jenkins/workspace/test/profiler/test_profiler.py:2023: UserWarning: TypedStorage is deprecated. 
It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2486600Z observed_state.get(parameter.storage().data_ptr(), []) 2023-01-11T21:34:18.2486816Z ok (0.037s) 2023-01-11T21:34:18.2487266Z test_optimizer_parameters_sgd (__main__.TestTorchTidyProfiler) ... STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2487772Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2488220Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2488480Z ok (0.020s) 2023-01-11T21:34:18.2489047Z test_pointers_and_ids (__main__.TestTorchTidyProfiler) ... /var/lib/jenkins/workspace/test/profiler/test_profiler.py:1366: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2489635Z a_initial_storage_data = a.storage().data_ptr() 2023-01-11T21:34:18.2490188Z /var/lib/jenkins/workspace/test/profiler/test_profiler.py:1371: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2490719Z c_initial_storage_data = c.storage().data_ptr() 2023-01-11T21:34:18.2491101Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2491508Z [W CPUAllocator.cpp:231] Memory block of unknown size was allocated before the profiling started, profiler results will not include the deallocation event 2023-01-11T21:34:18.2492170Z /var/lib/jenkins/workspace/test/profiler/test_profiler.py:1384: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2492694Z c.set_(d.storage()) 2023-01-11T21:34:18.2493054Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2493486Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2493746Z ok (0.005s) 2023-01-11T21:34:18.2494176Z test_scalar_ins (__main__.TestTorchTidyProfiler) ... STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2494665Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2495095Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2495354Z ok (0.002s) 2023-01-11T21:34:18.2495824Z test_sparse_tensors (__main__.TestTorchTidyProfiler) ... 
STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2496304Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2496746Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2497004Z ok (0.004s) 2023-01-11T21:34:18.2497435Z test_tensor_lists (__main__.TestTorchTidyProfiler) ... STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2497907Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2498350Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2498971Z /var/lib/jenkins/workspace/test/profiler/test_profiler.py:1958: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2499539Z self.assertEqual(x.storage().data_ptr(), inputs[0][0].storage_data_ptr) 2023-01-11T21:34:18.2500107Z /var/lib/jenkins/workspace/test/profiler/test_profiler.py:1959: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2500668Z self.assertEqual(y.storage().data_ptr(), inputs[0][1].storage_data_ptr) 2023-01-11T21:34:18.2500892Z ok (0.002s) 2023-01-11T21:34:18.2501343Z test_tensor_properties (__main__.TestTorchTidyProfiler) ... STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2501829Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2502280Z STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2502673Z ok (0.003s) 2023-01-11T21:34:18.2503137Z test_tensorimpl_invalidation_full (__main__.TestTorchTidyProfiler) ... STAGE:2023-01-11 21:34:15 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2503809Z /var/lib/jenkins/workspace/test/profiler/test_profiler.py:1546: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2504312Z x_storages = [x.storage()] 2023-01-11T21:34:18.2504848Z /var/lib/jenkins/workspace/test/profiler/test_profiler.py:1549: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2505417Z x.set_(torch.ones((1,)).storage()) 2023-01-11T21:34:18.2505952Z /var/lib/jenkins/workspace/test/profiler/test_profiler.py:1550: UserWarning: TypedStorage is deprecated. 
It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2506454Z x_storages.append(x.storage()) 2023-01-11T21:34:18.2507037Z /var/lib/jenkins/workspace/test/profiler/test_profiler.py:1559: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2507546Z x.set_(torch.ones((1,)).storage()) 2023-01-11T21:34:18.2508077Z /var/lib/jenkins/workspace/test/profiler/test_profiler.py:1563: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2508577Z x.set_(torch.ones((1,)).storage()) 2023-01-11T21:34:18.2508939Z STAGE:2023-01-11 21:34:16 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2509386Z STAGE:2023-01-11 21:34:16 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2509642Z ok (0.508s) 2023-01-11T21:34:18.2510106Z test_tensorimpl_invalidation_keep_alive (__main__.TestTorchTidyProfiler) ... STAGE:2023-01-11 21:34:16 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2510789Z /var/lib/jenkins/workspace/test/profiler/test_profiler.py:1478: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2511286Z x_storages = [x.storage()] 2023-01-11T21:34:18.2511819Z /var/lib/jenkins/workspace/test/profiler/test_profiler.py:1481: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2512414Z x.set_(torch.ones((1,)).storage()) 2023-01-11T21:34:18.2512938Z /var/lib/jenkins/workspace/test/profiler/test_profiler.py:1485: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2513448Z x_storages.append(x.storage()) 2023-01-11T21:34:18.2513981Z /var/lib/jenkins/workspace/test/profiler/test_profiler.py:1499: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2514481Z x.set_(torch.ones((1,)).storage()) 2023-01-11T21:34:18.2514856Z STAGE:2023-01-11 21:34:16 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2515293Z STAGE:2023-01-11 21:34:16 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2515772Z STAGE:2023-01-11 21:34:16 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2516204Z STAGE:2023-01-11 21:34:17 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2516634Z STAGE:2023-01-11 21:34:17 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2516891Z ok (1.172s) 2023-01-11T21:34:18.2517364Z test_tensorimpl_invalidation_scalar_args (__main__.TestTorchTidyProfiler) ... STAGE:2023-01-11 21:34:17 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2517873Z STAGE:2023-01-11 21:34:17 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2518302Z STAGE:2023-01-11 21:34:17 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2518560Z ok (0.181s) 2023-01-11T21:34:18.2519056Z test_tensorimpl_invalidation_set (__main__.TestTorchTidyProfiler) ... STAGE:2023-01-11 21:34:17 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2519737Z /var/lib/jenkins/workspace/test/profiler/test_profiler.py:1454: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:18.2520235Z x.set_(torch.ones((1,)).storage()) 2023-01-11T21:34:18.2520605Z STAGE:2023-01-11 21:34:17 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2521050Z STAGE:2023-01-11 21:34:17 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2521481Z STAGE:2023-01-11 21:34:17 5872:5872 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:34:18.2521905Z STAGE:2023-01-11 21:34:17 5872:5872 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:34:18.2522347Z STAGE:2023-01-11 21:34:17 5872:5872 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:34:18.2522606Z ok (0.312s) 2023-01-11T21:34:18.2522712Z 2023-01-11T21:34:18.2522896Z ---------------------------------------------------------------------- 2023-01-11T21:34:18.2523142Z Ran 71 tests in 6.548s 2023-01-11T21:34:18.2523257Z 2023-01-11T21:34:18.2523329Z OK (skipped=10) 2023-01-11T21:34:18.2523436Z 2023-01-11T21:34:18.2523519Z Generating XML reports... 
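The repeated UserWarning above spells out the migration it wants: call tensor.untyped_storage() instead of the deprecated tensor.storage(). A minimal sketch of that change (illustrative only, not taken from test_profiler.py):

import torch

x = torch.ones((3,))

# Deprecated spelling: this is what triggers the TypedStorage UserWarning above.
legacy_ptr = x.storage().data_ptr()

# Spelling recommended by the warning text: go through the untyped storage.
new_ptr = x.untyped_storage().data_ptr()

assert legacy_ptr == new_ptr  # both accessors refer to the same allocation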
2023-01-11T21:34:18.2523938Z Generated XML report: test-reports/python-unittest/profiler.test_profiler/TEST-TestExecutionGraph-20230111213411.xml 2023-01-11T21:34:18.2524507Z Generated XML report: test-reports/python-unittest/profiler.test_profiler/TEST-TestExperimentalUtils-20230111213411.xml 2023-01-11T21:34:18.2525047Z Generated XML report: test-reports/python-unittest/profiler.test_profiler/TEST-TestProfiler-20230111213411.xml 2023-01-11T21:34:18.2525564Z Generated XML report: test-reports/python-unittest/profiler.test_profiler/TEST-TestProfilerITT-20230111213411.xml 2023-01-11T21:34:18.2526109Z Generated XML report: test-reports/python-unittest/profiler.test_profiler/TEST-TestRecordFunction-20230111213411.xml 2023-01-11T21:34:18.2526665Z Generated XML report: test-reports/python-unittest/profiler.test_profiler/TEST-TestTorchTidyProfiler-20230111213411.xml 2023-01-11T21:34:18.2527210Z Generated XML report: test-reports/python-unittest/profiler.test_profiler/TEST-TestProfilerCUDA-20230111213411.xml 2023-01-11T21:34:18.2527449Z 2023-01-11T21:34:18.2527780Z ##[endgroup] 2023-01-11T21:34:18.2528198Z FINISHED PRINTING LOG FILE of profiler/test_profiler (/var/lib/jenkins/workspace/test/test-reports/profiler-test_profiler_fqkfqero) 2023-01-11T21:34:18.2528433Z 2023-01-11T21:34:20.1434642Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:34:20.2104021Z Ignoring disabled issues: [] 2023-01-11T21:34:20.2256994Z Running test_autocast ... [2023-01-11 21:34:20.225285] 2023-01-11T21:34:20.2258204Z Executing ['/opt/conda/bin/python', '-bb', 'test_autocast.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:34:20.225546] 2023-01-11T21:34:22.3162397Z 2023-01-11T21:34:22.3162962Z Expand the folded group to see the log file of test_autocast 2023-01-11T21:34:22.3164027Z ##[group]PRINTING LOG FILE of test_autocast (/var/lib/jenkins/workspace/test/test-reports/test_autocast_wlibco_t) 2023-01-11T21:34:22.3164274Z 2023-01-11T21:34:22.3164340Z Running tests... 2023-01-11T21:34:22.3164750Z ---------------------------------------------------------------------- 2023-01-11T21:34:22.3165127Z Test results will be stored in test-reports/python-unittest/test_autocast 2023-01-11T21:34:22.3165449Z test_autocast_methods_expect_builtin_promote (__main__.TestAutocastCPU) ... ok (0.236s) 2023-01-11T21:34:22.3165755Z test_autocast_nn_bf16 (__main__.TestAutocastCPU) ... ok (0.007s) 2023-01-11T21:34:22.3166241Z test_autocast_nn_fp32 (__main__.TestAutocastCPU) ... ok (0.009s) 2023-01-11T21:34:22.3166527Z test_autocast_torch_bf16 (__main__.TestAutocastCPU) ... ok (0.019s) 2023-01-11T21:34:22.3166819Z test_autocast_torch_expect_builtin_promote (__main__.TestAutocastCPU) ... ok (0.008s) 2023-01-11T21:34:22.3167121Z test_autocast_torch_fp32 (__main__.TestAutocastCPU) ... ok (0.082s) 2023-01-11T21:34:22.3167425Z test_autocast_torch_need_autocast_promote (__main__.TestAutocastCPU) ... ok (0.007s) 2023-01-11T21:34:22.3167704Z test_cast_cache_is_global (__main__.TestAutocastGPU) 2023-01-11T21:34:22.3168012Z Verifies that the autocast cache is global. This is done by ... skip: requires cuda (0.001s) 2023-01-11T21:34:22.3168328Z test_autocast_fast_dtype (__main__.TestTorchAutocast) ... 
ok (0.001s) 2023-01-11T21:34:22.3168494Z 2023-01-11T21:34:22.3168702Z ---------------------------------------------------------------------- 2023-01-11T21:34:22.3168933Z Ran 9 tests in 0.370s 2023-01-11T21:34:22.3169053Z 2023-01-11T21:34:22.3169127Z OK (skipped=1) 2023-01-11T21:34:22.3169234Z 2023-01-11T21:34:22.3169318Z Generating XML reports... 2023-01-11T21:34:22.3169725Z Generated XML report: test-reports/python-unittest/test_autocast/TEST-TestAutocastCPU-20230111213421.xml 2023-01-11T21:34:22.3170244Z Generated XML report: test-reports/python-unittest/test_autocast/TEST-TestTorchAutocast-20230111213421.xml 2023-01-11T21:34:22.3170756Z Generated XML report: test-reports/python-unittest/test_autocast/TEST-TestAutocastGPU-20230111213421.xml 2023-01-11T21:34:22.3170983Z 2023-01-11T21:34:22.3171211Z ##[endgroup] 2023-01-11T21:34:22.3171584Z FINISHED PRINTING LOG FILE of test_autocast (/var/lib/jenkins/workspace/test/test-reports/test_autocast_wlibco_t) 2023-01-11T21:34:22.3171795Z 2023-01-11T21:34:24.2462862Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:34:24.3128515Z Ignoring disabled issues: [] 2023-01-11T21:34:24.3287108Z Running test_binary_ufuncs ... [2023-01-11 21:34:24.328393] 2023-01-11T21:34:24.3290263Z Executing ['/opt/conda/bin/python', '-bb', 'test_binary_ufuncs.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:34:24.328729] 2023-01-11T21:34:26.9987863Z 2023-01-11T21:34:26.9988415Z Expand the folded group to see the log file of test_binary_ufuncs 2023-01-11T21:34:26.9989435Z ##[group]PRINTING LOG FILE of test_binary_ufuncs (/var/lib/jenkins/workspace/test/test-reports/test_binary_ufuncs_gnv1g3qt) 2023-01-11T21:34:26.9989678Z 2023-01-11T21:34:26.9989755Z Running tests... 2023-01-11T21:34:26.9990156Z ---------------------------------------------------------------------- 2023-01-11T21:34:26.9990332Z 2023-01-11T21:34:26.9990532Z ---------------------------------------------------------------------- 2023-01-11T21:34:26.9990773Z Ran 0 tests in 0.000s 2023-01-11T21:34:26.9990891Z 2023-01-11T21:34:26.9990940Z OK 2023-01-11T21:34:26.9991032Z 2023-01-11T21:34:26.9991119Z Generating XML reports... 2023-01-11T21:34:26.9991707Z Test results will be stored in test-reports/python-unittest/test_binary_ufuncs 2023-01-11T21:34:26.9991893Z 2023-01-11T21:34:26.9992115Z ##[endgroup] 2023-01-11T21:34:26.9992585Z FINISHED PRINTING LOG FILE of test_binary_ufuncs (/var/lib/jenkins/workspace/test/test-reports/test_binary_ufuncs_gnv1g3qt) 2023-01-11T21:34:26.9992809Z 2023-01-11T21:34:28.9150372Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:34:28.9994317Z Ignoring disabled issues: [] 2023-01-11T21:34:29.0146420Z Running test_bundled_inputs ... [2023-01-11 21:34:29.014319] 2023-01-11T21:34:29.0147937Z Executing ['/opt/conda/bin/python', '-bb', 'test_bundled_inputs.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:34:29.014594] 2023-01-11T21:34:32.4159309Z 2023-01-11T21:34:32.4159767Z Expand the folded group to see the log file of test_bundled_inputs 2023-01-11T21:34:32.4161143Z ##[group]PRINTING LOG FILE of test_bundled_inputs (/var/lib/jenkins/workspace/test/test-reports/test_bundled_inputs_6051al9e) 2023-01-11T21:34:32.4162015Z 2023-01-11T21:34:32.4162184Z Running tests... 
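For reference, the TestAutocastCPU cases that just passed in the test_autocast group exercise CPU autocast with bfloat16. A minimal sketch of the public API they revolve around (the actual test bodies are not reproduced here):

import torch

x = torch.randn(8, 8)
w = torch.randn(8, 8)

# CPU autocast region: ops on the lower-precision list (e.g. matmul) run in
# bfloat16, while fp32-only ops are kept in float32 by the autocast policy.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = x @ w

print(y.dtype)  # torch.bfloat16 inside the CPU autocast region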
2023-01-11T21:34:32.4163011Z ---------------------------------------------------------------------- 2023-01-11T21:34:32.4163921Z Test results will be stored in test-reports/python-unittest/test_bundled_inputs 2023-01-11T21:34:32.4164668Z test_bad_inputs (__main__.TestBundledInputs) ... ok (0.258s) 2023-01-11T21:34:32.4166227Z test_dict_args (__main__.TestBundledInputs) ... /var/lib/jenkins/workspace/test/test_bundled_inputs.py:361: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:32.4167653Z assert ret.storage().size() == 1 2023-01-11T21:34:32.4169624Z /opt/conda/lib/python3.10/site-packages/torch/utils/bundled_inputs.py:394: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:32.4171068Z if arg._typed_storage().size() <= MAX_RAW_TENSOR_SIZE or skip_size_check: 2023-01-11T21:34:32.4171622Z ok (1.201s) 2023-01-11T21:34:32.4172179Z test_double_augment_fail (__main__.TestBundledInputs) ... ok (0.007s) 2023-01-11T21:34:32.4172870Z test_double_augment_non_mutator (__main__.TestBundledInputs) ... ok (0.007s) 2023-01-11T21:34:32.4173606Z test_double_augment_success (__main__.TestBundledInputs) ... ok (0.009s) 2023-01-11T21:34:32.4174326Z test_large_tensor_with_inflation (__main__.TestBundledInputs) ... ok (0.012s) 2023-01-11T21:34:32.4175057Z test_multiple_methods_with_inputs (__main__.TestBundledInputs) ... ok (0.048s) 2023-01-11T21:34:32.4175873Z test_multiple_methods_with_inputs_both_defined_failure (__main__.TestBundledInputs) ... ok (0.005s) 2023-01-11T21:34:32.4176744Z test_multiple_methods_with_inputs_neither_defined_failure (__main__.TestBundledInputs) ... ok (0.004s) 2023-01-11T21:34:32.4177521Z test_non_tensors (__main__.TestBundledInputs) ... ok (0.008s) 2023-01-11T21:34:32.4179869Z test_rejected_tensors (__main__.TestBundledInputs) ... /opt/conda/lib/python3.10/site-packages/torch/utils/bundled_inputs.py:410: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:34:32.4181445Z f"a tensor with storage size {arg._typed_storage().size()}. " 2023-01-11T21:34:32.4181967Z ok (0.010s) 2023-01-11T21:34:32.4182715Z test_single_tensors (__main__.TestBundledInputs) ... ok (0.050s) 2023-01-11T21:34:32.4183102Z 2023-01-11T21:34:32.4183584Z ---------------------------------------------------------------------- 2023-01-11T21:34:32.4184306Z Ran 12 tests in 1.619s 2023-01-11T21:34:32.4184563Z 2023-01-11T21:34:32.4184702Z OK 2023-01-11T21:34:32.4184921Z 2023-01-11T21:34:32.4185102Z Generating XML reports... 
2023-01-11T21:34:32.4186064Z Generated XML report: test-reports/python-unittest/test_bundled_inputs/TEST-TestBundledInputs-20230111213430.xml 2023-01-11T21:34:32.4186652Z 2023-01-11T21:34:32.4187205Z ##[endgroup] 2023-01-11T21:34:32.4188180Z FINISHED PRINTING LOG FILE of test_bundled_inputs (/var/lib/jenkins/workspace/test/test-reports/test_bundled_inputs_6051al9e) 2023-01-11T21:34:32.4188729Z 2023-01-11T21:34:34.4071938Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:34:34.4949774Z Ignoring disabled issues: [] 2023-01-11T21:34:34.5102784Z Running test_comparison_utils ... [2023-01-11 21:34:34.509916] 2023-01-11T21:34:34.5104311Z Executing ['/opt/conda/bin/python', '-bb', 'test_comparison_utils.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:34:34.510190] 2023-01-11T21:34:36.1893769Z 2023-01-11T21:34:36.1894381Z Expand the folded group to see the log file of test_comparison_utils 2023-01-11T21:34:36.1895484Z ##[group]PRINTING LOG FILE of test_comparison_utils (/var/lib/jenkins/workspace/test/test-reports/test_comparison_utils_2qqxnon_) 2023-01-11T21:34:36.1895923Z 2023-01-11T21:34:36.1897016Z ##[endgroup] 2023-01-11T21:34:36.1897857Z FINISHED PRINTING LOG FILE of test_comparison_utils (/var/lib/jenkins/workspace/test/test-reports/test_comparison_utils_2qqxnon_) 2023-01-11T21:34:36.1898278Z 2023-01-11T21:34:38.1049243Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:34:38.1901941Z Ignoring disabled issues: [] 2023-01-11T21:34:38.2051694Z Running test_complex ... [2023-01-11 21:34:38.204884] 2023-01-11T21:34:38.2053547Z Executing ['/opt/conda/bin/python', '-bb', 'test_complex.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:34:38.205137] 2023-01-11T21:34:40.2088986Z 2023-01-11T21:34:40.2089552Z Expand the folded group to see the log file of test_complex 2023-01-11T21:34:40.2090578Z ##[group]PRINTING LOG FILE of test_complex (/var/lib/jenkins/workspace/test/test-reports/test_complex_9sm1gm2r) 2023-01-11T21:34:40.2090963Z 2023-01-11T21:34:40.2091091Z Running tests... 2023-01-11T21:34:40.2091735Z ---------------------------------------------------------------------- 2023-01-11T21:34:40.2092041Z 2023-01-11T21:34:40.2092386Z ---------------------------------------------------------------------- 2023-01-11T21:34:40.2092784Z Ran 0 tests in 0.000s 2023-01-11T21:34:40.2092979Z 2023-01-11T21:34:40.2093080Z OK 2023-01-11T21:34:40.2093236Z 2023-01-11T21:34:40.2093373Z Generating XML reports... 2023-01-11T21:34:40.2093936Z Test results will be stored in test-reports/python-unittest/test_complex 2023-01-11T21:34:40.2094252Z 2023-01-11T21:34:40.2094630Z ##[endgroup] 2023-01-11T21:34:40.2095318Z FINISHED PRINTING LOG FILE of test_complex (/var/lib/jenkins/workspace/test/test-reports/test_complex_9sm1gm2r) 2023-01-11T21:34:40.2095703Z 2023-01-11T21:34:42.1722531Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:34:42.2586794Z Ignoring disabled issues: [] 2023-01-11T21:34:42.2735617Z Running test_cuda_sanitizer ... [2023-01-11 21:34:42.273278] 2023-01-11T21:34:42.2737922Z Executing ['/opt/conda/bin/python', '-bb', 'test_cuda_sanitizer.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... 
[2023-01-11 21:34:42.273532] 2023-01-11T21:34:43.9708273Z 2023-01-11T21:34:43.9708843Z Expand the folded group to see the log file of test_cuda_sanitizer 2023-01-11T21:34:43.9709866Z ##[group]PRINTING LOG FILE of test_cuda_sanitizer (/var/lib/jenkins/workspace/test/test-reports/test_cuda_sanitizer_6g1nc35w) 2023-01-11T21:34:43.9710424Z CUDA not available, skipping tests 2023-01-11T21:34:43.9710649Z 2023-01-11T21:34:43.9710761Z Running tests... 2023-01-11T21:34:43.9711383Z ---------------------------------------------------------------------- 2023-01-11T21:34:43.9711936Z 2023-01-11T21:34:43.9712372Z ---------------------------------------------------------------------- 2023-01-11T21:34:43.9712780Z Ran 0 tests in 0.000s 2023-01-11T21:34:43.9712979Z 2023-01-11T21:34:43.9713086Z OK 2023-01-11T21:34:43.9713245Z 2023-01-11T21:34:43.9713396Z Generating XML reports... 2023-01-11T21:34:43.9713992Z Test results will be stored in test-reports/python-unittest/test_cuda_sanitizer 2023-01-11T21:34:43.9714322Z 2023-01-11T21:34:43.9714718Z ##[endgroup] 2023-01-11T21:34:43.9715444Z FINISHED PRINTING LOG FILE of test_cuda_sanitizer (/var/lib/jenkins/workspace/test/test-reports/test_cuda_sanitizer_6g1nc35w) 2023-01-11T21:34:43.9715849Z 2023-01-11T21:34:45.9824850Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:34:46.1015218Z Ignoring disabled issues: [] 2023-01-11T21:34:46.1247515Z Running test_dataloader ... [2023-01-11 21:34:46.124401] 2023-01-11T21:34:46.1249308Z Executing ['/opt/conda/bin/python', '-bb', 'test_dataloader.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:34:46.124695] 2023-01-11T21:36:14.9700212Z 2023-01-11T21:36:14.9701340Z Expand the folded group to see the log file of test_dataloader 2023-01-11T21:36:14.9707178Z ##[group]PRINTING LOG FILE of test_dataloader (/var/lib/jenkins/workspace/test/test-reports/test_dataloader_itutkeds) 2023-01-11T21:36:14.9707675Z 2023-01-11T21:36:14.9708100Z Running tests... 2023-01-11T21:36:14.9708724Z ---------------------------------------------------------------------- 2023-01-11T21:36:14.9709171Z Test results will be stored in test-reports/python-unittest/test_dataloader 2023-01-11T21:36:14.9709499Z test_shuffler_iterdatapipe (__main__.IntegrationTestDataLoaderDataPipe) 2023-01-11T21:36:14.9709952Z Verify ``IterDataPipe.shuffle`` is controlled by ``DataLoader`` ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s) 2023-01-11T21:36:14.9767422Z test_add_dataset (__main__.TestConcatDataset) ... ok (0.224s) 2023-01-11T21:36:14.9767934Z test_concat_raises_index_error (__main__.TestConcatDataset) ... ok (0.001s) 2023-01-11T21:36:14.9768397Z test_concat_two_non_singletons (__main__.TestConcatDataset) ... ok (0.001s) 2023-01-11T21:36:14.9768858Z test_concat_two_non_singletons_with_empty (__main__.TestConcatDataset) ... ok (0.001s) 2023-01-11T21:36:14.9769293Z test_concat_two_singletons (__main__.TestConcatDataset) ... ok (0.001s) 2023-01-11T21:36:14.9769848Z test_iterable_dataset_err (__main__.TestConcatDataset) ... ok (0.001s) 2023-01-11T21:36:14.9771606Z test_conv_after_fork (__main__.TestConvAfterFork) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/75492 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. 
(0.000s) 2023-01-11T21:36:14.9772635Z test_custom_batch_pin (__main__.TestCustomPinFn) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:36:14.9773254Z test_custom_batch_pin_worker (__main__.TestCustomPinFn) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:36:14.9774395Z test_batch_sampler (__main__.TestDataLoader) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9775021Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9775458Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9775806Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9776293Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9781142Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9782004Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9783021Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9783776Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9784361Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9784788Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9785138Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9785775Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9786292Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9787133Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9788056Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9788539Z ok (2.433s) 2023-01-11T21:36:14.9789029Z test_builtin_collection_conversion (__main__.TestDataLoader) ... ok (0.274s) 2023-01-11T21:36:14.9789681Z test_bulk_loading_nobatch (__main__.TestDataLoader) ... ok (0.066s) 2023-01-11T21:36:14.9790321Z test_chain_iterable_style_dataset (__main__.TestDataLoader) ... ok (0.171s) 2023-01-11T21:36:14.9790905Z test_default_collate_bad_numpy_types (__main__.TestDataLoader) ... ok (0.003s) 2023-01-11T21:36:14.9791491Z test_default_collate_bad_sequence_type (__main__.TestDataLoader) ... ok (0.001s) 2023-01-11T21:36:14.9792040Z test_default_collate_dtype (__main__.TestDataLoader) ... ok (0.003s) 2023-01-11T21:36:14.9792588Z test_default_collate_mapping_keep_type (__main__.TestDataLoader) ... ok (0.001s) 2023-01-11T21:36:14.9794926Z test_default_collate_numpy_memmap (__main__.TestDataLoader) ... /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py:172: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. 
(Triggered internally at /var/lib/jenkins/workspace/torch/csrc/utils/tensor_numpy.cpp:210.) 2023-01-11T21:36:14.9796646Z return collate([torch.as_tensor(b) for b in batch], collate_fn_map=collate_fn_map) 2023-01-11T21:36:14.9797158Z ok (0.003s) 2023-01-11T21:36:14.9797677Z test_default_collate_sequence_dont_keep_type (__main__.TestDataLoader) ... ok (0.002s) 2023-01-11T21:36:14.9798364Z test_default_collate_sequence_keep_type (__main__.TestDataLoader) ... ok (0.002s) 2023-01-11T21:36:14.9799026Z test_default_collate_shared_tensor (__main__.TestDataLoader) ... ok (0.002s) 2023-01-11T21:36:14.9799695Z test_default_convert_mapping_keep_type (__main__.TestDataLoader) ... ok (0.001s) 2023-01-11T21:36:14.9800376Z test_default_convert_sequence_dont_keep_type (__main__.TestDataLoader) ... ok (0.001s) 2023-01-11T21:36:14.9801062Z test_default_convert_sequence_keep_type (__main__.TestDataLoader) ... ok (0.001s) 2023-01-11T21:36:14.9801744Z test_distributed_sampler_invalid_rank (__main__.TestDataLoader) ... ok (0.001s) 2023-01-11T21:36:14.9802403Z test_duplicating_data_with_drop_last (__main__.TestDataLoader) ... ok (0.002s) 2023-01-11T21:36:14.9802993Z test_error (__main__.TestDataLoader) ... ok (0.002s) 2023-01-11T21:36:14.9803537Z test_error_in_init (__main__.TestDataLoader) ... ok (0.083s) 2023-01-11T21:36:14.9804100Z test_error_workers (__main__.TestDataLoader) ... ok (0.068s) 2023-01-11T21:36:14.9804706Z test_excessive_thread_creation_warning (__main__.TestDataLoader) ... ok (0.008s) 2023-01-11T21:36:14.9805330Z test_fd_limit_exceeded (__main__.TestDataLoader) ... ok (1.554s) 2023-01-11T21:36:14.9806485Z test_get_worker_info (__main__.TestDataLoader) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9807446Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9808395Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9809154Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9809632Z ok (3.467s) 2023-01-11T21:36:14.9810082Z test_growing_dataset (__main__.TestDataLoader) ... ok (0.001s) 2023-01-11T21:36:14.9810690Z test_invalid_assign_after_init (__main__.TestDataLoader) ... ok (0.001s) 2023-01-11T21:36:14.9811344Z test_invalid_ctor_args_combinations (__main__.TestDataLoader) ... ok (0.003s) 2023-01-11T21:36:14.9811960Z test_iterable_style_dataset (__main__.TestDataLoader) ... ok (0.900s) 2023-01-11T21:36:14.9812575Z test_iterabledataset_len (__main__.TestDataLoader) ... ok (0.002s) 2023-01-11T21:36:14.9813826Z test_large_sampler_indices (__main__.TestDataLoader) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9814669Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9815613Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9816372Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9816861Z ok (3.214s) 2023-01-11T21:36:14.9817273Z test_len (__main__.TestDataLoader) ... ok (0.007s) 2023-01-11T21:36:14.9817866Z test_multi_epochs_reproducibility (__main__.TestDataLoader) ... ok (0.132s) 2023-01-11T21:36:14.9819066Z test_multiple_dataloaders (__main__.TestDataLoader) ... 
/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9819897Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9820839Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9821600Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9822726Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9823419Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9824375Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9825121Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9826090Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9826793Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9827751Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9828496Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9829433Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9830144Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9831102Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9831855Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9832314Z ok (3.432s) 2023-01-11T21:36:14.9833450Z test_multiprocessing_contexts (__main__.TestDataLoader) ... 
/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9834442Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9835401Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9836130Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9837084Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9837795Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9838753Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9839506Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9840468Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9841178Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9842201Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9842960Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9848793Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9850334Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9851083Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9851642Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9852451Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9853031Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9853795Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9854424Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9855233Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9855808Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9856531Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9857142Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9857565Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9857887Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9858326Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9858666Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9859085Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9859412Z 
warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9859838Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9860165Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9860590Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9860921Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9861349Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9861775Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9862216Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9862720Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9863153Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9863478Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9863910Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9864235Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9864652Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9865068Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9865513Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9865834Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9866251Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9866590Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9866813Z ok (7.001s) 2023-01-11T21:36:14.9867511Z test_multiprocessing_iterdatapipe (__main__.TestDataLoader) ... /opt/conda/lib/python3.10/site-packages/torch/utils/data/graph_settings.py:88: UserWarning: `shuffle=True` was set, but the datapipe does not contain a `Shuffler`. Adding one at the end. Be aware that the default buffer size might not be sufficient for your task. 
2023-01-11T21:36:14.9868035Z warnings.warn( 2023-01-11T21:36:14.9868425Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9868766Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9869194Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9869531Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9869957Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9870283Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9870712Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9871040Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9871474Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9871802Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9872214Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9872554Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9875141Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9875705Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9876485Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9877067Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9877818Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9878577Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9879375Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9880006Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9880772Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9881381Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9881817Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9882160Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9882591Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9882974Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9883394Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9883738Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9884169Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: 
UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9884479Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9884907Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9885246Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9885467Z ok (6.605s) 2023-01-11T21:36:14.9885907Z test_no_segfault (__main__.TestDataLoader) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9886285Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9886714Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9887041Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9887263Z ok (1.712s) 2023-01-11T21:36:14.9887473Z test_numpy (__main__.TestDataLoader) ... ok (0.003s) 2023-01-11T21:36:14.9887733Z test_numpy_gen_state (__main__.TestDataLoader) ... ok (0.006s) 2023-01-11T21:36:14.9887987Z test_numpy_scalars (__main__.TestDataLoader) ... ok (0.003s) 2023-01-11T21:36:14.9888249Z test_partial_workers (__main__.TestDataLoader) 2023-01-11T21:36:14.9888528Z Check that workers exit even if the iterator is not exhausted. ... ok (0.086s) 2023-01-11T21:36:14.9888783Z test_proper_exit (__main__.TestDataLoader) 2023-01-11T21:36:14.9889201Z There might be ConnectionResetError or leaked semaphore warning (due to dirty process exit), but they are all safe to ignore ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.003s) 2023-01-11T21:36:14.9889619Z test_random_sampler (__main__.TestDataLoader) ... ok (0.003s) 2023-01-11T21:36:14.9889918Z test_random_sampler_len_with_replacement (__main__.TestDataLoader) ... ok (0.001s) 2023-01-11T21:36:14.9890225Z test_random_sampler_len_without_replacement (__main__.TestDataLoader) ... ok (0.001s) 2023-01-11T21:36:14.9890749Z test_sampler (__main__.TestDataLoader) ... 
/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9891119Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9891542Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9891894Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9892380Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9892709Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9893131Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9893476Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9893910Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9894378Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9894797Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9895139Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9895619Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9895939Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9896367Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9896715Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9896935Z ok (2.465s) 2023-01-11T21:36:14.9897159Z test_sampler_reproducibility (__main__.TestDataLoader) ... ok (0.028s) 2023-01-11T21:36:14.9897674Z test_segfault (__main__.TestDataLoader) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9898043Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9898466Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9898828Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9899054Z ok (5.407s) 2023-01-11T21:36:14.9899282Z test_seqential_batch_workers (__main__.TestDataLoader) ... ok (0.110s) 2023-01-11T21:36:14.9899577Z test_seqential_batch_workers_prefetch (__main__.TestDataLoader) ... ok (0.111s) 2023-01-11T21:36:14.9899874Z test_sequential_batch (__main__.TestDataLoader) ... ok (0.041s) 2023-01-11T21:36:14.9900152Z test_sequential_nonbatch (__main__.TestDataLoader) ... ok (0.022s) 2023-01-11T21:36:14.9900449Z test_sequential_pin_memory (__main__.TestDataLoader) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:36:14.9900756Z test_sequential_workers (__main__.TestDataLoader) ... ok (0.174s) 2023-01-11T21:36:14.9901020Z test_shuffle (__main__.TestDataLoader) ... ok (0.062s) 2023-01-11T21:36:14.9901278Z test_shuffle_batch (__main__.TestDataLoader) ... ok (0.054s) 2023-01-11T21:36:14.9901534Z test_shuffle_batch_none (__main__.TestDataLoader) ... 
ok (0.056s) 2023-01-11T21:36:14.9901819Z test_shuffle_batch_workers (__main__.TestDataLoader) ... ok (0.158s) 2023-01-11T21:36:14.9902117Z test_shuffle_batch_workers_prefetch (__main__.TestDataLoader) ... ok (0.144s) 2023-01-11T21:36:14.9904378Z test_shuffle_pin_memory (__main__.TestDataLoader) ... skip: CUDA unavailable (0.001s) 2023-01-11T21:36:14.9904709Z test_shuffle_reproducibility (__main__.TestDataLoader) ... ok (0.309s) 2023-01-11T21:36:14.9904999Z test_shuffle_workers (__main__.TestDataLoader) ... ok (0.210s) 2023-01-11T21:36:14.9905506Z test_timeout (__main__.TestDataLoader) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9905880Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9906317Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9906669Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9906976Z ok (2.683s) 2023-01-11T21:36:14.9907189Z test_typing (__main__.TestDataLoader) ... ok (0.001s) 2023-01-11T21:36:14.9907450Z test_worker_init_fn (__main__.TestDataLoader) ... ok (0.045s) 2023-01-11T21:36:14.9907703Z test_worker_seed (__main__.TestDataLoader) ... ok (0.092s) 2023-01-11T21:36:14.9907986Z test_worker_seed_reproducibility (__main__.TestDataLoader) ... ok (0.167s) 2023-01-11T21:36:14.9908566Z test_batch_sampler (__main__.TestDataLoaderPersistentWorkers) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9908974Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9909394Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9909738Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9910225Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9910556Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9910978Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9911320Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9911758Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9912071Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9912497Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9912910Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9913349Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9913665Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9914095Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9914436Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9914645Z ok (2.448s) 2023-01-11T21:36:14.9914926Z 
test_builtin_collection_conversion (__main__.TestDataLoaderPersistentWorkers) ... ok (0.227s) 2023-01-11T21:36:14.9915291Z test_bulk_loading_nobatch (__main__.TestDataLoaderPersistentWorkers) ... ok (0.068s) 2023-01-11T21:36:14.9915654Z test_chain_iterable_style_dataset (__main__.TestDataLoaderPersistentWorkers) ... ok (0.102s) 2023-01-11T21:36:14.9916002Z test_dataset_not_reset (__main__.TestDataLoaderPersistentWorkers) ... ok (0.093s) 2023-01-11T21:36:14.9916374Z test_default_collate_bad_numpy_types (__main__.TestDataLoaderPersistentWorkers) ... ok (0.002s) 2023-01-11T21:36:14.9916754Z test_default_collate_bad_sequence_type (__main__.TestDataLoaderPersistentWorkers) ... ok (0.001s) 2023-01-11T21:36:14.9917109Z test_default_collate_dtype (__main__.TestDataLoaderPersistentWorkers) ... ok (0.002s) 2023-01-11T21:36:14.9917480Z test_default_collate_mapping_keep_type (__main__.TestDataLoaderPersistentWorkers) ... ok (0.001s) 2023-01-11T21:36:14.9917851Z test_default_collate_numpy_memmap (__main__.TestDataLoaderPersistentWorkers) ... ok (0.003s) 2023-01-11T21:36:14.9918230Z test_default_collate_sequence_dont_keep_type (__main__.TestDataLoaderPersistentWorkers) ... ok (0.001s) 2023-01-11T21:36:14.9918602Z test_default_collate_sequence_keep_type (__main__.TestDataLoaderPersistentWorkers) ... ok (0.001s) 2023-01-11T21:36:14.9918977Z test_default_collate_shared_tensor (__main__.TestDataLoaderPersistentWorkers) ... ok (0.001s) 2023-01-11T21:36:14.9919352Z test_default_convert_mapping_keep_type (__main__.TestDataLoaderPersistentWorkers) ... ok (0.001s) 2023-01-11T21:36:14.9919787Z test_default_convert_sequence_dont_keep_type (__main__.TestDataLoaderPersistentWorkers) ... ok (0.001s) 2023-01-11T21:36:14.9920164Z test_default_convert_sequence_keep_type (__main__.TestDataLoaderPersistentWorkers) ... ok (0.001s) 2023-01-11T21:36:14.9920551Z test_distributed_sampler_invalid_rank (__main__.TestDataLoaderPersistentWorkers) ... ok (0.001s) 2023-01-11T21:36:14.9920931Z test_duplicating_data_with_drop_last (__main__.TestDataLoaderPersistentWorkers) ... ok (0.002s) 2023-01-11T21:36:14.9921271Z test_early_exit (__main__.TestDataLoaderPersistentWorkers) ... ok (1.518s) 2023-01-11T21:36:14.9921597Z test_error (__main__.TestDataLoaderPersistentWorkers) ... ok (0.003s) 2023-01-11T21:36:14.9921922Z test_error_in_init (__main__.TestDataLoaderPersistentWorkers) ... ok (0.080s) 2023-01-11T21:36:14.9922251Z test_error_workers (__main__.TestDataLoaderPersistentWorkers) ... ok (0.058s) 2023-01-11T21:36:14.9922656Z test_excessive_thread_creation_warning (__main__.TestDataLoaderPersistentWorkers) ... ok (0.013s) 2023-01-11T21:36:14.9923027Z test_fd_limit_exceeded (__main__.TestDataLoaderPersistentWorkers) ... ok (1.589s) 2023-01-11T21:36:14.9923630Z test_get_worker_info (__main__.TestDataLoaderPersistentWorkers) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9924042Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9924464Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9924815Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9925041Z ok (3.527s) 2023-01-11T21:36:14.9925285Z test_growing_dataset (__main__.TestDataLoaderPersistentWorkers) ... ok (0.001s) 2023-01-11T21:36:14.9925641Z test_invalid_assign_after_init (__main__.TestDataLoaderPersistentWorkers) ... 
ok (0.001s) 2023-01-11T21:36:14.9926015Z test_invalid_ctor_args_combinations (__main__.TestDataLoaderPersistentWorkers) ... ok (0.002s) 2023-01-11T21:36:14.9926487Z test_iterable_style_dataset (__main__.TestDataLoaderPersistentWorkers) ... Exception ignored in: 2023-01-11T21:36:14.9926856Z Traceback (most recent call last): 2023-01-11T21:36:14.9927240Z File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1480, in __del__ 2023-01-11T21:36:14.9927606Z Exception ignored in: 2023-01-11T21:36:14.9927876Z self._shutdown_workers() 2023-01-11T21:36:14.9928258Z File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1463, in _shutdown_workers 2023-01-11T21:36:14.9928549Z Traceback (most recent call last): 2023-01-11T21:36:14.9928919Z File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1480, in __del__ 2023-01-11T21:36:14.9929169Z if w.is_alive(): 2023-01-11T21:36:14.9929424Z File "/opt/conda/lib/python3.10/multiprocessing/process.py", line 160, in is_alive 2023-01-11T21:36:14.9929803Z assert self._parent_pid == os.getpid(), 'can only test a child process' 2023-01-11T21:36:14.9930069Z AssertionError: can only test a child process 2023-01-11T21:36:14.9930288Z self._shutdown_workers() 2023-01-11T21:36:14.9930668Z File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1463, in _shutdown_workers 2023-01-11T21:36:14.9931028Z Exception ignored in: 2023-01-11T21:36:14.9931320Z Traceback (most recent call last): 2023-01-11T21:36:14.9931690Z File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1480, in __del__ 2023-01-11T21:36:14.9931950Z if w.is_alive(): 2023-01-11T21:36:14.9932193Z File "/opt/conda/lib/python3.10/multiprocessing/process.py", line 160, in is_alive 2023-01-11T21:36:14.9932611Z assert self._parent_pid == os.getpid(), 'can only test a child process' 2023-01-11T21:36:14.9932886Z AssertionError: can only test a child process 2023-01-11T21:36:14.9933095Z self._shutdown_workers() 2023-01-11T21:36:14.9933475Z File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1463, in _shutdown_workers 2023-01-11T21:36:14.9933744Z if w.is_alive(): 2023-01-11T21:36:14.9933987Z File "/opt/conda/lib/python3.10/multiprocessing/process.py", line 160, in is_alive 2023-01-11T21:36:14.9934361Z assert self._parent_pid == os.getpid(), 'can only test a child process' 2023-01-11T21:36:14.9934631Z AssertionError: can only test a child process 2023-01-11T21:36:14.9934831Z ok (0.794s) 2023-01-11T21:36:14.9935086Z test_iterabledataset_len (__main__.TestDataLoaderPersistentWorkers) ... ok (0.002s) 2023-01-11T21:36:14.9935740Z test_large_sampler_indices (__main__.TestDataLoaderPersistentWorkers) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9936159Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9936580Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9936931Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9937154Z ok (3.232s) 2023-01-11T21:36:14.9937398Z test_len (__main__.TestDataLoaderPersistentWorkers) ... ok (0.007s) 2023-01-11T21:36:14.9937738Z test_multi_epochs_reproducibility (__main__.TestDataLoaderPersistentWorkers) ... 
ok (0.050s)
2023-01-11T21:36:14.9938367Z test_multiple_dataloaders (__main__.TestDataLoaderPersistentWorkers) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests
2023-01-11T21:36:14.9938784Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests")
2023-01-11T21:36:14.9939213Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests
2023-01-11T21:36:14.9939561Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests")
2023-01-11T21:36:14.9944480Z ok (3.431s)
2023-01-11T21:36:14.9944990Z test_multiprocessing_contexts (__main__.TestDataLoaderPersistentWorkers) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests
2023-01-11T21:36:14.9945414Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests")
2023-01-11T21:36:14.9945925Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests
2023-01-11T21:36:14.9946270Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests")
2023-01-11T21:36:14.9963298Z ok (6.832s)
2023-01-11T21:36:14.9964053Z test_multiprocessing_iterdatapipe (__main__.TestDataLoaderPersistentWorkers) ... /opt/conda/lib/python3.10/site-packages/torch/utils/data/graph_settings.py:88: UserWarning: `shuffle=True` was set, but the datapipe does not contain a `Shuffler`. Adding one at the end. Be aware that the default buffer size might not be sufficient for your task.
2023-01-11T21:36:14.9964583Z warnings.warn(
2023-01-11T21:36:14.9964962Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests
2023-01-11T21:36:14.9965290Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests")
2023-01-11T21:36:14.9965718Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests
2023-01-11T21:36:14.9966048Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests")
2023-01-11T21:36:14.9976884Z ok (6.618s)
2023-01-11T21:36:14.9977383Z test_no_segfault (__main__.TestDataLoaderPersistentWorkers) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests
2023-01-11T21:36:14.9977784Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests")
2023-01-11T21:36:14.9978210Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests
2023-01-11T21:36:14.9978538Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests")
2023-01-11T21:36:14.9978764Z ok (1.745s)
2023-01-11T21:36:14.9979009Z test_numpy (__main__.TestDataLoaderPersistentWorkers) ... ok (0.002s)
2023-01-11T21:36:14.9979329Z test_numpy_gen_state (__main__.TestDataLoaderPersistentWorkers) ... ok (0.003s)
2023-01-11T21:36:14.9979665Z test_numpy_scalars (__main__.TestDataLoaderPersistentWorkers) ... ok (0.002s)
2023-01-11T21:36:14.9979998Z test_partial_workers (__main__.TestDataLoaderPersistentWorkers)
2023-01-11T21:36:14.9980296Z Check that workers exit even if the iterator is not exhausted. ... ok (0.070s)
2023-01-11T21:36:14.9980607Z test_proper_exit (__main__.TestDataLoaderPersistentWorkers)
2023-01-11T21:36:14.9981057Z There might be ConnectionResetError or leaked semaphore warning (due to dirty process exit), but they are all safe to ignore ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.005s)
2023-01-11T21:36:14.9981516Z test_random_sampler (__main__.TestDataLoaderPersistentWorkers) ... ok (0.004s)
2023-01-11T21:36:14.9981873Z test_random_sampler_len_with_replacement (__main__.TestDataLoaderPersistentWorkers) ... ok (0.002s)
2023-01-11T21:36:14.9982267Z test_random_sampler_len_without_replacement (__main__.TestDataLoaderPersistentWorkers) ... ok (0.001s)
2023-01-11T21:36:14.9983054Z test_sampler (__main__.TestDataLoaderPersistentWorkers) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests
2023-01-11T21:36:14.9983459Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests")
2023-01-11T21:36:14.9983888Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests
2023-01-11T21:36:14.9984298Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests")
2023-01-11T21:36:14.9989104Z ok (2.546s)
2023-01-11T21:36:14.9989377Z test_sampler_reproducibility (__main__.TestDataLoaderPersistentWorkers) ... ok (0.015s)
2023-01-11T21:36:14.9989969Z test_segfault (__main__.TestDataLoaderPersistentWorkers) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests
2023-01-11T21:36:14.9990371Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests")
2023-01-11T21:36:14.9990789Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests
2023-01-11T21:36:14.9991132Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests")
2023-01-11T21:36:14.9991353Z ok (3.328s)
2023-01-11T21:36:14.9991607Z test_seqential_batch_workers (__main__.TestDataLoaderPersistentWorkers) ... ok (0.118s)
2023-01-11T21:36:14.9991978Z test_seqential_batch_workers_prefetch (__main__.TestDataLoaderPersistentWorkers) ... ok (0.116s)
2023-01-11T21:36:14.9992339Z test_sequential_batch (__main__.TestDataLoaderPersistentWorkers) ... ok (0.039s)
2023-01-11T21:36:14.9992741Z test_sequential_nonbatch (__main__.TestDataLoaderPersistentWorkers) ... ok (0.021s)
2023-01-11T21:36:14.9993119Z test_sequential_pin_memory (__main__.TestDataLoaderPersistentWorkers) ... skip: CUDA unavailable (0.000s)
2023-01-11T21:36:14.9993500Z test_sequential_workers (__main__.TestDataLoaderPersistentWorkers) ... ok (0.180s)
2023-01-11T21:36:14.9993841Z test_shuffle (__main__.TestDataLoaderPersistentWorkers) ... 
ok (0.062s) 2023-01-11T21:36:14.9994160Z test_shuffle_batch (__main__.TestDataLoaderPersistentWorkers) ... ok (0.053s) 2023-01-11T21:36:14.9994504Z test_shuffle_batch_none (__main__.TestDataLoaderPersistentWorkers) ... ok (0.055s) 2023-01-11T21:36:14.9994852Z test_shuffle_batch_workers (__main__.TestDataLoaderPersistentWorkers) ... ok (0.158s) 2023-01-11T21:36:14.9995213Z test_shuffle_batch_workers_prefetch (__main__.TestDataLoaderPersistentWorkers) ... ok (0.154s) 2023-01-11T21:36:14.9995585Z test_shuffle_pin_memory (__main__.TestDataLoaderPersistentWorkers) ... skip: CUDA unavailable (0.001s) 2023-01-11T21:36:14.9995966Z test_shuffle_reproducibility (__main__.TestDataLoaderPersistentWorkers) ... ok (0.271s) 2023-01-11T21:36:14.9996314Z test_shuffle_workers (__main__.TestDataLoaderPersistentWorkers) ... ok (0.214s) 2023-01-11T21:36:14.9996899Z test_timeout (__main__.TestDataLoaderPersistentWorkers) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T21:36:14.9997334Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T21:36:14.9997770Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T21:36:14.9998116Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T21:36:14.9998339Z ok (2.721s) 2023-01-11T21:36:14.9998658Z test_typing (__main__.TestDataLoaderPersistentWorkers) ... ok (0.001s) 2023-01-11T21:36:14.9998989Z test_worker_init_fn (__main__.TestDataLoaderPersistentWorkers) ... ok (0.050s) 2023-01-11T21:36:14.9999325Z test_worker_seed (__main__.TestDataLoaderPersistentWorkers) ... ok (0.099s) 2023-01-11T21:36:14.9999669Z test_worker_seed_reproducibility (__main__.TestDataLoaderPersistentWorkers) ... ok (0.186s) 2023-01-11T21:36:15.0000069Z test_incomplete_fractional_splits (__main__.TestDatasetRandomSplit) ... ok (0.001s) 2023-01-11T21:36:15.0000410Z test_lengths_must_equal_dataset_size (__main__.TestDatasetRandomSplit) ... ok (0.001s) 2023-01-11T21:36:15.0000724Z test_slicing_of_subset_of_dataset (__main__.TestDatasetRandomSplit) ... ok (0.002s) 2023-01-11T21:36:15.0001049Z test_slicing_of_subset_of_subset (__main__.TestDatasetRandomSplit) ... ok (0.002s) 2023-01-11T21:36:15.0001374Z test_splits_are_mutually_exclusive (__main__.TestDatasetRandomSplit) ... ok (0.001s) 2023-01-11T21:36:15.0001691Z test_splits_generator (__main__.TestDatasetRandomSplit) ... ok (0.002s) 2023-01-11T21:36:15.0001993Z test_splits_have_correct_size (__main__.TestDatasetRandomSplit) ... ok (0.002s) 2023-01-11T21:36:15.0002300Z test_splits_indexing_type (__main__.TestDatasetRandomSplit) 2023-01-11T21:36:15.0002567Z Indices generated by random_split ... ok (0.003s) 2023-01-11T21:36:15.0002843Z test_splits_reproducibility (__main__.TestDatasetRandomSplit) ... ok (0.007s) 2023-01-11T21:36:15.0003174Z test_pin_memory (__main__.TestDictDataLoader) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:36:15.0003501Z test_pin_memory_device (__main__.TestDictDataLoader) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:36:15.0003846Z test_pin_memory_with_only_device (__main__.TestDictDataLoader) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:36:15.0004159Z test_sequential_batch (__main__.TestDictDataLoader) ... ok (0.037s) 2023-01-11T21:36:15.0005051Z test_ind_worker_queue (__main__.TestIndividualWorkerQueue) ... 
skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/68643 for platform(s) macos, linux, win. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.001s) 2023-01-11T21:36:15.0005686Z test_dataloader_with_namedtuple (__main__.TestNamedTupleDataLoader) ... ok (0.002s) 2023-01-11T21:36:15.0006015Z test_set_affinity_in_worker_init (__main__.TestSetAffinity) ... ok (0.053s) 2023-01-11T21:36:15.0006332Z test_shuffle_pin_memory (__main__.TestStringDataLoader) ... skip: CUDA unavailable (0.001s) 2023-01-11T21:36:15.0006637Z test_getitem (__main__.TestTensorDataset) ... ok (0.005s) 2023-01-11T21:36:15.0006903Z test_getitem_1d (__main__.TestTensorDataset) ... ok (0.004s) 2023-01-11T21:36:15.0007151Z test_len (__main__.TestTensorDataset) ... ok (0.001s) 2023-01-11T21:36:15.0007419Z test_many_tensors (__main__.TestTensorDataset) ... ok (0.004s) 2023-01-11T21:36:15.0007694Z test_single_tensor (__main__.TestTensorDataset) ... ok (0.001s) 2023-01-11T21:36:15.0007851Z 2023-01-11T21:36:15.0008059Z ---------------------------------------------------------------------- 2023-01-11T21:36:15.0008291Z Ran 164 tests in 86.915s 2023-01-11T21:36:15.0008405Z 2023-01-11T21:36:15.0008479Z OK (skipped=15) 2023-01-11T21:36:15.0008585Z 2023-01-11T21:36:15.0008674Z Generating XML reports... 2023-01-11T21:36:15.0009129Z Generated XML report: test-reports/python-unittest/test_dataloader/TEST-TestConcatDataset-20230111213447.xml 2023-01-11T21:36:15.0009650Z Generated XML report: test-reports/python-unittest/test_dataloader/TEST-TestDataLoader-20230111213447.xml 2023-01-11T21:36:15.0010217Z Generated XML report: test-reports/python-unittest/test_dataloader/TEST-TestDataLoaderPersistentWorkers-20230111213447.xml 2023-01-11T21:36:15.0010792Z Generated XML report: test-reports/python-unittest/test_dataloader/TEST-TestDatasetRandomSplit-20230111213447.xml 2023-01-11T21:36:15.0011314Z Generated XML report: test-reports/python-unittest/test_dataloader/TEST-TestDictDataLoader-20230111213447.xml 2023-01-11T21:36:15.0011858Z Generated XML report: test-reports/python-unittest/test_dataloader/TEST-TestNamedTupleDataLoader-20230111213447.xml 2023-01-11T21:36:15.0012381Z Generated XML report: test-reports/python-unittest/test_dataloader/TEST-TestSetAffinity-20230111213447.xml 2023-01-11T21:36:15.0012996Z Generated XML report: test-reports/python-unittest/test_dataloader/TEST-TestTensorDataset-20230111213447.xml 2023-01-11T21:36:15.0013559Z Generated XML report: test-reports/python-unittest/test_dataloader/TEST-IntegrationTestDataLoaderDataPipe-20230111213447.xml 2023-01-11T21:36:15.0014130Z Generated XML report: test-reports/python-unittest/test_dataloader/TEST-TestConvAfterFork-20230111213447.xml 2023-01-11T21:36:15.0014641Z Generated XML report: test-reports/python-unittest/test_dataloader/TEST-TestCustomPinFn-20230111213447.xml 2023-01-11T21:36:15.0015180Z Generated XML report: test-reports/python-unittest/test_dataloader/TEST-TestIndividualWorkerQueue-20230111213447.xml 2023-01-11T21:36:15.0015712Z Generated XML report: test-reports/python-unittest/test_dataloader/TEST-TestStringDataLoader-20230111213447.xml 2023-01-11T21:36:15.0015950Z 2023-01-11T21:36:15.0016323Z ##[endgroup] 2023-01-11T21:36:15.0016711Z FINISHED PRINTING LOG FILE of test_dataloader (/var/lib/jenkins/workspace/test/test-reports/test_dataloader_itutkeds) 2023-01-11T21:36:15.0016932Z 
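The TestDataLoaderPersistentWorkers cases above exercise the same DataLoader scenarios with persistent_workers=True, which keeps worker processes alive across epochs instead of respawning them each time the iterator is recreated. A minimal, illustrative sketch of the flag (not the test code; the dataset and sizes are made up):

import torch
from torch.utils.data import DataLoader, TensorDataset

ds = TensorDataset(torch.arange(16, dtype=torch.float32))

# persistent_workers=True requires num_workers > 0; the two workers started
# for the first epoch are reused for every subsequent epoch.
loader = DataLoader(ds, batch_size=4, num_workers=2, persistent_workers=True)

for epoch in range(3):
    for (batch,) in loader:
        pass  # a real training step would go here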
2023-01-11T21:36:16.9499687Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:36:17.0352570Z Ignoring disabled issues: [] 2023-01-11T21:36:17.0504339Z Running test_datapipe ... [2023-01-11 21:36:17.050121] 2023-01-11T21:36:17.0506426Z Executing ['/opt/conda/bin/python', '-bb', 'test_datapipe.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:36:17.050435] 2023-01-11T21:36:19.4561049Z 2023-01-11T21:36:19.4561606Z Expand the folded group to see the log file of test_datapipe 2023-01-11T21:36:19.4562677Z ##[group]PRINTING LOG FILE of test_datapipe (/var/lib/jenkins/workspace/test/test-reports/test_datapipe_fixg6u32) 2023-01-11T21:36:19.4563026Z 2023-01-11T21:36:19.4563268Z Running tests... 2023-01-11T21:36:19.4563954Z ---------------------------------------------------------------------- 2023-01-11T21:36:19.4564807Z test_basic_capture (__main__.TestCaptureDataFrame) ... Test results will be stored in test-reports/python-unittest/test_datapipe 2023-01-11T21:36:19.4565430Z skip: no dataframes (pandas) (0.000s) 2023-01-11T21:36:19.4565938Z test_circular_serialization_with_dill (__main__.TestCircularSerialization) ... skip: no dill (0.001s) 2023-01-11T21:36:19.4567126Z test_circular_serialization_with_pickle (__main__.TestCircularSerialization) ... /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/combining.py:297: UserWarning: Some child DataPipes are not exhausted when __iter__ is called. We are resetting the buffer and each child DataPipe will read from the start again. 2023-01-11T21:36:19.4567718Z warnings.warn("Some child DataPipes are not exhausted when __iter__ is called. We are resetting " 2023-01-11T21:36:19.4567974Z ok (0.248s) 2023-01-11T21:36:19.4568187Z test_as_string (__main__.TestDataChunk) ... ok (0.001s) 2023-01-11T21:36:19.4568441Z test_getitem (__main__.TestDataChunk) ... ok (0.001s) 2023-01-11T21:36:19.4568672Z test_iter (__main__.TestDataChunk) ... ok (0.000s) 2023-01-11T21:36:19.4569097Z test_len (__main__.TestDataChunk) ... ok (0.000s) 2023-01-11T21:36:19.4569352Z test_random_shuffle (__main__.TestDataChunk) ... ok (0.001s) 2023-01-11T21:36:19.4569594Z test_reverse (__main__.TestDataChunk) ... ok (0.001s) 2023-01-11T21:36:19.4569840Z test_sort (__main__.TestDataChunk) ... ok (0.001s) 2023-01-11T21:36:19.4570132Z test_batch (__main__.TestDataFramesPipes) ... skip: no dataframes (pandas) (0.000s) 2023-01-11T21:36:19.4570456Z test_capture (__main__.TestDataFramesPipes) ... skip: no dataframes (pandas) (0.000s) 2023-01-11T21:36:19.4570795Z test_collate (__main__.TestDataFramesPipes) ... skip: no dataframes (pandas) (0.001s) 2023-01-11T21:36:19.4571126Z test_filter (__main__.TestDataFramesPipes) ... skip: no dataframes (pandas) (0.000s) 2023-01-11T21:36:19.4571451Z test_shuffle (__main__.TestDataFramesPipes) ... skip: no dataframes (pandas) (0.000s) 2023-01-11T21:36:19.4571767Z test_unbatch (__main__.TestDataFramesPipes) ... skip: no dataframes (pandas) (0.000s) 2023-01-11T21:36:19.4572168Z test_batch_iterdatapipe (__main__.TestFunctionalIterDataPipe) ... ok (0.003s) 2023-01-11T21:36:19.4572908Z test_collate_iterdatapipe (__main__.TestFunctionalIterDataPipe) ... /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/utils/common.py:137: UserWarning: Local function is not supported by pickle, please use regular python function or functools.partial instead. 
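The common.py:137 warning above, and the identical ones further down for the demux, map and graph tests, is torch pointing out that a lambda or locally defined function handed to a DataPipe operation cannot be pickled, and suggesting a regular module-level function or functools.partial instead. A minimal sketch of the recommended pattern; scale and its factor argument are made-up names, not from the test file:

from functools import partial
from torch.utils.data.datapipes.iter import IterableWrapper

def scale(x, factor):
    # Module-level function: picklable, unlike a lambda or a nested closure.
    return x * factor

dp = IterableWrapper(range(5)).map(partial(scale, factor=10))
print(list(dp))  # [0, 10, 20, 30, 40]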
2023-01-11T21:36:19.4573364Z warnings.warn(
2023-01-11T21:36:19.4573526Z ok (0.004s)
2023-01-11T21:36:19.4573781Z test_concat_iterdatapipe (__main__.TestFunctionalIterDataPipe) ... ok (0.003s)
2023-01-11T21:36:19.4574510Z test_demux_iterdatapipe (__main__.TestFunctionalIterDataPipe) ... /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/utils/common.py:137: UserWarning: Local function is not supported by pickle, please use regular python function or functools.partial instead.
2023-01-11T21:36:19.4574943Z warnings.warn(
2023-01-11T21:36:19.4577787Z ok (0.008s)
2023-01-11T21:36:19.4578022Z test_filter_datapipe (__main__.TestFunctionalIterDataPipe) ... ok (0.005s)
2023-01-11T21:36:19.4578351Z test_fork_iterdatapipe (__main__.TestFunctionalIterDataPipe) ... ok (0.010s)
2023-01-11T21:36:19.4578697Z test_iterable_wrapper_datapipe (__main__.TestFunctionalIterDataPipe) ... ok (0.002s)
2023-01-11T21:36:19.4579039Z test_map_dict_with_col_iterdatapipe (__main__.TestFunctionalIterDataPipe) ... ok (0.016s)
2023-01-11T21:36:19.4579770Z test_map_iterdatapipe (__main__.TestFunctionalIterDataPipe) ... /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/utils/common.py:137: UserWarning: Local function is not supported by pickle, please use regular python function or functools.partial instead.
2023-01-11T21:36:19.4580216Z warnings.warn(
2023-01-11T21:36:19.4580385Z ok (0.005s)
2023-01-11T21:36:19.4580645Z test_map_tuple_list_with_col_iterdatapipe (__main__.TestFunctionalIterDataPipe) ... ok (0.031s)
2023-01-11T21:36:19.4580992Z test_mux_iterdatapipe (__main__.TestFunctionalIterDataPipe) ... ok (0.002s)
2023-01-11T21:36:19.4581328Z test_sampler_iterdatapipe (__main__.TestFunctionalIterDataPipe) ... ok (0.001s)
2023-01-11T21:36:19.4582088Z test_serializable (__main__.TestFunctionalIterDataPipe) ... /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/combining.py:297: UserWarning: Some child DataPipes are not exhausted when __iter__ is called. We are resetting the buffer and each child DataPipe will read from the start again.
2023-01-11T21:36:19.4584077Z warnings.warn("Some child DataPipes are not exhausted when __iter__ is called. 
We are resetting " 2023-01-11T21:36:19.4584332Z ok (0.038s) 2023-01-11T21:36:19.4584564Z test_serializable_with_dill (__main__.TestFunctionalIterDataPipe) 2023-01-11T21:36:19.4584924Z Only for DataPipes that take in a function as argument ... ok (0.020s) 2023-01-11T21:36:19.4585248Z test_shuffler_iterdatapipe (__main__.TestFunctionalIterDataPipe) ... ok (0.005s) 2023-01-11T21:36:19.4585582Z test_stream_reader_iterdatapipe (__main__.TestFunctionalIterDataPipe) ... ok (0.001s) 2023-01-11T21:36:19.4585931Z test_unbatch_iterdatapipe (__main__.TestFunctionalIterDataPipe) ... ok (0.004s) 2023-01-11T21:36:19.4586264Z test_zip_iterdatapipe (__main__.TestFunctionalIterDataPipe) ... ok (0.002s) 2023-01-11T21:36:19.4586600Z test_batch_mapdatapipe (__main__.TestFunctionalMapDataPipe) ... ok (0.002s) 2023-01-11T21:36:19.4586913Z test_concat_mapdatapipe (__main__.TestFunctionalMapDataPipe) ... ok (0.002s) 2023-01-11T21:36:19.4587635Z test_map_mapdatapipe (__main__.TestFunctionalMapDataPipe) ... /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/utils/common.py:137: UserWarning: Local function is not supported by pickle, please use regular python function or functools.partial instead. 2023-01-11T21:36:19.4588087Z warnings.warn( 2023-01-11T21:36:19.4588247Z ok (0.003s) 2023-01-11T21:36:19.4588506Z test_sequence_wrapper_datapipe (__main__.TestFunctionalMapDataPipe) ... ok (0.002s) 2023-01-11T21:36:19.4588836Z test_serializable (__main__.TestFunctionalMapDataPipe) ... ok (0.014s) 2023-01-11T21:36:19.4589148Z test_serializable_with_dill (__main__.TestFunctionalMapDataPipe) 2023-01-11T21:36:19.4589437Z Only for DataPipes that take in a function as argument ... ok (0.009s) 2023-01-11T21:36:19.4589752Z test_shuffler_mapdatapipe (__main__.TestFunctionalMapDataPipe) ... ok (0.006s) 2023-01-11T21:36:19.4590083Z test_zip_mapdatapipe (__main__.TestFunctionalMapDataPipe) ... ok (0.002s) 2023-01-11T21:36:19.4590741Z test_simple_traverse (__main__.TestGraph) ... /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/utils/common.py:137: UserWarning: Local function is not supported by pickle, please use regular python function or functools.partial instead. 2023-01-11T21:36:19.4591159Z warnings.warn( 2023-01-11T21:36:19.4591332Z ok (0.002s) 2023-01-11T21:36:19.4591562Z test_traverse_circular_datapipe (__main__.TestGraph) ... ok (0.001s) 2023-01-11T21:36:19.4591824Z test_traverse_forked (__main__.TestGraph) ... ok (0.002s) 2023-01-11T21:36:19.4592093Z test_traverse_mapdatapipe (__main__.TestGraph) ... ok (0.001s) 2023-01-11T21:36:19.4592366Z test_traverse_mixdatapipe (__main__.TestGraph) ... ok (0.001s) 2023-01-11T21:36:19.4592700Z test_traverse_unhashable_datapipe (__main__.TestGraph) ... ok (0.001s) 2023-01-11T21:36:19.4593075Z test_iterdatapipe_sample_yielded_generator_function (__main__.TestIterDataPipeCountSampleYielded) ... ok (0.001s) 2023-01-11T21:36:19.4593529Z test_iterdatapipe_sample_yielded_generator_function_exception (__main__.TestIterDataPipeCountSampleYielded) ... ok (0.002s) 2023-01-11T21:36:19.4593959Z test_iterdatapipe_sample_yielded_next (__main__.TestIterDataPipeCountSampleYielded) ... ok (0.001s) 2023-01-11T21:36:19.4594363Z test_iterdatapipe_sample_yielded_next_exception (__main__.TestIterDataPipeCountSampleYielded) ... ok (0.001s) 2023-01-11T21:36:19.4594847Z test_iterdatapipe_sample_yielded_return_self (__main__.TestIterDataPipeCountSampleYielded) ... 
ok (0.001s) 2023-01-11T21:36:19.4595254Z test_simple_snapshot_custom_non_generator (__main__.TestIterDataPipeGraphFastForward) ... ok (0.001s) 2023-01-11T21:36:19.4595645Z test_simple_snapshot_custom_self_next (__main__.TestIterDataPipeGraphFastForward) ... ok (0.001s) 2023-01-11T21:36:19.4596006Z test_simple_snapshot_graph (__main__.TestIterDataPipeGraphFastForward) ... ok (0.012s) 2023-01-11T21:36:19.4596846Z test_simple_snapshot_graph_repeated (__main__.TestIterDataPipeGraphFastForward) ... /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/combining.py:297: UserWarning: Some child DataPipes are not exhausted when __iter__ is called. We are resetting the buffer and each child DataPipe will read from the start again. 2023-01-11T21:36:19.4597488Z warnings.warn("Some child DataPipes are not exhausted when __iter__ is called. We are resetting " 2023-01-11T21:36:19.4597748Z ok (0.005s) 2023-01-11T21:36:19.4598032Z test_simple_snapshot_graph_with_serialization (__main__.TestIterDataPipeGraphFastForward) ... ok (0.010s) 2023-01-11T21:36:19.4598423Z test_iterdatapipe_singleton_buggy (__main__.TestIterDataPipeSingletonConstraint) 2023-01-11T21:36:19.4598873Z Buggy test case case where IterDataPipe's `__iter__` returns a new object, but also has ... ok (0.002s) 2023-01-11T21:36:19.4599247Z test_iterdatapipe_singleton_constraint_multiple_outputs (__main__.TestIterDataPipeSingletonConstraint) 2023-01-11T21:36:19.4599635Z Testing for the case where IterDataPipe has multiple child DataPipes as outputs. ... ok (0.006s) 2023-01-11T21:36:19.4600003Z test_iterdatapipe_singleton_generator (__main__.TestIterDataPipeSingletonConstraint) 2023-01-11T21:36:19.4600429Z Testing for the case where IterDataPipe's `__iter__` is a generator function. ... ok (0.003s) 2023-01-11T21:36:19.4600798Z test_iterdatapipe_singleton_new_object (__main__.TestIterDataPipeSingletonConstraint) 2023-01-11T21:36:19.4601237Z Testing for the case where IterDataPipe's `__iter__` isn't a generator nor returns `self`, ... ok (0.002s) 2023-01-11T21:36:19.4601603Z test_iterdatapipe_singleton_self_next (__main__.TestIterDataPipeSingletonConstraint) 2023-01-11T21:36:19.4602062Z Testing for the case where IterDataPipe's `__iter__` returns `self` and there is a `__next__` method ... ok (0.003s) 2023-01-11T21:36:19.4602801Z test_demux_mux_datapipe (__main__.TestIterableDataPipeBasic) ... /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/utils/common.py:137: UserWarning: Local function is not supported by pickle, please use regular python function or functools.partial instead. 2023-01-11T21:36:19.4603229Z warnings.warn( 2023-01-11T21:36:19.4603400Z ok (0.004s) 2023-01-11T21:36:19.4603660Z test_groupby_iterable_datapipe (__main__.TestIterableDataPipeBasic) ... ok (0.006s) 2023-01-11T21:36:19.4604011Z test_listdirfiles_iterable_datapipe (__main__.TestIterableDataPipeBasic) ... ok (0.003s) 2023-01-11T21:36:19.4604384Z test_listdirfilesdeterministic_iterable_datapipe (__main__.TestIterableDataPipeBasic) ... ok (0.002s) 2023-01-11T21:36:19.4604762Z test_map_with_col_file_handle_datapipe (__main__.TestIterableDataPipeBasic) ... ok (0.003s) 2023-01-11T21:36:19.4605123Z test_openfilesfromdisk_iterable_datapipe (__main__.TestIterableDataPipeBasic) ... ok (0.004s) 2023-01-11T21:36:19.4605474Z test_routeddecoder_iterable_datapipe (__main__.TestIterableDataPipeBasic) ... ok (0.004s) 2023-01-11T21:36:19.4605806Z test_spawn_lambdas_iter (__main__.TestSerialization) ... 
skip: no dill (0.000s) 2023-01-11T21:36:19.4606117Z test_spawn_lambdas_map (__main__.TestSerialization) ... skip: no dill (0.000s) 2023-01-11T21:36:19.4606406Z test_multi_sharding (__main__.TestSharding) ... ok (0.003s) 2023-01-11T21:36:19.4606658Z test_old_dataloader (__main__.TestSharding) ... ok (0.051s) 2023-01-11T21:36:19.4606970Z test_sharding_groups (__main__.TestSharding) ... ok (0.002s) 2023-01-11T21:36:19.4607234Z test_sharding_length (__main__.TestSharding) ... ok (0.002s) 2023-01-11T21:36:19.4607484Z test_simple_sharding (__main__.TestSharding) ... ok (0.003s) 2023-01-11T21:36:19.4607743Z test_api (__main__.TestStreamWrapper) ... ok (0.001s) 2023-01-11T21:36:19.4608001Z test_dir (__main__.TestStreamWrapper) ... ok (0.001s) 2023-01-11T21:36:19.4608246Z test_pickle (__main__.TestStreamWrapper) ... ok (0.002s) 2023-01-11T21:36:19.4608508Z test_repr (__main__.TestStreamWrapper) ... ok (0.002s) 2023-01-11T21:36:19.4608794Z test_compile_time (__main__.TestTyping) ... skip: TODO: Fix typing bug (0.002s) 2023-01-11T21:36:19.4609104Z test_construct_time (__main__.TestTyping) ... skip: TODO: Fix typing bug (0.001s) 2023-01-11T21:36:19.4609371Z test_isinstance (__main__.TestTyping) ... ok (0.001s) 2023-01-11T21:36:19.4609652Z test_issubinstance (__main__.TestTyping) ... skip: TODO: Fix typing bug (0.001s) 2023-01-11T21:36:19.4610007Z test_protocol (__main__.TestTyping) ... ok (0.001s) 2023-01-11T21:36:19.4610269Z test_reinforce (__main__.TestTyping) ... skip: TODO: Fix typing bug (0.001s) 2023-01-11T21:36:19.4610562Z test_runtime (__main__.TestTyping) ... skip: TODO: Fix typing bug (0.001s) 2023-01-11T21:36:19.4610853Z test_subtype (__main__.TestTyping) ... skip: TODO: Fix typing bug (0.001s) 2023-01-11T21:36:19.4611021Z 2023-01-11T21:36:19.4611229Z ---------------------------------------------------------------------- 2023-01-11T21:36:19.4611461Z Ran 89 tests in 0.622s 2023-01-11T21:36:19.4611575Z 2023-01-11T21:36:19.4611650Z OK (skipped=16) 2023-01-11T21:36:19.4611758Z 2023-01-11T21:36:19.4611843Z Generating XML reports... 
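Several cases in this group (TestCircularSerialization, test_serializable, the fast-forward tests) surface the combining.py:297 warning: when a forked IterDataPipe is iterated again while its child pipes are not yet exhausted, the shared buffer is reset and every child reads from the start of the source once more. A minimal sketch of how that arises, with made-up variable names and assuming the fork() behaviour the warning itself describes:

from torch.utils.data.datapipes.iter import IterableWrapper

source = IterableWrapper(range(5))
child_a, child_b = source.fork(num_instances=2)

# Read only part of child_a, leaving both children unexhausted ...
it = iter(child_a)
print(next(it), next(it))  # 0 1

# ... then start a fresh iteration: combining.py warns that the buffer is
# being reset, and both children read from the start of the source again.
print(list(child_a))  # [0, 1, 2, 3, 4]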
2023-01-11T21:36:19.4612275Z Generated XML report: test-reports/python-unittest/test_datapipe/TEST-TestCircularSerialization-20230111213618.xml 2023-01-11T21:36:19.4612809Z Generated XML report: test-reports/python-unittest/test_datapipe/TEST-TestDataChunk-20230111213618.xml 2023-01-11T21:36:19.4613350Z Generated XML report: test-reports/python-unittest/test_datapipe/TEST-TestFunctionalIterDataPipe-20230111213618.xml 2023-01-11T21:36:19.4613921Z Generated XML report: test-reports/python-unittest/test_datapipe/TEST-TestFunctionalMapDataPipe-20230111213618.xml 2023-01-11T21:36:19.4614426Z Generated XML report: test-reports/python-unittest/test_datapipe/TEST-TestGraph-20230111213618.xml 2023-01-11T21:36:19.4614980Z Generated XML report: test-reports/python-unittest/test_datapipe/TEST-TestIterDataPipeCountSampleYielded-20230111213618.xml 2023-01-11T21:36:19.4615596Z Generated XML report: test-reports/python-unittest/test_datapipe/TEST-TestIterDataPipeGraphFastForward-20230111213618.xml 2023-01-11T21:36:19.4616215Z Generated XML report: test-reports/python-unittest/test_datapipe/TEST-TestIterDataPipeSingletonConstraint-20230111213618.xml 2023-01-11T21:36:19.4616795Z Generated XML report: test-reports/python-unittest/test_datapipe/TEST-TestIterableDataPipeBasic-20230111213618.xml 2023-01-11T21:36:19.4617317Z Generated XML report: test-reports/python-unittest/test_datapipe/TEST-TestSharding-20230111213618.xml 2023-01-11T21:36:19.4617816Z Generated XML report: test-reports/python-unittest/test_datapipe/TEST-TestStreamWrapper-20230111213618.xml 2023-01-11T21:36:19.4618308Z Generated XML report: test-reports/python-unittest/test_datapipe/TEST-TestTyping-20230111213618.xml 2023-01-11T21:36:19.4618796Z Generated XML report: test-reports/python-unittest/test_datapipe/TEST-TestCaptureDataFrame-20230111213618.xml 2023-01-11T21:36:19.4619327Z Generated XML report: test-reports/python-unittest/test_datapipe/TEST-TestDataFramesPipes-20230111213618.xml 2023-01-11T21:36:19.4619845Z Generated XML report: test-reports/python-unittest/test_datapipe/TEST-TestSerialization-20230111213618.xml 2023-01-11T21:36:19.4620073Z 2023-01-11T21:36:19.4620378Z ##[endgroup] 2023-01-11T21:36:19.4620765Z FINISHED PRINTING LOG FILE of test_datapipe (/var/lib/jenkins/workspace/test/test-reports/test_datapipe_fixg6u32) 2023-01-11T21:36:19.4621028Z 2023-01-11T21:36:21.3856081Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:36:21.4712349Z Ignoring disabled issues: [] 2023-01-11T21:36:21.4865106Z Running test_decomp ... [2023-01-11 21:36:21.486156] 2023-01-11T21:36:21.4866934Z Executing ['/opt/conda/bin/python', '-bb', 'test_decomp.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:36:21.486434] 2023-01-11T21:36:24.1080777Z 2023-01-11T21:36:24.1081325Z Expand the folded group to see the log file of test_decomp 2023-01-11T21:36:24.1082192Z ##[group]PRINTING LOG FILE of test_decomp (/var/lib/jenkins/workspace/test/test-reports/test_decomp_ovlbxq8p) 2023-01-11T21:36:24.1082426Z 2023-01-11T21:36:24.1082502Z Running tests... 2023-01-11T21:36:24.1082885Z ---------------------------------------------------------------------- 2023-01-11T21:36:24.1083057Z 2023-01-11T21:36:24.1083251Z ---------------------------------------------------------------------- 2023-01-11T21:36:24.1083488Z Ran 0 tests in 0.000s 2023-01-11T21:36:24.1083830Z 2023-01-11T21:36:24.1083879Z OK 2023-01-11T21:36:24.1083968Z 2023-01-11T21:36:24.1084053Z Generating XML reports... 
2023-01-11T21:36:24.1084380Z Test results will be stored in test-reports/python-unittest/test_decomp 2023-01-11T21:36:24.1084556Z 2023-01-11T21:36:24.1084774Z ##[endgroup] 2023-01-11T21:36:24.1085151Z FINISHED PRINTING LOG FILE of test_decomp (/var/lib/jenkins/workspace/test/test-reports/test_decomp_ovlbxq8p) 2023-01-11T21:36:24.1085361Z 2023-01-11T21:36:26.0309828Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:36:26.1143667Z Ignoring disabled issues: [] 2023-01-11T21:36:26.1295554Z Running test_deploy ... [2023-01-11 21:36:26.129209] 2023-01-11T21:36:26.1297620Z Executing ['/opt/conda/bin/python', '-bb', '-m', 'pytest', 'test_deploy.py', '-v'] ... [2023-01-11 21:36:26.129483] 2023-01-11T21:36:28.9085575Z 2023-01-11T21:36:28.9086387Z Expand the folded group to see the log file of test_deploy 2023-01-11T21:36:28.9087202Z ##[group]PRINTING LOG FILE of test_deploy (/var/lib/jenkins/workspace/test/test-reports/test_deploy_r5le7pbk) 2023-01-11T21:36:28.9087522Z ============================= test session starts ============================== 2023-01-11T21:36:28.9088005Z platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0 -- /opt/conda/bin/python 2023-01-11T21:36:28.9088276Z cachedir: .pytest_cache 2023-01-11T21:36:28.9088715Z hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/var/lib/jenkins/workspace/test/.hypothesis/examples') 2023-01-11T21:36:28.9089077Z rootdir: /var/lib/jenkins/workspace, configfile: pytest.ini 2023-01-11T21:36:28.9089509Z plugins: hypothesis-5.35.1, flakefinder-1.1.0, rerunfailures-10.3, shard-0.1.2, xdist-3.1.0, xdoctest-1.1.0 2023-01-11T21:36:28.9089809Z collecting ... collected 1 item 2023-01-11T21:36:28.9090087Z Running 1 items in this shard: test/test_deploy.py::TestFreezer::test_compile_string 2023-01-11T21:36:28.9090256Z 2023-01-11T21:36:28.9090403Z test_deploy.py::TestFreezer::test_compile_string PASSED [100%] 2023-01-11T21:36:28.9090574Z 2023-01-11T21:36:28.9090682Z ============================== 1 passed in 1.48s =============================== 2023-01-11T21:36:28.9090816Z 2023-01-11T21:36:28.9091033Z ##[endgroup] 2023-01-11T21:36:28.9091391Z FINISHED PRINTING LOG FILE of test_deploy (/var/lib/jenkins/workspace/test/test-reports/test_deploy_r5le7pbk) 2023-01-11T21:36:28.9091599Z 2023-01-11T21:36:30.8563793Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:36:30.9613623Z Ignoring disabled issues: [] 2023-01-11T21:36:30.9804408Z Running test_dlpack ... [2023-01-11 21:36:30.979941] 2023-01-11T21:36:30.9805896Z Executing ['/opt/conda/bin/python', '-bb', 'test_dlpack.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:36:30.980290] 2023-01-11T21:36:32.9305133Z 2023-01-11T21:36:32.9305789Z Expand the folded group to see the log file of test_dlpack 2023-01-11T21:36:32.9307191Z ##[group]PRINTING LOG FILE of test_dlpack (/var/lib/jenkins/workspace/test/test-reports/test_dlpack_lm9sk8oi) 2023-01-11T21:36:32.9307921Z 2023-01-11T21:36:32.9308039Z Running tests... 2023-01-11T21:36:32.9308645Z ---------------------------------------------------------------------- 2023-01-11T21:36:32.9308915Z 2023-01-11T21:36:32.9309210Z ---------------------------------------------------------------------- 2023-01-11T21:36:32.9309580Z Ran 0 tests in 0.000s 2023-01-11T21:36:32.9309744Z 2023-01-11T21:36:32.9309834Z OK 2023-01-11T21:36:32.9309970Z 2023-01-11T21:36:32.9310100Z Generating XML reports... 
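The test_dlpack group above reports "Ran 0 tests" on this shard, but the file it belongs to exercises the DLPack tensor-interchange API; its basic CPU round trip looks roughly like the sketch below (illustrative only, tensor values made up):

import torch
from torch.utils.dlpack import from_dlpack, to_dlpack

src = torch.arange(6, dtype=torch.float32)

# Export to a DLPack capsule and import it back; no copy is made,
# so the re-imported tensor aliases the original storage.
capsule = to_dlpack(src)
dst = from_dlpack(capsule)

dst[0] = 42.0
print(src[0].item())  # 42.0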
2023-01-11T21:36:32.9310615Z Test results will be stored in test-reports/python-unittest/test_dlpack 2023-01-11T21:36:32.9310884Z 2023-01-11T21:36:32.9311273Z ##[endgroup] 2023-01-11T21:36:32.9311885Z FINISHED PRINTING LOG FILE of test_dlpack (/var/lib/jenkins/workspace/test/test-reports/test_dlpack_lm9sk8oi) 2023-01-11T21:36:32.9312191Z 2023-01-11T21:36:34.8349765Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:36:34.9253499Z Ignoring disabled issues: [] 2023-01-11T21:36:34.9407622Z Running test_dynamic_shapes ... [2023-01-11 21:36:34.940370] 2023-01-11T21:36:34.9409228Z Executing ['/opt/conda/bin/python', '-bb', 'test_dynamic_shapes.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:36:34.940677] 2023-01-11T21:36:37.8049131Z 2023-01-11T21:36:37.8049699Z Expand the folded group to see the log file of test_dynamic_shapes 2023-01-11T21:36:37.8050789Z ##[group]PRINTING LOG FILE of test_dynamic_shapes (/var/lib/jenkins/workspace/test/test-reports/test_dynamic_shapes_ezsjxa5o) 2023-01-11T21:36:37.8051217Z 2023-01-11T21:36:37.8051920Z Running tests... 2023-01-11T21:36:37.8052557Z ---------------------------------------------------------------------- 2023-01-11T21:36:37.8053211Z Test results will be stored in test-reports/python-unittest/test_dynamic_shapes 2023-01-11T21:36:37.8053699Z test_arith_ops (__main__.TestPySymInt) ... ok (0.508s) 2023-01-11T21:36:37.8054125Z test_aten_ops (__main__.TestPySymInt) ... ok (0.021s) 2023-01-11T21:36:37.8054545Z test_binary (__main__.TestPySymInt) ... ok (0.025s) 2023-01-11T21:36:37.8054968Z test_fx_trace_intlist (__main__.TestPySymInt) ... ok (0.010s) 2023-01-11T21:36:37.8055368Z test_guard_int (__main__.TestPySymInt) ... ok (0.003s) 2023-01-11T21:36:37.8055780Z test_int_conversion (__main__.TestPySymInt) ... ok (0.001s) 2023-01-11T21:36:37.8056224Z test_int_to_float (__main__.TestPySymInt) ... ok (0.002s) 2023-01-11T21:36:37.8056667Z test_meta_symint (__main__.TestPySymInt) ... ok (0.003s) 2023-01-11T21:36:37.8057065Z test_numel (__main__.TestPySymInt) ... ok (0.003s) 2023-01-11T21:36:37.8057498Z test_print_readable_with_symints (__main__.TestPySymInt) ... ok (0.090s) 2023-01-11T21:36:37.8057984Z test_reverse_arith_ops (__main__.TestPySymInt) ... ok (0.065s) 2023-01-11T21:36:37.8058385Z test_roundtrip (__main__.TestPySymInt) ... ok (0.025s) 2023-01-11T21:36:37.8058817Z test_size_expressions (__main__.TestPySymInt) ... ok (0.007s) 2023-01-11T21:36:37.8059283Z test_stride (__main__.TestPySymInt) ... ok (0.003s) 2023-01-11T21:36:37.8059708Z test_sym_floor (__main__.TestPySymInt) ... ok (0.069s) 2023-01-11T21:36:37.8060147Z test_sym_int (__main__.TestPySymInt) ... ok (0.020s) 2023-01-11T21:36:37.8060585Z test_sym_sqrt (__main__.TestPySymInt) ... ok (0.040s) 2023-01-11T21:36:37.8060977Z test_symint_args (__main__.TestPySymInt) ... ok (0.031s) 2023-01-11T21:36:37.8061415Z test_symint_as_scalar (__main__.TestPySymInt) ... ok (0.001s) 2023-01-11T21:36:37.8061835Z test_symint_vargs (__main__.TestPySymInt) ... ok (0.024s) 2023-01-11T21:36:37.8062657Z test_method_fn_add_first_type_float_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: add is not a float magic method (0.001s) 2023-01-11T21:36:37.8063390Z test_method_fn_add_first_type_float_second_type_int (__main__.TestSymNumberMagicMethods) ... skip: add is not a float magic method (0.001s) 2023-01-11T21:36:37.8064061Z test_method_fn_add_first_type_int_second_type_float (__main__.TestSymNumberMagicMethods) ... 
ok (0.002s) 2023-01-11T21:36:37.8064931Z test_method_fn_add_first_type_int_second_type_int (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8065664Z test_method_fn_ceil_first_type_float_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: ceil is not a float magic method (0.001s) 2023-01-11T21:36:37.8066372Z test_method_fn_ceil_first_type_float_second_type_int (__main__.TestSymNumberMagicMethods) ... skip: ceil is not a float magic method (0.001s) 2023-01-11T21:36:37.8067131Z test_method_fn_ceil_first_type_int_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: ceil is unary and already tested (0.001s) 2023-01-11T21:36:37.8067689Z test_method_fn_ceil_first_type_int_second_type_int (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8068095Z test_method_fn_eq_first_type_float_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: eq is not a float magic method (0.001s) 2023-01-11T21:36:37.8068628Z test_method_fn_eq_first_type_float_second_type_int (__main__.TestSymNumberMagicMethods) ... skip: eq is not a float magic method (0.001s) 2023-01-11T21:36:37.8069015Z test_method_fn_eq_first_type_int_second_type_float (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8069387Z test_method_fn_eq_first_type_int_second_type_int (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8069796Z test_method_fn_floor_first_type_float_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: floor is not a float magic method (0.001s) 2023-01-11T21:36:37.8070242Z test_method_fn_floor_first_type_float_second_type_int (__main__.TestSymNumberMagicMethods) ... skip: floor is not a float magic method (0.001s) 2023-01-11T21:36:37.8070673Z test_method_fn_floor_first_type_int_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: floor is unary and already tested (0.001s) 2023-01-11T21:36:37.8071084Z test_method_fn_floor_first_type_int_second_type_int (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8071505Z test_method_fn_floordiv_first_type_float_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: floordiv is not a float magic method (0.001s) 2023-01-11T21:36:37.8071960Z test_method_fn_floordiv_first_type_float_second_type_int (__main__.TestSymNumberMagicMethods) ... skip: floordiv is not a float magic method (0.001s) 2023-01-11T21:36:37.8072371Z test_method_fn_floordiv_first_type_int_second_type_float (__main__.TestSymNumberMagicMethods) ... ok (0.003s) 2023-01-11T21:36:37.8072823Z test_method_fn_floordiv_first_type_int_second_type_int (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8073235Z test_method_fn_ge_first_type_float_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: ge is not a float magic method (0.001s) 2023-01-11T21:36:37.8073678Z test_method_fn_ge_first_type_float_second_type_int (__main__.TestSymNumberMagicMethods) ... skip: ge is not a float magic method (0.001s) 2023-01-11T21:36:37.8074069Z test_method_fn_ge_first_type_int_second_type_float (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8074444Z test_method_fn_ge_first_type_int_second_type_int (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8074846Z test_method_fn_gt_first_type_float_second_type_float (__main__.TestSymNumberMagicMethods) ... 
skip: gt is not a float magic method (0.001s) 2023-01-11T21:36:37.8075278Z test_method_fn_gt_first_type_float_second_type_int (__main__.TestSymNumberMagicMethods) ... skip: gt is not a float magic method (0.001s) 2023-01-11T21:36:37.8075660Z test_method_fn_gt_first_type_int_second_type_float (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8076030Z test_method_fn_gt_first_type_int_second_type_int (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8076440Z test_method_fn_le_first_type_float_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: le is not a float magic method (0.001s) 2023-01-11T21:36:37.8076915Z test_method_fn_le_first_type_float_second_type_int (__main__.TestSymNumberMagicMethods) ... skip: le is not a float magic method (0.001s) 2023-01-11T21:36:37.8077295Z test_method_fn_le_first_type_int_second_type_float (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8077665Z test_method_fn_le_first_type_int_second_type_int (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8078065Z test_method_fn_lt_first_type_float_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: lt is not a float magic method (0.001s) 2023-01-11T21:36:37.8078501Z test_method_fn_lt_first_type_float_second_type_int (__main__.TestSymNumberMagicMethods) ... skip: lt is not a float magic method (0.001s) 2023-01-11T21:36:37.8078884Z test_method_fn_lt_first_type_int_second_type_float (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8079285Z test_method_fn_lt_first_type_int_second_type_int (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8079696Z test_method_fn_max_first_type_float_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: max is not a float magic method (0.001s) 2023-01-11T21:36:37.8080130Z test_method_fn_max_first_type_float_second_type_int (__main__.TestSymNumberMagicMethods) ... skip: max is not a float magic method (0.001s) 2023-01-11T21:36:37.8080526Z test_method_fn_max_first_type_int_second_type_float (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8080898Z test_method_fn_max_first_type_int_second_type_int (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8081301Z test_method_fn_min_first_type_float_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: min is not a float magic method (0.001s) 2023-01-11T21:36:37.8081740Z test_method_fn_min_first_type_float_second_type_int (__main__.TestSymNumberMagicMethods) ... skip: min is not a float magic method (0.001s) 2023-01-11T21:36:37.8082136Z test_method_fn_min_first_type_int_second_type_float (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8082514Z test_method_fn_min_first_type_int_second_type_int (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8082924Z test_method_fn_mod_first_type_float_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: mod is not a float magic method (0.001s) 2023-01-11T21:36:37.8083358Z test_method_fn_mod_first_type_float_second_type_int (__main__.TestSymNumberMagicMethods) ... skip: mod is not a float magic method (0.001s) 2023-01-11T21:36:37.8083751Z test_method_fn_mod_first_type_int_second_type_float (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8084121Z test_method_fn_mod_first_type_int_second_type_int (__main__.TestSymNumberMagicMethods) ... 
ok (0.001s) 2023-01-11T21:36:37.8084532Z test_method_fn_mul_first_type_float_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: mul is not a float magic method (0.001s) 2023-01-11T21:36:37.8084958Z test_method_fn_mul_first_type_float_second_type_int (__main__.TestSymNumberMagicMethods) ... skip: mul is not a float magic method (0.001s) 2023-01-11T21:36:37.8085360Z test_method_fn_mul_first_type_int_second_type_float (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8085732Z test_method_fn_mul_first_type_int_second_type_int (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8086138Z test_method_fn_neg_first_type_float_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: neg is not a float magic method (0.001s) 2023-01-11T21:36:37.8086558Z test_method_fn_neg_first_type_float_second_type_int (__main__.TestSymNumberMagicMethods) ... skip: neg is not a float magic method (0.001s) 2023-01-11T21:36:37.8086993Z test_method_fn_neg_first_type_int_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: neg is unary and already tested (0.001s) 2023-01-11T21:36:37.8087433Z test_method_fn_neg_first_type_int_second_type_int (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8087841Z test_method_fn_pow_first_type_float_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: pow is not a float magic method (0.001s) 2023-01-11T21:36:37.8088267Z test_method_fn_pow_first_type_float_second_type_int (__main__.TestSymNumberMagicMethods) ... skip: pow is not a float magic method (0.001s) 2023-01-11T21:36:37.8088669Z test_method_fn_pow_first_type_int_second_type_float (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8089051Z test_method_fn_pow_first_type_int_second_type_int (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8089455Z test_method_fn_sub_first_type_float_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: sub is not a float magic method (0.001s) 2023-01-11T21:36:37.8089900Z test_method_fn_sub_first_type_float_second_type_int (__main__.TestSymNumberMagicMethods) ... skip: sub is not a float magic method (0.001s) 2023-01-11T21:36:37.8090301Z test_method_fn_sub_first_type_int_second_type_float (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8090669Z test_method_fn_sub_first_type_int_second_type_int (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8091088Z test_method_fn_sym_float_first_type_float_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: sym_float is not a float magic method (0.001s) 2023-01-11T21:36:37.8091535Z test_method_fn_sym_float_first_type_float_second_type_int (__main__.TestSymNumberMagicMethods) ... skip: sym_float is not a float magic method (0.001s) 2023-01-11T21:36:37.8091992Z test_method_fn_sym_float_first_type_int_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: sym_float is unary and already tested (0.001s) 2023-01-11T21:36:37.8092408Z test_method_fn_sym_float_first_type_int_second_type_int (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8092832Z test_method_fn_sym_sqrt_first_type_float_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: sym_sqrt is not a float magic method (0.001s) 2023-01-11T21:36:37.8093285Z test_method_fn_sym_sqrt_first_type_float_second_type_int (__main__.TestSymNumberMagicMethods) ... 
skip: sym_sqrt is not a float magic method (0.001s) 2023-01-11T21:36:37.8093722Z test_method_fn_sym_sqrt_first_type_int_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: sym_sqrt is unary and already tested (0.001s) 2023-01-11T21:36:37.8094130Z test_method_fn_sym_sqrt_first_type_int_second_type_int (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8094549Z test_method_fn_truediv_first_type_float_second_type_float (__main__.TestSymNumberMagicMethods) ... skip: truediv is not a float magic method (0.001s) 2023-01-11T21:36:37.8095007Z test_method_fn_truediv_first_type_float_second_type_int (__main__.TestSymNumberMagicMethods) ... skip: truediv is not a float magic method (0.001s) 2023-01-11T21:36:37.8095418Z test_method_fn_truediv_first_type_int_second_type_float (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8095800Z test_method_fn_truediv_first_type_int_second_type_int (__main__.TestSymNumberMagicMethods) ... ok (0.001s) 2023-01-11T21:36:37.8096001Z 2023-01-11T21:36:37.8096279Z ---------------------------------------------------------------------- 2023-01-11T21:36:37.8096512Z Ran 96 tests in 1.029s 2023-01-11T21:36:37.8096627Z 2023-01-11T21:36:37.8096699Z OK (skipped=43) 2023-01-11T21:36:37.8096807Z 2023-01-11T21:36:37.8096894Z Generating XML reports... 2023-01-11T21:36:37.8097317Z Generated XML report: test-reports/python-unittest/test_dynamic_shapes/TEST-TestPySymInt-20230111213636.xml 2023-01-11T21:36:37.8097861Z Generated XML report: test-reports/python-unittest/test_dynamic_shapes/TEST-TestSymNumberMagicMethods-20230111213636.xml 2023-01-11T21:36:37.8098121Z 2023-01-11T21:36:37.8098442Z ##[endgroup] 2023-01-11T21:36:37.8098843Z FINISHED PRINTING LOG FILE of test_dynamic_shapes (/var/lib/jenkins/workspace/test/test-reports/test_dynamic_shapes_ezsjxa5o) 2023-01-11T21:36:37.8099115Z 2023-01-11T21:36:39.7052091Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:36:39.7733273Z Ignoring disabled issues: [] 2023-01-11T21:36:39.7887253Z Running test_expanded_weights ... [2023-01-11 21:36:39.788452] 2023-01-11T21:36:39.7889421Z Executing ['/opt/conda/bin/python', '-bb', 'test_expanded_weights.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:36:39.788734] 2023-01-11T21:36:42.7764927Z 2023-01-11T21:36:42.7765461Z Expand the folded group to see the log file of test_expanded_weights 2023-01-11T21:36:42.7766649Z ##[group]PRINTING LOG FILE of test_expanded_weights (/var/lib/jenkins/workspace/test/test-reports/test_expanded_weights_1qdkv6ra) 2023-01-11T21:36:42.7767107Z 2023-01-11T21:36:42.7767240Z Running tests... 2023-01-11T21:36:42.7767912Z ---------------------------------------------------------------------- 2023-01-11T21:36:42.7768900Z Test results will be stored in test-reports/python-unittest/test_expanded_weights 2023-01-11T21:36:42.7769395Z test_Conv1d (__main__.TestExpandedWeightModule) ... ok (0.005s) 2023-01-11T21:36:42.7769956Z test_Conv1d_circular_stride2_pad2 (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7770620Z test_Conv1d_circular_stride2_pad2_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.042s) 2023-01-11T21:36:42.7771248Z test_Conv1d_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.004s) 2023-01-11T21:36:42.7771761Z test_Conv1d_pad1 (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7772307Z test_Conv1d_pad1_multiple_inputs (__main__.TestExpandedWeightModule) ... 
ok (0.007s) 2023-01-11T21:36:42.7772855Z test_Conv1d_pad1size1 (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7773444Z test_Conv1d_pad1size1_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7774038Z test_Conv1d_pad2 (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7774616Z test_Conv1d_pad2_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.004s) 2023-01-11T21:36:42.7775136Z test_Conv1d_pad2size1 (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7775852Z test_Conv1d_pad2size1_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7776477Z test_Conv1d_reflect_stride2_pad2 (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7777131Z test_Conv1d_reflect_stride2_pad2_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.011s) 2023-01-11T21:36:42.7777732Z test_Conv1d_replicate_stride2_pad2 (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7779320Z test_Conv1d_replicate_stride2_pad2_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.006s) 2023-01-11T21:36:42.7779706Z test_Conv1d_stride (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7780038Z test_Conv1d_stride_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.004s) 2023-01-11T21:36:42.7780576Z test_Conv1d_zero_batch (__main__.TestExpandedWeightModule) ... skip: Can't get per sample gradients when no batch dim or batch dim is 0 (0.001s) 2023-01-11T21:36:42.7781163Z test_Conv1d_zero_batch_multiple_inputs (__main__.TestExpandedWeightModule) ... skip: Can't get per sample gradients when no batch dim or batch dim is 0 (0.001s) 2023-01-11T21:36:42.7781565Z test_Conv1d_zeros_stride2_pad2 (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7781919Z test_Conv1d_zeros_stride2_pad2_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7782230Z test_Conv2d (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7782652Z test_Conv2d_circular_stride2_pad2 (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7783015Z test_Conv2d_circular_stride2_pad2_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.004s) 2023-01-11T21:36:42.7783491Z test_Conv2d_dilated (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7783806Z test_Conv2d_dilated_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.004s) 2023-01-11T21:36:42.7784140Z test_Conv2d_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.004s) 2023-01-11T21:36:42.7784459Z test_Conv2d_no_bias (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7784766Z test_Conv2d_no_bias_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7785085Z test_Conv2d_padding (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7785418Z test_Conv2d_padding_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.004s) 2023-01-11T21:36:42.7785758Z test_Conv2d_reflect_stride2_pad2 (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7786150Z test_Conv2d_reflect_stride2_pad2_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7786511Z test_Conv2d_replicate_stride2_pad2 (__main__.TestExpandedWeightModule) ... 
ok (0.003s) 2023-01-11T21:36:42.7786867Z test_Conv2d_replicate_stride2_pad2_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7787194Z test_Conv2d_strided (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7787517Z test_Conv2d_strided_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.004s) 2023-01-11T21:36:42.7788031Z test_Conv2d_zero_batch (__main__.TestExpandedWeightModule) ... skip: Can't get per sample gradients when no batch dim or batch dim is 0 (0.001s) 2023-01-11T21:36:42.7788614Z test_Conv2d_zero_batch_multiple_inputs (__main__.TestExpandedWeightModule) ... skip: Can't get per sample gradients when no batch dim or batch dim is 0 (0.001s) 2023-01-11T21:36:42.7789009Z test_Conv2d_zeros_stride2_pad2 (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7789364Z test_Conv2d_zeros_stride2_pad2_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7789692Z test_Conv3d (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7790000Z test_Conv3d_1x1x1_no_bias (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7790318Z test_Conv3d_1x1x1_no_bias_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7790659Z test_Conv3d_circular_stride2_pad2 (__main__.TestExpandedWeightModule) ... ok (0.004s) 2023-01-11T21:36:42.7791017Z test_Conv3d_circular_stride2_pad2_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.006s) 2023-01-11T21:36:42.7791348Z test_Conv3d_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7791660Z test_Conv3d_no_bias (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7791982Z test_Conv3d_no_bias_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7792328Z test_Conv3d_replicate_stride2_pad2 (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7792675Z test_Conv3d_replicate_stride2_pad2_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.006s) 2023-01-11T21:36:42.7793073Z test_Conv3d_stride (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7793400Z test_Conv3d_stride_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7793720Z test_Conv3d_stride_padding (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7794068Z test_Conv3d_stride_padding_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.004s) 2023-01-11T21:36:42.7794594Z test_Conv3d_zero_batch (__main__.TestExpandedWeightModule) ... skip: Can't get per sample gradients when no batch dim or batch dim is 0 (0.001s) 2023-01-11T21:36:42.7795186Z test_Conv3d_zero_batch_multiple_inputs (__main__.TestExpandedWeightModule) ... skip: Can't get per sample gradients when no batch dim or batch dim is 0 (0.001s) 2023-01-11T21:36:42.7795618Z test_Conv3d_zeros_stride2_pad2 (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7795971Z test_Conv3d_zeros_stride2_pad2_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7796304Z test_Embedding (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7796623Z test_Embedding_discontiguous (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7796964Z test_Embedding_discontiguous_multiple_inputs (__main__.TestExpandedWeightModule) ... 
ok (0.002s) 2023-01-11T21:36:42.7797312Z test_Embedding_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.001s) 2023-01-11T21:36:42.7797638Z test_GroupNorm_1d_affine (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7797944Z test_GroupNorm_1d_affine_GN (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7798321Z test_GroupNorm_1d_affine_GN_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7798682Z test_GroupNorm_1d_affine_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7799022Z test_GroupNorm_1d_no_affine_IN (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7799356Z test_GroupNorm_1d_no_affine_IN_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7799699Z test_GroupNorm_1d_no_affine_LN (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7800044Z test_GroupNorm_1d_no_affine_LN_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7800383Z test_GroupNorm_2d_affine (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7800703Z test_GroupNorm_2d_affine_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7801040Z test_GroupNorm_2d_no_affine_IN (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7801393Z test_GroupNorm_2d_no_affine_IN_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7801723Z test_GroupNorm_2d_no_affine_LN (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7802067Z test_GroupNorm_2d_no_affine_LN_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7802415Z test_LayerNorm_1d_elementwise_affine (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7802781Z test_LayerNorm_1d_elementwise_affine_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7803334Z test_LayerNorm_1d_empty_elementwise_affine (__main__.TestExpandedWeightModule) ... skip: Can't get per sample gradients when no batch dim or batch dim is 0 (0.001s) 2023-01-11T21:36:42.7803973Z test_LayerNorm_1d_empty_elementwise_affine_multiple_inputs (__main__.TestExpandedWeightModule) ... skip: Can't get per sample gradients when no batch dim or batch dim is 0 (0.001s) 2023-01-11T21:36:42.7804412Z test_LayerNorm_1d_no_elementwise_affine (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7804785Z test_LayerNorm_1d_no_elementwise_affine_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7805137Z test_LayerNorm_3d_elementwise_affine (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7805502Z test_LayerNorm_3d_elementwise_affine_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7805868Z test_LayerNorm_3d_no_affine_large_feature (__main__.TestExpandedWeightModule) ... ok (0.040s) 2023-01-11T21:36:42.7806242Z test_LayerNorm_3d_no_affine_large_feature_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.020s) 2023-01-11T21:36:42.7806594Z test_LayerNorm_3d_no_elementwise_affine (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7806961Z test_LayerNorm_3d_no_elementwise_affine_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7807334Z test_Linear (__main__.TestExpandedWeightModule) ... 
ok (0.003s) 2023-01-11T21:36:42.7807632Z test_Linear_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7808114Z test_Linear_no_batch_dim (__main__.TestExpandedWeightModule) ... skip: Can't get per sample gradients for input of rank 1 (0.001s) 2023-01-11T21:36:42.7808671Z test_Linear_no_batch_dim_multiple_inputs (__main__.TestExpandedWeightModule) ... skip: Can't get per sample gradients for input of rank 1 (0.001s) 2023-01-11T21:36:42.7809051Z test_Linear_no_bias (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7809366Z test_Linear_no_bias_multiple_inputs (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7809712Z test_per_sample_api_compute_batch_size (__main__.TestExpandedWeightModule) ... ok (0.002s) 2023-01-11T21:36:42.7810113Z test_per_sample_api_compute_batch_size_not_pytreeable (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7810473Z test_per_sample_api_failing (__main__.TestExpandedWeightModule) ... ok (0.003s) 2023-01-11T21:36:42.7810642Z 2023-01-11T21:36:42.7810848Z ---------------------------------------------------------------------- 2023-01-11T21:36:42.7811092Z Ran 99 tests in 0.370s 2023-01-11T21:36:42.7811209Z 2023-01-11T21:36:42.7811283Z OK (skipped=10) 2023-01-11T21:36:42.7811391Z 2023-01-11T21:36:42.7811462Z Generating XML reports... 2023-01-11T21:36:42.7811912Z Generated XML report: test-reports/python-unittest/test_expanded_weights/TEST-TestExpandedWeightModule-20230111213641.xml 2023-01-11T21:36:42.7812179Z 2023-01-11T21:36:42.7812477Z ##[endgroup] 2023-01-11T21:36:42.7812870Z FINISHED PRINTING LOG FILE of test_expanded_weights (/var/lib/jenkins/workspace/test/test-reports/test_expanded_weights_1qdkv6ra) 2023-01-11T21:36:42.7813099Z 2023-01-11T21:36:44.6663411Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:36:44.7317018Z Ignoring disabled issues: [] 2023-01-11T21:36:44.7471237Z Running test_functional_autograd_benchmark ... [2023-01-11 21:36:44.746795] 2023-01-11T21:36:44.7473432Z Executing ['/opt/conda/bin/python', '-bb', 'test_functional_autograd_benchmark.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:36:44.747072] 2023-01-11T21:37:11.4601642Z 2023-01-11T21:37:11.4602148Z Expand the folded group to see the log file of test_functional_autograd_benchmark 2023-01-11T21:37:11.4602964Z ##[group]PRINTING LOG FILE of test_functional_autograd_benchmark (/var/lib/jenkins/workspace/test/test-reports/test_functional_autograd_benchmark__eoex8y2) 2023-01-11T21:37:11.4603560Z 2023-01-11T21:37:11.4603925Z Running tests... 2023-01-11T21:37:11.4604585Z ---------------------------------------------------------------------- 2023-01-11T21:37:11.4605051Z Test results will be stored in test-reports/python-unittest/test_functional_autograd_benchmark 2023-01-11T21:37:11.4605699Z test_fast_tasks (__main__.TestFunctionalAutogradBenchmark) ... 
Found functorch: 2.0.0a0+git8419ddd 2023-01-11T21:37:11.4606852Z Results for model resnet18 on task vjp: nans (var: nan) 2023-01-11T21:37:11.4607349Z Results for model resnet18 on task vjp using Functorch: nans (var: nan) 2023-01-11T21:37:11.4607831Z Found functorch: 2.0.0a0+git8419ddd 2023-01-11T21:37:11.4608085Z Results for model ppl_simple_reg on task vjp: nans (var: nan) 2023-01-11T21:37:11.4608365Z Results for model ppl_simple_reg on task vjp using Functorch: nans (var: nan) 2023-01-11T21:37:11.4608623Z Found functorch: 2.0.0a0+git8419ddd 2023-01-11T21:37:11.4608865Z Results for model ppl_robust_reg on task vjp: nans (var: nan) 2023-01-11T21:37:11.4609143Z Results for model ppl_robust_reg on task vjp using Functorch: nans (var: nan) 2023-01-11T21:37:11.4609398Z Found functorch: 2.0.0a0+git8419ddd 2023-01-11T21:37:11.4609633Z Results for model wav2letter on task vjp: nans (var: nan) 2023-01-11T21:37:11.4609913Z Results for model wav2letter on task vjp using Functorch: nans (var: nan) 2023-01-11T21:37:11.4610318Z Found functorch: 2.0.0a0+git8419ddd 2023-01-11T21:37:11.4610556Z Results for model transformer on task vjp: nans (var: nan) 2023-01-11T21:37:11.4610842Z Results for model transformer on task vjp using Functorch: nans (var: nan) 2023-01-11T21:37:11.4611083Z Found functorch: 2.0.0a0+git8419ddd 2023-01-11T21:37:11.4611326Z Results for model multiheadattn on task vjp: nans (var: nan) 2023-01-11T21:37:11.4611618Z Results for model multiheadattn on task vjp using Functorch: nans (var: nan) 2023-01-11T21:37:11.4611839Z ok (24.940s) 2023-01-11T21:37:11.4612160Z test_slow_tasks (__main__.TestFunctionalAutogradBenchmark) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.000s) 2023-01-11T21:37:11.4612488Z 2023-01-11T21:37:11.4612721Z ---------------------------------------------------------------------- 2023-01-11T21:37:11.4612964Z Ran 2 tests in 24.940s 2023-01-11T21:37:11.4613066Z 2023-01-11T21:37:11.4613135Z OK (skipped=1) 2023-01-11T21:37:11.4613247Z 2023-01-11T21:37:11.4613399Z Generating XML reports... 2023-01-11T21:37:11.4613913Z Generated XML report: test-reports/python-unittest/test_functional_autograd_benchmark/TEST-TestFunctionalAutogradBenchmark-20230111213646.xml 2023-01-11T21:37:11.4614210Z 2023-01-11T21:37:11.4614546Z ##[endgroup] 2023-01-11T21:37:11.4614999Z FINISHED PRINTING LOG FILE of test_functional_autograd_benchmark (/var/lib/jenkins/workspace/test/test-reports/test_functional_autograd_benchmark__eoex8y2) 2023-01-11T21:37:11.4615254Z 2023-01-11T21:37:13.3604509Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:37:13.4250890Z Ignoring disabled issues: [] 2023-01-11T21:37:13.4403837Z Running test_functional_optim ... [2023-01-11 21:37:13.440036] 2023-01-11T21:37:13.4405242Z Executing ['/opt/conda/bin/python', '-bb', 'test_functional_optim.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:37:13.440308] 2023-01-11T21:37:15.6697583Z 2023-01-11T21:37:15.6698239Z Expand the folded group to see the log file of test_functional_optim 2023-01-11T21:37:15.6699063Z ##[group]PRINTING LOG FILE of test_functional_optim (/var/lib/jenkins/workspace/test/test-reports/test_functional_optim_mrovt3mh) 2023-01-11T21:37:15.6699321Z 2023-01-11T21:37:15.6699393Z Running tests... 
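[Editor's note] The benchmark output above times vector-Jacobian products (vjp) for each model twice, once through eager autograd and once through functorch. A minimal sketch of the two code paths being compared, using a toy function rather than the benchmark's models or timing harness (the helper f below is purely illustrative):

import torch
import torch.autograd.functional as AF
from functorch import vjp as ft_vjp

def f(x):
    # toy stand-in for a model forward pass
    return (x * x).sum(dim=1)

x = torch.randn(4, 3)
v = torch.ones(4)  # cotangent, same shape as f(x)

# eager-autograd path
_, grad_eager = AF.vjp(f, x, v)

# functorch path: vjp returns the output plus a closure that applies the vjp
_, vjp_fn = ft_vjp(f, x)
(grad_functorch,) = vjp_fn(v)

assert torch.allclose(grad_eager, grad_functorch)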
2023-01-11T21:37:15.6699789Z ---------------------------------------------------------------------- 2023-01-11T21:37:15.6700184Z Test results will be stored in test-reports/python-unittest/test_functional_optim 2023-01-11T21:37:15.6700517Z test_functional_optim_parity_adam (__main__.TestFunctionalOptimParity) ... ok (0.259s) 2023-01-11T21:37:15.6700867Z test_functional_optim_parity_adam_w (__main__.TestFunctionalOptimParity) ... ok (0.025s) 2023-01-11T21:37:15.6701213Z test_functional_optim_parity_sgd (__main__.TestFunctionalOptimParity) ... ok (0.022s) 2023-01-11T21:37:15.6701558Z test_functional_optim_registration (__main__.TestFunctionalOptimParity) ... ok (0.001s) 2023-01-11T21:37:15.6701741Z 2023-01-11T21:37:15.6701943Z ---------------------------------------------------------------------- 2023-01-11T21:37:15.6702183Z Ran 4 tests in 0.307s 2023-01-11T21:37:15.6702295Z 2023-01-11T21:37:15.6702523Z OK 2023-01-11T21:37:15.6702619Z 2023-01-11T21:37:15.6702691Z Generating XML reports... 2023-01-11T21:37:15.6703160Z Generated XML report: test-reports/python-unittest/test_functional_optim/TEST-TestFunctionalOptimParity-20230111213714.xml 2023-01-11T21:37:15.6703424Z 2023-01-11T21:37:15.6703651Z ##[endgroup] 2023-01-11T21:37:15.6704042Z FINISHED PRINTING LOG FILE of test_functional_optim (/var/lib/jenkins/workspace/test/test-reports/test_functional_optim_mrovt3mh) 2023-01-11T21:37:15.6704275Z 2023-01-11T21:37:17.5870075Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:37:17.6706785Z Ignoring disabled issues: [] 2023-01-11T21:37:17.6860394Z Running test_functionalization ... [2023-01-11 21:37:17.685719] 2023-01-11T21:37:17.6863107Z Executing ['/opt/conda/bin/python', '-bb', 'test_functionalization.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:37:17.685987] 2023-01-11T21:37:35.6202313Z 2023-01-11T21:37:35.6203150Z Expand the folded group to see the log file of test_functionalization 2023-01-11T21:37:35.6204330Z ##[group]PRINTING LOG FILE of test_functionalization (/var/lib/jenkins/workspace/test/test-reports/test_functionalization_rkizb6ok) 2023-01-11T21:37:35.6204881Z 2023-01-11T21:37:35.6204997Z Running tests... 2023-01-11T21:37:35.6205651Z ---------------------------------------------------------------------- 2023-01-11T21:37:35.6206400Z Test results will be stored in test-reports/python-unittest/test_functionalization 2023-01-11T21:37:35.6207044Z test_advanced_indexing (__main__.TestCrossRefFunctionalization) ... ok (0.464s) 2023-01-11T21:37:35.6207712Z test_advanced_indexing_correct_strides (__main__.TestCrossRefFunctionalization) ... ok (0.140s) 2023-01-11T21:37:35.6209298Z test_aliases_maintained_after_pass_when_reapplying_views (__main__.TestCrossRefFunctionalization) ... /var/lib/jenkins/workspace/test/test_functionalization.py:19: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:37:35.6210385Z x_storage = StorageWeakRef(x.storage()) 2023-01-11T21:37:35.6211418Z /var/lib/jenkins/workspace/test/test_functionalization.py:20: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:37:35.6212380Z y_storage = StorageWeakRef(y.storage()) 2023-01-11T21:37:35.6212715Z ok (0.001s) 2023-01-11T21:37:35.6213183Z test_as_strided (__main__.TestCrossRefFunctionalization) ... expected failure (0.006s) 2023-01-11T21:37:35.6213813Z test_batch_norm (__main__.TestCrossRefFunctionalization) ... ok (1.059s) 2023-01-11T21:37:35.6214409Z test_cat (__main__.TestCrossRefFunctionalization) ... ok (0.121s) 2023-01-11T21:37:35.6215011Z test_copy_ (__main__.TestCrossRefFunctionalization) ... expected failure (0.035s) 2023-01-11T21:37:35.6215650Z test_copy_stride_mismatch (__main__.TestCrossRefFunctionalization) ... ok (0.010s) 2023-01-11T21:37:35.6216311Z test_diagonal (__main__.TestCrossRefFunctionalization) ... expected failure (0.010s) 2023-01-11T21:37:35.6216996Z test_diagonal_mutated_input (__main__.TestCrossRefFunctionalization) ... expected failure (0.009s) 2023-01-11T21:37:35.6217674Z test_everything (__main__.TestCrossRefFunctionalization) ... expected failure (0.014s) 2023-01-11T21:37:35.6218310Z test_expand_symint (__main__.TestCrossRefFunctionalization) ... ok (0.043s) 2023-01-11T21:37:35.6218935Z test_fill_ (__main__.TestCrossRefFunctionalization) ... expected failure (0.006s) 2023-01-11T21:37:35.6219547Z test_freeze (__main__.TestCrossRefFunctionalization) ... ok (0.021s) 2023-01-11T21:37:35.6220181Z test_index_mutation_on_non_input (__main__.TestCrossRefFunctionalization) ... ok (0.205s) 2023-01-11T21:37:35.6220829Z test_inplace_on_non_view (__main__.TestCrossRefFunctionalization) ... ok (0.214s) 2023-01-11T21:37:35.6221402Z test_instance_norm (__main__.TestCrossRefFunctionalization) ... ok (1.435s) 2023-01-11T21:37:35.6221993Z test_metadata_change (__main__.TestCrossRefFunctionalization) ... ok (0.138s) 2023-01-11T21:37:35.6222862Z test_metadata_change_out_op (__main__.TestCrossRefFunctionalization) ... ok (0.001s) 2023-01-11T21:37:35.6223520Z test_mixed_wrappers_invalid (__main__.TestCrossRefFunctionalization) ... ok (0.004s) 2023-01-11T21:37:35.6224138Z test_mixed_wrappers_valid (__main__.TestCrossRefFunctionalization) ... ok (0.001s) 2023-01-11T21:37:35.6225750Z test_multi_out (__main__.TestCrossRefFunctionalization) ... /var/lib/jenkins/workspace/test/test_functionalization.py:304: UserWarning: An output with one or more elements was resized since it had shape [4], which does not match the required output shape []. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:37:35.6227328Z torch.aminmax(x, dim=0, out=(out_max, out_min)) 2023-01-11T21:37:35.6228770Z /var/lib/jenkins/workspace/test/test_functionalization.py:304: UserWarning: An output with one or more elements was resized since it had shape [4], which does not match the required output shape []. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 
2023-01-11T21:37:35.6230096Z torch.aminmax(x, dim=0, out=(out_max, out_min)) 2023-01-11T21:37:35.6231427Z /var/lib/jenkins/workspace/test/test_functionalization.py:304: UserWarning: An output with one or more elements was resized since it had shape [4], which does not match the required output shape []. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:37:35.6232667Z torch.aminmax(x, dim=0, out=(out_max, out_min)) 2023-01-11T21:37:35.6234027Z /var/lib/jenkins/workspace/test/test_functionalization.py:304: UserWarning: An output with one or more elements was resized since it had shape [4], which does not match the required output shape []. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:37:35.6235298Z torch.aminmax(x, dim=0, out=(out_max, out_min)) 2023-01-11T21:37:35.6236661Z /var/lib/jenkins/workspace/test/test_functionalization.py:304: UserWarning: An output with one or more elements was resized since it had shape [4], which does not match the required output shape []. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:37:35.6237903Z torch.aminmax(x, dim=0, out=(out_max, out_min)) 2023-01-11T21:37:35.6238276Z ok (0.202s) 2023-01-11T21:37:35.6238763Z test_multiple_views_of_same_base (__main__.TestCrossRefFunctionalization) ... ok (0.099s) 2023-01-11T21:37:35.6239419Z test_mutable_op_not_inplace_or_other (__main__.TestCrossRefFunctionalization) ... ok (0.079s) 2023-01-11T21:37:35.6240128Z test_nested_functions_propagate_updates (__main__.TestCrossRefFunctionalization) ... ok (0.077s) 2023-01-11T21:37:35.6240785Z test_only_one_view (__main__.TestCrossRefFunctionalization) ... ok (0.019s) 2023-01-11T21:37:35.6241399Z test_optional_tensor_list (__main__.TestCrossRefFunctionalization) ... ok (0.236s) 2023-01-11T21:37:35.6242040Z test_reapply_views_simple (__main__.TestCrossRefFunctionalization) ... ok (0.188s) 2023-01-11T21:37:35.6242675Z test_resize_larger_invalid (__main__.TestCrossRefFunctionalization) ... ok (0.014s) 2023-01-11T21:37:35.6243310Z test_resize_larger_valid (__main__.TestCrossRefFunctionalization) ... ok (0.252s) 2023-01-11T21:37:35.6243946Z test_resize_smaller (__main__.TestCrossRefFunctionalization) ... ok (0.556s) 2023-01-11T21:37:35.6244689Z test_save_for_backwards_segfault (__main__.TestCrossRefFunctionalization) ... ok (0.001s) 2023-01-11T21:37:35.6245334Z test_scalars (__main__.TestCrossRefFunctionalization) ... ok (0.203s) 2023-01-11T21:37:35.6246580Z test_set_ (__main__.TestCrossRefFunctionalization) ... /var/lib/jenkins/workspace/test/test_functionalization.py:149: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:37:35.6247642Z y.set_(x.storage()) 2023-01-11T21:37:35.6247949Z ok (0.001s) 2023-01-11T21:37:35.6248388Z test_simple (__main__.TestCrossRefFunctionalization) ... ok (0.275s) 2023-01-11T21:37:35.6249982Z test_simple_out (__main__.TestCrossRefFunctionalization) ... /var/lib/jenkins/workspace/test/test_functionalization.py:266: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [4, 2]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:37:35.6251327Z torch.add(y, tmp, out=z) 2023-01-11T21:37:35.6253168Z /opt/conda/lib/python3.10/site-packages/torch/_prims_common/wrappers.py:145: UserWarning: An output with one or more elements was resized since it had shape torch.Size([]) which does not match the required output shape {str(shape)}. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). 2023-01-11T21:37:35.6254302Z warnings.warn(msg) 2023-01-11T21:37:35.6256030Z /opt/conda/lib/python3.10/site-packages/torch/_prims_common/wrappers.py:145: UserWarning: An output with one or more elements was resized since it had shape torch.Size([]) which does not match the required output shape {str(shape)}. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). 2023-01-11T21:37:35.6257166Z warnings.warn(msg) 2023-01-11T21:37:35.6258762Z /opt/conda/lib/python3.10/site-packages/torch/_prims_common/wrappers.py:145: UserWarning: An output with one or more elements was resized since it had shape torch.Size([]) which does not match the required output shape {str(shape)}. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). 2023-01-11T21:37:35.6259853Z warnings.warn(msg) 2023-01-11T21:37:35.6261600Z /opt/conda/lib/python3.10/site-packages/torch/_prims_common/wrappers.py:145: UserWarning: An output with one or more elements was resized since it had shape torch.Size([]) which does not match the required output shape {str(shape)}. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). 2023-01-11T21:37:35.6262893Z warnings.warn(msg) 2023-01-11T21:37:35.6263202Z ok (0.245s) 2023-01-11T21:37:35.6263660Z test_split (__main__.TestCrossRefFunctionalization) ... expected failure (0.013s) 2023-01-11T21:37:35.6264276Z test_tensor_ctr (__main__.TestCrossRefFunctionalization) ... ok (0.227s) 2023-01-11T21:37:35.6264887Z test_tensor_list_composite (__main__.TestCrossRefFunctionalization) ... 
ok (0.042s) 2023-01-11T21:37:35.6265718Z test_tensor_list_mixed_functional_nonfunctional (__main__.TestCrossRefFunctionalization) ... ok (0.017s) 2023-01-11T21:37:35.6266747Z test_view_clone_view_inplace (__main__.TestCrossRefFunctionalization) ... /var/lib/jenkins/workspace/test/test_functionalization.py:185: UserWarning: Anomaly Detection has been enabled. This mode will increase the runtime and should only be enabled for debugging. 2023-01-11T21:37:35.6267632Z with torch.autograd.detect_anomaly(check_nan=False): 2023-01-11T21:37:35.6268583Z /opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py:197: UserWarning: Error detected in SumBackward0. Traceback of forward call that caused the error: 2023-01-11T21:37:35.6269347Z File "/var/lib/jenkins/workspace/test/test_functionalization.py", line 1487, in 2023-01-11T21:37:35.6269779Z run_tests() 2023-01-11T21:37:35.6270451Z File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 787, in run_tests 2023-01-11T21:37:35.6271163Z unittest.main(argv=argv, testRunner=xmlrunner.XMLTestRunner( 2023-01-11T21:37:35.6271698Z File "/opt/conda/lib/python3.10/unittest/main.py", line 101, in __init__ 2023-01-11T21:37:35.6272115Z self.runTests() 2023-01-11T21:37:35.6272546Z File "/opt/conda/lib/python3.10/unittest/main.py", line 271, in runTests 2023-01-11T21:37:35.6273064Z self.result = testRunner.run(self.test) 2023-01-11T21:37:35.6273719Z File "/opt/conda/lib/python3.10/site-packages/xmlrunner/runner.py", line 67, in run 2023-01-11T21:37:35.6274169Z test(result) 2023-01-11T21:37:35.6274594Z File "/opt/conda/lib/python3.10/unittest/suite.py", line 84, in __call__ 2023-01-11T21:37:35.6275116Z return self.run(*args, **kwds) 2023-01-11T21:37:35.6275550Z File "/opt/conda/lib/python3.10/unittest/suite.py", line 122, in run 2023-01-11T21:37:35.6275945Z test(result) 2023-01-11T21:37:35.6276372Z File "/opt/conda/lib/python3.10/unittest/suite.py", line 84, in __call__ 2023-01-11T21:37:35.6276805Z return self.run(*args, **kwds) 2023-01-11T21:37:35.6277273Z File "/opt/conda/lib/python3.10/unittest/suite.py", line 122, in run 2023-01-11T21:37:35.6277680Z test(result) 2023-01-11T21:37:35.6278229Z File "/opt/conda/lib/python3.10/unittest/case.py", line 650, in __call__ 2023-01-11T21:37:35.6278654Z return self.run(*args, **kwds) 2023-01-11T21:37:35.6279378Z File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2225, in run 2023-01-11T21:37:35.6279877Z self._run_with_retry( 2023-01-11T21:37:35.6280564Z File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2154, in _run_with_retry 2023-01-11T21:37:35.6281084Z super().run(result=result) 2023-01-11T21:37:35.6281515Z File "/opt/conda/lib/python3.10/unittest/case.py", line 591, in run 2023-01-11T21:37:35.6281949Z self._callTestMethod(testMethod) 2023-01-11T21:37:35.6282442Z File "/opt/conda/lib/python3.10/unittest/case.py", line 549, in _callTestMethod 2023-01-11T21:37:35.6282877Z method() 2023-01-11T21:37:35.6283358Z File "/var/lib/jenkins/workspace/test/test_functionalization.py", line 186, in test_view_clone_view_inplace 2023-01-11T21:37:35.6283923Z logs = self.get_logs(g, torch.ones(16, 64, 128, 128, requires_grad=True)) 2023-01-11T21:37:35.6284506Z File "/var/lib/jenkins/workspace/test/test_functionalization.py", line 66, in get_logs 2023-01-11T21:37:35.6285138Z traced_f = make_fx(_functionalize(func, reapply_views=reapply_views, crossref=self.crossref))(*inpts) 2023-01-11T21:37:35.6285966Z File 
"/opt/conda/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 692, in wrapped 2023-01-11T21:37:35.6286648Z t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs)) 2023-01-11T21:37:35.6287517Z File "/opt/conda/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 431, in dispatch_trace 2023-01-11T21:37:35.6288098Z graph = tracer.trace(root, concrete_args) 2023-01-11T21:37:35.6288872Z File "/opt/conda/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 739, in trace 2023-01-11T21:37:35.6289383Z (self.create_arg(fn(*args)),), 2023-01-11T21:37:35.6290090Z File "/opt/conda/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 447, in wrapped 2023-01-11T21:37:35.6290558Z out = f(*tensors) 2023-01-11T21:37:35.6290900Z File "", line 1, in 2023-01-11T21:37:35.6291417Z File "/var/lib/jenkins/workspace/test/test_functionalization.py", line 40, in wrapped 2023-01-11T21:37:35.6291889Z out = f(*inputs_functional) 2023-01-11T21:37:35.6292380Z File "/var/lib/jenkins/workspace/test/test_functionalization.py", line 177, in g 2023-01-11T21:37:35.6292836Z loss = f(x).sum() 2023-01-11T21:37:35.6293362Z (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/python_anomaly_mode.cpp:119.) 2023-01-11T21:37:35.6294019Z Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-01-11T21:37:35.6294577Z expected failure (0.308s) 2023-01-11T21:37:35.6295109Z test_view_inplace (__main__.TestCrossRefFunctionalization) ... expected failure (0.021s) 2023-01-11T21:37:35.6295723Z test_advanced_indexing (__main__.TestFunctionalization) ... ok (0.088s) 2023-01-11T21:37:35.6296317Z test_advanced_indexing_correct_strides (__main__.TestFunctionalization) ... ok (0.110s) 2023-01-11T21:37:35.6297684Z test_aliases_maintained_after_pass_when_reapplying_views (__main__.TestFunctionalization) ... /var/lib/jenkins/workspace/test/test_functionalization.py:19: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:37:35.6298806Z x_storage = StorageWeakRef(x.storage()) 2023-01-11T21:37:35.6299881Z /var/lib/jenkins/workspace/test/test_functionalization.py:20: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:37:35.6300901Z y_storage = StorageWeakRef(y.storage()) 2023-01-11T21:37:35.6301270Z ok (0.001s) 2023-01-11T21:37:35.6301671Z test_as_strided (__main__.TestFunctionalization) ... ok (0.104s) 2023-01-11T21:37:35.6302197Z test_batch_norm (__main__.TestFunctionalization) ... ok (0.759s) 2023-01-11T21:37:35.6302854Z test_cat (__main__.TestFunctionalization) ... ok (0.067s) 2023-01-11T21:37:35.6303342Z test_copy_ (__main__.TestFunctionalization) ... ok (0.942s) 2023-01-11T21:37:35.6303876Z test_copy_stride_mismatch (__main__.TestFunctionalization) ... ok (0.002s) 2023-01-11T21:37:35.6304405Z test_diagonal (__main__.TestFunctionalization) ... ok (0.245s) 2023-01-11T21:37:35.6304977Z test_diagonal_mutated_input (__main__.TestFunctionalization) ... 
ok (0.158s) 2023-01-11T21:37:35.6305560Z test_everything (__main__.TestFunctionalization) ... ok (1.467s) 2023-01-11T21:37:35.6306062Z test_expand_symint (__main__.TestFunctionalization) ... ok (0.034s) 2023-01-11T21:37:35.6306594Z test_fill_ (__main__.TestFunctionalization) ... ok (0.161s) 2023-01-11T21:37:35.6307115Z test_freeze (__main__.TestFunctionalization) ... ok (0.009s) 2023-01-11T21:37:35.6307629Z test_index_mutation_on_non_input (__main__.TestFunctionalization) ... ok (0.158s) 2023-01-11T21:37:35.6308197Z test_inplace_on_non_view (__main__.TestFunctionalization) ... ok (0.152s) 2023-01-11T21:37:35.6308745Z test_instance_norm (__main__.TestFunctionalization) ... ok (1.406s) 2023-01-11T21:37:35.6309286Z test_metadata_change (__main__.TestFunctionalization) ... ok (0.107s) 2023-01-11T21:37:35.6309829Z test_metadata_change_out_op (__main__.TestFunctionalization) ... ok (0.001s) 2023-01-11T21:37:35.6310418Z test_mixed_wrappers_invalid (__main__.TestFunctionalization) ... ok (0.004s) 2023-01-11T21:37:35.6311100Z test_mixed_wrappers_valid (__main__.TestFunctionalization) ... ok (0.001s) 2023-01-11T21:37:35.6312612Z test_multi_out (__main__.TestFunctionalization) ... /var/lib/jenkins/workspace/test/test_functionalization.py:304: UserWarning: An output with one or more elements was resized since it had shape [4], which does not match the required output shape []. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:37:35.6314042Z torch.aminmax(x, dim=0, out=(out_max, out_min)) 2023-01-11T21:37:35.6314391Z ok (0.124s) 2023-01-11T21:37:35.6314840Z test_multiple_views_of_same_base (__main__.TestFunctionalization) ... ok (0.082s) 2023-01-11T21:37:35.6315518Z test_mutable_op_not_inplace_or_other (__main__.TestFunctionalization) ... ok (0.063s) 2023-01-11T21:37:35.6316141Z test_nested_functions_propagate_updates (__main__.TestFunctionalization) ... ok (0.064s) 2023-01-11T21:37:35.6316731Z test_only_one_view (__main__.TestFunctionalization) ... ok (0.016s) 2023-01-11T21:37:35.6317286Z test_optional_tensor_list (__main__.TestFunctionalization) ... ok (0.144s) 2023-01-11T21:37:35.6317839Z test_reapply_views_simple (__main__.TestFunctionalization) ... ok (0.137s) 2023-01-11T21:37:35.6318407Z test_resize_larger_invalid (__main__.TestFunctionalization) ... ok (0.004s) 2023-01-11T21:37:35.6318970Z test_resize_larger_valid (__main__.TestFunctionalization) ... ok (0.204s) 2023-01-11T21:37:35.6319520Z test_resize_smaller (__main__.TestFunctionalization) ... ok (0.520s) 2023-01-11T21:37:35.6320086Z test_save_for_backwards_segfault (__main__.TestFunctionalization) ... ok (0.001s) 2023-01-11T21:37:35.6320637Z test_scalars (__main__.TestFunctionalization) ... ok (0.154s) 2023-01-11T21:37:35.6321830Z test_set_ (__main__.TestFunctionalization) ... /var/lib/jenkins/workspace/test/test_functionalization.py:149: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:37:35.6322874Z y.set_(x.storage()) 2023-01-11T21:37:35.6323172Z ok (0.001s) 2023-01-11T21:37:35.6323580Z test_simple (__main__.TestFunctionalization) ... ok (0.206s) 2023-01-11T21:37:35.6325022Z test_simple_out (__main__.TestFunctionalization) ... /var/lib/jenkins/workspace/test/test_functionalization.py:266: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [4, 2]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:37:35.6326334Z torch.add(y, tmp, out=z) 2023-01-11T21:37:35.6326637Z ok (0.151s) 2023-01-11T21:37:35.6327052Z test_split (__main__.TestFunctionalization) ... ok (0.275s) 2023-01-11T21:37:35.6327588Z test_tensor_ctr (__main__.TestFunctionalization) ... ok (0.158s) 2023-01-11T21:37:35.6328114Z test_tensor_list_composite (__main__.TestFunctionalization) ... ok (0.028s) 2023-01-11T21:37:35.6328741Z test_tensor_list_mixed_functional_nonfunctional (__main__.TestFunctionalization) ... ok (0.001s) 2023-01-11T21:37:35.6329716Z test_view_clone_view_inplace (__main__.TestFunctionalization) ... /var/lib/jenkins/workspace/test/test_functionalization.py:185: UserWarning: Anomaly Detection has been enabled. This mode will increase the runtime and should only be enabled for debugging. 2023-01-11T21:37:35.6330644Z with torch.autograd.detect_anomaly(check_nan=False): 2023-01-11T21:37:35.6331018Z ok (0.667s) 2023-01-11T21:37:35.6331438Z test_view_inplace (__main__.TestFunctionalization) ... ok (0.204s) 2023-01-11T21:37:35.6331744Z 2023-01-11T21:37:35.6332158Z ---------------------------------------------------------------------- 2023-01-11T21:37:35.6332596Z Ran 84 tests in 16.201s 2023-01-11T21:37:35.6332785Z 2023-01-11T21:37:35.6332929Z OK (expected failures=9) 2023-01-11T21:37:35.6333149Z 2023-01-11T21:37:35.6333299Z Generating XML reports... 2023-01-11T21:37:35.6334193Z Generated XML report: test-reports/python-unittest/test_functionalization/TEST-TestCrossRefFunctionalization-20230111213719.xml 2023-01-11T21:37:35.6335314Z Generated XML report: test-reports/python-unittest/test_functionalization/TEST-TestFunctionalization-20230111213719.xml 2023-01-11T21:37:35.6335797Z 2023-01-11T21:37:35.6336258Z ##[endgroup] 2023-01-11T21:37:35.6337101Z FINISHED PRINTING LOG FILE of test_functionalization (/var/lib/jenkins/workspace/test/test-reports/test_functionalization_rkizb6ok) 2023-01-11T21:37:35.6337556Z 2023-01-11T21:37:37.5224273Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:37:37.5868266Z Ignoring disabled issues: [] 2023-01-11T21:37:37.6021269Z Running test_futures ... [2023-01-11 21:37:37.601786] 2023-01-11T21:37:37.6022905Z Executing ['/opt/conda/bin/python', '-bb', 'test_futures.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:37:37.602037] 2023-01-11T21:37:40.1746480Z 2023-01-11T21:37:40.1747179Z Expand the folded group to see the log file of test_futures 2023-01-11T21:37:40.1748148Z ##[group]PRINTING LOG FILE of test_futures (/var/lib/jenkins/workspace/test/test-reports/test_futures_7d97sleu) 2023-01-11T21:37:40.1748408Z 2023-01-11T21:37:40.1748484Z Running tests... 
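[Editor's note] The test_functionalization suite above checks a pass that rewrites programs mutating tensors through views into equivalent programs built only from out-of-place ops. A minimal hand-written illustration of that rewrite, not the actual pass (both functions below are made up for this sketch):

import torch

def with_inplace(x):
    # mutates its input through a view, like many of the cases above
    y = x.view(4, 2)
    y.add_(1)              # visible through x because y aliases it
    return x

def functionalized(x):
    # the same computation, expressed without mutation or aliasing
    y = x.view(4, 2) + 1
    return y.view_as(x)

a, b = torch.zeros(8), torch.zeros(8)
assert torch.equal(with_inplace(a), functionalized(b))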
2023-01-11T21:37:40.1748887Z ---------------------------------------------------------------------- 2023-01-11T21:37:40.1749269Z Test results will be stored in test-reports/python-unittest/test_futures 2023-01-11T21:37:40.1749703Z test_add_done_callback_error_is_ignored (__main__.TestFuture) ... [E pybind_utils.h:212] Got the following error when running the callback: ValueError: Expected error 2023-01-11T21:37:40.1749955Z 2023-01-11T21:37:40.1750016Z At: 2023-01-11T21:37:40.1750231Z /var/lib/jenkins/workspace/test/test_futures.py(236): raise_value_error 2023-01-11T21:37:40.1750621Z /opt/conda/lib/python3.10/site-packages/torch/futures/__init__.py(244): set_result 2023-01-11T21:37:40.1750951Z /var/lib/jenkins/workspace/test/test_futures.py(229): _test_add_done_callback_error_ignored 2023-01-11T21:37:40.1751284Z /var/lib/jenkins/workspace/test/test_futures.py(238): test_add_done_callback_error_is_ignored 2023-01-11T21:37:40.1751579Z /opt/conda/lib/python3.10/unittest/case.py(549): _callTestMethod 2023-01-11T21:37:40.1751849Z /opt/conda/lib/python3.10/unittest/case.py(591): run 2023-01-11T21:37:40.1752245Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py(2154): _run_with_retry 2023-01-11T21:37:40.1752658Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py(2225): run 2023-01-11T21:37:40.1753013Z /opt/conda/lib/python3.10/unittest/case.py(650): __call__ 2023-01-11T21:37:40.1753281Z /opt/conda/lib/python3.10/unittest/suite.py(122): run 2023-01-11T21:37:40.1753528Z /opt/conda/lib/python3.10/unittest/suite.py(84): __call__ 2023-01-11T21:37:40.1753784Z /opt/conda/lib/python3.10/unittest/suite.py(122): run 2023-01-11T21:37:40.1754042Z /opt/conda/lib/python3.10/unittest/suite.py(84): __call__ 2023-01-11T21:37:40.1754383Z /opt/conda/lib/python3.10/site-packages/xmlrunner/runner.py(67): run 2023-01-11T21:37:40.1754656Z /opt/conda/lib/python3.10/unittest/main.py(271): runTests 2023-01-11T21:37:40.1754917Z /opt/conda/lib/python3.10/unittest/main.py(101): __init__ 2023-01-11T21:37:40.1755295Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py(787): run_tests 2023-01-11T21:37:40.1755791Z /var/lib/jenkins/workspace/test/test_futures.py(340): 2023-01-11T21:37:40.1755948Z 2023-01-11T21:37:40.1756014Z ok (0.229s) 2023-01-11T21:37:40.1756265Z test_add_done_callback_maintains_callback_order (__main__.TestFuture) ... ok (0.002s) 2023-01-11T21:37:40.1756782Z test_add_done_callback_no_arg_error_is_ignored (__main__.TestFuture) ... [E pybind_utils.h:212] Got the following error when running the callback: TypeError: TestFuture.test_add_done_callback_no_arg_error_is_ignored..no_arg() takes 0 positional arguments but 1 was given 2023-01-11T21:37:40.1757199Z ok (0.001s) 2023-01-11T21:37:40.1757413Z test_add_done_callback_simple (__main__.TestFuture) ... ok (0.001s) 2023-01-11T21:37:40.1757681Z test_chained_then (__main__.TestFuture) ... ok (0.003s) 2023-01-11T21:37:40.1757932Z test_collect_all (__main__.TestFuture) ... ok (0.101s) 2023-01-11T21:37:40.1758156Z test_done (__main__.TestFuture) ... ok (0.001s) 2023-01-11T21:37:40.1758400Z test_done_exception (__main__.TestFuture) ... ok (0.001s) 2023-01-11T21:37:40.1758794Z test_interleaving_then_and_add_done_callback_maintains_callback_order (__main__.TestFuture) ... ok (0.001s) 2023-01-11T21:37:40.1759244Z test_interleaving_then_and_add_done_callback_propagates_error (__main__.TestFuture) ... 
[E pybind_utils.h:212] Got the following error when running the callback: ValueError: Expected error 2023-01-11T21:37:40.1759519Z 2023-01-11T21:37:40.1759580Z At: 2023-01-11T21:37:40.1759804Z /var/lib/jenkins/workspace/test/test_futures.py(280): raise_value_error 2023-01-11T21:37:40.1760189Z /opt/conda/lib/python3.10/site-packages/torch/futures/__init__.py(244): set_result 2023-01-11T21:37:40.1760538Z /var/lib/jenkins/workspace/test/test_futures.py(285): test_interleaving_then_and_add_done_callback_propagates_error 2023-01-11T21:37:40.1760875Z /opt/conda/lib/python3.10/unittest/case.py(549): _callTestMethod 2023-01-11T21:37:40.1761142Z /opt/conda/lib/python3.10/unittest/case.py(591): run 2023-01-11T21:37:40.1761520Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py(2154): _run_with_retry 2023-01-11T21:37:40.1761949Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py(2225): run 2023-01-11T21:37:40.1762250Z /opt/conda/lib/python3.10/unittest/case.py(650): __call__ 2023-01-11T21:37:40.1762507Z /opt/conda/lib/python3.10/unittest/suite.py(122): run 2023-01-11T21:37:40.1762753Z /opt/conda/lib/python3.10/unittest/suite.py(84): __call__ 2023-01-11T21:37:40.1763007Z /opt/conda/lib/python3.10/unittest/suite.py(122): run 2023-01-11T21:37:40.1763261Z /opt/conda/lib/python3.10/unittest/suite.py(84): __call__ 2023-01-11T21:37:40.1763587Z /opt/conda/lib/python3.10/site-packages/xmlrunner/runner.py(67): run 2023-01-11T21:37:40.1763869Z /opt/conda/lib/python3.10/unittest/main.py(271): runTests 2023-01-11T21:37:40.1764124Z /opt/conda/lib/python3.10/unittest/main.py(101): __init__ 2023-01-11T21:37:40.1764502Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py(787): run_tests 2023-01-11T21:37:40.1764804Z /var/lib/jenkins/workspace/test/test_futures.py(340): 2023-01-11T21:37:40.1764960Z 2023-01-11T21:37:40.1765026Z ok (0.001s) 2023-01-11T21:37:40.1765238Z test_mark_future_twice (__main__.TestFuture) ... ok (0.002s) 2023-01-11T21:37:40.1765479Z test_pickle_future (__main__.TestFuture) ... ok (0.003s) 2023-01-11T21:37:40.1765728Z test_set_exception (__main__.TestFuture) ... ok (0.001s) 2023-01-11T21:37:40.1765998Z test_set_exception_multithreading (__main__.TestFuture) ... ok (0.001s) 2023-01-11T21:37:40.1766248Z test_then (__main__.TestFuture) ... ok (0.001s) 2023-01-11T21:37:40.1766485Z test_then_no_arg (__main__.TestFuture) ... ok (0.001s) 2023-01-11T21:37:40.1766726Z test_then_raise (__main__.TestFuture) ... ok (0.001s) 2023-01-11T21:37:40.1766971Z test_then_wrong_arg (__main__.TestFuture) ... ok (0.001s) 2023-01-11T21:37:40.1767199Z test_wait (__main__.TestFuture) ... ok (0.001s) 2023-01-11T21:37:40.1767431Z test_wait_all (__main__.TestFuture) ... [1, 2] 2023-01-11T21:37:40.1767674Z ok (0.001s) 2023-01-11T21:37:40.1767873Z test_wait_multi_thread (__main__.TestFuture) ... ok (0.502s) 2023-01-11T21:37:40.1768119Z test_wait_none (__main__.TestFuture) ... ok (0.005s) 2023-01-11T21:37:40.1768256Z 2023-01-11T21:37:40.1768460Z ---------------------------------------------------------------------- 2023-01-11T21:37:40.1768688Z Ran 22 tests in 0.861s 2023-01-11T21:37:40.1768799Z 2023-01-11T21:37:40.1768859Z OK 2023-01-11T21:37:40.1768949Z 2023-01-11T21:37:40.1769032Z Generating XML reports... 
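[Editor's note] The TestFuture cases above exercise torch.futures.Future: set_result, then, add_done_callback, and the wait_all/collect_all helpers (the "[1, 2]" printed by test_wait_all is the list of collected results). A minimal sketch of that API, assuming only the public torch.futures module:

import torch

fut = torch.futures.Future()
chained = fut.then(lambda f: f.wait() + 1)                  # callback receives the completed future
fut.add_done_callback(lambda f: print("done:", f.wait()))   # exceptions here are logged, not raised
fut.set_result(41)
assert chained.wait() == 42

# wait on several futures at once, as test_wait_all does
f1, f2 = torch.futures.Future(), torch.futures.Future()
f1.set_result(1)
f2.set_result(2)
print(torch.futures.wait_all([f1, f2]))                     # [1, 2]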
2023-01-11T21:37:40.1769419Z Generated XML report: test-reports/python-unittest/test_futures/TEST-TestFuture-20230111213738.xml 2023-01-11T21:37:40.1769624Z 2023-01-11T21:37:40.1769890Z ##[endgroup] 2023-01-11T21:37:40.1770270Z FINISHED PRINTING LOG FILE of test_futures (/var/lib/jenkins/workspace/test/test-reports/test_futures_7d97sleu) 2023-01-11T21:37:40.1770479Z 2023-01-11T21:37:42.0719233Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:37:42.1367946Z Ignoring disabled issues: [] 2023-01-11T21:37:42.1522114Z Running test_fx_experimental ... [2023-01-11 21:37:42.151942] 2023-01-11T21:37:42.1524178Z Executing ['/opt/conda/bin/python', '-bb', 'test_fx_experimental.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:37:42.152209] 2023-01-11T21:38:01.8404590Z 2023-01-11T21:38:01.8405126Z Expand the folded group to see the log file of test_fx_experimental 2023-01-11T21:38:01.8409357Z ##[group]PRINTING LOG FILE of test_fx_experimental (/var/lib/jenkins/workspace/test/test-reports/test_fx_experimental_uqdvcu62) 2023-01-11T21:38:01.8409864Z 2023-01-11T21:38:01.8409995Z Running tests... 2023-01-11T21:38:01.8410461Z ---------------------------------------------------------------------- 2023-01-11T21:38:01.8410854Z Test results will be stored in test-reports/python-unittest/test_fx_experimental 2023-01-11T21:38:01.8411172Z test_annotate_getitem_node (__main__.TestFXExperimental) ... ok (0.004s) 2023-01-11T21:38:01.8412042Z test_annotate_returns_with_schema (__main__.TestFXExperimental) ... /opt/conda/lib/python3.10/site-packages/torch/jit/_check.py:181: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in `__init__`. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in `torch.jit.Attribute`. 2023-01-11T21:38:01.8412678Z warnings.warn("The TorchScript type system doesn't support " 2023-01-11T21:38:01.8412901Z ok (1.910s) 2023-01-11T21:38:01.8413135Z test_aot_based_partition (__main__.TestFXExperimental) ... ok (0.006s) 2023-01-11T21:38:01.8413421Z test_call_to_assert_no_msg (__main__.TestFXExperimental) ... ok (0.004s) 2023-01-11T21:38:01.8413728Z test_call_to_assert_with_empty_msg (__main__.TestFXExperimental) ... ok (0.003s) 2023-01-11T21:38:01.8414037Z test_call_to_assert_with_msg (__main__.TestFXExperimental) ... ok (0.003s) 2023-01-11T21:38:01.8414346Z test_call_to_assert_with_multiline_message (__main__.TestFXExperimental) ... ok (0.003s) 2023-01-11T21:38:01.8414654Z test_conv_bn_fusion (__main__.TestFXExperimental) ... ok (1.192s) 2023-01-11T21:38:01.8414956Z test_conv_bn_fusion_not_running_state (__main__.TestFXExperimental) ... ok (0.010s) 2023-01-11T21:38:01.8415265Z test_cost_aware_partition (__main__.TestFXExperimental) ... ok (0.010s) 2023-01-11T21:38:01.8415531Z test_fetch (__main__.TestFXExperimental) ... ok (0.003s) 2023-01-11T21:38:01.8415809Z test_find_single_partition (__main__.TestFXExperimental) ... ok (0.003s) 2023-01-11T21:38:01.8416104Z test_lack_of_devices (__main__.TestFXExperimental) ... ok (0.002s) 2023-01-11T21:38:01.8416418Z test_large_node_error (__main__.TestFXExperimental) ... ok (0.003s) 2023-01-11T21:38:01.8416736Z test_merge_matmuls (__main__.TestFXExperimental) 2023-01-11T21:38:01.8417035Z A collection of test cases for torch.fx.experimental.merge_matmul, ... ok (0.025s) 2023-01-11T21:38:01.8417329Z test_meta_tracer (__main__.TestFXExperimental) ... 
ok (0.019s) 2023-01-11T21:38:01.8417771Z test_normalize_args (__main__.TestFXExperimental) ... ok (0.708s) 2023-01-11T21:38:01.8418521Z test_normalize_args_perserve_type (__main__.TestFXExperimental) ... /opt/conda/lib/python3.10/site-packages/torch/fx/operator_schemas.py:207: UserWarning: Does not support nested parametric types, got typing.List[~t]. Please file a bug. 2023-01-11T21:38:01.8418927Z warnings.warn( 2023-01-11T21:38:01.8419087Z ok (0.007s) 2023-01-11T21:38:01.8419333Z test_normalize_args_preserve_meta (__main__.TestFXExperimental) ... ok (0.006s) 2023-01-11T21:38:01.8419649Z test_normalize_binary_operators (__main__.TestFXExperimental) ... ok (0.054s) 2023-01-11T21:38:01.8419939Z test_normalize_modules_exhaustive (__main__.TestFXExperimental) 2023-01-11T21:38:01.8420851Z Exhaustively test `Node.normalized_arguments` on all standard ... /opt/conda/lib/python3.10/site-packages/torch/nn/modules/conv.py:309: UserWarning: Using padding='same' with even kernel lengths and odd dilation may require a zero-padded copy of the input be created (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Convolution.cpp:997.) 2023-01-11T21:38:01.8421422Z return F.conv1d(input, weight, bias, self.stride, 2023-01-11T21:38:01.8421626Z ok (1.541s) 2023-01-11T21:38:01.8421853Z test_optimize_for_inference_cpu (__main__.TestFXExperimental) ... ok (0.263s) 2023-01-11T21:38:01.8422179Z test_optimize_for_inference_cpu_torchvision (__main__.TestFXExperimental) ... ok (10.326s) 2023-01-11T21:38:01.8422700Z test_partition_device_mapping (__main__.TestFXExperimental) ... ok (0.008s) 2023-01-11T21:38:01.8423008Z test_partition_latency (__main__.TestFXExperimental) ... ok (0.006s) 2023-01-11T21:38:01.8423301Z test_partition_node_manipulation (__main__.TestFXExperimental) ... ok (0.004s) 2023-01-11T21:38:01.8423616Z test_replace_target_nodes_with (__main__.TestFXExperimental) ... ok (0.003s) 2023-01-11T21:38:01.8423915Z test_saturate_host (__main__.TestFXExperimental) ... [0, 4] 2023-01-11T21:38:01.8424119Z [1, 2] 2023-01-11T21:38:01.8424276Z ok (0.006s) 2023-01-11T21:38:01.8424511Z test_size_based_partition (__main__.TestFXExperimental) ... ok (0.005s) 2023-01-11T21:38:01.8424800Z test_sparse_nn_partition (__main__.TestFXExperimental) ... ok (0.244s) 2023-01-11T21:38:01.8425110Z test_split_module_default_arg (__main__.TestFXExperimental) ... ok (0.007s) 2023-01-11T21:38:01.8425420Z test_split_module_kwargs_expansion (__main__.TestFXExperimental) ... ok (0.004s) 2023-01-11T21:38:01.8425730Z test_split_qualname_mapping (__main__.TestFXExperimental) ... ok (0.005s) 2023-01-11T21:38:01.8426016Z test_subgraph_creation (__main__.TestFXExperimental) ... ok (0.007s) 2023-01-11T21:38:01.8426316Z test_subgraph_trivial_resnet (__main__.TestFXExperimental) ... ok (0.254s) 2023-01-11T21:38:01.8426613Z test_subgraph_uniquename (__main__.TestFXExperimental) ... ok (0.006s) 2023-01-11T21:38:01.8427362Z test_to_folder (__main__.TestFXExperimental) ... /opt/conda/lib/python3.10/site-packages/torch/fx/graph_module.py:476: UserWarning: Was not able to save the following children modules as reprs -saved as pickled files instead: ['seq'] 2023-01-11T21:38:01.8427892Z warnings.warn("Was not able to save the following children modules as reprs -" 2023-01-11T21:38:01.8428127Z ok (0.009s) 2023-01-11T21:38:01.8428388Z test_traceable_function_with_nonstandard_name (__main__.TestFXExperimental) ... ok (0.002s) 2023-01-11T21:38:01.8428688Z test_type_matches (__main__.TestFXExperimental) ... 
ok (0.002s) 2023-01-11T21:38:01.8428845Z 2023-01-11T21:38:01.8429049Z ---------------------------------------------------------------------- 2023-01-11T21:38:01.8429295Z Ran 39 tests in 16.681s 2023-01-11T21:38:01.8429411Z 2023-01-11T21:38:01.8429459Z OK 2023-01-11T21:38:01.8429553Z 2023-01-11T21:38:01.8429637Z Generating XML reports... 2023-01-11T21:38:01.8430069Z Generated XML report: test-reports/python-unittest/test_fx_experimental/TEST-TestFXExperimental-20230111213744.xml 2023-01-11T21:38:01.8430399Z 2023-01-11T21:38:01.8430671Z ##[endgroup] 2023-01-11T21:38:01.8431080Z FINISHED PRINTING LOG FILE of test_fx_experimental (/var/lib/jenkins/workspace/test/test-reports/test_fx_experimental_uqdvcu62) 2023-01-11T21:38:01.8431308Z 2023-01-11T21:38:03.7432653Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:38:03.8083405Z Ignoring disabled issues: [] 2023-01-11T21:38:03.8236460Z Running test_fx_passes ... [2023-01-11 21:38:03.823360] 2023-01-11T21:38:03.8238142Z Executing ['/opt/conda/bin/python', '-bb', 'test_fx_passes.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:38:03.823606] 2023-01-11T21:38:06.1105389Z 2023-01-11T21:38:06.1105902Z Expand the folded group to see the log file of test_fx_passes 2023-01-11T21:38:06.1106970Z ##[group]PRINTING LOG FILE of test_fx_passes (/var/lib/jenkins/workspace/test/test-reports/test_fx_passes_qfw2fcdq) 2023-01-11T21:38:06.1107222Z 2023-01-11T21:38:06.1107299Z Running tests... 2023-01-11T21:38:06.1107896Z ---------------------------------------------------------------------- 2023-01-11T21:38:06.1108287Z Test results will be stored in test-reports/python-unittest/test_fx_passes 2023-01-11T21:38:06.1108591Z test_fuser_pass_deep_model (__main__.TestFXGraphPasses) ... ok (0.283s) 2023-01-11T21:38:06.1109010Z test_fuser_util_partition_[['add', 'add_1', 'add_2']] (__main__.TestFXGraphPasses) ... ok (0.007s) 2023-01-11T21:38:06.1109450Z test_fuser_util_partition_[['add', 'add_1'], ['add_5', 'add_6']] (__main__.TestFXGraphPasses) ... ok (0.006s) 2023-01-11T21:38:06.1109961Z test_fuser_util_partition_[['add', 'linear', 'add_1', 'param', 'add_2', 'add_3', 'add_4', 'linear2', 'add_5', 'add_6', 'relu']] (__main__.TestFXGraphPasses) ... ok (0.005s) 2023-01-11T21:38:06.1110496Z test_fuser_util_partition_[['add_2', 'add_3']] (__main__.TestFXGraphPasses) ... ok (0.005s) 2023-01-11T21:38:06.1110894Z test_fuser_util_partition_[['add_3', 'add_4']] (__main__.TestFXGraphPasses) ... ok (0.005s) 2023-01-11T21:38:06.1111325Z test_fuser_util_partition_[['add_4', 'add_1', 'add_3', 'add_2']] (__main__.TestFXGraphPasses) ... ok (0.005s) 2023-01-11T21:38:06.1111772Z test_fuser_util_partition_[['add_5', 'add_6'], ['add_1', 'add_2', 'add_3', 'add_4']] (__main__.TestFXGraphPasses) ... ok (0.006s) 2023-01-11T21:38:06.1112201Z test_fuser_util_partition_[['add_5', 'linear2']] (__main__.TestFXGraphPasses) ... ok (0.005s) 2023-01-11T21:38:06.1112597Z test_fuser_util_partition_[['add_6', 'add_5']] (__main__.TestFXGraphPasses) ... ok (0.005s) 2023-01-11T21:38:06.1112991Z test_fuser_util_partition_[['add_6', 'relu']] (__main__.TestFXGraphPasses) ... ok (0.005s) 2023-01-11T21:38:06.1113478Z test_fuser_util_partition_[['param', 'add_1', 'linear']] (__main__.TestFXGraphPasses) ... ok (0.005s) 2023-01-11T21:38:06.1113894Z test_fuser_util_partition_[['param', 'add_2']] (__main__.TestFXGraphPasses) ... ok (0.005s) 2023-01-11T21:38:06.1114314Z test_fuser_util_xfail_partition_[['add', 'add_1', 'add_3']] (__main__.TestFXGraphPasses) ... 
ok (0.003s) 2023-01-11T21:38:06.1114760Z test_fuser_util_xfail_partition_[['add', 'add_1'], ['add_1', 'add_5', 'add_6']] (__main__.TestFXGraphPasses) ... ok (0.003s) 2023-01-11T21:38:06.1115192Z test_fuser_util_xfail_partition_[['add_4', 'add_5']] (__main__.TestFXGraphPasses) ... ok (0.003s) 2023-01-11T21:38:06.1115608Z test_fuser_util_xfail_partition_[['relu', 'add_5']] (__main__.TestFXGraphPasses) ... ok (0.003s) 2023-01-11T21:38:06.1116262Z test_partitioner_fn__expected_partition_[['add_7', 'add_6'], ['add_5', 'add_4', 'add_3'], ['add_2', 'add_1', 'add']]_bookend_non_compute_pass_False (__main__.TestFXGraphPasses) ... ok (0.006s) 2023-01-11T21:38:06.1117004Z test_partitioner_fn__expected_partition_[['add_3', 'add_2', 'add', 'add_1']]_bookend_non_compute_pass_False (__main__.TestFXGraphPasses) ... ok (0.012s) 2023-01-11T21:38:06.1117686Z test_partitioner_fn__expected_partition_[['add_1'], ['add']]_bookend_non_compute_pass_False (__main__.TestFXGraphPasses) ... ok (0.005s) 2023-01-11T21:38:06.1118491Z test_partitioner_fn__expected_partition_[['add_2'], ['add_3', 'add_4', 'add_1'], ['add']]_bookend_non_compute_pass_False (__main__.TestFXGraphPasses) ... ok (0.007s) 2023-01-11T21:38:06.1119204Z test_partitioner_fn__expected_partition_[['add_2', 'add_1', 'add']]_bookend_non_compute_pass_False (__main__.TestFXGraphPasses) ... ok (0.005s) 2023-01-11T21:38:06.1119927Z test_partitioner_fn__expected_partition_[['add', 'std_mean', 'getitem', 'getitem_1']]_bookend_non_compute_pass_False (__main__.TestFXGraphPasses) ... ok (0.004s) 2023-01-11T21:38:06.1120757Z test_partitioner_fn__expected_partition_[['add_1', 'add', 'permute_1', 'view', 'permute_2', 'permute_3', 'permute']]_bookend_non_compute_pass_False (__main__.TestFXGraphPasses) ... ok (0.005s) 2023-01-11T21:38:06.1121496Z test_partitioner_fn__expected_partition_[['permute_1', 'add_1', 'add']]_bookend_non_compute_pass_True (__main__.TestFXGraphPasses) ... ok (0.005s) 2023-01-11T21:38:06.1122266Z test_partitioner_fn__expected_partition_[['add_1', 'add', 'permute_1', 'view', 'permute_2', 'permute_3', 'permute']]_bookend_non_compute_pass_False (__main__.TestFXGraphPasses) ... ok (0.005s) 2023-01-11T21:38:06.1123006Z test_partitioner_fn__expected_partition_[['permute_1', 'add_1', 'add']]_bookend_non_compute_pass_True (__main__.TestFXGraphPasses) ... ok (0.005s) 2023-01-11T21:38:06.1123724Z test_partitioner_fn__expected_partition_[['add_3', 'add_2'], ['add_1', 'add']]_bookend_non_compute_pass_False (__main__.TestFXGraphPasses) ... ok (0.005s) 2023-01-11T21:38:06.1124423Z test_partitioner_fn__expected_partition_[['add_2', 'add_1', 'add']]_bookend_non_compute_pass_False (__main__.TestFXGraphPasses) ... ok (0.005s) 2023-01-11T21:38:06.1125112Z test_partitioner_fn__expected_partition_[['add_2', 'add_1', 'add']]_bookend_non_compute_pass_False (__main__.TestFXGraphPasses) ... ok (0.005s) 2023-01-11T21:38:06.1125776Z test_partitioner_fn__expected_partition_[['add_1', 'add']]_bookend_non_compute_pass_False (__main__.TestFXGraphPasses) ... ok (0.004s) 2023-01-11T21:38:06.1126440Z test_partitioner_fn__expected_partition_[['add']]_bookend_non_compute_pass_False (__main__.TestFXGraphPasses) ... ok (0.004s) 2023-01-11T21:38:06.1127139Z test_partitioner_fn__expected_partition_[['add_3', 'add_2', 'add', 'add_1']]_bookend_non_compute_pass_False (__main__.TestFXGraphPasses) ... ok (0.004s) 2023-01-11T21:38:06.1127843Z test_partitioner_fn__expected_partition_[['add_3', 'add_2', 'add', 'add_1']]_bookend_non_compute_pass_False (__main__.TestFXGraphPasses) ... 
ok (0.004s) 2023-01-11T21:38:06.1128550Z test_partitioner_fn__expected_partition_[['add_3', 'add_2', 'add_1', 'add']]_bookend_non_compute_pass_False (__main__.TestFXGraphPasses) ... ok (0.005s) 2023-01-11T21:38:06.1129107Z test_subgraph_matcher_test_model_ (__main__.TestFXMatcherUtils) ... ok (0.004s) 2023-01-11T21:38:06.1129627Z test_subgraph_matcher_test_model_ (__main__.TestFXMatcherUtils) ... ok (0.004s) 2023-01-11T21:38:06.1130204Z test_subgraph_matcher_test_model_ (__main__.TestFXMatcherUtils) ... ok (0.004s) 2023-01-11T21:38:06.1130756Z test_subgraph_matcher_test_model_ (__main__.TestFXMatcherUtils) ... ok (0.003s) 2023-01-11T21:38:06.1131288Z test_subgraph_matcher_test_model_ (__main__.TestFXMatcherUtils) ... ok (0.003s) 2023-01-11T21:38:06.1131870Z test_subgraph_matcher_test_model_ (__main__.TestFXMatcherUtils) ... ok (0.004s) 2023-01-11T21:38:06.1132469Z test_subgraph_matcher_test_model_ (__main__.TestFXMatcherUtils) ... ok (0.004s) 2023-01-11T21:38:06.1133028Z test_subgraph_matcher_test_model_ (__main__.TestFXMatcherUtils) ... ok (0.003s) 2023-01-11T21:38:06.1133585Z test_subgraph_matcher_test_model_ (__main__.TestFXMatcherUtils) ... ok (0.004s) 2023-01-11T21:38:06.1134084Z test_subgraph_matcher_test_model_ (__main__.TestFXMatcherUtils) ... ok (0.003s) 2023-01-11T21:38:06.1134569Z test_subgraph_matcher_test_model_ (__main__.TestFXMatcherUtils) ... ok (0.005s) 2023-01-11T21:38:06.1135072Z test_subgraph_matcher_test_model_ (__main__.TestFXMatcherUtils) ... ok (0.005s) 2023-01-11T21:38:06.1135540Z test_subgraph_matcher_test_model_ (__main__.TestFXMatcherUtils) ... ok (0.003s) 2023-01-11T21:38:06.1136032Z test_subgraph_matcher_test_model_ (__main__.TestFXMatcherUtils) ... ok (0.004s) 2023-01-11T21:38:06.1136513Z test_subgraph_matcher_test_model_ (__main__.TestFXMatcherUtils) ... ok (0.004s) 2023-01-11T21:38:06.1136984Z test_subgraph_matcher_test_model_ (__main__.TestFXMatcherUtils) ... ok (0.003s) 2023-01-11T21:38:06.1137177Z 2023-01-11T21:38:06.1137380Z ---------------------------------------------------------------------- 2023-01-11T21:38:06.1137627Z Ran 51 tests in 0.513s 2023-01-11T21:38:06.1137742Z 2023-01-11T21:38:06.1137804Z OK 2023-01-11T21:38:06.1137896Z 2023-01-11T21:38:06.1137967Z Generating XML reports... 2023-01-11T21:38:06.1138383Z Generated XML report: test-reports/python-unittest/test_fx_passes/TEST-TestFXGraphPasses-20230111213805.xml 2023-01-11T21:38:06.1138914Z Generated XML report: test-reports/python-unittest/test_fx_passes/TEST-TestFXMatcherUtils-20230111213805.xml 2023-01-11T21:38:06.1139148Z 2023-01-11T21:38:06.1139405Z ##[endgroup] 2023-01-11T21:38:06.1139786Z FINISHED PRINTING LOG FILE of test_fx_passes (/var/lib/jenkins/workspace/test/test-reports/test_fx_passes_qfw2fcdq) 2023-01-11T21:38:06.1140002Z 2023-01-11T21:38:07.9878709Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:38:08.0526950Z Ignoring disabled issues: [] 2023-01-11T21:38:08.0680370Z Running test_fx_reinplace_pass ... [2023-01-11 21:38:08.067746] 2023-01-11T21:38:08.0682544Z Executing ['/opt/conda/bin/python', '-bb', 'test_fx_reinplace_pass.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... 
[2023-01-11 21:38:08.068003] 2023-01-11T21:38:11.0882853Z 2023-01-11T21:38:11.0883448Z Expand the folded group to see the log file of test_fx_reinplace_pass 2023-01-11T21:38:11.0884670Z ##[group]PRINTING LOG FILE of test_fx_reinplace_pass (/var/lib/jenkins/workspace/test/test-reports/test_fx_reinplace_pass_ftuarg3r) 2023-01-11T21:38:11.0885094Z 2023-01-11T21:38:11.0885234Z Running tests... 2023-01-11T21:38:11.0885869Z ---------------------------------------------------------------------- 2023-01-11T21:38:11.0886597Z Test results will be stored in test-reports/python-unittest/test_fx_reinplace_pass 2023-01-11T21:38:11.0887385Z test_out_node_updated (__main__.TestReinplacePass) ... ok (0.278s) 2023-01-11T21:38:11.0943857Z test_reinplace_basic (__main__.TestReinplacePass) ... ok (0.117s) 2023-01-11T21:38:11.0944664Z test_reinplace_different_metadata (__main__.TestReinplacePass) ... ok (0.028s) 2023-01-11T21:38:11.0945447Z test_reinplace_index_mutation (__main__.TestReinplacePass) ... ok (0.146s) 2023-01-11T21:38:11.0946226Z test_reinplace_overlapping_memory (__main__.TestReinplacePass) ... ok (0.029s) 2023-01-11T21:38:11.0946970Z test_reinplace_scatter_op (__main__.TestReinplacePass) ... ok (0.249s) 2023-01-11T21:38:11.0947721Z test_reinplace_scatter_twice (__main__.TestReinplacePass) ... ok (0.236s) 2023-01-11T21:38:11.0948545Z test_reinplace_scatter_twice_with_different_view_op_invalid (__main__.TestReinplacePass) ... ok (0.065s) 2023-01-11T21:38:11.0949474Z test_reinplace_scatter_twice_with_different_view_op_invalid2 (__main__.TestReinplacePass) ... ok (0.064s) 2023-01-11T21:38:11.0950602Z test_reinplace_scatter_twice_with_different_view_op_valid (__main__.TestReinplacePass) ... ok (0.066s) 2023-01-11T21:38:11.0951442Z test_reinplace_with_view (__main__.TestReinplacePass) ... ok (0.037s) 2023-01-11T21:38:11.0982859Z 2023-01-11T21:38:11.0983481Z ---------------------------------------------------------------------- 2023-01-11T21:38:11.0984001Z Ran 11 tests in 1.316s 2023-01-11T21:38:11.0984208Z 2023-01-11T21:38:11.0984316Z OK 2023-01-11T21:38:11.0984481Z 2023-01-11T21:38:11.0984621Z Generating XML reports... 2023-01-11T21:38:11.0985438Z Generated XML report: test-reports/python-unittest/test_fx_reinplace_pass/TEST-TestReinplacePass-20230111213809.xml 2023-01-11T21:38:11.0985776Z 2023-01-11T21:38:11.0986112Z ##[endgroup] 2023-01-11T21:38:11.0986513Z FINISHED PRINTING LOG FILE of test_fx_reinplace_pass (/var/lib/jenkins/workspace/test/test-reports/test_fx_reinplace_pass_ftuarg3r) 2023-01-11T21:38:11.0986746Z 2023-01-11T21:38:12.9814026Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:38:13.0662012Z Ignoring disabled issues: [] 2023-01-11T21:38:13.0816011Z Running test_hub ... [2023-01-11 21:38:13.081302] 2023-01-11T21:38:13.0818179Z Executing ['/opt/conda/bin/python', '-bb', 'test_hub.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... 
[2023-01-11 21:38:13.081554] 2023-01-11T21:38:14.7081903Z 2023-01-11T21:38:14.7082406Z Expand the folded group to see the log file of test_hub 2023-01-11T21:38:14.7083667Z ##[group]PRINTING LOG FILE of test_hub (/var/lib/jenkins/workspace/test/test-reports/test_hub_nevyfmcm) 2023-01-11T21:38:14.7084031Z 2023-01-11T21:38:14.7084392Z ##[endgroup] 2023-01-11T21:38:14.7085131Z FINISHED PRINTING LOG FILE of test_hub (/var/lib/jenkins/workspace/test/test-reports/test_hub_nevyfmcm) 2023-01-11T21:38:14.7085479Z 2023-01-11T21:38:16.6442242Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:38:16.7097187Z Ignoring disabled issues: [] 2023-01-11T21:38:16.7253637Z Running test_import_stats ... [2023-01-11 21:38:16.724990] 2023-01-11T21:38:16.7254746Z Executing ['/opt/conda/bin/python', '-bb', 'test_import_stats.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:38:16.725258] 2023-01-11T21:38:21.4839238Z 2023-01-11T21:38:21.4839911Z Expand the folded group to see the log file of test_import_stats 2023-01-11T21:38:21.4840929Z ##[group]PRINTING LOG FILE of test_import_stats (/var/lib/jenkins/workspace/test/test-reports/test_import_stats__ar66mzs) 2023-01-11T21:38:21.4841324Z 2023-01-11T21:38:21.4841459Z Running tests... 2023-01-11T21:38:21.4842112Z ---------------------------------------------------------------------- 2023-01-11T21:38:21.4842831Z Test results will be stored in test-reports/python-unittest/test_import_stats 2023-01-11T21:38:21.4843400Z test_time_cuda_device_count (__main__.TestImportTime) ... ok (1.635s) 2023-01-11T21:38:21.4843882Z test_time_import_torch (__main__.TestImportTime) ... ok (1.423s) 2023-01-11T21:38:21.4844168Z 2023-01-11T21:38:21.4844839Z ---------------------------------------------------------------------- 2023-01-11T21:38:21.4845279Z Ran 2 tests in 3.058s 2023-01-11T21:38:21.4845498Z 2023-01-11T21:38:21.4845607Z OK 2023-01-11T21:38:21.4845747Z 2023-01-11T21:38:21.4845858Z Generating XML reports... 2023-01-11T21:38:21.4846281Z Generated XML report: test-reports/python-unittest/test_import_stats/TEST-TestImportTime-20230111213818.xml 2023-01-11T21:38:21.4846514Z 2023-01-11T21:38:21.4846742Z ##[endgroup] 2023-01-11T21:38:21.4847120Z FINISHED PRINTING LOG FILE of test_import_stats (/var/lib/jenkins/workspace/test/test-reports/test_import_stats__ar66mzs) 2023-01-11T21:38:21.4847335Z 2023-01-11T21:38:23.3885868Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:38:23.4617088Z Ignoring disabled issues: [] 2023-01-11T21:38:23.4771347Z Running test_itt ... [2023-01-11 21:38:23.476741] 2023-01-11T21:38:23.4772144Z Executing ['/opt/conda/bin/python', '-bb', 'test_itt.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:38:23.476982] 2023-01-11T21:38:25.4158780Z 2023-01-11T21:38:25.4159314Z Expand the folded group to see the log file of test_itt 2023-01-11T21:38:25.4160298Z ##[group]PRINTING LOG FILE of test_itt (/var/lib/jenkins/workspace/test/test-reports/test_itt_58t6di67) 2023-01-11T21:38:25.4160671Z 2023-01-11T21:38:25.4160779Z Running tests... 2023-01-11T21:38:25.4161373Z ---------------------------------------------------------------------- 2023-01-11T21:38:25.4162024Z Test results will be stored in test-reports/python-unittest/test_itt 2023-01-11T21:38:25.4162505Z test_itt (__main__.TestItt) ... 
ok (0.230s) 2023-01-11T21:38:25.4162711Z 2023-01-11T21:38:25.4163062Z ---------------------------------------------------------------------- 2023-01-11T21:38:25.4163569Z Ran 1 test in 0.230s 2023-01-11T21:38:25.4163750Z 2023-01-11T21:38:25.4163875Z OK 2023-01-11T21:38:25.4164055Z 2023-01-11T21:38:25.4164182Z Generating XML reports... 2023-01-11T21:38:25.4164902Z Generated XML report: test-reports/python-unittest/test_itt/TEST-TestItt-20230111213824.xml 2023-01-11T21:38:25.4165254Z 2023-01-11T21:38:25.4165772Z ##[endgroup] 2023-01-11T21:38:25.4166357Z FINISHED PRINTING LOG FILE of test_itt (/var/lib/jenkins/workspace/test/test-reports/test_itt_58t6di67) 2023-01-11T21:38:25.4166710Z 2023-01-11T21:38:27.3003191Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:38:27.3657669Z Ignoring disabled issues: [] 2023-01-11T21:38:27.3812894Z Running test_jit_autocast ... [2023-01-11 21:38:27.380912] 2023-01-11T21:38:27.3813771Z Executing ['/opt/conda/bin/python', '-bb', 'test_jit_autocast.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:38:27.381172] 2023-01-11T21:38:45.5963196Z 2023-01-11T21:38:45.5963716Z Expand the folded group to see the log file of test_jit_autocast 2023-01-11T21:38:45.5965073Z ##[group]PRINTING LOG FILE of test_jit_autocast (/var/lib/jenkins/workspace/test/test-reports/test_jit_autocast_7wc5d46q) 2023-01-11T21:38:45.5965872Z CUDA not available, skipping tests 2023-01-11T21:38:45.5966445Z 2023-01-11T21:38:45.5966615Z Running tests... 2023-01-11T21:38:45.5967451Z ---------------------------------------------------------------------- 2023-01-11T21:38:45.5968420Z Test results will be stored in test-reports/python-unittest/test_jit_autocast 2023-01-11T21:38:45.5969152Z test_autocast_api (__main__.TestAutocast) ... skip: No cuda (0.001s) 2023-01-11T21:38:45.5969990Z test_autocast_api_not_supported (__main__.TestAutocast) ... skip: we need to provide dtype argument at this moment (0.000s) 2023-01-11T21:38:45.5970822Z test_autocast_autodiff (__main__.TestAutocast) ... skip: No cuda (0.001s) 2023-01-11T21:38:45.5971631Z test_autocast_decorator (__main__.TestAutocast) ... skip: autocast decorators not supported (0.000s) 2023-01-11T21:38:45.5972463Z test_autocast_decorator_outside_jit (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5973195Z test_autocast_mixed_dtypes (__main__.TestAutocast) ... skip: No cuda (0.001s) 2023-01-11T21:38:45.5974137Z test_callees (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5974834Z test_callees_with_autocast_off (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5975558Z test_callees_with_autocast_on (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5976302Z test_conditional_autocast (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5977120Z test_control_flow (__main__.TestAutocast) ... skip: broken due to lack of type propagation (0.001s) 2023-01-11T21:38:45.5977889Z test_divergent_autocast (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5978596Z test_divergent_types (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5979296Z test_duplicate_inputs (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5979991Z test_eager_and_script (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5980815Z test_explicit_casts (__main__.TestAutocast) ... 
skip: No cuda (0.001s) 2023-01-11T21:38:45.5981484Z test_fp32_policy (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5982149Z test_fp32_policy_with_fp64 (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5983079Z test_fp32_set_opt_dtype_policy (__main__.TestAutocast) ... skip: No cuda (0.001s) 2023-01-11T21:38:45.5983831Z test_fp32_set_opt_dtype_policy_fp64 (__main__.TestAutocast) ... skip: No cuda (0.001s) 2023-01-11T21:38:45.5984516Z test_ignore_amp (__main__.TestAutocast) ... ok (0.014s) 2023-01-11T21:38:45.5985208Z test_implicitly_nested_autocast (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5985909Z test_inplace (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5986566Z test_jit_autocast_softmax_cpu (__main__.TestAutocast) ... ok (0.057s) 2023-01-11T21:38:45.5987258Z test_jit_autocast_softmax_gpu (__main__.TestAutocast) ... skip: No cuda (0.001s) 2023-01-11T21:38:45.5987999Z test_jit_call_method_under_autocast (__main__.TestAutocast) ... skip: No cuda (0.001s) 2023-01-11T21:38:45.5988772Z test_jit_executor_under_autocast (__main__.TestAutocast) ... skip: No cuda (0.001s) 2023-01-11T21:38:45.5989524Z test_jit_freeze_autocast_basic (__main__.TestAutocast) ... skip: No cuda (0.001s) 2023-01-11T21:38:45.5990213Z test_jit_freeze_autocast_constants (__main__.TestAutocast) ... skip: No cuda (0.001s) 2023-01-11T21:38:45.5990935Z test_linear_bf16 (__main__.TestAutocast) ... skip: No cuda bfloat16 support (0.000s) 2023-01-11T21:38:45.5991633Z test_minimal (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5992293Z test_minimal_cpu (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5992947Z test_minimal_off (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5993701Z test_nested_autocast (__main__.TestAutocast) ... skip: No cuda (0.001s) 2023-01-11T21:38:45.5994398Z test_promote_policy (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5995099Z test_promote_policy_fp64 (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5995778Z test_reused_autocast (__main__.TestAutocast) ... skip: No cuda (0.001s) 2023-01-11T21:38:45.5996574Z test_reused_autocast_expr (__main__.TestAutocast) ... skip: unsuported autocast syntax (0.001s) 2023-01-11T21:38:45.5997367Z test_runtime_autocast_state (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5998083Z test_runtime_autocast_state_expr (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5998832Z test_script_and_tracing (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.5999719Z test_script_and_tracing_with_autocast (__main__.TestAutocast) ... skip: autocast(False) is ignored inside traced functions (0.000s) 2023-01-11T21:38:45.6000544Z test_script_module (__main__.TestAutocast) ... skip: No cuda (0.001s) 2023-01-11T21:38:45.6001204Z test_tracing_and_script (__main__.TestAutocast) ... skip: No cuda (0.000s) 2023-01-11T21:38:45.6002251Z test_tracing_with_autocast_and_script (__main__.TestAutocast) ... skip: scripted called from traced TorchScript is not yet working (0.000s) 2023-01-11T21:38:45.6003128Z test_cat_promote (__main__.TestJitTraceAutocast) ... ok (0.257s) 2023-01-11T21:38:45.6003850Z test_generate_autocast_jit_trace_model (__main__.TestJitTraceAutocast) ... ok (3.873s) 2023-01-11T21:38:45.6004666Z test_nchw_autocast_jit_trace_model (__main__.TestJitTraceAutocast) ... 
ok (4.886s) 2023-01-11T21:38:45.6005477Z test_nhwc_autocast_jit_trace_model (__main__.TestJitTraceAutocast) ... ok (5.434s) 2023-01-11T21:38:45.6022880Z test_script_autocast_cpu (__main__.TestJitTraceAutocast) ... ok (0.149s) 2023-01-11T21:38:45.6023672Z test_script_autocast_cuda (__main__.TestJitTraceAutocast) ... skip: No cuda (0.001s) 2023-01-11T21:38:45.6024481Z test_script_autocast_enable_and_check (__main__.TestJitTraceAutocast) ... ok (0.157s) 2023-01-11T21:38:45.6025415Z test_scripted_aliasing (__main__.TestJitTraceAutocast) ... ok (0.152s) 2023-01-11T21:38:45.6025825Z 2023-01-11T21:38:45.6026346Z ---------------------------------------------------------------------- 2023-01-11T21:38:45.6026922Z Ran 53 tests in 15.007s 2023-01-11T21:38:45.6027189Z 2023-01-11T21:38:45.6027359Z OK (skipped=44) 2023-01-11T21:38:45.6027621Z 2023-01-11T21:38:45.6027819Z Generating XML reports... 2023-01-11T21:38:45.6028781Z Generated XML report: test-reports/python-unittest/test_jit_autocast/TEST-TestAutocast-20230111213829.xml 2023-01-11T21:38:45.6030054Z Generated XML report: test-reports/python-unittest/test_jit_autocast/TEST-TestJitTraceAutocast-20230111213829.xml 2023-01-11T21:38:45.6030662Z 2023-01-11T21:38:45.6031243Z ##[endgroup] 2023-01-11T21:38:45.6032188Z FINISHED PRINTING LOG FILE of test_jit_autocast (/var/lib/jenkins/workspace/test/test-reports/test_jit_autocast_7wc5d46q) 2023-01-11T21:38:45.6032703Z 2023-01-11T21:38:47.4638691Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:38:47.5293824Z Ignoring disabled issues: [] 2023-01-11T21:38:47.5449123Z Running test_jit_fuser_te ... [2023-01-11 21:38:47.544624] 2023-01-11T21:38:47.5451229Z Executing ['/opt/conda/bin/python', '-bb', 'test_jit_fuser_te.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:38:47.544875] 2023-01-11T21:40:24.0528505Z 2023-01-11T21:40:24.0529444Z Expand the folded group to see the log file of test_jit_fuser_te 2023-01-11T21:40:24.0531015Z ##[group]PRINTING LOG FILE of test_jit_fuser_te (/var/lib/jenkins/workspace/test/test-reports/test_jit_fuser_te_bqdb0ty0) 2023-01-11T21:40:24.0531559Z CUDA not available, skipping tests 2023-01-11T21:40:24.0531782Z 2023-01-11T21:40:24.0531918Z Running tests... 2023-01-11T21:40:24.0535194Z ---------------------------------------------------------------------- 2023-01-11T21:40:24.0536248Z Test results will be stored in test-reports/python-unittest/test_jit_fuser_te 2023-01-11T21:40:24.0537079Z test_autodiff_fallback (jit.test_fuser_common.TestFuserCommon) ... ok (0.111s) 2023-01-11T21:40:24.0537784Z test_abs (__main__.TestTEFuserDynamic) ... ok (0.280s) 2023-01-11T21:40:24.0538458Z test_adaptive_avg_pool2d (__main__.TestTEFuserDynamic) ... ok (0.062s) 2023-01-11T21:40:24.0539144Z test_add_bool (__main__.TestTEFuserDynamic) ... ok (0.274s) 2023-01-11T21:40:24.0539793Z test_addcmul (__main__.TestTEFuserDynamic) ... ok (0.089s) 2023-01-11T21:40:24.0540588Z test_arg_configurations_smoke (__main__.TestTEFuserDynamic) ... skip: TODO: chunk dynamic shapes (0.001s) 2023-01-11T21:40:24.0541771Z test_autocast_down (__main__.TestTEFuserDynamic) ... skip: half-precision NNC fusion requires CUDA (0.000s) 2023-01-11T21:40:24.0543084Z test_autocast_up (__main__.TestTEFuserDynamic) ... skip: half-precision NNC fusion requires CUDA (0.000s) 2023-01-11T21:40:24.0543874Z test_batch_norm (__main__.TestTEFuserDynamic) ... ok (0.660s) 2023-01-11T21:40:24.0544542Z test_binary_div_ops (__main__.TestTEFuserDynamic) ... 
ok (2.402s) 2023-01-11T21:40:24.0545510Z test_binary_ops (__main__.TestTEFuserDynamic) ... ok (8.674s) 2023-01-11T21:40:24.0546158Z test_binary_pow (__main__.TestTEFuserDynamic) ... ok (0.803s) 2023-01-11T21:40:24.0546825Z test_binary_scalar_ops (__main__.TestTEFuserDynamic) ... ok (1.026s) 2023-01-11T21:40:24.0547735Z test_binary_tensor_scalar_ops (__main__.TestTEFuserDynamic) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s) 2023-01-11T21:40:24.0548575Z test_bitwise_ops (__main__.TestTEFuserDynamic) ... ok (1.911s) 2023-01-11T21:40:24.0549234Z test_broadcast (__main__.TestTEFuserDynamic) ... ok (0.232s) 2023-01-11T21:40:24.0549863Z test_cat_2k_args (__main__.TestTEFuserDynamic) ... ok (0.689s) 2023-01-11T21:40:24.0550529Z test_cat_graph_opt (__main__.TestTEFuserDynamic) ... ok (0.763s) 2023-01-11T21:40:24.0551300Z test_channels_last_dims_dynamic (__main__.TestTEFuserDynamic) ... skip: fuser requires CUDA (0.001s) 2023-01-11T21:40:24.0552288Z test_checks_cat_inputs (__main__.TestTEFuserDynamic) ... ok (0.456s) 2023-01-11T21:40:24.0553055Z test_chunk (__main__.TestTEFuserDynamic) ... skip: TODO: chunk dynamic shapes (0.001s) 2023-01-11T21:40:24.0553933Z test_chunk_correctness (__main__.TestTEFuserDynamic) ... skip: TODO: chunk dynamic shapes (0.001s) 2023-01-11T21:40:24.0554827Z test_chunk_distributes (__main__.TestTEFuserDynamic) ... skip: TODO: chunk dynamic shapes (0.001s) 2023-01-11T21:40:24.0555752Z test_chunk_motion_deduplicates_inputs (__main__.TestTEFuserDynamic) ... skip: TODO: chunk dynamic shapes (0.001s) 2023-01-11T21:40:24.0556628Z test_chunk_mul_one (__main__.TestTEFuserDynamic) ... skip: TODO: chunk dynamic shapes (0.001s) 2023-01-11T21:40:24.0557447Z test_chunk_multiple (__main__.TestTEFuserDynamic) ... skip: TODO: chunk dynamic shapes (0.001s) 2023-01-11T21:40:24.0558137Z test_clamp (__main__.TestTEFuserDynamic) ... ok (3.515s) 2023-01-11T21:40:24.0558817Z test_clamp_double (__main__.TestTEFuserDynamic) ... ok (0.139s) 2023-01-11T21:40:24.0559499Z test_clamp_int (__main__.TestTEFuserDynamic) ... ok (0.133s) 2023-01-11T21:40:24.0560153Z test_comparison_eq_ne (__main__.TestTEFuserDynamic) ... ok (0.297s) 2023-01-11T21:40:24.0560873Z test_comparison_ge_le (__main__.TestTEFuserDynamic) ... ok (0.286s) 2023-01-11T21:40:24.0561572Z test_comparison_gt_lt (__main__.TestTEFuserDynamic) ... ok (0.287s) 2023-01-11T21:40:24.0562220Z test_concat (__main__.TestTEFuserDynamic) ... ok (0.567s) 2023-01-11T21:40:24.0562791Z test_concat_invariant (__main__.TestTEFuserDynamic) ... ok (0.755s) 2023-01-11T21:40:24.0563592Z test_constant_chunk_shapes (__main__.TestTEFuserDynamic) ... skip: TODO: chunk dynamic shapes (0.001s) 2023-01-11T21:40:24.0564695Z test_conv2d (__main__.TestTEFuserDynamic) ... skip: don't run conv with dynamic shapes (0.001s) 2023-01-11T21:40:24.0565772Z test_conv2d_depthwise (__main__.TestTEFuserDynamic) ... skip: don't run conv with dynamic shapes (0.001s) 2023-01-11T21:40:24.0566626Z test_cuda_half (__main__.TestTEFuserDynamic) ... skip: fuser requires CUDA (0.001s) 2023-01-11T21:40:24.0567330Z test_dims (__main__.TestTEFuserDynamic) ... ok (0.128s) 2023-01-11T21:40:24.0567948Z test_disabled (__main__.TestTEFuserDynamic) ... ok (0.005s) 2023-01-11T21:40:24.0568552Z test_div_bool (__main__.TestTEFuserDynamic) ... ok (0.135s) 2023-01-11T21:40:24.0569189Z test_dynamic_cat (__main__.TestTEFuserDynamic) ... ok (0.009s) 2023-01-11T21:40:24.0569829Z test_dynamic_shapes (__main__.TestTEFuserDynamic) ... 
ok (2.038s) 2023-01-11T21:40:24.0570539Z test_eq_unsqueeze_type_as (__main__.TestTEFuserDynamic) ... ok (0.303s) 2023-01-11T21:40:24.0571189Z test_erf (__main__.TestTEFuserDynamic) ... ok (0.001s) 2023-01-11T21:40:24.0571888Z test_exhaust_specializations (__main__.TestTEFuserDynamic) ... ok (0.028s) 2023-01-11T21:40:24.0572563Z test_exp (__main__.TestTEFuserDynamic) ... ok (0.088s) 2023-01-11T21:40:24.0573296Z test_fusion_reuse_multi_gpu (__main__.TestTEFuserDynamic) ... skip: fuser requires CUDA (0.001s) 2023-01-11T21:40:24.0574158Z test_gelu (__main__.TestTEFuserDynamic) ... ok (0.026s) 2023-01-11T21:40:24.0574828Z test_hardsigmoid_fwd_bwd (__main__.TestTEFuserDynamic) ... ok (0.267s) 2023-01-11T21:40:24.0575533Z test_hardswish_fwd_bwd (__main__.TestTEFuserDynamic) ... ok (0.292s) 2023-01-11T21:40:24.0576251Z test_inlined_optimized_graph (__main__.TestTEFuserDynamic) ... ok (0.409s) 2023-01-11T21:40:24.0576947Z test_isnan (__main__.TestTEFuserDynamic) ... ok (0.653s) 2023-01-11T21:40:24.0577671Z test_kernel_cache_multi_gpu (__main__.TestTEFuserDynamic) ... skip: fuser requires CUDA (0.001s) 2023-01-11T21:40:24.0578404Z test_lerp (__main__.TestTEFuserDynamic) ... ok (0.339s) 2023-01-11T21:40:24.0579572Z test_list_ops (__main__.TestTEFuserDynamic) ... skip: FIXME: fuser doesn't include ListConstruct nodes to the group causing a failure (0.001s) 2023-01-11T21:40:24.0580416Z test_lstm (__main__.TestTEFuserDynamic) ... ok (0.749s) 2023-01-11T21:40:24.0581156Z test_lstm_concat (__main__.TestTEFuserDynamic) ... ok (0.242s) 2023-01-11T21:40:24.0581877Z test_lstm_gates_permutations (__main__.TestTEFuserDynamic) ... ok (10.584s) 2023-01-11T21:40:24.0582696Z test_lstm_traced (__main__.TestTEFuserDynamic) ... ok (0.430s) 2023-01-11T21:40:24.0583434Z test_masked_fill (__main__.TestTEFuserDynamic) ... skip: Temporarily disabled (0.001s) 2023-01-11T21:40:24.0584494Z test_matmul (__main__.TestTEFuserDynamic) ... skip: don't run conv with dynamic shapes (0.002s) 2023-01-11T21:40:24.0585513Z test_milstm (__main__.TestTEFuserDynamic) ... skip: don't run conv with dynamic shapes (0.001s) 2023-01-11T21:40:24.0586236Z test_minmax (__main__.TestTEFuserDynamic) ... ok (1.150s) 2023-01-11T21:40:24.0586906Z test_minmax_int_ops (__main__.TestTEFuserDynamic) ... ok (0.802s) 2023-01-11T21:40:24.0587562Z test_mul_bool (__main__.TestTEFuserDynamic) ... ok (0.169s) 2023-01-11T21:40:24.0588201Z test_neg_pow (__main__.TestTEFuserDynamic) ... ok (0.141s) 2023-01-11T21:40:24.0589146Z test_nonzero_device_cuda (__main__.TestTEFuserDynamic) ... skip: needs non-zero device (0.001s) 2023-01-11T21:40:24.0589896Z test_nop (__main__.TestTEFuserDynamic) ... ok (0.001s) 2023-01-11T21:40:24.0591078Z test_profiler (__main__.TestTEFuserDynamic) ... STAGE:2023-01-11 21:39:33 12456:12456 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:40:24.0592284Z STAGE:2023-01-11 21:39:33 12456:12456 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:40:24.0593381Z STAGE:2023-01-11 21:39:33 12456:12456 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:40:24.0594084Z ok (0.076s) 2023-01-11T21:40:24.0594698Z test_rand_broadcast_cuda (__main__.TestTEFuserDynamic) ... skip: fuser requires CUDA (0.001s) 2023-01-11T21:40:24.0595541Z test_rand_cuda (__main__.TestTEFuserDynamic) ... skip: fuser requires CUDA (0.001s) 2023-01-11T21:40:24.0596324Z test_rand_diamond (__main__.TestTEFuserDynamic) ... 
skip: fuser requires CUDA (0.000s) 2023-01-11T21:40:24.0597046Z test_relu (__main__.TestTEFuserDynamic) ... ok (0.139s) 2023-01-11T21:40:24.0597662Z test_relu_fwd_bwd (__main__.TestTEFuserDynamic) ... ok (0.162s) 2023-01-11T21:40:24.0598384Z test_remove_output_used_only_in_size (__main__.TestTEFuserDynamic) ... ok (0.029s) 2023-01-11T21:40:24.0599060Z test_scalar (__main__.TestTEFuserDynamic) ... ok (0.049s) 2023-01-11T21:40:24.0599705Z test_scalar_arg (__main__.TestTEFuserDynamic) ... ok (0.501s) 2023-01-11T21:40:24.0600373Z test_scalar_only_inputs (__main__.TestTEFuserDynamic) ... ok (0.048s) 2023-01-11T21:40:24.0601071Z test_skip_grad_in_check (__main__.TestTEFuserDynamic) ... ok (0.158s) 2023-01-11T21:40:24.0601758Z test_small_constant (__main__.TestTEFuserDynamic) ... ok (0.317s) 2023-01-11T21:40:24.0602404Z test_sub_gt_and (__main__.TestTEFuserDynamic) ... ok (0.181s) 2023-01-11T21:40:24.0603023Z test_sum_dim (__main__.TestTEFuserDynamic) ... ok (0.271s) 2023-01-11T21:40:24.0603668Z test_sum_keepdim_cast (__main__.TestTEFuserDynamic) ... ok (0.138s) 2023-01-11T21:40:24.0604474Z test_sum_simple (__main__.TestTEFuserDynamic) ... ok (0.091s) 2023-01-11T21:40:24.0605131Z test_superslomo (__main__.TestTEFuserDynamic) ... ok (1.293s) 2023-01-11T21:40:24.0605800Z test_tensor_scalar_ops (__main__.TestTEFuserDynamic) ... ok (0.653s) 2023-01-11T21:40:24.0606470Z test_ternary_norm_ops (__main__.TestTEFuserDynamic) ... ok (0.417s) 2023-01-11T21:40:24.0607127Z test_ternary_ops (__main__.TestTEFuserDynamic) ... ok (0.660s) 2023-01-11T21:40:24.0607781Z test_threshold (__main__.TestTEFuserDynamic) ... ok (0.210s) 2023-01-11T21:40:24.0608401Z test_to_device (__main__.TestTEFuserDynamic) ... ok (0.148s) 2023-01-11T21:40:24.0608975Z test_to_dtype (__main__.TestTEFuserDynamic) ... ok (0.381s) 2023-01-11T21:40:24.0609563Z test_torch_to (__main__.TestTEFuserDynamic) ... ok (3.948s) 2023-01-11T21:40:24.0610157Z test_type_as_cat (__main__.TestTEFuserDynamic) ... ok (2.110s) 2023-01-11T21:40:24.0610765Z test_typecheck (__main__.TestTEFuserDynamic) ... ok (0.052s) 2023-01-11T21:40:24.0611688Z test_unary_ops (__main__.TestTEFuserDynamic) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s) 2023-01-11T21:40:24.0612526Z test_unrolled_cat (__main__.TestTEFuserDynamic) ... ok (0.012s) 2023-01-11T21:40:24.0613264Z test_unsqueeze_size_calculation (__main__.TestTEFuserDynamic) ... ok (0.347s) 2023-01-11T21:40:24.0613994Z test_unsqueeze_var_dim (__main__.TestTEFuserDynamic) ... ok (0.354s) 2023-01-11T21:40:24.0615894Z test_unsupported_dtypes (__main__.TestTEFuserDynamic) ... /var/lib/jenkins/workspace/test/test_jit_fuser_te.py:1186: UserWarning: ComplexHalf support is experimental and many operators don't support it yet. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/EmptyTensor.cpp:32.) 2023-01-11T21:40:24.0617128Z return v.to(dtype) 2023-01-11T21:40:24.0617548Z ok (0.037s) 2023-01-11T21:40:24.0618097Z test_where_and_typing (__main__.TestTEFuserDynamic) ... ok (0.314s) 2023-01-11T21:40:24.0618770Z test_where_ops (__main__.TestTEFuserDynamic) ... ok (2.693s) 2023-01-11T21:40:24.0619594Z test_with_strict_fusion (__main__.TestTEFuserDynamic) ... /opt/conda/lib/python3.10/site-packages/torch/jit/__init__.py:216: UserWarning: Only works in script mode 2023-01-11T21:40:24.0620107Z warnings.warn("Only works in script mode") 2023-01-11T21:40:24.0620381Z ok (0.172s) 2023-01-11T21:40:24.0620726Z test_zero_element_tensors (__main__.TestTEFuserDynamic) ... 
ok (0.093s) 2023-01-11T21:40:24.0621120Z test_abs (__main__.TestTEFuserStatic) ... ok (0.050s) 2023-01-11T21:40:24.0621512Z test_adaptive_avg_pool2d (__main__.TestTEFuserStatic) ... ok (0.061s) 2023-01-11T21:40:24.0621933Z test_add_bool (__main__.TestTEFuserStatic) ... ok (0.107s) 2023-01-11T21:40:24.0622315Z test_addcmul (__main__.TestTEFuserStatic) ... ok (0.042s) 2023-01-11T21:40:24.0622875Z test_arg_configurations_smoke (__main__.TestTEFuserStatic) ... ok (0.041s) 2023-01-11T21:40:24.0623533Z test_autocast_down (__main__.TestTEFuserStatic) ... skip: half-precision NNC fusion requires CUDA (0.001s) 2023-01-11T21:40:24.0624237Z test_autocast_up (__main__.TestTEFuserStatic) ... skip: half-precision NNC fusion requires CUDA (0.000s) 2023-01-11T21:40:24.0624686Z test_batch_norm (__main__.TestTEFuserStatic) ... ok (0.289s) 2023-01-11T21:40:24.0625074Z test_binary_div_ops (__main__.TestTEFuserStatic) ... ok (1.112s) 2023-01-11T21:40:24.0625463Z test_binary_ops (__main__.TestTEFuserStatic) ... ok (3.260s) 2023-01-11T21:40:24.0625868Z test_binary_pow (__main__.TestTEFuserStatic) ... ok (0.275s) 2023-01-11T21:40:24.0626270Z test_binary_scalar_ops (__main__.TestTEFuserStatic) ... ok (1.029s) 2023-01-11T21:40:24.0626693Z test_binary_tensor_scalar_ops (__main__.TestTEFuserStatic) ... ok (8.356s) 2023-01-11T21:40:24.0627129Z test_bitwise_ops (__main__.TestTEFuserStatic) ... ok (0.733s) 2023-01-11T21:40:24.0627515Z test_broadcast (__main__.TestTEFuserStatic) ... ok (0.058s) 2023-01-11T21:40:24.0627890Z test_cat_2k_args (__main__.TestTEFuserStatic) ... ok (0.732s) 2023-01-11T21:40:24.0628433Z test_cat_graph_opt (__main__.TestTEFuserStatic) ... ok (0.080s) 2023-01-11T21:40:24.0628905Z test_channels_last_dims_dynamic (__main__.TestTEFuserStatic) ... skip: fuser requires CUDA (0.001s) 2023-01-11T21:40:24.0629348Z test_checks_cat_inputs (__main__.TestTEFuserStatic) ... ok (0.070s) 2023-01-11T21:40:24.0629731Z test_chunk (__main__.TestTEFuserStatic) ... ok (0.102s) 2023-01-11T21:40:24.0630114Z test_chunk_correctness (__main__.TestTEFuserStatic) ... ok (1.101s) 2023-01-11T21:40:24.0630528Z test_chunk_distributes (__main__.TestTEFuserStatic) ... ok (0.056s) 2023-01-11T21:40:24.0630975Z test_chunk_motion_deduplicates_inputs (__main__.TestTEFuserStatic) ... ok (0.102s) 2023-01-11T21:40:24.0631416Z test_chunk_mul_one (__main__.TestTEFuserStatic) ... ok (0.112s) 2023-01-11T21:40:24.0631812Z test_chunk_multiple (__main__.TestTEFuserStatic) ... ok (0.183s) 2023-01-11T21:40:24.0632173Z test_clamp (__main__.TestTEFuserStatic) ... ok (0.900s) 2023-01-11T21:40:24.0632665Z test_clamp_double (__main__.TestTEFuserStatic) ... ok (0.058s) 2023-01-11T21:40:24.0633061Z test_clamp_int (__main__.TestTEFuserStatic) ... ok (0.054s) 2023-01-11T21:40:24.0633458Z test_comparison_eq_ne (__main__.TestTEFuserStatic) ... ok (0.084s) 2023-01-11T21:40:24.0633960Z test_comparison_ge_le (__main__.TestTEFuserStatic) ... ok (0.084s) 2023-01-11T21:40:24.0634380Z test_comparison_gt_lt (__main__.TestTEFuserStatic) ... ok (0.084s) 2023-01-11T21:40:24.0634782Z test_concat (__main__.TestTEFuserStatic) ... ok (0.061s) 2023-01-11T21:40:24.0635161Z test_concat_invariant (__main__.TestTEFuserStatic) ... ok (0.079s) 2023-01-11T21:40:24.0636137Z test_constant_chunk_shapes (__main__.TestTEFuserStatic) ... /var/lib/jenkins/workspace/test/test_jit_fuser_te.py:2443: TracerWarning: torch.tensor results are registered as constants in the trace. 
You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. 2023-01-11T21:40:24.0636952Z r = torch.tensor(4) 2023-01-11T21:40:24.0637203Z ok (0.110s) 2023-01-11T21:40:24.0637514Z test_conv2d (__main__.TestTEFuserStatic) ... ok (0.026s) 2023-01-11T21:40:24.0637903Z test_conv2d_depthwise (__main__.TestTEFuserStatic) ... ok (0.868s) 2023-01-11T21:40:24.0638360Z test_cuda_half (__main__.TestTEFuserStatic) ... skip: fuser requires CUDA (0.001s) 2023-01-11T21:40:24.0638775Z test_dims (__main__.TestTEFuserStatic) ... ok (0.067s) 2023-01-11T21:40:24.0639137Z test_disabled (__main__.TestTEFuserStatic) ... ok (0.005s) 2023-01-11T21:40:24.0639532Z test_div_bool (__main__.TestTEFuserStatic) ... ok (0.035s) 2023-01-11T21:40:24.0639926Z test_dynamic_cat (__main__.TestTEFuserStatic) ... ok (0.008s) 2023-01-11T21:40:24.0640325Z test_dynamic_shapes (__main__.TestTEFuserStatic) ... ok (2.059s) 2023-01-11T21:40:24.0640728Z test_eq_unsqueeze_type_as (__main__.TestTEFuserStatic) ... ok (0.110s) 2023-01-11T21:40:24.0641092Z test_erf (__main__.TestTEFuserStatic) ... ok (0.001s) 2023-01-11T21:40:24.0641485Z test_exhaust_specializations (__main__.TestTEFuserStatic) ... ok (0.029s) 2023-01-11T21:40:24.0641847Z test_exp (__main__.TestTEFuserStatic) ... ok (0.047s) 2023-01-11T21:40:24.0642275Z test_fusion_reuse_multi_gpu (__main__.TestTEFuserStatic) ... skip: fuser requires CUDA (0.001s) 2023-01-11T21:40:24.0642728Z test_gelu (__main__.TestTEFuserStatic) ... ok (0.026s) 2023-01-11T21:40:24.0643132Z test_hardsigmoid_fwd_bwd (__main__.TestTEFuserStatic) ... ok (0.103s) 2023-01-11T21:40:24.0643572Z test_hardswish_fwd_bwd (__main__.TestTEFuserStatic) ... ok (0.120s) 2023-01-11T21:40:24.0644029Z test_inlined_optimized_graph (__main__.TestTEFuserStatic) ... ok (0.088s) 2023-01-11T21:40:24.0644446Z test_isnan (__main__.TestTEFuserStatic) ... ok (0.355s) 2023-01-11T21:40:24.0644889Z test_kernel_cache_multi_gpu (__main__.TestTEFuserStatic) ... skip: fuser requires CUDA (0.001s) 2023-01-11T21:40:24.0645408Z test_lerp (__main__.TestTEFuserStatic) ... ok (0.068s) 2023-01-11T21:40:24.0646109Z test_list_ops (__main__.TestTEFuserStatic) ... skip: FIXME: fuser doesn't include ListConstruct nodes to the group causing a failure (0.001s) 2023-01-11T21:40:24.0646600Z test_lstm (__main__.TestTEFuserStatic) ... ok (0.168s) 2023-01-11T21:40:24.0646979Z test_lstm_concat (__main__.TestTEFuserStatic) ... ok (0.164s) 2023-01-11T21:40:24.0647402Z test_lstm_gates_permutations (__main__.TestTEFuserStatic) ... ok (1.744s) 2023-01-11T21:40:24.0647821Z test_lstm_traced (__main__.TestTEFuserStatic) ... ok (0.140s) 2023-01-11T21:40:24.0648262Z test_masked_fill (__main__.TestTEFuserStatic) ... skip: Temporarily disabled (0.001s) 2023-01-11T21:40:24.0648676Z test_matmul (__main__.TestTEFuserStatic) ... ok (0.843s) 2023-01-11T21:40:24.0649058Z test_milstm (__main__.TestTEFuserStatic) ... ok (0.922s) 2023-01-11T21:40:24.0649412Z test_minmax (__main__.TestTEFuserStatic) ... ok (0.289s) 2023-01-11T21:40:24.0649866Z test_minmax_int_ops (__main__.TestTEFuserStatic) ... ok (0.309s) 2023-01-11T21:40:24.0650235Z test_mul_bool (__main__.TestTEFuserStatic) ... ok (0.038s) 2023-01-11T21:40:24.0650574Z test_neg_pow (__main__.TestTEFuserStatic) ... ok (0.139s) 2023-01-11T21:40:24.0651155Z test_nonzero_device_cuda (__main__.TestTEFuserStatic) ... 
skip: needs non-zero device (0.001s) 2023-01-11T21:40:24.0651613Z test_nop (__main__.TestTEFuserStatic) ... ok (0.001s) 2023-01-11T21:40:24.0652354Z test_profiler (__main__.TestTEFuserStatic) ... STAGE:2023-01-11 21:40:17 12456:12456 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:40:24.0653088Z STAGE:2023-01-11 21:40:17 12456:12456 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:40:24.0653775Z STAGE:2023-01-11 21:40:17 12456:12456 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:40:24.0654161Z ok (0.027s) 2023-01-11T21:40:24.0654558Z test_rand_broadcast_cuda (__main__.TestTEFuserStatic) ... skip: fuser requires CUDA (0.001s) 2023-01-11T21:40:24.0655030Z test_rand_cuda (__main__.TestTEFuserStatic) ... skip: fuser requires CUDA (0.001s) 2023-01-11T21:40:24.0655507Z test_rand_diamond (__main__.TestTEFuserStatic) ... skip: fuser requires CUDA (0.000s) 2023-01-11T21:40:24.0655953Z test_relu (__main__.TestTEFuserStatic) ... ok (0.047s) 2023-01-11T21:40:24.0656319Z test_relu_fwd_bwd (__main__.TestTEFuserStatic) ... ok (0.060s) 2023-01-11T21:40:24.0656741Z test_remove_output_used_only_in_size (__main__.TestTEFuserStatic) ... ok (0.027s) 2023-01-11T21:40:24.0657158Z test_scalar (__main__.TestTEFuserStatic) ... ok (0.048s) 2023-01-11T21:40:24.0657525Z test_scalar_arg (__main__.TestTEFuserStatic) ... ok (0.089s) 2023-01-11T21:40:24.0657929Z test_scalar_only_inputs (__main__.TestTEFuserStatic) ... ok (0.047s) 2023-01-11T21:40:24.0658341Z test_skip_grad_in_check (__main__.TestTEFuserStatic) ... ok (0.029s) 2023-01-11T21:40:24.0658796Z test_small_constant (__main__.TestTEFuserStatic) ... ok (0.073s) 2023-01-11T21:40:24.0659183Z test_sub_gt_and (__main__.TestTEFuserStatic) ... ok (0.059s) 2023-01-11T21:40:24.0659600Z test_sum_dim (__main__.TestTEFuserStatic) ... ok (0.121s) 2023-01-11T21:40:24.0659995Z test_sum_keepdim_cast (__main__.TestTEFuserStatic) ... ok (0.064s) 2023-01-11T21:40:24.0660382Z test_sum_simple (__main__.TestTEFuserStatic) ... ok (0.058s) 2023-01-11T21:40:24.0660776Z test_superslomo (__main__.TestTEFuserStatic) ... ok (0.620s) 2023-01-11T21:40:24.0661178Z test_tensor_scalar_ops (__main__.TestTEFuserStatic) ... ok (0.114s) 2023-01-11T21:40:24.0661588Z test_ternary_norm_ops (__main__.TestTEFuserStatic) ... ok (0.217s) 2023-01-11T21:40:24.0661976Z test_ternary_ops (__main__.TestTEFuserStatic) ... ok (0.259s) 2023-01-11T21:40:24.0662496Z test_threshold (__main__.TestTEFuserStatic) ... ok (0.062s) 2023-01-11T21:40:24.0662875Z test_to_device (__main__.TestTEFuserStatic) ... ok (0.049s) 2023-01-11T21:40:24.0663244Z test_to_dtype (__main__.TestTEFuserStatic) ... ok (0.103s) 2023-01-11T21:40:24.0663735Z test_torch_to (__main__.TestTEFuserStatic) ... ok (0.850s) 2023-01-11T21:40:24.0664117Z test_type_as_cat (__main__.TestTEFuserStatic) ... ok (1.410s) 2023-01-11T21:40:24.0664477Z test_typecheck (__main__.TestTEFuserStatic) ... ok (0.049s) 2023-01-11T21:40:24.0664979Z test_unary_ops (__main__.TestTEFuserStatic) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s) 2023-01-11T21:40:24.0665473Z test_unrolled_cat (__main__.TestTEFuserStatic) ... ok (0.028s) 2023-01-11T21:40:24.0665887Z test_unsqueeze_size_calculation (__main__.TestTEFuserStatic) ... ok (0.084s) 2023-01-11T21:40:24.0666293Z test_unsqueeze_var_dim (__main__.TestTEFuserStatic) ... ok (0.224s) 2023-01-11T21:40:24.0666712Z test_unsupported_dtypes (__main__.TestTEFuserStatic) ... 
ok (0.036s) 2023-01-11T21:40:24.0667122Z test_where_and_typing (__main__.TestTEFuserStatic) ... ok (0.081s) 2023-01-11T21:40:24.0667604Z test_where_ops (__main__.TestTEFuserStatic) ... ok (0.613s) 2023-01-11T21:40:24.0668378Z test_with_strict_fusion (__main__.TestTEFuserStatic) ... /opt/conda/lib/python3.10/site-packages/torch/jit/__init__.py:216: UserWarning: Only works in script mode 2023-01-11T21:40:24.0668906Z warnings.warn("Only works in script mode") 2023-01-11T21:40:24.0669210Z ok (0.069s) 2023-01-11T21:40:24.0669531Z test_zero_element_tensors (__main__.TestTEFuserStatic) ... ok (0.045s) 2023-01-11T21:40:24.0669771Z 2023-01-11T21:40:24.0670085Z ---------------------------------------------------------------------- 2023-01-11T21:40:24.0670432Z Ran 207 tests in 93.405s 2023-01-11T21:40:24.0670605Z 2023-01-11T21:40:24.0670691Z OK (skipped=39) 2023-01-11T21:40:24.0670846Z 2023-01-11T21:40:24.0670968Z Generating XML reports... 2023-01-11T21:40:24.0671656Z Generated XML report: test-reports/python-unittest/test_jit_fuser_te/TEST-jit.test_fuser_common.TestFuserCommon-20230111213849.xml 2023-01-11T21:40:24.0672597Z Generated XML report: test-reports/python-unittest/test_jit_fuser_te/TEST-TestTEFuserDynamic-20230111213849.xml 2023-01-11T21:40:24.0673432Z Generated XML report: test-reports/python-unittest/test_jit_fuser_te/TEST-TestTEFuserStatic-20230111213849.xml 2023-01-11T21:40:24.0673804Z 2023-01-11T21:40:24.0674262Z ##[endgroup] 2023-01-11T21:40:24.0674866Z FINISHED PRINTING LOG FILE of test_jit_fuser_te (/var/lib/jenkins/workspace/test/test-reports/test_jit_fuser_te_bqdb0ty0) 2023-01-11T21:40:24.0675204Z 2023-01-11T21:40:25.9534010Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:40:26.0199169Z Ignoring disabled issues: [] 2023-01-11T21:40:26.0355007Z Running test_jit_llga_fuser ... [2023-01-11 21:40:26.035226] 2023-01-11T21:40:26.0357282Z Executing ['/opt/conda/bin/python', '-bb', 'test_jit_llga_fuser.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:40:26.035478] 2023-01-11T21:40:29.8454523Z 2023-01-11T21:40:29.8455134Z Expand the folded group to see the log file of test_jit_llga_fuser 2023-01-11T21:40:29.8456167Z ##[group]PRINTING LOG FILE of test_jit_llga_fuser (/var/lib/jenkins/workspace/test/test-reports/test_jit_llga_fuser_clt8hj27) 2023-01-11T21:40:29.8456507Z 2023-01-11T21:40:29.8456949Z Running tests... 2023-01-11T21:40:29.8457658Z ---------------------------------------------------------------------- 2023-01-11T21:40:29.8458509Z test_dynamo_aot_ts_onednn (__main__.TestDynamoAOT) ... Test results will be stored in test-reports/python-unittest/test_jit_llga_fuser 2023-01-11T21:40:29.8459152Z skip: Enable when integration with dynamo aot_autograd is more stable (0.001s) 2023-01-11T21:40:29.8464318Z test_context_manager (__main__.TestEnableDisableLlgaFuser) ... ok (0.064s) 2023-01-11T21:40:29.8464823Z test_vision_alexnet_bfloat16 (__main__.TestModel) ... ok (0.035s) 2023-01-11T21:40:29.8465328Z test_vision_alexnet_float32 (__main__.TestModel) ... ok (0.031s) 2023-01-11T21:40:29.8465836Z test_vision_densenet121_bfloat16 (__main__.TestModel) ... ok (0.030s) 2023-01-11T21:40:29.8466559Z test_vision_densenet121_float32 (__main__.TestModel) ... ok (0.036s) 2023-01-11T21:40:29.8467070Z test_vision_densenet161_bfloat16 (__main__.TestModel) ... ok (0.032s) 2023-01-11T21:40:29.8467568Z test_vision_densenet161_float32 (__main__.TestModel) ... 
ok (0.030s) 2023-01-11T21:40:29.8467850Z test_vision_densenet169_bfloat16 (__main__.TestModel) ... ok (0.031s) 2023-01-11T21:40:29.8468116Z test_vision_densenet169_float32 (__main__.TestModel) ... ok (0.030s) 2023-01-11T21:40:29.8468394Z test_vision_densenet201_bfloat16 (__main__.TestModel) ... ok (0.031s) 2023-01-11T21:40:29.8468672Z test_vision_densenet201_float32 (__main__.TestModel) ... ok (0.029s) 2023-01-11T21:40:29.8468945Z test_vision_efficientnet_b0_bfloat16 (__main__.TestModel) ... ok (0.032s) 2023-01-11T21:40:29.8469239Z test_vision_efficientnet_b0_float32 (__main__.TestModel) ... ok (0.031s) 2023-01-11T21:40:29.8469530Z test_vision_efficientnet_b1_bfloat16 (__main__.TestModel) ... ok (0.031s) 2023-01-11T21:40:29.8469913Z test_vision_efficientnet_b1_float32 (__main__.TestModel) ... ok (0.031s) 2023-01-11T21:40:29.8470192Z test_vision_efficientnet_b2_bfloat16 (__main__.TestModel) ... ok (0.031s) 2023-01-11T21:40:29.8470487Z test_vision_efficientnet_b2_float32 (__main__.TestModel) ... ok (0.030s) 2023-01-11T21:40:29.8470776Z test_vision_efficientnet_b3_bfloat16 (__main__.TestModel) ... ok (0.032s) 2023-01-11T21:40:29.8471051Z test_vision_efficientnet_b3_float32 (__main__.TestModel) ... ok (0.032s) 2023-01-11T21:40:29.8471343Z test_vision_efficientnet_b4_bfloat16 (__main__.TestModel) ... ok (0.031s) 2023-01-11T21:40:29.8471633Z test_vision_efficientnet_b4_float32 (__main__.TestModel) ... ok (0.030s) 2023-01-11T21:40:29.8471919Z test_vision_efficientnet_b5_bfloat16 (__main__.TestModel) ... ok (0.031s) 2023-01-11T21:40:29.8504421Z test_vision_efficientnet_b5_float32 (__main__.TestModel) ... ok (0.030s) 2023-01-11T21:40:29.8504946Z test_vision_efficientnet_b6_bfloat16 (__main__.TestModel) ... ok (0.030s) 2023-01-11T21:40:29.8505466Z test_vision_efficientnet_b6_float32 (__main__.TestModel) ... ok (0.031s) 2023-01-11T21:40:29.8506013Z test_vision_efficientnet_b7_bfloat16 (__main__.TestModel) ... ok (0.032s) 2023-01-11T21:40:29.8506495Z test_vision_efficientnet_b7_float32 (__main__.TestModel) ... ok (0.031s) 2023-01-11T21:40:29.8507123Z test_vision_googlenet_bfloat16 (__main__.TestModel) ... ok (0.031s) 2023-01-11T21:40:29.8507618Z test_vision_googlenet_float32 (__main__.TestModel) ... ok (0.031s) 2023-01-11T21:40:29.8508114Z test_vision_mnasnet1_0_bfloat16 (__main__.TestModel) ... ok (0.031s) 2023-01-11T21:40:29.8508574Z test_vision_mnasnet1_0_float32 (__main__.TestModel) ... ok (0.030s) 2023-01-11T21:40:29.8509030Z test_vision_mobilenet_v2_bfloat16 (__main__.TestModel) ... ok (0.032s) 2023-01-11T21:40:29.8509521Z test_vision_mobilenet_v2_float32 (__main__.TestModel) ... ok (0.033s) 2023-01-11T21:40:29.8510003Z test_vision_mobilenet_v3_large_bfloat16 (__main__.TestModel) ... ok (0.032s) 2023-01-11T21:40:29.8510499Z test_vision_mobilenet_v3_large_float32 (__main__.TestModel) ... ok (0.032s) 2023-01-11T21:40:29.8510985Z test_vision_regnet_y_400mf_bfloat16 (__main__.TestModel) ... ok (0.031s) 2023-01-11T21:40:29.8511460Z test_vision_regnet_y_400mf_float32 (__main__.TestModel) ... ok (0.031s) 2023-01-11T21:40:29.8511930Z test_vision_resnet50_bfloat16 (__main__.TestModel) ... ok (0.032s) 2023-01-11T21:40:29.8512383Z test_vision_resnet50_float32 (__main__.TestModel) ... ok (0.040s) 2023-01-11T21:40:29.8512872Z test_vision_resnext101_32x8d_bfloat16 (__main__.TestModel) ... ok (0.033s) 2023-01-11T21:40:29.8513361Z test_vision_resnext101_32x8d_float32 (__main__.TestModel) ... ok (0.031s) 2023-01-11T21:40:29.8513918Z test_vision_resnext50_32x4d_bfloat16 (__main__.TestModel) ... 
ok (0.031s) 2023-01-11T21:40:29.8514392Z test_vision_resnext50_32x4d_float32 (__main__.TestModel) ... ok (0.035s) 2023-01-11T21:40:29.8514873Z test_vision_shufflenet_v2_x1_0_bfloat16 (__main__.TestModel) ... ok (0.035s) 2023-01-11T21:40:29.8515572Z test_vision_shufflenet_v2_x1_0_float32 (__main__.TestModel) ... ok (0.034s) 2023-01-11T21:40:29.8516044Z test_vision_squeezenet1_0_bfloat16 (__main__.TestModel) ... ok (0.034s) 2023-01-11T21:40:29.8516533Z test_vision_squeezenet1_0_float32 (__main__.TestModel) ... ok (0.035s) 2023-01-11T21:40:29.8517015Z test_vision_vgg16_bfloat16 (__main__.TestModel) ... ok (0.035s) 2023-01-11T21:40:29.8517447Z test_vision_vgg16_float32 (__main__.TestModel) ... ok (0.036s) 2023-01-11T21:40:29.8517925Z test_vision_wide_resnet50_2_bfloat16 (__main__.TestModel) ... ok (0.036s) 2023-01-11T21:40:29.8518398Z test_vision_wide_resnet50_2_float32 (__main__.TestModel) ... ok (0.033s) 2023-01-11T21:40:29.8518673Z 2023-01-11T21:40:29.8519104Z ---------------------------------------------------------------------- 2023-01-11T21:40:29.8519496Z Ran 52 tests in 1.677s 2023-01-11T21:40:29.8519693Z 2023-01-11T21:40:29.8519821Z OK (skipped=1) 2023-01-11T21:40:29.8520040Z 2023-01-11T21:40:29.8520385Z Generating XML reports... 2023-01-11T21:40:29.8521196Z Generated XML report: test-reports/python-unittest/test_jit_llga_fuser/TEST-TestEnableDisableLlgaFuser-20230111214027.xml 2023-01-11T21:40:29.8522144Z Generated XML report: test-reports/python-unittest/test_jit_llga_fuser/TEST-TestModel-20230111214027.xml 2023-01-11T21:40:29.8523033Z Generated XML report: test-reports/python-unittest/test_jit_llga_fuser/TEST-TestDynamoAOT-20230111214027.xml 2023-01-11T21:40:29.8523419Z 2023-01-11T21:40:29.8523855Z ##[endgroup] 2023-01-11T21:40:29.8524566Z FINISHED PRINTING LOG FILE of test_jit_llga_fuser (/var/lib/jenkins/workspace/test/test-reports/test_jit_llga_fuser_clt8hj27) 2023-01-11T21:40:29.8524957Z 2023-01-11T21:40:31.7634119Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:40:31.8279723Z Ignoring disabled issues: [] 2023-01-11T21:40:31.8434833Z Running test_jiterator ... [2023-01-11 21:40:31.843177] 2023-01-11T21:40:31.8436543Z Executing ['/opt/conda/bin/python', '-bb', 'test_jiterator.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:40:31.843428] 2023-01-11T21:40:33.7828589Z 2023-01-11T21:40:33.7829156Z Expand the folded group to see the log file of test_jiterator 2023-01-11T21:40:33.7830344Z ##[group]PRINTING LOG FILE of test_jiterator (/var/lib/jenkins/workspace/test/test-reports/test_jiterator_ol3snmf8) 2023-01-11T21:40:33.7830916Z CUDA not available, skipping tests 2023-01-11T21:40:33.7831160Z 2023-01-11T21:40:33.7831273Z Running tests... 2023-01-11T21:40:33.7831769Z ---------------------------------------------------------------------- 2023-01-11T21:40:33.7831943Z 2023-01-11T21:40:33.7832141Z ---------------------------------------------------------------------- 2023-01-11T21:40:33.7832381Z Ran 0 tests in 0.000s 2023-01-11T21:40:33.7832481Z 2023-01-11T21:40:33.7832542Z OK 2023-01-11T21:40:33.7832633Z 2023-01-11T21:40:33.7832719Z Generating XML reports... 
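Editor's note (not part of the captured log): the TracerWarning printed in the test_jit_fuser_te group above ("torch.tensor results are registered as constants in the trace") describes torch.jit.trace baking tensors created inside the traced function into the graph as constants. A minimal sketch reproducing that behavior, assuming any recent PyTorch build; the function name f is illustrative only:

import torch

def f(x):
    r = torch.tensor(4)  # created inside the traced function -> recorded as a constant
    return x + r

traced = torch.jit.trace(f, torch.randn(2))  # emits the TracerWarning seen in the log
print(traced.graph)  # the created tensor appears as a constant node in the graph

As the warning text itself notes, this is harmless when the value really is the same on every call; otherwise the usual fix is to pass the value in as an argument (or register it as a buffer) so it is traced as an input rather than folded into the constant pool.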
2023-01-11T21:40:33.7833046Z Test results will be stored in test-reports/python-unittest/test_jiterator 2023-01-11T21:40:33.7833244Z 2023-01-11T21:40:33.7833454Z ##[endgroup] 2023-01-11T21:40:33.7833912Z FINISHED PRINTING LOG FILE of test_jiterator (/var/lib/jenkins/workspace/test/test-reports/test_jiterator_ol3snmf8) 2023-01-11T21:40:33.7834129Z 2023-01-11T21:40:35.6694321Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:40:35.7354835Z Ignoring disabled issues: [] 2023-01-11T21:40:35.7512944Z Running test_legacy_vmap ... [2023-01-11 21:40:35.750954] 2023-01-11T21:40:35.7513819Z Executing ['/opt/conda/bin/python', '-bb', 'test_legacy_vmap.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:40:35.751192] 2023-01-11T21:40:39.3928791Z 2023-01-11T21:40:39.3935011Z Expand the folded group to see the log file of test_legacy_vmap 2023-01-11T21:40:39.3936185Z ##[group]PRINTING LOG FILE of test_legacy_vmap (/var/lib/jenkins/workspace/test/test-reports/test_legacy_vmap_myjl1fm9) 2023-01-11T21:40:39.3936652Z 2023-01-11T21:40:39.3936784Z Running tests... 2023-01-11T21:40:39.3937711Z ---------------------------------------------------------------------- 2023-01-11T21:40:39.3938418Z Test results will be stored in test-reports/python-unittest/test_legacy_vmap 2023-01-11T21:40:39.3939221Z test_accepts_nested_inputs (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:377: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3939704Z out = vmap(lambda z: z[0] + z[1])((x, y)) 2023-01-11T21:40:39.3940023Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:379: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3940355Z out = vmap(lambda z: z[0] + z[1], in_dims=(0,))((x, y)) 2023-01-11T21:40:39.3940696Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:381: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3941019Z out = vmap(lambda z: z[0] + z[1], in_dims=((0, 0),))((x, y)) 2023-01-11T21:40:39.3941430Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:384: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3941746Z out = vmap(lambda z: z[0] + z[1])([x, y]) 2023-01-11T21:40:39.3942071Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:386: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3942573Z out = vmap(lambda z: z[0] + z[1], in_dims=(0,))([x, y]) 2023-01-11T21:40:39.3942913Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:388: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3943242Z out = vmap(lambda z: z[0] + z[1], in_dims=([0, 0],))([x, y]) 2023-01-11T21:40:39.3943563Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:391: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3943958Z out = vmap(lambda z: z['x'] + z['y'])({'x': x, 'y': y}) 2023-01-11T21:40:39.3944299Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:393: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3944710Z out = vmap(lambda z: z['x'] + z['y'], in_dims=(0,))({'x': x, 'y': y}) 2023-01-11T21:40:39.3945045Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:395: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 
2023-01-11T21:40:39.3945457Z out = vmap(lambda z: z['x'] + z['y'], in_dims=({'x': 0, 'y': 0},))({'x': x, 'y': y}) 2023-01-11T21:40:39.3945810Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:399: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3946215Z out_fn = vmap(lambda z: z['x'][0] + z['x'][1][0] + z['y'][0] + z['y'][1]) 2023-01-11T21:40:39.3946411Z ok (0.005s) 2023-01-11T21:40:39.3946783Z test_backward_unsupported_interaction (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:749: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3947165Z vmap(backward_on_vmapped_tensor)(x) 2023-01-11T21:40:39.3947483Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:755: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3947805Z vmap(backward_with_vmapped_grad)(x, grad) 2023-01-11T21:40:39.3948138Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:761: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3948461Z vmap(completely_unrelated_backward)(y) 2023-01-11T21:40:39.3948642Z ok (0.005s) 2023-01-11T21:40:39.3948994Z test_batched_gradient_basic (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:793: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3949353Z jacobian = vmap(vjp_mul)(batched_v) 2023-01-11T21:40:39.3949530Z ok (0.001s) 2023-01-11T21:40:39.3949880Z test_constant_function (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:64: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3950326Z output = vmap(lambda x: torch.tensor(3.14))(torch.ones(3)) 2023-01-11T21:40:39.3950541Z ok (0.001s) 2023-01-11T21:40:39.3950887Z test_different_map_dim_size_raises (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:42: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3951243Z vmap(torch.mul)(x, y) 2023-01-11T21:40:39.3951559Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:44: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3951871Z vmap(lambda z: z[0] + z[1], in_dims=((0, 0),))((x, y)) 2023-01-11T21:40:39.3952204Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:46: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3952613Z vmap(lambda z: z['x'] + z['y'], in_dims=({'x': 0, 'y': 0},))({'x': x, 'y': y}) 2023-01-11T21:40:39.3952827Z ok (0.001s) 2023-01-11T21:40:39.3953204Z test_fallback_atan2 (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:555: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3953552Z result = vmap(op, (2, 0))(x, y) 2023-01-11T21:40:39.3953938Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:561: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3954242Z result = vmap(vmap(op), (2, 0))(x, y) 2023-01-11T21:40:39.3954643Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:567: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3954959Z result = vmap(vmap(vmap(op)))(x, y) 2023-01-11T21:40:39.3955151Z ok (0.096s) 2023-01-11T21:40:39.3955375Z test_fallback_does_not_warn_by_default (__main__.TestVmapAPI) ... 
ok (0.001s) 2023-01-11T21:40:39.3955804Z test_fallback_masked_fill (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:583: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3956207Z result = vmap(torch.index_add, (0, None, None, 0))(x, dim, index, values) 2023-01-11T21:40:39.3956578Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:583: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3956918Z result = vmap(torch.index_add, (0, None, None, 0))(x, dim, index, values) 2023-01-11T21:40:39.3957147Z ok (0.075s) 2023-01-11T21:40:39.3957505Z test_fallback_multiple_returns (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:601: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3957855Z result = vmap(torch.var_mean)(tensor) 2023-01-11T21:40:39.3958186Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:607: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3958511Z result = vmap(vmap(torch.var_mean))(tensor) 2023-01-11T21:40:39.3958850Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:613: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3959162Z result = vmap(vmap(vmap(torch.var_mean)))(tensor) 2023-01-11T21:40:39.3959369Z ok (0.068s) 2023-01-11T21:40:39.3959617Z test_fallback_warns_when_warnings_are_enabled (__main__.TestVmapAPI) ... ok (0.001s) 2023-01-11T21:40:39.3959914Z test_fallback_with_undefined_grad (__main__.TestVmapAPI) ... ok (0.003s) 2023-01-11T21:40:39.3960330Z test_fallback_zero_dim (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:526: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3960672Z vmap(op, (0, None))(x, y) 2023-01-11T21:40:39.3960990Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:528: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3961280Z vmap(op, (None, 0))(y, x) 2023-01-11T21:40:39.3961643Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:530: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3961938Z vmap(op)(x, x) 2023-01-11T21:40:39.3962233Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:535: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3962529Z vmap(op, (0, None))(x, y) 2023-01-11T21:40:39.3962839Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:537: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3963137Z vmap(op, (None, 0))(y, x) 2023-01-11T21:40:39.3963433Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:539: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3963732Z vmap(op)(x, x) 2023-01-11T21:40:39.3963902Z ok (0.015s) 2023-01-11T21:40:39.3964361Z test_func_with_no_inputs (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:58: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3968310Z vmap(foo)() 2023-01-11T21:40:39.3969109Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:61: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 
2023-01-11T21:40:39.3969847Z vmap(bar)() 2023-01-11T21:40:39.3970246Z ok (0.001s) 2023-01-11T21:40:39.3971120Z test_functools_partial (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:799: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3972080Z result = vmap(functools.partial(torch.mul, x))(y) 2023-01-11T21:40:39.3972567Z ok (0.001s) 2023-01-11T21:40:39.3973439Z test_grad_unsupported_interaction (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:774: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3974367Z vmap(output_to_grad_is_vmapped)(input_tensor) 2023-01-11T21:40:39.3975242Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:782: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3976083Z vmap(input_to_grad_is_vmapped)(input_tensor) 2023-01-11T21:40:39.3976538Z ok (0.003s) 2023-01-11T21:40:39.3977475Z test_in_dim_not_in_tensor_err_msg (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:464: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3978356Z vmap(foo)(torch.randn([])) 2023-01-11T21:40:39.3979144Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:466: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3979933Z vmap(foo, in_dims=(0,))(torch.randn([])) 2023-01-11T21:40:39.3980734Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:468: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3981648Z vmap(foo, in_dims=(-1,))(x) 2023-01-11T21:40:39.3982663Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:470: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3983432Z vmap(foo, in_dims=(2,))(y) 2023-01-11T21:40:39.3984192Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:472: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3984982Z vmap(lambda z: z[0] + z[1], in_dims=([3, 0],))([x, y]) 2023-01-11T21:40:39.3985774Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:474: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3986564Z vmap(foo, in_dims=(0,))(torch.randn(2, 3)) 2023-01-11T21:40:39.3987376Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:475: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3988162Z vmap(foo, in_dims=(1,))(torch.randn(2, 3)) 2023-01-11T21:40:39.3988606Z ok (0.002s) 2023-01-11T21:40:39.3989515Z test_in_dims_wrong_type_err_msg (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:408: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3990587Z vmap(torch.mul, [0, 0])(x, y) 2023-01-11T21:40:39.3991372Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:410: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3992120Z vmap(torch.mul, set({0, 0}))(x, y) 2023-01-11T21:40:39.3992936Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:412: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3993895Z vmap(torch.mul, 'lol')(x, y) 2023-01-11T21:40:39.3994668Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:414: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 
2023-01-11T21:40:39.3995448Z vmap(lambda z: z[0] + z[1], in_dims=[0, 0])([x, y]) 2023-01-11T21:40:39.3996242Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:416: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3997137Z vmap(torch.mul, (0, 0))(x, y) 2023-01-11T21:40:39.3997558Z ok (0.001s) 2023-01-11T21:40:39.3998469Z test_inplace_fallback_nary_different_levels (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:708: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.3999402Z vmap(op, in_dims=(0, None))(x, y) 2023-01-11T21:40:39.4002218Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:714: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4003291Z vmap(vmap(op, in_dims=(0, None)))(x, y) 2023-01-11T21:40:39.4004110Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:722: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4005036Z vmap(op, in_dims=(None, 0))(x, y) 2023-01-11T21:40:39.4005837Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:727: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4006701Z vmap(vmap(op, in_dims=(0, None)), in_dims=(None, 0))(x, y) 2023-01-11T21:40:39.4007509Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:732: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4008302Z vmap(vmap(op, in_dims=(0, None)), in_dims=(None, 1))(x, y) 2023-01-11T21:40:39.4009103Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:737: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4009868Z vmap(vmap(op, in_dims=(None, 0)))(x, y) 2023-01-11T21:40:39.4010296Z ok (0.013s) 2023-01-11T21:40:39.4011142Z test_inplace_fallback_nary_same_levels (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:673: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4012008Z vmap(op, (2, 0))(x, y) 2023-01-11T21:40:39.4012713Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:681: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4013392Z vmap(vmap(op), (2, 0))(x, y) 2023-01-11T21:40:39.4014097Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:689: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4014810Z result = vmap(vmap(vmap(op)))(x, y) 2023-01-11T21:40:39.4015204Z ok (0.077s) 2023-01-11T21:40:39.4016068Z test_inplace_fallback_unary (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:632: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4016934Z result = vmap(op)(x) 2023-01-11T21:40:39.4017708Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:639: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4018508Z result = vmap(op, out_dims=(1,))(x) 2023-01-11T21:40:39.4019312Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:646: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4020201Z result = vmap(vmap(op))(x) 2023-01-11T21:40:39.4020960Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:653: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 
2023-01-11T21:40:39.4021717Z result = vmap(vmap(vmap(op)))(x) 2023-01-11T21:40:39.4022154Z ok (0.315s) 2023-01-11T21:40:39.4023241Z test_integer_in_dim_but_not_tensor_input_err_msg (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:447: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4023831Z vmap(torch.sum)(x, 0) 2023-01-11T21:40:39.4024243Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:449: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4024705Z vmap(torch.sum, (0, 0))(x, 0) 2023-01-11T21:40:39.4025146Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:451: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4025742Z vmap(lambda z: z[0] + z[1], in_dims=([0, 0],))([x, 1]) 2023-01-11T21:40:39.4026266Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:453: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4026724Z vmap(torch.sum, (0, None))(x, 0) 2023-01-11T21:40:39.4026986Z ok (0.002s) 2023-01-11T21:40:39.4027496Z test_multiple_inputs (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:79: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4028012Z output = vmap(torch.mul)(x, y) 2023-01-11T21:40:39.4028268Z ok (0.001s) 2023-01-11T21:40:39.4028768Z test_multiple_out_dims (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:221: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4029294Z result = vmap(foo, out_dims=(0, 1))(x) 2023-01-11T21:40:39.4029749Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:224: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4030359Z result = vmap(bar, out_dims=(-1, 0, 1, 2))(x, y) 2023-01-11T21:40:39.4030637Z ok (0.002s) 2023-01-11T21:40:39.4031139Z test_multiple_outputs (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:87: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4031629Z outputs = vmap(foo)(x) 2023-01-11T21:40:39.4031883Z ok (0.001s) 2023-01-11T21:40:39.4032411Z test_multiple_outputs_error_cases (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:107: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4032950Z vmap(returns_tuple_of_tensors)(x) 2023-01-11T21:40:39.4033430Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:112: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4033956Z vmap(returns_list_of_two_tensors)(x) 2023-01-11T21:40:39.4034438Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:114: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4034862Z vmap(returns_list_of_one_tensor)(x) 2023-01-11T21:40:39.4035148Z ok (0.001s) 2023-01-11T21:40:39.4035690Z test_nested_non_default_in_dims (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:338: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4036235Z result = vmap(vmap(vmap(torch.mul), (1, 0)), (1, 2))(x, y) 2023-01-11T21:40:39.4036526Z ok (0.001s) 2023-01-11T21:40:39.4037022Z test_nested_out_dims (__main__.TestVmapAPI) ... 
/var/lib/jenkins/workspace/test/test_legacy_vmap.py:237: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4037576Z result = vmap(lambda y: vmap(lambda x: x, out_dims=1)(y))(y) 2023-01-11T21:40:39.4038071Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:242: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4038714Z result = vmap(lambda y: vmap(lambda x: x, out_dims=1)(y), out_dims=1)(y) 2023-01-11T21:40:39.4039225Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:247: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4039866Z result = vmap(lambda y: vmap(lambda x: x, out_dims=-1)(y), out_dims=-1)(y) 2023-01-11T21:40:39.4040380Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:254: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4041002Z result = vmap(lambda y: vmap(lambda x: x * y, out_dims=1)(x), out_dims=-1)(y) 2023-01-11T21:40:39.4041320Z ok (0.003s) 2023-01-11T21:40:39.4041845Z test_nested_with_different_map_dim (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:128: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4042407Z output = vmap(lambda x: vmap(lambda y: x * y)(y))(x) 2023-01-11T21:40:39.4043011Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:133: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4043525Z output = vmap(lambda x: vmap(lambda y: vmap(lambda z: x * y * z)(z))(y))(x) 2023-01-11T21:40:39.4043831Z ok (0.002s) 2023-01-11T21:40:39.4044358Z test_nested_with_same_map_dim (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:119: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4044893Z output = vmap(vmap(torch.mul))(x, y) 2023-01-11T21:40:39.4045323Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:122: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4045754Z output = vmap(vmap(vmap(torch.mul)))(x, y) 2023-01-11T21:40:39.4046013Z ok (0.001s) 2023-01-11T21:40:39.4046480Z test_nn_module (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:805: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4046982Z result = vmap(model)(tensor) 2023-01-11T21:40:39.4047248Z ok (0.001s) 2023-01-11T21:40:39.4047768Z test_non_default_in_dims_out_dims (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:345: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4048355Z result = vmap(lambda x: x, in_dims=1, out_dims=1)(x) 2023-01-11T21:40:39.4048848Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:350: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4049332Z result = vmap(lambda x: x, in_dims=2, out_dims=1)(x) 2023-01-11T21:40:39.4049806Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:359: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4050234Z result = vmap(foo, in_dims=1, out_dims=1)(x) 2023-01-11T21:40:39.4050724Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:363: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 
2023-01-11T21:40:39.4051187Z result = vmap(foo, in_dims=2, out_dims=1)(x) 2023-01-11T21:40:39.4051681Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:368: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4052122Z result = vmap(vmap(foo, 1, 1), 1, 1)(x) 2023-01-11T21:40:39.4052403Z ok (0.002s) 2023-01-11T21:40:39.4052912Z test_non_tensor_output_raises (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:29: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4053455Z output = vmap(lambda x: 3.14)(torch.ones(3)) 2023-01-11T21:40:39.4053939Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:35: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4054366Z vmap(multiple_outputs)(torch.ones(3)) 2023-01-11T21:40:39.4054624Z ok (0.001s) 2023-01-11T21:40:39.4055177Z test_non_zero_in_dims (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:311: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4055701Z output = vmap(lambda x: x, (1,))(tensor) 2023-01-11T21:40:39.4056193Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:317: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4056645Z output = vmap(torch.mul, (0, 1))(x, y) 2023-01-11T21:40:39.4057141Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:319: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4057619Z output = vmap(torch.mul, (1, 0))(x, y) 2023-01-11T21:40:39.4057892Z ok (0.002s) 2023-01-11T21:40:39.4058380Z test_none_in_dims (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:327: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4058905Z output = vmap(torch.mul, (0, None))(x, y) 2023-01-11T21:40:39.4059496Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:332: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4059940Z output = vmap(torch.mul, (0, None))(x, 2) 2023-01-11T21:40:39.4060214Z ok (0.001s) 2023-01-11T21:40:39.4060740Z test_nonzero_out_dims (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:172: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4061267Z result = vmap(lambda x: x, out_dims=1)(tensor) 2023-01-11T21:40:39.4061744Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:178: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4062195Z result = vmap(lambda x: x, out_dims=2)(tensor) 2023-01-11T21:40:39.4062879Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:184: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4063493Z result = vmap(lambda x: x, out_dims=-1)(tensor) 2023-01-11T21:40:39.4063998Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:191: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4064515Z result = vmap(lambda x, y: (x, y), out_dims=2)(tensor, other) 2023-01-11T21:40:39.4065017Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:199: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 
2023-01-11T21:40:39.4065464Z result = vmap(lambda x: x, out_dims=2)(tensor) 2023-01-11T21:40:39.4065932Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:207: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4066390Z result = vmap(foo, out_dims=1)(x, y) 2023-01-11T21:40:39.4066641Z ok (0.004s) 2023-01-11T21:40:39.4067139Z test_noop_in_inner_vmap (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:140: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4067666Z output = vmap(lambda x: vmap(lambda y: x)(y))(x) 2023-01-11T21:40:39.4067951Z ok (0.001s) 2023-01-11T21:40:39.4068439Z test_not_enough_in_dims_err_msg (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:424: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4068940Z vmap(torch.mul, (0,))(x, y) 2023-01-11T21:40:39.4069394Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:426: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4069860Z vmap(torch.mul, (0, 0, 0))(x, y) 2023-01-11T21:40:39.4070310Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:428: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4070750Z vmap(lambda z: z[0] + z[1], in_dims=([0],))([x, y]) 2023-01-11T21:40:39.4071220Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:430: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4071800Z vmap(lambda z: z[0] + z[1], in_dims=((0, 0),))([x, y]) 2023-01-11T21:40:39.4072288Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:432: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4072718Z vmap(torch.mul, (0, 0))(x, y) 2023-01-11T21:40:39.4072981Z ok (0.001s) 2023-01-11T21:40:39.4073497Z test_out_dim_out_of_bounds_err_msg (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:303: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4074112Z vmap(lambda x: x, out_dims=3)(x) 2023-01-11T21:40:39.4074587Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:305: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4075123Z vmap(lambda x: x, out_dims=-4)(x) 2023-01-11T21:40:39.4075385Z ok (0.005s) 2023-01-11T21:40:39.4076037Z test_out_dims_and_num_outputs_mismatch_err_msg (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:286: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4076628Z vmap(lambda x: x, out_dims=(0, 0))(x) 2023-01-11T21:40:39.4077144Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:288: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4077722Z vmap(lambda x: (x, x, x), out_dims=(0, 0, 0, 0))(x) 2023-01-11T21:40:39.4078222Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:292: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4078613Z vmap(lambda x: (x, x), out_dims=(0,))(x) 2023-01-11T21:40:39.4079090Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:294: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 
2023-01-11T21:40:39.4079567Z vmap(lambda x: (x, x, x), out_dims=(0, 0))(x) 2023-01-11T21:40:39.4079871Z ok (0.001s) 2023-01-11T21:40:39.4080419Z test_out_dims_edge_case (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:264: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4080995Z expected = vmap(foo, out_dims=1)(tensor) 2023-01-11T21:40:39.4081498Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:265: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4081967Z result = vmap(foo, out_dims=(1,))(tensor) 2023-01-11T21:40:39.4082244Z ok (0.001s) 2023-01-11T21:40:39.4082789Z test_out_dims_must_be_int_or_tuple_of_int_err_msg (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:272: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4083457Z vmap(lambda x: x, out_dims='lol')(tensor) 2023-01-11T21:40:39.4083945Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:274: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4084500Z vmap(lambda x: x, out_dims=('lol',))(tensor) 2023-01-11T21:40:39.4084989Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:276: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4085449Z vmap(lambda x: x, out_dims=None)(tensor) 2023-01-11T21:40:39.4085938Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:278: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4086395Z vmap(lambda x: x, out_dims=(None,))(tensor) 2023-01-11T21:40:39.4086694Z ok (0.001s) 2023-01-11T21:40:39.4087183Z test_single_input (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:73: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4087687Z output = vmap(square)(x) 2023-01-11T21:40:39.4087964Z ok (0.001s) 2023-01-11T21:40:39.4088465Z test_unsupported_op_err_msg (__main__.TestVmapAPI) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:151: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4089054Z vmap(torch.ravel)(tensor) 2023-01-11T21:40:39.4089513Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:157: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4089978Z vmap(out_op)(tensor, tensor) 2023-01-11T21:40:39.4090445Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:162: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4090907Z vmap(lambda t: torch.atleast_1d([t]))(tensor) 2023-01-11T21:40:39.4091409Z /var/lib/jenkins/workspace/test/test_legacy_vmap.py:167: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4091871Z vmap(torch.Tensor.item)(tensor) 2023-01-11T21:40:39.4092115Z ok (0.010s) 2023-01-11T21:40:39.4092428Z test_T_numpy (__main__.TestVmapOperators) ... ok (0.005s) 2023-01-11T21:40:39.4092829Z test_as_strided (__main__.TestVmapOperators) ... ok (0.024s) 2023-01-11T21:40:39.4093306Z test_binary_pointwise_ops (__main__.TestVmapOperators) ... ok (0.112s) 2023-01-11T21:40:39.4093716Z test_bmm (__main__.TestVmapOperators) ... ok (0.035s) 2023-01-11T21:40:39.4094086Z test_cat (__main__.TestVmapOperators) ... ok (0.004s) 2023-01-11T21:40:39.4094444Z test_chunk (__main__.TestVmapOperators) ... 
ok (0.017s) 2023-01-11T21:40:39.4094817Z test_clamp (__main__.TestVmapOperators) ... ok (0.014s) 2023-01-11T21:40:39.4095183Z test_clone (__main__.TestVmapOperators) ... ok (0.014s) 2023-01-11T21:40:39.4095550Z test_comparison_ops (__main__.TestVmapOperators) ... ok (0.051s) 2023-01-11T21:40:39.4095943Z test_conj (__main__.TestVmapOperators) ... ok (0.007s) 2023-01-11T21:40:39.4096324Z test_contiguous (__main__.TestVmapOperators) ... ok (0.007s) 2023-01-11T21:40:39.4096730Z test_diagonal (__main__.TestVmapOperators) ... ok (0.004s) 2023-01-11T21:40:39.4097108Z test_dot (__main__.TestVmapOperators) ... ok (0.012s) 2023-01-11T21:40:39.4097510Z test_expand_as (__main__.TestVmapOperators) ... ok (0.006s) 2023-01-11T21:40:39.4097937Z test_fill_and_zero_inplace (__main__.TestVmapOperators) ... ok (0.014s) 2023-01-11T21:40:39.4098315Z test_imag (__main__.TestVmapOperators) ... ok (0.007s) 2023-01-11T21:40:39.4098688Z test_is_complex (__main__.TestVmapOperators) ... ok (0.001s) 2023-01-11T21:40:39.4099078Z test_is_contiguous (__main__.TestVmapOperators) ... ok (0.006s) 2023-01-11T21:40:39.4099452Z test_is_floating_point (__main__.TestVmapOperators) ... ok (0.001s) 2023-01-11T21:40:39.4099847Z test_mm (__main__.TestVmapOperators) ... ok (0.013s) 2023-01-11T21:40:39.4100237Z test_movedim (__main__.TestVmapOperators) ... ok (0.007s) 2023-01-11T21:40:39.4100605Z test_mv (__main__.TestVmapOperators) ... ok (0.012s) 2023-01-11T21:40:39.4100961Z test_narrow (__main__.TestVmapOperators) ... ok (0.003s) 2023-01-11T21:40:39.4101345Z test_new_empty (__main__.TestVmapOperators) ... ok (0.001s) 2023-01-11T21:40:39.4101760Z test_new_empty_strided (__main__.TestVmapOperators) ... ok (0.008s) 2023-01-11T21:40:39.4102160Z test_new_zeros (__main__.TestVmapOperators) ... ok (0.002s) 2023-01-11T21:40:39.4102758Z test_no_random_op_support (__main__.TestVmapOperators) ... ok (0.098s) 2023-01-11T21:40:39.4103154Z test_real (__main__.TestVmapOperators) ... ok (0.008s) 2023-01-11T21:40:39.4103527Z test_reshape (__main__.TestVmapOperators) ... ok (0.004s) 2023-01-11T21:40:39.4103922Z test_reshape_as (__main__.TestVmapOperators) ... ok (0.005s) 2023-01-11T21:40:39.4104304Z test_result_type (__main__.TestVmapOperators) ... ok (0.005s) 2023-01-11T21:40:39.4104698Z test_select (__main__.TestVmapOperators) ... ok (0.003s) 2023-01-11T21:40:39.4105051Z test_slice (__main__.TestVmapOperators) ... ok (0.003s) 2023-01-11T21:40:39.4105422Z test_split (__main__.TestVmapOperators) ... ok (0.036s) 2023-01-11T21:40:39.4105796Z test_squeeze (__main__.TestVmapOperators) ... ok (0.003s) 2023-01-11T21:40:39.4106155Z test_stack (__main__.TestVmapOperators) ... ok (0.004s) 2023-01-11T21:40:39.4106532Z test_stride (__main__.TestVmapOperators) ... ok (0.001s) 2023-01-11T21:40:39.4107023Z test_sum_dim (__main__.TestVmapOperators) ... ok (0.005s) 2023-01-11T21:40:39.4107372Z test_t (__main__.TestVmapOperators) ... ok (0.003s) 2023-01-11T21:40:39.4107760Z test_tensor_split (__main__.TestVmapOperators) ... ok (0.085s) 2023-01-11T21:40:39.4108160Z test_to (__main__.TestVmapOperators) ... ok (0.004s) 2023-01-11T21:40:39.4108548Z test_trace (__main__.TestVmapOperators) ... ok (0.003s) 2023-01-11T21:40:39.4108936Z test_transpose (__main__.TestVmapOperators) ... ok (0.004s) 2023-01-11T21:40:39.4109350Z test_unary_pointwise_ops (__main__.TestVmapOperators) ... ok (0.092s) 2023-01-11T21:40:39.4109766Z test_unbind (__main__.TestVmapOperators) ... ok (0.122s) 2023-01-11T21:40:39.4110129Z test_unfold (__main__.TestVmapOperators) ... 
ok (0.003s) 2023-01-11T21:40:39.4110520Z test_view (__main__.TestVmapOperators) ... ok (0.011s) 2023-01-11T21:40:39.4110906Z test_view_as (__main__.TestVmapOperators) ... ok (0.012s) 2023-01-11T21:40:39.4111397Z test_view_as_complex (__main__.TestVmapOperators) ... ok (0.060s) 2023-01-11T21:40:39.4111828Z test_view_as_real (__main__.TestVmapOperators) ... ok (0.008s) 2023-01-11T21:40:39.4112236Z test_vmap_fallback_check (__main__.TestVmapOperators) ... ok (0.001s) 2023-01-11T21:40:39.4112862Z test_vmap_fallback_check_ok (__main__.TestVmapOperators) ... /var/lib/jenkins/workspace/test/test_legacy_vmap.py:965: UserWarning: Please use torch.vmap instead of torch._vmap_internals.vmap. 2023-01-11T21:40:39.4113408Z vmap(op_using_fallback)(torch.rand(3)) 2023-01-11T21:40:39.4113766Z ok (0.001s) 2023-01-11T21:40:39.4113923Z 2023-01-11T21:40:39.4114257Z ---------------------------------------------------------------------- 2023-01-11T21:40:39.4114597Z Ran 95 tests in 1.702s 2023-01-11T21:40:39.4114763Z 2023-01-11T21:40:39.4114853Z OK 2023-01-11T21:40:39.4114985Z 2023-01-11T21:40:39.4115116Z Generating XML reports... 2023-01-11T21:40:39.4115726Z Generated XML report: test-reports/python-unittest/test_legacy_vmap/TEST-TestVmapAPI-20230111214037.xml 2023-01-11T21:40:39.4116509Z Generated XML report: test-reports/python-unittest/test_legacy_vmap/TEST-TestVmapOperators-20230111214037.xml 2023-01-11T21:40:39.4116865Z 2023-01-11T21:40:39.4117393Z ##[endgroup] 2023-01-11T21:40:39.4117966Z FINISHED PRINTING LOG FILE of test_legacy_vmap (/var/lib/jenkins/workspace/test/test-reports/test_legacy_vmap_myjl1fm9) 2023-01-11T21:40:39.4118287Z 2023-01-11T21:40:41.2773398Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:40:41.3420779Z Ignoring disabled issues: [] 2023-01-11T21:40:41.3577257Z Running test_license ... [2023-01-11 21:40:41.357400] 2023-01-11T21:40:41.3578953Z Executing ['/opt/conda/bin/python', '-bb', 'test_license.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:40:41.357662] 2023-01-11T21:40:43.2977694Z 2023-01-11T21:40:43.2978741Z Expand the folded group to see the log file of test_license 2023-01-11T21:40:43.2979804Z ##[group]PRINTING LOG FILE of test_license (/var/lib/jenkins/workspace/test/test-reports/test_license_x55h2e9f) 2023-01-11T21:40:43.2980193Z 2023-01-11T21:40:43.2980316Z Running tests... 2023-01-11T21:40:43.2980947Z ---------------------------------------------------------------------- 2023-01-11T21:40:43.2981604Z Test results will be stored in test-reports/python-unittest/test_license 2023-01-11T21:40:43.2982050Z test_distinfo_license (__main__.TestLicense) 2023-01-11T21:40:43.2982696Z If run when pytorch is installed via a wheel, the license will be in ... ok (0.228s) 2023-01-11T21:40:43.2983285Z test_license_for_wheel (__main__.TestLicense) ... skip: can only be run in a source tree (0.000s) 2023-01-11T21:40:43.2983600Z 2023-01-11T21:40:43.2983962Z ---------------------------------------------------------------------- 2023-01-11T21:40:43.2984382Z Ran 2 tests in 0.228s 2023-01-11T21:40:43.2984567Z 2023-01-11T21:40:43.2984681Z OK (skipped=1) 2023-01-11T21:40:43.2984860Z 2023-01-11T21:40:43.2985000Z Generating XML reports... 
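Editor's note: every case in the test_legacy_vmap group above emits the same UserWarning asking callers to move from the internal torch._vmap_internals.vmap to the public torch.vmap. A minimal sketch of that migration follows, with illustrative tensor shapes (not taken from the test file), covering the in_dims/out_dims patterns exercised by TestVmapAPI.

    import torch

    x = torch.randn(3, 5)
    y = torch.randn(5, 3)

    # Legacy (deprecated):
    #   from torch._vmap_internals import vmap
    #   vmap(torch.mul, (0, 1))(x, y)
    # Public API: map over dim 0 of x and dim 1 of y.
    out = torch.vmap(torch.mul, in_dims=(0, 1))(x, y)
    print(out.shape)  # torch.Size([3, 5])

    # out_dims chooses where the mapped dimension appears in the output,
    # mirroring the out_dims cases above.
    out = torch.vmap(lambda t: t, in_dims=1, out_dims=1)(x)
    print(out.shape)  # torch.Size([3, 5])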
2023-01-11T21:40:43.2985974Z Generated XML report: test-reports/python-unittest/test_license/TEST-TestLicense-20230111214042.xml 2023-01-11T21:40:43.2986368Z 2023-01-11T21:40:43.2986813Z ##[endgroup] 2023-01-11T21:40:43.2987486Z FINISHED PRINTING LOG FILE of test_license (/var/lib/jenkins/workspace/test/test-reports/test_license_x55h2e9f) 2023-01-11T21:40:43.2987833Z 2023-01-11T21:40:45.2171348Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:40:45.2844028Z Ignoring disabled issues: [] 2023-01-11T21:40:45.3001601Z Running test_logging ... [2023-01-11 21:40:45.299870] 2023-01-11T21:40:45.3003623Z Executing ['/opt/conda/bin/python', '-bb', 'test_logging.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:40:45.300140] 2023-01-11T21:40:49.0386088Z 2023-01-11T21:40:49.0388135Z Expand the folded group to see the log file of test_logging 2023-01-11T21:40:49.0389467Z ##[group]PRINTING LOG FILE of test_logging (/var/lib/jenkins/workspace/test/test-reports/test_logging_lvz_phm2) 2023-01-11T21:40:49.0389931Z 2023-01-11T21:40:49.0390061Z Running tests... 2023-01-11T21:40:49.0390712Z ---------------------------------------------------------------------- 2023-01-11T21:40:49.0391369Z Test results will be stored in test-reports/python-unittest/test_logging 2023-01-11T21:40:49.0391845Z testApiUsage (__main__.LoggingTest) 2023-01-11T21:40:49.0392330Z This test verifies that api usage logging is not triggered via static ... ok (2.060s) 2023-01-11T21:40:49.0392643Z 2023-01-11T21:40:49.0393008Z ---------------------------------------------------------------------- 2023-01-11T21:40:49.0393410Z Ran 1 test in 2.061s 2023-01-11T21:40:49.0393686Z 2023-01-11T21:40:49.0393793Z OK 2023-01-11T21:40:49.0393947Z 2023-01-11T21:40:49.0394089Z Generating XML reports... 2023-01-11T21:40:49.0394808Z Generated XML report: test-reports/python-unittest/test_logging/TEST-LoggingTest-20230111214046.xml 2023-01-11T21:40:49.0395202Z 2023-01-11T21:40:49.0395624Z ##[endgroup] 2023-01-11T21:40:49.0396318Z FINISHED PRINTING LOG FILE of test_logging (/var/lib/jenkins/workspace/test/test-reports/test_logging_lvz_phm2) 2023-01-11T21:40:49.0396682Z 2023-01-11T21:40:50.8934542Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:40:50.9592444Z Ignoring disabled issues: [] 2023-01-11T21:40:50.9747585Z Running test_maskedtensor ... [2023-01-11 21:40:50.974500] 2023-01-11T21:40:50.9749541Z Executing ['/opt/conda/bin/python', '-bb', 'test_maskedtensor.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:40:50.974746] 2023-01-11T21:40:53.9568528Z 2023-01-11T21:40:53.9569122Z Expand the folded group to see the log file of test_maskedtensor 2023-01-11T21:40:53.9570133Z ##[group]PRINTING LOG FILE of test_maskedtensor (/var/lib/jenkins/workspace/test/test-reports/test_maskedtensor_a6eyf7pr) 2023-01-11T21:40:53.9570522Z 2023-01-11T21:40:53.9570636Z Running tests... 2023-01-11T21:40:53.9571242Z ---------------------------------------------------------------------- 2023-01-11T21:40:53.9571970Z Test results will be stored in test-reports/python-unittest/test_maskedtensor 2023-01-11T21:40:53.9573563Z test_binary_fn_aten_add (__main__.TestBinary) ... /opt/conda/lib/python3.10/site-packages/torch/masked/maskedtensor/core.py:156: UserWarning: The PyTorch API of MaskedTensors is in prototype stage and will change in the near future. 
Please open a Github issue for features requests and see our documentation on the torch.masked module for further information about the project. 2023-01-11T21:40:53.9578495Z warnings.warn(("The PyTorch API of MaskedTensors is in prototype stage " 2023-01-11T21:40:53.9578857Z ok (0.004s) 2023-01-11T21:40:53.9579082Z test_binary_fn_aten_arctan2 (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9579405Z test_binary_fn_aten_atan2 (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9579680Z test_binary_fn_aten_bitwise_and (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9579966Z test_binary_fn_aten_bitwise_left_shift (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9580456Z test_binary_fn_aten_bitwise_or (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9580751Z test_binary_fn_aten_bitwise_right_shift (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9581075Z test_binary_fn_aten_bitwise_xor (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9581347Z test_binary_fn_aten_div (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9581665Z test_binary_fn_aten_divide (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9581931Z test_binary_fn_aten_eq (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9582188Z test_binary_fn_aten_floor_divide (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9582683Z test_binary_fn_aten_fmax (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9582948Z test_binary_fn_aten_fmin (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9583217Z test_binary_fn_aten_fmod (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9583586Z test_binary_fn_aten_ge (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9583854Z test_binary_fn_aten_greater (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9584165Z test_binary_fn_aten_greater_equal (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9584438Z test_binary_fn_aten_gt (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9584697Z test_binary_fn_aten_le (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9585004Z test_binary_fn_aten_less (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9585263Z test_binary_fn_aten_less_equal (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9585585Z test_binary_fn_aten_logaddexp (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9585869Z test_binary_fn_aten_logaddexp2 (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9586127Z test_binary_fn_aten_lt (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9586440Z test_binary_fn_aten_maximum (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9586710Z test_binary_fn_aten_minimum (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9586962Z test_binary_fn_aten_mul (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9587280Z test_binary_fn_aten_multiply (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9587547Z test_binary_fn_aten_ne (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9587862Z test_binary_fn_aten_nextafter (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9588126Z test_binary_fn_aten_not_equal (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9588398Z test_binary_fn_aten_remainder (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9588707Z test_binary_fn_aten_sub (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9588962Z test_binary_fn_aten_subtract (__main__.TestBinary) ... 
ok (0.001s) 2023-01-11T21:40:53.9589238Z test_binary_fn_aten_true_divide (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9589570Z test_inplace_binary_fn_aten_add_ (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9589856Z test_inplace_binary_fn_aten_arctan2_ (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9590176Z test_inplace_binary_fn_aten_atan2_ (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9590468Z test_inplace_binary_fn_aten_bitwise_and_ (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9590782Z test_inplace_binary_fn_aten_bitwise_left_shift_ (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9591109Z test_inplace_binary_fn_aten_bitwise_or_ (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9591419Z test_inplace_binary_fn_aten_bitwise_right_shift_ (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9591774Z test_inplace_binary_fn_aten_bitwise_xor_ (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9592062Z test_inplace_binary_fn_aten_div_ (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9592376Z test_inplace_binary_fn_aten_divide_ (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9592744Z test_inplace_binary_fn_aten_eq_ (__main__.TestBinary) ... ok (0.010s) 2023-01-11T21:40:53.9593042Z test_inplace_binary_fn_aten_floor_divide_ (__main__.TestBinary) ... ok (0.010s) 2023-01-11T21:40:53.9593372Z test_inplace_binary_fn_aten_fmod_ (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9593710Z test_inplace_binary_fn_aten_ge_ (__main__.TestBinary) ... ok (0.010s) 2023-01-11T21:40:53.9594046Z test_inplace_binary_fn_aten_greater_ (__main__.TestBinary) ... ok (0.010s) 2023-01-11T21:40:53.9594349Z test_inplace_binary_fn_aten_greater_equal_ (__main__.TestBinary) ... ok (0.010s) 2023-01-11T21:40:53.9594669Z test_inplace_binary_fn_aten_gt_ (__main__.TestBinary) ... ok (0.010s) 2023-01-11T21:40:53.9594955Z test_inplace_binary_fn_aten_le_ (__main__.TestBinary) ... ok (0.010s) 2023-01-11T21:40:53.9595235Z test_inplace_binary_fn_aten_less_ (__main__.TestBinary) ... ok (0.010s) 2023-01-11T21:40:53.9595607Z test_inplace_binary_fn_aten_less_equal_ (__main__.TestBinary) ... ok (0.010s) 2023-01-11T21:40:53.9595900Z test_inplace_binary_fn_aten_lt_ (__main__.TestBinary) ... ok (0.010s) 2023-01-11T21:40:53.9596233Z test_inplace_binary_fn_aten_mul_ (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9596512Z test_inplace_binary_fn_aten_multiply_ (__main__.TestBinary) ... ok (0.002s) 2023-01-11T21:40:53.9596797Z test_inplace_binary_fn_aten_ne_ (__main__.TestBinary) ... ok (0.009s) 2023-01-11T21:40:53.9597136Z test_inplace_binary_fn_aten_nextafter_ (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9597440Z test_inplace_binary_fn_aten_not_equal_ (__main__.TestBinary) ... ok (0.009s) 2023-01-11T21:40:53.9597766Z test_inplace_binary_fn_aten_remainder_ (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9598053Z test_inplace_binary_fn_aten_sub_ (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9598343Z test_inplace_binary_fn_aten_subtract_ (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9598686Z test_inplace_binary_fn_aten_true_divide_ (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9598975Z test_masks_match_fn_name_add (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9599301Z test_masks_match_fn_name_add_ (__main__.TestBinary) ... ok (0.001s) 2023-01-11T21:40:53.9599564Z test_all (__main__.TestReductions) ... 
ok (0.003s) 2023-01-11T21:40:53.9599798Z test_amax (__main__.TestReductions) ... ok (0.002s) 2023-01-11T21:40:53.9600547Z test_amax_grad (__main__.TestReductions) ... /opt/conda/lib/python3.10/site-packages/torch/masked/maskedtensor/core.py:161: UserWarning: It is not recommended to create a MaskedTensor with a tensor that requires_grad. To avoid this, you can use data.clone().detach() 2023-01-11T21:40:53.9601128Z warnings.warn("It is not recommended to create a MaskedTensor with a tensor that requires_grad. " 2023-01-11T21:40:53.9601382Z ok (0.002s) 2023-01-11T21:40:53.9601624Z test_amin (__main__.TestReductions) ... ok (0.002s) 2023-01-11T21:40:53.9601885Z test_amin_grad (__main__.TestReductions) ... ok (0.002s) 2023-01-11T21:40:53.9602152Z test_grad_dtype (__main__.TestReductions) ... ok (0.002s) 2023-01-11T21:40:53.9602770Z test_max_not_implemented (__main__.TestReductions) ... /opt/conda/lib/python3.10/site-packages/torch/masked/maskedtensor/core.py:299: UserWarning: max is not implemented in __torch_dispatch__ for MaskedTensor. 2023-01-11T21:40:53.9603399Z If you would like this operator to be supported, please file an issue for a feature request at https://github.com/pytorch/maskedtensor/issues with a minimal reproducible code snippet. 2023-01-11T21:40:53.9603927Z In the case that the semantics for the operator are not trivial, it would be appreciated to also include a proposal for the semantics. 2023-01-11T21:40:53.9604230Z warnings.warn(msg) 2023-01-11T21:40:53.9604393Z ok (0.001s) 2023-01-11T21:40:53.9604646Z test_mean (__main__.TestReductions) ... ok (0.002s) 2023-01-11T21:40:53.9604907Z test_mean_dim_grad (__main__.TestReductions) ... ok (0.003s) 2023-01-11T21:40:53.9605187Z test_mean_grad_case_1a (__main__.TestReductions) 2023-01-11T21:40:53.9606049Z values.requires_grad = True ... /opt/conda/lib/python3.10/site-packages/torch/masked/maskedtensor/core.py:156: UserWarning: The PyTorch API of MaskedTensors is in prototype stage and will change in the near future. Please open a Github issue for features requests and see our documentation on the torch.masked module for further information about the project. 2023-01-11T21:40:53.9606632Z warnings.warn(("The PyTorch API of MaskedTensors is in prototype stage " 2023-01-11T21:40:53.9607295Z /opt/conda/lib/python3.10/site-packages/torch/masked/maskedtensor/core.py:161: UserWarning: It is not recommended to create a MaskedTensor with a tensor that requires_grad. To avoid this, you can use data.clone().detach() 2023-01-11T21:40:53.9607771Z warnings.warn("It is not recommended to create a MaskedTensor with a tensor that requires_grad. " 2023-01-11T21:40:53.9608012Z ok (0.007s) 2023-01-11T21:40:53.9608259Z test_mean_grad_case_1b (__main__.TestReductions) 2023-01-11T21:40:53.9608501Z values.requires_grad = False ... ok (0.003s) 2023-01-11T21:40:53.9608726Z test_mean_grad_case_1c (__main__.TestReductions) 2023-01-11T21:40:53.9609582Z values.requires_grad = True ... /opt/conda/lib/python3.10/site-packages/torch/masked/maskedtensor/core.py:156: UserWarning: The PyTorch API of MaskedTensors is in prototype stage and will change in the near future. Please open a Github issue for features requests and see our documentation on the torch.masked module for further information about the project. 
2023-01-11T21:40:53.9610170Z warnings.warn(("The PyTorch API of MaskedTensors is in prototype stage " 2023-01-11T21:40:53.9610402Z ok (0.005s) 2023-01-11T21:40:53.9610592Z test_mean_grad_case_1d (__main__.TestReductions) 2023-01-11T21:40:53.9610831Z values.requires_grad = False ... ok (0.001s) 2023-01-11T21:40:53.9611066Z test_mean_grad_case_1e (__main__.TestReductions) 2023-01-11T21:40:53.9611679Z values.requires_grad = True ... /opt/conda/lib/python3.10/site-packages/torch/masked/maskedtensor/core.py:161: UserWarning: It is not recommended to create a MaskedTensor with a tensor that requires_grad. To avoid this, you can use data.clone().detach() 2023-01-11T21:40:53.9612176Z warnings.warn("It is not recommended to create a MaskedTensor with a tensor that requires_grad. " 2023-01-11T21:40:53.9612432Z ok (0.006s) 2023-01-11T21:40:53.9612633Z test_mean_grad_case_1f (__main__.TestReductions) 2023-01-11T21:40:53.9613480Z values.requires_grad = False ... /opt/conda/lib/python3.10/site-packages/torch/masked/maskedtensor/core.py:156: UserWarning: The PyTorch API of MaskedTensors is in prototype stage and will change in the near future. Please open a Github issue for features requests and see our documentation on the torch.masked module for further information about the project. 2023-01-11T21:40:53.9614067Z warnings.warn(("The PyTorch API of MaskedTensors is in prototype stage " 2023-01-11T21:40:53.9614304Z ok (0.001s) 2023-01-11T21:40:53.9614511Z test_prod (__main__.TestReductions) ... ok (0.002s) 2023-01-11T21:40:53.9615142Z test_prod_grad (__main__.TestReductions) ... /opt/conda/lib/python3.10/site-packages/torch/masked/maskedtensor/core.py:161: UserWarning: It is not recommended to create a MaskedTensor with a tensor that requires_grad. To avoid this, you can use data.clone().detach() 2023-01-11T21:40:53.9615657Z warnings.warn("It is not recommended to create a MaskedTensor with a tensor that requires_grad. " 2023-01-11T21:40:53.9615910Z ok (0.002s) 2023-01-11T21:40:53.9616103Z test_sum (__main__.TestReductions) ... ok (0.002s) 2023-01-11T21:40:53.9616350Z test_sum_grad (__main__.TestReductions) ... ok (0.002s) 2023-01-11T21:40:53.9616617Z test_inplace_unary_fn_aten_abs_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9616902Z test_inplace_unary_fn_aten_absolute_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9617172Z test_inplace_unary_fn_aten_acos_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9617489Z test_inplace_unary_fn_aten_acosh_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9617771Z test_inplace_unary_fn_aten_arccos_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9618041Z test_inplace_unary_fn_aten_arccosh_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9618324Z test_inplace_unary_fn_aten_arcsin_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9618604Z test_inplace_unary_fn_aten_arcsinh_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9618887Z test_inplace_unary_fn_aten_arctan_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9619153Z test_inplace_unary_fn_aten_arctanh_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9619432Z test_inplace_unary_fn_aten_asin_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9619751Z test_inplace_unary_fn_aten_asinh_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9620020Z test_inplace_unary_fn_aten_atan_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9620328Z test_inplace_unary_fn_aten_atanh_ (__main__.TestUnary) ... 
ok (0.001s) 2023-01-11T21:40:53.9620617Z test_inplace_unary_fn_aten_bitwise_not_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9620899Z test_inplace_unary_fn_aten_ceil_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9621160Z test_inplace_unary_fn_aten_clamp_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9621436Z test_inplace_unary_fn_aten_clip_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9621721Z test_inplace_unary_fn_aten_conj_physical_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9621996Z test_inplace_unary_fn_aten_cos_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9622269Z test_inplace_unary_fn_aten_cosh_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9622720Z test_inplace_unary_fn_aten_deg2rad_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9622999Z test_inplace_unary_fn_aten_digamma_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9623284Z test_inplace_unary_fn_aten_erf_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9623561Z test_inplace_unary_fn_aten_erfc_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9623840Z test_inplace_unary_fn_aten_erfinv_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9624103Z test_inplace_unary_fn_aten_exp2_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9624377Z test_inplace_unary_fn_aten_exp_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9624657Z test_inplace_unary_fn_aten_expm1_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9624925Z test_inplace_unary_fn_aten_fix_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9625200Z test_inplace_unary_fn_aten_floor_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9625475Z test_inplace_unary_fn_aten_frac_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9625747Z test_inplace_unary_fn_aten_i0_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9626013Z test_inplace_unary_fn_aten_lgamma_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9626289Z test_inplace_unary_fn_aten_log10_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9626565Z test_inplace_unary_fn_aten_log1p_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9626827Z test_inplace_unary_fn_aten_log2_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9627101Z test_inplace_unary_fn_aten_log_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9627377Z test_inplace_unary_fn_aten_logit_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9627646Z test_inplace_unary_fn_aten_nan_to_num_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9627924Z test_inplace_unary_fn_aten_neg_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9628200Z test_inplace_unary_fn_aten_negative_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9628480Z test_inplace_unary_fn_aten_pow_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9628824Z test_inplace_unary_fn_aten_rad2deg_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9629112Z test_inplace_unary_fn_aten_reciprocal_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9629399Z test_inplace_unary_fn_aten_round_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9629665Z test_inplace_unary_fn_aten_rsqrt_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9629939Z test_inplace_unary_fn_aten_sgn_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9630217Z test_inplace_unary_fn_aten_sigmoid_ (__main__.TestUnary) ... 
ok (0.001s) 2023-01-11T21:40:53.9630498Z test_inplace_unary_fn_aten_sign_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9630762Z test_inplace_unary_fn_aten_sin_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9631035Z test_inplace_unary_fn_aten_sinc_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9631309Z test_inplace_unary_fn_aten_sinh_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9631618Z test_inplace_unary_fn_aten_sqrt_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9631893Z test_inplace_unary_fn_aten_square_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9632169Z test_inplace_unary_fn_aten_tan_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9632438Z test_inplace_unary_fn_aten_tanh_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9632702Z test_inplace_unary_fn_aten_trunc_ (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9632963Z test_unary_fn_aten_abs (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9633297Z test_unary_fn_aten_absolute (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9633546Z test_unary_fn_aten_acos (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9633857Z test_unary_fn_aten_acosh (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9634120Z test_unary_fn_aten_angle (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9634369Z test_unary_fn_aten_arccos (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9634632Z test_unary_fn_aten_arccosh (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9634892Z test_unary_fn_aten_arcsin (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9635152Z test_unary_fn_aten_arcsinh (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9635395Z test_unary_fn_aten_arctan (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9635653Z test_unary_fn_aten_arctanh (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9635913Z test_unary_fn_aten_asin (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9636163Z test_unary_fn_aten_asinh (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9636419Z test_unary_fn_aten_atan (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9636675Z test_unary_fn_aten_atanh (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9636929Z test_unary_fn_aten_bitwise_not (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9637196Z test_unary_fn_aten_ceil (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9637455Z test_unary_fn_aten_clamp (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9637711Z test_unary_fn_aten_clip (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9637964Z test_unary_fn_aten_conj_physical (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9638229Z test_unary_fn_aten_cos (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9638486Z test_unary_fn_aten_cosh (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9638731Z test_unary_fn_aten_deg2rad (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9638997Z test_unary_fn_aten_digamma (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9639261Z test_unary_fn_aten_erf (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9639504Z test_unary_fn_aten_erfc (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9639762Z test_unary_fn_aten_erfinv (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9640071Z test_unary_fn_aten_exp (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9640328Z test_unary_fn_aten_exp2 (__main__.TestUnary) ... 
ok (0.001s) 2023-01-11T21:40:53.9640573Z test_unary_fn_aten_expm1 (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9640833Z test_unary_fn_aten_fix (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9641091Z test_unary_fn_aten_floor (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9641336Z test_unary_fn_aten_frac (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9641590Z test_unary_fn_aten_i0 (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9641845Z test_unary_fn_aten_isnan (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9642105Z test_unary_fn_aten_lgamma (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9642351Z test_unary_fn_aten_log (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9642608Z test_unary_fn_aten_log10 (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9642903Z test_unary_fn_aten_log1p (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9643148Z test_unary_fn_aten_log2 (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9643404Z test_unary_fn_aten_logit (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9643673Z test_unary_fn_aten_nan_to_num (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9643924Z test_unary_fn_aten_neg (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9644183Z test_unary_fn_aten_negative (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9644455Z test_unary_fn_aten_positive (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9644717Z test_unary_fn_aten_pow (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9644966Z test_unary_fn_aten_rad2deg (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9645237Z test_unary_fn_aten_reciprocal (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9645506Z test_unary_fn_aten_round (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9645755Z test_unary_fn_aten_rsqrt (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9646011Z test_unary_fn_aten_sgn (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9646277Z test_unary_fn_aten_sigmoid (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9646527Z test_unary_fn_aten_sign (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9646788Z test_unary_fn_aten_signbit (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9647047Z test_unary_fn_aten_sin (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9647299Z test_unary_fn_aten_sinc (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9647539Z test_unary_fn_aten_sinh (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9647787Z test_unary_fn_aten_sqrt (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9648044Z test_unary_fn_aten_square (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9648289Z test_unary_fn_aten_tan (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9648548Z test_unary_fn_aten_tanh (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9648804Z test_unary_fn_aten_trunc (__main__.TestUnary) ... ok (0.001s) 2023-01-11T21:40:53.9648950Z 2023-01-11T21:40:53.9649182Z ---------------------------------------------------------------------- 2023-01-11T21:40:53.9649427Z Ran 207 tests in 0.400s 2023-01-11T21:40:53.9649541Z 2023-01-11T21:40:53.9649603Z OK 2023-01-11T21:40:53.9649698Z 2023-01-11T21:40:53.9649785Z Generating XML reports... 
2023-01-11T21:40:53.9650187Z Generated XML report: test-reports/python-unittest/test_maskedtensor/TEST-TestBinary-20230111214053.xml 2023-01-11T21:40:53.9650699Z Generated XML report: test-reports/python-unittest/test_maskedtensor/TEST-TestReductions-20230111214053.xml 2023-01-11T21:40:53.9651196Z Generated XML report: test-reports/python-unittest/test_maskedtensor/TEST-TestUnary-20230111214053.xml 2023-01-11T21:40:53.9651411Z 2023-01-11T21:40:53.9651706Z ##[endgroup] 2023-01-11T21:40:53.9652108Z FINISHED PRINTING LOG FILE of test_maskedtensor (/var/lib/jenkins/workspace/test/test-reports/test_maskedtensor_a6eyf7pr) 2023-01-11T21:40:53.9652373Z 2023-01-11T21:40:55.9178702Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:40:56.0060127Z Ignoring disabled issues: [] 2023-01-11T21:40:56.0222171Z Running test_matmul_cuda ... [2023-01-11 21:40:56.021874] 2023-01-11T21:40:56.0225219Z Executing ['/opt/conda/bin/python', '-bb', 'test_matmul_cuda.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:40:56.022179] 2023-01-11T21:40:58.1514036Z 2023-01-11T21:40:58.1515679Z Expand the folded group to see the log file of test_matmul_cuda 2023-01-11T21:40:58.1516811Z ##[group]PRINTING LOG FILE of test_matmul_cuda (/var/lib/jenkins/workspace/test/test-reports/test_matmul_cuda_tbwz3bgv) 2023-01-11T21:40:58.1517198Z 2023-01-11T21:40:58.1517336Z Running tests... 2023-01-11T21:40:58.1517872Z ---------------------------------------------------------------------- 2023-01-11T21:40:58.1518049Z 2023-01-11T21:40:58.1518450Z ---------------------------------------------------------------------- 2023-01-11T21:40:58.1518697Z Ran 0 tests in 0.000s 2023-01-11T21:40:58.1518813Z 2023-01-11T21:40:58.1518876Z OK 2023-01-11T21:40:58.1518969Z 2023-01-11T21:40:58.1519054Z Generating XML reports... 2023-01-11T21:40:58.1519379Z Test results will be stored in test-reports/python-unittest/test_matmul_cuda 2023-01-11T21:40:58.1519563Z 2023-01-11T21:40:58.1519792Z ##[endgroup] 2023-01-11T21:40:58.1520168Z FINISHED PRINTING LOG FILE of test_matmul_cuda (/var/lib/jenkins/workspace/test/test-reports/test_matmul_cuda_tbwz3bgv) 2023-01-11T21:40:58.1520386Z 2023-01-11T21:41:00.1095483Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:41:00.1954083Z Ignoring disabled issues: [] 2023-01-11T21:41:00.2112668Z Running test_mkl_verbose ... [2023-01-11 21:41:00.210888] 2023-01-11T21:41:00.2113771Z Executing ['/opt/conda/bin/python', '-bb', 'test_mkl_verbose.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:41:00.211157] 2023-01-11T21:41:05.6023407Z 2023-01-11T21:41:05.6023922Z Expand the folded group to see the log file of test_mkl_verbose 2023-01-11T21:41:05.6026446Z ##[group]PRINTING LOG FILE of test_mkl_verbose (/var/lib/jenkins/workspace/test/test-reports/test_mkl_verbose_nkxkp4k6) 2023-01-11T21:41:05.6026830Z 2023-01-11T21:41:05.6027001Z Running tests... 2023-01-11T21:41:05.6027947Z ---------------------------------------------------------------------- 2023-01-11T21:41:05.6028856Z Test results will be stored in test-reports/python-unittest/test_mkl_verbose 2023-01-11T21:41:05.6029579Z test_verbose_off (__main__.TestMKLVerbose) ... ok (1.634s) 2023-01-11T21:41:05.6030191Z test_verbose_on (__main__.TestMKLVerbose) ... 
ok (2.041s) 2023-01-11T21:41:05.6030540Z 2023-01-11T21:41:05.6030992Z ---------------------------------------------------------------------- 2023-01-11T21:41:05.6031555Z Ran 2 tests in 3.675s 2023-01-11T21:41:05.6031810Z 2023-01-11T21:41:05.6031944Z OK 2023-01-11T21:41:05.6032187Z 2023-01-11T21:41:05.6032402Z Generating XML reports... 2023-01-11T21:41:05.6033383Z Generated XML report: test-reports/python-unittest/test_mkl_verbose/TEST-TestMKLVerbose-20230111214101.xml 2023-01-11T21:41:05.6034009Z 2023-01-11T21:41:05.6034571Z ##[endgroup] 2023-01-11T21:41:05.6035507Z FINISHED PRINTING LOG FILE of test_mkl_verbose (/var/lib/jenkins/workspace/test/test-reports/test_mkl_verbose_nkxkp4k6) 2023-01-11T21:41:05.6036026Z 2023-01-11T21:41:07.5179798Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:41:07.6031373Z Ignoring disabled issues: [] 2023-01-11T21:41:07.6190798Z Running test_mkldnn ... [2023-01-11 21:41:07.618799] 2023-01-11T21:41:07.6192898Z Executing ['/opt/conda/bin/python', '-bb', 'test_mkldnn.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:41:07.619059] 2023-01-11T21:41:23.4173516Z 2023-01-11T21:41:23.4174037Z Expand the folded group to see the log file of inductor/test_torchinductor 2023-01-11T21:41:23.4175469Z ##[group]PRINTING LOG FILE of inductor/test_torchinductor (/var/lib/jenkins/workspace/test/test-reports/inductor-test_torchinductor_yykynwpa) 2023-01-11T21:41:23.4206349Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:41:23.4206745Z 2023-01-11T21:41:23.4206880Z Running tests... 2023-01-11T21:41:23.4207447Z ---------------------------------------------------------------------- 2023-01-11T21:41:23.4208194Z Test results will be stored in test-reports/python-unittest/inductor.test_torchinductor 2023-01-11T21:41:23.4208761Z test_auto_simd (__main__.CPUReproTests) ... ok (0.246s) 2023-01-11T21:41:23.4209258Z test_complex_memory_overlap (__main__.CPUReproTests) ... ok (0.002s) 2023-01-11T21:41:23.4210236Z test_conv_stride_constraints (__main__.CPUReproTests) ... 
[2023-01-11 21:23:39,164] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph None 2023-01-11T21:41:23.4211363Z [2023-01-11 21:23:40,756] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph None 2023-01-11T21:41:23.4212215Z [2023-01-11 21:23:40,778] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph None 2023-01-11T21:41:23.4213053Z [2023-01-11 21:23:40,793] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph None 2023-01-11T21:41:23.4213422Z 2023-01-11T21:41:23.4213592Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.4213926Z import torch 2023-01-11T21:41:23.4214228Z import random 2023-01-11T21:41:23.4214611Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.4215070Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.4215346Z 2023-01-11T21:41:23.4215483Z aten = torch.ops.aten 2023-01-11T21:41:23.4215920Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.4216348Z async_compile = AsyncCompile() 2023-01-11T21:41:23.4216581Z 2023-01-11T21:41:23.4216588Z 2023-01-11T21:41:23.4216834Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.4217435Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.4218057Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.4218477Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.4218805Z { 2023-01-11T21:41:23.4219131Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.4219455Z { 2023-01-11T21:41:23.4219770Z #pragma omp for collapse(2) 2023-01-11T21:41:23.4220130Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.4220425Z { 2023-01-11T21:41:23.4220729Z for(long i1=0; i1<5; i1+=1) 2023-01-11T21:41:23.4221046Z { 2023-01-11T21:41:23.4221343Z #pragma GCC ivdep 2023-01-11T21:41:23.4221702Z for(long i2=0; i2<256; i2+=1) 2023-01-11T21:41:23.4222022Z { 2023-01-11T21:41:23.4222292Z { 2023-01-11T21:41:23.4222753Z { 2023-01-11T21:41:23.4223131Z auto tmp0 = in_ptr0[i2 + (256*i1) + (1280*i0)]; 2023-01-11T21:41:23.4223541Z out_ptr0[i1 + (5*i2) + (1280*i0)] = tmp0; 2023-01-11T21:41:23.4223887Z } 2023-01-11T21:41:23.4224183Z } 2023-01-11T21:41:23.4224468Z } 2023-01-11T21:41:23.4224733Z } 2023-01-11T21:41:23.4225010Z } 2023-01-11T21:41:23.4225277Z } 2023-01-11T21:41:23.4225523Z } 2023-01-11T21:41:23.4225822Z ''') 2023-01-11T21:41:23.4225983Z 2023-01-11T21:41:23.4225991Z 2023-01-11T21:41:23.4226150Z async_compile.wait(globals()) 2023-01-11T21:41:23.4226481Z del async_compile 2023-01-11T21:41:23.4226672Z 2023-01-11T21:41:23.4226797Z def call(args): 2023-01-11T21:41:23.4227117Z inp_1, weight_1 = args 2023-01-11T21:41:23.4227426Z args.clear() 2023-01-11T21:41:23.4228006Z buf0 = empty_strided((2, 5, 16, 16), (1280, 1, 80, 5), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4228680Z kernel_cpp_0(c_void_p(inp_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.4229069Z del inp_1 2023-01-11T21:41:23.4229494Z buf1 = aten.convolution(buf0, weight_1, None, (1, 1), (0, 0), (1, 1), False, (0, 0), 1) 2023-01-11T21:41:23.4229970Z assert_size_stride(buf1, (2, 6, 14, 14), (1176, 1, 84, 6)) 2023-01-11T21:41:23.4230298Z del buf0 2023-01-11T21:41:23.4230596Z del weight_1 2023-01-11T21:41:23.4230909Z return (buf1, ) 2023-01-11T21:41:23.4231102Z 2023-01-11T21:41:23.4231111Z 2023-01-11T21:41:23.4231243Z if __name__ == "__main__": 2023-01-11T21:41:23.4231622Z from 
torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.4232098Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.4232778Z inp_1 = rand_strided((2, 5, 16, 16), (1280, 256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4233443Z weight_1 = rand_strided((6, 5, 3, 3), (45, 1, 15, 5), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4234100Z print_performance(lambda: call([inp_1, weight_1])) 2023-01-11T21:41:23.4234366Z 2023-01-11T21:41:23.4234372Z 2023-01-11T21:41:23.4234542Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.4234894Z import torch 2023-01-11T21:41:23.4235177Z import random 2023-01-11T21:41:23.4235563Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.4236036Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.4236310Z 2023-01-11T21:41:23.4236432Z aten = torch.ops.aten 2023-01-11T21:41:23.4236871Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.4237322Z async_compile = AsyncCompile() 2023-01-11T21:41:23.4237553Z 2023-01-11T21:41:23.4237560Z 2023-01-11T21:41:23.4237716Z async_compile.wait(globals()) 2023-01-11T21:41:23.4238045Z del async_compile 2023-01-11T21:41:23.4238236Z 2023-01-11T21:41:23.4238361Z def call(args): 2023-01-11T21:41:23.4238682Z inp_1, weight_1 = args 2023-01-11T21:41:23.4238987Z args.clear() 2023-01-11T21:41:23.4239416Z buf0 = aten.convolution(inp_1, weight_1, None, (1, 1), (0, 0), (1, 1), False, (0, 0), 1) 2023-01-11T21:41:23.4239892Z assert_size_stride(buf0, (2, 6, 14, 14), (1176, 196, 14, 1)) 2023-01-11T21:41:23.4240227Z del inp_1 2023-01-11T21:41:23.4240519Z del weight_1 2023-01-11T21:41:23.4240819Z return (buf0, ) 2023-01-11T21:41:23.4241011Z 2023-01-11T21:41:23.4241019Z 2023-01-11T21:41:23.4241134Z if __name__ == "__main__": 2023-01-11T21:41:23.4241527Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.4241999Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.4242674Z inp_1 = rand_strided((2, 5, 16, 16), (1280, 256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4243344Z weight_1 = rand_strided((6, 5, 3, 3), (45, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4243839Z print_performance(lambda: call([inp_1, weight_1])) 2023-01-11T21:41:23.4244104Z 2023-01-11T21:41:23.4244223Z ok (1.655s) 2023-01-11T21:41:23.4245021Z test_cpp_kernel_profile (__main__.CPUReproTests) ... 
STAGE:2023-01-11 21:23:40 1454:1454 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:41:23.4245910Z [2023-01-11 21:23:40,809] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 0 2023-01-11T21:41:23.4246741Z [2023-01-11 21:23:44,849] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 0 2023-01-11T21:41:23.4247553Z STAGE:2023-01-11 21:23:44 1454:1454 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:41:23.4248359Z STAGE:2023-01-11 21:23:44 1454:1454 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:41:23.4248722Z 2023-01-11T21:41:23.4248891Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.4249244Z import torch 2023-01-11T21:41:23.4249526Z import random 2023-01-11T21:41:23.4249912Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.4250391Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.4250741Z 2023-01-11T21:41:23.4250881Z aten = torch.ops.aten 2023-01-11T21:41:23.4251313Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.4251761Z async_compile = AsyncCompile() 2023-01-11T21:41:23.4251992Z 2023-01-11T21:41:23.4252000Z 2023-01-11T21:41:23.4252249Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.4252634Z #include 2023-01-11T21:41:23.4253231Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.4253851Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.4254293Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.4254684Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.4255012Z { 2023-01-11T21:41:23.4255428Z RECORD_FUNCTION("graph_0_kernel_cpp_0", c10::ArrayRef({})); 2023-01-11T21:41:23.4257380Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.4257797Z { 2023-01-11T21:41:23.4258100Z #pragma omp for 2023-01-11T21:41:23.4258424Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.4258737Z { 2023-01-11T21:41:23.4259138Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.4259633Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.4260057Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.4260427Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.4260732Z } 2023-01-11T21:41:23.4261062Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.4261434Z for(long i0=96; i0<100; i0+=1) 2023-01-11T21:41:23.4261742Z { 2023-01-11T21:41:23.4262041Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.4262516Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.4262869Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.4263201Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.4263518Z } 2023-01-11T21:41:23.4263789Z } 2023-01-11T21:41:23.4264036Z } 2023-01-11T21:41:23.4264352Z ''') 2023-01-11T21:41:23.4264519Z 2023-01-11T21:41:23.4264527Z 2023-01-11T21:41:23.4264683Z async_compile.wait(globals()) 2023-01-11T21:41:23.4265013Z del async_compile 2023-01-11T21:41:23.4265211Z 2023-01-11T21:41:23.4265334Z def call(args): 2023-01-11T21:41:23.4265645Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.4266946Z args.clear() 2023-01-11T21:41:23.4267491Z buf0 = empty_strided((100, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4268066Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.4268490Z del arg0_1 2023-01-11T21:41:23.4268784Z del arg1_1 2023-01-11T21:41:23.4269082Z return (buf0, ) 
2023-01-11T21:41:23.4269276Z 2023-01-11T21:41:23.4269284Z 2023-01-11T21:41:23.4269413Z if __name__ == "__main__": 2023-01-11T21:41:23.4269789Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.4270265Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.4270896Z arg0_1 = rand_strided((100, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4271517Z arg1_1 = rand_strided((100, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4271999Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.4272263Z 2023-01-11T21:41:23.4272380Z ok (4.059s) 2023-01-11T21:41:23.4272756Z test_cpu_vec_cosim (__main__.CPUReproTests) ... ok (0.001s) 2023-01-11T21:41:23.4273673Z test_inplace_add_alpha (__main__.CPUReproTests) ... [2023-01-11 21:23:44,873] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph None 2023-01-11T21:41:23.4274656Z [2023-01-11 21:23:46,439] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph None 2023-01-11T21:41:23.4275025Z 2023-01-11T21:41:23.4275191Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.4275629Z import torch 2023-01-11T21:41:23.4275935Z import random 2023-01-11T21:41:23.4276324Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.4276784Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.4277060Z 2023-01-11T21:41:23.4277200Z aten = torch.ops.aten 2023-01-11T21:41:23.4277643Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.4278090Z async_compile = AsyncCompile() 2023-01-11T21:41:23.4278301Z 2023-01-11T21:41:23.4278308Z 2023-01-11T21:41:23.4278554Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.4279155Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.4279774Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.4280199Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.4280606Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.4280939Z { 2023-01-11T21:41:23.4281317Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.4281659Z { 2023-01-11T21:41:23.4281955Z #pragma omp for 2023-01-11T21:41:23.4282275Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.4282581Z { 2023-01-11T21:41:23.4282985Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.4283497Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.4283997Z auto tmp2 = at::vec::Vectorized(static_cast(0.55)); 2023-01-11T21:41:23.4284425Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.4284788Z auto tmp4 = tmp0 + tmp3; 2023-01-11T21:41:23.4285146Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.4285470Z } 2023-01-11T21:41:23.4285803Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.4286151Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:23.4286467Z { 2023-01-11T21:41:23.4286782Z auto tmp0 = out_ptr0[i0]; 2023-01-11T21:41:23.4287131Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.4287516Z auto tmp2 = static_cast(0.55); 2023-01-11T21:41:23.4287894Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.4288233Z auto tmp4 = tmp0 + tmp3; 2023-01-11T21:41:23.4288576Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:23.4288877Z } 2023-01-11T21:41:23.4289127Z } 2023-01-11T21:41:23.4289386Z } 2023-01-11T21:41:23.4289676Z ''') 2023-01-11T21:41:23.4289840Z 2023-01-11T21:41:23.4289848Z 2023-01-11T21:41:23.4290003Z async_compile.wait(globals()) 2023-01-11T21:41:23.4290330Z del 
async_compile 2023-01-11T21:41:23.4290520Z 2023-01-11T21:41:23.4290641Z def call(args): 2023-01-11T21:41:23.4290931Z x_1, y_1 = args 2023-01-11T21:41:23.4291210Z args.clear() 2023-01-11T21:41:23.4291662Z kernel_cpp_0(c_void_p(x_1.data_ptr()), c_void_p(y_1.data_ptr()), c_void_p(x_1.data_ptr())) 2023-01-11T21:41:23.4292083Z del y_1 2023-01-11T21:41:23.4292371Z return (x_1, ) 2023-01-11T21:41:23.4292560Z 2023-01-11T21:41:23.4292567Z 2023-01-11T21:41:23.4292699Z if __name__ == "__main__": 2023-01-11T21:41:23.4293086Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.4293540Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.4294159Z x_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4294776Z y_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4295232Z print_performance(lambda: call([x_1, y_1])) 2023-01-11T21:41:23.4295466Z 2023-01-11T21:41:23.4295577Z ok (1.585s) 2023-01-11T21:41:23.4296406Z test_inplace_squeeze_needed (__main__.CPUReproTests) ... [2023-01-11 21:23:46,594] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 1 2023-01-11T21:41:23.4297322Z [2023-01-11 21:23:48,242] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 1 2023-01-11T21:41:23.4297747Z 2023-01-11T21:41:23.4297901Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.4298247Z import torch 2023-01-11T21:41:23.4298551Z import random 2023-01-11T21:41:23.4298924Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.4299398Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.4299677Z 2023-01-11T21:41:23.4299817Z aten = torch.ops.aten 2023-01-11T21:41:23.4300253Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.4300687Z async_compile = AsyncCompile() 2023-01-11T21:41:23.4300914Z 2023-01-11T21:41:23.4300921Z 2023-01-11T21:41:23.4301163Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.4301761Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.4302468Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.4302908Z float* __restrict__ in_out_ptr1, 2023-01-11T21:41:23.4303393Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.4303812Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.4304208Z const float* __restrict__ in_ptr3, 2023-01-11T21:41:23.4304607Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.4305002Z float* __restrict__ out_ptr2, 2023-01-11T21:41:23.4305379Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.4305764Z bool* __restrict__ out_ptr4) 2023-01-11T21:41:23.4306084Z { 2023-01-11T21:41:23.4306362Z auto in_ptr0 = in_out_ptr0; 2023-01-11T21:41:23.4306714Z auto out_ptr1 = in_out_ptr1; 2023-01-11T21:41:23.4307029Z { 2023-01-11T21:41:23.4307512Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:23.4308007Z float tmp3 = 0; 2023-01-11T21:41:23.4308400Z auto tmp3_vec = at::vec::Vectorized(tmp3); 2023-01-11T21:41:23.4308837Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.4309169Z { 2023-01-11T21:41:23.4309513Z #pragma omp for reduction(+:tmp3_vec) 2023-01-11T21:41:23.4309895Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.4310193Z { 2023-01-11T21:41:23.4310590Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.4311095Z auto tmp1 = 
at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.4311511Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.4311859Z tmp3_vec += tmp2; 2023-01-11T21:41:23.4312169Z } 2023-01-11T21:41:23.4312672Z tmp3 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp3_vec); 2023-01-11T21:41:23.4313254Z #pragma omp for simd simdlen(4) reduction(+:tmp3) 2023-01-11T21:41:23.4313652Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:23.4314032Z { 2023-01-11T21:41:23.4314335Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.4314696Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.4315057Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.4315381Z tmp3 += tmp2; 2023-01-11T21:41:23.4315683Z } 2023-01-11T21:41:23.4315965Z } 2023-01-11T21:41:23.4316245Z out_ptr0[0] = tmp3; 2023-01-11T21:41:23.4316546Z } 2023-01-11T21:41:23.4316811Z { 2023-01-11T21:41:23.4317279Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:23.4317771Z float tmp8 = 0; 2023-01-11T21:41:23.4318165Z auto tmp8_vec = at::vec::Vectorized(tmp8); 2023-01-11T21:41:23.4318531Z float tmp9 = 0; 2023-01-11T21:41:23.4318927Z auto tmp9_vec = at::vec::Vectorized(tmp9); 2023-01-11T21:41:23.4319361Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.4319791Z { 2023-01-11T21:41:23.4320180Z #pragma omp for reduction(+:tmp8_vec) reduction(+:tmp9_vec) 2023-01-11T21:41:23.4320609Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.4320925Z { 2023-01-11T21:41:23.4321319Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.4321832Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.4322328Z auto tmp3 = at::vec::Vectorized(out_ptr0[0]); 2023-01-11T21:41:23.4322727Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.4323183Z auto tmp4 = at::vec::Vectorized(static_cast(10)); 2023-01-11T21:41:23.4323617Z auto tmp5 = tmp3 / tmp4; 2023-01-11T21:41:23.4324088Z auto tmp6 = tmp2 - tmp5; 2023-01-11T21:41:23.4324435Z auto tmp7 = tmp6.pow(2); 2023-01-11T21:41:23.4324788Z tmp8_vec += tmp7; 2023-01-11T21:41:23.4325185Z tmp9_vec += tmp2; 2023-01-11T21:41:23.4325481Z } 2023-01-11T21:41:23.4325999Z tmp8 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp8_vec); 2023-01-11T21:41:23.4326712Z tmp9 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp9_vec); 2023-01-11T21:41:23.4327332Z #pragma omp for simd simdlen(4) reduction(+:tmp8) reduction(+:tmp9) 2023-01-11T21:41:23.4327772Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:23.4328096Z { 2023-01-11T21:41:23.4328415Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.4328756Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.4329116Z auto tmp3 = out_ptr0[0]; 2023-01-11T21:41:23.4329476Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.4329844Z auto tmp4 = static_cast(10); 2023-01-11T21:41:23.4330236Z auto tmp5 = tmp3 / tmp4; 2023-01-11T21:41:23.4330675Z auto tmp6 = tmp2 - tmp5; 2023-01-11T21:41:23.4331022Z auto tmp7 = tmp6 * tmp6; 2023-01-11T21:41:23.4331357Z tmp8 += tmp7; 2023-01-11T21:41:23.4331677Z tmp9 += tmp2; 2023-01-11T21:41:23.4331963Z } 2023-01-11T21:41:23.4332237Z } 2023-01-11T21:41:23.4332531Z out_ptr1[0] = tmp8; 2023-01-11T21:41:23.4332840Z out_ptr2[0] = tmp9; 2023-01-11T21:41:23.4333142Z } 2023-01-11T21:41:23.4333474Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.4333802Z { 2023-01-11T21:41:23.4334093Z #pragma omp for 2023-01-11T21:41:23.4334430Z for(long i0=0; i0<10; i0+=1) 
2023-01-11T21:41:23.4334730Z { 2023-01-11T21:41:23.4335005Z { 2023-01-11T21:41:23.4335289Z { 2023-01-11T21:41:23.4335609Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:23.4335989Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.4336367Z auto tmp3 = out_ptr2[0]; 2023-01-11T21:41:23.4336720Z auto tmp7 = out_ptr1[0]; 2023-01-11T21:41:23.4337089Z auto tmp13 = in_ptr2[i0]; 2023-01-11T21:41:23.4337461Z auto tmp15 = in_ptr3[i0]; 2023-01-11T21:41:23.4337825Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.4338204Z auto tmp4 = static_cast(10); 2023-01-11T21:41:23.4338596Z auto tmp5 = tmp3 / tmp4; 2023-01-11T21:41:23.4339051Z auto tmp6 = tmp2 - tmp5; 2023-01-11T21:41:23.4339401Z auto tmp8 = tmp7 / tmp4; 2023-01-11T21:41:23.4339910Z auto tmp9 = static_cast(1e-05); 2023-01-11T21:41:23.4340307Z auto tmp10 = tmp8 + tmp9; 2023-01-11T21:41:23.4340685Z auto tmp11 = 1 / std::sqrt(tmp10); 2023-01-11T21:41:23.4341069Z auto tmp12 = tmp6 * tmp11; 2023-01-11T21:41:23.4341507Z auto tmp14 = tmp12 * tmp13; 2023-01-11T21:41:23.4341867Z auto tmp16 = tmp14 + tmp15; 2023-01-11T21:41:23.4342232Z auto tmp17 = tmp16 * (tmp16>0); 2023-01-11T21:41:23.4342754Z auto tmp18 = static_cast(0); 2023-01-11T21:41:23.4343149Z auto tmp19 = tmp17 <= tmp18; 2023-01-11T21:41:23.4343505Z in_out_ptr0[i0] = tmp12; 2023-01-11T21:41:23.4343859Z out_ptr3[i0] = tmp17; 2023-01-11T21:41:23.4344211Z out_ptr4[i0] = tmp19; 2023-01-11T21:41:23.4344510Z } 2023-01-11T21:41:23.4344791Z } 2023-01-11T21:41:23.4345073Z } 2023-01-11T21:41:23.4345357Z #pragma omp single 2023-01-11T21:41:23.4345663Z { 2023-01-11T21:41:23.4345931Z { 2023-01-11T21:41:23.4346195Z { 2023-01-11T21:41:23.4346520Z auto tmp0 = out_ptr1[0]; 2023-01-11T21:41:23.4346997Z auto tmp1 = static_cast(10); 2023-01-11T21:41:23.4347377Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.4347889Z auto tmp3 = static_cast(1e-05); 2023-01-11T21:41:23.4348289Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:23.4348662Z auto tmp5 = 1 / std::sqrt(tmp4); 2023-01-11T21:41:23.4349039Z auto tmp6 = tmp5 / tmp1; 2023-01-11T21:41:23.4349397Z in_out_ptr1[0] = tmp6; 2023-01-11T21:41:23.4349699Z } 2023-01-11T21:41:23.4349975Z } 2023-01-11T21:41:23.4350251Z } 2023-01-11T21:41:23.4350500Z } 2023-01-11T21:41:23.4350753Z } 2023-01-11T21:41:23.4351039Z ''') 2023-01-11T21:41:23.4351203Z 2023-01-11T21:41:23.4351212Z 2023-01-11T21:41:23.4351368Z async_compile.wait(globals()) 2023-01-11T21:41:23.4351694Z del async_compile 2023-01-11T21:41:23.4351885Z 2023-01-11T21:41:23.4352012Z def call(args): 2023-01-11T21:41:23.4352431Z primals_1, primals_2, primals_3, primals_4, primals_5 = args 2023-01-11T21:41:23.4352809Z args.clear() 2023-01-11T21:41:23.4353350Z buf0 = empty_strided((1, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4353978Z aten.mm.out(as_strided(primals_5, (1, 10), (10, 1)), as_strided(primals_1, (10, 10), (1, 10)), out=buf0) 2023-01-11T21:41:23.4354391Z del primals_1 2023-01-11T21:41:23.4354922Z buf1 = empty_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4355547Z buf2 = empty_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4356152Z buf3 = empty_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4356616Z buf4 = as_strided(buf0, (10, ), (1, )); del buf0 # reuse 2023-01-11T21:41:23.4357206Z buf5 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4357818Z buf6 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.4358232Z buf7 = buf2; del buf2 # reuse 
2023-01-11T21:41:23.4359012Z kernel_cpp_0(c_void_p(buf4.data_ptr()), c_void_p(buf7.data_ptr()), c_void_p(primals_2.data_ptr()), c_void_p(primals_3.data_ptr()), c_void_p(primals_4.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf5.data_ptr()), c_void_p(buf6.data_ptr())) 2023-01-11T21:41:23.4359705Z del buf1 2023-01-11T21:41:23.4359987Z del buf3 2023-01-11T21:41:23.4360260Z del primals_2 2023-01-11T21:41:23.4360564Z del primals_4 2023-01-11T21:41:23.4360990Z return (buf5, primals_3, as_strided(primals_5, (1, 10), (10, 1)), buf4, buf6, buf7, ) 2023-01-11T21:41:23.4361290Z 2023-01-11T21:41:23.4361297Z 2023-01-11T21:41:23.4361412Z if __name__ == "__main__": 2023-01-11T21:41:23.4361803Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.4362281Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.4362930Z primals_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4363682Z primals_2 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4364329Z primals_3 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4364979Z primals_4 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4365609Z primals_5 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4366177Z print_performance(lambda: call([primals_1, primals_2, primals_3, primals_4, primals_5])) 2023-01-11T21:41:23.4366515Z 2023-01-11T21:41:23.4366631Z ok (1.914s) 2023-01-11T21:41:23.4367484Z test_load_same_bool_tensor_twice (__main__.CPUReproTests) ... [2023-01-11 21:23:48,373] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 2 2023-01-11T21:41:23.4368410Z [2023-01-11 21:23:50,007] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 2 2023-01-11T21:41:23.4368773Z 2023-01-11T21:41:23.4368988Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.4369333Z import torch 2023-01-11T21:41:23.4369616Z import random 2023-01-11T21:41:23.4369991Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.4370460Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.4370737Z 2023-01-11T21:41:23.4370862Z aten = torch.ops.aten 2023-01-11T21:41:23.4371291Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.4371736Z async_compile = AsyncCompile() 2023-01-11T21:41:23.4371966Z 2023-01-11T21:41:23.4371973Z 2023-01-11T21:41:23.4372215Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.4372800Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.4373411Z extern "C" void kernel(const bool* __restrict__ in_ptr0, 2023-01-11T21:41:23.4373847Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.4374241Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.4374641Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.4374973Z { 2023-01-11T21:41:23.4375285Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.4375631Z { 2023-01-11T21:41:23.4375927Z #pragma omp for 2023-01-11T21:41:23.4376244Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:23.4376560Z { 2023-01-11T21:41:23.4376893Z float g_tmp_buffer_in_ptr0[8] = {0}; 2023-01-11T21:41:23.4377335Z flag_to_float(in_ptr0 + 8*i0, g_tmp_buffer_in_ptr0, 8); 2023-01-11T21:41:23.4377836Z auto tmp0 = at::vec::Vectorized::loadu(g_tmp_buffer_in_ptr0); 2023-01-11T21:41:23.4378357Z auto tmp2 = 
at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.4378849Z flag_to_float(in_ptr0 + 8*i0, g_tmp_buffer_in_ptr0, 8); 2023-01-11T21:41:23.4379464Z auto tmp1 = at::vec::Vectorized(static_cast(-33.0)); 2023-01-11T21:41:23.4379979Z auto tmp3 = decltype(tmp1)::blendv(tmp2, tmp1, tmp0); 2023-01-11T21:41:23.4380403Z tmp3.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.4380774Z tmp3.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.4381085Z } 2023-01-11T21:41:23.4381415Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.4381785Z for(long i0=32; i0<34; i0+=1) 2023-01-11T21:41:23.4382087Z { 2023-01-11T21:41:23.4382509Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.4382869Z auto tmp2 = in_ptr1[i0]; 2023-01-11T21:41:23.4383337Z auto tmp1 = static_cast(-33.0); 2023-01-11T21:41:23.4383733Z auto tmp3 = tmp0 ? tmp1 : tmp2; 2023-01-11T21:41:23.4384091Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:23.4384405Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.4384706Z } 2023-01-11T21:41:23.4384980Z } 2023-01-11T21:41:23.4385231Z } 2023-01-11T21:41:23.4385525Z ''') 2023-01-11T21:41:23.4385693Z 2023-01-11T21:41:23.4385786Z 2023-01-11T21:41:23.4385952Z async_compile.wait(globals()) 2023-01-11T21:41:23.4386290Z del async_compile 2023-01-11T21:41:23.4386483Z 2023-01-11T21:41:23.4386609Z def call(args): 2023-01-11T21:41:23.4386926Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.4387226Z args.clear() 2023-01-11T21:41:23.4387773Z buf0 = empty_strided((2, 17), (17, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4388411Z buf1 = empty_strided((2, 17), (17, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4389030Z kernel_cpp_0(c_void_p(arg1_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.4389498Z del arg0_1 2023-01-11T21:41:23.4389790Z del arg1_1 2023-01-11T21:41:23.4390099Z return (buf0, buf1, ) 2023-01-11T21:41:23.4390286Z 2023-01-11T21:41:23.4390309Z 2023-01-11T21:41:23.4390427Z if __name__ == "__main__": 2023-01-11T21:41:23.4390819Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.4391359Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.4391981Z arg0_1 = rand_strided((2, 17), (17, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4392616Z arg1_1 = rand_strided((2, 17), (17, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.4393088Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.4393348Z 2023-01-11T21:41:23.4393463Z ok (1.655s) 2023-01-11T21:41:23.4394314Z test_masked_fill_softmax (__main__.CPUReproTests) ... 
[2023-01-11 21:23:50,052] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 3 2023-01-11T21:41:23.4395232Z [2023-01-11 21:23:51,707] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 3 2023-01-11T21:41:23.4395593Z 2023-01-11T21:41:23.4395762Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.4396096Z import torch 2023-01-11T21:41:23.4396397Z import random 2023-01-11T21:41:23.4396784Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.4397267Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.4397527Z 2023-01-11T21:41:23.4397664Z aten = torch.ops.aten 2023-01-11T21:41:23.4398101Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.4398547Z async_compile = AsyncCompile() 2023-01-11T21:41:23.4398759Z 2023-01-11T21:41:23.4398782Z 2023-01-11T21:41:23.4399012Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.4399604Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.4400220Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.4400663Z const unsigned char* __restrict__ in_ptr0, 2023-01-11T21:41:23.4401095Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.4401499Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.4401892Z float* __restrict__ out_ptr2) 2023-01-11T21:41:23.4402202Z { 2023-01-11T21:41:23.4402506Z auto out_ptr1 = in_out_ptr0; 2023-01-11T21:41:23.4402887Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.4403210Z { 2023-01-11T21:41:23.4403499Z #pragma omp for 2023-01-11T21:41:23.4403837Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.4404140Z { 2023-01-11T21:41:23.4404412Z { 2023-01-11T21:41:23.4405298Z #pragma omp declare reduction(max:at::vec::Vectorized:omp_out = at::vec::maximum(omp_out, omp_in)) initializer(omp_priv={{-std::numeric_limits::infinity()}}) 2023-01-11T21:41:23.4406155Z float tmp5 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.4406635Z auto tmp5_vec = at::vec::Vectorized(tmp5); 2023-01-11T21:41:23.4407041Z for(long i1=0; i1<2; i1+=1) 2023-01-11T21:41:23.4407363Z { 2023-01-11T21:41:23.4407705Z float g_tmp_buffer_in_ptr0[8] = {0}; 2023-01-11T21:41:23.4408234Z flag_to_float(in_ptr0 + (8*i1) + (17*i0), g_tmp_buffer_in_ptr0, 8); 2023-01-11T21:41:23.4408766Z auto tmp0 = at::vec::Vectorized::loadu(g_tmp_buffer_in_ptr0); 2023-01-11T21:41:23.4409298Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr1 + (8*i1) + (17*i0)); 2023-01-11T21:41:23.4409726Z auto tmp1 = (tmp0); 2023-01-11T21:41:23.4410321Z auto tmp2 = at::vec::Vectorized(static_cast(-33.0)); 2023-01-11T21:41:23.4410836Z auto tmp4 = decltype(tmp2)::blendv(tmp3, tmp2, tmp1); 2023-01-11T21:41:23.4411295Z tmp5_vec = at::vec::maximum(tmp5_vec, tmp4); 2023-01-11T21:41:23.4411655Z } 2023-01-11T21:41:23.4412208Z tmp5 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return at::vec::maximum(x, y);}, tmp5_vec); 2023-01-11T21:41:23.4412824Z #pragma omp simd simdlen(4) reduction(max:tmp5) 2023-01-11T21:41:23.4413282Z for(long i1=16; i1<17; i1+=1) 2023-01-11T21:41:23.4413604Z { 2023-01-11T21:41:23.4413948Z auto tmp0 = in_ptr0[i1 + (17*i0)]; 2023-01-11T21:41:23.4414321Z auto tmp3 = in_ptr1[i1 + (17*i0)]; 2023-01-11T21:41:23.4414720Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.4415239Z auto tmp2 = static_cast(-33.0); 2023-01-11T21:41:23.4415628Z auto tmp4 = tmp1 ? 
tmp2 : tmp3; 2023-01-11T21:41:23.4416020Z tmp5 = std::max(tmp5, tmp4); 2023-01-11T21:41:23.4416348Z } 2023-01-11T21:41:23.4416645Z out_ptr0[i0] = tmp5; 2023-01-11T21:41:23.4416949Z } 2023-01-11T21:41:23.4417225Z } 2023-01-11T21:41:23.4417503Z #pragma omp for 2023-01-11T21:41:23.4417833Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.4418136Z { 2023-01-11T21:41:23.4418394Z { 2023-01-11T21:41:23.4418899Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:23.4419396Z float tmp8 = 0; 2023-01-11T21:41:23.4419803Z auto tmp8_vec = at::vec::Vectorized(tmp8); 2023-01-11T21:41:23.4420193Z for(long i1=0; i1<2; i1+=1) 2023-01-11T21:41:23.4420505Z { 2023-01-11T21:41:23.4420853Z float g_tmp_buffer_in_ptr0[8] = {0}; 2023-01-11T21:41:23.4421296Z flag_to_float(in_ptr0 + (8*i1) + (17*i0), g_tmp_buffer_in_ptr0, 8); 2023-01-11T21:41:23.4421816Z auto tmp0 = at::vec::Vectorized::loadu(g_tmp_buffer_in_ptr0); 2023-01-11T21:41:23.4426175Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr1 + (8*i1) + (17*i0)); 2023-01-11T21:41:23.4426720Z auto tmp5 = at::vec::Vectorized(out_ptr0[i0]); 2023-01-11T21:41:23.4427122Z auto tmp1 = (tmp0); 2023-01-11T21:41:23.4427735Z auto tmp2 = at::vec::Vectorized(static_cast(-33.0)); 2023-01-11T21:41:23.4428251Z auto tmp4 = decltype(tmp2)::blendv(tmp3, tmp2, tmp1); 2023-01-11T21:41:23.4428741Z auto tmp6 = tmp4 - tmp5; 2023-01-11T21:41:23.4429114Z auto tmp7 = tmp6.exp(); 2023-01-11T21:41:23.4429505Z tmp7.store(out_ptr1 + (8*i1) + (17*i0)); 2023-01-11T21:41:23.4429859Z tmp8_vec += tmp7; 2023-01-11T21:41:23.4430170Z } 2023-01-11T21:41:23.4430691Z tmp8 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp8_vec); 2023-01-11T21:41:23.4431273Z #pragma omp simd simdlen(4) reduction(+:tmp8) 2023-01-11T21:41:23.4431667Z for(long i1=16; i1<17; i1+=1) 2023-01-11T21:41:23.4431987Z { 2023-01-11T21:41:23.4432440Z auto tmp0 = in_ptr0[i1 + (17*i0)]; 2023-01-11T21:41:23.4432824Z auto tmp3 = in_ptr1[i1 + (17*i0)]; 2023-01-11T21:41:23.4433202Z auto tmp5 = out_ptr0[i0]; 2023-01-11T21:41:23.4433601Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.4434178Z auto tmp2 = static_cast(-33.0); 2023-01-11T21:41:23.4434583Z auto tmp4 = tmp1 ? 
tmp2 : tmp3; 2023-01-11T21:41:23.4435049Z auto tmp6 = tmp4 - tmp5; 2023-01-11T21:41:23.4435433Z auto tmp7 = std::exp(tmp6); 2023-01-11T21:41:23.4435794Z out_ptr1[i1 + (17*i0)] = tmp7; 2023-01-11T21:41:23.4436138Z tmp8 += tmp7; 2023-01-11T21:41:23.4436434Z } 2023-01-11T21:41:23.4436729Z out_ptr2[i0] = tmp8; 2023-01-11T21:41:23.4437038Z } 2023-01-11T21:41:23.4437313Z } 2023-01-11T21:41:23.4437591Z #pragma omp for 2023-01-11T21:41:23.4437993Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.4438305Z { 2023-01-11T21:41:23.4438596Z for(long i1=0; i1<2; i1+=1) 2023-01-11T21:41:23.4438908Z { 2023-01-11T21:41:23.4439325Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr1 + (8*i1) + (17*i0)); 2023-01-11T21:41:23.4439821Z auto tmp1 = at::vec::Vectorized(out_ptr2[i0]); 2023-01-11T21:41:23.4440235Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.4440633Z tmp2.store(in_out_ptr0 + (8*i1) + (17*i0)); 2023-01-11T21:41:23.4440958Z } 2023-01-11T21:41:23.4441289Z #pragma omp simd simdlen(4) 2023-01-11T21:41:23.4441657Z for(long i1=16; i1<17; i1+=1) 2023-01-11T21:41:23.4441971Z { 2023-01-11T21:41:23.4442293Z auto tmp0 = out_ptr1[i1 + (17*i0)]; 2023-01-11T21:41:23.4442669Z auto tmp1 = out_ptr2[i0]; 2023-01-11T21:41:23.4443029Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.4443396Z in_out_ptr0[i1 + (17*i0)] = tmp2; 2023-01-11T21:41:23.4443720Z } 2023-01-11T21:41:23.4443995Z } 2023-01-11T21:41:23.4444244Z } 2023-01-11T21:41:23.4444503Z } 2023-01-11T21:41:23.4444793Z ''') 2023-01-11T21:41:23.4444945Z 2023-01-11T21:41:23.4444952Z 2023-01-11T21:41:23.4445110Z async_compile.wait(globals()) 2023-01-11T21:41:23.4445459Z del async_compile 2023-01-11T21:41:23.4445656Z 2023-01-11T21:41:23.4445781Z def call(args): 2023-01-11T21:41:23.4446076Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.4446395Z args.clear() 2023-01-11T21:41:23.4446936Z buf0 = empty_strided((2, 1), (1, 2), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4447564Z buf1 = empty_strided((2, 17), (17, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4448196Z buf2 = empty_strided((2, 1), (1, 2), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4448627Z buf3 = buf1; del buf1 # reuse 2023-01-11T21:41:23.4449222Z kernel_cpp_0(c_void_p(buf3.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.4449721Z del arg0_1 2023-01-11T21:41:23.4450015Z del arg1_1 2023-01-11T21:41:23.4450310Z return (buf3, ) 2023-01-11T21:41:23.4450488Z 2023-01-11T21:41:23.4450512Z 2023-01-11T21:41:23.4450626Z if __name__ == "__main__": 2023-01-11T21:41:23.4451019Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.4451494Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.4452112Z arg0_1 = rand_strided((2, 17), (17, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4452749Z arg1_1 = rand_strided((2, 17), (17, 1), device='cpu', dtype=torch.uint8) 2023-01-11T21:41:23.4453231Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.4453490Z 2023-01-11T21:41:23.4453608Z ok (1.701s) 2023-01-11T21:41:23.4454418Z test_new_vec_op_cpu_only (__main__.CPUReproTests) ... 
[2023-01-11 21:23:51,739] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph None 2023-01-11T21:41:23.4455418Z [2023-01-11 21:23:53,889] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph None 2023-01-11T21:41:23.4455783Z 2023-01-11T21:41:23.4455951Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.4456284Z import torch 2023-01-11T21:41:23.4456587Z import random 2023-01-11T21:41:23.4456970Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.4457444Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.4457706Z 2023-01-11T21:41:23.4457842Z aten = torch.ops.aten 2023-01-11T21:41:23.4458277Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.4458715Z async_compile = AsyncCompile() 2023-01-11T21:41:23.4458943Z 2023-01-11T21:41:23.4458950Z 2023-01-11T21:41:23.4459184Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.4459835Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.4460459Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.4460878Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.4461212Z { 2023-01-11T21:41:23.4461533Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.4461872Z { 2023-01-11T21:41:23.4462152Z #pragma omp for 2023-01-11T21:41:23.4462611Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.4462929Z { 2023-01-11T21:41:23.4463312Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.4463739Z auto tmp1 = tmp0.erf(); 2023-01-11T21:41:23.4464105Z auto tmp2 = tmp1.expm1(); 2023-01-11T21:41:23.4464453Z auto tmp3 = tmp2.log1p(); 2023-01-11T21:41:23.4464821Z tmp3.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.4465149Z } 2023-01-11T21:41:23.4465468Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.4465844Z for(long i0=16; i0<18; i0+=1) 2023-01-11T21:41:23.4466157Z { 2023-01-11T21:41:23.4466451Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.4466819Z auto tmp1 = std::erf(tmp0); 2023-01-11T21:41:23.4467192Z auto tmp2 = std::expm1(tmp1); 2023-01-11T21:41:23.4467551Z auto tmp3 = std::log1p(tmp2); 2023-01-11T21:41:23.4467912Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:23.4468217Z } 2023-01-11T21:41:23.4468474Z } 2023-01-11T21:41:23.4468736Z } 2023-01-11T21:41:23.4469027Z ''') 2023-01-11T21:41:23.4469195Z 2023-01-11T21:41:23.4469203Z 2023-01-11T21:41:23.4469363Z async_compile.wait(globals()) 2023-01-11T21:41:23.4469698Z del async_compile 2023-01-11T21:41:23.4469890Z 2023-01-11T21:41:23.4470017Z def call(args): 2023-01-11T21:41:23.4470311Z x_1, = args 2023-01-11T21:41:23.4470595Z args.clear() 2023-01-11T21:41:23.4471136Z buf0 = empty_strided((2, 9), (9, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4471659Z kernel_cpp_0(c_void_p(x_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.4472030Z del x_1 2023-01-11T21:41:23.4472323Z return (buf0, ) 2023-01-11T21:41:23.4472515Z 2023-01-11T21:41:23.4472522Z 2023-01-11T21:41:23.4472655Z if __name__ == "__main__": 2023-01-11T21:41:23.4473028Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.4473498Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.4474169Z x_1 = rand_strided((2, 9), (9, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4474633Z print_performance(lambda: call([x_1])) 2023-01-11T21:41:23.4474862Z 2023-01-11T21:41:23.4474973Z ok (2.181s) 2023-01-11T21:41:23.4475777Z test_no_op_squeeze 
(__main__.CPUReproTests) ... [2023-01-11 21:23:53,908] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 4 2023-01-11T21:41:23.4476699Z [2023-01-11 21:23:53,910] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 4 2023-01-11T21:41:23.4477156Z 2023-01-11T21:41:23.4477308Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.4477650Z import torch 2023-01-11T21:41:23.4477953Z import random 2023-01-11T21:41:23.4478328Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.4478801Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.4479080Z 2023-01-11T21:41:23.4479217Z aten = torch.ops.aten 2023-01-11T21:41:23.4479655Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.4480084Z async_compile = AsyncCompile() 2023-01-11T21:41:23.4480308Z 2023-01-11T21:41:23.4480316Z 2023-01-11T21:41:23.4480474Z async_compile.wait(globals()) 2023-01-11T21:41:23.4480818Z del async_compile 2023-01-11T21:41:23.4481008Z 2023-01-11T21:41:23.4481120Z def call(args): 2023-01-11T21:41:23.4481423Z arg0_1, = args 2023-01-11T21:41:23.4481716Z args.clear() 2023-01-11T21:41:23.4482008Z return (arg0_1, ) 2023-01-11T21:41:23.4482201Z 2023-01-11T21:41:23.4482213Z 2023-01-11T21:41:23.4482406Z if __name__ == "__main__": 2023-01-11T21:41:23.4482797Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.4483268Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.4483889Z arg0_1 = rand_strided((10, 20), (20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4484356Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.4484606Z 2023-01-11T21:41:23.4484722Z ok (0.021s) 2023-01-11T21:41:23.4485529Z test_parallel_num_threads (__main__.CPUReproTests) ... 
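[editor's note] The kernel dumped next for test_parallel_num_threads is an elementwise add over a (10, 20) buffer, split into a vectorized main loop and a scalar tail. A minimal sketch of producing a comparable kernel, again assuming torch.compile with the inductor backend; whether the loop ends up wrapped in an `#pragma omp parallel num_threads(...)` region depends on the configured thread count and problem size, which is an assumption here rather than something this log states:

import torch

torch.set_num_threads(4)  # documented way to pin the ATen/OpenMP thread count

def add(a, b):
    return a + b

compiled = torch.compile(add, backend="inductor")
a = torch.randn(10, 20)
b = torch.randn(10, 20)
torch.testing.assert_close(compiled(a, b), a + b)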
[2023-01-11 21:23:53,969] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 5 2023-01-11T21:41:23.4486450Z [2023-01-11 21:23:55,578] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 5 2023-01-11T21:41:23.4486809Z 2023-01-11T21:41:23.4486973Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.4487322Z import torch 2023-01-11T21:41:23.4487605Z import random 2023-01-11T21:41:23.4487995Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.4488463Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.4488723Z 2023-01-11T21:41:23.4488859Z aten = torch.ops.aten 2023-01-11T21:41:23.4489289Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.4489733Z async_compile = AsyncCompile() 2023-01-11T21:41:23.4489958Z 2023-01-11T21:41:23.4489966Z 2023-01-11T21:41:23.4490197Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.4490794Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.4491415Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.4491849Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.4492236Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.4492561Z { 2023-01-11T21:41:23.4492850Z for(long i0=0; i0<25; i0+=1) 2023-01-11T21:41:23.4493135Z { 2023-01-11T21:41:23.4493538Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.4494036Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.4494430Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.4494784Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.4495104Z } 2023-01-11T21:41:23.4495401Z #pragma omp simd simdlen(4) 2023-01-11T21:41:23.4495753Z for(long i0=200; i0<200; i0+=1) 2023-01-11T21:41:23.4496054Z { 2023-01-11T21:41:23.4496336Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.4496677Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.4497018Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.4497336Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.4497628Z } 2023-01-11T21:41:23.4497890Z } 2023-01-11T21:41:23.4498163Z ''') 2023-01-11T21:41:23.4498329Z 2023-01-11T21:41:23.4498336Z 2023-01-11T21:41:23.4498492Z async_compile.wait(globals()) 2023-01-11T21:41:23.4498831Z del async_compile 2023-01-11T21:41:23.4499084Z 2023-01-11T21:41:23.4499215Z def call(args): 2023-01-11T21:41:23.4499517Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.4499827Z args.clear() 2023-01-11T21:41:23.4500376Z buf0 = empty_strided((10, 20), (20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4500929Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.4501367Z del arg0_1 2023-01-11T21:41:23.4501661Z del arg1_1 2023-01-11T21:41:23.4501944Z return (buf0, ) 2023-01-11T21:41:23.4502134Z 2023-01-11T21:41:23.4502142Z 2023-01-11T21:41:23.4502275Z if __name__ == "__main__": 2023-01-11T21:41:23.4502790Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.4503255Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.4503903Z arg0_1 = rand_strided((10, 20), (20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4504545Z arg1_1 = rand_strided((10, 20), (20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4505111Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.4505360Z 2023-01-11T21:41:23.4505472Z ok (1.671s) 2023-01-11T21:41:23.4506287Z test_sign_cpu_only 
(__main__.CPUReproTests) ... [2023-01-11 21:23:55,602] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph None 2023-01-11T21:41:23.4507207Z [2023-01-11 21:23:57,231] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph None 2023-01-11T21:41:23.4507575Z 2023-01-11T21:41:23.4507740Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.4508069Z import torch 2023-01-11T21:41:23.4508375Z import random 2023-01-11T21:41:23.4508761Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.4509224Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.4509500Z 2023-01-11T21:41:23.4509637Z aten = torch.ops.aten 2023-01-11T21:41:23.4510069Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.4510517Z async_compile = AsyncCompile() 2023-01-11T21:41:23.4510742Z 2023-01-11T21:41:23.4510749Z 2023-01-11T21:41:23.4510991Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.4511594Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.4512214Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.4512628Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.4512955Z { 2023-01-11T21:41:23.4513280Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.4513598Z { 2023-01-11T21:41:23.4513943Z #pragma omp for 2023-01-11T21:41:23.4514281Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.4514576Z { 2023-01-11T21:41:23.4514973Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.4515554Z auto tmp1 = decltype(tmp0)::blendv(decltype(tmp0)(0), decltype(tmp0)(1), decltype(tmp0)(0) < tmp0); 2023-01-11T21:41:23.4516182Z auto tmp2 = decltype(tmp0)::blendv(decltype(tmp0)(0), decltype(tmp0)(1), tmp0 < decltype(tmp0)(0)); 2023-01-11T21:41:23.4516744Z auto tmp3 = tmp1 - tmp2; 2023-01-11T21:41:23.4517120Z tmp3.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.4517448Z } 2023-01-11T21:41:23.4517767Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.4518139Z for(long i0=16; i0<18; i0+=1) 2023-01-11T21:41:23.4518451Z { 2023-01-11T21:41:23.4518740Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.4519098Z auto tmp1 = tmp0 > 0 ? 1 : 0; 2023-01-11T21:41:23.4519457Z auto tmp2 = tmp0 < 0 ? 
1 : 0; 2023-01-11T21:41:23.4519870Z auto tmp3 = tmp1 - tmp2; 2023-01-11T21:41:23.4520218Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:23.4520519Z } 2023-01-11T21:41:23.4520776Z } 2023-01-11T21:41:23.4521040Z } 2023-01-11T21:41:23.4521327Z ''') 2023-01-11T21:41:23.4521475Z 2023-01-11T21:41:23.4521600Z 2023-01-11T21:41:23.4521754Z async_compile.wait(globals()) 2023-01-11T21:41:23.4522096Z del async_compile 2023-01-11T21:41:23.4522289Z 2023-01-11T21:41:23.4522416Z def call(args): 2023-01-11T21:41:23.4522701Z x_1, = args 2023-01-11T21:41:23.4522996Z args.clear() 2023-01-11T21:41:23.4523533Z buf0 = empty_strided((2, 9), (9, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4524035Z kernel_cpp_0(c_void_p(x_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.4524425Z del x_1 2023-01-11T21:41:23.4524719Z return (buf0, ) 2023-01-11T21:41:23.4524910Z 2023-01-11T21:41:23.4524917Z 2023-01-11T21:41:23.4525051Z if __name__ == "__main__": 2023-01-11T21:41:23.4525430Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.4525906Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.4526524Z x_1 = rand_strided((2, 9), (9, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4526964Z print_performance(lambda: call([x_1])) 2023-01-11T21:41:23.4527209Z 2023-01-11T21:41:23.4527378Z ok (1.649s) 2023-01-11T21:41:23.4527771Z test_timed_cpu_only (__main__.CPUReproTests) ... ok (0.003s) 2023-01-11T21:41:23.4528675Z test_vec_dynamic_shapes (__main__.CPUReproTests) ... [2023-01-11 21:23:57,396] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 6 2023-01-11T21:41:23.4529596Z [2023-01-11 21:23:59,007] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 6 2023-01-11T21:41:23.4529957Z 2023-01-11T21:41:23.4530127Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.4530471Z import torch 2023-01-11T21:41:23.4530755Z import random 2023-01-11T21:41:23.4531139Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.4531613Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.4531888Z 2023-01-11T21:41:23.4532011Z aten = torch.ops.aten 2023-01-11T21:41:23.4532449Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.4532906Z async_compile = AsyncCompile() 2023-01-11T21:41:23.4533136Z 2023-01-11T21:41:23.4533143Z 2023-01-11T21:41:23.4533387Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.4533971Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.4534581Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.4535024Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.4535409Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.4535801Z float* __restrict__ out_ptr2, 2023-01-11T21:41:23.4536165Z const long ks0, 2023-01-11T21:41:23.4536490Z const long ks1) 2023-01-11T21:41:23.4536794Z { 2023-01-11T21:41:23.4537099Z auto out_ptr1 = in_out_ptr0; 2023-01-11T21:41:23.4537464Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.4537802Z { 2023-01-11T21:41:23.4538095Z #pragma omp for 2023-01-11T21:41:23.4538431Z for(long i0=0; i0::infinity(); 2023-01-11T21:41:23.4540296Z for(long i1=0; i10); 2023-01-11T21:41:23.4584167Z auto tmp8 = std::cos(tmp7); 2023-01-11T21:41:23.4584545Z auto tmp9 = std::exp(tmp8); 2023-01-11T21:41:23.4584916Z auto tmp10 = std::sqrt(tmp9); 2023-01-11T21:41:23.4585295Z auto tmp11 = tmp10 + tmp0; 2023-01-11T21:41:23.4585766Z auto 
tmp13 = tmp11 - tmp12; 2023-01-11T21:41:23.4586129Z auto tmp14 = tmp13 * tmp0; 2023-01-11T21:41:23.4586497Z auto tmp15 = tmp14 / tmp0; 2023-01-11T21:41:23.4586867Z auto tmp16 = tmp15 * tmp15; 2023-01-11T21:41:23.4587223Z auto tmp17 = tmp16 * tmp16; 2023-01-11T21:41:23.4587590Z auto tmp18 = tmp17 * tmp15; 2023-01-11T21:41:23.4587960Z auto tmp19 = tmp18 * tmp18; 2023-01-11T21:41:23.4588329Z auto tmp20 = std::log(tmp19); 2023-01-11T21:41:23.4588722Z auto tmp21 = std::floor(tmp20); 2023-01-11T21:41:23.4589112Z auto tmp22 = std::ceil(tmp21); 2023-01-11T21:41:23.4589505Z auto tmp23 = std::trunc(tmp22); 2023-01-11T21:41:23.4589891Z auto tmp24 = std::lgamma(tmp23); 2023-01-11T21:41:23.4590296Z auto tmp25 = std::fmod(tmp24, tmp12); 2023-01-11T21:41:23.4590790Z auto tmp26 = tmp25 > 0 ? 1 : 0; 2023-01-11T21:41:23.4591153Z auto tmp27 = tmp25 < 0 ? 1 : 0; 2023-01-11T21:41:23.4591624Z auto tmp28 = tmp26 - tmp27; 2023-01-11T21:41:23.4592000Z auto tmp29 = tmp28 + tmp12; 2023-01-11T21:41:23.4592343Z out_ptr0[i0] = tmp29; 2023-01-11T21:41:23.4592659Z } 2023-01-11T21:41:23.4592942Z } 2023-01-11T21:41:23.4593205Z } 2023-01-11T21:41:23.4593476Z } 2023-01-11T21:41:23.4593796Z } 2023-01-11T21:41:23.4594076Z ''') 2023-01-11T21:41:23.4594245Z 2023-01-11T21:41:23.4594253Z 2023-01-11T21:41:23.4594409Z async_compile.wait(globals()) 2023-01-11T21:41:23.4594757Z del async_compile 2023-01-11T21:41:23.4594950Z 2023-01-11T21:41:23.4595079Z def call(args): 2023-01-11T21:41:23.4595364Z x1_1, x2_1 = args 2023-01-11T21:41:23.4595670Z args.clear() 2023-01-11T21:41:23.4596295Z buf0 = empty_strided((10, 20), (20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4603579Z kernel_cpp_0(c_void_p(x1_1.data_ptr()), c_void_p(x2_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.4604894Z del x1_1 2023-01-11T21:41:23.4605182Z del x2_1 2023-01-11T21:41:23.4605455Z return (buf0, ) 2023-01-11T21:41:23.4605645Z 2023-01-11T21:41:23.4605653Z 2023-01-11T21:41:23.4605787Z if __name__ == "__main__": 2023-01-11T21:41:23.4606183Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.4608924Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.4609577Z x1_1 = rand_strided((10, 20), (20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4610210Z x2_1 = rand_strided((10, 20), (20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4610675Z print_performance(lambda: call([x1_1, x2_1])) 2023-01-11T21:41:23.4610914Z 2023-01-11T21:41:23.4610936Z 2023-01-11T21:41:23.4611086Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.4611440Z import torch 2023-01-11T21:41:23.4611742Z import random 2023-01-11T21:41:23.4612115Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.4612583Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.4612857Z 2023-01-11T21:41:23.4612993Z aten = torch.ops.aten 2023-01-11T21:41:23.4618669Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.4619142Z async_compile = AsyncCompile() 2023-01-11T21:41:23.4619371Z 2023-01-11T21:41:23.4619378Z 2023-01-11T21:41:23.4619632Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.4620224Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.4620826Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.4621262Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.4621661Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.4621974Z { 
2023-01-11T21:41:23.4622313Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.4622775Z { 2023-01-11T21:41:23.4623053Z #pragma omp for 2023-01-11T21:41:23.4623395Z for(long i0=0; i0<25; i0+=1) 2023-01-11T21:41:23.4623707Z { 2023-01-11T21:41:23.4624098Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.4624601Z auto tmp11 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.4625022Z auto tmp1 = tmp0.abs(); 2023-01-11T21:41:23.4625377Z auto tmp2 = tmp1.sin(); 2023-01-11T21:41:23.4625715Z auto tmp3 = tmp2.neg(); 2023-01-11T21:41:23.4626061Z auto tmp4 = tmp3 * tmp3; 2023-01-11T21:41:23.4626501Z auto tmp5 = decltype(tmp4)(1)/(decltype(tmp4)(1) + tmp4.neg().exp()); 2023-01-11T21:41:23.4626994Z auto tmp6 = at::vec::clamp_min(tmp5, decltype(tmp5)(0)); 2023-01-11T21:41:23.4627398Z auto tmp7 = tmp6.cos(); 2023-01-11T21:41:23.4627744Z auto tmp8 = tmp7.exp(); 2023-01-11T21:41:23.4628195Z auto tmp9 = tmp8.sqrt(); 2023-01-11T21:41:23.4628551Z auto tmp10 = tmp9 + tmp0; 2023-01-11T21:41:23.4629000Z auto tmp12 = tmp10 - tmp11; 2023-01-11T21:41:23.4629344Z auto tmp13 = tmp12 * tmp0; 2023-01-11T21:41:23.4629697Z auto tmp14 = tmp13 / tmp0; 2023-01-11T21:41:23.4630048Z auto tmp15 = tmp14 * tmp14; 2023-01-11T21:41:23.4630401Z auto tmp16 = tmp15 * tmp15; 2023-01-11T21:41:23.4630738Z auto tmp17 = tmp16 * tmp14; 2023-01-11T21:41:23.4631089Z auto tmp18 = tmp17 * tmp17; 2023-01-11T21:41:23.4631439Z auto tmp19 = tmp18.log(); 2023-01-11T21:41:23.4631780Z auto tmp20 = tmp19.floor(); 2023-01-11T21:41:23.4632137Z auto tmp21 = tmp20.ceil(); 2023-01-11T21:41:23.4632493Z auto tmp22 = tmp21.trunc(); 2023-01-11T21:41:23.4632836Z auto tmp23 = tmp22.lgamma(); 2023-01-11T21:41:23.4633281Z auto tmp24 = tmp23.fmod(tmp11); 2023-01-11T21:41:23.4633863Z auto tmp25 = decltype(tmp24)::blendv(decltype(tmp24)(0), decltype(tmp24)(1), decltype(tmp24)(0) < tmp24); 2023-01-11T21:41:23.4634492Z auto tmp26 = decltype(tmp24)::blendv(decltype(tmp24)(0), decltype(tmp24)(1), tmp24 < decltype(tmp24)(0)); 2023-01-11T21:41:23.4635067Z auto tmp27 = tmp25 - tmp26; 2023-01-11T21:41:23.4635430Z auto tmp28 = tmp27 + tmp11; 2023-01-11T21:41:23.4635773Z tmp28.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.4636080Z } 2023-01-11T21:41:23.4636407Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.4636774Z for(long i0=200; i0<200; i0+=1) 2023-01-11T21:41:23.4637069Z { 2023-01-11T21:41:23.4637375Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.4637731Z auto tmp12 = in_ptr1[i0]; 2023-01-11T21:41:23.4638075Z auto tmp1 = std::abs(tmp0); 2023-01-11T21:41:23.4638435Z auto tmp2 = std::sin(tmp1); 2023-01-11T21:41:23.4638857Z auto tmp3 = -tmp2; 2023-01-11T21:41:23.4639187Z auto tmp4 = tmp3 * tmp3; 2023-01-11T21:41:23.4639636Z auto tmp5 = std::exp(-tmp4); 2023-01-11T21:41:23.4639996Z auto tmp6 = 1 / (1 + tmp5); 2023-01-11T21:41:23.4640334Z auto tmp7 = tmp6 * (tmp6>0); 2023-01-11T21:41:23.4640705Z auto tmp8 = std::cos(tmp7); 2023-01-11T21:41:23.4641073Z auto tmp9 = std::exp(tmp8); 2023-01-11T21:41:23.4641446Z auto tmp10 = std::sqrt(tmp9); 2023-01-11T21:41:23.4641795Z auto tmp11 = tmp10 + tmp0; 2023-01-11T21:41:23.4642235Z auto tmp13 = tmp11 - tmp12; 2023-01-11T21:41:23.4642592Z auto tmp14 = tmp13 * tmp0; 2023-01-11T21:41:23.4642936Z auto tmp15 = tmp14 / tmp0; 2023-01-11T21:41:23.4643294Z auto tmp16 = tmp15 * tmp15; 2023-01-11T21:41:23.4643647Z auto tmp17 = tmp16 * tmp16; 2023-01-11T21:41:23.4643993Z auto tmp18 = tmp17 * tmp15; 2023-01-11T21:41:23.4644353Z auto tmp19 = tmp18 * tmp18; 2023-01-11T21:41:23.4644718Z auto tmp20 = 
std::log(tmp19); 2023-01-11T21:41:23.4645082Z auto tmp21 = std::floor(tmp20); 2023-01-11T21:41:23.4645461Z auto tmp22 = std::ceil(tmp21); 2023-01-11T21:41:23.4645845Z auto tmp23 = std::trunc(tmp22); 2023-01-11T21:41:23.4646215Z auto tmp24 = std::lgamma(tmp23); 2023-01-11T21:41:23.4646617Z auto tmp25 = std::fmod(tmp24, tmp12); 2023-01-11T21:41:23.4647001Z auto tmp26 = tmp25 > 0 ? 1 : 0; 2023-01-11T21:41:23.4647363Z auto tmp27 = tmp25 < 0 ? 1 : 0; 2023-01-11T21:41:23.4647786Z auto tmp28 = tmp26 - tmp27; 2023-01-11T21:41:23.4648145Z auto tmp29 = tmp28 + tmp12; 2023-01-11T21:41:23.4648492Z out_ptr0[i0] = tmp29; 2023-01-11T21:41:23.4648780Z } 2023-01-11T21:41:23.4649051Z } 2023-01-11T21:41:23.4649312Z } 2023-01-11T21:41:23.4649583Z ''') 2023-01-11T21:41:23.4649817Z 2023-01-11T21:41:23.4649830Z 2023-01-11T21:41:23.4649985Z async_compile.wait(globals()) 2023-01-11T21:41:23.4650113Z del async_compile 2023-01-11T21:41:23.4650121Z 2023-01-11T21:41:23.4650245Z def call(args): 2023-01-11T21:41:23.4650368Z x1_1, x2_1 = args 2023-01-11T21:41:23.4650482Z args.clear() 2023-01-11T21:41:23.4650852Z buf0 = empty_strided((10, 20), (20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4651130Z kernel_cpp_0(c_void_p(x1_1.data_ptr()), c_void_p(x2_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.4651244Z del x1_1 2023-01-11T21:41:23.4651355Z del x2_1 2023-01-11T21:41:23.4651480Z return (buf0, ) 2023-01-11T21:41:23.4651488Z 2023-01-11T21:41:23.4651495Z 2023-01-11T21:41:23.4651626Z if __name__ == "__main__": 2023-01-11T21:41:23.4651831Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.4652036Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.4652442Z x1_1 = rand_strided((10, 20), (20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4652807Z x2_1 = rand_strided((10, 20), (20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4652997Z print_performance(lambda: call([x1_1, x2_1])) 2023-01-11T21:41:23.4653005Z 2023-01-11T21:41:23.4653507Z [2023-01-11 21:24:03,604] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph None 2023-01-11T21:41:23.4653993Z [2023-01-11 21:24:03,826] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph None 2023-01-11T21:41:23.4654478Z [2023-01-11 21:24:05,569] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph None 2023-01-11T21:41:23.4654487Z 2023-01-11T21:41:23.4654651Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.4654758Z import torch 2023-01-11T21:41:23.4654880Z import random 2023-01-11T21:41:23.4655088Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.4655312Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.4655330Z 2023-01-11T21:41:23.4655468Z aten = torch.ops.aten 2023-01-11T21:41:23.4655700Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.4655864Z async_compile = AsyncCompile() 2023-01-11T21:41:23.4655873Z 2023-01-11T21:41:23.4655880Z 2023-01-11T21:41:23.4656126Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.4656482Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.4656697Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.4656879Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.4657055Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.4657160Z { 2023-01-11T21:41:23.4657335Z #pragma omp parallel 
num_threads(4) 2023-01-11T21:41:23.4657446Z { 2023-01-11T21:41:23.4657564Z #pragma omp for 2023-01-11T21:41:23.4657707Z for(long i0=0; i0<20; i0+=1) 2023-01-11T21:41:23.4657824Z { 2023-01-11T21:41:23.4657966Z #pragma GCC ivdep 2023-01-11T21:41:23.4658115Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:23.4658228Z { 2023-01-11T21:41:23.4658341Z { 2023-01-11T21:41:23.4658441Z { 2023-01-11T21:41:23.4658627Z auto tmp0 = in_ptr0[i0 + (20*i1)]; 2023-01-11T21:41:23.4658804Z auto tmp12 = in_ptr1[i1 + (10*i0)]; 2023-01-11T21:41:23.4658978Z auto tmp1 = std::abs(tmp0); 2023-01-11T21:41:23.4659147Z auto tmp2 = std::sin(tmp1); 2023-01-11T21:41:23.4659382Z auto tmp3 = -tmp2; 2023-01-11T21:41:23.4659545Z auto tmp4 = tmp3 * tmp3; 2023-01-11T21:41:23.4659800Z auto tmp5 = std::exp(-tmp4); 2023-01-11T21:41:23.4659959Z auto tmp6 = 1 / (1 + tmp5); 2023-01-11T21:41:23.4660125Z auto tmp7 = tmp6 * (tmp6>0); 2023-01-11T21:41:23.4660352Z auto tmp8 = std::cos(tmp7); 2023-01-11T21:41:23.4660523Z auto tmp9 = std::exp(tmp8); 2023-01-11T21:41:23.4660702Z auto tmp10 = std::sqrt(tmp9); 2023-01-11T21:41:23.4660864Z auto tmp11 = tmp10 + tmp0; 2023-01-11T21:41:23.4661121Z auto tmp13 = tmp11 - tmp12; 2023-01-11T21:41:23.4661268Z auto tmp14 = tmp13 * tmp0; 2023-01-11T21:41:23.4661423Z auto tmp15 = tmp14 / tmp0; 2023-01-11T21:41:23.4661585Z auto tmp16 = tmp15 * tmp15; 2023-01-11T21:41:23.4661747Z auto tmp17 = tmp16 * tmp16; 2023-01-11T21:41:23.4661909Z auto tmp18 = tmp17 * tmp15; 2023-01-11T21:41:23.4662077Z auto tmp19 = tmp18 * tmp18; 2023-01-11T21:41:23.4662249Z auto tmp20 = std::log(tmp19); 2023-01-11T21:41:23.4662617Z auto tmp21 = std::floor(tmp20); 2023-01-11T21:41:23.4662802Z auto tmp22 = std::ceil(tmp21); 2023-01-11T21:41:23.4662981Z auto tmp23 = std::trunc(tmp22); 2023-01-11T21:41:23.4663165Z auto tmp24 = std::lgamma(tmp23); 2023-01-11T21:41:23.4663356Z auto tmp25 = std::fmod(tmp24, tmp12); 2023-01-11T21:41:23.4663521Z auto tmp26 = tmp25 > 0 ? 1 : 0; 2023-01-11T21:41:23.4663685Z auto tmp27 = tmp25 < 0 ? 
1 : 0; 2023-01-11T21:41:23.4663944Z auto tmp28 = tmp26 - tmp27; 2023-01-11T21:41:23.4664093Z auto tmp29 = tmp28 + tmp12; 2023-01-11T21:41:23.4664256Z out_ptr0[i0 + (20*i1)] = tmp29; 2023-01-11T21:41:23.4664368Z } 2023-01-11T21:41:23.4664482Z } 2023-01-11T21:41:23.4664597Z } 2023-01-11T21:41:23.4664708Z } 2023-01-11T21:41:23.4664818Z } 2023-01-11T21:41:23.4664912Z } 2023-01-11T21:41:23.4665049Z ''') 2023-01-11T21:41:23.4665058Z 2023-01-11T21:41:23.4665065Z 2023-01-11T21:41:23.4665227Z async_compile.wait(globals()) 2023-01-11T21:41:23.4665359Z del async_compile 2023-01-11T21:41:23.4665367Z 2023-01-11T21:41:23.4665489Z def call(args): 2023-01-11T21:41:23.4665609Z x1_1, x2_1 = args 2023-01-11T21:41:23.4665730Z args.clear() 2023-01-11T21:41:23.4666077Z buf0 = empty_strided((20, 10), (1, 20), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4666354Z kernel_cpp_0(c_void_p(x1_1.data_ptr()), c_void_p(x2_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.4666471Z del x1_1 2023-01-11T21:41:23.4666581Z del x2_1 2023-01-11T21:41:23.4666707Z return (buf0, ) 2023-01-11T21:41:23.4666715Z 2023-01-11T21:41:23.4666723Z 2023-01-11T21:41:23.4666858Z if __name__ == "__main__": 2023-01-11T21:41:23.4667379Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.4667600Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.4667949Z x1_1 = rand_strided((20, 10), (1, 20), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4668302Z x2_1 = rand_strided((20, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4668495Z print_performance(lambda: call([x1_1, x2_1])) 2023-01-11T21:41:23.4668503Z 2023-01-11T21:41:23.4668510Z 2023-01-11T21:41:23.4668680Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.4668800Z import torch 2023-01-11T21:41:23.4668924Z import random 2023-01-11T21:41:23.4669132Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.4669351Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.4669359Z 2023-01-11T21:41:23.4669480Z aten = torch.ops.aten 2023-01-11T21:41:23.4669720Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.4669884Z async_compile = AsyncCompile() 2023-01-11T21:41:23.4669891Z 2023-01-11T21:41:23.4669899Z 2023-01-11T21:41:23.4670137Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.4670605Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.4670816Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.4671000Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.4671176Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.4671274Z { 2023-01-11T21:41:23.4671448Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.4671554Z { 2023-01-11T21:41:23.4671693Z #pragma omp for 2023-01-11T21:41:23.4671835Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.4671945Z { 2023-01-11T21:41:23.4672185Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.4672404Z auto tmp11 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.4672549Z auto tmp1 = tmp0.abs(); 2023-01-11T21:41:23.4672696Z auto tmp2 = tmp1.sin(); 2023-01-11T21:41:23.4672892Z auto tmp3 = tmp2.neg(); 2023-01-11T21:41:23.4673043Z auto tmp4 = tmp3 * tmp3; 2023-01-11T21:41:23.4673283Z auto tmp5 = decltype(tmp4)(1)/(decltype(tmp4)(1) + tmp4.neg().exp()); 2023-01-11T21:41:23.4673503Z auto tmp6 = at::vec::clamp_min(tmp5, decltype(tmp5)(0)); 2023-01-11T21:41:23.4673635Z auto 
tmp7 = tmp6.cos(); 2023-01-11T21:41:23.4673835Z auto tmp8 = tmp7.exp(); 2023-01-11T21:41:23.4673987Z auto tmp9 = tmp8.sqrt(); 2023-01-11T21:41:23.4674132Z auto tmp10 = tmp9 + tmp0; 2023-01-11T21:41:23.4674367Z auto tmp12 = tmp10 - tmp11; 2023-01-11T21:41:23.4674516Z auto tmp13 = tmp12 * tmp0; 2023-01-11T21:41:23.4674661Z auto tmp14 = tmp13 / tmp0; 2023-01-11T21:41:23.4674794Z auto tmp15 = tmp14 * tmp14; 2023-01-11T21:41:23.4674939Z auto tmp16 = tmp15 * tmp15; 2023-01-11T21:41:23.4675087Z auto tmp17 = tmp16 * tmp14; 2023-01-11T21:41:23.4675243Z auto tmp18 = tmp17 * tmp17; 2023-01-11T21:41:23.4675393Z auto tmp19 = tmp18.log(); 2023-01-11T21:41:23.4675546Z auto tmp20 = tmp19.floor(); 2023-01-11T21:41:23.4675694Z auto tmp21 = tmp20.ceil(); 2023-01-11T21:41:23.4675822Z auto tmp22 = tmp21.trunc(); 2023-01-11T21:41:23.4675980Z auto tmp23 = tmp22.lgamma(); 2023-01-11T21:41:23.4676144Z auto tmp24 = tmp23.fmod(tmp11); 2023-01-11T21:41:23.4676453Z auto tmp25 = decltype(tmp24)::blendv(decltype(tmp24)(0), decltype(tmp24)(1), decltype(tmp24)(0) < tmp24); 2023-01-11T21:41:23.4676760Z auto tmp26 = decltype(tmp24)::blendv(decltype(tmp24)(0), decltype(tmp24)(1), tmp24 < decltype(tmp24)(0)); 2023-01-11T21:41:23.4676994Z auto tmp27 = tmp25 - tmp26; 2023-01-11T21:41:23.4677141Z auto tmp28 = tmp27 + tmp11; 2023-01-11T21:41:23.4677297Z tmp28.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.4677399Z } 2023-01-11T21:41:23.4677574Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.4677717Z for(long i0=64; i0<70; i0+=1) 2023-01-11T21:41:23.4677828Z { 2023-01-11T21:41:23.4677974Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.4678123Z auto tmp12 = in_ptr1[i0]; 2023-01-11T21:41:23.4678278Z auto tmp1 = std::abs(tmp0); 2023-01-11T21:41:23.4678417Z auto tmp2 = std::sin(tmp1); 2023-01-11T21:41:23.4678620Z auto tmp3 = -tmp2; 2023-01-11T21:41:23.4678767Z auto tmp4 = tmp3 * tmp3; 2023-01-11T21:41:23.4679006Z auto tmp5 = std::exp(-tmp4); 2023-01-11T21:41:23.4679149Z auto tmp6 = 1 / (1 + tmp5); 2023-01-11T21:41:23.4679297Z auto tmp7 = tmp6 * (tmp6>0); 2023-01-11T21:41:23.4679447Z auto tmp8 = std::cos(tmp7); 2023-01-11T21:41:23.4679587Z auto tmp9 = std::exp(tmp8); 2023-01-11T21:41:23.4679744Z auto tmp10 = std::sqrt(tmp9); 2023-01-11T21:41:23.4679891Z auto tmp11 = tmp10 + tmp0; 2023-01-11T21:41:23.4680166Z auto tmp13 = tmp11 - tmp12; 2023-01-11T21:41:23.4680312Z auto tmp14 = tmp13 * tmp0; 2023-01-11T21:41:23.4680456Z auto tmp15 = tmp14 / tmp0; 2023-01-11T21:41:23.4680600Z auto tmp16 = tmp15 * tmp15; 2023-01-11T21:41:23.4680730Z auto tmp17 = tmp16 * tmp16; 2023-01-11T21:41:23.4680873Z auto tmp18 = tmp17 * tmp15; 2023-01-11T21:41:23.4681014Z auto tmp19 = tmp18 * tmp18; 2023-01-11T21:41:23.4681171Z auto tmp20 = std::log(tmp19); 2023-01-11T21:41:23.4681336Z auto tmp21 = std::floor(tmp20); 2023-01-11T21:41:23.4681495Z auto tmp22 = std::ceil(tmp21); 2023-01-11T21:41:23.4681657Z auto tmp23 = std::trunc(tmp22); 2023-01-11T21:41:23.4681826Z auto tmp24 = std::lgamma(tmp23); 2023-01-11T21:41:23.4681986Z auto tmp25 = std::fmod(tmp24, tmp12); 2023-01-11T21:41:23.4682137Z auto tmp26 = tmp25 > 0 ? 1 : 0; 2023-01-11T21:41:23.4682337Z auto tmp27 = tmp25 < 0 ? 
1 : 0; 2023-01-11T21:41:23.4682566Z auto tmp28 = tmp26 - tmp27; 2023-01-11T21:41:23.4682714Z auto tmp29 = tmp28 + tmp12; 2023-01-11T21:41:23.4682852Z out_ptr0[i0] = tmp29; 2023-01-11T21:41:23.4682959Z } 2023-01-11T21:41:23.4683051Z } 2023-01-11T21:41:23.4683152Z } 2023-01-11T21:41:23.4683289Z ''') 2023-01-11T21:41:23.4683298Z 2023-01-11T21:41:23.4683305Z 2023-01-11T21:41:23.4683464Z async_compile.wait(globals()) 2023-01-11T21:41:23.4683590Z del async_compile 2023-01-11T21:41:23.4683598Z 2023-01-11T21:41:23.4683722Z def call(args): 2023-01-11T21:41:23.4683841Z x1_1, x2_1 = args 2023-01-11T21:41:23.4683950Z args.clear() 2023-01-11T21:41:23.4684304Z buf0 = empty_strided((10, 7), (7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4684581Z kernel_cpp_0(c_void_p(x1_1.data_ptr()), c_void_p(x2_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.4684693Z del x1_1 2023-01-11T21:41:23.4684807Z del x2_1 2023-01-11T21:41:23.4684930Z return (buf0, ) 2023-01-11T21:41:23.4684938Z 2023-01-11T21:41:23.4684945Z 2023-01-11T21:41:23.4685076Z if __name__ == "__main__": 2023-01-11T21:41:23.4685273Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.4685479Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.4685829Z x1_1 = rand_strided((10, 7), (7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4686176Z x2_1 = rand_strided((10, 7), (7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4686369Z print_performance(lambda: call([x1_1, x2_1])) 2023-01-11T21:41:23.4686377Z 2023-01-11T21:41:23.4686490Z ok (6.559s) 2023-01-11T21:41:23.4687357Z test_abs_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.4687585Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.4688065Z [2023-01-11 21:24:05,590] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 7 2023-01-11T21:41:23.4688554Z [2023-01-11 21:24:07,187] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 7 2023-01-11T21:41:23.4688563Z 2023-01-11T21:41:23.4688713Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.4688829Z import torch 2023-01-11T21:41:23.4688956Z import random 2023-01-11T21:41:23.4689161Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.4689377Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.4689385Z 2023-01-11T21:41:23.4689521Z aten = torch.ops.aten 2023-01-11T21:41:23.4689761Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.4689956Z async_compile = AsyncCompile() 2023-01-11T21:41:23.4689978Z 2023-01-11T21:41:23.4689986Z 2023-01-11T21:41:23.4690214Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.4690588Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.4690799Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.4690971Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.4691079Z { 2023-01-11T21:41:23.4691254Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.4691359Z { 2023-01-11T21:41:23.4691477Z #pragma omp for 2023-01-11T21:41:23.4691618Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.4691725Z { 2023-01-11T21:41:23.4691956Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.4692105Z auto tmp1 = tmp0.abs(); 2023-01-11T21:41:23.4692380Z auto tmp2 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.4692530Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:23.4692661Z auto tmp4 = tmp0 / tmp3; 2023-01-11T21:41:23.4692824Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.4692932Z } 2023-01-11T21:41:23.4693102Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.4693248Z for(long i0=16; i0<17; i0+=1) 2023-01-11T21:41:23.4693356Z { 2023-01-11T21:41:23.4693501Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.4693641Z auto tmp1 = std::abs(tmp0); 2023-01-11T21:41:23.4693815Z auto tmp2 = static_cast(1); 2023-01-11T21:41:23.4693960Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:23.4694105Z auto tmp4 = tmp0 / tmp3; 2023-01-11T21:41:23.4694247Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:23.4694358Z } 2023-01-11T21:41:23.4694467Z } 2023-01-11T21:41:23.4694556Z } 2023-01-11T21:41:23.4694692Z ''') 2023-01-11T21:41:23.4694705Z 2023-01-11T21:41:23.4694716Z 2023-01-11T21:41:23.4694872Z async_compile.wait(globals()) 2023-01-11T21:41:23.4695001Z del async_compile 2023-01-11T21:41:23.4695009Z 2023-01-11T21:41:23.4695131Z def call(args): 2023-01-11T21:41:23.4695250Z arg0_1, = args 2023-01-11T21:41:23.4695372Z args.clear() 2023-01-11T21:41:23.4695699Z buf0 = empty_strided((17, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4695932Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.4696054Z del arg0_1 2023-01-11T21:41:23.4696179Z return (buf0, ) 2023-01-11T21:41:23.4696187Z 2023-01-11T21:41:23.4696194Z 2023-01-11T21:41:23.4696330Z if __name__ == "__main__": 2023-01-11T21:41:23.4696531Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.4696754Z from 
torch._inductor.utils import print_performance 2023-01-11T21:41:23.4697117Z arg0_1 = rand_strided((17, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4697298Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.4697325Z 2023-01-11T21:41:23.4697424Z ok (1.619s) 2023-01-11T21:41:23.4698327Z test_adaptive_avg_pool2d1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.4698551Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.4699025Z [2023-01-11 21:24:07,219] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 8 2023-01-11T21:41:23.4699467Z [2023-01-11 21:24:07,228] torch._inductor.ir: [WARNING] Using FallbackKernel: aten._adaptive_avg_pool2d 2023-01-11T21:41:23.4699475Z 2023-01-11T21:41:23.4699642Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.4699820Z import torch 2023-01-11T21:41:23.4699945Z import random 2023-01-11T21:41:23.4700137Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.4700355Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.4700363Z 2023-01-11T21:41:23.4700501Z aten = torch.ops.aten 2023-01-11T21:41:23.4700743Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.4700908Z async_compile = AsyncCompile() 2023-01-11T21:41:23.4700916Z 2023-01-11T21:41:23.4700923Z 2023-01-11T21:41:23.4701166Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.4701535Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.4701746Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.4701920Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.4702078Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.4702187Z { 2023-01-11T21:41:23.4702554Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.4702665Z { 2023-01-11T21:41:23.4702801Z #pragma omp for 2023-01-11T21:41:23.4702945Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.4703038Z { 2023-01-11T21:41:23.4703177Z #pragma GCC ivdep 2023-01-11T21:41:23.4703326Z for(long i1=0; i1<6; i1+=1) 2023-01-11T21:41:23.4703438Z { 2023-01-11T21:41:23.4703582Z #pragma GCC ivdep 2023-01-11T21:41:23.4703736Z for(long i2=0; i2<6; i2+=1) 2023-01-11T21:41:23.4703845Z { 2023-01-11T21:41:23.4703947Z { 2023-01-11T21:41:23.4704066Z { 2023-01-11T21:41:23.4704268Z auto tmp0 = static_cast(((8*i1) / 3)); 2023-01-11T21:41:23.4704480Z auto tmp1 = static_cast(((21 + (16*i1)) / 6)); 2023-01-11T21:41:23.4704647Z auto tmp2 = tmp0 < tmp1; 2023-01-11T21:41:23.4704860Z auto tmp3 = static_cast(((8*i2) / 3)); 2023-01-11T21:41:23.4705067Z auto tmp4 = static_cast(((21 + (16*i2)) / 6)); 2023-01-11T21:41:23.4705232Z auto tmp5 = tmp3 < tmp4; 2023-01-11T21:41:23.4705381Z auto tmp6 = tmp2 & tmp5; 2023-01-11T21:41:23.4705539Z float tmp7 = 0.0; 2023-01-11T21:41:23.4705671Z if(tmp6) 2023-01-11T21:41:23.4705801Z { 2023-01-11T21:41:23.4706014Z auto tmp8 = in_ptr0[(16*(((8*i1) / 3))) + (256*i0) + (((8*i2) / 3))]; 2023-01-11T21:41:23.4706164Z tmp7 = tmp8; 2023-01-11T21:41:23.4706287Z } 2023-01-11T21:41:23.4706478Z auto tmp9 = static_cast(1 + (((8*i2) / 3))); 2023-01-11T21:41:23.4706645Z 
auto tmp10 = tmp9 < tmp4; 2023-01-11T21:41:23.4706823Z auto tmp11 = tmp2 & tmp10; 2023-01-11T21:41:23.4706977Z float tmp12 = 0.0; 2023-01-11T21:41:23.4707120Z if(tmp11) 2023-01-11T21:41:23.4707245Z { 2023-01-11T21:41:23.4707461Z auto tmp13 = in_ptr0[1 + (16*(((8*i1) / 3))) + (256*i0) + (((8*i2) / 3))]; 2023-01-11T21:41:23.4707613Z tmp12 = tmp13; 2023-01-11T21:41:23.4707718Z } 2023-01-11T21:41:23.4707886Z auto tmp14 = tmp12 + tmp7; 2023-01-11T21:41:23.4708093Z auto tmp15 = static_cast(2 + (((8*i2) / 3))); 2023-01-11T21:41:23.4708262Z auto tmp16 = tmp15 < tmp4; 2023-01-11T21:41:23.4708428Z auto tmp17 = tmp2 & tmp16; 2023-01-11T21:41:23.4708584Z float tmp18 = 0.0; 2023-01-11T21:41:23.4708721Z if(tmp17) 2023-01-11T21:41:23.4708894Z { 2023-01-11T21:41:23.4709113Z auto tmp19 = in_ptr0[2 + (16*(((8*i1) / 3))) + (256*i0) + (((8*i2) / 3))]; 2023-01-11T21:41:23.4709264Z tmp18 = tmp19; 2023-01-11T21:41:23.4709385Z } 2023-01-11T21:41:23.4822785Z auto tmp20 = tmp18 + tmp14; 2023-01-11T21:41:23.4823113Z auto tmp21 = static_cast(3 + (((8*i2) / 3))); 2023-01-11T21:41:23.4823291Z auto tmp22 = tmp21 < tmp4; 2023-01-11T21:41:23.4823460Z auto tmp23 = tmp2 & tmp22; 2023-01-11T21:41:23.4823615Z float tmp24 = 0.0; 2023-01-11T21:41:23.4823733Z if(tmp23) 2023-01-11T21:41:23.4823852Z { 2023-01-11T21:41:23.4824071Z auto tmp25 = in_ptr0[3 + (16*(((8*i1) / 3))) + (256*i0) + (((8*i2) / 3))]; 2023-01-11T21:41:23.4824401Z tmp24 = tmp25; 2023-01-11T21:41:23.4824524Z } 2023-01-11T21:41:23.4824696Z auto tmp26 = tmp24 + tmp20; 2023-01-11T21:41:23.4824902Z auto tmp27 = static_cast(1 + (((8*i1) / 3))); 2023-01-11T21:41:23.4825056Z auto tmp28 = tmp27 < tmp1; 2023-01-11T21:41:23.4825220Z auto tmp29 = tmp28 & tmp5; 2023-01-11T21:41:23.4825373Z float tmp30 = 0.0; 2023-01-11T21:41:23.4825508Z if(tmp29) 2023-01-11T21:41:23.4825630Z { 2023-01-11T21:41:23.4825847Z auto tmp31 = in_ptr0[16 + (16*(((8*i1) / 3))) + (256*i0) + (((8*i2) / 3))]; 2023-01-11T21:41:23.4825997Z tmp30 = tmp31; 2023-01-11T21:41:23.4826118Z } 2023-01-11T21:41:23.4826280Z auto tmp32 = tmp30 + tmp26; 2023-01-11T21:41:23.4826448Z auto tmp33 = tmp28 & tmp10; 2023-01-11T21:41:23.4826602Z float tmp34 = 0.0; 2023-01-11T21:41:23.4826735Z if(tmp33) 2023-01-11T21:41:23.4826854Z { 2023-01-11T21:41:23.4827068Z auto tmp35 = in_ptr0[17 + (16*(((8*i1) / 3))) + (256*i0) + (((8*i2) / 3))]; 2023-01-11T21:41:23.4827216Z tmp34 = tmp35; 2023-01-11T21:41:23.4827325Z } 2023-01-11T21:41:23.4827491Z auto tmp36 = tmp34 + tmp32; 2023-01-11T21:41:23.4827655Z auto tmp37 = tmp28 & tmp16; 2023-01-11T21:41:23.4827806Z float tmp38 = 0.0; 2023-01-11T21:41:23.4827938Z if(tmp37) 2023-01-11T21:41:23.4828059Z { 2023-01-11T21:41:23.4828279Z auto tmp39 = in_ptr0[18 + (16*(((8*i1) / 3))) + (256*i0) + (((8*i2) / 3))]; 2023-01-11T21:41:23.4828417Z tmp38 = tmp39; 2023-01-11T21:41:23.4828535Z } 2023-01-11T21:41:23.4828702Z auto tmp40 = tmp38 + tmp36; 2023-01-11T21:41:23.4828869Z auto tmp41 = tmp28 & tmp22; 2023-01-11T21:41:23.4829024Z float tmp42 = 0.0; 2023-01-11T21:41:23.4829158Z if(tmp41) 2023-01-11T21:41:23.4829278Z { 2023-01-11T21:41:23.4829490Z auto tmp43 = in_ptr0[19 + (16*(((8*i1) / 3))) + (256*i0) + (((8*i2) / 3))]; 2023-01-11T21:41:23.4829625Z tmp42 = tmp43; 2023-01-11T21:41:23.4829744Z } 2023-01-11T21:41:23.4829912Z auto tmp44 = tmp42 + tmp40; 2023-01-11T21:41:23.4830118Z auto tmp45 = static_cast(2 + (((8*i1) / 3))); 2023-01-11T21:41:23.4830361Z auto tmp46 = tmp45 < tmp1; 2023-01-11T21:41:23.4830526Z auto tmp47 = tmp46 & tmp5; 2023-01-11T21:41:23.4830677Z float tmp48 = 0.0; 
2023-01-11T21:41:23.4830797Z if(tmp47) 2023-01-11T21:41:23.4830915Z { 2023-01-11T21:41:23.4831132Z auto tmp49 = in_ptr0[32 + (16*(((8*i1) / 3))) + (256*i0) + (((8*i2) / 3))]; 2023-01-11T21:41:23.4831393Z tmp48 = tmp49; 2023-01-11T21:41:23.4831514Z } 2023-01-11T21:41:23.4831683Z auto tmp50 = tmp48 + tmp44; 2023-01-11T21:41:23.4831850Z auto tmp51 = tmp46 & tmp10; 2023-01-11T21:41:23.4831987Z float tmp52 = 0.0; 2023-01-11T21:41:23.4832121Z if(tmp51) 2023-01-11T21:41:23.4832295Z { 2023-01-11T21:41:23.4832509Z auto tmp53 = in_ptr0[33 + (16*(((8*i1) / 3))) + (256*i0) + (((8*i2) / 3))]; 2023-01-11T21:41:23.4832658Z tmp52 = tmp53; 2023-01-11T21:41:23.4832783Z } 2023-01-11T21:41:23.4832948Z auto tmp54 = tmp52 + tmp50; 2023-01-11T21:41:23.4833116Z auto tmp55 = tmp46 & tmp16; 2023-01-11T21:41:23.4833252Z float tmp56 = 0.0; 2023-01-11T21:41:23.4833384Z if(tmp55) 2023-01-11T21:41:23.4833500Z { 2023-01-11T21:41:23.4833705Z auto tmp57 = in_ptr0[34 + (16*(((8*i1) / 3))) + (256*i0) + (((8*i2) / 3))]; 2023-01-11T21:41:23.4833915Z tmp56 = tmp57; 2023-01-11T21:41:23.4834029Z } 2023-01-11T21:41:23.4834191Z auto tmp58 = tmp56 + tmp54; 2023-01-11T21:41:23.4834352Z auto tmp59 = tmp46 & tmp22; 2023-01-11T21:41:23.4834505Z float tmp60 = 0.0; 2023-01-11T21:41:23.4834637Z if(tmp59) 2023-01-11T21:41:23.4834757Z { 2023-01-11T21:41:23.4834971Z auto tmp61 = in_ptr0[35 + (16*(((8*i1) / 3))) + (256*i0) + (((8*i2) / 3))]; 2023-01-11T21:41:23.4835118Z tmp60 = tmp61; 2023-01-11T21:41:23.4835241Z } 2023-01-11T21:41:23.4835391Z auto tmp62 = tmp60 + tmp58; 2023-01-11T21:41:23.4835597Z auto tmp63 = static_cast(3 + (((8*i1) / 3))); 2023-01-11T21:41:23.4835763Z auto tmp64 = tmp63 < tmp1; 2023-01-11T21:41:23.4835929Z auto tmp65 = tmp64 & tmp5; 2023-01-11T21:41:23.4836081Z float tmp66 = 0.0; 2023-01-11T21:41:23.4836221Z if(tmp65) 2023-01-11T21:41:23.4836341Z { 2023-01-11T21:41:23.4836551Z auto tmp67 = in_ptr0[48 + (16*(((8*i1) / 3))) + (256*i0) + (((8*i2) / 3))]; 2023-01-11T21:41:23.4836683Z tmp66 = tmp67; 2023-01-11T21:41:23.4836803Z } 2023-01-11T21:41:23.4836970Z auto tmp68 = tmp66 + tmp62; 2023-01-11T21:41:23.4837132Z auto tmp69 = tmp64 & tmp10; 2023-01-11T21:41:23.4837278Z float tmp70 = 0.0; 2023-01-11T21:41:23.4837403Z if(tmp69) 2023-01-11T21:41:23.4837516Z { 2023-01-11T21:41:23.4837716Z auto tmp71 = in_ptr0[49 + (16*(((8*i1) / 3))) + (256*i0) + (((8*i2) / 3))]; 2023-01-11T21:41:23.4837860Z tmp70 = tmp71; 2023-01-11T21:41:23.4837979Z } 2023-01-11T21:41:23.4838206Z auto tmp72 = tmp70 + tmp68; 2023-01-11T21:41:23.4838373Z auto tmp73 = tmp64 & tmp16; 2023-01-11T21:41:23.4838527Z float tmp74 = 0.0; 2023-01-11T21:41:23.4838657Z if(tmp73) 2023-01-11T21:41:23.4838768Z { 2023-01-11T21:41:23.4838979Z auto tmp75 = in_ptr0[50 + (16*(((8*i1) / 3))) + (256*i0) + (((8*i2) / 3))]; 2023-01-11T21:41:23.4839126Z tmp74 = tmp75; 2023-01-11T21:41:23.4839243Z } 2023-01-11T21:41:23.4839409Z auto tmp76 = tmp74 + tmp72; 2023-01-11T21:41:23.4839573Z auto tmp77 = tmp64 & tmp22; 2023-01-11T21:41:23.4839723Z float tmp78 = 0.0; 2023-01-11T21:41:23.4839856Z if(tmp77) 2023-01-11T21:41:23.4839965Z { 2023-01-11T21:41:23.4840229Z auto tmp79 = in_ptr0[51 + (16*(((8*i1) / 3))) + (256*i0) + (((8*i2) / 3))]; 2023-01-11T21:41:23.4840379Z tmp78 = tmp79; 2023-01-11T21:41:23.4840498Z } 2023-01-11T21:41:23.4840664Z auto tmp80 = tmp78 + tmp76; 2023-01-11T21:41:23.4840816Z float tmp81 = 0.0; 2023-01-11T21:41:23.4840945Z if(tmp6) 2023-01-11T21:41:23.4841051Z { 2023-01-11T21:41:23.4841242Z auto tmp82 = static_cast(1); 2023-01-11T21:41:23.4841392Z tmp81 = tmp82; 
2023-01-11T21:41:23.4841510Z } 2023-01-11T21:41:23.4841660Z float tmp83 = 0.0; 2023-01-11T21:41:23.4841794Z if(tmp11) 2023-01-11T21:41:23.4841913Z { 2023-01-11T21:41:23.4842102Z auto tmp84 = static_cast(1); 2023-01-11T21:41:23.4842251Z tmp83 = tmp84; 2023-01-11T21:41:23.4842372Z } 2023-01-11T21:41:23.4842539Z auto tmp85 = tmp83 + tmp81; 2023-01-11T21:41:23.4842690Z float tmp86 = 0.0; 2023-01-11T21:41:23.4842822Z if(tmp17) 2023-01-11T21:41:23.4842945Z { 2023-01-11T21:41:23.4843122Z auto tmp87 = static_cast(1); 2023-01-11T21:41:23.4843269Z tmp86 = tmp87; 2023-01-11T21:41:23.4843392Z } 2023-01-11T21:41:23.4843560Z auto tmp88 = tmp86 + tmp85; 2023-01-11T21:41:23.4843711Z float tmp89 = 0.0; 2023-01-11T21:41:23.4843842Z if(tmp23) 2023-01-11T21:41:23.4843965Z { 2023-01-11T21:41:23.4844148Z auto tmp90 = static_cast(1); 2023-01-11T21:41:23.4844295Z tmp89 = tmp90; 2023-01-11T21:41:23.4844414Z } 2023-01-11T21:41:23.4844579Z auto tmp91 = tmp89 + tmp88; 2023-01-11T21:41:23.4844732Z float tmp92 = 0.0; 2023-01-11T21:41:23.4844865Z if(tmp29) 2023-01-11T21:41:23.4844982Z { 2023-01-11T21:41:23.4845160Z auto tmp93 = static_cast(1); 2023-01-11T21:41:23.4845309Z tmp92 = tmp93; 2023-01-11T21:41:23.4845426Z } 2023-01-11T21:41:23.4845593Z auto tmp94 = tmp92 + tmp91; 2023-01-11T21:41:23.4845744Z float tmp95 = 0.0; 2023-01-11T21:41:23.4845876Z if(tmp33) 2023-01-11T21:41:23.4845994Z { 2023-01-11T21:41:23.4846237Z auto tmp96 = static_cast(1); 2023-01-11T21:41:23.4846371Z tmp95 = tmp96; 2023-01-11T21:41:23.4846488Z } 2023-01-11T21:41:23.4846656Z auto tmp97 = tmp95 + tmp94; 2023-01-11T21:41:23.4846811Z float tmp98 = 0.0; 2023-01-11T21:41:23.4846946Z if(tmp37) 2023-01-11T21:41:23.4847065Z { 2023-01-11T21:41:23.4847253Z auto tmp99 = static_cast(1); 2023-01-11T21:41:23.4847383Z tmp98 = tmp99; 2023-01-11T21:41:23.4847502Z } 2023-01-11T21:41:23.4847672Z auto tmp100 = tmp98 + tmp97; 2023-01-11T21:41:23.4847828Z float tmp101 = 0.0; 2023-01-11T21:41:23.4847959Z if(tmp41) 2023-01-11T21:41:23.4848080Z { 2023-01-11T21:41:23.4848319Z auto tmp102 = static_cast(1); 2023-01-11T21:41:23.4848454Z tmp101 = tmp102; 2023-01-11T21:41:23.4848573Z } 2023-01-11T21:41:23.4848747Z auto tmp103 = tmp101 + tmp100; 2023-01-11T21:41:23.4848904Z float tmp104 = 0.0; 2023-01-11T21:41:23.4849035Z if(tmp47) 2023-01-11T21:41:23.4849158Z { 2023-01-11T21:41:23.4849350Z auto tmp105 = static_cast(1); 2023-01-11T21:41:23.4849484Z tmp104 = tmp105; 2023-01-11T21:41:23.4849604Z } 2023-01-11T21:41:23.4849775Z auto tmp106 = tmp104 + tmp103; 2023-01-11T21:41:23.4849927Z float tmp107 = 0.0; 2023-01-11T21:41:23.4850056Z if(tmp51) 2023-01-11T21:41:23.4850182Z { 2023-01-11T21:41:23.4850375Z auto tmp108 = static_cast(1); 2023-01-11T21:41:23.4850510Z tmp107 = tmp108; 2023-01-11T21:41:23.4850629Z } 2023-01-11T21:41:23.4850795Z auto tmp109 = tmp107 + tmp106; 2023-01-11T21:41:23.4850946Z float tmp110 = 0.0; 2023-01-11T21:41:23.4851072Z if(tmp55) 2023-01-11T21:41:23.4851187Z { 2023-01-11T21:41:23.4851375Z auto tmp111 = static_cast(1); 2023-01-11T21:41:23.4851511Z tmp110 = tmp111; 2023-01-11T21:41:23.4851625Z } 2023-01-11T21:41:23.4851792Z auto tmp112 = tmp110 + tmp109; 2023-01-11T21:41:23.4851944Z float tmp113 = 0.0; 2023-01-11T21:41:23.4852073Z if(tmp59) 2023-01-11T21:41:23.4852199Z { 2023-01-11T21:41:23.4852386Z auto tmp114 = static_cast(1); 2023-01-11T21:41:23.4852526Z tmp113 = tmp114; 2023-01-11T21:41:23.4852634Z } 2023-01-11T21:41:23.4852800Z auto tmp115 = tmp113 + tmp112; 2023-01-11T21:41:23.4852949Z float tmp116 = 0.0; 2023-01-11T21:41:23.4853075Z if(tmp65) 
2023-01-11T21:41:23.4853192Z { 2023-01-11T21:41:23.4853380Z auto tmp117 = static_cast(1); 2023-01-11T21:41:23.4853522Z tmp116 = tmp117; 2023-01-11T21:41:23.4853628Z } 2023-01-11T21:41:23.4853794Z auto tmp118 = tmp116 + tmp115; 2023-01-11T21:41:23.4853939Z float tmp119 = 0.0; 2023-01-11T21:41:23.4854114Z if(tmp69) 2023-01-11T21:41:23.4854229Z { 2023-01-11T21:41:23.4854417Z auto tmp120 = static_cast(1); 2023-01-11T21:41:23.4854564Z tmp119 = tmp120; 2023-01-11T21:41:23.4854673Z } 2023-01-11T21:41:23.4854839Z auto tmp121 = tmp119 + tmp118; 2023-01-11T21:41:23.4854992Z float tmp122 = 0.0; 2023-01-11T21:41:23.4855114Z if(tmp73) 2023-01-11T21:41:23.4855224Z { 2023-01-11T21:41:23.4855410Z auto tmp123 = static_cast(1); 2023-01-11T21:41:23.4855557Z tmp122 = tmp123; 2023-01-11T21:41:23.4855667Z } 2023-01-11T21:41:23.4855833Z auto tmp124 = tmp122 + tmp121; 2023-01-11T21:41:23.4855978Z float tmp125 = 0.0; 2023-01-11T21:41:23.4856171Z if(tmp77) 2023-01-11T21:41:23.4856286Z { 2023-01-11T21:41:23.4856476Z auto tmp126 = static_cast(1); 2023-01-11T21:41:23.4856616Z tmp125 = tmp126; 2023-01-11T21:41:23.4856720Z } 2023-01-11T21:41:23.4856883Z auto tmp127 = tmp125 + tmp124; 2023-01-11T21:41:23.4857048Z auto tmp128 = tmp80 / tmp127; 2023-01-11T21:41:23.4857223Z out_ptr0[i2 + (6*i1) + (36*i0)] = tmp128; 2023-01-11T21:41:23.4857333Z } 2023-01-11T21:41:23.4857438Z } 2023-01-11T21:41:23.4857547Z } 2023-01-11T21:41:23.4857644Z } 2023-01-11T21:41:23.4857747Z } 2023-01-11T21:41:23.4857879Z #pragma omp for 2023-01-11T21:41:23.4858018Z for(long i0=0; i0<256; i0+=1) 2023-01-11T21:41:23.4858118Z { 2023-01-11T21:41:23.4858374Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.4858601Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.4858734Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.4858888Z tmp2.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.4858992Z } 2023-01-11T21:41:23.4859158Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.4859305Z for(long i0=2048; i0<2048; i0+=1) 2023-01-11T21:41:23.4859414Z { 2023-01-11T21:41:23.4859558Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.4859715Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.4859860Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.4859999Z out_ptr1[i0] = tmp2; 2023-01-11T21:41:23.4860107Z } 2023-01-11T21:41:23.4860214Z } 2023-01-11T21:41:23.4860316Z } 2023-01-11T21:41:23.4860517Z ''') 2023-01-11T21:41:23.4860528Z 2023-01-11T21:41:23.4860540Z 2023-01-11T21:41:23.4860688Z async_compile.wait(globals()) 2023-01-11T21:41:23.4860814Z del async_compile 2023-01-11T21:41:23.4860822Z 2023-01-11T21:41:23.4860948Z def call(args): 2023-01-11T21:41:23.4861070Z arg0_1, = args 2023-01-11T21:41:23.4861192Z args.clear() 2023-01-11T21:41:23.4861593Z buf0 = empty_strided((2, 4, 6, 6), (144, 36, 6, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4861995Z buf1 = empty_strided((2, 4, 16, 16), (1024, 256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4862286Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.4862509Z del arg0_1 2023-01-11T21:41:23.4862701Z buf2 = aten._adaptive_avg_pool2d(buf1, [2, 5]) 2023-01-11T21:41:23.4862806Z del buf1 2023-01-11T21:41:23.4862927Z buf3 = buf2 2023-01-11T21:41:23.4863107Z assert_size_stride(buf3, (2, 4, 2, 5), (40, 10, 5, 1)) 2023-01-11T21:41:23.4863212Z del buf2 2023-01-11T21:41:23.4863437Z return (buf0, buf3, ) 2023-01-11T21:41:23.4863446Z 2023-01-11T21:41:23.4863454Z 2023-01-11T21:41:23.4863587Z if __name__ == 
"__main__": 2023-01-11T21:41:23.4863793Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.4864013Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.4864410Z arg0_1 = rand_strided((2, 4, 16, 16), (1024, 256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4864600Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.4865098Z [2023-01-11 21:24:09,417] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 8 2023-01-11T21:41:23.4865901Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.4866191Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.4866667Z [2023-01-11 21:24:09,448] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 9 2023-01-11T21:41:23.4866676Z 2023-01-11T21:41:23.4866683Z 2023-01-11T21:41:23.4866850Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.4866973Z import torch 2023-01-11T21:41:23.4867095Z import random 2023-01-11T21:41:23.4867286Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.4867499Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.4867506Z 2023-01-11T21:41:23.4867644Z aten = torch.ops.aten 2023-01-11T21:41:23.4867883Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.4868042Z async_compile = AsyncCompile() 2023-01-11T21:41:23.4868050Z 2023-01-11T21:41:23.4868057Z 2023-01-11T21:41:23.4868300Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.4868678Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.4868890Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.4869049Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.4869217Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.4869318Z { 2023-01-11T21:41:23.4869490Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.4869597Z { 2023-01-11T21:41:23.4869729Z #pragma omp for 2023-01-11T21:41:23.4869868Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.4869959Z { 2023-01-11T21:41:23.4870094Z #pragma GCC ivdep 2023-01-11T21:41:23.4870236Z for(long i1=0; i1<6; i1+=1) 2023-01-11T21:41:23.4870349Z { 2023-01-11T21:41:23.4870487Z #pragma GCC ivdep 2023-01-11T21:41:23.4870631Z for(long i2=0; i2<6; i2+=1) 2023-01-11T21:41:23.4870738Z { 2023-01-11T21:41:23.4870838Z { 2023-01-11T21:41:23.4870959Z { 2023-01-11T21:41:23.4871150Z auto tmp0 = static_cast((i1 / 2)); 2023-01-11T21:41:23.4871353Z auto tmp1 = static_cast(((8 + (3*i1)) / 6)); 2023-01-11T21:41:23.4871513Z auto tmp2 = tmp0 < tmp1; 2023-01-11T21:41:23.4871698Z auto tmp3 = static_cast((i2 / 2)); 2023-01-11T21:41:23.4871903Z auto tmp4 = static_cast(((8 + (3*i2)) / 6)); 2023-01-11T21:41:23.4872055Z auto tmp5 = tmp3 < tmp4; 2023-01-11T21:41:23.4872208Z auto tmp6 = tmp2 & tmp5; 2023-01-11T21:41:23.4872478Z float tmp7 = 0.0; 2023-01-11T21:41:23.4872606Z if(tmp6) 2023-01-11T21:41:23.4872728Z { 2023-01-11T21:41:23.4900041Z auto tmp8 = in_ptr0[(3*(i1 / 2)) + (9*i0) + (i2 / 2)]; 2023-01-11T21:41:23.4901597Z tmp7 = tmp8; 2023-01-11T21:41:23.4901911Z } 2023-01-11T21:41:23.4902956Z auto tmp9 = static_cast(1 + (i2 / 2)); 
2023-01-11T21:41:23.4904000Z auto tmp10 = tmp9 < tmp4; 2023-01-11T21:41:23.4904230Z auto tmp11 = tmp2 & tmp10; 2023-01-11T21:41:23.4904357Z float tmp12 = 0.0; 2023-01-11T21:41:23.4904467Z if(tmp11) 2023-01-11T21:41:23.4904569Z { 2023-01-11T21:41:23.4904965Z auto tmp13 = in_ptr0[1 + (3*(i1 / 2)) + (9*i0) + (i2 / 2)]; 2023-01-11T21:41:23.4905181Z tmp12 = tmp13; 2023-01-11T21:41:23.4905382Z } 2023-01-11T21:41:23.4905529Z auto tmp14 = tmp12 + tmp7; 2023-01-11T21:41:23.4906031Z auto tmp15 = static_cast(1 + (i1 / 2)); 2023-01-11T21:41:23.4906300Z auto tmp16 = tmp15 < tmp1; 2023-01-11T21:41:23.4906447Z auto tmp17 = tmp16 & tmp5; 2023-01-11T21:41:23.4906575Z float tmp18 = 0.0; 2023-01-11T21:41:23.4907029Z if(tmp17) 2023-01-11T21:41:23.4907433Z { 2023-01-11T21:41:23.4908005Z auto tmp19 = in_ptr0[3 + (3*(i1 / 2)) + (9*i0) + (i2 / 2)]; 2023-01-11T21:41:23.4908572Z tmp18 = tmp19; 2023-01-11T21:41:23.4909144Z } 2023-01-11T21:41:23.4909670Z auto tmp20 = tmp18 + tmp14; 2023-01-11T21:41:23.4910255Z auto tmp21 = tmp16 & tmp10; 2023-01-11T21:41:23.4910372Z float tmp22 = 0.0; 2023-01-11T21:41:23.4910474Z if(tmp21) 2023-01-11T21:41:23.4910575Z { 2023-01-11T21:41:23.4910839Z auto tmp23 = in_ptr0[4 + (3*(i1 / 2)) + (9*i0) + (i2 / 2)]; 2023-01-11T21:41:23.4910965Z tmp22 = tmp23; 2023-01-11T21:41:23.4911063Z } 2023-01-11T21:41:23.4911200Z auto tmp24 = tmp22 + tmp20; 2023-01-11T21:41:23.4911325Z float tmp25 = 0.0; 2023-01-11T21:41:23.4911416Z if(tmp6) 2023-01-11T21:41:23.4911513Z { 2023-01-11T21:41:23.4911670Z auto tmp26 = static_cast(1); 2023-01-11T21:41:23.4911790Z tmp25 = tmp26; 2023-01-11T21:41:23.4911886Z } 2023-01-11T21:41:23.4912007Z float tmp27 = 0.0; 2023-01-11T21:41:23.4912115Z if(tmp11) 2023-01-11T21:41:23.4912197Z { 2023-01-11T21:41:23.4912358Z auto tmp28 = static_cast(1); 2023-01-11T21:41:23.4912480Z tmp27 = tmp28; 2023-01-11T21:41:23.4912577Z } 2023-01-11T21:41:23.4912714Z auto tmp29 = tmp27 + tmp25; 2023-01-11T21:41:23.4912842Z float tmp30 = 0.0; 2023-01-11T21:41:23.4912949Z if(tmp17) 2023-01-11T21:41:23.4913028Z { 2023-01-11T21:41:23.4913183Z auto tmp31 = static_cast(1); 2023-01-11T21:41:23.4913299Z tmp30 = tmp31; 2023-01-11T21:41:23.4913394Z } 2023-01-11T21:41:23.4913530Z auto tmp32 = tmp30 + tmp29; 2023-01-11T21:41:23.4913653Z float tmp33 = 0.0; 2023-01-11T21:41:23.4913809Z if(tmp21) 2023-01-11T21:41:23.4913889Z { 2023-01-11T21:41:23.4914127Z auto tmp34 = static_cast(1); 2023-01-11T21:41:23.4914247Z tmp33 = tmp34; 2023-01-11T21:41:23.4914345Z } 2023-01-11T21:41:23.4914481Z auto tmp35 = tmp33 + tmp32; 2023-01-11T21:41:23.4914615Z auto tmp36 = tmp24 / tmp35; 2023-01-11T21:41:23.4914762Z out_ptr0[i2 + (6*i1) + (36*i0)] = tmp36; 2023-01-11T21:41:23.4914843Z } 2023-01-11T21:41:23.4914941Z } 2023-01-11T21:41:23.4915033Z } 2023-01-11T21:41:23.4915128Z } 2023-01-11T21:41:23.4915214Z } 2023-01-11T21:41:23.4915324Z #pragma omp for 2023-01-11T21:41:23.4915443Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.4915517Z { 2023-01-11T21:41:23.4915631Z #pragma GCC ivdep 2023-01-11T21:41:23.4915750Z for(long i1=0; i1<2; i1+=1) 2023-01-11T21:41:23.4915847Z { 2023-01-11T21:41:23.4916003Z #pragma GCC ivdep 2023-01-11T21:41:23.4916129Z for(long i2=0; i2<5; i2+=1) 2023-01-11T21:41:23.4916218Z { 2023-01-11T21:41:23.4916296Z { 2023-01-11T21:41:23.4916391Z { 2023-01-11T21:41:23.4916554Z auto tmp0 = static_cast(((3*i1) / 2)); 2023-01-11T21:41:23.4916722Z auto tmp1 = static_cast(2 + (((3*i1) / 2))); 2023-01-11T21:41:23.4916859Z auto tmp2 = tmp0 < tmp1; 2023-01-11T21:41:23.4917032Z auto tmp3 = static_cast(((3*i2) / 5)); 
2023-01-11T21:41:23.4917198Z auto tmp4 = static_cast(((7 + (3*i2)) / 5)); 2023-01-11T21:41:23.4917312Z auto tmp5 = tmp3 < tmp4; 2023-01-11T21:41:23.4917440Z auto tmp6 = tmp2 & tmp5; 2023-01-11T21:41:23.4917571Z float tmp7 = 0.0; 2023-01-11T21:41:23.4917682Z if(tmp6) 2023-01-11T21:41:23.4917780Z { 2023-01-11T21:41:23.4917949Z auto tmp8 = in_ptr0[(3*(((3*i1) / 2))) + (9*i0) + (((3*i2) / 5))]; 2023-01-11T21:41:23.4918108Z auto tmp9 = static_cast(1); 2023-01-11T21:41:23.4918246Z auto tmp10 = tmp8 + tmp9; 2023-01-11T21:41:23.4918354Z tmp7 = tmp10; 2023-01-11T21:41:23.4918456Z } 2023-01-11T21:41:23.4918624Z auto tmp11 = static_cast(1 + (((3*i2) / 5))); 2023-01-11T21:41:23.4918762Z auto tmp12 = tmp11 < tmp4; 2023-01-11T21:41:23.4918896Z auto tmp13 = tmp2 & tmp12; 2023-01-11T21:41:23.4919022Z float tmp14 = 0.0; 2023-01-11T21:41:23.4919131Z if(tmp13) 2023-01-11T21:41:23.4919222Z { 2023-01-11T21:41:23.4919397Z auto tmp15 = in_ptr0[1 + (3*(((3*i1) / 2))) + (9*i0) + (((3*i2) / 5))]; 2023-01-11T21:41:23.4919554Z auto tmp16 = static_cast(1); 2023-01-11T21:41:23.4919701Z auto tmp17 = tmp15 + tmp16; 2023-01-11T21:41:23.4919824Z tmp14 = tmp17; 2023-01-11T21:41:23.4919922Z } 2023-01-11T21:41:23.4920059Z auto tmp18 = tmp14 + tmp7; 2023-01-11T21:41:23.4920207Z auto tmp19 = static_cast(1 + (((3*i1) / 2))); 2023-01-11T21:41:23.4920342Z auto tmp20 = tmp19 < tmp1; 2023-01-11T21:41:23.4920480Z auto tmp21 = tmp20 & tmp5; 2023-01-11T21:41:23.4920609Z float tmp22 = 0.0; 2023-01-11T21:41:23.4920725Z if(tmp21) 2023-01-11T21:41:23.4920894Z { 2023-01-11T21:41:23.4921074Z auto tmp23 = in_ptr0[3 + (3*(((3*i1) / 2))) + (9*i0) + (((3*i2) / 5))]; 2023-01-11T21:41:23.4921237Z auto tmp24 = static_cast(1); 2023-01-11T21:41:23.4921364Z auto tmp25 = tmp23 + tmp24; 2023-01-11T21:41:23.4921491Z tmp22 = tmp25; 2023-01-11T21:41:23.4921598Z } 2023-01-11T21:41:23.4921733Z auto tmp26 = tmp22 + tmp18; 2023-01-11T21:41:23.4921869Z auto tmp27 = tmp20 & tmp12; 2023-01-11T21:41:23.4921993Z float tmp28 = 0.0; 2023-01-11T21:41:23.4922103Z if(tmp27) 2023-01-11T21:41:23.4922186Z { 2023-01-11T21:41:23.4922362Z auto tmp29 = in_ptr0[4 + (3*(((3*i1) / 2))) + (9*i0) + (((3*i2) / 5))]; 2023-01-11T21:41:23.4922560Z auto tmp30 = static_cast(1); 2023-01-11T21:41:23.4922704Z auto tmp31 = tmp29 + tmp30; 2023-01-11T21:41:23.4922826Z tmp28 = tmp31; 2023-01-11T21:41:23.4922921Z } 2023-01-11T21:41:23.4923055Z auto tmp32 = tmp28 + tmp26; 2023-01-11T21:41:23.4923162Z float tmp33 = 0.0; 2023-01-11T21:41:23.4923272Z if(tmp6) 2023-01-11T21:41:23.4923367Z { 2023-01-11T21:41:23.4923523Z auto tmp34 = static_cast(1); 2023-01-11T21:41:23.4923644Z tmp33 = tmp34; 2023-01-11T21:41:23.4923744Z } 2023-01-11T21:41:23.4923863Z float tmp35 = 0.0; 2023-01-11T21:41:23.4923971Z if(tmp13) 2023-01-11T21:41:23.4924057Z { 2023-01-11T21:41:23.4924220Z auto tmp36 = static_cast(1); 2023-01-11T21:41:23.4924340Z tmp35 = tmp36; 2023-01-11T21:41:23.4924439Z } 2023-01-11T21:41:23.4924576Z auto tmp37 = tmp35 + tmp33; 2023-01-11T21:41:23.4924701Z float tmp38 = 0.0; 2023-01-11T21:41:23.4924805Z if(tmp21) 2023-01-11T21:41:23.4924890Z { 2023-01-11T21:41:23.4925048Z auto tmp39 = static_cast(1); 2023-01-11T21:41:23.4925168Z tmp38 = tmp39; 2023-01-11T21:41:23.4925264Z } 2023-01-11T21:41:23.4925397Z auto tmp40 = tmp38 + tmp37; 2023-01-11T21:41:23.4925520Z float tmp41 = 0.0; 2023-01-11T21:41:23.4925623Z if(tmp27) 2023-01-11T21:41:23.4925709Z { 2023-01-11T21:41:23.4925868Z auto tmp42 = static_cast(1); 2023-01-11T21:41:23.4925988Z tmp41 = tmp42; 2023-01-11T21:41:23.4926082Z } 2023-01-11T21:41:23.4926217Z 
auto tmp43 = tmp41 + tmp40; 2023-01-11T21:41:23.4926354Z auto tmp44 = tmp32 / tmp43; 2023-01-11T21:41:23.4926506Z out_ptr1[i2 + (5*i1) + (10*i0)] = tmp44; 2023-01-11T21:41:23.4926590Z } 2023-01-11T21:41:23.4926689Z } 2023-01-11T21:41:23.4926786Z } 2023-01-11T21:41:23.4926875Z } 2023-01-11T21:41:23.4926967Z } 2023-01-11T21:41:23.4927054Z } 2023-01-11T21:41:23.4927136Z } 2023-01-11T21:41:23.4927284Z ''') 2023-01-11T21:41:23.4927294Z 2023-01-11T21:41:23.4927299Z 2023-01-11T21:41:23.4927425Z async_compile.wait(globals()) 2023-01-11T21:41:23.4927528Z del async_compile 2023-01-11T21:41:23.4927577Z 2023-01-11T21:41:23.4927682Z def call(args): 2023-01-11T21:41:23.4927780Z arg0_1, = args 2023-01-11T21:41:23.4927881Z args.clear() 2023-01-11T21:41:23.4928201Z buf0 = empty_strided((2, 4, 6, 6), (144, 36, 6, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4928494Z buf1 = empty_strided((2, 4, 2, 5), (40, 10, 5, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4928732Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.4928827Z del arg0_1 2023-01-11T21:41:23.4928932Z return (buf0, buf1, ) 2023-01-11T21:41:23.4928938Z 2023-01-11T21:41:23.4928945Z 2023-01-11T21:41:23.4929049Z if __name__ == "__main__": 2023-01-11T21:41:23.4929211Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.4929391Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.4929938Z arg0_1 = rand_strided((2, 4, 3, 3), (36, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.4930133Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.4930548Z [2023-01-11 21:24:11,283] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 9 2023-01-11T21:41:23.4931185Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.4931373Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.4931763Z [2023-01-11 21:24:11,444] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 10 2023-01-11T21:41:23.4931771Z 2023-01-11T21:41:23.4931778Z 2023-01-11T21:41:23.4931911Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.4932036Z import torch 2023-01-11T21:41:23.4932145Z import random 2023-01-11T21:41:23.4932313Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.4932471Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.4932496Z 2023-01-11T21:41:23.4932592Z aten = torch.ops.aten 2023-01-11T21:41:23.4932788Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.4932918Z async_compile = AsyncCompile() 2023-01-11T21:41:23.4932925Z 2023-01-11T21:41:23.4932930Z 2023-01-11T21:41:23.4933122Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.4933419Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.4933589Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.4933726Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.4933859Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.4933939Z { 2023-01-11T21:41:23.4934079Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.4934170Z { 2023-01-11T21:41:23.4934279Z #pragma omp for 2023-01-11T21:41:23.4934397Z for(long i0=0; i0<36; i0+=1) 2023-01-11T21:41:23.4934485Z { 2023-01-11T21:41:23.4934680Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.4934810Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.4934898Z } 2023-01-11T21:41:23.4935034Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.4935151Z for(long i0=288; i0<288; i0+=1) 2023-01-11T21:41:23.4935239Z { 2023-01-11T21:41:23.4935359Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.4935454Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:23.4935541Z } 2023-01-11T21:41:23.4935653Z #pragma omp for 2023-01-11T21:41:23.4935768Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.4935855Z { 2023-01-11T21:41:23.4935968Z #pragma GCC ivdep 2023-01-11T21:41:23.4936081Z for(long i1=0; i1<2; i1+=1) 2023-01-11T21:41:23.4936201Z { 2023-01-11T21:41:23.4936315Z #pragma GCC ivdep 2023-01-11T21:41:23.4936436Z for(long i2=0; i2<5; i2+=1) 2023-01-11T21:41:23.4936525Z { 2023-01-11T21:41:23.4936619Z { 2023-01-11T21:41:23.4936713Z { 2023-01-11T21:41:23.4936872Z auto tmp0 = static_cast(3*i1); 2023-01-11T21:41:23.4937017Z auto tmp1 = static_cast(3 + (3*i1)); 2023-01-11T21:41:23.4937150Z auto tmp2 = tmp0 < tmp1; 2023-01-11T21:41:23.4937313Z auto tmp3 = static_cast(((6*i2) / 5)); 2023-01-11T21:41:23.4937484Z auto tmp4 = static_cast(2 + (((6*i2) / 5))); 2023-01-11T21:41:23.4937618Z auto tmp5 = tmp3 < tmp4; 2023-01-11T21:41:23.4937757Z auto tmp6 = tmp2 & tmp5; 2023-01-11T21:41:23.4937934Z float tmp7 = 0.0; 2023-01-11T21:41:23.4938024Z if(tmp6) 2023-01-11T21:41:23.4938123Z { 2023-01-11T21:41:23.4938287Z auto tmp8 = in_ptr0[(18*i1) + (36*i0) + (((6*i2) / 5))]; 2023-01-11T21:41:23.4938445Z auto tmp9 = static_cast(1); 2023-01-11T21:41:23.4938589Z auto tmp10 = tmp8 + tmp9; 2023-01-11T21:41:23.4938719Z tmp7 = tmp10; 2023-01-11T21:41:23.4938817Z } 2023-01-11T21:41:23.4938981Z auto tmp11 = static_cast(1 + (((6*i2) / 5))); 2023-01-11T21:41:23.4939101Z auto tmp12 = tmp11 < tmp4; 2023-01-11T21:41:23.4939234Z 
auto tmp13 = tmp2 & tmp12; 2023-01-11T21:41:23.4939367Z float tmp14 = 0.0; 2023-01-11T21:41:23.4939474Z if(tmp13) 2023-01-11T21:41:23.4939579Z { 2023-01-11T21:41:23.4939746Z auto tmp15 = in_ptr0[1 + (18*i1) + (36*i0) + (((6*i2) / 5))]; 2023-01-11T21:41:23.4939904Z auto tmp16 = static_cast(1); 2023-01-11T21:41:23.4940029Z auto tmp17 = tmp15 + tmp16; 2023-01-11T21:41:23.4940149Z tmp14 = tmp17; 2023-01-11T21:41:23.4940245Z } 2023-01-11T21:41:23.4940378Z auto tmp18 = tmp14 + tmp7; 2023-01-11T21:41:23.4940543Z auto tmp19 = static_cast(1 + (3*i1)); 2023-01-11T21:41:23.4940676Z auto tmp20 = tmp19 < tmp1; 2023-01-11T21:41:23.4940809Z auto tmp21 = tmp20 & tmp5; 2023-01-11T21:41:23.4940916Z float tmp22 = 0.0; 2023-01-11T21:41:23.4941020Z if(tmp21) 2023-01-11T21:41:23.4941125Z { 2023-01-11T21:41:23.4941292Z auto tmp23 = in_ptr0[6 + (18*i1) + (36*i0) + (((6*i2) / 5))]; 2023-01-11T21:41:23.4941451Z auto tmp24 = static_cast(1); 2023-01-11T21:41:23.4941593Z auto tmp25 = tmp23 + tmp24; 2023-01-11T21:41:23.4941714Z tmp22 = tmp25; 2023-01-11T21:41:23.4941810Z } 2023-01-11T21:41:23.4941931Z auto tmp26 = tmp22 + tmp18; 2023-01-11T21:41:23.4942061Z auto tmp27 = tmp20 & tmp12; 2023-01-11T21:41:23.4942186Z float tmp28 = 0.0; 2023-01-11T21:41:23.4942290Z if(tmp27) 2023-01-11T21:41:23.4942506Z { 2023-01-11T21:41:23.4942678Z auto tmp29 = in_ptr0[7 + (18*i1) + (36*i0) + (((6*i2) / 5))]; 2023-01-11T21:41:23.4942849Z auto tmp30 = static_cast(1); 2023-01-11T21:41:23.4943048Z auto tmp31 = tmp29 + tmp30; 2023-01-11T21:41:23.4943169Z tmp28 = tmp31; 2023-01-11T21:41:23.4943264Z } 2023-01-11T21:41:23.4943404Z auto tmp32 = tmp28 + tmp26; 2023-01-11T21:41:23.4943565Z auto tmp33 = static_cast(2 + (3*i1)); 2023-01-11T21:41:23.4943704Z auto tmp34 = tmp33 < tmp1; 2023-01-11T21:41:23.4943850Z auto tmp35 = tmp34 & tmp5; 2023-01-11T21:41:23.4943972Z float tmp36 = 0.0; 2023-01-11T21:41:23.4944068Z if(tmp35) 2023-01-11T21:41:23.4944168Z { 2023-01-11T21:41:23.4944344Z auto tmp37 = in_ptr0[12 + (18*i1) + (36*i0) + (((6*i2) / 5))]; 2023-01-11T21:41:23.4944498Z auto tmp38 = static_cast(1); 2023-01-11T21:41:23.4944686Z auto tmp39 = tmp37 + tmp38; 2023-01-11T21:41:23.4944803Z tmp36 = tmp39; 2023-01-11T21:41:23.4944903Z } 2023-01-11T21:41:23.4945020Z auto tmp40 = tmp36 + tmp32; 2023-01-11T21:41:23.4945154Z auto tmp41 = tmp34 & tmp12; 2023-01-11T21:41:23.4945280Z float tmp42 = 0.0; 2023-01-11T21:41:23.4945392Z if(tmp41) 2023-01-11T21:41:23.4945489Z { 2023-01-11T21:41:23.4945658Z auto tmp43 = in_ptr0[13 + (18*i1) + (36*i0) + (((6*i2) / 5))]; 2023-01-11T21:41:23.4945814Z auto tmp44 = static_cast(1); 2023-01-11T21:41:23.4945940Z auto tmp45 = tmp43 + tmp44; 2023-01-11T21:41:23.4946061Z tmp42 = tmp45; 2023-01-11T21:41:23.4946172Z } 2023-01-11T21:41:23.4946307Z auto tmp46 = tmp42 + tmp40; 2023-01-11T21:41:23.4946443Z auto tmp47 = tmp1 < tmp1; 2023-01-11T21:41:23.4946576Z auto tmp48 = tmp47 & tmp5; 2023-01-11T21:41:23.4946696Z float tmp49 = 0.0; 2023-01-11T21:41:23.4946805Z if(tmp48) 2023-01-11T21:41:23.4946886Z { 2023-01-11T21:41:23.4947055Z auto tmp50 = in_ptr0[18 + (18*i1) + (36*i0) + (((6*i2) / 5))]; 2023-01-11T21:41:23.4947211Z auto tmp51 = static_cast(1); 2023-01-11T21:41:23.4947353Z auto tmp52 = tmp50 + tmp51; 2023-01-11T21:41:23.4947475Z tmp49 = tmp52; 2023-01-11T21:41:23.4947572Z } 2023-01-11T21:41:23.4947705Z auto tmp53 = tmp49 + tmp46; 2023-01-11T21:41:23.4947833Z auto tmp54 = tmp47 & tmp12; 2023-01-11T21:41:23.4947956Z float tmp55 = 0.0; 2023-01-11T21:41:23.4948061Z if(tmp54) 2023-01-11T21:41:23.4948160Z { 2023-01-11T21:41:23.4948323Z 
auto tmp56 = in_ptr0[19 + (18*i1) + (36*i0) + (((6*i2) / 5))]; 2023-01-11T21:41:23.4948476Z auto tmp57 = static_cast(1); 2023-01-11T21:41:23.4948617Z auto tmp58 = tmp56 + tmp57; 2023-01-11T21:41:23.4948724Z tmp55 = tmp58; 2023-01-11T21:41:23.4948822Z } 2023-01-11T21:41:23.4948966Z auto tmp59 = tmp55 + tmp53; 2023-01-11T21:41:23.4949093Z float tmp60 = 0.0; 2023-01-11T21:41:23.4949199Z if(tmp6) 2023-01-11T21:41:23.4949298Z { 2023-01-11T21:41:23.4949490Z auto tmp61 = static_cast(1); 2023-01-11T21:41:23.4949609Z tmp60 = tmp61; 2023-01-11T21:41:23.4949687Z } 2023-01-11T21:41:23.4949808Z float tmp62 = 0.0; 2023-01-11T21:41:23.4949918Z if(tmp13) 2023-01-11T21:41:23.4950020Z { 2023-01-11T21:41:23.4950184Z auto tmp63 = static_cast(1); 2023-01-11T21:41:23.4950301Z tmp62 = tmp63; 2023-01-11T21:41:23.4950400Z } 2023-01-11T21:41:23.4950517Z auto tmp64 = tmp62 + tmp60; 2023-01-11T21:41:23.4950646Z float tmp65 = 0.0; 2023-01-11T21:41:23.4950751Z if(tmp21) 2023-01-11T21:41:23.4950849Z { 2023-01-11T21:41:23.4950998Z auto tmp66 = static_cast(1); 2023-01-11T21:41:23.4951158Z tmp65 = tmp66; 2023-01-11T21:41:23.4951256Z } 2023-01-11T21:41:23.4951373Z auto tmp67 = tmp65 + tmp64; 2023-01-11T21:41:23.4951497Z float tmp68 = 0.0; 2023-01-11T21:41:23.4951604Z if(tmp27) 2023-01-11T21:41:23.4951703Z { 2023-01-11T21:41:23.4951858Z auto tmp69 = static_cast(1); 2023-01-11T21:41:23.4951974Z tmp68 = tmp69; 2023-01-11T21:41:23.4952075Z } 2023-01-11T21:41:23.4952193Z auto tmp70 = tmp68 + tmp67; 2023-01-11T21:41:23.4952317Z float tmp71 = 0.0; 2023-01-11T21:41:23.4952421Z if(tmp35) 2023-01-11T21:41:23.4952519Z { 2023-01-11T21:41:23.4952677Z auto tmp72 = static_cast(1); 2023-01-11T21:41:23.4952800Z tmp71 = tmp72; 2023-01-11T21:41:23.4952897Z } 2023-01-11T21:41:23.4953018Z auto tmp73 = tmp71 + tmp70; 2023-01-11T21:41:23.4953139Z float tmp74 = 0.0; 2023-01-11T21:41:23.4953247Z if(tmp41) 2023-01-11T21:41:23.4953345Z { 2023-01-11T21:41:23.4953502Z auto tmp75 = static_cast(1); 2023-01-11T21:41:23.4953618Z tmp74 = tmp75; 2023-01-11T21:41:23.4953714Z } 2023-01-11T21:41:23.4953892Z auto tmp76 = tmp74 + tmp73; 2023-01-11T21:41:23.4954015Z float tmp77 = 0.0; 2023-01-11T21:41:23.4954126Z if(tmp48) 2023-01-11T21:41:23.4954220Z { 2023-01-11T21:41:23.4954378Z auto tmp78 = static_cast(1); 2023-01-11T21:41:23.4954502Z tmp77 = tmp78; 2023-01-11T21:41:23.4954601Z } 2023-01-11T21:41:23.4954724Z auto tmp79 = tmp77 + tmp76; 2023-01-11T21:41:23.4954849Z float tmp80 = 0.0; 2023-01-11T21:41:23.4954959Z if(tmp54) 2023-01-11T21:41:23.4955057Z { 2023-01-11T21:41:23.4955211Z auto tmp81 = static_cast(1); 2023-01-11T21:41:23.4955330Z tmp80 = tmp81; 2023-01-11T21:41:23.4955427Z } 2023-01-11T21:41:23.4955564Z auto tmp82 = tmp80 + tmp79; 2023-01-11T21:41:23.4955690Z auto tmp83 = tmp59 / tmp82; 2023-01-11T21:41:23.4955842Z out_ptr1[i2 + (5*i1) + (10*i0)] = tmp83; 2023-01-11T21:41:23.5037960Z } 2023-01-11T21:41:23.5038253Z } 2023-01-11T21:41:23.5038344Z } 2023-01-11T21:41:23.5038430Z } 2023-01-11T21:41:23.5038518Z } 2023-01-11T21:41:23.5038584Z } 2023-01-11T21:41:23.5038664Z } 2023-01-11T21:41:23.5038826Z ''') 2023-01-11T21:41:23.5038834Z 2023-01-11T21:41:23.5038840Z 2023-01-11T21:41:23.5038993Z async_compile.wait(globals()) 2023-01-11T21:41:23.5039092Z del async_compile 2023-01-11T21:41:23.5039098Z 2023-01-11T21:41:23.5039195Z def call(args): 2023-01-11T21:41:23.5039293Z arg0_1, = args 2023-01-11T21:41:23.5039368Z args.clear() 2023-01-11T21:41:23.5039684Z buf0 = empty_strided((2, 4, 6, 6), (144, 36, 6, 1), device='cpu', dtype=torch.float32) 
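# Note on the kernel source above (annotation, not generated output): the index
# arithmetic `(6*i2) / 5` and `2 + ((6*i2) / 5)` is the usual adaptive-average-pool
# window computation start = floor(i*in/out), end = ceil((i+1)*in/out) for pooling
# 6 input columns down to 5 outputs. A minimal sketch of those bounds (the helper
# name is illustrative, not part of the test):
def adaptive_window(i, in_size, out_size):
    start = (i * in_size) // out_size
    end = -((-(i + 1) * in_size) // out_size)  # ceiling division
    return start, end

# For in_size=6, out_size=5 this reproduces the starts/ends hard-coded in the kernel:
assert [adaptive_window(i, 6, 5) for i in range(5)] == [(0, 2), (1, 3), (2, 4), (3, 5), (4, 6)]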
2023-01-11T21:41:23.5039982Z buf1 = empty_strided((2, 4, 2, 5), (40, 10, 5, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5040214Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.5040312Z del arg0_1 2023-01-11T21:41:23.5040544Z return (buf0, buf1, ) 2023-01-11T21:41:23.5040554Z 2023-01-11T21:41:23.5040561Z 2023-01-11T21:41:23.5040693Z if __name__ == "__main__": 2023-01-11T21:41:23.5040896Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5041103Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5041500Z arg0_1 = rand_strided((2, 4, 6, 6), (144, 36, 6, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5041696Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5042204Z [2023-01-11 21:24:13,220] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 10 2023-01-11T21:41:23.5042213Z 2023-01-11T21:41:23.5042328Z ok (6.032s) 2023-01-11T21:41:23.5043249Z test_adaptive_avg_pool2d2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5043487Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5043979Z [2023-01-11 21:24:13,241] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 11 2023-01-11T21:41:23.5044437Z [2023-01-11 21:24:13,247] torch._inductor.ir: [WARNING] Using FallbackKernel: aten._adaptive_avg_pool2d 2023-01-11T21:41:23.5044923Z [2023-01-11 21:24:13,250] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 11 2023-01-11T21:41:23.5044950Z 2023-01-11T21:41:23.5045102Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5045222Z import torch 2023-01-11T21:41:23.5045343Z import random 2023-01-11T21:41:23.5045557Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5045777Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5045794Z 2023-01-11T21:41:23.5045932Z aten = torch.ops.aten 2023-01-11T21:41:23.5046175Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5046325Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5046351Z 2023-01-11T21:41:23.5046359Z 2023-01-11T21:41:23.5046499Z async_compile.wait(globals()) 2023-01-11T21:41:23.5046624Z del async_compile 2023-01-11T21:41:23.5046633Z 2023-01-11T21:41:23.5046757Z def call(args): 2023-01-11T21:41:23.5046875Z arg0_1, = args 2023-01-11T21:41:23.5046998Z args.clear() 2023-01-11T21:41:23.5047192Z buf0 = aten._adaptive_avg_pool2d(arg0_1, [4, 4]) 2023-01-11T21:41:23.5047310Z del arg0_1 2023-01-11T21:41:23.5047408Z buf1 = buf0 2023-01-11T21:41:23.5047590Z assert_size_stride(buf1, (2, 4, 4, 4), (64, 16, 4, 1)) 2023-01-11T21:41:23.5047704Z del buf0 2023-01-11T21:41:23.5047829Z return (buf1, ) 2023-01-11T21:41:23.5047837Z 2023-01-11T21:41:23.5047844Z 2023-01-11T21:41:23.5047977Z if __name__ == "__main__": 2023-01-11T21:41:23.5048238Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5048460Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5048860Z arg0_1 = rand_strided((2, 4, 21, 21), (1764, 441, 21, 1), device='cpu', 
dtype=torch.float32) 2023-01-11T21:41:23.5049052Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5049061Z 2023-01-11T21:41:23.5049174Z ok (0.030s) 2023-01-11T21:41:23.5050086Z test_add_const_float_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5050313Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5050847Z [2023-01-11 21:24:13,264] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 12 2023-01-11T21:41:23.5051354Z [2023-01-11 21:24:14,899] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 12 2023-01-11T21:41:23.5051363Z 2023-01-11T21:41:23.5051532Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5051654Z import torch 2023-01-11T21:41:23.5051777Z import random 2023-01-11T21:41:23.5051969Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5052187Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5052195Z 2023-01-11T21:41:23.5052334Z aten = torch.ops.aten 2023-01-11T21:41:23.5052578Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5052743Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5052752Z 2023-01-11T21:41:23.5052759Z 2023-01-11T21:41:23.5053013Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5053398Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5053615Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5053775Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.5053878Z { 2023-01-11T21:41:23.5054054Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5054162Z { 2023-01-11T21:41:23.5054298Z #pragma omp for 2023-01-11T21:41:23.5054442Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:23.5054553Z { 2023-01-11T21:41:23.5054791Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.5055036Z auto tmp1 = at::vec::Vectorized(static_cast(1.5)); 2023-01-11T21:41:23.5055182Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5055341Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.5055450Z } 2023-01-11T21:41:23.5055619Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.5055766Z for(long i0=32; i0<32; i0+=1) 2023-01-11T21:41:23.5055866Z { 2023-01-11T21:41:23.5056009Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.5056191Z auto tmp1 = static_cast(1.5); 2023-01-11T21:41:23.5056338Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5056479Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.5056586Z } 2023-01-11T21:41:23.5056690Z } 2023-01-11T21:41:23.5056778Z } 2023-01-11T21:41:23.5056919Z ''') 2023-01-11T21:41:23.5056927Z 2023-01-11T21:41:23.5056935Z 2023-01-11T21:41:23.5057094Z async_compile.wait(globals()) 2023-01-11T21:41:23.5057223Z del async_compile 2023-01-11T21:41:23.5057233Z 2023-01-11T21:41:23.5057357Z def call(args): 2023-01-11T21:41:23.5057477Z arg0_1, = args 2023-01-11T21:41:23.5057597Z args.clear() 2023-01-11T21:41:23.5057939Z buf0 = empty_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5058178Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 
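# Annotation: this wrapper appears to implement the eager computation `x + 1.5` on a
# contiguous (32,) float32 tensor; kernel_cpp_0 above writes the result into buf0.
# A minimal eager-mode sketch of the same computation (the tensor name is illustrative):
import torch
x = torch.randn(32)
expected = x + 1.5  # what kernel_cpp_0 produces in buf0 for input arg0_1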
2023-01-11T21:41:23.5058354Z del arg0_1 2023-01-11T21:41:23.5058482Z return (buf0, ) 2023-01-11T21:41:23.5058490Z 2023-01-11T21:41:23.5058496Z 2023-01-11T21:41:23.5058630Z if __name__ == "__main__": 2023-01-11T21:41:23.5058839Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5059061Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5059423Z arg0_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5059600Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5059609Z 2023-01-11T21:41:23.5059721Z ok (1.649s) 2023-01-11T21:41:23.5060621Z test_add_const_int_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5060891Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5061388Z [2023-01-11 21:24:14,915] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 13 2023-01-11T21:41:23.5061881Z [2023-01-11 21:24:16,544] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 13 2023-01-11T21:41:23.5061889Z 2023-01-11T21:41:23.5062058Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5062180Z import torch 2023-01-11T21:41:23.5062299Z import random 2023-01-11T21:41:23.5062705Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5062930Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5062940Z 2023-01-11T21:41:23.5063078Z aten = torch.ops.aten 2023-01-11T21:41:23.5063320Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5063490Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5063498Z 2023-01-11T21:41:23.5063513Z 2023-01-11T21:41:23.5063773Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5064159Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5064372Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5064531Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.5064634Z { 2023-01-11T21:41:23.5064810Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5064914Z { 2023-01-11T21:41:23.5065050Z #pragma omp for 2023-01-11T21:41:23.5065194Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:23.5065304Z { 2023-01-11T21:41:23.5065529Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.5065850Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.5066002Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5066161Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.5066275Z } 2023-01-11T21:41:23.5066455Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.5066600Z for(long i0=32; i0<32; i0+=1) 2023-01-11T21:41:23.5066690Z { 2023-01-11T21:41:23.5069226Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.5069417Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.5069567Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5069710Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.5069817Z } 2023-01-11T21:41:23.5069921Z } 2023-01-11T21:41:23.5070009Z } 2023-01-11T21:41:23.5070159Z ''') 2023-01-11T21:41:23.5070169Z 2023-01-11T21:41:23.5070176Z 2023-01-11T21:41:23.5070338Z async_compile.wait(globals()) 
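# Annotation: in the C++ kernel above, the main loop runs 4 iterations of an 8-wide
# at::vec::Vectorized load/add/store (4 * 8 = 32 elements), and the trailing
# `for(long i0=32; i0<32; ...)` scalar loop is an empty remainder loop because 32 is
# an exact multiple of the vector width. A small sketch of that main/tail split,
# assuming a vector width of 8:
n, width = 32, 8
main_iters = n // width              # 4 vectorized iterations
tail = range(main_iters * width, n)  # range(32, 32): no scalar tail work
assert main_iters == 4 and len(tail) == 0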
2023-01-11T21:41:23.5070464Z del async_compile 2023-01-11T21:41:23.5070472Z 2023-01-11T21:41:23.5070600Z def call(args): 2023-01-11T21:41:23.5070718Z arg0_1, = args 2023-01-11T21:41:23.5070840Z args.clear() 2023-01-11T21:41:23.5071191Z buf0 = empty_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5071551Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.5071669Z del arg0_1 2023-01-11T21:41:23.5071792Z return (buf0, ) 2023-01-11T21:41:23.5071801Z 2023-01-11T21:41:23.5071809Z 2023-01-11T21:41:23.5071940Z if __name__ == "__main__": 2023-01-11T21:41:23.5072148Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5072372Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5072736Z arg0_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5072913Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5072936Z 2023-01-11T21:41:23.5073034Z ok (1.644s) 2023-01-11T21:41:23.5074090Z test_add_inplace_permuted_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5074332Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5074831Z [2023-01-11 21:24:16,560] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 14 2023-01-11T21:41:23.5075337Z [2023-01-11 21:24:18,280] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 14 2023-01-11T21:41:23.5075347Z 2023-01-11T21:41:23.5075517Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5075639Z import torch 2023-01-11T21:41:23.5075763Z import random 2023-01-11T21:41:23.5075957Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5076176Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5076185Z 2023-01-11T21:41:23.5076320Z aten = torch.ops.aten 2023-01-11T21:41:23.5076568Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5076730Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5076739Z 2023-01-11T21:41:23.5076746Z 2023-01-11T21:41:23.5076997Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5077381Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5077601Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5077782Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.5077939Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.5078044Z { 2023-01-11T21:41:23.5078222Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5078329Z { 2023-01-11T21:41:23.5078490Z #pragma omp for collapse(2) 2023-01-11T21:41:23.5078630Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.5078736Z { 2023-01-11T21:41:23.5078869Z for(long i1=0; i1<12; i1+=1) 2023-01-11T21:41:23.5078978Z { 2023-01-11T21:41:23.5079142Z for(long i2=0; i2<27; i2+=1) 2023-01-11T21:41:23.5079253Z { 2023-01-11T21:41:23.5079529Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i2) + (221*i1) + (2652*i0)); 2023-01-11T21:41:23.5079787Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + (8*i2) + (221*i0)); 2023-01-11T21:41:23.5079946Z auto tmp2 = 
tmp0 + tmp1; 2023-01-11T21:41:23.5080129Z tmp2.store(out_ptr0 + (8*i2) + (221*i1) + (2652*i0)); 2023-01-11T21:41:23.5080239Z } 2023-01-11T21:41:23.5080409Z #pragma omp simd simdlen(4) 2023-01-11T21:41:23.5080563Z for(long i2=216; i2<221; i2+=1) 2023-01-11T21:41:23.5080673Z { 2023-01-11T21:41:23.5080861Z auto tmp0 = out_ptr0[i2 + (221*i1) + (2652*i0)]; 2023-01-11T21:41:23.5081034Z auto tmp1 = in_ptr1[i2 + (221*i0)]; 2023-01-11T21:41:23.5081179Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5081411Z out_ptr0[i2 + (221*i1) + (2652*i0)] = tmp2; 2023-01-11T21:41:23.5081521Z } 2023-01-11T21:41:23.5081635Z } 2023-01-11T21:41:23.5081745Z } 2023-01-11T21:41:23.5081851Z } 2023-01-11T21:41:23.5081953Z } 2023-01-11T21:41:23.5082080Z ''') 2023-01-11T21:41:23.5082089Z 2023-01-11T21:41:23.5082096Z 2023-01-11T21:41:23.5082258Z async_compile.wait(globals()) 2023-01-11T21:41:23.5082384Z del async_compile 2023-01-11T21:41:23.5082394Z 2023-01-11T21:41:23.5082515Z def call(args): 2023-01-11T21:41:23.5082646Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.5082769Z args.clear() 2023-01-11T21:41:23.5083060Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg0_1.data_ptr())) 2023-01-11T21:41:23.5083178Z del arg1_1 2023-01-11T21:41:23.5083290Z return (arg0_1, ) 2023-01-11T21:41:23.5083298Z 2023-01-11T21:41:23.5083306Z 2023-01-11T21:41:23.5083433Z if __name__ == "__main__": 2023-01-11T21:41:23.5083687Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5083912Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5084325Z arg0_1 = rand_strided((2, 13, 12, 17), (2652, 17, 221, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5084729Z arg1_1 = rand_strided((2, 13, 1, 17), (221, 17, 17, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5084935Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.5084943Z 2023-01-11T21:41:23.5085057Z ok (1.736s) 2023-01-11T21:41:23.5085930Z test_addmm_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5086165Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5086655Z [2023-01-11 21:24:18,313] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 15 2023-01-11T21:41:23.5087155Z [2023-01-11 21:24:20,000] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 15 2023-01-11T21:41:23.5087164Z 2023-01-11T21:41:23.5087331Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5087455Z import torch 2023-01-11T21:41:23.5087578Z import random 2023-01-11T21:41:23.5087786Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5088003Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5088011Z 2023-01-11T21:41:23.5088132Z aten = torch.ops.aten 2023-01-11T21:41:23.5088380Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5088544Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5088552Z 2023-01-11T21:41:23.5088559Z 2023-01-11T21:41:23.5088814Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5089190Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5089403Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5089588Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.5089768Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.5089929Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.5090099Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.5090269Z float* __restrict__ out_ptr2) 2023-01-11T21:41:23.5090373Z { 2023-01-11T21:41:23.5090550Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5090658Z { 2023-01-11T21:41:23.5090791Z #pragma omp for 2023-01-11T21:41:23.5090918Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.5091025Z { 2023-01-11T21:41:23.5091272Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.5091568Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.5091723Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5091885Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.5091990Z } 2023-01-11T21:41:23.5092146Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.5092292Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.5092398Z { 2023-01-11T21:41:23.5092548Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.5092723Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.5092870Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5093008Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.5093101Z } 2023-01-11T21:41:23.5093236Z #pragma omp for 2023-01-11T21:41:23.5093378Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.5093489Z { 2023-01-11T21:41:23.5093767Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.5094010Z auto tmp1 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.5094159Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5094316Z tmp2.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.5094412Z } 2023-01-11T21:41:23.5094579Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.5094723Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.5094833Z { 2023-01-11T21:41:23.5094983Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:23.5095160Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.5095290Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5095428Z out_ptr1[i0] = tmp2; 2023-01-11T21:41:23.5095534Z } 2023-01-11T21:41:23.5095670Z #pragma omp for 
2023-01-11T21:41:23.5095812Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.5095917Z { 2023-01-11T21:41:23.5096161Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr2 + 8*i0); 2023-01-11T21:41:23.5096397Z auto tmp1 = at::vec::Vectorized(static_cast(3)); 2023-01-11T21:41:23.5096528Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5096687Z tmp2.store(out_ptr2 + 8*i0); 2023-01-11T21:41:23.5096793Z } 2023-01-11T21:41:23.5096963Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.5097107Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.5097212Z { 2023-01-11T21:41:23.5097342Z auto tmp0 = in_ptr2[i0]; 2023-01-11T21:41:23.5097516Z auto tmp1 = static_cast(3); 2023-01-11T21:41:23.5097665Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5097805Z out_ptr2[i0] = tmp2; 2023-01-11T21:41:23.5097914Z } 2023-01-11T21:41:23.5098016Z } 2023-01-11T21:41:23.5098119Z } 2023-01-11T21:41:23.5098257Z ''') 2023-01-11T21:41:23.5098266Z 2023-01-11T21:41:23.5098292Z 2023-01-11T21:41:23.5098525Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.5098919Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5099129Z extern "C" void kernel(float* __restrict__ in_out_ptr0) 2023-01-11T21:41:23.5099235Z { 2023-01-11T21:41:23.5099411Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5099519Z { 2023-01-11T21:41:23.5099655Z #pragma omp for 2023-01-11T21:41:23.5099781Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.5099890Z { 2023-01-11T21:41:23.5100135Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.5100379Z auto tmp1 = at::vec::Vectorized(static_cast(4)); 2023-01-11T21:41:23.5100529Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5100696Z tmp2.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.5100803Z } 2023-01-11T21:41:23.5100959Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.5101106Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.5101267Z { 2023-01-11T21:41:23.5101420Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:23.5101596Z auto tmp1 = static_cast(4); 2023-01-11T21:41:23.5101747Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5101892Z in_out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.5101981Z } 2023-01-11T21:41:23.5102085Z } 2023-01-11T21:41:23.5102192Z } 2023-01-11T21:41:23.5102473Z ''') 2023-01-11T21:41:23.5102483Z 2023-01-11T21:41:23.5102490Z 2023-01-11T21:41:23.5102652Z async_compile.wait(globals()) 2023-01-11T21:41:23.5102779Z del async_compile 2023-01-11T21:41:23.5102787Z 2023-01-11T21:41:23.5102909Z def call(args): 2023-01-11T21:41:23.5103038Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.5103159Z args.clear() 2023-01-11T21:41:23.5103529Z buf0 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5103890Z buf1 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5104332Z buf2 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5104767Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.5104892Z del arg0_1 2023-01-11T21:41:23.5105007Z del arg1_1 2023-01-11T21:41:23.5105105Z del arg2_1 2023-01-11T21:41:23.5105470Z buf3 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5105691Z aten.addmm.out(buf0, buf1, buf2, beta=1, alpha=1, out=buf3) 2023-01-11T21:41:23.5105804Z del buf0 2023-01-11T21:41:23.5105918Z del buf1 
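# Annotation: taken together, kernel_cpp_0 (the three +1/+2/+3 elementwise kernels),
# aten.addmm.out and kernel_cpp_1 (the final +4) above appear to implement the eager
# expression torch.addmm(a + 1, b + 2, c + 3) + 4 on three (8, 8) float32 tensors.
# A minimal eager-mode sketch (variable names are illustrative, not the test's):
import torch
a, b, c = (torch.randn(8, 8) for _ in range(3))
result = torch.addmm(a + 1, b + 2, c + 3) + 4  # bias, mat1, mat2, then the trailing add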
2023-01-11T21:41:23.5106027Z del buf2 2023-01-11T21:41:23.5106179Z buf4 = buf3; del buf3 # reuse 2023-01-11T21:41:23.5106345Z kernel_cpp_1(c_void_p(buf4.data_ptr())) 2023-01-11T21:41:23.5106465Z return (buf4, ) 2023-01-11T21:41:23.5106475Z 2023-01-11T21:41:23.5106487Z 2023-01-11T21:41:23.5106621Z if __name__ == "__main__": 2023-01-11T21:41:23.5106825Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5107045Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5107405Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5107763Z arg1_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5108101Z arg2_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5108319Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.5108326Z 2023-01-11T21:41:23.5108441Z ok (1.721s) 2023-01-11T21:41:23.5109353Z test_alexnet_prefix_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5109584Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5110078Z [2023-01-11 21:24:20,148] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 16 2023-01-11T21:41:23.5110476Z [2023-01-11 21:24:20,236] torch._inductor.scheduler: [DEBUG] removed dead node: buf2 2023-01-11T21:41:23.5110970Z [2023-01-11 21:24:21,835] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 16 2023-01-11T21:41:23.5110980Z 2023-01-11T21:41:23.5111150Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5111273Z import torch 2023-01-11T21:41:23.5111379Z import random 2023-01-11T21:41:23.5111589Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5111807Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5111816Z 2023-01-11T21:41:23.5112028Z aten = torch.ops.aten 2023-01-11T21:41:23.5112272Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5112435Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5112444Z 2023-01-11T21:41:23.5112452Z 2023-01-11T21:41:23.5112707Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5113082Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5113283Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5113456Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.5113561Z { 2023-01-11T21:41:23.5113989Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5114119Z { 2023-01-11T21:41:23.5114262Z #pragma omp for 2023-01-11T21:41:23.5114403Z for(long i0=0; i0<1024; i0+=1) 2023-01-11T21:41:23.5114495Z { 2023-01-11T21:41:23.5114634Z #pragma GCC ivdep 2023-01-11T21:41:23.5114777Z for(long i1=0; i1<27; i1+=1) 2023-01-11T21:41:23.5114972Z { 2023-01-11T21:41:23.5115117Z #pragma GCC ivdep 2023-01-11T21:41:23.5115270Z for(long i2=0; i2<27; i2+=1) 2023-01-11T21:41:23.5115381Z { 2023-01-11T21:41:23.5115479Z { 2023-01-11T21:41:23.5115601Z { 2023-01-11T21:41:23.5115802Z auto tmp0 = in_ptr0[(2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.5116002Z auto 
tmp2 = in_ptr0[1 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.5116198Z auto tmp5 = in_ptr0[2 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.5116396Z auto tmp8 = in_ptr0[55 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.5116601Z auto tmp11 = in_ptr0[56 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.5116784Z auto tmp14 = in_ptr0[57 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.5116993Z auto tmp17 = in_ptr0[110 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.5117192Z auto tmp20 = in_ptr0[111 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.5117392Z auto tmp23 = in_ptr0[112 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.5117565Z auto tmp1 = tmp0 * (tmp0>0); 2023-01-11T21:41:23.5117737Z auto tmp3 = tmp2 * (tmp2>0); 2023-01-11T21:41:23.5117972Z auto tmp4 = (tmp1 != tmp1) ? tmp1 : std::max(tmp3, tmp1); 2023-01-11T21:41:23.5118143Z auto tmp6 = tmp5 * (tmp5>0); 2023-01-11T21:41:23.5118369Z auto tmp7 = (tmp4 != tmp4) ? tmp4 : std::max(tmp6, tmp4); 2023-01-11T21:41:23.5118524Z auto tmp9 = tmp8 * (tmp8>0); 2023-01-11T21:41:23.5118752Z auto tmp10 = (tmp7 != tmp7) ? tmp7 : std::max(tmp9, tmp7); 2023-01-11T21:41:23.5118935Z auto tmp12 = tmp11 * (tmp11>0); 2023-01-11T21:41:23.5119172Z auto tmp13 = (tmp10 != tmp10) ? tmp10 : std::max(tmp12, tmp10); 2023-01-11T21:41:23.5119350Z auto tmp15 = tmp14 * (tmp14>0); 2023-01-11T21:41:23.5119578Z auto tmp16 = (tmp13 != tmp13) ? tmp13 : std::max(tmp15, tmp13); 2023-01-11T21:41:23.5119756Z auto tmp18 = tmp17 * (tmp17>0); 2023-01-11T21:41:23.5119979Z auto tmp19 = (tmp16 != tmp16) ? tmp16 : std::max(tmp18, tmp16); 2023-01-11T21:41:23.5120138Z auto tmp21 = tmp20 * (tmp20>0); 2023-01-11T21:41:23.5120367Z auto tmp22 = (tmp19 != tmp19) ? tmp19 : std::max(tmp21, tmp19); 2023-01-11T21:41:23.5120542Z auto tmp24 = tmp23 * (tmp23>0); 2023-01-11T21:41:23.5120768Z auto tmp25 = (tmp22 != tmp22) ? 
tmp22 : std::max(tmp24, tmp22); 2023-01-11T21:41:23.5121000Z out_ptr0[i2 + (27*i1) + (729*i0)] = tmp25; 2023-01-11T21:41:23.5121115Z } 2023-01-11T21:41:23.5121227Z } 2023-01-11T21:41:23.5121336Z } 2023-01-11T21:41:23.5121428Z } 2023-01-11T21:41:23.5121534Z } 2023-01-11T21:41:23.5121641Z } 2023-01-11T21:41:23.5121745Z } 2023-01-11T21:41:23.5121910Z ''') 2023-01-11T21:41:23.5121919Z 2023-01-11T21:41:23.5121927Z 2023-01-11T21:41:23.5122091Z async_compile.wait(globals()) 2023-01-11T21:41:23.5122217Z del async_compile 2023-01-11T21:41:23.5122226Z 2023-01-11T21:41:23.5122332Z def call(args): 2023-01-11T21:41:23.5122477Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.5122602Z args.clear() 2023-01-11T21:41:23.5122842Z buf0 = aten.convolution(arg2_1, arg1_1, arg0_1, (4, 4), (2, 2), (1, 1), False, (0, 0), 1) 2023-01-11T21:41:23.5123040Z assert_size_stride(buf0, (16, 64, 55, 55), (193600, 3025, 55, 1)) 2023-01-11T21:41:23.5123210Z del arg0_1 2023-01-11T21:41:23.5123329Z del arg1_1 2023-01-11T21:41:23.5123428Z del arg2_1 2023-01-11T21:41:23.5123852Z buf1 = empty_strided((16, 64, 27, 27), (46656, 729, 27, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5124090Z kernel_cpp_0(c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.5124217Z return (buf1, ) 2023-01-11T21:41:23.5124226Z 2023-01-11T21:41:23.5124233Z 2023-01-11T21:41:23.5124368Z if __name__ == "__main__": 2023-01-11T21:41:23.5124574Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5124799Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5125161Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5125550Z arg1_1 = rand_strided((64, 3, 11, 11), (363, 121, 11, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5125978Z arg2_1 = rand_strided((16, 3, 224, 224), (150528, 50176, 224, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5126206Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.5126214Z 2023-01-11T21:41:23.5126329Z ok (1.921s) 2023-01-11T21:41:23.5127213Z test_any_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5127439Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5127927Z [2023-01-11 21:24:21,960] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 17 2023-01-11T21:41:23.5128424Z [2023-01-11 21:24:23,540] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 17 2023-01-11T21:41:23.5129253Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5129479Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5129959Z [2023-01-11 21:24:23,579] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 18 2023-01-11T21:41:23.5130435Z [2023-01-11 21:24:25,354] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 18 2023-01-11T21:41:23.5130458Z 2023-01-11T21:41:23.5130611Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5130737Z import torch 2023-01-11T21:41:23.5130859Z import random 2023-01-11T21:41:23.5131068Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5131346Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5131355Z 2023-01-11T21:41:23.5131490Z aten = torch.ops.aten 2023-01-11T21:41:23.5131729Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5131876Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5131883Z 2023-01-11T21:41:23.5131907Z 2023-01-11T21:41:23.5132139Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5132518Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5132727Z extern "C" void kernel(bool* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.5132901Z bool* __restrict__ in_out_ptr1, 2023-01-11T21:41:23.5133088Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5133261Z bool* __restrict__ out_ptr0, 2023-01-11T21:41:23.5133430Z bool* __restrict__ out_ptr1) 2023-01-11T21:41:23.5133517Z { 2023-01-11T21:41:23.5133713Z auto out_ptr2 = in_out_ptr0; 2023-01-11T21:41:23.5133867Z auto out_ptr3 = in_out_ptr1; 2023-01-11T21:41:23.5133970Z { 2023-01-11T21:41:23.5134075Z { 2023-01-11T21:41:23.5134208Z bool tmp2 = 0; 2023-01-11T21:41:23.5134339Z bool tmp4 = 0; 2023-01-11T21:41:23.5134452Z bool tmp8 = 0; 2023-01-11T21:41:23.5134587Z bool tmp10 = 0; 2023-01-11T21:41:23.5134774Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5134888Z { 2023-01-11T21:41:23.5135188Z #pragma omp for reduction(||:tmp2) reduction(||:tmp4) reduction(||:tmp8) reduction(||:tmp10) 2023-01-11T21:41:23.5135345Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.5135455Z { 2023-01-11T21:41:23.5135556Z { 2023-01-11T21:41:23.5135722Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.5135917Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.5136109Z auto tmp3 = std::isinf(tmp0); 2023-01-11T21:41:23.5136270Z auto tmp5 = tmp3 == 0; 2023-01-11T21:41:23.5136464Z auto tmp6 = static_cast(tmp5); 2023-01-11T21:41:23.5136649Z auto tmp7 = static_cast(tmp6); 2023-01-11T21:41:23.5136790Z auto tmp9 = tmp5 == 0; 2023-01-11T21:41:23.5136942Z tmp2 = tmp2 || tmp1; 2023-01-11T21:41:23.5137088Z tmp4 = tmp4 || tmp3; 2023-01-11T21:41:23.5137239Z tmp8 = tmp8 || tmp7; 2023-01-11T21:41:23.5137392Z tmp10 = tmp10 || tmp9; 2023-01-11T21:41:23.5137506Z } 2023-01-11T21:41:23.5137614Z } 2023-01-11T21:41:23.5137705Z } 2023-01-11T21:41:23.5137841Z out_ptr0[0] = tmp2; 2023-01-11T21:41:23.5137975Z out_ptr1[0] = tmp4; 2023-01-11T21:41:23.5138107Z out_ptr2[0] = tmp8; 2023-01-11T21:41:23.5138250Z out_ptr3[0] = tmp10; 2023-01-11T21:41:23.5138359Z } 2023-01-11T21:41:23.5138465Z } 2023-01-11T21:41:23.5138553Z { 2023-01-11T21:41:23.5138657Z { 2023-01-11T21:41:23.5138806Z auto tmp0 = out_ptr2[0]; 2023-01-11T21:41:23.5138951Z auto tmp1 = tmp0 == 0; 2023-01-11T21:41:23.5139089Z in_out_ptr0[0] = tmp1; 2023-01-11T21:41:23.5139195Z } 
2023-01-11T21:41:23.5139281Z } 2023-01-11T21:41:23.5139383Z { 2023-01-11T21:41:23.5139489Z { 2023-01-11T21:41:23.5139637Z auto tmp0 = out_ptr3[0]; 2023-01-11T21:41:23.5139782Z auto tmp1 = tmp0 == 0; 2023-01-11T21:41:23.5139922Z in_out_ptr1[0] = tmp1; 2023-01-11T21:41:23.5140028Z } 2023-01-11T21:41:23.5140117Z } 2023-01-11T21:41:23.5140219Z } 2023-01-11T21:41:23.5140363Z ''') 2023-01-11T21:41:23.5140372Z 2023-01-11T21:41:23.5140379Z 2023-01-11T21:41:23.5140540Z async_compile.wait(globals()) 2023-01-11T21:41:23.5140666Z del async_compile 2023-01-11T21:41:23.5140734Z 2023-01-11T21:41:23.5140862Z def call(args): 2023-01-11T21:41:23.5140983Z arg0_1, = args 2023-01-11T21:41:23.5141088Z args.clear() 2023-01-11T21:41:23.5141425Z buf0 = empty_strided((), (), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5141754Z buf1 = empty_strided((), (), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5142081Z buf2 = empty_strided((), (), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5142585Z buf3 = empty_strided((), (), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5142711Z buf4 = buf2; del buf2 # reuse 2023-01-11T21:41:23.5142821Z buf5 = buf3; del buf3 # reuse 2023-01-11T21:41:23.5143117Z kernel_cpp_0(c_void_p(buf4.data_ptr()), c_void_p(buf5.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.5143228Z del arg0_1 2023-01-11T21:41:23.5143374Z return (buf0, buf1, buf4, buf5, ) 2023-01-11T21:41:23.5143381Z 2023-01-11T21:41:23.5143391Z 2023-01-11T21:41:23.5143579Z if __name__ == "__main__": 2023-01-11T21:41:23.5143742Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5143913Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5144219Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5144375Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5144382Z 2023-01-11T21:41:23.5144387Z 2023-01-11T21:41:23.5144513Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5144593Z import torch 2023-01-11T21:41:23.5144683Z import random 2023-01-11T21:41:23.5144844Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5145017Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5145024Z 2023-01-11T21:41:23.5145175Z aten = torch.ops.aten 2023-01-11T21:41:23.5145473Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5145670Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5145690Z 2023-01-11T21:41:23.5145699Z 2023-01-11T21:41:23.5146021Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5146497Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5146769Z extern "C" void kernel(bool* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.5146990Z bool* __restrict__ in_out_ptr1, 2023-01-11T21:41:23.5147220Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5147432Z bool* __restrict__ out_ptr0, 2023-01-11T21:41:23.5147644Z bool* __restrict__ out_ptr1) 2023-01-11T21:41:23.5147780Z { 2023-01-11T21:41:23.5147962Z auto out_ptr3 = in_out_ptr0; 2023-01-11T21:41:23.5148159Z auto out_ptr2 = in_out_ptr1; 2023-01-11T21:41:23.5148387Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5148523Z { 2023-01-11T21:41:23.5148699Z #pragma omp for 2023-01-11T21:41:23.5148886Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.5149040Z { 2023-01-11T21:41:23.5149172Z { 2023-01-11T21:41:23.5149311Z { 2023-01-11T21:41:23.5149489Z bool 
tmp2 = 0; 2023-01-11T21:41:23.5149683Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:23.5149835Z { 2023-01-11T21:41:23.5149988Z { 2023-01-11T21:41:23.5150210Z auto tmp0 = in_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:23.5150440Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.5150640Z tmp2 = tmp2 || tmp1; 2023-01-11T21:41:23.5150792Z } 2023-01-11T21:41:23.5150941Z } 2023-01-11T21:41:23.5151126Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.5151263Z } 2023-01-11T21:41:23.5151395Z } 2023-01-11T21:41:23.5151529Z } 2023-01-11T21:41:23.5151674Z } 2023-01-11T21:41:23.5151943Z { 2023-01-11T21:41:23.5152086Z { 2023-01-11T21:41:23.5152251Z bool tmp2 = 0; 2023-01-11T21:41:23.5152417Z bool tmp5 = 0; 2023-01-11T21:41:23.5152652Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5152792Z { 2023-01-11T21:41:23.5153084Z #pragma omp for reduction(||:tmp2) reduction(||:tmp5) 2023-01-11T21:41:23.5153286Z for(long i0=0; i0<128; i0+=1) 2023-01-11T21:41:23.5153432Z { 2023-01-11T21:41:23.5153575Z { 2023-01-11T21:41:23.5153854Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.5154069Z auto tmp1 = std::isinf(tmp0); 2023-01-11T21:41:23.5154262Z auto tmp3 = tmp1 == 0; 2023-01-11T21:41:23.5154466Z auto tmp4 = tmp3 == 0; 2023-01-11T21:41:23.5154657Z tmp2 = tmp2 || tmp1; 2023-01-11T21:41:23.5154844Z tmp5 = tmp5 || tmp4; 2023-01-11T21:41:23.5155066Z } 2023-01-11T21:41:23.5155210Z } 2023-01-11T21:41:23.5155340Z } 2023-01-11T21:41:23.5155512Z out_ptr1[0] = tmp2; 2023-01-11T21:41:23.5155685Z out_ptr2[0] = tmp5; 2023-01-11T21:41:23.5155822Z } 2023-01-11T21:41:23.5155954Z } 2023-01-11T21:41:23.5156182Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5156322Z { 2023-01-11T21:41:23.5156486Z #pragma omp for 2023-01-11T21:41:23.5156672Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.5156808Z { 2023-01-11T21:41:23.5156958Z { 2023-01-11T21:41:23.5157098Z { 2023-01-11T21:41:23.5157276Z bool tmp5 = 0; 2023-01-11T21:41:23.5157455Z for(long i1=0; i1<16; i1+=1) 2023-01-11T21:41:23.5157600Z { 2023-01-11T21:41:23.5157746Z { 2023-01-11T21:41:23.5157968Z auto tmp0 = in_ptr0[i0 + (8*i1)]; 2023-01-11T21:41:23.5158214Z auto tmp1 = std::isinf(tmp0); 2023-01-11T21:41:23.5158422Z auto tmp2 = tmp1 == 0; 2023-01-11T21:41:23.5158661Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:23.5158901Z auto tmp4 = static_cast(tmp3); 2023-01-11T21:41:23.5159080Z tmp5 = tmp5 || tmp4; 2023-01-11T21:41:23.5159220Z } 2023-01-11T21:41:23.5159368Z } 2023-01-11T21:41:23.5159545Z out_ptr3[i0] = tmp5; 2023-01-11T21:41:23.5159695Z } 2023-01-11T21:41:23.5159829Z } 2023-01-11T21:41:23.5159943Z } 2023-01-11T21:41:23.5160120Z #pragma omp for 2023-01-11T21:41:23.5160303Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.5160446Z { 2023-01-11T21:41:23.5160584Z { 2023-01-11T21:41:23.5160731Z { 2023-01-11T21:41:23.5160946Z auto tmp0 = out_ptr3[i0]; 2023-01-11T21:41:23.5161128Z auto tmp1 = tmp0 == 0; 2023-01-11T21:41:23.5161329Z in_out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.5161479Z } 2023-01-11T21:41:23.5161626Z } 2023-01-11T21:41:23.5161771Z } 2023-01-11T21:41:23.5161949Z #pragma omp single 2023-01-11T21:41:23.5162092Z { 2023-01-11T21:41:23.5162221Z { 2023-01-11T21:41:23.5162363Z { 2023-01-11T21:41:23.5162561Z auto tmp0 = out_ptr2[0]; 2023-01-11T21:41:23.5162753Z auto tmp1 = tmp0 == 0; 2023-01-11T21:41:23.5162934Z in_out_ptr1[0] = tmp1; 2023-01-11T21:41:23.5163076Z } 2023-01-11T21:41:23.5163210Z } 2023-01-11T21:41:23.5163328Z } 2023-01-11T21:41:23.5163463Z } 2023-01-11T21:41:23.5163603Z } 2023-01-11T21:41:23.5163806Z ''') 2023-01-11T21:41:23.5163817Z 
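The triple-quoted block that just closed is one complete TorchInductor C++ kernel; the lines that follow are the generated Python wrapper module that allocates the output buffers and invokes it through ctypes. A minimal sketch of how a comparable dump can be produced outside this test suite, assuming the inductor backend and a made-up toy function f (the debug toggle is an assumption on my part, not something taken from this log):

# Hedged sketch: compile a toy function with the inductor backend so it emits
# a wrapper + C++ kernel similar to the ones dumped in this log.
# Assumption: torch._inductor.config.debug prints the generated code; if it does
# not in your build, setting TORCH_COMPILE_DEBUG=1 in the environment is another option.
import torch
import torch._inductor.config as inductor_config

inductor_config.debug = True  # assumed toggle for printing generated wrapper/kernels

def f(x):
    return (x + 1).sum()

compiled = torch.compile(f, backend="inductor")
print(compiled(torch.randn(16, 8)))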
2023-01-11T21:41:23.5163826Z 2023-01-11T21:41:23.5164120Z async_compile.wait(globals()) 2023-01-11T21:41:23.5164285Z del async_compile 2023-01-11T21:41:23.5164295Z 2023-01-11T21:41:23.5164461Z def call(args): 2023-01-11T21:41:23.5164597Z arg0_1, = args 2023-01-11T21:41:23.5164763Z args.clear() 2023-01-11T21:41:23.5165203Z buf0 = empty_strided((16, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5165613Z buf1 = empty_strided((), (), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5166020Z buf4 = empty_strided((), (), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5166450Z buf2 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5166642Z buf3 = buf2; del buf2 # reuse 2023-01-11T21:41:23.5166810Z buf5 = buf4; del buf4 # reuse 2023-01-11T21:41:23.5167293Z kernel_cpp_0(c_void_p(buf3.data_ptr()), c_void_p(buf5.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.5167451Z del arg0_1 2023-01-11T21:41:23.5167719Z return (buf0, buf1, buf3, buf5, ) 2023-01-11T21:41:23.5167734Z 2023-01-11T21:41:23.5167743Z 2023-01-11T21:41:23.5167918Z if __name__ == "__main__": 2023-01-11T21:41:23.5168189Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5168483Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5168929Z arg0_1 = rand_strided((16, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5169172Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5169182Z 2023-01-11T21:41:23.5169338Z ok (3.432s) 2023-01-11T21:41:23.5170538Z test_arange1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5170838Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5171408Z [2023-01-11 21:24:25,416] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 19 2023-01-11T21:41:23.5172035Z [2023-01-11 21:24:27,020] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 19 2023-01-11T21:41:23.5172046Z 2023-01-11T21:41:23.5172265Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5172430Z import torch 2023-01-11T21:41:23.5172592Z import random 2023-01-11T21:41:23.5172842Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5173129Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5173139Z 2023-01-11T21:41:23.5173320Z aten = torch.ops.aten 2023-01-11T21:41:23.5173649Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5173867Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5173877Z 2023-01-11T21:41:23.5173887Z 2023-01-11T21:41:23.5174209Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5174716Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5175000Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5175219Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.5175421Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.5175557Z { 2023-01-11T21:41:23.5175791Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5175924Z { 2023-01-11T21:41:23.5176099Z #pragma omp for 2023-01-11T21:41:23.5176284Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.5176408Z { 2023-01-11T21:41:23.5176548Z { 2023-01-11T21:41:23.5176688Z { 2023-01-11T21:41:23.5176895Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.5177129Z auto tmp1 = static_cast(i0); 2023-01-11T21:41:23.5177328Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.5177610Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.5177744Z } 2023-01-11T21:41:23.5177885Z } 2023-01-11T21:41:23.5178026Z } 2023-01-11T21:41:23.5178198Z #pragma omp for 2023-01-11T21:41:23.5178381Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.5178518Z { 2023-01-11T21:41:23.5178693Z #pragma GCC ivdep 2023-01-11T21:41:23.5178859Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:23.5179000Z { 2023-01-11T21:41:23.5179141Z { 2023-01-11T21:41:23.5179292Z { 2023-01-11T21:41:23.5179513Z auto tmp0 = out_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:23.5179764Z auto tmp1 = static_cast(10 + i1); 2023-01-11T21:41:23.5180013Z auto tmp2 = static_cast(tmp1); 2023-01-11T21:41:23.5180206Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:23.5180472Z out_ptr1[i1 + (8*i0)] = tmp3; 2023-01-11T21:41:23.5180626Z } 2023-01-11T21:41:23.5180775Z } 2023-01-11T21:41:23.5180913Z } 2023-01-11T21:41:23.5181052Z } 2023-01-11T21:41:23.5181182Z } 2023-01-11T21:41:23.5181309Z } 2023-01-11T21:41:23.5181493Z ''') 2023-01-11T21:41:23.5181503Z 2023-01-11T21:41:23.5181511Z 2023-01-11T21:41:23.5181716Z async_compile.wait(globals()) 2023-01-11T21:41:23.5181878Z del async_compile 2023-01-11T21:41:23.5181886Z 2023-01-11T21:41:23.5182055Z def call(args): 2023-01-11T21:41:23.5182205Z arg0_1, = args 2023-01-11T21:41:23.5182490Z args.clear() 2023-01-11T21:41:23.5182924Z buf0 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5183369Z buf1 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5183745Z 
kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.5183894Z del arg0_1 2023-01-11T21:41:23.5184073Z return (buf0, buf1, ) 2023-01-11T21:41:23.5184083Z 2023-01-11T21:41:23.5184092Z 2023-01-11T21:41:23.5184261Z if __name__ == "__main__": 2023-01-11T21:41:23.5184531Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5184824Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5185264Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5185510Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5185520Z 2023-01-11T21:41:23.5185675Z ok (1.667s) 2023-01-11T21:41:23.5186922Z test_arange2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5187221Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5187840Z [2023-01-11 21:24:27,059] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 20 2023-01-11T21:41:23.5188462Z [2023-01-11 21:24:28,741] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 20 2023-01-11T21:41:23.5188473Z 2023-01-11T21:41:23.5188692Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5188848Z import torch 2023-01-11T21:41:23.5188996Z import random 2023-01-11T21:41:23.5189262Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5189557Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5189567Z 2023-01-11T21:41:23.5189747Z aten = torch.ops.aten 2023-01-11T21:41:23.5190057Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5190275Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5190284Z 2023-01-11T21:41:23.5190292Z 2023-01-11T21:41:23.5190726Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5191218Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5191464Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.5191670Z long* __restrict__ out_ptr0) 2023-01-11T21:41:23.5191800Z { 2023-01-11T21:41:23.5192030Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5192168Z { 2023-01-11T21:41:23.5192341Z #pragma omp for 2023-01-11T21:41:23.5192534Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.5192666Z { 2023-01-11T21:41:23.5192845Z #pragma GCC ivdep 2023-01-11T21:41:23.5193032Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:23.5193168Z { 2023-01-11T21:41:23.5193310Z { 2023-01-11T21:41:23.5193451Z { 2023-01-11T21:41:23.5193663Z auto tmp0 = in_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:23.5194064Z auto tmp1 = static_cast(i1); 2023-01-11T21:41:23.5194282Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5194487Z out_ptr0[i1 + (8*i0)] = tmp2; 2023-01-11T21:41:23.5194639Z } 2023-01-11T21:41:23.5194782Z } 2023-01-11T21:41:23.5194924Z } 2023-01-11T21:41:23.5195069Z } 2023-01-11T21:41:23.5195188Z } 2023-01-11T21:41:23.5195323Z } 2023-01-11T21:41:23.5195511Z ''') 2023-01-11T21:41:23.5195520Z 2023-01-11T21:41:23.5195529Z 2023-01-11T21:41:23.5195739Z async_compile.wait(globals()) 2023-01-11T21:41:23.5195910Z del async_compile 
2023-01-11T21:41:23.5195919Z 2023-01-11T21:41:23.5196086Z def call(args): 2023-01-11T21:41:23.5196240Z arg0_1, = args 2023-01-11T21:41:23.5196382Z args.clear() 2023-01-11T21:41:23.5196845Z buf0 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.5197162Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.5197324Z del arg0_1 2023-01-11T21:41:23.5197491Z return (buf0, ) 2023-01-11T21:41:23.5197500Z 2023-01-11T21:41:23.5197509Z 2023-01-11T21:41:23.5197670Z if __name__ == "__main__": 2023-01-11T21:41:23.5197932Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5198236Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5198673Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.5198917Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5198926Z 2023-01-11T21:41:23.5199072Z ok (1.719s) 2023-01-11T21:41:23.5200298Z test_arange3_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5200589Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5201204Z [2023-01-11 21:24:28,782] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 21 2023-01-11T21:41:23.5201842Z [2023-01-11 21:24:30,427] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 21 2023-01-11T21:41:23.5201852Z 2023-01-11T21:41:23.5202075Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5202234Z import torch 2023-01-11T21:41:23.5202377Z import random 2023-01-11T21:41:23.5202659Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5202943Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5202954Z 2023-01-11T21:41:23.5203137Z aten = torch.ops.aten 2023-01-11T21:41:23.5203457Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5203664Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5203753Z 2023-01-11T21:41:23.5203768Z 2023-01-11T21:41:23.5204079Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5204592Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5204848Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5205077Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.5205221Z { 2023-01-11T21:41:23.5205447Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5205587Z { 2023-01-11T21:41:23.5205760Z #pragma omp for 2023-01-11T21:41:23.5205941Z for(long i0=0; i0<14; i0+=1) 2023-01-11T21:41:23.5206063Z { 2023-01-11T21:41:23.5206206Z { 2023-01-11T21:41:23.5206352Z { 2023-01-11T21:41:23.5206551Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.5206792Z auto tmp1 = static_cast(4*i0); 2023-01-11T21:41:23.5207037Z auto tmp2 = static_cast(tmp1); 2023-01-11T21:41:23.5207345Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:23.5207513Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:23.5207656Z } 2023-01-11T21:41:23.5207800Z } 2023-01-11T21:41:23.5207946Z } 2023-01-11T21:41:23.5208084Z } 2023-01-11T21:41:23.5208218Z } 2023-01-11T21:41:23.5208378Z ''') 2023-01-11T21:41:23.5208406Z 
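In the arange kernels dumped above, the arange tensor is never materialized: its value is recomputed from the loop index inside the kernel (static_cast of i1, of 10 + i1, of 4*i0). A rough eager-mode equivalent of the 14-element kernel that just closed (out[i0] = in[i0] + 4*i0), written purely as an illustration and not as the test's actual body:

# Illustrative eager-mode counterpart of the kernel above: out[i0] = in[i0] + float(4*i0)
import torch

x = torch.randn(14)
out = x + torch.arange(0, 56, 4, dtype=torch.float32)  # 4*i0 for i0 in [0, 14)
assert out.shape == (14,)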
2023-01-11T21:41:23.5208414Z 2023-01-11T21:41:23.5208600Z async_compile.wait(globals()) 2023-01-11T21:41:23.5208767Z del async_compile 2023-01-11T21:41:23.5208776Z 2023-01-11T21:41:23.5208944Z def call(args): 2023-01-11T21:41:23.5209102Z arg0_1, = args 2023-01-11T21:41:23.5209262Z args.clear() 2023-01-11T21:41:23.5209705Z buf0 = empty_strided((14, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5210010Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.5210152Z del arg0_1 2023-01-11T21:41:23.5210320Z return (buf0, ) 2023-01-11T21:41:23.5210334Z 2023-01-11T21:41:23.5210347Z 2023-01-11T21:41:23.5210519Z if __name__ == "__main__": 2023-01-11T21:41:23.5210777Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5211065Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5211522Z arg0_1 = rand_strided((14, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5211768Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5211778Z 2023-01-11T21:41:23.5211930Z ok (1.687s) 2023-01-11T21:41:23.5213132Z test_arange4_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5213403Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5214022Z [2023-01-11 21:24:30,464] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 22 2023-01-11T21:41:23.5214662Z [2023-01-11 21:24:32,074] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 22 2023-01-11T21:41:23.5214673Z 2023-01-11T21:41:23.5214899Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5215059Z import torch 2023-01-11T21:41:23.5215219Z import random 2023-01-11T21:41:23.5215492Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5215787Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5215797Z 2023-01-11T21:41:23.5215952Z aten = torch.ops.aten 2023-01-11T21:41:23.5216258Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5216466Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5216477Z 2023-01-11T21:41:23.5216486Z 2023-01-11T21:41:23.5216799Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5217372Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5217641Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5217856Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.5217989Z { 2023-01-11T21:41:23.5218201Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5218341Z { 2023-01-11T21:41:23.5218509Z #pragma omp for 2023-01-11T21:41:23.5218689Z for(long i0=0; i0<1024; i0+=1) 2023-01-11T21:41:23.5218823Z { 2023-01-11T21:41:23.5218965Z { 2023-01-11T21:41:23.5219104Z { 2023-01-11T21:41:23.5219291Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.5219671Z auto tmp1 = static_cast(512 + ((-1)*i0)); 2023-01-11T21:41:23.5219906Z auto tmp2 = static_cast(tmp1); 2023-01-11T21:41:23.5220208Z auto tmp3 = tmp0 - tmp2; 2023-01-11T21:41:23.5220473Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:23.5220624Z } 2023-01-11T21:41:23.5220761Z } 
2023-01-11T21:41:23.5220885Z } 2023-01-11T21:41:23.5221023Z } 2023-01-11T21:41:23.5221156Z } 2023-01-11T21:41:23.5221325Z ''') 2023-01-11T21:41:23.5221334Z 2023-01-11T21:41:23.5221342Z 2023-01-11T21:41:23.5221544Z async_compile.wait(globals()) 2023-01-11T21:41:23.5221703Z del async_compile 2023-01-11T21:41:23.5221712Z 2023-01-11T21:41:23.5221870Z def call(args): 2023-01-11T21:41:23.5222004Z arg0_1, = args 2023-01-11T21:41:23.5222160Z args.clear() 2023-01-11T21:41:23.5222799Z buf0 = empty_strided((1024, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5223105Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.5223246Z del arg0_1 2023-01-11T21:41:23.5223392Z return (buf0, ) 2023-01-11T21:41:23.5223402Z 2023-01-11T21:41:23.5223410Z 2023-01-11T21:41:23.5223561Z if __name__ == "__main__": 2023-01-11T21:41:23.5223819Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5224074Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5224505Z arg0_1 = rand_strided((1024, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5224735Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5224744Z 2023-01-11T21:41:23.5224877Z ok (1.646s) 2023-01-11T21:41:23.5225997Z test_argmax_argmin1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5226276Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5226932Z [2023-01-11 21:24:32,115] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 23 2023-01-11T21:41:23.5227576Z [2023-01-11 21:24:33,709] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 23 2023-01-11T21:41:23.5227586Z 2023-01-11T21:41:23.5227806Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5227944Z import torch 2023-01-11T21:41:23.5228113Z import random 2023-01-11T21:41:23.5228385Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5228672Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5228683Z 2023-01-11T21:41:23.5228863Z aten = torch.ops.aten 2023-01-11T21:41:23.5229186Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5229406Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5229415Z 2023-01-11T21:41:23.5229424Z 2023-01-11T21:41:23.5229733Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5230208Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5230615Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5230836Z long* __restrict__ out_ptr0, 2023-01-11T21:41:23.5231045Z long* __restrict__ out_ptr1) 2023-01-11T21:41:23.5231180Z { 2023-01-11T21:41:23.5231320Z { 2023-01-11T21:41:23.5231469Z { 2023-01-11T21:41:23.5231726Z struct IndexValue_1 {size_t index; float value;}; 2023-01-11T21:41:23.5232276Z IndexValue_1 tmp1{0, -std::numeric_limits::infinity()}; 2023-01-11T21:41:23.5232601Z #pragma omp declare reduction(argmax : struct IndexValue_1 :\ 2023-01-11T21:41:23.5232945Z omp_out.value = omp_in.value < omp_out.value ? 
omp_out.value : omp_in.value,\ 2023-01-11T21:41:23.5233283Z omp_out.index = omp_in.value < omp_out.value ? omp_out.index : omp_in.index)\ 2023-01-11T21:41:23.5233898Z initializer(omp_priv = {0, -std::numeric_limits::infinity()}) 2023-01-11T21:41:23.5234256Z struct IndexValue_2 {size_t index; float value;}; 2023-01-11T21:41:23.5234449Z IndexValue_2 tmp2{0, std::numeric_limits::infinity()}; 2023-01-11T21:41:23.5234608Z #pragma omp declare reduction(argmin : struct IndexValue_2 :\ 2023-01-11T21:41:23.5234796Z omp_out.value = omp_in.value > omp_out.value ? omp_out.value : omp_in.value,\ 2023-01-11T21:41:23.5234987Z omp_out.index = omp_in.value > omp_out.value ? omp_out.index : omp_in.index)\ 2023-01-11T21:41:23.5235195Z initializer(omp_priv = {0, std::numeric_limits::infinity()}) 2023-01-11T21:41:23.5235358Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5235457Z { 2023-01-11T21:41:23.5235643Z #pragma omp for reduction(argmax:tmp1) reduction(argmin:tmp2) 2023-01-11T21:41:23.5235768Z for(long i0=0; i0<524288; i0+=1) 2023-01-11T21:41:23.5235847Z { 2023-01-11T21:41:23.5235953Z { 2023-01-11T21:41:23.5236091Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.5236221Z if (tmp1.value < tmp0) { 2023-01-11T21:41:23.5236373Z tmp1.index = i0; tmp1.value = tmp0; 2023-01-11T21:41:23.5236472Z } 2023-01-11T21:41:23.5236609Z if (tmp2.value > tmp0) { 2023-01-11T21:41:23.5236758Z tmp2.index = i0; tmp2.value = tmp0; 2023-01-11T21:41:23.5236852Z } 2023-01-11T21:41:23.5236957Z } 2023-01-11T21:41:23.5237054Z } 2023-01-11T21:41:23.5237146Z } 2023-01-11T21:41:23.5237272Z out_ptr0[0] = tmp1.index; 2023-01-11T21:41:23.5237388Z out_ptr1[0] = tmp2.index; 2023-01-11T21:41:23.5237457Z } 2023-01-11T21:41:23.5237550Z } 2023-01-11T21:41:23.5237637Z } 2023-01-11T21:41:23.5237781Z ''') 2023-01-11T21:41:23.5237790Z 2023-01-11T21:41:23.5237801Z 2023-01-11T21:41:23.5237948Z async_compile.wait(globals()) 2023-01-11T21:41:23.5238052Z del async_compile 2023-01-11T21:41:23.5238060Z 2023-01-11T21:41:23.5238163Z def call(args): 2023-01-11T21:41:23.5238247Z arg0_1, = args 2023-01-11T21:41:23.5238349Z args.clear() 2023-01-11T21:41:23.5238612Z buf0 = empty_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.5238879Z buf1 = empty_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.5239118Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.5239223Z del arg0_1 2023-01-11T21:41:23.5239336Z return (buf0, buf1, ) 2023-01-11T21:41:23.5239344Z 2023-01-11T21:41:23.5239350Z 2023-01-11T21:41:23.5239466Z if __name__ == "__main__": 2023-01-11T21:41:23.5239612Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5239787Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5240117Z arg0_1 = rand_strided((8, 256, 256), (65536, 256, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5240365Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5240373Z 2023-01-11T21:41:23.5240472Z ok (1.639s) 2023-01-11T21:41:23.5241151Z test_argmax_argmin2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5241340Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5241742Z [2023-01-11 21:24:33,760] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 24 2023-01-11T21:41:23.5242196Z [2023-01-11 21:24:35,321] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 24 2023-01-11T21:41:23.5242212Z 2023-01-11T21:41:23.5242351Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5242430Z import torch 2023-01-11T21:41:23.5242544Z import random 2023-01-11T21:41:23.5242706Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5242879Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5242887Z 2023-01-11T21:41:23.5243002Z aten = torch.ops.aten 2023-01-11T21:41:23.5243187Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5243320Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5243328Z 2023-01-11T21:41:23.5243334Z 2023-01-11T21:41:23.5243536Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5243817Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5243993Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5244139Z long* __restrict__ out_ptr0, 2023-01-11T21:41:23.5244269Z long* __restrict__ out_ptr1, 2023-01-11T21:41:23.5244402Z long* __restrict__ out_ptr2, 2023-01-11T21:41:23.5244533Z long* __restrict__ out_ptr3) 2023-01-11T21:41:23.5244622Z { 2023-01-11T21:41:23.5244742Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5244828Z { 2023-01-11T21:41:23.5244931Z #pragma omp for 2023-01-11T21:41:23.5245040Z for(long i0=0; i0<144; i0+=1) 2023-01-11T21:41:23.5245127Z { 2023-01-11T21:41:23.5245219Z { 2023-01-11T21:41:23.5245292Z { 2023-01-11T21:41:23.5245461Z struct IndexValue_3 {size_t index; float value;}; 2023-01-11T21:41:23.5245827Z IndexValue_3 tmp1{0, -std::numeric_limits::infinity()}; 2023-01-11T21:41:23.5246020Z #pragma omp declare reduction(argmax : struct IndexValue_3 :\ 2023-01-11T21:41:23.5246248Z omp_out.value = omp_in.value < omp_out.value ? omp_out.value : omp_in.value,\ 2023-01-11T21:41:23.5246481Z omp_out.index = omp_in.value < omp_out.value ? omp_out.index : omp_in.index)\ 2023-01-11T21:41:23.5246857Z initializer(omp_priv = {0, -std::numeric_limits::infinity()}) 2023-01-11T21:41:23.5247035Z struct IndexValue_4 {size_t index; float value;}; 2023-01-11T21:41:23.5247234Z IndexValue_4 tmp2{0, std::numeric_limits::infinity()}; 2023-01-11T21:41:23.5247422Z #pragma omp declare reduction(argmin : struct IndexValue_4 :\ 2023-01-11T21:41:23.5247635Z omp_out.value = omp_in.value > omp_out.value ? omp_out.value : omp_in.value,\ 2023-01-11T21:41:23.5247837Z omp_out.index = omp_in.value > omp_out.value ? 
omp_out.index : omp_in.index)\ 2023-01-11T21:41:23.5248043Z initializer(omp_priv = {0, std::numeric_limits::infinity()}) 2023-01-11T21:41:23.5248257Z for(long i1=0; i1<144; i1+=1) 2023-01-11T21:41:23.5248354Z { 2023-01-11T21:41:23.5248448Z { 2023-01-11T21:41:23.5248599Z auto tmp0 = in_ptr0[i0 + (144*i1)]; 2023-01-11T21:41:23.5248718Z if (tmp1.value < tmp0) { 2023-01-11T21:41:23.5248872Z tmp1.index = i1; tmp1.value = tmp0; 2023-01-11T21:41:23.5248975Z } 2023-01-11T21:41:23.5249109Z if (tmp2.value > tmp0) { 2023-01-11T21:41:23.5249261Z tmp2.index = i1; tmp2.value = tmp0; 2023-01-11T21:41:23.5249358Z } 2023-01-11T21:41:23.5249451Z } 2023-01-11T21:41:23.5249534Z } 2023-01-11T21:41:23.5249673Z out_ptr0[i0] = tmp1.index; 2023-01-11T21:41:23.5249803Z out_ptr1[i0] = tmp2.index; 2023-01-11T21:41:23.5249897Z } 2023-01-11T21:41:23.5250052Z } 2023-01-11T21:41:23.5250149Z } 2023-01-11T21:41:23.5250268Z #pragma omp for 2023-01-11T21:41:23.5250367Z for(long i0=0; i0<144; i0+=1) 2023-01-11T21:41:23.5250455Z { 2023-01-11T21:41:23.5250542Z { 2023-01-11T21:41:23.5250631Z { 2023-01-11T21:41:23.5250806Z struct IndexValue_5 {size_t index; float value;}; 2023-01-11T21:41:23.5251177Z IndexValue_5 tmp1{0, -std::numeric_limits::infinity()}; 2023-01-11T21:41:23.5251375Z #pragma omp declare reduction(argmax : struct IndexValue_5 :\ 2023-01-11T21:41:23.5274538Z omp_out.value = omp_in.value < omp_out.value ? omp_out.value : omp_in.value,\ 2023-01-11T21:41:23.5274769Z omp_out.index = omp_in.value < omp_out.value ? omp_out.index : omp_in.index)\ 2023-01-11T21:41:23.5275192Z initializer(omp_priv = {0, -std::numeric_limits::infinity()}) 2023-01-11T21:41:23.5275374Z struct IndexValue_6 {size_t index; float value;}; 2023-01-11T21:41:23.5275578Z IndexValue_6 tmp2{0, std::numeric_limits::infinity()}; 2023-01-11T21:41:23.5275774Z #pragma omp declare reduction(argmin : struct IndexValue_6 :\ 2023-01-11T21:41:23.5275983Z omp_out.value = omp_in.value > omp_out.value ? omp_out.value : omp_in.value,\ 2023-01-11T21:41:23.5276211Z omp_out.index = omp_in.value > omp_out.value ? 
omp_out.index : omp_in.index)\ 2023-01-11T21:41:23.5276421Z initializer(omp_priv = {0, std::numeric_limits::infinity()}) 2023-01-11T21:41:23.5276567Z for(long i1=0; i1<144; i1+=1) 2023-01-11T21:41:23.5276661Z { 2023-01-11T21:41:23.5276761Z { 2023-01-11T21:41:23.5276895Z auto tmp0 = in_ptr0[i1 + (144*i0)]; 2023-01-11T21:41:23.5277055Z if (tmp1.value < tmp0) { 2023-01-11T21:41:23.5277228Z tmp1.index = i1; tmp1.value = tmp0; 2023-01-11T21:41:23.5277339Z } 2023-01-11T21:41:23.5277469Z if (tmp2.value > tmp0) { 2023-01-11T21:41:23.5277622Z tmp2.index = i1; tmp2.value = tmp0; 2023-01-11T21:41:23.5277719Z } 2023-01-11T21:41:23.5277810Z } 2023-01-11T21:41:23.5277902Z } 2023-01-11T21:41:23.5278039Z out_ptr2[i0] = tmp1.index; 2023-01-11T21:41:23.5278168Z out_ptr3[i0] = tmp2.index; 2023-01-11T21:41:23.5278244Z } 2023-01-11T21:41:23.5278330Z } 2023-01-11T21:41:23.5278412Z } 2023-01-11T21:41:23.5278498Z } 2023-01-11T21:41:23.5278574Z } 2023-01-11T21:41:23.5278706Z ''') 2023-01-11T21:41:23.5278717Z 2023-01-11T21:41:23.5278723Z 2023-01-11T21:41:23.5279003Z async_compile.wait(globals()) 2023-01-11T21:41:23.5279081Z del async_compile 2023-01-11T21:41:23.5279087Z 2023-01-11T21:41:23.5279200Z def call(args): 2023-01-11T21:41:23.5279295Z arg0_1, = args 2023-01-11T21:41:23.5279393Z args.clear() 2023-01-11T21:41:23.5279679Z buf0 = empty_strided((144, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.5279953Z buf1 = empty_strided((144, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.5280231Z buf2 = empty_strided((144, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.5280484Z buf3 = empty_strided((144, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.5280773Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr())) 2023-01-11T21:41:23.5280874Z del arg0_1 2023-01-11T21:41:23.5280991Z return (buf0, buf1, buf2, buf3, ) 2023-01-11T21:41:23.5280997Z 2023-01-11T21:41:23.5281008Z 2023-01-11T21:41:23.5281200Z if __name__ == "__main__": 2023-01-11T21:41:23.5281366Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5281536Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5281837Z arg0_1 = rand_strided((144, 144), (144, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5281974Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5281995Z 2023-01-11T21:41:23.5282068Z ok (1.609s) 2023-01-11T21:41:23.5282231Z test_argmax_argmin3_cpu (__main__.CpuTests) ... skip: 2023-01-11T21:41:23.5282444Z FIXME: In the case of having equally max/min elements, our implementation returns 2023-01-11T21:41:23.5282590Z the last index instead of the first one 2023-01-11T21:41:23.5282684Z (0.001s) 2023-01-11T21:41:23.5283393Z test_as_strided_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5283576Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5283952Z [2023-01-11 21:24:35,356] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 25 2023-01-11T21:41:23.5284325Z [2023-01-11 21:24:36,913] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 25 2023-01-11T21:41:23.5284352Z 2023-01-11T21:41:23.5284470Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5284572Z import torch 2023-01-11T21:41:23.5284674Z import random 2023-01-11T21:41:23.5284826Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5284998Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5285005Z 2023-01-11T21:41:23.5285112Z aten = torch.ops.aten 2023-01-11T21:41:23.5285314Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5285429Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5285436Z 2023-01-11T21:41:23.5285459Z 2023-01-11T21:41:23.5285641Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5285925Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5286090Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.5286232Z const float* __restrict__ in_ptr0) 2023-01-11T21:41:23.5286323Z { 2023-01-11T21:41:23.5286470Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5286558Z { 2023-01-11T21:41:23.5286651Z #pragma omp for 2023-01-11T21:41:23.5286773Z for(long i0=0; i0<512; i0+=1) 2023-01-11T21:41:23.5286859Z { 2023-01-11T21:41:23.5287062Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.5287272Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.5287486Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5287676Z auto tmp3 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.5287780Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:23.5287919Z tmp4.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.5288006Z } 2023-01-11T21:41:23.5288144Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.5288268Z for(long i0=4096; i0<4096; i0+=1) 2023-01-11T21:41:23.5288361Z { 2023-01-11T21:41:23.5288477Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.5288602Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.5288724Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5288858Z auto tmp3 = static_cast(2); 2023-01-11T21:41:23.5288976Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:23.5289090Z in_out_ptr0[i0] = tmp4; 2023-01-11T21:41:23.5289179Z } 2023-01-11T21:41:23.5289333Z } 2023-01-11T21:41:23.5289409Z } 2023-01-11T21:41:23.5289532Z ''') 2023-01-11T21:41:23.5289540Z 2023-01-11T21:41:23.5289545Z 2023-01-11T21:41:23.5289680Z async_compile.wait(globals()) 2023-01-11T21:41:23.5289791Z del async_compile 2023-01-11T21:41:23.5289798Z 2023-01-11T21:41:23.5289897Z def call(args): 2023-01-11T21:41:23.5290000Z arg0_1, = args 2023-01-11T21:41:23.5290118Z args.clear() 2023-01-11T21:41:23.5290441Z buf0 = empty_strided((64, 64), (64, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5290611Z buf1 = as_strided(buf0, (8, 8, 64), (512, 64, 1)); del buf0 # reuse 2023-01-11T21:41:23.5290832Z kernel_cpp_0(c_void_p(buf1.data_ptr()), c_void_p(arg0_1.data_ptr())) 2023-01-11T21:41:23.5291013Z return (as_strided(arg0_1, (8, 8, 64), (512, 64, 1)), buf1, ) 2023-01-11T21:41:23.5291021Z 2023-01-11T21:41:23.5291028Z 
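Note that the as_strided itself never reaches the C++ kernel: the wrapper above only reinterprets strides (as_strided(buf0, (8, 8, 64), (512, 64, 1)) and as_strided(arg0_1, ...)), while the kernel is a plain vectorized add of 1 and then 2 with an empty scalar tail loop (for(long i0=4096; i0<4096; ...)). A rough eager-mode equivalent, written as an illustration rather than the test's actual body:

# Illustrative eager-mode counterpart of the wrapper above:
# re-view a (64, 64) input as (8, 8, 64) and add 1 and then 2.
import torch

x = torch.randn(64, 64)
view = torch.as_strided(x, (8, 8, 64), (512, 64, 1))
out = view + 1 + 2
assert out.shape == (8, 8, 64)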
2023-01-11T21:41:23.5291153Z if __name__ == "__main__": 2023-01-11T21:41:23.5291342Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5291534Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5291849Z arg0_1 = rand_strided((64, 64), (64, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5291972Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5291994Z 2023-01-11T21:41:23.5292065Z ok (1.591s) 2023-01-11T21:41:23.5292727Z test_as_strided_scatter_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5292914Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5293329Z [2023-01-11 21:24:36,946] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 26 2023-01-11T21:41:23.5293757Z [2023-01-11 21:24:38,534] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 26 2023-01-11T21:41:23.5293769Z 2023-01-11T21:41:23.5293917Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5294032Z import torch 2023-01-11T21:41:23.5294137Z import random 2023-01-11T21:41:23.5294322Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5294496Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5294505Z 2023-01-11T21:41:23.5294622Z aten = torch.ops.aten 2023-01-11T21:41:23.5294812Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5294950Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5294957Z 2023-01-11T21:41:23.5294964Z 2023-01-11T21:41:23.5295179Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5295482Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5295665Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5295902Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.5296028Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.5296118Z { 2023-01-11T21:41:23.5296260Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5296347Z { 2023-01-11T21:41:23.5296457Z #pragma omp for 2023-01-11T21:41:23.5296578Z for(long i0=0; i0<1280; i0+=1) 2023-01-11T21:41:23.5296668Z { 2023-01-11T21:41:23.5296850Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.5297042Z auto tmp1 = at::vec::Vectorized(static_cast(8)); 2023-01-11T21:41:23.5297161Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.5297351Z auto tmp3 = at::vec::Vectorized(static_cast(10)); 2023-01-11T21:41:23.5297473Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:23.5297604Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.5297758Z } 2023-01-11T21:41:23.5297886Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.5298010Z for(long i0=10240; i0<10240; i0+=1) 2023-01-11T21:41:23.5298099Z { 2023-01-11T21:41:23.5298221Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.5298364Z auto tmp1 = static_cast(8); 2023-01-11T21:41:23.5298486Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.5298633Z auto tmp3 = static_cast(10); 2023-01-11T21:41:23.5298736Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:23.5298843Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:23.5298938Z } 2023-01-11T21:41:23.5299042Z 
#pragma omp for 2023-01-11T21:41:23.5299159Z for(long i0=0; i0<5120; i0+=1) 2023-01-11T21:41:23.5299250Z { 2023-01-11T21:41:23.5299346Z { 2023-01-11T21:41:23.5299421Z { 2023-01-11T21:41:23.5299553Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:23.5299710Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.5299843Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.5299984Z auto tmp3 = static_cast(4); 2023-01-11T21:41:23.5300204Z auto tmp4 = tmp2 - tmp3; 2023-01-11T21:41:23.5300332Z out_ptr0[2*i0] = tmp4; 2023-01-11T21:41:23.5300411Z } 2023-01-11T21:41:23.5300510Z } 2023-01-11T21:41:23.5300608Z } 2023-01-11T21:41:23.5300700Z } 2023-01-11T21:41:23.5300782Z } 2023-01-11T21:41:23.5300902Z ''') 2023-01-11T21:41:23.5300912Z 2023-01-11T21:41:23.5300919Z 2023-01-11T21:41:23.5301046Z async_compile.wait(globals()) 2023-01-11T21:41:23.5301133Z del async_compile 2023-01-11T21:41:23.5301140Z 2023-01-11T21:41:23.5301235Z def call(args): 2023-01-11T21:41:23.5301350Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.5301453Z args.clear() 2023-01-11T21:41:23.5301765Z buf0 = empty_strided((10, 1024), (1024, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5302021Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.5302123Z del arg0_1 2023-01-11T21:41:23.5302202Z del arg1_1 2023-01-11T21:41:23.5302305Z return (buf0, ) 2023-01-11T21:41:23.5302313Z 2023-01-11T21:41:23.5302318Z 2023-01-11T21:41:23.5302574Z if __name__ == "__main__": 2023-01-11T21:41:23.5302726Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5302886Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5303196Z arg0_1 = rand_strided((10, 1024), (1024, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5303494Z arg1_1 = rand_strided((10, 512), (512, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5303661Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.5303669Z 2023-01-11T21:41:23.5303741Z ok (1.622s) 2023-01-11T21:41:23.5304451Z test_avg_pool2d1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5304755Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5305156Z [2023-01-11 21:24:38,555] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 27 2023-01-11T21:41:23.5305547Z [2023-01-11 21:24:40,145] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 27 2023-01-11T21:41:23.5305558Z 2023-01-11T21:41:23.5305701Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5305801Z import torch 2023-01-11T21:41:23.5305908Z import random 2023-01-11T21:41:23.5306078Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5306346Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5306356Z 2023-01-11T21:41:23.5306443Z aten = torch.ops.aten 2023-01-11T21:41:23.5306641Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5306775Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5306783Z 2023-01-11T21:41:23.5306789Z 2023-01-11T21:41:23.5307003Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5307296Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5307468Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5307612Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.5307698Z { 2023-01-11T21:41:23.5307821Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5307907Z { 2023-01-11T21:41:23.5308021Z #pragma omp for 2023-01-11T21:41:23.5308140Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.5308232Z { 2023-01-11T21:41:23.5308364Z #pragma GCC ivdep 2023-01-11T21:41:23.5308483Z for(long i1=0; i1<7; i1+=1) 2023-01-11T21:41:23.5308565Z { 2023-01-11T21:41:23.5308684Z #pragma GCC ivdep 2023-01-11T21:41:23.5308802Z for(long i2=0; i2<7; i2+=1) 2023-01-11T21:41:23.5308896Z { 2023-01-11T21:41:23.5308995Z { 2023-01-11T21:41:23.5309090Z { 2023-01-11T21:41:23.5309221Z auto tmp0 = in_ptr0[(2*i2) + (32*i1) + (256*i0)]; 2023-01-11T21:41:23.5309388Z auto tmp1 = in_ptr0[1 + (2*i2) + (32*i1) + (256*i0)]; 2023-01-11T21:41:23.5309546Z auto tmp3 = in_ptr0[2 + (2*i2) + (32*i1) + (256*i0)]; 2023-01-11T21:41:23.5309701Z auto tmp5 = in_ptr0[16 + (2*i2) + (32*i1) + (256*i0)]; 2023-01-11T21:41:23.5309857Z auto tmp7 = in_ptr0[17 + (2*i2) + (32*i1) + (256*i0)]; 2023-01-11T21:41:23.5310012Z auto tmp9 = in_ptr0[18 + (2*i2) + (32*i1) + (256*i0)]; 2023-01-11T21:41:23.5310172Z auto tmp11 = in_ptr0[32 + (2*i2) + (32*i1) + (256*i0)]; 2023-01-11T21:41:23.5310340Z auto tmp13 = in_ptr0[33 + (2*i2) + (32*i1) + (256*i0)]; 2023-01-11T21:41:23.5310501Z auto tmp15 = in_ptr0[34 + (2*i2) + (32*i1) + (256*i0)]; 2023-01-11T21:41:23.5310618Z auto tmp2 = tmp1 + tmp0; 2023-01-11T21:41:23.5310753Z auto tmp4 = tmp3 + tmp2; 2023-01-11T21:41:23.5310887Z auto tmp6 = tmp5 + tmp4; 2023-01-11T21:41:23.5311016Z auto tmp8 = tmp7 + tmp6; 2023-01-11T21:41:23.5311156Z auto tmp10 = tmp9 + tmp8; 2023-01-11T21:41:23.5311292Z auto tmp12 = tmp11 + tmp10; 2023-01-11T21:41:23.5311433Z auto tmp14 = tmp13 + tmp12; 2023-01-11T21:41:23.5311637Z auto tmp16 = tmp15 + tmp14; 2023-01-11T21:41:23.5311816Z auto tmp17 = static_cast(0.1111111111111111); 2023-01-11T21:41:23.5311955Z auto tmp18 = tmp16 * tmp17; 2023-01-11T21:41:23.5312107Z out_ptr0[i2 + (7*i1) + (49*i0)] = tmp18; 2023-01-11T21:41:23.5312204Z } 2023-01-11T21:41:23.5312304Z } 2023-01-11T21:41:23.5312392Z } 2023-01-11T21:41:23.5312461Z } 
2023-01-11T21:41:23.5312546Z } 2023-01-11T21:41:23.5312638Z } 2023-01-11T21:41:23.5312723Z } 2023-01-11T21:41:23.5312858Z ''') 2023-01-11T21:41:23.5312866Z 2023-01-11T21:41:23.5312872Z 2023-01-11T21:41:23.5312996Z async_compile.wait(globals()) 2023-01-11T21:41:23.5313097Z del async_compile 2023-01-11T21:41:23.5313103Z 2023-01-11T21:41:23.5313185Z def call(args): 2023-01-11T21:41:23.5313281Z arg0_1, = args 2023-01-11T21:41:23.5313382Z args.clear() 2023-01-11T21:41:23.5313847Z buf0 = empty_strided((2, 4, 7, 7), (196, 49, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5314036Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.5314125Z del arg0_1 2023-01-11T21:41:23.5314215Z return (buf0, ) 2023-01-11T21:41:23.5314222Z 2023-01-11T21:41:23.5314228Z 2023-01-11T21:41:23.5314328Z if __name__ == "__main__": 2023-01-11T21:41:23.5314479Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5314662Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5315008Z arg0_1 = rand_strided((2, 4, 16, 16), (1024, 256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5315161Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5315167Z 2023-01-11T21:41:23.5315264Z ok (1.610s) 2023-01-11T21:41:23.5315964Z test_avg_pool2d2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5316159Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5316557Z [2023-01-11 21:24:40,216] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 28 2023-01-11T21:41:23.5316957Z [2023-01-11 21:24:41,816] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 28 2023-01-11T21:41:23.5316964Z 2023-01-11T21:41:23.5317097Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5317184Z import torch 2023-01-11T21:41:23.5317284Z import random 2023-01-11T21:41:23.5317443Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5317622Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5317634Z 2023-01-11T21:41:23.5317747Z aten = torch.ops.aten 2023-01-11T21:41:23.5317938Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5318073Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5318080Z 2023-01-11T21:41:23.5318085Z 2023-01-11T21:41:23.5318289Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5318579Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5318748Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5318890Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.5318982Z { 2023-01-11T21:41:23.5319122Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5319217Z { 2023-01-11T21:41:23.5319324Z #pragma omp for 2023-01-11T21:41:23.5319428Z for(long i0=0; i0<1024; i0+=1) 2023-01-11T21:41:23.5319518Z { 2023-01-11T21:41:23.5319633Z #pragma GCC ivdep 2023-01-11T21:41:23.5319772Z for(long i1=0; i1<27; i1+=1) 2023-01-11T21:41:23.5319986Z { 2023-01-11T21:41:23.5320095Z #pragma GCC ivdep 2023-01-11T21:41:23.5320208Z for(long i2=0; 
i2<27; i2+=1) 2023-01-11T21:41:23.5320299Z { 2023-01-11T21:41:23.5320402Z { 2023-01-11T21:41:23.5320496Z { 2023-01-11T21:41:23.5320656Z auto tmp0 = in_ptr0[(2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.5320820Z auto tmp1 = in_ptr0[1 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.5320983Z auto tmp3 = in_ptr0[2 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.5321149Z auto tmp5 = in_ptr0[55 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.5321291Z auto tmp7 = in_ptr0[56 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.5321455Z auto tmp9 = in_ptr0[57 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.5321676Z auto tmp11 = in_ptr0[110 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.5321842Z auto tmp13 = in_ptr0[111 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.5322001Z auto tmp15 = in_ptr0[112 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.5322138Z auto tmp2 = tmp1 + tmp0; 2023-01-11T21:41:23.5322268Z auto tmp4 = tmp3 + tmp2; 2023-01-11T21:41:23.5322399Z auto tmp6 = tmp5 + tmp4; 2023-01-11T21:41:23.5322515Z auto tmp8 = tmp7 + tmp6; 2023-01-11T21:41:23.5322657Z auto tmp10 = tmp9 + tmp8; 2023-01-11T21:41:23.5322795Z auto tmp12 = tmp11 + tmp10; 2023-01-11T21:41:23.5322936Z auto tmp14 = tmp13 + tmp12; 2023-01-11T21:41:23.5323080Z auto tmp16 = tmp15 + tmp14; 2023-01-11T21:41:23.5323260Z auto tmp17 = static_cast(0.1111111111111111); 2023-01-11T21:41:23.5323402Z auto tmp18 = tmp16 * tmp17; 2023-01-11T21:41:23.5323542Z out_ptr0[i2 + (27*i1) + (729*i0)] = tmp18; 2023-01-11T21:41:23.5323645Z } 2023-01-11T21:41:23.5323742Z } 2023-01-11T21:41:23.5323841Z } 2023-01-11T21:41:23.5323937Z } 2023-01-11T21:41:23.5324022Z } 2023-01-11T21:41:23.5324108Z } 2023-01-11T21:41:23.5324180Z } 2023-01-11T21:41:23.5324311Z ''') 2023-01-11T21:41:23.5324320Z 2023-01-11T21:41:23.5324327Z 2023-01-11T21:41:23.5324454Z async_compile.wait(globals()) 2023-01-11T21:41:23.5324564Z del async_compile 2023-01-11T21:41:23.5324571Z 2023-01-11T21:41:23.5324674Z def call(args): 2023-01-11T21:41:23.5324770Z arg0_1, = args 2023-01-11T21:41:23.5324868Z args.clear() 2023-01-11T21:41:23.5325203Z buf0 = empty_strided((16, 64, 27, 27), (46656, 729, 27, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5325402Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.5325504Z del arg0_1 2023-01-11T21:41:23.5325608Z return (buf0, ) 2023-01-11T21:41:23.5325617Z 2023-01-11T21:41:23.5325623Z 2023-01-11T21:41:23.5325736Z if __name__ == "__main__": 2023-01-11T21:41:23.5325905Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5326085Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5326428Z arg0_1 = rand_strided((16, 64, 55, 55), (193600, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5326571Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5326594Z 2023-01-11T21:41:23.5326679Z ok (1.746s) 2023-01-11T21:41:23.5327386Z test_avg_pool2d3_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5327647Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5328042Z [2023-01-11 21:24:41,913] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 29 2023-01-11T21:41:23.5328435Z [2023-01-11 21:24:43,522] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 29 2023-01-11T21:41:23.5328442Z 2023-01-11T21:41:23.5328571Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5328678Z import torch 2023-01-11T21:41:23.5328780Z import random 2023-01-11T21:41:23.5328933Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5329108Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5329116Z 2023-01-11T21:41:23.5329236Z aten = torch.ops.aten 2023-01-11T21:41:23.5329527Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5329667Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5329674Z 2023-01-11T21:41:23.5329679Z 2023-01-11T21:41:23.5329889Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5330200Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5330377Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5330520Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.5330599Z { 2023-01-11T21:41:23.5330741Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5330832Z { 2023-01-11T21:41:23.5330940Z #pragma omp for 2023-01-11T21:41:23.5331052Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:23.5331138Z { 2023-01-11T21:41:23.5331248Z #pragma GCC ivdep 2023-01-11T21:41:23.5331367Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:23.5331464Z { 2023-01-11T21:41:23.5331563Z { 2023-01-11T21:41:23.5331657Z { 2023-01-11T21:41:23.5332551Z auto tmp0 = static_cast((-1) + (2*i0)); 2023-01-11T21:41:23.5332726Z auto tmp1 = static_cast(0); 2023-01-11T21:41:23.5332849Z auto tmp2 = tmp0 >= tmp1; 2023-01-11T21:41:23.5332999Z auto tmp3 = static_cast(8); 2023-01-11T21:41:23.5333126Z auto tmp4 = tmp0 < tmp3; 2023-01-11T21:41:23.5333251Z auto tmp5 = tmp2 & tmp4; 2023-01-11T21:41:23.5333526Z auto tmp6 = static_cast((-1) + (2*i1)); 2023-01-11T21:41:23.5333662Z auto tmp7 = tmp6 >= tmp1; 2023-01-11T21:41:23.5333786Z auto tmp8 = tmp6 < tmp3; 2023-01-11T21:41:23.5333924Z auto tmp9 = tmp7 & tmp8; 2023-01-11T21:41:23.5334042Z auto tmp10 = tmp5 & tmp9; 2023-01-11T21:41:23.5334176Z float tmp11 = 0.0; 2023-01-11T21:41:23.5334280Z if(tmp10) 2023-01-11T21:41:23.5334430Z { 2023-01-11T21:41:23.5334732Z auto tmp12 = in_ptr0[(-9) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5334886Z tmp11 = tmp12; 2023-01-11T21:41:23.5335025Z } 2023-01-11T21:41:23.5335160Z auto tmp13 = static_cast(2*i1); 2023-01-11T21:41:23.5335331Z auto tmp14 = tmp13 >= tmp1; 2023-01-11T21:41:23.5335580Z auto tmp15 = tmp13 < tmp3; 2023-01-11T21:41:23.5335746Z auto tmp16 = tmp14 & tmp15; 2023-01-11T21:41:23.5335860Z auto tmp17 = tmp5 & tmp16; 2023-01-11T21:41:23.5336032Z float tmp18 = 0.0; 2023-01-11T21:41:23.5336173Z if(tmp17) 2023-01-11T21:41:23.5336398Z { 2023-01-11T21:41:23.5336696Z auto tmp19 = in_ptr0[(-8) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5336863Z tmp18 = tmp19; 2023-01-11T21:41:23.5337007Z } 2023-01-11T21:41:23.5337126Z auto tmp20 = tmp18 + tmp11; 2023-01-11T21:41:23.5337434Z auto tmp21 = static_cast(1 + (2*i1)); 2023-01-11T21:41:23.5337602Z auto tmp22 = tmp21 >= tmp1; 
2023-01-11T21:41:23.5337762Z auto tmp23 = tmp21 < tmp3; 2023-01-11T21:41:23.5337926Z auto tmp24 = tmp22 & tmp23; 2023-01-11T21:41:23.5338093Z auto tmp25 = tmp5 & tmp24; 2023-01-11T21:41:23.5338251Z float tmp26 = 0.0; 2023-01-11T21:41:23.5416435Z if(tmp25) 2023-01-11T21:41:23.5416657Z { 2023-01-11T21:41:23.5416998Z auto tmp27 = in_ptr0[(-7) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5417298Z tmp26 = tmp27; 2023-01-11T21:41:23.5417402Z } 2023-01-11T21:41:23.5417544Z auto tmp28 = tmp26 + tmp20; 2023-01-11T21:41:23.5417704Z auto tmp29 = static_cast(2*i0); 2023-01-11T21:41:23.5417839Z auto tmp30 = tmp29 >= tmp1; 2023-01-11T21:41:23.5417943Z auto tmp31 = tmp29 < tmp3; 2023-01-11T21:41:23.5418062Z auto tmp32 = tmp30 & tmp31; 2023-01-11T21:41:23.5418197Z auto tmp33 = tmp32 & tmp9; 2023-01-11T21:41:23.5418315Z float tmp34 = 0.0; 2023-01-11T21:41:23.5418421Z if(tmp33) 2023-01-11T21:41:23.5418521Z { 2023-01-11T21:41:23.5418782Z auto tmp35 = in_ptr0[(-1) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5418872Z tmp34 = tmp35; 2023-01-11T21:41:23.5418976Z } 2023-01-11T21:41:23.5419096Z auto tmp36 = tmp34 + tmp28; 2023-01-11T21:41:23.5419226Z auto tmp37 = tmp32 & tmp16; 2023-01-11T21:41:23.5419352Z float tmp38 = 0.0; 2023-01-11T21:41:23.5419453Z if(tmp37) 2023-01-11T21:41:23.5419548Z { 2023-01-11T21:41:23.5419682Z auto tmp39 = in_ptr0[(2*i1) + (16*i0)]; 2023-01-11T21:41:23.5419797Z tmp38 = tmp39; 2023-01-11T21:41:23.5419887Z } 2023-01-11T21:41:23.5420022Z auto tmp40 = tmp38 + tmp36; 2023-01-11T21:41:23.5420144Z auto tmp41 = tmp32 & tmp24; 2023-01-11T21:41:23.5420258Z float tmp42 = 0.0; 2023-01-11T21:41:23.5420360Z if(tmp41) 2023-01-11T21:41:23.5420449Z { 2023-01-11T21:41:23.5420606Z auto tmp43 = in_ptr0[1 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5420726Z tmp42 = tmp43; 2023-01-11T21:41:23.5420824Z } 2023-01-11T21:41:23.5420955Z auto tmp44 = tmp42 + tmp40; 2023-01-11T21:41:23.5421111Z auto tmp45 = static_cast(1 + (2*i0)); 2023-01-11T21:41:23.5421251Z auto tmp46 = tmp45 >= tmp1; 2023-01-11T21:41:23.5421385Z auto tmp47 = tmp45 < tmp3; 2023-01-11T21:41:23.5421507Z auto tmp48 = tmp46 & tmp47; 2023-01-11T21:41:23.5421639Z auto tmp49 = tmp48 & tmp9; 2023-01-11T21:41:23.5421755Z float tmp50 = 0.0; 2023-01-11T21:41:23.5421846Z if(tmp49) 2023-01-11T21:41:23.5421942Z { 2023-01-11T21:41:23.5422093Z auto tmp51 = in_ptr0[7 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5422212Z tmp50 = tmp51; 2023-01-11T21:41:23.5422550Z } 2023-01-11T21:41:23.5422677Z auto tmp52 = tmp50 + tmp44; 2023-01-11T21:41:23.5422799Z auto tmp53 = tmp48 & tmp16; 2023-01-11T21:41:23.5422912Z float tmp54 = 0.0; 2023-01-11T21:41:23.5423003Z if(tmp53) 2023-01-11T21:41:23.5423093Z { 2023-01-11T21:41:23.5423237Z auto tmp55 = in_ptr0[8 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5423337Z tmp54 = tmp55; 2023-01-11T21:41:23.5423417Z } 2023-01-11T21:41:23.5423547Z auto tmp56 = tmp54 + tmp52; 2023-01-11T21:41:23.5423678Z auto tmp57 = tmp48 & tmp24; 2023-01-11T21:41:23.5423794Z float tmp58 = 0.0; 2023-01-11T21:41:23.5423900Z if(tmp57) 2023-01-11T21:41:23.5423994Z { 2023-01-11T21:41:23.5424250Z auto tmp59 = in_ptr0[9 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5424373Z tmp58 = tmp59; 2023-01-11T21:41:23.5424461Z } 2023-01-11T21:41:23.5424589Z auto tmp60 = tmp58 + tmp56; 2023-01-11T21:41:23.5425685Z auto tmp61 = static_cast(0.1111111111111111); 2023-01-11T21:41:23.5425819Z auto tmp62 = tmp60 * tmp61; 2023-01-11T21:41:23.5425961Z out_ptr0[i1 + (4*i0)] = tmp62; 2023-01-11T21:41:23.5426041Z } 2023-01-11T21:41:23.5426126Z } 2023-01-11T21:41:23.5426209Z } 2023-01-11T21:41:23.5426293Z 
} 2023-01-11T21:41:23.5426366Z } 2023-01-11T21:41:23.5426441Z } 2023-01-11T21:41:23.5426588Z ''') 2023-01-11T21:41:23.5426598Z 2023-01-11T21:41:23.5426603Z 2023-01-11T21:41:23.5426725Z async_compile.wait(globals()) 2023-01-11T21:41:23.5426821Z del async_compile 2023-01-11T21:41:23.5426833Z 2023-01-11T21:41:23.5426924Z def call(args): 2023-01-11T21:41:23.5427029Z arg0_1, = args 2023-01-11T21:41:23.5427128Z args.clear() 2023-01-11T21:41:23.5427428Z buf0 = empty_strided((1, 1, 4, 4), (16, 16, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5427612Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.5427699Z del arg0_1 2023-01-11T21:41:23.5427785Z return (buf0, ) 2023-01-11T21:41:23.5427791Z 2023-01-11T21:41:23.5427797Z 2023-01-11T21:41:23.5427889Z if __name__ == "__main__": 2023-01-11T21:41:23.5428050Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5428223Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5428523Z arg0_1 = rand_strided((1, 1, 8, 8), (64, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5428672Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5428679Z 2023-01-11T21:41:23.5428767Z ok (1.630s) 2023-01-11T21:41:23.5429466Z test_avg_pool2d4_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5429646Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5430006Z [2023-01-11 21:24:43,547] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 30 2023-01-11T21:41:23.5430353Z [2023-01-11 21:24:45,177] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 30 2023-01-11T21:41:23.5430360Z 2023-01-11T21:41:23.5430474Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5430554Z import torch 2023-01-11T21:41:23.5430634Z import random 2023-01-11T21:41:23.5430775Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5431072Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5431081Z 2023-01-11T21:41:23.5431201Z aten = torch.ops.aten 2023-01-11T21:41:23.5431392Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5432388Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5432396Z 2023-01-11T21:41:23.5432401Z 2023-01-11T21:41:23.5432849Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5433171Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5433345Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5433489Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.5433582Z { 2023-01-11T21:41:23.5433786Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5433858Z { 2023-01-11T21:41:23.5433959Z #pragma omp for 2023-01-11T21:41:23.5434081Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.5434254Z { 2023-01-11T21:41:23.5434369Z #pragma GCC ivdep 2023-01-11T21:41:23.5434488Z for(long i1=0; i1<55; i1+=1) 2023-01-11T21:41:23.5434563Z { 2023-01-11T21:41:23.5434677Z #pragma GCC ivdep 2023-01-11T21:41:23.5434804Z for(long i2=0; i2<55; i2+=1) 
2023-01-11T21:41:23.5434893Z { 2023-01-11T21:41:23.5434989Z { 2023-01-11T21:41:23.5435085Z { 2023-01-11T21:41:23.5435244Z auto tmp0 = in_ptr0[(2*i2) + (222*i1) + (12321*i0)]; 2023-01-11T21:41:23.5435384Z auto tmp1 = in_ptr0[1 + (2*i2) + (222*i1) + (12321*i0)]; 2023-01-11T21:41:23.5435537Z auto tmp3 = in_ptr0[2 + (2*i2) + (222*i1) + (12321*i0)]; 2023-01-11T21:41:23.5435694Z auto tmp5 = in_ptr0[111 + (2*i2) + (222*i1) + (12321*i0)]; 2023-01-11T21:41:23.5436750Z auto tmp7 = in_ptr0[112 + (2*i2) + (222*i1) + (12321*i0)]; 2023-01-11T21:41:23.5436913Z auto tmp9 = in_ptr0[113 + (2*i2) + (222*i1) + (12321*i0)]; 2023-01-11T21:41:23.5437066Z auto tmp11 = in_ptr0[222 + (2*i2) + (222*i1) + (12321*i0)]; 2023-01-11T21:41:23.5437223Z auto tmp13 = in_ptr0[223 + (2*i2) + (222*i1) + (12321*i0)]; 2023-01-11T21:41:23.5437381Z auto tmp15 = in_ptr0[224 + (2*i2) + (222*i1) + (12321*i0)]; 2023-01-11T21:41:23.5437513Z auto tmp2 = tmp1 + tmp0; 2023-01-11T21:41:23.5437629Z auto tmp4 = tmp3 + tmp2; 2023-01-11T21:41:23.5437757Z auto tmp6 = tmp5 + tmp4; 2023-01-11T21:41:23.5437890Z auto tmp8 = tmp7 + tmp6; 2023-01-11T21:41:23.5438022Z auto tmp10 = tmp9 + tmp8; 2023-01-11T21:41:23.5438154Z auto tmp12 = tmp11 + tmp10; 2023-01-11T21:41:23.5438288Z auto tmp14 = tmp13 + tmp12; 2023-01-11T21:41:23.5438417Z auto tmp16 = tmp15 + tmp14; 2023-01-11T21:41:23.5438564Z auto tmp17 = static_cast(0.1111111111111111); 2023-01-11T21:41:23.5438695Z auto tmp18 = tmp16 * tmp17; 2023-01-11T21:41:23.5438843Z out_ptr0[i2 + (55*i1) + (3025*i0)] = tmp18; 2023-01-11T21:41:23.5438940Z } 2023-01-11T21:41:23.5439041Z } 2023-01-11T21:41:23.5439135Z } 2023-01-11T21:41:23.5439220Z } 2023-01-11T21:41:23.5439290Z } 2023-01-11T21:41:23.5439377Z } 2023-01-11T21:41:23.5439459Z } 2023-01-11T21:41:23.5439593Z ''') 2023-01-11T21:41:23.5439602Z 2023-01-11T21:41:23.5439607Z 2023-01-11T21:41:23.5439733Z async_compile.wait(globals()) 2023-01-11T21:41:23.5439836Z del async_compile 2023-01-11T21:41:23.5439843Z 2023-01-11T21:41:23.5439942Z def call(args): 2023-01-11T21:41:23.5440113Z arg0_1, = args 2023-01-11T21:41:23.5440209Z args.clear() 2023-01-11T21:41:23.5440544Z buf0 = empty_strided((2, 8, 55, 55), (24200, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5440733Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.5440826Z del arg0_1 2023-01-11T21:41:23.5440928Z return (buf0, ) 2023-01-11T21:41:23.5440935Z 2023-01-11T21:41:23.5440940Z 2023-01-11T21:41:23.5441044Z if __name__ == "__main__": 2023-01-11T21:41:23.5441209Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5441366Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5441697Z arg0_1 = rand_strided((2, 8, 111, 111), (98568, 12321, 111, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5441845Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5441852Z 2023-01-11T21:41:23.5441930Z ok (1.658s) 2023-01-11T21:41:23.5442689Z test_avg_pool2d5_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5442865Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5443266Z [2023-01-11 21:24:45,202] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 31 2023-01-11T21:41:23.5443273Z 2023-01-11T21:41:23.5443402Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5443497Z import torch 2023-01-11T21:41:23.5443576Z import random 2023-01-11T21:41:23.5443732Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5443890Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5443896Z 2023-01-11T21:41:23.5444005Z aten = torch.ops.aten 2023-01-11T21:41:23.5444185Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5444306Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5444313Z 2023-01-11T21:41:23.5444319Z 2023-01-11T21:41:23.5444518Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5444828Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5444982Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5445124Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.5445211Z { 2023-01-11T21:41:23.5445352Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5445444Z { 2023-01-11T21:41:23.5445550Z #pragma omp for 2023-01-11T21:41:23.5445662Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:23.5445733Z { 2023-01-11T21:41:23.5445841Z #pragma GCC ivdep 2023-01-11T21:41:23.5445952Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:23.5446051Z { 2023-01-11T21:41:23.5446140Z { 2023-01-11T21:41:23.5446233Z { 2023-01-11T21:41:23.5446486Z auto tmp0 = static_cast((-1) + (2*i0)); 2023-01-11T21:41:23.5446613Z auto tmp1 = static_cast(0); 2023-01-11T21:41:23.5446740Z auto tmp2 = tmp0 >= tmp1; 2023-01-11T21:41:23.5446881Z auto tmp3 = static_cast(8); 2023-01-11T21:41:23.5447011Z auto tmp4 = tmp0 < tmp3; 2023-01-11T21:41:23.5447141Z auto tmp5 = tmp2 & tmp4; 2023-01-11T21:41:23.5447386Z auto tmp6 = static_cast((-1) + (2*i1)); 2023-01-11T21:41:23.5447512Z auto tmp7 = tmp6 >= tmp1; 2023-01-11T21:41:23.5447620Z auto tmp8 = tmp6 < tmp3; 2023-01-11T21:41:23.5447747Z auto tmp9 = tmp7 & tmp8; 2023-01-11T21:41:23.5447875Z auto tmp10 = tmp5 & tmp9; 2023-01-11T21:41:23.5448064Z float tmp11 = 0.0; 2023-01-11T21:41:23.5448170Z if(tmp10) 2023-01-11T21:41:23.5448256Z { 2023-01-11T21:41:23.5448498Z auto tmp12 = in_ptr0[(-9) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5448599Z tmp11 = tmp12; 2023-01-11T21:41:23.5448691Z } 2023-01-11T21:41:23.5448845Z auto tmp13 = static_cast(2*i1); 2023-01-11T21:41:23.5448974Z auto tmp14 = tmp13 >= tmp1; 2023-01-11T21:41:23.5449099Z auto tmp15 = tmp13 < tmp3; 2023-01-11T21:41:23.5449228Z auto tmp16 = tmp14 & tmp15; 2023-01-11T21:41:23.5449354Z auto tmp17 = tmp5 & tmp16; 2023-01-11T21:41:23.5449472Z float tmp18 = 0.0; 2023-01-11T21:41:23.5449558Z if(tmp17) 2023-01-11T21:41:23.5449712Z { 2023-01-11T21:41:23.5449965Z auto tmp19 = in_ptr0[(-8) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5450076Z tmp18 = tmp19; 2023-01-11T21:41:23.5450168Z } 2023-01-11T21:41:23.5450293Z auto tmp20 = tmp18 + tmp11; 2023-01-11T21:41:23.5450441Z auto tmp21 = static_cast(1 + (2*i1)); 2023-01-11T21:41:23.5450548Z auto tmp22 = tmp21 >= tmp1; 2023-01-11T21:41:23.5450667Z auto tmp23 = tmp21 < tmp3; 2023-01-11T21:41:23.5450789Z auto tmp24 = tmp22 & tmp23; 2023-01-11T21:41:23.5450918Z auto tmp25 = tmp5 
& tmp24; 2023-01-11T21:41:23.5451032Z float tmp26 = 0.0; 2023-01-11T21:41:23.5451130Z if(tmp25) 2023-01-11T21:41:23.5451222Z { 2023-01-11T21:41:23.5451456Z auto tmp27 = in_ptr0[(-7) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5451575Z tmp26 = tmp27; 2023-01-11T21:41:23.5451663Z } 2023-01-11T21:41:23.5451792Z auto tmp28 = tmp26 + tmp20; 2023-01-11T21:41:23.5451945Z auto tmp29 = static_cast(2*i0); 2023-01-11T21:41:23.5452074Z auto tmp30 = tmp29 >= tmp1; 2023-01-11T21:41:23.5452195Z auto tmp31 = tmp29 < tmp3; 2023-01-11T21:41:23.5452309Z auto tmp32 = tmp30 & tmp31; 2023-01-11T21:41:23.5452440Z auto tmp33 = tmp32 & tmp9; 2023-01-11T21:41:23.5452554Z float tmp34 = 0.0; 2023-01-11T21:41:23.5452652Z if(tmp33) 2023-01-11T21:41:23.5452741Z { 2023-01-11T21:41:23.5452986Z auto tmp35 = in_ptr0[(-1) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5453097Z tmp34 = tmp35; 2023-01-11T21:41:23.5453181Z } 2023-01-11T21:41:23.5453305Z auto tmp36 = tmp34 + tmp28; 2023-01-11T21:41:23.5453433Z auto tmp37 = tmp32 & tmp16; 2023-01-11T21:41:23.5453545Z float tmp38 = 0.0; 2023-01-11T21:41:23.5453650Z if(tmp37) 2023-01-11T21:41:23.5453743Z { 2023-01-11T21:41:23.5453893Z auto tmp39 = in_ptr0[(2*i1) + (16*i0)]; 2023-01-11T21:41:23.5453989Z tmp38 = tmp39; 2023-01-11T21:41:23.5454084Z } 2023-01-11T21:41:23.5454207Z auto tmp40 = tmp38 + tmp36; 2023-01-11T21:41:23.5454338Z auto tmp41 = tmp32 & tmp24; 2023-01-11T21:41:23.5454453Z float tmp42 = 0.0; 2023-01-11T21:41:23.5454557Z if(tmp41) 2023-01-11T21:41:23.5454651Z { 2023-01-11T21:41:23.5454785Z auto tmp43 = in_ptr0[1 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5454976Z tmp42 = tmp43; 2023-01-11T21:41:23.5455076Z } 2023-01-11T21:41:23.5455206Z auto tmp44 = tmp42 + tmp40; 2023-01-11T21:41:23.5455360Z auto tmp45 = static_cast(1 + (2*i0)); 2023-01-11T21:41:23.5455492Z auto tmp46 = tmp45 >= tmp1; 2023-01-11T21:41:23.5455610Z auto tmp47 = tmp45 < tmp3; 2023-01-11T21:41:23.5455732Z auto tmp48 = tmp46 & tmp47; 2023-01-11T21:41:23.5455844Z auto tmp49 = tmp48 & tmp9; 2023-01-11T21:41:23.5455960Z float tmp50 = 0.0; 2023-01-11T21:41:23.5456066Z if(tmp49) 2023-01-11T21:41:23.5456159Z { 2023-01-11T21:41:23.5456308Z auto tmp51 = in_ptr0[7 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5456426Z tmp50 = tmp51; 2023-01-11T21:41:23.5456581Z } 2023-01-11T21:41:23.5456700Z auto tmp52 = tmp50 + tmp44; 2023-01-11T21:41:23.5456829Z auto tmp53 = tmp48 & tmp16; 2023-01-11T21:41:23.5456942Z float tmp54 = 0.0; 2023-01-11T21:41:23.5457043Z if(tmp53) 2023-01-11T21:41:23.5457139Z { 2023-01-11T21:41:23.5457288Z auto tmp55 = in_ptr0[8 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5457403Z tmp54 = tmp55; 2023-01-11T21:41:23.5457479Z } 2023-01-11T21:41:23.5457601Z auto tmp56 = tmp54 + tmp52; 2023-01-11T21:41:23.5457723Z auto tmp57 = tmp48 & tmp24; 2023-01-11T21:41:23.5457832Z float tmp58 = 0.0; 2023-01-11T21:41:23.5457925Z if(tmp57) 2023-01-11T21:41:23.5458015Z { 2023-01-11T21:41:23.5458164Z auto tmp59 = in_ptr0[9 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5458258Z tmp58 = tmp59; 2023-01-11T21:41:23.5458351Z } 2023-01-11T21:41:23.5458472Z auto tmp60 = tmp58 + tmp56; 2023-01-11T21:41:23.5458587Z float tmp61 = 0.0; 2023-01-11T21:41:23.5458687Z if(tmp10) 2023-01-11T21:41:23.5458780Z { 2023-01-11T21:41:23.5458932Z auto tmp62 = static_cast(1); 2023-01-11T21:41:23.5459029Z tmp61 = tmp62; 2023-01-11T21:41:23.5459119Z } 2023-01-11T21:41:23.5459233Z float tmp63 = 0.0; 2023-01-11T21:41:23.5459334Z if(tmp17) 2023-01-11T21:41:23.5459428Z { 2023-01-11T21:41:23.5459577Z auto tmp64 = static_cast(1); 2023-01-11T21:41:23.5459704Z tmp63 = tmp64; 
2023-01-11T21:41:23.5459795Z } 2023-01-11T21:41:23.5459951Z auto tmp65 = tmp63 + tmp61; 2023-01-11T21:41:23.5460074Z float tmp66 = 0.0; 2023-01-11T21:41:23.5460178Z if(tmp25) 2023-01-11T21:41:23.5460287Z { 2023-01-11T21:41:23.5460440Z auto tmp67 = static_cast(1); 2023-01-11T21:41:23.5460551Z tmp66 = tmp67; 2023-01-11T21:41:23.5460639Z } 2023-01-11T21:41:23.5460775Z auto tmp68 = tmp66 + tmp65; 2023-01-11T21:41:23.5460896Z float tmp69 = 0.0; 2023-01-11T21:41:23.5461002Z if(tmp33) 2023-01-11T21:41:23.5461098Z { 2023-01-11T21:41:23.5461246Z auto tmp70 = static_cast(1); 2023-01-11T21:41:23.5461356Z tmp69 = tmp70; 2023-01-11T21:41:23.5461515Z } 2023-01-11T21:41:23.5461656Z auto tmp71 = tmp69 + tmp68; 2023-01-11T21:41:23.5461777Z float tmp72 = 0.0; 2023-01-11T21:41:23.5461887Z if(tmp37) 2023-01-11T21:41:23.5461985Z { 2023-01-11T21:41:23.5462145Z auto tmp73 = static_cast(1); 2023-01-11T21:41:23.5462255Z tmp72 = tmp73; 2023-01-11T21:41:23.5462475Z } 2023-01-11T21:41:23.5462624Z auto tmp74 = tmp72 + tmp71; 2023-01-11T21:41:23.5462747Z float tmp75 = 0.0; 2023-01-11T21:41:23.5462854Z if(tmp41) 2023-01-11T21:41:23.5462950Z { 2023-01-11T21:41:23.5463093Z auto tmp76 = static_cast(1); 2023-01-11T21:41:23.5463213Z tmp75 = tmp76; 2023-01-11T21:41:23.5463293Z } 2023-01-11T21:41:23.5463515Z auto tmp77 = tmp75 + tmp74; 2023-01-11T21:41:23.5463636Z float tmp78 = 0.0; 2023-01-11T21:41:23.5463736Z if(tmp49) 2023-01-11T21:41:23.5463820Z { 2023-01-11T21:41:23.5463960Z auto tmp79 = static_cast(1); 2023-01-11T21:41:23.5464061Z tmp78 = tmp79; 2023-01-11T21:41:23.5464135Z } 2023-01-11T21:41:23.5464261Z auto tmp80 = tmp78 + tmp77; 2023-01-11T21:41:23.5464363Z float tmp81 = 0.0; 2023-01-11T21:41:23.5464460Z if(tmp53) 2023-01-11T21:41:23.5464554Z { 2023-01-11T21:41:23.5464705Z auto tmp82 = static_cast(1); 2023-01-11T21:41:23.5464810Z tmp81 = tmp82; 2023-01-11T21:41:23.5464904Z } 2023-01-11T21:41:23.5465027Z auto tmp83 = tmp81 + tmp80; 2023-01-11T21:41:23.5465148Z float tmp84 = 0.0; 2023-01-11T21:41:23.5465253Z if(tmp57) 2023-01-11T21:41:23.5465346Z { 2023-01-11T21:41:23.5465493Z auto tmp85 = static_cast(1); 2023-01-11T21:41:23.5465600Z tmp84 = tmp85; 2023-01-11T21:41:23.5465698Z } 2023-01-11T21:41:23.5465817Z auto tmp86 = tmp84 + tmp83; 2023-01-11T21:41:23.5465938Z auto tmp87 = tmp60 / tmp86; 2023-01-11T21:41:23.5466066Z out_ptr0[i1 + (4*i0)] = tmp87; 2023-01-11T21:41:23.5466156Z } 2023-01-11T21:41:23.5466246Z } 2023-01-11T21:41:23.5466334Z } 2023-01-11T21:41:23.5466404Z } 2023-01-11T21:41:23.5466490Z } 2023-01-11T21:41:23.5466571Z } 2023-01-11T21:41:23.5466711Z ''') 2023-01-11T21:41:23.5466721Z 2023-01-11T21:41:23.5466732Z 2023-01-11T21:41:23.5466863Z async_compile.wait(globals()) 2023-01-11T21:41:23.5466966Z del async_compile 2023-01-11T21:41:23.5466974Z 2023-01-11T21:41:23.5467071Z def call(args): 2023-01-11T21:41:23.5467171Z arg0_1, = args 2023-01-11T21:41:23.5467252Z args.clear() 2023-01-11T21:41:23.5467572Z buf0 = empty_strided((1, 1, 4, 4), (16, 16, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5467762Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.5467857Z del arg0_1 2023-01-11T21:41:23.5467956Z return (buf0, ) 2023-01-11T21:41:23.5467963Z 2023-01-11T21:41:23.5467968Z 2023-01-11T21:41:23.5468074Z if __name__ == "__main__": 2023-01-11T21:41:23.5468223Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5468382Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5468706Z arg0_1 = rand_strided((1, 1, 8, 8), (64, 64, 8, 1), 
device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5468962Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5469363Z [2023-01-11 21:24:46,856] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 31 2023-01-11T21:41:23.5469372Z 2023-01-11T21:41:23.5469465Z ok (1.676s) 2023-01-11T21:41:23.5470143Z test_avg_pool2d6_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5470307Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5470668Z [2023-01-11 21:24:46,877] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 32 2023-01-11T21:41:23.5471120Z [2023-01-11 21:24:48,513] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 32 2023-01-11T21:41:23.5471135Z 2023-01-11T21:41:23.5471270Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5471351Z import torch 2023-01-11T21:41:23.5471455Z import random 2023-01-11T21:41:23.5471628Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5471798Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5471805Z 2023-01-11T21:41:23.5471923Z aten = torch.ops.aten 2023-01-11T21:41:23.5472121Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5472250Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5472257Z 2023-01-11T21:41:23.5472262Z 2023-01-11T21:41:23.5472465Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5472748Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5472928Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5473069Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.5473163Z { 2023-01-11T21:41:23.5473305Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5473394Z { 2023-01-11T21:41:23.5473503Z #pragma omp for 2023-01-11T21:41:23.5473606Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:23.5473700Z { 2023-01-11T21:41:23.5473887Z #pragma GCC ivdep 2023-01-11T21:41:23.5474014Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:23.5474108Z { 2023-01-11T21:41:23.5474202Z { 2023-01-11T21:41:23.5474293Z { 2023-01-11T21:41:23.5474549Z auto tmp0 = static_cast((-1) + (2*i0)); 2023-01-11T21:41:23.5474701Z auto tmp1 = static_cast(0); 2023-01-11T21:41:23.5474833Z auto tmp2 = tmp0 >= tmp1; 2023-01-11T21:41:23.5474983Z auto tmp3 = static_cast(8); 2023-01-11T21:41:23.5475114Z auto tmp4 = tmp0 < tmp3; 2023-01-11T21:41:23.5475251Z auto tmp5 = tmp2 & tmp4; 2023-01-11T21:41:23.5475512Z auto tmp6 = static_cast((-1) + (2*i1)); 2023-01-11T21:41:23.5475631Z auto tmp7 = tmp6 >= tmp1; 2023-01-11T21:41:23.5475761Z auto tmp8 = tmp6 < tmp3; 2023-01-11T21:41:23.5475887Z auto tmp9 = tmp7 & tmp8; 2023-01-11T21:41:23.5476011Z auto tmp10 = tmp5 & tmp9; 2023-01-11T21:41:23.5476133Z float tmp11 = 0.0; 2023-01-11T21:41:23.5476241Z if(tmp10) 2023-01-11T21:41:23.5476333Z { 2023-01-11T21:41:23.5476580Z auto tmp12 = in_ptr0[(-9) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5476694Z tmp11 = tmp12; 2023-01-11T21:41:23.5476782Z } 2023-01-11T21:41:23.5476937Z auto tmp13 = static_cast(2*i1); 2023-01-11T21:41:23.5477144Z auto 
tmp14 = tmp13 >= tmp1; 2023-01-11T21:41:23.5477276Z auto tmp15 = tmp13 < tmp3; 2023-01-11T21:41:23.5477399Z auto tmp16 = tmp14 & tmp15; 2023-01-11T21:41:23.5477506Z auto tmp17 = tmp5 & tmp16; 2023-01-11T21:41:23.5477616Z float tmp18 = 0.0; 2023-01-11T21:41:23.5477717Z if(tmp17) 2023-01-11T21:41:23.5477804Z { 2023-01-11T21:41:23.5478072Z auto tmp19 = in_ptr0[(-8) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5478188Z tmp18 = tmp19; 2023-01-11T21:41:23.5478279Z } 2023-01-11T21:41:23.5478405Z auto tmp20 = tmp18 + tmp11; 2023-01-11T21:41:23.5478546Z auto tmp21 = static_cast(1 + (2*i1)); 2023-01-11T21:41:23.5478678Z auto tmp22 = tmp21 >= tmp1; 2023-01-11T21:41:23.5478885Z auto tmp23 = tmp21 < tmp3; 2023-01-11T21:41:23.5479016Z auto tmp24 = tmp22 & tmp23; 2023-01-11T21:41:23.5479147Z auto tmp25 = tmp5 & tmp24; 2023-01-11T21:41:23.5479265Z float tmp26 = 0.0; 2023-01-11T21:41:23.5479366Z if(tmp25) 2023-01-11T21:41:23.5479444Z { 2023-01-11T21:41:23.5479839Z auto tmp27 = in_ptr0[(-7) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5479950Z tmp26 = tmp27; 2023-01-11T21:41:23.5480046Z } 2023-01-11T21:41:23.5480171Z auto tmp28 = tmp26 + tmp20; 2023-01-11T21:41:23.5480312Z auto tmp29 = static_cast(2*i0); 2023-01-11T21:41:23.5480447Z auto tmp30 = tmp29 >= tmp1; 2023-01-11T21:41:23.5480559Z auto tmp31 = tmp29 < tmp3; 2023-01-11T21:41:23.5480690Z auto tmp32 = tmp30 & tmp31; 2023-01-11T21:41:23.5480826Z auto tmp33 = tmp32 & tmp9; 2023-01-11T21:41:23.5480939Z float tmp34 = 0.0; 2023-01-11T21:41:23.5481033Z if(tmp33) 2023-01-11T21:41:23.5481120Z { 2023-01-11T21:41:23.5481383Z auto tmp35 = in_ptr0[(-1) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5481487Z tmp34 = tmp35; 2023-01-11T21:41:23.5481602Z } 2023-01-11T21:41:23.5481743Z auto tmp36 = tmp34 + tmp28; 2023-01-11T21:41:23.5481898Z auto tmp37 = tmp32 & tmp16; 2023-01-11T21:41:23.5482032Z float tmp38 = 0.0; 2023-01-11T21:41:23.5482129Z if(tmp37) 2023-01-11T21:41:23.5482220Z { 2023-01-11T21:41:23.5482347Z auto tmp39 = in_ptr0[(2*i1) + (16*i0)]; 2023-01-11T21:41:23.5482461Z tmp38 = tmp39; 2023-01-11T21:41:23.5482567Z } 2023-01-11T21:41:23.5482707Z auto tmp40 = tmp38 + tmp36; 2023-01-11T21:41:23.5482850Z auto tmp41 = tmp32 & tmp24; 2023-01-11T21:41:23.5482976Z float tmp42 = 0.0; 2023-01-11T21:41:23.5483090Z if(tmp41) 2023-01-11T21:41:23.5483170Z { 2023-01-11T21:41:23.5483320Z auto tmp43 = in_ptr0[1 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5483436Z tmp42 = tmp43; 2023-01-11T21:41:23.5483527Z } 2023-01-11T21:41:23.5483651Z auto tmp44 = tmp42 + tmp40; 2023-01-11T21:41:23.5483807Z auto tmp45 = static_cast(1 + (2*i0)); 2023-01-11T21:41:23.5483928Z auto tmp46 = tmp45 >= tmp1; 2023-01-11T21:41:23.5484050Z auto tmp47 = tmp45 < tmp3; 2023-01-11T21:41:23.5484167Z auto tmp48 = tmp46 & tmp47; 2023-01-11T21:41:23.5484368Z auto tmp49 = tmp48 & tmp9; 2023-01-11T21:41:23.5484475Z float tmp50 = 0.0; 2023-01-11T21:41:23.5484570Z if(tmp49) 2023-01-11T21:41:23.5484658Z { 2023-01-11T21:41:23.5485759Z auto tmp51 = in_ptr0[7 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5485869Z tmp50 = tmp51; 2023-01-11T21:41:23.5485945Z } 2023-01-11T21:41:23.5486064Z auto tmp52 = tmp50 + tmp44; 2023-01-11T21:41:23.5486183Z auto tmp53 = tmp48 & tmp16; 2023-01-11T21:41:23.5486286Z float tmp54 = 0.0; 2023-01-11T21:41:23.5486389Z if(tmp53) 2023-01-11T21:41:23.5486474Z { 2023-01-11T21:41:23.5486616Z auto tmp55 = in_ptr0[8 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5486779Z tmp54 = tmp55; 2023-01-11T21:41:23.5486864Z } 2023-01-11T21:41:23.5486979Z auto tmp56 = tmp54 + tmp52; 2023-01-11T21:41:23.5487101Z auto tmp57 = tmp48 & 
tmp24; 2023-01-11T21:41:23.5487210Z float tmp58 = 0.0; 2023-01-11T21:41:23.5487301Z if(tmp57) 2023-01-11T21:41:23.5487383Z { 2023-01-11T21:41:23.5487512Z auto tmp59 = in_ptr0[9 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.5487618Z tmp58 = tmp59; 2023-01-11T21:41:23.5487708Z } 2023-01-11T21:41:23.5487827Z auto tmp60 = tmp58 + tmp56; 2023-01-11T21:41:23.5487982Z auto tmp61 = static_cast(0.3333333333333333); 2023-01-11T21:41:23.5488103Z auto tmp62 = tmp60 * tmp61; 2023-01-11T21:41:23.5488231Z out_ptr0[i1 + (4*i0)] = tmp62; 2023-01-11T21:41:23.5488315Z } 2023-01-11T21:41:23.5488471Z } 2023-01-11T21:41:23.5488557Z } 2023-01-11T21:41:23.5488634Z } 2023-01-11T21:41:23.5488714Z } 2023-01-11T21:41:23.5488788Z } 2023-01-11T21:41:23.5488909Z ''') 2023-01-11T21:41:23.5488918Z 2023-01-11T21:41:23.5488924Z 2023-01-11T21:41:23.5489035Z async_compile.wait(globals()) 2023-01-11T21:41:23.5489123Z del async_compile 2023-01-11T21:41:23.5489131Z 2023-01-11T21:41:23.5489222Z def call(args): 2023-01-11T21:41:23.5489307Z arg0_1, = args 2023-01-11T21:41:23.5490873Z args.clear() 2023-01-11T21:41:23.5492004Z buf0 = empty_strided((1, 1, 4, 4), (16, 16, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5492175Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.5492264Z del arg0_1 2023-01-11T21:41:23.5492341Z return (buf0, ) 2023-01-11T21:41:23.5492349Z 2023-01-11T21:41:23.5492355Z 2023-01-11T21:41:23.5492461Z if __name__ == "__main__": 2023-01-11T21:41:23.5492621Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5492786Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5493992Z arg0_1 = rand_strided((1, 1, 8, 8), (64, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5494143Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5494149Z 2023-01-11T21:41:23.5494238Z ok (1.656s) 2023-01-11T21:41:23.5494898Z test_avg_pool2d7_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5495075Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5495454Z [2023-01-11 21:24:48,535] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 33 2023-01-11T21:41:23.5495914Z [2023-01-11 21:24:48,540] torch._inductor.ir: [WARNING] Using FallbackKernel: aten.avg_pool2d 2023-01-11T21:41:23.5496290Z [2023-01-11 21:24:48,544] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 33 2023-01-11T21:41:23.5496298Z 2023-01-11T21:41:23.5496422Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5496520Z import torch 2023-01-11T21:41:23.5496613Z import random 2023-01-11T21:41:23.5496767Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5496936Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5496942Z 2023-01-11T21:41:23.5497038Z aten = torch.ops.aten 2023-01-11T21:41:23.5497222Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5497346Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5497353Z 2023-01-11T21:41:23.5497359Z 2023-01-11T21:41:23.5497536Z async_compile.wait(globals()) 2023-01-11T21:41:23.5497633Z del async_compile 2023-01-11T21:41:23.5497639Z 2023-01-11T21:41:23.5497730Z def call(args): 2023-01-11T21:41:23.5497819Z arg0_1, = args 2023-01-11T21:41:23.5497912Z args.clear() 2023-01-11T21:41:23.5498073Z buf0 = aten.avg_pool2d(arg0_1, [13, 13], [1, 1], [0, 0], False, True, None) 2023-01-11T21:41:23.5498160Z del arg0_1 2023-01-11T21:41:23.5498248Z buf1 = buf0 2023-01-11T21:41:23.5498392Z assert_size_stride(buf1, (1, 1, 12, 12), (144, 144, 12, 1)) 2023-01-11T21:41:23.5498472Z del buf0 2023-01-11T21:41:23.5498557Z return (buf1, ) 2023-01-11T21:41:23.5498564Z 2023-01-11T21:41:23.5498569Z 2023-01-11T21:41:23.5498667Z if __name__ == "__main__": 2023-01-11T21:41:23.5498815Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5498985Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5499342Z arg0_1 = rand_strided((1, 1, 24, 24), (576, 576, 24, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5499526Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5499534Z 2023-01-11T21:41:23.5499637Z ok (0.031s) 2023-01-11T21:41:23.5500427Z test_avg_pool2d_backward2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5500645Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5501065Z [2023-01-11 21:24:48,565] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 34 2023-01-11T21:41:23.5501073Z 2023-01-11T21:41:23.5501199Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5501290Z import torch 2023-01-11T21:41:23.5501372Z import random 2023-01-11T21:41:23.5501542Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5501694Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5501702Z 2023-01-11T21:41:23.5501796Z aten = torch.ops.aten 2023-01-11T21:41:23.5501970Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5502092Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5502098Z 2023-01-11T21:41:23.5502105Z 2023-01-11T21:41:23.5502300Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5502820Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5502988Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5503125Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.5503211Z { 2023-01-11T21:41:23.5503349Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5503434Z { 2023-01-11T21:41:23.5503544Z #pragma omp for 2023-01-11T21:41:23.5503779Z for(long i0=0; i0<20; i0+=1) 2023-01-11T21:41:23.5503871Z { 2023-01-11T21:41:23.5503992Z #pragma GCC ivdep 2023-01-11T21:41:23.5504123Z for(long i1=0; i1<15; i1+=1) 2023-01-11T21:41:23.5504219Z { 2023-01-11T21:41:23.5504317Z { 2023-01-11T21:41:23.5504418Z { 2023-01-11T21:41:23.5504696Z auto tmp0 = static_cast((-1) + i0); 2023-01-11T21:41:23.5504965Z auto tmp1 = static_cast((-1) + i1); 2023-01-11T21:41:23.5505136Z auto tmp2 = static_cast(2 + i0); 2023-01-11T21:41:23.5505281Z auto tmp3 = static_cast(2 + i1); 2023-01-11T21:41:23.5505414Z auto tmp4 = static_cast(0); 2023-01-11T21:41:23.5505607Z auto tmp5 = (tmp4 != tmp4) ? tmp4 : std::max(tmp0, tmp4); 2023-01-11T21:41:23.5505786Z auto tmp6 = (tmp4 != tmp4) ? tmp4 : std::max(tmp1, tmp4); 2023-01-11T21:41:23.5506024Z auto tmp7 = static_cast(20); 2023-01-11T21:41:23.5506188Z auto tmp8 = (tmp7 != tmp7) ? tmp7 : std::min(tmp2, tmp7); 2023-01-11T21:41:23.5506332Z auto tmp9 = static_cast(15); 2023-01-11T21:41:23.5506507Z auto tmp10 = (tmp9 != tmp9) ? tmp9 : std::min(tmp3, tmp9); 2023-01-11T21:41:23.5506632Z auto tmp11 = tmp5 + tmp4; 2023-01-11T21:41:23.5506752Z auto tmp12 = tmp6 + tmp4; 2023-01-11T21:41:23.5506898Z auto tmp13 = static_cast(1); 2023-01-11T21:41:23.5507035Z auto tmp14 = static_cast(3); 2023-01-11T21:41:23.5507156Z auto tmp15 = tmp11 * tmp13; 2023-01-11T21:41:23.5507361Z auto tmp16 = tmp15 - tmp13; 2023-01-11T21:41:23.5507484Z auto tmp17 = tmp12 * tmp13; 2023-01-11T21:41:23.5507683Z auto tmp18 = tmp17 - tmp13; 2023-01-11T21:41:23.5507810Z auto tmp19 = tmp16 + tmp14; 2023-01-11T21:41:23.5507933Z auto tmp20 = tmp7 + tmp13; 2023-01-11T21:41:23.5508113Z auto tmp21 = (tmp20 != tmp20) ? tmp20 : std::min(tmp19, tmp20); 2023-01-11T21:41:23.5508236Z auto tmp22 = tmp18 + tmp14; 2023-01-11T21:41:23.5508357Z auto tmp23 = tmp9 + tmp13; 2023-01-11T21:41:23.5508514Z auto tmp24 = (tmp23 != tmp23) ? tmp23 : std::min(tmp22, tmp23); 2023-01-11T21:41:23.5508691Z auto tmp25 = (tmp4 != tmp4) ? tmp4 : std::max(tmp16, tmp4); 2023-01-11T21:41:23.5508866Z auto tmp26 = (tmp4 != tmp4) ? 
tmp4 : std::max(tmp18, tmp4); 2023-01-11T21:41:23.5509037Z auto tmp27 = (tmp7 != tmp7) ? tmp7 : std::min(tmp21, tmp7); 2023-01-11T21:41:23.5509205Z auto tmp28 = (tmp9 != tmp9) ? tmp9 : std::min(tmp24, tmp9); 2023-01-11T21:41:23.5509411Z auto tmp29 = tmp27 - tmp25; 2023-01-11T21:41:23.5509604Z auto tmp30 = tmp28 - tmp26; 2023-01-11T21:41:23.5509732Z auto tmp31 = tmp29 * tmp30; 2023-01-11T21:41:23.5509916Z auto tmp32 = tmp8 - tmp13; 2023-01-11T21:41:23.5510089Z auto tmp33 = (tmp32 != tmp32) ? tmp32 : std::min(tmp11, tmp32); 2023-01-11T21:41:23.5510288Z auto tmp34 = tmp10 - tmp13; 2023-01-11T21:41:23.5510456Z auto tmp35 = (tmp34 != tmp34) ? tmp34 : std::min(tmp12, tmp34); 2023-01-11T21:41:23.5510606Z auto tmp36 = in_ptr0[tmp35 + (15*tmp33)]; 2023-01-11T21:41:23.5510727Z auto tmp37 = tmp36 / tmp31; 2023-01-11T21:41:23.5510856Z auto tmp38 = tmp11 < tmp8; 2023-01-11T21:41:23.5510973Z auto tmp39 = tmp12 < tmp10; 2023-01-11T21:41:23.5511173Z auto tmp40 = tmp38 & tmp39; 2023-01-11T21:41:23.5511321Z auto tmp41 = static_cast(0.0); 2023-01-11T21:41:23.5511457Z auto tmp42 = tmp40 ? tmp37 : tmp41; 2023-01-11T21:41:23.5511587Z auto tmp43 = tmp6 + tmp13; 2023-01-11T21:41:23.5511711Z auto tmp44 = tmp43 * tmp13; 2023-01-11T21:41:23.5511910Z auto tmp45 = tmp44 - tmp13; 2023-01-11T21:41:23.5512032Z auto tmp46 = tmp45 + tmp14; 2023-01-11T21:41:23.5512214Z auto tmp47 = (tmp23 != tmp23) ? tmp23 : std::min(tmp46, tmp23); 2023-01-11T21:41:23.5512398Z auto tmp48 = (tmp4 != tmp4) ? tmp4 : std::max(tmp45, tmp4); 2023-01-11T21:41:23.5512570Z auto tmp49 = (tmp9 != tmp9) ? tmp9 : std::min(tmp47, tmp9); 2023-01-11T21:41:23.5512772Z auto tmp50 = tmp49 - tmp48; 2023-01-11T21:41:23.5512955Z auto tmp51 = tmp29 * tmp50; 2023-01-11T21:41:23.5513133Z auto tmp52 = (tmp34 != tmp34) ? tmp34 : std::min(tmp43, tmp34); 2023-01-11T21:41:23.5513270Z auto tmp53 = in_ptr0[tmp52 + (15*tmp33)]; 2023-01-11T21:41:23.5513405Z auto tmp54 = tmp53 / tmp51; 2023-01-11T21:41:23.5513530Z auto tmp55 = tmp43 < tmp10; 2023-01-11T21:41:23.5513636Z auto tmp56 = tmp38 & tmp55; 2023-01-11T21:41:23.5513821Z auto tmp57 = tmp42 + tmp54; 2023-01-11T21:41:23.5513966Z auto tmp58 = tmp56 ? tmp57 : tmp42; 2023-01-11T21:41:23.5514105Z auto tmp59 = static_cast(2); 2023-01-11T21:41:23.5514231Z auto tmp60 = tmp6 + tmp59; 2023-01-11T21:41:23.5514357Z auto tmp61 = tmp60 * tmp13; 2023-01-11T21:41:23.5514568Z auto tmp62 = tmp61 - tmp13; 2023-01-11T21:41:23.5514681Z auto tmp63 = tmp62 + tmp14; 2023-01-11T21:41:23.5514850Z auto tmp64 = (tmp23 != tmp23) ? tmp23 : std::min(tmp63, tmp23); 2023-01-11T21:41:23.5515011Z auto tmp65 = (tmp4 != tmp4) ? tmp4 : std::max(tmp62, tmp4); 2023-01-11T21:41:23.5515169Z auto tmp66 = (tmp9 != tmp9) ? tmp9 : std::min(tmp64, tmp9); 2023-01-11T21:41:23.5515366Z auto tmp67 = tmp66 - tmp65; 2023-01-11T21:41:23.5515489Z auto tmp68 = tmp29 * tmp67; 2023-01-11T21:41:23.5515659Z auto tmp69 = (tmp34 != tmp34) ? tmp34 : std::min(tmp60, tmp34); 2023-01-11T21:41:23.5515806Z auto tmp70 = in_ptr0[tmp69 + (15*tmp33)]; 2023-01-11T21:41:23.5515928Z auto tmp71 = tmp70 / tmp68; 2023-01-11T21:41:23.5516036Z auto tmp72 = tmp60 < tmp10; 2023-01-11T21:41:23.5516151Z auto tmp73 = tmp38 & tmp72; 2023-01-11T21:41:23.5516285Z auto tmp74 = tmp58 + tmp71; 2023-01-11T21:41:23.5516418Z auto tmp75 = tmp73 ? 
tmp74 : tmp58; 2023-01-11T21:41:23.5516541Z auto tmp76 = tmp5 + tmp13; 2023-01-11T21:41:23.5516661Z auto tmp77 = tmp76 * tmp13; 2023-01-11T21:41:23.5516864Z auto tmp78 = tmp77 - tmp13; 2023-01-11T21:41:23.5516980Z auto tmp79 = tmp78 + tmp14; 2023-01-11T21:41:23.5517154Z auto tmp80 = (tmp20 != tmp20) ? tmp20 : std::min(tmp79, tmp20); 2023-01-11T21:41:23.5517334Z auto tmp81 = (tmp4 != tmp4) ? tmp4 : std::max(tmp78, tmp4); 2023-01-11T21:41:23.5517503Z auto tmp82 = (tmp7 != tmp7) ? tmp7 : std::min(tmp80, tmp7); 2023-01-11T21:41:23.5517705Z auto tmp83 = tmp82 - tmp81; 2023-01-11T21:41:23.5517816Z auto tmp84 = tmp83 * tmp30; 2023-01-11T21:41:23.5517978Z auto tmp85 = (tmp32 != tmp32) ? tmp32 : std::min(tmp76, tmp32); 2023-01-11T21:41:23.5518206Z auto tmp86 = in_ptr0[tmp35 + (15*tmp85)]; 2023-01-11T21:41:23.5518322Z auto tmp87 = tmp86 / tmp84; 2023-01-11T21:41:23.5518440Z auto tmp88 = tmp76 < tmp8; 2023-01-11T21:41:23.5518558Z auto tmp89 = tmp88 & tmp39; 2023-01-11T21:41:23.5518685Z auto tmp90 = tmp75 + tmp87; 2023-01-11T21:41:23.5518827Z auto tmp91 = tmp89 ? tmp90 : tmp75; 2023-01-11T21:41:23.5518947Z auto tmp92 = tmp83 * tmp50; 2023-01-11T21:41:23.5519083Z auto tmp93 = in_ptr0[tmp52 + (15*tmp85)]; 2023-01-11T21:41:23.5519204Z auto tmp94 = tmp93 / tmp92; 2023-01-11T21:41:23.5519305Z auto tmp95 = tmp88 & tmp55; 2023-01-11T21:41:23.5519415Z auto tmp96 = tmp91 + tmp94; 2023-01-11T21:41:23.5519608Z auto tmp97 = tmp95 ? tmp96 : tmp91; 2023-01-11T21:41:23.5519735Z auto tmp98 = tmp83 * tmp67; 2023-01-11T21:41:23.5519869Z auto tmp99 = in_ptr0[tmp69 + (15*tmp85)]; 2023-01-11T21:41:23.5519995Z auto tmp100 = tmp99 / tmp98; 2023-01-11T21:41:23.5520120Z auto tmp101 = tmp88 & tmp72; 2023-01-11T21:41:23.5520237Z auto tmp102 = tmp97 + tmp100; 2023-01-11T21:41:23.5520374Z auto tmp103 = tmp101 ? tmp102 : tmp97; 2023-01-11T21:41:23.5520496Z auto tmp104 = tmp5 + tmp59; 2023-01-11T21:41:23.5520622Z auto tmp105 = tmp104 * tmp13; 2023-01-11T21:41:23.5520835Z auto tmp106 = tmp105 - tmp13; 2023-01-11T21:41:23.5520958Z auto tmp107 = tmp106 + tmp14; 2023-01-11T21:41:23.5521135Z auto tmp108 = (tmp20 != tmp20) ? tmp20 : std::min(tmp107, tmp20); 2023-01-11T21:41:23.5521321Z auto tmp109 = (tmp4 != tmp4) ? tmp4 : std::max(tmp106, tmp4); 2023-01-11T21:41:23.5521480Z auto tmp110 = (tmp7 != tmp7) ? tmp7 : std::min(tmp108, tmp7); 2023-01-11T21:41:23.5521696Z auto tmp111 = tmp110 - tmp109; 2023-01-11T21:41:23.5521818Z auto tmp112 = tmp111 * tmp30; 2023-01-11T21:41:23.5522000Z auto tmp113 = (tmp32 != tmp32) ? tmp32 : std::min(tmp104, tmp32); 2023-01-11T21:41:23.5522152Z auto tmp114 = in_ptr0[tmp35 + (15*tmp113)]; 2023-01-11T21:41:23.5522281Z auto tmp115 = tmp114 / tmp112; 2023-01-11T21:41:23.5522411Z auto tmp116 = tmp104 < tmp8; 2023-01-11T21:41:23.5522542Z auto tmp117 = tmp116 & tmp39; 2023-01-11T21:41:23.5522654Z auto tmp118 = tmp103 + tmp115; 2023-01-11T21:41:23.5522794Z auto tmp119 = tmp117 ? tmp118 : tmp103; 2023-01-11T21:41:23.5522933Z auto tmp120 = tmp111 * tmp50; 2023-01-11T21:41:23.5523082Z auto tmp121 = in_ptr0[tmp52 + (15*tmp113)]; 2023-01-11T21:41:23.5523210Z auto tmp122 = tmp121 / tmp120; 2023-01-11T21:41:23.5523343Z auto tmp123 = tmp116 & tmp55; 2023-01-11T21:41:23.5523470Z auto tmp124 = tmp119 + tmp122; 2023-01-11T21:41:23.5523599Z auto tmp125 = tmp123 ? 
tmp124 : tmp119; 2023-01-11T21:41:23.5523725Z auto tmp126 = tmp111 * tmp67; 2023-01-11T21:41:23.5523872Z auto tmp127 = in_ptr0[tmp69 + (15*tmp113)]; 2023-01-11T21:41:23.5524005Z auto tmp128 = tmp127 / tmp126; 2023-01-11T21:41:23.5524135Z auto tmp129 = tmp116 & tmp72; 2023-01-11T21:41:23.5524256Z auto tmp130 = tmp125 + tmp128; 2023-01-11T21:41:23.5524406Z auto tmp131 = tmp129 ? tmp130 : tmp125; 2023-01-11T21:41:23.5524609Z out_ptr0[i1 + (15*i0)] = tmp131; 2023-01-11T21:41:23.5524685Z } 2023-01-11T21:41:23.5524775Z } 2023-01-11T21:41:23.5524869Z } 2023-01-11T21:41:23.5524957Z } 2023-01-11T21:41:23.5525032Z } 2023-01-11T21:41:23.5525113Z } 2023-01-11T21:41:23.5525215Z ''') 2023-01-11T21:41:23.5525239Z 2023-01-11T21:41:23.5525245Z 2023-01-11T21:41:23.5525358Z async_compile.wait(globals()) 2023-01-11T21:41:23.5525447Z del async_compile 2023-01-11T21:41:23.5525454Z 2023-01-11T21:41:23.5525555Z def call(args): 2023-01-11T21:41:23.5525658Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.5525762Z args.clear() 2023-01-11T21:41:23.5526095Z buf0 = empty_strided((1, 1, 20, 15), (300, 300, 15, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5526293Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.5526372Z del arg0_1 2023-01-11T21:41:23.5526528Z return (buf0, ) 2023-01-11T21:41:23.5526537Z 2023-01-11T21:41:23.5526544Z 2023-01-11T21:41:23.5526654Z if __name__ == "__main__": 2023-01-11T21:41:23.5526818Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5526995Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5527310Z arg0_1 = rand_strided((1, 1, 20, 15), (300, 300, 15, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5527626Z arg1_1 = rand_strided((1, 1, 20, 15), (300, 300, 15, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5527785Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.5528161Z [2023-01-11 21:24:50,245] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 34 2023-01-11T21:41:23.5528187Z 2023-01-11T21:41:23.5528265Z ok (1.701s) 2023-01-11T21:41:23.5528951Z test_avg_pool2d_backward3_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5529136Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5529518Z [2023-01-11 21:24:50,286] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 35 2023-01-11T21:41:23.5529921Z [2023-01-11 21:24:52,072] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 35 2023-01-11T21:41:23.5529929Z 2023-01-11T21:41:23.5530053Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5530147Z import torch 2023-01-11T21:41:23.5530241Z import random 2023-01-11T21:41:23.5530393Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5530546Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5530559Z 2023-01-11T21:41:23.5530670Z aten = torch.ops.aten 2023-01-11T21:41:23.5530858Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5530993Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5531000Z 2023-01-11T21:41:23.5531006Z 2023-01-11T21:41:23.5531206Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5531505Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5531690Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5531832Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.5531909Z { 2023-01-11T21:41:23.5532052Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5532130Z { 2023-01-11T21:41:23.5532244Z #pragma omp for 2023-01-11T21:41:23.5532362Z for(long i0=0; i0<2016; i0+=1) 2023-01-11T21:41:23.5532447Z { 2023-01-11T21:41:23.5532542Z #pragma GCC ivdep 2023-01-11T21:41:23.5532749Z for(long i1=0; i1<21; i1+=1) 2023-01-11T21:41:23.5532833Z { 2023-01-11T21:41:23.5532945Z #pragma GCC ivdep 2023-01-11T21:41:23.5533074Z for(long i2=0; i2<21; i2+=1) 2023-01-11T21:41:23.5533164Z { 2023-01-11T21:41:23.5533261Z { 2023-01-11T21:41:23.5533342Z { 2023-01-11T21:41:23.5533502Z auto tmp0 = static_cast(((1 + i1) / 2)); 2023-01-11T21:41:23.5533673Z auto tmp1 = static_cast(((1 + i2) / 2)); 2023-01-11T21:41:23.5533829Z auto tmp2 = static_cast(1 + (i1 / 2)); 2023-01-11T21:41:23.5533977Z auto tmp3 = static_cast(1 + (i2 / 2)); 2023-01-11T21:41:23.5534128Z auto tmp4 = static_cast(0); 2023-01-11T21:41:23.5534314Z auto tmp5 = (tmp4 != tmp4) ? tmp4 : std::max(tmp0, tmp4); 2023-01-11T21:41:23.5534575Z auto tmp6 = (tmp4 != tmp4) ? tmp4 : std::max(tmp1, tmp4); 2023-01-11T21:41:23.5534708Z auto tmp7 = static_cast(11); 2023-01-11T21:41:23.5534889Z auto tmp8 = (tmp7 != tmp7) ? tmp7 : std::min(tmp2, tmp7); 2023-01-11T21:41:23.5535061Z auto tmp9 = (tmp7 != tmp7) ? tmp7 : std::min(tmp3, tmp7); 2023-01-11T21:41:23.5535201Z auto tmp10 = tmp5 + tmp4; 2023-01-11T21:41:23.5535324Z auto tmp11 = tmp6 + tmp4; 2023-01-11T21:41:23.5535473Z auto tmp12 = static_cast(1); 2023-01-11T21:41:23.5535703Z auto tmp13 = tmp8 - tmp12; 2023-01-11T21:41:23.5535883Z auto tmp14 = (tmp13 != tmp13) ? tmp13 : std::min(tmp10, tmp13); 2023-01-11T21:41:23.5536073Z auto tmp15 = tmp9 - tmp12; 2023-01-11T21:41:23.5536261Z auto tmp16 = (tmp15 != tmp15) ? 
tmp15 : std::min(tmp11, tmp15); 2023-01-11T21:41:23.5536423Z auto tmp17 = in_ptr0[tmp16 + (11*tmp14) + (121*i0)]; 2023-01-11T21:41:23.5536553Z auto tmp18 = tmp17 / 1; 2023-01-11T21:41:23.5536684Z auto tmp19 = tmp10 < tmp8; 2023-01-11T21:41:23.5536815Z auto tmp20 = tmp11 < tmp9; 2023-01-11T21:41:23.5536948Z auto tmp21 = tmp19 & tmp20; 2023-01-11T21:41:23.5537102Z auto tmp22 = static_cast(0.0); 2023-01-11T21:41:23.5537233Z auto tmp23 = tmp21 ? tmp18 : tmp22; 2023-01-11T21:41:23.5537376Z out_ptr0[i2 + (21*i1) + (441*i0)] = tmp23; 2023-01-11T21:41:23.5537470Z } 2023-01-11T21:41:23.5537560Z } 2023-01-11T21:41:23.5537647Z } 2023-01-11T21:41:23.5537734Z } 2023-01-11T21:41:23.5537816Z } 2023-01-11T21:41:23.5537897Z } 2023-01-11T21:41:23.5537977Z } 2023-01-11T21:41:23.5538085Z ''') 2023-01-11T21:41:23.5538094Z 2023-01-11T21:41:23.5538100Z 2023-01-11T21:41:23.5538223Z async_compile.wait(globals()) 2023-01-11T21:41:23.5538324Z del async_compile 2023-01-11T21:41:23.5538330Z 2023-01-11T21:41:23.5538426Z def call(args): 2023-01-11T21:41:23.5538530Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.5538612Z args.clear() 2023-01-11T21:41:23.5538935Z buf0 = empty_strided((1, 2016, 21, 21), (889056, 441, 21, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5539119Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.5539213Z del arg0_1 2023-01-11T21:41:23.5539312Z return (buf0, ) 2023-01-11T21:41:23.5539319Z 2023-01-11T21:41:23.5539325Z 2023-01-11T21:41:23.5539430Z if __name__ == "__main__": 2023-01-11T21:41:23.5539589Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5539765Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5540167Z arg0_1 = rand_strided((1, 2016, 11, 11), (243936, 121, 11, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5540492Z arg1_1 = rand_strided((1, 2016, 21, 21), (889056, 441, 21, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5540656Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.5540664Z 2023-01-11T21:41:23.5540758Z ok (1.917s) 2023-01-11T21:41:23.5541462Z test_avg_pool2d_backward4_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5541645Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5542089Z [2023-01-11 21:24:52,195] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 36 2023-01-11T21:41:23.5542662Z [2023-01-11 21:24:52,210] torch._inductor.ir: [WARNING] Using FallbackKernel: aten.avg_pool2d_backward 2023-01-11T21:41:23.5543079Z [2023-01-11 21:24:52,214] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 36 2023-01-11T21:41:23.5543088Z 2023-01-11T21:41:23.5543220Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5543299Z import torch 2023-01-11T21:41:23.5543403Z import random 2023-01-11T21:41:23.5543563Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5543740Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5543747Z 2023-01-11T21:41:23.5543856Z aten = torch.ops.aten 2023-01-11T21:41:23.5544038Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5544165Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5544172Z 2023-01-11T21:41:23.5544178Z 2023-01-11T21:41:23.5544310Z async_compile.wait(globals()) 2023-01-11T21:41:23.5544395Z del async_compile 2023-01-11T21:41:23.5544402Z 2023-01-11T21:41:23.5544497Z def call(args): 2023-01-11T21:41:23.5544601Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.5544698Z args.clear() 2023-01-11T21:41:23.5544906Z buf0 = aten.avg_pool2d_backward(arg0_1, arg1_1, [13, 13], [1, 1], [0, 0], True, False, None) 2023-01-11T21:41:23.5544997Z del arg0_1 2023-01-11T21:41:23.5545086Z del arg1_1 2023-01-11T21:41:23.5545161Z buf1 = buf0 2023-01-11T21:41:23.5545306Z assert_size_stride(buf1, (1, 16, 24, 24), (9216, 576, 24, 1)) 2023-01-11T21:41:23.5545389Z del buf0 2023-01-11T21:41:23.5545485Z return (buf1, ) 2023-01-11T21:41:23.5545491Z 2023-01-11T21:41:23.5545496Z 2023-01-11T21:41:23.5545601Z if __name__ == "__main__": 2023-01-11T21:41:23.5545759Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5545933Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5546266Z arg0_1 = rand_strided((1, 16, 12, 12), (2304, 144, 12, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5546567Z arg1_1 = rand_strided((1, 16, 24, 24), (9216, 576, 24, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5546730Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.5546738Z 2023-01-11T21:41:23.5546828Z ok (0.054s) 2023-01-11T21:41:23.5547524Z test_avg_pool2d_backward_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5547704Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5548104Z [2023-01-11 21:24:52,248] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 37 2023-01-11T21:41:23.5548639Z [2023-01-11 21:24:53,858] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 37 2023-01-11T21:41:23.5548647Z 2023-01-11T21:41:23.5548778Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5548877Z import torch 2023-01-11T21:41:23.5548959Z import random 2023-01-11T21:41:23.5549127Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5549297Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5549304Z 2023-01-11T21:41:23.5549414Z aten = torch.ops.aten 2023-01-11T21:41:23.5549600Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5549727Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5549734Z 2023-01-11T21:41:23.5549740Z 2023-01-11T21:41:23.5549941Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5550221Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5550467Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5550600Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.5550688Z { 2023-01-11T21:41:23.5550824Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5550903Z { 2023-01-11T21:41:23.5551012Z #pragma omp for 2023-01-11T21:41:23.5551126Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.5551202Z { 2023-01-11T21:41:23.5551314Z #pragma GCC ivdep 2023-01-11T21:41:23.5551432Z for(long i1=0; i1<14; i1+=1) 2023-01-11T21:41:23.5551523Z { 2023-01-11T21:41:23.5551633Z #pragma GCC ivdep 2023-01-11T21:41:23.5551753Z for(long i2=0; i2<14; i2+=1) 2023-01-11T21:41:23.5551840Z { 2023-01-11T21:41:23.5551909Z { 2023-01-11T21:41:23.5551998Z { 2023-01-11T21:41:23.5552148Z auto tmp0 = static_cast((i1 / 2)); 2023-01-11T21:41:23.5552295Z auto tmp1 = static_cast((i2 / 2)); 2023-01-11T21:41:23.5553365Z auto tmp2 = static_cast(1 + (i1 / 2)); 2023-01-11T21:41:23.5553528Z auto tmp3 = static_cast(1 + (i2 / 2)); 2023-01-11T21:41:23.5553677Z auto tmp4 = static_cast(0); 2023-01-11T21:41:23.5553929Z auto tmp5 = (tmp4 != tmp4) ? tmp4 : std::max(tmp0, tmp4); 2023-01-11T21:41:23.5554108Z auto tmp6 = (tmp4 != tmp4) ? tmp4 : std::max(tmp1, tmp4); 2023-01-11T21:41:23.5554254Z auto tmp7 = static_cast(7); 2023-01-11T21:41:23.5554436Z auto tmp8 = (tmp7 != tmp7) ? tmp7 : std::min(tmp2, tmp7); 2023-01-11T21:41:23.5554617Z auto tmp9 = (tmp7 != tmp7) ? tmp7 : std::min(tmp3, tmp7); 2023-01-11T21:41:23.5554750Z auto tmp10 = tmp5 + tmp4; 2023-01-11T21:41:23.5554881Z auto tmp11 = tmp6 + tmp4; 2023-01-11T21:41:23.5555029Z auto tmp12 = static_cast(1); 2023-01-11T21:41:23.5555269Z auto tmp13 = tmp8 - tmp12; 2023-01-11T21:41:23.5555435Z auto tmp14 = (tmp13 != tmp13) ? tmp13 : std::min(tmp10, tmp13); 2023-01-11T21:41:23.5555654Z auto tmp15 = tmp9 - tmp12; 2023-01-11T21:41:23.5555836Z auto tmp16 = (tmp15 != tmp15) ? 
tmp15 : std::min(tmp11, tmp15); 2023-01-11T21:41:23.5556004Z auto tmp17 = in_ptr0[tmp16 + (7*tmp14) + (49*i0)]; 2023-01-11T21:41:23.5556135Z auto tmp18 = tmp17 / 4; 2023-01-11T21:41:23.5556265Z auto tmp19 = tmp10 < tmp8; 2023-01-11T21:41:23.5556391Z auto tmp20 = tmp11 < tmp9; 2023-01-11T21:41:23.5556526Z auto tmp21 = tmp19 & tmp20; 2023-01-11T21:41:23.5556666Z auto tmp22 = static_cast(0.0); 2023-01-11T21:41:23.5556896Z auto tmp23 = tmp21 ? tmp18 : tmp22; 2023-01-11T21:41:23.5557035Z out_ptr0[i2 + (14*i1) + (196*i0)] = tmp23; 2023-01-11T21:41:23.5557125Z } 2023-01-11T21:41:23.5557214Z } 2023-01-11T21:41:23.5557300Z } 2023-01-11T21:41:23.5557385Z } 2023-01-11T21:41:23.5557457Z } 2023-01-11T21:41:23.5557542Z } 2023-01-11T21:41:23.5557628Z } 2023-01-11T21:41:23.5557747Z ''') 2023-01-11T21:41:23.5557754Z 2023-01-11T21:41:23.5557760Z 2023-01-11T21:41:23.5557889Z async_compile.wait(globals()) 2023-01-11T21:41:23.5557989Z del async_compile 2023-01-11T21:41:23.5557996Z 2023-01-11T21:41:23.5558097Z def call(args): 2023-01-11T21:41:23.5558185Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.5558285Z args.clear() 2023-01-11T21:41:23.5558658Z buf0 = empty_strided((2, 4, 14, 14), (784, 196, 14, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5558853Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.5558944Z del arg0_1 2023-01-11T21:41:23.5559039Z return (buf0, ) 2023-01-11T21:41:23.5559046Z 2023-01-11T21:41:23.5559052Z 2023-01-11T21:41:23.5559155Z if __name__ == "__main__": 2023-01-11T21:41:23.5559333Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5559755Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5560085Z arg0_1 = rand_strided((2, 4, 7, 7), (196, 49, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5560387Z arg1_1 = rand_strided((2, 4, 14, 14), (784, 196, 14, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5560545Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.5560553Z 2023-01-11T21:41:23.5560640Z ok (1.643s) 2023-01-11T21:41:23.5561314Z test_baddbmm_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5561492Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5561861Z [2023-01-11 21:24:53,883] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 38 2023-01-11T21:41:23.5562254Z [2023-01-11 21:24:55,612] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 38 2023-01-11T21:41:23.5562262Z 2023-01-11T21:41:23.5562394Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5562476Z import torch 2023-01-11T21:41:23.5562570Z import random 2023-01-11T21:41:23.5562735Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5562899Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5562913Z 2023-01-11T21:41:23.5563020Z aten = torch.ops.aten 2023-01-11T21:41:23.5563212Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5563345Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5563353Z 2023-01-11T21:41:23.5563359Z 2023-01-11T21:41:23.5563554Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5563854Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5564025Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.5564172Z const float* __restrict__ in_ptr0) 2023-01-11T21:41:23.5564251Z { 2023-01-11T21:41:23.5564386Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5564465Z { 2023-01-11T21:41:23.5564574Z #pragma omp for collapse(2) 2023-01-11T21:41:23.5564679Z for(long i0=0; i0<6; i0+=1) 2023-01-11T21:41:23.5564758Z { 2023-01-11T21:41:23.5564874Z for(long i1=0; i1<128; i1+=1) 2023-01-11T21:41:23.5565048Z { 2023-01-11T21:41:23.5565175Z for(long i2=0; i2<12; i2+=1) 2023-01-11T21:41:23.5565265Z { 2023-01-11T21:41:23.5565454Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i2) + (100*i0)); 2023-01-11T21:41:23.5565675Z auto tmp1 = at::vec::Vectorized::loadu(in_out_ptr0 + (8*i2) + (100*i1) + (12800*i0)); 2023-01-11T21:41:23.5565795Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5565954Z tmp2.store(in_out_ptr0 + (8*i2) + (100*i1) + (12800*i0)); 2023-01-11T21:41:23.5566043Z } 2023-01-11T21:41:23.5566172Z #pragma omp simd simdlen(4) 2023-01-11T21:41:23.5566293Z for(long i2=96; i2<100; i2+=1) 2023-01-11T21:41:23.5566377Z { 2023-01-11T21:41:23.5566497Z auto tmp0 = in_ptr0[i2 + (100*i0)]; 2023-01-11T21:41:23.5566698Z auto tmp1 = in_out_ptr0[i2 + (100*i1) + (12800*i0)]; 2023-01-11T21:41:23.5566831Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5566980Z in_out_ptr0[i2 + (100*i1) + (12800*i0)] = tmp2; 2023-01-11T21:41:23.5567073Z } 2023-01-11T21:41:23.5567159Z } 2023-01-11T21:41:23.5567245Z } 2023-01-11T21:41:23.5567313Z } 2023-01-11T21:41:23.5567397Z } 2023-01-11T21:41:23.5567521Z ''') 2023-01-11T21:41:23.5567529Z 2023-01-11T21:41:23.5567534Z 2023-01-11T21:41:23.5567649Z async_compile.wait(globals()) 2023-01-11T21:41:23.5567751Z del async_compile 2023-01-11T21:41:23.5567757Z 2023-01-11T21:41:23.5567857Z def call(args): 2023-01-11T21:41:23.5567985Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.5568068Z args.clear() 2023-01-11T21:41:23.5568386Z buf0 = empty_strided((6, 128, 100), (12800, 100, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5568522Z aten.bmm.out(arg1_1, arg2_1, out=buf0) 2023-01-11T21:41:23.5568620Z del arg1_1 2023-01-11T21:41:23.5568714Z del arg2_1 2023-01-11T21:41:23.5568829Z buf1 = buf0; del buf0 # 
reuse 2023-01-11T21:41:23.5569013Z kernel_cpp_0(c_void_p(buf1.data_ptr()), c_void_p(arg0_1.data_ptr())) 2023-01-11T21:41:23.5569091Z del arg0_1 2023-01-11T21:41:23.5569183Z return (buf1, ) 2023-01-11T21:41:23.5569190Z 2023-01-11T21:41:23.5569196Z 2023-01-11T21:41:23.5569299Z if __name__ == "__main__": 2023-01-11T21:41:23.5569450Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5569621Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5569926Z arg0_1 = rand_strided((6, 1, 100), (100, 100, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5570229Z arg1_1 = rand_strided((6, 128, 64), (8192, 64, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5570537Z arg2_1 = rand_strided((6, 64, 100), (6400, 100, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5570699Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.5570714Z 2023-01-11T21:41:23.5570788Z ok (1.757s) 2023-01-11T21:41:23.5571469Z test_batch_norm_2d_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5571645Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5572022Z [2023-01-11 21:24:55,896] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 39 2023-01-11T21:41:23.5572402Z [2023-01-11 21:24:57,733] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 39 2023-01-11T21:41:23.5573023Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5573281Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5573653Z [2023-01-11 21:24:58,036] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 40 2023-01-11T21:41:23.5573662Z 2023-01-11T21:41:23.5573790Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5573885Z import torch 2023-01-11T21:41:23.5573965Z import random 2023-01-11T21:41:23.5574127Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5574289Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5574297Z 2023-01-11T21:41:23.5574548Z aten = torch.ops.aten 2023-01-11T21:41:23.5574736Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5574927Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5574936Z 2023-01-11T21:41:23.5574942Z 2023-01-11T21:41:23.5575142Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5575431Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5575584Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5575723Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.5575857Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.5575992Z const float* __restrict__ in_ptr3, 2023-01-11T21:41:23.5576125Z const float* __restrict__ in_ptr4, 2023-01-11T21:41:23.5576253Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.5576378Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.5576504Z float* __restrict__ out_ptr2, 2023-01-11T21:41:23.5576615Z bool* __restrict__ out_ptr3) 2023-01-11T21:41:23.5576697Z { 2023-01-11T21:41:23.5576832Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5576915Z { 2023-01-11T21:41:23.5577019Z #pragma omp for 2023-01-11T21:41:23.5577129Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.5577196Z { 2023-01-11T21:41:23.5577380Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.5577498Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.5577583Z } 2023-01-11T21:41:23.5577715Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.5577828Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:23.5577912Z { 2023-01-11T21:41:23.5578025Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.5578118Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:23.5578197Z } 2023-01-11T21:41:23.5578299Z #pragma omp for 2023-01-11T21:41:23.5578409Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.5578495Z { 2023-01-11T21:41:23.5578678Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.5578783Z tmp0.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.5578868Z } 2023-01-11T21:41:23.5578999Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.5579110Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:23.5579199Z { 2023-01-11T21:41:23.5579313Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:23.5579416Z out_ptr1[i0] = tmp0; 2023-01-11T21:41:23.5579484Z } 2023-01-11T21:41:23.5579606Z #pragma omp for collapse(2) 2023-01-11T21:41:23.5579712Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.5579793Z { 2023-01-11T21:41:23.5579911Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:23.5579991Z { 2023-01-11T21:41:23.5580110Z for(long i2=0; i2<8; i2+=1) 2023-01-11T21:41:23.5580185Z { 2023-01-11T21:41:23.5580399Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr2 + (8*i2) + (64*i1) + (640*i0)); 2023-01-11T21:41:23.5580660Z auto tmp1 = at::vec::Vectorized(out_ptr0[i1]); 
2023-01-11T21:41:23.5580831Z auto tmp3 = at::vec::Vectorized(out_ptr1[i1]); 2023-01-11T21:41:23.5581006Z auto tmp11 = at::vec::Vectorized(in_ptr3[i1]); 2023-01-11T21:41:23.5581188Z auto tmp13 = at::vec::Vectorized(in_ptr4[i1]); 2023-01-11T21:41:23.5581403Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.5581695Z auto tmp4 = at::vec::Vectorized(static_cast(1e-05)); 2023-01-11T21:41:23.5581813Z auto tmp5 = tmp3 + tmp4; 2023-01-11T21:41:23.5581916Z auto tmp6 = tmp5.sqrt(); 2023-01-11T21:41:23.5582049Z auto tmp7 = tmp6.reciprocal(); 2023-01-11T21:41:23.5582228Z auto tmp8 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.5582597Z auto tmp9 = tmp7 * tmp8; 2023-01-11T21:41:23.5582733Z auto tmp10 = tmp2 * tmp9; 2023-01-11T21:41:23.5582861Z auto tmp12 = tmp10 * tmp11; 2023-01-11T21:41:23.5582981Z auto tmp14 = tmp12 + tmp13; 2023-01-11T21:41:23.5583151Z auto tmp15 = at::vec::clamp_min(tmp14, decltype(tmp14)(0)); 2023-01-11T21:41:23.5583306Z tmp15.store(out_ptr2 + (8*i2) + (64*i1) + (640*i0)); 2023-01-11T21:41:23.5583393Z } 2023-01-11T21:41:23.5583529Z #pragma omp simd simdlen(4) 2023-01-11T21:41:23.5583650Z for(long i2=64; i2<64; i2+=1) 2023-01-11T21:41:23.5583740Z { 2023-01-11T21:41:23.5583885Z auto tmp0 = in_ptr2[i2 + (64*i1) + (640*i0)]; 2023-01-11T21:41:23.5583995Z auto tmp1 = out_ptr0[i1]; 2023-01-11T21:41:23.5584120Z auto tmp3 = out_ptr1[i1]; 2023-01-11T21:41:23.5584249Z auto tmp11 = in_ptr3[i1]; 2023-01-11T21:41:23.5584373Z auto tmp13 = in_ptr4[i1]; 2023-01-11T21:41:23.5584589Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.5584819Z auto tmp4 = static_cast(1e-05); 2023-01-11T21:41:23.5584944Z auto tmp5 = tmp3 + tmp4; 2023-01-11T21:41:23.5585078Z auto tmp6 = std::sqrt(tmp5); 2023-01-11T21:41:23.5585183Z auto tmp7 = 1 / tmp6; 2023-01-11T21:41:23.5585325Z auto tmp8 = static_cast(1); 2023-01-11T21:41:23.5585442Z auto tmp9 = tmp7 * tmp8; 2023-01-11T21:41:23.5585566Z auto tmp10 = tmp2 * tmp9; 2023-01-11T21:41:23.5585688Z auto tmp12 = tmp10 * tmp11; 2023-01-11T21:41:23.5585812Z auto tmp14 = tmp12 + tmp13; 2023-01-11T21:41:23.5585940Z auto tmp15 = tmp14 * (tmp14>0); 2023-01-11T21:41:23.5586065Z out_ptr2[i2 + (64*i1) + (640*i0)] = tmp15; 2023-01-11T21:41:23.5586155Z } 2023-01-11T21:41:23.5586246Z } 2023-01-11T21:41:23.5586332Z } 2023-01-11T21:41:23.5586439Z #pragma omp for 2023-01-11T21:41:23.5586549Z for(long i0=0; i0<1280; i0+=1) 2023-01-11T21:41:23.5586634Z { 2023-01-11T21:41:23.5586709Z { 2023-01-11T21:41:23.5586799Z { 2023-01-11T21:41:23.5586927Z auto tmp0 = out_ptr2[i0]; 2023-01-11T21:41:23.5587065Z auto tmp1 = static_cast(0); 2023-01-11T21:41:23.5587191Z auto tmp2 = tmp0 <= tmp1; 2023-01-11T21:41:23.5587305Z out_ptr3[i0] = tmp2; 2023-01-11T21:41:23.5587396Z } 2023-01-11T21:41:23.5587467Z } 2023-01-11T21:41:23.5587551Z } 2023-01-11T21:41:23.5587634Z } 2023-01-11T21:41:23.5587719Z } 2023-01-11T21:41:23.5587836Z ''') 2023-01-11T21:41:23.5587845Z 2023-01-11T21:41:23.5587938Z 2023-01-11T21:41:23.5588076Z async_compile.wait(globals()) 2023-01-11T21:41:23.5588176Z del async_compile 2023-01-11T21:41:23.5588182Z 2023-01-11T21:41:23.5588261Z def call(args): 2023-01-11T21:41:23.5588450Z primals_1, primals_2, primals_3, primals_4, primals_5, primals_6 = args 2023-01-11T21:41:23.5588548Z args.clear() 2023-01-11T21:41:23.5588838Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5589109Z buf1 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5589409Z buf2 = empty_strided((2, 10, 8, 8), (640, 64, 8, 1), device='cpu', 
dtype=torch.float32) 2023-01-11T21:41:23.5589705Z buf3 = empty_strided((2, 10, 8, 8), (640, 64, 8, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5590254Z kernel_cpp_0(c_void_p(primals_3.data_ptr()), c_void_p(primals_4.data_ptr()), c_void_p(primals_6.data_ptr()), c_void_p(primals_1.data_ptr()), c_void_p(primals_2.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr())) 2023-01-11T21:41:23.5590342Z del primals_1 2023-01-11T21:41:23.5590438Z del primals_2 2023-01-11T21:41:23.5590538Z del primals_3 2023-01-11T21:41:23.5590633Z del primals_4 2023-01-11T21:41:23.5590787Z return (buf0, buf1, buf2, primals_6, buf0, buf1, buf3, ) 2023-01-11T21:41:23.5590795Z 2023-01-11T21:41:23.5590801Z 2023-01-11T21:41:23.5590900Z if __name__ == "__main__": 2023-01-11T21:41:23.5591051Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5591223Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5591633Z primals_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5591914Z primals_2 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5592199Z primals_3 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5592481Z primals_4 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5592749Z primals_5 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.5593056Z primals_6 = rand_strided((2, 10, 8, 8), (640, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5593290Z print_performance(lambda: call([primals_1, primals_2, primals_3, primals_4, primals_5, primals_6])) 2023-01-11T21:41:23.5593299Z 2023-01-11T21:41:23.5593691Z [2023-01-11 21:24:59,779] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 40 2023-01-11T21:41:23.5593699Z 2023-01-11T21:41:23.5593901Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5593982Z import torch 2023-01-11T21:41:23.5594081Z import random 2023-01-11T21:41:23.5594239Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5594401Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5594407Z 2023-01-11T21:41:23.5594513Z aten = torch.ops.aten 2023-01-11T21:41:23.5594704Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5594953Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5594960Z 2023-01-11T21:41:23.5594967Z 2023-01-11T21:41:23.5595148Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5595430Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5595590Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5595729Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.5595867Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.5596007Z const float* __restrict__ in_ptr3, 2023-01-11T21:41:23.5596144Z const float* __restrict__ in_ptr4, 2023-01-11T21:41:23.5596282Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.5596391Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.5596514Z float* __restrict__ out_ptr2, 2023-01-11T21:41:23.5596727Z bool* __restrict__ out_ptr3) 2023-01-11T21:41:23.5596813Z { 2023-01-11T21:41:23.5596950Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5597031Z { 2023-01-11T21:41:23.5597134Z #pragma omp for 2023-01-11T21:41:23.5597231Z for(long i0=0; i0<1; i0+=1) 
2023-01-11T21:41:23.5597318Z { 2023-01-11T21:41:23.5597510Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.5597639Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.5597727Z } 2023-01-11T21:41:23.5597852Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.5597967Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:23.5598039Z { 2023-01-11T21:41:23.5598154Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.5598263Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:23.5598349Z } 2023-01-11T21:41:23.5598454Z #pragma omp for 2023-01-11T21:41:23.5598623Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.5598697Z { 2023-01-11T21:41:23.5598883Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.5599002Z tmp0.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.5599083Z } 2023-01-11T21:41:23.5599209Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.5599320Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:23.5599407Z { 2023-01-11T21:41:23.5599509Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:23.5599622Z out_ptr1[i0] = tmp0; 2023-01-11T21:41:23.5599704Z } 2023-01-11T21:41:23.5599830Z #pragma omp for collapse(2) 2023-01-11T21:41:23.5599947Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.5600030Z { 2023-01-11T21:41:23.5600146Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:23.5600218Z { 2023-01-11T21:41:23.5600338Z for(long i2=0; i2<32; i2+=1) 2023-01-11T21:41:23.5600429Z { 2023-01-11T21:41:23.5600640Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr2 + (8*i2) + (256*i1) + (2560*i0)); 2023-01-11T21:41:23.5600813Z auto tmp1 = at::vec::Vectorized(out_ptr0[i1]); 2023-01-11T21:41:23.5600984Z auto tmp3 = at::vec::Vectorized(out_ptr1[i1]); 2023-01-11T21:41:23.5601155Z auto tmp11 = at::vec::Vectorized(in_ptr3[i1]); 2023-01-11T21:41:23.5601339Z auto tmp13 = at::vec::Vectorized(in_ptr4[i1]); 2023-01-11T21:41:23.5601533Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.5601832Z auto tmp4 = at::vec::Vectorized(static_cast(1e-05)); 2023-01-11T21:41:23.5601957Z auto tmp5 = tmp3 + tmp4; 2023-01-11T21:41:23.5602079Z auto tmp6 = tmp5.sqrt(); 2023-01-11T21:41:23.5602221Z auto tmp7 = tmp6.reciprocal(); 2023-01-11T21:41:23.5602409Z auto tmp8 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.5602536Z auto tmp9 = tmp7 * tmp8; 2023-01-11T21:41:23.5602660Z auto tmp10 = tmp2 * tmp9; 2023-01-11T21:41:23.5602771Z auto tmp12 = tmp10 * tmp11; 2023-01-11T21:41:23.5602894Z auto tmp14 = tmp12 + tmp13; 2023-01-11T21:41:23.5603073Z auto tmp15 = at::vec::clamp_min(tmp14, decltype(tmp14)(0)); 2023-01-11T21:41:23.5603223Z tmp15.store(out_ptr2 + (8*i2) + (256*i1) + (2560*i0)); 2023-01-11T21:41:23.5603314Z } 2023-01-11T21:41:23.5603448Z #pragma omp simd simdlen(4) 2023-01-11T21:41:23.5603575Z for(long i2=256; i2<256; i2+=1) 2023-01-11T21:41:23.5603645Z { 2023-01-11T21:41:23.5603790Z auto tmp0 = in_ptr2[i2 + (256*i1) + (2560*i0)]; 2023-01-11T21:41:23.5603919Z auto tmp1 = out_ptr0[i1]; 2023-01-11T21:41:23.5604245Z auto tmp3 = out_ptr1[i1]; 2023-01-11T21:41:23.5604365Z auto tmp11 = in_ptr3[i1]; 2023-01-11T21:41:23.5604486Z auto tmp13 = in_ptr4[i1]; 2023-01-11T21:41:23.5604685Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.5604910Z auto tmp4 = static_cast(1e-05); 2023-01-11T21:41:23.5605015Z auto tmp5 = tmp3 + tmp4; 2023-01-11T21:41:23.5605157Z auto tmp6 = std::sqrt(tmp5); 2023-01-11T21:41:23.5605274Z auto tmp7 = 1 / tmp6; 2023-01-11T21:41:23.5605411Z auto tmp8 = static_cast(1); 2023-01-11T21:41:23.5605528Z auto tmp9 = tmp7 * tmp8; 2023-01-11T21:41:23.5605650Z auto tmp10 = tmp2 * tmp9; 2023-01-11T21:41:23.5605778Z auto 
tmp12 = tmp10 * tmp11; 2023-01-11T21:41:23.5605888Z auto tmp14 = tmp12 + tmp13; 2023-01-11T21:41:23.5606081Z auto tmp15 = tmp14 * (tmp14>0); 2023-01-11T21:41:23.5606230Z out_ptr2[i2 + (256*i1) + (2560*i0)] = tmp15; 2023-01-11T21:41:23.5606326Z } 2023-01-11T21:41:23.5606412Z } 2023-01-11T21:41:23.5606498Z } 2023-01-11T21:41:23.5606607Z #pragma omp for 2023-01-11T21:41:23.5606707Z for(long i0=0; i0<7680; i0+=1) 2023-01-11T21:41:23.5606787Z { 2023-01-11T21:41:23.5606871Z { 2023-01-11T21:41:23.5606958Z { 2023-01-11T21:41:23.5607083Z auto tmp0 = out_ptr2[i0]; 2023-01-11T21:41:23.5607223Z auto tmp1 = static_cast(0); 2023-01-11T21:41:23.5607343Z auto tmp2 = tmp0 <= tmp1; 2023-01-11T21:41:23.5607443Z out_ptr3[i0] = tmp2; 2023-01-11T21:41:23.5607526Z } 2023-01-11T21:41:23.5607614Z } 2023-01-11T21:41:23.5607701Z } 2023-01-11T21:41:23.5607786Z } 2023-01-11T21:41:23.5607873Z } 2023-01-11T21:41:23.5607987Z ''') 2023-01-11T21:41:23.5608000Z 2023-01-11T21:41:23.5608021Z 2023-01-11T21:41:23.5608135Z async_compile.wait(globals()) 2023-01-11T21:41:23.5608239Z del async_compile 2023-01-11T21:41:23.5608246Z 2023-01-11T21:41:23.5608346Z def call(args): 2023-01-11T21:41:23.5608545Z primals_1, primals_2, primals_3, primals_4, primals_5, primals_6 = args 2023-01-11T21:41:23.5608645Z args.clear() 2023-01-11T21:41:23.5611299Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5611577Z buf1 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5611874Z buf2 = empty_strided((3, 10, 16, 16), (2560, 256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5612182Z buf3 = empty_strided((3, 10, 16, 16), (2560, 256, 16, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5612665Z kernel_cpp_0(c_void_p(primals_3.data_ptr()), c_void_p(primals_4.data_ptr()), c_void_p(primals_6.data_ptr()), c_void_p(primals_1.data_ptr()), c_void_p(primals_2.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr())) 2023-01-11T21:41:23.5612773Z del primals_1 2023-01-11T21:41:23.5612880Z del primals_2 2023-01-11T21:41:23.5612982Z del primals_3 2023-01-11T21:41:23.5613079Z del primals_4 2023-01-11T21:41:23.5613245Z return (buf0, buf1, buf2, primals_6, buf0, buf1, buf3, ) 2023-01-11T21:41:23.5613360Z 2023-01-11T21:41:23.5613367Z 2023-01-11T21:41:23.5613470Z if __name__ == "__main__": 2023-01-11T21:41:23.5613618Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5613788Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5614092Z primals_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5614388Z primals_2 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5614674Z primals_3 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5615030Z primals_4 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5615295Z primals_5 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.5615629Z primals_6 = rand_strided((3, 10, 16, 16), (2560, 256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5615869Z print_performance(lambda: call([primals_1, primals_2, primals_3, primals_4, primals_5, primals_6])) 2023-01-11T21:41:23.5615876Z 2023-01-11T21:41:23.5615967Z ok (4.165s) 2023-01-11T21:41:23.5616653Z test_bernoulli1_cpu (__main__.CpuTests) ... 
/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
2023-01-11T21:41:23.5616934Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone()
2023-01-11T21:41:23.5617330Z [2023-01-11 21:24:59,821] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 41
2023-01-11T21:41:23.5617721Z [2023-01-11 21:25:01,468] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 41
2023-01-11T21:41:23.5617729Z 
2023-01-11T21:41:23.5617862Z from ctypes import c_void_p, c_long
2023-01-11T21:41:23.5617970Z import torch
2023-01-11T21:41:23.5618069Z import random
2023-01-11T21:41:23.5618215Z from torch import empty_strided, as_strided, device
2023-01-11T21:41:23.5618381Z from torch._inductor.codecache import AsyncCompile
2023-01-11T21:41:23.5618388Z 
2023-01-11T21:41:23.5618498Z aten = torch.ops.aten
2023-01-11T21:41:23.5618673Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride
2023-01-11T21:41:23.5618802Z async_compile = AsyncCompile()
2023-01-11T21:41:23.5618810Z 
2023-01-11T21:41:23.5618816Z 
2023-01-11T21:41:23.5619023Z kernel_cpp_0 = async_compile.cpp('''
2023-01-11T21:41:23.5619318Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h"
2023-01-11T21:41:23.5619477Z extern "C" void kernel(float* __restrict__ out_ptr0)
2023-01-11T21:41:23.5619546Z {
2023-01-11T21:41:23.5619692Z #pragma omp parallel num_threads(4)
2023-01-11T21:41:23.5619783Z {
2023-01-11T21:41:23.5619890Z #pragma omp for
2023-01-11T21:41:23.5620003Z for(long i0=0; i0<12; i0+=1)
2023-01-11T21:41:23.5620093Z {
2023-01-11T21:41:23.5620296Z auto tmp0 = at::vec::Vectorized<float>(static_cast<float>(0));
2023-01-11T21:41:23.5620406Z tmp0.store(out_ptr0 + 8*i0);
2023-01-11T21:41:23.5620497Z }
2023-01-11T21:41:23.5620641Z #pragma omp for simd simdlen(4)
2023-01-11T21:41:23.5620754Z for(long i0=96; i0<100; i0+=1)
2023-01-11T21:41:23.5620840Z {
2023-01-11T21:41:23.5620983Z auto tmp0 = static_cast<float>(0);
2023-01-11T21:41:23.5621103Z out_ptr0[i0] = tmp0;
2023-01-11T21:41:23.5621171Z }
2023-01-11T21:41:23.5621254Z }
2023-01-11T21:41:23.5621341Z }
2023-01-11T21:41:23.5621461Z ''')
2023-01-11T21:41:23.5621468Z 
2023-01-11T21:41:23.5621474Z 
2023-01-11T21:41:23.5621603Z async_compile.wait(globals())
2023-01-11T21:41:23.5621701Z del async_compile
2023-01-11T21:41:23.5621708Z 
2023-01-11T21:41:23.5621812Z def call(args):
2023-01-11T21:41:23.5621893Z arg0_1, = args
2023-01-11T21:41:23.5621990Z args.clear()
2023-01-11T21:41:23.5622289Z buf0 = empty_strided((100, ), (1, ), device='cpu', dtype=torch.float32)
2023-01-11T21:41:23.5622646Z kernel_cpp_0(c_void_p(buf0.data_ptr()))
2023-01-11T21:41:23.5622771Z aten.bernoulli_(buf0, )
2023-01-11T21:41:23.5622878Z return (buf0, buf0, )
2023-01-11T21:41:23.5622885Z 
2023-01-11T21:41:23.5622891Z 
2023-01-11T21:41:23.5622998Z if __name__ == "__main__":
2023-01-11T21:41:23.5623171Z from torch._dynamo.testing import rand_strided
2023-01-11T21:41:23.5623441Z from torch._inductor.utils import print_performance
2023-01-11T21:41:23.5623750Z arg0_1 = rand_strided((100, ), (1, ), device='cpu', dtype=torch.float32)
2023-01-11T21:41:23.5623903Z print_performance(lambda: call([arg0_1]))
2023-01-11T21:41:23.5623910Z 
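The UserWarning repeated throughout this run comes from the test helper at test/inductor/test_torchinductor.py:246, which still reads the element count through the deprecated tensor.storage(). A minimal sketch of the migration the warning asks for, assuming the helper only needs the flattened element count (UntypedStorage is sized in bytes, so the count is recovered with element_size(); the helper name and shapes below are illustrative, not from the test file):

import torch

def clone_as_flat_buffer(x: torch.Tensor) -> torch.Tensor:
    # Deprecated form: TypedStorage.size() returns the element count directly,
    #   n = x.storage().size()
    # Suggested replacement: UntypedStorage reports bytes, so divide by the
    # per-element size to recover the same count before calling as_strided.
    n = x.untyped_storage().nbytes() // x.element_size()
    return torch.as_strided(x, (n,), (1,), 0).clone()

buffer = clone_as_flat_buffer(torch.randn(4, 5))
assert buffer.shape == (20,)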
2023-01-11T21:41:23.5624008Z ok (1.688s)
2023-01-11T21:41:23.5624717Z test_bernoulli2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
2023-01-11T21:41:23.5624894Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone()
2023-01-11T21:41:23.5625291Z [2023-01-11 21:25:01,495] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 42
2023-01-11T21:41:23.5625762Z [2023-01-11 21:25:01,495] torch._inductor.lowering: [WARNING] using triton random, expect difference from eager
2023-01-11T21:41:23.5626157Z [2023-01-11 21:25:03,039] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 42
2023-01-11T21:41:23.5626165Z 
2023-01-11T21:41:23.5626306Z from ctypes import c_void_p, c_long
2023-01-11T21:41:23.5626383Z import torch
2023-01-11T21:41:23.5626479Z import random
2023-01-11T21:41:23.5626634Z from torch import empty_strided, as_strided, device
2023-01-11T21:41:23.5626799Z from torch._inductor.codecache import AsyncCompile
2023-01-11T21:41:23.5626806Z 
2023-01-11T21:41:23.5626913Z aten = torch.ops.aten
2023-01-11T21:41:23.5627101Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride
2023-01-11T21:41:23.5627226Z async_compile = AsyncCompile()
2023-01-11T21:41:23.5627575Z seed_cpu_None = None # 9130db9322feaa41c28986790b86d7dd047e77339ff46fce775dbaa5929b26ce
2023-01-11T21:41:23.5627585Z 
2023-01-11T21:41:23.5627606Z 
2023-01-11T21:41:23.5627808Z kernel_cpp_0 = async_compile.cpp('''
2023-01-11T21:41:23.5628107Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h"
2023-01-11T21:41:23.5628287Z extern "C" void kernel(const long* __restrict__ seed0,
2023-01-11T21:41:23.5628434Z const float* __restrict__ in_ptr1,
2023-01-11T21:41:23.5628578Z bool* __restrict__ out_ptr0)
2023-01-11T21:41:23.5628672Z {
2023-01-11T21:41:23.5628815Z #pragma omp parallel num_threads(4)
2023-01-11T21:41:23.5628883Z {
2023-01-11T21:41:23.5628993Z #pragma omp for
2023-01-11T21:41:23.5629109Z for(long i0=0; i0<8; i0+=1)
2023-01-11T21:41:23.5629203Z {
2023-01-11T21:41:23.5629291Z {
2023-01-11T21:41:23.5629377Z {
2023-01-11T21:41:23.5629487Z auto tmp0 = seed0[0];
2023-01-11T21:41:23.5629628Z auto tmp5 = in_ptr1[i0];
2023-01-11T21:41:23.5629774Z auto tmp1 = static_cast<long>(65535);
2023-01-11T21:41:23.5629910Z auto tmp2 = tmp0 ^ tmp1;
2023-01-11T21:41:23.5630048Z auto tmp3 = static_cast<long>(i0);
2023-01-11T21:41:23.5630243Z auto tmp4 = static_cast<float>(normalized_rand_cpu(tmp2, tmp3));;
2023-01-11T21:41:23.5630362Z auto tmp6 = tmp4 < tmp5;
2023-01-11T21:41:23.5630477Z out_ptr0[i0] = tmp6;
2023-01-11T21:41:23.5630554Z }
2023-01-11T21:41:23.5630642Z }
2023-01-11T21:41:23.5630731Z }
2023-01-11T21:41:23.5630813Z }
2023-01-11T21:41:23.5630898Z }
2023-01-11T21:41:23.5631019Z ''')
2023-01-11T21:41:23.5631025Z 
2023-01-11T21:41:23.5631031Z 
2023-01-11T21:41:23.5631152Z async_compile.wait(globals())
2023-01-11T21:41:23.5631238Z del async_compile
2023-01-11T21:41:23.5631245Z 
2023-01-11T21:41:23.5631341Z def call(args):
2023-01-11T21:41:23.5631437Z arg0_1, = args
2023-01-11T21:41:23.5631541Z args.clear()
2023-01-11T21:41:23.5631729Z torch.randint(2**31, size=(), dtype=torch.int64, out=seed_cpu_None)
2023-01-11T21:41:23.5632068Z buf0 = 
empty_strided((8, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5632303Z kernel_cpp_0(c_void_p(seed_cpu_None.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.5632378Z del arg0_1 2023-01-11T21:41:23.5632477Z return (buf0, ) 2023-01-11T21:41:23.5632484Z 2023-01-11T21:41:23.5632490Z 2023-01-11T21:41:23.5632594Z if __name__ == "__main__": 2023-01-11T21:41:23.5632756Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5632933Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5633201Z seed_cpu_None = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.5633479Z arg0_1 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5633627Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5633634Z 2023-01-11T21:41:23.5633708Z ok (1.571s) 2023-01-11T21:41:23.5634566Z test_bitwise2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5634756Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5635181Z [2023-01-11 21:25:03,063] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 43 2023-01-11T21:41:23.5635592Z [2023-01-11 21:25:04,592] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 43 2023-01-11T21:41:23.5635600Z 2023-01-11T21:41:23.5635741Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5635845Z import torch 2023-01-11T21:41:23.5635946Z import random 2023-01-11T21:41:23.5636120Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5636287Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5636294Z 2023-01-11T21:41:23.5636408Z aten = torch.ops.aten 2023-01-11T21:41:23.5636615Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5636754Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5636762Z 2023-01-11T21:41:23.5636769Z 2023-01-11T21:41:23.5636980Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5637290Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5637463Z extern "C" void kernel(const bool* __restrict__ in_ptr0, 2023-01-11T21:41:23.5637608Z const bool* __restrict__ in_ptr1, 2023-01-11T21:41:23.5637728Z bool* __restrict__ out_ptr0, 2023-01-11T21:41:23.5637869Z bool* __restrict__ out_ptr1, 2023-01-11T21:41:23.5638003Z bool* __restrict__ out_ptr2, 2023-01-11T21:41:23.5638146Z bool* __restrict__ out_ptr3) 2023-01-11T21:41:23.5638231Z { 2023-01-11T21:41:23.5638375Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5638461Z { 2023-01-11T21:41:23.5638555Z #pragma omp for 2023-01-11T21:41:23.5638676Z for(long i0=0; i0<40; i0+=1) 2023-01-11T21:41:23.5638765Z { 2023-01-11T21:41:23.5638852Z { 2023-01-11T21:41:23.5638945Z { 2023-01-11T21:41:23.5639074Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.5639204Z auto tmp2 = in_ptr1[i0]; 2023-01-11T21:41:23.5639319Z auto tmp1 = tmp0 == 0; 2023-01-11T21:41:23.5639446Z auto tmp3 = tmp0 | tmp2; 2023-01-11T21:41:23.5639573Z auto tmp4 = tmp0 ^ tmp2; 2023-01-11T21:41:23.5639699Z auto tmp5 = tmp0 & tmp2; 
2023-01-11T21:41:23.5639818Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.5639939Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.5640147Z out_ptr2[i0] = tmp4; 2023-01-11T21:41:23.5640251Z out_ptr3[i0] = tmp5; 2023-01-11T21:41:23.5640346Z } 2023-01-11T21:41:23.5640435Z } 2023-01-11T21:41:23.5640526Z } 2023-01-11T21:41:23.5640613Z } 2023-01-11T21:41:23.5640695Z } 2023-01-11T21:41:23.5640807Z ''') 2023-01-11T21:41:23.5640833Z 2023-01-11T21:41:23.5640839Z 2023-01-11T21:41:23.5640953Z async_compile.wait(globals()) 2023-01-11T21:41:23.5641056Z del async_compile 2023-01-11T21:41:23.5641064Z 2023-01-11T21:41:23.5641166Z def call(args): 2023-01-11T21:41:23.5641274Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.5641377Z args.clear() 2023-01-11T21:41:23.5641692Z buf0 = empty_strided((2, 20), (20, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5641987Z buf1 = empty_strided((2, 20), (20, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5642265Z buf2 = empty_strided((2, 20), (20, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5642635Z buf3 = empty_strided((2, 20), (20, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5642977Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr())) 2023-01-11T21:41:23.5643074Z del arg0_1 2023-01-11T21:41:23.5643172Z del arg1_1 2023-01-11T21:41:23.5643297Z return (buf0, buf1, buf2, buf3, ) 2023-01-11T21:41:23.5643305Z 2023-01-11T21:41:23.5643310Z 2023-01-11T21:41:23.5643419Z if __name__ == "__main__": 2023-01-11T21:41:23.5643585Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5643750Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5644051Z arg0_1 = rand_strided((2, 20), (20, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5644329Z arg1_1 = rand_strided((2, 20), (20, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5644506Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.5644518Z 2023-01-11T21:41:23.5644615Z ok (1.554s) 2023-01-11T21:41:23.5645330Z test_bitwise_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5645517Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5645913Z [2023-01-11 21:25:04,615] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 44 2023-01-11T21:41:23.5646315Z [2023-01-11 21:25:06,126] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 44 2023-01-11T21:41:23.5646322Z 2023-01-11T21:41:23.5646458Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5646549Z import torch 2023-01-11T21:41:23.5646653Z import random 2023-01-11T21:41:23.5646819Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5646997Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5647005Z 2023-01-11T21:41:23.5647114Z aten = torch.ops.aten 2023-01-11T21:41:23.5647313Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5647444Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5647452Z 2023-01-11T21:41:23.5647458Z 2023-01-11T21:41:23.5647665Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5647959Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5648129Z extern "C" void kernel(const int* __restrict__ in_ptr0, 2023-01-11T21:41:23.5648280Z const int* __restrict__ in_ptr1, 2023-01-11T21:41:23.5648418Z int* __restrict__ out_ptr0, 2023-01-11T21:41:23.5648560Z int* __restrict__ out_ptr1, 2023-01-11T21:41:23.5648772Z int* __restrict__ out_ptr2, 2023-01-11T21:41:23.5648898Z int* __restrict__ out_ptr3) 2023-01-11T21:41:23.5648969Z { 2023-01-11T21:41:23.5649105Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5649192Z { 2023-01-11T21:41:23.5649300Z #pragma omp for 2023-01-11T21:41:23.5649425Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.5649512Z { 2023-01-11T21:41:23.5649602Z { 2023-01-11T21:41:23.5649682Z { 2023-01-11T21:41:23.5649814Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.5649939Z auto tmp2 = in_ptr1[i0]; 2023-01-11T21:41:23.5650054Z auto tmp1 = ~tmp0; 2023-01-11T21:41:23.5650175Z auto tmp3 = tmp0 | tmp2; 2023-01-11T21:41:23.5650296Z auto tmp4 = tmp0 ^ tmp2; 2023-01-11T21:41:23.5650418Z auto tmp5 = tmp0 & tmp2; 2023-01-11T21:41:23.5650601Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.5650718Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.5650833Z out_ptr2[i0] = tmp4; 2023-01-11T21:41:23.5650941Z out_ptr3[i0] = tmp5; 2023-01-11T21:41:23.5651029Z } 2023-01-11T21:41:23.5651115Z } 2023-01-11T21:41:23.5651198Z } 2023-01-11T21:41:23.5651265Z } 2023-01-11T21:41:23.5651351Z } 2023-01-11T21:41:23.5651476Z ''') 2023-01-11T21:41:23.5651484Z 2023-01-11T21:41:23.5651489Z 2023-01-11T21:41:23.5651615Z async_compile.wait(globals()) 2023-01-11T21:41:23.5651715Z del async_compile 2023-01-11T21:41:23.5651721Z 2023-01-11T21:41:23.5651823Z def call(args): 2023-01-11T21:41:23.5651921Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.5652005Z args.clear() 2023-01-11T21:41:23.5652286Z buf0 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:23.5652551Z buf1 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:23.5652826Z buf2 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:23.5653079Z buf3 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:23.5653411Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), 
c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr())) 2023-01-11T21:41:23.5653503Z del arg0_1 2023-01-11T21:41:23.5653595Z del arg1_1 2023-01-11T21:41:23.5653703Z return (buf0, buf1, buf2, buf3, ) 2023-01-11T21:41:23.5653710Z 2023-01-11T21:41:23.5653716Z 2023-01-11T21:41:23.5653827Z if __name__ == "__main__": 2023-01-11T21:41:23.5653991Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5654166Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5654433Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:23.5654701Z arg1_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:23.5654862Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.5654869Z 2023-01-11T21:41:23.5654961Z ok (1.534s) 2023-01-11T21:41:23.5655575Z test_bmm1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5655743Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5656134Z [2023-01-11 21:25:06,154] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 45 2023-01-11T21:41:23.5656518Z [2023-01-11 21:25:07,750] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 45 2023-01-11T21:41:23.5657215Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5657392Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5657777Z [2023-01-11 21:25:07,774] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 46 2023-01-11T21:41:23.5658168Z [2023-01-11 21:25:09,387] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 46 2023-01-11T21:41:23.5658177Z 2023-01-11T21:41:23.5658309Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5658405Z import torch 2023-01-11T21:41:23.5658487Z import random 2023-01-11T21:41:23.5658654Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5658894Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5658903Z 2023-01-11T21:41:23.5659016Z aten = torch.ops.aten 2023-01-11T21:41:23.5659197Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5659324Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5659331Z 2023-01-11T21:41:23.5659338Z 2023-01-11T21:41:23.5659538Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5659829Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5659980Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5660395Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.5660553Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.5660686Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.5660768Z { 2023-01-11T21:41:23.5660912Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5661000Z { 2023-01-11T21:41:23.5661097Z #pragma omp for 2023-01-11T21:41:23.5661211Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.5661296Z { 2023-01-11T21:41:23.5661494Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.5661690Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.5661807Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5661932Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.5662021Z } 2023-01-11T21:41:23.5662144Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.5662262Z for(long i0=128; i0<128; i0+=1) 2023-01-11T21:41:23.5662466Z { 2023-01-11T21:41:23.5662608Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.5662749Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.5662867Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5662985Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.5663067Z } 2023-01-11T21:41:23.5663181Z #pragma omp for 2023-01-11T21:41:23.5663298Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.5663384Z { 2023-01-11T21:41:23.5663577Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.5663762Z auto tmp1 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.5663881Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5663991Z tmp2.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.5664076Z } 2023-01-11T21:41:23.5664206Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.5664326Z for(long i0=128; i0<128; i0+=1) 2023-01-11T21:41:23.5664415Z { 2023-01-11T21:41:23.5664532Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:23.5664670Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.5664776Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5664881Z out_ptr1[i0] = tmp2; 2023-01-11T21:41:23.5665080Z } 2023-01-11T21:41:23.5665166Z } 2023-01-11T21:41:23.5665252Z } 2023-01-11T21:41:23.5665389Z ''') 2023-01-11T21:41:23.5665398Z 2023-01-11T21:41:23.5665404Z 2023-01-11T21:41:23.5665592Z 
kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.5665858Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5666022Z extern "C" void kernel(float* __restrict__ in_out_ptr0) 2023-01-11T21:41:23.5666105Z { 2023-01-11T21:41:23.5666241Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5666328Z { 2023-01-11T21:41:23.5666434Z #pragma omp for 2023-01-11T21:41:23.5666550Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.5666628Z { 2023-01-11T21:41:23.5666818Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.5667011Z auto tmp1 = at::vec::Vectorized(static_cast(3)); 2023-01-11T21:41:23.5667126Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5667339Z tmp2.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.5667430Z } 2023-01-11T21:41:23.5667560Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.5667658Z for(long i0=128; i0<128; i0+=1) 2023-01-11T21:41:23.5667743Z { 2023-01-11T21:41:23.5667870Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:23.5668008Z auto tmp1 = static_cast(3); 2023-01-11T21:41:23.5668132Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5668247Z in_out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.5668337Z } 2023-01-11T21:41:23.5668411Z } 2023-01-11T21:41:23.5668496Z } 2023-01-11T21:41:23.5668615Z ''') 2023-01-11T21:41:23.5668624Z 2023-01-11T21:41:23.5668630Z 2023-01-11T21:41:23.5668756Z async_compile.wait(globals()) 2023-01-11T21:41:23.5668854Z del async_compile 2023-01-11T21:41:23.5668860Z 2023-01-11T21:41:23.5668959Z def call(args): 2023-01-11T21:41:23.5669061Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.5669149Z args.clear() 2023-01-11T21:41:23.5669447Z buf0 = empty_strided((2, 8, 8), (64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5669587Z aten.bmm.out(arg0_1, arg1_1, out=buf0) 2023-01-11T21:41:23.5669959Z buf1 = empty_strided((2, 8, 8), (64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5670366Z buf2 = empty_strided((2, 8, 8), (64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5670724Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.5670861Z del arg0_1 2023-01-11T21:41:23.5679488Z del arg1_1 2023-01-11T21:41:23.5679921Z buf3 = empty_strided((2, 8, 8), (64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5680110Z aten.bmm.out(buf1, buf2, out=buf3) 2023-01-11T21:41:23.5680244Z del buf1 2023-01-11T21:41:23.5680372Z del buf2 2023-01-11T21:41:23.5680539Z buf4 = buf3; del buf3 # reuse 2023-01-11T21:41:23.5680750Z kernel_cpp_1(c_void_p(buf4.data_ptr())) 2023-01-11T21:41:23.5680899Z return (buf0, buf4, ) 2023-01-11T21:41:23.5680908Z 2023-01-11T21:41:23.5680916Z 2023-01-11T21:41:23.5681051Z if __name__ == "__main__": 2023-01-11T21:41:23.5681277Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5681527Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5681924Z arg0_1 = rand_strided((2, 8, 8), (64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5682301Z arg1_1 = rand_strided((2, 8, 8), (64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5682519Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.5682526Z 2023-01-11T21:41:23.5682534Z 2023-01-11T21:41:23.5682715Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5682846Z import torch 2023-01-11T21:41:23.5682956Z import random 2023-01-11T21:41:23.5683183Z from torch import empty_strided, 
as_strided, device 2023-01-11T21:41:23.5683524Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5683533Z 2023-01-11T21:41:23.5683691Z aten = torch.ops.aten 2023-01-11T21:41:23.5683959Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5684146Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5684154Z 2023-01-11T21:41:23.5684162Z 2023-01-11T21:41:23.5684432Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5684837Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5685055Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5685248Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.5685431Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.5685618Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.5685743Z { 2023-01-11T21:41:23.5685927Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5686048Z { 2023-01-11T21:41:23.5686257Z #pragma omp for 2023-01-11T21:41:23.5686414Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.5686539Z { 2023-01-11T21:41:23.5686793Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.5687044Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.5687210Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5687381Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.5687491Z } 2023-01-11T21:41:23.5687682Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.5687844Z for(long i0=128; i0<128; i0+=1) 2023-01-11T21:41:23.5687970Z { 2023-01-11T21:41:23.5688136Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.5688321Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.5688489Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5688629Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.5688758Z } 2023-01-11T21:41:23.5688917Z #pragma omp for 2023-01-11T21:41:23.5689078Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:23.5689201Z { 2023-01-11T21:41:23.5689459Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.5689717Z auto tmp1 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.5689882Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5690040Z tmp2.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.5690166Z } 2023-01-11T21:41:23.5690354Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.5690516Z for(long i0=80; i0<80; i0+=1) 2023-01-11T21:41:23.5690643Z { 2023-01-11T21:41:23.5690812Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:23.5690991Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.5691158Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5691313Z out_ptr1[i0] = tmp2; 2023-01-11T21:41:23.5691433Z } 2023-01-11T21:41:23.5691564Z } 2023-01-11T21:41:23.5691678Z } 2023-01-11T21:41:23.5691844Z ''') 2023-01-11T21:41:23.5691853Z 2023-01-11T21:41:23.5691860Z 2023-01-11T21:41:23.5692107Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.5692512Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5692744Z extern "C" void kernel(float* __restrict__ in_out_ptr0) 2023-01-11T21:41:23.5692865Z { 2023-01-11T21:41:23.5693059Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5693184Z { 2023-01-11T21:41:23.5693338Z #pragma omp for 2023-01-11T21:41:23.5693482Z for(long i0=0; i0<20; i0+=1) 2023-01-11T21:41:23.5693604Z { 2023-01-11T21:41:23.5693862Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.5694117Z auto tmp1 = at::vec::Vectorized(static_cast(3)); 2023-01-11T21:41:23.5694279Z auto tmp2 = 
tmp0 + tmp1; 2023-01-11T21:41:23.5694550Z tmp2.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.5713371Z } 2023-01-11T21:41:23.5713623Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.5713832Z for(long i0=160; i0<160; i0+=1) 2023-01-11T21:41:23.5713964Z { 2023-01-11T21:41:23.5714131Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:23.5714324Z auto tmp1 = static_cast(3); 2023-01-11T21:41:23.5714475Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5714631Z in_out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.5714747Z } 2023-01-11T21:41:23.5714854Z } 2023-01-11T21:41:23.5714976Z } 2023-01-11T21:41:23.5715150Z ''') 2023-01-11T21:41:23.5715159Z 2023-01-11T21:41:23.5715166Z 2023-01-11T21:41:23.5715336Z async_compile.wait(globals()) 2023-01-11T21:41:23.5715478Z del async_compile 2023-01-11T21:41:23.5715486Z 2023-01-11T21:41:23.5715628Z def call(args): 2023-01-11T21:41:23.5715764Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.5715889Z args.clear() 2023-01-11T21:41:23.5716421Z buf0 = empty_strided((1, 16, 10), (160, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5716731Z aten.mm.out(as_strided(arg0_1, (16, 8), (8, 1)), as_strided(arg1_1, (8, 10), (10, 1)), out=as_strided(buf0, (16, 10), (10, 1))) 2023-01-11T21:41:23.5717113Z buf1 = empty_strided((1, 16, 8), (128, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5717495Z buf2 = empty_strided((1, 8, 10), (80, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5717855Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.5717962Z del arg0_1 2023-01-11T21:41:23.5718075Z del arg1_1 2023-01-11T21:41:23.5718444Z buf3 = empty_strided((1, 16, 10), (160, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5718740Z aten.mm.out(as_strided(buf1, (16, 8), (8, 1)), as_strided(buf2, (8, 10), (10, 1)), out=as_strided(buf3, (16, 10), (10, 1))) 2023-01-11T21:41:23.5718876Z del buf1 2023-01-11T21:41:23.5719005Z del buf2 2023-01-11T21:41:23.5719174Z buf4 = buf3; del buf3 # reuse 2023-01-11T21:41:23.5719363Z kernel_cpp_1(c_void_p(buf4.data_ptr())) 2023-01-11T21:41:23.5719513Z return (buf0, buf4, ) 2023-01-11T21:41:23.5719521Z 2023-01-11T21:41:23.5719529Z 2023-01-11T21:41:23.5719679Z if __name__ == "__main__": 2023-01-11T21:41:23.5719887Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5720133Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5720525Z arg0_1 = rand_strided((1, 16, 8), (128, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5720905Z arg1_1 = rand_strided((1, 8, 10), (80, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5721132Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.5721140Z 2023-01-11T21:41:23.5721274Z ok (3.262s) 2023-01-11T21:41:23.5722218Z test_bmm2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5722462Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5722960Z [2023-01-11 21:25:09,412] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 47 2023-01-11T21:41:23.5723442Z [2023-01-11 21:25:09,417] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 47 2023-01-11T21:41:23.5723450Z 2023-01-11T21:41:23.5723634Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5723778Z import torch 2023-01-11T21:41:23.5723922Z import random 2023-01-11T21:41:23.5724156Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5724404Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5724513Z 2023-01-11T21:41:23.5724676Z aten = torch.ops.aten 2023-01-11T21:41:23.5724940Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5725110Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5725118Z 2023-01-11T21:41:23.5725125Z 2023-01-11T21:41:23.5725296Z async_compile.wait(globals()) 2023-01-11T21:41:23.5725444Z del async_compile 2023-01-11T21:41:23.5725452Z 2023-01-11T21:41:23.5725598Z def call(args): 2023-01-11T21:41:23.5725746Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.5725890Z args.clear() 2023-01-11T21:41:23.5726276Z buf0 = empty_strided((1, 8, 8), (64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5726560Z aten.mm.out(as_strided(arg0_1, (8, 8), (1, 8)), as_strided(arg1_1, (8, 8), (8, 1)), out=as_strided(buf0, (8, 8), (8, 1))) 2023-01-11T21:41:23.5726698Z del arg0_1 2023-01-11T21:41:23.5726826Z del arg1_1 2023-01-11T21:41:23.5726968Z return (buf0, ) 2023-01-11T21:41:23.5727045Z 2023-01-11T21:41:23.5727054Z 2023-01-11T21:41:23.5727199Z if __name__ == "__main__": 2023-01-11T21:41:23.5727418Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5727657Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5728051Z arg0_1 = rand_strided((1, 8, 8), (64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5728411Z arg1_1 = rand_strided((1, 8, 8), (64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5728630Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.5728638Z 2023-01-11T21:41:23.5728774Z ok (0.029s) 2023-01-11T21:41:23.5729692Z test_bool_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5729939Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5730440Z [2023-01-11 21:25:09,450] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 48 2023-01-11T21:41:23.5730948Z [2023-01-11 21:25:10,999] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 48 2023-01-11T21:41:23.5730956Z 2023-01-11T21:41:23.5731140Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5731281Z import torch 2023-01-11T21:41:23.5731425Z import random 2023-01-11T21:41:23.5731642Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5731883Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5731891Z 2023-01-11T21:41:23.5732048Z aten = torch.ops.aten 2023-01-11T21:41:23.5732307Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5732490Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5732503Z 2023-01-11T21:41:23.5732513Z 2023-01-11T21:41:23.5732777Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5733179Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5733409Z extern "C" void kernel(const bool* __restrict__ in_ptr0, 2023-01-11T21:41:23.5733584Z const bool* __restrict__ in_ptr1, 2023-01-11T21:41:23.5733770Z bool* __restrict__ out_ptr0, 2023-01-11T21:41:23.5733943Z bool* __restrict__ out_ptr1, 2023-01-11T21:41:23.5734116Z bool* __restrict__ out_ptr2, 2023-01-11T21:41:23.5734299Z bool* __restrict__ out_ptr3, 2023-01-11T21:41:23.5734482Z bool* __restrict__ out_ptr4, 2023-01-11T21:41:23.5734657Z bool* __restrict__ out_ptr5, 2023-01-11T21:41:23.5734813Z bool* __restrict__ out_ptr6, 2023-01-11T21:41:23.5734995Z bool* __restrict__ out_ptr7, 2023-01-11T21:41:23.5735249Z bool* __restrict__ out_ptr8) 2023-01-11T21:41:23.5735377Z { 2023-01-11T21:41:23.5735573Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5735697Z { 2023-01-11T21:41:23.5735851Z #pragma omp for 2023-01-11T21:41:23.5735991Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:23.5736119Z { 2023-01-11T21:41:23.5736240Z { 2023-01-11T21:41:23.5736365Z { 2023-01-11T21:41:23.5736535Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.5736705Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.5736867Z auto tmp2 = tmp0 || tmp1; 2023-01-11T21:41:23.5737028Z auto tmp3 = tmp0 && tmp1; 2023-01-11T21:41:23.5737184Z auto tmp4 = tmp0 & tmp1; 2023-01-11T21:41:23.5737350Z auto tmp5 = tmp0 | tmp1; 2023-01-11T21:41:23.5737506Z auto tmp6 = tmp0 ^ tmp1; 2023-01-11T21:41:23.5737726Z auto tmp7 = tmp0 == 0; 2023-01-11T21:41:23.5737897Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.5738047Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.5738187Z out_ptr2[i0] = tmp4; 2023-01-11T21:41:23.5738345Z out_ptr3[i0] = tmp5; 2023-01-11T21:41:23.5738504Z out_ptr4[i0] = tmp6; 2023-01-11T21:41:23.5738663Z out_ptr5[i0] = tmp3; 2023-01-11T21:41:23.5738823Z out_ptr6[i0] = tmp2; 2023-01-11T21:41:23.5738979Z out_ptr7[i0] = tmp7; 2023-01-11T21:41:23.5739139Z out_ptr8[i0] = tmp1; 2023-01-11T21:41:23.5739249Z } 2023-01-11T21:41:23.5739370Z } 2023-01-11T21:41:23.5739495Z } 2023-01-11T21:41:23.5739619Z } 2023-01-11T21:41:23.5739736Z } 2023-01-11T21:41:23.5739908Z ''') 2023-01-11T21:41:23.5739917Z 2023-01-11T21:41:23.5739924Z 2023-01-11T21:41:23.5740111Z async_compile.wait(globals()) 2023-01-11T21:41:23.5740242Z del async_compile 2023-01-11T21:41:23.5740249Z 
2023-01-11T21:41:23.5740392Z def call(args): 2023-01-11T21:41:23.5740537Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.5740673Z args.clear() 2023-01-11T21:41:23.5741033Z buf0 = empty_strided((4, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5741381Z buf1 = empty_strided((4, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5741733Z buf2 = empty_strided((4, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5742067Z buf3 = empty_strided((4, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5742579Z buf4 = empty_strided((4, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5742931Z buf5 = empty_strided((4, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5743265Z buf6 = empty_strided((4, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5743610Z buf7 = empty_strided((4, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5743963Z buf8 = empty_strided((4, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5744638Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(buf5.data_ptr()), c_void_p(buf6.data_ptr()), c_void_p(buf7.data_ptr()), c_void_p(buf8.data_ptr())) 2023-01-11T21:41:23.5744778Z del arg0_1 2023-01-11T21:41:23.5744920Z del arg1_1 2023-01-11T21:41:23.5745132Z return (buf0, buf1, buf2, buf3, buf4, buf5, buf6, buf7, buf8, ) 2023-01-11T21:41:23.5745140Z 2023-01-11T21:41:23.5745147Z 2023-01-11T21:41:23.5745297Z if __name__ == "__main__": 2023-01-11T21:41:23.5745533Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5745783Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5746159Z arg0_1 = rand_strided((4, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5746654Z arg1_1 = rand_strided((4, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.5746893Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.5746902Z 2023-01-11T21:41:23.5747037Z ok (1.582s) 2023-01-11T21:41:23.5800143Z test_both_scalars_cpu (__main__.CpuTests) ... 
[2023-01-11 21:25:11,061] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 49 2023-01-11T21:41:23.5800718Z [2023-01-11 21:25:12,581] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 49 2023-01-11T21:41:23.5800729Z 2023-01-11T21:41:23.5800908Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5801046Z import torch 2023-01-11T21:41:23.5801192Z import random 2023-01-11T21:41:23.5801398Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5801618Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5801626Z 2023-01-11T21:41:23.5801744Z aten = torch.ops.aten 2023-01-11T21:41:23.5802108Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5802248Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5802255Z 2023-01-11T21:41:23.5802260Z 2023-01-11T21:41:23.5802479Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5802791Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5802986Z extern "C" void kernel(float* __restrict__ out_ptr0, 2023-01-11T21:41:23.5803136Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.5803294Z float* __restrict__ out_ptr2, 2023-01-11T21:41:23.5803444Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.5803588Z float* __restrict__ out_ptr4, 2023-01-11T21:41:23.5803744Z float* __restrict__ out_ptr5) 2023-01-11T21:41:23.5803845Z { 2023-01-11T21:41:23.5803968Z { 2023-01-11T21:41:23.5804070Z { 2023-01-11T21:41:23.5804266Z auto tmp0 = static_cast(4); 2023-01-11T21:41:23.5804435Z auto tmp1 = static_cast(3.3); 2023-01-11T21:41:23.5804559Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5804703Z out_ptr0[0] = tmp2; 2023-01-11T21:41:23.5804824Z } 2023-01-11T21:41:23.5804942Z } 2023-01-11T21:41:23.5805050Z { 2023-01-11T21:41:23.5805174Z { 2023-01-11T21:41:23.5805342Z auto tmp0 = static_cast(3.3); 2023-01-11T21:41:23.5805525Z auto tmp1 = static_cast(4); 2023-01-11T21:41:23.5805678Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5805826Z out_ptr1[0] = tmp2; 2023-01-11T21:41:23.5805936Z } 2023-01-11T21:41:23.5806046Z } 2023-01-11T21:41:23.5806153Z { 2023-01-11T21:41:23.5806246Z { 2023-01-11T21:41:23.5806421Z auto tmp0 = static_cast(4); 2023-01-11T21:41:23.5806601Z auto tmp1 = static_cast(3.3); 2023-01-11T21:41:23.5806852Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.5807000Z out_ptr2[0] = tmp2; 2023-01-11T21:41:23.5807124Z } 2023-01-11T21:41:23.5807231Z } 2023-01-11T21:41:23.5807324Z { 2023-01-11T21:41:23.5807447Z { 2023-01-11T21:41:23.5807625Z auto tmp0 = static_cast(3.3); 2023-01-11T21:41:23.5807790Z auto tmp1 = static_cast(4); 2023-01-11T21:41:23.5808005Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.5808149Z out_ptr3[0] = tmp2; 2023-01-11T21:41:23.5808245Z } 2023-01-11T21:41:23.5808350Z } 2023-01-11T21:41:23.5808451Z { 2023-01-11T21:41:23.5808556Z { 2023-01-11T21:41:23.5808727Z auto tmp0 = static_cast(4); 2023-01-11T21:41:23.5808900Z auto tmp1 = static_cast(3.3); 2023-01-11T21:41:23.5809055Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.5809177Z out_ptr4[0] = tmp2; 2023-01-11T21:41:23.5809291Z } 2023-01-11T21:41:23.5809397Z } 2023-01-11T21:41:23.5809627Z { 2023-01-11T21:41:23.5809733Z { 2023-01-11T21:41:23.5809910Z auto tmp0 = static_cast(3.3); 2023-01-11T21:41:23.5810087Z auto tmp1 = static_cast(4); 2023-01-11T21:41:23.5810232Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.5810375Z out_ptr5[0] = tmp2; 2023-01-11T21:41:23.5810493Z } 2023-01-11T21:41:23.5810603Z } 
2023-01-11T21:41:23.5810723Z } 2023-01-11T21:41:23.5810869Z ''') 2023-01-11T21:41:23.5810879Z 2023-01-11T21:41:23.5810887Z 2023-01-11T21:41:23.5811044Z async_compile.wait(globals()) 2023-01-11T21:41:23.5811161Z del async_compile 2023-01-11T21:41:23.5811169Z 2023-01-11T21:41:23.5811299Z def call(args): 2023-01-11T21:41:23.5811618Z buf0 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5811914Z buf1 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5812205Z buf2 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5812594Z buf3 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5812917Z buf4 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5813225Z buf5 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5813590Z kernel_cpp_0(c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(buf5.data_ptr())) 2023-01-11T21:41:23.5813777Z return (buf0, buf1, buf2, buf3, buf4, buf5, ) 2023-01-11T21:41:23.5813786Z 2023-01-11T21:41:23.5813793Z 2023-01-11T21:41:23.5813935Z if __name__ == "__main__": 2023-01-11T21:41:23.5814154Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5814373Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5814550Z print_performance(lambda: call([])) 2023-01-11T21:41:23.5814558Z 2023-01-11T21:41:23.5814679Z ok (1.582s) 2023-01-11T21:41:23.5815596Z test_cat_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5815823Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5816287Z [2023-01-11 21:25:12,633] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 50 2023-01-11T21:41:23.5816768Z [2023-01-11 21:25:14,241] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 50 2023-01-11T21:41:23.5817579Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5817815Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5818270Z [2023-01-11 21:25:14,291] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 51 2023-01-11T21:41:23.5818749Z [2023-01-11 21:25:15,975] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 51 2023-01-11T21:41:23.5818758Z 2023-01-11T21:41:23.5818926Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5819058Z import torch 2023-01-11T21:41:23.5819187Z import random 2023-01-11T21:41:23.5819389Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5819612Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5819621Z 2023-01-11T21:41:23.5819751Z aten = torch.ops.aten 2023-01-11T21:41:23.5819979Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5820229Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5820236Z 2023-01-11T21:41:23.5820243Z 2023-01-11T21:41:23.5820501Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5820876Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5821099Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5821274Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.5821433Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.5821600Z float* __restrict__ out_ptr2, 2023-01-11T21:41:23.5821769Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.5821937Z float* __restrict__ out_ptr4, 2023-01-11T21:41:23.5822105Z double* __restrict__ out_ptr5, 2023-01-11T21:41:23.5822296Z double* __restrict__ out_ptr6) 2023-01-11T21:41:23.5822553Z { 2023-01-11T21:41:23.5822808Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5822913Z { 2023-01-11T21:41:23.5823055Z #pragma omp for 2023-01-11T21:41:23.5823186Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.5823295Z { 2023-01-11T21:41:23.5823438Z for(long i1=0; i1<2; i1+=1) 2023-01-11T21:41:23.5823548Z { 2023-01-11T21:41:23.5823784Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (16*i0)); 2023-01-11T21:41:23.5824005Z auto tmp1 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.5824155Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5824332Z tmp0.store(out_ptr0 + (8*i1) + (36*i0)); 2023-01-11T21:41:23.5824495Z tmp2.store(out_ptr1 + (8*i1) + (36*i0)); 2023-01-11T21:41:23.5824614Z } 2023-01-11T21:41:23.5824773Z #pragma omp simd simdlen(4) 2023-01-11T21:41:23.5824910Z for(long i1=16; i1<16; i1+=1) 2023-01-11T21:41:23.5825033Z { 2023-01-11T21:41:23.5825209Z auto tmp0 = in_ptr0[i1 + (16*i0)]; 2023-01-11T21:41:23.5825384Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.5825538Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5825699Z out_ptr0[i1 + (36*i0)] = tmp0; 2023-01-11T21:41:23.5825858Z out_ptr1[i1 + (36*i0)] = tmp2; 2023-01-11T21:41:23.5825954Z } 2023-01-11T21:41:23.5826061Z } 2023-01-11T21:41:23.5826202Z #pragma omp for 2023-01-11T21:41:23.5826335Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.5826449Z { 2023-01-11T21:41:23.5826590Z #pragma GCC ivdep 2023-01-11T21:41:23.5826745Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:23.5826856Z { 2023-01-11T21:41:23.5826975Z { 2023-01-11T21:41:23.5827103Z { 2023-01-11T21:41:23.5827289Z auto tmp0 = in_ptr0[i1 + (16*i0)]; 2023-01-11T21:41:23.5827489Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.5827649Z auto tmp2 = tmp0 + 
tmp1; 2023-01-11T21:41:23.5827809Z out_ptr2[i1 + (36*i0)] = tmp2; 2023-01-11T21:41:23.5827915Z } 2023-01-11T21:41:23.5828031Z } 2023-01-11T21:41:23.5828144Z } 2023-01-11T21:41:23.5828259Z } 2023-01-11T21:41:23.5828393Z #pragma omp for 2023-01-11T21:41:23.5828531Z for(long i0=0; i0<128; i0+=1) 2023-01-11T21:41:23.5828627Z { 2023-01-11T21:41:23.5828739Z { 2023-01-11T21:41:23.5828863Z { 2023-01-11T21:41:23.5829025Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.5829201Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.5829349Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.5829535Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:23.5829780Z out_ptr3[i0] = tmp2; 2023-01-11T21:41:23.5829926Z out_ptr4[i0] = tmp2; 2023-01-11T21:41:23.5830072Z out_ptr5[i0] = tmp3; 2023-01-11T21:41:23.5830215Z out_ptr6[i0] = tmp3; 2023-01-11T21:41:23.5830336Z } 2023-01-11T21:41:23.5830451Z } 2023-01-11T21:41:23.5830565Z } 2023-01-11T21:41:23.5830674Z } 2023-01-11T21:41:23.5830784Z } 2023-01-11T21:41:23.5830944Z ''') 2023-01-11T21:41:23.5830953Z 2023-01-11T21:41:23.5830960Z 2023-01-11T21:41:23.5831132Z async_compile.wait(globals()) 2023-01-11T21:41:23.5831252Z del async_compile 2023-01-11T21:41:23.5831260Z 2023-01-11T21:41:23.5831391Z def call(args): 2023-01-11T21:41:23.5831522Z arg0_1, = args 2023-01-11T21:41:23.5831641Z args.clear() 2023-01-11T21:41:23.5832006Z buf3 = empty_strided((8, 36), (36, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5832180Z buf0 = as_strided(buf3, (8, 16), (36, 1)) # alias 2023-01-11T21:41:23.5832413Z buf2 = as_strided(buf3, (8, 16), (36, 1), 20) # alias 2023-01-11T21:41:23.5832598Z buf1 = as_strided(buf3, (8, 4), (36, 1), 16) # alias 2023-01-11T21:41:23.5832959Z buf6 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5833135Z buf4 = as_strided(buf6, (8, 16), (16, 1)) # alias 2023-01-11T21:41:23.5833311Z buf5 = as_strided(buf6, (8, 16), (16, 1), 128) # alias 2023-01-11T21:41:23.5833638Z buf9 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.5833877Z buf7 = as_strided(buf9, (8, 16), (16, 1)) # alias 2023-01-11T21:41:23.5834059Z buf8 = as_strided(buf9, (8, 16), (16, 1), 128) # alias 2023-01-11T21:41:23.5834512Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(buf5.data_ptr()), c_void_p(buf7.data_ptr()), c_void_p(buf8.data_ptr())) 2023-01-11T21:41:23.5834634Z del arg0_1 2023-01-11T21:41:23.5834781Z return (buf3, buf6, buf9, ) 2023-01-11T21:41:23.5834789Z 2023-01-11T21:41:23.5834795Z 2023-01-11T21:41:23.5834922Z if __name__ == "__main__": 2023-01-11T21:41:23.5835121Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5835324Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5835680Z arg0_1 = rand_strided((8, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5835870Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5835878Z 2023-01-11T21:41:23.5835885Z 2023-01-11T21:41:23.5836047Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5836185Z import torch 2023-01-11T21:41:23.5836303Z import random 2023-01-11T21:41:23.5836508Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5836735Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5836742Z 2023-01-11T21:41:23.5836849Z aten = torch.ops.aten 2023-01-11T21:41:23.5837081Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 
2023-01-11T21:41:23.5837243Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5837249Z 2023-01-11T21:41:23.5837255Z 2023-01-11T21:41:23.5837484Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5837837Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5876773Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5877053Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.5877255Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.5877447Z const double* __restrict__ in_ptr3, 2023-01-11T21:41:23.5877639Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.5877828Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.5878011Z float* __restrict__ out_ptr2, 2023-01-11T21:41:23.5878196Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.5878519Z float* __restrict__ out_ptr4, 2023-01-11T21:41:23.5878706Z float* __restrict__ out_ptr5, 2023-01-11T21:41:23.5878900Z double* __restrict__ out_ptr6, 2023-01-11T21:41:23.5879075Z double* __restrict__ out_ptr7, 2023-01-11T21:41:23.5879260Z float* __restrict__ out_ptr8, 2023-01-11T21:41:23.5879446Z double* __restrict__ out_ptr9) 2023-01-11T21:41:23.5879572Z { 2023-01-11T21:41:23.5879769Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5879898Z { 2023-01-11T21:41:23.5880064Z #pragma omp for collapse(2) 2023-01-11T21:41:23.5880225Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.5880356Z { 2023-01-11T21:41:23.5880521Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:23.5880651Z { 2023-01-11T21:41:23.5880813Z #pragma GCC ivdep 2023-01-11T21:41:23.5881059Z for(long i2=0; i2<16; i2+=1) 2023-01-11T21:41:23.5881183Z { 2023-01-11T21:41:23.5881322Z { 2023-01-11T21:41:23.5881459Z { 2023-01-11T21:41:23.5881664Z auto tmp0 = in_ptr0[i0 + (3*i2) + (48*i1)]; 2023-01-11T21:41:23.5881875Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.5882058Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.5882263Z auto tmp3 = static_cast(2); 2023-01-11T21:41:23.5882432Z auto tmp4 = tmp0 + tmp3; 2023-01-11T21:41:23.5882627Z out_ptr0[i2 + (48*i1) + (144*i0)] = tmp0; 2023-01-11T21:41:23.5882817Z out_ptr1[i2 + (48*i1) + (144*i0)] = tmp2; 2023-01-11T21:41:23.5883005Z out_ptr2[i2 + (48*i1) + (144*i0)] = tmp4; 2023-01-11T21:41:23.5883142Z } 2023-01-11T21:41:23.5883278Z } 2023-01-11T21:41:23.5883413Z } 2023-01-11T21:41:23.5883544Z } 2023-01-11T21:41:23.5883661Z } 2023-01-11T21:41:23.5883844Z #pragma omp for collapse(2) 2023-01-11T21:41:23.5884007Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.5884137Z { 2023-01-11T21:41:23.5884304Z for(long i1=0; i1<144; i1+=1) 2023-01-11T21:41:23.5884436Z { 2023-01-11T21:41:23.5884553Z { 2023-01-11T21:41:23.5884688Z { 2023-01-11T21:41:23.5884883Z auto tmp0 = in_ptr1[i1 + (144*i0)]; 2023-01-11T21:41:23.5885059Z out_ptr3[i0 + (3*i1)] = tmp0; 2023-01-11T21:41:23.5885192Z } 2023-01-11T21:41:23.5885324Z } 2023-01-11T21:41:23.5885452Z } 2023-01-11T21:41:23.5885568Z } 2023-01-11T21:41:23.5885829Z #pragma omp for collapse(2) 2023-01-11T21:41:23.5886029Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.5886179Z { 2023-01-11T21:41:23.5886556Z for(long i1=0; i1<48; i1+=1) 2023-01-11T21:41:23.5886725Z { 2023-01-11T21:41:23.5886890Z { 2023-01-11T21:41:23.5887057Z { 2023-01-11T21:41:23.5887290Z auto tmp0 = in_ptr0[i0 + (3*i1)]; 2023-01-11T21:41:23.5887528Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.5887692Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.5887908Z out_ptr4[i1 + (48*i0)] = tmp2; 2023-01-11T21:41:23.5888091Z } 2023-01-11T21:41:23.5888296Z } 
2023-01-11T21:41:23.5888460Z } 2023-01-11T21:41:23.5888626Z } 2023-01-11T21:41:23.5888816Z #pragma omp for 2023-01-11T21:41:23.5888966Z for(long i0=0; i0<144; i0+=1) 2023-01-11T21:41:23.5889126Z { 2023-01-11T21:41:23.5889328Z { 2023-01-11T21:41:23.5889509Z { 2023-01-11T21:41:23.5889791Z auto tmp0 = out_ptr4[i0]; 2023-01-11T21:41:23.5890072Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.5890223Z out_ptr5[i0] = tmp0; 2023-01-11T21:41:23.5890421Z out_ptr6[i0] = tmp1; 2023-01-11T21:41:23.5890617Z out_ptr7[i0] = tmp1; 2023-01-11T21:41:23.5890783Z } 2023-01-11T21:41:23.5890948Z } 2023-01-11T21:41:23.5891124Z } 2023-01-11T21:41:23.5891348Z #pragma omp for collapse(3) 2023-01-11T21:41:23.5891497Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.5891660Z { 2023-01-11T21:41:23.5891894Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:23.5892059Z { 2023-01-11T21:41:23.5892268Z for(long i2=0; i2<48; i2+=1) 2023-01-11T21:41:23.5892437Z { 2023-01-11T21:41:23.5892648Z { 2023-01-11T21:41:23.5892772Z { 2023-01-11T21:41:23.5893078Z auto tmp0 = in_ptr2[i2 + (48*i1) + (144*i0)]; 2023-01-11T21:41:23.5893310Z out_ptr8[i1 + (3*i2) + (144*i0)] = tmp0; 2023-01-11T21:41:23.5893480Z } 2023-01-11T21:41:23.5893680Z } 2023-01-11T21:41:23.5893851Z } 2023-01-11T21:41:23.5894029Z } 2023-01-11T21:41:23.5894145Z } 2023-01-11T21:41:23.5894362Z #pragma omp for collapse(3) 2023-01-11T21:41:23.5894562Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.5894729Z { 2023-01-11T21:41:23.5894922Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:23.5895093Z { 2023-01-11T21:41:23.5895248Z for(long i2=0; i2<48; i2+=1) 2023-01-11T21:41:23.5895445Z { 2023-01-11T21:41:23.5895622Z { 2023-01-11T21:41:23.5895793Z { 2023-01-11T21:41:23.5896067Z auto tmp0 = in_ptr3[i2 + (48*i1) + (144*i0)]; 2023-01-11T21:41:23.5896303Z out_ptr9[i1 + (3*i2) + (144*i0)] = tmp0; 2023-01-11T21:41:23.5896478Z } 2023-01-11T21:41:23.5896597Z } 2023-01-11T21:41:23.5896763Z } 2023-01-11T21:41:23.5896926Z } 2023-01-11T21:41:23.5897133Z } 2023-01-11T21:41:23.5897297Z } 2023-01-11T21:41:23.5897457Z } 2023-01-11T21:41:23.5897683Z ''') 2023-01-11T21:41:23.5897693Z 2023-01-11T21:41:23.5897700Z 2023-01-11T21:41:23.5897870Z async_compile.wait(globals()) 2023-01-11T21:41:23.5898057Z del async_compile 2023-01-11T21:41:23.5898065Z 2023-01-11T21:41:23.5898247Z def call(args): 2023-01-11T21:41:23.5898427Z arg0_1, = args 2023-01-11T21:41:23.5898618Z args.clear() 2023-01-11T21:41:23.5899099Z buf3 = empty_strided((1, 3, 3, 48), (432, 144, 48, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5899344Z buf0 = as_strided(buf3, (1, 3, 3, 16), (432, 144, 48, 1)) # alias 2023-01-11T21:41:23.5899590Z buf1 = as_strided(buf3, (1, 3, 3, 16), (432, 144, 48, 1), 16) # alias 2023-01-11T21:41:23.5899785Z buf2 = as_strided(buf3, (1, 3, 3, 16), (432, 144, 48, 1), 32) # alias 2023-01-11T21:41:23.5900262Z buf4 = empty_strided((1, 3, 3, 48), (432, 1, 144, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5900689Z buf7 = empty_strided((2, 3, 3, 16), (144, 48, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5900927Z buf5 = as_strided(buf7, (1, 3, 3, 16), (144, 48, 16, 1)) # alias 2023-01-11T21:41:23.5901181Z buf6 = as_strided(buf7, (1, 3, 3, 16), (144, 48, 16, 1), 144) # alias 2023-01-11T21:41:23.5901618Z buf11 = empty_strided((2, 3, 3, 16), (144, 48, 16, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.5901905Z buf9 = as_strided(buf11, (1, 3, 3, 16), (144, 48, 16, 1)) # alias 2023-01-11T21:41:23.5902150Z buf10 = as_strided(buf11, (1, 3, 3, 16), (144, 48, 16, 1), 144) # 
alias 2023-01-11T21:41:23.5902673Z buf8 = empty_strided((2, 3, 3, 16), (144, 1, 48, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5903215Z buf12 = empty_strided((2, 3, 3, 16), (144, 1, 48, 3), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.5904068Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf7.data_ptr()), c_void_p(buf11.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(buf5.data_ptr()), c_void_p(buf6.data_ptr()), c_void_p(buf9.data_ptr()), c_void_p(buf10.data_ptr()), c_void_p(buf8.data_ptr()), c_void_p(buf12.data_ptr())) 2023-01-11T21:41:23.5904247Z del arg0_1 2023-01-11T21:41:23.5904437Z del buf0 2023-01-11T21:41:23.5904609Z del buf1 2023-01-11T21:41:23.5904782Z del buf10 2023-01-11T21:41:23.5905023Z del buf11 2023-01-11T21:41:23.5905144Z del buf2 2023-01-11T21:41:23.5905320Z del buf3 2023-01-11T21:41:23.5905526Z del buf5 2023-01-11T21:41:23.5905697Z del buf6 2023-01-11T21:41:23.5905880Z del buf7 2023-01-11T21:41:23.5906125Z del buf9 2023-01-11T21:41:23.5906286Z return (buf4, buf8, buf12, ) 2023-01-11T21:41:23.5906351Z 2023-01-11T21:41:23.5906358Z 2023-01-11T21:41:23.5906500Z if __name__ == "__main__": 2023-01-11T21:41:23.5906760Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5907081Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5907521Z arg0_1 = rand_strided((1, 3, 3, 16), (144, 1, 48, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5907782Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.5907791Z 2023-01-11T21:41:23.5907963Z ok (3.394s) 2023-01-11T21:41:23.5908954Z test_cat_extern_kernel_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5909241Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5909773Z [2023-01-11 21:25:16,035] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 52 2023-01-11T21:41:23.5910263Z [2023-01-11 21:25:17,554] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 52 2023-01-11T21:41:23.5910324Z 2023-01-11T21:41:23.5910502Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5910709Z import torch 2023-01-11T21:41:23.5910938Z import random 2023-01-11T21:41:23.5911203Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5911478Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5911487Z 2023-01-11T21:41:23.5911685Z aten = torch.ops.aten 2023-01-11T21:41:23.5911977Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5912149Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5912164Z 2023-01-11T21:41:23.5912171Z 2023-01-11T21:41:23.5912470Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5912889Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5913198Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5913426Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.5913585Z { 2023-01-11T21:41:23.5913890Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5914005Z { 2023-01-11T21:41:23.5914204Z #pragma omp for 2023-01-11T21:41:23.5914406Z for(long i0=0; i0<256; i0+=1) 2023-01-11T21:41:23.5914571Z { 2023-01-11T21:41:23.5914784Z for(long i1=0; i1<32; i1+=1) 2023-01-11T21:41:23.5915027Z { 2023-01-11T21:41:23.5915351Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (256*i0)); 2023-01-11T21:41:23.5915584Z tmp0.store(out_ptr0 + (8*i1) + (512*i0)); 2023-01-11T21:41:23.5915778Z } 2023-01-11T21:41:23.5915997Z #pragma omp simd simdlen(4) 2023-01-11T21:41:23.5916202Z for(long i1=256; i1<256; i1+=1) 2023-01-11T21:41:23.5916369Z { 2023-01-11T21:41:23.5916602Z auto tmp0 = in_ptr0[i1 + (256*i0)]; 2023-01-11T21:41:23.5916818Z out_ptr0[i1 + (512*i0)] = tmp0; 2023-01-11T21:41:23.5916934Z } 2023-01-11T21:41:23.5917131Z } 2023-01-11T21:41:23.5917292Z } 2023-01-11T21:41:23.5917455Z } 2023-01-11T21:41:23.5917668Z ''') 2023-01-11T21:41:23.5917677Z 2023-01-11T21:41:23.5917684Z 2023-01-11T21:41:23.5917906Z async_compile.wait(globals()) 2023-01-11T21:41:23.5918104Z del async_compile 2023-01-11T21:41:23.5918112Z 2023-01-11T21:41:23.5918292Z def call(args): 2023-01-11T21:41:23.5918454Z arg0_1, arg1_1, arg2_1, arg3_1 = args 2023-01-11T21:41:23.5918675Z args.clear() 2023-01-11T21:41:23.5919182Z buf0 = empty_strided((256, 1600), (1600, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5919407Z aten.mm.out(arg1_1, arg2_1, out=buf0) 2023-01-11T21:41:23.5919582Z del arg1_1 2023-01-11T21:41:23.5919770Z del arg2_1 2023-01-11T21:41:23.5920186Z buf3 = empty_strided((256, 512), (512, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5920372Z buf1 = as_strided(buf3, (256, 256), (512, 1)) # alias 2023-01-11T21:41:23.5920634Z aten.mm.out(as_strided(buf0, (256, 100), (1600, 1)), arg3_1, out=buf1) 2023-01-11T21:41:23.5920806Z del arg3_1 2023-01-11T21:41:23.5920976Z del buf0 2023-01-11T21:41:23.5921246Z buf2 = as_strided(buf3, (256, 256), (512, 1), 256) # alias 2023-01-11T21:41:23.5921535Z 
kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.5921722Z del arg0_1 2023-01-11T21:41:23.5921855Z return (buf3, ) 2023-01-11T21:41:23.5921863Z 2023-01-11T21:41:23.5921870Z 2023-01-11T21:41:23.5922063Z if __name__ == "__main__": 2023-01-11T21:41:23.5922325Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.5922603Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.5923061Z arg0_1 = rand_strided((256, 256), (256, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5923478Z arg1_1 = rand_strided((256, 1024), (1024, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5923926Z arg2_1 = rand_strided((1024, 1600), (1600, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5924350Z arg3_1 = rand_strided((100, 256), (256, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.5924584Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1])) 2023-01-11T21:41:23.5924645Z 2023-01-11T21:41:23.5924769Z ok (1.622s) 2023-01-11T21:41:23.5925751Z test_cat_upcasting_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.5926034Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.5926564Z [2023-01-11 21:25:17,620] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 53 2023-01-11T21:41:23.5927103Z [2023-01-11 21:25:19,154] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 53 2023-01-11T21:41:23.5927111Z 2023-01-11T21:41:23.5927335Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.5927569Z import torch 2023-01-11T21:41:23.5927753Z import random 2023-01-11T21:41:23.5927970Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.5928246Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.5928254Z 2023-01-11T21:41:23.5928450Z aten = torch.ops.aten 2023-01-11T21:41:23.5928741Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.5950597Z async_compile = AsyncCompile() 2023-01-11T21:41:23.5950618Z 2023-01-11T21:41:23.5950626Z 2023-01-11T21:41:23.5951033Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.5951485Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.5951768Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.5952012Z const half* __restrict__ in_ptr1, 2023-01-11T21:41:23.5952216Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.5952445Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.5952612Z { 2023-01-11T21:41:23.5952924Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.5953088Z { 2023-01-11T21:41:23.5953272Z #pragma omp for 2023-01-11T21:41:23.6013999Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.6014187Z { 2023-01-11T21:41:23.6014386Z for(long i1=0; i1<2; i1+=1) 2023-01-11T21:41:23.6014705Z { 2023-01-11T21:41:23.6015052Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (16*i0)); 2023-01-11T21:41:23.6015279Z tmp0.store(out_ptr0 + (8*i1) + (36*i0)); 2023-01-11T21:41:23.6015425Z } 2023-01-11T21:41:23.6015642Z #pragma omp simd simdlen(4) 
2023-01-11T21:41:23.6015828Z for(long i1=16; i1<16; i1+=1) 2023-01-11T21:41:23.6015963Z { 2023-01-11T21:41:23.6016172Z auto tmp0 = in_ptr0[i1 + (16*i0)]; 2023-01-11T21:41:23.6016344Z out_ptr0[i1 + (36*i0)] = tmp0; 2023-01-11T21:41:23.6016483Z } 2023-01-11T21:41:23.6016621Z } 2023-01-11T21:41:23.6016789Z #pragma omp for 2023-01-11T21:41:23.6016969Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.6017106Z { 2023-01-11T21:41:23.6017283Z #pragma GCC ivdep 2023-01-11T21:41:23.6017453Z for(long i1=0; i1<20; i1+=1) 2023-01-11T21:41:23.6017607Z { 2023-01-11T21:41:23.6017759Z { 2023-01-11T21:41:23.6017907Z { 2023-01-11T21:41:23.6018182Z auto tmp0 = static_cast(in_ptr1[i1 + (20*i0)]); 2023-01-11T21:41:23.6018434Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.6018637Z out_ptr1[i1 + (36*i0)] = tmp1; 2023-01-11T21:41:23.6018769Z } 2023-01-11T21:41:23.6018914Z } 2023-01-11T21:41:23.6019054Z } 2023-01-11T21:41:23.6019189Z } 2023-01-11T21:41:23.6019332Z } 2023-01-11T21:41:23.6019468Z } 2023-01-11T21:41:23.6019682Z ''') 2023-01-11T21:41:23.6019694Z 2023-01-11T21:41:23.6019703Z 2023-01-11T21:41:23.6019894Z async_compile.wait(globals()) 2023-01-11T21:41:23.6020058Z del async_compile 2023-01-11T21:41:23.6020067Z 2023-01-11T21:41:23.6020233Z def call(args): 2023-01-11T21:41:23.6020401Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.6020562Z args.clear() 2023-01-11T21:41:23.6021032Z buf2 = empty_strided((8, 36), (36, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6021260Z buf0 = as_strided(buf2, (8, 16), (36, 1)) # alias 2023-01-11T21:41:23.6021493Z buf1 = as_strided(buf2, (8, 20), (36, 1), 16) # alias 2023-01-11T21:41:23.6021922Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.6022077Z del arg0_1 2023-01-11T21:41:23.6022225Z del arg1_1 2023-01-11T21:41:23.6022535Z return (buf2, ) 2023-01-11T21:41:23.6022547Z 2023-01-11T21:41:23.6022556Z 2023-01-11T21:41:23.6022735Z if __name__ == "__main__": 2023-01-11T21:41:23.6022997Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6023268Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6023721Z arg0_1 = rand_strided((8, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6024151Z arg1_1 = rand_strided((8, 20), (20, 1), device='cpu', dtype=torch.float16) 2023-01-11T21:41:23.6024526Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.6024535Z 2023-01-11T21:41:23.6024677Z ok (1.557s) 2023-01-11T21:41:23.6025897Z test_cauchy_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6026184Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6026814Z [2023-01-11 21:25:19,177] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 54 2023-01-11T21:41:23.6027454Z [2023-01-11 21:25:20,707] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 54 2023-01-11T21:41:23.6027551Z 2023-01-11T21:41:23.6027772Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6027936Z import torch 2023-01-11T21:41:23.6028076Z import random 2023-01-11T21:41:23.6028359Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6028642Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6028651Z 2023-01-11T21:41:23.6028832Z aten = torch.ops.aten 2023-01-11T21:41:23.6029146Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6029358Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6029368Z 2023-01-11T21:41:23.6029377Z 2023-01-11T21:41:23.6029700Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6030207Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6030466Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6030677Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.6030907Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6031043Z { 2023-01-11T21:41:23.6031177Z { 2023-01-11T21:41:23.6031320Z { 2023-01-11T21:41:23.6031498Z float tmp6 = 0; 2023-01-11T21:41:23.6031746Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6031876Z { 2023-01-11T21:41:23.6032108Z #pragma omp for reduction(+:tmp6) 2023-01-11T21:41:23.6032301Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.6032449Z { 2023-01-11T21:41:23.6032642Z for(long i1=0; i1<32; i1+=1) 2023-01-11T21:41:23.6032786Z { 2023-01-11T21:41:23.6032936Z { 2023-01-11T21:41:23.6033125Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.6033341Z auto tmp1 = in_ptr1[i1]; 2023-01-11T21:41:23.6033677Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.6033936Z auto tmp3 = 1 / tmp2; 2023-01-11T21:41:23.6034194Z auto tmp4 = static_cast(1); 2023-01-11T21:41:23.6034396Z auto tmp5 = tmp3 * tmp4; 2023-01-11T21:41:23.6034579Z tmp6 += tmp5; 2023-01-11T21:41:23.6034716Z } 2023-01-11T21:41:23.6034862Z } 2023-01-11T21:41:23.6035011Z } 2023-01-11T21:41:23.6035152Z } 2023-01-11T21:41:23.6035320Z out_ptr0[0] = tmp6; 2023-01-11T21:41:23.6035459Z } 2023-01-11T21:41:23.6035592Z } 2023-01-11T21:41:23.6035709Z } 2023-01-11T21:41:23.6035883Z ''') 2023-01-11T21:41:23.6035894Z 2023-01-11T21:41:23.6035903Z 2023-01-11T21:41:23.6036118Z async_compile.wait(globals()) 2023-01-11T21:41:23.6036290Z del async_compile 2023-01-11T21:41:23.6036300Z 2023-01-11T21:41:23.6036464Z def call(args): 2023-01-11T21:41:23.6036634Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.6036800Z args.clear() 2023-01-11T21:41:23.6037315Z buf0 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6037702Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6037855Z del arg0_1 2023-01-11T21:41:23.6038007Z del arg1_1 2023-01-11T21:41:23.6038173Z return (buf0, ) 2023-01-11T21:41:23.6038182Z 2023-01-11T21:41:23.6038191Z 2023-01-11T21:41:23.6038361Z if __name__ == "__main__": 2023-01-11T21:41:23.6038627Z from torch._dynamo.testing import rand_strided 
2023-01-11T21:41:23.6038926Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6039363Z arg0_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6039794Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6040067Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.6040077Z 2023-01-11T21:41:23.6040224Z ok (1.552s) 2023-01-11T21:41:23.6041481Z test_clamp_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6041773Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6042405Z [2023-01-11 21:25:20,738] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 55 2023-01-11T21:41:23.6043044Z [2023-01-11 21:25:22,268] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 55 2023-01-11T21:41:23.6043055Z 2023-01-11T21:41:23.6043268Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6043429Z import torch 2023-01-11T21:41:23.6043570Z import random 2023-01-11T21:41:23.6043842Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6044138Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6044148Z 2023-01-11T21:41:23.6044321Z aten = torch.ops.aten 2023-01-11T21:41:23.6044630Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6044833Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6044843Z 2023-01-11T21:41:23.6044850Z 2023-01-11T21:41:23.6045152Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6045651Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6045904Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6046138Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.6046354Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.6046563Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.6046779Z float* __restrict__ out_ptr2) 2023-01-11T21:41:23.6046919Z { 2023-01-11T21:41:23.6047143Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6047272Z { 2023-01-11T21:41:23.6047439Z #pragma omp for 2023-01-11T21:41:23.6047611Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.6047746Z { 2023-01-11T21:41:23.6048051Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.6048342Z auto tmp5 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.6048855Z auto tmp1 = at::vec::Vectorized(static_cast(-0.10000000149011612)); 2023-01-11T21:41:23.6049093Z auto tmp2 = at::vec::maximum(tmp0, tmp1); 2023-01-11T21:41:23.6049406Z auto tmp3 = at::vec::Vectorized(static_cast(0.10000000149011612)); 2023-01-11T21:41:23.6049648Z auto tmp4 = at::vec::minimum(tmp2, tmp3); 2023-01-11T21:41:23.6049958Z auto tmp6 = at::vec::Vectorized(static_cast(0.0)); 2023-01-11T21:41:23.6050285Z auto tmp7 = at::vec::maximum(tmp5, tmp6); 2023-01-11T21:41:23.6050470Z auto tmp8 = tmp0 + tmp5; 2023-01-11T21:41:23.6050702Z auto tmp9 = at::vec::minimum(tmp8, tmp6); 2023-01-11T21:41:23.6050903Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.6051092Z tmp7.store(out_ptr1 + 8*i0); 
2023-01-11T21:41:23.6051269Z tmp9.store(out_ptr2 + 8*i0); 2023-01-11T21:41:23.6051405Z } 2023-01-11T21:41:23.6051618Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.6051799Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.6051937Z { 2023-01-11T21:41:23.6052120Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.6052306Z auto tmp5 = in_ptr1[i0]; 2023-01-11T21:41:23.6052689Z auto tmp1 = static_cast(-0.10000000149011612); 2023-01-11T21:41:23.6052951Z auto tmp2 = (tmp1 != tmp1) ? tmp1 : std::max(tmp0, tmp1); 2023-01-11T21:41:23.6053193Z auto tmp3 = static_cast(0.10000000149011612); 2023-01-11T21:41:23.6053524Z auto tmp4 = (tmp3 != tmp3) ? tmp3 : std::min(tmp2, tmp3); 2023-01-11T21:41:23.6053751Z auto tmp6 = static_cast(0.0); 2023-01-11T21:41:23.6054015Z auto tmp7 = (tmp6 != tmp6) ? tmp6 : std::max(tmp5, tmp6); 2023-01-11T21:41:23.6054193Z auto tmp8 = tmp0 + tmp5; 2023-01-11T21:41:23.6054439Z auto tmp9 = (tmp6 != tmp6) ? tmp6 : std::min(tmp8, tmp6); 2023-01-11T21:41:23.6054601Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:23.6054775Z out_ptr1[i0] = tmp7; 2023-01-11T21:41:23.6054952Z out_ptr2[i0] = tmp9; 2023-01-11T21:41:23.6055093Z } 2023-01-11T21:41:23.6055227Z } 2023-01-11T21:41:23.6055355Z } 2023-01-11T21:41:23.6055509Z ''') 2023-01-11T21:41:23.6055534Z 2023-01-11T21:41:23.6055543Z 2023-01-11T21:41:23.6055723Z async_compile.wait(globals()) 2023-01-11T21:41:23.6055881Z del async_compile 2023-01-11T21:41:23.6055891Z 2023-01-11T21:41:23.6056048Z def call(args): 2023-01-11T21:41:23.6056218Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.6056367Z args.clear() 2023-01-11T21:41:23.6056806Z buf0 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6057219Z buf1 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6057625Z buf2 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6058095Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.6058243Z del arg0_1 2023-01-11T21:41:23.6058389Z del arg1_1 2023-01-11T21:41:23.6058570Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:23.6058581Z 2023-01-11T21:41:23.6058590Z 2023-01-11T21:41:23.6058751Z if __name__ == "__main__": 2023-01-11T21:41:23.6059003Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6059277Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6059696Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6060119Z arg1_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6060362Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.6060372Z 2023-01-11T21:41:23.6060507Z ok (1.562s) 2023-01-11T21:41:23.6061631Z test_clone_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6061906Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6062730Z [2023-01-11 21:25:22,300] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 56 2023-01-11T21:41:23.6063479Z [2023-01-11 21:25:23,815] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 56 2023-01-11T21:41:23.6063491Z 2023-01-11T21:41:23.6063707Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6063862Z import torch 2023-01-11T21:41:23.6064011Z import random 2023-01-11T21:41:23.6064273Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6064553Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6064563Z 2023-01-11T21:41:23.6064742Z aten = torch.ops.aten 2023-01-11T21:41:23.6065050Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6065254Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6065263Z 2023-01-11T21:41:23.6065271Z 2023-01-11T21:41:23.6065583Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6066091Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6066453Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6066679Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.6066903Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.6067035Z { 2023-01-11T21:41:23.6067262Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6067397Z { 2023-01-11T21:41:23.6067561Z #pragma omp for 2023-01-11T21:41:23.6067721Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.6067858Z { 2023-01-11T21:41:23.6068155Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.6068460Z auto tmp1 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.6068647Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.6068953Z auto tmp3 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.6069150Z auto tmp4 = tmp0 + tmp3; 2023-01-11T21:41:23.6069336Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.6069547Z tmp4.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.6069685Z } 2023-01-11T21:41:23.6069907Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.6070085Z for(long i0=256; i0<256; i0+=1) 2023-01-11T21:41:23.6070222Z { 2023-01-11T21:41:23.6070417Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.6070618Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.6070800Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.6071023Z auto tmp3 = static_cast(1); 2023-01-11T21:41:23.6071197Z auto tmp4 = tmp0 + tmp3; 2023-01-11T21:41:23.6071360Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.6071469Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:23.6071549Z } 2023-01-11T21:41:23.6071611Z } 2023-01-11T21:41:23.6071689Z } 2023-01-11T21:41:23.6071812Z ''') 2023-01-11T21:41:23.6071821Z 2023-01-11T21:41:23.6071826Z 2023-01-11T21:41:23.6071941Z async_compile.wait(globals()) 2023-01-11T21:41:23.6072049Z del async_compile 2023-01-11T21:41:23.6072055Z 2023-01-11T21:41:23.6072154Z def call(args): 2023-01-11T21:41:23.6072262Z arg0_1, = args 2023-01-11T21:41:23.6072353Z args.clear() 2023-01-11T21:41:23.6072653Z buf0 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6072939Z buf1 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6073181Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), 
c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.6073282Z del arg0_1 2023-01-11T21:41:23.6073387Z return (buf0, buf1, ) 2023-01-11T21:41:23.6073396Z 2023-01-11T21:41:23.6073402Z 2023-01-11T21:41:23.6073514Z if __name__ == "__main__": 2023-01-11T21:41:23.6073680Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6073917Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6074220Z arg0_1 = rand_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6074490Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.6074498Z 2023-01-11T21:41:23.6074589Z ok (1.547s) 2023-01-11T21:41:23.6075279Z test_constant_pad_1d_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6075453Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6075860Z [2023-01-11 21:25:23,845] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 57 2023-01-11T21:41:23.6076232Z [2023-01-11 21:25:25,358] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 57 2023-01-11T21:41:23.6076239Z 2023-01-11T21:41:23.6076443Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6076534Z import torch 2023-01-11T21:41:23.6076626Z import random 2023-01-11T21:41:23.6076793Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6076957Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6076964Z 2023-01-11T21:41:23.6077069Z aten = torch.ops.aten 2023-01-11T21:41:23.6077267Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6077397Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6077404Z 2023-01-11T21:41:23.6077411Z 2023-01-11T21:41:23.6077628Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6077900Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6078066Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6078217Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.6078348Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.6078435Z { 2023-01-11T21:41:23.6078577Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6078662Z { 2023-01-11T21:41:23.6078754Z #pragma omp for 2023-01-11T21:41:23.6078880Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.6078966Z { 2023-01-11T21:41:23.6079069Z #pragma GCC ivdep 2023-01-11T21:41:23.6079192Z for(long i1=0; i1<32; i1+=1) 2023-01-11T21:41:23.6079286Z { 2023-01-11T21:41:23.6079376Z { 2023-01-11T21:41:23.6079450Z { 2023-01-11T21:41:23.6079602Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:23.6079755Z auto tmp1 = static_cast(31); 2023-01-11T21:41:23.6079891Z auto tmp2 = tmp0 < tmp1; 2023-01-11T21:41:23.6080010Z float tmp3 = 6.0; 2023-01-11T21:41:23.6080118Z if(tmp2) 2023-01-11T21:41:23.6080212Z { 2023-01-11T21:41:23.6080353Z auto tmp4 = in_ptr0[i1 + (31*i0)]; 2023-01-11T21:41:23.6080462Z tmp3 = tmp4; 2023-01-11T21:41:23.6080546Z } 2023-01-11T21:41:23.6080686Z out_ptr0[i1 + (32*i0)] = tmp3; 2023-01-11T21:41:23.6080777Z } 2023-01-11T21:41:23.6080865Z } 2023-01-11T21:41:23.6080948Z } 2023-01-11T21:41:23.6081022Z } 
2023-01-11T21:41:23.6081130Z #pragma omp for 2023-01-11T21:41:23.6081239Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.6081323Z { 2023-01-11T21:41:23.6081432Z #pragma GCC ivdep 2023-01-11T21:41:23.6081545Z for(long i1=0; i1<36; i1+=1) 2023-01-11T21:41:23.6081632Z { 2023-01-11T21:41:23.6081710Z { 2023-01-11T21:41:23.6081803Z { 2023-01-11T21:41:23.6082053Z auto tmp0 = static_cast((-2) + i1); 2023-01-11T21:41:23.6082289Z auto tmp1 = static_cast(0); 2023-01-11T21:41:23.6082423Z auto tmp2 = tmp0 >= tmp1; 2023-01-11T21:41:23.6082582Z auto tmp3 = static_cast(31); 2023-01-11T21:41:23.6082716Z auto tmp4 = tmp0 < tmp3; 2023-01-11T21:41:23.6082830Z auto tmp5 = tmp2 & tmp4; 2023-01-11T21:41:23.6082951Z float tmp6 = 99.0; 2023-01-11T21:41:23.6083060Z if(tmp5) 2023-01-11T21:41:23.6083160Z { 2023-01-11T21:41:23.6083417Z auto tmp7 = in_ptr0[(-2) + i1 + (31*i0)]; 2023-01-11T21:41:23.6083531Z tmp6 = tmp7; 2023-01-11T21:41:23.6083624Z } 2023-01-11T21:41:23.6083740Z out_ptr1[i1 + (36*i0)] = tmp6; 2023-01-11T21:41:23.6083829Z } 2023-01-11T21:41:23.6083918Z } 2023-01-11T21:41:23.6083999Z } 2023-01-11T21:41:23.6084157Z } 2023-01-11T21:41:23.6084246Z } 2023-01-11T21:41:23.6084314Z } 2023-01-11T21:41:23.6084427Z ''') 2023-01-11T21:41:23.6084435Z 2023-01-11T21:41:23.6084441Z 2023-01-11T21:41:23.6084569Z async_compile.wait(globals()) 2023-01-11T21:41:23.6084672Z del async_compile 2023-01-11T21:41:23.6084679Z 2023-01-11T21:41:23.6084785Z def call(args): 2023-01-11T21:41:23.6084879Z arg0_1, = args 2023-01-11T21:41:23.6084969Z args.clear() 2023-01-11T21:41:23.6085268Z buf0 = empty_strided((2, 16, 32), (512, 32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6085546Z buf1 = empty_strided((2, 16, 36), (576, 36, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6085781Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.6085879Z del arg0_1 2023-01-11T21:41:23.6085978Z return (buf0, buf1, ) 2023-01-11T21:41:23.6085985Z 2023-01-11T21:41:23.6085991Z 2023-01-11T21:41:23.6086100Z if __name__ == "__main__": 2023-01-11T21:41:23.6086265Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6086446Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6086747Z arg0_1 = rand_strided((2, 16, 31), (496, 31, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6086886Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.6086893Z 2023-01-11T21:41:23.6086976Z ok (1.543s) 2023-01-11T21:41:23.6087644Z test_constant_pad_2d_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6087821Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6088224Z [2023-01-11 21:25:25,388] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 58 2023-01-11T21:41:23.6088610Z [2023-01-11 21:25:26,949] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 58 2023-01-11T21:41:23.6088618Z 2023-01-11T21:41:23.6088745Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6088841Z import torch 2023-01-11T21:41:23.6088936Z import random 2023-01-11T21:41:23.6089088Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6089254Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6089261Z 2023-01-11T21:41:23.6089379Z aten = torch.ops.aten 2023-01-11T21:41:23.6089577Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6089709Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6089716Z 2023-01-11T21:41:23.6089721Z 2023-01-11T21:41:23.6089921Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6090218Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6090530Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6090663Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.6090787Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.6090872Z { 2023-01-11T21:41:23.6091009Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6091093Z { 2023-01-11T21:41:23.6091201Z #pragma omp for 2023-01-11T21:41:23.6091310Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:23.6091386Z { 2023-01-11T21:41:23.6091504Z #pragma GCC ivdep 2023-01-11T21:41:23.6091616Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:23.6091705Z { 2023-01-11T21:41:23.6091797Z { 2023-01-11T21:41:23.6091885Z { 2023-01-11T21:41:23.6092140Z auto tmp0 = static_cast((-1) + i0); 2023-01-11T21:41:23.6092276Z auto tmp1 = static_cast(0); 2023-01-11T21:41:23.6092478Z auto tmp2 = tmp0 >= tmp1; 2023-01-11T21:41:23.6092612Z auto tmp3 = static_cast(8); 2023-01-11T21:41:23.6092732Z auto tmp4 = tmp0 < tmp3; 2023-01-11T21:41:23.6092957Z auto tmp5 = static_cast((-1) + i1); 2023-01-11T21:41:23.6093080Z auto tmp6 = tmp5 >= tmp1; 2023-01-11T21:41:23.6093211Z auto tmp7 = tmp5 < tmp3; 2023-01-11T21:41:23.6093318Z auto tmp8 = tmp2 & tmp4; 2023-01-11T21:41:23.6093436Z auto tmp9 = tmp8 & tmp6; 2023-01-11T21:41:23.6093557Z auto tmp10 = tmp9 & tmp7; 2023-01-11T21:41:23.6093669Z float tmp11 = 6.0; 2023-01-11T21:41:23.6093766Z if(tmp10) 2023-01-11T21:41:23.6093855Z { 2023-01-11T21:41:23.6094112Z auto tmp12 = in_ptr0[(-9) + i1 + (8*i0)]; 2023-01-11T21:41:23.6094221Z tmp11 = tmp12; 2023-01-11T21:41:23.6094319Z } 2023-01-11T21:41:23.6094461Z out_ptr0[i1 + (10*i0)] = tmp11; 2023-01-11T21:41:23.6094553Z } 2023-01-11T21:41:23.6094643Z } 2023-01-11T21:41:23.6094732Z } 2023-01-11T21:41:23.6094819Z } 2023-01-11T21:41:23.6094925Z #pragma omp for 2023-01-11T21:41:23.6095042Z for(long i0=0; i0<15; i0+=1) 2023-01-11T21:41:23.6095134Z { 2023-01-11T21:41:23.6095240Z #pragma GCC ivdep 2023-01-11T21:41:23.6095356Z for(long i1=0; i1<11; i1+=1) 2023-01-11T21:41:23.6095449Z { 2023-01-11T21:41:23.6095541Z { 2023-01-11T21:41:23.6095626Z { 2023-01-11T21:41:23.6095883Z auto tmp0 = static_cast((-3) + i0); 2023-01-11T21:41:23.6096032Z auto tmp1 = static_cast(0); 2023-01-11T21:41:23.6096171Z auto tmp2 = tmp0 >= tmp1; 
2023-01-11T21:41:23.6096314Z auto tmp3 = static_cast(8); 2023-01-11T21:41:23.6096442Z auto tmp4 = tmp0 < tmp3; 2023-01-11T21:41:23.6096683Z auto tmp5 = static_cast((-1) + i1); 2023-01-11T21:41:23.6096798Z auto tmp6 = tmp5 >= tmp1; 2023-01-11T21:41:23.6096928Z auto tmp7 = tmp5 < tmp3; 2023-01-11T21:41:23.6097048Z auto tmp8 = tmp2 & tmp4; 2023-01-11T21:41:23.6097176Z auto tmp9 = tmp8 & tmp6; 2023-01-11T21:41:23.6097299Z auto tmp10 = tmp9 & tmp7; 2023-01-11T21:41:23.6097413Z float tmp11 = 99.0; 2023-01-11T21:41:23.6097517Z if(tmp10) 2023-01-11T21:41:23.6097597Z { 2023-01-11T21:41:23.6097849Z auto tmp12 = in_ptr0[(-25) + i1 + (8*i0)]; 2023-01-11T21:41:23.6098052Z tmp11 = tmp12; 2023-01-11T21:41:23.6098149Z } 2023-01-11T21:41:23.6098274Z out_ptr1[i1 + (11*i0)] = tmp11; 2023-01-11T21:41:23.6098369Z } 2023-01-11T21:41:23.6098459Z } 2023-01-11T21:41:23.6098526Z } 2023-01-11T21:41:23.6098605Z } 2023-01-11T21:41:23.6098690Z } 2023-01-11T21:41:23.6098779Z } 2023-01-11T21:41:23.6098895Z ''') 2023-01-11T21:41:23.6098903Z 2023-01-11T21:41:23.6098908Z 2023-01-11T21:41:23.6099030Z async_compile.wait(globals()) 2023-01-11T21:41:23.6099138Z del async_compile 2023-01-11T21:41:23.6099145Z 2023-01-11T21:41:23.6099237Z def call(args): 2023-01-11T21:41:23.6099330Z arg0_1, = args 2023-01-11T21:41:23.6099431Z args.clear() 2023-01-11T21:41:23.6099756Z buf0 = empty_strided((1, 1, 10, 10), (100, 100, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6100142Z buf1 = empty_strided((1, 1, 15, 11), (165, 165, 11, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6100369Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.6100464Z del arg0_1 2023-01-11T21:41:23.6100569Z return (buf0, buf1, ) 2023-01-11T21:41:23.6100577Z 2023-01-11T21:41:23.6100584Z 2023-01-11T21:41:23.6100684Z if __name__ == "__main__": 2023-01-11T21:41:23.6100847Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6101027Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6101346Z arg0_1 = rand_strided((1, 1, 8, 8), (64, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6101493Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.6101500Z 2023-01-11T21:41:23.6101582Z ok (1.590s) 2023-01-11T21:41:23.6102217Z test_constant_pad_3d_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6102538Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6102916Z [2023-01-11 21:25:26,979] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 59 2023-01-11T21:41:23.6103303Z [2023-01-11 21:25:28,549] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 59 2023-01-11T21:41:23.6103321Z 2023-01-11T21:41:23.6103434Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6103533Z import torch 2023-01-11T21:41:23.6103634Z import random 2023-01-11T21:41:23.6103809Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6103985Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6103993Z 2023-01-11T21:41:23.6104105Z aten = torch.ops.aten 2023-01-11T21:41:23.6104315Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6104436Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6104443Z 2023-01-11T21:41:23.6104464Z 2023-01-11T21:41:23.6104653Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6104951Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6105133Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6105276Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.6105417Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.6105502Z { 2023-01-11T21:41:23.6105646Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6105727Z { 2023-01-11T21:41:23.6105855Z #pragma omp for collapse(2) 2023-01-11T21:41:23.6105970Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.6106060Z { 2023-01-11T21:41:23.6106174Z for(long i1=0; i1<15; i1+=1) 2023-01-11T21:41:23.6106395Z { 2023-01-11T21:41:23.6106496Z #pragma GCC ivdep 2023-01-11T21:41:23.6106618Z for(long i2=0; i2<11; i2+=1) 2023-01-11T21:41:23.6106707Z { 2023-01-11T21:41:23.6106828Z #pragma GCC ivdep 2023-01-11T21:41:23.6106958Z for(long i3=0; i3<7; i3+=1) 2023-01-11T21:41:23.6107050Z { 2023-01-11T21:41:23.6107147Z { 2023-01-11T21:41:23.6107232Z { 2023-01-11T21:41:23.6107495Z auto tmp0 = static_cast((-5) + i1); 2023-01-11T21:41:23.6107650Z auto tmp1 = static_cast(0); 2023-01-11T21:41:23.6107781Z auto tmp2 = tmp0 >= tmp1; 2023-01-11T21:41:23.6107929Z auto tmp3 = static_cast(4); 2023-01-11T21:41:23.6108070Z auto tmp4 = tmp0 < tmp3; 2023-01-11T21:41:23.6108427Z auto tmp5 = static_cast((-3) + i2); 2023-01-11T21:41:23.6108568Z auto tmp6 = tmp5 >= tmp1; 2023-01-11T21:41:23.6108693Z auto tmp7 = tmp5 < tmp3; 2023-01-11T21:41:23.6108947Z auto tmp8 = static_cast((-1) + i3); 2023-01-11T21:41:23.6109089Z auto tmp9 = tmp8 >= tmp1; 2023-01-11T21:41:23.6109212Z auto tmp10 = tmp8 < tmp3; 2023-01-11T21:41:23.6109349Z auto tmp11 = tmp2 & tmp4; 2023-01-11T21:41:23.6109494Z auto tmp12 = tmp11 & tmp6; 2023-01-11T21:41:23.6109621Z auto tmp13 = tmp12 & tmp7; 2023-01-11T21:41:23.6109739Z auto tmp14 = tmp13 & tmp9; 2023-01-11T21:41:23.6109877Z auto tmp15 = tmp14 & tmp10; 2023-01-11T21:41:23.6110009Z float tmp16 = 6.0; 2023-01-11T21:41:23.6110123Z if(tmp15) 2023-01-11T21:41:23.6110227Z { 2023-01-11T21:41:23.6110512Z auto tmp17 = in_ptr0[(-93) + i3 + (4*i2) + (16*i1) + (64*i0)]; 2023-01-11T21:41:23.6110636Z tmp16 = tmp17; 2023-01-11T21:41:23.6110740Z } 2023-01-11T21:41:23.6110900Z out_ptr0[i3 + (7*i2) + (77*i1) + (1155*i0)] = tmp16; 2023-01-11T21:41:23.6111001Z } 2023-01-11T21:41:23.6111108Z } 
2023-01-11T21:41:23.6111213Z } 2023-01-11T21:41:23.6111306Z } 2023-01-11T21:41:23.6111396Z } 2023-01-11T21:41:23.6111478Z } 2023-01-11T21:41:23.6111584Z #pragma omp for 2023-01-11T21:41:23.6111713Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.6111814Z { 2023-01-11T21:41:23.6111947Z #pragma GCC ivdep 2023-01-11T21:41:23.6112072Z for(long i1=0; i1<11; i1+=1) 2023-01-11T21:41:23.6112155Z { 2023-01-11T21:41:23.6112257Z #pragma GCC ivdep 2023-01-11T21:41:23.6112376Z for(long i2=0; i2<4; i2+=1) 2023-01-11T21:41:23.6112466Z { 2023-01-11T21:41:23.6112559Z { 2023-01-11T21:41:23.6112661Z { 2023-01-11T21:41:23.6112925Z auto tmp0 = static_cast((-3) + i1); 2023-01-11T21:41:23.6113074Z auto tmp1 = static_cast(0); 2023-01-11T21:41:23.6113189Z auto tmp2 = tmp0 >= tmp1; 2023-01-11T21:41:23.6113337Z auto tmp3 = static_cast(4); 2023-01-11T21:41:23.6113468Z auto tmp4 = tmp0 < tmp3; 2023-01-11T21:41:23.6113590Z auto tmp5 = tmp2 & tmp4; 2023-01-11T21:41:23.6113790Z float tmp6 = 6.0; 2023-01-11T21:41:23.6113992Z if(tmp5) 2023-01-11T21:41:23.6114096Z { 2023-01-11T21:41:23.6114372Z auto tmp7 = in_ptr0[(-12) + i2 + (4*i1) + (16*i0)]; 2023-01-11T21:41:23.6114478Z tmp6 = tmp7; 2023-01-11T21:41:23.6114570Z } 2023-01-11T21:41:23.6114717Z out_ptr1[i2 + (4*i1) + (44*i0)] = tmp6; 2023-01-11T21:41:23.6114806Z } 2023-01-11T21:41:23.6114895Z } 2023-01-11T21:41:23.6114985Z } 2023-01-11T21:41:23.6115074Z } 2023-01-11T21:41:23.6115140Z } 2023-01-11T21:41:23.6115219Z } 2023-01-11T21:41:23.6115298Z } 2023-01-11T21:41:23.6115408Z ''') 2023-01-11T21:41:23.6115417Z 2023-01-11T21:41:23.6115424Z 2023-01-11T21:41:23.6115550Z async_compile.wait(globals()) 2023-01-11T21:41:23.6115645Z del async_compile 2023-01-11T21:41:23.6115656Z 2023-01-11T21:41:23.6115820Z def call(args): 2023-01-11T21:41:23.6115901Z arg0_1, = args 2023-01-11T21:41:23.6116000Z args.clear() 2023-01-11T21:41:23.6116319Z buf0 = empty_strided((2, 15, 11, 7), (1155, 77, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6116629Z buf1 = empty_strided((2, 4, 11, 4), (176, 44, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6116855Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.6116947Z del arg0_1 2023-01-11T21:41:23.6117046Z return (buf0, buf1, ) 2023-01-11T21:41:23.6117054Z 2023-01-11T21:41:23.6117061Z 2023-01-11T21:41:23.6117161Z if __name__ == "__main__": 2023-01-11T21:41:23.6117321Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6117510Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6117817Z arg0_1 = rand_strided((2, 4, 4, 4), (64, 16, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6117974Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.6117981Z 2023-01-11T21:41:23.6118070Z ok (1.601s) 2023-01-11T21:41:23.6118752Z test_conv2d_backward_channels_last_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6118930Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6119304Z [2023-01-11 21:25:28,601] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 60 2023-01-11T21:41:23.6119686Z [2023-01-11 21:25:28,620] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 60 2023-01-11T21:41:23.6119693Z 2023-01-11T21:41:23.6119821Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6119916Z import torch 2023-01-11T21:41:23.6120015Z import random 2023-01-11T21:41:23.6120180Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6120344Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6120351Z 2023-01-11T21:41:23.6120460Z aten = torch.ops.aten 2023-01-11T21:41:23.6120640Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6120774Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6120781Z 2023-01-11T21:41:23.6120786Z 2023-01-11T21:41:23.6120899Z async_compile.wait(globals()) 2023-01-11T21:41:23.6121004Z del async_compile 2023-01-11T21:41:23.6121011Z 2023-01-11T21:41:23.6121112Z def call(args): 2023-01-11T21:41:23.6121228Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.6121319Z args.clear() 2023-01-11T21:41:23.6121561Z buf0 = aten.convolution_backward(arg0_1, arg1_1, arg2_1, [320], [1, 1], [0, 0], [1, 1], False, [0, 0], 1, [True, True, True]) 2023-01-11T21:41:23.6121751Z del arg0_1 2023-01-11T21:41:23.6121833Z del arg1_1 2023-01-11T21:41:23.6121917Z del arg2_1 2023-01-11T21:41:23.6122014Z buf1 = buf0[0] 2023-01-11T21:41:23.6122177Z assert_size_stride(buf1, (2, 2048, 8, 8), (131072, 1, 16384, 2048)) 2023-01-11T21:41:23.6122276Z buf2 = buf0[1] 2023-01-11T21:41:23.6122431Z assert_size_stride(buf2, (320, 2048, 1, 1), (2048, 1, 2048, 2048)) 2023-01-11T21:41:23.6122530Z buf3 = buf0[2] 2023-01-11T21:41:23.6122655Z assert_size_stride(buf3, (320, ), (1, )) 2023-01-11T21:41:23.6122741Z del buf0 2023-01-11T21:41:23.6122856Z return (buf1, buf2, buf3, ) 2023-01-11T21:41:23.6122864Z 2023-01-11T21:41:23.6122869Z 2023-01-11T21:41:23.6122973Z if __name__ == "__main__": 2023-01-11T21:41:23.6123130Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6123302Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6123721Z arg0_1 = rand_strided((2, 320, 8, 8), (20480, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6124058Z arg1_1 = rand_strided((2, 2048, 8, 8), (131072, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6124407Z arg2_1 = rand_strided((320, 2048, 1, 1), (2048, 1, 2048, 2048), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6124607Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.6124616Z 2023-01-11T21:41:23.6124710Z ok (0.103s) 2023-01-11T21:41:23.6125034Z test_conv2d_binary_cpu (__main__.CpuTests) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.002s) 2023-01-11T21:41:23.6125832Z test_conv2d_channels_last_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6126014Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6126386Z [2023-01-11 21:25:28,728] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 61 2023-01-11T21:41:23.6126785Z [2023-01-11 21:25:30,312] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 61 2023-01-11T21:41:23.6127435Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6127621Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6128038Z [2023-01-11 21:25:30,385] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 62 2023-01-11T21:41:23.6128454Z [2023-01-11 21:25:30,408] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 62 2023-01-11T21:41:23.6129096Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6129277Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6129674Z [2023-01-11 21:25:30,480] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 63 2023-01-11T21:41:23.6130075Z [2023-01-11 21:25:30,502] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 63 2023-01-11T21:41:23.6130084Z 2023-01-11T21:41:23.6130220Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6130416Z import torch 2023-01-11T21:41:23.6130510Z import random 2023-01-11T21:41:23.6130684Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6130849Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6130871Z 2023-01-11T21:41:23.6130962Z aten = torch.ops.aten 2023-01-11T21:41:23.6131157Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6131286Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6131293Z 2023-01-11T21:41:23.6131300Z 2023-01-11T21:41:23.6131507Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6131819Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6131985Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6132134Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6132206Z { 2023-01-11T21:41:23.6132348Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6132434Z { 2023-01-11T21:41:23.6132641Z #pragma omp for collapse(3) 2023-01-11T21:41:23.6132755Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.6132853Z { 2023-01-11T21:41:23.6132962Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:23.6133038Z { 2023-01-11T21:41:23.6133167Z for(long i2=0; i2<256; i2+=1) 2023-01-11T21:41:23.6133262Z { 2023-01-11T21:41:23.6133354Z { 2023-01-11T21:41:23.6133444Z { 2023-01-11T21:41:23.6133600Z auto tmp0 = in_ptr0[i2 + (256*i1) + 
(768*i0)]; 2023-01-11T21:41:23.6133754Z out_ptr0[i1 + (3*i2) + (768*i0)] = tmp0; 2023-01-11T21:41:23.6133831Z } 2023-01-11T21:41:23.6133919Z } 2023-01-11T21:41:23.6134007Z } 2023-01-11T21:41:23.6134102Z } 2023-01-11T21:41:23.6134190Z } 2023-01-11T21:41:23.6134278Z } 2023-01-11T21:41:23.6134367Z } 2023-01-11T21:41:23.6134488Z ''') 2023-01-11T21:41:23.6134497Z 2023-01-11T21:41:23.6134503Z 2023-01-11T21:41:23.6134628Z async_compile.wait(globals()) 2023-01-11T21:41:23.6134732Z del async_compile 2023-01-11T21:41:23.6134739Z 2023-01-11T21:41:23.6134840Z def call(args): 2023-01-11T21:41:23.6134982Z primals_1, primals_2, primals_3 = args 2023-01-11T21:41:23.6135071Z args.clear() 2023-01-11T21:41:23.6135405Z buf0 = empty_strided((2, 3, 16, 16), (768, 1, 48, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6135591Z kernel_cpp_0(c_void_p(primals_3.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6135814Z buf1 = aten.convolution(buf0, primals_1, primals_2, (1, 1), (0, 0), (1, 1), False, (0, 0), 1) 2023-01-11T21:41:23.6135971Z assert_size_stride(buf1, (2, 3, 16, 16), (768, 1, 48, 3)) 2023-01-11T21:41:23.6136062Z del buf0 2023-01-11T21:41:23.6136166Z del primals_2 2023-01-11T21:41:23.6136287Z return (buf1, primals_1, primals_3, ) 2023-01-11T21:41:23.6136297Z 2023-01-11T21:41:23.6136306Z 2023-01-11T21:41:23.6136402Z if __name__ == "__main__": 2023-01-11T21:41:23.6136543Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6136684Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6137009Z primals_1 = rand_strided((3, 3, 1, 1), (3, 1, 3, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6137305Z primals_2 = rand_strided((3, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6137632Z primals_3 = rand_strided((2, 3, 16, 16), (768, 256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6137830Z print_performance(lambda: call([primals_1, primals_2, primals_3])) 2023-01-11T21:41:23.6137838Z 2023-01-11T21:41:23.6137844Z 2023-01-11T21:41:23.6137982Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6138083Z import torch 2023-01-11T21:41:23.6138181Z import random 2023-01-11T21:41:23.6138334Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6138605Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6138613Z 2023-01-11T21:41:23.6138718Z aten = torch.ops.aten 2023-01-11T21:41:23.6138896Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6139023Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6139031Z 2023-01-11T21:41:23.6139037Z 2023-01-11T21:41:23.6139169Z async_compile.wait(globals()) 2023-01-11T21:41:23.6139276Z del async_compile 2023-01-11T21:41:23.6139283Z 2023-01-11T21:41:23.6139383Z def call(args): 2023-01-11T21:41:23.6139522Z primals_1, primals_2, primals_3 = args 2023-01-11T21:41:23.6139625Z args.clear() 2023-01-11T21:41:23.6139842Z buf0 = aten.convolution(primals_3, primals_1, primals_2, (1, 1), (0, 0), (1, 1), False, (0, 0), 1) 2023-01-11T21:41:23.6139972Z assert_size_stride(buf0, (2, 3, 16, 16), (768, 1, 48, 3)) 2023-01-11T21:41:23.6140065Z del primals_2 2023-01-11T21:41:23.6140203Z return (buf0, primals_1, primals_3, ) 2023-01-11T21:41:23.6140215Z 2023-01-11T21:41:23.6140285Z 2023-01-11T21:41:23.6140390Z if __name__ == "__main__": 2023-01-11T21:41:23.6140551Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6140714Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6141038Z primals_1 = 
rand_strided((3, 3, 1, 1), (3, 1, 3, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6141329Z primals_2 = rand_strided((3, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6141658Z primals_3 = rand_strided((2, 3, 16, 16), (768, 1, 48, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6141857Z print_performance(lambda: call([primals_1, primals_2, primals_3])) 2023-01-11T21:41:23.6141866Z 2023-01-11T21:41:23.6141872Z 2023-01-11T21:41:23.6142010Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6142114Z import torch 2023-01-11T21:41:23.6142214Z import random 2023-01-11T21:41:23.6142492Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6142679Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6142686Z 2023-01-11T21:41:23.6142803Z aten = torch.ops.aten 2023-01-11T21:41:23.6142985Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6143120Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6143127Z 2023-01-11T21:41:23.6143133Z 2023-01-11T21:41:23.6143258Z async_compile.wait(globals()) 2023-01-11T21:41:23.6143354Z del async_compile 2023-01-11T21:41:23.6143360Z 2023-01-11T21:41:23.6143455Z def call(args): 2023-01-11T21:41:23.6143587Z primals_1, primals_2, primals_3 = args 2023-01-11T21:41:23.6143685Z args.clear() 2023-01-11T21:41:23.6143893Z buf0 = aten.convolution(primals_3, primals_1, primals_2, (1, 1), (0, 0), (1, 1), False, (0, 0), 1) 2023-01-11T21:41:23.6144048Z assert_size_stride(buf0, (2, 3, 16, 16), (768, 1, 48, 3)) 2023-01-11T21:41:23.6144157Z del primals_2 2023-01-11T21:41:23.6144291Z return (buf0, primals_1, primals_3, ) 2023-01-11T21:41:23.6144298Z 2023-01-11T21:41:23.6144308Z 2023-01-11T21:41:23.6144411Z if __name__ == "__main__": 2023-01-11T21:41:23.6144578Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6144747Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6145093Z primals_1 = rand_strided((3, 3, 1, 1), (3, 1, 3, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6145402Z primals_2 = rand_strided((3, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6145736Z primals_3 = rand_strided((2, 3, 16, 16), (768, 1, 48, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6145939Z print_performance(lambda: call([primals_1, primals_2, primals_3])) 2023-01-11T21:41:23.6145949Z 2023-01-11T21:41:23.6146044Z ok (1.848s) 2023-01-11T21:41:23.6146718Z test_conv2d_packed_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6147024Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6147409Z [2023-01-11 21:25:30,554] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 64 2023-01-11T21:41:23.6147773Z [2023-01-11 21:25:32,144] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 64 2023-01-11T21:41:23.6147796Z 2023-01-11T21:41:23.6147923Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6148018Z import torch 2023-01-11T21:41:23.6148125Z import random 2023-01-11T21:41:23.6148295Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6148469Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6148477Z 2023-01-11T21:41:23.6148588Z aten = torch.ops.aten 2023-01-11T21:41:23.6148856Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6148978Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6148985Z 2023-01-11T21:41:23.6149006Z 2023-01-11T21:41:23.6149203Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6149501Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6149670Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6149809Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6149891Z { 2023-01-11T21:41:23.6150034Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6150116Z { 2023-01-11T21:41:23.6150235Z #pragma omp for collapse(2) 2023-01-11T21:41:23.6150345Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.6150429Z { 2023-01-11T21:41:23.6150555Z for(long i1=0; i1<3136; i1+=1) 2023-01-11T21:41:23.6150643Z { 2023-01-11T21:41:23.6150725Z { 2023-01-11T21:41:23.6150814Z { 2023-01-11T21:41:23.6150961Z auto tmp0 = in_ptr0[i1 + (3136*i0)]; 2023-01-11T21:41:23.6151096Z out_ptr0[i0 + (3*i1)] = tmp0; 2023-01-11T21:41:23.6151188Z } 2023-01-11T21:41:23.6151279Z } 2023-01-11T21:41:23.6151373Z } 2023-01-11T21:41:23.6151457Z } 2023-01-11T21:41:23.6151531Z } 2023-01-11T21:41:23.6151613Z } 2023-01-11T21:41:23.6151741Z ''') 2023-01-11T21:41:23.6151750Z 2023-01-11T21:41:23.6151757Z 2023-01-11T21:41:23.6151959Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.6152246Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6152414Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6152561Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6152646Z { 2023-01-11T21:41:23.6152772Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6152859Z { 2023-01-11T21:41:23.6152978Z #pragma omp for 2023-01-11T21:41:23.6153093Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.6153172Z { 2023-01-11T21:41:23.6153274Z #pragma GCC ivdep 2023-01-11T21:41:23.6153384Z for(long i1=0; i1<324; i1+=1) 2023-01-11T21:41:23.6153476Z { 2023-01-11T21:41:23.6153568Z { 2023-01-11T21:41:23.6153657Z { 2023-01-11T21:41:23.6153871Z auto tmp0 = in_ptr0[i0 + (64*i1)]; 2023-01-11T21:41:23.6154014Z out_ptr0[i1 + (324*i0)] = tmp0; 2023-01-11T21:41:23.6154102Z } 2023-01-11T21:41:23.6154177Z } 2023-01-11T21:41:23.6154268Z } 2023-01-11T21:41:23.6154355Z } 2023-01-11T21:41:23.6154438Z } 2023-01-11T21:41:23.6154520Z } 2023-01-11T21:41:23.6154645Z ''') 2023-01-11T21:41:23.6154653Z 2023-01-11T21:41:23.6154659Z 2023-01-11T21:41:23.6154786Z async_compile.wait(globals()) 
2023-01-11T21:41:23.6154956Z del async_compile 2023-01-11T21:41:23.6154963Z 2023-01-11T21:41:23.6155064Z def call(args): 2023-01-11T21:41:23.6155185Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.6155280Z args.clear() 2023-01-11T21:41:23.6155609Z buf0 = empty_strided((1, 3, 56, 56), (9408, 1, 168, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6155808Z kernel_cpp_0(c_void_p(arg2_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6155900Z del arg2_1 2023-01-11T21:41:23.6156269Z buf1 = torch.ops.mkldnn._convolution_pointwise(buf0, arg0_1, arg1_1, (0, 0), (3, 3), (1, 1), 1, 'none', [], '') 2023-01-11T21:41:23.6156427Z assert_size_stride(buf1, (1, 64, 18, 18), (20736, 1, 1152, 64)) 2023-01-11T21:41:23.6156533Z del arg0_1 2023-01-11T21:41:23.6156641Z del arg1_1 2023-01-11T21:41:23.6156732Z del buf0 2023-01-11T21:41:23.6157066Z buf2 = empty_strided((1, 64, 18, 18), (20736, 324, 18, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6157321Z kernel_cpp_1(c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.6157433Z return (buf2, ) 2023-01-11T21:41:23.6157440Z 2023-01-11T21:41:23.6157445Z 2023-01-11T21:41:23.6157537Z if __name__ == "__main__": 2023-01-11T21:41:23.6157706Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6157886Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6158204Z arg0_1 = rand_strided((64, 3, 3, 3), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6158494Z arg1_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6158816Z arg2_1 = rand_strided((1, 3, 56, 56), (9408, 3136, 56, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6158991Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.6158997Z 2023-01-11T21:41:23.6159103Z ok (1.643s) 2023-01-11T21:41:23.6159365Z test_conv2d_unary_cpu (__main__.CpuTests) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s) 2023-01-11T21:41:23.6160096Z test_conv3d_channels_last_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6160279Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6160667Z [2023-01-11 21:25:32,219] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 65 2023-01-11T21:41:23.6161059Z [2023-01-11 21:25:33,750] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 65 2023-01-11T21:41:23.6161713Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6161908Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6162302Z [2023-01-11 21:25:33,821] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 66 2023-01-11T21:41:23.6162707Z [2023-01-11 21:25:35,443] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 66 2023-01-11T21:41:23.6163363Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6163546Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6164013Z [2023-01-11 21:25:35,516] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 67 2023-01-11T21:41:23.6164416Z [2023-01-11 21:25:35,548] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 67 2023-01-11T21:41:23.6164426Z 2023-01-11T21:41:23.6164550Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6164653Z import torch 2023-01-11T21:41:23.6164758Z import random 2023-01-11T21:41:23.6164915Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6165084Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6165092Z 2023-01-11T21:41:23.6165206Z aten = torch.ops.aten 2023-01-11T21:41:23.6165398Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6165507Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6165514Z 2023-01-11T21:41:23.6165538Z 2023-01-11T21:41:23.6165728Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6166105Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6166293Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6166434Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6166522Z { 2023-01-11T21:41:23.6166659Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6166744Z { 2023-01-11T21:41:23.6166836Z #pragma omp for 2023-01-11T21:41:23.6166951Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.6167036Z { 2023-01-11T21:41:23.6167231Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.6167361Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.6167445Z } 2023-01-11T21:41:23.6167576Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.6167671Z for(long i0=8; i0<9; i0+=1) 2023-01-11T21:41:23.6167756Z { 2023-01-11T21:41:23.6167880Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.6167993Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:23.6168081Z } 2023-01-11T21:41:23.6168169Z } 2023-01-11T21:41:23.6168243Z } 2023-01-11T21:41:23.6168359Z ''') 2023-01-11T21:41:23.6168370Z 2023-01-11T21:41:23.6168377Z 2023-01-11T21:41:23.6168504Z async_compile.wait(globals()) 2023-01-11T21:41:23.6168612Z del async_compile 2023-01-11T21:41:23.6168619Z 2023-01-11T21:41:23.6168712Z def call(args): 2023-01-11T21:41:23.6168851Z primals_1, primals_2, primals_3 = args 2023-01-11T21:41:23.6168946Z args.clear() 2023-01-11T21:41:23.6169273Z buf0 = empty_strided((3, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6169455Z 
kernel_cpp_0(c_void_p(primals_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6169675Z buf1 = aten.convolution(primals_3, buf0, primals_2, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 1) 2023-01-11T21:41:23.6169835Z assert_size_stride(buf1, (2, 3, 16, 16, 16), (12288, 4096, 256, 16, 1)) 2023-01-11T21:41:23.6169932Z del buf0 2023-01-11T21:41:23.6170034Z del primals_2 2023-01-11T21:41:23.6170168Z return (buf1, primals_1, primals_3, ) 2023-01-11T21:41:23.6170175Z 2023-01-11T21:41:23.6170182Z 2023-01-11T21:41:23.6170289Z if __name__ == "__main__": 2023-01-11T21:41:23.6170461Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6170632Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6170985Z primals_1 = rand_strided((3, 3, 1, 1, 1), (3, 1, 3, 3, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6171283Z primals_2 = rand_strided((3, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6171615Z primals_3 = rand_strided((2, 3, 16, 16, 16), (12288, 4096, 256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6171807Z print_performance(lambda: call([primals_1, primals_2, primals_3])) 2023-01-11T21:41:23.6171815Z 2023-01-11T21:41:23.6171820Z 2023-01-11T21:41:23.6172035Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6172133Z import torch 2023-01-11T21:41:23.6172226Z import random 2023-01-11T21:41:23.6172372Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6172546Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6172553Z 2023-01-11T21:41:23.6172668Z aten = torch.ops.aten 2023-01-11T21:41:23.6172847Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6173126Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6173133Z 2023-01-11T21:41:23.6173139Z 2023-01-11T21:41:23.6173353Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6173655Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6173821Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6173942Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.6174077Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.6174289Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.6174378Z { 2023-01-11T21:41:23.6174519Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6174609Z { 2023-01-11T21:41:23.6174725Z #pragma omp for 2023-01-11T21:41:23.6174828Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.6174917Z { 2023-01-11T21:41:23.6175111Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.6175242Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.6175319Z } 2023-01-11T21:41:23.6175453Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.6175564Z for(long i0=8; i0<9; i0+=1) 2023-01-11T21:41:23.6175639Z { 2023-01-11T21:41:23.6175756Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.6175865Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:23.6175948Z } 2023-01-11T21:41:23.6176079Z #pragma omp for collapse(3) 2023-01-11T21:41:23.6176198Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.6176287Z { 2023-01-11T21:41:23.6176381Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:23.6176464Z { 2023-01-11T21:41:23.6176584Z for(long i2=0; i2<4096; i2+=1) 2023-01-11T21:41:23.6176669Z { 2023-01-11T21:41:23.6176764Z { 2023-01-11T21:41:23.6176865Z { 2023-01-11T21:41:23.6177016Z auto tmp0 = in_ptr1[i1 + (3*i2) + (12288*i0)]; 2023-01-11T21:41:23.6177144Z out_ptr1[i2 + (4096*i1) + (12288*i0)] = 
tmp0; 2023-01-11T21:41:23.6177239Z } 2023-01-11T21:41:23.6177327Z } 2023-01-11T21:41:23.6177419Z } 2023-01-11T21:41:23.6177505Z } 2023-01-11T21:41:23.6177582Z } 2023-01-11T21:41:23.6177646Z } 2023-01-11T21:41:23.6177728Z } 2023-01-11T21:41:23.6177873Z ''') 2023-01-11T21:41:23.6177884Z 2023-01-11T21:41:23.6177894Z 2023-01-11T21:41:23.6178040Z async_compile.wait(globals()) 2023-01-11T21:41:23.6178150Z del async_compile 2023-01-11T21:41:23.6178158Z 2023-01-11T21:41:23.6178258Z def call(args): 2023-01-11T21:41:23.6178412Z primals_1, primals_2, primals_3 = args 2023-01-11T21:41:23.6178506Z args.clear() 2023-01-11T21:41:23.6178814Z buf0 = empty_strided((3, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6179142Z buf1 = empty_strided((2, 3, 16, 16, 16), (12288, 4096, 256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6179435Z kernel_cpp_0(c_void_p(primals_1.data_ptr()), c_void_p(primals_3.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.6179643Z buf2 = aten.convolution(buf1, buf0, primals_2, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 1) 2023-01-11T21:41:23.6179794Z assert_size_stride(buf2, (2, 3, 16, 16, 16), (12288, 4096, 256, 16, 1)) 2023-01-11T21:41:23.6179887Z del buf0 2023-01-11T21:41:23.6180067Z del buf1 2023-01-11T21:41:23.6180163Z del primals_2 2023-01-11T21:41:23.6180290Z return (buf2, primals_1, primals_3, ) 2023-01-11T21:41:23.6180298Z 2023-01-11T21:41:23.6180304Z 2023-01-11T21:41:23.6180409Z if __name__ == "__main__": 2023-01-11T21:41:23.6180579Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6180755Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6181081Z primals_1 = rand_strided((3, 3, 1, 1, 1), (3, 1, 3, 3, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6181366Z primals_2 = rand_strided((3, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6181706Z primals_3 = rand_strided((2, 3, 16, 16, 16), (12288, 1, 768, 48, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6181907Z print_performance(lambda: call([primals_1, primals_2, primals_3])) 2023-01-11T21:41:23.6181914Z 2023-01-11T21:41:23.6181921Z 2023-01-11T21:41:23.6182042Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6182213Z import torch 2023-01-11T21:41:23.6182309Z import random 2023-01-11T21:41:23.6182645Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6182818Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6182826Z 2023-01-11T21:41:23.6182939Z aten = torch.ops.aten 2023-01-11T21:41:23.6183128Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6183257Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6183264Z 2023-01-11T21:41:23.6183269Z 2023-01-11T21:41:23.6183463Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6183762Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6183929Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6184084Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.6184212Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.6184355Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.6184433Z { 2023-01-11T21:41:23.6184546Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6184627Z { 2023-01-11T21:41:23.6184730Z #pragma omp for 2023-01-11T21:41:23.6184831Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.6184915Z { 
2023-01-11T21:41:23.6185102Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.6185234Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.6185301Z } 2023-01-11T21:41:23.6185422Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.6185526Z for(long i0=8; i0<9; i0+=1) 2023-01-11T21:41:23.6185611Z { 2023-01-11T21:41:23.6185731Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.6185839Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:23.6185922Z } 2023-01-11T21:41:23.6186022Z #pragma omp for collapse(3) 2023-01-11T21:41:23.6186124Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.6186219Z { 2023-01-11T21:41:23.6186333Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:23.6186420Z { 2023-01-11T21:41:23.6186537Z for(long i2=0; i2<4096; i2+=1) 2023-01-11T21:41:23.6186623Z { 2023-01-11T21:41:23.6186706Z { 2023-01-11T21:41:23.6186797Z { 2023-01-11T21:41:23.6186956Z auto tmp0 = in_ptr1[i1 + (3*i2) + (12288*i0)]; 2023-01-11T21:41:23.6187096Z out_ptr1[i2 + (4096*i1) + (12288*i0)] = tmp0; 2023-01-11T21:41:23.6187189Z } 2023-01-11T21:41:23.6187280Z } 2023-01-11T21:41:23.6187374Z } 2023-01-11T21:41:23.6187442Z } 2023-01-11T21:41:23.6187517Z } 2023-01-11T21:41:23.6187601Z } 2023-01-11T21:41:23.6187688Z } 2023-01-11T21:41:23.6187820Z ''') 2023-01-11T21:41:23.6187828Z 2023-01-11T21:41:23.6187834Z 2023-01-11T21:41:23.6187954Z async_compile.wait(globals()) 2023-01-11T21:41:23.6188184Z del async_compile 2023-01-11T21:41:23.6188191Z 2023-01-11T21:41:23.6188277Z def call(args): 2023-01-11T21:41:23.6188429Z primals_1, primals_2, primals_3 = args 2023-01-11T21:41:23.6188524Z args.clear() 2023-01-11T21:41:23.6188826Z buf0 = empty_strided((3, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6189133Z buf1 = empty_strided((2, 3, 16, 16, 16), (12288, 4096, 256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6189423Z kernel_cpp_0(c_void_p(primals_1.data_ptr()), c_void_p(primals_3.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.6189621Z buf2 = aten.convolution(buf1, buf0, primals_2, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 1) 2023-01-11T21:41:23.6189780Z assert_size_stride(buf2, (2, 3, 16, 16, 16), (12288, 4096, 256, 16, 1)) 2023-01-11T21:41:23.6189857Z del buf0 2023-01-11T21:41:23.6189955Z del buf1 2023-01-11T21:41:23.6190135Z del primals_2 2023-01-11T21:41:23.6190286Z return (buf2, primals_1, primals_3, ) 2023-01-11T21:41:23.6190294Z 2023-01-11T21:41:23.6190299Z 2023-01-11T21:41:23.6190407Z if __name__ == "__main__": 2023-01-11T21:41:23.6190567Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6190744Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6191067Z primals_1 = rand_strided((3, 3, 1, 1, 1), (3, 1, 3, 3, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6191360Z primals_2 = rand_strided((3, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6191704Z primals_3 = rand_strided((2, 3, 16, 16, 16), (12288, 1, 768, 48, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6191891Z print_performance(lambda: call([primals_1, primals_2, primals_3])) 2023-01-11T21:41:23.6191899Z 2023-01-11T21:41:23.6191988Z ok (3.403s) 2023-01-11T21:41:23.6192194Z test_conv_autotune_cpu (__main__.CpuTests) ... skip: requires cuda (0.001s) 2023-01-11T21:41:23.6192708Z test_conv_backward_cpu (__main__.CpuTests) ... 
[2023-01-11 21:25:35,630] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 68 2023-01-11T21:41:23.6193094Z [2023-01-11 21:25:35,700] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 68 2023-01-11T21:41:23.6193102Z 2023-01-11T21:41:23.6193229Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6193305Z import torch 2023-01-11T21:41:23.6193405Z import random 2023-01-11T21:41:23.6193571Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6193819Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6193827Z 2023-01-11T21:41:23.6193931Z aten = torch.ops.aten 2023-01-11T21:41:23.6194118Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6194253Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6194260Z 2023-01-11T21:41:23.6194266Z 2023-01-11T21:41:23.6194386Z async_compile.wait(globals()) 2023-01-11T21:41:23.6194477Z del async_compile 2023-01-11T21:41:23.6194484Z 2023-01-11T21:41:23.6194589Z def call(args): 2023-01-11T21:41:23.6194769Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1, arg8_1 = args 2023-01-11T21:41:23.6194867Z args.clear() 2023-01-11T21:41:23.6195093Z buf0 = aten.convolution_backward(arg0_1, arg1_1, arg2_1, [4], [1, 1], [0, 0], [1, 1], False, [0, 0], 1, [True, True, True]) 2023-01-11T21:41:23.6195194Z buf1 = buf0[0] 2023-01-11T21:41:23.6195338Z assert_size_stride(buf1, (3, 4, 5, 5), (100, 25, 5, 1)) 2023-01-11T21:41:23.6195428Z buf2 = buf0[1] 2023-01-11T21:41:23.6195559Z assert_size_stride(buf2, (4, 4, 3, 3), (36, 9, 3, 1)) 2023-01-11T21:41:23.6195651Z buf3 = buf0[2] 2023-01-11T21:41:23.6195768Z assert_size_stride(buf3, (4, ), (1, )) 2023-01-11T21:41:23.6195851Z del buf0 2023-01-11T21:41:23.6196078Z buf4 = aten.convolution_backward(arg0_1, arg1_1, arg2_1, [4], [1, 1], [0, 0], [1, 1], False, [0, 0], 1, [True, False, False]) 2023-01-11T21:41:23.6196259Z del arg0_1 2023-01-11T21:41:23.6196351Z del arg1_1 2023-01-11T21:41:23.6196437Z del arg2_1 2023-01-11T21:41:23.6196530Z buf5 = buf4[0] 2023-01-11T21:41:23.6196664Z assert_size_stride(buf5, (3, 4, 5, 5), (100, 25, 5, 1)) 2023-01-11T21:41:23.6196759Z del buf4 2023-01-11T21:41:23.6196976Z buf6 = aten.convolution_backward(arg3_1, arg4_1, arg5_1, [4], [1], [0], [1], False, [0], 1, [True, True, True]) 2023-01-11T21:41:23.6197077Z del arg3_1 2023-01-11T21:41:23.6197158Z del arg4_1 2023-01-11T21:41:23.6197232Z del arg5_1 2023-01-11T21:41:23.6197326Z buf7 = buf6[0] 2023-01-11T21:41:23.6197476Z assert_size_stride(buf7, (3, 4, 5, 5), (100, 25, 5, 1)) 2023-01-11T21:41:23.6197564Z buf8 = buf6[1] 2023-01-11T21:41:23.6197695Z assert_size_stride(buf8, (4, 4, 3, 3), (36, 9, 3, 1)) 2023-01-11T21:41:23.6197788Z buf9 = buf6[2] 2023-01-11T21:41:23.6197908Z assert_size_stride(buf9, (4, ), (1, )) 2023-01-11T21:41:23.6197997Z del buf6 2023-01-11T21:41:23.6198298Z buf10 = aten.convolution_backward(arg6_1, arg7_1, arg8_1, [4], [1, 1, 1], [0, 0, 0], [1, 1, 1], False, [0, 0, 0], 1, [True, True, True]) 2023-01-11T21:41:23.6198400Z del arg6_1 2023-01-11T21:41:23.6198491Z del arg7_1 2023-01-11T21:41:23.6198572Z del arg8_1 2023-01-11T21:41:23.6198662Z buf11 = buf10[0] 2023-01-11T21:41:23.6198803Z assert_size_stride(buf11, (3, 4, 5, 5, 5), (500, 125, 25, 5, 1)) 2023-01-11T21:41:23.6198905Z buf12 = buf10[1] 2023-01-11T21:41:23.6199048Z assert_size_stride(buf12, (4, 4, 3, 3, 3), (108, 27, 9, 3, 1)) 2023-01-11T21:41:23.6199146Z buf13 = buf10[2] 2023-01-11T21:41:23.6199280Z assert_size_stride(buf13, (4, 
), (1, )) 2023-01-11T21:41:23.6199384Z del buf10 2023-01-11T21:41:23.6199547Z return (buf1, buf2, buf3, buf5, buf7, buf8, buf9, buf11, buf12, buf13, ) 2023-01-11T21:41:23.6199557Z 2023-01-11T21:41:23.6199563Z 2023-01-11T21:41:23.6199665Z if __name__ == "__main__": 2023-01-11T21:41:23.6199827Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6200000Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6200310Z arg0_1 = rand_strided((3, 4, 3, 3), (36, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6200631Z arg1_1 = rand_strided((3, 4, 5, 5), (100, 25, 5, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6200942Z arg2_1 = rand_strided((4, 4, 3, 3), (36, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6201241Z arg3_1 = rand_strided((3, 4, 3, 3), (36, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6201550Z arg4_1 = rand_strided((3, 4, 5, 5), (100, 25, 5, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6201842Z arg5_1 = rand_strided((4, 4, 3, 3), (36, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6202139Z arg6_1 = rand_strided((3, 4, 3, 3, 3), (108, 27, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6202464Z arg7_1 = rand_strided((3, 4, 5, 5, 5), (500, 125, 25, 5, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6202778Z arg8_1 = rand_strided((4, 4, 3, 3, 3), (108, 27, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6203017Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1, arg8_1])) 2023-01-11T21:41:23.6203025Z 2023-01-11T21:41:23.6203120Z ok (0.153s) 2023-01-11T21:41:23.6203795Z test_conv_bn_fuse_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6203971Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6204365Z [2023-01-11 21:25:35,816] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 69 2023-01-11T21:41:23.6204859Z [2023-01-11 21:25:35,838] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 69 2023-01-11T21:41:23.6205478Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6205648Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6206000Z [2023-01-11 21:25:35,986] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 70 2023-01-11T21:41:23.6206394Z [2023-01-11 21:25:36,009] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 70 2023-01-11T21:41:23.6207086Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6207278Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6207655Z [2023-01-11 21:25:36,120] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 71 2023-01-11T21:41:23.6208040Z [2023-01-11 21:25:36,142] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 71 2023-01-11T21:41:23.6208653Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6208853Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6209229Z [2023-01-11 21:25:36,256] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 72 2023-01-11T21:41:23.6209622Z [2023-01-11 21:25:36,279] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 72 2023-01-11T21:41:23.6210220Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6210409Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6210780Z [2023-01-11 21:25:36,516] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 73 2023-01-11T21:41:23.6211171Z [2023-01-11 21:25:36,538] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 73 2023-01-11T21:41:23.6211801Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6212005Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6212398Z [2023-01-11 21:25:36,649] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 74 2023-01-11T21:41:23.6212407Z 2023-01-11T21:41:23.6212544Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6212641Z import torch 2023-01-11T21:41:23.6212752Z import random 2023-01-11T21:41:23.6213006Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6213167Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6213191Z 2023-01-11T21:41:23.6213299Z aten = torch.ops.aten 2023-01-11T21:41:23.6213482Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6213613Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6213620Z 2023-01-11T21:41:23.6213626Z 2023-01-11T21:41:23.6213752Z async_compile.wait(globals()) 2023-01-11T21:41:23.6213860Z del async_compile 2023-01-11T21:41:23.6213866Z 2023-01-11T21:41:23.6213955Z def call(args): 2023-01-11T21:41:23.6214110Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6214200Z args.clear() 2023-01-11T21:41:23.6214397Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1,), (0,), (1,), False, (0,), 1) 2023-01-11T21:41:23.6214545Z assert_size_stride(buf0, (1, 32, 112), (3584, 112, 1)) 2023-01-11T21:41:23.6214647Z del arg0_1 2023-01-11T21:41:23.6214851Z del arg1_1 2023-01-11T21:41:23.6214945Z del arg7_1 2023-01-11T21:41:23.6215042Z return (buf0, ) 2023-01-11T21:41:23.6215049Z 2023-01-11T21:41:23.6215054Z 2023-01-11T21:41:23.6215162Z if __name__ == "__main__": 2023-01-11T21:41:23.6215316Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6215494Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6215821Z arg0_1 = rand_strided((32, 3, 1), (3, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6216119Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6216406Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6216680Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6216955Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6217248Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6217509Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6217822Z arg7_1 = rand_strided((1, 3, 112), (336, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6218048Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6218056Z 2023-01-11T21:41:23.6218063Z 2023-01-11T21:41:23.6218205Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6218308Z import torch 2023-01-11T21:41:23.6218414Z import random 2023-01-11T21:41:23.6218582Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6218768Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6218776Z 2023-01-11T21:41:23.6218871Z aten = torch.ops.aten 2023-01-11T21:41:23.6219080Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6219209Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6219216Z 2023-01-11T21:41:23.6219228Z 2023-01-11T21:41:23.6219370Z 
async_compile.wait(globals()) 2023-01-11T21:41:23.6219474Z del async_compile 2023-01-11T21:41:23.6219481Z 2023-01-11T21:41:23.6219581Z def call(args): 2023-01-11T21:41:23.6219751Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6219846Z args.clear() 2023-01-11T21:41:23.6220034Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1,), (0,), (1,), False, (0,), 4) 2023-01-11T21:41:23.6220191Z assert_size_stride(buf0, (1, 128, 112), (14336, 112, 1)) 2023-01-11T21:41:23.6220294Z del arg0_1 2023-01-11T21:41:23.6220387Z del arg1_1 2023-01-11T21:41:23.6220477Z del arg7_1 2023-01-11T21:41:23.6220585Z return (buf0, ) 2023-01-11T21:41:23.6220591Z 2023-01-11T21:41:23.6220597Z 2023-01-11T21:41:23.6220708Z if __name__ == "__main__": 2023-01-11T21:41:23.6220862Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6221054Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6221470Z arg0_1 = rand_strided((128, 3, 1), (3, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6221765Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6222052Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6222475Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6222772Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6223050Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6223291Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6223599Z arg7_1 = rand_strided((1, 12, 112), (1344, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6223833Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6223843Z 2023-01-11T21:41:23.6223855Z 2023-01-11T21:41:23.6224092Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6224203Z import torch 2023-01-11T21:41:23.6224304Z import random 2023-01-11T21:41:23.6224476Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6224655Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6224662Z 2023-01-11T21:41:23.6224764Z aten = torch.ops.aten 2023-01-11T21:41:23.6224967Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6225103Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6225110Z 2023-01-11T21:41:23.6225116Z 2023-01-11T21:41:23.6225251Z async_compile.wait(globals()) 2023-01-11T21:41:23.6225356Z del async_compile 2023-01-11T21:41:23.6225362Z 2023-01-11T21:41:23.6225466Z def call(args): 2023-01-11T21:41:23.6225639Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6225726Z args.clear() 2023-01-11T21:41:23.6225882Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1,), (0,), (2,), False, (0,), 1) 2023-01-11T21:41:23.6226037Z assert_size_stride(buf0, (1, 32, 112), (3584, 112, 1)) 2023-01-11T21:41:23.6226130Z del arg0_1 2023-01-11T21:41:23.6226223Z del arg1_1 2023-01-11T21:41:23.6226317Z del arg7_1 2023-01-11T21:41:23.6226421Z return (buf0, ) 2023-01-11T21:41:23.6226427Z 2023-01-11T21:41:23.6226432Z 2023-01-11T21:41:23.6226540Z if __name__ == "__main__": 2023-01-11T21:41:23.6226690Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6226871Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6227196Z arg0_1 = rand_strided((32, 3, 1), 
(3, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6227492Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6227771Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6228039Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6228295Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6228567Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6228806Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6229099Z arg7_1 = rand_strided((1, 3, 112), (336, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6229317Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6229326Z 2023-01-11T21:41:23.6229333Z 2023-01-11T21:41:23.6229461Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6229557Z import torch 2023-01-11T21:41:23.6229657Z import random 2023-01-11T21:41:23.6229810Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6229976Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6229982Z 2023-01-11T21:41:23.6230085Z aten = torch.ops.aten 2023-01-11T21:41:23.6230374Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6230499Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6230507Z 2023-01-11T21:41:23.6230512Z 2023-01-11T21:41:23.6230638Z async_compile.wait(globals()) 2023-01-11T21:41:23.6230741Z del async_compile 2023-01-11T21:41:23.6230747Z 2023-01-11T21:41:23.6230845Z def call(args): 2023-01-11T21:41:23.6231002Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6231102Z args.clear() 2023-01-11T21:41:23.6231269Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1,), (0,), (2,), False, (0,), 4) 2023-01-11T21:41:23.6231418Z assert_size_stride(buf0, (1, 128, 112), (14336, 112, 1)) 2023-01-11T21:41:23.6231510Z del arg0_1 2023-01-11T21:41:23.6231603Z del arg1_1 2023-01-11T21:41:23.6231699Z del arg7_1 2023-01-11T21:41:23.6231800Z return (buf0, ) 2023-01-11T21:41:23.6231807Z 2023-01-11T21:41:23.6231813Z 2023-01-11T21:41:23.6231919Z if __name__ == "__main__": 2023-01-11T21:41:23.6232150Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6232311Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6232626Z arg0_1 = rand_strided((128, 3, 1), (3, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6232903Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6233180Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6233446Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6233811Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6234094Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6234346Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6234634Z arg7_1 = rand_strided((1, 12, 112), (1344, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6234863Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6234871Z 2023-01-11T21:41:23.6234877Z 2023-01-11T21:41:23.6235010Z from ctypes 
import c_void_p, c_long 2023-01-11T21:41:23.6235103Z import torch 2023-01-11T21:41:23.6235199Z import random 2023-01-11T21:41:23.6235364Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6235535Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6235542Z 2023-01-11T21:41:23.6235652Z aten = torch.ops.aten 2023-01-11T21:41:23.6235831Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6235953Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6235961Z 2023-01-11T21:41:23.6235966Z 2023-01-11T21:41:23.6236091Z async_compile.wait(globals()) 2023-01-11T21:41:23.6236191Z del async_compile 2023-01-11T21:41:23.6236197Z 2023-01-11T21:41:23.6236293Z def call(args): 2023-01-11T21:41:23.6236459Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6236560Z args.clear() 2023-01-11T21:41:23.6236742Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1,), (0,), (1,), False, (0,), 1) 2023-01-11T21:41:23.6236870Z assert_size_stride(buf0, (1, 32, 110), (3520, 110, 1)) 2023-01-11T21:41:23.6236963Z del arg0_1 2023-01-11T21:41:23.6237055Z del arg1_1 2023-01-11T21:41:23.6237149Z del arg7_1 2023-01-11T21:41:23.6237247Z return (buf0, ) 2023-01-11T21:41:23.6237254Z 2023-01-11T21:41:23.6237260Z 2023-01-11T21:41:23.6237366Z if __name__ == "__main__": 2023-01-11T21:41:23.6237521Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6237676Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6237986Z arg0_1 = rand_strided((32, 3, 3), (9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6238276Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6238545Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6238912Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6239181Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6239451Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6239704Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6239983Z arg7_1 = rand_strided((1, 3, 112), (336, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6240205Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6240213Z 2023-01-11T21:41:23.6240612Z [2023-01-11 21:25:36,672] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 74 2023-01-11T21:41:23.6241309Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6241505Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6241896Z [2023-01-11 21:25:36,783] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 75 2023-01-11T21:41:23.6242288Z [2023-01-11 21:25:36,806] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 75 2023-01-11T21:41:23.6242929Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. 
It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6243125Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6243511Z [2023-01-11 21:25:36,916] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 76 2023-01-11T21:41:23.6243912Z [2023-01-11 21:25:36,938] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 76 2023-01-11T21:41:23.6244528Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6244692Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6245062Z [2023-01-11 21:25:37,050] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 77 2023-01-11T21:41:23.6245459Z [2023-01-11 21:25:37,072] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 77 2023-01-11T21:41:23.6246098Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6246268Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6246647Z [2023-01-11 21:25:37,182] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 78 2023-01-11T21:41:23.6246992Z [2023-01-11 21:25:37,204] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 78 2023-01-11T21:41:23.6247612Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6247874Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6248240Z [2023-01-11 21:25:37,315] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 79 2023-01-11T21:41:23.6248248Z 2023-01-11T21:41:23.6248387Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6248471Z import torch 2023-01-11T21:41:23.6248573Z import random 2023-01-11T21:41:23.6248737Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6248910Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6248918Z 2023-01-11T21:41:23.6249026Z aten = torch.ops.aten 2023-01-11T21:41:23.6249225Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6249416Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6249424Z 2023-01-11T21:41:23.6249430Z 2023-01-11T21:41:23.6249557Z async_compile.wait(globals()) 2023-01-11T21:41:23.6249646Z del async_compile 2023-01-11T21:41:23.6249654Z 2023-01-11T21:41:23.6249743Z def call(args): 2023-01-11T21:41:23.6249916Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6250021Z args.clear() 2023-01-11T21:41:23.6250212Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1,), (0,), (1,), False, (0,), 4) 2023-01-11T21:41:23.6250354Z assert_size_stride(buf0, (1, 128, 110), (14080, 110, 1)) 2023-01-11T21:41:23.6250450Z del arg0_1 2023-01-11T21:41:23.6250526Z del arg1_1 2023-01-11T21:41:23.6250623Z del arg7_1 2023-01-11T21:41:23.6250717Z return (buf0, ) 2023-01-11T21:41:23.6250724Z 2023-01-11T21:41:23.6250729Z 2023-01-11T21:41:23.6250837Z if __name__ == "__main__": 2023-01-11T21:41:23.6250997Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6251171Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6251495Z arg0_1 = rand_strided((128, 3, 3), (9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6251789Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6252065Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6252350Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6252639Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6252923Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6253181Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6253487Z arg7_1 = rand_strided((1, 12, 112), (1344, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6253713Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6253726Z 2023-01-11T21:41:23.6253734Z 2023-01-11T21:41:23.6253872Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6253968Z import torch 2023-01-11T21:41:23.6254047Z import random 2023-01-11T21:41:23.6254218Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6254383Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6254389Z 2023-01-11T21:41:23.6254500Z aten = torch.ops.aten 2023-01-11T21:41:23.6254691Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6254820Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6254827Z 2023-01-11T21:41:23.6254832Z 
2023-01-11T21:41:23.6254949Z async_compile.wait(globals()) 2023-01-11T21:41:23.6255036Z del async_compile 2023-01-11T21:41:23.6255057Z 2023-01-11T21:41:23.6255147Z def call(args): 2023-01-11T21:41:23.6255315Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6255497Z args.clear() 2023-01-11T21:41:23.6255684Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1,), (0,), (2,), False, (0,), 1) 2023-01-11T21:41:23.6255835Z assert_size_stride(buf0, (1, 32, 108), (3456, 108, 1)) 2023-01-11T21:41:23.6255927Z del arg0_1 2023-01-11T21:41:23.6256010Z del arg1_1 2023-01-11T21:41:23.6256087Z del arg7_1 2023-01-11T21:41:23.6256196Z return (buf0, ) 2023-01-11T21:41:23.6256204Z 2023-01-11T21:41:23.6256210Z 2023-01-11T21:41:23.6256325Z if __name__ == "__main__": 2023-01-11T21:41:23.6256489Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6256658Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6256970Z arg0_1 = rand_strided((32, 3, 3), (9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6257249Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6257530Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6257852Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6258102Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6258346Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6258581Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6258856Z arg7_1 = rand_strided((1, 3, 112), (336, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6259057Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6259064Z 2023-01-11T21:41:23.6259071Z 2023-01-11T21:41:23.6259189Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6259275Z import torch 2023-01-11T21:41:23.6259351Z import random 2023-01-11T21:41:23.6259494Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6259646Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6259657Z 2023-01-11T21:41:23.6259768Z aten = torch.ops.aten 2023-01-11T21:41:23.6259980Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6260119Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6260125Z 2023-01-11T21:41:23.6260132Z 2023-01-11T21:41:23.6260248Z async_compile.wait(globals()) 2023-01-11T21:41:23.6260342Z del async_compile 2023-01-11T21:41:23.6260348Z 2023-01-11T21:41:23.6260426Z def call(args): 2023-01-11T21:41:23.6260588Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6260689Z args.clear() 2023-01-11T21:41:23.6260881Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1,), (0,), (2,), False, (0,), 4) 2023-01-11T21:41:23.6261030Z assert_size_stride(buf0, (1, 128, 108), (13824, 108, 1)) 2023-01-11T21:41:23.6261123Z del arg0_1 2023-01-11T21:41:23.6261217Z del arg1_1 2023-01-11T21:41:23.6261300Z del arg7_1 2023-01-11T21:41:23.6261402Z return (buf0, ) 2023-01-11T21:41:23.6261414Z 2023-01-11T21:41:23.6261424Z 2023-01-11T21:41:23.6261529Z if __name__ == "__main__": 2023-01-11T21:41:23.6261686Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6261857Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6262192Z arg0_1 = 
rand_strided((128, 3, 3), (9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6262673Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6262988Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6263289Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6263613Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6263956Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6264277Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6264780Z arg7_1 = rand_strided((1, 12, 112), (1344, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6265071Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6265080Z 2023-01-11T21:41:23.6265087Z 2023-01-11T21:41:23.6265251Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6265356Z import torch 2023-01-11T21:41:23.6265440Z import random 2023-01-11T21:41:23.6265596Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6265782Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6265789Z 2023-01-11T21:41:23.6265922Z aten = torch.ops.aten 2023-01-11T21:41:23.6266171Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6266318Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6266326Z 2023-01-11T21:41:23.6266331Z 2023-01-11T21:41:23.6266492Z async_compile.wait(globals()) 2023-01-11T21:41:23.6266619Z del async_compile 2023-01-11T21:41:23.6266630Z 2023-01-11T21:41:23.6266807Z def call(args): 2023-01-11T21:41:23.6266996Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6267117Z args.clear() 2023-01-11T21:41:23.6267353Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1,), (0,), (1,), False, (0,), 1) 2023-01-11T21:41:23.6267533Z assert_size_stride(buf0, (1, 32, 112), (3584, 112, 1)) 2023-01-11T21:41:23.6267650Z del arg0_1 2023-01-11T21:41:23.6267761Z del arg1_1 2023-01-11T21:41:23.6267861Z del arg7_1 2023-01-11T21:41:23.6267979Z return (buf0, ) 2023-01-11T21:41:23.6267988Z 2023-01-11T21:41:23.6267999Z 2023-01-11T21:41:23.6268129Z if __name__ == "__main__": 2023-01-11T21:41:23.6268308Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6268532Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6268907Z arg0_1 = rand_strided((32, 3, 1), (3, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6269267Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6269612Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6269939Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6270287Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6270627Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6270937Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6271325Z arg7_1 = rand_strided((1, 3, 112), (336, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6271595Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6271604Z 2023-01-11T21:41:23.6271609Z 
2023-01-11T21:41:23.6271785Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6271908Z import torch 2023-01-11T21:41:23.6272018Z import random 2023-01-11T21:41:23.6272245Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6272471Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6272479Z 2023-01-11T21:41:23.6272616Z aten = torch.ops.aten 2023-01-11T21:41:23.6272864Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6273030Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6273039Z 2023-01-11T21:41:23.6273046Z 2023-01-11T21:41:23.6273212Z async_compile.wait(globals()) 2023-01-11T21:41:23.6273339Z del async_compile 2023-01-11T21:41:23.6273347Z 2023-01-11T21:41:23.6273468Z def call(args): 2023-01-11T21:41:23.6273668Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6273856Z args.clear() 2023-01-11T21:41:23.6274104Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1,), (0,), (1,), False, (0,), 4) 2023-01-11T21:41:23.6274290Z assert_size_stride(buf0, (1, 128, 112), (14336, 112, 1)) 2023-01-11T21:41:23.6274489Z del arg0_1 2023-01-11T21:41:23.6274613Z del arg1_1 2023-01-11T21:41:23.6274732Z del arg7_1 2023-01-11T21:41:23.6274842Z return (buf0, ) 2023-01-11T21:41:23.6274850Z 2023-01-11T21:41:23.6274857Z 2023-01-11T21:41:23.6274993Z if __name__ == "__main__": 2023-01-11T21:41:23.6275185Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6275403Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6275782Z arg0_1 = rand_strided((128, 3, 1), (3, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6276136Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6276483Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6276824Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6277172Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6277591Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6277932Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6278330Z arg7_1 = rand_strided((1, 12, 112), (1344, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6278629Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6278638Z 2023-01-11T21:41:23.6279157Z [2023-01-11 21:25:37,337] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 79 2023-01-11T21:41:23.6280079Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6280301Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6280798Z [2023-01-11 21:25:37,446] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 80 2023-01-11T21:41:23.6281285Z [2023-01-11 21:25:37,468] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 80 2023-01-11T21:41:23.6281975Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6282170Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6282615Z [2023-01-11 21:25:37,579] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 81 2023-01-11T21:41:23.6283038Z [2023-01-11 21:25:37,601] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 81 2023-01-11T21:41:23.6283858Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6284089Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6284572Z [2023-01-11 21:25:37,711] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 82 2023-01-11T21:41:23.6285056Z [2023-01-11 21:25:37,732] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 82 2023-01-11T21:41:23.6285958Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6286234Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6286638Z [2023-01-11 21:25:37,843] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 83 2023-01-11T21:41:23.6287124Z [2023-01-11 21:25:37,864] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 83 2023-01-11T21:41:23.6287719Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6287944Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6288315Z [2023-01-11 21:25:37,976] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 84 2023-01-11T21:41:23.6288323Z 2023-01-11T21:41:23.6288455Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6288548Z import torch 2023-01-11T21:41:23.6288642Z import random 2023-01-11T21:41:23.6288797Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6288964Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6288971Z 2023-01-11T21:41:23.6289075Z aten = torch.ops.aten 2023-01-11T21:41:23.6289248Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6289380Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6289387Z 2023-01-11T21:41:23.6289392Z 2023-01-11T21:41:23.6289516Z async_compile.wait(globals()) 2023-01-11T21:41:23.6289620Z del async_compile 2023-01-11T21:41:23.6289627Z 2023-01-11T21:41:23.6289734Z def call(args): 2023-01-11T21:41:23.6289939Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6290039Z args.clear() 2023-01-11T21:41:23.6290211Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1,), (0,), (2,), False, (0,), 1) 2023-01-11T21:41:23.6290357Z assert_size_stride(buf0, (1, 32, 112), (3584, 112, 1)) 2023-01-11T21:41:23.6290451Z del arg0_1 2023-01-11T21:41:23.6290541Z del arg1_1 2023-01-11T21:41:23.6290629Z del arg7_1 2023-01-11T21:41:23.6290726Z return (buf0, ) 2023-01-11T21:41:23.6290734Z 2023-01-11T21:41:23.6290740Z 2023-01-11T21:41:23.6290852Z if __name__ == "__main__": 2023-01-11T21:41:23.6291051Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6291215Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6291603Z arg0_1 = rand_strided((32, 3, 1), (3, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6291944Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6292272Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6292605Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6292909Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6293234Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6293546Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6293901Z arg7_1 = rand_strided((1, 3, 112), (336, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6294185Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6294194Z 2023-01-11T21:41:23.6294203Z 2023-01-11T21:41:23.6294355Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6294461Z import torch 2023-01-11T21:41:23.6294576Z import random 2023-01-11T21:41:23.6294752Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6295036Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6295044Z 2023-01-11T21:41:23.6295167Z aten = torch.ops.aten 2023-01-11T21:41:23.6295372Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6295513Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6295520Z 2023-01-11T21:41:23.6295527Z 2023-01-11T21:41:23.6295668Z 
async_compile.wait(globals()) 2023-01-11T21:41:23.6295781Z del async_compile 2023-01-11T21:41:23.6295788Z 2023-01-11T21:41:23.6295905Z def call(args): 2023-01-11T21:41:23.6296091Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6296201Z args.clear() 2023-01-11T21:41:23.6296404Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1,), (0,), (2,), False, (0,), 4) 2023-01-11T21:41:23.6296562Z assert_size_stride(buf0, (1, 128, 112), (14336, 112, 1)) 2023-01-11T21:41:23.6296652Z del arg0_1 2023-01-11T21:41:23.6296754Z del arg1_1 2023-01-11T21:41:23.6296862Z del arg7_1 2023-01-11T21:41:23.6297039Z return (buf0, ) 2023-01-11T21:41:23.6297050Z 2023-01-11T21:41:23.6297057Z 2023-01-11T21:41:23.6297186Z if __name__ == "__main__": 2023-01-11T21:41:23.6297345Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6297500Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6297820Z arg0_1 = rand_strided((128, 3, 1), (3, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6298153Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6298489Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6298818Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6299149Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6299460Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6299773Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6300147Z arg7_1 = rand_strided((1, 12, 112), (1344, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6300435Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6300442Z 2023-01-11T21:41:23.6300447Z 2023-01-11T21:41:23.6300603Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6300716Z import torch 2023-01-11T21:41:23.6300838Z import random 2023-01-11T21:41:23.6301036Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6301260Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6301268Z 2023-01-11T21:41:23.6301401Z aten = torch.ops.aten 2023-01-11T21:41:23.6301636Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6301797Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6301805Z 2023-01-11T21:41:23.6301812Z 2023-01-11T21:41:23.6301971Z async_compile.wait(globals()) 2023-01-11T21:41:23.6302106Z del async_compile 2023-01-11T21:41:23.6302114Z 2023-01-11T21:41:23.6302237Z def call(args): 2023-01-11T21:41:23.6302567Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6302692Z args.clear() 2023-01-11T21:41:23.6302927Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1,), (0,), (1,), False, (0,), 1) 2023-01-11T21:41:23.6303091Z assert_size_stride(buf0, (1, 32, 110), (3520, 110, 1)) 2023-01-11T21:41:23.6303212Z del arg0_1 2023-01-11T21:41:23.6303313Z del arg1_1 2023-01-11T21:41:23.6303421Z del arg7_1 2023-01-11T21:41:23.6303537Z return (buf0, ) 2023-01-11T21:41:23.6303543Z 2023-01-11T21:41:23.6303548Z 2023-01-11T21:41:23.6303657Z if __name__ == "__main__": 2023-01-11T21:41:23.6303852Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6304077Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6304477Z arg0_1 = rand_strided((32, 3, 3), 
(9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6304952Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6305309Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6305634Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6305942Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6306285Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6306614Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6306985Z arg7_1 = rand_strided((1, 3, 112), (336, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6307268Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6307277Z 2023-01-11T21:41:23.6307284Z 2023-01-11T21:41:23.6307448Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6307643Z import torch 2023-01-11T21:41:23.6307772Z import random 2023-01-11T21:41:23.6307996Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6308236Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6308245Z 2023-01-11T21:41:23.6308383Z aten = torch.ops.aten 2023-01-11T21:41:23.6308628Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6308794Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6308802Z 2023-01-11T21:41:23.6308810Z 2023-01-11T21:41:23.6308971Z async_compile.wait(globals()) 2023-01-11T21:41:23.6309100Z del async_compile 2023-01-11T21:41:23.6309107Z 2023-01-11T21:41:23.6309230Z def call(args): 2023-01-11T21:41:23.6309439Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6309567Z args.clear() 2023-01-11T21:41:23.6309810Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1,), (0,), (1,), False, (0,), 4) 2023-01-11T21:41:23.6309990Z assert_size_stride(buf0, (1, 128, 110), (14080, 110, 1)) 2023-01-11T21:41:23.6310104Z del arg0_1 2023-01-11T21:41:23.6310219Z del arg1_1 2023-01-11T21:41:23.6310330Z del arg7_1 2023-01-11T21:41:23.6310451Z return (buf0, ) 2023-01-11T21:41:23.6310457Z 2023-01-11T21:41:23.6310463Z 2023-01-11T21:41:23.6310582Z if __name__ == "__main__": 2023-01-11T21:41:23.6310784Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6310993Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6311353Z arg0_1 = rand_strided((128, 3, 3), (9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6311720Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6312078Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6312439Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6312768Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6313121Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6313446Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6313907Z arg7_1 = rand_strided((1, 12, 112), (1344, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6314198Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6314207Z 2023-01-11T21:41:23.6314229Z 2023-01-11T21:41:23.6314381Z from ctypes 
import c_void_p, c_long 2023-01-11T21:41:23.6314507Z import torch 2023-01-11T21:41:23.6314633Z import random 2023-01-11T21:41:23.6314836Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6315067Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6315073Z 2023-01-11T21:41:23.6315188Z aten = torch.ops.aten 2023-01-11T21:41:23.6315399Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6315613Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6315622Z 2023-01-11T21:41:23.6315631Z 2023-01-11T21:41:23.6315791Z async_compile.wait(globals()) 2023-01-11T21:41:23.6315906Z del async_compile 2023-01-11T21:41:23.6315915Z 2023-01-11T21:41:23.6316043Z def call(args): 2023-01-11T21:41:23.6316215Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6316326Z args.clear() 2023-01-11T21:41:23.6316553Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1,), (0,), (2,), False, (0,), 1) 2023-01-11T21:41:23.6316740Z assert_size_stride(buf0, (1, 32, 108), (3456, 108, 1)) 2023-01-11T21:41:23.6316847Z del arg0_1 2023-01-11T21:41:23.6316955Z del arg1_1 2023-01-11T21:41:23.6317071Z del arg7_1 2023-01-11T21:41:23.6317187Z return (buf0, ) 2023-01-11T21:41:23.6317194Z 2023-01-11T21:41:23.6317200Z 2023-01-11T21:41:23.6317333Z if __name__ == "__main__": 2023-01-11T21:41:23.6317545Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6317816Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6318176Z arg0_1 = rand_strided((32, 3, 3), (9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6318536Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6318898Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6319223Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6319566Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6319904Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6320239Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6320622Z arg7_1 = rand_strided((1, 3, 112), (336, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6320902Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6320930Z 2023-01-11T21:41:23.6321414Z [2023-01-11 21:25:37,997] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 84 2023-01-11T21:41:23.6322312Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6322537Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6323027Z [2023-01-11 21:25:38,103] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 85 2023-01-11T21:41:23.6323538Z [2023-01-11 21:25:39,706] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 85 2023-01-11T21:41:23.6324440Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. 
It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6324664Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6325156Z [2023-01-11 21:25:39,857] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 86 2023-01-11T21:41:23.6325644Z [2023-01-11 21:25:39,879] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 86 2023-01-11T21:41:23.6326553Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6326842Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6327305Z [2023-01-11 21:25:39,996] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 87 2023-01-11T21:41:23.6327316Z 2023-01-11T21:41:23.6327477Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6327601Z import torch 2023-01-11T21:41:23.6327727Z import random 2023-01-11T21:41:23.6327947Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6328182Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6328191Z 2023-01-11T21:41:23.6328323Z aten = torch.ops.aten 2023-01-11T21:41:23.6328582Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6328726Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6328747Z 2023-01-11T21:41:23.6328754Z 2023-01-11T21:41:23.6328892Z async_compile.wait(globals()) 2023-01-11T21:41:23.6329065Z del async_compile 2023-01-11T21:41:23.6329075Z 2023-01-11T21:41:23.6329199Z def call(args): 2023-01-11T21:41:23.6329413Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6329531Z args.clear() 2023-01-11T21:41:23.6329777Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1,), (0,), (2,), False, (0,), 4) 2023-01-11T21:41:23.6329967Z assert_size_stride(buf0, (1, 128, 108), (13824, 108, 1)) 2023-01-11T21:41:23.6330071Z del arg0_1 2023-01-11T21:41:23.6330187Z del arg1_1 2023-01-11T21:41:23.6330303Z del arg7_1 2023-01-11T21:41:23.6330421Z return (buf0, ) 2023-01-11T21:41:23.6330429Z 2023-01-11T21:41:23.6330437Z 2023-01-11T21:41:23.6330563Z if __name__ == "__main__": 2023-01-11T21:41:23.6330763Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6330999Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6331385Z arg0_1 = rand_strided((128, 3, 3), (9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6331744Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6332093Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6332447Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6332792Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6333313Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6333648Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 
2023-01-11T21:41:23.6334013Z arg7_1 = rand_strided((1, 12, 112), (1344, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6334302Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6334310Z 2023-01-11T21:41:23.6334315Z 2023-01-11T21:41:23.6334471Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6334588Z import torch 2023-01-11T21:41:23.6334711Z import random 2023-01-11T21:41:23.6334906Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6335127Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6335136Z 2023-01-11T21:41:23.6335278Z aten = torch.ops.aten 2023-01-11T21:41:23.6335511Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6335660Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6335681Z 2023-01-11T21:41:23.6335689Z 2023-01-11T21:41:23.6335928Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6336340Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6336559Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6336722Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6336824Z { 2023-01-11T21:41:23.6336992Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6337196Z { 2023-01-11T21:41:23.6337335Z #pragma omp for collapse(2) 2023-01-11T21:41:23.6337480Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.6337586Z { 2023-01-11T21:41:23.6337745Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6337853Z { 2023-01-11T21:41:23.6337953Z { 2023-01-11T21:41:23.6338067Z { 2023-01-11T21:41:23.6338233Z auto tmp0 = in_ptr0[i1 + (12544*i0)]; 2023-01-11T21:41:23.6338392Z out_ptr0[i0 + (3*i1)] = tmp0; 2023-01-11T21:41:23.6338498Z } 2023-01-11T21:41:23.6338604Z } 2023-01-11T21:41:23.6338713Z } 2023-01-11T21:41:23.6338812Z } 2023-01-11T21:41:23.6338916Z } 2023-01-11T21:41:23.6339007Z } 2023-01-11T21:41:23.6339138Z ''') 2023-01-11T21:41:23.6339146Z 2023-01-11T21:41:23.6339152Z 2023-01-11T21:41:23.6339387Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.6339861Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6340074Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6340233Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6340321Z { 2023-01-11T21:41:23.6340454Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6340542Z { 2023-01-11T21:41:23.6340653Z #pragma omp for 2023-01-11T21:41:23.6340774Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.6340862Z { 2023-01-11T21:41:23.6340976Z #pragma GCC ivdep 2023-01-11T21:41:23.6341097Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6341174Z { 2023-01-11T21:41:23.6341270Z { 2023-01-11T21:41:23.6341364Z { 2023-01-11T21:41:23.6341507Z auto tmp0 = in_ptr0[i0 + (32*i1)]; 2023-01-11T21:41:23.6341648Z out_ptr0[i1 + (12544*i0)] = tmp0; 2023-01-11T21:41:23.6341748Z } 2023-01-11T21:41:23.6341835Z } 2023-01-11T21:41:23.6341903Z } 2023-01-11T21:41:23.6341987Z } 2023-01-11T21:41:23.6342077Z } 2023-01-11T21:41:23.6342158Z } 2023-01-11T21:41:23.6342267Z ''') 2023-01-11T21:41:23.6342275Z 2023-01-11T21:41:23.6342280Z 2023-01-11T21:41:23.6342538Z async_compile.wait(globals()) 2023-01-11T21:41:23.6342633Z del async_compile 2023-01-11T21:41:23.6342654Z 2023-01-11T21:41:23.6342735Z def call(args): 2023-01-11T21:41:23.6342905Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 
2023-01-11T21:41:23.6343006Z args.clear() 2023-01-11T21:41:23.6343344Z buf0 = empty_strided((1, 3, 112, 112), (37632, 1, 336, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6343576Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6343676Z del arg7_1 2023-01-11T21:41:23.6344084Z buf1 = torch.ops.mkldnn._convolution_pointwise(buf0, arg0_1, arg1_1, (0, 0), (1, 1), (1, 1), 1, 'none', [], '') 2023-01-11T21:41:23.6344239Z assert_size_stride(buf1, (1, 32, 112, 112), (401408, 1, 3584, 32)) 2023-01-11T21:41:23.6344342Z del arg0_1 2023-01-11T21:41:23.6344449Z del arg1_1 2023-01-11T21:41:23.6344545Z del buf0 2023-01-11T21:41:23.6344912Z buf2 = empty_strided((1, 32, 112, 112), (401408, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6345126Z kernel_cpp_1(c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.6345250Z return (buf2, ) 2023-01-11T21:41:23.6345257Z 2023-01-11T21:41:23.6345264Z 2023-01-11T21:41:23.6345394Z if __name__ == "__main__": 2023-01-11T21:41:23.6345555Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6345741Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6346057Z arg0_1 = rand_strided((32, 3, 1, 1), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6346358Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6346764Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6347078Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6347385Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6347673Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6347935Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6348306Z arg7_1 = rand_strided((1, 3, 112, 112), (37632, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6348554Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6348562Z 2023-01-11T21:41:23.6348571Z 2023-01-11T21:41:23.6348719Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6348826Z import torch 2023-01-11T21:41:23.6348939Z import random 2023-01-11T21:41:23.6349204Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6349399Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6349406Z 2023-01-11T21:41:23.6349511Z aten = torch.ops.aten 2023-01-11T21:41:23.6349721Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6349856Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6349863Z 2023-01-11T21:41:23.6349870Z 2023-01-11T21:41:23.6350007Z async_compile.wait(globals()) 2023-01-11T21:41:23.6350108Z del async_compile 2023-01-11T21:41:23.6350115Z 2023-01-11T21:41:23.6350224Z def call(args): 2023-01-11T21:41:23.6350415Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6350538Z args.clear() 2023-01-11T21:41:23.6350987Z buf0 = torch.ops.mkldnn._convolution_pointwise(arg7_1, arg0_1, arg1_1, (0, 0), (1, 1), (1, 1), 1, 'none', [], '') 2023-01-11T21:41:23.6351158Z assert_size_stride(buf0, (1, 32, 112, 112), (401408, 1, 3584, 32)) 2023-01-11T21:41:23.6351271Z del arg0_1 2023-01-11T21:41:23.6351371Z del arg1_1 2023-01-11T21:41:23.6351472Z del arg7_1 2023-01-11T21:41:23.6351575Z return (buf0, ) 
2023-01-11T21:41:23.6351583Z 2023-01-11T21:41:23.6351590Z 2023-01-11T21:41:23.6351704Z if __name__ == "__main__": 2023-01-11T21:41:23.6351862Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6352053Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6352385Z arg0_1 = rand_strided((32, 3, 1, 1), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6352734Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6353087Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6353428Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6353830Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6354178Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6354478Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6354871Z arg7_1 = rand_strided((1, 3, 112, 112), (37632, 1, 336, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6355150Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6355159Z 2023-01-11T21:41:23.6355654Z [2023-01-11 21:25:41,625] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 87 2023-01-11T21:41:23.6356510Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6356836Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6357315Z [2023-01-11 21:25:41,774] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 88 2023-01-11T21:41:23.6357742Z [2023-01-11 21:25:41,797] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 88 2023-01-11T21:41:23.6358356Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6358536Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6358905Z [2023-01-11 21:25:41,937] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 89 2023-01-11T21:41:23.6358918Z 2023-01-11T21:41:23.6359109Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6359191Z import torch 2023-01-11T21:41:23.6359284Z import random 2023-01-11T21:41:23.6359441Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6359607Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6359613Z 2023-01-11T21:41:23.6359718Z aten = torch.ops.aten 2023-01-11T21:41:23.6359945Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6360080Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6360087Z 2023-01-11T21:41:23.6360093Z 2023-01-11T21:41:23.6360279Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6360651Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6360867Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6361049Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6361152Z { 2023-01-11T21:41:23.6361348Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6361450Z { 2023-01-11T21:41:23.6361573Z #pragma omp for 2023-01-11T21:41:23.6361698Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.6361801Z { 2023-01-11T21:41:23.6361944Z #pragma GCC ivdep 2023-01-11T21:41:23.6362091Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6362203Z { 2023-01-11T21:41:23.6362307Z { 2023-01-11T21:41:23.6362398Z { 2023-01-11T21:41:23.6362581Z auto tmp0 = in_ptr0[i1 + (12544*i0)]; 2023-01-11T21:41:23.6362742Z out_ptr0[i0 + (12*i1)] = tmp0; 2023-01-11T21:41:23.6362858Z } 2023-01-11T21:41:23.6362973Z } 2023-01-11T21:41:23.6363082Z } 2023-01-11T21:41:23.6363192Z } 2023-01-11T21:41:23.6363285Z } 2023-01-11T21:41:23.6363395Z } 2023-01-11T21:41:23.6363532Z ''') 2023-01-11T21:41:23.6363540Z 2023-01-11T21:41:23.6363553Z 2023-01-11T21:41:23.6363808Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.6364222Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6364446Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6364622Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6364712Z { 2023-01-11T21:41:23.6364874Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6364972Z { 2023-01-11T21:41:23.6365106Z #pragma omp for 2023-01-11T21:41:23.6365252Z for(long i0=0; i0<128; i0+=1) 2023-01-11T21:41:23.6365360Z { 2023-01-11T21:41:23.6365506Z #pragma GCC ivdep 2023-01-11T21:41:23.6365641Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6365747Z { 2023-01-11T21:41:23.6365852Z { 2023-01-11T21:41:23.6365951Z { 2023-01-11T21:41:23.6366116Z auto tmp0 = in_ptr0[i0 + (128*i1)]; 2023-01-11T21:41:23.6366401Z out_ptr0[i1 + (12544*i0)] = tmp0; 2023-01-11T21:41:23.6366517Z } 2023-01-11T21:41:23.6366614Z } 2023-01-11T21:41:23.6366721Z } 2023-01-11T21:41:23.6366813Z } 2023-01-11T21:41:23.6366894Z } 2023-01-11T21:41:23.6366978Z } 2023-01-11T21:41:23.6367090Z ''') 2023-01-11T21:41:23.6367097Z 2023-01-11T21:41:23.6367103Z 2023-01-11T21:41:23.6367217Z async_compile.wait(globals()) 2023-01-11T21:41:23.6367299Z del async_compile 2023-01-11T21:41:23.6367307Z 2023-01-11T21:41:23.6367402Z def call(args): 
2023-01-11T21:41:23.6367566Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6367663Z args.clear() 2023-01-11T21:41:23.6367989Z buf0 = empty_strided((1, 12, 112, 112), (150528, 1, 1344, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6368178Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6368276Z del arg7_1 2023-01-11T21:41:23.6368702Z buf1 = torch.ops.mkldnn._convolution_pointwise(buf0, arg0_1, arg1_1, (0, 0), (1, 1), (1, 1), 4, 'none', [], '') 2023-01-11T21:41:23.6368870Z assert_size_stride(buf1, (1, 128, 112, 112), (1605632, 1, 14336, 128)) 2023-01-11T21:41:23.6368957Z del arg0_1 2023-01-11T21:41:23.6369047Z del arg1_1 2023-01-11T21:41:23.6369128Z del buf0 2023-01-11T21:41:23.6369460Z buf2 = empty_strided((1, 128, 112, 112), (1605632, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6369651Z kernel_cpp_1(c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.6369749Z return (buf2, ) 2023-01-11T21:41:23.6369756Z 2023-01-11T21:41:23.6369762Z 2023-01-11T21:41:23.6369860Z if __name__ == "__main__": 2023-01-11T21:41:23.6370029Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6370191Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6370507Z arg0_1 = rand_strided((128, 3, 1, 1), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6370795Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6371078Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6371343Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6371616Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6371881Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6372143Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6372463Z arg7_1 = rand_strided((1, 12, 112, 112), (150528, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6372683Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6372690Z 2023-01-11T21:41:23.6372695Z 2023-01-11T21:41:23.6372833Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6372928Z import torch 2023-01-11T21:41:23.6373028Z import random 2023-01-11T21:41:23.6373188Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6373339Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6373359Z 2023-01-11T21:41:23.6373454Z aten = torch.ops.aten 2023-01-11T21:41:23.6373635Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6373763Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6373769Z 2023-01-11T21:41:23.6373775Z 2023-01-11T21:41:23.6373890Z async_compile.wait(globals()) 2023-01-11T21:41:23.6373981Z del async_compile 2023-01-11T21:41:23.6373987Z 2023-01-11T21:41:23.6374075Z def call(args): 2023-01-11T21:41:23.6374229Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6374307Z args.clear() 2023-01-11T21:41:23.6374677Z buf0 = torch.ops.mkldnn._convolution_pointwise(arg7_1, arg0_1, arg1_1, (0, 0), (1, 1), (1, 1), 4, 'none', [], '') 2023-01-11T21:41:23.6374920Z assert_size_stride(buf0, (1, 128, 112, 112), (1605632, 1, 14336, 128)) 2023-01-11T21:41:23.6375015Z del arg0_1 
2023-01-11T21:41:23.6375105Z del arg1_1 2023-01-11T21:41:23.6375196Z del arg7_1 2023-01-11T21:41:23.6375294Z return (buf0, ) 2023-01-11T21:41:23.6375301Z 2023-01-11T21:41:23.6375306Z 2023-01-11T21:41:23.6375407Z if __name__ == "__main__": 2023-01-11T21:41:23.6375557Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6375730Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6376041Z arg0_1 = rand_strided((128, 3, 1, 1), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6376332Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6376616Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6376898Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6377256Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6377533Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6377762Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6378070Z arg7_1 = rand_strided((1, 12, 112, 112), (150528, 1, 1344, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6378277Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6378284Z 2023-01-11T21:41:23.6378658Z [2023-01-11 21:25:41,970] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 89 2023-01-11T21:41:23.6379298Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6379475Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6379864Z [2023-01-11 21:25:42,081] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 90 2023-01-11T21:41:23.6380260Z [2023-01-11 21:25:42,103] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 90 2023-01-11T21:41:23.6380897Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6381072Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6381445Z [2023-01-11 21:25:42,227] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 91 2023-01-11T21:41:23.6381809Z [2023-01-11 21:25:42,260] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 91 2023-01-11T21:41:23.6382538Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6382716Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6383109Z [2023-01-11 21:25:42,411] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 92 2023-01-11T21:41:23.6383117Z 2023-01-11T21:41:23.6383257Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6383358Z import torch 2023-01-11T21:41:23.6383563Z import random 2023-01-11T21:41:23.6383728Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6383893Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6383899Z 2023-01-11T21:41:23.6383991Z aten = torch.ops.aten 2023-01-11T21:41:23.6384173Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6384297Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6384303Z 2023-01-11T21:41:23.6384308Z 2023-01-11T21:41:23.6384506Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6384791Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6384962Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6385103Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6385192Z { 2023-01-11T21:41:23.6385328Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6385417Z { 2023-01-11T21:41:23.6385550Z #pragma omp for collapse(2) 2023-01-11T21:41:23.6385758Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.6385844Z { 2023-01-11T21:41:23.6385966Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6386054Z { 2023-01-11T21:41:23.6386124Z { 2023-01-11T21:41:23.6386214Z { 2023-01-11T21:41:23.6386364Z auto tmp0 = in_ptr0[i1 + (12544*i0)]; 2023-01-11T21:41:23.6386493Z out_ptr0[i0 + (3*i1)] = tmp0; 2023-01-11T21:41:23.6386582Z } 2023-01-11T21:41:23.6386672Z } 2023-01-11T21:41:23.6386745Z } 2023-01-11T21:41:23.6386835Z } 2023-01-11T21:41:23.6386925Z } 2023-01-11T21:41:23.6387013Z } 2023-01-11T21:41:23.6387133Z ''') 2023-01-11T21:41:23.6387141Z 2023-01-11T21:41:23.6387147Z 2023-01-11T21:41:23.6387341Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.6387645Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6387824Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6387953Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6388037Z { 2023-01-11T21:41:23.6388180Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6388267Z { 2023-01-11T21:41:23.6388372Z #pragma omp for 2023-01-11T21:41:23.6388484Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.6388555Z { 2023-01-11T21:41:23.6388665Z #pragma GCC ivdep 2023-01-11T21:41:23.6388789Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6388872Z { 2023-01-11T21:41:23.6388960Z { 2023-01-11T21:41:23.6389050Z { 2023-01-11T21:41:23.6389187Z auto tmp0 = in_ptr0[i0 + (32*i1)]; 2023-01-11T21:41:23.6389304Z out_ptr0[i1 + (12544*i0)] = tmp0; 2023-01-11T21:41:23.6389399Z } 2023-01-11T21:41:23.6389488Z } 2023-01-11T21:41:23.6389574Z } 2023-01-11T21:41:23.6389656Z } 2023-01-11T21:41:23.6389737Z } 2023-01-11T21:41:23.6389817Z } 2023-01-11T21:41:23.6389925Z ''') 2023-01-11T21:41:23.6389933Z 2023-01-11T21:41:23.6389939Z 2023-01-11T21:41:23.6390067Z async_compile.wait(globals()) 2023-01-11T21:41:23.6390170Z del async_compile 2023-01-11T21:41:23.6390177Z 2023-01-11T21:41:23.6390269Z def call(args): 2023-01-11T21:41:23.6390434Z arg0_1, 
arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6390532Z args.clear() 2023-01-11T21:41:23.6390851Z buf0 = empty_strided((1, 3, 112, 112), (37632, 1, 336, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6391040Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6391123Z del arg7_1 2023-01-11T21:41:23.6391497Z buf1 = torch.ops.mkldnn._convolution_pointwise(buf0, arg0_1, arg1_1, (0, 0), (1, 1), (2, 2), 1, 'none', [], '') 2023-01-11T21:41:23.6391661Z assert_size_stride(buf1, (1, 32, 112, 112), (401408, 1, 3584, 32)) 2023-01-11T21:41:23.6391832Z del arg0_1 2023-01-11T21:41:23.6391920Z del arg1_1 2023-01-11T21:41:23.6392010Z del buf0 2023-01-11T21:41:23.6392335Z buf2 = empty_strided((1, 32, 112, 112), (401408, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6392509Z kernel_cpp_1(c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.6392614Z return (buf2, ) 2023-01-11T21:41:23.6392622Z 2023-01-11T21:41:23.6392628Z 2023-01-11T21:41:23.6392740Z if __name__ == "__main__": 2023-01-11T21:41:23.6392905Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6393081Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6393371Z arg0_1 = rand_strided((32, 3, 1, 1), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6393637Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6394054Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6394325Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6394603Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6394882Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6395144Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6395475Z arg7_1 = rand_strided((1, 3, 112, 112), (37632, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6395706Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6395715Z 2023-01-11T21:41:23.6395721Z 2023-01-11T21:41:23.6395856Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6395958Z import torch 2023-01-11T21:41:23.6396050Z import random 2023-01-11T21:41:23.6396227Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6396420Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6396427Z 2023-01-11T21:41:23.6396542Z aten = torch.ops.aten 2023-01-11T21:41:23.6396746Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6396886Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6396894Z 2023-01-11T21:41:23.6396900Z 2023-01-11T21:41:23.6397036Z async_compile.wait(globals()) 2023-01-11T21:41:23.6397136Z del async_compile 2023-01-11T21:41:23.6397143Z 2023-01-11T21:41:23.6397237Z def call(args): 2023-01-11T21:41:23.6397411Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6397515Z args.clear() 2023-01-11T21:41:23.6397904Z buf0 = torch.ops.mkldnn._convolution_pointwise(arg7_1, arg0_1, arg1_1, (0, 0), (1, 1), (2, 2), 1, 'none', [], '') 2023-01-11T21:41:23.6398066Z assert_size_stride(buf0, (1, 32, 112, 112), (401408, 1, 3584, 32)) 2023-01-11T21:41:23.6398162Z del arg0_1 2023-01-11T21:41:23.6398264Z del arg1_1 
2023-01-11T21:41:23.6398355Z del arg7_1 2023-01-11T21:41:23.6398440Z return (buf0, ) 2023-01-11T21:41:23.6398447Z 2023-01-11T21:41:23.6398454Z 2023-01-11T21:41:23.6398559Z if __name__ == "__main__": 2023-01-11T21:41:23.6398727Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6398909Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6399223Z arg0_1 = rand_strided((32, 3, 1, 1), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6399509Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6399773Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6400045Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6400314Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6400580Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6400948Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6401255Z arg7_1 = rand_strided((1, 3, 112, 112), (37632, 1, 336, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6401475Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6401483Z 2023-01-11T21:41:23.6401489Z 2023-01-11T21:41:23.6401626Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6401724Z import torch 2023-01-11T21:41:23.6401821Z import random 2023-01-11T21:41:23.6401969Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6402143Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6402150Z 2023-01-11T21:41:23.6402268Z aten = torch.ops.aten 2023-01-11T21:41:23.6402451Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6402575Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6402583Z 2023-01-11T21:41:23.6402593Z 2023-01-11T21:41:23.6402857Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6403143Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6403299Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6403413Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6403492Z { 2023-01-11T21:41:23.6403622Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6403704Z { 2023-01-11T21:41:23.6403807Z #pragma omp for 2023-01-11T21:41:23.6403916Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.6403996Z { 2023-01-11T21:41:23.6404087Z #pragma GCC ivdep 2023-01-11T21:41:23.6404203Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6404290Z { 2023-01-11T21:41:23.6404376Z { 2023-01-11T21:41:23.6404461Z { 2023-01-11T21:41:23.6404600Z auto tmp0 = in_ptr0[i1 + (12544*i0)]; 2023-01-11T21:41:23.6404723Z out_ptr0[i0 + (12*i1)] = tmp0; 2023-01-11T21:41:23.6404817Z } 2023-01-11T21:41:23.6404906Z } 2023-01-11T21:41:23.6404996Z } 2023-01-11T21:41:23.6405087Z } 2023-01-11T21:41:23.6405173Z } 2023-01-11T21:41:23.6405259Z } 2023-01-11T21:41:23.6405365Z ''') 2023-01-11T21:41:23.6405372Z 2023-01-11T21:41:23.6405393Z 2023-01-11T21:41:23.6405581Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.6405878Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6406047Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6406183Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6406270Z { 
2023-01-11T21:41:23.6406407Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6406492Z { 2023-01-11T21:41:23.6406584Z #pragma omp for 2023-01-11T21:41:23.6406695Z for(long i0=0; i0<128; i0+=1) 2023-01-11T21:41:23.6406793Z { 2023-01-11T21:41:23.6406908Z #pragma GCC ivdep 2023-01-11T21:41:23.6407038Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6407128Z { 2023-01-11T21:41:23.6407206Z { 2023-01-11T21:41:23.6407297Z { 2023-01-11T21:41:23.6407445Z auto tmp0 = in_ptr0[i0 + (128*i1)]; 2023-01-11T21:41:23.6407584Z out_ptr0[i1 + (12544*i0)] = tmp0; 2023-01-11T21:41:23.6407679Z } 2023-01-11T21:41:23.6407773Z } 2023-01-11T21:41:23.6407865Z } 2023-01-11T21:41:23.6407941Z } 2023-01-11T21:41:23.6408024Z } 2023-01-11T21:41:23.6408110Z } 2023-01-11T21:41:23.6408237Z ''') 2023-01-11T21:41:23.6408244Z 2023-01-11T21:41:23.6408250Z 2023-01-11T21:41:23.6408378Z async_compile.wait(globals()) 2023-01-11T21:41:23.6408484Z del async_compile 2023-01-11T21:41:23.6408492Z 2023-01-11T21:41:23.6408589Z def call(args): 2023-01-11T21:41:23.6408841Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6408943Z args.clear() 2023-01-11T21:41:23.6409296Z buf0 = empty_strided((1, 12, 112, 112), (150528, 1, 1344, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6409476Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6409564Z del arg7_1 2023-01-11T21:41:23.6409944Z buf1 = torch.ops.mkldnn._convolution_pointwise(buf0, arg0_1, arg1_1, (0, 0), (1, 1), (2, 2), 4, 'none', [], '') 2023-01-11T21:41:23.6410098Z assert_size_stride(buf1, (1, 128, 112, 112), (1605632, 1, 14336, 128)) 2023-01-11T21:41:23.6410194Z del arg0_1 2023-01-11T21:41:23.6410267Z del arg1_1 2023-01-11T21:41:23.6410355Z del buf0 2023-01-11T21:41:23.6410689Z buf2 = empty_strided((1, 128, 112, 112), (1605632, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6410879Z kernel_cpp_1(c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.6411058Z return (buf2, ) 2023-01-11T21:41:23.6411068Z 2023-01-11T21:41:23.6411074Z 2023-01-11T21:41:23.6411186Z if __name__ == "__main__": 2023-01-11T21:41:23.6411349Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6411529Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6411826Z arg0_1 = rand_strided((128, 3, 1, 1), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6412107Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6412379Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6412642Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6412897Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6413152Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6413404Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6413717Z arg7_1 = rand_strided((1, 12, 112, 112), (150528, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6413915Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6413923Z 2023-01-11T21:41:23.6414296Z [2023-01-11 21:25:42,435] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 92 2023-01-11T21:41:23.6414947Z 
/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6415132Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6415506Z [2023-01-11 21:25:42,591] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 93 2023-01-11T21:41:23.6415890Z [2023-01-11 21:25:44,158] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 93 2023-01-11T21:41:23.6416459Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6416625Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6416986Z [2023-01-11 21:25:44,269] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 94 2023-01-11T21:41:23.6417345Z [2023-01-11 21:25:44,291] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 94 2023-01-11T21:41:23.6418014Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6418182Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6418518Z [2023-01-11 21:25:44,408] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 95 2023-01-11T21:41:23.6418540Z 2023-01-11T21:41:23.6418650Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6418747Z import torch 2023-01-11T21:41:23.6418845Z import random 2023-01-11T21:41:23.6418993Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6419151Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6419159Z 2023-01-11T21:41:23.6419257Z aten = torch.ops.aten 2023-01-11T21:41:23.6419505Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6419613Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6419619Z 2023-01-11T21:41:23.6419624Z 2023-01-11T21:41:23.6419737Z async_compile.wait(globals()) 2023-01-11T21:41:23.6419830Z del async_compile 2023-01-11T21:41:23.6419837Z 2023-01-11T21:41:23.6419924Z def call(args): 2023-01-11T21:41:23.6420078Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6420177Z args.clear() 2023-01-11T21:41:23.6420573Z buf0 = torch.ops.mkldnn._convolution_pointwise(arg7_1, arg0_1, arg1_1, (0, 0), (1, 1), (2, 2), 4, 'none', [], '') 2023-01-11T21:41:23.6420731Z assert_size_stride(buf0, (1, 128, 112, 112), (1605632, 1, 14336, 128)) 2023-01-11T21:41:23.6420812Z del arg0_1 2023-01-11T21:41:23.6420907Z del arg1_1 2023-01-11T21:41:23.6421005Z del arg7_1 2023-01-11T21:41:23.6421108Z return (buf0, ) 2023-01-11T21:41:23.6421115Z 2023-01-11T21:41:23.6421125Z 2023-01-11T21:41:23.6421233Z if __name__ == "__main__": 2023-01-11T21:41:23.6421391Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6421564Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6421883Z arg0_1 = rand_strided((128, 3, 1, 1), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6422168Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6422612Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6422905Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6423180Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6423458Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6423709Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6424031Z arg7_1 = rand_strided((1, 12, 112, 112), (150528, 1, 1344, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6424245Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6424267Z 2023-01-11T21:41:23.6424274Z 2023-01-11T21:41:23.6424396Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6424490Z import torch 2023-01-11T21:41:23.6424597Z import random 2023-01-11T21:41:23.6424760Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6424933Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6424940Z 2023-01-11T21:41:23.6425051Z aten = torch.ops.aten 2023-01-11T21:41:23.6425245Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6425359Z async_compile = AsyncCompile() 
2023-01-11T21:41:23.6425379Z 2023-01-11T21:41:23.6425385Z 2023-01-11T21:41:23.6425577Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6425858Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6426138Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6426272Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6426356Z { 2023-01-11T21:41:23.6426491Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6426576Z { 2023-01-11T21:41:23.6426683Z #pragma omp for collapse(2) 2023-01-11T21:41:23.6426793Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.6426875Z { 2023-01-11T21:41:23.6426995Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6427083Z { 2023-01-11T21:41:23.6427171Z { 2023-01-11T21:41:23.6427261Z { 2023-01-11T21:41:23.6427391Z auto tmp0 = in_ptr0[i1 + (12544*i0)]; 2023-01-11T21:41:23.6427518Z out_ptr0[i0 + (3*i1)] = tmp0; 2023-01-11T21:41:23.6427607Z } 2023-01-11T21:41:23.6427694Z } 2023-01-11T21:41:23.6427858Z } 2023-01-11T21:41:23.6427945Z } 2023-01-11T21:41:23.6428010Z } 2023-01-11T21:41:23.6428088Z } 2023-01-11T21:41:23.6428205Z ''') 2023-01-11T21:41:23.6428213Z 2023-01-11T21:41:23.6428218Z 2023-01-11T21:41:23.6428420Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.6428734Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6428903Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6429043Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6429124Z { 2023-01-11T21:41:23.6429241Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6429322Z { 2023-01-11T21:41:23.6429426Z #pragma omp for 2023-01-11T21:41:23.6429537Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.6429620Z { 2023-01-11T21:41:23.6429730Z #pragma GCC ivdep 2023-01-11T21:41:23.6429849Z for(long i1=0; i1<12100; i1+=1) 2023-01-11T21:41:23.6429935Z { 2023-01-11T21:41:23.6430021Z { 2023-01-11T21:41:23.6430110Z { 2023-01-11T21:41:23.6430252Z auto tmp0 = in_ptr0[i0 + (32*i1)]; 2023-01-11T21:41:23.6430383Z out_ptr0[i1 + (12100*i0)] = tmp0; 2023-01-11T21:41:23.6430474Z } 2023-01-11T21:41:23.6430553Z } 2023-01-11T21:41:23.6430636Z } 2023-01-11T21:41:23.6430728Z } 2023-01-11T21:41:23.6430811Z } 2023-01-11T21:41:23.6430892Z } 2023-01-11T21:41:23.6431015Z ''') 2023-01-11T21:41:23.6431024Z 2023-01-11T21:41:23.6431030Z 2023-01-11T21:41:23.6431164Z async_compile.wait(globals()) 2023-01-11T21:41:23.6431252Z del async_compile 2023-01-11T21:41:23.6431273Z 2023-01-11T21:41:23.6431356Z def call(args): 2023-01-11T21:41:23.6431522Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6431622Z args.clear() 2023-01-11T21:41:23.6431946Z buf0 = empty_strided((1, 3, 112, 112), (37632, 1, 336, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6432133Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6432225Z del arg7_1 2023-01-11T21:41:23.6432581Z buf1 = torch.ops.mkldnn._convolution_pointwise(buf0, arg0_1, arg1_1, (0, 0), (1, 1), (1, 1), 1, 'none', [], '') 2023-01-11T21:41:23.6432719Z assert_size_stride(buf1, (1, 32, 110, 110), (387200, 1, 3520, 32)) 2023-01-11T21:41:23.6432809Z del arg0_1 2023-01-11T21:41:23.6432897Z del arg1_1 2023-01-11T21:41:23.6432988Z del buf0 2023-01-11T21:41:23.6433314Z buf2 = empty_strided((1, 32, 110, 110), (387200, 12100, 110, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6433493Z 
kernel_cpp_1(c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.6433594Z return (buf2, ) 2023-01-11T21:41:23.6433601Z 2023-01-11T21:41:23.6433606Z 2023-01-11T21:41:23.6433703Z if __name__ == "__main__": 2023-01-11T21:41:23.6434003Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6434167Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6434477Z arg0_1 = rand_strided((32, 3, 3, 3), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6434747Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6435009Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6435266Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6435523Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6435787Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6436056Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6436384Z arg7_1 = rand_strided((1, 3, 112, 112), (37632, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6436681Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6436691Z 2023-01-11T21:41:23.6436698Z 2023-01-11T21:41:23.6436836Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6436937Z import torch 2023-01-11T21:41:23.6437039Z import random 2023-01-11T21:41:23.6437197Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6437362Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6437368Z 2023-01-11T21:41:23.6437456Z aten = torch.ops.aten 2023-01-11T21:41:23.6437639Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6437760Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6437767Z 2023-01-11T21:41:23.6437772Z 2023-01-11T21:41:23.6437888Z async_compile.wait(globals()) 2023-01-11T21:41:23.6437983Z del async_compile 2023-01-11T21:41:23.6437990Z 2023-01-11T21:41:23.6438082Z def call(args): 2023-01-11T21:41:23.6438251Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6438351Z args.clear() 2023-01-11T21:41:23.6438724Z buf0 = torch.ops.mkldnn._convolution_pointwise(arg7_1, arg0_1, arg1_1, (0, 0), (1, 1), (1, 1), 1, 'none', [], '') 2023-01-11T21:41:23.6438874Z assert_size_stride(buf0, (1, 32, 110, 110), (387200, 1, 3520, 32)) 2023-01-11T21:41:23.6438970Z del arg0_1 2023-01-11T21:41:23.6439060Z del arg1_1 2023-01-11T21:41:23.6439146Z del arg7_1 2023-01-11T21:41:23.6439239Z return (buf0, ) 2023-01-11T21:41:23.6439247Z 2023-01-11T21:41:23.6439253Z 2023-01-11T21:41:23.6439352Z if __name__ == "__main__": 2023-01-11T21:41:23.6439498Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6439678Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6439987Z arg0_1 = rand_strided((32, 3, 3, 3), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6440262Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6440543Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6440817Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6441092Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6441375Z 
arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6441617Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6441922Z arg7_1 = rand_strided((1, 3, 112, 112), (37632, 1, 336, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6442136Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6442144Z 2023-01-11T21:41:23.6442533Z [2023-01-11 21:25:46,088] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 95 2023-01-11T21:41:23.6443203Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6443478Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6443877Z [2023-01-11 21:25:46,239] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 96 2023-01-11T21:41:23.6444273Z [2023-01-11 21:25:46,262] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 96 2023-01-11T21:41:23.6445010Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6445261Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6445670Z [2023-01-11 21:25:46,402] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 97 2023-01-11T21:41:23.6445679Z 2023-01-11T21:41:23.6445850Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6445939Z import torch 2023-01-11T21:41:23.6446079Z import random 2023-01-11T21:41:23.6446296Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6446712Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6446720Z 2023-01-11T21:41:23.6446869Z aten = torch.ops.aten 2023-01-11T21:41:23.6447048Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6447219Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6447227Z 2023-01-11T21:41:23.6447233Z 2023-01-11T21:41:23.6447475Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6447809Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6448023Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6448203Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6448335Z { 2023-01-11T21:41:23.6448586Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6448660Z { 2023-01-11T21:41:23.6448820Z #pragma omp for 2023-01-11T21:41:23.6448973Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.6449098Z { 2023-01-11T21:41:23.6449248Z #pragma GCC ivdep 2023-01-11T21:41:23.6449411Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6449482Z { 2023-01-11T21:41:23.6449610Z { 2023-01-11T21:41:23.6449784Z { 2023-01-11T21:41:23.6450018Z auto tmp0 = in_ptr0[i1 + (12544*i0)]; 2023-01-11T21:41:23.6450192Z out_ptr0[i0 + (12*i1)] = tmp0; 
2023-01-11T21:41:23.6450332Z } 2023-01-11T21:41:23.6450456Z } 2023-01-11T21:41:23.6450528Z } 2023-01-11T21:41:23.6450649Z } 2023-01-11T21:41:23.6450769Z } 2023-01-11T21:41:23.6450887Z } 2023-01-11T21:41:23.6451059Z ''') 2023-01-11T21:41:23.6451067Z 2023-01-11T21:41:23.6451073Z 2023-01-11T21:41:23.6451353Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.6451699Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6451904Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6452024Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6452153Z { 2023-01-11T21:41:23.6452336Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6452458Z { 2023-01-11T21:41:23.6452613Z #pragma omp for 2023-01-11T21:41:23.6452807Z for(long i0=0; i0<128; i0+=1) 2023-01-11T21:41:23.6452883Z { 2023-01-11T21:41:23.6453080Z #pragma GCC ivdep 2023-01-11T21:41:23.6453318Z for(long i1=0; i1<12100; i1+=1) 2023-01-11T21:41:23.6453450Z { 2023-01-11T21:41:23.6453572Z { 2023-01-11T21:41:23.6453704Z { 2023-01-11T21:41:23.6453897Z auto tmp0 = in_ptr0[i0 + (128*i1)]; 2023-01-11T21:41:23.6454024Z out_ptr0[i1 + (12100*i0)] = tmp0; 2023-01-11T21:41:23.6454154Z } 2023-01-11T21:41:23.6454284Z } 2023-01-11T21:41:23.6454452Z } 2023-01-11T21:41:23.6454577Z } 2023-01-11T21:41:23.6454699Z } 2023-01-11T21:41:23.6454819Z } 2023-01-11T21:41:23.6454936Z ''') 2023-01-11T21:41:23.6454945Z 2023-01-11T21:41:23.6454951Z 2023-01-11T21:41:23.6455131Z async_compile.wait(globals()) 2023-01-11T21:41:23.6455272Z del async_compile 2023-01-11T21:41:23.6455279Z 2023-01-11T21:41:23.6455454Z def call(args): 2023-01-11T21:41:23.6455659Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6455901Z args.clear() 2023-01-11T21:41:23.6456290Z buf0 = empty_strided((1, 12, 112, 112), (150528, 1, 1344, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6456470Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6456613Z del arg7_1 2023-01-11T21:41:23.6457029Z buf1 = torch.ops.mkldnn._convolution_pointwise(buf0, arg0_1, arg1_1, (0, 0), (1, 1), (1, 1), 4, 'none', [], '') 2023-01-11T21:41:23.6457230Z assert_size_stride(buf1, (1, 128, 110, 110), (1548800, 1, 14080, 128)) 2023-01-11T21:41:23.6457364Z del arg0_1 2023-01-11T21:41:23.6457503Z del arg1_1 2023-01-11T21:41:23.6457633Z del buf0 2023-01-11T21:41:23.6458063Z buf2 = empty_strided((1, 128, 110, 110), (1548800, 12100, 110, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6458254Z kernel_cpp_1(c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.6458405Z return (buf2, ) 2023-01-11T21:41:23.6458420Z 2023-01-11T21:41:23.6458431Z 2023-01-11T21:41:23.6458584Z if __name__ == "__main__": 2023-01-11T21:41:23.6458802Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6459034Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6459451Z arg0_1 = rand_strided((128, 3, 3, 3), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6459787Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6460097Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6460361Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6460726Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6461035Z arg5_1 = rand_strided((128, 
), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6461326Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6461718Z arg7_1 = rand_strided((1, 12, 112, 112), (150528, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6461983Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6461992Z 2023-01-11T21:41:23.6461998Z 2023-01-11T21:41:23.6462164Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6462301Z import torch 2023-01-11T21:41:23.6462534Z import random 2023-01-11T21:41:23.6462757Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6463023Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6463031Z 2023-01-11T21:41:23.6463177Z aten = torch.ops.aten 2023-01-11T21:41:23.6463407Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6463571Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6463578Z 2023-01-11T21:41:23.6463584Z 2023-01-11T21:41:23.6463780Z async_compile.wait(globals()) 2023-01-11T21:41:23.6463917Z del async_compile 2023-01-11T21:41:23.6464029Z 2023-01-11T21:41:23.6464117Z def call(args): 2023-01-11T21:41:23.6464335Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6464470Z args.clear() 2023-01-11T21:41:23.6464940Z buf0 = torch.ops.mkldnn._convolution_pointwise(arg7_1, arg0_1, arg1_1, (0, 0), (1, 1), (1, 1), 4, 'none', [], '') 2023-01-11T21:41:23.6465129Z assert_size_stride(buf0, (1, 128, 110, 110), (1548800, 1, 14080, 128)) 2023-01-11T21:41:23.6465253Z del arg0_1 2023-01-11T21:41:23.6465381Z del arg1_1 2023-01-11T21:41:23.6465459Z del arg7_1 2023-01-11T21:41:23.6465594Z return (buf0, ) 2023-01-11T21:41:23.6465601Z 2023-01-11T21:41:23.6465608Z 2023-01-11T21:41:23.6465760Z if __name__ == "__main__": 2023-01-11T21:41:23.6465959Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6466164Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6466576Z arg0_1 = rand_strided((128, 3, 3, 3), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6467004Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6467323Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6467582Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6467947Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6468249Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6468541Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6468886Z arg7_1 = rand_strided((1, 12, 112, 112), (150528, 1, 1344, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6469134Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6469142Z 2023-01-11T21:41:23.6469623Z [2023-01-11 21:25:48,026] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 97 2023-01-11T21:41:23.6470271Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6470490Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6470909Z [2023-01-11 21:25:48,140] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 98 2023-01-11T21:41:23.6471268Z [2023-01-11 21:25:48,163] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 98 2023-01-11T21:41:23.6471925Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6472142Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6472564Z [2023-01-11 21:25:48,283] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 99 2023-01-11T21:41:23.6472992Z [2023-01-11 21:25:49,900] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 99 2023-01-11T21:41:23.6473714Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6474024Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6474524Z [2023-01-11 21:25:50,033] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 100 2023-01-11T21:41:23.6474534Z 2023-01-11T21:41:23.6474701Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6474833Z import torch 2023-01-11T21:41:23.6474960Z import random 2023-01-11T21:41:23.6475114Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6475325Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6475334Z 2023-01-11T21:41:23.6475478Z aten = torch.ops.aten 2023-01-11T21:41:23.6475767Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6475936Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6475943Z 2023-01-11T21:41:23.6475949Z 2023-01-11T21:41:23.6476190Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6476507Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6476786Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6476910Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6477031Z { 2023-01-11T21:41:23.6477210Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6477333Z { 2023-01-11T21:41:23.6477532Z #pragma omp for collapse(2) 2023-01-11T21:41:23.6477716Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.6477840Z { 2023-01-11T21:41:23.6477942Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6478066Z { 2023-01-11T21:41:23.6478187Z { 2023-01-11T21:41:23.6478325Z { 2023-01-11T21:41:23.6478501Z auto tmp0 = in_ptr0[i1 + (12544*i0)]; 2023-01-11T21:41:23.6478663Z out_ptr0[i0 + (3*i1)] = tmp0; 2023-01-11T21:41:23.6478823Z } 2023-01-11T21:41:23.6478899Z } 2023-01-11T21:41:23.6479024Z } 2023-01-11T21:41:23.6479147Z } 2023-01-11T21:41:23.6479276Z } 
2023-01-11T21:41:23.6479412Z } 2023-01-11T21:41:23.6479568Z ''') 2023-01-11T21:41:23.6479576Z 2023-01-11T21:41:23.6479582Z 2023-01-11T21:41:23.6479807Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.6480078Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6480273Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6480522Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6480638Z { 2023-01-11T21:41:23.6480809Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6480938Z { 2023-01-11T21:41:23.6481076Z #pragma omp for 2023-01-11T21:41:23.6481174Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.6481298Z { 2023-01-11T21:41:23.6481441Z #pragma GCC ivdep 2023-01-11T21:41:23.6481593Z for(long i1=0; i1<11664; i1+=1) 2023-01-11T21:41:23.6481743Z { 2023-01-11T21:41:23.6481860Z { 2023-01-11T21:41:23.6481944Z { 2023-01-11T21:41:23.6482126Z auto tmp0 = in_ptr0[i0 + (32*i1)]; 2023-01-11T21:41:23.6482291Z out_ptr0[i1 + (11664*i0)] = tmp0; 2023-01-11T21:41:23.6482418Z } 2023-01-11T21:41:23.6482539Z } 2023-01-11T21:41:23.6482658Z } 2023-01-11T21:41:23.6482778Z } 2023-01-11T21:41:23.6482845Z } 2023-01-11T21:41:23.6483031Z } 2023-01-11T21:41:23.6483196Z ''') 2023-01-11T21:41:23.6483203Z 2023-01-11T21:41:23.6483209Z 2023-01-11T21:41:23.6483364Z async_compile.wait(globals()) 2023-01-11T21:41:23.6483498Z del async_compile 2023-01-11T21:41:23.6483505Z 2023-01-11T21:41:23.6483635Z def call(args): 2023-01-11T21:41:23.6483835Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6483920Z args.clear() 2023-01-11T21:41:23.6484267Z buf0 = empty_strided((1, 3, 112, 112), (37632, 1, 336, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6484555Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6484730Z del arg7_1 2023-01-11T21:41:23.6485125Z buf1 = torch.ops.mkldnn._convolution_pointwise(buf0, arg0_1, arg1_1, (0, 0), (1, 1), (2, 2), 1, 'none', [], '') 2023-01-11T21:41:23.6485318Z assert_size_stride(buf1, (1, 32, 108, 108), (373248, 1, 3456, 32)) 2023-01-11T21:41:23.6485448Z del arg0_1 2023-01-11T21:41:23.6485574Z del arg1_1 2023-01-11T21:41:23.6485645Z del buf0 2023-01-11T21:41:23.6486001Z buf2 = empty_strided((1, 32, 108, 108), (373248, 11664, 108, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6486214Z kernel_cpp_1(c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.6486391Z return (buf2, ) 2023-01-11T21:41:23.6486399Z 2023-01-11T21:41:23.6486405Z 2023-01-11T21:41:23.6486575Z if __name__ == "__main__": 2023-01-11T21:41:23.6486765Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6487040Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6487374Z arg0_1 = rand_strided((32, 3, 3, 3), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6487635Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6487940Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6488250Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6488559Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6488853Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6489190Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6489526Z arg7_1 = 
rand_strided((1, 3, 112, 112), (37632, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6489815Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6489829Z 2023-01-11T21:41:23.6489835Z 2023-01-11T21:41:23.6490007Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6490093Z import torch 2023-01-11T21:41:23.6490239Z import random 2023-01-11T21:41:23.6490445Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6490701Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6490709Z 2023-01-11T21:41:23.6490868Z aten = torch.ops.aten 2023-01-11T21:41:23.6491153Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6491317Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6491325Z 2023-01-11T21:41:23.6491331Z 2023-01-11T21:41:23.6491438Z async_compile.wait(globals()) 2023-01-11T21:41:23.6491570Z del async_compile 2023-01-11T21:41:23.6491577Z 2023-01-11T21:41:23.6491713Z def call(args): 2023-01-11T21:41:23.6491913Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6591092Z args.clear() 2023-01-11T21:41:23.6591718Z buf0 = torch.ops.mkldnn._convolution_pointwise(arg7_1, arg0_1, arg1_1, (0, 0), (1, 1), (2, 2), 1, 'none', [], '') 2023-01-11T21:41:23.6591879Z assert_size_stride(buf0, (1, 32, 108, 108), (373248, 1, 3456, 32)) 2023-01-11T21:41:23.6591972Z del arg0_1 2023-01-11T21:41:23.6592058Z del arg1_1 2023-01-11T21:41:23.6592134Z del arg7_1 2023-01-11T21:41:23.6592227Z return (buf0, ) 2023-01-11T21:41:23.6592235Z 2023-01-11T21:41:23.6592241Z 2023-01-11T21:41:23.6592337Z if __name__ == "__main__": 2023-01-11T21:41:23.6592492Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6592657Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6592948Z arg0_1 = rand_strided((32, 3, 3, 3), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6593249Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6593527Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6594131Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6594395Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6594667Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6594924Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6595242Z arg7_1 = rand_strided((1, 3, 112, 112), (37632, 1, 336, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6595464Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6595474Z 2023-01-11T21:41:23.6595481Z 2023-01-11T21:41:23.6595617Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6595716Z import torch 2023-01-11T21:41:23.6595808Z import random 2023-01-11T21:41:23.6595951Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6596216Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6596228Z 2023-01-11T21:41:23.6596368Z aten = torch.ops.aten 2023-01-11T21:41:23.6596586Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6596699Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6596706Z 2023-01-11T21:41:23.6596713Z 2023-01-11T21:41:23.6596937Z kernel_cpp_0 = 
async_compile.cpp(''' 2023-01-11T21:41:23.6597232Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6597400Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6597553Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6597635Z { 2023-01-11T21:41:23.6597769Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6597855Z { 2023-01-11T21:41:23.6597964Z #pragma omp for 2023-01-11T21:41:23.6598071Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.6598144Z { 2023-01-11T21:41:23.6598254Z #pragma GCC ivdep 2023-01-11T21:41:23.6598378Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6598466Z { 2023-01-11T21:41:23.6598577Z { 2023-01-11T21:41:23.6598661Z { 2023-01-11T21:41:23.6598793Z auto tmp0 = in_ptr0[i1 + (12544*i0)]; 2023-01-11T21:41:23.6598924Z out_ptr0[i0 + (12*i1)] = tmp0; 2023-01-11T21:41:23.6599011Z } 2023-01-11T21:41:23.6599095Z } 2023-01-11T21:41:23.6599175Z } 2023-01-11T21:41:23.6599251Z } 2023-01-11T21:41:23.6599323Z } 2023-01-11T21:41:23.6599397Z } 2023-01-11T21:41:23.6599507Z ''') 2023-01-11T21:41:23.6599514Z 2023-01-11T21:41:23.6599519Z 2023-01-11T21:41:23.6599696Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.6599983Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6600155Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6600299Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6600374Z { 2023-01-11T21:41:23.6600512Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6600588Z { 2023-01-11T21:41:23.6600688Z #pragma omp for 2023-01-11T21:41:23.6600819Z for(long i0=0; i0<128; i0+=1) 2023-01-11T21:41:23.6600896Z { 2023-01-11T21:41:23.6600993Z #pragma GCC ivdep 2023-01-11T21:41:23.6601100Z for(long i1=0; i1<11664; i1+=1) 2023-01-11T21:41:23.6601177Z { 2023-01-11T21:41:23.6601248Z { 2023-01-11T21:41:23.6601329Z { 2023-01-11T21:41:23.6601458Z auto tmp0 = in_ptr0[i0 + (128*i1)]; 2023-01-11T21:41:23.6601580Z out_ptr0[i1 + (11664*i0)] = tmp0; 2023-01-11T21:41:23.6601661Z } 2023-01-11T21:41:23.6601743Z } 2023-01-11T21:41:23.6601822Z } 2023-01-11T21:41:23.6601890Z } 2023-01-11T21:41:23.6602052Z } 2023-01-11T21:41:23.6602129Z } 2023-01-11T21:41:23.6602240Z ''') 2023-01-11T21:41:23.6602247Z 2023-01-11T21:41:23.6602252Z 2023-01-11T21:41:23.6602367Z async_compile.wait(globals()) 2023-01-11T21:41:23.6602454Z del async_compile 2023-01-11T21:41:23.6602460Z 2023-01-11T21:41:23.6602556Z def call(args): 2023-01-11T21:41:23.6602719Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6602819Z args.clear() 2023-01-11T21:41:23.6603143Z buf0 = empty_strided((1, 12, 112, 112), (150528, 1, 1344, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6603319Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6603404Z del arg7_1 2023-01-11T21:41:23.6603739Z buf1 = torch.ops.mkldnn._convolution_pointwise(buf0, arg0_1, arg1_1, (0, 0), (1, 1), (2, 2), 4, 'none', [], '') 2023-01-11T21:41:23.6603890Z assert_size_stride(buf1, (1, 128, 108, 108), (1492992, 1, 13824, 128)) 2023-01-11T21:41:23.6603977Z del arg0_1 2023-01-11T21:41:23.6604108Z del arg1_1 2023-01-11T21:41:23.6604189Z del buf0 2023-01-11T21:41:23.6604503Z buf2 = empty_strided((1, 128, 108, 108), (1492992, 11664, 108, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6604671Z kernel_cpp_1(c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 
2023-01-11T21:41:23.6604760Z return (buf2, ) 2023-01-11T21:41:23.6604767Z 2023-01-11T21:41:23.6604773Z 2023-01-11T21:41:23.6604867Z if __name__ == "__main__": 2023-01-11T21:41:23.6605019Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6605173Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6605486Z arg0_1 = rand_strided((128, 3, 3, 3), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6605783Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6606040Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6606306Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6606562Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6606820Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6607062Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6607352Z arg7_1 = rand_strided((1, 12, 112, 112), (150528, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6607557Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6607564Z 2023-01-11T21:41:23.6607925Z [2023-01-11 21:25:50,056] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 100 2023-01-11T21:41:23.6608525Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6608713Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6609069Z [2023-01-11 21:25:50,182] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 101 2023-01-11T21:41:23.6609429Z [2023-01-11 21:25:50,214] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 101 2023-01-11T21:41:23.6610004Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6610177Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6610602Z [2023-01-11 21:25:50,325] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 102 2023-01-11T21:41:23.6610958Z [2023-01-11 21:25:50,348] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 102 2023-01-11T21:41:23.6611531Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6611691Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6612027Z [2023-01-11 21:25:50,466] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 103 2023-01-11T21:41:23.6612034Z 2023-01-11T21:41:23.6612172Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6612346Z import torch 2023-01-11T21:41:23.6612444Z import random 2023-01-11T21:41:23.6612590Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6612750Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6612757Z 2023-01-11T21:41:23.6612856Z aten = torch.ops.aten 2023-01-11T21:41:23.6613029Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6613133Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6613139Z 2023-01-11T21:41:23.6613144Z 2023-01-11T21:41:23.6613253Z async_compile.wait(globals()) 2023-01-11T21:41:23.6613348Z del async_compile 2023-01-11T21:41:23.6613354Z 2023-01-11T21:41:23.6613440Z def call(args): 2023-01-11T21:41:23.6613598Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6613690Z args.clear() 2023-01-11T21:41:23.6614036Z buf0 = torch.ops.mkldnn._convolution_pointwise(arg7_1, arg0_1, arg1_1, (0, 0), (1, 1), (2, 2), 4, 'none', [], '') 2023-01-11T21:41:23.6614191Z assert_size_stride(buf0, (1, 128, 108, 108), (1492992, 1, 13824, 128)) 2023-01-11T21:41:23.6614266Z del arg0_1 2023-01-11T21:41:23.6614347Z del arg1_1 2023-01-11T21:41:23.6614433Z del arg7_1 2023-01-11T21:41:23.6614520Z return (buf0, ) 2023-01-11T21:41:23.6614526Z 2023-01-11T21:41:23.6614531Z 2023-01-11T21:41:23.6614626Z if __name__ == "__main__": 2023-01-11T21:41:23.6614771Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6614929Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6615200Z arg0_1 = rand_strided((128, 3, 3, 3), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6615461Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6615725Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6615977Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6616250Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6616498Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6616733Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6617027Z arg7_1 = rand_strided((1, 12, 112, 112), (150528, 1, 1344, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6617217Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6617231Z 2023-01-11T21:41:23.6617237Z 2023-01-11T21:41:23.6617345Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6617435Z import torch 2023-01-11T21:41:23.6617523Z import random 2023-01-11T21:41:23.6617672Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6617825Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6617831Z 2023-01-11T21:41:23.6617926Z aten = torch.ops.aten 2023-01-11T21:41:23.6618177Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6618286Z async_compile = AsyncCompile() 
2023-01-11T21:41:23.6618293Z 2023-01-11T21:41:23.6618309Z 2023-01-11T21:41:23.6618478Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6618747Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6618900Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6619028Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6619105Z { 2023-01-11T21:41:23.6639978Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6640127Z { 2023-01-11T21:41:23.6640252Z #pragma omp for collapse(2) 2023-01-11T21:41:23.6640348Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.6640422Z { 2023-01-11T21:41:23.6640527Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6640598Z { 2023-01-11T21:41:23.6640673Z { 2023-01-11T21:41:23.6640863Z { 2023-01-11T21:41:23.6640995Z auto tmp0 = in_ptr0[i1 + (12544*i0)]; 2023-01-11T21:41:23.6641107Z out_ptr0[i0 + (3*i1)] = tmp0; 2023-01-11T21:41:23.6641182Z } 2023-01-11T21:41:23.6641254Z } 2023-01-11T21:41:23.6641323Z } 2023-01-11T21:41:23.6641394Z } 2023-01-11T21:41:23.6641457Z } 2023-01-11T21:41:23.6641522Z } 2023-01-11T21:41:23.6641650Z ''') 2023-01-11T21:41:23.6641659Z 2023-01-11T21:41:23.6641664Z 2023-01-11T21:41:23.6641849Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.6642121Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6642267Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6642386Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6642453Z { 2023-01-11T21:41:23.6642565Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6642639Z { 2023-01-11T21:41:23.6642728Z #pragma omp for 2023-01-11T21:41:23.6642824Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.6642893Z { 2023-01-11T21:41:23.6642984Z #pragma GCC ivdep 2023-01-11T21:41:23.6643082Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6643155Z { 2023-01-11T21:41:23.6643230Z { 2023-01-11T21:41:23.6643304Z { 2023-01-11T21:41:23.6643425Z auto tmp0 = in_ptr0[i0 + (32*i1)]; 2023-01-11T21:41:23.6643539Z out_ptr0[i1 + (12544*i0)] = tmp0; 2023-01-11T21:41:23.6643612Z } 2023-01-11T21:41:23.6643680Z } 2023-01-11T21:41:23.6643751Z } 2023-01-11T21:41:23.6643822Z } 2023-01-11T21:41:23.6643890Z } 2023-01-11T21:41:23.6643956Z } 2023-01-11T21:41:23.6644052Z ''') 2023-01-11T21:41:23.6644060Z 2023-01-11T21:41:23.6644065Z 2023-01-11T21:41:23.6644176Z async_compile.wait(globals()) 2023-01-11T21:41:23.6644251Z del async_compile 2023-01-11T21:41:23.6644257Z 2023-01-11T21:41:23.6644336Z def call(args): 2023-01-11T21:41:23.6644480Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6644560Z args.clear() 2023-01-11T21:41:23.6644854Z buf0 = empty_strided((1, 3, 112, 112), (37632, 1, 336, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6645016Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6645092Z del arg7_1 2023-01-11T21:41:23.6645425Z buf1 = torch.ops.mkldnn._convolution_pointwise(buf0, arg0_1, arg1_1, (0, 0), (1, 1), (1, 1), 1, 'none', [], '') 2023-01-11T21:41:23.6645552Z assert_size_stride(buf1, (1, 32, 112, 112), (401408, 1, 3584, 32)) 2023-01-11T21:41:23.6645627Z del arg0_1 2023-01-11T21:41:23.6645700Z del arg1_1 2023-01-11T21:41:23.6645773Z del buf0 2023-01-11T21:41:23.6646064Z buf2 = empty_strided((1, 32, 112, 112), (401408, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6646322Z 
kernel_cpp_1(c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.6646405Z return (buf2, ) 2023-01-11T21:41:23.6646412Z 2023-01-11T21:41:23.6646417Z 2023-01-11T21:41:23.6646503Z if __name__ == "__main__": 2023-01-11T21:41:23.6646636Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6646783Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6647056Z arg0_1 = rand_strided((32, 3, 1, 1), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6647301Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6647542Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6647775Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6648009Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6648328Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6648563Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6648850Z arg7_1 = rand_strided((1, 3, 112, 112), (37632, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6649042Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6649048Z 2023-01-11T21:41:23.6649054Z 2023-01-11T21:41:23.6649164Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6649244Z import torch 2023-01-11T21:41:23.6649323Z import random 2023-01-11T21:41:23.6649461Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6649602Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6649614Z 2023-01-11T21:41:23.6649695Z aten = torch.ops.aten 2023-01-11T21:41:23.6649858Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6649974Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6649980Z 2023-01-11T21:41:23.6649986Z 2023-01-11T21:41:23.6650090Z async_compile.wait(globals()) 2023-01-11T21:41:23.6650173Z del async_compile 2023-01-11T21:41:23.6650180Z 2023-01-11T21:41:23.6650258Z def call(args): 2023-01-11T21:41:23.6650397Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6650470Z args.clear() 2023-01-11T21:41:23.6650797Z buf0 = torch.ops.mkldnn._convolution_pointwise(arg7_1, arg0_1, arg1_1, (0, 0), (1, 1), (1, 1), 1, 'none', [], '') 2023-01-11T21:41:23.6650928Z assert_size_stride(buf0, (1, 32, 112, 112), (401408, 1, 3584, 32)) 2023-01-11T21:41:23.6651009Z del arg0_1 2023-01-11T21:41:23.6651085Z del arg1_1 2023-01-11T21:41:23.6651158Z del arg7_1 2023-01-11T21:41:23.6651238Z return (buf0, ) 2023-01-11T21:41:23.6651244Z 2023-01-11T21:41:23.6651249Z 2023-01-11T21:41:23.6651332Z if __name__ == "__main__": 2023-01-11T21:41:23.6651469Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6651619Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6651887Z arg0_1 = rand_strided((32, 3, 1, 1), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6652133Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6652372Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6652601Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6652842Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6653078Z 
arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6653302Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6653586Z arg7_1 = rand_strided((1, 3, 112, 112), (37632, 1, 336, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6653785Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6653862Z 2023-01-11T21:41:23.6654221Z [2023-01-11 21:25:50,498] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 103 2023-01-11T21:41:23.6654787Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6654940Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6655276Z [2023-01-11 21:25:50,634] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 104 2023-01-11T21:41:23.6655623Z [2023-01-11 21:25:50,657] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 104 2023-01-11T21:41:23.6656258Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6656417Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6656754Z [2023-01-11 21:25:50,784] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 105 2023-01-11T21:41:23.6656762Z 2023-01-11T21:41:23.6656866Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6656945Z import torch 2023-01-11T21:41:23.6657023Z import random 2023-01-11T21:41:23.6657160Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6657302Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6657308Z 2023-01-11T21:41:23.6657405Z aten = torch.ops.aten 2023-01-11T21:41:23.6657564Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6657670Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6657677Z 2023-01-11T21:41:23.6657682Z 2023-01-11T21:41:23.6657846Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6658108Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6658255Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6658371Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6658441Z { 2023-01-11T21:41:23.6658556Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6658626Z { 2023-01-11T21:41:23.6658708Z #pragma omp for 2023-01-11T21:41:23.6658802Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.6658874Z { 2023-01-11T21:41:23.6658965Z #pragma GCC ivdep 2023-01-11T21:41:23.6659068Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6659145Z { 2023-01-11T21:41:23.6659217Z { 2023-01-11T21:41:23.6659285Z { 2023-01-11T21:41:23.6659406Z auto tmp0 = in_ptr0[i1 + (12544*i0)]; 2023-01-11T21:41:23.6659516Z out_ptr0[i0 + (12*i1)] = tmp0; 
2023-01-11T21:41:23.6659590Z } 2023-01-11T21:41:23.6659668Z } 2023-01-11T21:41:23.6659742Z } 2023-01-11T21:41:23.6659810Z } 2023-01-11T21:41:23.6659873Z } 2023-01-11T21:41:23.6659943Z } 2023-01-11T21:41:23.6660043Z ''') 2023-01-11T21:41:23.6660050Z 2023-01-11T21:41:23.6660056Z 2023-01-11T21:41:23.6660219Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.6660478Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6660621Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6660740Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6660865Z { 2023-01-11T21:41:23.6660980Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6661051Z { 2023-01-11T21:41:23.6661138Z #pragma omp for 2023-01-11T21:41:23.6661230Z for(long i0=0; i0<128; i0+=1) 2023-01-11T21:41:23.6661298Z { 2023-01-11T21:41:23.6661388Z #pragma GCC ivdep 2023-01-11T21:41:23.6661487Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6661563Z { 2023-01-11T21:41:23.6661648Z { 2023-01-11T21:41:23.6661731Z { 2023-01-11T21:41:23.6661846Z auto tmp0 = in_ptr0[i0 + (128*i1)]; 2023-01-11T21:41:23.6661970Z out_ptr0[i1 + (12544*i0)] = tmp0; 2023-01-11T21:41:23.6662055Z } 2023-01-11T21:41:23.6662135Z } 2023-01-11T21:41:23.6662212Z } 2023-01-11T21:41:23.6662285Z } 2023-01-11T21:41:23.6662593Z } 2023-01-11T21:41:23.6662659Z } 2023-01-11T21:41:23.6662778Z ''') 2023-01-11T21:41:23.6662866Z 2023-01-11T21:41:23.6662873Z 2023-01-11T21:41:23.6662990Z async_compile.wait(globals()) 2023-01-11T21:41:23.6663083Z del async_compile 2023-01-11T21:41:23.6663089Z 2023-01-11T21:41:23.6663175Z def call(args): 2023-01-11T21:41:23.6663325Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6663412Z args.clear() 2023-01-11T21:41:23.6663701Z buf0 = empty_strided((1, 12, 112, 112), (150528, 1, 1344, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6663871Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6663956Z del arg7_1 2023-01-11T21:41:23.6664288Z buf1 = torch.ops.mkldnn._convolution_pointwise(buf0, arg0_1, arg1_1, (0, 0), (1, 1), (1, 1), 4, 'none', [], '') 2023-01-11T21:41:23.6664427Z assert_size_stride(buf1, (1, 128, 112, 112), (1605632, 1, 14336, 128)) 2023-01-11T21:41:23.6664515Z del arg0_1 2023-01-11T21:41:23.6664597Z del arg1_1 2023-01-11T21:41:23.6664683Z del buf0 2023-01-11T21:41:23.6664975Z buf2 = empty_strided((1, 128, 112, 112), (1605632, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6665144Z kernel_cpp_1(c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.6665234Z return (buf2, ) 2023-01-11T21:41:23.6665240Z 2023-01-11T21:41:23.6665245Z 2023-01-11T21:41:23.6665339Z if __name__ == "__main__": 2023-01-11T21:41:23.6665481Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6665641Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6665918Z arg0_1 = rand_strided((128, 3, 1, 1), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6666171Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6666417Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6666665Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6666919Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6667166Z arg5_1 = rand_strided((128, 
), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6667402Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6667700Z arg7_1 = rand_strided((1, 12, 112, 112), (150528, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6667898Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6667906Z 2023-01-11T21:41:23.6667910Z 2023-01-11T21:41:23.6668028Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6668099Z import torch 2023-01-11T21:41:23.6668189Z import random 2023-01-11T21:41:23.6668331Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6668481Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6668487Z 2023-01-11T21:41:23.6668581Z aten = torch.ops.aten 2023-01-11T21:41:23.6668842Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6668953Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6668959Z 2023-01-11T21:41:23.6668964Z 2023-01-11T21:41:23.6669075Z async_compile.wait(globals()) 2023-01-11T21:41:23.6669152Z del async_compile 2023-01-11T21:41:23.6669158Z 2023-01-11T21:41:23.6669242Z def call(args): 2023-01-11T21:41:23.6669391Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6669476Z args.clear() 2023-01-11T21:41:23.6669812Z buf0 = torch.ops.mkldnn._convolution_pointwise(arg7_1, arg0_1, arg1_1, (0, 0), (1, 1), (1, 1), 4, 'none', [], '') 2023-01-11T21:41:23.6669956Z assert_size_stride(buf0, (1, 128, 112, 112), (1605632, 1, 14336, 128)) 2023-01-11T21:41:23.6670039Z del arg0_1 2023-01-11T21:41:23.6670118Z del arg1_1 2023-01-11T21:41:23.6670188Z del arg7_1 2023-01-11T21:41:23.6670276Z return (buf0, ) 2023-01-11T21:41:23.6670282Z 2023-01-11T21:41:23.6670290Z 2023-01-11T21:41:23.6670473Z if __name__ == "__main__": 2023-01-11T21:41:23.6670620Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6670775Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6671054Z arg0_1 = rand_strided((128, 3, 1, 1), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6671307Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6671545Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6671794Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6672160Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6672513Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6672851Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6673233Z arg7_1 = rand_strided((1, 12, 112, 112), (150528, 1, 1344, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6673533Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6673542Z 2023-01-11T21:41:23.6674133Z [2023-01-11 21:25:50,817] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 105 2023-01-11T21:41:23.6675018Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6675251Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6675747Z [2023-01-11 21:25:50,927] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 106 2023-01-11T21:41:23.6676263Z [2023-01-11 21:25:50,956] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 106 2023-01-11T21:41:23.6677143Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6677382Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6677878Z [2023-01-11 21:25:51,097] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 107 2023-01-11T21:41:23.6678416Z [2023-01-11 21:25:51,129] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 107 2023-01-11T21:41:23.6679294Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6679617Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6680123Z [2023-01-11 21:25:51,252] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 108 2023-01-11T21:41:23.6680134Z 2023-01-11T21:41:23.6680319Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6680462Z import torch 2023-01-11T21:41:23.6680589Z import random 2023-01-11T21:41:23.6680812Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6681042Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6681051Z 2023-01-11T21:41:23.6681191Z aten = torch.ops.aten 2023-01-11T21:41:23.6681443Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6681684Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6681693Z 2023-01-11T21:41:23.6681700Z 2023-01-11T21:41:23.6681962Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6682360Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6682580Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6682762Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6682881Z { 2023-01-11T21:41:23.6683055Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6683163Z { 2023-01-11T21:41:23.6683310Z #pragma omp for collapse(2) 2023-01-11T21:41:23.6683465Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.6683558Z { 2023-01-11T21:41:23.6683722Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6683830Z { 2023-01-11T21:41:23.6683952Z { 2023-01-11T21:41:23.6684072Z { 2023-01-11T21:41:23.6684260Z auto tmp0 = in_ptr0[i1 + (12544*i0)]; 2023-01-11T21:41:23.6684417Z out_ptr0[i0 + (3*i1)] = tmp0; 2023-01-11T21:41:23.6684529Z } 2023-01-11T21:41:23.6684657Z } 2023-01-11T21:41:23.6684776Z } 2023-01-11T21:41:23.6684896Z } 2023-01-11T21:41:23.6685012Z } 
2023-01-11T21:41:23.6685124Z } 2023-01-11T21:41:23.6685261Z ''') 2023-01-11T21:41:23.6685270Z 2023-01-11T21:41:23.6685290Z 2023-01-11T21:41:23.6685530Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.6685922Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6686156Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6686345Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6686464Z { 2023-01-11T21:41:23.6686655Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6686774Z { 2023-01-11T21:41:23.6686909Z #pragma omp for 2023-01-11T21:41:23.6687058Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.6687174Z { 2023-01-11T21:41:23.6687329Z #pragma GCC ivdep 2023-01-11T21:41:23.6687492Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6687619Z { 2023-01-11T21:41:23.6687727Z { 2023-01-11T21:41:23.6687853Z { 2023-01-11T21:41:23.6688042Z auto tmp0 = in_ptr0[i0 + (32*i1)]; 2023-01-11T21:41:23.6688218Z out_ptr0[i1 + (12544*i0)] = tmp0; 2023-01-11T21:41:23.6688347Z } 2023-01-11T21:41:23.6688473Z } 2023-01-11T21:41:23.6688590Z } 2023-01-11T21:41:23.6688665Z } 2023-01-11T21:41:23.6688743Z } 2023-01-11T21:41:23.6688824Z } 2023-01-11T21:41:23.6688936Z ''') 2023-01-11T21:41:23.6688945Z 2023-01-11T21:41:23.6688951Z 2023-01-11T21:41:23.6689074Z async_compile.wait(globals()) 2023-01-11T21:41:23.6689177Z del async_compile 2023-01-11T21:41:23.6689261Z 2023-01-11T21:41:23.6689365Z def call(args): 2023-01-11T21:41:23.6689500Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6689627Z args.clear() 2023-01-11T21:41:23.6690053Z buf0 = empty_strided((1, 3, 112, 112), (37632, 1, 336, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6690239Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6690372Z del arg7_1 2023-01-11T21:41:23.6690849Z buf1 = torch.ops.mkldnn._convolution_pointwise(buf0, arg0_1, arg1_1, (0, 0), (1, 1), (2, 2), 1, 'none', [], '') 2023-01-11T21:41:23.6691056Z assert_size_stride(buf1, (1, 32, 112, 112), (401408, 1, 3584, 32)) 2023-01-11T21:41:23.6691191Z del arg0_1 2023-01-11T21:41:23.6691313Z del arg1_1 2023-01-11T21:41:23.6691441Z del buf0 2023-01-11T21:41:23.6691867Z buf2 = empty_strided((1, 32, 112, 112), (401408, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6692188Z kernel_cpp_1(c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.6692336Z return (buf2, ) 2023-01-11T21:41:23.6692344Z 2023-01-11T21:41:23.6692351Z 2023-01-11T21:41:23.6692488Z if __name__ == "__main__": 2023-01-11T21:41:23.6692707Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6692944Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6693327Z arg0_1 = rand_strided((32, 3, 1, 1), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6693694Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6694056Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6694398Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6694767Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6695100Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6695442Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6695858Z arg7_1 = 
rand_strided((1, 3, 112, 112), (37632, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6696141Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6696150Z 2023-01-11T21:41:23.6696170Z 2023-01-11T21:41:23.6696346Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6696482Z import torch 2023-01-11T21:41:23.6696625Z import random 2023-01-11T21:41:23.6696852Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6697093Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6697101Z 2023-01-11T21:41:23.6697256Z aten = torch.ops.aten 2023-01-11T21:41:23.6697520Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6697692Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6697700Z 2023-01-11T21:41:23.6697713Z 2023-01-11T21:41:23.6697896Z async_compile.wait(globals()) 2023-01-11T21:41:23.6698042Z del async_compile 2023-01-11T21:41:23.6698050Z 2023-01-11T21:41:23.6698191Z def call(args): 2023-01-11T21:41:23.6698411Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6698552Z args.clear() 2023-01-11T21:41:23.6699037Z buf0 = torch.ops.mkldnn._convolution_pointwise(arg7_1, arg0_1, arg1_1, (0, 0), (1, 1), (2, 2), 1, 'none', [], '') 2023-01-11T21:41:23.6699233Z assert_size_stride(buf0, (1, 32, 112, 112), (401408, 1, 3584, 32)) 2023-01-11T21:41:23.6699339Z del arg0_1 2023-01-11T21:41:23.6699450Z del arg1_1 2023-01-11T21:41:23.6699555Z del arg7_1 2023-01-11T21:41:23.6699673Z return (buf0, ) 2023-01-11T21:41:23.6699680Z 2023-01-11T21:41:23.6699686Z 2023-01-11T21:41:23.6699833Z if __name__ == "__main__": 2023-01-11T21:41:23.6700056Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6700301Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6700758Z arg0_1 = rand_strided((32, 3, 1, 1), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6701125Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6701497Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6701853Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6702185Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6702592Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6702839Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6703225Z arg7_1 = rand_strided((1, 3, 112, 112), (37632, 1, 336, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6703516Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6703633Z 2023-01-11T21:41:23.6703642Z 2023-01-11T21:41:23.6703816Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6703956Z import torch 2023-01-11T21:41:23.6704104Z import random 2023-01-11T21:41:23.6704331Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6704571Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6704580Z 2023-01-11T21:41:23.6704735Z aten = torch.ops.aten 2023-01-11T21:41:23.6704998Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6705168Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6705190Z 2023-01-11T21:41:23.6705197Z 2023-01-11T21:41:23.6705454Z kernel_cpp_0 = 
async_compile.cpp(''' 2023-01-11T21:41:23.6705849Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6706086Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6706274Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6706396Z { 2023-01-11T21:41:23.6706584Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6706707Z { 2023-01-11T21:41:23.6706855Z #pragma omp for 2023-01-11T21:41:23.6707014Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.6707144Z { 2023-01-11T21:41:23.6707299Z #pragma GCC ivdep 2023-01-11T21:41:23.6707445Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6707560Z { 2023-01-11T21:41:23.6707668Z { 2023-01-11T21:41:23.6707775Z { 2023-01-11T21:41:23.6707977Z auto tmp0 = in_ptr0[i1 + (12544*i0)]; 2023-01-11T21:41:23.6708156Z out_ptr0[i0 + (12*i1)] = tmp0; 2023-01-11T21:41:23.6708283Z } 2023-01-11T21:41:23.6708400Z } 2023-01-11T21:41:23.6708520Z } 2023-01-11T21:41:23.6708624Z } 2023-01-11T21:41:23.6708736Z } 2023-01-11T21:41:23.6708860Z } 2023-01-11T21:41:23.6709027Z ''') 2023-01-11T21:41:23.6709040Z 2023-01-11T21:41:23.6709047Z 2023-01-11T21:41:23.6709314Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.6709715Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6709951Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6710140Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6710251Z { 2023-01-11T21:41:23.6710449Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6710572Z { 2023-01-11T21:41:23.6710721Z #pragma omp for 2023-01-11T21:41:23.6710877Z for(long i0=0; i0<128; i0+=1) 2023-01-11T21:41:23.6711007Z { 2023-01-11T21:41:23.6711155Z #pragma GCC ivdep 2023-01-11T21:41:23.6711327Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6711456Z { 2023-01-11T21:41:23.6711586Z { 2023-01-11T21:41:23.6711717Z { 2023-01-11T21:41:23.6712017Z auto tmp0 = in_ptr0[i0 + (128*i1)]; 2023-01-11T21:41:23.6712197Z out_ptr0[i1 + (12544*i0)] = tmp0; 2023-01-11T21:41:23.6712315Z } 2023-01-11T21:41:23.6712445Z } 2023-01-11T21:41:23.6712572Z } 2023-01-11T21:41:23.6712700Z } 2023-01-11T21:41:23.6712824Z } 2023-01-11T21:41:23.6712943Z } 2023-01-11T21:41:23.6713110Z ''') 2023-01-11T21:41:23.6713119Z 2023-01-11T21:41:23.6713126Z 2023-01-11T21:41:23.6713288Z async_compile.wait(globals()) 2023-01-11T21:41:23.6713416Z del async_compile 2023-01-11T21:41:23.6713423Z 2023-01-11T21:41:23.6713546Z def call(args): 2023-01-11T21:41:23.6713846Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6713986Z args.clear() 2023-01-11T21:41:23.6714402Z buf0 = empty_strided((1, 12, 112, 112), (150528, 1, 1344, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6714737Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6714880Z del arg7_1 2023-01-11T21:41:23.6715346Z buf1 = torch.ops.mkldnn._convolution_pointwise(buf0, arg0_1, arg1_1, (0, 0), (1, 1), (2, 2), 4, 'none', [], '') 2023-01-11T21:41:23.6715537Z assert_size_stride(buf1, (1, 128, 112, 112), (1605632, 1, 14336, 128)) 2023-01-11T21:41:23.6715677Z del arg0_1 2023-01-11T21:41:23.6715812Z del arg1_1 2023-01-11T21:41:23.6715943Z del buf0 2023-01-11T21:41:23.6716365Z buf2 = empty_strided((1, 128, 112, 112), (1605632, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6716582Z kernel_cpp_1(c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 
2023-01-11T21:41:23.6716674Z return (buf2, ) 2023-01-11T21:41:23.6716681Z 2023-01-11T21:41:23.6716701Z 2023-01-11T21:41:23.6716804Z if __name__ == "__main__": 2023-01-11T21:41:23.6716999Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6717207Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6717588Z arg0_1 = rand_strided((128, 3, 1, 1), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6717931Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6718275Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6718591Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6718875Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6719187Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6719482Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6719907Z arg7_1 = rand_strided((1, 12, 112, 112), (150528, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6720200Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6720214Z 2023-01-11T21:41:23.6720704Z [2023-01-11 21:25:51,274] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 108 2023-01-11T21:41:23.6721543Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6721787Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6722292Z [2023-01-11 21:25:51,394] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 109 2023-01-11T21:41:23.6722809Z [2023-01-11 21:25:51,427] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 109 2023-01-11T21:41:23.6723710Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6724020Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6724529Z [2023-01-11 21:25:51,535] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 110 2023-01-11T21:41:23.6724996Z [2023-01-11 21:25:51,557] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 110 2023-01-11T21:41:23.6725938Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6726190Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6726703Z [2023-01-11 21:25:51,669] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 111 2023-01-11T21:41:23.6726714Z 2023-01-11T21:41:23.6726904Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6727047Z import torch 2023-01-11T21:41:23.6727198Z import random 2023-01-11T21:41:23.6727428Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6727656Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6727664Z 2023-01-11T21:41:23.6727823Z aten = torch.ops.aten 2023-01-11T21:41:23.6728090Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6728276Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6728285Z 2023-01-11T21:41:23.6728292Z 2023-01-11T21:41:23.6728471Z async_compile.wait(globals()) 2023-01-11T21:41:23.6728619Z del async_compile 2023-01-11T21:41:23.6728631Z 2023-01-11T21:41:23.6728776Z def call(args): 2023-01-11T21:41:23.6729000Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6729130Z args.clear() 2023-01-11T21:41:23.6729621Z buf0 = torch.ops.mkldnn._convolution_pointwise(arg7_1, arg0_1, arg1_1, (0, 0), (1, 1), (2, 2), 4, 'none', [], '') 2023-01-11T21:41:23.6729833Z assert_size_stride(buf0, (1, 128, 112, 112), (1605632, 1, 14336, 128)) 2023-01-11T21:41:23.6729971Z del arg0_1 2023-01-11T21:41:23.6730107Z del arg1_1 2023-01-11T21:41:23.6730236Z del arg7_1 2023-01-11T21:41:23.6730381Z return (buf0, ) 2023-01-11T21:41:23.6730390Z 2023-01-11T21:41:23.6730396Z 2023-01-11T21:41:23.6730533Z if __name__ == "__main__": 2023-01-11T21:41:23.6730759Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6731003Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6731409Z arg0_1 = rand_strided((128, 3, 1, 1), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6731774Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6732087Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6732410Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6732770Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6733120Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6733467Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6733883Z arg7_1 = rand_strided((1, 12, 112, 112), (150528, 1, 1344, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6734155Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6734163Z 2023-01-11T21:41:23.6734170Z 2023-01-11T21:41:23.6734335Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6734526Z import torch 2023-01-11T21:41:23.6734650Z import random 2023-01-11T21:41:23.6734846Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6735047Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6735055Z 2023-01-11T21:41:23.6735185Z aten = torch.ops.aten 2023-01-11T21:41:23.6735402Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6735567Z async_compile = AsyncCompile() 
2023-01-11T21:41:23.6735576Z 2023-01-11T21:41:23.6735583Z 2023-01-11T21:41:23.6735827Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6736186Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6736425Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6736600Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6736692Z { 2023-01-11T21:41:23.6736875Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6736998Z { 2023-01-11T21:41:23.6737243Z #pragma omp for collapse(2) 2023-01-11T21:41:23.6737400Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.6737516Z { 2023-01-11T21:41:23.6737680Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6737788Z { 2023-01-11T21:41:23.6737913Z { 2023-01-11T21:41:23.6738045Z { 2023-01-11T21:41:23.6738234Z auto tmp0 = in_ptr0[i1 + (12544*i0)]; 2023-01-11T21:41:23.6738407Z out_ptr0[i0 + (3*i1)] = tmp0; 2023-01-11T21:41:23.6738538Z } 2023-01-11T21:41:23.6738667Z } 2023-01-11T21:41:23.6738779Z } 2023-01-11T21:41:23.6738904Z } 2023-01-11T21:41:23.6739030Z } 2023-01-11T21:41:23.6739155Z } 2023-01-11T21:41:23.6739296Z ''') 2023-01-11T21:41:23.6739306Z 2023-01-11T21:41:23.6739313Z 2023-01-11T21:41:23.6739560Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.6739960Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6740172Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6740338Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6740454Z { 2023-01-11T21:41:23.6740654Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6740779Z { 2023-01-11T21:41:23.6740928Z #pragma omp for 2023-01-11T21:41:23.6741081Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.6741187Z { 2023-01-11T21:41:23.6741326Z #pragma GCC ivdep 2023-01-11T21:41:23.6741484Z for(long i1=0; i1<12100; i1+=1) 2023-01-11T21:41:23.6741592Z { 2023-01-11T21:41:23.6741693Z { 2023-01-11T21:41:23.6741804Z { 2023-01-11T21:41:23.6741948Z auto tmp0 = in_ptr0[i0 + (32*i1)]; 2023-01-11T21:41:23.6742122Z out_ptr0[i1 + (12100*i0)] = tmp0; 2023-01-11T21:41:23.6742263Z } 2023-01-11T21:41:23.6742551Z } 2023-01-11T21:41:23.6742684Z } 2023-01-11T21:41:23.6742798Z } 2023-01-11T21:41:23.6742924Z } 2023-01-11T21:41:23.6743035Z } 2023-01-11T21:41:23.6743191Z ''') 2023-01-11T21:41:23.6743200Z 2023-01-11T21:41:23.6743208Z 2023-01-11T21:41:23.6743391Z async_compile.wait(globals()) 2023-01-11T21:41:23.6743541Z del async_compile 2023-01-11T21:41:23.6743549Z 2023-01-11T21:41:23.6743691Z def call(args): 2023-01-11T21:41:23.6743920Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6744066Z args.clear() 2023-01-11T21:41:23.6744481Z buf0 = empty_strided((1, 3, 112, 112), (37632, 1, 336, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6744725Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6744857Z del arg7_1 2023-01-11T21:41:23.6745347Z buf1 = torch.ops.mkldnn._convolution_pointwise(buf0, arg0_1, arg1_1, (0, 0), (1, 1), (1, 1), 1, 'none', [], '') 2023-01-11T21:41:23.6745658Z assert_size_stride(buf1, (1, 32, 110, 110), (387200, 1, 3520, 32)) 2023-01-11T21:41:23.6745795Z del arg0_1 2023-01-11T21:41:23.6745929Z del arg1_1 2023-01-11T21:41:23.6746054Z del buf0 2023-01-11T21:41:23.6746473Z buf2 = empty_strided((1, 32, 110, 110), (387200, 12100, 110, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6746735Z 
kernel_cpp_1(c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.6746879Z return (buf2, ) 2023-01-11T21:41:23.6746887Z 2023-01-11T21:41:23.6746895Z 2023-01-11T21:41:23.6747049Z if __name__ == "__main__": 2023-01-11T21:41:23.6747281Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6747527Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6747917Z arg0_1 = rand_strided((32, 3, 3, 3), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6748363Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6748721Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6749079Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6749446Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6749800Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6750136Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6750560Z arg7_1 = rand_strided((1, 3, 112, 112), (37632, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6750859Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6750868Z 2023-01-11T21:41:23.6750875Z 2023-01-11T21:41:23.6751064Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6751208Z import torch 2023-01-11T21:41:23.6751338Z import random 2023-01-11T21:41:23.6751578Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6751821Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6751830Z 2023-01-11T21:41:23.6751987Z aten = torch.ops.aten 2023-01-11T21:41:23.6752228Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6752389Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6752397Z 2023-01-11T21:41:23.6752404Z 2023-01-11T21:41:23.6752568Z async_compile.wait(globals()) 2023-01-11T21:41:23.6752698Z del async_compile 2023-01-11T21:41:23.6752707Z 2023-01-11T21:41:23.6752823Z def call(args): 2023-01-11T21:41:23.6753040Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6753183Z args.clear() 2023-01-11T21:41:23.6753628Z buf0 = torch.ops.mkldnn._convolution_pointwise(arg7_1, arg0_1, arg1_1, (0, 0), (1, 1), (1, 1), 1, 'none', [], '') 2023-01-11T21:41:23.6753880Z assert_size_stride(buf0, (1, 32, 110, 110), (387200, 1, 3520, 32)) 2023-01-11T21:41:23.6754012Z del arg0_1 2023-01-11T21:41:23.6754149Z del arg1_1 2023-01-11T21:41:23.6754264Z del arg7_1 2023-01-11T21:41:23.6754410Z return (buf0, ) 2023-01-11T21:41:23.6754419Z 2023-01-11T21:41:23.6754426Z 2023-01-11T21:41:23.6754574Z if __name__ == "__main__": 2023-01-11T21:41:23.6754783Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6755012Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6755398Z arg0_1 = rand_strided((32, 3, 3, 3), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6755760Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6756107Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6756457Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6756798Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6757243Z 
arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6757548Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6757939Z arg7_1 = rand_strided((1, 3, 112, 112), (37632, 1, 336, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6758234Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6758242Z 2023-01-11T21:41:23.6758776Z [2023-01-11 21:25:51,703] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 111 2023-01-11T21:41:23.6759654Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6759970Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6760490Z [2023-01-11 21:25:51,826] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 112 2023-01-11T21:41:23.6760994Z [2023-01-11 21:25:51,848] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 112 2023-01-11T21:41:23.6761867Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6762068Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6762581Z [2023-01-11 21:25:51,968] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 113 2023-01-11T21:41:23.6762590Z 2023-01-11T21:41:23.6762776Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6762920Z import torch 2023-01-11T21:41:23.6763067Z import random 2023-01-11T21:41:23.6763301Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6763551Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6763561Z 2023-01-11T21:41:23.6763706Z aten = torch.ops.aten 2023-01-11T21:41:23.6763973Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6764161Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6764168Z 2023-01-11T21:41:23.6764176Z 2023-01-11T21:41:23.6764442Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6764846Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6765083Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6765276Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6765405Z { 2023-01-11T21:41:23.6765591Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6765721Z { 2023-01-11T21:41:23.6765869Z #pragma omp for 2023-01-11T21:41:23.6766021Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.6766141Z { 2023-01-11T21:41:23.6766281Z #pragma GCC ivdep 2023-01-11T21:41:23.6766432Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6766558Z { 2023-01-11T21:41:23.6766674Z { 2023-01-11T21:41:23.6766800Z { 2023-01-11T21:41:23.6766992Z auto tmp0 = in_ptr0[i1 + (12544*i0)]; 2023-01-11T21:41:23.6767168Z out_ptr0[i0 + (12*i1)] = tmp0; 
2023-01-11T21:41:23.6767299Z } 2023-01-11T21:41:23.6767413Z } 2023-01-11T21:41:23.6767524Z } 2023-01-11T21:41:23.6767648Z } 2023-01-11T21:41:23.6767746Z } 2023-01-11T21:41:23.6767851Z } 2023-01-11T21:41:23.6768013Z ''') 2023-01-11T21:41:23.6768021Z 2023-01-11T21:41:23.6768028Z 2023-01-11T21:41:23.6768379Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.6768753Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6768986Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6769177Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6769301Z { 2023-01-11T21:41:23.6769499Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6769629Z { 2023-01-11T21:41:23.6769787Z #pragma omp for 2023-01-11T21:41:23.6769935Z for(long i0=0; i0<128; i0+=1) 2023-01-11T21:41:23.6770060Z { 2023-01-11T21:41:23.6770212Z #pragma GCC ivdep 2023-01-11T21:41:23.6770378Z for(long i1=0; i1<12100; i1+=1) 2023-01-11T21:41:23.6770510Z { 2023-01-11T21:41:23.6770643Z { 2023-01-11T21:41:23.6770780Z { 2023-01-11T21:41:23.6770959Z auto tmp0 = in_ptr0[i0 + (128*i1)]; 2023-01-11T21:41:23.6771204Z out_ptr0[i1 + (12100*i0)] = tmp0; 2023-01-11T21:41:23.6771340Z } 2023-01-11T21:41:23.6771466Z } 2023-01-11T21:41:23.6771589Z } 2023-01-11T21:41:23.6771715Z } 2023-01-11T21:41:23.6771839Z } 2023-01-11T21:41:23.6771949Z } 2023-01-11T21:41:23.6772101Z ''') 2023-01-11T21:41:23.6772109Z 2023-01-11T21:41:23.6772116Z 2023-01-11T21:41:23.6772297Z async_compile.wait(globals()) 2023-01-11T21:41:23.6772444Z del async_compile 2023-01-11T21:41:23.6772452Z 2023-01-11T21:41:23.6772596Z def call(args): 2023-01-11T21:41:23.6772818Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6772957Z args.clear() 2023-01-11T21:41:23.6773367Z buf0 = empty_strided((1, 12, 112, 112), (150528, 1, 1344, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6773623Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6773762Z del arg7_1 2023-01-11T21:41:23.6774244Z buf1 = torch.ops.mkldnn._convolution_pointwise(buf0, arg0_1, arg1_1, (0, 0), (1, 1), (1, 1), 4, 'none', [], '') 2023-01-11T21:41:23.6774441Z assert_size_stride(buf1, (1, 128, 110, 110), (1548800, 1, 14080, 128)) 2023-01-11T21:41:23.6774579Z del arg0_1 2023-01-11T21:41:23.6774695Z del arg1_1 2023-01-11T21:41:23.6774809Z del buf0 2023-01-11T21:41:23.6775247Z buf2 = empty_strided((1, 128, 110, 110), (1548800, 12100, 110, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6775507Z kernel_cpp_1(c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.6775638Z return (buf2, ) 2023-01-11T21:41:23.6775646Z 2023-01-11T21:41:23.6775654Z 2023-01-11T21:41:23.6775796Z if __name__ == "__main__": 2023-01-11T21:41:23.6776016Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6776262Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6776641Z arg0_1 = rand_strided((128, 3, 3, 3), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6776996Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6777336Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6777672Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6777995Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6778320Z arg5_1 = rand_strided((128, 
), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6778626Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6779029Z arg7_1 = rand_strided((1, 12, 112, 112), (150528, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6779284Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6779292Z 2023-01-11T21:41:23.6779299Z 2023-01-11T21:41:23.6779454Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6779637Z import torch 2023-01-11T21:41:23.6779751Z import random 2023-01-11T21:41:23.6779941Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6780132Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6780139Z 2023-01-11T21:41:23.6780261Z aten = torch.ops.aten 2023-01-11T21:41:23.6780474Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6780613Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6780620Z 2023-01-11T21:41:23.6780626Z 2023-01-11T21:41:23.6780774Z async_compile.wait(globals()) 2023-01-11T21:41:23.6780864Z del async_compile 2023-01-11T21:41:23.6780871Z 2023-01-11T21:41:23.6780983Z def call(args): 2023-01-11T21:41:23.6781164Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6781277Z args.clear() 2023-01-11T21:41:23.6781678Z buf0 = torch.ops.mkldnn._convolution_pointwise(arg7_1, arg0_1, arg1_1, (0, 0), (1, 1), (1, 1), 4, 'none', [], '') 2023-01-11T21:41:23.6781913Z assert_size_stride(buf0, (1, 128, 110, 110), (1548800, 1, 14080, 128)) 2023-01-11T21:41:23.6782019Z del arg0_1 2023-01-11T21:41:23.6782112Z del arg1_1 2023-01-11T21:41:23.6782218Z del arg7_1 2023-01-11T21:41:23.6782457Z return (buf0, ) 2023-01-11T21:41:23.6782469Z 2023-01-11T21:41:23.6782474Z 2023-01-11T21:41:23.6782601Z if __name__ == "__main__": 2023-01-11T21:41:23.6782775Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6782952Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6783286Z arg0_1 = rand_strided((128, 3, 3, 3), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6783586Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6783866Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6784173Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6784497Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6784811Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6785120Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6785517Z arg7_1 = rand_strided((1, 12, 112, 112), (150528, 1, 1344, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6785792Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6785800Z 2023-01-11T21:41:23.6786294Z [2023-01-11 21:25:52,000] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 113 2023-01-11T21:41:23.6787121Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6787337Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6787784Z [2023-01-11 21:25:52,109] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 114 2023-01-11T21:41:23.6788290Z [2023-01-11 21:25:52,131] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 114 2023-01-11T21:41:23.6789136Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6789353Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6789832Z [2023-01-11 21:25:52,283] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 115 2023-01-11T21:41:23.6790381Z [2023-01-11 21:25:52,315] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 115 2023-01-11T21:41:23.6791065Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6791302Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6791796Z [2023-01-11 21:25:52,441] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 116 2023-01-11T21:41:23.6791805Z 2023-01-11T21:41:23.6791985Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6792124Z import torch 2023-01-11T21:41:23.6792253Z import random 2023-01-11T21:41:23.6792583Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6792826Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6792834Z 2023-01-11T21:41:23.6792978Z aten = torch.ops.aten 2023-01-11T21:41:23.6793222Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6793397Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6793404Z 2023-01-11T21:41:23.6793411Z 2023-01-11T21:41:23.6793657Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6794119Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6794313Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6794498Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6794615Z { 2023-01-11T21:41:23.6794807Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6794929Z { 2023-01-11T21:41:23.6795098Z #pragma omp for collapse(2) 2023-01-11T21:41:23.6795266Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.6795373Z { 2023-01-11T21:41:23.6795531Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6795653Z { 2023-01-11T21:41:23.6795775Z { 2023-01-11T21:41:23.6795897Z { 2023-01-11T21:41:23.6796081Z auto tmp0 = in_ptr0[i1 + (12544*i0)]; 2023-01-11T21:41:23.6796237Z out_ptr0[i0 + (3*i1)] = tmp0; 2023-01-11T21:41:23.6796344Z } 2023-01-11T21:41:23.6796461Z } 2023-01-11T21:41:23.6796572Z } 2023-01-11T21:41:23.6796683Z } 2023-01-11T21:41:23.6796798Z } 
2023-01-11T21:41:23.6796914Z } 2023-01-11T21:41:23.6797045Z ''') 2023-01-11T21:41:23.6797054Z 2023-01-11T21:41:23.6797061Z 2023-01-11T21:41:23.6797300Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.6797663Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6797887Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6798061Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6798170Z { 2023-01-11T21:41:23.6798356Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6798462Z { 2023-01-11T21:41:23.6798599Z #pragma omp for 2023-01-11T21:41:23.6798740Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.6798854Z { 2023-01-11T21:41:23.6798999Z #pragma GCC ivdep 2023-01-11T21:41:23.6799153Z for(long i1=0; i1<11664; i1+=1) 2023-01-11T21:41:23.6799264Z { 2023-01-11T21:41:23.6799356Z { 2023-01-11T21:41:23.6799463Z { 2023-01-11T21:41:23.6799645Z auto tmp0 = in_ptr0[i0 + (32*i1)]; 2023-01-11T21:41:23.6799814Z out_ptr0[i1 + (11664*i0)] = tmp0; 2023-01-11T21:41:23.6799937Z } 2023-01-11T21:41:23.6800049Z } 2023-01-11T21:41:23.6800256Z } 2023-01-11T21:41:23.6800359Z } 2023-01-11T21:41:23.6800477Z } 2023-01-11T21:41:23.6800595Z } 2023-01-11T21:41:23.6800755Z ''') 2023-01-11T21:41:23.6800764Z 2023-01-11T21:41:23.6800771Z 2023-01-11T21:41:23.6800952Z async_compile.wait(globals()) 2023-01-11T21:41:23.6801095Z del async_compile 2023-01-11T21:41:23.6801103Z 2023-01-11T21:41:23.6801240Z def call(args): 2023-01-11T21:41:23.6801434Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6801566Z args.clear() 2023-01-11T21:41:23.6801971Z buf0 = empty_strided((1, 3, 112, 112), (37632, 1, 336, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6802206Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6802308Z del arg7_1 2023-01-11T21:41:23.6802716Z buf1 = torch.ops.mkldnn._convolution_pointwise(buf0, arg0_1, arg1_1, (0, 0), (1, 1), (2, 2), 1, 'none', [], '') 2023-01-11T21:41:23.6802977Z assert_size_stride(buf1, (1, 32, 108, 108), (373248, 1, 3456, 32)) 2023-01-11T21:41:23.6803119Z del arg0_1 2023-01-11T21:41:23.6803221Z del arg1_1 2023-01-11T21:41:23.6803333Z del buf0 2023-01-11T21:41:23.6803762Z buf2 = empty_strided((1, 32, 108, 108), (373248, 11664, 108, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6803995Z kernel_cpp_1(c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.6804129Z return (buf2, ) 2023-01-11T21:41:23.6804137Z 2023-01-11T21:41:23.6804144Z 2023-01-11T21:41:23.6804261Z if __name__ == "__main__": 2023-01-11T21:41:23.6804488Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6804721Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6805126Z arg0_1 = rand_strided((32, 3, 3, 3), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6805462Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6805829Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6806155Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6806495Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6806791Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6807041Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6807407Z arg7_1 = 
rand_strided((1, 3, 112, 112), (37632, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6807710Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6807719Z 2023-01-11T21:41:23.6807727Z 2023-01-11T21:41:23.6807903Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6808046Z import torch 2023-01-11T21:41:23.6808191Z import random 2023-01-11T21:41:23.6808411Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6808620Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6808628Z 2023-01-11T21:41:23.6808753Z aten = torch.ops.aten 2023-01-11T21:41:23.6809013Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6809162Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6809168Z 2023-01-11T21:41:23.6809174Z 2023-01-11T21:41:23.6809291Z async_compile.wait(globals()) 2023-01-11T21:41:23.6809386Z del async_compile 2023-01-11T21:41:23.6809392Z 2023-01-11T21:41:23.6809484Z def call(args): 2023-01-11T21:41:23.6809638Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6809733Z args.clear() 2023-01-11T21:41:23.6810086Z buf0 = torch.ops.mkldnn._convolution_pointwise(arg7_1, arg0_1, arg1_1, (0, 0), (1, 1), (2, 2), 1, 'none', [], '') 2023-01-11T21:41:23.6810243Z assert_size_stride(buf0, (1, 32, 108, 108), (373248, 1, 3456, 32)) 2023-01-11T21:41:23.6810364Z del arg0_1 2023-01-11T21:41:23.6810553Z del arg1_1 2023-01-11T21:41:23.6810659Z del arg7_1 2023-01-11T21:41:23.6810780Z return (buf0, ) 2023-01-11T21:41:23.6810788Z 2023-01-11T21:41:23.6810795Z 2023-01-11T21:41:23.6810937Z if __name__ == "__main__": 2023-01-11T21:41:23.6811115Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6811354Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6811723Z arg0_1 = rand_strided((32, 3, 3, 3), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6812084Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6812459Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6812821Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6813132Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6813446Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6813854Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6814266Z arg7_1 = rand_strided((1, 3, 112, 112), (37632, 1, 336, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6814509Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6814533Z 2023-01-11T21:41:23.6814540Z 2023-01-11T21:41:23.6814658Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6814758Z import torch 2023-01-11T21:41:23.6814860Z import random 2023-01-11T21:41:23.6815025Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6815213Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6815221Z 2023-01-11T21:41:23.6815367Z aten = torch.ops.aten 2023-01-11T21:41:23.6815597Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6815746Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6815753Z 2023-01-11T21:41:23.6815779Z 2023-01-11T21:41:23.6816019Z kernel_cpp_0 = 
async_compile.cpp(''' 2023-01-11T21:41:23.6816373Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6816582Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6816745Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6816859Z { 2023-01-11T21:41:23.6817036Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6817146Z { 2023-01-11T21:41:23.6817286Z #pragma omp for 2023-01-11T21:41:23.6817455Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.6817563Z { 2023-01-11T21:41:23.6817687Z #pragma GCC ivdep 2023-01-11T21:41:23.6817840Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.6817949Z { 2023-01-11T21:41:23.6818040Z { 2023-01-11T21:41:23.6818171Z { 2023-01-11T21:41:23.6818353Z auto tmp0 = in_ptr0[i1 + (12544*i0)]; 2023-01-11T21:41:23.6818494Z out_ptr0[i0 + (12*i1)] = tmp0; 2023-01-11T21:41:23.6818595Z } 2023-01-11T21:41:23.6818689Z } 2023-01-11T21:41:23.6818784Z } 2023-01-11T21:41:23.6818885Z } 2023-01-11T21:41:23.6818992Z } 2023-01-11T21:41:23.6819115Z } 2023-01-11T21:41:23.6819290Z ''') 2023-01-11T21:41:23.6819300Z 2023-01-11T21:41:23.6819307Z 2023-01-11T21:41:23.6819577Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.6819975Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6820209Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6820391Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6820492Z { 2023-01-11T21:41:23.6820690Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6820814Z { 2023-01-11T21:41:23.6820968Z #pragma omp for 2023-01-11T21:41:23.6821130Z for(long i0=0; i0<128; i0+=1) 2023-01-11T21:41:23.6821340Z { 2023-01-11T21:41:23.6821483Z #pragma GCC ivdep 2023-01-11T21:41:23.6821646Z for(long i1=0; i1<11664; i1+=1) 2023-01-11T21:41:23.6821774Z { 2023-01-11T21:41:23.6821906Z { 2023-01-11T21:41:23.6822039Z { 2023-01-11T21:41:23.6822228Z auto tmp0 = in_ptr0[i0 + (128*i1)]; 2023-01-11T21:41:23.6822510Z out_ptr0[i1 + (11664*i0)] = tmp0; 2023-01-11T21:41:23.6822609Z } 2023-01-11T21:41:23.6822712Z } 2023-01-11T21:41:23.6822830Z } 2023-01-11T21:41:23.6822958Z } 2023-01-11T21:41:23.6823080Z } 2023-01-11T21:41:23.6823206Z } 2023-01-11T21:41:23.6823376Z ''') 2023-01-11T21:41:23.6823385Z 2023-01-11T21:41:23.6823392Z 2023-01-11T21:41:23.6823559Z async_compile.wait(globals()) 2023-01-11T21:41:23.6823708Z del async_compile 2023-01-11T21:41:23.6823716Z 2023-01-11T21:41:23.6823853Z def call(args): 2023-01-11T21:41:23.6824181Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6824310Z args.clear() 2023-01-11T21:41:23.6824733Z buf0 = empty_strided((1, 12, 112, 112), (150528, 1, 1344, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6824969Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6825082Z del arg7_1 2023-01-11T21:41:23.6825461Z buf1 = torch.ops.mkldnn._convolution_pointwise(buf0, arg0_1, arg1_1, (0, 0), (1, 1), (2, 2), 4, 'none', [], '') 2023-01-11T21:41:23.6825635Z assert_size_stride(buf1, (1, 128, 108, 108), (1492992, 1, 13824, 128)) 2023-01-11T21:41:23.6825747Z del arg0_1 2023-01-11T21:41:23.6825858Z del arg1_1 2023-01-11T21:41:23.6825959Z del buf0 2023-01-11T21:41:23.6826311Z buf2 = empty_strided((1, 128, 108, 108), (1492992, 11664, 108, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6826532Z kernel_cpp_1(c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 
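[editor's sketch, not part of the CI output] The wrapper above copies the contiguous NCHW input into a channels-last buffer (kernel_cpp_0), runs the mkldnn pointwise convolution on it, and copies the result back to a contiguous layout (kernel_cpp_1). A minimal sketch of that round trip using only public PyTorch ops; the helper name layout_round_trip and the use of F.conv2d in place of torch.ops.mkldnn._convolution_pointwise are illustrative assumptions, with shapes mirroring the 1x12x112x112, groups=4, dilation=2 case in this dump.

import torch
import torch.nn.functional as F

def layout_round_trip(x, weight, bias, groups):
    # kernel_cpp_0: copy the contiguous NCHW input into a channels-last buffer
    x_cl = x.to(memory_format=torch.channels_last)
    # the convolution itself runs on the channels-last buffer
    y_cl = F.conv2d(x_cl, weight, bias, stride=1, padding=0, dilation=2, groups=groups)
    # kernel_cpp_1: copy the channels-last result back into a contiguous NCHW tensor
    return y_cl.contiguous()

x = torch.randn(1, 12, 112, 112)
w = torch.randn(128, 3, 3, 3)  # 12 input channels / 4 groups = 3 channels per group
b = torch.randn(128)
ref = F.conv2d(x, w, b, dilation=2, groups=4)
# loose tolerances: channels-last and contiguous paths may pick different backends
torch.testing.assert_close(layout_round_trip(x, w, b, 4), ref, rtol=1e-4, atol=1e-4)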
2023-01-11T21:41:23.6826645Z return (buf2, ) 2023-01-11T21:41:23.6826653Z 2023-01-11T21:41:23.6826675Z 2023-01-11T21:41:23.6826785Z if __name__ == "__main__": 2023-01-11T21:41:23.6826973Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6827167Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6827514Z arg0_1 = rand_strided((128, 3, 3, 3), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6827815Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6828122Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6828426Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6828711Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6829012Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6829271Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6829671Z arg7_1 = rand_strided((1, 12, 112, 112), (150528, 12544, 112, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6829928Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6829937Z 2023-01-11T21:41:23.6830376Z [2023-01-11 21:25:52,463] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 116 2023-01-11T21:41:23.6831124Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6831340Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6831788Z [2023-01-11 21:25:52,608] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 117 2023-01-11T21:41:23.6832422Z [2023-01-11 21:25:52,631] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 117 2023-01-11T21:41:23.6833311Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6833536Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6834133Z [2023-01-11 21:25:52,884] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 118 2023-01-11T21:41:23.6834624Z [2023-01-11 21:25:54,425] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 118 2023-01-11T21:41:23.6835574Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6835813Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6836310Z [2023-01-11 21:25:54,775] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 119 2023-01-11T21:41:23.6836827Z [2023-01-11 21:25:54,799] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 119 2023-01-11T21:41:23.6837704Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6837937Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6838421Z [2023-01-11 21:25:55,537] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 120 2023-01-11T21:41:23.6838431Z 2023-01-11T21:41:23.6838617Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6838743Z import torch 2023-01-11T21:41:23.6838885Z import random 2023-01-11T21:41:23.6839112Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6839348Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6839358Z 2023-01-11T21:41:23.6839515Z aten = torch.ops.aten 2023-01-11T21:41:23.6839776Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6839951Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6839959Z 2023-01-11T21:41:23.6839966Z 2023-01-11T21:41:23.6840150Z async_compile.wait(globals()) 2023-01-11T21:41:23.6840282Z del async_compile 2023-01-11T21:41:23.6840289Z 2023-01-11T21:41:23.6840427Z def call(args): 2023-01-11T21:41:23.6840658Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6840799Z args.clear() 2023-01-11T21:41:23.6841251Z buf0 = torch.ops.mkldnn._convolution_pointwise(arg7_1, arg0_1, arg1_1, (0, 0), (1, 1), (2, 2), 4, 'none', [], '') 2023-01-11T21:41:23.6841469Z assert_size_stride(buf0, (1, 128, 108, 108), (1492992, 1, 13824, 128)) 2023-01-11T21:41:23.6841605Z del arg0_1 2023-01-11T21:41:23.6841724Z del arg1_1 2023-01-11T21:41:23.6841859Z del arg7_1 2023-01-11T21:41:23.6841997Z return (buf0, ) 2023-01-11T21:41:23.6842005Z 2023-01-11T21:41:23.6842012Z 2023-01-11T21:41:23.6842163Z if __name__ == "__main__": 2023-01-11T21:41:23.6842383Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6842632Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6843137Z arg0_1 = rand_strided((128, 3, 3, 3), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6843449Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6843781Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6844087Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6844374Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6844652Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6844924Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6845316Z arg7_1 = rand_strided((1, 12, 112, 112), (150528, 1, 1344, 12), device='cpu', 
dtype=torch.float32) 2023-01-11T21:41:23.6845584Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6845598Z 2023-01-11T21:41:23.6845680Z 2023-01-11T21:41:23.6845867Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6845997Z import torch 2023-01-11T21:41:23.6846101Z import random 2023-01-11T21:41:23.6846304Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6846501Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6846509Z 2023-01-11T21:41:23.6846636Z aten = torch.ops.aten 2023-01-11T21:41:23.6846887Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6847048Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6847054Z 2023-01-11T21:41:23.6847060Z 2023-01-11T21:41:23.6847203Z async_compile.wait(globals()) 2023-01-11T21:41:23.6847314Z del async_compile 2023-01-11T21:41:23.6847335Z 2023-01-11T21:41:23.6847433Z def call(args): 2023-01-11T21:41:23.6847633Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6847759Z args.clear() 2023-01-11T21:41:23.6847972Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 1) 2023-01-11T21:41:23.6848152Z assert_size_stride(buf0, (1, 32, 55, 55, 55), (5324000, 166375, 3025, 55, 1)) 2023-01-11T21:41:23.6848266Z del arg0_1 2023-01-11T21:41:23.6848385Z del arg1_1 2023-01-11T21:41:23.6848482Z del arg7_1 2023-01-11T21:41:23.6848596Z return (buf0, ) 2023-01-11T21:41:23.6848604Z 2023-01-11T21:41:23.6848611Z 2023-01-11T21:41:23.6848734Z if __name__ == "__main__": 2023-01-11T21:41:23.6848921Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6849122Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6849486Z arg0_1 = rand_strided((32, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6849819Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6850143Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6850473Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6850790Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6851149Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6851465Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6851872Z arg7_1 = rand_strided((1, 3, 55, 55, 55), (499125, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6852125Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6852134Z 2023-01-11T21:41:23.6852141Z 2023-01-11T21:41:23.6852295Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6852411Z import torch 2023-01-11T21:41:23.6852505Z import random 2023-01-11T21:41:23.6852710Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6852940Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6853033Z 2023-01-11T21:41:23.6853191Z aten = torch.ops.aten 2023-01-11T21:41:23.6853456Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6853629Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6853637Z 2023-01-11T21:41:23.6853645Z 2023-01-11T21:41:23.6853908Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6854301Z #include 
"/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6854504Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6854665Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6854765Z { 2023-01-11T21:41:23.6854946Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6855079Z { 2023-01-11T21:41:23.6855247Z #pragma omp for collapse(2) 2023-01-11T21:41:23.6855410Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.6855522Z { 2023-01-11T21:41:23.6855694Z for(long i1=0; i1<166375; i1+=1) 2023-01-11T21:41:23.6855865Z { 2023-01-11T21:41:23.6855957Z { 2023-01-11T21:41:23.6856061Z { 2023-01-11T21:41:23.6856232Z auto tmp0 = in_ptr0[i0 + (3*i1)]; 2023-01-11T21:41:23.6856413Z out_ptr0[i1 + (166375*i0)] = tmp0; 2023-01-11T21:41:23.6856533Z } 2023-01-11T21:41:23.6856663Z } 2023-01-11T21:41:23.6856780Z } 2023-01-11T21:41:23.6856901Z } 2023-01-11T21:41:23.6857022Z } 2023-01-11T21:41:23.6857151Z } 2023-01-11T21:41:23.6857304Z ''') 2023-01-11T21:41:23.6857328Z 2023-01-11T21:41:23.6857335Z 2023-01-11T21:41:23.6857503Z async_compile.wait(globals()) 2023-01-11T21:41:23.6857653Z del async_compile 2023-01-11T21:41:23.6857661Z 2023-01-11T21:41:23.6857805Z def call(args): 2023-01-11T21:41:23.6858025Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6858149Z args.clear() 2023-01-11T21:41:23.6858596Z buf0 = empty_strided((1, 3, 55, 55, 55), (499125, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6858848Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6858944Z del arg7_1 2023-01-11T21:41:23.6859205Z buf1 = aten.convolution(buf0, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 1) 2023-01-11T21:41:23.6859420Z assert_size_stride(buf1, (1, 32, 55, 55, 55), (5324000, 166375, 3025, 55, 1)) 2023-01-11T21:41:23.6859555Z del arg0_1 2023-01-11T21:41:23.6859683Z del arg1_1 2023-01-11T21:41:23.6859826Z return (buf1, ) 2023-01-11T21:41:23.6859833Z 2023-01-11T21:41:23.6859840Z 2023-01-11T21:41:23.6859976Z if __name__ == "__main__": 2023-01-11T21:41:23.6860164Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6860370Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6860791Z arg0_1 = rand_strided((32, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6861169Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6861523Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6861888Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6862211Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6862656Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6862945Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6863321Z arg7_1 = rand_strided((1, 3, 55, 55, 55), (499125, 1, 9075, 165, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6863589Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6863597Z 2023-01-11T21:41:23.6863604Z 2023-01-11T21:41:23.6863761Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6863987Z import torch 2023-01-11T21:41:23.6864104Z import random 2023-01-11T21:41:23.6864284Z from torch import empty_strided, as_strided, 
device 2023-01-11T21:41:23.6864494Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6864502Z 2023-01-11T21:41:23.6864630Z aten = torch.ops.aten 2023-01-11T21:41:23.6864884Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6865051Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6865058Z 2023-01-11T21:41:23.6865065Z 2023-01-11T21:41:23.6865233Z async_compile.wait(globals()) 2023-01-11T21:41:23.6865342Z del async_compile 2023-01-11T21:41:23.6865350Z 2023-01-11T21:41:23.6865471Z def call(args): 2023-01-11T21:41:23.6865655Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6865769Z args.clear() 2023-01-11T21:41:23.6866013Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 4) 2023-01-11T21:41:23.6866298Z assert_size_stride(buf0, (1, 128, 55, 55, 55), (21296000, 166375, 3025, 55, 1)) 2023-01-11T21:41:23.6866421Z del arg0_1 2023-01-11T21:41:23.6866532Z del arg1_1 2023-01-11T21:41:23.6866652Z del arg7_1 2023-01-11T21:41:23.6866784Z return (buf0, ) 2023-01-11T21:41:23.6866792Z 2023-01-11T21:41:23.6866800Z 2023-01-11T21:41:23.6866950Z if __name__ == "__main__": 2023-01-11T21:41:23.6867159Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6867364Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6867745Z arg0_1 = rand_strided((128, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6868057Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6868392Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6868743Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6869109Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6869452Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6869789Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6870211Z arg7_1 = rand_strided((1, 12, 55, 55, 55), (1996500, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6870489Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6870499Z 2023-01-11T21:41:23.6870947Z [2023-01-11 21:25:57,117] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 120 2023-01-11T21:41:23.6871823Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6872038Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6872504Z [2023-01-11 21:25:57,737] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 121 2023-01-11T21:41:23.6873018Z [2023-01-11 21:25:57,760] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 121 2023-01-11T21:41:23.6873861Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. 
This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6874110Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6874632Z [2023-01-11 21:25:57,998] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 122 2023-01-11T21:41:23.6875237Z [2023-01-11 21:25:58,027] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 122 2023-01-11T21:41:23.6876148Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6876356Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6876749Z [2023-01-11 21:25:58,344] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 123 2023-01-11T21:41:23.6877157Z [2023-01-11 21:25:58,367] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 123 2023-01-11T21:41:23.6877976Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6878143Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6878576Z [2023-01-11 21:25:59,106] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 124 2023-01-11T21:41:23.6878585Z 2023-01-11T21:41:23.6878753Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6878878Z import torch 2023-01-11T21:41:23.6878999Z import random 2023-01-11T21:41:23.6879225Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6879426Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6879452Z 2023-01-11T21:41:23.6879571Z aten = torch.ops.aten 2023-01-11T21:41:23.6879822Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6879981Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6879989Z 2023-01-11T21:41:23.6879996Z 2023-01-11T21:41:23.6880260Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6880666Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6880905Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6881098Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6881209Z { 2023-01-11T21:41:23.6881391Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6881520Z { 2023-01-11T21:41:23.6881677Z #pragma omp for 2023-01-11T21:41:23.6881841Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.6881971Z { 2023-01-11T21:41:23.6882134Z #pragma GCC ivdep 2023-01-11T21:41:23.6882292Z for(long i1=0; i1<166375; i1+=1) 2023-01-11T21:41:23.6882430Z { 2023-01-11T21:41:23.6882558Z { 2023-01-11T21:41:23.6882683Z { 2023-01-11T21:41:23.6882871Z auto tmp0 = in_ptr0[i0 + (12*i1)]; 2023-01-11T21:41:23.6883029Z out_ptr0[i1 + (166375*i0)] = tmp0; 2023-01-11T21:41:23.6883117Z } 2023-01-11T21:41:23.6883188Z } 
2023-01-11T21:41:23.6883283Z } 2023-01-11T21:41:23.6883410Z } 2023-01-11T21:41:23.6883520Z } 2023-01-11T21:41:23.6883641Z } 2023-01-11T21:41:23.6883803Z ''') 2023-01-11T21:41:23.6883812Z 2023-01-11T21:41:23.6883820Z 2023-01-11T21:41:23.6883999Z async_compile.wait(globals()) 2023-01-11T21:41:23.6884132Z del async_compile 2023-01-11T21:41:23.6884140Z 2023-01-11T21:41:23.6884283Z def call(args): 2023-01-11T21:41:23.6884478Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6884599Z args.clear() 2023-01-11T21:41:23.6885030Z buf0 = empty_strided((1, 12, 55, 55, 55), (1996500, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6885369Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6885504Z del arg7_1 2023-01-11T21:41:23.6885717Z buf1 = aten.convolution(buf0, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 4) 2023-01-11T21:41:23.6885889Z assert_size_stride(buf1, (1, 128, 55, 55, 55), (21296000, 166375, 3025, 55, 1)) 2023-01-11T21:41:23.6885978Z del arg0_1 2023-01-11T21:41:23.6886065Z del arg1_1 2023-01-11T21:41:23.6886167Z return (buf1, ) 2023-01-11T21:41:23.6886175Z 2023-01-11T21:41:23.6886184Z 2023-01-11T21:41:23.6886332Z if __name__ == "__main__": 2023-01-11T21:41:23.6886540Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6886763Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6887170Z arg0_1 = rand_strided((128, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6887572Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6887834Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6888161Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6888463Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6888730Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6888962Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6889315Z arg7_1 = rand_strided((1, 12, 55, 55, 55), (1996500, 1, 36300, 660, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6889525Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6889550Z 2023-01-11T21:41:23.6889555Z 2023-01-11T21:41:23.6889657Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6889761Z import torch 2023-01-11T21:41:23.6889852Z import random 2023-01-11T21:41:23.6890002Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6890157Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6890163Z 2023-01-11T21:41:23.6890261Z aten = torch.ops.aten 2023-01-11T21:41:23.6890515Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6890620Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6890644Z 2023-01-11T21:41:23.6890649Z 2023-01-11T21:41:23.6890745Z async_compile.wait(globals()) 2023-01-11T21:41:23.6890837Z del async_compile 2023-01-11T21:41:23.6890844Z 2023-01-11T21:41:23.6890940Z def call(args): 2023-01-11T21:41:23.6891087Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6891176Z args.clear() 2023-01-11T21:41:23.6891355Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (2, 2, 2), False, (0, 0, 0), 1) 
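[editor's sketch, not part of the CI output] The 3-D cases in these dumps call aten.convolution directly instead of the mkldnn pointwise op used by the 2-D wrappers above. A minimal numerical cross-check against torch.nn.functional.conv3d, with shapes mirroring the (1, 3, 55, 55, 55) harness nearby; the comparison itself is an editorial assumption, not something the test performs.

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 55, 55, 55)
w = torch.randn(32, 3, 1, 1, 1)
b = torch.randn(32)

# Same positional argument order as the generated wrapper:
# (input, weight, bias, stride, padding, dilation, transposed, output_padding, groups)
out_aten = torch.ops.aten.convolution(x, w, b, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 1)
out_ref = F.conv3d(x, w, b, stride=1, padding=0, dilation=1, groups=1)
torch.testing.assert_close(out_aten, out_ref)
assert out_aten.shape == (1, 32, 55, 55, 55)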
2023-01-11T21:41:23.6891511Z assert_size_stride(buf0, (1, 32, 55, 55, 55), (5324000, 166375, 3025, 55, 1)) 2023-01-11T21:41:23.6891582Z del arg0_1 2023-01-11T21:41:23.6891709Z del arg1_1 2023-01-11T21:41:23.6891808Z del arg7_1 2023-01-11T21:41:23.6891899Z return (buf0, ) 2023-01-11T21:41:23.6891906Z 2023-01-11T21:41:23.6891911Z 2023-01-11T21:41:23.6892006Z if __name__ == "__main__": 2023-01-11T21:41:23.6892152Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6892314Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6892617Z arg0_1 = rand_strided((32, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6892859Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6893109Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6893377Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6893631Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6893966Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6894202Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6894511Z arg7_1 = rand_strided((1, 3, 55, 55, 55), (499125, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6894717Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6894725Z 2023-01-11T21:41:23.6894729Z 2023-01-11T21:41:23.6894874Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6894947Z import torch 2023-01-11T21:41:23.6895039Z import random 2023-01-11T21:41:23.6895187Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6895345Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6895351Z 2023-01-11T21:41:23.6895455Z aten = torch.ops.aten 2023-01-11T21:41:23.6895696Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6895818Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6895824Z 2023-01-11T21:41:23.6895830Z 2023-01-11T21:41:23.6895998Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6896256Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6896417Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6896547Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6896624Z { 2023-01-11T21:41:23.6896771Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6896851Z { 2023-01-11T21:41:23.6896950Z #pragma omp for collapse(2) 2023-01-11T21:41:23.6897053Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.6897132Z { 2023-01-11T21:41:23.6897244Z for(long i1=0; i1<166375; i1+=1) 2023-01-11T21:41:23.6897326Z { 2023-01-11T21:41:23.6897406Z { 2023-01-11T21:41:23.6897497Z { 2023-01-11T21:41:23.6897608Z auto tmp0 = in_ptr0[i0 + (3*i1)]; 2023-01-11T21:41:23.6897733Z out_ptr0[i1 + (166375*i0)] = tmp0; 2023-01-11T21:41:23.6897817Z } 2023-01-11T21:41:23.6897898Z } 2023-01-11T21:41:23.6897977Z } 2023-01-11T21:41:23.6898055Z } 2023-01-11T21:41:23.6898130Z } 2023-01-11T21:41:23.6898188Z } 2023-01-11T21:41:23.6898301Z ''') 2023-01-11T21:41:23.6898308Z 2023-01-11T21:41:23.6898313Z 2023-01-11T21:41:23.6898428Z async_compile.wait(globals()) 2023-01-11T21:41:23.6898537Z del async_compile 2023-01-11T21:41:23.6898543Z 2023-01-11T21:41:23.6898632Z def call(args): 
2023-01-11T21:41:23.6898783Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6898872Z args.clear() 2023-01-11T21:41:23.6899174Z buf0 = empty_strided((1, 3, 55, 55, 55), (499125, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6899353Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6899442Z del arg7_1 2023-01-11T21:41:23.6899622Z buf1 = aten.convolution(buf0, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (2, 2, 2), False, (0, 0, 0), 1) 2023-01-11T21:41:23.6899769Z assert_size_stride(buf1, (1, 32, 55, 55, 55), (5324000, 166375, 3025, 55, 1)) 2023-01-11T21:41:23.6899851Z del arg0_1 2023-01-11T21:41:23.6899936Z del arg1_1 2023-01-11T21:41:23.6900026Z return (buf1, ) 2023-01-11T21:41:23.6900033Z 2023-01-11T21:41:23.6900039Z 2023-01-11T21:41:23.6900136Z if __name__ == "__main__": 2023-01-11T21:41:23.6900281Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6900435Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6900727Z arg0_1 = rand_strided((32, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6900980Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6901316Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6901561Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6901826Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6902048Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6902285Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6902736Z arg7_1 = rand_strided((1, 3, 55, 55, 55), (499125, 1, 9075, 165, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6902939Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6902947Z 2023-01-11T21:41:23.6902952Z 2023-01-11T21:41:23.6903070Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6903159Z import torch 2023-01-11T21:41:23.6903248Z import random 2023-01-11T21:41:23.6903506Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6903648Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6903654Z 2023-01-11T21:41:23.6903757Z aten = torch.ops.aten 2023-01-11T21:41:23.6903926Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6904043Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6904049Z 2023-01-11T21:41:23.6904054Z 2023-01-11T21:41:23.6904162Z async_compile.wait(globals()) 2023-01-11T21:41:23.6904254Z del async_compile 2023-01-11T21:41:23.6904260Z 2023-01-11T21:41:23.6904347Z def call(args): 2023-01-11T21:41:23.6904494Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6904569Z args.clear() 2023-01-11T21:41:23.6904744Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (2, 2, 2), False, (0, 0, 0), 4) 2023-01-11T21:41:23.6904892Z assert_size_stride(buf0, (1, 128, 55, 55, 55), (21296000, 166375, 3025, 55, 1)) 2023-01-11T21:41:23.6904996Z del arg0_1 2023-01-11T21:41:23.6905080Z del arg1_1 2023-01-11T21:41:23.6905164Z del arg7_1 2023-01-11T21:41:23.6905255Z return (buf0, ) 2023-01-11T21:41:23.6905261Z 2023-01-11T21:41:23.6905267Z 2023-01-11T21:41:23.6905359Z if __name__ == "__main__": 2023-01-11T21:41:23.6905486Z 
from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6905640Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6905944Z arg0_1 = rand_strided((128, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6906200Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6906470Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6906718Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6906957Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6907210Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6907427Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6907736Z arg7_1 = rand_strided((1, 12, 55, 55, 55), (1996500, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6907953Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6907962Z 2023-01-11T21:41:23.6908319Z [2023-01-11 21:25:59,135] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 124 2023-01-11T21:41:23.6908867Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6909146Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6909487Z [2023-01-11 21:25:59,852] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 125 2023-01-11T21:41:23.6909832Z [2023-01-11 21:25:59,875] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 125 2023-01-11T21:41:23.6910385Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6910547Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6910879Z [2023-01-11 21:26:00,155] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 126 2023-01-11T21:41:23.6911278Z [2023-01-11 21:26:00,187] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 126 2023-01-11T21:41:23.6911848Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6912006Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6912347Z [2023-01-11 21:26:00,581] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 127 2023-01-11T21:41:23.6912700Z [2023-01-11 21:26:00,604] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 127 2023-01-11T21:41:23.6913239Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6913400Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6913819Z [2023-01-11 21:26:01,411] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 128 2023-01-11T21:41:23.6913831Z 2023-01-11T21:41:23.6913950Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6914039Z import torch 2023-01-11T21:41:23.6914109Z import random 2023-01-11T21:41:23.6914254Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6914413Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6914420Z 2023-01-11T21:41:23.6914518Z aten = torch.ops.aten 2023-01-11T21:41:23.6914685Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6914807Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6914814Z 2023-01-11T21:41:23.6914819Z 2023-01-11T21:41:23.6914995Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6915255Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6915403Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6915513Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6915589Z { 2023-01-11T21:41:23.6915709Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6915787Z { 2023-01-11T21:41:23.6915892Z #pragma omp for 2023-01-11T21:41:23.6915993Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.6916054Z { 2023-01-11T21:41:23.6916154Z #pragma GCC ivdep 2023-01-11T21:41:23.6916265Z for(long i1=0; i1<166375; i1+=1) 2023-01-11T21:41:23.6916344Z { 2023-01-11T21:41:23.6916424Z { 2023-01-11T21:41:23.6916512Z { 2023-01-11T21:41:23.6916731Z auto tmp0 = in_ptr0[i0 + (12*i1)]; 2023-01-11T21:41:23.6916836Z out_ptr0[i1 + (166375*i0)] = tmp0; 2023-01-11T21:41:23.6916917Z } 2023-01-11T21:41:23.6917019Z } 2023-01-11T21:41:23.6917095Z } 2023-01-11T21:41:23.6917171Z } 2023-01-11T21:41:23.6917246Z } 2023-01-11T21:41:23.6917320Z } 2023-01-11T21:41:23.6917410Z ''') 2023-01-11T21:41:23.6917417Z 2023-01-11T21:41:23.6917422Z 2023-01-11T21:41:23.6917538Z async_compile.wait(globals()) 2023-01-11T21:41:23.6917628Z del async_compile 2023-01-11T21:41:23.6917634Z 2023-01-11T21:41:23.6917719Z def call(args): 2023-01-11T21:41:23.6917876Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6917964Z args.clear() 2023-01-11T21:41:23.6918285Z buf0 = empty_strided((1, 12, 55, 55, 55), (1996500, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6918494Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6918581Z del arg7_1 
2023-01-11T21:41:23.6918762Z buf1 = aten.convolution(buf0, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (2, 2, 2), False, (0, 0, 0), 4) 2023-01-11T21:41:23.6918912Z assert_size_stride(buf1, (1, 128, 55, 55, 55), (21296000, 166375, 3025, 55, 1)) 2023-01-11T21:41:23.6918993Z del arg0_1 2023-01-11T21:41:23.6919076Z del arg1_1 2023-01-11T21:41:23.6919173Z return (buf1, ) 2023-01-11T21:41:23.6919179Z 2023-01-11T21:41:23.6919184Z 2023-01-11T21:41:23.6919275Z if __name__ == "__main__": 2023-01-11T21:41:23.6919402Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6919564Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6919860Z arg0_1 = rand_strided((128, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6920118Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6920426Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6920781Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6921132Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6921485Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6921804Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6922234Z arg7_1 = rand_strided((1, 12, 55, 55, 55), (1996500, 1, 36300, 660, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6922526Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6922535Z 2023-01-11T21:41:23.6922542Z 2023-01-11T21:41:23.6922730Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6922874Z import torch 2023-01-11T21:41:23.6923017Z import random 2023-01-11T21:41:23.6923247Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6923484Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6923492Z 2023-01-11T21:41:23.6923634Z aten = torch.ops.aten 2023-01-11T21:41:23.6923890Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6924071Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6924079Z 2023-01-11T21:41:23.6924086Z 2023-01-11T21:41:23.6924268Z async_compile.wait(globals()) 2023-01-11T21:41:23.6924415Z del async_compile 2023-01-11T21:41:23.6924422Z 2023-01-11T21:41:23.6924565Z def call(args): 2023-01-11T21:41:23.6924784Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6924926Z args.clear() 2023-01-11T21:41:23.6925172Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 1) 2023-01-11T21:41:23.6925384Z assert_size_stride(buf0, (1, 32, 53, 53, 53), (4764064, 148877, 2809, 53, 1)) 2023-01-11T21:41:23.6925611Z del arg0_1 2023-01-11T21:41:23.6925753Z del arg1_1 2023-01-11T21:41:23.6925891Z del arg7_1 2023-01-11T21:41:23.6926036Z return (buf0, ) 2023-01-11T21:41:23.6926043Z 2023-01-11T21:41:23.6926050Z 2023-01-11T21:41:23.6926200Z if __name__ == "__main__": 2023-01-11T21:41:23.6926422Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6926648Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6927058Z arg0_1 = rand_strided((32, 3, 3, 3, 3), (81, 27, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6927418Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6927774Z arg2_1 = 
rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6928123Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6928473Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6928911Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6929260Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6929678Z arg7_1 = rand_strided((1, 3, 55, 55, 55), (499125, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6929967Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6929976Z 2023-01-11T21:41:23.6929983Z 2023-01-11T21:41:23.6930171Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6930313Z import torch 2023-01-11T21:41:23.6930458Z import random 2023-01-11T21:41:23.6930686Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6930923Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6930932Z 2023-01-11T21:41:23.6931084Z aten = torch.ops.aten 2023-01-11T21:41:23.6931330Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6931512Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6931527Z 2023-01-11T21:41:23.6931534Z 2023-01-11T21:41:23.6931792Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6932169Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6932403Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6932593Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6932720Z { 2023-01-11T21:41:23.6932916Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6933029Z { 2023-01-11T21:41:23.6933208Z #pragma omp for collapse(2) 2023-01-11T21:41:23.6933370Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.6933498Z { 2023-01-11T21:41:23.6933666Z for(long i1=0; i1<166375; i1+=1) 2023-01-11T21:41:23.6933796Z { 2023-01-11T21:41:23.6933914Z { 2023-01-11T21:41:23.6934049Z { 2023-01-11T21:41:23.6934241Z auto tmp0 = in_ptr0[i0 + (3*i1)]; 2023-01-11T21:41:23.6934432Z out_ptr0[i1 + (166375*i0)] = tmp0; 2023-01-11T21:41:23.6934564Z } 2023-01-11T21:41:23.6934696Z } 2023-01-11T21:41:23.6934823Z } 2023-01-11T21:41:23.6934938Z } 2023-01-11T21:41:23.6935064Z } 2023-01-11T21:41:23.6935189Z } 2023-01-11T21:41:23.6935350Z ''') 2023-01-11T21:41:23.6935358Z 2023-01-11T21:41:23.6935364Z 2023-01-11T21:41:23.6935547Z async_compile.wait(globals()) 2023-01-11T21:41:23.6935699Z del async_compile 2023-01-11T21:41:23.6935706Z 2023-01-11T21:41:23.6935849Z def call(args): 2023-01-11T21:41:23.6936071Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6936200Z args.clear() 2023-01-11T21:41:23.6936628Z buf0 = empty_strided((1, 3, 55, 55, 55), (499125, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6936886Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6937102Z del arg7_1 2023-01-11T21:41:23.6937361Z buf1 = aten.convolution(buf0, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 1) 2023-01-11T21:41:23.6937574Z assert_size_stride(buf1, (1, 32, 53, 53, 53), (4764064, 148877, 2809, 53, 1)) 2023-01-11T21:41:23.6937709Z del arg0_1 2023-01-11T21:41:23.6937835Z del arg1_1 2023-01-11T21:41:23.6937980Z return (buf1, ) 2023-01-11T21:41:23.6937988Z 
2023-01-11T21:41:23.6937994Z 2023-01-11T21:41:23.6938144Z if __name__ == "__main__": 2023-01-11T21:41:23.6938365Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6938605Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6939010Z arg0_1 = rand_strided((32, 3, 3, 3, 3), (81, 27, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6939371Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6939724Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6940119Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6940477Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6940830Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6941169Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6941589Z arg7_1 = rand_strided((1, 3, 55, 55, 55), (499125, 1, 9075, 165, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6941880Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6941888Z 2023-01-11T21:41:23.6941895Z 2023-01-11T21:41:23.6942082Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6942226Z import torch 2023-01-11T21:41:23.6942491Z import random 2023-01-11T21:41:23.6942721Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6942964Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6942977Z 2023-01-11T21:41:23.6943135Z aten = torch.ops.aten 2023-01-11T21:41:23.6943393Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6943574Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6943582Z 2023-01-11T21:41:23.6943588Z 2023-01-11T21:41:23.6943768Z async_compile.wait(globals()) 2023-01-11T21:41:23.6943919Z del async_compile 2023-01-11T21:41:23.6943927Z 2023-01-11T21:41:23.6944058Z def call(args): 2023-01-11T21:41:23.6944276Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6944419Z args.clear() 2023-01-11T21:41:23.6944677Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 4) 2023-01-11T21:41:23.6944895Z assert_size_stride(buf0, (1, 128, 53, 53, 53), (19056256, 148877, 2809, 53, 1)) 2023-01-11T21:41:23.6945034Z del arg0_1 2023-01-11T21:41:23.6945170Z del arg1_1 2023-01-11T21:41:23.6945310Z del arg7_1 2023-01-11T21:41:23.6945444Z return (buf0, ) 2023-01-11T21:41:23.6945452Z 2023-01-11T21:41:23.6945459Z 2023-01-11T21:41:23.6945609Z if __name__ == "__main__": 2023-01-11T21:41:23.6945836Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6946077Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6946495Z arg0_1 = rand_strided((128, 3, 3, 3, 3), (81, 27, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6946857Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6947215Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6947572Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6947914Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6948269Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6948712Z arg6_1 = 
rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6949149Z arg7_1 = rand_strided((1, 12, 55, 55, 55), (1996500, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6949441Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6949450Z 2023-01-11T21:41:23.6949956Z [2023-01-11 21:26:01,440] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 128 2023-01-11T21:41:23.6950817Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6951053Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6951620Z [2023-01-11 21:26:02,157] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 129 2023-01-11T21:41:23.6952125Z [2023-01-11 21:26:02,184] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 129 2023-01-11T21:41:23.6952972Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6953212Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6953699Z [2023-01-11 21:26:02,500] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 130 2023-01-11T21:41:23.6954272Z [2023-01-11 21:26:02,529] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 130 2023-01-11T21:41:23.6955131Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6955371Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6955857Z [2023-01-11 21:26:02,919] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 131 2023-01-11T21:41:23.6956358Z [2023-01-11 21:26:02,943] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 131 2023-01-11T21:41:23.6957209Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6957455Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6957939Z [2023-01-11 21:26:03,671] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 132 2023-01-11T21:41:23.6957950Z 2023-01-11T21:41:23.6958134Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6958263Z import torch 2023-01-11T21:41:23.6958411Z import random 2023-01-11T21:41:23.6958639Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6958875Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6958883Z 2023-01-11T21:41:23.6959040Z aten = torch.ops.aten 2023-01-11T21:41:23.6959301Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6959483Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6959491Z 2023-01-11T21:41:23.6959498Z 2023-01-11T21:41:23.6959834Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6960205Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6960439Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6960633Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6960758Z { 2023-01-11T21:41:23.6960952Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6961079Z { 2023-01-11T21:41:23.6961234Z #pragma omp for 2023-01-11T21:41:23.6961382Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.6961509Z { 2023-01-11T21:41:23.6961670Z #pragma GCC ivdep 2023-01-11T21:41:23.6961841Z for(long i1=0; i1<166375; i1+=1) 2023-01-11T21:41:23.6961971Z { 2023-01-11T21:41:23.6962104Z { 2023-01-11T21:41:23.6962225Z { 2023-01-11T21:41:23.6962417Z auto tmp0 = in_ptr0[i0 + (12*i1)]; 2023-01-11T21:41:23.6962655Z out_ptr0[i1 + (166375*i0)] = tmp0; 2023-01-11T21:41:23.6962793Z } 2023-01-11T21:41:23.6962925Z } 2023-01-11T21:41:23.6963056Z } 2023-01-11T21:41:23.6963184Z } 2023-01-11T21:41:23.6963297Z } 2023-01-11T21:41:23.6963422Z } 2023-01-11T21:41:23.6963584Z ''') 2023-01-11T21:41:23.6963592Z 2023-01-11T21:41:23.6963599Z 2023-01-11T21:41:23.6963780Z async_compile.wait(globals()) 2023-01-11T21:41:23.6963929Z del async_compile 2023-01-11T21:41:23.6963937Z 2023-01-11T21:41:23.6964082Z def call(args): 2023-01-11T21:41:23.6964306Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6964435Z args.clear() 2023-01-11T21:41:23.6964876Z buf0 = empty_strided((1, 12, 55, 55, 55), (1996500, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6965134Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6965273Z del arg7_1 2023-01-11T21:41:23.6965540Z buf1 = aten.convolution(buf0, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 4) 2023-01-11T21:41:23.6965759Z assert_size_stride(buf1, (1, 128, 53, 53, 53), (19056256, 148877, 2809, 53, 1)) 2023-01-11T21:41:23.6965896Z del arg0_1 2023-01-11T21:41:23.6966031Z del arg1_1 2023-01-11T21:41:23.6966162Z return (buf1, ) 2023-01-11T21:41:23.6966170Z 2023-01-11T21:41:23.6966190Z 2023-01-11T21:41:23.6966326Z if __name__ == "__main__": 2023-01-11T21:41:23.6966550Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6966793Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6967202Z arg0_1 = rand_strided((128, 3, 3, 3, 3), (81, 27, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6967567Z 
arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6967928Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6968290Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6968632Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6968985Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6969324Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6969758Z arg7_1 = rand_strided((1, 12, 55, 55, 55), (1996500, 1, 36300, 660, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6970051Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6970059Z 2023-01-11T21:41:23.6970066Z 2023-01-11T21:41:23.6970256Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6970400Z import torch 2023-01-11T21:41:23.6970546Z import random 2023-01-11T21:41:23.6970760Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6970999Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6971079Z 2023-01-11T21:41:23.6971240Z aten = torch.ops.aten 2023-01-11T21:41:23.6971500Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6971683Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6971691Z 2023-01-11T21:41:23.6971698Z 2023-01-11T21:41:23.6971879Z async_compile.wait(globals()) 2023-01-11T21:41:23.6972028Z del async_compile 2023-01-11T21:41:23.6972035Z 2023-01-11T21:41:23.6972180Z def call(args): 2023-01-11T21:41:23.6972389Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6972530Z args.clear() 2023-01-11T21:41:23.6972789Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (2, 2, 2), False, (0, 0, 0), 1) 2023-01-11T21:41:23.6973001Z assert_size_stride(buf0, (1, 32, 51, 51, 51), (4244832, 132651, 2601, 51, 1)) 2023-01-11T21:41:23.6973141Z del arg0_1 2023-01-11T21:41:23.6973277Z del arg1_1 2023-01-11T21:41:23.6973414Z del arg7_1 2023-01-11T21:41:23.6973602Z return (buf0, ) 2023-01-11T21:41:23.6973625Z 2023-01-11T21:41:23.6973631Z 2023-01-11T21:41:23.6973768Z if __name__ == "__main__": 2023-01-11T21:41:23.6973994Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6974239Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6974651Z arg0_1 = rand_strided((32, 3, 3, 3, 3), (81, 27, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6975008Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6975367Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6975720Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6976076Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6976414Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6976754Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6977188Z arg7_1 = rand_strided((1, 3, 55, 55, 55), (499125, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6977481Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6977489Z 2023-01-11T21:41:23.6977496Z 
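The wrapper modules dumped above come from TorchInductor compiling a series of small 3D-convolution graphs (the FORWARDS graphs 124-140 noted in the surrounding compile_fx lines). A minimal sketch of the kind of eager module that would produce such output follows; the class name and the use of torch.compile as the entry point are illustrative assumptions, not the actual code in test/inductor/test_torchinductor.py. Note that the (499125, 1, 9075, 165, 3) strides reported for arg7_1 are exactly a (1, 3, 55, 55, 55) tensor in channels_last_3d memory format.

import torch

# Illustrative module (assumed shape of the test case, not the test's exact code):
# a 3D convolution whose output channels, kernel size, dilation and groups vary
# across the dumped graphs (32 vs 128 channels, 1x1x1 vs 3x3x3 kernels,
# dilation 1 or 2, groups 1 or 4).
class Conv3dUnderTest(torch.nn.Module):
    def __init__(self, in_ch=3, out_ch=32, kernel=1, dilation=2, groups=1):
        super().__init__()
        self.conv = torch.nn.Conv3d(in_ch, out_ch, kernel, dilation=dilation, groups=groups)

    def forward(self, x):
        return self.conv(x)

m = Conv3dUnderTest()
# channels_last_3d gives the (499125, 1, 9075, 165, 3) strides seen above for arg7_1
x = torch.randn(1, 3, 55, 55, 55).to(memory_format=torch.channels_last_3d)
compiled = torch.compile(m)  # assumption: public entry point; the test drives Inductor internally
out = compiled(x)            # produces wrapper code of the form dumped in this log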
2023-01-11T21:41:23.6977684Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6977826Z import torch 2023-01-11T21:41:23.6977970Z import random 2023-01-11T21:41:23.6978197Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6978419Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6978427Z 2023-01-11T21:41:23.6978586Z aten = torch.ops.aten 2023-01-11T21:41:23.6978846Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6979029Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6979037Z 2023-01-11T21:41:23.6979044Z 2023-01-11T21:41:23.6979304Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.6979689Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.6979921Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.6980110Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.6980223Z { 2023-01-11T21:41:23.6980416Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.6980542Z { 2023-01-11T21:41:23.6980720Z #pragma omp for collapse(2) 2023-01-11T21:41:23.6980882Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.6981009Z { 2023-01-11T21:41:23.6981179Z for(long i1=0; i1<166375; i1+=1) 2023-01-11T21:41:23.6981292Z { 2023-01-11T21:41:23.6981420Z { 2023-01-11T21:41:23.6981552Z { 2023-01-11T21:41:23.6981743Z auto tmp0 = in_ptr0[i0 + (3*i1)]; 2023-01-11T21:41:23.6981924Z out_ptr0[i1 + (166375*i0)] = tmp0; 2023-01-11T21:41:23.6982130Z } 2023-01-11T21:41:23.6982262Z } 2023-01-11T21:41:23.6982504Z } 2023-01-11T21:41:23.6982639Z } 2023-01-11T21:41:23.6982765Z } 2023-01-11T21:41:23.6982891Z } 2023-01-11T21:41:23.6983054Z ''') 2023-01-11T21:41:23.6983061Z 2023-01-11T21:41:23.6983068Z 2023-01-11T21:41:23.6983252Z async_compile.wait(globals()) 2023-01-11T21:41:23.6983400Z del async_compile 2023-01-11T21:41:23.6983409Z 2023-01-11T21:41:23.6983538Z def call(args): 2023-01-11T21:41:23.6983765Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6983908Z args.clear() 2023-01-11T21:41:23.6984342Z buf0 = empty_strided((1, 3, 55, 55, 55), (499125, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6984596Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.6984735Z del arg7_1 2023-01-11T21:41:23.6985082Z buf1 = aten.convolution(buf0, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (2, 2, 2), False, (0, 0, 0), 1) 2023-01-11T21:41:23.6985290Z assert_size_stride(buf1, (1, 32, 51, 51, 51), (4244832, 132651, 2601, 51, 1)) 2023-01-11T21:41:23.6985427Z del arg0_1 2023-01-11T21:41:23.6985564Z del arg1_1 2023-01-11T21:41:23.6985709Z return (buf1, ) 2023-01-11T21:41:23.6985717Z 2023-01-11T21:41:23.6985724Z 2023-01-11T21:41:23.6985879Z if __name__ == "__main__": 2023-01-11T21:41:23.6986100Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6986338Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6986745Z arg0_1 = rand_strided((32, 3, 3, 3, 3), (81, 27, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6987090Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6987450Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6987808Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6988171Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6988522Z 
arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6988860Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6989285Z arg7_1 = rand_strided((1, 3, 55, 55, 55), (499125, 1, 9075, 165, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6989579Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6989589Z 2023-01-11T21:41:23.6989596Z 2023-01-11T21:41:23.6989783Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.6989912Z import torch 2023-01-11T21:41:23.6990057Z import random 2023-01-11T21:41:23.6990287Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.6990522Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.6990529Z 2023-01-11T21:41:23.6990685Z aten = torch.ops.aten 2023-01-11T21:41:23.6990950Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.6991132Z async_compile = AsyncCompile() 2023-01-11T21:41:23.6991140Z 2023-01-11T21:41:23.6991147Z 2023-01-11T21:41:23.6991325Z async_compile.wait(globals()) 2023-01-11T21:41:23.6991459Z del async_compile 2023-01-11T21:41:23.6991466Z 2023-01-11T21:41:23.6991613Z def call(args): 2023-01-11T21:41:23.6991833Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.6991975Z args.clear() 2023-01-11T21:41:23.6992234Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (2, 2, 2), False, (0, 0, 0), 4) 2023-01-11T21:41:23.6992450Z assert_size_stride(buf0, (1, 128, 51, 51, 51), (16979328, 132651, 2601, 51, 1)) 2023-01-11T21:41:23.6992588Z del arg0_1 2023-01-11T21:41:23.6992712Z del arg1_1 2023-01-11T21:41:23.6992847Z del arg7_1 2023-01-11T21:41:23.6992995Z return (buf0, ) 2023-01-11T21:41:23.6993003Z 2023-01-11T21:41:23.6993103Z 2023-01-11T21:41:23.6993276Z if __name__ == "__main__": 2023-01-11T21:41:23.6993502Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.6993805Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.6994227Z arg0_1 = rand_strided((128, 3, 3, 3, 3), (81, 27, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6994589Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6994933Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6995294Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6995652Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6996007Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6996344Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.6996894Z arg7_1 = rand_strided((1, 12, 55, 55, 55), (1996500, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.6997192Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.6997200Z 2023-01-11T21:41:23.6997708Z [2023-01-11 21:26:03,700] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 132 2023-01-11T21:41:23.6998566Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.6998790Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.6999278Z [2023-01-11 21:26:04,347] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 133 2023-01-11T21:41:23.6999793Z [2023-01-11 21:26:04,370] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 133 2023-01-11T21:41:23.7000651Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7000890Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7001377Z [2023-01-11 21:26:04,630] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 134 2023-01-11T21:41:23.7001875Z [2023-01-11 21:26:04,658] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 134 2023-01-11T21:41:23.7002733Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7002976Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7003461Z [2023-01-11 21:26:05,003] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 135 2023-01-11T21:41:23.7003960Z [2023-01-11 21:26:05,025] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 135 2023-01-11T21:41:23.7004817Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7005126Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7005613Z [2023-01-11 21:26:05,771] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 136 2023-01-11T21:41:23.7005623Z 2023-01-11T21:41:23.7005811Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7005958Z import torch 2023-01-11T21:41:23.7006104Z import random 2023-01-11T21:41:23.7006332Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7006569Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7006577Z 2023-01-11T21:41:23.7006737Z aten = torch.ops.aten 2023-01-11T21:41:23.7006980Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7007165Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7007173Z 2023-01-11T21:41:23.7007180Z 2023-01-11T21:41:23.7007445Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7007908Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7008146Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7008335Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7008463Z { 2023-01-11T21:41:23.7008657Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7008770Z { 2023-01-11T21:41:23.7008925Z #pragma omp for 2023-01-11T21:41:23.7009087Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.7009217Z { 2023-01-11T21:41:23.7009374Z #pragma GCC ivdep 2023-01-11T21:41:23.7009542Z for(long i1=0; i1<166375; i1+=1) 2023-01-11T21:41:23.7009673Z { 2023-01-11T21:41:23.7009787Z { 2023-01-11T21:41:23.7009919Z { 2023-01-11T21:41:23.7010111Z auto tmp0 = in_ptr0[i0 + (12*i1)]; 2023-01-11T21:41:23.7010295Z out_ptr0[i1 + (166375*i0)] = tmp0; 2023-01-11T21:41:23.7010430Z } 2023-01-11T21:41:23.7010563Z } 2023-01-11T21:41:23.7010677Z } 2023-01-11T21:41:23.7010806Z } 2023-01-11T21:41:23.7010932Z } 2023-01-11T21:41:23.7011055Z } 2023-01-11T21:41:23.7011214Z ''') 2023-01-11T21:41:23.7011223Z 2023-01-11T21:41:23.7011230Z 2023-01-11T21:41:23.7011410Z async_compile.wait(globals()) 2023-01-11T21:41:23.7011560Z del async_compile 2023-01-11T21:41:23.7011567Z 2023-01-11T21:41:23.7011711Z def call(args): 2023-01-11T21:41:23.7011920Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.7012063Z args.clear() 2023-01-11T21:41:23.7012504Z buf0 = empty_strided((1, 12, 55, 55, 55), (1996500, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7012759Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.7012897Z del arg7_1 2023-01-11T21:41:23.7013159Z buf1 = aten.convolution(buf0, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (2, 2, 2), False, (0, 0, 0), 4) 2023-01-11T21:41:23.7013382Z assert_size_stride(buf1, (1, 128, 51, 51, 51), (16979328, 132651, 2601, 51, 1)) 2023-01-11T21:41:23.7013506Z del arg0_1 2023-01-11T21:41:23.7013641Z del arg1_1 2023-01-11T21:41:23.7013788Z return (buf1, ) 2023-01-11T21:41:23.7013796Z 2023-01-11T21:41:23.7013803Z 2023-01-11T21:41:23.7013952Z if __name__ == "__main__": 2023-01-11T21:41:23.7014174Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7014413Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7014823Z arg0_1 = rand_strided((128, 3, 3, 3, 3), (81, 27, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7015184Z 
arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7015529Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7015885Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7016322Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7016680Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7017013Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7017443Z arg7_1 = rand_strided((1, 12, 55, 55, 55), (1996500, 1, 36300, 660, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7017734Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.7017743Z 2023-01-11T21:41:23.7017750Z 2023-01-11T21:41:23.7017937Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7018082Z import torch 2023-01-11T21:41:23.7018211Z import random 2023-01-11T21:41:23.7018437Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7018674Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7018682Z 2023-01-11T21:41:23.7018839Z aten = torch.ops.aten 2023-01-11T21:41:23.7019157Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7019345Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7019354Z 2023-01-11T21:41:23.7019360Z 2023-01-11T21:41:23.7019539Z async_compile.wait(globals()) 2023-01-11T21:41:23.7019674Z del async_compile 2023-01-11T21:41:23.7019695Z 2023-01-11T21:41:23.7019829Z def call(args): 2023-01-11T21:41:23.7020050Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.7020195Z args.clear() 2023-01-11T21:41:23.7020450Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 1) 2023-01-11T21:41:23.7020663Z assert_size_stride(buf0, (1, 32, 55, 55, 55), (5324000, 166375, 3025, 55, 1)) 2023-01-11T21:41:23.7020803Z del arg0_1 2023-01-11T21:41:23.7020941Z del arg1_1 2023-01-11T21:41:23.7021066Z del arg7_1 2023-01-11T21:41:23.7021213Z return (buf0, ) 2023-01-11T21:41:23.7021221Z 2023-01-11T21:41:23.7021232Z 2023-01-11T21:41:23.7021389Z if __name__ == "__main__": 2023-01-11T21:41:23.7021610Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7021848Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7022253Z arg0_1 = rand_strided((32, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7022744Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7023104Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7023447Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7023802Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7024155Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7024494Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7024933Z arg7_1 = rand_strided((1, 3, 55, 55, 55), (499125, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7025227Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.7025235Z 2023-01-11T21:41:23.7025242Z 
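In the dumps where the input arrives in channels-last layout, Inductor emits the kernel_cpp_0 loop before the convolution: it reads in_ptr0[i0 + C*i1] (channel i0, flattened spatial index i1 over 55*55*55 = 166375 elements) and writes out_ptr0[i1 + 166375*i0], repacking the tensor into a contiguous NCDHW buffer that aten.convolution then consumes; when the input is already contiguous, that copy is skipped and aten.convolution is called on arg7_1 directly. A hedged eager-mode equivalent of the relayout (a sketch, not the generated code):

import torch

# The same relayout the dumped C++ loop performs: channels_last_3d -> contiguous NCDHW.
x = torch.randn(1, 3, 55, 55, 55).to(memory_format=torch.channels_last_3d)
assert x.stride() == (499125, 1, 9075, 165, 3)          # layout seen for arg7_1 above
buf0 = x.contiguous()                                    # what kernel_cpp_0 materializes
assert buf0.stride() == (499125, 166375, 3025, 55, 1)   # layout asserted for buf0 above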
2023-01-11T21:41:23.7025430Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7025573Z import torch 2023-01-11T21:41:23.7025702Z import random 2023-01-11T21:41:23.7025926Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7026164Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7026172Z 2023-01-11T21:41:23.7026329Z aten = torch.ops.aten 2023-01-11T21:41:23.7026586Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7026766Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7026775Z 2023-01-11T21:41:23.7026782Z 2023-01-11T21:41:23.7027042Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7027430Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7027751Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7027942Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7028068Z { 2023-01-11T21:41:23.7028264Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7028391Z { 2023-01-11T21:41:23.7028571Z #pragma omp for collapse(2) 2023-01-11T21:41:23.7028729Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.7028845Z { 2023-01-11T21:41:23.7029019Z for(long i1=0; i1<166375; i1+=1) 2023-01-11T21:41:23.7029154Z { 2023-01-11T21:41:23.7029284Z { 2023-01-11T21:41:23.7029418Z { 2023-01-11T21:41:23.7029609Z auto tmp0 = in_ptr0[i0 + (3*i1)]; 2023-01-11T21:41:23.7029789Z out_ptr0[i1 + (166375*i0)] = tmp0; 2023-01-11T21:41:23.7029908Z } 2023-01-11T21:41:23.7030039Z } 2023-01-11T21:41:23.7030233Z } 2023-01-11T21:41:23.7030365Z } 2023-01-11T21:41:23.7030490Z } 2023-01-11T21:41:23.7030614Z } 2023-01-11T21:41:23.7030759Z ''') 2023-01-11T21:41:23.7030781Z 2023-01-11T21:41:23.7030788Z 2023-01-11T21:41:23.7030953Z async_compile.wait(globals()) 2023-01-11T21:41:23.7031100Z del async_compile 2023-01-11T21:41:23.7031108Z 2023-01-11T21:41:23.7031253Z def call(args): 2023-01-11T21:41:23.7031473Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.7031614Z args.clear() 2023-01-11T21:41:23.7032050Z buf0 = empty_strided((1, 3, 55, 55, 55), (499125, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7032306Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.7032431Z del arg7_1 2023-01-11T21:41:23.7032690Z buf1 = aten.convolution(buf0, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 1) 2023-01-11T21:41:23.7032910Z assert_size_stride(buf1, (1, 32, 55, 55, 55), (5324000, 166375, 3025, 55, 1)) 2023-01-11T21:41:23.7033052Z del arg0_1 2023-01-11T21:41:23.7033191Z del arg1_1 2023-01-11T21:41:23.7033336Z return (buf1, ) 2023-01-11T21:41:23.7033344Z 2023-01-11T21:41:23.7033350Z 2023-01-11T21:41:23.7033502Z if __name__ == "__main__": 2023-01-11T21:41:23.7033789Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7034018Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7034424Z arg0_1 = rand_strided((32, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7034788Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7035148Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7035507Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7035859Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7036219Z 
arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7036560Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7036970Z arg7_1 = rand_strided((1, 3, 55, 55, 55), (499125, 1, 9075, 165, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7037264Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.7037273Z 2023-01-11T21:41:23.7037280Z 2023-01-11T21:41:23.7037471Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7037617Z import torch 2023-01-11T21:41:23.7037762Z import random 2023-01-11T21:41:23.7037989Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7038226Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7038234Z 2023-01-11T21:41:23.7038391Z aten = torch.ops.aten 2023-01-11T21:41:23.7038633Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7038904Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7038912Z 2023-01-11T21:41:23.7038919Z 2023-01-11T21:41:23.7039098Z async_compile.wait(globals()) 2023-01-11T21:41:23.7039246Z del async_compile 2023-01-11T21:41:23.7039253Z 2023-01-11T21:41:23.7039396Z def call(args): 2023-01-11T21:41:23.7039620Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.7039763Z args.clear() 2023-01-11T21:41:23.7040022Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 4) 2023-01-11T21:41:23.7040224Z assert_size_stride(buf0, (1, 128, 55, 55, 55), (21296000, 166375, 3025, 55, 1)) 2023-01-11T21:41:23.7040363Z del arg0_1 2023-01-11T21:41:23.7040502Z del arg1_1 2023-01-11T21:41:23.7040637Z del arg7_1 2023-01-11T21:41:23.7040784Z return (buf0, ) 2023-01-11T21:41:23.7040793Z 2023-01-11T21:41:23.7040799Z 2023-01-11T21:41:23.7040950Z if __name__ == "__main__": 2023-01-11T21:41:23.7041223Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7041468Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7041860Z arg0_1 = rand_strided((128, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7042224Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7042587Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7042947Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7043305Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7043661Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7043999Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7044439Z arg7_1 = rand_strided((1, 12, 55, 55, 55), (1996500, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7044727Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.7044735Z 2023-01-11T21:41:23.7045247Z [2023-01-11 21:26:05,799] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 136 2023-01-11T21:41:23.7046105Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7046345Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7046836Z [2023-01-11 21:26:06,493] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 137 2023-01-11T21:41:23.7047347Z [2023-01-11 21:26:06,515] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 137 2023-01-11T21:41:23.7048203Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7048442Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7048928Z [2023-01-11 21:26:06,752] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 138 2023-01-11T21:41:23.7049431Z [2023-01-11 21:26:06,781] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 138 2023-01-11T21:41:23.7050291Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7050596Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7051070Z [2023-01-11 21:26:07,099] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 139 2023-01-11T21:41:23.7051574Z [2023-01-11 21:26:07,121] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 139 2023-01-11T21:41:23.7052425Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7052665Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7053206Z [2023-01-11 21:26:07,854] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 140 2023-01-11T21:41:23.7053216Z 2023-01-11T21:41:23.7053409Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7053551Z import torch 2023-01-11T21:41:23.7053697Z import random 2023-01-11T21:41:23.7053921Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7054144Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7054170Z 2023-01-11T21:41:23.7054315Z aten = torch.ops.aten 2023-01-11T21:41:23.7054577Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7054762Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7054770Z 2023-01-11T21:41:23.7054777Z 2023-01-11T21:41:23.7055034Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7055416Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7055658Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7055847Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7055958Z { 2023-01-11T21:41:23.7056154Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7056281Z { 2023-01-11T21:41:23.7056439Z #pragma omp for 2023-01-11T21:41:23.7056602Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.7056731Z { 2023-01-11T21:41:23.7056888Z #pragma GCC ivdep 2023-01-11T21:41:23.7057046Z for(long i1=0; i1<166375; i1+=1) 2023-01-11T21:41:23.7057175Z { 2023-01-11T21:41:23.7057307Z { 2023-01-11T21:41:23.7057441Z { 2023-01-11T21:41:23.7057636Z auto tmp0 = in_ptr0[i0 + (12*i1)]; 2023-01-11T21:41:23.7057821Z out_ptr0[i1 + (166375*i0)] = tmp0; 2023-01-11T21:41:23.7057954Z } 2023-01-11T21:41:23.7058071Z } 2023-01-11T21:41:23.7058199Z } 2023-01-11T21:41:23.7058336Z } 2023-01-11T21:41:23.7058463Z } 2023-01-11T21:41:23.7058587Z } 2023-01-11T21:41:23.7058743Z ''') 2023-01-11T21:41:23.7058751Z 2023-01-11T21:41:23.7058757Z 2023-01-11T21:41:23.7058940Z async_compile.wait(globals()) 2023-01-11T21:41:23.7059076Z del async_compile 2023-01-11T21:41:23.7059084Z 2023-01-11T21:41:23.7059229Z def call(args): 2023-01-11T21:41:23.7059453Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.7059599Z args.clear() 2023-01-11T21:41:23.7060040Z buf0 = empty_strided((1, 12, 55, 55, 55), (1996500, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7060296Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.7060435Z del arg7_1 2023-01-11T21:41:23.7060682Z buf1 = aten.convolution(buf0, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 4) 2023-01-11T21:41:23.7060902Z assert_size_stride(buf1, (1, 128, 55, 55, 55), (21296000, 166375, 3025, 55, 1)) 2023-01-11T21:41:23.7061100Z del arg0_1 2023-01-11T21:41:23.7061238Z del arg1_1 2023-01-11T21:41:23.7061381Z return (buf1, ) 2023-01-11T21:41:23.7061389Z 2023-01-11T21:41:23.7061396Z 2023-01-11T21:41:23.7061546Z if __name__ == "__main__": 2023-01-11T21:41:23.7061768Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7062008Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7062526Z arg0_1 = rand_strided((128, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7062898Z 
arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7063261Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7063619Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7063976Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7064409Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7064747Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7065177Z arg7_1 = rand_strided((1, 12, 55, 55, 55), (1996500, 1, 36300, 660, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7065455Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.7065479Z 2023-01-11T21:41:23.7065486Z 2023-01-11T21:41:23.7065663Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7065805Z import torch 2023-01-11T21:41:23.7065951Z import random 2023-01-11T21:41:23.7066176Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7066416Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7066424Z 2023-01-11T21:41:23.7066583Z aten = torch.ops.aten 2023-01-11T21:41:23.7066841Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7067017Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7067024Z 2023-01-11T21:41:23.7067048Z 2023-01-11T21:41:23.7067212Z async_compile.wait(globals()) 2023-01-11T21:41:23.7067360Z del async_compile 2023-01-11T21:41:23.7067369Z 2023-01-11T21:41:23.7067502Z def call(args): 2023-01-11T21:41:23.7067709Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.7067812Z args.clear() 2023-01-11T21:41:23.7068016Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (2, 2, 2), False, (0, 0, 0), 1) 2023-01-11T21:41:23.7068184Z assert_size_stride(buf0, (1, 32, 55, 55, 55), (5324000, 166375, 3025, 55, 1)) 2023-01-11T21:41:23.7068266Z del arg0_1 2023-01-11T21:41:23.7068353Z del arg1_1 2023-01-11T21:41:23.7068444Z del arg7_1 2023-01-11T21:41:23.7068538Z return (buf0, ) 2023-01-11T21:41:23.7068545Z 2023-01-11T21:41:23.7068550Z 2023-01-11T21:41:23.7068651Z if __name__ == "__main__": 2023-01-11T21:41:23.7068820Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7068998Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7069322Z arg0_1 = rand_strided((32, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7069598Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7069890Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7070169Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7070447Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7070728Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7071008Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7071357Z arg7_1 = rand_strided((1, 3, 55, 55, 55), (499125, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7071685Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.7071694Z 2023-01-11T21:41:23.7071700Z 
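Each of these dumps is a self-contained benchmark module: the __main__ block rebuilds the inputs with rand_strided at the exact shapes and strides the graph was compiled for and times call() via print_performance. A minimal sketch of replaying the simplest dump outside CI is shown below; the choice of which dump to replay and the omission of the unused arg2_1..arg6_1 inputs are assumptions made for illustration.

import torch
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance

aten = torch.ops.aten

def call(args):
    # Mirrors the direct ATen fallback dumped above: stride (1,1,1),
    # padding (0,0,0), dilation (2,2,2), transposed=False, groups=1.
    arg0_1, arg1_1, arg7_1 = args
    args.clear()
    buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (2, 2, 2), False, (0, 0, 0), 1)
    return (buf0, )

arg0_1 = rand_strided((32, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32)
arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32)
arg7_1 = rand_strided((1, 3, 55, 55, 55), (499125, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32)
print_performance(lambda: call([arg0_1, arg1_1, arg7_1]))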
2023-01-11T21:41:23.7071832Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7071917Z import torch 2023-01-11T21:41:23.7072015Z import random 2023-01-11T21:41:23.7072178Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7072349Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7072356Z 2023-01-11T21:41:23.7072471Z aten = torch.ops.aten 2023-01-11T21:41:23.7072650Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7072780Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7072787Z 2023-01-11T21:41:23.7072792Z 2023-01-11T21:41:23.7072968Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7073256Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7073427Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7073626Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7073712Z { 2023-01-11T21:41:23.7073923Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7074017Z { 2023-01-11T21:41:23.7074135Z #pragma omp for collapse(2) 2023-01-11T21:41:23.7074250Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.7074342Z { 2023-01-11T21:41:23.7074469Z for(long i1=0; i1<166375; i1+=1) 2023-01-11T21:41:23.7074556Z { 2023-01-11T21:41:23.7074641Z { 2023-01-11T21:41:23.7074732Z { 2023-01-11T21:41:23.7074864Z auto tmp0 = in_ptr0[i0 + (3*i1)]; 2023-01-11T21:41:23.7075000Z out_ptr0[i1 + (166375*i0)] = tmp0; 2023-01-11T21:41:23.7075093Z } 2023-01-11T21:41:23.7075184Z } 2023-01-11T21:41:23.7075276Z } 2023-01-11T21:41:23.7075362Z } 2023-01-11T21:41:23.7075449Z } 2023-01-11T21:41:23.7075522Z } 2023-01-11T21:41:23.7075645Z ''') 2023-01-11T21:41:23.7075653Z 2023-01-11T21:41:23.7075659Z 2023-01-11T21:41:23.7075785Z async_compile.wait(globals()) 2023-01-11T21:41:23.7075890Z del async_compile 2023-01-11T21:41:23.7075898Z 2023-01-11T21:41:23.7075986Z def call(args): 2023-01-11T21:41:23.7076156Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.7076258Z args.clear() 2023-01-11T21:41:23.7076597Z buf0 = empty_strided((1, 3, 55, 55, 55), (499125, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7076795Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.7076891Z del arg7_1 2023-01-11T21:41:23.7077091Z buf1 = aten.convolution(buf0, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (2, 2, 2), False, (0, 0, 0), 1) 2023-01-11T21:41:23.7077261Z assert_size_stride(buf1, (1, 32, 55, 55, 55), (5324000, 166375, 3025, 55, 1)) 2023-01-11T21:41:23.7077359Z del arg0_1 2023-01-11T21:41:23.7077459Z del arg1_1 2023-01-11T21:41:23.7077553Z return (buf1, ) 2023-01-11T21:41:23.7077561Z 2023-01-11T21:41:23.7077568Z 2023-01-11T21:41:23.7077665Z if __name__ == "__main__": 2023-01-11T21:41:23.7077830Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7078013Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7078343Z arg0_1 = rand_strided((32, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7078639Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7078915Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7079203Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7079482Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7079745Z 
arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7080099Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7080452Z arg7_1 = rand_strided((1, 3, 55, 55, 55), (499125, 1, 9075, 165, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7080676Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.7080685Z 2023-01-11T21:41:23.7080690Z 2023-01-11T21:41:23.7080822Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7080924Z import torch 2023-01-11T21:41:23.7081025Z import random 2023-01-11T21:41:23.7081181Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7081341Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7081348Z 2023-01-11T21:41:23.7081455Z aten = torch.ops.aten 2023-01-11T21:41:23.7081648Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7081784Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7081790Z 2023-01-11T21:41:23.7081801Z 2023-01-11T21:41:23.7082009Z async_compile.wait(globals()) 2023-01-11T21:41:23.7082116Z del async_compile 2023-01-11T21:41:23.7082122Z 2023-01-11T21:41:23.7082223Z def call(args): 2023-01-11T21:41:23.7082393Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.7082486Z args.clear() 2023-01-11T21:41:23.7082679Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (2, 2, 2), False, (0, 0, 0), 4) 2023-01-11T21:41:23.7082850Z assert_size_stride(buf0, (1, 128, 55, 55, 55), (21296000, 166375, 3025, 55, 1)) 2023-01-11T21:41:23.7082939Z del arg0_1 2023-01-11T21:41:23.7083026Z del arg1_1 2023-01-11T21:41:23.7083118Z del arg7_1 2023-01-11T21:41:23.7083217Z return (buf0, ) 2023-01-11T21:41:23.7083224Z 2023-01-11T21:41:23.7083231Z 2023-01-11T21:41:23.7083338Z if __name__ == "__main__": 2023-01-11T21:41:23.7083495Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7083667Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7084045Z arg0_1 = rand_strided((128, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7084367Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7084674Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7084980Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7085293Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7085602Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7085876Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7086258Z arg7_1 = rand_strided((1, 12, 55, 55, 55), (1996500, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7086512Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.7086529Z 2023-01-11T21:41:23.7086984Z [2023-01-11 21:26:07,882] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 140 2023-01-11T21:41:23.7087718Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7087918Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7088353Z [2023-01-11 21:26:08,571] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 141 2023-01-11T21:41:23.7088794Z [2023-01-11 21:26:08,593] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 141 2023-01-11T21:41:23.7089517Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7089805Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7090234Z [2023-01-11 21:26:08,865] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 142 2023-01-11T21:41:23.7090654Z [2023-01-11 21:26:08,893] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 142 2023-01-11T21:41:23.7091416Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7091626Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7092054Z [2023-01-11 21:26:09,277] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 143 2023-01-11T21:41:23.7092497Z [2023-01-11 21:26:09,299] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 143 2023-01-11T21:41:23.7093229Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7093428Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7093859Z [2023-01-11 21:26:10,089] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 144 2023-01-11T21:41:23.7093874Z 2023-01-11T21:41:23.7094023Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7094127Z import torch 2023-01-11T21:41:23.7094228Z import random 2023-01-11T21:41:23.7094416Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7094609Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7094618Z 2023-01-11T21:41:23.7094744Z aten = torch.ops.aten 2023-01-11T21:41:23.7094958Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7095104Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7095111Z 2023-01-11T21:41:23.7095118Z 2023-01-11T21:41:23.7095346Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7095687Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7095874Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7096016Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7096123Z { 2023-01-11T21:41:23.7096286Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7096387Z { 2023-01-11T21:41:23.7096514Z #pragma omp for 2023-01-11T21:41:23.7096643Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.7096728Z { 2023-01-11T21:41:23.7096862Z #pragma GCC ivdep 2023-01-11T21:41:23.7097007Z for(long i1=0; i1<166375; i1+=1) 2023-01-11T21:41:23.7097112Z { 2023-01-11T21:41:23.7097220Z { 2023-01-11T21:41:23.7097326Z { 2023-01-11T21:41:23.7097490Z auto tmp0 = in_ptr0[i0 + (12*i1)]; 2023-01-11T21:41:23.7097633Z out_ptr0[i1 + (166375*i0)] = tmp0; 2023-01-11T21:41:23.7097744Z } 2023-01-11T21:41:23.7097850Z } 2023-01-11T21:41:23.7097953Z } 2023-01-11T21:41:23.7098055Z } 2023-01-11T21:41:23.7098153Z } 2023-01-11T21:41:23.7098253Z } 2023-01-11T21:41:23.7098454Z ''') 2023-01-11T21:41:23.7098466Z 2023-01-11T21:41:23.7098474Z 2023-01-11T21:41:23.7098627Z async_compile.wait(globals()) 2023-01-11T21:41:23.7098743Z del async_compile 2023-01-11T21:41:23.7098751Z 2023-01-11T21:41:23.7098863Z def call(args): 2023-01-11T21:41:23.7099049Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.7099162Z args.clear() 2023-01-11T21:41:23.7099563Z buf0 = empty_strided((1, 12, 55, 55, 55), (1996500, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7099758Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.7099865Z del arg7_1 2023-01-11T21:41:23.7100086Z buf1 = aten.convolution(buf0, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (2, 2, 2), False, (0, 0, 0), 4) 2023-01-11T21:41:23.7100267Z assert_size_stride(buf1, (1, 128, 55, 55, 55), (21296000, 166375, 3025, 55, 1)) 2023-01-11T21:41:23.7100380Z del arg0_1 2023-01-11T21:41:23.7100487Z del arg1_1 2023-01-11T21:41:23.7100662Z return (buf1, ) 2023-01-11T21:41:23.7100673Z 2023-01-11T21:41:23.7100679Z 2023-01-11T21:41:23.7100805Z if __name__ == "__main__": 2023-01-11T21:41:23.7100974Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7101172Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7101533Z arg0_1 = rand_strided((128, 3, 1, 1, 1), (3, 1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7101847Z 
arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7102146Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7102572Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7102885Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7103190Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7103478Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7103866Z arg7_1 = rand_strided((1, 12, 55, 55, 55), (1996500, 1, 36300, 660, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7104111Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.7104122Z 2023-01-11T21:41:23.7104129Z 2023-01-11T21:41:23.7104277Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7104386Z import torch 2023-01-11T21:41:23.7104496Z import random 2023-01-11T21:41:23.7104680Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7104879Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7104886Z 2023-01-11T21:41:23.7104988Z aten = torch.ops.aten 2023-01-11T21:41:23.7105202Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7105341Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7105349Z 2023-01-11T21:41:23.7105356Z 2023-01-11T21:41:23.7105504Z async_compile.wait(globals()) 2023-01-11T21:41:23.7105613Z del async_compile 2023-01-11T21:41:23.7105620Z 2023-01-11T21:41:23.7105726Z def call(args): 2023-01-11T21:41:23.7105907Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.7106019Z args.clear() 2023-01-11T21:41:23.7106233Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 1) 2023-01-11T21:41:23.7106420Z assert_size_stride(buf0, (1, 32, 53, 53, 53), (4764064, 148877, 2809, 53, 1)) 2023-01-11T21:41:23.7106526Z del arg0_1 2023-01-11T21:41:23.7106630Z del arg1_1 2023-01-11T21:41:23.7106733Z del arg7_1 2023-01-11T21:41:23.7106845Z return (buf0, ) 2023-01-11T21:41:23.7106852Z 2023-01-11T21:41:23.7106859Z 2023-01-11T21:41:23.7106978Z if __name__ == "__main__": 2023-01-11T21:41:23.7107165Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7107352Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7107842Z arg0_1 = rand_strided((32, 3, 3, 3, 3), (81, 27, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7108161Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7108477Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7108782Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7109086Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7109388Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7109677Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7110044Z arg7_1 = rand_strided((1, 3, 55, 55, 55), (499125, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7110291Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.7110305Z 2023-01-11T21:41:23.7110392Z 
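Several of the generated wrappers above contain a kernel_cpp_0 that is nothing more than a layout change: it copies in_ptr0[i0 + 12*i1] into out_ptr0[i1 + 166375*i0], i.e. it rewrites the channels-last-3d style input (strides (1996500, 1, 36300, 660, 12)) into the default contiguous layout that aten.convolution is then called with. A rough PyTorch-level sketch of that step, using the same shape and strides as the repro; the tensor x stands in for arg7_1 and is not taken from the log.

import torch

# Build a tensor with the same layout as arg7_1 in the __main__ harness above.
x = torch.empty_strided((1, 12, 55, 55, 55), (1996500, 1, 36300, 660, 12)).normal_()

# .contiguous() performs the same element reordering as the C++ loop nest,
# producing the (1996500, 166375, 3025, 55, 1) buffer the convolution expects.
buf0 = x.contiguous()
assert buf0.stride() == (1996500, 166375, 3025, 55, 1)
assert torch.equal(buf0, x)  # same values, different memory layout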
2023-01-11T21:41:23.7110546Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7110663Z import torch 2023-01-11T21:41:23.7110775Z import random 2023-01-11T21:41:23.7110958Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7111150Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7111157Z 2023-01-11T21:41:23.7111278Z aten = torch.ops.aten 2023-01-11T21:41:23.7111486Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7111635Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7111643Z 2023-01-11T21:41:23.7111649Z 2023-01-11T21:41:23.7111875Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7112205Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7112401Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7112557Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7112654Z { 2023-01-11T21:41:23.7112815Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7112892Z { 2023-01-11T21:41:23.7113033Z #pragma omp for collapse(2) 2023-01-11T21:41:23.7113166Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.7113264Z { 2023-01-11T21:41:23.7113405Z for(long i1=0; i1<166375; i1+=1) 2023-01-11T21:41:23.7113503Z { 2023-01-11T21:41:23.7113588Z { 2023-01-11T21:41:23.7113689Z { 2023-01-11T21:41:23.7113923Z auto tmp0 = in_ptr0[i0 + (3*i1)]; 2023-01-11T21:41:23.7114083Z out_ptr0[i1 + (166375*i0)] = tmp0; 2023-01-11T21:41:23.7114186Z } 2023-01-11T21:41:23.7114287Z } 2023-01-11T21:41:23.7114382Z } 2023-01-11T21:41:23.7114466Z } 2023-01-11T21:41:23.7114563Z } 2023-01-11T21:41:23.7114655Z } 2023-01-11T21:41:23.7114792Z ''') 2023-01-11T21:41:23.7114801Z 2023-01-11T21:41:23.7114813Z 2023-01-11T21:41:23.7114967Z async_compile.wait(globals()) 2023-01-11T21:41:23.7115090Z del async_compile 2023-01-11T21:41:23.7115097Z 2023-01-11T21:41:23.7115209Z def call(args): 2023-01-11T21:41:23.7115388Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.7115503Z args.clear() 2023-01-11T21:41:23.7115896Z buf0 = empty_strided((1, 3, 55, 55, 55), (499125, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7116115Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.7116225Z del arg7_1 2023-01-11T21:41:23.7116455Z buf1 = aten.convolution(buf0, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 1) 2023-01-11T21:41:23.7116638Z assert_size_stride(buf1, (1, 32, 53, 53, 53), (4764064, 148877, 2809, 53, 1)) 2023-01-11T21:41:23.7116748Z del arg0_1 2023-01-11T21:41:23.7116843Z del arg1_1 2023-01-11T21:41:23.7116957Z return (buf1, ) 2023-01-11T21:41:23.7117041Z 2023-01-11T21:41:23.7117052Z 2023-01-11T21:41:23.7117176Z if __name__ == "__main__": 2023-01-11T21:41:23.7117357Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7117531Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7117860Z arg0_1 = rand_strided((32, 3, 3, 3, 3), (81, 27, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7118149Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7118430Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7118717Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7119025Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7119333Z 
arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7119623Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7120063Z arg7_1 = rand_strided((1, 3, 55, 55, 55), (499125, 1, 9075, 165, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7120316Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.7120324Z 2023-01-11T21:41:23.7120331Z 2023-01-11T21:41:23.7120479Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7120592Z import torch 2023-01-11T21:41:23.7120680Z import random 2023-01-11T21:41:23.7120844Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7121029Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7121038Z 2023-01-11T21:41:23.7121161Z aten = torch.ops.aten 2023-01-11T21:41:23.7121355Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7121498Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7121506Z 2023-01-11T21:41:23.7121513Z 2023-01-11T21:41:23.7121656Z async_compile.wait(globals()) 2023-01-11T21:41:23.7121772Z del async_compile 2023-01-11T21:41:23.7121789Z 2023-01-11T21:41:23.7121885Z def call(args): 2023-01-11T21:41:23.7122054Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.7122153Z args.clear() 2023-01-11T21:41:23.7122381Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 4) 2023-01-11T21:41:23.7122549Z assert_size_stride(buf0, (1, 128, 53, 53, 53), (19056256, 148877, 2809, 53, 1)) 2023-01-11T21:41:23.7122644Z del arg0_1 2023-01-11T21:41:23.7122745Z del arg1_1 2023-01-11T21:41:23.7122845Z del arg7_1 2023-01-11T21:41:23.7122936Z return (buf0, ) 2023-01-11T21:41:23.7122944Z 2023-01-11T21:41:23.7122949Z 2023-01-11T21:41:23.7123053Z if __name__ == "__main__": 2023-01-11T21:41:23.7123233Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7123425Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7123766Z arg0_1 = rand_strided((128, 3, 3, 3, 3), (81, 27, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7124082Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7124373Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7124672Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7124964Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7125244Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7125503Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7125859Z arg7_1 = rand_strided((1, 12, 55, 55, 55), (1996500, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7126113Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.7126122Z 2023-01-11T21:41:23.7126529Z [2023-01-11 21:26:10,117] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 144 2023-01-11T21:41:23.7127214Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7127387Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7127755Z [2023-01-11 21:26:10,822] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 145 2023-01-11T21:41:23.7128126Z [2023-01-11 21:26:10,844] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 145 2023-01-11T21:41:23.7128779Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7128960Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7129335Z [2023-01-11 21:26:11,159] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 146 2023-01-11T21:41:23.7129729Z [2023-01-11 21:26:11,187] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 146 2023-01-11T21:41:23.7130328Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7130496Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7130863Z [2023-01-11 21:26:11,575] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 147 2023-01-11T21:41:23.7131236Z [2023-01-11 21:26:11,597] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 147 2023-01-11T21:41:23.7131838Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7132011Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7132379Z [2023-01-11 21:26:12,268] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 148 2023-01-11T21:41:23.7132387Z 2023-01-11T21:41:23.7132520Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7132599Z import torch 2023-01-11T21:41:23.7132693Z import random 2023-01-11T21:41:23.7132851Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7133014Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7133021Z 2023-01-11T21:41:23.7133126Z aten = torch.ops.aten 2023-01-11T21:41:23.7133306Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7133428Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7133435Z 2023-01-11T21:41:23.7133442Z 2023-01-11T21:41:23.7133625Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7133895Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7134056Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7134191Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7134273Z { 2023-01-11T21:41:23.7134407Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7134553Z { 2023-01-11T21:41:23.7134657Z #pragma omp for 2023-01-11T21:41:23.7134751Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.7134839Z { 2023-01-11T21:41:23.7134949Z #pragma GCC ivdep 2023-01-11T21:41:23.7135069Z for(long i1=0; i1<166375; i1+=1) 2023-01-11T21:41:23.7135155Z { 2023-01-11T21:41:23.7135241Z { 2023-01-11T21:41:23.7135315Z { 2023-01-11T21:41:23.7135452Z auto tmp0 = in_ptr0[i0 + (12*i1)]; 2023-01-11T21:41:23.7135583Z out_ptr0[i1 + (166375*i0)] = tmp0; 2023-01-11T21:41:23.7135676Z } 2023-01-11T21:41:23.7135765Z } 2023-01-11T21:41:23.7135852Z } 2023-01-11T21:41:23.7135934Z } 2023-01-11T21:41:23.7136003Z } 2023-01-11T21:41:23.7136079Z } 2023-01-11T21:41:23.7136192Z ''') 2023-01-11T21:41:23.7136199Z 2023-01-11T21:41:23.7136204Z 2023-01-11T21:41:23.7136380Z async_compile.wait(globals()) 2023-01-11T21:41:23.7136483Z del async_compile 2023-01-11T21:41:23.7136489Z 2023-01-11T21:41:23.7136583Z def call(args): 2023-01-11T21:41:23.7136753Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.7136834Z args.clear() 2023-01-11T21:41:23.7137171Z buf0 = empty_strided((1, 12, 55, 55, 55), (1996500, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7137354Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.7137445Z del arg7_1 2023-01-11T21:41:23.7137648Z buf1 = aten.convolution(buf0, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (1, 1, 1), False, (0, 0, 0), 4) 2023-01-11T21:41:23.7137813Z assert_size_stride(buf1, (1, 128, 53, 53, 53), (19056256, 148877, 2809, 53, 1)) 2023-01-11T21:41:23.7137906Z del arg0_1 2023-01-11T21:41:23.7138001Z del arg1_1 2023-01-11T21:41:23.7138092Z return (buf1, ) 2023-01-11T21:41:23.7138099Z 2023-01-11T21:41:23.7138110Z 2023-01-11T21:41:23.7138207Z if __name__ == "__main__": 2023-01-11T21:41:23.7138358Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7138518Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7138836Z arg0_1 = rand_strided((128, 3, 3, 3, 3), (81, 27, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7139112Z 
arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7139383Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7139670Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7139958Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7140237Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7140503Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7140838Z arg7_1 = rand_strided((1, 12, 55, 55, 55), (1996500, 1, 36300, 660, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7141061Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.7141069Z 2023-01-11T21:41:23.7141076Z 2023-01-11T21:41:23.7141206Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7141302Z import torch 2023-01-11T21:41:23.7141404Z import random 2023-01-11T21:41:23.7141560Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7141740Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7141747Z 2023-01-11T21:41:23.7141855Z aten = torch.ops.aten 2023-01-11T21:41:23.7142054Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7142185Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7142193Z 2023-01-11T21:41:23.7142199Z 2023-01-11T21:41:23.7142488Z async_compile.wait(globals()) 2023-01-11T21:41:23.7142601Z del async_compile 2023-01-11T21:41:23.7142720Z 2023-01-11T21:41:23.7142822Z def call(args): 2023-01-11T21:41:23.7142975Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.7143075Z args.clear() 2023-01-11T21:41:23.7143272Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (2, 2, 2), False, (0, 0, 0), 1) 2023-01-11T21:41:23.7143445Z assert_size_stride(buf0, (1, 32, 51, 51, 51), (4244832, 132651, 2601, 51, 1)) 2023-01-11T21:41:23.7143545Z del arg0_1 2023-01-11T21:41:23.7143639Z del arg1_1 2023-01-11T21:41:23.7143727Z del arg7_1 2023-01-11T21:41:23.7143817Z return (buf0, ) 2023-01-11T21:41:23.7143839Z 2023-01-11T21:41:23.7143846Z 2023-01-11T21:41:23.7143941Z if __name__ == "__main__": 2023-01-11T21:41:23.7144114Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7144299Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7144654Z arg0_1 = rand_strided((32, 3, 3, 3, 3), (81, 27, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7145042Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7145333Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7145613Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7145884Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7146122Z arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7146353Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7146675Z arg7_1 = rand_strided((1, 3, 55, 55, 55), (499125, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7146884Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.7146892Z 2023-01-11T21:41:23.7146897Z 
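Every wrapper above re-checks its convolution output with assert_size_stride before the buffer is reused; the real check is the C++ binding torch._C._dynamo.guards.assert_size_stride imported at the top of each module. The following is only a conceptual Python stand-in for what that guard verifies, not the actual implementation, exercised with the size/stride pair checked in the wrapper just above.

import torch

def assert_size_stride_sketch(t, size, stride):
    # Fail loudly if the tensor does not match the exact size/stride the
    # compiled wrapper was specialised for (hypothetical stand-in for the guard).
    if tuple(t.size()) != tuple(size) or tuple(t.stride()) != tuple(stride):
        raise AssertionError(
            f"expected size={size} / stride={stride}, "
            f"got {tuple(t.size())} / {tuple(t.stride())}"
        )

buf0 = torch.empty_strided((1, 32, 51, 51, 51), (4244832, 132651, 2601, 51, 1))
assert_size_stride_sketch(buf0, (1, 32, 51, 51, 51), (4244832, 132651, 2601, 51, 1))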
2023-01-11T21:41:23.7147019Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7147114Z import torch 2023-01-11T21:41:23.7147207Z import random 2023-01-11T21:41:23.7147364Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7147514Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7147521Z 2023-01-11T21:41:23.7147624Z aten = torch.ops.aten 2023-01-11T21:41:23.7147805Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7147925Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7147932Z 2023-01-11T21:41:23.7147938Z 2023-01-11T21:41:23.7148128Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7148410Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7148571Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7148705Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7148772Z { 2023-01-11T21:41:23.7148907Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7148993Z { 2023-01-11T21:41:23.7149125Z #pragma omp for collapse(2) 2023-01-11T21:41:23.7149240Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.7149325Z { 2023-01-11T21:41:23.7149449Z for(long i1=0; i1<166375; i1+=1) 2023-01-11T21:41:23.7149524Z { 2023-01-11T21:41:23.7149615Z { 2023-01-11T21:41:23.7149703Z { 2023-01-11T21:41:23.7149841Z auto tmp0 = in_ptr0[i0 + (3*i1)]; 2023-01-11T21:41:23.7149980Z out_ptr0[i1 + (166375*i0)] = tmp0; 2023-01-11T21:41:23.7150075Z } 2023-01-11T21:41:23.7150166Z } 2023-01-11T21:41:23.7150244Z } 2023-01-11T21:41:23.7150332Z } 2023-01-11T21:41:23.7150419Z } 2023-01-11T21:41:23.7150507Z } 2023-01-11T21:41:23.7150619Z ''') 2023-01-11T21:41:23.7150628Z 2023-01-11T21:41:23.7150634Z 2023-01-11T21:41:23.7150762Z async_compile.wait(globals()) 2023-01-11T21:41:23.7150858Z del async_compile 2023-01-11T21:41:23.7150969Z 2023-01-11T21:41:23.7151050Z def call(args): 2023-01-11T21:41:23.7151212Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.7151302Z args.clear() 2023-01-11T21:41:23.7151691Z buf0 = empty_strided((1, 3, 55, 55, 55), (499125, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7151918Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.7152027Z del arg7_1 2023-01-11T21:41:23.7152265Z buf1 = aten.convolution(buf0, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (2, 2, 2), False, (0, 0, 0), 1) 2023-01-11T21:41:23.7152444Z assert_size_stride(buf1, (1, 32, 51, 51, 51), (4244832, 132651, 2601, 51, 1)) 2023-01-11T21:41:23.7152556Z del arg0_1 2023-01-11T21:41:23.7152666Z del arg1_1 2023-01-11T21:41:23.7152780Z return (buf1, ) 2023-01-11T21:41:23.7152789Z 2023-01-11T21:41:23.7152796Z 2023-01-11T21:41:23.7152919Z if __name__ == "__main__": 2023-01-11T21:41:23.7153171Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7153379Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7153834Z arg0_1 = rand_strided((32, 3, 3, 3, 3), (81, 27, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7154155Z arg1_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7154487Z arg2_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7154819Z arg3_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7155151Z arg4_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7155481Z 
arg5_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7155797Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7156201Z arg7_1 = rand_strided((1, 3, 55, 55, 55), (499125, 1, 9075, 165, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7156477Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.7156487Z 2023-01-11T21:41:23.7156495Z 2023-01-11T21:41:23.7156657Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7156761Z import torch 2023-01-11T21:41:23.7156879Z import random 2023-01-11T21:41:23.7157082Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7157288Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7157296Z 2023-01-11T21:41:23.7157430Z aten = torch.ops.aten 2023-01-11T21:41:23.7157668Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7157824Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7157831Z 2023-01-11T21:41:23.7157838Z 2023-01-11T21:41:23.7157976Z async_compile.wait(globals()) 2023-01-11T21:41:23.7158101Z del async_compile 2023-01-11T21:41:23.7158108Z 2023-01-11T21:41:23.7158228Z def call(args): 2023-01-11T21:41:23.7158437Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.7158551Z args.clear() 2023-01-11T21:41:23.7158794Z buf0 = aten.convolution(arg7_1, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (2, 2, 2), False, (0, 0, 0), 4) 2023-01-11T21:41:23.7158994Z assert_size_stride(buf0, (1, 128, 51, 51, 51), (16979328, 132651, 2601, 51, 1)) 2023-01-11T21:41:23.7159106Z del arg0_1 2023-01-11T21:41:23.7159207Z del arg1_1 2023-01-11T21:41:23.7159318Z del arg7_1 2023-01-11T21:41:23.7159438Z return (buf0, ) 2023-01-11T21:41:23.7159445Z 2023-01-11T21:41:23.7159453Z 2023-01-11T21:41:23.7159582Z if __name__ == "__main__": 2023-01-11T21:41:23.7159779Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7159988Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7160382Z arg0_1 = rand_strided((128, 3, 3, 3, 3), (81, 27, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7160730Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7161117Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7161467Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7161801Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7162144Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7162467Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7162889Z arg7_1 = rand_strided((1, 12, 55, 55, 55), (1996500, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7163156Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.7163164Z 2023-01-11T21:41:23.7163654Z [2023-01-11 21:26:12,295] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 148 2023-01-11T21:41:23.7163664Z 2023-01-11T21:41:23.7163871Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7163973Z import torch 2023-01-11T21:41:23.7164091Z import random 2023-01-11T21:41:23.7164291Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7164500Z from 
torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7164508Z 2023-01-11T21:41:23.7164641Z aten = torch.ops.aten 2023-01-11T21:41:23.7164875Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7165027Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7165036Z 2023-01-11T21:41:23.7165043Z 2023-01-11T21:41:23.7165262Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7165627Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7165830Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7165994Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7166099Z { 2023-01-11T21:41:23.7166272Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7166377Z { 2023-01-11T21:41:23.7166492Z #pragma omp for 2023-01-11T21:41:23.7166632Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.7166739Z { 2023-01-11T21:41:23.7166873Z #pragma GCC ivdep 2023-01-11T21:41:23.7167024Z for(long i1=0; i1<166375; i1+=1) 2023-01-11T21:41:23.7167133Z { 2023-01-11T21:41:23.7167241Z { 2023-01-11T21:41:23.7167338Z { 2023-01-11T21:41:23.7167509Z auto tmp0 = in_ptr0[i0 + (12*i1)]; 2023-01-11T21:41:23.7167672Z out_ptr0[i1 + (166375*i0)] = tmp0; 2023-01-11T21:41:23.7167782Z } 2023-01-11T21:41:23.7167889Z } 2023-01-11T21:41:23.7167993Z } 2023-01-11T21:41:23.7168097Z } 2023-01-11T21:41:23.7168183Z } 2023-01-11T21:41:23.7168281Z } 2023-01-11T21:41:23.7168416Z ''') 2023-01-11T21:41:23.7168423Z 2023-01-11T21:41:23.7168435Z 2023-01-11T21:41:23.7168588Z async_compile.wait(globals()) 2023-01-11T21:41:23.7178381Z del async_compile 2023-01-11T21:41:23.7178399Z 2023-01-11T21:41:23.7178555Z def call(args): 2023-01-11T21:41:23.7178739Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args 2023-01-11T21:41:23.7178838Z args.clear() 2023-01-11T21:41:23.7179234Z buf0 = empty_strided((1, 12, 55, 55, 55), (1996500, 166375, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7179412Z kernel_cpp_0(c_void_p(arg7_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.7179488Z del arg7_1 2023-01-11T21:41:23.7179684Z buf1 = aten.convolution(buf0, arg0_1, arg1_1, (1, 1, 1), (0, 0, 0), (2, 2, 2), False, (0, 0, 0), 4) 2023-01-11T21:41:23.7179849Z assert_size_stride(buf1, (1, 128, 51, 51, 51), (16979328, 132651, 2601, 51, 1)) 2023-01-11T21:41:23.7179938Z del arg0_1 2023-01-11T21:41:23.7180028Z del arg1_1 2023-01-11T21:41:23.7180125Z return (buf1, ) 2023-01-11T21:41:23.7180239Z 2023-01-11T21:41:23.7180250Z 2023-01-11T21:41:23.7180348Z if __name__ == "__main__": 2023-01-11T21:41:23.7180509Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7180658Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7180986Z arg0_1 = rand_strided((128, 3, 3, 3, 3), (81, 27, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7181259Z arg1_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7181525Z arg2_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7181793Z arg3_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7182079Z arg4_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7182566Z arg5_1 = rand_strided((128, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7182842Z arg6_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7183292Z arg7_1 = rand_strided((1, 
12, 55, 55, 55), (1996500, 1, 36300, 660, 12), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7183508Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1])) 2023-01-11T21:41:23.7183515Z 2023-01-11T21:41:23.7183611Z ok (37.036s) 2023-01-11T21:41:23.7184344Z test_conv_functional_bn_fuse_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7184535Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7184934Z [2023-01-11 21:26:12,853] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 149 2023-01-11T21:41:23.7185328Z [2023-01-11 21:26:14,406] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 149 2023-01-11T21:41:23.7185336Z 2023-01-11T21:41:23.7185463Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7185557Z import torch 2023-01-11T21:41:23.7185639Z import random 2023-01-11T21:41:23.7185805Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7185977Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7185985Z 2023-01-11T21:41:23.7186090Z aten = torch.ops.aten 2023-01-11T21:41:23.7186283Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7186403Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7186410Z 2023-01-11T21:41:23.7186416Z 2023-01-11T21:41:23.7186610Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7186914Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7187097Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7187226Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7187315Z { 2023-01-11T21:41:23.7187457Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7187539Z { 2023-01-11T21:41:23.7187669Z #pragma omp for collapse(2) 2023-01-11T21:41:23.7187787Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.7187865Z { 2023-01-11T21:41:23.7187992Z for(long i1=0; i1<31136; i1+=1) 2023-01-11T21:41:23.7188085Z { 2023-01-11T21:41:23.7188173Z { 2023-01-11T21:41:23.7188265Z { 2023-01-11T21:41:23.7188412Z auto tmp0 = in_ptr0[i1 + (31136*i0)]; 2023-01-11T21:41:23.7188540Z out_ptr0[i0 + (3*i1)] = tmp0; 2023-01-11T21:41:23.7188618Z } 2023-01-11T21:41:23.7188707Z } 2023-01-11T21:41:23.7188795Z } 2023-01-11T21:41:23.7188883Z } 2023-01-11T21:41:23.7189062Z } 2023-01-11T21:41:23.7189149Z } 2023-01-11T21:41:23.7189270Z ''') 2023-01-11T21:41:23.7189279Z 2023-01-11T21:41:23.7189284Z 2023-01-11T21:41:23.7189466Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.7189767Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7189935Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7190078Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7190162Z { 2023-01-11T21:41:23.7190303Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7190393Z { 2023-01-11T21:41:23.7190491Z #pragma omp for 2023-01-11T21:41:23.7190607Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.7190690Z { 2023-01-11T21:41:23.7190802Z #pragma GCC ivdep 
2023-01-11T21:41:23.7190931Z for(long i1=0; i1<29916; i1+=1) 2023-01-11T21:41:23.7191016Z { 2023-01-11T21:41:23.7191105Z { 2023-01-11T21:41:23.7191236Z { 2023-01-11T21:41:23.7191379Z auto tmp0 = in_ptr0[i0 + (64*i1)]; 2023-01-11T21:41:23.7191518Z out_ptr0[i1 + (29916*i0)] = tmp0; 2023-01-11T21:41:23.7191610Z } 2023-01-11T21:41:23.7191701Z } 2023-01-11T21:41:23.7191791Z } 2023-01-11T21:41:23.7191879Z } 2023-01-11T21:41:23.7191949Z } 2023-01-11T21:41:23.7192030Z } 2023-01-11T21:41:23.7192143Z ''') 2023-01-11T21:41:23.7192151Z 2023-01-11T21:41:23.7192157Z 2023-01-11T21:41:23.7192284Z async_compile.wait(globals()) 2023-01-11T21:41:23.7192381Z del async_compile 2023-01-11T21:41:23.7192387Z 2023-01-11T21:41:23.7192484Z def call(args): 2023-01-11T21:41:23.7192642Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1 = args 2023-01-11T21:41:23.7192727Z args.clear() 2023-01-11T21:41:23.7193047Z buf0 = empty_strided((1, 3, 556, 56), (93408, 1, 168, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7193253Z kernel_cpp_0(c_void_p(arg6_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.7193348Z del arg6_1 2023-01-11T21:41:23.7193816Z buf1 = torch.ops.mkldnn._convolution_pointwise(buf0, arg2_1, arg3_1, (0, 0), (1, 1), (1, 1), 1, 'none', [], '') 2023-01-11T21:41:23.7193975Z assert_size_stride(buf1, (1, 64, 554, 54), (1914624, 1, 3456, 64)) 2023-01-11T21:41:23.7194070Z del arg2_1 2023-01-11T21:41:23.7194149Z del arg3_1 2023-01-11T21:41:23.7194247Z del buf0 2023-01-11T21:41:23.7194582Z buf2 = empty_strided((1, 64, 554, 54), (1914624, 29916, 54, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7194776Z kernel_cpp_1(c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.7194876Z return (buf2, ) 2023-01-11T21:41:23.7194883Z 2023-01-11T21:41:23.7194889Z 2023-01-11T21:41:23.7194997Z if __name__ == "__main__": 2023-01-11T21:41:23.7195164Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7195348Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7195623Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7195901Z arg1_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7196215Z arg2_1 = rand_strided((64, 3, 3, 3), (1, 0, 0, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7196501Z arg3_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7196779Z arg4_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7197059Z arg5_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7197396Z arg6_1 = rand_strided((1, 3, 556, 56), (93408, 31136, 56, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7197619Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1])) 2023-01-11T21:41:23.7197626Z 2023-01-11T21:41:23.7197717Z ok (1.709s) 2023-01-11T21:41:23.7198502Z test_convolution1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7198685Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7199100Z [2023-01-11 21:26:14,534] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 150 2023-01-11T21:41:23.7199513Z [2023-01-11 21:26:16,103] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 150 2023-01-11T21:41:23.7199522Z 2023-01-11T21:41:23.7199665Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7199766Z import torch 2023-01-11T21:41:23.7199861Z import random 2023-01-11T21:41:23.7200088Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7200277Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7200284Z 2023-01-11T21:41:23.7200382Z aten = torch.ops.aten 2023-01-11T21:41:23.7200581Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7200716Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7200724Z 2023-01-11T21:41:23.7200729Z 2023-01-11T21:41:23.7200932Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7201236Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7201412Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.7201549Z bool* __restrict__ out_ptr0) 2023-01-11T21:41:23.7201637Z { 2023-01-11T21:41:23.7201774Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7201860Z { 2023-01-11T21:41:23.7201972Z #pragma omp for 2023-01-11T21:41:23.7202095Z for(long i0=0; i0<2352; i0+=1) 2023-01-11T21:41:23.7202195Z { 2023-01-11T21:41:23.7202282Z { 2023-01-11T21:41:23.7202357Z { 2023-01-11T21:41:23.7202495Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:23.7202631Z auto tmp1 = tmp0 * (tmp0>0); 2023-01-11T21:41:23.7202786Z auto tmp2 = static_cast<float>(0); 2023-01-11T21:41:23.7202923Z auto tmp3 = tmp1 <= tmp2; 2023-01-11T21:41:23.7203054Z in_out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.7203175Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:23.7203254Z } 2023-01-11T21:41:23.7203347Z } 2023-01-11T21:41:23.7203439Z } 2023-01-11T21:41:23.7203524Z } 2023-01-11T21:41:23.7203612Z } 2023-01-11T21:41:23.7203734Z ''') 2023-01-11T21:41:23.7203743Z 2023-01-11T21:41:23.7203749Z 2023-01-11T21:41:23.7203882Z async_compile.wait(globals()) 2023-01-11T21:41:23.7203973Z del async_compile 2023-01-11T21:41:23.7203996Z 2023-01-11T21:41:23.7204087Z def call(args): 2023-01-11T21:41:23.7204238Z primals_1, primals_2, primals_3 = args 2023-01-11T21:41:23.7204348Z args.clear() 2023-01-11T21:41:23.7204575Z buf0 = aten.convolution(primals_3, primals_1, primals_2, (1, 1), (0, 0), (1, 1), False, (0, 0), 1) 2023-01-11T21:41:23.7204733Z assert_size_stride(buf0, (2, 6, 14, 14), (1176, 196, 14, 1)) 2023-01-11T21:41:23.7204838Z del primals_2 2023-01-11T21:41:23.7205010Z buf1 = as_strided(buf0, (2, 6, 14, 14), (1176, 196, 14, 1)); del buf0 # reuse 2023-01-11T21:41:23.7205331Z buf2 = empty_strided((2, 6, 14, 14), (1176, 196, 14, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.7205526Z kernel_cpp_0(c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.7205679Z return (buf1, primals_1, primals_3, buf2, ) 2023-01-11T21:41:23.7205686Z 2023-01-11T21:41:23.7205693Z 2023-01-11T21:41:23.7205801Z if __name__ == "__main__": 2023-01-11T21:41:23.7205969Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7206222Z from torch._inductor.utils import 
print_performance 2023-01-11T21:41:23.7206559Z primals_1 = rand_strided((6, 5, 3, 3), (45, 9, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7206862Z primals_2 = rand_strided((6, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7207187Z primals_3 = rand_strided((2, 5, 16, 16), (1280, 256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7207390Z print_performance(lambda: call([primals_1, primals_2, primals_3])) 2023-01-11T21:41:23.7207399Z 2023-01-11T21:41:23.7207496Z ok (1.655s) 2023-01-11T21:41:23.7208202Z test_convolution2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7208442Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7208841Z [2023-01-11 21:26:16,146] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 151 2023-01-11T21:41:23.7209241Z [2023-01-11 21:26:16,168] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 151 2023-01-11T21:41:23.7209250Z 2023-01-11T21:41:23.7209388Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7209492Z import torch 2023-01-11T21:41:23.7209590Z import random 2023-01-11T21:41:23.7209742Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7209911Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7209918Z 2023-01-11T21:41:23.7210020Z aten = torch.ops.aten 2023-01-11T21:41:23.7210206Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7210323Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7210330Z 2023-01-11T21:41:23.7210340Z 2023-01-11T21:41:23.7210455Z async_compile.wait(globals()) 2023-01-11T21:41:23.7210547Z del async_compile 2023-01-11T21:41:23.7210553Z 2023-01-11T21:41:23.7210642Z def call(args): 2023-01-11T21:41:23.7210740Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.7210845Z args.clear() 2023-01-11T21:41:23.7211064Z buf0 = aten.convolution(arg0_1, arg1_1, arg2_1, (4,), (0,), (1,), True, (0,), 1) 2023-01-11T21:41:23.7211221Z assert_size_stride(buf0, (2, 16, 364), (5824, 364, 1)) 2023-01-11T21:41:23.7211321Z del arg0_1 2023-01-11T21:41:23.7211428Z del arg1_1 2023-01-11T21:41:23.7211517Z del arg2_1 2023-01-11T21:41:23.7211623Z return (buf0, ) 2023-01-11T21:41:23.7211629Z 2023-01-11T21:41:23.7211636Z 2023-01-11T21:41:23.7211744Z if __name__ == "__main__": 2023-01-11T21:41:23.7211897Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7212080Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7212412Z arg0_1 = rand_strided((2, 32, 90), (2880, 90, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7212719Z arg1_1 = rand_strided((32, 16, 8), (128, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7212993Z arg2_1 = rand_strided((16, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7213144Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.7213166Z 2023-01-11T21:41:23.7213240Z ok (0.065s) 2023-01-11T21:41:23.7213876Z test_cos_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. 
It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7214042Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7214422Z [2023-01-11 21:26:16,201] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 152 2023-01-11T21:41:23.7214896Z [2023-01-11 21:26:17,708] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 152 2023-01-11T21:41:23.7214904Z 2023-01-11T21:41:23.7215040Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7215137Z import torch 2023-01-11T21:41:23.7215231Z import random 2023-01-11T21:41:23.7215400Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7215564Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7215571Z 2023-01-11T21:41:23.7215680Z aten = torch.ops.aten 2023-01-11T21:41:23.7215876Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7216007Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7216015Z 2023-01-11T21:41:23.7216022Z 2023-01-11T21:41:23.7216224Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7216606Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7216779Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7216926Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7217049Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.7217136Z { 2023-01-11T21:41:23.7217286Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7217371Z { 2023-01-11T21:41:23.7217474Z #pragma omp for 2023-01-11T21:41:23.7217595Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.7217677Z { 2023-01-11T21:41:23.7217873Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.7217994Z auto tmp1 = tmp0.cos(); 2023-01-11T21:41:23.7218188Z auto tmp2 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.7218311Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:23.7218497Z auto tmp4 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.7218623Z auto tmp5 = tmp0 + tmp4; 2023-01-11T21:41:23.7218743Z auto tmp6 = tmp5.cos(); 2023-01-11T21:41:23.7218857Z tmp3.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.7218980Z tmp6.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.7219063Z } 2023-01-11T21:41:23.7219197Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7219319Z for(long i0=256; i0<256; i0+=1) 2023-01-11T21:41:23.7219405Z { 2023-01-11T21:41:23.7219520Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7219627Z auto tmp1 = std::cos(tmp0); 2023-01-11T21:41:23.7219763Z auto tmp2 = static_cast(2); 2023-01-11T21:41:23.7219881Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:23.7220017Z auto tmp4 = static_cast(1); 2023-01-11T21:41:23.7220131Z auto tmp5 = tmp0 + tmp4; 2023-01-11T21:41:23.7220253Z auto tmp6 = std::cos(tmp5); 2023-01-11T21:41:23.7220370Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:23.7220475Z out_ptr1[i0] = tmp6; 2023-01-11T21:41:23.7220560Z } 2023-01-11T21:41:23.7220639Z } 2023-01-11T21:41:23.7220719Z } 2023-01-11T21:41:23.7220842Z ''') 2023-01-11T21:41:23.7220850Z 2023-01-11T21:41:23.7220856Z 2023-01-11T21:41:23.7220986Z async_compile.wait(globals()) 2023-01-11T21:41:23.7221088Z del async_compile 2023-01-11T21:41:23.7221095Z 2023-01-11T21:41:23.7221180Z def call(args): 2023-01-11T21:41:23.7221276Z 
arg0_1, = args 2023-01-11T21:41:23.7221381Z args.clear() 2023-01-11T21:41:23.7221689Z buf0 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7221983Z buf1 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7222212Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.7222306Z del arg0_1 2023-01-11T21:41:23.7222586Z return (buf0, buf1, ) 2023-01-11T21:41:23.7222702Z 2023-01-11T21:41:23.7222712Z 2023-01-11T21:41:23.7222814Z if __name__ == "__main__": 2023-01-11T21:41:23.7222988Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7223170Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7223482Z arg0_1 = rand_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7223639Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.7223646Z 2023-01-11T21:41:23.7223734Z ok (1.540s) 2023-01-11T21:41:23.7224433Z test_cpp_wrapper_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7224608Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7225075Z [2023-01-11 21:26:17,741] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 153 2023-01-11T21:41:23.7225084Z 2023-01-11T21:41:23.7225205Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7225299Z import torch 2023-01-11T21:41:23.7225401Z import random 2023-01-11T21:41:23.7225571Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7225750Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7225758Z 2023-01-11T21:41:23.7225861Z aten = torch.ops.aten 2023-01-11T21:41:23.7226052Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7226185Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7226192Z 2023-01-11T21:41:23.7226199Z 2023-01-11T21:41:23.7226383Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7226678Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7226855Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.7227008Z const float* __restrict__ in_ptr0) 2023-01-11T21:41:23.7227096Z { 2023-01-11T21:41:23.7227232Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7227318Z { 2023-01-11T21:41:23.7227414Z #pragma omp for 2023-01-11T21:41:23.7227531Z for(long i0=0; i0<512; i0+=1) 2023-01-11T21:41:23.7227618Z { 2023-01-11T21:41:23.7227808Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.7228002Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.7228117Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7228302Z auto tmp3 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.7228415Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:23.7228539Z tmp4.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.7228627Z } 2023-01-11T21:41:23.7228771Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7228901Z for(long i0=4096; i0<4096; i0+=1) 2023-01-11T21:41:23.7228988Z { 2023-01-11T21:41:23.7229108Z auto tmp0 = in_ptr0[i0]; 
2023-01-11T21:41:23.7229246Z auto tmp1 = static_cast<float>(1); 2023-01-11T21:41:23.7229357Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7229497Z auto tmp3 = static_cast<float>(2); 2023-01-11T21:41:23.7229620Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:23.7229734Z in_out_ptr0[i0] = tmp4; 2023-01-11T21:41:23.7229826Z } 2023-01-11T21:41:23.7229916Z } 2023-01-11T21:41:23.7229988Z } 2023-01-11T21:41:23.7230110Z ''') 2023-01-11T21:41:23.7230118Z 2023-01-11T21:41:23.7230248Z async_compile.wait(globals()) 2023-01-11T21:41:23.7230349Z del async_compile 2023-01-11T21:41:23.7230517Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7230614Z wrapper = ( 2023-01-11T21:41:23.7230723Z ''' 2023-01-11T21:41:23.7230815Z #include <dlfcn.h> 2023-01-11T21:41:23.7230997Z #include <assert.h> 2023-01-11T21:41:23.7231182Z std::vector<at::Tensor> call_0(std::vector<at::Tensor> args) { 2023-01-11T21:41:23.7231300Z at::Tensor arg0_1; 2023-01-11T21:41:23.7231405Z arg0_1 = args[0]; 2023-01-11T21:41:23.7231596Z auto buf0 = at::empty_strided({64, 64}, {64, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7231779Z auto buf1 = at::as_strided(buf0, {8, 8, 64}, {512, 64, 1}); buf0.reset(); // reuse 2023-01-11T21:41:23.7232150Z auto kernel_cpp_0_lib = dlopen("/tmp/torchinductor_jenkins/7t/c7teerg2txdzjxddrmv3s7fxnncw2uo4ovbnvjzi3xo2blybtvpk.so", RTLD_NOW); 2023-01-11T21:41:23.7232278Z assert(kernel_cpp_0_lib != nullptr); 2023-01-11T21:41:23.7232430Z void (*kernel_cpp_0)(float*,const float*); 2023-01-11T21:41:23.7232609Z *(void **) (&kernel_cpp_0) = dlsym(kernel_cpp_0_lib, "kernel"); 2023-01-11T21:41:23.7232796Z kernel_cpp_0((float*)(buf1.data_ptr()), (float*)(arg0_1.data_ptr())); 2023-01-11T21:41:23.7233205Z return std::vector<at::Tensor>({at::as_strided(arg0_1, {8, 8, 64}, {512, 64, 1}), buf1}); }''' ) 2023-01-11T21:41:23.7233217Z 2023-01-11T21:41:23.7233323Z module = load_inline( 2023-01-11T21:41:23.7233703Z name='inline_extension_cvr6w7wpa3pkttzmjiie5bttvbnlvilrhuls2ejiiqdffyagh3el', 2023-01-11T21:41:23.7233893Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7234037Z functions=['call_0'], 2023-01-11T21:41:23.7234596Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7234806Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7235693Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7235706Z 2023-01-11T21:41:23.7235815Z def _wrap_func(f): 2023-01-11T21:41:23.7235909Z def g(args): 2023-01-11T21:41:23.7236006Z return f(args) 2023-01-11T21:41:23.7236091Z return g 2023-01-11T21:41:23.7236197Z call = _wrap_func(module.call_0) 2023-01-11T21:41:23.7236217Z 2023-01-11T21:41:23.7236224Z 2023-01-11T21:41:23.7236315Z if __name__ == "__main__": 2023-01-11T21:41:23.7236465Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7236660Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7236951Z arg0_1 = rand_strided((64, 64), (64, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7237107Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.7237114Z 2023-01-11T21:41:23.7237530Z [2023-01-11 21:26:36,863] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 153 2023-01-11T21:41:23.7238208Z
/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7238396Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7238789Z [2023-01-11 21:26:36,886] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 154 2023-01-11T21:41:23.7238797Z 2023-01-11T21:41:23.7238934Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7239018Z import torch 2023-01-11T21:41:23.7239113Z import random 2023-01-11T21:41:23.7239279Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7239448Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7239456Z 2023-01-11T21:41:23.7239568Z aten = torch.ops.aten 2023-01-11T21:41:23.7239756Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7239971Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7239980Z 2023-01-11T21:41:23.7239987Z 2023-01-11T21:41:23.7240171Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7240470Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7240644Z extern "C" void kernel(const int* __restrict__ in_ptr0, 2023-01-11T21:41:23.7240787Z const int* __restrict__ in_ptr1, 2023-01-11T21:41:23.7240927Z int* __restrict__ out_ptr0, 2023-01-11T21:41:23.7241056Z int* __restrict__ out_ptr1, 2023-01-11T21:41:23.7241186Z int* __restrict__ out_ptr2, 2023-01-11T21:41:23.7241316Z int* __restrict__ out_ptr3) 2023-01-11T21:41:23.7241390Z { 2023-01-11T21:41:23.7241534Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7241619Z { 2023-01-11T21:41:23.7241727Z #pragma omp for 2023-01-11T21:41:23.7241843Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.7241990Z { 2023-01-11T21:41:23.7242068Z { 2023-01-11T21:41:23.7242160Z { 2023-01-11T21:41:23.7242292Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7242419Z auto tmp2 = in_ptr1[i0]; 2023-01-11T21:41:23.7242532Z auto tmp1 = ~tmp0; 2023-01-11T21:41:23.7242660Z auto tmp3 = tmp0 | tmp2; 2023-01-11T21:41:23.7242797Z auto tmp4 = tmp0 ^ tmp2; 2023-01-11T21:41:23.7242910Z auto tmp5 = tmp0 & tmp2; 2023-01-11T21:41:23.7243034Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.7243152Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.7243276Z out_ptr2[i0] = tmp4; 2023-01-11T21:41:23.7243394Z out_ptr3[i0] = tmp5; 2023-01-11T21:41:23.7243482Z } 2023-01-11T21:41:23.7243566Z } 2023-01-11T21:41:23.7243635Z } 2023-01-11T21:41:23.7243717Z } 2023-01-11T21:41:23.7243805Z } 2023-01-11T21:41:23.7243931Z ''') 2023-01-11T21:41:23.7243939Z 2023-01-11T21:41:23.7244078Z async_compile.wait(globals()) 2023-01-11T21:41:23.7244179Z del async_compile 2023-01-11T21:41:23.7244354Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7244440Z wrapper = ( 2023-01-11T21:41:23.7244550Z ''' 2023-01-11T21:41:23.7244658Z #include 2023-01-11T21:41:23.7244764Z #include 2023-01-11T21:41:23.7244959Z std::vector call_1(std::vector args) { 2023-01-11T21:41:23.7245073Z at::Tensor arg0_1, arg1_1; 2023-01-11T21:41:23.7245180Z arg0_1 = args[0]; 2023-01-11T21:41:23.7245266Z arg1_1 = args[1]; 2023-01-11T21:41:23.7245458Z auto buf0 = at::empty_strided({64, }, {1, }, at::ScalarType::Int); 2023-01-11T21:41:23.7245655Z auto buf1 = 
at::empty_strided({64, }, {1, }, at::ScalarType::Int); 2023-01-11T21:41:23.7245845Z auto buf2 = at::empty_strided({64, }, {1, }, at::ScalarType::Int); 2023-01-11T21:41:23.7246038Z auto buf3 = at::empty_strided({64, }, {1, }, at::ScalarType::Int); 2023-01-11T21:41:23.7246466Z auto kernel_cpp_0_lib = dlopen("/tmp/torchinductor_jenkins/rv/crvxoskxizzxrfommbf7bg5tkfmti76e7o2dzmn36stpkt3uvwnx.so", RTLD_NOW); 2023-01-11T21:41:23.7246627Z assert(kernel_cpp_0_lib != nullptr); 2023-01-11T21:41:23.7246808Z void (*kernel_cpp_0)(const int*,const int*,int*,int*,int*,int*); 2023-01-11T21:41:23.7246971Z *(void **) (&kernel_cpp_0) = dlsym(kernel_cpp_0_lib, "kernel"); 2023-01-11T21:41:23.7247281Z kernel_cpp_0((int*)(arg0_1.data_ptr()), (int*)(arg1_1.data_ptr()), (int*)(buf0.data_ptr()), (int*)(buf1.data_ptr()), (int*)(buf2.data_ptr()), (int*)(buf3.data_ptr())); 2023-01-11T21:41:23.7247376Z arg0_1.reset(); 2023-01-11T21:41:23.7247471Z arg1_1.reset(); 2023-01-11T21:41:23.7247755Z return std::vector({buf0, buf1, buf2, buf3}); }''' ) 2023-01-11T21:41:23.7247765Z 2023-01-11T21:41:23.7247872Z module = load_inline( 2023-01-11T21:41:23.7248330Z name='inline_extension_cuyauoq6gdklrmicb22pr47u4ph7mongsidlmteo5v4kpvsi5a4d', 2023-01-11T21:41:23.7248447Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7248594Z functions=['call_1'], 2023-01-11T21:41:23.7249159Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7249380Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7250302Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7250311Z 2023-01-11T21:41:23.7250417Z def _wrap_func(f): 2023-01-11T21:41:23.7250516Z def g(args): 2023-01-11T21:41:23.7250620Z return f(args) 2023-01-11T21:41:23.7250711Z return g 2023-01-11T21:41:23.7250888Z call = _wrap_func(module.call_1) 2023-01-11T21:41:23.7250913Z 2023-01-11T21:41:23.7250919Z 2023-01-11T21:41:23.7251013Z if __name__ == "__main__": 2023-01-11T21:41:23.7251188Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7251364Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7251653Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:23.7251959Z arg1_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:23.7252139Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7252147Z 2023-01-11T21:41:23.7252543Z [2023-01-11 21:26:56,481] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 154 2023-01-11T21:41:23.7253209Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7253400Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7253790Z [2023-01-11 21:26:56,504] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 155 2023-01-11T21:41:23.7253812Z 2023-01-11T21:41:23.7253940Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7254030Z import torch 2023-01-11T21:41:23.7254134Z import random 2023-01-11T21:41:23.7254296Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7254477Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7254484Z 2023-01-11T21:41:23.7254596Z aten = torch.ops.aten 2023-01-11T21:41:23.7254790Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7254906Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7254913Z 2023-01-11T21:41:23.7254925Z 2023-01-11T21:41:23.7255121Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7255423Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7255596Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7255742Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.7255886Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7256026Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.7256109Z { 2023-01-11T21:41:23.7256236Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7256322Z { 2023-01-11T21:41:23.7256435Z #pragma omp for 2023-01-11T21:41:23.7256553Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.7256637Z { 2023-01-11T21:41:23.7256832Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.7257017Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.7257206Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7257344Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.7257432Z } 2023-01-11T21:41:23.7257565Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7257681Z for(long i0=128; i0<128; i0+=1) 2023-01-11T21:41:23.7257768Z { 2023-01-11T21:41:23.7257882Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7258010Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.7258127Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7258242Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.7258336Z } 2023-01-11T21:41:23.7258446Z #pragma omp for 2023-01-11T21:41:23.7258560Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.7258635Z { 2023-01-11T21:41:23.7258824Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.7259018Z auto tmp1 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.7259205Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7259343Z tmp2.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.7259429Z } 2023-01-11T21:41:23.7259581Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7259708Z for(long i0=128; i0<128; i0+=1) 2023-01-11T21:41:23.7259791Z { 2023-01-11T21:41:23.7259912Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:23.7260054Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.7260171Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7260285Z out_ptr1[i0] = tmp2; 2023-01-11T21:41:23.7260371Z } 2023-01-11T21:41:23.7260445Z } 2023-01-11T21:41:23.7260532Z } 2023-01-11T21:41:23.7260670Z ''') 2023-01-11T21:41:23.7260679Z 2023-01-11T21:41:23.7260686Z 2023-01-11T21:41:23.7260888Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.7261207Z #include 
"/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7261392Z extern "C" void kernel(float* __restrict__ in_out_ptr0) 2023-01-11T21:41:23.7261479Z { 2023-01-11T21:41:23.7261620Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7261692Z { 2023-01-11T21:41:23.7261799Z #pragma omp for 2023-01-11T21:41:23.7261914Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.7262005Z { 2023-01-11T21:41:23.7262213Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.7262552Z auto tmp1 = at::vec::Vectorized(static_cast(3)); 2023-01-11T21:41:23.7262679Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7262807Z tmp2.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.7262896Z } 2023-01-11T21:41:23.7263033Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7263154Z for(long i0=128; i0<128; i0+=1) 2023-01-11T21:41:23.7263243Z { 2023-01-11T21:41:23.7263374Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:23.7263527Z auto tmp1 = static_cast(3); 2023-01-11T21:41:23.7263630Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7263745Z in_out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.7263832Z } 2023-01-11T21:41:23.7263917Z } 2023-01-11T21:41:23.7263999Z } 2023-01-11T21:41:23.7264125Z ''') 2023-01-11T21:41:23.7264134Z 2023-01-11T21:41:23.7264264Z async_compile.wait(globals()) 2023-01-11T21:41:23.7264357Z del async_compile 2023-01-11T21:41:23.7264528Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7264629Z wrapper = ( 2023-01-11T21:41:23.7264744Z ''' 2023-01-11T21:41:23.7264847Z #include 2023-01-11T21:41:23.7264950Z #include 2023-01-11T21:41:23.7265117Z std::vector call_2(std::vector args) { 2023-01-11T21:41:23.7265232Z at::Tensor arg0_1, arg1_1; 2023-01-11T21:41:23.7265334Z arg0_1 = args[0]; 2023-01-11T21:41:23.7265434Z arg1_1 = args[1]; 2023-01-11T21:41:23.7265748Z auto buf0 = at::empty_strided({2, 8, 8}, {64, 8, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7265885Z at::bmm_out(buf0, arg0_1, arg1_1); 2023-01-11T21:41:23.7266080Z auto buf1 = at::empty_strided({2, 8, 8}, {64, 8, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7266273Z auto buf2 = at::empty_strided({2, 8, 8}, {64, 8, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7266701Z auto kernel_cpp_0_lib = dlopen("/tmp/torchinductor_jenkins/pc/cpc65tdxpyhvrndvlki6qhxomio7koznfdtyy2it7cwxowywkzk6.so", RTLD_NOW); 2023-01-11T21:41:23.7266837Z assert(kernel_cpp_0_lib != nullptr); 2023-01-11T21:41:23.7267040Z void (*kernel_cpp_0)(const float*,const float*,float*,float*); 2023-01-11T21:41:23.7267225Z *(void **) (&kernel_cpp_0) = dlsym(kernel_cpp_0_lib, "kernel"); 2023-01-11T21:41:23.7267499Z kernel_cpp_0((float*)(arg0_1.data_ptr()), (float*)(arg1_1.data_ptr()), (float*)(buf1.data_ptr()), (float*)(buf2.data_ptr())); 2023-01-11T21:41:23.7267608Z arg0_1.reset(); 2023-01-11T21:41:23.7267811Z arg1_1.reset(); 2023-01-11T21:41:23.7268025Z auto buf3 = at::empty_strided({2, 8, 8}, {64, 8, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7268145Z at::bmm_out(buf3, buf1, buf2); 2023-01-11T21:41:23.7268253Z buf1.reset(); 2023-01-11T21:41:23.7268359Z buf2.reset(); 2023-01-11T21:41:23.7268505Z auto buf4 = buf3; buf3.reset(); // reuse 2023-01-11T21:41:23.7268878Z auto kernel_cpp_1_lib = dlopen("/tmp/torchinductor_jenkins/2i/c2iaojccekodbchj66prruw6huckuqrbwyh2qznbbrfrnmn36tp4.so", RTLD_NOW); 2023-01-11T21:41:23.7269012Z assert(kernel_cpp_1_lib != nullptr); 2023-01-11T21:41:23.7269139Z void (*kernel_cpp_1)(float*); 2023-01-11T21:41:23.7269305Z *(void **) (&kernel_cpp_1) = 
dlsym(kernel_cpp_1_lib, "kernel"); 2023-01-11T21:41:23.7269444Z kernel_cpp_1((float*)(buf4.data_ptr())); 2023-01-11T21:41:23.7269716Z return std::vector({buf0, buf4}); }''' ) 2023-01-11T21:41:23.7269726Z 2023-01-11T21:41:23.7269842Z module = load_inline( 2023-01-11T21:41:23.7270297Z name='inline_extension_chfnadliirwsn3ujofkzbsnpr6hburp722b4pdqce6kt2trz36ns', 2023-01-11T21:41:23.7270433Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7270609Z functions=['call_2'], 2023-01-11T21:41:23.7271157Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7271389Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7272326Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7272351Z 2023-01-11T21:41:23.7272444Z def _wrap_func(f): 2023-01-11T21:41:23.7272545Z def g(args): 2023-01-11T21:41:23.7272652Z return f(args) 2023-01-11T21:41:23.7272759Z return g 2023-01-11T21:41:23.7272892Z call = _wrap_func(module.call_2) 2023-01-11T21:41:23.7272900Z 2023-01-11T21:41:23.7272906Z 2023-01-11T21:41:23.7273004Z if __name__ == "__main__": 2023-01-11T21:41:23.7273162Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7273330Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7273653Z arg0_1 = rand_strided((2, 8, 8), (64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7274061Z arg1_1 = rand_strided((2, 8, 8), (64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7274223Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7274231Z 2023-01-11T21:41:23.7274621Z [2023-01-11 21:27:16,459] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 155 2023-01-11T21:41:23.7275240Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7275498Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7275888Z [2023-01-11 21:27:16,483] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 156 2023-01-11T21:41:23.7275896Z 2023-01-11T21:41:23.7276023Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7276119Z import torch 2023-01-11T21:41:23.7276208Z import random 2023-01-11T21:41:23.7276378Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7276552Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7276560Z 2023-01-11T21:41:23.7276673Z aten = torch.ops.aten 2023-01-11T21:41:23.7276858Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7276991Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7277002Z 2023-01-11T21:41:23.7277086Z 2023-01-11T21:41:23.7277293Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7277592Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7277754Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7277901Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.7278035Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7278163Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.7278245Z { 2023-01-11T21:41:23.7278383Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7278464Z { 2023-01-11T21:41:23.7278553Z #pragma omp for 2023-01-11T21:41:23.7278664Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.7278747Z { 2023-01-11T21:41:23.7278936Z auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.7279121Z auto tmp1 = at::vec::Vectorized<float>(static_cast<float>(1)); 2023-01-11T21:41:23.7279247Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7279376Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.7279449Z } 2023-01-11T21:41:23.7279583Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7279701Z for(long i0=128; i0<128; i0+=1) 2023-01-11T21:41:23.7279792Z { 2023-01-11T21:41:23.7279912Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7280058Z auto tmp1 = static_cast<float>(1); 2023-01-11T21:41:23.7280181Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7280292Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.7280373Z } 2023-01-11T21:41:23.7280482Z #pragma omp for 2023-01-11T21:41:23.7280592Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:23.7280676Z { 2023-01-11T21:41:23.7280859Z auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.7281056Z auto tmp1 = at::vec::Vectorized<float>(static_cast<float>(2)); 2023-01-11T21:41:23.7281166Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7281300Z tmp2.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.7281382Z } 2023-01-11T21:41:23.7281520Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7281633Z for(long i0=80; i0<80; i0+=1) 2023-01-11T21:41:23.7281718Z { 2023-01-11T21:41:23.7281832Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:23.7281962Z auto tmp1 = static_cast<float>(2); 2023-01-11T21:41:23.7282083Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7282195Z out_ptr1[i0] = tmp2; 2023-01-11T21:41:23.7282283Z } 2023-01-11T21:41:23.7282369Z } 2023-01-11T21:41:23.7282452Z } 2023-01-11T21:41:23.7282567Z ''') 2023-01-11T21:41:23.7282589Z 2023-01-11T21:41:23.7282595Z 2023-01-11T21:41:23.7282785Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.7283098Z #include
"/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7283354Z extern "C" void kernel(float* __restrict__ in_out_ptr0) 2023-01-11T21:41:23.7283442Z { 2023-01-11T21:41:23.7283584Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7283670Z { 2023-01-11T21:41:23.7283777Z #pragma omp for 2023-01-11T21:41:23.7283876Z for(long i0=0; i0<20; i0+=1) 2023-01-11T21:41:23.7283961Z { 2023-01-11T21:41:23.7284160Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.7284354Z auto tmp1 = at::vec::Vectorized(static_cast(3)); 2023-01-11T21:41:23.7284477Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7284611Z tmp2.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.7284699Z } 2023-01-11T21:41:23.7284826Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7284945Z for(long i0=160; i0<160; i0+=1) 2023-01-11T21:41:23.7285032Z { 2023-01-11T21:41:23.7285218Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:23.7285354Z auto tmp1 = static_cast(3); 2023-01-11T21:41:23.7285477Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7285595Z in_out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.7285666Z } 2023-01-11T21:41:23.7285748Z } 2023-01-11T21:41:23.7285830Z } 2023-01-11T21:41:23.7285951Z ''') 2023-01-11T21:41:23.7285959Z 2023-01-11T21:41:23.7286090Z async_compile.wait(globals()) 2023-01-11T21:41:23.7286191Z del async_compile 2023-01-11T21:41:23.7286365Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7286455Z wrapper = ( 2023-01-11T21:41:23.7286574Z ''' 2023-01-11T21:41:23.7286682Z #include 2023-01-11T21:41:23.7286786Z #include 2023-01-11T21:41:23.7286970Z std::vector call_3(std::vector args) { 2023-01-11T21:41:23.7287083Z at::Tensor arg0_1, arg1_1; 2023-01-11T21:41:23.7287179Z arg0_1 = args[0]; 2023-01-11T21:41:23.7287267Z arg1_1 = args[1]; 2023-01-11T21:41:23.7287470Z auto buf0 = at::empty_strided({1, 16, 10}, {160, 10, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7287663Z auto buf0_as_strided = at::as_strided(buf0, {16, 10}, {10, 1}); 2023-01-11T21:41:23.7287905Z at::mm_out(buf0_as_strided, at::as_strided(arg0_1, {16, 8}, {8, 1}), at::as_strided(arg1_1, {8, 10}, {10, 1})); 2023-01-11T21:41:23.7288111Z auto buf1 = at::empty_strided({1, 16, 8}, {128, 8, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7288311Z auto buf2 = at::empty_strided({1, 8, 10}, {80, 10, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7288706Z auto kernel_cpp_0_lib = dlopen("/tmp/torchinductor_jenkins/ra/cralzjg7p7jfrh7lq7f5aawuy5ztcxsmezjq7ltkvv7n75g5gmjr.so", RTLD_NOW); 2023-01-11T21:41:23.7288838Z assert(kernel_cpp_0_lib != nullptr); 2023-01-11T21:41:23.7289001Z void (*kernel_cpp_0)(const float*,const float*,float*,float*); 2023-01-11T21:41:23.7289150Z *(void **) (&kernel_cpp_0) = dlsym(kernel_cpp_0_lib, "kernel"); 2023-01-11T21:41:23.7289405Z kernel_cpp_0((float*)(arg0_1.data_ptr()), (float*)(arg1_1.data_ptr()), (float*)(buf1.data_ptr()), (float*)(buf2.data_ptr())); 2023-01-11T21:41:23.7289505Z arg0_1.reset(); 2023-01-11T21:41:23.7289607Z arg1_1.reset(); 2023-01-11T21:41:23.7289813Z auto buf3 = at::empty_strided({1, 16, 10}, {160, 10, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7289987Z auto buf3_as_strided = at::as_strided(buf3, {16, 10}, {10, 1}); 2023-01-11T21:41:23.7290226Z at::mm_out(buf3_as_strided, at::as_strided(buf1, {16, 8}, {8, 1}), at::as_strided(buf2, {8, 10}, {10, 1})); 2023-01-11T21:41:23.7290330Z buf1.reset(); 2023-01-11T21:41:23.7290417Z buf2.reset(); 2023-01-11T21:41:23.7290556Z auto buf4 = buf3; buf3.reset(); // reuse 
2023-01-11T21:41:23.7290917Z auto kernel_cpp_1_lib = dlopen("/tmp/torchinductor_jenkins/yc/cycbydl6b3o6njowlq7op2e3gon6ehl777xoi27mhfawtegw4ri4.so", RTLD_NOW); 2023-01-11T21:41:23.7291062Z assert(kernel_cpp_1_lib != nullptr); 2023-01-11T21:41:23.7291266Z void (*kernel_cpp_1)(float*); 2023-01-11T21:41:23.7291441Z *(void **) (&kernel_cpp_1) = dlsym(kernel_cpp_1_lib, "kernel"); 2023-01-11T21:41:23.7291588Z kernel_cpp_1((float*)(buf4.data_ptr())); 2023-01-11T21:41:23.7291837Z return std::vector({buf0, buf4}); }''' ) 2023-01-11T21:41:23.7291859Z 2023-01-11T21:41:23.7291961Z module = load_inline( 2023-01-11T21:41:23.7292357Z name='inline_extension_cfk34ucpswp45ihunrnj5h3vvmenylbjrdehpsrzmjuafwmv7mzt', 2023-01-11T21:41:23.7292476Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7292640Z functions=['call_3'], 2023-01-11T21:41:23.7293199Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7293424Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7294494Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7294510Z 2023-01-11T21:41:23.7294612Z def _wrap_func(f): 2023-01-11T21:41:23.7294697Z def g(args): 2023-01-11T21:41:23.7294798Z return f(args) 2023-01-11T21:41:23.7294892Z return g 2023-01-11T21:41:23.7295025Z call = _wrap_func(module.call_3) 2023-01-11T21:41:23.7295033Z 2023-01-11T21:41:23.7295040Z 2023-01-11T21:41:23.7295148Z if __name__ == "__main__": 2023-01-11T21:41:23.7295327Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7295522Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7295833Z arg0_1 = rand_strided((1, 16, 8), (128, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7296117Z arg1_1 = rand_strided((1, 8, 10), (80, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7296269Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7296279Z 2023-01-11T21:41:23.7296647Z [2023-01-11 21:27:36,682] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 156 2023-01-11T21:41:23.7297215Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7297381Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7297735Z [2023-01-11 21:27:36,702] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 157 2023-01-11T21:41:23.7297742Z 2023-01-11T21:41:23.7297865Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7297953Z import torch 2023-01-11T21:41:23.7298042Z import random 2023-01-11T21:41:23.7298183Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7298337Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7298343Z 2023-01-11T21:41:23.7298442Z aten = torch.ops.aten 2023-01-11T21:41:23.7298616Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7298737Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7298744Z 2023-01-11T21:41:23.7298855Z async_compile.wait(globals()) 2023-01-11T21:41:23.7298946Z del async_compile 2023-01-11T21:41:23.7299097Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7299171Z wrapper = ( 2023-01-11T21:41:23.7299277Z ''' 2023-01-11T21:41:23.7299374Z #include <dlfcn.h> 2023-01-11T21:41:23.7299471Z #include <assert.h> 2023-01-11T21:41:23.7299629Z at::Tensor call_4(std::vector<at::Tensor> args) { 2023-01-11T21:41:23.7299736Z at::Tensor arg0_1, arg1_1; 2023-01-11T21:41:23.7299837Z arg0_1 = args[0]; 2023-01-11T21:41:23.7299926Z arg1_1 = args[1]; 2023-01-11T21:41:23.7300202Z auto buf0 = at::empty_strided({1, 8, 8}, {64, 8, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7300379Z auto buf0_as_strided = at::as_strided(buf0, {8, 8}, {8, 1}); 2023-01-11T21:41:23.7300614Z at::mm_out(buf0_as_strided, at::as_strided(arg0_1, {8, 8}, {1, 8}), at::as_strided(arg1_1, {8, 8}, {8, 1})); 2023-01-11T21:41:23.7300715Z arg0_1.reset(); 2023-01-11T21:41:23.7300813Z arg1_1.reset(); 2023-01-11T21:41:23.7300978Z return buf0; }''' ) 2023-01-11T21:41:23.7300985Z 2023-01-11T21:41:23.7301096Z module = load_inline( 2023-01-11T21:41:23.7301457Z name='inline_extension_chbemg2ahqboherkh3dfy5k24qgsw5gtkd5ehfcdmncijlslqgpn', 2023-01-11T21:41:23.7301574Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7301733Z functions=['call_4'], 2023-01-11T21:41:23.7302269Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7302745Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7303649Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7303659Z 2023-01-11T21:41:23.7303759Z def _wrap_func(f): 2023-01-11T21:41:23.7303861Z def g(args): 2023-01-11T21:41:23.7303950Z return f(args) 2023-01-11T21:41:23.7304039Z return g 2023-01-11T21:41:23.7304166Z call = _wrap_func(module.call_4) 2023-01-11T21:41:23.7304173Z 2023-01-11T21:41:23.7304181Z 2023-01-11T21:41:23.7304288Z if __name__ == "__main__": 2023-01-11T21:41:23.7304453Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7304632Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7304938Z arg0_1 = rand_strided((1, 8, 8), (64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7305233Z arg1_1 = rand_strided((1, 8, 8), (64, 8,
1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7305386Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7305393Z 2023-01-11T21:41:23.7305788Z [2023-01-11 21:27:56,449] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 157 2023-01-11T21:41:23.7306421Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7306589Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7306982Z [2023-01-11 21:27:56,532] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 158 2023-01-11T21:41:23.7306998Z 2023-01-11T21:41:23.7307144Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7307244Z import torch 2023-01-11T21:41:23.7307353Z import random 2023-01-11T21:41:23.7307515Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7307678Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7307685Z 2023-01-11T21:41:23.7307796Z aten = torch.ops.aten 2023-01-11T21:41:23.7307982Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7308113Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7308121Z 2023-01-11T21:41:23.7308126Z 2023-01-11T21:41:23.7308315Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7308602Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7308769Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7308902Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7309125Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.7309256Z float* __restrict__ out_ptr2, 2023-01-11T21:41:23.7309391Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.7309521Z float* __restrict__ out_ptr4, 2023-01-11T21:41:23.7309704Z double* __restrict__ out_ptr5, 2023-01-11T21:41:23.7309904Z double* __restrict__ out_ptr6) 2023-01-11T21:41:23.7310030Z { 2023-01-11T21:41:23.7310232Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7310362Z { 2023-01-11T21:41:23.7310536Z #pragma omp for 2023-01-11T21:41:23.7310713Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.7310852Z { 2023-01-11T21:41:23.7311035Z for(long i1=0; i1<2; i1+=1) 2023-01-11T21:41:23.7311173Z { 2023-01-11T21:41:23.7311494Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (16*i0)); 2023-01-11T21:41:23.7311876Z auto tmp1 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.7312063Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7312285Z tmp0.store(out_ptr0 + (8*i1) + (36*i0)); 2023-01-11T21:41:23.7312506Z tmp2.store(out_ptr1 + (8*i1) + (36*i0)); 2023-01-11T21:41:23.7312648Z } 2023-01-11T21:41:23.7312858Z #pragma omp simd simdlen(4) 2023-01-11T21:41:23.7313036Z for(long i1=16; i1<16; i1+=1) 2023-01-11T21:41:23.7313162Z { 2023-01-11T21:41:23.7313363Z auto tmp0 = in_ptr0[i1 + (16*i0)]; 2023-01-11T21:41:23.7313576Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.7313831Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7314025Z out_ptr0[i1 + (36*i0)] = tmp0; 2023-01-11T21:41:23.7314219Z out_ptr1[i1 + (36*i0)] = tmp2; 2023-01-11T21:41:23.7314352Z } 2023-01-11T21:41:23.7314476Z } 2023-01-11T21:41:23.7314662Z #pragma omp for 
2023-01-11T21:41:23.7314833Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.7314960Z { 2023-01-11T21:41:23.7315122Z #pragma GCC ivdep 2023-01-11T21:41:23.7315290Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:23.7315409Z { 2023-01-11T21:41:23.7315541Z { 2023-01-11T21:41:23.7315686Z { 2023-01-11T21:41:23.7315909Z auto tmp0 = in_ptr0[i1 + (16*i0)]; 2023-01-11T21:41:23.7316145Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.7316345Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7316548Z out_ptr2[i1 + (36*i0)] = tmp2; 2023-01-11T21:41:23.7316678Z } 2023-01-11T21:41:23.7316817Z } 2023-01-11T21:41:23.7316955Z } 2023-01-11T21:41:23.7317095Z } 2023-01-11T21:41:23.7317264Z #pragma omp for 2023-01-11T21:41:23.7317448Z for(long i0=0; i0<128; i0+=1) 2023-01-11T21:41:23.7317595Z { 2023-01-11T21:41:23.7317724Z { 2023-01-11T21:41:23.7317870Z { 2023-01-11T21:41:23.7318078Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7318316Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.7318511Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.7318760Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:23.7318946Z out_ptr3[i0] = tmp2; 2023-01-11T21:41:23.7319113Z out_ptr4[i0] = tmp2; 2023-01-11T21:41:23.7319300Z out_ptr5[i0] = tmp3; 2023-01-11T21:41:23.7319481Z out_ptr6[i0] = tmp3; 2023-01-11T21:41:23.7319621Z } 2023-01-11T21:41:23.7319758Z } 2023-01-11T21:41:23.7319900Z } 2023-01-11T21:41:23.7320034Z } 2023-01-11T21:41:23.7320152Z } 2023-01-11T21:41:23.7320349Z ''') 2023-01-11T21:41:23.7320361Z 2023-01-11T21:41:23.7320664Z async_compile.wait(globals()) 2023-01-11T21:41:23.7320825Z del async_compile 2023-01-11T21:41:23.7321098Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7321253Z wrapper = ( 2023-01-11T21:41:23.7321403Z ''' 2023-01-11T21:41:23.7321564Z #include 2023-01-11T21:41:23.7321731Z #include 2023-01-11T21:41:23.7322017Z std::vector call_5(std::vector args) { 2023-01-11T21:41:23.7322181Z at::Tensor arg0_1; 2023-01-11T21:41:23.7322330Z arg0_1 = args[0]; 2023-01-11T21:41:23.7322622Z auto buf3 = at::empty_strided({8, 36}, {36, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7322863Z auto buf0 = at::as_strided(buf3, {8, 16}, {36, 1}); // alias 2023-01-11T21:41:23.7323106Z auto buf2 = at::as_strided(buf3, {8, 16}, {36, 1}, 20); // alias 2023-01-11T21:41:23.7323349Z auto buf1 = at::as_strided(buf3, {8, 4}, {36, 1}, 16); // alias 2023-01-11T21:41:23.7323639Z auto buf6 = at::empty_strided({16, 16}, {16, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7323941Z auto buf4 = at::as_strided(buf6, {8, 16}, {16, 1}); // alias 2023-01-11T21:41:23.7324186Z auto buf5 = at::as_strided(buf6, {8, 16}, {16, 1}, 128); // alias 2023-01-11T21:41:23.7324479Z auto buf9 = at::empty_strided({16, 16}, {16, 1}, at::ScalarType::Double); 2023-01-11T21:41:23.7324721Z auto buf7 = at::as_strided(buf9, {8, 16}, {16, 1}); // alias 2023-01-11T21:41:23.7324970Z auto buf8 = at::as_strided(buf9, {8, 16}, {16, 1}, 128); // alias 2023-01-11T21:41:23.7325633Z auto kernel_cpp_0_lib = dlopen("/tmp/torchinductor_jenkins/xx/cxxvcduhvysrcz4iolqfqobmycduemx7yqmtsfsczg53dn526b2j.so", RTLD_NOW); 2023-01-11T21:41:23.7325837Z assert(kernel_cpp_0_lib != nullptr); 2023-01-11T21:41:23.7326189Z void (*kernel_cpp_0)(const float*,float*,float*,float*,float*,float*,double*,double*); 2023-01-11T21:41:23.7326474Z *(void **) (&kernel_cpp_0) = dlsym(kernel_cpp_0_lib, "kernel"); 2023-01-11T21:41:23.7327164Z kernel_cpp_0((float*)(arg0_1.data_ptr()), (float*)(buf0.data_ptr()), (float*)(buf2.data_ptr()), (float*)(buf1.data_ptr()), (float*)(buf4.data_ptr()), 
(float*)(buf5.data_ptr()), (double*)(buf7.data_ptr()), (double*)(buf8.data_ptr())); 2023-01-11T21:41:23.7327333Z arg0_1.reset(); 2023-01-11T21:41:23.7327746Z return std::vector({buf3, buf6, buf9}); }''' ) 2023-01-11T21:41:23.7327757Z 2023-01-11T21:41:23.7327929Z module = load_inline( 2023-01-11T21:41:23.7328546Z name='inline_extension_cji7g5r4jyhbjebn7otwyol5fhfvue57jqht5etdlccgyremizfh', 2023-01-11T21:41:23.7328741Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7328974Z functions=['call_5'], 2023-01-11T21:41:23.7329842Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7330177Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7331729Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7331746Z 2023-01-11T21:41:23.7331917Z def _wrap_func(f): 2023-01-11T21:41:23.7332071Z def g(args): 2023-01-11T21:41:23.7332230Z return f(args) 2023-01-11T21:41:23.7332373Z return g 2023-01-11T21:41:23.7332560Z call = _wrap_func(module.call_5) 2023-01-11T21:41:23.7332570Z 2023-01-11T21:41:23.7332594Z 2023-01-11T21:41:23.7332747Z if __name__ == "__main__": 2023-01-11T21:41:23.7333008Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7333309Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7333760Z arg0_1 = rand_strided((8, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7334009Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.7334020Z 2023-01-11T21:41:23.7334741Z [2023-01-11 21:28:15,662] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 158 2023-01-11T21:41:23.7335905Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7336182Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7336789Z [2023-01-11 21:28:15,713] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 159 2023-01-11T21:41:23.7336800Z 2023-01-11T21:41:23.7336997Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7337155Z import torch 2023-01-11T21:41:23.7337307Z import random 2023-01-11T21:41:23.7337561Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7337904Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7337916Z 2023-01-11T21:41:23.7338086Z aten = torch.ops.aten 2023-01-11T21:41:23.7338393Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7338582Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7338592Z 2023-01-11T21:41:23.7338617Z 2023-01-11T21:41:23.7338916Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7339427Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7339697Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7339932Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.7340166Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.7340402Z const double* __restrict__ in_ptr3, 2023-01-11T21:41:23.7340626Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7340831Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.7341044Z float* __restrict__ out_ptr2, 2023-01-11T21:41:23.7341265Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.7341480Z float* __restrict__ out_ptr4, 2023-01-11T21:41:23.7341686Z float* __restrict__ out_ptr5, 2023-01-11T21:41:23.7341913Z double* __restrict__ out_ptr6, 2023-01-11T21:41:23.7342144Z double* __restrict__ out_ptr7, 2023-01-11T21:41:23.7342502Z float* __restrict__ out_ptr8, 2023-01-11T21:41:23.7342707Z double* __restrict__ out_ptr9) 2023-01-11T21:41:23.7342838Z { 2023-01-11T21:41:23.7343061Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7343190Z { 2023-01-11T21:41:23.7343398Z #pragma omp for collapse(2) 2023-01-11T21:41:23.7343578Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.7343701Z { 2023-01-11T21:41:23.7343888Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:23.7344027Z { 2023-01-11T21:41:23.7344212Z #pragma GCC ivdep 2023-01-11T21:41:23.7344409Z for(long i2=0; i2<16; i2+=1) 2023-01-11T21:41:23.7344550Z { 2023-01-11T21:41:23.7344693Z { 2023-01-11T21:41:23.7344827Z { 2023-01-11T21:41:23.7345070Z auto tmp0 = in_ptr0[i0 + (3*i2) + (48*i1)]; 2023-01-11T21:41:23.7345309Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.7345517Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7345757Z auto tmp3 = static_cast(2); 2023-01-11T21:41:23.7345957Z auto tmp4 = tmp0 + tmp3; 2023-01-11T21:41:23.7346176Z out_ptr0[i2 + (48*i1) + (144*i0)] = tmp0; 2023-01-11T21:41:23.7346376Z out_ptr1[i2 + (48*i1) + (144*i0)] = tmp2; 2023-01-11T21:41:23.7346710Z out_ptr2[i2 + (48*i1) + (144*i0)] = tmp4; 2023-01-11T21:41:23.7346864Z } 2023-01-11T21:41:23.7347011Z } 2023-01-11T21:41:23.7347157Z } 2023-01-11T21:41:23.7347295Z } 2023-01-11T21:41:23.7347429Z } 2023-01-11T21:41:23.7347617Z #pragma omp for collapse(2) 2023-01-11T21:41:23.7347801Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.7347943Z { 2023-01-11T21:41:23.7348128Z for(long i1=0; i1<144; i1+=1) 2023-01-11T21:41:23.7348275Z { 2023-01-11T21:41:23.7348417Z { 2023-01-11T21:41:23.7348564Z { 
2023-01-11T21:41:23.7348767Z auto tmp0 = in_ptr1[i1 + (144*i0)]; 2023-01-11T21:41:23.7348971Z out_ptr3[i0 + (3*i1)] = tmp0; 2023-01-11T21:41:23.7349116Z } 2023-01-11T21:41:23.7349258Z } 2023-01-11T21:41:23.7349402Z } 2023-01-11T21:41:23.7349656Z } 2023-01-11T21:41:23.7349863Z #pragma omp for collapse(2) 2023-01-11T21:41:23.7350024Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.7350170Z { 2023-01-11T21:41:23.7350352Z for(long i1=0; i1<48; i1+=1) 2023-01-11T21:41:23.7350492Z { 2023-01-11T21:41:23.7350631Z { 2023-01-11T21:41:23.7350776Z { 2023-01-11T21:41:23.7350981Z auto tmp0 = in_ptr0[i0 + (3*i1)]; 2023-01-11T21:41:23.7351221Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.7351428Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.7351628Z out_ptr4[i1 + (48*i0)] = tmp2; 2023-01-11T21:41:23.7351774Z } 2023-01-11T21:41:23.7351922Z } 2023-01-11T21:41:23.7352060Z } 2023-01-11T21:41:23.7352182Z } 2023-01-11T21:41:23.7352355Z #pragma omp for 2023-01-11T21:41:23.7352545Z for(long i0=0; i0<144; i0+=1) 2023-01-11T21:41:23.7352684Z { 2023-01-11T21:41:23.7352827Z { 2023-01-11T21:41:23.7352963Z { 2023-01-11T21:41:23.7353169Z auto tmp0 = out_ptr4[i0]; 2023-01-11T21:41:23.7353407Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.7353588Z out_ptr5[i0] = tmp0; 2023-01-11T21:41:23.7353843Z out_ptr6[i0] = tmp1; 2023-01-11T21:41:23.7354027Z out_ptr7[i0] = tmp1; 2023-01-11T21:41:23.7354174Z } 2023-01-11T21:41:23.7354311Z } 2023-01-11T21:41:23.7354453Z } 2023-01-11T21:41:23.7354649Z #pragma omp for collapse(3) 2023-01-11T21:41:23.7354831Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.7354970Z { 2023-01-11T21:41:23.7355158Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:23.7355297Z { 2023-01-11T21:41:23.7355486Z for(long i2=0; i2<48; i2+=1) 2023-01-11T21:41:23.7355638Z { 2023-01-11T21:41:23.7355772Z { 2023-01-11T21:41:23.7355917Z { 2023-01-11T21:41:23.7356147Z auto tmp0 = in_ptr2[i2 + (48*i1) + (144*i0)]; 2023-01-11T21:41:23.7356366Z out_ptr8[i1 + (3*i2) + (144*i0)] = tmp0; 2023-01-11T21:41:23.7356510Z } 2023-01-11T21:41:23.7356653Z } 2023-01-11T21:41:23.7356797Z } 2023-01-11T21:41:23.7356920Z } 2023-01-11T21:41:23.7357065Z } 2023-01-11T21:41:23.7357272Z #pragma omp for collapse(3) 2023-01-11T21:41:23.7357450Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.7357584Z { 2023-01-11T21:41:23.7357768Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:23.7357894Z { 2023-01-11T21:41:23.7358095Z for(long i2=0; i2<48; i2+=1) 2023-01-11T21:41:23.7358236Z { 2023-01-11T21:41:23.7358459Z { 2023-01-11T21:41:23.7358606Z { 2023-01-11T21:41:23.7358843Z auto tmp0 = in_ptr3[i2 + (48*i1) + (144*i0)]; 2023-01-11T21:41:23.7359066Z out_ptr9[i1 + (3*i2) + (144*i0)] = tmp0; 2023-01-11T21:41:23.7359200Z } 2023-01-11T21:41:23.7359342Z } 2023-01-11T21:41:23.7359483Z } 2023-01-11T21:41:23.7359623Z } 2023-01-11T21:41:23.7359760Z } 2023-01-11T21:41:23.7359896Z } 2023-01-11T21:41:23.7360034Z } 2023-01-11T21:41:23.7360207Z ''') 2023-01-11T21:41:23.7360219Z 2023-01-11T21:41:23.7360424Z async_compile.wait(globals()) 2023-01-11T21:41:23.7360595Z del async_compile 2023-01-11T21:41:23.7360869Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7361025Z wrapper = ( 2023-01-11T21:41:23.7361193Z ''' 2023-01-11T21:41:23.7361353Z #include 2023-01-11T21:41:23.7361504Z #include 2023-01-11T21:41:23.7361860Z std::vector call_6(std::vector args) { 2023-01-11T21:41:23.7362034Z at::Tensor arg0_1; 2023-01-11T21:41:23.7362187Z arg0_1 = args[0]; 2023-01-11T21:41:23.7362493Z auto buf3 = at::empty_strided({1, 3, 3, 48}, {432, 144, 48, 1}, 
at::ScalarType::Float); 2023-01-11T21:41:23.7362762Z auto buf0 = at::as_strided(buf3, {1, 3, 3, 16}, {432, 144, 48, 1}); // alias 2023-01-11T21:41:23.7363025Z auto buf1 = at::as_strided(buf3, {1, 3, 3, 16}, {432, 144, 48, 1}, 16); // alias 2023-01-11T21:41:23.7363292Z auto buf2 = at::as_strided(buf3, {1, 3, 3, 16}, {432, 144, 48, 1}, 32); // alias 2023-01-11T21:41:23.7363591Z auto buf4 = at::empty_strided({1, 3, 3, 48}, {432, 1, 144, 3}, at::ScalarType::Float); 2023-01-11T21:41:23.7363893Z auto buf7 = at::empty_strided({2, 3, 3, 16}, {144, 48, 16, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7364160Z auto buf5 = at::as_strided(buf7, {1, 3, 3, 16}, {144, 48, 16, 1}); // alias 2023-01-11T21:41:23.7364437Z auto buf6 = at::as_strided(buf7, {1, 3, 3, 16}, {144, 48, 16, 1}, 144); // alias 2023-01-11T21:41:23.7364755Z auto buf11 = at::empty_strided({2, 3, 3, 16}, {144, 48, 16, 1}, at::ScalarType::Double); 2023-01-11T21:41:23.7365014Z auto buf9 = at::as_strided(buf11, {1, 3, 3, 16}, {144, 48, 16, 1}); // alias 2023-01-11T21:41:23.7365286Z auto buf10 = at::as_strided(buf11, {1, 3, 3, 16}, {144, 48, 16, 1}, 144); // alias 2023-01-11T21:41:23.7365594Z auto buf8 = at::empty_strided({2, 3, 3, 16}, {144, 1, 48, 3}, at::ScalarType::Float); 2023-01-11T21:41:23.7365908Z auto buf12 = at::empty_strided({2, 3, 3, 16}, {144, 1, 48, 3}, at::ScalarType::Double); 2023-01-11T21:41:23.7366533Z auto kernel_cpp_0_lib = dlopen("/tmp/torchinductor_jenkins/tf/ctf2pptj3ayxmvzwwc77hmhshbex7tnnbmizfndel6malv4tyfpu.so", RTLD_NOW); 2023-01-11T21:41:23.7366738Z assert(kernel_cpp_0_lib != nullptr); 2023-01-11T21:41:23.7367260Z void (*kernel_cpp_0)(const float*,const float*,const float*,const double*,float*,float*,float*,float*,float*,float*,double*,double*,float*,double*); 2023-01-11T21:41:23.7367542Z *(void **) (&kernel_cpp_0) = dlsym(kernel_cpp_0_lib, "kernel"); 2023-01-11T21:41:23.7368590Z kernel_cpp_0((float*)(arg0_1.data_ptr()), (float*)(buf3.data_ptr()), (float*)(buf7.data_ptr()), (double*)(buf11.data_ptr()), (float*)(buf0.data_ptr()), (float*)(buf1.data_ptr()), (float*)(buf2.data_ptr()), (float*)(buf4.data_ptr()), (float*)(buf5.data_ptr()), (float*)(buf6.data_ptr()), (double*)(buf9.data_ptr()), (double*)(buf10.data_ptr()), (float*)(buf8.data_ptr()), (double*)(buf12.data_ptr())); 2023-01-11T21:41:23.7368746Z arg0_1.reset(); 2023-01-11T21:41:23.7368903Z buf0.reset(); 2023-01-11T21:41:23.7369060Z buf1.reset(); 2023-01-11T21:41:23.7369214Z buf10.reset(); 2023-01-11T21:41:23.7369374Z buf11.reset(); 2023-01-11T21:41:23.7369518Z buf2.reset(); 2023-01-11T21:41:23.7369669Z buf3.reset(); 2023-01-11T21:41:23.7369820Z buf5.reset(); 2023-01-11T21:41:23.7370056Z buf6.reset(); 2023-01-11T21:41:23.7370216Z buf7.reset(); 2023-01-11T21:41:23.7370374Z buf9.reset(); 2023-01-11T21:41:23.7370788Z return std::vector({buf4, buf8, buf12}); }''' ) 2023-01-11T21:41:23.7370815Z 2023-01-11T21:41:23.7370973Z module = load_inline( 2023-01-11T21:41:23.7371619Z name='inline_extension_cwwd52gevofbwzpjosptdssanute3m5y5v4upouhvxxh6jtkg7ny', 2023-01-11T21:41:23.7371815Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7372066Z functions=['call_6'], 2023-01-11T21:41:23.7372926Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7373258Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7374905Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include 
-I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7374928Z 2023-01-11T21:41:23.7375094Z def _wrap_func(f): 2023-01-11T21:41:23.7375230Z def g(args): 2023-01-11T21:41:23.7375398Z return f(args) 2023-01-11T21:41:23.7375549Z return g 2023-01-11T21:41:23.7375748Z call = _wrap_func(module.call_6) 2023-01-11T21:41:23.7375758Z 2023-01-11T21:41:23.7375767Z 2023-01-11T21:41:23.7375934Z if __name__ == "__main__": 2023-01-11T21:41:23.7376198Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7376475Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7376975Z arg0_1 = rand_strided((1, 3, 3, 16), (144, 1, 48, 3), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7377212Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.7377222Z 2023-01-11T21:41:23.7377855Z [2023-01-11 21:28:35,060] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 159 2023-01-11T21:41:23.7378974Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7379264Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7379882Z [2023-01-11 21:28:35,113] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 160 2023-01-11T21:41:23.7379893Z 2023-01-11T21:41:23.7380109Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7380267Z import torch 2023-01-11T21:41:23.7380427Z import random 2023-01-11T21:41:23.7380709Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7380972Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7380983Z 2023-01-11T21:41:23.7381156Z aten = torch.ops.aten 2023-01-11T21:41:23.7381472Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7381681Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7381691Z 2023-01-11T21:41:23.7381699Z 2023-01-11T21:41:23.7382003Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7382639Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7382907Z extern "C" void kernel(float* __restrict__ in_out_ptr0) 2023-01-11T21:41:23.7383040Z { 2023-01-11T21:41:23.7383255Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7383384Z { 2023-01-11T21:41:23.7383562Z #pragma omp for 2023-01-11T21:41:23.7383743Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:23.7383887Z { 2023-01-11T21:41:23.7384197Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.7384498Z auto tmp1 = decltype(tmp0)(1)/(decltype(tmp0)(1) + tmp0.neg().exp()); 2023-01-11T21:41:23.7384813Z tmp1.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.7384953Z } 2023-01-11T21:41:23.7385169Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7385347Z for(long i0=32; i0<32; i0+=1) 2023-01-11T21:41:23.7385483Z { 2023-01-11T21:41:23.7385687Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:23.7386008Z auto tmp1 = std::exp(-tmp0); 2023-01-11T21:41:23.7386170Z auto tmp2 = 1 / (1 + tmp1); 2023-01-11T21:41:23.7386351Z 
in_out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.7386488Z } 2023-01-11T21:41:23.7386619Z } 2023-01-11T21:41:23.7386749Z } 2023-01-11T21:41:23.7386919Z ''') 2023-01-11T21:41:23.7386927Z 2023-01-11T21:41:23.7387129Z async_compile.wait(globals()) 2023-01-11T21:41:23.7387275Z del async_compile 2023-01-11T21:41:23.7387547Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7387702Z wrapper = ( 2023-01-11T21:41:23.7387869Z ''' 2023-01-11T21:41:23.7388035Z #include 2023-01-11T21:41:23.7388293Z #include 2023-01-11T21:41:23.7388595Z std::vector call_7(std::vector args) { 2023-01-11T21:41:23.7388827Z at::Tensor primals_1, primals_2, primals_3; 2023-01-11T21:41:23.7389005Z primals_1 = args[0]; 2023-01-11T21:41:23.7389174Z primals_2 = args[1]; 2023-01-11T21:41:23.7389344Z primals_3 = args[2]; 2023-01-11T21:41:23.7389644Z auto buf0 = at::empty_strided({2, 16}, {16, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7389976Z at::addmm_out(buf0, primals_2, primals_3, at::as_strided(primals_1, {8, 16}, {1, 8}), 1, 1); 2023-01-11T21:41:23.7390149Z primals_1.reset(); 2023-01-11T21:41:23.7390306Z primals_2.reset(); 2023-01-11T21:41:23.7390522Z auto buf1 = buf0; buf0.reset(); // reuse 2023-01-11T21:41:23.7391146Z auto kernel_cpp_0_lib = dlopen("/tmp/torchinductor_jenkins/iw/ciwiq5etlmci4iuytny63x3zhsv2pxo7evslwt4lbr6dhqxlpznj.so", RTLD_NOW); 2023-01-11T21:41:23.7391361Z assert(kernel_cpp_0_lib != nullptr); 2023-01-11T21:41:23.7391565Z void (*kernel_cpp_0)(float*); 2023-01-11T21:41:23.7391845Z *(void **) (&kernel_cpp_0) = dlsym(kernel_cpp_0_lib, "kernel"); 2023-01-11T21:41:23.7392077Z kernel_cpp_0((float*)(buf1.data_ptr())); 2023-01-11T21:41:23.7392511Z return std::vector({buf1, primals_3, buf1}); }''' ) 2023-01-11T21:41:23.7392523Z 2023-01-11T21:41:23.7392686Z module = load_inline( 2023-01-11T21:41:23.7393296Z name='inline_extension_cfo4kg3ofnmy3tbyyrnmb4ko7l4ctd5apfqvog3nnpfe7zmqeqge', 2023-01-11T21:41:23.7393484Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7393798Z functions=['call_7'], 2023-01-11T21:41:23.7394677Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7395015Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7396572Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7396588Z 2023-01-11T21:41:23.7396750Z def _wrap_func(f): 2023-01-11T21:41:23.7396906Z def g(args): 2023-01-11T21:41:23.7397044Z return f(args) 2023-01-11T21:41:23.7397187Z return g 2023-01-11T21:41:23.7397394Z call = _wrap_func(module.call_7) 2023-01-11T21:41:23.7397404Z 2023-01-11T21:41:23.7397413Z 2023-01-11T21:41:23.7397580Z if __name__ == "__main__": 2023-01-11T21:41:23.7397841Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7398122Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7398613Z primals_1 = rand_strided((16, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7399060Z primals_2 = rand_strided((16, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7399631Z primals_3 = rand_strided((2, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7399957Z print_performance(lambda: call([primals_1, primals_2, primals_3])) 
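The kernel compiled for graph 160 applies a logistic sigmoid in place over the addmm result, split into an 8-wide vectorized main loop and a scalar tail loop. Below is a minimal standalone sketch of that loop structure in plain C++/OpenMP; it is not the generated kernel itself, and the function name and the plain inner loop standing in for the vectorized load/store are hypothetical.

```cpp
// sigmoid_inplace_sketch.cpp -- illustrative only; the function name and the
// scalar inner loop standing in for at::vec::Vectorized are hypothetical.
#include <cmath>
#include <cstdio>
#include <vector>

// Apply sigmoid(x) = 1 / (1 + exp(-x)) in place, mirroring the generated
// kernel's structure: a parallel 8-wide main loop plus a scalar tail loop.
void sigmoid_inplace(float* __restrict__ data, long n) {
    const long width = 8;                        // lanes per vectorized block
    const long main_end = (n / width) * width;
    #pragma omp parallel
    {
        #pragma omp for
        for (long i = 0; i < main_end; i += width) {
            for (long j = 0; j < width; ++j) {   // stands in for the SIMD load/compute/store
                float x = data[i + j];
                data[i + j] = 1.0f / (1.0f + std::exp(-x));
            }
        }
        #pragma omp for simd simdlen(4)
        for (long i = main_end; i < n; ++i) {    // remainder elements, if any
            float x = data[i];
            data[i] = 1.0f / (1.0f + std::exp(-x));
        }
    }
}

int main() {
    std::vector<float> buf(32, 0.5f);            // same element count as the graph-160 buffer
    sigmoid_inplace(buf.data(), static_cast<long>(buf.size()));
    std::printf("sigmoid(0.5) = %f\n", buf[0]);
    return 0;
}
```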
2023-01-11T21:41:23.7399968Z 2023-01-11T21:41:23.7400607Z [2023-01-11 21:28:56,424] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 160 2023-01-11T21:41:23.7401766Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7402046Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7402653Z [2023-01-11 21:28:56,622] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 161 2023-01-11T21:41:23.7402668Z 2023-01-11T21:41:23.7402944Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7403104Z import torch 2023-01-11T21:41:23.7403264Z import random 2023-01-11T21:41:23.7403531Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7403815Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7403826Z 2023-01-11T21:41:23.7404005Z aten = torch.ops.aten 2023-01-11T21:41:23.7404307Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7404515Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7404525Z 2023-01-11T21:41:23.7404534Z 2023-01-11T21:41:23.7404836Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7405339Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7405608Z extern "C" void kernel(float* __restrict__ in_out_ptr0) 2023-01-11T21:41:23.7405733Z { 2023-01-11T21:41:23.7405955Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7406094Z { 2023-01-11T21:41:23.7406279Z #pragma omp for 2023-01-11T21:41:23.7406458Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.7406599Z { 2023-01-11T21:41:23.7406904Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.7407189Z auto tmp1 = at::vec::clamp_min(tmp0, decltype(tmp0)(0)); 2023-01-11T21:41:23.7407399Z tmp1.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.7407495Z } 2023-01-11T21:41:23.7407621Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7407728Z for(long i0=16; i0<16; i0+=1) 2023-01-11T21:41:23.7407805Z { 2023-01-11T21:41:23.7407915Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:23.7408009Z auto tmp1 = tmp0 * (tmp0>0); 2023-01-11T21:41:23.7408114Z in_out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.7408203Z } 2023-01-11T21:41:23.7408293Z } 2023-01-11T21:41:23.7408385Z } 2023-01-11T21:41:23.7408509Z ''') 2023-01-11T21:41:23.7408517Z 2023-01-11T21:41:23.7408527Z 2023-01-11T21:41:23.7408720Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.7408996Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7409159Z extern "C" void kernel(float* __restrict__ in_out_ptr0) 2023-01-11T21:41:23.7409243Z { 2023-01-11T21:41:23.7409393Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7409480Z { 2023-01-11T21:41:23.7409581Z #pragma omp for 2023-01-11T21:41:23.7409700Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.7409776Z { 2023-01-11T21:41:23.7409985Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.7410174Z auto tmp1 = at::vec::clamp_min(tmp0, decltype(tmp0)(0)); 2023-01-11T21:41:23.7410310Z tmp1.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.7410403Z } 
2023-01-11T21:41:23.7410548Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7410671Z for(long i0=16; i0<16; i0+=1) 2023-01-11T21:41:23.7410836Z { 2023-01-11T21:41:23.7410968Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:23.7411099Z auto tmp1 = tmp0 * (tmp0>0); 2023-01-11T21:41:23.7411223Z in_out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.7411313Z } 2023-01-11T21:41:23.7411405Z } 2023-01-11T21:41:23.7411495Z } 2023-01-11T21:41:23.7411612Z ''') 2023-01-11T21:41:23.7411620Z 2023-01-11T21:41:23.7411626Z 2023-01-11T21:41:23.7411844Z kernel_cpp_2 = async_compile.cpp(''' 2023-01-11T21:41:23.7412167Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7412342Z extern "C" void kernel(float* __restrict__ in_out_ptr0) 2023-01-11T21:41:23.7412428Z { 2023-01-11T21:41:23.7412578Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7412675Z { 2023-01-11T21:41:23.7412765Z #pragma omp for 2023-01-11T21:41:23.7412882Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.7412980Z { 2023-01-11T21:41:23.7413250Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.7413426Z auto tmp1 = at::vec::clamp_min(tmp0, decltype(tmp0)(0)); 2023-01-11T21:41:23.7413556Z tmp1.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.7413642Z } 2023-01-11T21:41:23.7413756Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7413863Z for(long i0=16; i0<16; i0+=1) 2023-01-11T21:41:23.7413955Z { 2023-01-11T21:41:23.7414081Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:23.7414193Z auto tmp1 = tmp0 * (tmp0>0); 2023-01-11T21:41:23.7414307Z in_out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.7414394Z } 2023-01-11T21:41:23.7414456Z } 2023-01-11T21:41:23.7414531Z } 2023-01-11T21:41:23.7414659Z ''') 2023-01-11T21:41:23.7414666Z 2023-01-11T21:41:23.7414671Z 2023-01-11T21:41:23.7414854Z kernel_cpp_3 = async_compile.cpp(''' 2023-01-11T21:41:23.7415156Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7415328Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.7415467Z bool* __restrict__ out_ptr0) 2023-01-11T21:41:23.7415544Z { 2023-01-11T21:41:23.7415668Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7415749Z { 2023-01-11T21:41:23.7415857Z #pragma omp for 2023-01-11T21:41:23.7415981Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.7416084Z { 2023-01-11T21:41:23.7416192Z { 2023-01-11T21:41:23.7416288Z { 2023-01-11T21:41:23.7416449Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:23.7416605Z auto tmp1 = tmp0 * (tmp0>0); 2023-01-11T21:41:23.7416780Z auto tmp2 = static_cast(0); 2023-01-11T21:41:23.7416933Z auto tmp3 = tmp1 <= tmp2; 2023-01-11T21:41:23.7417080Z in_out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.7417215Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:23.7417295Z } 2023-01-11T21:41:23.7417393Z } 2023-01-11T21:41:23.7417486Z } 2023-01-11T21:41:23.7417573Z } 2023-01-11T21:41:23.7417663Z } 2023-01-11T21:41:23.7417802Z ''') 2023-01-11T21:41:23.7417811Z 2023-01-11T21:41:23.7417952Z async_compile.wait(globals()) 2023-01-11T21:41:23.7418051Z del async_compile 2023-01-11T21:41:23.7418232Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7418343Z wrapper = ( 2023-01-11T21:41:23.7418470Z ''' 2023-01-11T21:41:23.7418583Z #include 2023-01-11T21:41:23.7418697Z #include 2023-01-11T21:41:23.7418888Z std::vector call_8(std::vector args) { 2023-01-11T21:41:23.7419129Z at::Tensor primals_1, primals_2, primals_3, primals_4, primals_5, primals_6, primals_7, primals_8, 
primals_9; 2023-01-11T21:41:23.7419232Z primals_1 = args[0]; 2023-01-11T21:41:23.7419339Z primals_2 = args[1]; 2023-01-11T21:41:23.7419541Z primals_3 = args[2]; 2023-01-11T21:41:23.7419647Z primals_4 = args[3]; 2023-01-11T21:41:23.7419753Z primals_5 = args[4]; 2023-01-11T21:41:23.7419857Z primals_6 = args[5]; 2023-01-11T21:41:23.7419952Z primals_7 = args[6]; 2023-01-11T21:41:23.7420071Z primals_8 = args[7]; 2023-01-11T21:41:23.7420187Z primals_9 = args[8]; 2023-01-11T21:41:23.7420385Z auto buf0 = at::empty_strided({2, 8}, {8, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7420600Z at::addmm_out(buf0, primals_2, primals_9, at::as_strided(primals_1, {8, 8}, {1, 8}), 1, 1); 2023-01-11T21:41:23.7420715Z primals_1.reset(); 2023-01-11T21:41:23.7420817Z primals_2.reset(); 2023-01-11T21:41:23.7420947Z auto buf1 = buf0; buf0.reset(); // reuse 2023-01-11T21:41:23.7421328Z auto kernel_cpp_0_lib = dlopen("/tmp/torchinductor_jenkins/qf/cqfoag4edkqhhwar6dcfekr7xrl4rhfowlfpqtdov7pzkh5vony6.so", RTLD_NOW); 2023-01-11T21:41:23.7421471Z assert(kernel_cpp_0_lib != nullptr); 2023-01-11T21:41:23.7421663Z void (*kernel_cpp_0)(float*); 2023-01-11T21:41:23.7421853Z *(void **) (&kernel_cpp_0) = dlsym(kernel_cpp_0_lib, "kernel"); 2023-01-11T21:41:23.7422001Z kernel_cpp_0((float*)(buf1.data_ptr())); 2023-01-11T21:41:23.7422188Z auto buf2 = at::empty_strided({2, 8}, {8, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7422531Z at::addmm_out(buf2, primals_4, buf1, at::as_strided(primals_3, {8, 8}, {1, 8}), 1, 1); 2023-01-11T21:41:23.7422629Z primals_4.reset(); 2023-01-11T21:41:23.7422755Z auto buf3 = buf2; buf2.reset(); // reuse 2023-01-11T21:41:23.7423105Z auto kernel_cpp_1_lib = dlopen("/tmp/torchinductor_jenkins/qf/cqfoag4edkqhhwar6dcfekr7xrl4rhfowlfpqtdov7pzkh5vony6.so", RTLD_NOW); 2023-01-11T21:41:23.7423246Z assert(kernel_cpp_1_lib != nullptr); 2023-01-11T21:41:23.7423366Z void (*kernel_cpp_1)(float*); 2023-01-11T21:41:23.7423540Z *(void **) (&kernel_cpp_1) = dlsym(kernel_cpp_1_lib, "kernel"); 2023-01-11T21:41:23.7423675Z kernel_cpp_1((float*)(buf3.data_ptr())); 2023-01-11T21:41:23.7423872Z auto buf4 = at::empty_strided({2, 8}, {8, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7424061Z at::addmm_out(buf4, primals_6, buf3, at::as_strided(primals_5, {8, 8}, {1, 8}), 1, 1); 2023-01-11T21:41:23.7424165Z primals_6.reset(); 2023-01-11T21:41:23.7424300Z auto buf5 = buf4; buf4.reset(); // reuse 2023-01-11T21:41:23.7424673Z auto kernel_cpp_2_lib = dlopen("/tmp/torchinductor_jenkins/qf/cqfoag4edkqhhwar6dcfekr7xrl4rhfowlfpqtdov7pzkh5vony6.so", RTLD_NOW); 2023-01-11T21:41:23.7424814Z assert(kernel_cpp_2_lib != nullptr); 2023-01-11T21:41:23.7424935Z void (*kernel_cpp_2)(float*); 2023-01-11T21:41:23.7425104Z *(void **) (&kernel_cpp_2) = dlsym(kernel_cpp_2_lib, "kernel"); 2023-01-11T21:41:23.7425247Z kernel_cpp_2((float*)(buf5.data_ptr())); 2023-01-11T21:41:23.7425416Z auto buf6 = at::empty_strided({2, 8}, {8, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7425618Z at::addmm_out(buf6, primals_8, buf5, at::as_strided(primals_7, {8, 8}, {1, 8}), 1, 1); 2023-01-11T21:41:23.7425723Z primals_8.reset(); 2023-01-11T21:41:23.7425860Z auto buf7 = buf6; buf6.reset(); // reuse 2023-01-11T21:41:23.7426042Z auto buf8 = at::empty_strided({2, 8}, {8, 1}, at::ScalarType::Bool); 2023-01-11T21:41:23.7426383Z auto kernel_cpp_3_lib = dlopen("/tmp/torchinductor_jenkins/by/cby2ob4hd4n2ixdn4t25zfpvb6vyfd66nfdv33d5vsx36mxcwhy4.so", RTLD_NOW); 2023-01-11T21:41:23.7426520Z assert(kernel_cpp_3_lib != nullptr); 2023-01-11T21:41:23.7426656Z void 
(*kernel_cpp_3)(float*,bool*); 2023-01-11T21:41:23.7426821Z *(void **) (&kernel_cpp_3) = dlsym(kernel_cpp_3_lib, "kernel"); 2023-01-11T21:41:23.7427007Z kernel_cpp_3((float*)(buf7.data_ptr()), (bool*)(buf8.data_ptr())); 2023-01-11T21:41:23.7427610Z return std::vector({buf7, primals_9, buf1, buf3, buf5, buf8, at::as_strided(primals_7, {8, 8}, {8, 1}), at::as_strided(primals_5, {8, 8}, {8, 1}), at::as_strided(primals_3, {8, 8}, {8, 1})}); }''' ) 2023-01-11T21:41:23.7427734Z 2023-01-11T21:41:23.7427847Z module = load_inline( 2023-01-11T21:41:23.7428227Z name='inline_extension_csdpljql3gdtwyefwna55hislgjyk4wkihbuoidk4i5vw3bqzfra', 2023-01-11T21:41:23.7428349Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7428516Z functions=['call_8'], 2023-01-11T21:41:23.7429052Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7429268Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7430118Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7430142Z 2023-01-11T21:41:23.7430250Z def _wrap_func(f): 2023-01-11T21:41:23.7430352Z def g(args): 2023-01-11T21:41:23.7430572Z return f(args) 2023-01-11T21:41:23.7430671Z return g 2023-01-11T21:41:23.7430810Z call = _wrap_func(module.call_8) 2023-01-11T21:41:23.7430818Z 2023-01-11T21:41:23.7430824Z 2023-01-11T21:41:23.7430925Z if __name__ == "__main__": 2023-01-11T21:41:23.7431083Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7431234Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7431540Z primals_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7431842Z primals_2 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7432131Z primals_3 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7432423Z primals_4 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7432698Z primals_5 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7432981Z primals_6 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7433290Z primals_7 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7433590Z primals_8 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7433970Z primals_9 = rand_strided((2, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7434251Z print_performance(lambda: call([primals_1, primals_2, primals_3, primals_4, primals_5, primals_6, primals_7, primals_8, primals_9])) 2023-01-11T21:41:23.7434260Z 2023-01-11T21:41:23.7434684Z [2023-01-11 21:29:18,007] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 161 2023-01-11T21:41:23.7435064Z [2023-01-11 21:29:18,029] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 162 2023-01-11T21:41:23.7435072Z 2023-01-11T21:41:23.7435201Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7435306Z import torch 2023-01-11T21:41:23.7435408Z import random 2023-01-11T21:41:23.7435580Z from torch import empty_strided, as_strided, device 
2023-01-11T21:41:23.7435734Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7435741Z 2023-01-11T21:41:23.7435841Z aten = torch.ops.aten 2023-01-11T21:41:23.7436025Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7436148Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7436155Z 2023-01-11T21:41:23.7436161Z 2023-01-11T21:41:23.7436351Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7436647Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7436828Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7436977Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.7437108Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7437198Z { 2023-01-11T21:41:23.7437348Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7437532Z { 2023-01-11T21:41:23.7437634Z #pragma omp for 2023-01-11T21:41:23.7437743Z for(long i0=0; i0<12500; i0+=1) 2023-01-11T21:41:23.7437826Z { 2023-01-11T21:41:23.7438007Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.7438182Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.7438302Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.7438429Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.7438512Z } 2023-01-11T21:41:23.7438634Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7438750Z for(long i0=100000; i0<100000; i0+=1) 2023-01-11T21:41:23.7438814Z { 2023-01-11T21:41:23.7438921Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7439031Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.7439139Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.7439244Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.7439386Z } 2023-01-11T21:41:23.7439486Z } 2023-01-11T21:41:23.7439568Z } 2023-01-11T21:41:23.7439693Z ''') 2023-01-11T21:41:23.7439702Z 2023-01-11T21:41:23.7439834Z async_compile.wait(globals()) 2023-01-11T21:41:23.7439942Z del async_compile 2023-01-11T21:41:23.7440102Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7440198Z wrapper = ( 2023-01-11T21:41:23.7440311Z ''' 2023-01-11T21:41:23.7440398Z #include 2023-01-11T21:41:23.7440503Z #include 2023-01-11T21:41:23.7440703Z std::vector call_9(std::vector args) { 2023-01-11T21:41:23.7440846Z at::Tensor primals_1, primals_2; 2023-01-11T21:41:23.7440954Z primals_1 = args[0]; 2023-01-11T21:41:23.7441060Z primals_2 = args[1]; 2023-01-11T21:41:23.7441244Z auto buf0 = at::empty_strided({100000, }, {1, }, at::ScalarType::Float); 2023-01-11T21:41:23.7441568Z auto kernel_cpp_0_lib = dlopen("/tmp/torchinductor_jenkins/76/c76av2fn5v7sv2ienoi5x7je4ut64tlvvozup7k5kastmftp5d3z.so", RTLD_NOW); 2023-01-11T21:41:23.7441722Z assert(kernel_cpp_0_lib != nullptr); 2023-01-11T21:41:23.7441897Z void (*kernel_cpp_0)(const float*,const float*,float*); 2023-01-11T21:41:23.7442081Z *(void **) (&kernel_cpp_0) = dlsym(kernel_cpp_0_lib, "kernel"); 2023-01-11T21:41:23.7442325Z kernel_cpp_0((float*)(primals_1.data_ptr()), (float*)(primals_2.data_ptr()), (float*)(buf0.data_ptr())); 2023-01-11T21:41:23.7442438Z primals_2.reset(); 2023-01-11T21:41:23.7442717Z return std::vector({buf0, primals_1}); }''' ) 2023-01-11T21:41:23.7442726Z 2023-01-11T21:41:23.7442834Z module = load_inline( 2023-01-11T21:41:23.7443197Z name='inline_extension_c5mwnoxgxnxcege5ed3o2rm3o6dlemfn2y763tsgjbijuxitr5bs', 2023-01-11T21:41:23.7443307Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7443463Z functions=['call_9'], 2023-01-11T21:41:23.7443991Z extra_cflags=['-std=c++17 
-Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7444209Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7445125Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7445136Z 2023-01-11T21:41:23.7445250Z def _wrap_func(f): 2023-01-11T21:41:23.7445358Z def g(args): 2023-01-11T21:41:23.7445450Z return f(args) 2023-01-11T21:41:23.7445550Z return g 2023-01-11T21:41:23.7445676Z call = _wrap_func(module.call_9) 2023-01-11T21:41:23.7445683Z 2023-01-11T21:41:23.7445688Z 2023-01-11T21:41:23.7445785Z if __name__ == "__main__": 2023-01-11T21:41:23.7445935Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7446099Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7446514Z primals_1 = rand_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7446828Z primals_2 = rand_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7447001Z print_performance(lambda: call([primals_1, primals_2])) 2023-01-11T21:41:23.7447023Z 2023-01-11T21:41:23.7447437Z [2023-01-11 21:29:39,052] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 162 2023-01-11T21:41:23.7447850Z [2023-01-11 21:29:39,056] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling BACKWARDS graph 162 2023-01-11T21:41:23.7447858Z 2023-01-11T21:41:23.7448016Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7448124Z import torch 2023-01-11T21:41:23.7448239Z import random 2023-01-11T21:41:23.7448427Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7448609Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7448616Z 2023-01-11T21:41:23.7448813Z aten = torch.ops.aten 2023-01-11T21:41:23.7449013Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7449155Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7449163Z 2023-01-11T21:41:23.7449169Z 2023-01-11T21:41:23.7449384Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7449677Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7449854Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7450011Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.7450152Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7450228Z { 2023-01-11T21:41:23.7450371Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7450464Z { 2023-01-11T21:41:23.7450573Z #pragma omp for 2023-01-11T21:41:23.7450691Z for(long i0=0; i0<12500; i0+=1) 2023-01-11T21:41:23.7450788Z { 2023-01-11T21:41:23.7450991Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.7451175Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.7451272Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.7451397Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.7451490Z } 2023-01-11T21:41:23.7451617Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7451744Z for(long i0=100000; i0<100000; i0+=1) 2023-01-11T21:41:23.7451829Z { 2023-01-11T21:41:23.7451933Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7452061Z auto tmp1 = 
in_ptr1[i0]; 2023-01-11T21:41:23.7452172Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.7452277Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.7452367Z } 2023-01-11T21:41:23.7452456Z } 2023-01-11T21:41:23.7452542Z } 2023-01-11T21:41:23.7452649Z ''') 2023-01-11T21:41:23.7452657Z 2023-01-11T21:41:23.7452777Z async_compile.wait(globals()) 2023-01-11T21:41:23.7452884Z del async_compile 2023-01-11T21:41:23.7453040Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7453137Z wrapper = ( 2023-01-11T21:41:23.7453248Z ''' 2023-01-11T21:41:23.7453344Z #include 2023-01-11T21:41:23.7453427Z #include 2023-01-11T21:41:23.7453601Z std::vector call_10(std::vector args) { 2023-01-11T21:41:23.7453731Z at::Tensor primals_1, tangents_1; 2023-01-11T21:41:23.7453837Z primals_1 = args[0]; 2023-01-11T21:41:23.7453941Z tangents_1 = args[1]; 2023-01-11T21:41:23.7454133Z auto buf0 = at::empty_strided({100000, }, {1, }, at::ScalarType::Float); 2023-01-11T21:41:23.7454484Z auto kernel_cpp_0_lib = dlopen("/tmp/torchinductor_jenkins/76/c76av2fn5v7sv2ienoi5x7je4ut64tlvvozup7k5kastmftp5d3z.so", RTLD_NOW); 2023-01-11T21:41:23.7454641Z assert(kernel_cpp_0_lib != nullptr); 2023-01-11T21:41:23.7454801Z void (*kernel_cpp_0)(const float*,const float*,float*); 2023-01-11T21:41:23.7455087Z *(void **) (&kernel_cpp_0) = dlsym(kernel_cpp_0_lib, "kernel"); 2023-01-11T21:41:23.7455353Z kernel_cpp_0((float*)(tangents_1.data_ptr()), (float*)(primals_1.data_ptr()), (float*)(buf0.data_ptr())); 2023-01-11T21:41:23.7455468Z primals_1.reset(); 2023-01-11T21:41:23.7455583Z tangents_1.reset(); 2023-01-11T21:41:23.7456056Z return std::vector({at::Tensor(), buf0}); }''' ) 2023-01-11T21:41:23.7456065Z 2023-01-11T21:41:23.7456161Z module = load_inline( 2023-01-11T21:41:23.7456516Z name='inline_extension_cw4gwewvco6iido3xvjhqkdjwb76c4nhgeylgbiva5xor4layxbv', 2023-01-11T21:41:23.7456645Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7456821Z functions=['call_10'], 2023-01-11T21:41:23.7457395Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7457611Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7458537Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7458555Z 2023-01-11T21:41:23.7458655Z def _wrap_func(f): 2023-01-11T21:41:23.7458745Z def g(args): 2023-01-11T21:41:23.7458839Z return f(args) 2023-01-11T21:41:23.7458912Z return g 2023-01-11T21:41:23.7459032Z call = _wrap_func(module.call_10) 2023-01-11T21:41:23.7459038Z 2023-01-11T21:41:23.7459044Z 2023-01-11T21:41:23.7459143Z if __name__ == "__main__": 2023-01-11T21:41:23.7459289Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7459448Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7459738Z primals_1 = rand_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7460027Z tangents_1 = rand_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7460194Z print_performance(lambda: call([primals_1, tangents_1])) 2023-01-11T21:41:23.7460201Z 2023-01-11T21:41:23.7460597Z [2023-01-11 21:29:58,620] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling BACKWARDS graph 162 
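Each generated call_* wrapper in this run loads its ahead-of-time-compiled kernel the same way: dlopen the cached .so, resolve the exported "kernel" symbol with dlsym, and invoke it through a plain function pointer. The sketch below shows that pattern for the elementwise-multiply kernel of graph 162; the .so path is a placeholder rather than the actual cache entry used by call_9/call_10, and the asserts mirror the error handling in the wrappers above.

```cpp
// dlopen_kernel_sketch.cpp -- the .so path below is a placeholder, not the
// actual torchinductor cache entry referenced by the wrappers above.
#include <dlfcn.h>
#include <cassert>
#include <cstdio>
#include <vector>

int main() {
    // Open the ahead-of-time compiled kernel from the inductor cache directory.
    void* lib = dlopen("/tmp/torchinductor_jenkins/placeholder_kernel.so", RTLD_NOW);
    assert(lib != nullptr);

    // The generated wrappers resolve a single exported C symbol, "kernel",
    // using the same *(void **) idiom seen in call_9/call_10.
    void (*kernel)(const float*, const float*, float*) = nullptr;
    *(void **)(&kernel) = dlsym(lib, "kernel");
    assert(kernel != nullptr);

    // out[i] = a[i] * b[i], the fused elementwise multiply compiled for graph 162.
    std::vector<float> a(100000, 2.0f), b(100000, 3.0f), out(100000);
    kernel(a.data(), b.data(), out.data());
    std::printf("out[0] = %f\n", out[0]);

    dlclose(lib);
    return 0;
}
```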
2023-01-11T21:41:23.7460978Z [2023-01-11 21:29:58,741] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 163 2023-01-11T21:41:23.7461349Z [2023-01-11 21:29:58,743] torch._inductor.lowering: [WARNING] using triton random, expect difference from eager 2023-01-11T21:41:23.7461702Z [2023-01-11 21:29:58,745] torch._inductor.graph: [DEBUG] Set _can_use_cpp_wrapper to False due to Constants 2023-01-11T21:41:23.7462070Z [2023-01-11 21:30:00,349] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 163 2023-01-11T21:41:23.7462621Z [2023-01-11 21:30:00,352] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling BACKWARDS graph 163 2023-01-11T21:41:23.7462640Z 2023-01-11T21:41:23.7462769Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7462864Z import torch 2023-01-11T21:41:23.7462964Z import random 2023-01-11T21:41:23.7463120Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7463311Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7463318Z 2023-01-11T21:41:23.7463433Z aten = torch.ops.aten 2023-01-11T21:41:23.7463642Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7463775Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7464029Z seed_cpu_None = None # 9130db9322feaa41c28986790b86d7dd047e77339ff46fce775dbaa5929b26ce 2023-01-11T21:41:23.7464038Z 2023-01-11T21:41:23.7464043Z 2023-01-11T21:41:23.7464250Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7464554Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7464714Z extern "C" void kernel(const long* __restrict__ seed0, 2023-01-11T21:41:23.7465049Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.7465211Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.7465353Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7465443Z { 2023-01-11T21:41:23.7465581Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7465663Z { 2023-01-11T21:41:23.7465751Z #pragma omp for 2023-01-11T21:41:23.7465872Z for(long i0=0; i0<100000; i0+=1) 2023-01-11T21:41:23.7465960Z { 2023-01-11T21:41:23.7466048Z { 2023-01-11T21:41:23.7466137Z { 2023-01-11T21:41:23.7466255Z auto tmp0 = seed0[0]; 2023-01-11T21:41:23.7466380Z auto tmp6 = in_ptr1[i0]; 2023-01-11T21:41:23.7466487Z auto tmp7 = in_ptr2[i0]; 2023-01-11T21:41:23.7466623Z auto tmp1 = static_cast(i0); 2023-01-11T21:41:23.7466898Z auto tmp2 = static_cast(normalized_rand_cpu(tmp0, tmp1));; 2023-01-11T21:41:23.7467049Z auto tmp3 = static_cast(0.33); 2023-01-11T21:41:23.7467176Z auto tmp4 = tmp2 > tmp3; 2023-01-11T21:41:23.7467317Z auto tmp5 = static_cast(tmp4); 2023-01-11T21:41:23.7467438Z auto tmp8 = tmp6 * tmp7; 2023-01-11T21:41:23.7467546Z auto tmp9 = tmp5 * tmp8; 2023-01-11T21:41:23.7467708Z auto tmp10 = static_cast(1.492537313432836); 2023-01-11T21:41:23.7467841Z auto tmp11 = tmp9 * tmp10; 2023-01-11T21:41:23.7467963Z out_ptr0[i0] = tmp11; 2023-01-11T21:41:23.7468058Z } 2023-01-11T21:41:23.7468145Z } 2023-01-11T21:41:23.7468233Z } 2023-01-11T21:41:23.7468302Z } 2023-01-11T21:41:23.7468379Z } 2023-01-11T21:41:23.7468513Z ''') 2023-01-11T21:41:23.7468523Z 2023-01-11T21:41:23.7468528Z 2023-01-11T21:41:23.7468655Z async_compile.wait(globals()) 2023-01-11T21:41:23.7468768Z del async_compile 2023-01-11T21:41:23.7468775Z 2023-01-11T21:41:23.7468872Z def call(args): 2023-01-11T21:41:23.7468991Z primals_1, primals_2 = args 2023-01-11T21:41:23.7469078Z args.clear() 
2023-01-11T21:41:23.7469269Z torch.randint(2**31, size=(), dtype=torch.int64, out=seed_cpu_None) 2023-01-11T21:41:23.7469574Z buf0 = empty_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7469866Z kernel_cpp_0(c_void_p(seed_cpu_None.data_ptr()), c_void_p(primals_1.data_ptr()), c_void_p(primals_2.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.7469964Z del primals_2 2023-01-11T21:41:23.7470124Z return (buf0, primals_1, seed_cpu_None.clone(), ) 2023-01-11T21:41:23.7470132Z 2023-01-11T21:41:23.7470137Z 2023-01-11T21:41:23.7470241Z if __name__ == "__main__": 2023-01-11T21:41:23.7470392Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7470546Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7470834Z seed_cpu_None = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7471118Z primals_1 = rand_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7471396Z primals_2 = rand_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7471567Z print_performance(lambda: call([primals_1, primals_2])) 2023-01-11T21:41:23.7471574Z 2023-01-11T21:41:23.7471580Z 2023-01-11T21:41:23.7471707Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7471793Z import torch 2023-01-11T21:41:23.7471886Z import random 2023-01-11T21:41:23.7472033Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7472194Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7472201Z 2023-01-11T21:41:23.7472303Z aten = torch.ops.aten 2023-01-11T21:41:23.7472480Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7472598Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7472690Z 2023-01-11T21:41:23.7472696Z 2023-01-11T21:41:23.7472883Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7473149Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7473303Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.7473459Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.7473596Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.7473807Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7473899Z { 2023-01-11T21:41:23.7474042Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7474130Z { 2023-01-11T21:41:23.7474253Z #pragma omp for 2023-01-11T21:41:23.7474360Z for(long i0=0; i0<100000; i0+=1) 2023-01-11T21:41:23.7474452Z { 2023-01-11T21:41:23.7474545Z { 2023-01-11T21:41:23.7474636Z { 2023-01-11T21:41:23.7474838Z auto tmp0 = in_ptr0[0]; 2023-01-11T21:41:23.7474968Z auto tmp6 = in_ptr1[i0]; 2023-01-11T21:41:23.7475093Z auto tmp10 = in_ptr2[i0]; 2023-01-11T21:41:23.7475218Z auto tmp1 = static_cast(i0); 2023-01-11T21:41:23.7475409Z auto tmp2 = static_cast(normalized_rand_cpu(tmp0, tmp1));; 2023-01-11T21:41:23.7475555Z auto tmp3 = static_cast(0.33); 2023-01-11T21:41:23.7475684Z auto tmp4 = tmp2 > tmp3; 2023-01-11T21:41:23.7475835Z auto tmp5 = static_cast(tmp4); 2023-01-11T21:41:23.7475961Z auto tmp7 = tmp5 * tmp6; 2023-01-11T21:41:23.7476109Z auto tmp8 = static_cast(1.492537313432836); 2023-01-11T21:41:23.7476234Z auto tmp9 = tmp7 * tmp8; 2023-01-11T21:41:23.7476338Z auto tmp11 = tmp9 * tmp10; 2023-01-11T21:41:23.7476446Z out_ptr0[i0] = tmp11; 2023-01-11T21:41:23.7476540Z } 2023-01-11T21:41:23.7476626Z } 2023-01-11T21:41:23.7476707Z } 2023-01-11T21:41:23.7476784Z } 2023-01-11T21:41:23.7476856Z } 
2023-01-11T21:41:23.7476984Z ''') 2023-01-11T21:41:23.7476992Z 2023-01-11T21:41:23.7477118Z async_compile.wait(globals()) 2023-01-11T21:41:23.7477221Z del async_compile 2023-01-11T21:41:23.7477381Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7477482Z wrapper = ( 2023-01-11T21:41:23.7477598Z ''' 2023-01-11T21:41:23.7477697Z #include 2023-01-11T21:41:23.7477805Z #include 2023-01-11T21:41:23.7478004Z std::vector call_11(std::vector args) { 2023-01-11T21:41:23.7478164Z at::Tensor primals_1, philox_seed_like, tangents_1; 2023-01-11T21:41:23.7478266Z primals_1 = args[0]; 2023-01-11T21:41:23.7478381Z philox_seed_like = args[1]; 2023-01-11T21:41:23.7478484Z tangents_1 = args[2]; 2023-01-11T21:41:23.7478653Z auto buf0 = at::empty_strided({100000, }, {1, }, at::ScalarType::Float); 2023-01-11T21:41:23.7479025Z auto kernel_cpp_0_lib = dlopen("/tmp/torchinductor_jenkins/lq/clqibsqxpldu4echxzc3z6phb5szhugslpae44irpgfrltvzbhob.so", RTLD_NOW); 2023-01-11T21:41:23.7479176Z assert(kernel_cpp_0_lib != nullptr); 2023-01-11T21:41:23.7479388Z void (*kernel_cpp_0)(const long*,const float*,const float*,float*); 2023-01-11T21:41:23.7479576Z *(void **) (&kernel_cpp_0) = dlsym(kernel_cpp_0_lib, "kernel"); 2023-01-11T21:41:23.7479895Z kernel_cpp_0((long*)(philox_seed_like.data_ptr()), (float*)(tangents_1.data_ptr()), (float*)(primals_1.data_ptr()), (float*)(buf0.data_ptr())); 2023-01-11T21:41:23.7480019Z philox_seed_like.reset(); 2023-01-11T21:41:23.7480124Z primals_1.reset(); 2023-01-11T21:41:23.7480220Z tangents_1.reset(); 2023-01-11T21:41:23.7480508Z return std::vector({at::Tensor(), buf0}); }''' ) 2023-01-11T21:41:23.7480519Z 2023-01-11T21:41:23.7480633Z module = load_inline( 2023-01-11T21:41:23.7481130Z name='inline_extension_c7pb2bael7vzlyb6qu32uoilu6pwiy3hiolshidh4zeg5hqyeqvp', 2023-01-11T21:41:23.7481260Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7481432Z functions=['call_11'], 2023-01-11T21:41:23.7482007Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7482242Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7483205Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7483215Z 2023-01-11T21:41:23.7483317Z def _wrap_func(f): 2023-01-11T21:41:23.7483422Z def g(args): 2023-01-11T21:41:23.7483533Z return f(args) 2023-01-11T21:41:23.7483621Z return g 2023-01-11T21:41:23.7483843Z call = _wrap_func(module.call_11) 2023-01-11T21:41:23.7483854Z 2023-01-11T21:41:23.7483861Z 2023-01-11T21:41:23.7483977Z if __name__ == "__main__": 2023-01-11T21:41:23.7484158Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7484342Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7484655Z primals_1 = rand_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7484968Z philox_seed_like = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7485287Z tangents_1 = rand_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7485518Z print_performance(lambda: call([primals_1, philox_seed_like, tangents_1])) 2023-01-11T21:41:23.7485526Z 2023-01-11T21:41:23.7485962Z [2023-01-11 21:30:21,467] 
torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling BACKWARDS graph 163 2023-01-11T21:41:23.7486651Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7486838Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7487244Z [2023-01-11 21:30:21,493] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 164 2023-01-11T21:41:23.7487252Z 2023-01-11T21:41:23.7487392Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7487500Z import torch 2023-01-11T21:41:23.7487590Z import random 2023-01-11T21:41:23.7487770Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7487961Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7487969Z 2023-01-11T21:41:23.7488090Z aten = torch.ops.aten 2023-01-11T21:41:23.7488304Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7488448Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7488454Z 2023-01-11T21:41:23.7488593Z async_compile.wait(globals()) 2023-01-11T21:41:23.7488690Z del async_compile 2023-01-11T21:41:23.7488875Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7488981Z wrapper = ( 2023-01-11T21:41:23.7489110Z ''' 2023-01-11T21:41:23.7489215Z #include 2023-01-11T21:41:23.7489326Z #include 2023-01-11T21:41:23.7489505Z at::Tensor call_12(std::vector args) { 2023-01-11T21:41:23.7489627Z at::Tensor arg0_1, arg1_1; 2023-01-11T21:41:23.7489737Z arg0_1 = args[0]; 2023-01-11T21:41:23.7489848Z arg1_1 = args[1]; 2023-01-11T21:41:23.7490051Z auto buf0 = at::empty_strided({32, 32}, {32, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7490233Z at::mm_out(buf0, arg0_1, at::as_strided(arg1_1, {32, 32}, {32, 1})); 2023-01-11T21:41:23.7490343Z arg0_1.reset(); 2023-01-11T21:41:23.7490528Z arg1_1.reset(); 2023-01-11T21:41:23.7490675Z return buf0; }''' ) 2023-01-11T21:41:23.7490685Z 2023-01-11T21:41:23.7490791Z module = load_inline( 2023-01-11T21:41:23.7491186Z name='inline_extension_cbk5vdefanataz2xcfa5gu6egrblcazbommb7bzzso26owifwayb', 2023-01-11T21:41:23.7491316Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7491480Z functions=['call_12'], 2023-01-11T21:41:23.7492040Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7492265Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7493203Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7493219Z 2023-01-11T21:41:23.7493429Z def _wrap_func(f): 2023-01-11T21:41:23.7493524Z def g(args): 2023-01-11T21:41:23.7493643Z return f(args) 2023-01-11T21:41:23.7493737Z return g 2023-01-11T21:41:23.7493858Z call = _wrap_func(module.call_12) 2023-01-11T21:41:23.7493865Z 2023-01-11T21:41:23.7493871Z 2023-01-11T21:41:23.7493975Z if __name__ == "__main__": 2023-01-11T21:41:23.7494132Z from torch._dynamo.testing import rand_strided 
2023-01-11T21:41:23.7494312Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7494600Z arg0_1 = rand_strided((32, 32), (1, 32), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7494910Z arg1_1 = rand_strided((32, 1, 32), (32, 1024, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7495067Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7495075Z 2023-01-11T21:41:23.7495479Z [2023-01-11 21:30:22,840] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 164 2023-01-11T21:41:23.7495841Z STAGE:2023-01-11 21:30:22 1454:1454 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:41:23.7496209Z [2023-01-11 21:30:22,853] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 165 2023-01-11T21:41:23.7496592Z [2023-01-11 21:30:22,854] torch._inductor.graph: [DEBUG] Set _can_use_cpp_wrapper to False due to profiler not supported 2023-01-11T21:41:23.7496966Z [2023-01-11 21:30:24,436] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 165 2023-01-11T21:41:23.7497318Z STAGE:2023-01-11 21:30:24 1454:1454 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:41:23.7497678Z STAGE:2023-01-11 21:30:24 1454:1454 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:41:23.7498275Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7498454Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7498839Z [2023-01-11 21:30:24,458] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 166 2023-01-11T21:41:23.7498847Z 2023-01-11T21:41:23.7498989Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7499103Z import torch 2023-01-11T21:41:23.7499210Z import random 2023-01-11T21:41:23.7499388Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7499563Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7499571Z 2023-01-11T21:41:23.7499683Z aten = torch.ops.aten 2023-01-11T21:41:23.7499859Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7499991Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7500085Z 2023-01-11T21:41:23.7500095Z 2023-01-11T21:41:23.7500298Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7500582Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7500739Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7500887Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.7501032Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7501103Z { 2023-01-11T21:41:23.7501242Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7501321Z { 2023-01-11T21:41:23.7501423Z #pragma omp for 2023-01-11T21:41:23.7501530Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.7501619Z { 2023-01-11T21:41:23.7501803Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.7501988Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.7502089Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7502276Z 
tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.7502504Z } 2023-01-11T21:41:23.7502642Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7502756Z for(long i0=96; i0<100; i0+=1) 2023-01-11T21:41:23.7502837Z { 2023-01-11T21:41:23.7502940Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7503047Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.7503161Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7503272Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.7503356Z } 2023-01-11T21:41:23.7503434Z } 2023-01-11T21:41:23.7503510Z } 2023-01-11T21:41:23.7503621Z ''') 2023-01-11T21:41:23.7503629Z 2023-01-11T21:41:23.7503650Z 2023-01-11T21:41:23.7503756Z async_compile.wait(globals()) 2023-01-11T21:41:23.7503858Z del async_compile 2023-01-11T21:41:23.7503865Z 2023-01-11T21:41:23.7503963Z def call(args): 2023-01-11T21:41:23.7504057Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.7504155Z args.clear() 2023-01-11T21:41:23.7504306Z from torch.profiler import record_function 2023-01-11T21:41:23.7504533Z with record_function('inductor_wrapper_call'): 2023-01-11T21:41:23.7504809Z buf0 = empty_strided((100, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7505041Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.7505138Z del arg0_1 2023-01-11T21:41:23.7505233Z del arg1_1 2023-01-11T21:41:23.7505335Z return (buf0, ) 2023-01-11T21:41:23.7505342Z 2023-01-11T21:41:23.7505349Z 2023-01-11T21:41:23.7505451Z if __name__ == "__main__": 2023-01-11T21:41:23.7505607Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7505764Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7506044Z arg0_1 = rand_strided((100, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7506325Z arg1_1 = rand_strided((100, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7506493Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7506500Z 2023-01-11T21:41:23.7506506Z 2023-01-11T21:41:23.7506636Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7506732Z import torch 2023-01-11T21:41:23.7506829Z import random 2023-01-11T21:41:23.7506991Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7507143Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7507166Z 2023-01-11T21:41:23.7507257Z aten = torch.ops.aten 2023-01-11T21:41:23.7507441Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7507570Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7507576Z 2023-01-11T21:41:23.7507581Z 2023-01-11T21:41:23.7507761Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7508039Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7508211Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7508464Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7508577Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.7508699Z float* __restrict__ out_ptr2, 2023-01-11T21:41:23.7508833Z long* __restrict__ out_ptr3, 2023-01-11T21:41:23.7508963Z long* __restrict__ out_ptr4) 2023-01-11T21:41:23.7509047Z { 2023-01-11T21:41:23.7509131Z { 2023-01-11T21:41:23.7509217Z { 2023-01-11T21:41:23.7509317Z float tmp1 = 0; 2023-01-11T21:41:23.7509649Z float tmp2 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.7509816Z float tmp3 = std::numeric_limits::infinity(); 2023-01-11T21:41:23.7509977Z struct IndexValue_7 {size_t index; float value;}; 
2023-01-11T21:41:23.7510302Z IndexValue_7 tmp4{0, -std::numeric_limits::infinity()}; 2023-01-11T21:41:23.7510576Z #pragma omp declare reduction(argmax : struct IndexValue_7 :\ 2023-01-11T21:41:23.7510790Z omp_out.value = omp_in.value < omp_out.value ? omp_out.value : omp_in.value,\ 2023-01-11T21:41:23.7510993Z omp_out.index = omp_in.value < omp_out.value ? omp_out.index : omp_in.index)\ 2023-01-11T21:41:23.7511339Z initializer(omp_priv = {0, -std::numeric_limits::infinity()}) 2023-01-11T21:41:23.7511475Z struct IndexValue_8 {size_t index; float value;}; 2023-01-11T21:41:23.7511644Z IndexValue_8 tmp5{0, std::numeric_limits::infinity()}; 2023-01-11T21:41:23.7511857Z #pragma omp declare reduction(argmin : struct IndexValue_8 :\ 2023-01-11T21:41:23.7512085Z omp_out.value = omp_in.value > omp_out.value ? omp_out.value : omp_in.value,\ 2023-01-11T21:41:23.7512308Z omp_out.index = omp_in.value > omp_out.value ? omp_out.index : omp_in.index)\ 2023-01-11T21:41:23.7512527Z initializer(omp_priv = {0, std::numeric_limits::infinity()}) 2023-01-11T21:41:23.7512661Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.7512762Z { 2023-01-11T21:41:23.7512845Z { 2023-01-11T21:41:23.7512984Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7513096Z tmp1 += tmp0; 2023-01-11T21:41:23.7513240Z tmp2 = std::max(tmp2, tmp0); 2023-01-11T21:41:23.7513375Z tmp3 = std::min(tmp3, tmp0); 2023-01-11T21:41:23.7513504Z if (tmp4.value < tmp0) { 2023-01-11T21:41:23.7513653Z tmp4.index = i0; tmp4.value = tmp0; 2023-01-11T21:41:23.7513806Z } 2023-01-11T21:41:23.7513938Z if (tmp5.value > tmp0) { 2023-01-11T21:41:23.7514082Z tmp5.index = i0; tmp5.value = tmp0; 2023-01-11T21:41:23.7514175Z } 2023-01-11T21:41:23.7514265Z } 2023-01-11T21:41:23.7514351Z } 2023-01-11T21:41:23.7514463Z out_ptr0[0] = tmp1; 2023-01-11T21:41:23.7514554Z out_ptr1[0] = tmp2; 2023-01-11T21:41:23.7514656Z out_ptr2[0] = tmp3; 2023-01-11T21:41:23.7514769Z out_ptr3[0] = tmp4.index; 2023-01-11T21:41:23.7514876Z out_ptr4[0] = tmp5.index; 2023-01-11T21:41:23.7514962Z } 2023-01-11T21:41:23.7515045Z } 2023-01-11T21:41:23.7515110Z } 2023-01-11T21:41:23.7515240Z ''') 2023-01-11T21:41:23.7515247Z 2023-01-11T21:41:23.7515367Z async_compile.wait(globals()) 2023-01-11T21:41:23.7515466Z del async_compile 2023-01-11T21:41:23.7515626Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7515722Z wrapper = ( 2023-01-11T21:41:23.7515832Z ''' 2023-01-11T21:41:23.7515923Z #include 2023-01-11T21:41:23.7516021Z #include 2023-01-11T21:41:23.7516204Z std::vector call_13(std::vector args) { 2023-01-11T21:41:23.7516324Z at::Tensor arg0_1; 2023-01-11T21:41:23.7516516Z arg0_1 = args[0]; 2023-01-11T21:41:23.7516712Z auto buf0 = at::empty_strided({}, {}, at::ScalarType::Float); 2023-01-11T21:41:23.7516907Z auto buf1 = at::empty_strided({}, {}, at::ScalarType::Float); 2023-01-11T21:41:23.7517096Z auto buf2 = at::empty_strided({}, {}, at::ScalarType::Float); 2023-01-11T21:41:23.7517266Z auto buf3 = at::empty_strided({}, {}, at::ScalarType::Long); 2023-01-11T21:41:23.7517452Z auto buf4 = at::empty_strided({}, {}, at::ScalarType::Long); 2023-01-11T21:41:23.7517846Z auto kernel_cpp_0_lib = dlopen("/tmp/torchinductor_jenkins/5l/c5l5a67pncykxc4k6lcjmgu7elavbipctmegtckslekz4senmd6h.so", RTLD_NOW); 2023-01-11T21:41:23.7517983Z assert(kernel_cpp_0_lib != nullptr); 2023-01-11T21:41:23.7518175Z void (*kernel_cpp_0)(const float*,float*,float*,float*,long*,long*); 2023-01-11T21:41:23.7518357Z *(void **) (&kernel_cpp_0) = dlsym(kernel_cpp_0_lib, "kernel"); 2023-01-11T21:41:23.7518756Z 
kernel_cpp_0((float*)(arg0_1.data_ptr()), (float*)(buf0.data_ptr()), (float*)(buf1.data_ptr()), (float*)(buf2.data_ptr()), (long*)(buf3.data_ptr()), (long*)(buf4.data_ptr())); 2023-01-11T21:41:23.7518864Z arg0_1.reset(); 2023-01-11T21:41:23.7519143Z return std::vector({buf0, buf1, buf2, buf3, buf4}); }''' ) 2023-01-11T21:41:23.7519165Z 2023-01-11T21:41:23.7519259Z module = load_inline( 2023-01-11T21:41:23.7519666Z name='inline_extension_cnves6oopp5lj6abdi7pghxpzsce42tujpsuwbgsp347d4fyecyk', 2023-01-11T21:41:23.7519795Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7519961Z functions=['call_13'], 2023-01-11T21:41:23.7520479Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7520698Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7521689Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7521706Z 2023-01-11T21:41:23.7521829Z def _wrap_func(f): 2023-01-11T21:41:23.7521918Z def g(args): 2023-01-11T21:41:23.7522029Z return f(args) 2023-01-11T21:41:23.7522137Z return g 2023-01-11T21:41:23.7522279Z call = _wrap_func(module.call_13) 2023-01-11T21:41:23.7522288Z 2023-01-11T21:41:23.7522294Z 2023-01-11T21:41:23.7522397Z if __name__ == "__main__": 2023-01-11T21:41:23.7522552Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7522725Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7523027Z arg0_1 = rand_strided((3, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7523158Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.7523164Z 2023-01-11T21:41:23.7523562Z [2023-01-11 21:30:46,345] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 166 2023-01-11T21:41:23.7524228Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7524426Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7524813Z [2023-01-11 21:30:46,370] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 167 2023-01-11T21:41:23.7524820Z 2023-01-11T21:41:23.7524952Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7525053Z import torch 2023-01-11T21:41:23.7525160Z import random 2023-01-11T21:41:23.7525327Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7525490Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7525579Z 2023-01-11T21:41:23.7525706Z aten = torch.ops.aten 2023-01-11T21:41:23.7525894Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7526024Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7526032Z 2023-01-11T21:41:23.7526039Z 2023-01-11T21:41:23.7526239Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7526534Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7526707Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7526854Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.7526977Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7527113Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.7527199Z { 2023-01-11T21:41:23.7527350Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7527444Z { 2023-01-11T21:41:23.7527560Z #pragma omp for 2023-01-11T21:41:23.7527739Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.7527816Z { 2023-01-11T21:41:23.7528010Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.7528194Z auto tmp2 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.7528382Z auto tmp1 = at::vec::clamp_min(tmp0, decltype(tmp0)(0)); 2023-01-11T21:41:23.7528497Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:23.7528676Z auto tmp4 = at::vec::clamp_min(tmp3, decltype(tmp3)(0)); 2023-01-11T21:41:23.7528862Z auto tmp5 = at::vec::Vectorized(static_cast(10)); 2023-01-11T21:41:23.7528982Z auto tmp6 = tmp4 / tmp5; 2023-01-11T21:41:23.7529090Z tmp1.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.7529205Z tmp6.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.7529296Z } 2023-01-11T21:41:23.7529429Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7529545Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.7529643Z { 2023-01-11T21:41:23.7529762Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7529864Z auto tmp2 = in_ptr1[i0]; 2023-01-11T21:41:23.7529994Z auto tmp1 = tmp0 * (tmp0>0); 2023-01-11T21:41:23.7530115Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:23.7530247Z auto tmp4 = tmp3 * (tmp3>0); 2023-01-11T21:41:23.7530397Z auto tmp5 = static_cast(10); 2023-01-11T21:41:23.7530510Z auto tmp6 = tmp4 / tmp5; 2023-01-11T21:41:23.7530627Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.7530729Z out_ptr1[i0] = tmp6; 2023-01-11T21:41:23.7530816Z } 2023-01-11T21:41:23.7530899Z } 2023-01-11T21:41:23.7530984Z } 2023-01-11T21:41:23.7531115Z ''') 2023-01-11T21:41:23.7531124Z 2023-01-11T21:41:23.7531260Z async_compile.wait(globals()) 2023-01-11T21:41:23.7531368Z del async_compile 2023-01-11T21:41:23.7531532Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7531649Z wrapper = ( 2023-01-11T21:41:23.7531774Z ''' 2023-01-11T21:41:23.7531889Z #include 2023-01-11T21:41:23.7532003Z #include 2023-01-11T21:41:23.7532200Z std::vector call_14(std::vector 
args) { 2023-01-11T21:41:23.7532298Z at::Tensor arg0_1, arg1_1; 2023-01-11T21:41:23.7532406Z arg0_1 = args[0]; 2023-01-11T21:41:23.7532516Z arg1_1 = args[1]; 2023-01-11T21:41:23.7532711Z auto buf0 = at::empty_strided({8, 8}, {8, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7532910Z auto buf1 = at::empty_strided({8, 8}, {8, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7533307Z auto kernel_cpp_0_lib = dlopen("/tmp/torchinductor_jenkins/7x/c7xp2x2aeezqsrsqkwdk4izewjw4zicynazuew4aormeadx7we4w.so", RTLD_NOW); 2023-01-11T21:41:23.7533451Z assert(kernel_cpp_0_lib != nullptr); 2023-01-11T21:41:23.7533635Z void (*kernel_cpp_0)(const float*,const float*,float*,float*); 2023-01-11T21:41:23.7533802Z *(void **) (&kernel_cpp_0) = dlsym(kernel_cpp_0_lib, "kernel"); 2023-01-11T21:41:23.7534180Z kernel_cpp_0((float*)(arg0_1.data_ptr()), (float*)(arg1_1.data_ptr()), (float*)(buf0.data_ptr()), (float*)(buf1.data_ptr())); 2023-01-11T21:41:23.7534289Z arg0_1.reset(); 2023-01-11T21:41:23.7534393Z arg1_1.reset(); 2023-01-11T21:41:23.7534658Z return std::vector({buf0, buf1}); }''' ) 2023-01-11T21:41:23.7534668Z 2023-01-11T21:41:23.7534772Z module = load_inline( 2023-01-11T21:41:23.7535145Z name='inline_extension_calgsbsip6x4xs4ezhgcztxahbh3ege57zlzguw75tt24wn43c6r', 2023-01-11T21:41:23.7535260Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7535404Z functions=['call_14'], 2023-01-11T21:41:23.7535926Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7536130Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7537064Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7537081Z 2023-01-11T21:41:23.7537192Z def _wrap_func(f): 2023-01-11T21:41:23.7537289Z def g(args): 2023-01-11T21:41:23.7537397Z return f(args) 2023-01-11T21:41:23.7537490Z return g 2023-01-11T21:41:23.7537617Z call = _wrap_func(module.call_14) 2023-01-11T21:41:23.7537640Z 2023-01-11T21:41:23.7537648Z 2023-01-11T21:41:23.7537741Z if __name__ == "__main__": 2023-01-11T21:41:23.7537909Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7538090Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7538389Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7538677Z arg1_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7538845Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7538854Z 2023-01-11T21:41:23.7539239Z [2023-01-11 21:31:07,805] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 167 2023-01-11T21:41:23.7539865Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7540041Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7540415Z [2023-01-11 21:31:07,822] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 168 2023-01-11T21:41:23.7540440Z 2023-01-11T21:41:23.7540560Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7540659Z import torch 2023-01-11T21:41:23.7540759Z import random 2023-01-11T21:41:23.7540916Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7541082Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7541090Z 2023-01-11T21:41:23.7541201Z aten = torch.ops.aten 2023-01-11T21:41:23.7541391Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7541510Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7541518Z 2023-01-11T21:41:23.7541524Z 2023-01-11T21:41:23.7541720Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7542002Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7542171Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7542302Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7542529Z { 2023-01-11T21:41:23.7542658Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7542720Z { 2023-01-11T21:41:23.7542927Z #pragma omp for 2023-01-11T21:41:23.7543030Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.7543114Z { 2023-01-11T21:41:23.7543291Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.7543468Z auto tmp1 = decltype(tmp0)(1)/(decltype(tmp0)(1) + tmp0.neg().exp()); 2023-01-11T21:41:23.7543576Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.7543688Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.7543755Z } 2023-01-11T21:41:23.7543874Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7543985Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.7544065Z { 2023-01-11T21:41:23.7544181Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7544407Z auto tmp1 = std::exp(-tmp0); 2023-01-11T21:41:23.7544517Z auto tmp2 = 1 / (1 + tmp1); 2023-01-11T21:41:23.7544610Z auto tmp3 = tmp0 * tmp2; 2023-01-11T21:41:23.7544714Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:23.7544878Z } 2023-01-11T21:41:23.7544956Z } 2023-01-11T21:41:23.7545039Z } 2023-01-11T21:41:23.7545153Z ''') 2023-01-11T21:41:23.7545161Z 2023-01-11T21:41:23.7545271Z async_compile.wait(globals()) 2023-01-11T21:41:23.7545373Z del async_compile 2023-01-11T21:41:23.7545543Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7545640Z wrapper = ( 2023-01-11T21:41:23.7545749Z ''' 2023-01-11T21:41:23.7545849Z #include 2023-01-11T21:41:23.7545949Z #include 2023-01-11T21:41:23.7546098Z at::Tensor call_15(std::vector args) { 2023-01-11T21:41:23.7546209Z at::Tensor arg0_1; 2023-01-11T21:41:23.7546309Z arg0_1 = args[0]; 2023-01-11T21:41:23.7546481Z auto buf0 = at::empty_strided({8, 8}, {8, 1}, at::ScalarType::Float); 2023-01-11T21:41:23.7546827Z auto kernel_cpp_0_lib = dlopen("/tmp/torchinductor_jenkins/35/c35igtmj5ncrr7wfhg5mi3p27d5wx6jrpmulflkkuspmmqmzjoe5.so", RTLD_NOW); 2023-01-11T21:41:23.7546972Z assert(kernel_cpp_0_lib != nullptr); 2023-01-11T21:41:23.7547107Z void (*kernel_cpp_0)(const float*,float*); 2023-01-11T21:41:23.7547266Z *(void **) (&kernel_cpp_0) = dlsym(kernel_cpp_0_lib, "kernel"); 2023-01-11T21:41:23.7547420Z kernel_cpp_0((float*)(arg0_1.data_ptr()), (float*)(buf0.data_ptr())); 
2023-01-11T21:41:23.7547510Z arg0_1.reset(); 2023-01-11T21:41:23.7547658Z return buf0; }''' ) 2023-01-11T21:41:23.7547666Z 2023-01-11T21:41:23.7547765Z module = load_inline( 2023-01-11T21:41:23.7548109Z name='inline_extension_cjm5ytgzhvhh3tc653lkgnoh5sagh4gzhwzzhzx6xddloix3de7v', 2023-01-11T21:41:23.7548218Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7548373Z functions=['call_15'], 2023-01-11T21:41:23.7548925Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7549146Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7550018Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7550032Z 2023-01-11T21:41:23.7550139Z def _wrap_func(f): 2023-01-11T21:41:23.7550231Z def g(args): 2023-01-11T21:41:23.7550334Z return f(args) 2023-01-11T21:41:23.7550425Z return g 2023-01-11T21:41:23.7550541Z call = _wrap_func(module.call_15) 2023-01-11T21:41:23.7550549Z 2023-01-11T21:41:23.7550554Z 2023-01-11T21:41:23.7550659Z if __name__ == "__main__": 2023-01-11T21:41:23.7550797Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7550961Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7551242Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7551391Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.7551482Z 2023-01-11T21:41:23.7551887Z [2023-01-11 21:31:28,637] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 168 2023-01-11T21:41:23.7552491Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
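The graph 168 kernel above is the x * sigmoid(x) (SiLU) pattern: the vectorized path computes decltype(tmp0)(1)/(1 + (-x).exp()), and the scalar tail uses std::exp. A plain scalar C++ sketch of the same computation, for reference only:

    // Scalar sketch of the SiLU kernel body: out[i] = x[i] * sigmoid(x[i]).
    #include <cmath>
    #include <cstdio>

    static void silu(const float* in, float* out, long n) {
        for (long i = 0; i < n; ++i) {
            float s = 1.0f / (1.0f + std::exp(-in[i]));  // sigmoid
            out[i] = in[i] * s;                          // x * sigmoid(x)
        }
    }

    int main() {
        float x[4] = {-2.0f, -0.5f, 0.5f, 2.0f};
        float y[4];
        silu(x, y, 4);
        for (float v : y) std::printf("%f\n", v);
        return 0;
    }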
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7552665Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7553068Z [2023-01-11 21:31:28,669] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 169 2023-01-11T21:41:23.7553076Z 2023-01-11T21:41:23.7553223Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7553314Z import torch 2023-01-11T21:41:23.7553422Z import random 2023-01-11T21:41:23.7553704Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7553963Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7553972Z 2023-01-11T21:41:23.7554085Z aten = torch.ops.aten 2023-01-11T21:41:23.7554290Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7554427Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7554437Z 2023-01-11T21:41:23.7554443Z 2023-01-11T21:41:23.7554651Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7554934Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7555121Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7555279Z double* __restrict__ out_ptr0, 2023-01-11T21:41:23.7555423Z double* __restrict__ out_ptr1, 2023-01-11T21:41:23.7555570Z double* __restrict__ out_ptr2) 2023-01-11T21:41:23.7555667Z { 2023-01-11T21:41:23.7555812Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7555898Z { 2023-01-11T21:41:23.7556016Z #pragma omp for 2023-01-11T21:41:23.7556141Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.7556244Z { 2023-01-11T21:41:23.7556352Z { 2023-01-11T21:41:23.7556455Z { 2023-01-11T21:41:23.7556580Z double tmp2 = 0; 2023-01-11T21:41:23.7556700Z for(long i1=0; i1<32; i1+=1) 2023-01-11T21:41:23.7556796Z { 2023-01-11T21:41:23.7556898Z { 2023-01-11T21:41:23.7557054Z auto tmp0 = in_ptr0[i1 + (32*i0)]; 2023-01-11T21:41:23.7557212Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.7557324Z tmp2 += tmp1; 2023-01-11T21:41:23.7557421Z } 2023-01-11T21:41:23.7557510Z } 2023-01-11T21:41:23.7557631Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.7557722Z } 2023-01-11T21:41:23.7557812Z } 2023-01-11T21:41:23.7557903Z } 2023-01-11T21:41:23.7557992Z } 2023-01-11T21:41:23.7558076Z { 2023-01-11T21:41:23.7558144Z { 2023-01-11T21:41:23.7558258Z double tmp2 = 0; 2023-01-11T21:41:23.7558414Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7558514Z { 2023-01-11T21:41:23.7558675Z #pragma omp for reduction(+:tmp2) 2023-01-11T21:41:23.7558806Z for(long i0=0; i0<1024; i0+=1) 2023-01-11T21:41:23.7558891Z { 2023-01-11T21:41:23.7558984Z { 2023-01-11T21:41:23.7559130Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7559288Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.7559399Z tmp2 += tmp1; 2023-01-11T21:41:23.7559489Z } 2023-01-11T21:41:23.7559574Z } 2023-01-11T21:41:23.7559643Z } 2023-01-11T21:41:23.7559856Z out_ptr1[0] = tmp2; 2023-01-11T21:41:23.7559939Z } 2023-01-11T21:41:23.7560024Z } 2023-01-11T21:41:23.7560174Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7560258Z { 2023-01-11T21:41:23.7560368Z #pragma omp for 2023-01-11T21:41:23.7560478Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.7560565Z { 2023-01-11T21:41:23.7560673Z #pragma GCC ivdep 2023-01-11T21:41:23.7560784Z for(long i1=0; i1<32; i1+=1) 2023-01-11T21:41:23.7560869Z { 2023-01-11T21:41:23.7560958Z { 2023-01-11T21:41:23.7561034Z { 2023-01-11T21:41:23.7561171Z auto tmp0 = in_ptr0[i1 + (32*i0)]; 
2023-01-11T21:41:23.7561310Z auto tmp2 = out_ptr0[i1]; 2023-01-11T21:41:23.7561445Z auto tmp4 = out_ptr1[0]; 2023-01-11T21:41:23.7561596Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.7561794Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.7561937Z auto tmp5 = tmp3 + tmp4; 2023-01-11T21:41:23.7562077Z out_ptr2[i1 + (32*i0)] = tmp5; 2023-01-11T21:41:23.7562161Z } 2023-01-11T21:41:23.7562264Z } 2023-01-11T21:41:23.7562360Z } 2023-01-11T21:41:23.7562446Z } 2023-01-11T21:41:23.7562538Z } 2023-01-11T21:41:23.7562623Z } 2023-01-11T21:41:23.7562749Z ''') 2023-01-11T21:41:23.7562758Z 2023-01-11T21:41:23.7562889Z async_compile.wait(globals()) 2023-01-11T21:41:23.7562987Z del async_compile 2023-01-11T21:41:23.7563161Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7563269Z wrapper = ( 2023-01-11T21:41:23.7563385Z ''' 2023-01-11T21:41:23.7563497Z #include 2023-01-11T21:41:23.7563587Z #include 2023-01-11T21:41:23.7563756Z at::Tensor call_16(std::vector args) { 2023-01-11T21:41:23.7563853Z at::Tensor arg0_1; 2023-01-11T21:41:23.7563964Z arg0_1 = args[0]; 2023-01-11T21:41:23.7564147Z auto buf0 = at::empty_strided({32, }, {1, }, at::ScalarType::Double); 2023-01-11T21:41:23.7564314Z auto buf1 = at::empty_strided({}, {}, at::ScalarType::Double); 2023-01-11T21:41:23.7564485Z auto buf2 = at::empty_strided({32, 32}, {32, 1}, at::ScalarType::Double); 2023-01-11T21:41:23.7564822Z auto kernel_cpp_0_lib = dlopen("/tmp/torchinductor_jenkins/zh/czhbish5ijarri5zlmhjqoss3r374biyjp3mnifm6ga2y5j5hx55.so", RTLD_NOW); 2023-01-11T21:41:23.7564940Z assert(kernel_cpp_0_lib != nullptr); 2023-01-11T21:41:23.7565122Z void (*kernel_cpp_0)(const float*,double*,double*,double*); 2023-01-11T21:41:23.7565307Z *(void **) (&kernel_cpp_0) = dlsym(kernel_cpp_0_lib, "kernel"); 2023-01-11T21:41:23.7565557Z kernel_cpp_0((float*)(arg0_1.data_ptr()), (double*)(buf0.data_ptr()), (double*)(buf1.data_ptr()), (double*)(buf2.data_ptr())); 2023-01-11T21:41:23.7565665Z arg0_1.reset(); 2023-01-11T21:41:23.7565823Z return buf2; }''' ) 2023-01-11T21:41:23.7565834Z 2023-01-11T21:41:23.7565934Z module = load_inline( 2023-01-11T21:41:23.7566287Z name='inline_extension_cch76d6suiocuagkhrlcimmdoy5nhjvwl4xulurcswqzhcc7kzny', 2023-01-11T21:41:23.7566381Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7566535Z functions=['call_16'], 2023-01-11T21:41:23.7567083Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7567305Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7568205Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7568214Z 2023-01-11T21:41:23.7568319Z def _wrap_func(f): 2023-01-11T21:41:23.7568504Z def g(args): 2023-01-11T21:41:23.7568610Z return f(args) 2023-01-11T21:41:23.7568689Z return g 2023-01-11T21:41:23.7568813Z call = _wrap_func(module.call_16) 2023-01-11T21:41:23.7568820Z 2023-01-11T21:41:23.7568826Z 2023-01-11T21:41:23.7568925Z if __name__ == "__main__": 2023-01-11T21:41:23.7569089Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7569264Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7569566Z arg0_1 = rand_strided((32, 32), (32, 1), device='cpu', 
dtype=torch.float32) 2023-01-11T21:41:23.7569725Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.7569732Z 2023-01-11T21:41:23.7570116Z [2023-01-11 21:31:49,478] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 169 2023-01-11T21:41:23.7570800Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7570983Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7571340Z [2023-01-11 21:31:49,500] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 170 2023-01-11T21:41:23.7571348Z 2023-01-11T21:41:23.7571471Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7571564Z import torch 2023-01-11T21:41:23.7571656Z import random 2023-01-11T21:41:23.7571819Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7571978Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7571985Z 2023-01-11T21:41:23.7572098Z aten = torch.ops.aten 2023-01-11T21:41:23.7572275Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7572401Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7572412Z 2023-01-11T21:41:23.7572421Z 2023-01-11T21:41:23.7572615Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7572884Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7573058Z extern "C" void kernel(long* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.7573209Z const bool* __restrict__ in_ptr0, 2023-01-11T21:41:23.7573351Z long* __restrict__ out_ptr1) 2023-01-11T21:41:23.7573430Z { 2023-01-11T21:41:23.7573541Z auto out_ptr0 = in_out_ptr0; 2023-01-11T21:41:23.7573622Z { 2023-01-11T21:41:23.7573708Z { 2023-01-11T21:41:23.7573802Z long tmp2 = 0; 2023-01-11T21:41:23.7573899Z long tmp3 = 0; 2023-01-11T21:41:23.7574040Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7574120Z { 2023-01-11T21:41:23.7574285Z #pragma omp for reduction(+:tmp2) reduction(+:tmp3) 2023-01-11T21:41:23.7574405Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.7574498Z { 2023-01-11T21:41:23.7574590Z { 2023-01-11T21:41:23.7574722Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7574874Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.7574980Z tmp2 += tmp1; 2023-01-11T21:41:23.7575071Z tmp3 += tmp1; 2023-01-11T21:41:23.7575160Z } 2023-01-11T21:41:23.7575251Z } 2023-01-11T21:41:23.7575338Z } 2023-01-11T21:41:23.7575447Z out_ptr0[0] = tmp2; 2023-01-11T21:41:23.7575551Z out_ptr1[0] = tmp3; 2023-01-11T21:41:23.7575658Z } 2023-01-11T21:41:23.7575738Z } 2023-01-11T21:41:23.7575827Z { 2023-01-11T21:41:23.7575925Z { 2023-01-11T21:41:23.7576050Z auto tmp0 = out_ptr0[0]; 2023-01-11T21:41:23.7576172Z auto tmp3 = out_ptr1[0]; 2023-01-11T21:41:23.7576320Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.7576516Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.7576635Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:23.7576750Z in_out_ptr0[0] = tmp4; 2023-01-11T21:41:23.7576833Z } 2023-01-11T21:41:23.7576917Z } 2023-01-11T21:41:23.7577022Z } 2023-01-11T21:41:23.7577156Z ''') 2023-01-11T21:41:23.7577164Z 2023-01-11T21:41:23.7577277Z async_compile.wait(globals()) 2023-01-11T21:41:23.7577382Z del async_compile 
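The graph 170 kernel above counts the true elements of a bool input twice in a single pass (two reduction(+:...) clauses on one loop) and then fuses the epilogue 2*a + b into the reused output buffer. A small stand-alone sketch of the same double-reduction shape, purely illustrative:

    // Sketch: two OpenMP sum reductions over one pass, then 2*a + b,
    // mirroring the generated bool-sum kernel (illustrative only).
    // Build with: g++ -std=c++17 -fopenmp -O2 boolsum_sketch.cpp
    #include <cstdio>

    int main() {
        const long n = 64;
        bool in[n];
        for (long i = 0; i < n; ++i) in[i] = (i % 3 == 0);

        long a = 0, b = 0;
        #pragma omp parallel for reduction(+:a) reduction(+:b)
        for (long i = 0; i < n; ++i) {
            long v = static_cast<long>(in[i]);  // bool -> 0/1
            a += v;
            b += v;
        }
        long result = a * 2 + b;   // same shape as the fused epilogue in the kernel
        std::printf("a=%ld b=%ld result=%ld\n", a, b, result);
        return 0;
    }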
2023-01-11T21:41:23.7577551Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7577653Z wrapper = ( 2023-01-11T21:41:23.7577765Z ''' 2023-01-11T21:41:23.7577868Z #include 2023-01-11T21:41:23.7577969Z #include 2023-01-11T21:41:23.7578131Z at::Tensor call_17(std::vector args) { 2023-01-11T21:41:23.7578245Z at::Tensor arg0_1; 2023-01-11T21:41:23.7578354Z arg0_1 = args[0]; 2023-01-11T21:41:23.7578525Z auto buf0 = at::empty_strided({}, {}, at::ScalarType::Long); 2023-01-11T21:41:23.7578777Z auto buf1 = at::empty_strided({}, {}, at::ScalarType::Long); 2023-01-11T21:41:23.7578931Z auto buf2 = buf0; buf0.reset(); // reuse 2023-01-11T21:41:23.7579301Z auto kernel_cpp_0_lib = dlopen("/tmp/torchinductor_jenkins/7i/c7ikfrx6rninixt746snmnigf7nld7jz6nxxc6lhqudu4kpzv6cu.so", RTLD_NOW); 2023-01-11T21:41:23.7579449Z assert(kernel_cpp_0_lib != nullptr); 2023-01-11T21:41:23.7579577Z void (*kernel_cpp_0)(long*,const bool*,long*); 2023-01-11T21:41:23.7579747Z *(void **) (&kernel_cpp_0) = dlsym(kernel_cpp_0_lib, "kernel"); 2023-01-11T21:41:23.7579954Z kernel_cpp_0((long*)(buf2.data_ptr()), (bool*)(arg0_1.data_ptr()), (long*)(buf1.data_ptr())); 2023-01-11T21:41:23.7580048Z arg0_1.reset(); 2023-01-11T21:41:23.7580204Z return buf2; }''' ) 2023-01-11T21:41:23.7580213Z 2023-01-11T21:41:23.7580309Z module = load_inline( 2023-01-11T21:41:23.7580665Z name='inline_extension_cilgouchaeqshp6duams4dexs6tbjv33w7cmpjeqyecqjxdnhm6c', 2023-01-11T21:41:23.7580786Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7580955Z functions=['call_17'], 2023-01-11T21:41:23.7581477Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7581686Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7582705Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7582715Z 2023-01-11T21:41:23.7582823Z def _wrap_func(f): 2023-01-11T21:41:23.7582926Z def g(args): 2023-01-11T21:41:23.7583036Z return f(args) 2023-01-11T21:41:23.7583117Z return g 2023-01-11T21:41:23.7583251Z call = _wrap_func(module.call_17) 2023-01-11T21:41:23.7583258Z 2023-01-11T21:41:23.7583271Z 2023-01-11T21:41:23.7583386Z if __name__ == "__main__": 2023-01-11T21:41:23.7583554Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7583727Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7584017Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.7584182Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.7584190Z 2023-01-11T21:41:23.7584599Z [2023-01-11 21:32:10,515] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 170 2023-01-11T21:41:23.7585260Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7585451Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7585960Z [2023-01-11 21:32:10,535] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 171 2023-01-11T21:41:23.7585983Z 2023-01-11T21:41:23.7586103Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7586201Z import torch 2023-01-11T21:41:23.7586291Z import random 2023-01-11T21:41:23.7586440Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7586614Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7586622Z 2023-01-11T21:41:23.7586724Z aten = torch.ops.aten 2023-01-11T21:41:23.7586906Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7587014Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7587021Z 2023-01-11T21:41:23.7587026Z 2023-01-11T21:41:23.7587225Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7587490Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7587730Z extern "C" void kernel(long* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.7587889Z const unsigned char* __restrict__ in_ptr0, 2023-01-11T21:41:23.7588019Z long* __restrict__ out_ptr1) 2023-01-11T21:41:23.7588105Z { 2023-01-11T21:41:23.7588213Z auto out_ptr0 = in_out_ptr0; 2023-01-11T21:41:23.7588303Z { 2023-01-11T21:41:23.7588392Z { 2023-01-11T21:41:23.7588496Z long tmp2 = 0; 2023-01-11T21:41:23.7588608Z long tmp3 = 0; 2023-01-11T21:41:23.7588757Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7588844Z { 2023-01-11T21:41:23.7589018Z #pragma omp for reduction(+:tmp2) reduction(+:tmp3) 2023-01-11T21:41:23.7589140Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.7589240Z { 2023-01-11T21:41:23.7589337Z { 2023-01-11T21:41:23.7589468Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7589635Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.7589747Z tmp2 += tmp1; 2023-01-11T21:41:23.7589848Z tmp3 += tmp1; 2023-01-11T21:41:23.7589945Z } 2023-01-11T21:41:23.7590038Z } 2023-01-11T21:41:23.7590124Z } 2023-01-11T21:41:23.7590236Z out_ptr0[0] = tmp2; 2023-01-11T21:41:23.7590347Z out_ptr1[0] = tmp3; 2023-01-11T21:41:23.7590438Z } 2023-01-11T21:41:23.7590511Z } 2023-01-11T21:41:23.7590588Z { 2023-01-11T21:41:23.7590672Z { 2023-01-11T21:41:23.7590794Z auto tmp0 = out_ptr0[0]; 2023-01-11T21:41:23.7590913Z auto tmp3 = out_ptr1[0]; 2023-01-11T21:41:23.7591047Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.7591154Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.7591271Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:23.7591380Z in_out_ptr0[0] = tmp4; 2023-01-11T21:41:23.7591472Z } 2023-01-11T21:41:23.7591550Z } 2023-01-11T21:41:23.7591633Z } 2023-01-11T21:41:23.7591764Z ''') 2023-01-11T21:41:23.7591772Z 2023-01-11T21:41:23.7591887Z async_compile.wait(globals()) 2023-01-11T21:41:23.7591990Z del async_compile 2023-01-11T21:41:23.7592154Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7592250Z wrapper = ( 2023-01-11T21:41:23.7592362Z ''' 2023-01-11T21:41:23.7592468Z #include 2023-01-11T21:41:23.7592570Z #include 2023-01-11T21:41:23.7592724Z at::Tensor call_18(std::vector args) { 2023-01-11T21:41:23.7592825Z at::Tensor arg0_1; 2023-01-11T21:41:23.7592923Z arg0_1 = args[0]; 2023-01-11T21:41:23.7593092Z auto buf0 = at::empty_strided({}, {}, at::ScalarType::Long); 2023-01-11T21:41:23.7593255Z auto buf1 = at::empty_strided({}, {}, 
at::ScalarType::Long); 2023-01-11T21:41:23.7593387Z auto buf2 = buf0; buf0.reset(); // reuse 2023-01-11T21:41:23.7593818Z auto kernel_cpp_0_lib = dlopen("/tmp/torchinductor_jenkins/tb/ctbh34bqrqepgalem35dqwj3753otv7p7fp3h5othhadkxb5qbmw.so", RTLD_NOW); 2023-01-11T21:41:23.7594037Z assert(kernel_cpp_0_lib != nullptr); 2023-01-11T21:41:23.7594193Z void (*kernel_cpp_0)(long*,const unsigned char*,long*); 2023-01-11T21:41:23.7594363Z *(void **) (&kernel_cpp_0) = dlsym(kernel_cpp_0_lib, "kernel"); 2023-01-11T21:41:23.7594595Z kernel_cpp_0((long*)(buf2.data_ptr()), (unsigned char*)(arg0_1.data_ptr()), (long*)(buf1.data_ptr())); 2023-01-11T21:41:23.7594696Z arg0_1.reset(); 2023-01-11T21:41:23.7594845Z return buf2; }''' ) 2023-01-11T21:41:23.7594853Z 2023-01-11T21:41:23.7594945Z module = load_inline( 2023-01-11T21:41:23.7595277Z name='inline_extension_cd35e6efhhtvu2vyxgwximssv66dj3ogpttagtwmu3k3432jc22z', 2023-01-11T21:41:23.7595369Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7595511Z functions=['call_18'], 2023-01-11T21:41:23.7596160Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7596378Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7597378Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7597386Z 2023-01-11T21:41:23.7597492Z def _wrap_func(f): 2023-01-11T21:41:23.7597586Z def g(args): 2023-01-11T21:41:23.7597682Z return f(args) 2023-01-11T21:41:23.7597776Z return g 2023-01-11T21:41:23.7597886Z call = _wrap_func(module.call_18) 2023-01-11T21:41:23.7597892Z 2023-01-11T21:41:23.7597897Z 2023-01-11T21:41:23.7597996Z if __name__ == "__main__": 2023-01-11T21:41:23.7598159Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7598343Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7598640Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.uint8) 2023-01-11T21:41:23.7598789Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.7598797Z 2023-01-11T21:41:23.7599185Z [2023-01-11 21:32:31,244] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 171 2023-01-11T21:41:23.7599864Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7600078Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7600481Z [2023-01-11 21:32:31,264] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 172 2023-01-11T21:41:23.7600507Z 2023-01-11T21:41:23.7600629Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7600726Z import torch 2023-01-11T21:41:23.7600834Z import random 2023-01-11T21:41:23.7600997Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7601167Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7601174Z 2023-01-11T21:41:23.7601274Z aten = torch.ops.aten 2023-01-11T21:41:23.7601478Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7601622Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7601632Z 2023-01-11T21:41:23.7601658Z 2023-01-11T21:41:23.7601876Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7602280Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7602488Z extern "C" void kernel(long* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.7602658Z const int* __restrict__ in_ptr0, 2023-01-11T21:41:23.7602805Z long* __restrict__ out_ptr1) 2023-01-11T21:41:23.7602979Z { 2023-01-11T21:41:23.7603102Z auto out_ptr0 = in_out_ptr0; 2023-01-11T21:41:23.7603175Z { 2023-01-11T21:41:23.7603264Z { 2023-01-11T21:41:23.7603370Z long tmp2 = 0; 2023-01-11T21:41:23.7603472Z long tmp3 = 0; 2023-01-11T21:41:23.7603647Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7603740Z { 2023-01-11T21:41:23.7603915Z #pragma omp for reduction(+:tmp2) reduction(+:tmp3) 2023-01-11T21:41:23.7604034Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.7604127Z { 2023-01-11T21:41:23.7604221Z { 2023-01-11T21:41:23.7604358Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7604531Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.7604658Z tmp2 += tmp1; 2023-01-11T21:41:23.7604765Z tmp3 += tmp1; 2023-01-11T21:41:23.7604871Z } 2023-01-11T21:41:23.7605035Z } 2023-01-11T21:41:23.7605145Z } 2023-01-11T21:41:23.7605273Z out_ptr0[0] = tmp2; 2023-01-11T21:41:23.7605397Z out_ptr1[0] = tmp3; 2023-01-11T21:41:23.7605498Z } 2023-01-11T21:41:23.7605580Z } 2023-01-11T21:41:23.7605677Z { 2023-01-11T21:41:23.7605775Z { 2023-01-11T21:41:23.7605893Z auto tmp0 = out_ptr0[0]; 2023-01-11T21:41:23.7606016Z auto tmp3 = out_ptr1[0]; 2023-01-11T21:41:23.7606170Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.7606302Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.7606414Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:23.7606539Z in_out_ptr0[0] = tmp4; 2023-01-11T21:41:23.7606636Z } 2023-01-11T21:41:23.7606727Z } 2023-01-11T21:41:23.7606819Z } 2023-01-11T21:41:23.7606961Z ''') 2023-01-11T21:41:23.7606972Z 2023-01-11T21:41:23.7607105Z async_compile.wait(globals()) 2023-01-11T21:41:23.7607228Z del async_compile 2023-01-11T21:41:23.7607421Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7607527Z wrapper = ( 2023-01-11T21:41:23.7607657Z ''' 2023-01-11T21:41:23.7607773Z #include 2023-01-11T21:41:23.7607888Z #include 2023-01-11T21:41:23.7608050Z at::Tensor call_19(std::vector args) { 2023-01-11T21:41:23.7608154Z at::Tensor arg0_1; 2023-01-11T21:41:23.7608253Z arg0_1 = args[0]; 2023-01-11T21:41:23.7608449Z auto buf0 = at::empty_strided({}, {}, at::ScalarType::Long); 2023-01-11T21:41:23.7608629Z auto buf1 = at::empty_strided({}, {}, 
at::ScalarType::Long); 2023-01-11T21:41:23.7608757Z auto buf2 = buf0; buf0.reset(); // reuse 2023-01-11T21:41:23.7609188Z auto kernel_cpp_0_lib = dlopen("/tmp/torchinductor_jenkins/3h/c3h2ropcb3kmljlkoaniup2ktv5vwaget4hkir3pmqeix6qdyoww.so", RTLD_NOW); 2023-01-11T21:41:23.7609318Z assert(kernel_cpp_0_lib != nullptr); 2023-01-11T21:41:23.7609462Z void (*kernel_cpp_0)(long*,const int*,long*); 2023-01-11T21:41:23.7609646Z *(void **) (&kernel_cpp_0) = dlsym(kernel_cpp_0_lib, "kernel"); 2023-01-11T21:41:23.7609876Z kernel_cpp_0((long*)(buf2.data_ptr()), (int*)(arg0_1.data_ptr()), (long*)(buf1.data_ptr())); 2023-01-11T21:41:23.7609971Z arg0_1.reset(); 2023-01-11T21:41:23.7610132Z return buf2; }''' ) 2023-01-11T21:41:23.7610144Z 2023-01-11T21:41:23.7610246Z module = load_inline( 2023-01-11T21:41:23.7610637Z name='inline_extension_cnlotoitsz6eiybqu2bw3lgfkvhrhrmri7y2brdfjlaghnyjsytr', 2023-01-11T21:41:23.7610746Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7610883Z functions=['call_19'], 2023-01-11T21:41:23.7611399Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7611610Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7612480Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7612568Z 2023-01-11T21:41:23.7612667Z def _wrap_func(f): 2023-01-11T21:41:23.7612758Z def g(args): 2023-01-11T21:41:23.7612853Z return f(args) 2023-01-11T21:41:23.7612940Z return g 2023-01-11T21:41:23.7613051Z call = _wrap_func(module.call_19) 2023-01-11T21:41:23.7613058Z 2023-01-11T21:41:23.7613063Z 2023-01-11T21:41:23.7613160Z if __name__ == "__main__": 2023-01-11T21:41:23.7613310Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7613484Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7613761Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:23.7613902Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.7613908Z 2023-01-11T21:41:23.7614390Z [2023-01-11 21:32:52,225] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 172 2023-01-11T21:41:23.7615019Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7615190Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7615546Z [2023-01-11 21:32:52,250] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 173 2023-01-11T21:41:23.7615569Z 2023-01-11T21:41:23.7615684Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7615777Z import torch 2023-01-11T21:41:23.7615873Z import random 2023-01-11T21:41:23.7616040Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7616216Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7616223Z 2023-01-11T21:41:23.7616329Z aten = torch.ops.aten 2023-01-11T21:41:23.7616522Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7616636Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7616642Z 2023-01-11T21:41:23.7616662Z 2023-01-11T21:41:23.7616842Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7617133Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7617294Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7617442Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.7617581Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7617706Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.7617784Z { 2023-01-11T21:41:23.7617902Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7617990Z { 2023-01-11T21:41:23.7618100Z #pragma omp for 2023-01-11T21:41:23.7618214Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.7618292Z { 2023-01-11T21:41:23.7618401Z #pragma GCC ivdep 2023-01-11T21:41:23.7618515Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:23.7618583Z { 2023-01-11T21:41:23.7618671Z { 2023-01-11T21:41:23.7618755Z { 2023-01-11T21:41:23.7618892Z auto tmp0 = in_ptr0[i0 + (8*i1)]; 2023-01-11T21:41:23.7619028Z auto tmp1 = in_ptr1[i1 + (8*i0)]; 2023-01-11T21:41:23.7619177Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.7619342Z out_ptr0[i0 + (8*i1)] = tmp2; 2023-01-11T21:41:23.7619426Z } 2023-01-11T21:41:23.7619530Z } 2023-01-11T21:41:23.7619633Z } 2023-01-11T21:41:23.7619737Z } 2023-01-11T21:41:23.7619866Z #pragma omp for 2023-01-11T21:41:23.7620077Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.7620170Z { 2023-01-11T21:41:23.7620366Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.7620551Z auto tmp1 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.7620670Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.7620852Z auto tmp3 = at::vec::Vectorized(static_cast(10)); 2023-01-11T21:41:23.7620968Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:23.7621083Z tmp4.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.7621161Z } 2023-01-11T21:41:23.7621272Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7621378Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.7621460Z { 2023-01-11T21:41:23.7621576Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:23.7621723Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.7621841Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.7622059Z auto tmp3 = static_cast(10); 2023-01-11T21:41:23.7622164Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:23.7622274Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:23.7622508Z } 2023-01-11T21:41:23.7622605Z } 2023-01-11T21:41:23.7622688Z } 2023-01-11T21:41:23.7622815Z ''') 2023-01-11T21:41:23.7622822Z 2023-01-11T21:41:23.7622923Z async_compile.wait(globals()) 2023-01-11T21:41:23.7623018Z del async_compile 
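In the graph 173 kernel above, the first loop reads its two inputs with swapped index expressions (i0 + 8*i1 versus i1 + 8*i0), i.e. one operand is effectively transposed, so it is emitted as a scalar loop under #pragma GCC ivdep; the second loop, a plain 2*x + 10 over contiguous data, is emitted with at::vec::Vectorized. A plain C++ sketch of the transposed-add index arithmetic, assuming a fixed 8x8 row-major layout for illustration:

    // Sketch: out = a + b^T for 8x8 row-major buffers, mirroring the
    // swapped-stride indexing in the generated kernel (illustrative only).
    #include <cstdio>

    int main() {
        const long N = 8;
        float a[N * N], b[N * N], out[N * N];
        for (long i = 0; i < N * N; ++i) { a[i] = (float)i; b[i] = (float)(2 * i); }

        for (long i0 = 0; i0 < N; ++i0) {
            for (long i1 = 0; i1 < N; ++i1) {
                // b is indexed with the strides swapped relative to a (a + b^T)
                out[i0 + N * i1] = a[i0 + N * i1] + b[i1 + N * i0];
            }
        }
        std::printf("out[1] = %f\n", out[1]);
        return 0;
    }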
2023-01-11T21:41:23.7623172Z from torch.utils.cpp_extension import load_inline 2023-01-11T21:41:23.7623263Z wrapper = ( 2023-01-11T21:41:23.7623372Z ''' 2023-01-11T21:41:23.7623471Z #include 2023-01-11T21:41:23.7623572Z #include 2023-01-11T21:41:23.7623729Z std::vector call_20(std::vector args) { 2023-01-11T21:41:23.7623837Z at::Tensor arg0_1, arg1_1; 2023-01-11T21:41:23.7623927Z arg0_1 = args[0]; 2023-01-11T21:41:23.7624014Z arg1_1 = args[1]; 2023-01-11T21:41:23.7624191Z auto buf0 = at::empty_strided({8, 8}, {1, 8}, at::ScalarType::Float); 2023-01-11T21:41:23.7624363Z auto buf1 = at::empty_strided({8, 8}, {1, 8}, at::ScalarType::Float); 2023-01-11T21:41:23.7624691Z auto kernel_cpp_0_lib = dlopen("/tmp/torchinductor_jenkins/ue/cueituey67usa3z6yqp6aokcyslwfb22e2edl237c4xqodiu4ah2.so", RTLD_NOW); 2023-01-11T21:41:23.7624821Z assert(kernel_cpp_0_lib != nullptr); 2023-01-11T21:41:23.7624972Z void (*kernel_cpp_0)(const float*,const float*,float*,float*); 2023-01-11T21:41:23.7625166Z *(void **) (&kernel_cpp_0) = dlsym(kernel_cpp_0_lib, "kernel"); 2023-01-11T21:41:23.7625450Z kernel_cpp_0((float*)(arg0_1.data_ptr()), (float*)(arg1_1.data_ptr()), (float*)(buf0.data_ptr()), (float*)(buf1.data_ptr())); 2023-01-11T21:41:23.7625550Z arg0_1.reset(); 2023-01-11T21:41:23.7625663Z arg1_1.reset(); 2023-01-11T21:41:23.7625993Z return std::vector({buf0, buf1}); }''' ) 2023-01-11T21:41:23.7626004Z 2023-01-11T21:41:23.7626123Z module = load_inline( 2023-01-11T21:41:23.7626497Z name='inline_extension_cjhlg7jjaalr6ljwfilqinh666vxmfig6blf4xoze25v3hhrk5f6', 2023-01-11T21:41:23.7626599Z cpp_sources=[wrapper], 2023-01-11T21:41:23.7626782Z functions=['call_20'], 2023-01-11T21:41:23.7627297Z extra_cflags=['-std=c++17 -Wno-unused-variable -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS'], 2023-01-11T21:41:23.7627496Z extra_ldflags=['-shared -fPIC -lgomp'], 2023-01-11T21:41:23.7628321Z extra_include_paths=['-I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10']) 2023-01-11T21:41:23.7628329Z 2023-01-11T21:41:23.7628420Z def _wrap_func(f): 2023-01-11T21:41:23.7628511Z def g(args): 2023-01-11T21:41:23.7628607Z return f(args) 2023-01-11T21:41:23.7628812Z return g 2023-01-11T21:41:23.7628945Z call = _wrap_func(module.call_20) 2023-01-11T21:41:23.7628953Z 2023-01-11T21:41:23.7628959Z 2023-01-11T21:41:23.7629063Z if __name__ == "__main__": 2023-01-11T21:41:23.7629252Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7629450Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7629806Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7630155Z arg1_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7630366Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7630376Z 2023-01-11T21:41:23.7630915Z [2023-01-11 21:33:13,266] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 173 2023-01-11T21:41:23.7631016Z ok (415.558s) 2023-01-11T21:41:23.7631235Z test_cudnn_rnn_cpu (__main__.CpuTests) ... skip: requires CUDA (0.002s) 2023-01-11T21:41:23.7631602Z test_dense_mask_index_cpu (__main__.CpuTests) ... 
skip: https://github.com/pytorch/torchdynamo/issues/1697 (0.001s) 2023-01-11T21:41:23.7632437Z test_div1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7632599Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7632968Z [2023-01-11 21:33:13,329] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 174 2023-01-11T21:41:23.7633329Z [2023-01-11 21:33:14,937] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 174 2023-01-11T21:41:23.7633337Z 2023-01-11T21:41:23.7633455Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7633548Z import torch 2023-01-11T21:41:23.7633629Z import random 2023-01-11T21:41:23.7633870Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7634031Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7634037Z 2023-01-11T21:41:23.7634144Z aten = torch.ops.aten 2023-01-11T21:41:23.7634320Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7634439Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7634447Z 2023-01-11T21:41:23.7634452Z 2023-01-11T21:41:23.7634637Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7634899Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7635041Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7635176Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.7635306Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7635436Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.7635565Z float* __restrict__ out_ptr2, 2023-01-11T21:41:23.7635688Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.7635813Z float* __restrict__ out_ptr4) 2023-01-11T21:41:23.7635880Z { 2023-01-11T21:41:23.7636013Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7636094Z { 2023-01-11T21:41:23.7636197Z #pragma omp for 2023-01-11T21:41:23.7636304Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.7636385Z { 2023-01-11T21:41:23.7636575Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.7636752Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.7636851Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.7636966Z auto tmp3 = tmp2.floor(); 2023-01-11T21:41:23.7637075Z auto tmp4 = tmp2.trunc(); 2023-01-11T21:41:23.7637191Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.7637409Z tmp3.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.7637539Z tmp4.store(out_ptr2 + 8*i0); 2023-01-11T21:41:23.7637657Z tmp2.store(out_ptr3 + 8*i0); 2023-01-11T21:41:23.7637764Z tmp3.store(out_ptr4 + 8*i0); 2023-01-11T21:41:23.7637844Z } 2023-01-11T21:41:23.7637984Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7638128Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.7638235Z { 2023-01-11T21:41:23.7638378Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7638514Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.7638663Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.7638830Z auto tmp3 = std::floor(tmp2); 2023-01-11T21:41:23.7638974Z auto tmp4 = std::trunc(tmp2); 2023-01-11T21:41:23.7639112Z out_ptr0[i0] = tmp2; 
2023-01-11T21:41:23.7639249Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.7639386Z out_ptr2[i0] = tmp4; 2023-01-11T21:41:23.7639570Z out_ptr3[i0] = tmp2; 2023-01-11T21:41:23.7639717Z out_ptr4[i0] = tmp3; 2023-01-11T21:41:23.7639825Z } 2023-01-11T21:41:23.7639932Z } 2023-01-11T21:41:23.7640036Z } 2023-01-11T21:41:23.7640181Z ''') 2023-01-11T21:41:23.7640193Z 2023-01-11T21:41:23.7640200Z 2023-01-11T21:41:23.7640365Z async_compile.wait(globals()) 2023-01-11T21:41:23.7640472Z del async_compile 2023-01-11T21:41:23.7640494Z 2023-01-11T21:41:23.7640591Z def call(args): 2023-01-11T21:41:23.7640698Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.7640803Z args.clear() 2023-01-11T21:41:23.7641167Z buf0 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7641436Z buf1 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7641723Z buf2 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7641980Z buf3 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7642234Z buf4 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7642570Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:23.7642661Z del arg0_1 2023-01-11T21:41:23.7642746Z del arg1_1 2023-01-11T21:41:23.7642869Z return (buf0, buf1, buf2, buf3, buf4, ) 2023-01-11T21:41:23.7642876Z 2023-01-11T21:41:23.7642881Z 2023-01-11T21:41:23.7642979Z if __name__ == "__main__": 2023-01-11T21:41:23.7643135Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7643295Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7643549Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7643807Z arg1_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7643964Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7643971Z 2023-01-11T21:41:23.7644059Z ok (1.669s) 2023-01-11T21:41:23.7644669Z test_div2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
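The test_div1_cpu kernel above evaluates the same quotient three ways: true division, std::floor of the quotient, and std::trunc of the quotient. These agree for non-negative results and differ for negative, non-exact quotients, since floor rounds toward negative infinity while trunc rounds toward zero. A tiny C++ sketch of the distinction:

    // floor vs trunc on a negative quotient.
    #include <cmath>
    #include <cstdio>

    int main() {
        float a = -7.0f, b = 2.0f;
        float q = a / b;                              // -3.5
        std::printf("true  = %f\n", q);               // -3.5
        std::printf("floor = %f\n", std::floor(q));   // -4.0 (toward -infinity)
        std::printf("trunc = %f\n", std::trunc(q));   // -3.0 (toward zero)
        return 0;
    }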
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7644835Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7645206Z [2023-01-11 21:33:15,000] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 175 2023-01-11T21:41:23.7645585Z [2023-01-11 21:33:16,596] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 175 2023-01-11T21:41:23.7645594Z 2023-01-11T21:41:23.7645731Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7645932Z import torch 2023-01-11T21:41:23.7646045Z import random 2023-01-11T21:41:23.7646268Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7646496Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7646504Z 2023-01-11T21:41:23.7646647Z aten = torch.ops.aten 2023-01-11T21:41:23.7646906Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7647074Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7647083Z 2023-01-11T21:41:23.7647091Z 2023-01-11T21:41:23.7647351Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7647778Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7647991Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.7648183Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.7648367Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7648614Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.7648792Z float* __restrict__ out_ptr2, 2023-01-11T21:41:23.7648942Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.7649109Z float* __restrict__ out_ptr4) 2023-01-11T21:41:23.7649200Z { 2023-01-11T21:41:23.7649391Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7649485Z { 2023-01-11T21:41:23.7649610Z #pragma omp for 2023-01-11T21:41:23.7649755Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.7649865Z { 2023-01-11T21:41:23.7649976Z { 2023-01-11T21:41:23.7650075Z { 2023-01-11T21:41:23.7650239Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7650406Z auto tmp2 = in_ptr1[i0]; 2023-01-11T21:41:23.7650602Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.7650721Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:23.7650864Z auto tmp4 = std::floor(tmp3); 2023-01-11T21:41:23.7650994Z auto tmp5 = std::trunc(tmp3); 2023-01-11T21:41:23.7651090Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:23.7651200Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:23.7651308Z out_ptr2[i0] = tmp5; 2023-01-11T21:41:23.7651414Z out_ptr3[i0] = tmp3; 2023-01-11T21:41:23.7651518Z out_ptr4[i0] = tmp4; 2023-01-11T21:41:23.7651600Z } 2023-01-11T21:41:23.7651684Z } 2023-01-11T21:41:23.7651749Z } 2023-01-11T21:41:23.7651826Z } 2023-01-11T21:41:23.7651905Z } 2023-01-11T21:41:23.7652024Z ''') 2023-01-11T21:41:23.7652032Z 2023-01-11T21:41:23.7652038Z 2023-01-11T21:41:23.7652173Z async_compile.wait(globals()) 2023-01-11T21:41:23.7652278Z del async_compile 2023-01-11T21:41:23.7652284Z 2023-01-11T21:41:23.7652380Z def call(args): 2023-01-11T21:41:23.7652465Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.7652582Z args.clear() 2023-01-11T21:41:23.7652971Z buf0 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7653350Z buf1 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7653720Z buf2 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7654081Z buf3 = empty_strided((8, 8), (8, 1), 
device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7654442Z buf4 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7654974Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:23.7655079Z del arg0_1 2023-01-11T21:41:23.7655195Z del arg1_1 2023-01-11T21:41:23.7655372Z return (buf0, buf1, buf2, buf3, buf4, ) 2023-01-11T21:41:23.7655382Z 2023-01-11T21:41:23.7655466Z 2023-01-11T21:41:23.7655609Z if __name__ == "__main__": 2023-01-11T21:41:23.7655822Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7656059Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7656430Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7656801Z arg1_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7657003Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7657012Z 2023-01-11T21:41:23.7657133Z ok (1.659s) 2023-01-11T21:41:23.7658156Z test_div3_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7658463Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7658996Z [2023-01-11 21:33:16,646] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 176 2023-01-11T21:41:23.7659469Z [2023-01-11 21:33:18,260] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 176 2023-01-11T21:41:23.7659476Z 2023-01-11T21:41:23.7659598Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7659691Z import torch 2023-01-11T21:41:23.7659781Z import random 2023-01-11T21:41:23.7659918Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7660074Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7660081Z 2023-01-11T21:41:23.7660182Z aten = torch.ops.aten 2023-01-11T21:41:23.7660372Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7660505Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7660511Z 2023-01-11T21:41:23.7660516Z 2023-01-11T21:41:23.7660775Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7661211Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7661435Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.7661604Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:23.7661783Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7661959Z long* __restrict__ out_ptr1, 2023-01-11T21:41:23.7662129Z long* __restrict__ out_ptr2, 2023-01-11T21:41:23.7662303Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.7662615Z long* __restrict__ out_ptr4) 2023-01-11T21:41:23.7662720Z { 2023-01-11T21:41:23.7662888Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7662993Z { 2023-01-11T21:41:23.7663128Z #pragma omp for 2023-01-11T21:41:23.7681377Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.7681556Z { 2023-01-11T21:41:23.7681666Z { 2023-01-11T21:41:23.7681771Z { 
2023-01-11T21:41:23.7681940Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7682097Z auto tmp2 = in_ptr1[i0]; 2023-01-11T21:41:23.7682294Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.7682483Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:23.7682640Z auto tmp4 = tmp1 / tmp3; 2023-01-11T21:41:23.7683147Z auto tmp5 = ((tmp0 < 0) != (tmp2 < 0) ? (tmp0 % tmp2 != 0 ? tmp0 / tmp2 - 1 : tmp0 / tmp2) : tmp0 / tmp2); 2023-01-11T21:41:23.7683303Z auto tmp6 = tmp0 / tmp2; 2023-01-11T21:41:23.7683438Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:23.7683582Z out_ptr1[i0] = tmp5; 2023-01-11T21:41:23.7683723Z out_ptr2[i0] = tmp6; 2023-01-11T21:41:23.7683865Z out_ptr3[i0] = tmp4; 2023-01-11T21:41:23.7684165Z out_ptr4[i0] = tmp5; 2023-01-11T21:41:23.7684270Z } 2023-01-11T21:41:23.7684373Z } 2023-01-11T21:41:23.7684465Z } 2023-01-11T21:41:23.7684566Z } 2023-01-11T21:41:23.7684665Z } 2023-01-11T21:41:23.7684797Z ''') 2023-01-11T21:41:23.7684807Z 2023-01-11T21:41:23.7684814Z 2023-01-11T21:41:23.7684975Z async_compile.wait(globals()) 2023-01-11T21:41:23.7685098Z del async_compile 2023-01-11T21:41:23.7685106Z 2023-01-11T21:41:23.7685228Z def call(args): 2023-01-11T21:41:23.7685347Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.7685464Z args.clear() 2023-01-11T21:41:23.7685793Z buf0 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7686147Z buf1 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7686503Z buf2 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7686923Z buf3 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7687280Z buf4 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7687815Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:23.7687921Z del arg0_1 2023-01-11T21:41:23.7688035Z del arg1_1 2023-01-11T21:41:23.7688204Z return (buf0, buf1, buf2, buf3, buf4, ) 2023-01-11T21:41:23.7688214Z 2023-01-11T21:41:23.7688222Z 2023-01-11T21:41:23.7688346Z if __name__ == "__main__": 2023-01-11T21:41:23.7688559Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7688791Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7689151Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7689504Z arg1_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7689713Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7689729Z 2023-01-11T21:41:23.7689828Z ok (1.663s) 2023-01-11T21:41:23.7690846Z test_div4_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7691077Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7691599Z [2023-01-11 21:33:18,310] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 177 2023-01-11T21:41:23.7692114Z [2023-01-11 21:33:18,330] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 177 2023-01-11T21:41:23.7692123Z 2023-01-11T21:41:23.7692289Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7692408Z import torch 2023-01-11T21:41:23.7692519Z import random 2023-01-11T21:41:23.7692730Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7692963Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7692973Z 2023-01-11T21:41:23.7693105Z aten = torch.ops.aten 2023-01-11T21:41:23.7693365Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7693528Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7693536Z 2023-01-11T21:41:23.7693544Z 2023-01-11T21:41:23.7693796Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7694217Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7694436Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.7694616Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:23.7694778Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7695017Z long* __restrict__ out_ptr1, 2023-01-11T21:41:23.7695178Z long* __restrict__ out_ptr2, 2023-01-11T21:41:23.7695348Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.7695508Z long* __restrict__ out_ptr4) 2023-01-11T21:41:23.7695599Z { 2023-01-11T21:41:23.7695778Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7695870Z { 2023-01-11T21:41:23.7696000Z #pragma omp for 2023-01-11T21:41:23.7696125Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.7696226Z { 2023-01-11T21:41:23.7696326Z { 2023-01-11T21:41:23.7696428Z { 2023-01-11T21:41:23.7696580Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7696732Z auto tmp2 = in_ptr1[i0]; 2023-01-11T21:41:23.7696926Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.7697114Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:23.7697348Z auto tmp4 = tmp1 / tmp3; 2023-01-11T21:41:23.7697832Z auto tmp5 = ((tmp0 < 0) != (tmp2 < 0) ? (tmp0 % tmp2 != 0 ? 
tmp0 / tmp2 - 1 : tmp0 / tmp2) : tmp0 / tmp2); 2023-01-11T21:41:23.7697987Z auto tmp6 = tmp0 / tmp2; 2023-01-11T21:41:23.7698127Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:23.7698264Z out_ptr1[i0] = tmp5; 2023-01-11T21:41:23.7698403Z out_ptr2[i0] = tmp6; 2023-01-11T21:41:23.7698538Z out_ptr3[i0] = tmp4; 2023-01-11T21:41:23.7698673Z out_ptr4[i0] = tmp5; 2023-01-11T21:41:23.7698775Z } 2023-01-11T21:41:23.7698876Z } 2023-01-11T21:41:23.7698966Z } 2023-01-11T21:41:23.7699066Z } 2023-01-11T21:41:23.7699161Z } 2023-01-11T21:41:23.7699288Z ''') 2023-01-11T21:41:23.7699297Z 2023-01-11T21:41:23.7699305Z 2023-01-11T21:41:23.7699470Z async_compile.wait(globals()) 2023-01-11T21:41:23.7699594Z del async_compile 2023-01-11T21:41:23.7699607Z 2023-01-11T21:41:23.7699728Z def call(args): 2023-01-11T21:41:23.7699863Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.7699974Z args.clear() 2023-01-11T21:41:23.7700357Z buf0 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7700732Z buf1 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7701096Z buf2 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7701465Z buf3 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7701829Z buf4 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7702579Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:23.7702697Z del arg0_1 2023-01-11T21:41:23.7702803Z del arg1_1 2023-01-11T21:41:23.7702982Z return (buf0, buf1, buf2, buf3, buf4, ) 2023-01-11T21:41:23.7702992Z 2023-01-11T21:41:23.7702999Z 2023-01-11T21:41:23.7703135Z if __name__ == "__main__": 2023-01-11T21:41:23.7703359Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7703603Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7703980Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7704350Z arg1_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7704570Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7704579Z 2023-01-11T21:41:23.7704685Z ok (0.070s) 2023-01-11T21:41:23.7705737Z test_div5_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7706090Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7706626Z [2023-01-11 21:33:18,379] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 178 2023-01-11T21:41:23.7707165Z [2023-01-11 21:33:19,945] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 178 2023-01-11T21:41:23.7707177Z 2023-01-11T21:41:23.7707354Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7707481Z import torch 2023-01-11T21:41:23.7707609Z import random 2023-01-11T21:41:23.7707833Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7708057Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7708066Z 2023-01-11T21:41:23.7708211Z aten = torch.ops.aten 2023-01-11T21:41:23.7708475Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7708718Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7708727Z 2023-01-11T21:41:23.7708733Z 2023-01-11T21:41:23.7708987Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7709424Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7709651Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.7709834Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7709993Z long* __restrict__ out_ptr1, 2023-01-11T21:41:23.7710170Z long* __restrict__ out_ptr2, 2023-01-11T21:41:23.7710348Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.7710516Z long* __restrict__ out_ptr4) 2023-01-11T21:41:23.7710621Z { 2023-01-11T21:41:23.7710803Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7710911Z { 2023-01-11T21:41:23.7711032Z #pragma omp for 2023-01-11T21:41:23.7711185Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.7711301Z { 2023-01-11T21:41:23.7711410Z { 2023-01-11T21:41:23.7711519Z { 2023-01-11T21:41:23.7711680Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7711882Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.7712060Z auto tmp2 = static_cast(16); 2023-01-11T21:41:23.7712222Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:23.7712410Z auto tmp4 = static_cast(16); 2023-01-11T21:41:23.7712888Z auto tmp5 = ((tmp0 < 0) != (tmp4 < 0) ? (tmp0 % tmp4 != 0 ? 
tmp0 / tmp4 - 1 : tmp0 / tmp4) : tmp0 / tmp4); 2023-01-11T21:41:23.7713051Z auto tmp6 = tmp0 / tmp4; 2023-01-11T21:41:23.7713199Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:23.7713350Z out_ptr1[i0] = tmp5; 2023-01-11T21:41:23.7713481Z out_ptr2[i0] = tmp6; 2023-01-11T21:41:23.7713644Z out_ptr3[i0] = tmp3; 2023-01-11T21:41:23.7713860Z out_ptr4[i0] = tmp5; 2023-01-11T21:41:23.7713976Z } 2023-01-11T21:41:23.7714090Z } 2023-01-11T21:41:23.7714200Z } 2023-01-11T21:41:23.7714309Z } 2023-01-11T21:41:23.7714399Z } 2023-01-11T21:41:23.7714539Z ''') 2023-01-11T21:41:23.7714548Z 2023-01-11T21:41:23.7714555Z 2023-01-11T21:41:23.7714725Z async_compile.wait(globals()) 2023-01-11T21:41:23.7714853Z del async_compile 2023-01-11T21:41:23.7714861Z 2023-01-11T21:41:23.7714986Z def call(args): 2023-01-11T21:41:23.7715105Z arg0_1, = args 2023-01-11T21:41:23.7715230Z args.clear() 2023-01-11T21:41:23.7715594Z buf0 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7715965Z buf1 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7716331Z buf2 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7716782Z buf3 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7717143Z buf4 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7717625Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:23.7717744Z del arg0_1 2023-01-11T21:41:23.7717922Z return (buf0, buf1, buf2, buf3, buf4, ) 2023-01-11T21:41:23.7717931Z 2023-01-11T21:41:23.7717939Z 2023-01-11T21:41:23.7718074Z if __name__ == "__main__": 2023-01-11T21:41:23.7718272Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7718505Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7718866Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7719075Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.7719083Z 2023-01-11T21:41:23.7719255Z ok (1.615s) 2023-01-11T21:41:23.7720305Z test_div6_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7720545Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7721067Z [2023-01-11 21:33:19,995] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 179 2023-01-11T21:41:23.7721609Z [2023-01-11 21:33:21,604] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 179 2023-01-11T21:41:23.7721619Z 2023-01-11T21:41:23.7721784Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7721911Z import torch 2023-01-11T21:41:23.7722041Z import random 2023-01-11T21:41:23.7722272Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7722510Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7722519Z 2023-01-11T21:41:23.7722664Z aten = torch.ops.aten 2023-01-11T21:41:23.7722928Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7723088Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7723111Z 2023-01-11T21:41:23.7723119Z 2023-01-11T21:41:23.7723362Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7723794Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7724020Z extern "C" void kernel(const bool* __restrict__ in_ptr0, 2023-01-11T21:41:23.7724213Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:23.7724390Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7724564Z long* __restrict__ out_ptr1, 2023-01-11T21:41:23.7724735Z long* __restrict__ out_ptr2, 2023-01-11T21:41:23.7724902Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.7725049Z long* __restrict__ out_ptr4) 2023-01-11T21:41:23.7725134Z { 2023-01-11T21:41:23.7725268Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7725356Z { 2023-01-11T21:41:23.7725462Z #pragma omp for 2023-01-11T21:41:23.7725573Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.7725642Z { 2023-01-11T21:41:23.7725736Z { 2023-01-11T21:41:23.7725825Z { 2023-01-11T21:41:23.7725957Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7726090Z auto tmp3 = in_ptr1[i0]; 2023-01-11T21:41:23.7726235Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.7726387Z auto tmp2 = static_cast(tmp1); 2023-01-11T21:41:23.7726519Z auto tmp4 = static_cast(tmp3); 2023-01-11T21:41:23.7726645Z auto tmp5 = tmp2 / tmp4; 2023-01-11T21:41:23.7727082Z auto tmp6 = ((tmp1 < 0) != (tmp3 < 0) ? (tmp1 % tmp3 != 0 ? 
tmp1 / tmp3 - 1 : tmp1 / tmp3) : tmp1 / tmp3); 2023-01-11T21:41:23.7727209Z auto tmp7 = tmp1 / tmp3; 2023-01-11T21:41:23.7727364Z auto tmp8 = static_cast(tmp0); 2023-01-11T21:41:23.7727495Z auto tmp9 = tmp8 / tmp4; 2023-01-11T21:41:23.7727611Z out_ptr0[i0] = tmp5; 2023-01-11T21:41:23.7727709Z out_ptr1[i0] = tmp6; 2023-01-11T21:41:23.7727820Z out_ptr2[i0] = tmp7; 2023-01-11T21:41:23.7727938Z out_ptr3[i0] = tmp9; 2023-01-11T21:41:23.7728050Z out_ptr4[i0] = tmp6; 2023-01-11T21:41:23.7728137Z } 2023-01-11T21:41:23.7728224Z } 2023-01-11T21:41:23.7728307Z } 2023-01-11T21:41:23.7728375Z } 2023-01-11T21:41:23.7728457Z } 2023-01-11T21:41:23.7728572Z ''') 2023-01-11T21:41:23.7728586Z 2023-01-11T21:41:23.7728648Z 2023-01-11T21:41:23.7728776Z async_compile.wait(globals()) 2023-01-11T21:41:23.7728875Z del async_compile 2023-01-11T21:41:23.7728881Z 2023-01-11T21:41:23.7728973Z def call(args): 2023-01-11T21:41:23.7729074Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.7729153Z args.clear() 2023-01-11T21:41:23.7729432Z buf0 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7729714Z buf1 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7729972Z buf2 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7730234Z buf3 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7730496Z buf4 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7730872Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:23.7730978Z del arg0_1 2023-01-11T21:41:23.7731069Z del arg1_1 2023-01-11T21:41:23.7731189Z return (buf0, buf1, buf2, buf3, buf4, ) 2023-01-11T21:41:23.7731198Z 2023-01-11T21:41:23.7731204Z 2023-01-11T21:41:23.7731308Z if __name__ == "__main__": 2023-01-11T21:41:23.7731470Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7731649Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7731929Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.7732208Z arg1_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7732372Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7732378Z 2023-01-11T21:41:23.7732471Z ok (1.659s) 2023-01-11T21:41:23.7733123Z test_div7_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7733304Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7733687Z [2023-01-11 21:33:21,656] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 180 2023-01-11T21:41:23.7734060Z [2023-01-11 21:33:23,234] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 180 2023-01-11T21:41:23.7734068Z 2023-01-11T21:41:23.7734201Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7734300Z import torch 2023-01-11T21:41:23.7734399Z import random 2023-01-11T21:41:23.7734558Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7734720Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7734727Z 2023-01-11T21:41:23.7734906Z aten = torch.ops.aten 2023-01-11T21:41:23.7735102Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7735236Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7735243Z 2023-01-11T21:41:23.7735249Z 2023-01-11T21:41:23.7735444Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7735716Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7735883Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.7736028Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:23.7736160Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7736283Z long* __restrict__ out_ptr1, 2023-01-11T21:41:23.7736413Z long* __restrict__ out_ptr2, 2023-01-11T21:41:23.7736544Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.7736674Z long* __restrict__ out_ptr4) 2023-01-11T21:41:23.7736821Z { 2023-01-11T21:41:23.7736968Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7737052Z { 2023-01-11T21:41:23.7737146Z #pragma omp for 2023-01-11T21:41:23.7737268Z for(long i0=0; i0<10000; i0+=1) 2023-01-11T21:41:23.7737355Z { 2023-01-11T21:41:23.7737443Z { 2023-01-11T21:41:23.7737542Z { 2023-01-11T21:41:23.7737680Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7737795Z auto tmp2 = in_ptr1[i0]; 2023-01-11T21:41:23.7737940Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.7738084Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:23.7738209Z auto tmp4 = tmp1 / tmp3; 2023-01-11T21:41:23.7738571Z auto tmp5 = ((tmp0 < 0) != (tmp2 < 0) ? (tmp0 % tmp2 != 0 ? 
tmp0 / tmp2 - 1 : tmp0 / tmp2) : tmp0 / tmp2); 2023-01-11T21:41:23.7738689Z auto tmp6 = tmp0 / tmp2; 2023-01-11T21:41:23.7738810Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:23.7738925Z out_ptr1[i0] = tmp5; 2023-01-11T21:41:23.7739026Z out_ptr2[i0] = tmp6; 2023-01-11T21:41:23.7739140Z out_ptr3[i0] = tmp4; 2023-01-11T21:41:23.7739252Z out_ptr4[i0] = tmp5; 2023-01-11T21:41:23.7739334Z } 2023-01-11T21:41:23.7739421Z } 2023-01-11T21:41:23.7739506Z } 2023-01-11T21:41:23.7739592Z } 2023-01-11T21:41:23.7739661Z } 2023-01-11T21:41:23.7739781Z ''') 2023-01-11T21:41:23.7739789Z 2023-01-11T21:41:23.7739795Z 2023-01-11T21:41:23.7739927Z async_compile.wait(globals()) 2023-01-11T21:41:23.7740028Z del async_compile 2023-01-11T21:41:23.7740035Z 2023-01-11T21:41:23.7740132Z def call(args): 2023-01-11T21:41:23.7740235Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.7740334Z args.clear() 2023-01-11T21:41:23.7740617Z buf0 = empty_strided((100, 100), (100, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7740910Z buf1 = empty_strided((100, 100), (100, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7741202Z buf2 = empty_strided((100, 100), (100, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7741483Z buf3 = empty_strided((100, 100), (100, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7741780Z buf4 = empty_strided((100, 100), (100, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7742147Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:23.7742245Z del arg0_1 2023-01-11T21:41:23.7742486Z del arg1_1 2023-01-11T21:41:23.7742607Z return (buf0, buf1, buf2, buf3, buf4, ) 2023-01-11T21:41:23.7742616Z 2023-01-11T21:41:23.7742640Z 2023-01-11T21:41:23.7742729Z if __name__ == "__main__": 2023-01-11T21:41:23.7742895Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7743188Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7743487Z arg0_1 = rand_strided((100, 100), (100, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7743783Z arg1_1 = rand_strided((100, 100), (100, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7743945Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7743953Z 2023-01-11T21:41:23.7744038Z ok (1.630s) 2023-01-11T21:41:23.7744492Z test_div8_cpu (__main__.CpuTests) ... 
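The generated kernels for the test_div* cases above all share one pattern: five output buffers holding true division, floor division, truncated division, and two repeated results (out_ptr0..out_ptr4). A minimal sketch, assuming the tests compile a function of roughly this shape (the function name, inputs, and the torch.compile entry point are illustrative, not taken from the test file):

    import torch

    def div_variants(a, b):
        # The five returns mirror out_ptr0..out_ptr4 in the generated C++ kernels:
        # true division, floor division, truncated division, then two repeats.
        return (
            a / b,
            torch.div(a, b, rounding_mode="floor"),
            torch.div(a, b, rounding_mode="trunc"),
            a / b,
            torch.div(a, b, rounding_mode="floor"),
        )

    compiled = torch.compile(div_variants)  # assumption: any dynamo/inductor entry point would do
    a = torch.randint(-10, 10, (8, 8))
    b = torch.randint(1, 10, (8, 8))
    out = compiled(a, b)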
[2023-01-11 21:33:23,278] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 181 2023-01-11T21:41:23.7744894Z [2023-01-11 21:33:24,854] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 181 2023-01-11T21:41:23.7744904Z 2023-01-11T21:41:23.7745033Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7745129Z import torch 2023-01-11T21:41:23.7745228Z import random 2023-01-11T21:41:23.7745469Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7745645Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7745653Z 2023-01-11T21:41:23.7745764Z aten = torch.ops.aten 2023-01-11T21:41:23.7745938Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7746058Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7746064Z 2023-01-11T21:41:23.7746070Z 2023-01-11T21:41:23.7746262Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7746534Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7746696Z extern "C" void kernel(long* __restrict__ out_ptr0, 2023-01-11T21:41:23.7746822Z long* __restrict__ out_ptr1, 2023-01-11T21:41:23.7746944Z long* __restrict__ out_ptr2) 2023-01-11T21:41:23.7747026Z { 2023-01-11T21:41:23.7747095Z { 2023-01-11T21:41:23.7747185Z { 2023-01-11T21:41:23.7747326Z auto tmp0 = static_cast(1024); 2023-01-11T21:41:23.7747469Z auto tmp1 = static_cast(100); 2023-01-11T21:41:23.7747582Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.7747687Z out_ptr0[0] = tmp2; 2023-01-11T21:41:23.7747771Z } 2023-01-11T21:41:23.7747834Z } 2023-01-11T21:41:23.7747913Z { 2023-01-11T21:41:23.7747994Z { 2023-01-11T21:41:23.7748127Z auto tmp0 = static_cast(1024); 2023-01-11T21:41:23.7748270Z auto tmp1 = static_cast(100); 2023-01-11T21:41:23.7748627Z auto tmp2 = ((tmp0 < 0) != (tmp1 < 0) ? (tmp0 % tmp1 != 0 ? 
tmp0 / tmp1 - 1 : tmp0 / tmp1) : tmp0 / tmp1); 2023-01-11T21:41:23.7748738Z out_ptr1[0] = tmp2; 2023-01-11T21:41:23.7748809Z } 2023-01-11T21:41:23.7748895Z } 2023-01-11T21:41:23.7748980Z { 2023-01-11T21:41:23.7749065Z { 2023-01-11T21:41:23.7749209Z auto tmp0 = static_cast(1024); 2023-01-11T21:41:23.7749343Z auto tmp1 = static_cast(100); 2023-01-11T21:41:23.7749450Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.7749568Z out_ptr2[0] = tmp2; 2023-01-11T21:41:23.7749653Z } 2023-01-11T21:41:23.7749735Z } 2023-01-11T21:41:23.7749818Z } 2023-01-11T21:41:23.7749928Z ''') 2023-01-11T21:41:23.7749936Z 2023-01-11T21:41:23.7749941Z 2023-01-11T21:41:23.7750071Z async_compile.wait(globals()) 2023-01-11T21:41:23.7750153Z del async_compile 2023-01-11T21:41:23.7750175Z 2023-01-11T21:41:23.7750253Z def call(args): 2023-01-11T21:41:23.7750517Z buf0 = empty_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7750782Z buf1 = empty_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7751042Z buf2 = empty_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7751267Z kernel_cpp_0(c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.7751382Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:23.7751388Z 2023-01-11T21:41:23.7751473Z 2023-01-11T21:41:23.7751582Z if __name__ == "__main__": 2023-01-11T21:41:23.7751736Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7751896Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7752032Z print_performance(lambda: call([])) 2023-01-11T21:41:23.7752039Z 2023-01-11T21:41:23.7752124Z ok (1.621s) 2023-01-11T21:41:23.7752813Z test_div_prim_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7752988Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7753382Z [2023-01-11 21:33:24,919] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 182 2023-01-11T21:41:23.7753925Z [2023-01-11 21:33:26,489] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 182 2023-01-11T21:41:23.7754548Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7754723Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7755102Z [2023-01-11 21:33:26,531] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 183 2023-01-11T21:41:23.7755466Z [2023-01-11 21:33:28,092] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 183 2023-01-11T21:41:23.7755491Z 2023-01-11T21:41:23.7755610Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7755720Z import torch 2023-01-11T21:41:23.7755824Z import random 2023-01-11T21:41:23.7755987Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7756156Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7756165Z 2023-01-11T21:41:23.7756273Z aten = torch.ops.aten 2023-01-11T21:41:23.7756461Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7756575Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7756582Z 2023-01-11T21:41:23.7756603Z 2023-01-11T21:41:23.7756781Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7757070Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7757245Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7757395Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.7757539Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7757631Z { 2023-01-11T21:41:23.7757773Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7757843Z { 2023-01-11T21:41:23.7757951Z #pragma omp for 2023-01-11T21:41:23.7758069Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.7758375Z { 2023-01-11T21:41:23.7758708Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.7759117Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.7759462Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.7759759Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.7760002Z } 2023-01-11T21:41:23.7760266Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7760557Z for(long i0=96; i0<100; i0+=1) 2023-01-11T21:41:23.7760791Z { 2023-01-11T21:41:23.7761042Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7761315Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.7761580Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.7762008Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.7762262Z } 2023-01-11T21:41:23.7762462Z } 2023-01-11T21:41:23.7762671Z } 2023-01-11T21:41:23.7762930Z ''') 2023-01-11T21:41:23.7763070Z 2023-01-11T21:41:23.7763077Z 2023-01-11T21:41:23.7763193Z async_compile.wait(globals()) 2023-01-11T21:41:23.7763468Z del async_compile 2023-01-11T21:41:23.7763628Z 2023-01-11T21:41:23.7763725Z def call(args): 2023-01-11T21:41:23.7763990Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.7764257Z args.clear() 2023-01-11T21:41:23.7764692Z buf0 = empty_strided((100, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7765160Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.7765494Z del arg0_1 2023-01-11T21:41:23.7765734Z del arg1_1 2023-01-11T21:41:23.7766022Z return (buf0, ) 2023-01-11T21:41:23.7766183Z 2023-01-11T21:41:23.7766190Z 2023-01-11T21:41:23.7766299Z if __name__ == "__main__": 2023-01-11T21:41:23.7766700Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7767078Z from torch._inductor.utils import print_performance 
2023-01-11T21:41:23.7767610Z arg0_1 = rand_strided((100, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7768110Z arg1_1 = rand_strided((100, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7768507Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7768722Z 2023-01-11T21:41:23.7768727Z 2023-01-11T21:41:23.7768863Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7769117Z import torch 2023-01-11T21:41:23.7769343Z import random 2023-01-11T21:41:23.7769642Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7770006Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7770200Z 2023-01-11T21:41:23.7770306Z aten = torch.ops.aten 2023-01-11T21:41:23.7770640Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7770997Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7771156Z 2023-01-11T21:41:23.7771180Z 2023-01-11T21:41:23.7771375Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7771843Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7772320Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.7772648Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:23.7772969Z long* __restrict__ out_ptr0) 2023-01-11T21:41:23.7773222Z { 2023-01-11T21:41:23.7773471Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7773717Z { 2023-01-11T21:41:23.7773948Z #pragma omp for 2023-01-11T21:41:23.7774211Z for(long i0=0; i0<100; i0+=1) 2023-01-11T21:41:23.7774438Z { 2023-01-11T21:41:23.7774655Z { 2023-01-11T21:41:23.7774880Z { 2023-01-11T21:41:23.7775127Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7775426Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.7775713Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.7775984Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.7776243Z } 2023-01-11T21:41:23.7776461Z } 2023-01-11T21:41:23.7776658Z } 2023-01-11T21:41:23.7776869Z } 2023-01-11T21:41:23.7777067Z } 2023-01-11T21:41:23.7777303Z ''') 2023-01-11T21:41:23.7777436Z 2023-01-11T21:41:23.7777442Z 2023-01-11T21:41:23.7777568Z async_compile.wait(globals()) 2023-01-11T21:41:23.7777861Z del async_compile 2023-01-11T21:41:23.7778017Z 2023-01-11T21:41:23.7778103Z def call(args): 2023-01-11T21:41:23.7778347Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.7778605Z args.clear() 2023-01-11T21:41:23.7779038Z buf0 = empty_strided((100, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7779497Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.7779937Z del arg0_1 2023-01-11T21:41:23.7780185Z del arg1_1 2023-01-11T21:41:23.7780425Z return (buf0, ) 2023-01-11T21:41:23.7780592Z 2023-01-11T21:41:23.7780598Z 2023-01-11T21:41:23.7780709Z if __name__ == "__main__": 2023-01-11T21:41:23.7781036Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7781411Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7781926Z arg0_1 = rand_strided((100, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7782580Z arg1_1 = rand_strided((100, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7782933Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7783133Z 2023-01-11T21:41:23.7783227Z ok (3.236s) 2023-01-11T21:41:23.7784148Z test_div_zero_dim_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. 
It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7785020Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7785656Z [2023-01-11 21:33:28,148] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 184 2023-01-11T21:41:23.7786293Z [2023-01-11 21:33:29,804] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 184 2023-01-11T21:41:23.7787215Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7788025Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7788655Z [2023-01-11 21:33:29,919] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 185 2023-01-11T21:41:23.7789323Z [2023-01-11 21:33:31,582] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 185 2023-01-11T21:41:23.7790192Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7790966Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7791552Z [2023-01-11 21:33:31,631] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 186 2023-01-11T21:41:23.7791847Z 2023-01-11T21:41:23.7791983Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7792266Z import torch 2023-01-11T21:41:23.7792503Z import random 2023-01-11T21:41:23.7792805Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7793157Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7793376Z 2023-01-11T21:41:23.7793491Z aten = torch.ops.aten 2023-01-11T21:41:23.7793924Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7794272Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7794463Z 2023-01-11T21:41:23.7794469Z 2023-01-11T21:41:23.7794680Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7795155Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7795641Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7795972Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.7796298Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7796617Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.7797032Z float* __restrict__ out_ptr2, 2023-01-11T21:41:23.7797337Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.7797630Z float* __restrict__ out_ptr4) 2023-01-11T21:41:23.7797867Z { 2023-01-11T21:41:23.7798135Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7798408Z { 2023-01-11T21:41:23.7798616Z #pragma omp for 2023-01-11T21:41:23.7798879Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.7799112Z { 
2023-01-11T21:41:23.7799435Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.7799849Z auto tmp1 = at::vec::Vectorized(in_ptr1[0]); 2023-01-11T21:41:23.7800155Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.7800429Z auto tmp3 = tmp2.floor(); 2023-01-11T21:41:23.7800689Z auto tmp4 = tmp2.trunc(); 2023-01-11T21:41:23.7800967Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.7801341Z tmp3.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.7801628Z tmp4.store(out_ptr2 + 8*i0); 2023-01-11T21:41:23.7801928Z tmp2.store(out_ptr3 + 8*i0); 2023-01-11T21:41:23.7802224Z tmp3.store(out_ptr4 + 8*i0); 2023-01-11T21:41:23.7802450Z } 2023-01-11T21:41:23.7802710Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7803007Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:23.7803246Z { 2023-01-11T21:41:23.7803491Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7803782Z auto tmp1 = in_ptr1[0]; 2023-01-11T21:41:23.7804045Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.7804351Z auto tmp3 = std::floor(tmp2); 2023-01-11T21:41:23.7804652Z auto tmp4 = std::trunc(tmp2); 2023-01-11T21:41:23.7804921Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.7805185Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.7805446Z out_ptr2[i0] = tmp4; 2023-01-11T21:41:23.7805723Z out_ptr3[i0] = tmp2; 2023-01-11T21:41:23.7805988Z out_ptr4[i0] = tmp3; 2023-01-11T21:41:23.7806211Z } 2023-01-11T21:41:23.7806403Z } 2023-01-11T21:41:23.7806610Z } 2023-01-11T21:41:23.7806869Z ''') 2023-01-11T21:41:23.7806996Z 2023-01-11T21:41:23.7807002Z 2023-01-11T21:41:23.7807128Z async_compile.wait(globals()) 2023-01-11T21:41:23.7807378Z del async_compile 2023-01-11T21:41:23.7807539Z 2023-01-11T21:41:23.7807631Z def call(args): 2023-01-11T21:41:23.7807880Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.7808104Z args.clear() 2023-01-11T21:41:23.7808553Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7809075Z buf1 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7809566Z buf2 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7810059Z buf3 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7810534Z buf4 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7811119Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:23.7811584Z del arg0_1 2023-01-11T21:41:23.7811809Z del arg1_1 2023-01-11T21:41:23.7812173Z return (buf0, buf1, buf2, buf3, buf4, ) 2023-01-11T21:41:23.7812446Z 2023-01-11T21:41:23.7812454Z 2023-01-11T21:41:23.7812592Z if __name__ == "__main__": 2023-01-11T21:41:23.7813004Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7813540Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7814218Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7814890Z arg1_1 = rand_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7815547Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7815838Z 2023-01-11T21:41:23.7815846Z 2023-01-11T21:41:23.7816040Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7816425Z import torch 2023-01-11T21:41:23.7816782Z import random 2023-01-11T21:41:23.7817212Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7817736Z from 
torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7818048Z 2023-01-11T21:41:23.7818208Z aten = torch.ops.aten 2023-01-11T21:41:23.7818699Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7819195Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7819456Z 2023-01-11T21:41:23.7819462Z 2023-01-11T21:41:23.7819744Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7820348Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7820985Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7821530Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.7821968Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7822549Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.7822967Z float* __restrict__ out_ptr2, 2023-01-11T21:41:23.7823395Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.7823821Z float* __restrict__ out_ptr4) 2023-01-11T21:41:23.7824162Z { 2023-01-11T21:41:23.7824534Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7824924Z { 2023-01-11T21:41:23.7825230Z #pragma omp for 2023-01-11T21:41:23.7825611Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.7825967Z { 2023-01-11T21:41:23.7826401Z auto tmp0 = at::vec::Vectorized(in_ptr0[0]); 2023-01-11T21:41:23.7826925Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.7827403Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.7827812Z auto tmp3 = tmp2.floor(); 2023-01-11T21:41:23.7828201Z auto tmp4 = tmp2.trunc(); 2023-01-11T21:41:23.7828617Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.7829037Z tmp3.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.7829436Z tmp4.store(out_ptr2 + 8*i0); 2023-01-11T21:41:23.7829853Z tmp2.store(out_ptr3 + 8*i0); 2023-01-11T21:41:23.7830269Z tmp3.store(out_ptr4 + 8*i0); 2023-01-11T21:41:23.7830620Z } 2023-01-11T21:41:23.7831002Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.7831419Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:23.7831760Z { 2023-01-11T21:41:23.7832112Z auto tmp0 = in_ptr0[0]; 2023-01-11T21:41:23.7832512Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.7832877Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.7833298Z auto tmp3 = std::floor(tmp2); 2023-01-11T21:41:23.7833805Z auto tmp4 = std::trunc(tmp2); 2023-01-11T21:41:23.7834223Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.7834586Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.7834964Z out_ptr2[i0] = tmp4; 2023-01-11T21:41:23.7835309Z out_ptr3[i0] = tmp2; 2023-01-11T21:41:23.7835668Z out_ptr4[i0] = tmp3; 2023-01-11T21:41:23.7836008Z } 2023-01-11T21:41:23.7836308Z } 2023-01-11T21:41:23.7836584Z } 2023-01-11T21:41:23.7836929Z ''') 2023-01-11T21:41:23.7837115Z 2023-01-11T21:41:23.7837122Z 2023-01-11T21:41:23.7837293Z async_compile.wait(globals()) 2023-01-11T21:41:23.7837638Z del async_compile 2023-01-11T21:41:23.7837853Z 2023-01-11T21:41:23.7837989Z def call(args): 2023-01-11T21:41:23.7838319Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.7838642Z args.clear() 2023-01-11T21:41:23.7839202Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7839814Z buf1 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7840610Z buf2 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7841257Z buf3 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7841922Z buf4 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7842690Z 
kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:23.7843341Z del arg0_1 2023-01-11T21:41:23.7843660Z del arg1_1 2023-01-11T21:41:23.7844042Z return (buf0, buf1, buf2, buf3, buf4, ) 2023-01-11T21:41:23.7844281Z 2023-01-11T21:41:23.7844288Z 2023-01-11T21:41:23.7844426Z if __name__ == "__main__": 2023-01-11T21:41:23.7844866Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7845390Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7846139Z arg0_1 = rand_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7846745Z arg1_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7847196Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7847437Z 2023-01-11T21:41:23.7847955Z [2023-01-11 21:33:33,212] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 186 2023-01-11T21:41:23.7849221Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.7850383Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.7851194Z [2023-01-11 21:33:33,266] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 187 2023-01-11T21:41:23.7852117Z [2023-01-11 21:33:34,851] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 187 2023-01-11T21:41:23.7852520Z 2023-01-11T21:41:23.7852706Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7853068Z import torch 2023-01-11T21:41:23.7853411Z import random 2023-01-11T21:41:23.7853841Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7854360Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7854675Z 2023-01-11T21:41:23.7854834Z aten = torch.ops.aten 2023-01-11T21:41:23.7855321Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7855779Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7856031Z 2023-01-11T21:41:23.7856038Z 2023-01-11T21:41:23.7856315Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7856957Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7857624Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.7858082Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:23.7858519Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7858907Z long* __restrict__ out_ptr1, 2023-01-11T21:41:23.7859302Z long* __restrict__ out_ptr2, 2023-01-11T21:41:23.7859728Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.7860136Z long* __restrict__ out_ptr4) 2023-01-11T21:41:23.7860469Z { 2023-01-11T21:41:23.7860836Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7861189Z { 2023-01-11T21:41:23.7861443Z #pragma omp for 2023-01-11T21:41:23.7861776Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:23.7862105Z { 2023-01-11T21:41:23.7862523Z { 2023-01-11T21:41:23.7862837Z { 2023-01-11T21:41:23.7863167Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.7863686Z auto tmp2 = in_ptr1[0]; 
2023-01-11T21:41:23.7864077Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.7864551Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:23.7864953Z auto tmp4 = tmp1 / tmp3; 2023-01-11T21:41:23.7865573Z auto tmp5 = ((tmp0 < 0) != (tmp2 < 0) ? (tmp0 % tmp2 != 0 ? tmp0 / tmp2 - 1 : tmp0 / tmp2) : tmp0 / tmp2); 2023-01-11T21:41:23.7865966Z auto tmp6 = tmp0 / tmp2; 2023-01-11T21:41:23.7866309Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:23.7866673Z out_ptr1[i0] = tmp5; 2023-01-11T21:41:23.7867048Z out_ptr2[i0] = tmp6; 2023-01-11T21:41:23.7867408Z out_ptr3[i0] = tmp4; 2023-01-11T21:41:23.7867745Z out_ptr4[i0] = tmp5; 2023-01-11T21:41:23.7868072Z } 2023-01-11T21:41:23.7868374Z } 2023-01-11T21:41:23.7868664Z } 2023-01-11T21:41:23.7869079Z } 2023-01-11T21:41:23.7869387Z } 2023-01-11T21:41:23.7869687Z ''') 2023-01-11T21:41:23.7869865Z 2023-01-11T21:41:23.7869873Z 2023-01-11T21:41:23.7870042Z async_compile.wait(globals()) 2023-01-11T21:41:23.7870390Z del async_compile 2023-01-11T21:41:23.7870584Z 2023-01-11T21:41:23.7870717Z def call(args): 2023-01-11T21:41:23.7871029Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.7871355Z args.clear() 2023-01-11T21:41:23.7871889Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7872506Z buf1 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7873119Z buf2 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7873818Z buf3 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7874346Z buf4 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7875117Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:23.7875779Z del arg0_1 2023-01-11T21:41:23.7876077Z del arg1_1 2023-01-11T21:41:23.7876370Z return (buf0, buf1, buf2, buf3, buf4, ) 2023-01-11T21:41:23.7876621Z 2023-01-11T21:41:23.7876627Z 2023-01-11T21:41:23.7876728Z if __name__ == "__main__": 2023-01-11T21:41:23.7877018Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7877348Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7877869Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7878416Z arg1_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7878787Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7879044Z 2023-01-11T21:41:23.7879051Z 2023-01-11T21:41:23.7879222Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7879581Z import torch 2023-01-11T21:41:23.7879881Z import random 2023-01-11T21:41:23.7880254Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7880782Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7881047Z 2023-01-11T21:41:23.7881153Z aten = torch.ops.aten 2023-01-11T21:41:23.7881568Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7882066Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7882309Z 2023-01-11T21:41:23.7882317Z 2023-01-11T21:41:23.7882509Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7883109Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7883741Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.7884139Z const long* __restrict__ in_ptr1, 
2023-01-11T21:41:23.7884523Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7884886Z long* __restrict__ out_ptr1, 2023-01-11T21:41:23.7885370Z long* __restrict__ out_ptr2, 2023-01-11T21:41:23.7885767Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.7886158Z long* __restrict__ out_ptr4) 2023-01-11T21:41:23.7886508Z { 2023-01-11T21:41:23.7886839Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7887159Z { 2023-01-11T21:41:23.7887466Z #pragma omp for 2023-01-11T21:41:23.7887743Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:23.7888049Z { 2023-01-11T21:41:23.7888330Z { 2023-01-11T21:41:23.7888622Z { 2023-01-11T21:41:23.7888941Z auto tmp0 = in_ptr0[0]; 2023-01-11T21:41:23.7889300Z auto tmp2 = in_ptr1[i0]; 2023-01-11T21:41:23.7889731Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.7890155Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:23.7890642Z auto tmp4 = tmp1 / tmp3; 2023-01-11T21:41:23.7891344Z auto tmp5 = ((tmp0 < 0) != (tmp2 < 0) ? (tmp0 % tmp2 != 0 ? tmp0 / tmp2 - 1 : tmp0 / tmp2) : tmp0 / tmp2); 2023-01-11T21:41:23.7891786Z auto tmp6 = tmp0 / tmp2; 2023-01-11T21:41:23.7892127Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:23.7892452Z out_ptr1[i0] = tmp5; 2023-01-11T21:41:23.7892777Z out_ptr2[i0] = tmp6; 2023-01-11T21:41:23.7893080Z out_ptr3[i0] = tmp4; 2023-01-11T21:41:23.7893424Z out_ptr4[i0] = tmp5; 2023-01-11T21:41:23.7893739Z } 2023-01-11T21:41:23.7894021Z } 2023-01-11T21:41:23.7894311Z } 2023-01-11T21:41:23.7894574Z } 2023-01-11T21:41:23.7894841Z } 2023-01-11T21:41:23.7895171Z ''') 2023-01-11T21:41:23.7895365Z 2023-01-11T21:41:23.7895372Z 2023-01-11T21:41:23.7895555Z async_compile.wait(globals()) 2023-01-11T21:41:23.7895920Z del async_compile 2023-01-11T21:41:23.7896141Z 2023-01-11T21:41:23.7896284Z def call(args): 2023-01-11T21:41:23.7896630Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.7896960Z args.clear() 2023-01-11T21:41:23.7897513Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7898117Z buf1 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7898691Z buf2 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7899238Z buf3 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7899783Z buf4 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7900498Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:23.7901092Z del arg0_1 2023-01-11T21:41:23.7901420Z del arg1_1 2023-01-11T21:41:23.7901754Z return (buf0, buf1, buf2, buf3, buf4, ) 2023-01-11T21:41:23.7901969Z 2023-01-11T21:41:23.7901975Z 2023-01-11T21:41:23.7902123Z if __name__ == "__main__": 2023-01-11T21:41:23.7902805Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7903304Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7903939Z arg0_1 = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7904486Z arg1_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7904970Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.7905233Z 2023-01-11T21:41:23.7905353Z ok (6.760s) 2023-01-11T21:41:23.7906128Z test_dropout_cpu (__main__.CpuTests) ... 
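Throughout the div kernels above, integer floor division is emitted as the guarded expression ((tmp0 < 0) != (tmp2 < 0) ? (tmp0 % tmp2 != 0 ? tmp0 / tmp2 - 1 : tmp0 / tmp2) : tmp0 / tmp2): C++ integer division truncates toward zero, so the quotient is decremented by one when the operands have opposite signs and a nonzero remainder exists, which yields round-toward-negative-infinity semantics. A short Python check of that equivalence (helper names are illustrative, not from the log):

    def trunc_div(a: int, b: int) -> int:
        # C-style integer division: rounds toward zero, like tmp0 / tmp2 in the kernel.
        q = abs(a) // abs(b)
        return q if (a < 0) == (b < 0) else -q

    def guarded_floor_div(a: int, b: int) -> int:
        # Mirrors the kernel's guard: adjust the truncating quotient by one when the
        # signs differ and the remainder is nonzero.
        q = trunc_div(a, b)
        if (a < 0) != (b < 0) and a % b != 0:
            q -= 1
        return q

    for a in range(-9, 10):
        for b in [n for n in range(-9, 10) if n != 0]:
            assert guarded_floor_div(a, b) == a // b  # Python's // already floors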
[2023-01-11 21:33:34,926] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 188 2023-01-11T21:41:23.7907032Z [2023-01-11 21:33:34,927] torch._inductor.lowering: [WARNING] using triton random, expect difference from eager 2023-01-11T21:41:23.7908051Z [2023-01-11 21:33:36,483] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 188 2023-01-11T21:41:23.7908921Z [2023-01-11 21:33:36,549] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 189 2023-01-11T21:41:23.7909762Z [2023-01-11 21:33:36,550] torch._inductor.lowering: [WARNING] using triton random, expect difference from eager 2023-01-11T21:41:23.7910640Z [2023-01-11 21:33:36,558] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 189 2023-01-11T21:41:23.7911029Z 2023-01-11T21:41:23.7911215Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7911609Z import torch 2023-01-11T21:41:23.7911946Z import random 2023-01-11T21:41:23.7912382Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7912909Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7913214Z 2023-01-11T21:41:23.7913358Z aten = torch.ops.aten 2023-01-11T21:41:23.7914059Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7914568Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7915100Z seed_cpu_None = None # 9130db9322feaa41c28986790b86d7dd047e77339ff46fce775dbaa5929b26ce 2023-01-11T21:41:23.7915453Z 2023-01-11T21:41:23.7915460Z 2023-01-11T21:41:23.7915744Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7916383Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7917037Z extern "C" void kernel(const long* __restrict__ seed0, 2023-01-11T21:41:23.7917504Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.7917946Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7918315Z { 2023-01-11T21:41:23.7918670Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7919051Z { 2023-01-11T21:41:23.7919388Z #pragma omp for 2023-01-11T21:41:23.7919752Z for(long i0=0; i0<1000; i0+=1) 2023-01-11T21:41:23.7920103Z { 2023-01-11T21:41:23.7920425Z { 2023-01-11T21:41:23.7920730Z { 2023-01-11T21:41:23.7921091Z auto tmp0 = seed0[0]; 2023-01-11T21:41:23.7921492Z auto tmp6 = in_ptr1[i0]; 2023-01-11T21:41:23.7921906Z auto tmp1 = static_cast(i0); 2023-01-11T21:41:23.7922429Z auto tmp2 = static_cast(normalized_rand_cpu(tmp0, tmp1));; 2023-01-11T21:41:23.7922951Z auto tmp3 = static_cast(0.5); 2023-01-11T21:41:23.7923384Z auto tmp4 = tmp2 > tmp3; 2023-01-11T21:41:23.7923805Z auto tmp5 = static_cast(tmp4); 2023-01-11T21:41:23.7924237Z auto tmp7 = tmp5 * tmp6; 2023-01-11T21:41:23.7924667Z auto tmp8 = static_cast(2.0); 2023-01-11T21:41:23.7925080Z auto tmp9 = tmp7 * tmp8; 2023-01-11T21:41:23.7925475Z out_ptr0[i0] = tmp9; 2023-01-11T21:41:23.7925830Z } 2023-01-11T21:41:23.7926144Z } 2023-01-11T21:41:23.7926464Z } 2023-01-11T21:41:23.7926773Z } 2023-01-11T21:41:23.7927063Z } 2023-01-11T21:41:23.7927412Z ''') 2023-01-11T21:41:23.7927606Z 2023-01-11T21:41:23.7927614Z 2023-01-11T21:41:23.7927799Z async_compile.wait(globals()) 2023-01-11T21:41:23.7928171Z del async_compile 2023-01-11T21:41:23.7928390Z 2023-01-11T21:41:23.7928533Z def call(args): 2023-01-11T21:41:23.7928880Z arg0_1, = args 2023-01-11T21:41:23.7929207Z args.clear() 2023-01-11T21:41:23.7929668Z torch.randint(2**31, size=(), dtype=torch.int64, out=seed_cpu_None) 
2023-01-11T21:41:23.7930349Z buf0 = empty_strided((1000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7930982Z kernel_cpp_0(c_void_p(seed_cpu_None.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.7931473Z del arg0_1 2023-01-11T21:41:23.7931815Z return (buf0, ) 2023-01-11T21:41:23.7932032Z 2023-01-11T21:41:23.7932124Z 2023-01-11T21:41:23.7932284Z if __name__ == "__main__": 2023-01-11T21:41:23.7932702Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7933213Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7933845Z seed_cpu_None = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7934476Z arg0_1 = rand_strided((1000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7934938Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.7935194Z 2023-01-11T21:41:23.7935200Z 2023-01-11T21:41:23.7935374Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7935735Z import torch 2023-01-11T21:41:23.7936045Z import random 2023-01-11T21:41:23.7936447Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7936935Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7937217Z 2023-01-11T21:41:23.7937347Z aten = torch.ops.aten 2023-01-11T21:41:23.7937798Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7938337Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7938838Z seed_cpu_None = None # 9130db9322feaa41c28986790b86d7dd047e77339ff46fce775dbaa5929b26ce 2023-01-11T21:41:23.7939169Z 2023-01-11T21:41:23.7939175Z 2023-01-11T21:41:23.7939428Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7940030Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7940643Z extern "C" void kernel(const long* __restrict__ seed0, 2023-01-11T21:41:23.7941078Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.7941492Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7941836Z { 2023-01-11T21:41:23.7942168Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7942666Z { 2023-01-11T21:41:23.7942980Z #pragma omp for 2023-01-11T21:41:23.7943321Z for(long i0=0; i0<1000; i0+=1) 2023-01-11T21:41:23.7943651Z { 2023-01-11T21:41:23.7943954Z { 2023-01-11T21:41:23.7944242Z { 2023-01-11T21:41:23.7944578Z auto tmp0 = seed0[0]; 2023-01-11T21:41:23.7944956Z auto tmp6 = in_ptr1[i0]; 2023-01-11T21:41:23.7945343Z auto tmp1 = static_cast(i0); 2023-01-11T21:41:23.7945825Z auto tmp2 = static_cast(normalized_rand_cpu(tmp0, tmp1));; 2023-01-11T21:41:23.7946305Z auto tmp3 = static_cast(0.5); 2023-01-11T21:41:23.7946704Z auto tmp4 = tmp2 > tmp3; 2023-01-11T21:41:23.7947099Z auto tmp5 = static_cast(tmp4); 2023-01-11T21:41:23.7947501Z auto tmp7 = tmp5 * tmp6; 2023-01-11T21:41:23.7947904Z auto tmp8 = static_cast(2.0); 2023-01-11T21:41:23.7948288Z auto tmp9 = tmp7 * tmp8; 2023-01-11T21:41:23.7948654Z out_ptr0[i0] = tmp9; 2023-01-11T21:41:23.7948981Z } 2023-01-11T21:41:23.7949277Z } 2023-01-11T21:41:23.7949571Z } 2023-01-11T21:41:23.7949860Z } 2023-01-11T21:41:23.7950129Z } 2023-01-11T21:41:23.7950458Z ''') 2023-01-11T21:41:23.7950636Z 2023-01-11T21:41:23.7950643Z 2023-01-11T21:41:23.7950813Z async_compile.wait(globals()) 2023-01-11T21:41:23.7951163Z del async_compile 2023-01-11T21:41:23.7951366Z 2023-01-11T21:41:23.7951500Z def call(args): 2023-01-11T21:41:23.7951815Z arg0_1, = args 2023-01-11T21:41:23.7952131Z args.clear() 2023-01-11T21:41:23.7952541Z 
torch.randint(2**31, size=(), dtype=torch.int64, out=seed_cpu_None) 2023-01-11T21:41:23.7953180Z buf0 = empty_strided((1000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7953845Z kernel_cpp_0(c_void_p(seed_cpu_None.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.7954306Z del arg0_1 2023-01-11T21:41:23.7954621Z return (buf0, ) 2023-01-11T21:41:23.7954823Z 2023-01-11T21:41:23.7954830Z 2023-01-11T21:41:23.7955096Z if __name__ == "__main__": 2023-01-11T21:41:23.7955491Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7955970Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7956598Z seed_cpu_None = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7957234Z arg0_1 = rand_strided((1000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7957692Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.7957949Z 2023-01-11T21:41:23.7958077Z ok (1.706s) 2023-01-11T21:41:23.7958877Z test_dropout_deterministic_cpu (__main__.CpuTests) ... [2023-01-11 21:33:36,627] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 190 2023-01-11T21:41:23.7959759Z [2023-01-11 21:33:36,628] torch._inductor.lowering: [WARNING] using triton random, expect difference from eager 2023-01-11T21:41:23.7960575Z [2023-01-11 21:33:38,199] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 190 2023-01-11T21:41:23.7961477Z [2023-01-11 21:33:38,265] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 191 2023-01-11T21:41:23.7962282Z [2023-01-11 21:33:38,265] torch._inductor.lowering: [WARNING] using triton random, expect difference from eager 2023-01-11T21:41:23.7963084Z [2023-01-11 21:33:38,274] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 191 2023-01-11T21:41:23.7963453Z 2023-01-11T21:41:23.7963629Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7963992Z import torch 2023-01-11T21:41:23.7964302Z import random 2023-01-11T21:41:23.7964707Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7965196Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7965477Z 2023-01-11T21:41:23.7965624Z aten = torch.ops.aten 2023-01-11T21:41:23.7966055Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7966511Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7967022Z seed_cpu_None = None # 9130db9322feaa41c28986790b86d7dd047e77339ff46fce775dbaa5929b26ce 2023-01-11T21:41:23.7967356Z 2023-01-11T21:41:23.7967363Z 2023-01-11T21:41:23.7967596Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7968198Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7968807Z extern "C" void kernel(const long* __restrict__ seed0, 2023-01-11T21:41:23.7968998Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.7969176Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7969279Z { 2023-01-11T21:41:23.7969459Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7969578Z { 2023-01-11T21:41:23.7969722Z #pragma omp for 2023-01-11T21:41:23.7969873Z for(long i0=0; i0<1024; i0+=1) 2023-01-11T21:41:23.7969994Z { 2023-01-11T21:41:23.7970112Z { 2023-01-11T21:41:23.7970216Z { 2023-01-11T21:41:23.7970380Z auto tmp0 = seed0[0]; 2023-01-11T21:41:23.7970545Z auto tmp6 = in_ptr1[i0]; 2023-01-11T21:41:23.7970725Z auto tmp1 = static_cast(i0); 
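                        // deterministic-dropout variant with p = 0.55: keep when the
                        // per-element uniform draw exceeds 0.55, then rescale by
                        // 1/(1 - 0.55) = 2.2222222222222223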
2023-01-11T21:41:23.7970972Z auto tmp2 = static_cast(normalized_rand_cpu(tmp0, tmp1));; 2023-01-11T21:41:23.7971162Z auto tmp3 = static_cast(0.55); 2023-01-11T21:41:23.7971325Z auto tmp4 = tmp2 > tmp3; 2023-01-11T21:41:23.7971500Z auto tmp5 = static_cast(tmp4); 2023-01-11T21:41:23.7971663Z auto tmp7 = tmp5 * tmp6; 2023-01-11T21:41:23.7971861Z auto tmp8 = static_cast(2.2222222222222223); 2023-01-11T21:41:23.7972021Z auto tmp9 = tmp7 * tmp8; 2023-01-11T21:41:23.7972173Z out_ptr0[i0] = tmp9; 2023-01-11T21:41:23.7972295Z } 2023-01-11T21:41:23.7972414Z } 2023-01-11T21:41:23.7972521Z } 2023-01-11T21:41:23.7972720Z } 2023-01-11T21:41:23.7972839Z } 2023-01-11T21:41:23.7972992Z ''') 2023-01-11T21:41:23.7973000Z 2023-01-11T21:41:23.7973006Z 2023-01-11T21:41:23.7973175Z async_compile.wait(globals()) 2023-01-11T21:41:23.7973314Z del async_compile 2023-01-11T21:41:23.7973321Z 2023-01-11T21:41:23.7973453Z def call(args): 2023-01-11T21:41:23.7973568Z arg0_1, = args 2023-01-11T21:41:23.7973699Z args.clear() 2023-01-11T21:41:23.7973931Z torch.randint(2**31, size=(), dtype=torch.int64, out=seed_cpu_None) 2023-01-11T21:41:23.7974272Z buf0 = empty_strided((1024, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7974529Z kernel_cpp_0(c_void_p(seed_cpu_None.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.7974625Z del arg0_1 2023-01-11T21:41:23.7974723Z return (buf0, ) 2023-01-11T21:41:23.7974729Z 2023-01-11T21:41:23.7974734Z 2023-01-11T21:41:23.7974840Z if __name__ == "__main__": 2023-01-11T21:41:23.7975036Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7975213Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7975485Z seed_cpu_None = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7975759Z arg0_1 = rand_strided((1024, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7975905Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.7975912Z 2023-01-11T21:41:23.7975917Z 2023-01-11T21:41:23.7976044Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7976138Z import torch 2023-01-11T21:41:23.7976242Z import random 2023-01-11T21:41:23.7976385Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7976562Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7976568Z 2023-01-11T21:41:23.7976673Z aten = torch.ops.aten 2023-01-11T21:41:23.7976862Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7977002Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7977247Z seed_cpu_None = None # 9130db9322feaa41c28986790b86d7dd047e77339ff46fce775dbaa5929b26ce 2023-01-11T21:41:23.7977254Z 2023-01-11T21:41:23.7977259Z 2023-01-11T21:41:23.7977444Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7977745Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7977888Z extern "C" void kernel(const long* __restrict__ seed0, 2023-01-11T21:41:23.7978029Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.7978160Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.7978238Z { 2023-01-11T21:41:23.7978369Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7978454Z { 2023-01-11T21:41:23.7978561Z #pragma omp for 2023-01-11T21:41:23.7978660Z for(long i0=0; i0<1024; i0+=1) 2023-01-11T21:41:23.7978738Z { 2023-01-11T21:41:23.7978820Z { 2023-01-11T21:41:23.7978920Z { 2023-01-11T21:41:23.7979045Z auto tmp0 = seed0[0]; 2023-01-11T21:41:23.7979171Z auto 
tmp6 = in_ptr1[i0]; 2023-01-11T21:41:23.7979291Z auto tmp1 = static_cast(i0); 2023-01-11T21:41:23.7979478Z auto tmp2 = static_cast(normalized_rand_cpu(tmp0, tmp1));; 2023-01-11T21:41:23.7979621Z auto tmp3 = static_cast(0.55); 2023-01-11T21:41:23.7979743Z auto tmp4 = tmp2 > tmp3; 2023-01-11T21:41:23.7979884Z auto tmp5 = static_cast(tmp4); 2023-01-11T21:41:23.7980004Z auto tmp7 = tmp5 * tmp6; 2023-01-11T21:41:23.7980153Z auto tmp8 = static_cast(2.2222222222222223); 2023-01-11T21:41:23.7980277Z auto tmp9 = tmp7 * tmp8; 2023-01-11T21:41:23.7980374Z out_ptr0[i0] = tmp9; 2023-01-11T21:41:23.7980456Z } 2023-01-11T21:41:23.7980539Z } 2023-01-11T21:41:23.7980637Z } 2023-01-11T21:41:23.7980789Z } 2023-01-11T21:41:23.7980870Z } 2023-01-11T21:41:23.7980974Z ''') 2023-01-11T21:41:23.7980998Z 2023-01-11T21:41:23.7981003Z 2023-01-11T21:41:23.7981121Z async_compile.wait(globals()) 2023-01-11T21:41:23.7981245Z del async_compile 2023-01-11T21:41:23.7981252Z 2023-01-11T21:41:23.7981367Z def call(args): 2023-01-11T21:41:23.7981470Z arg0_1, = args 2023-01-11T21:41:23.7981578Z args.clear() 2023-01-11T21:41:23.7981775Z torch.randint(2**31, size=(), dtype=torch.int64, out=seed_cpu_None) 2023-01-11T21:41:23.7982069Z buf0 = empty_strided((1024, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7982291Z kernel_cpp_0(c_void_p(seed_cpu_None.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.7982515Z del arg0_1 2023-01-11T21:41:23.7982615Z return (buf0, ) 2023-01-11T21:41:23.7982622Z 2023-01-11T21:41:23.7982627Z 2023-01-11T21:41:23.7982732Z if __name__ == "__main__": 2023-01-11T21:41:23.7982980Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7983152Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7983428Z seed_cpu_None = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.7983693Z arg0_1 = rand_strided((1024, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7983819Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.7983842Z 2023-01-11T21:41:23.7983912Z ok (1.716s) 2023-01-11T21:41:23.7984410Z test_dtype_mismatch_issue_cpu (__main__.CpuTests) ... 
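Apart from the boundary handling for the padded last dimension, the kernel generated for this test computes a numerically stable softmax over the innermost axis: a per-row max, exp(x - max) with a running sum, then a vectorized divide by that sum. A minimal sketch of just the math (the padding logic is omitted; the name is illustrative):

import torch

def stable_softmax(x: torch.Tensor) -> torch.Tensor:
    # subtract the per-row max before exponentiating to avoid overflow,
    # then normalize by the per-row sum, mirroring the kernel's three loops
    m = x.max(dim=-1, keepdim=True).values
    e = (x - m).exp()
    return e / e.sum(dim=-1, keepdim=True)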
[2023-01-11 21:33:38,300] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 192 2023-01-11T21:41:23.7984791Z [2023-01-11 21:33:39,918] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 192 2023-01-11T21:41:23.7984799Z 2023-01-11T21:41:23.7984930Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.7985030Z import torch 2023-01-11T21:41:23.7985123Z import random 2023-01-11T21:41:23.7985298Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.7985454Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.7985476Z 2023-01-11T21:41:23.7985570Z aten = torch.ops.aten 2023-01-11T21:41:23.7985765Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.7985903Z async_compile = AsyncCompile() 2023-01-11T21:41:23.7985910Z 2023-01-11T21:41:23.7985916Z 2023-01-11T21:41:23.7986104Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.7986396Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.7986557Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.7986696Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.7986811Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.7986943Z float* __restrict__ out_ptr2) 2023-01-11T21:41:23.7987029Z { 2023-01-11T21:41:23.7987150Z auto out_ptr1 = in_out_ptr0; 2023-01-11T21:41:23.7987293Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.7987378Z { 2023-01-11T21:41:23.7987486Z #pragma omp for 2023-01-11T21:41:23.7987583Z for(long i0=0; i0<4096; i0+=1) 2023-01-11T21:41:23.7987666Z { 2023-01-11T21:41:23.7987751Z { 2023-01-11T21:41:23.7987836Z { 2023-01-11T21:41:23.7988194Z float tmp5 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.7988331Z for(long i1=0; i1<64; i1+=1) 2023-01-11T21:41:23.7988423Z { 2023-01-11T21:41:23.7988504Z { 2023-01-11T21:41:23.7988661Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:23.7988806Z auto tmp1 = static_cast(63); 2023-01-11T21:41:23.7988929Z auto tmp2 = tmp0 < tmp1; 2023-01-11T21:41:23.7989041Z float tmp3 = 0.0; 2023-01-11T21:41:23.7989242Z if(tmp2) 2023-01-11T21:41:23.7989332Z { 2023-01-11T21:41:23.7989448Z auto tmp4 = in_ptr0[i1 + (63*i0)]; 2023-01-11T21:41:23.7989565Z tmp3 = tmp4; 2023-01-11T21:41:23.7989656Z } 2023-01-11T21:41:23.7989795Z tmp5 = std::max(tmp5, tmp3); 2023-01-11T21:41:23.7989888Z } 2023-01-11T21:41:23.7989977Z } 2023-01-11T21:41:23.7990093Z out_ptr0[i0] = tmp5; 2023-01-11T21:41:23.7990163Z } 2023-01-11T21:41:23.7990245Z } 2023-01-11T21:41:23.7990327Z } 2023-01-11T21:41:23.7990429Z #pragma omp for 2023-01-11T21:41:23.7990547Z for(long i0=0; i0<4096; i0+=1) 2023-01-11T21:41:23.7990628Z { 2023-01-11T21:41:23.7990710Z { 2023-01-11T21:41:23.7990781Z { 2023-01-11T21:41:23.7990891Z float tmp8 = 0; 2023-01-11T21:41:23.7991065Z for(long i1=0; i1<64; i1+=1) 2023-01-11T21:41:23.7991153Z { 2023-01-11T21:41:23.7991242Z { 2023-01-11T21:41:23.7991368Z auto tmp5 = out_ptr0[i0]; 2023-01-11T21:41:23.7991510Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:23.7991641Z auto tmp1 = static_cast(63); 2023-01-11T21:41:23.7991770Z auto tmp2 = tmp0 < tmp1; 2023-01-11T21:41:23.7991889Z float tmp3 = 0.0; 2023-01-11T21:41:23.7991992Z if(tmp2) 2023-01-11T21:41:23.7992087Z { 2023-01-11T21:41:23.7992233Z auto tmp4 = in_ptr0[i1 + (63*i0)]; 2023-01-11T21:41:23.7992345Z tmp3 = tmp4; 2023-01-11T21:41:23.7992426Z } 2023-01-11T21:41:23.7992648Z auto tmp6 = tmp3 - tmp5; 2023-01-11T21:41:23.7992795Z auto tmp7 = 
std::exp(tmp6); 2023-01-11T21:41:23.7992930Z out_ptr1[i1 + (64*i0)] = tmp7; 2023-01-11T21:41:23.7993045Z tmp8 += tmp7; 2023-01-11T21:41:23.7993147Z } 2023-01-11T21:41:23.7993243Z } 2023-01-11T21:41:23.7993345Z out_ptr2[i0] = tmp8; 2023-01-11T21:41:23.7993430Z } 2023-01-11T21:41:23.7993514Z } 2023-01-11T21:41:23.7993597Z } 2023-01-11T21:41:23.7993699Z #pragma omp for 2023-01-11T21:41:23.7993883Z for(long i0=0; i0<4096; i0+=1) 2023-01-11T21:41:23.7993975Z { 2023-01-11T21:41:23.7994070Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:23.7994164Z { 2023-01-11T21:41:23.7994368Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr1 + (8*i1) + (64*i0)); 2023-01-11T21:41:23.7994543Z auto tmp1 = at::vec::Vectorized(out_ptr2[i0]); 2023-01-11T21:41:23.7994670Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.7994816Z tmp2.store(in_out_ptr0 + (8*i1) + (64*i0)); 2023-01-11T21:41:23.7994903Z } 2023-01-11T21:41:23.7995016Z #pragma omp simd simdlen(4) 2023-01-11T21:41:23.7995138Z for(long i1=64; i1<64; i1+=1) 2023-01-11T21:41:23.7995222Z { 2023-01-11T21:41:23.7995352Z auto tmp0 = out_ptr1[i1 + (64*i0)]; 2023-01-11T21:41:23.7995470Z auto tmp1 = out_ptr2[i0]; 2023-01-11T21:41:23.7995580Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.7995701Z in_out_ptr0[i1 + (64*i0)] = tmp2; 2023-01-11T21:41:23.7995769Z } 2023-01-11T21:41:23.7995851Z } 2023-01-11T21:41:23.7995931Z } 2023-01-11T21:41:23.7996011Z } 2023-01-11T21:41:23.7996133Z ''') 2023-01-11T21:41:23.7996142Z 2023-01-11T21:41:23.7996147Z 2023-01-11T21:41:23.7996268Z async_compile.wait(globals()) 2023-01-11T21:41:23.7996443Z del async_compile 2023-01-11T21:41:23.7996450Z 2023-01-11T21:41:23.7996525Z def call(args): 2023-01-11T21:41:23.7996615Z arg0_1, = args 2023-01-11T21:41:23.7996711Z args.clear() 2023-01-11T21:41:23.7997030Z buf0 = empty_strided((128, 32, 1), (32, 1, 4096), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7997316Z buf1 = empty_strided((128, 32, 64), (2048, 64, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7997605Z buf2 = empty_strided((128, 32, 1), (32, 1, 4096), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7997720Z buf3 = buf1; del buf1 # reuse 2023-01-11T21:41:23.7997984Z kernel_cpp_0(c_void_p(buf3.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.7998062Z del arg0_1 2023-01-11T21:41:23.7998161Z return (buf3, ) 2023-01-11T21:41:23.7998168Z 2023-01-11T21:41:23.7998174Z 2023-01-11T21:41:23.7998275Z if __name__ == "__main__": 2023-01-11T21:41:23.7998496Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.7998670Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.7998968Z arg0_1 = rand_strided((128, 32, 63), (2016, 63, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.7999109Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.7999116Z 2023-01-11T21:41:23.7999205Z ok (1.647s) 2023-01-11T21:41:23.7999869Z test_elu_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8000024Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8000403Z [2023-01-11 21:33:39,966] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 193 2023-01-11T21:41:23.8000793Z [2023-01-11 21:33:41,584] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 193 2023-01-11T21:41:23.8000800Z 2023-01-11T21:41:23.8000935Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8001036Z import torch 2023-01-11T21:41:23.8001132Z import random 2023-01-11T21:41:23.8001299Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8001476Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8001484Z 2023-01-11T21:41:23.8001584Z aten = torch.ops.aten 2023-01-11T21:41:23.8001773Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8001899Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8001905Z 2023-01-11T21:41:23.8001911Z 2023-01-11T21:41:23.8002093Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8002378Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8002541Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8002670Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8002801Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8002871Z { 2023-01-11T21:41:23.8003003Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8003085Z { 2023-01-11T21:41:23.8003186Z #pragma omp for 2023-01-11T21:41:23.8003295Z for(long i0=0; i0<256; i0+=1) 2023-01-11T21:41:23.8003381Z { 2023-01-11T21:41:23.8003463Z { 2023-01-11T21:41:23.8003533Z { 2023-01-11T21:41:23.8003657Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8003801Z auto tmp1 = static_cast(0); 2023-01-11T21:41:23.8003926Z auto tmp2 = tmp0 > tmp1; 2023-01-11T21:41:23.8004081Z auto tmp3 = static_cast(1.0507009873554805); 2023-01-11T21:41:23.8004204Z auto tmp4 = tmp0 * tmp3; 2023-01-11T21:41:23.8004423Z auto tmp5 = static_cast(1.0); 2023-01-11T21:41:23.8004527Z auto tmp6 = tmp0 * tmp5; 2023-01-11T21:41:23.8004669Z auto tmp7 = std::expm1(tmp6); 2023-01-11T21:41:23.8004820Z auto tmp8 = static_cast(1.7580993408473766); 2023-01-11T21:41:23.8004942Z auto tmp9 = tmp7 * tmp8; 2023-01-11T21:41:23.8005083Z auto tmp10 = tmp2 ? tmp4 : tmp9; 2023-01-11T21:41:23.8005225Z auto tmp11 = static_cast(2); 2023-01-11T21:41:23.8005353Z auto tmp12 = tmp10 + tmp11; 2023-01-11T21:41:23.8005478Z auto tmp13 = static_cast(1); 2023-01-11T21:41:23.8005599Z auto tmp14 = tmp0 + tmp13; 2023-01-11T21:41:23.8005726Z auto tmp15 = tmp14 > tmp1; 2023-01-11T21:41:23.8005864Z auto tmp16 = static_cast(3); 2023-01-11T21:41:23.8006084Z auto tmp17 = tmp14 * tmp16; 2023-01-11T21:41:23.8006233Z auto tmp18 = static_cast(4); 2023-01-11T21:41:23.8006362Z auto tmp19 = tmp14 * tmp18; 2023-01-11T21:41:23.8006503Z auto tmp20 = std::expm1(tmp19); 2023-01-11T21:41:23.8006626Z auto tmp21 = static_cast(6); 2023-01-11T21:41:23.8006751Z auto tmp22 = tmp20 * tmp21; 2023-01-11T21:41:23.8006883Z auto tmp23 = tmp15 ? 
tmp17 : tmp22; 2023-01-11T21:41:23.8006994Z out_ptr0[i0] = tmp12; 2023-01-11T21:41:23.8007110Z out_ptr1[i0] = tmp23; 2023-01-11T21:41:23.8007194Z } 2023-01-11T21:41:23.8007295Z } 2023-01-11T21:41:23.8007381Z } 2023-01-11T21:41:23.8007479Z } 2023-01-11T21:41:23.8007580Z } 2023-01-11T21:41:23.8007721Z ''') 2023-01-11T21:41:23.8007730Z 2023-01-11T21:41:23.8007736Z 2023-01-11T21:41:23.8007884Z async_compile.wait(globals()) 2023-01-11T21:41:23.8008012Z del async_compile 2023-01-11T21:41:23.8008019Z 2023-01-11T21:41:23.8008132Z def call(args): 2023-01-11T21:41:23.8008227Z arg0_1, = args 2023-01-11T21:41:23.8008342Z args.clear() 2023-01-11T21:41:23.8008666Z buf0 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8008977Z buf1 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8009239Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8009350Z del arg0_1 2023-01-11T21:41:23.8009475Z return (buf0, buf1, ) 2023-01-11T21:41:23.8009484Z 2023-01-11T21:41:23.8009490Z 2023-01-11T21:41:23.8009608Z if __name__ == "__main__": 2023-01-11T21:41:23.8009774Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8009971Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8010295Z arg0_1 = rand_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8010470Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8010477Z 2023-01-11T21:41:23.8010587Z ok (1.664s) 2023-01-11T21:41:23.8011398Z test_embedding_bag_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8011598Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8012032Z [2023-01-11 21:33:41,610] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 194 2023-01-11T21:41:23.8012400Z [2023-01-11 21:33:41,625] torch._inductor.ir: [WARNING] Using FallbackKernel: aten._embedding_bag 2023-01-11T21:41:23.8012915Z [2023-01-11 21:33:41,628] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 194 2023-01-11T21:41:23.8012923Z 2023-01-11T21:41:23.8013077Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8013191Z import torch 2023-01-11T21:41:23.8013307Z import random 2023-01-11T21:41:23.8013492Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8013692Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8013699Z 2023-01-11T21:41:23.8013827Z aten = torch.ops.aten 2023-01-11T21:41:23.8014044Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8014180Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8014187Z 2023-01-11T21:41:23.8014194Z 2023-01-11T21:41:23.8014335Z async_compile.wait(globals()) 2023-01-11T21:41:23.8014455Z del async_compile 2023-01-11T21:41:23.8014462Z 2023-01-11T21:41:23.8014577Z def call(args): 2023-01-11T21:41:23.8014704Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.8014823Z args.clear() 2023-01-11T21:41:23.8015114Z buf0 = aten._embedding_bag(arg0_1, arg1_1, arg2_1) 2023-01-11T21:41:23.8015217Z del arg0_1 2023-01-11T21:41:23.8015323Z del arg1_1 2023-01-11T21:41:23.8015429Z del arg2_1 2023-01-11T21:41:23.8015541Z buf1 = buf0[0] 2023-01-11T21:41:23.8015697Z assert_size_stride(buf1, (3, 4), (4, 1)) 2023-01-11T21:41:23.8015806Z buf2 = buf0[1] 2023-01-11T21:41:23.8015953Z assert_size_stride(buf2, (0, ), (1, )) 2023-01-11T21:41:23.8016045Z buf3 = buf0[2] 2023-01-11T21:41:23.8016184Z assert_size_stride(buf3, (3, ), (1, )) 2023-01-11T21:41:23.8016296Z buf4 = buf0[3] 2023-01-11T21:41:23.8016440Z assert_size_stride(buf4, (3, ), (1, )) 2023-01-11T21:41:23.8016542Z del buf0 2023-01-11T21:41:23.8016686Z return (buf1, buf2, buf3, buf4, ) 2023-01-11T21:41:23.8016694Z 2023-01-11T21:41:23.8016700Z 2023-01-11T21:41:23.8016822Z if __name__ == "__main__": 2023-01-11T21:41:23.8016989Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8017196Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8017531Z arg0_1 = rand_strided((10, 4), (4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8017832Z arg1_1 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8018127Z arg2_1 = rand_strided((3, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8018322Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8018330Z 2023-01-11T21:41:23.8018437Z ok (0.044s) 2023-01-11T21:41:23.8019218Z test_embedding_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8019422Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8019842Z [2023-01-11 21:33:41,734] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 195 2023-01-11T21:41:23.8020272Z [2023-01-11 21:33:43,307] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 195 2023-01-11T21:41:23.8020281Z 2023-01-11T21:41:23.8020435Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8020539Z import torch 2023-01-11T21:41:23.8020634Z import random 2023-01-11T21:41:23.8020823Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8021016Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8021024Z 2023-01-11T21:41:23.8021150Z aten = torch.ops.aten 2023-01-11T21:41:23.8021351Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8021496Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8021504Z 2023-01-11T21:41:23.8021509Z 2023-01-11T21:41:23.8021730Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8022138Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8022457Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.8022626Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8022783Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8022934Z bool* __restrict__ out_ptr1, 2023-01-11T21:41:23.8023066Z long* __restrict__ out_ptr2) 2023-01-11T21:41:23.8023164Z { 2023-01-11T21:41:23.8023324Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8023423Z { 2023-01-11T21:41:23.8023545Z #pragma omp for 2023-01-11T21:41:23.8023676Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.8023776Z { 2023-01-11T21:41:23.8023906Z #pragma GCC ivdep 2023-01-11T21:41:23.8024054Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:23.8024166Z { 2023-01-11T21:41:23.8024382Z { 2023-01-11T21:41:23.8024507Z { 2023-01-11T21:41:23.8024665Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8024837Z auto tmp1 = in_ptr1[i1 + (4*tmp0)]; 2023-01-11T21:41:23.8024989Z auto tmp2 = tmp1 * (tmp1>0); 2023-01-11T21:41:23.8025151Z out_ptr0[i1 + (4*i0)] = tmp2; 2023-01-11T21:41:23.8025282Z } 2023-01-11T21:41:23.8025401Z } 2023-01-11T21:41:23.8025517Z } 2023-01-11T21:41:23.8025638Z } 2023-01-11T21:41:23.8025761Z #pragma omp for 2023-01-11T21:41:23.8025913Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.8026031Z { 2023-01-11T21:41:23.8026145Z { 2023-01-11T21:41:23.8026262Z { 2023-01-11T21:41:23.8026421Z auto tmp0 = out_ptr0[i0]; 2023-01-11T21:41:23.8026601Z auto tmp1 = static_cast(0); 2023-01-11T21:41:23.8026750Z auto tmp2 = tmp0 <= tmp1; 2023-01-11T21:41:23.8026902Z out_ptr1[i0] = tmp2; 2023-01-11T21:41:23.8027019Z } 2023-01-11T21:41:23.8027136Z } 2023-01-11T21:41:23.8027253Z } 2023-01-11T21:41:23.8027390Z #pragma omp for 2023-01-11T21:41:23.8027534Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.8027631Z { 2023-01-11T21:41:23.8027746Z { 2023-01-11T21:41:23.8027858Z { 2023-01-11T21:41:23.8028014Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8028197Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.8028374Z auto tmp2 = static_cast(tmp1); 2023-01-11T21:41:23.8028521Z out_ptr2[i0] = tmp2; 2023-01-11T21:41:23.8028623Z } 2023-01-11T21:41:23.8028738Z } 2023-01-11T21:41:23.8028851Z } 2023-01-11T21:41:23.8028964Z } 2023-01-11T21:41:23.8029077Z } 2023-01-11T21:41:23.8029248Z 
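// Summary of the three loops above: loop 1 gathers rows of in_ptr1 by the int64 indices
// in in_ptr0 and applies ReLU (tmp1 * (tmp1 > 0)); loop 2 stores the ReLU mask
// (output <= 0) as bools for the backward pass; loop 3 copies the indices into out_ptr2.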
''') 2023-01-11T21:41:23.8029258Z 2023-01-11T21:41:23.8029264Z 2023-01-11T21:41:23.8029426Z async_compile.wait(globals()) 2023-01-11T21:41:23.8029541Z del async_compile 2023-01-11T21:41:23.8029548Z 2023-01-11T21:41:23.8029673Z def call(args): 2023-01-11T21:41:23.8029830Z primals_1, primals_2 = args 2023-01-11T21:41:23.8029957Z args.clear() 2023-01-11T21:41:23.8030300Z buf0 = empty_strided((2, 8, 4), (32, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8030626Z buf1 = empty_strided((2, 8, 4), (32, 4, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.8030948Z buf2 = empty_strided((2, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8031299Z kernel_cpp_0(c_void_p(primals_2.data_ptr()), c_void_p(primals_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.8031433Z del primals_1 2023-01-11T21:41:23.8031558Z del primals_2 2023-01-11T21:41:23.8031818Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:23.8031826Z 2023-01-11T21:41:23.8031832Z 2023-01-11T21:41:23.8031964Z if __name__ == "__main__": 2023-01-11T21:41:23.8032162Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8032372Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8032724Z primals_1 = rand_strided((10, 4), (4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8033031Z primals_2 = rand_strided((2, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8033254Z print_performance(lambda: call([primals_1, primals_2])) 2023-01-11T21:41:23.8033262Z 2023-01-11T21:41:23.8033384Z ok (1.679s) 2023-01-11T21:41:23.8034360Z test_exp_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8034591Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8035040Z [2023-01-11 21:33:43,326] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 196 2023-01-11T21:41:23.8035488Z [2023-01-11 21:33:45,162] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 196 2023-01-11T21:41:23.8035496Z 2023-01-11T21:41:23.8035660Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8035788Z import torch 2023-01-11T21:41:23.8035912Z import random 2023-01-11T21:41:23.8036094Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8036304Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8036312Z 2023-01-11T21:41:23.8036451Z aten = torch.ops.aten 2023-01-11T21:41:23.8036679Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8036849Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8036857Z 2023-01-11T21:41:23.8036863Z 2023-01-11T21:41:23.8037093Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8037437Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8037644Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8037809Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8037980Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8038141Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8038258Z { 2023-01-11T21:41:23.8038430Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8038537Z { 2023-01-11T21:41:23.8038674Z #pragma omp for 2023-01-11T21:41:23.8038800Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8038914Z { 2023-01-11T21:41:23.8039147Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8039379Z auto tmp2 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.8039526Z auto tmp1 = tmp0.exp(); 2023-01-11T21:41:23.8039672Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:23.8039814Z auto tmp4 = tmp3.exp(); 2023-01-11T21:41:23.8039956Z tmp1.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8040122Z tmp4.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8040253Z } 2023-01-11T21:41:23.8040455Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8040631Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.8040761Z { 2023-01-11T21:41:23.8040942Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8041105Z auto tmp2 = in_ptr1[i0]; 2023-01-11T21:41:23.8041293Z auto tmp1 = std::exp(tmp0); 2023-01-11T21:41:23.8041477Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:23.8041672Z auto tmp4 = std::exp(tmp3); 2023-01-11T21:41:23.8041838Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.8042115Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:23.8042246Z } 2023-01-11T21:41:23.8042363Z } 2023-01-11T21:41:23.8042496Z } 2023-01-11T21:41:23.8042679Z ''') 2023-01-11T21:41:23.8042690Z 2023-01-11T21:41:23.8042699Z 2023-01-11T21:41:23.8042895Z async_compile.wait(globals()) 2023-01-11T21:41:23.8043058Z del async_compile 2023-01-11T21:41:23.8043068Z 2023-01-11T21:41:23.8043221Z def call(args): 2023-01-11T21:41:23.8043382Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.8043516Z args.clear() 2023-01-11T21:41:23.8043961Z buf0 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8044380Z buf1 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8044804Z 
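    # kernel_cpp_0 fuses both outputs in a single pass over the inputs:
    # buf0 = exp(arg0_1) and buf1 = exp(arg0_1 + arg1_1), processed 8 floats at a
    # time with at::vec::Vectorized loads and stores.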
kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8044949Z del arg0_1 2023-01-11T21:41:23.8045096Z del arg1_1 2023-01-11T21:41:23.8045342Z return (buf0, buf1, ) 2023-01-11T21:41:23.8045354Z 2023-01-11T21:41:23.8045362Z 2023-01-11T21:41:23.8045527Z if __name__ == "__main__": 2023-01-11T21:41:23.8045767Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8046039Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8046479Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8046896Z arg1_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8047143Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.8047153Z 2023-01-11T21:41:23.8047291Z ok (1.855s) 2023-01-11T21:41:23.8048458Z test_expand_as_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8048749Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8049361Z [2023-01-11 21:33:45,206] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 197 2023-01-11T21:41:23.8049954Z [2023-01-11 21:33:47,044] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 197 2023-01-11T21:41:23.8049983Z 2023-01-11T21:41:23.8050182Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8050332Z import torch 2023-01-11T21:41:23.8050480Z import random 2023-01-11T21:41:23.8050732Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8051002Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8051012Z 2023-01-11T21:41:23.8051184Z aten = torch.ops.aten 2023-01-11T21:41:23.8051476Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8051668Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8051678Z 2023-01-11T21:41:23.8051686Z 2023-01-11T21:41:23.8051989Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8052456Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8052718Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8052861Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8052949Z { 2023-01-11T21:41:23.8053090Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8053167Z { 2023-01-11T21:41:23.8053298Z #pragma omp for collapse(2) 2023-01-11T21:41:23.8053419Z for(long i0=0; i0<6; i0+=1) 2023-01-11T21:41:23.8053505Z { 2023-01-11T21:41:23.8053634Z for(long i1=0; i1<128; i1+=1) 2023-01-11T21:41:23.8053721Z { 2023-01-11T21:41:23.8053851Z for(long i2=0; i2<12; i2+=1) 2023-01-11T21:41:23.8053919Z { 2023-01-11T21:41:23.8054225Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i2) + (100*i0)); 2023-01-11T21:41:23.8054428Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.8054555Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8054685Z auto tmp3 = tmp2 + tmp1; 2023-01-11T21:41:23.8054846Z tmp3.store(out_ptr0 + (8*i2) + (100*i1) + (12800*i0)); 2023-01-11T21:41:23.8054931Z } 2023-01-11T21:41:23.8055071Z #pragma omp simd 
simdlen(4) 2023-01-11T21:41:23.8055179Z for(long i2=96; i2<100; i2+=1) 2023-01-11T21:41:23.8055259Z { 2023-01-11T21:41:23.8055411Z auto tmp0 = in_ptr0[i2 + (100*i0)]; 2023-01-11T21:41:23.8055543Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.8055659Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8055790Z auto tmp3 = tmp2 + tmp1; 2023-01-11T21:41:23.8056004Z out_ptr0[i2 + (100*i1) + (12800*i0)] = tmp3; 2023-01-11T21:41:23.8056078Z } 2023-01-11T21:41:23.8056170Z } 2023-01-11T21:41:23.8056261Z } 2023-01-11T21:41:23.8056347Z } 2023-01-11T21:41:23.8056433Z } 2023-01-11T21:41:23.8056565Z ''') 2023-01-11T21:41:23.8056574Z 2023-01-11T21:41:23.8056579Z 2023-01-11T21:41:23.8056723Z async_compile.wait(globals()) 2023-01-11T21:41:23.8056805Z del async_compile 2023-01-11T21:41:23.8056812Z 2023-01-11T21:41:23.8056900Z def call(args): 2023-01-11T21:41:23.8056998Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.8057102Z args.clear() 2023-01-11T21:41:23.8057447Z buf0 = empty_strided((6, 128, 100), (12800, 100, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8057665Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.8057827Z return (as_strided(arg0_1, (6, 128, 100), (100, 0, 1)), buf0, ) 2023-01-11T21:41:23.8057835Z 2023-01-11T21:41:23.8057848Z 2023-01-11T21:41:23.8057955Z if __name__ == "__main__": 2023-01-11T21:41:23.8058108Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8058289Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8058612Z arg0_1 = rand_strided((6, 1, 100), (100, 100, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8058924Z arg1_1 = rand_strided((6, 128, 100), (12800, 100, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8059083Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.8059091Z 2023-01-11T21:41:23.8059189Z ok (1.884s) 2023-01-11T21:41:23.8059924Z test_expand_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8060106Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8060496Z [2023-01-11 21:33:47,078] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 198 2023-01-11T21:41:23.8060856Z [2023-01-11 21:33:48,675] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 198 2023-01-11T21:41:23.8060878Z 2023-01-11T21:41:23.8060988Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8061082Z import torch 2023-01-11T21:41:23.8061178Z import random 2023-01-11T21:41:23.8061338Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8061523Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8061531Z 2023-01-11T21:41:23.8061649Z aten = torch.ops.aten 2023-01-11T21:41:23.8061868Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8061994Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8062083Z 2023-01-11T21:41:23.8062109Z 2023-01-11T21:41:23.8062308Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8062720Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8062904Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8063060Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8063214Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8063300Z { 2023-01-11T21:41:23.8063450Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8063516Z { 2023-01-11T21:41:23.8063628Z #pragma omp for 2023-01-11T21:41:23.8063744Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.8063837Z { 2023-01-11T21:41:23.8063961Z #pragma GCC ivdep 2023-01-11T21:41:23.8064084Z for(long i1=0; i1<2; i1+=1) 2023-01-11T21:41:23.8064167Z { 2023-01-11T21:41:23.8064294Z #pragma GCC ivdep 2023-01-11T21:41:23.8064548Z for(long i2=0; i2<3; i2+=1) 2023-01-11T21:41:23.8064645Z { 2023-01-11T21:41:23.8064760Z #pragma GCC ivdep 2023-01-11T21:41:23.8064883Z for(long i3=0; i3<2; i3+=1) 2023-01-11T21:41:23.8064976Z { 2023-01-11T21:41:23.8065053Z { 2023-01-11T21:41:23.8065151Z { 2023-01-11T21:41:23.8065297Z auto tmp0 = in_ptr0[i3 + (2*i1)]; 2023-01-11T21:41:23.8065444Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.8065580Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8065743Z auto tmp3 = static_cast(2); 2023-01-11T21:41:23.8065880Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:23.8066026Z out_ptr0[i3 + (2*i2) + (6*i1) + (12*i0)] = tmp4; 2023-01-11T21:41:23.8066113Z } 2023-01-11T21:41:23.8066218Z } 2023-01-11T21:41:23.8066316Z } 2023-01-11T21:41:23.8066401Z } 2023-01-11T21:41:23.8066484Z } 2023-01-11T21:41:23.8066568Z } 2023-01-11T21:41:23.8066672Z #pragma omp for collapse(2) 2023-01-11T21:41:23.8066782Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.8066869Z { 2023-01-11T21:41:23.8066986Z for(long i1=0; i1<2; i1+=1) 2023-01-11T21:41:23.8067074Z { 2023-01-11T21:41:23.8067194Z #pragma GCC ivdep 2023-01-11T21:41:23.8067321Z for(long i2=0; i2<3; i2+=1) 2023-01-11T21:41:23.8067403Z { 2023-01-11T21:41:23.8067520Z #pragma GCC ivdep 2023-01-11T21:41:23.8067639Z for(long i3=0; i3<2; i3+=1) 2023-01-11T21:41:23.8067730Z { 2023-01-11T21:41:23.8067823Z { 2023-01-11T21:41:23.8067919Z { 2023-01-11T21:41:23.8068065Z auto tmp0 = in_ptr0[i3 + (2*i1)]; 2023-01-11T21:41:23.8068196Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.8068344Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8068512Z 
out_ptr1[i3 + (2*i2) + (6*i1) + (12*i0)] = tmp2; 2023-01-11T21:41:23.8068612Z } 2023-01-11T21:41:23.8068711Z } 2023-01-11T21:41:23.8068808Z } 2023-01-11T21:41:23.8068903Z } 2023-01-11T21:41:23.8068986Z } 2023-01-11T21:41:23.8069082Z } 2023-01-11T21:41:23.8069172Z } 2023-01-11T21:41:23.8069273Z } 2023-01-11T21:41:23.8069431Z ''') 2023-01-11T21:41:23.8069440Z 2023-01-11T21:41:23.8069447Z 2023-01-11T21:41:23.8069595Z async_compile.wait(globals()) 2023-01-11T21:41:23.8069715Z del async_compile 2023-01-11T21:41:23.8069722Z 2023-01-11T21:41:23.8069818Z def call(args): 2023-01-11T21:41:23.8070036Z arg0_1, = args 2023-01-11T21:41:23.8070148Z args.clear() 2023-01-11T21:41:23.8070511Z buf0 = empty_strided((3, 4, 2, 3, 2), (48, 12, 6, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8070855Z buf1 = empty_strided((2, 1, 2, 3, 2), (12, 12, 6, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8071107Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8071291Z return (buf0, buf1, as_strided(arg0_1, (2, 2, 5, 2), (0, 2, 0, 1)), ) 2023-01-11T21:41:23.8071300Z 2023-01-11T21:41:23.8071307Z 2023-01-11T21:41:23.8071423Z if __name__ == "__main__": 2023-01-11T21:41:23.8071576Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8071775Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8072102Z arg0_1 = rand_strided((2, 1, 2), (2, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8072246Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8072358Z 2023-01-11T21:41:23.8072451Z ok (1.629s) 2023-01-11T21:41:23.8073451Z test_expanded_reduction_cpu (__main__.CpuTests) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/87157 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.001s) 2023-01-11T21:41:23.8074198Z test_expm1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8074384Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8074775Z [2023-01-11 21:33:48,695] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 199 2023-01-11T21:41:23.8075162Z [2023-01-11 21:33:50,312] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 199 2023-01-11T21:41:23.8075835Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8076017Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8076382Z [2023-01-11 21:33:50,330] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 200 2023-01-11T21:41:23.8076753Z [2023-01-11 21:33:51,887] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 200 2023-01-11T21:41:23.8077402Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8077584Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8077964Z [2023-01-11 21:33:51,906] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 201 2023-01-11T21:41:23.8078340Z [2023-01-11 21:33:53,498] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 201 2023-01-11T21:41:23.8078964Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8079252Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8079682Z [2023-01-11 21:33:53,515] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 202 2023-01-11T21:41:23.8080103Z [2023-01-11 21:33:55,094] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 202 2023-01-11T21:41:23.8080770Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8080955Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8081408Z [2023-01-11 21:33:55,120] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 203 2023-01-11T21:41:23.8081437Z 2023-01-11T21:41:23.8081552Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8081659Z import torch 2023-01-11T21:41:23.8081767Z import random 2023-01-11T21:41:23.8081956Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8082147Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8082156Z 2023-01-11T21:41:23.8082283Z aten = torch.ops.aten 2023-01-11T21:41:23.8082499Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8082621Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8082629Z 2023-01-11T21:41:23.8082634Z 2023-01-11T21:41:23.8082847Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8083170Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8083340Z extern "C" void kernel(const half* __restrict__ in_ptr0, 2023-01-11T21:41:23.8083486Z half* __restrict__ out_ptr0, 2023-01-11T21:41:23.8083633Z half* __restrict__ out_ptr1) 2023-01-11T21:41:23.8083727Z { 2023-01-11T21:41:23.8083883Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8083961Z { 2023-01-11T21:41:23.8084081Z #pragma omp for 2023-01-11T21:41:23.8084212Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.8084308Z { 2023-01-11T21:41:23.8084409Z { 2023-01-11T21:41:23.8084508Z { 2023-01-11T21:41:23.8084680Z auto tmp0 = static_cast(in_ptr0[i0]); 2023-01-11T21:41:23.8084849Z auto tmp1 = std::expm1(tmp0); 2023-01-11T21:41:23.8085007Z auto tmp2 = static_cast(2); 2023-01-11T21:41:23.8085147Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.8085266Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.8085384Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.8085479Z } 2023-01-11T21:41:23.8085553Z } 2023-01-11T21:41:23.8085641Z } 2023-01-11T21:41:23.8085729Z } 2023-01-11T21:41:23.8085820Z } 2023-01-11T21:41:23.8085950Z ''') 2023-01-11T21:41:23.8085958Z 2023-01-11T21:41:23.8085965Z 2023-01-11T21:41:23.8086095Z async_compile.wait(globals()) 2023-01-11T21:41:23.8086190Z del async_compile 2023-01-11T21:41:23.8086198Z 2023-01-11T21:41:23.8086283Z def call(args): 2023-01-11T21:41:23.8086383Z arg0_1, = args 2023-01-11T21:41:23.8086481Z args.clear() 2023-01-11T21:41:23.8086765Z buf0 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float16) 2023-01-11T21:41:23.8087058Z buf1 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float16) 2023-01-11T21:41:23.8087303Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8087401Z del arg0_1 2023-01-11T21:41:23.8087511Z return (buf0, buf1, ) 2023-01-11T21:41:23.8087601Z 2023-01-11T21:41:23.8087612Z 2023-01-11T21:41:23.8087708Z if __name__ == "__main__": 2023-01-11T21:41:23.8087875Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8088048Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8088342Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float16) 2023-01-11T21:41:23.8088494Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8088503Z 2023-01-11T21:41:23.8088508Z 2023-01-11T21:41:23.8088642Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8088739Z import torch 
2023-01-11T21:41:23.8088842Z import random 2023-01-11T21:41:23.8088982Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8089144Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8089150Z 2023-01-11T21:41:23.8089248Z aten = torch.ops.aten 2023-01-11T21:41:23.8089425Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8089540Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8089617Z 2023-01-11T21:41:23.8089624Z 2023-01-11T21:41:23.8089812Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8090078Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8090240Z extern "C" void kernel(const half* __restrict__ in_ptr0, 2023-01-11T21:41:23.8090352Z half* __restrict__ out_ptr0, 2023-01-11T21:41:23.8090495Z half* __restrict__ out_ptr1) 2023-01-11T21:41:23.8090585Z { 2023-01-11T21:41:23.8090726Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8090811Z { 2023-01-11T21:41:23.8090919Z #pragma omp for 2023-01-11T21:41:23.8091031Z for(long i0=0; i0<201; i0+=1) 2023-01-11T21:41:23.8091102Z { 2023-01-11T21:41:23.8091183Z { 2023-01-11T21:41:23.8091267Z { 2023-01-11T21:41:23.8091425Z auto tmp0 = static_cast(in_ptr0[i0]); 2023-01-11T21:41:23.8091575Z auto tmp1 = std::expm1(tmp0); 2023-01-11T21:41:23.8091723Z auto tmp2 = static_cast(2); 2023-01-11T21:41:23.8091848Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.8091950Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.8092075Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.8092169Z } 2023-01-11T21:41:23.8092257Z } 2023-01-11T21:41:23.8092339Z } 2023-01-11T21:41:23.8092426Z } 2023-01-11T21:41:23.8092491Z } 2023-01-11T21:41:23.8092614Z ''') 2023-01-11T21:41:23.8092624Z 2023-01-11T21:41:23.8092630Z 2023-01-11T21:41:23.8092759Z async_compile.wait(globals()) 2023-01-11T21:41:23.8092866Z del async_compile 2023-01-11T21:41:23.8092874Z 2023-01-11T21:41:23.8092981Z def call(args): 2023-01-11T21:41:23.8093084Z arg0_1, = args 2023-01-11T21:41:23.8093195Z args.clear() 2023-01-11T21:41:23.8093514Z buf0 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float16) 2023-01-11T21:41:23.8093809Z buf1 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float16) 2023-01-11T21:41:23.8094056Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8094157Z del arg0_1 2023-01-11T21:41:23.8094267Z return (buf0, buf1, ) 2023-01-11T21:41:23.8094274Z 2023-01-11T21:41:23.8094281Z 2023-01-11T21:41:23.8094397Z if __name__ == "__main__": 2023-01-11T21:41:23.8094579Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8094767Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8095087Z arg0_1 = rand_strided((201, ), (1, ), device='cpu', dtype=torch.float16) 2023-01-11T21:41:23.8095249Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8095257Z 2023-01-11T21:41:23.8095263Z 2023-01-11T21:41:23.8095410Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8095516Z import torch 2023-01-11T21:41:23.8095629Z import random 2023-01-11T21:41:23.8095898Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8096066Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8096073Z 2023-01-11T21:41:23.8096178Z aten = torch.ops.aten 2023-01-11T21:41:23.8096377Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8096491Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8096497Z 
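# The float32 variant below vectorizes expm1 over 8-element chunks and scales the
# result by 2 for the second output; the scalar tail loop only runs when the length
# is not a multiple of 8 (it is empty for this 64-element input).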
2023-01-11T21:41:23.8096503Z 2023-01-11T21:41:23.8096729Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8097048Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8097226Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8097373Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8097511Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8097590Z { 2023-01-11T21:41:23.8097722Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8097810Z { 2023-01-11T21:41:23.8098002Z #pragma omp for 2023-01-11T21:41:23.8098125Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8098216Z { 2023-01-11T21:41:23.8098426Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8098554Z auto tmp1 = tmp0.expm1(); 2023-01-11T21:41:23.8098725Z auto tmp2 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.8098837Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.8098964Z tmp1.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8099098Z tmp3.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8099187Z } 2023-01-11T21:41:23.8099331Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8099454Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.8099531Z { 2023-01-11T21:41:23.8099654Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8099796Z auto tmp1 = std::expm1(tmp0); 2023-01-11T21:41:23.8099937Z auto tmp2 = static_cast(2); 2023-01-11T21:41:23.8100057Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.8100171Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.8100285Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.8100367Z } 2023-01-11T21:41:23.8100456Z } 2023-01-11T21:41:23.8100544Z } 2023-01-11T21:41:23.8100678Z ''') 2023-01-11T21:41:23.8100686Z 2023-01-11T21:41:23.8100692Z 2023-01-11T21:41:23.8100819Z async_compile.wait(globals()) 2023-01-11T21:41:23.8100926Z del async_compile 2023-01-11T21:41:23.8100933Z 2023-01-11T21:41:23.8101033Z def call(args): 2023-01-11T21:41:23.8101133Z arg0_1, = args 2023-01-11T21:41:23.8101219Z args.clear() 2023-01-11T21:41:23.8101501Z buf0 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8101767Z buf1 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8101990Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8102098Z del arg0_1 2023-01-11T21:41:23.8102218Z return (buf0, buf1, ) 2023-01-11T21:41:23.8102226Z 2023-01-11T21:41:23.8102232Z 2023-01-11T21:41:23.8102504Z if __name__ == "__main__": 2023-01-11T21:41:23.8102653Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8102821Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8103107Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8103257Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8103264Z 2023-01-11T21:41:23.8103269Z 2023-01-11T21:41:23.8103397Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8103495Z import torch 2023-01-11T21:41:23.8103592Z import random 2023-01-11T21:41:23.8103761Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8103946Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8103953Z 2023-01-11T21:41:23.8104079Z aten = torch.ops.aten 2023-01-11T21:41:23.8104386Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8104538Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8104546Z 2023-01-11T21:41:23.8104553Z 
2023-01-11T21:41:23.8104777Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8105090Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8105269Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8105406Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8105541Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8105637Z { 2023-01-11T21:41:23.8105781Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8105875Z { 2023-01-11T21:41:23.8105982Z #pragma omp for 2023-01-11T21:41:23.8106092Z for(long i0=0; i0<25; i0+=1) 2023-01-11T21:41:23.8106176Z { 2023-01-11T21:41:23.8106346Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8106565Z auto tmp1 = tmp0.expm1(); 2023-01-11T21:41:23.8106759Z auto tmp2 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.8106873Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.8106995Z tmp1.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8107122Z tmp3.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8107213Z } 2023-01-11T21:41:23.8107341Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8107464Z for(long i0=200; i0<201; i0+=1) 2023-01-11T21:41:23.8107560Z { 2023-01-11T21:41:23.8107677Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8107815Z auto tmp1 = std::expm1(tmp0); 2023-01-11T21:41:23.8107971Z auto tmp2 = static_cast(2); 2023-01-11T21:41:23.8108103Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.8108210Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.8108331Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.8108432Z } 2023-01-11T21:41:23.8108528Z } 2023-01-11T21:41:23.8108621Z } 2023-01-11T21:41:23.8108768Z ''') 2023-01-11T21:41:23.8108777Z 2023-01-11T21:41:23.8108783Z 2023-01-11T21:41:23.8108914Z async_compile.wait(globals()) 2023-01-11T21:41:23.8109008Z del async_compile 2023-01-11T21:41:23.8109032Z 2023-01-11T21:41:23.8109112Z def call(args): 2023-01-11T21:41:23.8109206Z arg0_1, = args 2023-01-11T21:41:23.8109300Z args.clear() 2023-01-11T21:41:23.8109588Z buf0 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8109862Z buf1 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8110075Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8110164Z del arg0_1 2023-01-11T21:41:23.8110250Z return (buf0, buf1, ) 2023-01-11T21:41:23.8110257Z 2023-01-11T21:41:23.8110263Z 2023-01-11T21:41:23.8110373Z if __name__ == "__main__": 2023-01-11T21:41:23.8110553Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8110741Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8111057Z arg0_1 = rand_strided((201, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8111224Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8111231Z 2023-01-11T21:41:23.8111661Z [2023-01-11 21:33:56,662] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 203 2023-01-11T21:41:23.8112317Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8112505Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8112995Z [2023-01-11 21:33:56,679] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 204 2023-01-11T21:41:23.8113410Z [2023-01-11 21:33:58,236] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 204 2023-01-11T21:41:23.8114148Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8114339Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8114801Z [2023-01-11 21:33:58,255] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 205 2023-01-11T21:41:23.8115281Z [2023-01-11 21:33:59,825] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 205 2023-01-11T21:41:23.8116238Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8116464Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8116941Z [2023-01-11 21:33:59,844] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 206 2023-01-11T21:41:23.8117429Z [2023-01-11 21:34:01,430] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 206 2023-01-11T21:41:23.8118301Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8118522Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8118973Z [2023-01-11 21:34:01,449] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 207 2023-01-11T21:41:23.8119000Z 2023-01-11T21:41:23.8119145Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8119260Z import torch 2023-01-11T21:41:23.8119379Z import random 2023-01-11T21:41:23.8119585Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8119801Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8119810Z 2023-01-11T21:41:23.8119941Z aten = torch.ops.aten 2023-01-11T21:41:23.8120179Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8120322Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8120330Z 2023-01-11T21:41:23.8120352Z 2023-01-11T21:41:23.8120582Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8120970Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8121177Z extern "C" void kernel(const double* __restrict__ in_ptr0, 2023-01-11T21:41:23.8121341Z double* __restrict__ out_ptr0, 2023-01-11T21:41:23.8121506Z double* __restrict__ out_ptr1) 2023-01-11T21:41:23.8121603Z { 2023-01-11T21:41:23.8121767Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8121850Z { 2023-01-11T21:41:23.8121977Z #pragma omp for 2023-01-11T21:41:23.8122110Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.8122209Z { 2023-01-11T21:41:23.8122310Z { 2023-01-11T21:41:23.8122414Z { 2023-01-11T21:41:23.8122550Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8122723Z auto tmp1 = std::expm1(tmp0); 2023-01-11T21:41:23.8122904Z auto tmp2 = static_cast(2); 2023-01-11T21:41:23.8123136Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.8123274Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.8123408Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.8123510Z } 2023-01-11T21:41:23.8123595Z } 2023-01-11T21:41:23.8123695Z } 2023-01-11T21:41:23.8123791Z } 2023-01-11T21:41:23.8123887Z } 2023-01-11T21:41:23.8124024Z ''') 2023-01-11T21:41:23.8124034Z 2023-01-11T21:41:23.8124042Z 2023-01-11T21:41:23.8124197Z async_compile.wait(globals()) 2023-01-11T21:41:23.8124316Z del async_compile 2023-01-11T21:41:23.8124323Z 2023-01-11T21:41:23.8124438Z def call(args): 2023-01-11T21:41:23.8124533Z arg0_1, = args 2023-01-11T21:41:23.8124650Z args.clear() 2023-01-11T21:41:23.8124991Z buf0 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.8125322Z buf1 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.8125680Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8125797Z del arg0_1 2023-01-11T21:41:23.8125924Z return (buf0, buf1, ) 2023-01-11T21:41:23.8125933Z 2023-01-11T21:41:23.8125940Z 2023-01-11T21:41:23.8126050Z if __name__ == "__main__": 2023-01-11T21:41:23.8126249Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8126463Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8126791Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.8126974Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8126982Z 2023-01-11T21:41:23.8126989Z 2023-01-11T21:41:23.8127150Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8127264Z import torch 
2023-01-11T21:41:23.8127378Z import random 2023-01-11T21:41:23.8127561Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8127779Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8127791Z 2023-01-11T21:41:23.8127922Z aten = torch.ops.aten 2023-01-11T21:41:23.8128157Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8128312Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8128320Z 2023-01-11T21:41:23.8128327Z 2023-01-11T21:41:23.8128585Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8128974Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8129206Z extern "C" void kernel(const double* __restrict__ in_ptr0, 2023-01-11T21:41:23.8129380Z double* __restrict__ out_ptr0, 2023-01-11T21:41:23.8129565Z double* __restrict__ out_ptr1) 2023-01-11T21:41:23.8129688Z { 2023-01-11T21:41:23.8129883Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8130006Z { 2023-01-11T21:41:23.8130155Z #pragma omp for 2023-01-11T21:41:23.8130313Z for(long i0=0; i0<201; i0+=1) 2023-01-11T21:41:23.8130430Z { 2023-01-11T21:41:23.8130556Z { 2023-01-11T21:41:23.8130681Z { 2023-01-11T21:41:23.8130857Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8131052Z auto tmp1 = std::expm1(tmp0); 2023-01-11T21:41:23.8131249Z auto tmp2 = static_cast(2); 2023-01-11T21:41:23.8131419Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.8131564Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.8131724Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.8131851Z } 2023-01-11T21:41:23.8131977Z } 2023-01-11T21:41:23.8132100Z } 2023-01-11T21:41:23.8132223Z } 2023-01-11T21:41:23.8132328Z } 2023-01-11T21:41:23.8132493Z ''') 2023-01-11T21:41:23.8132501Z 2023-01-11T21:41:23.8132508Z 2023-01-11T21:41:23.8132690Z async_compile.wait(globals()) 2023-01-11T21:41:23.8132834Z del async_compile 2023-01-11T21:41:23.8132842Z 2023-01-11T21:41:23.8133077Z def call(args): 2023-01-11T21:41:23.8133211Z arg0_1, = args 2023-01-11T21:41:23.8133350Z args.clear() 2023-01-11T21:41:23.8133721Z buf0 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.8134066Z buf1 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.8134372Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8134509Z del arg0_1 2023-01-11T21:41:23.8134659Z return (buf0, buf1, ) 2023-01-11T21:41:23.8134668Z 2023-01-11T21:41:23.8134675Z 2023-01-11T21:41:23.8134821Z if __name__ == "__main__": 2023-01-11T21:41:23.8135041Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8135279Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8135645Z arg0_1 = rand_strided((201, ), (1, ), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.8135840Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8135949Z 2023-01-11T21:41:23.8135973Z 2023-01-11T21:41:23.8136145Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8136284Z import torch 2023-01-11T21:41:23.8136423Z import random 2023-01-11T21:41:23.8136648Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8136882Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8136889Z 2023-01-11T21:41:23.8137044Z aten = torch.ops.aten 2023-01-11T21:41:23.8137303Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8137468Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8137476Z 
2023-01-11T21:41:23.8137482Z 2023-01-11T21:41:23.8137744Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8138137Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8138361Z extern "C" void kernel(const int* __restrict__ in_ptr0, 2023-01-11T21:41:23.8138547Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8138740Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8138861Z { 2023-01-11T21:41:23.8139040Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8139166Z { 2023-01-11T21:41:23.8139316Z #pragma omp for 2023-01-11T21:41:23.8139473Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.8139597Z { 2023-01-11T21:41:23.8139723Z { 2023-01-11T21:41:23.8139851Z { 2023-01-11T21:41:23.8140008Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8140212Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.8140404Z auto tmp2 = std::expm1(tmp1); 2023-01-11T21:41:23.8140599Z auto tmp3 = static_cast(2); 2023-01-11T21:41:23.8140771Z auto tmp4 = tmp2 * tmp3; 2023-01-11T21:41:23.8140930Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8141089Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:23.8141207Z } 2023-01-11T21:41:23.8141335Z } 2023-01-11T21:41:23.8141457Z } 2023-01-11T21:41:23.8141582Z } 2023-01-11T21:41:23.8141707Z } 2023-01-11T21:41:23.8141876Z ''') 2023-01-11T21:41:23.8141885Z 2023-01-11T21:41:23.8141892Z 2023-01-11T21:41:23.8142077Z async_compile.wait(globals()) 2023-01-11T21:41:23.8142210Z del async_compile 2023-01-11T21:41:23.8142233Z 2023-01-11T21:41:23.8142495Z def call(args): 2023-01-11T21:41:23.8142643Z arg0_1, = args 2023-01-11T21:41:23.8142787Z args.clear() 2023-01-11T21:41:23.8143162Z buf0 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8143521Z buf1 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8143829Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8143967Z del arg0_1 2023-01-11T21:41:23.8144108Z return (buf0, buf1, ) 2023-01-11T21:41:23.8144116Z 2023-01-11T21:41:23.8144236Z 2023-01-11T21:41:23.8144393Z if __name__ == "__main__": 2023-01-11T21:41:23.8144615Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8144855Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8145221Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:23.8145434Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8145442Z 2023-01-11T21:41:23.8145448Z 2023-01-11T21:41:23.8145635Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8145775Z import torch 2023-01-11T21:41:23.8145903Z import random 2023-01-11T21:41:23.8146129Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8146364Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8146372Z 2023-01-11T21:41:23.8146526Z aten = torch.ops.aten 2023-01-11T21:41:23.8146783Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8146965Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8146978Z 2023-01-11T21:41:23.8147076Z 2023-01-11T21:41:23.8147347Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8147728Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8147936Z extern "C" void kernel(const int* __restrict__ in_ptr0, 2023-01-11T21:41:23.8148124Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8148309Z float* __restrict__ out_ptr1) 
2023-01-11T21:41:23.8148432Z { 2023-01-11T21:41:23.8148627Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8148751Z { 2023-01-11T21:41:23.8148902Z #pragma omp for 2023-01-11T21:41:23.8149050Z for(long i0=0; i0<201; i0+=1) 2023-01-11T21:41:23.8149178Z { 2023-01-11T21:41:23.8149305Z { 2023-01-11T21:41:23.8149436Z { 2023-01-11T21:41:23.8149610Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8149815Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.8150021Z auto tmp2 = std::expm1(tmp1); 2023-01-11T21:41:23.8150201Z auto tmp3 = static_cast(2); 2023-01-11T21:41:23.8150376Z auto tmp4 = tmp2 * tmp3; 2023-01-11T21:41:23.8150538Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8150700Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:23.8150831Z } 2023-01-11T21:41:23.8150960Z } 2023-01-11T21:41:23.8151088Z } 2023-01-11T21:41:23.8151200Z } 2023-01-11T21:41:23.8151326Z } 2023-01-11T21:41:23.8151496Z ''') 2023-01-11T21:41:23.8151505Z 2023-01-11T21:41:23.8151512Z 2023-01-11T21:41:23.8151692Z async_compile.wait(globals()) 2023-01-11T21:41:23.8151839Z del async_compile 2023-01-11T21:41:23.8151847Z 2023-01-11T21:41:23.8151990Z def call(args): 2023-01-11T21:41:23.8152128Z arg0_1, = args 2023-01-11T21:41:23.8152254Z args.clear() 2023-01-11T21:41:23.8152626Z buf0 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8152990Z buf1 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8153298Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8153435Z del arg0_1 2023-01-11T21:41:23.8153588Z return (buf0, buf1, ) 2023-01-11T21:41:23.8153596Z 2023-01-11T21:41:23.8153603Z 2023-01-11T21:41:23.8153820Z if __name__ == "__main__": 2023-01-11T21:41:23.8154049Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8154274Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8154597Z arg0_1 = rand_strided((201, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:23.8154760Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8154767Z 2023-01-11T21:41:23.8155178Z [2023-01-11 21:34:03,017] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 207 2023-01-11T21:41:23.8155907Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8156102Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8156516Z [2023-01-11 21:34:03,035] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 208 2023-01-11T21:41:23.8156932Z [2023-01-11 21:34:04,589] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 208 2023-01-11T21:41:23.8156939Z 2023-01-11T21:41:23.8157084Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8157175Z import torch 2023-01-11T21:41:23.8157284Z import random 2023-01-11T21:41:23.8157465Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8157717Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8157726Z 2023-01-11T21:41:23.8157845Z aten = torch.ops.aten 2023-01-11T21:41:23.8158052Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8158194Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8158201Z 2023-01-11T21:41:23.8158208Z 2023-01-11T21:41:23.8158420Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8158739Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8158926Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.8159068Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8159210Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8159301Z { 2023-01-11T21:41:23.8159458Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8159548Z { 2023-01-11T21:41:23.8159651Z #pragma omp for 2023-01-11T21:41:23.8159783Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.8159879Z { 2023-01-11T21:41:23.8159968Z { 2023-01-11T21:41:23.8160062Z { 2023-01-11T21:41:23.8160203Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8160362Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.8160503Z auto tmp2 = std::expm1(tmp1); 2023-01-11T21:41:23.8160655Z auto tmp3 = static_cast(2); 2023-01-11T21:41:23.8160791Z auto tmp4 = tmp2 * tmp3; 2023-01-11T21:41:23.8160919Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8161039Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:23.8161135Z } 2023-01-11T21:41:23.8161226Z } 2023-01-11T21:41:23.8161295Z } 2023-01-11T21:41:23.8161379Z } 2023-01-11T21:41:23.8161466Z } 2023-01-11T21:41:23.8161595Z ''') 2023-01-11T21:41:23.8161603Z 2023-01-11T21:41:23.8161608Z 2023-01-11T21:41:23.8161752Z async_compile.wait(globals()) 2023-01-11T21:41:23.8161856Z del async_compile 2023-01-11T21:41:23.8161862Z 2023-01-11T21:41:23.8161957Z def call(args): 2023-01-11T21:41:23.8162041Z arg0_1, = args 2023-01-11T21:41:23.8162139Z args.clear() 2023-01-11T21:41:23.8162440Z buf0 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8162735Z buf1 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8162989Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8163085Z del arg0_1 2023-01-11T21:41:23.8163193Z return (buf0, buf1, ) 2023-01-11T21:41:23.8163202Z 2023-01-11T21:41:23.8163208Z 2023-01-11T21:41:23.8163325Z if __name__ == "__main__": 2023-01-11T21:41:23.8163465Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8163654Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8163959Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8164224Z 
print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8164231Z 2023-01-11T21:41:23.8164236Z 2023-01-11T21:41:23.8164371Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8164471Z import torch 2023-01-11T21:41:23.8164577Z import random 2023-01-11T21:41:23.8164745Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8164904Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8164911Z 2023-01-11T21:41:23.8165019Z aten = torch.ops.aten 2023-01-11T21:41:23.8165219Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8165354Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8165362Z 2023-01-11T21:41:23.8165369Z 2023-01-11T21:41:23.8165578Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8165895Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8166110Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.8166270Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8166393Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8166481Z { 2023-01-11T21:41:23.8166632Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8166720Z { 2023-01-11T21:41:23.8166829Z #pragma omp for 2023-01-11T21:41:23.8166950Z for(long i0=0; i0<201; i0+=1) 2023-01-11T21:41:23.8167020Z { 2023-01-11T21:41:23.8167114Z { 2023-01-11T21:41:23.8167205Z { 2023-01-11T21:41:23.8167334Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8167492Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.8167647Z auto tmp2 = std::expm1(tmp1); 2023-01-11T21:41:23.8167796Z auto tmp3 = static_cast(2); 2023-01-11T21:41:23.8167929Z auto tmp4 = tmp2 * tmp3; 2023-01-11T21:41:23.8168038Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8168167Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:23.8168254Z } 2023-01-11T21:41:23.8168344Z } 2023-01-11T21:41:23.8168434Z } 2023-01-11T21:41:23.8168520Z } 2023-01-11T21:41:23.8168591Z } 2023-01-11T21:41:23.8168714Z ''') 2023-01-11T21:41:23.8168722Z 2023-01-11T21:41:23.8168729Z 2023-01-11T21:41:23.8168856Z async_compile.wait(globals()) 2023-01-11T21:41:23.8168962Z del async_compile 2023-01-11T21:41:23.8168970Z 2023-01-11T21:41:23.8169071Z def call(args): 2023-01-11T21:41:23.8169169Z arg0_1, = args 2023-01-11T21:41:23.8169266Z args.clear() 2023-01-11T21:41:23.8169567Z buf0 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8169840Z buf1 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8170091Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8170193Z del arg0_1 2023-01-11T21:41:23.8170309Z return (buf0, buf1, ) 2023-01-11T21:41:23.8170317Z 2023-01-11T21:41:23.8170323Z 2023-01-11T21:41:23.8170430Z if __name__ == "__main__": 2023-01-11T21:41:23.8170600Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8170775Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8171064Z arg0_1 = rand_strided((201, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8171201Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8171209Z 2023-01-11T21:41:23.8171304Z ok (15.913s) 2023-01-11T21:41:23.8172062Z test_fill1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. 
This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8172331Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8172761Z [2023-01-11 21:34:04,772] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 209 2023-01-11T21:41:23.8173205Z [2023-01-11 21:34:06,296] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 209 2023-01-11T21:41:23.8173213Z 2023-01-11T21:41:23.8173362Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8173469Z import torch 2023-01-11T21:41:23.8173580Z import random 2023-01-11T21:41:23.8173759Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8173965Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8173974Z 2023-01-11T21:41:23.8174099Z aten = torch.ops.aten 2023-01-11T21:41:23.8174331Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8174478Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8174487Z 2023-01-11T21:41:23.8174499Z 2023-01-11T21:41:23.8174797Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8175146Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8175319Z extern "C" void kernel(float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8175445Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8175532Z { 2023-01-11T21:41:23.8175680Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8175767Z { 2023-01-11T21:41:23.8175878Z #pragma omp for 2023-01-11T21:41:23.8175999Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.8176088Z { 2023-01-11T21:41:23.8176281Z auto tmp0 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.8176414Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8176507Z } 2023-01-11T21:41:23.8176641Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8176758Z for(long i0=256; i0<256; i0+=1) 2023-01-11T21:41:23.8176849Z { 2023-01-11T21:41:23.8176996Z auto tmp0 = static_cast(1); 2023-01-11T21:41:23.8177104Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:23.8177196Z } 2023-01-11T21:41:23.8177308Z #pragma omp for 2023-01-11T21:41:23.8177424Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.8177515Z { 2023-01-11T21:41:23.8177711Z auto tmp0 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.8177845Z tmp0.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8177921Z } 2023-01-11T21:41:23.8178061Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8178180Z for(long i0=256; i0<256; i0+=1) 2023-01-11T21:41:23.8178270Z { 2023-01-11T21:41:23.8178413Z auto tmp0 = static_cast(2); 2023-01-11T21:41:23.8178530Z out_ptr1[i0] = tmp0; 2023-01-11T21:41:23.8178600Z } 2023-01-11T21:41:23.8178689Z } 2023-01-11T21:41:23.8178767Z } 2023-01-11T21:41:23.8178890Z ''') 2023-01-11T21:41:23.8178905Z 2023-01-11T21:41:23.8178911Z 2023-01-11T21:41:23.8179029Z async_compile.wait(globals()) 2023-01-11T21:41:23.8179122Z del async_compile 2023-01-11T21:41:23.8179129Z 2023-01-11T21:41:23.8179220Z def call(args): 2023-01-11T21:41:23.8179308Z arg0_1, = args 2023-01-11T21:41:23.8179396Z args.clear() 2023-01-11T21:41:23.8179702Z buf0 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8180002Z buf1 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8180191Z kernel_cpp_0(c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 
2023-01-11T21:41:23.8180308Z return (buf0, buf1, ) 2023-01-11T21:41:23.8180317Z 2023-01-11T21:41:23.8180324Z 2023-01-11T21:41:23.8180432Z if __name__ == "__main__": 2023-01-11T21:41:23.8191950Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8192247Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8192606Z arg0_1 = rand_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8192906Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8192915Z 2023-01-11T21:41:23.8192997Z ok (1.707s) 2023-01-11T21:41:23.8193794Z test_fill2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8193988Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8194406Z [2023-01-11 21:34:06,342] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 210 2023-01-11T21:41:23.8194793Z [2023-01-11 21:34:07,897] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 210 2023-01-11T21:41:23.8194891Z 2023-01-11T21:41:23.8195026Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8195130Z import torch 2023-01-11T21:41:23.8195233Z import random 2023-01-11T21:41:23.8195408Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8195577Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8195600Z 2023-01-11T21:41:23.8195693Z aten = torch.ops.aten 2023-01-11T21:41:23.8195900Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8196033Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8196040Z 2023-01-11T21:41:23.8196048Z 2023-01-11T21:41:23.8196251Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8196564Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8196736Z extern "C" void kernel(float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8196900Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8196996Z { 2023-01-11T21:41:23.8197170Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8197260Z { 2023-01-11T21:41:23.8197364Z #pragma omp for 2023-01-11T21:41:23.8197484Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.8197577Z { 2023-01-11T21:41:23.8197781Z auto tmp0 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.8197904Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8197992Z } 2023-01-11T21:41:23.8198130Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8198243Z for(long i0=256; i0<256; i0+=1) 2023-01-11T21:41:23.8198331Z { 2023-01-11T21:41:23.8198481Z auto tmp0 = static_cast(1); 2023-01-11T21:41:23.8198593Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:23.8198666Z } 2023-01-11T21:41:23.8198779Z #pragma omp for 2023-01-11T21:41:23.8198900Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.8198993Z { 2023-01-11T21:41:23.8199200Z auto tmp0 = at::vec::Vectorized(static_cast(3.0)); 2023-01-11T21:41:23.8199337Z tmp0.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8199422Z } 2023-01-11T21:41:23.8199541Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8199659Z for(long i0=256; i0<256; i0+=1) 2023-01-11T21:41:23.8199754Z { 2023-01-11T21:41:23.8199904Z auto tmp0 = 
static_cast(3.0); 2023-01-11T21:41:23.8200020Z out_ptr1[i0] = tmp0; 2023-01-11T21:41:23.8200110Z } 2023-01-11T21:41:23.8200197Z } 2023-01-11T21:41:23.8200265Z } 2023-01-11T21:41:23.8200392Z ''') 2023-01-11T21:41:23.8200400Z 2023-01-11T21:41:23.8200407Z 2023-01-11T21:41:23.8200543Z async_compile.wait(globals()) 2023-01-11T21:41:23.8200644Z del async_compile 2023-01-11T21:41:23.8200652Z 2023-01-11T21:41:23.8200754Z def call(args): 2023-01-11T21:41:23.8200856Z arg0_1, = args 2023-01-11T21:41:23.8200959Z args.clear() 2023-01-11T21:41:23.8201260Z buf0 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8201636Z buf1 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8201830Z kernel_cpp_0(c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8201938Z return (buf0, buf1, ) 2023-01-11T21:41:23.8201946Z 2023-01-11T21:41:23.8201953Z 2023-01-11T21:41:23.8202060Z if __name__ == "__main__": 2023-01-11T21:41:23.8202226Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8202409Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8202712Z arg0_1 = rand_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8202856Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8202865Z 2023-01-11T21:41:23.8202958Z ok (1.601s) 2023-01-11T21:41:23.8203715Z test_flip_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8203902Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8204308Z [2023-01-11 21:34:07,939] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 211 2023-01-11T21:41:23.8204716Z [2023-01-11 21:34:09,533] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 211 2023-01-11T21:41:23.8204724Z 2023-01-11T21:41:23.8204860Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8204957Z import torch 2023-01-11T21:41:23.8205055Z import random 2023-01-11T21:41:23.8205207Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8205378Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8205386Z 2023-01-11T21:41:23.8205509Z aten = torch.ops.aten 2023-01-11T21:41:23.8205696Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8205826Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8205834Z 2023-01-11T21:41:23.8205840Z 2023-01-11T21:41:23.8206043Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8206345Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8206520Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8206651Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8206785Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8206870Z { 2023-01-11T21:41:23.8207015Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8207103Z { 2023-01-11T21:41:23.8207212Z #pragma omp for 2023-01-11T21:41:23.8207332Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:23.8207405Z { 2023-01-11T21:41:23.8207504Z #pragma GCC ivdep 
2023-01-11T21:41:23.8207624Z for(long i1=0; i1<6; i1+=1) 2023-01-11T21:41:23.8207713Z { 2023-01-11T21:41:23.8207805Z { 2023-01-11T21:41:23.8207902Z { 2023-01-11T21:41:23.8208166Z auto tmp0 = in_ptr0[5 + ((-1)*i1) + (6*i0)]; 2023-01-11T21:41:23.8208291Z out_ptr0[i1 + (6*i0)] = tmp0; 2023-01-11T21:41:23.8208381Z } 2023-01-11T21:41:23.8208474Z } 2023-01-11T21:41:23.8208573Z } 2023-01-11T21:41:23.8208660Z } 2023-01-11T21:41:23.8208794Z #pragma omp for collapse(2) 2023-01-11T21:41:23.8208910Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.8208985Z { 2023-01-11T21:41:23.8209106Z for(long i1=0; i1<6; i1+=1) 2023-01-11T21:41:23.8209200Z { 2023-01-11T21:41:23.8209325Z #pragma GCC ivdep 2023-01-11T21:41:23.8209451Z for(long i2=0; i2<6; i2+=1) 2023-01-11T21:41:23.8209624Z { 2023-01-11T21:41:23.8209704Z { 2023-01-11T21:41:23.8209804Z { 2023-01-11T21:41:23.8210088Z auto tmp0 = in_ptr0[30 + i2 + ((-6)*i1) + (36*i0)]; 2023-01-11T21:41:23.8210257Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.8210475Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.8210623Z out_ptr1[i2 + (6*i1) + (36*i0)] = tmp2; 2023-01-11T21:41:23.8210722Z } 2023-01-11T21:41:23.8210812Z } 2023-01-11T21:41:23.8210888Z } 2023-01-11T21:41:23.8210974Z } 2023-01-11T21:41:23.8211062Z } 2023-01-11T21:41:23.8211148Z } 2023-01-11T21:41:23.8211237Z } 2023-01-11T21:41:23.8211346Z ''') 2023-01-11T21:41:23.8211355Z 2023-01-11T21:41:23.8211361Z 2023-01-11T21:41:23.8211487Z async_compile.wait(globals()) 2023-01-11T21:41:23.8211576Z del async_compile 2023-01-11T21:41:23.8211681Z 2023-01-11T21:41:23.8211787Z def call(args): 2023-01-11T21:41:23.8211887Z arg0_1, = args 2023-01-11T21:41:23.8211987Z args.clear() 2023-01-11T21:41:23.8212315Z buf0 = empty_strided((1, 2, 6, 6), (72, 36, 6, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8212630Z buf1 = empty_strided((1, 2, 6, 6), (72, 36, 6, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8212875Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8212956Z del arg0_1 2023-01-11T21:41:23.8213066Z return (buf0, buf1, ) 2023-01-11T21:41:23.8213072Z 2023-01-11T21:41:23.8213077Z 2023-01-11T21:41:23.8213186Z if __name__ == "__main__": 2023-01-11T21:41:23.8213357Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8213542Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8213865Z arg0_1 = rand_strided((1, 2, 6, 6), (72, 36, 6, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8214030Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8214039Z 2023-01-11T21:41:23.8214132Z ok (1.635s) 2023-01-11T21:41:23.8214853Z test_fmod_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8215032Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8215434Z [2023-01-11 21:34:09,556] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 212 2023-01-11T21:41:23.8215854Z [2023-01-11 21:34:11,136] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 212 2023-01-11T21:41:23.8215863Z 2023-01-11T21:41:23.8216011Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8216115Z import torch 2023-01-11T21:41:23.8216219Z import random 2023-01-11T21:41:23.8216398Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8216583Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8216592Z 2023-01-11T21:41:23.8216689Z aten = torch.ops.aten 2023-01-11T21:41:23.8216893Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8217027Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8217036Z 2023-01-11T21:41:23.8217043Z 2023-01-11T21:41:23.8217249Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8217557Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8217731Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8217882Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8218026Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8218229Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8218314Z { 2023-01-11T21:41:23.8218455Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8218546Z { 2023-01-11T21:41:23.8218656Z #pragma omp for 2023-01-11T21:41:23.8218773Z for(long i0=0; i0<9; i0+=1) 2023-01-11T21:41:23.8218867Z { 2023-01-11T21:41:23.8219055Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8219245Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.8219376Z auto tmp2 = tmp0.fmod(tmp1); 2023-01-11T21:41:23.8219584Z auto tmp3 = at::vec::Vectorized(static_cast(3.0)); 2023-01-11T21:41:23.8219702Z auto tmp4 = tmp0 * tmp3; 2023-01-11T21:41:23.8219831Z auto tmp5 = tmp4.fmod(tmp1); 2023-01-11T21:41:23.8220029Z auto tmp6 = at::vec::Vectorized(static_cast(2.0)); 2023-01-11T21:41:23.8220287Z auto tmp7 = tmp5 - tmp6; 2023-01-11T21:41:23.8220410Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8220537Z tmp7.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8220628Z } 2023-01-11T21:41:23.8220760Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8220875Z for(long i0=72; i0<72; i0+=1) 2023-01-11T21:41:23.8220964Z { 2023-01-11T21:41:23.8221080Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8221180Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.8221327Z auto tmp2 = std::fmod(tmp0, tmp1); 2023-01-11T21:41:23.8221478Z auto tmp3 = static_cast(3.0); 2023-01-11T21:41:23.8221595Z auto tmp4 = tmp0 * tmp3; 2023-01-11T21:41:23.8221739Z auto tmp5 = std::fmod(tmp4, tmp1); 2023-01-11T21:41:23.8221879Z auto tmp6 = static_cast(2.0); 2023-01-11T21:41:23.8222070Z auto tmp7 = tmp5 - tmp6; 2023-01-11T21:41:23.8222168Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8222280Z out_ptr1[i0] = tmp7; 2023-01-11T21:41:23.8222508Z } 2023-01-11T21:41:23.8222595Z } 2023-01-11T21:41:23.8222678Z } 2023-01-11T21:41:23.8222790Z ''') 2023-01-11T21:41:23.8222800Z 2023-01-11T21:41:23.8222805Z 2023-01-11T21:41:23.8222936Z async_compile.wait(globals()) 2023-01-11T21:41:23.8223025Z del async_compile 
2023-01-11T21:41:23.8223032Z 2023-01-11T21:41:23.8223129Z def call(args): 2023-01-11T21:41:23.8223238Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.8223339Z args.clear() 2023-01-11T21:41:23.8223661Z buf0 = empty_strided((1, 2, 6, 6), (72, 36, 6, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8223974Z buf1 = empty_strided((1, 2, 6, 6), (72, 36, 6, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8224239Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8224312Z del arg0_1 2023-01-11T21:41:23.8224407Z del arg1_1 2023-01-11T21:41:23.8224529Z return (buf0, buf1, ) 2023-01-11T21:41:23.8224538Z 2023-01-11T21:41:23.8224545Z 2023-01-11T21:41:23.8224648Z if __name__ == "__main__": 2023-01-11T21:41:23.8224821Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8225001Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8225325Z arg0_1 = rand_strided((1, 2, 6, 6), (72, 36, 6, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8225636Z arg1_1 = rand_strided((1, 2, 6, 6), (72, 36, 6, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8225792Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.8225816Z 2023-01-11T21:41:23.8225895Z ok (1.603s) 2023-01-11T21:41:23.8226598Z test_fmod_zero_dim_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8226912Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8227318Z [2023-01-11 21:34:11,151] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 213 2023-01-11T21:41:23.8227735Z [2023-01-11 21:34:12,788] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 213 2023-01-11T21:41:23.8228394Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8228581Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8229056Z [2023-01-11 21:34:12,817] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 214 2023-01-11T21:41:23.8229474Z [2023-01-11 21:34:14,752] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 214 2023-01-11T21:41:23.8229483Z 2023-01-11T21:41:23.8229622Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8229709Z import torch 2023-01-11T21:41:23.8229808Z import random 2023-01-11T21:41:23.8229980Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8230159Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8230168Z 2023-01-11T21:41:23.8230279Z aten = torch.ops.aten 2023-01-11T21:41:23.8230479Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8230611Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8230617Z 2023-01-11T21:41:23.8230624Z 2023-01-11T21:41:23.8230829Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8231129Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8231308Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8231469Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8231617Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8231709Z { 2023-01-11T21:41:23.8231860Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8231950Z { 2023-01-11T21:41:23.8232041Z #pragma omp for 2023-01-11T21:41:23.8232158Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.8232248Z { 2023-01-11T21:41:23.8232454Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8232633Z auto tmp1 = at::vec::Vectorized(in_ptr1[0]); 2023-01-11T21:41:23.8232767Z auto tmp2 = tmp0.fmod(tmp1); 2023-01-11T21:41:23.8232898Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8232990Z } 2023-01-11T21:41:23.8233125Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8233240Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:23.8233332Z { 2023-01-11T21:41:23.8233455Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8233572Z auto tmp1 = in_ptr1[0]; 2023-01-11T21:41:23.8233718Z auto tmp2 = std::fmod(tmp0, tmp1); 2023-01-11T21:41:23.8233896Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8233983Z } 2023-01-11T21:41:23.8234072Z } 2023-01-11T21:41:23.8234157Z } 2023-01-11T21:41:23.8234287Z ''') 2023-01-11T21:41:23.8234295Z 2023-01-11T21:41:23.8234300Z 2023-01-11T21:41:23.8234436Z async_compile.wait(globals()) 2023-01-11T21:41:23.8234550Z del async_compile 2023-01-11T21:41:23.8234557Z 2023-01-11T21:41:23.8234662Z def call(args): 2023-01-11T21:41:23.8234752Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.8234854Z args.clear() 2023-01-11T21:41:23.8235156Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8235491Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.8235599Z del arg0_1 2023-01-11T21:41:23.8235698Z del arg1_1 2023-01-11T21:41:23.8235798Z return (buf0, ) 2023-01-11T21:41:23.8235806Z 2023-01-11T21:41:23.8235811Z 2023-01-11T21:41:23.8235905Z if __name__ == "__main__": 2023-01-11T21:41:23.8236078Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8236271Z from torch._inductor.utils import print_performance 
2023-01-11T21:41:23.8236585Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8236870Z arg1_1 = rand_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8237036Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.8237044Z 2023-01-11T21:41:23.8237050Z 2023-01-11T21:41:23.8237184Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8237290Z import torch 2023-01-11T21:41:23.8237377Z import random 2023-01-11T21:41:23.8237612Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8237790Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8237798Z 2023-01-11T21:41:23.8237913Z aten = torch.ops.aten 2023-01-11T21:41:23.8238116Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8238249Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8238256Z 2023-01-11T21:41:23.8238262Z 2023-01-11T21:41:23.8238469Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8238773Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8238934Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8239089Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8239233Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8239326Z { 2023-01-11T21:41:23.8239467Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8239558Z { 2023-01-11T21:41:23.8239672Z #pragma omp for 2023-01-11T21:41:23.8239776Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.8239866Z { 2023-01-11T21:41:23.8240052Z auto tmp0 = at::vec::Vectorized(in_ptr0[0]); 2023-01-11T21:41:23.8240248Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.8240384Z auto tmp2 = tmp0.fmod(tmp1); 2023-01-11T21:41:23.8240528Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8240619Z } 2023-01-11T21:41:23.8240745Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8240867Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:23.8240957Z { 2023-01-11T21:41:23.8241082Z auto tmp0 = in_ptr0[0]; 2023-01-11T21:41:23.8241205Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.8241361Z auto tmp2 = std::fmod(tmp0, tmp1); 2023-01-11T21:41:23.8241476Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8241563Z } 2023-01-11T21:41:23.8241658Z } 2023-01-11T21:41:23.8241749Z } 2023-01-11T21:41:23.8241870Z ''') 2023-01-11T21:41:23.8241878Z 2023-01-11T21:41:23.8241885Z 2023-01-11T21:41:23.8242020Z async_compile.wait(globals()) 2023-01-11T21:41:23.8242130Z del async_compile 2023-01-11T21:41:23.8242137Z 2023-01-11T21:41:23.8242241Z def call(args): 2023-01-11T21:41:23.8242331Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.8242434Z args.clear() 2023-01-11T21:41:23.8242747Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8243007Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.8243110Z del arg0_1 2023-01-11T21:41:23.8243208Z del arg1_1 2023-01-11T21:41:23.8243313Z return (buf0, ) 2023-01-11T21:41:23.8243321Z 2023-01-11T21:41:23.8243327Z 2023-01-11T21:41:23.8243439Z if __name__ == "__main__": 2023-01-11T21:41:23.8243599Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8243869Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8244171Z arg0_1 = rand_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8244463Z arg1_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 
2023-01-11T21:41:23.8244628Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.8244636Z 2023-01-11T21:41:23.8244729Z ok (3.615s) 2023-01-11T21:41:23.8245456Z test_forced_buffer_realize_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8245646Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8246122Z [2023-01-11 21:34:14,778] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 215 2023-01-11T21:41:23.8246523Z [2023-01-11 21:34:16,352] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 215 2023-01-11T21:41:23.8246548Z 2023-01-11T21:41:23.8246665Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8246762Z import torch 2023-01-11T21:41:23.8246867Z import random 2023-01-11T21:41:23.8247037Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8247216Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8247223Z 2023-01-11T21:41:23.8247337Z aten = torch.ops.aten 2023-01-11T21:41:23.8247538Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8247654Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8247662Z 2023-01-11T21:41:23.8247670Z 2023-01-11T21:41:23.8247874Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8248190Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8248362Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.8248521Z const float* __restrict__ in_ptr0) 2023-01-11T21:41:23.8248613Z { 2023-01-11T21:41:23.8248748Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8248837Z { 2023-01-11T21:41:23.8248931Z #pragma omp for 2023-01-11T21:41:23.8249052Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.8249143Z { 2023-01-11T21:41:23.8249338Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8249537Z auto tmp1 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.8249662Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.8249782Z auto tmp3 = tmp2 * tmp1; 2023-01-11T21:41:23.8249907Z tmp3.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.8249999Z } 2023-01-11T21:41:23.8250134Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8250258Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:23.8250351Z { 2023-01-11T21:41:23.8250474Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8250615Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.8250725Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.8250844Z auto tmp3 = tmp2 * tmp1; 2023-01-11T21:41:23.8250966Z in_out_ptr0[i0] = tmp3; 2023-01-11T21:41:23.8251056Z } 2023-01-11T21:41:23.8251142Z } 2023-01-11T21:41:23.8251226Z } 2023-01-11T21:41:23.8251331Z ''') 2023-01-11T21:41:23.8251339Z 2023-01-11T21:41:23.8251360Z 2023-01-11T21:41:23.8251476Z async_compile.wait(globals()) 2023-01-11T21:41:23.8251584Z del async_compile 2023-01-11T21:41:23.8251591Z 2023-01-11T21:41:23.8251695Z def call(args): 2023-01-11T21:41:23.8251799Z arg0_1, = args 2023-01-11T21:41:23.8251900Z args.clear() 2023-01-11T21:41:23.8252197Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 
2023-01-11T21:41:23.8252397Z buf1 = buf0; del buf0 # reuse 2023-01-11T21:41:23.8252582Z kernel_cpp_0(c_void_p(buf1.data_ptr()), c_void_p(arg0_1.data_ptr())) 2023-01-11T21:41:23.8252681Z del arg0_1 2023-01-11T21:41:23.8252790Z return (buf1, ) 2023-01-11T21:41:23.8252797Z 2023-01-11T21:41:23.8252804Z 2023-01-11T21:41:23.8252915Z if __name__ == "__main__": 2023-01-11T21:41:23.8253078Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8253262Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8253559Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8253723Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8253730Z 2023-01-11T21:41:23.8253806Z ok (1.600s) 2023-01-11T21:41:23.8254578Z test_full_like_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8254774Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8255186Z [2023-01-11 21:34:16,386] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 216 2023-01-11T21:41:23.8255585Z [2023-01-11 21:34:17,926] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 216 2023-01-11T21:41:23.8255593Z 2023-01-11T21:41:23.8255730Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8255832Z import torch 2023-01-11T21:41:23.8255933Z import random 2023-01-11T21:41:23.8256103Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8256265Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8256272Z 2023-01-11T21:41:23.8256390Z aten = torch.ops.aten 2023-01-11T21:41:23.8256599Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8256735Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8256742Z 2023-01-11T21:41:23.8256749Z 2023-01-11T21:41:23.8256955Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8257260Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8257420Z extern "C" void kernel(float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8257506Z { 2023-01-11T21:41:23.8257634Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8257723Z { 2023-01-11T21:41:23.8257836Z #pragma omp for 2023-01-11T21:41:23.8257951Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.8258038Z { 2023-01-11T21:41:23.8258251Z auto tmp0 = at::vec::Vectorized(static_cast(7.777)); 2023-01-11T21:41:23.8258455Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.8258630Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.8258764Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8258854Z } 2023-01-11T21:41:23.8258988Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8259106Z for(long i0=8; i0<8; i0+=1) 2023-01-11T21:41:23.8259194Z { 2023-01-11T21:41:23.8259348Z auto tmp0 = static_cast(7.777); 2023-01-11T21:41:23.8259475Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.8259662Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.8259778Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8259873Z } 2023-01-11T21:41:23.8259961Z } 2023-01-11T21:41:23.8260049Z } 2023-01-11T21:41:23.8260148Z ''') 
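Each dump in this section is the Python wrapper plus embedded C++ kernel that TorchInductor generates while the test runs; the printed module also carries its own __main__ block, so it can be re-run standalone with rand_strided inputs and timed via print_performance. A minimal sketch of how a similar dump can be produced outside the test harness, assuming a PyTorch build with TorchInductor; the function fn and its input shapes below are illustrative only, and the exact debug flag used to print generated code is not shown here because it varies by version:

import torch

# Illustrative function; mirrors the fmod kernel shown earlier in this section.
def fn(a, b):
    return torch.fmod(a, b)

compiled = torch.compile(fn, backend="inductor")
a = torch.rand(10)
b = torch.rand(())
print(compiled(a, b))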
2023-01-11T21:41:23.8260172Z 2023-01-11T21:41:23.8260180Z 2023-01-11T21:41:23.8260302Z async_compile.wait(globals()) 2023-01-11T21:41:23.8260408Z del async_compile 2023-01-11T21:41:23.8260415Z 2023-01-11T21:41:23.8260514Z def call(args): 2023-01-11T21:41:23.8260616Z arg0_1, = args 2023-01-11T21:41:23.8260792Z args.clear() 2023-01-11T21:41:23.8261089Z buf0 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8261242Z kernel_cpp_0(c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.8261333Z return (buf0, ) 2023-01-11T21:41:23.8261340Z 2023-01-11T21:41:23.8261360Z 2023-01-11T21:41:23.8261456Z if __name__ == "__main__": 2023-01-11T21:41:23.8261624Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8261812Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8262114Z arg0_1 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8262269Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8262276Z 2023-01-11T21:41:23.8262507Z ok (1.574s) 2023-01-11T21:41:23.8263331Z test_fuse_tiled_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8263526Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8263936Z [2023-01-11 21:34:17,944] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 217 2023-01-11T21:41:23.8264337Z [2023-01-11 21:34:19,558] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 217 2023-01-11T21:41:23.8264347Z 2023-01-11T21:41:23.8264482Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8264585Z import torch 2023-01-11T21:41:23.8264687Z import random 2023-01-11T21:41:23.8264853Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8265042Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8265050Z 2023-01-11T21:41:23.8265162Z aten = torch.ops.aten 2023-01-11T21:41:23.8265358Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8265495Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8265502Z 2023-01-11T21:41:23.8265508Z 2023-01-11T21:41:23.8265717Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8266023Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8266199Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8266356Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8266496Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.8266624Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8266745Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8266835Z { 2023-01-11T21:41:23.8266985Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8267073Z { 2023-01-11T21:41:23.8267182Z #pragma omp for 2023-01-11T21:41:23.8267300Z for(long i0=0; i0<128; i0+=1) 2023-01-11T21:41:23.8267387Z { 2023-01-11T21:41:23.8267493Z for(long i1=0; i1<16; i1+=1) 2023-01-11T21:41:23.8267585Z { 2023-01-11T21:41:23.8267776Z auto tmp0 = at::vec::Vectorized(in_ptr0[i0]); 2023-01-11T21:41:23.8267976Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i1); 2023-01-11T21:41:23.8268105Z 
auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8268253Z tmp2.store(out_ptr0 + (8*i1) + (128*i0)); 2023-01-11T21:41:23.8268342Z } 2023-01-11T21:41:23.8268455Z #pragma omp simd simdlen(4) 2023-01-11T21:41:23.8268581Z for(long i1=128; i1<128; i1+=1) 2023-01-11T21:41:23.8268670Z { 2023-01-11T21:41:23.8268788Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8268908Z auto tmp1 = in_ptr1[i1]; 2023-01-11T21:41:23.8269029Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8269265Z out_ptr0[i1 + (128*i0)] = tmp2; 2023-01-11T21:41:23.8269340Z } 2023-01-11T21:41:23.8269428Z } 2023-01-11T21:41:23.8269539Z #pragma omp for 2023-01-11T21:41:23.8269658Z for(long i0=0; i0<2048; i0+=1) 2023-01-11T21:41:23.8269751Z { 2023-01-11T21:41:23.8269950Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr2 + 8*i0); 2023-01-11T21:41:23.8270152Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.8270261Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8270394Z tmp2.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8270487Z } 2023-01-11T21:41:23.8270627Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8270753Z for(long i0=16384; i0<16384; i0+=1) 2023-01-11T21:41:23.8270845Z { 2023-01-11T21:41:23.8270973Z auto tmp0 = in_ptr2[i0]; 2023-01-11T21:41:23.8271103Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.8271322Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8271443Z out_ptr1[i0] = tmp2; 2023-01-11T21:41:23.8271535Z } 2023-01-11T21:41:23.8271625Z } 2023-01-11T21:41:23.8271713Z } 2023-01-11T21:41:23.8271856Z ''') 2023-01-11T21:41:23.8271865Z 2023-01-11T21:41:23.8271872Z 2023-01-11T21:41:23.8271992Z async_compile.wait(globals()) 2023-01-11T21:41:23.8272094Z del async_compile 2023-01-11T21:41:23.8272102Z 2023-01-11T21:41:23.8272203Z def call(args): 2023-01-11T21:41:23.8272328Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.8272436Z args.clear() 2023-01-11T21:41:23.8272758Z buf0 = empty_strided((128, 128), (128, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8273069Z buf1 = empty_strided((128, 128), (128, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8273405Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8273495Z del arg0_1 2023-01-11T21:41:23.8273595Z del arg1_1 2023-01-11T21:41:23.8273690Z del arg2_1 2023-01-11T21:41:23.8273879Z return (buf0, buf1, ) 2023-01-11T21:41:23.8273888Z 2023-01-11T21:41:23.8273895Z 2023-01-11T21:41:23.8274014Z if __name__ == "__main__": 2023-01-11T21:41:23.8274190Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8274372Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8274673Z arg0_1 = rand_strided((128, 1), (1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8274978Z arg1_1 = rand_strided((1, 128), (128, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8275282Z arg2_1 = rand_strided((128, 128), (128, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8275461Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8275469Z 2023-01-11T21:41:23.8275561Z ok (1.634s) 2023-01-11T21:41:23.8276282Z test_gather1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
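For reference, a hedged guess at the eager computation behind the fuse_tiled kernel above: the first loop nest broadcasts a (128, 1) tensor against a (1, 128) tensor, and the second loop adds 1 to a (128, 128) tensor, matching the two buffers returned by call(). A small sketch under that assumption (the name fuse_tiled_ref is illustrative):

import torch

def fuse_tiled_ref(a, b, c):
    # out_ptr0: broadcast add of a (128, 1) and b (1, 128)
    # out_ptr1: elementwise c + 1
    return a + b, c + 1

a = torch.rand(128, 1)
b = torch.rand(1, 128)
c = torch.rand(128, 128)
out0, out1 = fuse_tiled_ref(a, b, c)
print(out0.shape, out1.shape)  # both torch.Size([128, 128])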
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8276459Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8276838Z [2023-01-11 21:34:19,587] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 218 2023-01-11T21:41:23.8277223Z [2023-01-11 21:34:21,174] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 218 2023-01-11T21:41:23.8277231Z 2023-01-11T21:41:23.8277360Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8277443Z import torch 2023-01-11T21:41:23.8277540Z import random 2023-01-11T21:41:23.8277710Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8277890Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8277973Z 2023-01-11T21:41:23.8278090Z aten = torch.ops.aten 2023-01-11T21:41:23.8278283Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8278418Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8278426Z 2023-01-11T21:41:23.8278432Z 2023-01-11T21:41:23.8278643Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8278934Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8279106Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.8279254Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8279396Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8279535Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8279621Z { 2023-01-11T21:41:23.8279772Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8279844Z { 2023-01-11T21:41:23.8280015Z #pragma omp for 2023-01-11T21:41:23.8280136Z for(long i0=0; i0<20; i0+=1) 2023-01-11T21:41:23.8280228Z { 2023-01-11T21:41:23.8280338Z #pragma GCC ivdep 2023-01-11T21:41:23.8280459Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:23.8280549Z { 2023-01-11T21:41:23.8280624Z { 2023-01-11T21:41:23.8280720Z { 2023-01-11T21:41:23.8280870Z auto tmp0 = in_ptr0[i1 + (10*i0)]; 2023-01-11T21:41:23.8281023Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.8281157Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8281306Z auto tmp3 = in_ptr1[tmp2 + (6*i1)]; 2023-01-11T21:41:23.8281440Z out_ptr0[i1 + (10*i0)] = tmp3; 2023-01-11T21:41:23.8281558Z out_ptr1[i1 + (10*i0)] = tmp3; 2023-01-11T21:41:23.8281654Z } 2023-01-11T21:41:23.8281746Z } 2023-01-11T21:41:23.8281844Z } 2023-01-11T21:41:23.8281934Z } 2023-01-11T21:41:23.8282021Z } 2023-01-11T21:41:23.8282107Z } 2023-01-11T21:41:23.8282214Z ''') 2023-01-11T21:41:23.8282224Z 2023-01-11T21:41:23.8282230Z 2023-01-11T21:41:23.8282367Z async_compile.wait(globals()) 2023-01-11T21:41:23.8282471Z del async_compile 2023-01-11T21:41:23.8282479Z 2023-01-11T21:41:23.8282586Z def call(args): 2023-01-11T21:41:23.8282699Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.8282804Z args.clear() 2023-01-11T21:41:23.8283134Z buf0 = empty_strided((4, 5, 10, 1), (50, 10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8283441Z buf1 = empty_strided((4, 5, 10, 1), (50, 10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8283726Z kernel_cpp_0(c_void_p(arg1_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8283826Z del arg0_1 2023-01-11T21:41:23.8283924Z del arg1_1 2023-01-11T21:41:23.8284046Z return (buf0, buf1, ) 2023-01-11T21:41:23.8284055Z 2023-01-11T21:41:23.8284061Z 2023-01-11T21:41:23.8284170Z if __name__ 
== "__main__": 2023-01-11T21:41:23.8284335Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8284516Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8284827Z arg0_1 = rand_strided((1, 1, 10, 6), (60, 60, 6, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8285149Z arg1_1 = rand_strided((4, 5, 10, 1), (50, 10, 1, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8285315Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.8285324Z 2023-01-11T21:41:23.8285424Z ok (1.615s) 2023-01-11T21:41:23.8285579Z test_gather2_cpu (__main__.CpuTests) ... ok (0.001s) 2023-01-11T21:41:23.8286321Z test_gather_scatter_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8286604Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8287019Z [2023-01-11 21:34:21,275] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 219 2023-01-11T21:41:23.8287431Z [2023-01-11 21:34:22,850] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 219 2023-01-11T21:41:23.8287441Z 2023-01-11T21:41:23.8287579Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8287666Z import torch 2023-01-11T21:41:23.8287770Z import random 2023-01-11T21:41:23.8287939Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8288105Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8288113Z 2023-01-11T21:41:23.8288218Z aten = torch.ops.aten 2023-01-11T21:41:23.8288475Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8288615Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8288623Z 2023-01-11T21:41:23.8288630Z 2023-01-11T21:41:23.8288841Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8289134Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8289306Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.8289458Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8289602Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8289689Z { 2023-01-11T21:41:23.8289834Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8289925Z { 2023-01-11T21:41:23.8290021Z #pragma omp for 2023-01-11T21:41:23.8290138Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.8290228Z { 2023-01-11T21:41:23.8290437Z auto tmp0 = at::vec::Vectorized(static_cast(0)); 2023-01-11T21:41:23.8290580Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8290669Z } 2023-01-11T21:41:23.8290811Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8290917Z for(long i0=512; i0<512; i0+=1) 2023-01-11T21:41:23.8291009Z { 2023-01-11T21:41:23.8291153Z auto tmp0 = static_cast(0); 2023-01-11T21:41:23.8291263Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:23.8291360Z } 2023-01-11T21:41:23.8291471Z #pragma omp for 2023-01-11T21:41:23.8291573Z for(long i0=0; i0<80; i0+=1) 2023-01-11T21:41:23.8291658Z { 2023-01-11T21:41:23.8291776Z #pragma GCC ivdep 2023-01-11T21:41:23.8291898Z for(long i1=0; i1<32; i1+=1) 2023-01-11T21:41:23.8291989Z { 2023-01-11T21:41:23.8292084Z { 2023-01-11T21:41:23.8292179Z { 
2023-01-11T21:41:23.8292301Z auto tmp0 = in_ptr0[80 + i0]; 2023-01-11T21:41:23.8292444Z auto tmp1 = in_ptr0[i0]; 2023-01-11T21:41:23.8292592Z auto tmp2 = in_ptr1[i1 + (32*tmp1)]; 2023-01-11T21:41:23.8292741Z auto tmp3 = in_ptr1[i1 + (32*tmp0)]; 2023-01-11T21:41:23.8292969Z auto tmp4 = tmp2 - tmp3; 2023-01-11T21:41:23.8293121Z auto tmp5 = static_cast(1); 2023-01-11T21:41:23.8293258Z auto tmp6 = tmp4 + tmp5; 2023-01-11T21:41:23.8293413Z atomic_add(&out_ptr0[i1 + (32*tmp0)], tmp6); 2023-01-11T21:41:23.8293514Z } 2023-01-11T21:41:23.8293609Z } 2023-01-11T21:41:23.8293702Z } 2023-01-11T21:41:23.8293793Z } 2023-01-11T21:41:23.8293881Z } 2023-01-11T21:41:23.8293964Z } 2023-01-11T21:41:23.8294072Z ''') 2023-01-11T21:41:23.8294081Z 2023-01-11T21:41:23.8294103Z 2023-01-11T21:41:23.8294234Z async_compile.wait(globals()) 2023-01-11T21:41:23.8294345Z del async_compile 2023-01-11T21:41:23.8294431Z 2023-01-11T21:41:23.8294542Z def call(args): 2023-01-11T21:41:23.8294653Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.8294759Z args.clear() 2023-01-11T21:41:23.8295070Z buf0 = empty_strided((16, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8295320Z kernel_cpp_0(c_void_p(arg1_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.8295407Z del arg0_1 2023-01-11T21:41:23.8295503Z del arg1_1 2023-01-11T21:41:23.8295605Z return (buf0, ) 2023-01-11T21:41:23.8295612Z 2023-01-11T21:41:23.8295619Z 2023-01-11T21:41:23.8295729Z if __name__ == "__main__": 2023-01-11T21:41:23.8295896Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8296081Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8296393Z arg0_1 = rand_strided((16, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8296678Z arg1_1 = rand_strided((2, 80), (80, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8296910Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.8296919Z 2023-01-11T21:41:23.8297019Z ok (1.675s) 2023-01-11T21:41:23.8297748Z test_gelu_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8297934Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8298345Z [2023-01-11 21:34:22,893] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 220 2023-01-11T21:41:23.8298752Z [2023-01-11 21:34:24,502] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 220 2023-01-11T21:41:23.8298759Z 2023-01-11T21:41:23.8298910Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8299015Z import torch 2023-01-11T21:41:23.8299118Z import random 2023-01-11T21:41:23.8299286Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8299473Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8299481Z 2023-01-11T21:41:23.8299601Z aten = torch.ops.aten 2023-01-11T21:41:23.8299804Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8299942Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8299950Z 2023-01-11T21:41:23.8299957Z 2023-01-11T21:41:23.8300163Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8300469Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8300650Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8300776Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8300921Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8301021Z { 2023-01-11T21:41:23.8301165Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8301257Z { 2023-01-11T21:41:23.8301370Z #pragma omp for 2023-01-11T21:41:23.8301489Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.8301571Z { 2023-01-11T21:41:23.8301784Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8301990Z auto tmp1 = at::vec::Vectorized(static_cast(0.5)); 2023-01-11T21:41:23.8302114Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.8302484Z auto tmp3 = at::vec::Vectorized(static_cast(0.7071067811865476)); 2023-01-11T21:41:23.8302612Z auto tmp4 = tmp0 * tmp3; 2023-01-11T21:41:23.8302736Z auto tmp5 = tmp4.erf(); 2023-01-11T21:41:23.8302935Z auto tmp6 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.8303040Z auto tmp7 = tmp5 + tmp6; 2023-01-11T21:41:23.8303269Z auto tmp8 = tmp2 * tmp7; 2023-01-11T21:41:23.8303469Z auto tmp9 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.8303596Z auto tmp10 = tmp8 + tmp9; 2023-01-11T21:41:23.8303716Z auto tmp11 = tmp0 + tmp6; 2023-01-11T21:41:23.8303834Z auto tmp12 = tmp11 * tmp1; 2023-01-11T21:41:23.8303953Z auto tmp13 = tmp11 * tmp3; 2023-01-11T21:41:23.8304057Z auto tmp14 = tmp13.erf(); 2023-01-11T21:41:23.8304179Z auto tmp15 = tmp14 + tmp6; 2023-01-11T21:41:23.8304302Z auto tmp16 = tmp12 * tmp15; 2023-01-11T21:41:23.8304434Z tmp10.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8304566Z tmp16.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8304653Z } 2023-01-11T21:41:23.8304794Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8304901Z for(long i0=256; i0<256; i0+=1) 2023-01-11T21:41:23.8304990Z { 2023-01-11T21:41:23.8305115Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8305372Z auto tmp1 = static_cast(0.5); 2023-01-11T21:41:23.8305501Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.8305665Z auto tmp3 = static_cast(0.7071067811865476); 2023-01-11T21:41:23.8305791Z auto tmp4 = tmp0 * tmp3; 2023-01-11T21:41:23.8305911Z auto tmp5 = std::erf(tmp4); 2023-01-11T21:41:23.8306052Z auto tmp6 = static_cast(1); 
2023-01-11T21:41:23.8306176Z auto tmp7 = tmp5 + tmp6; 2023-01-11T21:41:23.8306295Z auto tmp8 = tmp2 * tmp7; 2023-01-11T21:41:23.8306437Z auto tmp9 = static_cast(2); 2023-01-11T21:41:23.8306559Z auto tmp10 = tmp8 + tmp9; 2023-01-11T21:41:23.8306676Z auto tmp11 = tmp0 + tmp6; 2023-01-11T21:41:23.8306782Z auto tmp12 = tmp11 * tmp1; 2023-01-11T21:41:23.8306901Z auto tmp13 = tmp11 * tmp3; 2023-01-11T21:41:23.8307040Z auto tmp14 = std::erf(tmp13); 2023-01-11T21:41:23.8307172Z auto tmp15 = tmp14 + tmp6; 2023-01-11T21:41:23.8307297Z auto tmp16 = tmp12 * tmp15; 2023-01-11T21:41:23.8307411Z out_ptr0[i0] = tmp10; 2023-01-11T21:41:23.8307523Z out_ptr1[i0] = tmp16; 2023-01-11T21:41:23.8307599Z } 2023-01-11T21:41:23.8307688Z } 2023-01-11T21:41:23.8307779Z } 2023-01-11T21:41:23.8307921Z ''') 2023-01-11T21:41:23.8307929Z 2023-01-11T21:41:23.8307935Z 2023-01-11T21:41:23.8308065Z async_compile.wait(globals()) 2023-01-11T21:41:23.8308170Z del async_compile 2023-01-11T21:41:23.8308177Z 2023-01-11T21:41:23.8308278Z def call(args): 2023-01-11T21:41:23.8308362Z arg0_1, = args 2023-01-11T21:41:23.8308462Z args.clear() 2023-01-11T21:41:23.8308775Z buf0 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8309077Z buf1 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8309322Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8309428Z del arg0_1 2023-01-11T21:41:23.8309541Z return (buf0, buf1, ) 2023-01-11T21:41:23.8309548Z 2023-01-11T21:41:23.8309555Z 2023-01-11T21:41:23.8309667Z if __name__ == "__main__": 2023-01-11T21:41:23.8309830Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8310013Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8310324Z arg0_1 = rand_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8310486Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8310494Z 2023-01-11T21:41:23.8310591Z ok (1.652s) 2023-01-11T21:41:23.8311332Z test_glu_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
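The gelu kernel above evaluates the exact erf-based formulation, 0.5 * x * (1 + erf(x / sqrt(2))), which is where the 0.7071067811865476 constant comes from; the test then adds 2 to one output and shifts the input by 1 for the other. A sketch of the erf form checked against torch.nn.functional.gelu (which defaults to the exact, non-tanh variant):

import math
import torch
import torch.nn.functional as F

def gelu_ref(x):
    # Exact GELU; 1/sqrt(2) matches the constant in the generated kernel.
    return 0.5 * x * (1 + torch.erf(x / math.sqrt(2)))

x = torch.randn(16, 16)
assert torch.allclose(gelu_ref(x), F.gelu(x), atol=1e-6)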
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8311601Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8312028Z [2023-01-11 21:34:24,544] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 221 2023-01-11T21:41:23.8312451Z [2023-01-11 21:34:26,226] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 221 2023-01-11T21:41:23.8312459Z 2023-01-11T21:41:23.8312579Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8312677Z import torch 2023-01-11T21:41:23.8312777Z import random 2023-01-11T21:41:23.8312951Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8313135Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8313143Z 2023-01-11T21:41:23.8313256Z aten = torch.ops.aten 2023-01-11T21:41:23.8313455Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8313591Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8313662Z 2023-01-11T21:41:23.8313669Z 2023-01-11T21:41:23.8313943Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8314253Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8314435Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8314581Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8314723Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.8314860Z float* __restrict__ out_ptr2) 2023-01-11T21:41:23.8314945Z { 2023-01-11T21:41:23.8315079Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8315171Z { 2023-01-11T21:41:23.8315282Z #pragma omp for 2023-01-11T21:41:23.8315408Z for(long i0=0; i0<1024; i0+=1) 2023-01-11T21:41:23.8315503Z { 2023-01-11T21:41:23.8315625Z #pragma GCC ivdep 2023-01-11T21:41:23.8315751Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:23.8315845Z { 2023-01-11T21:41:23.8315940Z { 2023-01-11T21:41:23.8316042Z { 2023-01-11T21:41:23.8316194Z auto tmp0 = in_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:23.8316348Z auto tmp1 = in_ptr0[4 + i1 + (8*i0)]; 2023-01-11T21:41:23.8316602Z auto tmp2 = std::exp(-tmp1); 2023-01-11T21:41:23.8316735Z auto tmp3 = 1 / (1 + tmp2); 2023-01-11T21:41:23.8316858Z auto tmp4 = tmp0 * tmp3; 2023-01-11T21:41:23.8316994Z out_ptr0[i1 + (4*i0)] = tmp4; 2023-01-11T21:41:23.8317091Z } 2023-01-11T21:41:23.8317187Z } 2023-01-11T21:41:23.8317280Z } 2023-01-11T21:41:23.8317368Z } 2023-01-11T21:41:23.8317482Z #pragma omp for 2023-01-11T21:41:23.8317589Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8317681Z { 2023-01-11T21:41:23.8317804Z for(long i1=0; i1<64; i1+=1) 2023-01-11T21:41:23.8317900Z { 2023-01-11T21:41:23.8318117Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (1024*i0)); 2023-01-11T21:41:23.8318334Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr0 + 512 + (8*i1) + (1024*i0)); 2023-01-11T21:41:23.8318535Z auto tmp2 = decltype(tmp1)(1)/(decltype(tmp1)(1) + tmp1.neg().exp()); 2023-01-11T21:41:23.8318664Z auto tmp3 = tmp0 * tmp2; 2023-01-11T21:41:23.8318801Z tmp3.store(out_ptr1 + (8*i1) + (512*i0)); 2023-01-11T21:41:23.8318887Z } 2023-01-11T21:41:23.8319021Z #pragma omp simd simdlen(4) 2023-01-11T21:41:23.8319150Z for(long i1=512; i1<512; i1+=1) 2023-01-11T21:41:23.8319242Z { 2023-01-11T21:41:23.8319384Z auto tmp0 = in_ptr0[i1 + (1024*i0)]; 2023-01-11T21:41:23.8319534Z auto tmp1 = in_ptr0[512 + i1 + (1024*i0)]; 2023-01-11T21:41:23.8319837Z auto tmp2 = std::exp(-tmp1); 
2023-01-11T21:41:23.8319963Z auto tmp3 = 1 / (1 + tmp2); 2023-01-11T21:41:23.8320086Z auto tmp4 = tmp0 * tmp3; 2023-01-11T21:41:23.8320223Z out_ptr1[i1 + (512*i0)] = tmp4; 2023-01-11T21:41:23.8320315Z } 2023-01-11T21:41:23.8320402Z } 2023-01-11T21:41:23.8320500Z #pragma omp for 2023-01-11T21:41:23.8320622Z for(long i0=0; i0<128; i0+=1) 2023-01-11T21:41:23.8320714Z { 2023-01-11T21:41:23.8320837Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:23.8320925Z { 2023-01-11T21:41:23.8321136Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (64*i0)); 2023-01-11T21:41:23.8321346Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr0 + 32 + (8*i1) + (64*i0)); 2023-01-11T21:41:23.8321543Z auto tmp2 = decltype(tmp1)(1)/(decltype(tmp1)(1) + tmp1.neg().exp()); 2023-01-11T21:41:23.8321655Z auto tmp3 = tmp0 * tmp2; 2023-01-11T21:41:23.8321858Z tmp3.store(out_ptr2 + (8*i1) + (32*i0)); 2023-01-11T21:41:23.8321950Z } 2023-01-11T21:41:23.8322084Z #pragma omp simd simdlen(4) 2023-01-11T21:41:23.8322210Z for(long i1=32; i1<32; i1+=1) 2023-01-11T21:41:23.8322298Z { 2023-01-11T21:41:23.8322444Z auto tmp0 = in_ptr0[i1 + (64*i0)]; 2023-01-11T21:41:23.8322575Z auto tmp1 = in_ptr0[32 + i1 + (64*i0)]; 2023-01-11T21:41:23.8322799Z auto tmp2 = std::exp(-tmp1); 2023-01-11T21:41:23.8322930Z auto tmp3 = 1 / (1 + tmp2); 2023-01-11T21:41:23.8323056Z auto tmp4 = tmp0 * tmp3; 2023-01-11T21:41:23.8323181Z out_ptr2[i1 + (32*i0)] = tmp4; 2023-01-11T21:41:23.8323274Z } 2023-01-11T21:41:23.8323364Z } 2023-01-11T21:41:23.8323441Z } 2023-01-11T21:41:23.8323525Z } 2023-01-11T21:41:23.8323640Z ''') 2023-01-11T21:41:23.8323649Z 2023-01-11T21:41:23.8323661Z 2023-01-11T21:41:23.8323795Z async_compile.wait(globals()) 2023-01-11T21:41:23.8323899Z del async_compile 2023-01-11T21:41:23.8323907Z 2023-01-11T21:41:23.8324013Z def call(args): 2023-01-11T21:41:23.8324110Z arg0_1, = args 2023-01-11T21:41:23.8324201Z args.clear() 2023-01-11T21:41:23.8324532Z buf0 = empty_strided((8, 16, 8, 4), (512, 32, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8324855Z buf1 = empty_strided((8, 8, 8, 8), (512, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8325176Z buf2 = empty_strided((8, 16, 4, 8), (512, 32, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8325459Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.8325562Z del arg0_1 2023-01-11T21:41:23.8325685Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:23.8325693Z 2023-01-11T21:41:23.8325699Z 2023-01-11T21:41:23.8325808Z if __name__ == "__main__": 2023-01-11T21:41:23.8325978Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8326140Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8326472Z arg0_1 = rand_strided((8, 16, 8, 8), (1024, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8326628Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8326635Z 2023-01-11T21:41:23.8326729Z ok (1.724s) 2023-01-11T21:41:23.8327461Z test_grid_sampler_2d_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
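The three glu kernels above split the (8, 16, 8, 8) input in half along dims -1, 1, and 2 respectively and compute first_half * sigmoid(second_half); the offsets 4 + i1 + (8*i0), 512 + i1 + (1024*i0), and 32 + i1 + (64*i0) index the second half along each of those dims. A sketch of the equivalent eager computation:

import torch
import torch.nn.functional as F

def glu_ref(x, dim):
    a, b = x.chunk(2, dim=dim)
    return a * torch.sigmoid(b)

x = torch.randn(8, 16, 8, 8)
for dim in (-1, 1, 2):
    assert torch.allclose(glu_ref(x, dim), F.glu(x, dim))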
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8327648Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8328062Z [2023-01-11 21:34:27,679] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 222 2023-01-11T21:41:23.8328458Z [2023-01-11 21:34:28,312] torch._inductor.scheduler: [DEBUG] remove_buffer('buf5') 2023-01-11T21:41:23.8328762Z [2023-01-11 21:34:28,312] torch._inductor.scheduler: [DEBUG] remove_buffer('buf3') 2023-01-11T21:41:23.8329051Z [2023-01-11 21:34:28,312] torch._inductor.scheduler: [DEBUG] remove_buffer('buf7') 2023-01-11T21:41:23.8329059Z 2023-01-11T21:41:23.8329202Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8329308Z import torch 2023-01-11T21:41:23.8329413Z import random 2023-01-11T21:41:23.8329585Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8329766Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8329774Z 2023-01-11T21:41:23.8329895Z aten = torch.ops.aten 2023-01-11T21:41:23.8330094Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8330216Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8330228Z 2023-01-11T21:41:23.8330315Z 2023-01-11T21:41:23.8330539Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8330860Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8331043Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.8331198Z float* __restrict__ in_out_ptr1, 2023-01-11T21:41:23.8331353Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8331512Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8331664Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8331801Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.8331950Z float* __restrict__ out_ptr2, 2023-01-11T21:41:23.8332099Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.8332249Z long* __restrict__ out_ptr4, 2023-01-11T21:41:23.8332395Z long* __restrict__ out_ptr5, 2023-01-11T21:41:23.8332541Z float* __restrict__ out_ptr6, 2023-01-11T21:41:23.8332680Z long* __restrict__ out_ptr7, 2023-01-11T21:41:23.8332800Z long* __restrict__ out_ptr8, 2023-01-11T21:41:23.8332939Z float* __restrict__ out_ptr9, 2023-01-11T21:41:23.8333076Z long* __restrict__ out_ptr10, 2023-01-11T21:41:23.8333215Z long* __restrict__ out_ptr11, 2023-01-11T21:41:23.8333363Z float* __restrict__ out_ptr12, 2023-01-11T21:41:23.8333505Z long* __restrict__ out_ptr13, 2023-01-11T21:41:23.8333644Z long* __restrict__ out_ptr14, 2023-01-11T21:41:23.8333772Z float* __restrict__ out_ptr15) 2023-01-11T21:41:23.8333865Z { 2023-01-11T21:41:23.8334016Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8334104Z { 2023-01-11T21:41:23.8334215Z #pragma omp for 2023-01-11T21:41:23.8334352Z for(long i0=0; i0<495616; i0+=1) 2023-01-11T21:41:23.8334443Z { 2023-01-11T21:41:23.8334521Z { 2023-01-11T21:41:23.8334613Z { 2023-01-11T21:41:23.8334753Z auto tmp0 = in_ptr0[2*i0]; 2023-01-11T21:41:23.8334894Z auto tmp9 = in_ptr0[1 + (2*i0)]; 2023-01-11T21:41:23.8335056Z auto tmp1 = static_cast(175.5); 2023-01-11T21:41:23.8335187Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.8335321Z auto tmp3 = tmp2 + tmp1; 2023-01-11T21:41:23.8335459Z auto tmp4 = std::floor(tmp3); 2023-01-11T21:41:23.8335606Z auto tmp5 = static_cast(0); 2023-01-11T21:41:23.8335739Z auto tmp6 = tmp4 >= tmp5; 2023-01-11T21:41:23.8335895Z auto tmp7 = static_cast(352); 
2023-01-11T21:41:23.8336032Z auto tmp8 = tmp4 < tmp7; 2023-01-11T21:41:23.8336250Z auto tmp10 = tmp9 * tmp1; 2023-01-11T21:41:23.8336381Z auto tmp11 = tmp10 + tmp1; 2023-01-11T21:41:23.8336513Z auto tmp12 = std::floor(tmp11); 2023-01-11T21:41:23.8336647Z auto tmp13 = tmp12 >= tmp5; 2023-01-11T21:41:23.8336782Z auto tmp14 = tmp12 < tmp7; 2023-01-11T21:41:23.8336915Z auto tmp15 = tmp13 && tmp14; 2023-01-11T21:41:23.8337046Z auto tmp16 = tmp8 && tmp15; 2023-01-11T21:41:23.8337182Z auto tmp17 = tmp6 && tmp16; 2023-01-11T21:41:23.8337335Z auto tmp18 = static_cast(1); 2023-01-11T21:41:23.8337452Z auto tmp19 = tmp4 + tmp18; 2023-01-11T21:41:23.8337689Z auto tmp20 = tmp19 - tmp3; 2023-01-11T21:41:23.8337823Z auto tmp21 = tmp12 + tmp18; 2023-01-11T21:41:23.8338036Z auto tmp22 = tmp21 - tmp11; 2023-01-11T21:41:23.8338228Z auto tmp23 = tmp20 * tmp22; 2023-01-11T21:41:23.8338377Z auto tmp24 = tmp17 ? tmp23 : tmp5; 2023-01-11T21:41:23.8338510Z auto tmp25 = tmp19 >= tmp5; 2023-01-11T21:41:23.8338645Z auto tmp26 = tmp19 < tmp7; 2023-01-11T21:41:23.8338766Z auto tmp27 = tmp26 && tmp15; 2023-01-11T21:41:23.8338906Z auto tmp28 = tmp25 && tmp27; 2023-01-11T21:41:23.8339115Z auto tmp29 = tmp3 - tmp4; 2023-01-11T21:41:23.8339246Z auto tmp30 = tmp29 * tmp22; 2023-01-11T21:41:23.8339391Z auto tmp31 = tmp28 ? tmp30 : tmp5; 2023-01-11T21:41:23.8339523Z auto tmp32 = tmp21 >= tmp5; 2023-01-11T21:41:23.8339656Z auto tmp33 = tmp21 < tmp7; 2023-01-11T21:41:23.8339777Z auto tmp34 = tmp32 && tmp33; 2023-01-11T21:41:23.8339913Z auto tmp35 = tmp8 && tmp34; 2023-01-11T21:41:23.8340052Z auto tmp36 = tmp6 && tmp35; 2023-01-11T21:41:23.8340272Z auto tmp37 = tmp11 - tmp12; 2023-01-11T21:41:23.8340401Z auto tmp38 = tmp20 * tmp37; 2023-01-11T21:41:23.8340546Z auto tmp39 = tmp36 ? tmp38 : tmp5; 2023-01-11T21:41:23.8340679Z auto tmp40 = tmp26 && tmp34; 2023-01-11T21:41:23.8340798Z auto tmp41 = tmp25 && tmp40; 2023-01-11T21:41:23.8340937Z auto tmp42 = tmp29 * tmp37; 2023-01-11T21:41:23.8341081Z auto tmp43 = tmp41 ? tmp42 : tmp5; 2023-01-11T21:41:23.8341244Z auto tmp44 = static_cast(176.0); 2023-01-11T21:41:23.8341376Z auto tmp45 = tmp0 * tmp44; 2023-01-11T21:41:23.8341518Z auto tmp46 = tmp45 + tmp1; 2023-01-11T21:41:23.8341677Z auto tmp47 = static_cast(0.0); 2023-01-11T21:41:23.8341882Z auto tmp48 = (tmp47 != tmp47) ? tmp47 : std::max(tmp46, tmp47); 2023-01-11T21:41:23.8342044Z auto tmp49 = static_cast(351.0); 2023-01-11T21:41:23.8342248Z auto tmp50 = (tmp49 != tmp49) ? tmp49 : std::min(tmp48, tmp49); 2023-01-11T21:41:23.8342573Z auto tmp51 = std::floor(tmp50); 2023-01-11T21:41:23.8342718Z auto tmp52 = tmp51 >= tmp5; 2023-01-11T21:41:23.8342856Z auto tmp53 = tmp51 < tmp7; 2023-01-11T21:41:23.8342994Z auto tmp54 = tmp9 * tmp44; 2023-01-11T21:41:23.8343130Z auto tmp55 = tmp54 + tmp1; 2023-01-11T21:41:23.8343303Z auto tmp56 = (tmp47 != tmp47) ? tmp47 : std::max(tmp55, tmp47); 2023-01-11T21:41:23.8343482Z auto tmp57 = (tmp49 != tmp49) ? tmp49 : std::min(tmp56, tmp49); 2023-01-11T21:41:23.8343628Z auto tmp58 = std::floor(tmp57); 2023-01-11T21:41:23.8343765Z auto tmp59 = tmp58 >= tmp5; 2023-01-11T21:41:23.8343902Z auto tmp60 = tmp58 < tmp7; 2023-01-11T21:41:23.8344143Z auto tmp61 = tmp59 && tmp60; 2023-01-11T21:41:23.8344277Z auto tmp62 = tmp53 && tmp61; 2023-01-11T21:41:23.8344405Z auto tmp63 = tmp52 && tmp62; 2023-01-11T21:41:23.8344543Z auto tmp64 = static_cast(tmp51); 2023-01-11T21:41:23.8344692Z auto tmp65 = static_cast(0); 2023-01-11T21:41:23.8344842Z auto tmp66 = tmp63 ? 
tmp64 : tmp65; 2023-01-11T21:41:23.8344996Z auto tmp67 = static_cast(tmp58); 2023-01-11T21:41:23.8345141Z auto tmp68 = tmp63 ? tmp67 : tmp65; 2023-01-11T21:41:23.8345275Z auto tmp69 = tmp51 + tmp18; 2023-01-11T21:41:23.8345519Z auto tmp70 = tmp69 - tmp50; 2023-01-11T21:41:23.8345637Z auto tmp71 = tmp58 + tmp18; 2023-01-11T21:41:23.8345848Z auto tmp72 = tmp71 - tmp57; 2023-01-11T21:41:23.8346064Z auto tmp73 = tmp70 * tmp72; 2023-01-11T21:41:23.8346207Z auto tmp74 = tmp63 ? tmp73 : tmp5; 2023-01-11T21:41:23.8346342Z auto tmp75 = tmp69 >= tmp5; 2023-01-11T21:41:23.8346477Z auto tmp76 = tmp69 < tmp7; 2023-01-11T21:41:23.8346614Z auto tmp77 = tmp76 && tmp61; 2023-01-11T21:41:23.8346752Z auto tmp78 = tmp75 && tmp77; 2023-01-11T21:41:23.8346890Z auto tmp79 = static_cast(tmp69); 2023-01-11T21:41:23.8347039Z auto tmp80 = tmp78 ? tmp79 : tmp65; 2023-01-11T21:41:23.8347183Z auto tmp81 = tmp78 ? tmp67 : tmp65; 2023-01-11T21:41:23.8347391Z auto tmp82 = tmp50 - tmp51; 2023-01-11T21:41:23.8347518Z auto tmp83 = tmp82 * tmp72; 2023-01-11T21:41:23.8347661Z auto tmp84 = tmp78 ? tmp83 : tmp5; 2023-01-11T21:41:23.8347792Z auto tmp85 = tmp71 >= tmp5; 2023-01-11T21:41:23.8347919Z auto tmp86 = tmp71 < tmp7; 2023-01-11T21:41:23.8348050Z auto tmp87 = tmp85 && tmp86; 2023-01-11T21:41:23.8348185Z auto tmp88 = tmp53 && tmp87; 2023-01-11T21:41:23.8348322Z auto tmp89 = tmp52 && tmp88; 2023-01-11T21:41:23.8348468Z auto tmp90 = tmp89 ? tmp64 : tmp65; 2023-01-11T21:41:23.8348624Z auto tmp91 = static_cast(tmp71); 2023-01-11T21:41:23.8348777Z auto tmp92 = tmp89 ? tmp91 : tmp65; 2023-01-11T21:41:23.8348974Z auto tmp93 = tmp57 - tmp58; 2023-01-11T21:41:23.8349103Z auto tmp94 = tmp70 * tmp93; 2023-01-11T21:41:23.8349245Z auto tmp95 = tmp89 ? tmp94 : tmp5; 2023-01-11T21:41:23.8349385Z auto tmp96 = tmp76 && tmp87; 2023-01-11T21:41:23.8349522Z auto tmp97 = tmp75 && tmp96; 2023-01-11T21:41:23.8349668Z auto tmp98 = tmp97 ? tmp79 : tmp65; 2023-01-11T21:41:23.8349818Z auto tmp99 = tmp97 ? tmp91 : tmp65; 2023-01-11T21:41:23.8349949Z auto tmp100 = tmp82 * tmp93; 2023-01-11T21:41:23.8350080Z auto tmp101 = tmp97 ? 
tmp100 : tmp5; 2023-01-11T21:41:23.8350202Z out_ptr0[i0] = tmp24; 2023-01-11T21:41:23.8350322Z out_ptr1[i0] = tmp31; 2023-01-11T21:41:23.8350447Z out_ptr2[i0] = tmp39; 2023-01-11T21:41:23.8350567Z out_ptr3[i0] = tmp43; 2023-01-11T21:41:23.8350693Z out_ptr4[i0] = tmp66; 2023-01-11T21:41:23.8350805Z out_ptr5[i0] = tmp68; 2023-01-11T21:41:23.8350906Z out_ptr6[i0] = tmp74; 2023-01-11T21:41:23.8351025Z out_ptr7[i0] = tmp80; 2023-01-11T21:41:23.8351137Z out_ptr8[i0] = tmp81; 2023-01-11T21:41:23.8351253Z out_ptr9[i0] = tmp84; 2023-01-11T21:41:23.8351379Z out_ptr10[i0] = tmp90; 2023-01-11T21:41:23.8351578Z out_ptr11[i0] = tmp92; 2023-01-11T21:41:23.8351697Z out_ptr12[i0] = tmp95; 2023-01-11T21:41:23.8351801Z out_ptr13[i0] = tmp98; 2023-01-11T21:41:23.8351919Z out_ptr14[i0] = tmp99; 2023-01-11T21:41:23.8352045Z out_ptr15[i0] = tmp101; 2023-01-11T21:41:23.8352140Z } 2023-01-11T21:41:23.8352229Z } 2023-01-11T21:41:23.8352318Z } 2023-01-11T21:41:23.8352432Z #pragma omp for 2023-01-11T21:41:23.8352540Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:23.8352630Z { 2023-01-11T21:41:23.8352747Z #pragma GCC ivdep 2023-01-11T21:41:23.8352863Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:23.8352951Z { 2023-01-11T21:41:23.8353068Z #pragma GCC ivdep 2023-01-11T21:41:23.8353184Z for(long i2=0; i2<123904; i2+=1) 2023-01-11T21:41:23.8353277Z { 2023-01-11T21:41:23.8353431Z { 2023-01-11T21:41:23.8353529Z { 2023-01-11T21:41:23.8353683Z auto tmp2 = in_ptr0[(2*i2) + (247808*i0)]; 2023-01-11T21:41:23.8353915Z auto tmp11 = in_ptr0[1 + (2*i2) + (247808*i0)]; 2023-01-11T21:41:23.8354073Z auto tmp51 = out_ptr0[i2 + (123904*i0)]; 2023-01-11T21:41:23.8354213Z auto tmp53 = out_ptr1[i2 + (123904*i0)]; 2023-01-11T21:41:23.8354360Z auto tmp56 = out_ptr2[i2 + (123904*i0)]; 2023-01-11T21:41:23.8354509Z auto tmp59 = out_ptr3[i2 + (123904*i0)]; 2023-01-11T21:41:23.8354670Z auto tmp0 = static_cast(i0); 2023-01-11T21:41:23.8354830Z auto tmp1 = static_cast(i1); 2023-01-11T21:41:23.8354989Z auto tmp3 = static_cast(175.5); 2023-01-11T21:41:23.8355135Z auto tmp4 = tmp2 * tmp3; 2023-01-11T21:41:23.8355274Z auto tmp5 = tmp4 + tmp3; 2023-01-11T21:41:23.8355416Z auto tmp6 = std::floor(tmp5); 2023-01-11T21:41:23.8355573Z auto tmp7 = static_cast(0); 2023-01-11T21:41:23.8355715Z auto tmp8 = tmp6 >= tmp7; 2023-01-11T21:41:23.8355869Z auto tmp9 = static_cast(352); 2023-01-11T21:41:23.8356013Z auto tmp10 = tmp6 < tmp9; 2023-01-11T21:41:23.8356156Z auto tmp12 = tmp11 * tmp3; 2023-01-11T21:41:23.8356285Z auto tmp13 = tmp12 + tmp3; 2023-01-11T21:41:23.8356435Z auto tmp14 = std::floor(tmp13); 2023-01-11T21:41:23.8356559Z auto tmp15 = tmp14 >= tmp7; 2023-01-11T21:41:23.8356692Z auto tmp16 = tmp14 < tmp9; 2023-01-11T21:41:23.8356848Z auto tmp17 = tmp15 && tmp16; 2023-01-11T21:41:23.8356993Z auto tmp18 = tmp10 && tmp17; 2023-01-11T21:41:23.8357129Z auto tmp19 = tmp8 && tmp18; 2023-01-11T21:41:23.8357292Z auto tmp20 = static_cast(tmp14); 2023-01-11T21:41:23.8357447Z auto tmp21 = static_cast(0); 2023-01-11T21:41:23.8357586Z auto tmp22 = tmp19 ? tmp20 : tmp21; 2023-01-11T21:41:23.8357746Z auto tmp23 = static_cast(tmp6); 2023-01-11T21:41:23.8357900Z auto tmp24 = tmp19 ? 
tmp23 : tmp21; 2023-01-11T21:41:23.8358101Z auto tmp25 = in_ptr1[tmp24 + (352*tmp22) + (123904*tmp1) + (371712*tmp0)]; 2023-01-11T21:41:23.8358275Z auto tmp26 = static_cast(1); 2023-01-11T21:41:23.8358422Z auto tmp27 = tmp6 + tmp26; 2023-01-11T21:41:23.8358646Z auto tmp28 = tmp27 >= tmp7; 2023-01-11T21:41:23.8358780Z auto tmp29 = tmp27 < tmp9; 2023-01-11T21:41:23.8358903Z auto tmp30 = tmp29 && tmp17; 2023-01-11T21:41:23.8359046Z auto tmp31 = tmp28 && tmp30; 2023-01-11T21:41:23.8359196Z auto tmp32 = tmp31 ? tmp20 : tmp21; 2023-01-11T21:41:23.8359357Z auto tmp33 = static_cast(tmp27); 2023-01-11T21:41:23.8359514Z auto tmp34 = tmp31 ? tmp33 : tmp21; 2023-01-11T21:41:23.8359704Z auto tmp35 = in_ptr1[tmp34 + (352*tmp32) + (123904*tmp1) + (371712*tmp0)]; 2023-01-11T21:41:23.8359848Z auto tmp36 = tmp14 + tmp26; 2023-01-11T21:41:23.8359983Z auto tmp37 = tmp36 >= tmp7; 2023-01-11T21:41:23.8360106Z auto tmp38 = tmp36 < tmp9; 2023-01-11T21:41:23.8360304Z auto tmp39 = tmp37 && tmp38; 2023-01-11T21:41:23.8360446Z auto tmp40 = tmp10 && tmp39; 2023-01-11T21:41:23.8360592Z auto tmp41 = tmp8 && tmp40; 2023-01-11T21:41:23.8360768Z auto tmp42 = static_cast(tmp36); 2023-01-11T21:41:23.8360941Z auto tmp43 = tmp41 ? tmp42 : tmp21; 2023-01-11T21:41:23.8361110Z auto tmp44 = tmp41 ? tmp23 : tmp21; 2023-01-11T21:41:23.8361302Z auto tmp45 = in_ptr1[tmp44 + (352*tmp43) + (123904*tmp1) + (371712*tmp0)]; 2023-01-11T21:41:23.8361449Z auto tmp46 = tmp29 && tmp39; 2023-01-11T21:41:23.8361588Z auto tmp47 = tmp28 && tmp46; 2023-01-11T21:41:23.8361745Z auto tmp48 = tmp47 ? tmp42 : tmp21; 2023-01-11T21:41:23.8361900Z auto tmp49 = tmp47 ? tmp33 : tmp21; 2023-01-11T21:41:23.8362102Z auto tmp50 = in_ptr1[tmp49 + (352*tmp48) + (123904*tmp1) + (371712*tmp0)]; 2023-01-11T21:41:23.8362248Z auto tmp52 = tmp25 * tmp51; 2023-01-11T21:41:23.8362390Z auto tmp54 = tmp35 * tmp53; 2023-01-11T21:41:23.8362512Z auto tmp55 = tmp52 + tmp54; 2023-01-11T21:41:23.8362658Z auto tmp57 = tmp45 * tmp56; 2023-01-11T21:41:23.8362800Z auto tmp58 = tmp55 + tmp57; 2023-01-11T21:41:23.8362934Z auto tmp60 = tmp50 * tmp59; 2023-01-11T21:41:23.8363072Z auto tmp61 = tmp58 + tmp60; 2023-01-11T21:41:23.8363233Z in_out_ptr0[i2 + (123904*i1) + (371712*i0)] = tmp61; 2023-01-11T21:41:23.8363332Z } 2023-01-11T21:41:23.8363413Z } 2023-01-11T21:41:23.8363500Z } 2023-01-11T21:41:23.8363595Z } 2023-01-11T21:41:23.8363695Z } 2023-01-11T21:41:23.8363817Z #pragma omp for 2023-01-11T21:41:23.8363936Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:23.8364025Z { 2023-01-11T21:41:23.8364130Z #pragma GCC ivdep 2023-01-11T21:41:23.8364250Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:23.8364349Z { 2023-01-11T21:41:23.8364462Z #pragma GCC ivdep 2023-01-11T21:41:23.8364591Z for(long i2=0; i2<123904; i2+=1) 2023-01-11T21:41:23.8364688Z { 2023-01-11T21:41:23.8364785Z { 2023-01-11T21:41:23.8364871Z { 2023-01-11T21:41:23.8365023Z auto tmp2 = out_ptr5[i2 + (123904*i0)]; 2023-01-11T21:41:23.8365179Z auto tmp3 = out_ptr4[i2 + (123904*i0)]; 2023-01-11T21:41:23.8365329Z auto tmp5 = out_ptr6[i2 + (123904*i0)]; 2023-01-11T21:41:23.8365476Z auto tmp7 = out_ptr8[i2 + (123904*i0)]; 2023-01-11T21:41:23.8365704Z auto tmp8 = out_ptr7[i2 + (123904*i0)]; 2023-01-11T21:41:23.8365860Z auto tmp10 = out_ptr9[i2 + (123904*i0)]; 2023-01-11T21:41:23.8365998Z auto tmp13 = out_ptr11[i2 + (123904*i0)]; 2023-01-11T21:41:23.8366146Z auto tmp14 = out_ptr10[i2 + (123904*i0)]; 2023-01-11T21:41:23.8366299Z auto tmp16 = out_ptr12[i2 + (123904*i0)]; 2023-01-11T21:41:23.8366445Z auto tmp19 = out_ptr14[i2 + (123904*i0)]; 
2023-01-11T21:41:23.8366588Z auto tmp20 = out_ptr13[i2 + (123904*i0)]; 2023-01-11T21:41:23.8366728Z auto tmp22 = out_ptr15[i2 + (123904*i0)]; 2023-01-11T21:41:23.8366887Z auto tmp0 = static_cast(i0); 2023-01-11T21:41:23.8367036Z auto tmp1 = static_cast(i1); 2023-01-11T21:41:23.8367260Z auto tmp4 = in_ptr1[tmp3 + (352*tmp2) + (123904*tmp1) + (371712*tmp0)]; 2023-01-11T21:41:23.8367406Z auto tmp6 = tmp4 * tmp5; 2023-01-11T21:41:23.8367591Z auto tmp9 = in_ptr1[tmp8 + (352*tmp7) + (123904*tmp1) + (371712*tmp0)]; 2023-01-11T21:41:23.8367732Z auto tmp11 = tmp9 * tmp10; 2023-01-11T21:41:23.8367872Z auto tmp12 = tmp6 + tmp11; 2023-01-11T21:41:23.8368059Z auto tmp15 = in_ptr1[tmp14 + (352*tmp13) + (123904*tmp1) + (371712*tmp0)]; 2023-01-11T21:41:23.8368198Z auto tmp17 = tmp15 * tmp16; 2023-01-11T21:41:23.8368338Z auto tmp18 = tmp12 + tmp17; 2023-01-11T21:41:23.8368512Z auto tmp21 = in_ptr1[tmp20 + (352*tmp19) + (123904*tmp1) + (371712*tmp0)]; 2023-01-11T21:41:23.8368645Z auto tmp23 = tmp21 * tmp22; 2023-01-11T21:41:23.8368788Z auto tmp24 = tmp18 + tmp23; 2023-01-11T21:41:23.8368956Z in_out_ptr1[i2 + (123904*i1) + (371712*i0)] = tmp24; 2023-01-11T21:41:23.8369054Z } 2023-01-11T21:41:23.8369147Z } 2023-01-11T21:41:23.8369239Z } 2023-01-11T21:41:23.8369315Z } 2023-01-11T21:41:23.8369402Z } 2023-01-11T21:41:23.8369490Z } 2023-01-11T21:41:23.8369577Z } 2023-01-11T21:41:23.8369722Z ''') 2023-01-11T21:41:23.8369732Z 2023-01-11T21:41:23.8369739Z 2023-01-11T21:41:23.8369872Z async_compile.wait(globals()) 2023-01-11T21:41:23.8369975Z del async_compile 2023-01-11T21:41:23.8369982Z 2023-01-11T21:41:23.8370069Z def call(args): 2023-01-11T21:41:23.8370179Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.8370282Z args.clear() 2023-01-11T21:41:23.8370622Z buf0 = empty_strided((4, 352, 352), (123904, 352, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8370966Z buf2 = empty_strided((4, 352, 352), (123904, 352, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8371303Z buf4 = empty_strided((4, 352, 352), (123904, 352, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8371630Z buf6 = empty_strided((4, 352, 352), (123904, 352, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8371951Z buf9 = empty_strided((4, 352, 352), (123904, 352, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8372257Z buf10 = empty_strided((4, 352, 352), (123904, 352, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8372577Z buf11 = empty_strided((4, 352, 352), (123904, 352, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8372893Z buf12 = empty_strided((4, 352, 352), (123904, 352, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8373208Z buf13 = empty_strided((4, 352, 352), (123904, 352, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8373534Z buf14 = empty_strided((4, 352, 352), (123904, 352, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8373928Z buf15 = empty_strided((4, 352, 352), (123904, 352, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8374242Z buf16 = empty_strided((4, 352, 352), (123904, 352, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8374565Z buf17 = empty_strided((4, 352, 352), (123904, 352, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8374876Z buf19 = empty_strided((4, 352, 352), (123904, 352, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8375181Z buf20 = empty_strided((4, 352, 352), (123904, 352, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8375491Z buf21 = empty_strided((4, 352, 
352), (123904, 352, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8375837Z buf1 = empty_strided((4, 3, 352, 352), (371712, 123904, 352, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8375960Z buf8 = buf1; del buf1 # reuse 2023-01-11T21:41:23.8376374Z buf18 = empty_strided((4, 3, 352, 352), (371712, 123904, 352, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8376508Z buf22 = buf18; del buf18 # reuse 2023-01-11T21:41:23.8377385Z kernel_cpp_0(c_void_p(buf8.data_ptr()), c_void_p(buf22.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(buf6.data_ptr()), c_void_p(buf9.data_ptr()), c_void_p(buf10.data_ptr()), c_void_p(buf11.data_ptr()), c_void_p(buf12.data_ptr()), c_void_p(buf13.data_ptr()), c_void_p(buf14.data_ptr()), c_void_p(buf15.data_ptr()), c_void_p(buf16.data_ptr()), c_void_p(buf17.data_ptr()), c_void_p(buf19.data_ptr()), c_void_p(buf20.data_ptr()), c_void_p(buf21.data_ptr())) 2023-01-11T21:41:23.8377488Z del arg0_1 2023-01-11T21:41:23.8377587Z del arg1_1 2023-01-11T21:41:23.8377688Z return (buf8, buf22, ) 2023-01-11T21:41:23.8377697Z 2023-01-11T21:41:23.8377719Z 2023-01-11T21:41:23.8377817Z if __name__ == "__main__": 2023-01-11T21:41:23.8377987Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8378180Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8378539Z arg0_1 = rand_strided((4, 3, 352, 352), (371712, 123904, 352, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8378875Z arg1_1 = rand_strided((4, 352, 352, 2), (247808, 704, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8379055Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.8379475Z [2023-01-11 21:34:30,009] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 222 2023-01-11T21:41:23.8379484Z 2023-01-11T21:41:23.8379580Z ok (3.878s) 2023-01-11T21:41:23.8380273Z test_hardsigmoid_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
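The grid_sampler_2d kernel above performs bilinear sampling of a (4, 3, 352, 352) input at a (4, 352, 352, 2) grid and produces two outputs whose coordinate transforms differ; reading the constants, the 175.5 scale suggests align_corners=True with zero padding for the first output and the 176.0 scale with clamping to [0, 351] suggests align_corners=False with border padding for the second, though this mapping of constants to parameters is an assumption. A sketch of the corresponding eager calls under that assumption:

import torch
import torch.nn.functional as F

x = torch.randn(4, 3, 352, 352)
grid = torch.rand(4, 352, 352, 2) * 2 - 1  # normalized sampling coordinates in [-1, 1]

out_a = F.grid_sample(x, grid, mode="bilinear", align_corners=True)
out_b = F.grid_sample(x, grid, mode="bilinear", padding_mode="border", align_corners=False)
print(out_a.shape, out_b.shape)  # both torch.Size([4, 3, 352, 352])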
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8380464Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8380858Z [2023-01-11 21:34:30,157] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 223 2023-01-11T21:41:23.8381267Z [2023-01-11 21:34:31,759] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 223 2023-01-11T21:41:23.8381275Z 2023-01-11T21:41:23.8381416Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8381523Z import torch 2023-01-11T21:41:23.8381627Z import random 2023-01-11T21:41:23.8381799Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8381983Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8381992Z 2023-01-11T21:41:23.8382095Z aten = torch.ops.aten 2023-01-11T21:41:23.8382282Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8382556Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8382673Z 2023-01-11T21:41:23.8382680Z 2023-01-11T21:41:23.8382896Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8383203Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8383382Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8383531Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8383672Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.8383793Z float* __restrict__ out_ptr2) 2023-01-11T21:41:23.8383882Z { 2023-01-11T21:41:23.8384031Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8384126Z { 2023-01-11T21:41:23.8384242Z #pragma omp for 2023-01-11T21:41:23.8384361Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8384450Z { 2023-01-11T21:41:23.8384635Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8384944Z auto tmp1 = at::vec::Vectorized(static_cast(3)); 2023-01-11T21:41:23.8385074Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8385278Z auto tmp3 = at::vec::Vectorized(static_cast(0.0)); 2023-01-11T21:41:23.8385439Z auto tmp4 = at::vec::maximum(tmp2, tmp3); 2023-01-11T21:41:23.8385637Z auto tmp5 = at::vec::Vectorized(static_cast(6.0)); 2023-01-11T21:41:23.8385790Z auto tmp6 = at::vec::minimum(tmp4, tmp5); 2023-01-11T21:41:23.8385982Z auto tmp7 = at::vec::Vectorized(static_cast(6)); 2023-01-11T21:41:23.8386087Z auto tmp8 = tmp6 / tmp7; 2023-01-11T21:41:23.8386208Z auto tmp9 = tmp2 + tmp1; 2023-01-11T21:41:23.8386365Z auto tmp10 = at::vec::maximum(tmp9, tmp3); 2023-01-11T21:41:23.8386528Z auto tmp11 = at::vec::minimum(tmp10, tmp5); 2023-01-11T21:41:23.8386652Z auto tmp12 = tmp11 / tmp7; 2023-01-11T21:41:23.8386867Z auto tmp13 = tmp0 - tmp1; 2023-01-11T21:41:23.8386993Z auto tmp14 = tmp13 + tmp1; 2023-01-11T21:41:23.8387133Z auto tmp15 = at::vec::maximum(tmp14, tmp3); 2023-01-11T21:41:23.8387291Z auto tmp16 = at::vec::minimum(tmp15, tmp5); 2023-01-11T21:41:23.8387416Z auto tmp17 = tmp16 / tmp7; 2023-01-11T21:41:23.8387549Z tmp8.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8387684Z tmp12.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8387818Z tmp17.store(out_ptr2 + 8*i0); 2023-01-11T21:41:23.8387913Z } 2023-01-11T21:41:23.8388038Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8388158Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.8388250Z { 2023-01-11T21:41:23.8388377Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8388521Z auto tmp1 = static_cast(3); 2023-01-11T21:41:23.8388643Z auto tmp2 = 
tmp0 + tmp1; 2023-01-11T21:41:23.8388790Z auto tmp3 = static_cast(0.0); 2023-01-11T21:41:23.8388980Z auto tmp4 = (tmp3 != tmp3) ? tmp3 : std::max(tmp2, tmp3); 2023-01-11T21:41:23.8389113Z auto tmp5 = static_cast(6.0); 2023-01-11T21:41:23.8389298Z auto tmp6 = (tmp5 != tmp5) ? tmp5 : std::min(tmp4, tmp5); 2023-01-11T21:41:23.8389447Z auto tmp7 = static_cast(6); 2023-01-11T21:41:23.8389575Z auto tmp8 = tmp6 / tmp7; 2023-01-11T21:41:23.8389695Z auto tmp9 = tmp2 + tmp1; 2023-01-11T21:41:23.8389884Z auto tmp10 = (tmp3 != tmp3) ? tmp3 : std::max(tmp9, tmp3); 2023-01-11T21:41:23.8390065Z auto tmp11 = (tmp5 != tmp5) ? tmp5 : std::min(tmp10, tmp5); 2023-01-11T21:41:23.8390175Z auto tmp12 = tmp11 / tmp7; 2023-01-11T21:41:23.8390378Z auto tmp13 = tmp0 - tmp1; 2023-01-11T21:41:23.8390502Z auto tmp14 = tmp13 + tmp1; 2023-01-11T21:41:23.8390681Z auto tmp15 = (tmp3 != tmp3) ? tmp3 : std::max(tmp14, tmp3); 2023-01-11T21:41:23.8390943Z auto tmp16 = (tmp5 != tmp5) ? tmp5 : std::min(tmp15, tmp5); 2023-01-11T21:41:23.8391067Z auto tmp17 = tmp16 / tmp7; 2023-01-11T21:41:23.8391189Z out_ptr0[i0] = tmp8; 2023-01-11T21:41:23.8391305Z out_ptr1[i0] = tmp12; 2023-01-11T21:41:23.8391404Z out_ptr2[i0] = tmp17; 2023-01-11T21:41:23.8391495Z } 2023-01-11T21:41:23.8391587Z } 2023-01-11T21:41:23.8391675Z } 2023-01-11T21:41:23.8391799Z ''') 2023-01-11T21:41:23.8391808Z 2023-01-11T21:41:23.8391814Z 2023-01-11T21:41:23.8391948Z async_compile.wait(globals()) 2023-01-11T21:41:23.8392054Z del async_compile 2023-01-11T21:41:23.8392062Z 2023-01-11T21:41:23.8392153Z def call(args): 2023-01-11T21:41:23.8392252Z arg0_1, = args 2023-01-11T21:41:23.8392355Z args.clear() 2023-01-11T21:41:23.8392656Z buf0 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8392947Z buf1 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8393302Z buf2 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8393585Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.8393665Z del arg0_1 2023-01-11T21:41:23.8393852Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:23.8393862Z 2023-01-11T21:41:23.8393869Z 2023-01-11T21:41:23.8393979Z if __name__ == "__main__": 2023-01-11T21:41:23.8394146Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8394326Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8394620Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8394780Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8394787Z 2023-01-11T21:41:23.8394879Z ok (1.660s) 2023-01-11T21:41:23.8395598Z test_hardswish_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8395789Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8396178Z [2023-01-11 21:34:31,821] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 224 2023-01-11T21:41:23.8396591Z [2023-01-11 21:34:33,422] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 224 2023-01-11T21:41:23.8396600Z 2023-01-11T21:41:23.8396741Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8396842Z import torch 2023-01-11T21:41:23.8396951Z import random 2023-01-11T21:41:23.8397122Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8397303Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8397317Z 2023-01-11T21:41:23.8397431Z aten = torch.ops.aten 2023-01-11T21:41:23.8397614Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8397750Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8397757Z 2023-01-11T21:41:23.8397763Z 2023-01-11T21:41:23.8397968Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8398270Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8398449Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8398593Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8398734Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.8398868Z float* __restrict__ out_ptr2) 2023-01-11T21:41:23.8398937Z { 2023-01-11T21:41:23.8399077Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8399170Z { 2023-01-11T21:41:23.8399281Z #pragma omp for 2023-01-11T21:41:23.8399481Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8399570Z { 2023-01-11T21:41:23.8399760Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8399960Z auto tmp1 = at::vec::Vectorized(static_cast(3)); 2023-01-11T21:41:23.8400085Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8400290Z auto tmp3 = at::vec::Vectorized(static_cast(0.0)); 2023-01-11T21:41:23.8400454Z auto tmp4 = at::vec::maximum(tmp2, tmp3); 2023-01-11T21:41:23.8400658Z auto tmp5 = at::vec::Vectorized(static_cast(6.0)); 2023-01-11T21:41:23.8400812Z auto tmp6 = at::vec::minimum(tmp4, tmp5); 2023-01-11T21:41:23.8400936Z auto tmp7 = tmp0 * tmp6; 2023-01-11T21:41:23.8401136Z auto tmp8 = at::vec::Vectorized(static_cast(6)); 2023-01-11T21:41:23.8401250Z auto tmp9 = tmp7 / tmp8; 2023-01-11T21:41:23.8401434Z auto tmp10 = tmp2 + tmp1; 2023-01-11T21:41:23.8401601Z auto tmp11 = at::vec::maximum(tmp10, tmp3); 2023-01-11T21:41:23.8401764Z auto tmp12 = at::vec::minimum(tmp11, tmp5); 2023-01-11T21:41:23.8401887Z auto tmp13 = tmp2 * tmp12; 2023-01-11T21:41:23.8402007Z auto tmp14 = tmp13 / tmp8; 2023-01-11T21:41:23.8402200Z auto tmp15 = tmp0 - tmp1; 2023-01-11T21:41:23.8402308Z auto tmp16 = tmp15 + tmp1; 2023-01-11T21:41:23.8402459Z auto tmp17 = at::vec::maximum(tmp16, tmp3); 2023-01-11T21:41:23.8402614Z auto tmp18 = at::vec::minimum(tmp17, tmp5); 2023-01-11T21:41:23.8402743Z auto tmp19 = tmp15 * tmp18; 2023-01-11T21:41:23.8402869Z auto tmp20 = tmp19 / tmp8; 2023-01-11T21:41:23.8403001Z tmp9.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8403135Z tmp14.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8403251Z tmp20.store(out_ptr2 + 8*i0); 2023-01-11T21:41:23.8403344Z } 2023-01-11T21:41:23.8403488Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8403611Z for(long i0=64; i0<64; i0+=1) 
2023-01-11T21:41:23.8403701Z { 2023-01-11T21:41:23.8403820Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8403964Z auto tmp1 = static_cast(3); 2023-01-11T21:41:23.8404065Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8404205Z auto tmp3 = static_cast(0.0); 2023-01-11T21:41:23.8404387Z auto tmp4 = (tmp3 != tmp3) ? tmp3 : std::max(tmp2, tmp3); 2023-01-11T21:41:23.8404533Z auto tmp5 = static_cast(6.0); 2023-01-11T21:41:23.8404714Z auto tmp6 = (tmp5 != tmp5) ? tmp5 : std::min(tmp4, tmp5); 2023-01-11T21:41:23.8404838Z auto tmp7 = tmp0 * tmp6; 2023-01-11T21:41:23.8404982Z auto tmp8 = static_cast(6); 2023-01-11T21:41:23.8405100Z auto tmp9 = tmp7 / tmp8; 2023-01-11T21:41:23.8405209Z auto tmp10 = tmp2 + tmp1; 2023-01-11T21:41:23.8405393Z auto tmp11 = (tmp3 != tmp3) ? tmp3 : std::max(tmp10, tmp3); 2023-01-11T21:41:23.8405573Z auto tmp12 = (tmp5 != tmp5) ? tmp5 : std::min(tmp11, tmp5); 2023-01-11T21:41:23.8405702Z auto tmp13 = tmp2 * tmp12; 2023-01-11T21:41:23.8405825Z auto tmp14 = tmp13 / tmp8; 2023-01-11T21:41:23.8406031Z auto tmp15 = tmp0 - tmp1; 2023-01-11T21:41:23.8406151Z auto tmp16 = tmp15 + tmp1; 2023-01-11T21:41:23.8406311Z auto tmp17 = (tmp3 != tmp3) ? tmp3 : std::max(tmp16, tmp3); 2023-01-11T21:41:23.8406492Z auto tmp18 = (tmp5 != tmp5) ? tmp5 : std::min(tmp17, tmp5); 2023-01-11T21:41:23.8406612Z auto tmp19 = tmp15 * tmp18; 2023-01-11T21:41:23.8406732Z auto tmp20 = tmp19 / tmp8; 2023-01-11T21:41:23.8406845Z out_ptr0[i0] = tmp9; 2023-01-11T21:41:23.8406958Z out_ptr1[i0] = tmp14; 2023-01-11T21:41:23.8407071Z out_ptr2[i0] = tmp20; 2023-01-11T21:41:23.8407217Z } 2023-01-11T21:41:23.8407312Z } 2023-01-11T21:41:23.8407399Z } 2023-01-11T21:41:23.8407513Z ''') 2023-01-11T21:41:23.8407523Z 2023-01-11T21:41:23.8407531Z 2023-01-11T21:41:23.8407654Z async_compile.wait(globals()) 2023-01-11T21:41:23.8407754Z del async_compile 2023-01-11T21:41:23.8407760Z 2023-01-11T21:41:23.8407856Z def call(args): 2023-01-11T21:41:23.8407947Z arg0_1, = args 2023-01-11T21:41:23.8408039Z args.clear() 2023-01-11T21:41:23.8408327Z buf0 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8408606Z buf1 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8408885Z buf2 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8409168Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.8409266Z del arg0_1 2023-01-11T21:41:23.8409384Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:23.8409451Z 2023-01-11T21:41:23.8409459Z 2023-01-11T21:41:23.8409574Z if __name__ == "__main__": 2023-01-11T21:41:23.8409730Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8409918Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8410217Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8410373Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8410380Z 2023-01-11T21:41:23.8410473Z ok (1.693s) 2023-01-11T21:41:23.8411208Z test_hardtanh_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8411390Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8411802Z [2023-01-11 21:34:33,529] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 225 2023-01-11T21:41:23.8412211Z [2023-01-11 21:34:35,134] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 225 2023-01-11T21:41:23.8412220Z 2023-01-11T21:41:23.8412347Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8412455Z import torch 2023-01-11T21:41:23.8412558Z import random 2023-01-11T21:41:23.8412729Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8412906Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8412913Z 2023-01-11T21:41:23.8413031Z aten = torch.ops.aten 2023-01-11T21:41:23.8413230Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8413365Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8413373Z 2023-01-11T21:41:23.8413380Z 2023-01-11T21:41:23.8413565Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8413870Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8414043Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8414187Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8414323Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.8414460Z float* __restrict__ out_ptr2) 2023-01-11T21:41:23.8414548Z { 2023-01-11T21:41:23.8414676Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8414767Z { 2023-01-11T21:41:23.8414877Z #pragma omp for 2023-01-11T21:41:23.8414990Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8415082Z { 2023-01-11T21:41:23.8415281Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8415602Z auto tmp1 = at::vec::Vectorized(static_cast(-1.0)); 2023-01-11T21:41:23.8415762Z auto tmp2 = at::vec::maximum(tmp0, tmp1); 2023-01-11T21:41:23.8416026Z auto tmp3 = at::vec::Vectorized(static_cast(1.0)); 2023-01-11T21:41:23.8416186Z auto tmp4 = at::vec::minimum(tmp2, tmp3); 2023-01-11T21:41:23.8416378Z auto tmp5 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.8416508Z auto tmp6 = tmp0 + tmp5; 2023-01-11T21:41:23.8416664Z auto tmp7 = at::vec::maximum(tmp6, tmp1); 2023-01-11T21:41:23.8416815Z auto tmp8 = at::vec::minimum(tmp7, tmp3); 2023-01-11T21:41:23.8417009Z auto tmp9 = tmp0 - tmp5; 2023-01-11T21:41:23.8417167Z auto tmp10 = at::vec::maximum(tmp9, tmp1); 2023-01-11T21:41:23.8417292Z auto tmp11 = at::vec::minimum(tmp10, tmp3); 2023-01-11T21:41:23.8417408Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8417527Z tmp8.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8417642Z tmp11.store(out_ptr2 + 8*i0); 2023-01-11T21:41:23.8417719Z } 2023-01-11T21:41:23.8417891Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8417999Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.8418063Z { 2023-01-11T21:41:23.8418171Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8418370Z auto tmp1 = static_cast(-1.0); 2023-01-11T21:41:23.8418528Z auto tmp2 = (tmp1 != tmp1) ? tmp1 : std::max(tmp0, tmp1); 2023-01-11T21:41:23.8418659Z auto tmp3 = static_cast(1.0); 2023-01-11T21:41:23.8418810Z auto tmp4 = (tmp3 != tmp3) ? 
tmp3 : std::min(tmp2, tmp3); 2023-01-11T21:41:23.8418935Z auto tmp5 = static_cast(1); 2023-01-11T21:41:23.8419027Z auto tmp6 = tmp0 + tmp5; 2023-01-11T21:41:23.8419182Z auto tmp7 = (tmp1 != tmp1) ? tmp1 : std::max(tmp6, tmp1); 2023-01-11T21:41:23.8419335Z auto tmp8 = (tmp3 != tmp3) ? tmp3 : std::min(tmp7, tmp3); 2023-01-11T21:41:23.8419499Z auto tmp9 = tmp0 - tmp5; 2023-01-11T21:41:23.8419663Z auto tmp10 = (tmp1 != tmp1) ? tmp1 : std::max(tmp9, tmp1); 2023-01-11T21:41:23.8419819Z auto tmp11 = (tmp3 != tmp3) ? tmp3 : std::min(tmp10, tmp3); 2023-01-11T21:41:23.8419923Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:23.8420025Z out_ptr1[i0] = tmp8; 2023-01-11T21:41:23.8420110Z out_ptr2[i0] = tmp11; 2023-01-11T21:41:23.8420190Z } 2023-01-11T21:41:23.8420268Z } 2023-01-11T21:41:23.8420347Z } 2023-01-11T21:41:23.8420449Z ''') 2023-01-11T21:41:23.8420458Z 2023-01-11T21:41:23.8420463Z 2023-01-11T21:41:23.8420576Z async_compile.wait(globals()) 2023-01-11T21:41:23.8420670Z del async_compile 2023-01-11T21:41:23.8420677Z 2023-01-11T21:41:23.8420754Z def call(args): 2023-01-11T21:41:23.8420847Z arg0_1, = args 2023-01-11T21:41:23.8420939Z args.clear() 2023-01-11T21:41:23.8421198Z buf0 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8421460Z buf1 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8421727Z buf2 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8421983Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.8422069Z del arg0_1 2023-01-11T21:41:23.8422159Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:23.8422165Z 2023-01-11T21:41:23.8422171Z 2023-01-11T21:41:23.8422266Z if __name__ == "__main__": 2023-01-11T21:41:23.8422584Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8422750Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8423011Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8423147Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8423154Z 2023-01-11T21:41:23.8423238Z ok (1.676s) 2023-01-11T21:41:23.8423872Z test_horizonal_fusion1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8424137Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8424474Z [2023-01-11 21:34:35,154] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 226 2023-01-11T21:41:23.8424827Z [2023-01-11 21:34:36,729] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 226 2023-01-11T21:41:23.8424833Z 2023-01-11T21:41:23.8424952Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8425041Z import torch 2023-01-11T21:41:23.8425128Z import random 2023-01-11T21:41:23.8425272Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8425483Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8425494Z 2023-01-11T21:41:23.8425593Z aten = torch.ops.aten 2023-01-11T21:41:23.8425750Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8425865Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8425871Z 2023-01-11T21:41:23.8425876Z 2023-01-11T21:41:23.8426052Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8426319Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8426472Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8426604Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8426733Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.8426856Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8426963Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.8427081Z float* __restrict__ out_ptr2) 2023-01-11T21:41:23.8427166Z { 2023-01-11T21:41:23.8427295Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8427373Z { 2023-01-11T21:41:23.8427471Z #pragma omp for 2023-01-11T21:41:23.8427575Z for(long i0=0; i0<256; i0+=1) 2023-01-11T21:41:23.8427639Z { 2023-01-11T21:41:23.8427817Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8427985Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.8428094Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8428209Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8428288Z } 2023-01-11T21:41:23.8428408Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8428503Z for(long i0=2048; i0<2048; i0+=1) 2023-01-11T21:41:23.8428580Z { 2023-01-11T21:41:23.8428685Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8428790Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.8428899Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8429000Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8429079Z } 2023-01-11T21:41:23.8429161Z #pragma omp for 2023-01-11T21:41:23.8429264Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8429343Z { 2023-01-11T21:41:23.8429444Z #pragma GCC ivdep 2023-01-11T21:41:23.8429550Z for(long i1=0; i1<16; i1+=1) 2023-01-11T21:41:23.8429631Z { 2023-01-11T21:41:23.8429742Z for(long i2=0; i2<2; i2+=1) 2023-01-11T21:41:23.8429812Z { 2023-01-11T21:41:23.8430001Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i2) + (16*i1) + (256*i0)); 2023-01-11T21:41:23.8430165Z auto tmp1 = at::vec::Vectorized(in_ptr2[i1]); 2023-01-11T21:41:23.8430356Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr1 + (8*i2) + (16*i1) + (256*i0)); 2023-01-11T21:41:23.8430545Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.8430766Z auto tmp4 = tmp3 * tmp1; 2023-01-11T21:41:23.8430905Z tmp2.store(out_ptr1 + (8*i2) + (16*i1) + (256*i0)); 2023-01-11T21:41:23.8431043Z tmp4.store(out_ptr2 + 
(8*i2) + (16*i1) + (256*i0)); 2023-01-11T21:41:23.8431110Z } 2023-01-11T21:41:23.8431233Z #pragma omp simd simdlen(4) 2023-01-11T21:41:23.8431344Z for(long i2=16; i2<16; i2+=1) 2023-01-11T21:41:23.8431429Z { 2023-01-11T21:41:23.8431563Z auto tmp0 = in_ptr0[i2 + (16*i1) + (256*i0)]; 2023-01-11T21:41:23.8431676Z auto tmp1 = in_ptr2[i1]; 2023-01-11T21:41:23.8431810Z auto tmp3 = in_ptr1[i2 + (16*i1) + (256*i0)]; 2023-01-11T21:41:23.8431976Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.8432090Z auto tmp4 = tmp3 * tmp1; 2023-01-11T21:41:23.8432218Z out_ptr1[i2 + (16*i1) + (256*i0)] = tmp2; 2023-01-11T21:41:23.8432393Z out_ptr2[i2 + (16*i1) + (256*i0)] = tmp4; 2023-01-11T21:41:23.8432480Z } 2023-01-11T21:41:23.8432561Z } 2023-01-11T21:41:23.8432640Z } 2023-01-11T21:41:23.8432703Z } 2023-01-11T21:41:23.8432778Z } 2023-01-11T21:41:23.8432881Z ''') 2023-01-11T21:41:23.8432889Z 2023-01-11T21:41:23.8432894Z 2023-01-11T21:41:23.8433015Z async_compile.wait(globals()) 2023-01-11T21:41:23.8433109Z del async_compile 2023-01-11T21:41:23.8433116Z 2023-01-11T21:41:23.8433208Z def call(args): 2023-01-11T21:41:23.8433311Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.8433388Z args.clear() 2023-01-11T21:41:23.8433668Z buf0 = empty_strided((8, 16, 16), (256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8434012Z buf1 = empty_strided((8, 16, 16), (256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8434280Z buf2 = empty_strided((8, 16, 16), (256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8434601Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.8434689Z del arg0_1 2023-01-11T21:41:23.8434776Z del arg1_1 2023-01-11T21:41:23.8434864Z del arg2_1 2023-01-11T21:41:23.8434957Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:23.8434965Z 2023-01-11T21:41:23.8434970Z 2023-01-11T21:41:23.8435072Z if __name__ == "__main__": 2023-01-11T21:41:23.8435223Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8435380Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8435663Z arg0_1 = rand_strided((8, 16, 16), (256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8435956Z arg1_1 = rand_strided((8, 16, 16), (256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8436253Z arg2_1 = rand_strided((1, 16, 1), (16, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8436474Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8436484Z 2023-01-11T21:41:23.8436602Z ok (1.596s) 2023-01-11T21:41:23.8437810Z test_horizonal_fusion2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8438094Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8438711Z [2023-01-11 21:34:36,751] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 227 2023-01-11T21:41:23.8439296Z [2023-01-11 21:34:38,348] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 227 2023-01-11T21:41:23.8439411Z 2023-01-11T21:41:23.8439632Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8439786Z import torch 2023-01-11T21:41:23.8439941Z import random 2023-01-11T21:41:23.8440206Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8440476Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8440504Z 2023-01-11T21:41:23.8440671Z aten = torch.ops.aten 2023-01-11T21:41:23.8440983Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8441198Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8441209Z 2023-01-11T21:41:23.8441218Z 2023-01-11T21:41:23.8441545Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8442048Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8442320Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8442547Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8442854Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.8443084Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8443304Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.8443520Z float* __restrict__ out_ptr2) 2023-01-11T21:41:23.8443651Z { 2023-01-11T21:41:23.8443864Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8444007Z { 2023-01-11T21:41:23.8444164Z #pragma omp for 2023-01-11T21:41:23.8444350Z for(long i0=0; i0<128; i0+=1) 2023-01-11T21:41:23.8444482Z { 2023-01-11T21:41:23.8444789Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8445101Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.8445288Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8445496Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8445631Z } 2023-01-11T21:41:23.8445828Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8446023Z for(long i0=1024; i0<1024; i0+=1) 2023-01-11T21:41:23.8446168Z { 2023-01-11T21:41:23.8446353Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8446574Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.8446757Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8446938Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8447056Z } 2023-01-11T21:41:23.8447227Z #pragma omp for 2023-01-11T21:41:23.8447403Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.8447539Z { 2023-01-11T21:41:23.8447834Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.8448145Z auto tmp1 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.8448336Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8448525Z tmp2.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8448667Z } 2023-01-11T21:41:23.8448882Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8449073Z for(long i0=128; i0<128; i0+=1) 2023-01-11T21:41:23.8449211Z { 2023-01-11T21:41:23.8449392Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:23.8449609Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.8449780Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8449955Z out_ptr1[i0] = tmp2; 2023-01-11T21:41:23.8450088Z } 2023-01-11T21:41:23.8450260Z #pragma omp for 
2023-01-11T21:41:23.8450439Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.8450576Z { 2023-01-11T21:41:23.8450863Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr2 + 8*i0); 2023-01-11T21:41:23.8451158Z auto tmp1 = at::vec::Vectorized(static_cast(3)); 2023-01-11T21:41:23.8451346Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8451551Z tmp2.store(out_ptr2 + 8*i0); 2023-01-11T21:41:23.8451689Z } 2023-01-11T21:41:23.8451900Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8452161Z for(long i0=128; i0<128; i0+=1) 2023-01-11T21:41:23.8452301Z { 2023-01-11T21:41:23.8452471Z auto tmp0 = in_ptr2[i0]; 2023-01-11T21:41:23.8452691Z auto tmp1 = static_cast(3); 2023-01-11T21:41:23.8452877Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8453038Z out_ptr2[i0] = tmp2; 2023-01-11T21:41:23.8453172Z } 2023-01-11T21:41:23.8453302Z } 2023-01-11T21:41:23.8453418Z } 2023-01-11T21:41:23.8453599Z ''') 2023-01-11T21:41:23.8453609Z 2023-01-11T21:41:23.8453618Z 2023-01-11T21:41:23.8453822Z async_compile.wait(globals()) 2023-01-11T21:41:23.8453979Z del async_compile 2023-01-11T21:41:23.8453990Z 2023-01-11T21:41:23.8454145Z def call(args): 2023-01-11T21:41:23.8454325Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.8454468Z args.clear() 2023-01-11T21:41:23.8454938Z buf0 = empty_strided((8, 16, 8), (128, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8455438Z buf1 = empty_strided((8, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8455874Z buf2 = empty_strided((16, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8456430Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.8456585Z del arg0_1 2023-01-11T21:41:23.8456734Z del arg1_1 2023-01-11T21:41:23.8456878Z del arg2_1 2023-01-11T21:41:23.8457069Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:23.8457079Z 2023-01-11T21:41:23.8457087Z 2023-01-11T21:41:23.8457259Z if __name__ == "__main__": 2023-01-11T21:41:23.8457514Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8457796Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8458282Z arg0_1 = rand_strided((8, 16, 8), (128, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8458740Z arg1_1 = rand_strided((8, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8459185Z arg2_1 = rand_strided((16, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8459466Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8459476Z 2023-01-11T21:41:23.8459620Z ok (1.618s) 2023-01-11T21:41:23.8460817Z test_index1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8461097Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8461714Z [2023-01-11 21:34:38,392] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 228 2023-01-11T21:41:23.8462508Z [2023-01-11 21:34:40,005] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 228 2023-01-11T21:41:23.8463619Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8463900Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8464540Z [2023-01-11 21:34:40,049] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 229 2023-01-11T21:41:23.8465179Z [2023-01-11 21:34:41,645] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 229 2023-01-11T21:41:23.8465191Z 2023-01-11T21:41:23.8465408Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8465680Z import torch 2023-01-11T21:41:23.8465849Z import random 2023-01-11T21:41:23.8466099Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8466393Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8466403Z 2023-01-11T21:41:23.8466577Z aten = torch.ops.aten 2023-01-11T21:41:23.8466897Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8467107Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8467117Z 2023-01-11T21:41:23.8467126Z 2023-01-11T21:41:23.8467450Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8467953Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8468229Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.8468440Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:23.8468671Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.8468893Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8469115Z { 2023-01-11T21:41:23.8469348Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8469487Z { 2023-01-11T21:41:23.8469655Z #pragma omp for 2023-01-11T21:41:23.8469819Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:23.8469955Z { 2023-01-11T21:41:23.8470132Z #pragma GCC ivdep 2023-01-11T21:41:23.8470326Z for(long i1=0; i1<12; i1+=1) 2023-01-11T21:41:23.8470469Z { 2023-01-11T21:41:23.8470612Z { 2023-01-11T21:41:23.8470755Z { 2023-01-11T21:41:23.8470949Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8471157Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.8471396Z auto tmp2 = in_ptr2[i1 + (12*tmp1) + (96*tmp0)]; 2023-01-11T21:41:23.8471595Z out_ptr0[i1 + (12*i0)] = tmp2; 2023-01-11T21:41:23.8471738Z } 2023-01-11T21:41:23.8471885Z } 2023-01-11T21:41:23.8472037Z } 2023-01-11T21:41:23.8472159Z } 2023-01-11T21:41:23.8472290Z } 2023-01-11T21:41:23.8472421Z } 2023-01-11T21:41:23.8472600Z ''') 2023-01-11T21:41:23.8472610Z 2023-01-11T21:41:23.8472619Z 2023-01-11T21:41:23.8472833Z async_compile.wait(globals()) 2023-01-11T21:41:23.8473004Z del async_compile 2023-01-11T21:41:23.8473013Z 2023-01-11T21:41:23.8473176Z def call(args): 2023-01-11T21:41:23.8473339Z arg0_1, arg1_1, arg2_1 = 
args 2023-01-11T21:41:23.8473495Z args.clear() 2023-01-11T21:41:23.8474029Z buf0 = empty_strided((4, 12), (12, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8474480Z kernel_cpp_0(c_void_p(arg1_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.8474633Z del arg0_1 2023-01-11T21:41:23.8474782Z del arg1_1 2023-01-11T21:41:23.8474930Z del arg2_1 2023-01-11T21:41:23.8475080Z return (buf0, ) 2023-01-11T21:41:23.8475108Z 2023-01-11T21:41:23.8475122Z 2023-01-11T21:41:23.8475285Z if __name__ == "__main__": 2023-01-11T21:41:23.8475554Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8475844Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8476320Z arg0_1 = rand_strided((8, 8, 12), (96, 12, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8476749Z arg1_1 = rand_strided((4, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8477187Z arg2_1 = rand_strided((4, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8477473Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8477483Z 2023-01-11T21:41:23.8477492Z 2023-01-11T21:41:23.8477714Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8477852Z import torch 2023-01-11T21:41:23.8478011Z import random 2023-01-11T21:41:23.8478285Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8478569Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8478664Z 2023-01-11T21:41:23.8478841Z aten = torch.ops.aten 2023-01-11T21:41:23.8479148Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8479363Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8479373Z 2023-01-11T21:41:23.8479382Z 2023-01-11T21:41:23.8479694Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8480172Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8480447Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.8480676Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:23.8480902Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.8481124Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8481260Z { 2023-01-11T21:41:23.8481491Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8481609Z { 2023-01-11T21:41:23.8481779Z #pragma omp for 2023-01-11T21:41:23.8482026Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:23.8482166Z { 2023-01-11T21:41:23.8482350Z #pragma GCC ivdep 2023-01-11T21:41:23.8482535Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:23.8482676Z { 2023-01-11T21:41:23.8482838Z #pragma GCC ivdep 2023-01-11T21:41:23.8483033Z for(long i2=0; i2<12; i2+=1) 2023-01-11T21:41:23.8483173Z { 2023-01-11T21:41:23.8483312Z { 2023-01-11T21:41:23.8483457Z { 2023-01-11T21:41:23.8483661Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:23.8483846Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.8484097Z auto tmp2 = in_ptr2[i2 + (12*tmp1) + (96*tmp0)]; 2023-01-11T21:41:23.8484314Z out_ptr0[i2 + (12*i1) + (48*i0)] = tmp2; 2023-01-11T21:41:23.8484468Z } 2023-01-11T21:41:23.8484609Z } 2023-01-11T21:41:23.8484755Z } 2023-01-11T21:41:23.8484891Z } 2023-01-11T21:41:23.8485016Z } 2023-01-11T21:41:23.8485156Z } 2023-01-11T21:41:23.8485288Z } 2023-01-11T21:41:23.8485472Z ''') 2023-01-11T21:41:23.8485482Z 2023-01-11T21:41:23.8485491Z 2023-01-11T21:41:23.8485692Z async_compile.wait(globals()) 2023-01-11T21:41:23.8485853Z del async_compile 2023-01-11T21:41:23.8485862Z 
2023-01-11T21:41:23.8486019Z def call(args): 2023-01-11T21:41:23.8486204Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.8486342Z args.clear() 2023-01-11T21:41:23.8486670Z buf0 = empty_strided((4, 4, 12), (48, 12, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8486913Z kernel_cpp_0(c_void_p(arg1_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.8487015Z del arg0_1 2023-01-11T21:41:23.8487117Z del arg1_1 2023-01-11T21:41:23.8487219Z del arg2_1 2023-01-11T21:41:23.8487325Z return (buf0, ) 2023-01-11T21:41:23.8487341Z 2023-01-11T21:41:23.8487347Z 2023-01-11T21:41:23.8487436Z if __name__ == "__main__": 2023-01-11T21:41:23.8487603Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8487789Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8488107Z arg0_1 = rand_strided((8, 8, 12), (96, 12, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8488400Z arg1_1 = rand_strided((1, 4), (4, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8488680Z arg2_1 = rand_strided((4, 1), (1, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8488866Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8488874Z 2023-01-11T21:41:23.8488974Z ok (3.296s) 2023-01-11T21:41:23.8489731Z test_index2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8489991Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8490395Z [2023-01-11 21:34:41,716] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 230 2023-01-11T21:41:23.8490791Z [2023-01-11 21:34:43,324] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 230 2023-01-11T21:41:23.8490800Z 2023-01-11T21:41:23.8490964Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8491088Z import torch 2023-01-11T21:41:23.8491203Z import random 2023-01-11T21:41:23.8491399Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8491606Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8491614Z 2023-01-11T21:41:23.8491732Z aten = torch.ops.aten 2023-01-11T21:41:23.8492027Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8492181Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8492189Z 2023-01-11T21:41:23.8492195Z 2023-01-11T21:41:23.8492416Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8492735Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8492914Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.8493073Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8493220Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8493352Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8493444Z { 2023-01-11T21:41:23.8493589Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8493674Z { 2023-01-11T21:41:23.8493787Z #pragma omp for 2023-01-11T21:41:23.8493909Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:23.8494004Z { 2023-01-11T21:41:23.8494120Z #pragma GCC ivdep 2023-01-11T21:41:23.8494241Z 
for(long i1=0; i1<64; i1+=1) 2023-01-11T21:41:23.8494330Z { 2023-01-11T21:41:23.8494421Z { 2023-01-11T21:41:23.8494518Z { 2023-01-11T21:41:23.8494651Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8494799Z auto tmp1 = in_ptr1[i1 + (64*tmp0)]; 2023-01-11T21:41:23.8494917Z out_ptr0[i1 + (64*i0)] = tmp1; 2023-01-11T21:41:23.8495006Z } 2023-01-11T21:41:23.8495087Z } 2023-01-11T21:41:23.8495173Z } 2023-01-11T21:41:23.8495253Z } 2023-01-11T21:41:23.8495352Z #pragma omp for 2023-01-11T21:41:23.8495465Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8495531Z { 2023-01-11T21:41:23.8495647Z #pragma GCC ivdep 2023-01-11T21:41:23.8495761Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:23.8495851Z { 2023-01-11T21:41:23.8495980Z #pragma GCC ivdep 2023-01-11T21:41:23.8496095Z for(long i2=0; i2<8; i2+=1) 2023-01-11T21:41:23.8496170Z { 2023-01-11T21:41:23.8496268Z { 2023-01-11T21:41:23.8496357Z { 2023-01-11T21:41:23.8496490Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:23.8496646Z auto tmp1 = in_ptr1[i2 + (8*tmp0) + (64*i0)]; 2023-01-11T21:41:23.8496798Z out_ptr1[i2 + (8*i1) + (32*i0)] = tmp1; 2023-01-11T21:41:23.8496894Z } 2023-01-11T21:41:23.8496971Z } 2023-01-11T21:41:23.8497066Z } 2023-01-11T21:41:23.8497157Z } 2023-01-11T21:41:23.8497246Z } 2023-01-11T21:41:23.8497335Z } 2023-01-11T21:41:23.8497423Z } 2023-01-11T21:41:23.8497555Z ''') 2023-01-11T21:41:23.8497563Z 2023-01-11T21:41:23.8497568Z 2023-01-11T21:41:23.8497683Z async_compile.wait(globals()) 2023-01-11T21:41:23.8497869Z del async_compile 2023-01-11T21:41:23.8497875Z 2023-01-11T21:41:23.8497968Z def call(args): 2023-01-11T21:41:23.8498072Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.8498171Z args.clear() 2023-01-11T21:41:23.8498498Z buf0 = empty_strided((1, 4, 8, 8), (256, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8498809Z buf1 = empty_strided((8, 1, 4, 8), (32, 32, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8499097Z kernel_cpp_0(c_void_p(arg1_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8499186Z del arg0_1 2023-01-11T21:41:23.8499281Z del arg1_1 2023-01-11T21:41:23.8499400Z return (buf0, buf1, ) 2023-01-11T21:41:23.8499410Z 2023-01-11T21:41:23.8499415Z 2023-01-11T21:41:23.8499524Z if __name__ == "__main__": 2023-01-11T21:41:23.8499680Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8499858Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8500229Z arg0_1 = rand_strided((8, 8, 8), (64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8500507Z arg1_1 = rand_strided((1, 4), (4, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8500666Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.8500675Z 2023-01-11T21:41:23.8500778Z ok (1.680s) 2023-01-11T21:41:23.8501451Z test_index3_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8501644Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8502039Z [2023-01-11 21:34:43,367] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 231 2023-01-11T21:41:23.8502498Z [2023-01-11 21:34:43,389] torch._inductor.ir: [WARNING] Using FallbackKernel: aten.index 2023-01-11T21:41:23.8502928Z [2023-01-11 21:34:43,392] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 231 2023-01-11T21:41:23.8502937Z 2023-01-11T21:41:23.8503096Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8503205Z import torch 2023-01-11T21:41:23.8503278Z import random 2023-01-11T21:41:23.8503428Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8503596Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8503604Z 2023-01-11T21:41:23.8503710Z aten = torch.ops.aten 2023-01-11T21:41:23.8503915Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8504042Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8504048Z 2023-01-11T21:41:23.8504054Z 2023-01-11T21:41:23.8504167Z async_compile.wait(globals()) 2023-01-11T21:41:23.8504277Z del async_compile 2023-01-11T21:41:23.8504291Z 2023-01-11T21:41:23.8504368Z def call(args): 2023-01-11T21:41:23.8504483Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.8504587Z args.clear() 2023-01-11T21:41:23.8504775Z buf0 = aten.index(as_strided(arg0_1, (3, 4, 1, 4, 3), (192, 48, 0, 12, 1)), [None, arg1_1, None, arg2_1]) 2023-01-11T21:41:23.8504864Z del arg0_1 2023-01-11T21:41:23.8504956Z del arg1_1 2023-01-11T21:41:23.8505044Z del arg2_1 2023-01-11T21:41:23.8505119Z buf1 = buf0 2023-01-11T21:41:23.8505260Z assert_size_stride(buf1, (3, 3, 1, 3), (9, 3, 3, 1)) 2023-01-11T21:41:23.8505346Z del buf0 2023-01-11T21:41:23.8505441Z return (buf1, ) 2023-01-11T21:41:23.8505448Z 2023-01-11T21:41:23.8505454Z 2023-01-11T21:41:23.8505548Z if __name__ == "__main__": 2023-01-11T21:41:23.8505694Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8505853Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8506199Z arg0_1 = rand_strided((3, 4, 4, 4, 3), (192, 48, 12, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8506612Z arg1_1 = rand_strided((3, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8506891Z arg2_1 = rand_strided((3, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8507088Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8507095Z 2023-01-11T21:41:23.8507197Z ok (0.068s) 2023-01-11T21:41:23.8507863Z test_index_put1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8508059Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8508575Z [2023-01-11 21:34:43,790] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 232 2023-01-11T21:41:23.8509013Z [2023-01-11 21:34:45,431] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 232 2023-01-11T21:41:23.8509728Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8509931Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8510333Z [2023-01-11 21:34:46,422] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 233 2023-01-11T21:41:23.8510764Z [2023-01-11 21:34:48,076] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 233 2023-01-11T21:41:23.8510773Z 2023-01-11T21:41:23.8510916Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8511020Z import torch 2023-01-11T21:41:23.8511120Z import random 2023-01-11T21:41:23.8511297Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8511475Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8511482Z 2023-01-11T21:41:23.8511587Z aten = torch.ops.aten 2023-01-11T21:41:23.8511750Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8511894Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8511901Z 2023-01-11T21:41:23.8511909Z 2023-01-11T21:41:23.8512136Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8512444Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8512625Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8512776Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:23.8512918Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.8513053Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8513167Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.8513287Z float* __restrict__ out_ptr2) 2023-01-11T21:41:23.8513366Z { 2023-01-11T21:41:23.8513493Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8513574Z { 2023-01-11T21:41:23.8513672Z #pragma omp for 2023-01-11T21:41:23.8513855Z for(long i0=0; i0<1254400; i0+=1) 2023-01-11T21:41:23.8513925Z { 2023-01-11T21:41:23.8514107Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8514284Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.8514396Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8514520Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8514635Z tmp2.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8514719Z } 2023-01-11T21:41:23.8514928Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8515067Z for(long i0=10035200; i0<10035200; i0+=1) 2023-01-11T21:41:23.8515161Z { 2023-01-11T21:41:23.8515278Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8515407Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.8515517Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8515621Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:23.8515710Z out_ptr1[i0] = tmp2; 2023-01-11T21:41:23.8515787Z } 
2023-01-11T21:41:23.8515888Z #pragma omp for 2023-01-11T21:41:23.8515995Z for(long i0=0; i0<601; i0+=1) 2023-01-11T21:41:23.8516077Z { 2023-01-11T21:41:23.8516179Z #pragma GCC ivdep 2023-01-11T21:41:23.8516295Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.8516362Z { 2023-01-11T21:41:23.8516458Z { 2023-01-11T21:41:23.8516552Z { 2023-01-11T21:41:23.8516768Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:23.8516941Z auto tmp1 = in_ptr2[i1 + (12544*i0)]; 2023-01-11T21:41:23.8517106Z auto tmp2 = static_cast(1); 2023-01-11T21:41:23.8517250Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:23.8517402Z auto tmp4 = static_cast(1); 2023-01-11T21:41:23.8517538Z auto tmp5 = tmp1 + tmp4; 2023-01-11T21:41:23.8517692Z out_ptr0[i1 + (12544*tmp0)] = tmp1; 2023-01-11T21:41:23.8517830Z out_ptr1[i1 + (12544*tmp3)] = tmp5; 2023-01-11T21:41:23.8517921Z } 2023-01-11T21:41:23.8518007Z } 2023-01-11T21:41:23.8518087Z } 2023-01-11T21:41:23.8518152Z } 2023-01-11T21:41:23.8518254Z #pragma omp for 2023-01-11T21:41:23.8518377Z for(long i0=0; i0<1254400; i0+=1) 2023-01-11T21:41:23.8518474Z { 2023-01-11T21:41:23.8518680Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8518880Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.8519004Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8519113Z tmp2.store(out_ptr2 + 8*i0); 2023-01-11T21:41:23.8519203Z } 2023-01-11T21:41:23.8519327Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8519455Z for(long i0=10035200; i0<10035200; i0+=1) 2023-01-11T21:41:23.8519539Z { 2023-01-11T21:41:23.8519659Z auto tmp0 = out_ptr1[i0]; 2023-01-11T21:41:23.8519788Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.8519887Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8519995Z out_ptr2[i0] = tmp2; 2023-01-11T21:41:23.8520087Z } 2023-01-11T21:41:23.8520169Z } 2023-01-11T21:41:23.8520258Z } 2023-01-11T21:41:23.8520398Z ''') 2023-01-11T21:41:23.8520408Z 2023-01-11T21:41:23.8520414Z 2023-01-11T21:41:23.8520552Z async_compile.wait(globals()) 2023-01-11T21:41:23.8520648Z del async_compile 2023-01-11T21:41:23.8520656Z 2023-01-11T21:41:23.8520757Z def call(args): 2023-01-11T21:41:23.8520879Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.8520981Z args.clear() 2023-01-11T21:41:23.8521324Z buf0 = empty_strided((800, 256, 7, 7), (12544, 49, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8521647Z buf2 = empty_strided((800, 256, 7, 7), (12544, 49, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8521967Z buf4 = empty_strided((800, 256, 7, 7), (12544, 49, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8522304Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:23.8522397Z del arg0_1 2023-01-11T21:41:23.8522495Z del arg1_1 2023-01-11T21:41:23.8522594Z del arg2_1 2023-01-11T21:41:23.8522792Z return (buf0, buf4, ) 2023-01-11T21:41:23.8522805Z 2023-01-11T21:41:23.8522811Z 2023-01-11T21:41:23.8522929Z if __name__ == "__main__": 2023-01-11T21:41:23.8523099Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8523292Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8523646Z arg0_1 = rand_strided((800, 256, 7, 7), (12544, 49, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8523916Z arg1_1 = rand_strided((601, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8524237Z arg2_1 = rand_strided((601, 256, 7, 7), (12544, 49, 7, 1), 
device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8524419Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8524426Z 2023-01-11T21:41:23.8524432Z 2023-01-11T21:41:23.8524572Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8524675Z import torch 2023-01-11T21:41:23.8524769Z import random 2023-01-11T21:41:23.8524927Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8525196Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8525206Z 2023-01-11T21:41:23.8525299Z aten = torch.ops.aten 2023-01-11T21:41:23.8525493Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8525624Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8525631Z 2023-01-11T21:41:23.8525637Z 2023-01-11T21:41:23.8525846Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8526112Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8526282Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8526439Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:23.8526597Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.8526722Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8526867Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.8527008Z float* __restrict__ out_ptr2) 2023-01-11T21:41:23.8527095Z { 2023-01-11T21:41:23.8527248Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8527340Z { 2023-01-11T21:41:23.8527459Z #pragma omp for 2023-01-11T21:41:23.8527571Z for(long i0=0; i0<1024; i0+=1) 2023-01-11T21:41:23.8527663Z { 2023-01-11T21:41:23.8527847Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8528032Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.8528144Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8528268Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8528391Z tmp2.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8528466Z } 2023-01-11T21:41:23.8528607Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8528730Z for(long i0=8192; i0<8192; i0+=1) 2023-01-11T21:41:23.8528816Z { 2023-01-11T21:41:23.8528948Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8529091Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.8529217Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8529314Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:23.8529420Z out_ptr1[i0] = tmp2; 2023-01-11T21:41:23.8529515Z } 2023-01-11T21:41:23.8529621Z #pragma omp for 2023-01-11T21:41:23.8529738Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:23.8529826Z { 2023-01-11T21:41:23.8529933Z #pragma GCC ivdep 2023-01-11T21:41:23.8530055Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:23.8530137Z { 2023-01-11T21:41:23.8530223Z { 2023-01-11T21:41:23.8530308Z { 2023-01-11T21:41:23.8530445Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:23.8530583Z auto tmp1 = in_ptr2[i0]; 2023-01-11T21:41:23.8530720Z auto tmp2 = static_cast(1); 2023-01-11T21:41:23.8530953Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:23.8531108Z auto tmp4 = static_cast(1); 2023-01-11T21:41:23.8531250Z auto tmp5 = tmp1 + tmp4; 2023-01-11T21:41:23.8531391Z out_ptr0[i1 + (8*tmp0)] = tmp1; 2023-01-11T21:41:23.8531541Z out_ptr1[i1 + (8*tmp3)] = tmp5; 2023-01-11T21:41:23.8531634Z } 2023-01-11T21:41:23.8531726Z } 2023-01-11T21:41:23.8531799Z } 2023-01-11T21:41:23.8531894Z } 2023-01-11T21:41:23.8531998Z #pragma omp for 2023-01-11T21:41:23.8532112Z for(long i0=0; i0<1024; i0+=1) 2023-01-11T21:41:23.8532205Z { 2023-01-11T21:41:23.8532400Z auto tmp0 = 
at::vec::Vectorized::loadu(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8532605Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.8532715Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8532914Z tmp2.store(out_ptr2 + 8*i0); 2023-01-11T21:41:23.8533001Z } 2023-01-11T21:41:23.8533134Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8533256Z for(long i0=8192; i0<8192; i0+=1) 2023-01-11T21:41:23.8533341Z { 2023-01-11T21:41:23.8533448Z auto tmp0 = out_ptr1[i0]; 2023-01-11T21:41:23.8533582Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.8533695Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8533798Z out_ptr2[i0] = tmp2; 2023-01-11T21:41:23.8533884Z } 2023-01-11T21:41:23.8533973Z } 2023-01-11T21:41:23.8534063Z } 2023-01-11T21:41:23.8534172Z ''') 2023-01-11T21:41:23.8534179Z 2023-01-11T21:41:23.8534198Z 2023-01-11T21:41:23.8534311Z async_compile.wait(globals()) 2023-01-11T21:41:23.8534424Z del async_compile 2023-01-11T21:41:23.8534431Z 2023-01-11T21:41:23.8534530Z def call(args): 2023-01-11T21:41:23.8534638Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.8534742Z args.clear() 2023-01-11T21:41:23.8535058Z buf0 = empty_strided((1024, 4, 2), (8, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8535350Z buf2 = empty_strided((1024, 4, 2), (8, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8535632Z buf4 = empty_strided((1024, 4, 2), (8, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8535977Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:23.8536075Z del arg0_1 2023-01-11T21:41:23.8536169Z del arg1_1 2023-01-11T21:41:23.8536272Z del arg2_1 2023-01-11T21:41:23.8536378Z return (buf0, buf4, ) 2023-01-11T21:41:23.8536386Z 2023-01-11T21:41:23.8536392Z 2023-01-11T21:41:23.8536494Z if __name__ == "__main__": 2023-01-11T21:41:23.8536653Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8536826Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8537153Z arg0_1 = rand_strided((1024, 4, 2), (8, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8537421Z arg1_1 = rand_strided((4, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8537698Z arg2_1 = rand_strided((4, 1, 1), (1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8537866Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8537874Z 2023-01-11T21:41:23.8537963Z ok (4.686s) 2023-01-11T21:41:23.8538633Z test_index_put2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8538804Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8539273Z [2023-01-11 21:34:48,345] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 234 2023-01-11T21:41:23.8539633Z [2023-01-11 21:34:50,618] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 234 2023-01-11T21:41:23.8539659Z 2023-01-11T21:41:23.8539791Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8539891Z import torch 2023-01-11T21:41:23.8539984Z import random 2023-01-11T21:41:23.8540153Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8540345Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8540352Z 2023-01-11T21:41:23.8540468Z aten = torch.ops.aten 2023-01-11T21:41:23.8540673Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8540794Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8540800Z 2023-01-11T21:41:23.8540822Z 2023-01-11T21:41:23.8541006Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8541404Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8541591Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8541745Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:23.8541895Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.8542029Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8542122Z { 2023-01-11T21:41:23.8542250Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8542465Z { 2023-01-11T21:41:23.8542584Z #pragma omp for 2023-01-11T21:41:23.8542702Z for(long i0=0; i0<156800; i0+=1) 2023-01-11T21:41:23.8542786Z { 2023-01-11T21:41:23.8543001Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8543149Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8543229Z } 2023-01-11T21:41:23.8543379Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8543525Z for(long i0=1254400; i0<1254400; i0+=1) 2023-01-11T21:41:23.8543626Z { 2023-01-11T21:41:23.8543759Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8543882Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:23.8543964Z } 2023-01-11T21:41:23.8544085Z #pragma omp for 2023-01-11T21:41:23.8544216Z for(long i0=0; i0<600; i0+=1) 2023-01-11T21:41:23.8544315Z { 2023-01-11T21:41:23.8544439Z #pragma GCC ivdep 2023-01-11T21:41:23.8544577Z for(long i1=0; i1<12544; i1+=1) 2023-01-11T21:41:23.8544679Z { 2023-01-11T21:41:23.8544766Z { 2023-01-11T21:41:23.8544868Z { 2023-01-11T21:41:23.8545007Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:23.8545161Z auto tmp1 = in_ptr2[i1 + (12544*i0)]; 2023-01-11T21:41:23.8545335Z atomic_add(&out_ptr0[i1 + (12544*tmp0)], tmp1); 2023-01-11T21:41:23.8545438Z } 2023-01-11T21:41:23.8545548Z } 2023-01-11T21:41:23.8545619Z } 2023-01-11T21:41:23.8545707Z } 2023-01-11T21:41:23.8545801Z } 2023-01-11T21:41:23.8545894Z } 2023-01-11T21:41:23.8546039Z ''') 2023-01-11T21:41:23.8546049Z 2023-01-11T21:41:23.8546055Z 2023-01-11T21:41:23.8546181Z async_compile.wait(globals()) 2023-01-11T21:41:23.8546283Z del async_compile 2023-01-11T21:41:23.8546289Z 2023-01-11T21:41:23.8546373Z def call(args): 2023-01-11T21:41:23.8546492Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.8546598Z args.clear() 2023-01-11T21:41:23.8546946Z buf0 = empty_strided((100, 256, 7, 7), (12544, 49, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8547220Z 
kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.8547321Z del arg0_1 2023-01-11T21:41:23.8547416Z del arg1_1 2023-01-11T21:41:23.8547507Z del arg2_1 2023-01-11T21:41:23.8547713Z return (buf0, ) 2023-01-11T21:41:23.8547720Z 2023-01-11T21:41:23.8547726Z 2023-01-11T21:41:23.8547833Z if __name__ == "__main__": 2023-01-11T21:41:23.8547995Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8548184Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8548528Z arg0_1 = rand_strided((100, 256, 7, 7), (12544, 49, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8548826Z arg1_1 = rand_strided((600, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8549155Z arg2_1 = rand_strided((600, 256, 7, 7), (12544, 49, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8549333Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8549342Z 2023-01-11T21:41:23.8549422Z ok (2.728s) 2023-01-11T21:41:23.8550191Z test_index_put3_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8550373Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8550755Z [2023-01-11 21:34:50,902] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 235 2023-01-11T21:41:23.8551136Z [2023-01-11 21:34:52,598] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 235 2023-01-11T21:41:23.8551143Z 2023-01-11T21:41:23.8551275Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8551373Z import torch 2023-01-11T21:41:23.8551470Z import random 2023-01-11T21:41:23.8551628Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8551774Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8551781Z 2023-01-11T21:41:23.8551890Z aten = torch.ops.aten 2023-01-11T21:41:23.8552081Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8552205Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8552211Z 2023-01-11T21:41:23.8552217Z 2023-01-11T21:41:23.8552405Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8552686Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8552845Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.8552986Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8553108Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8553232Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8553310Z { 2023-01-11T21:41:23.8553446Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8553533Z { 2023-01-11T21:41:23.8553641Z #pragma omp for 2023-01-11T21:41:23.8553826Z for(long i0=0; i0<1024; i0+=1) 2023-01-11T21:41:23.8553910Z { 2023-01-11T21:41:23.8554018Z #pragma GCC ivdep 2023-01-11T21:41:23.8554131Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:23.8554219Z { 2023-01-11T21:41:23.8554325Z #pragma GCC ivdep 2023-01-11T21:41:23.8554445Z for(long i2=0; i2<2; i2+=1) 2023-01-11T21:41:23.8554536Z { 2023-01-11T21:41:23.8554617Z { 
2023-01-11T21:41:23.8554717Z { 2023-01-11T21:41:23.8554842Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:23.8554990Z auto tmp1 = in_ptr1[i2 + (2*i0)]; 2023-01-11T21:41:23.8555140Z out_ptr0[i2 + (2*tmp0) + (8*i0)] = tmp1; 2023-01-11T21:41:23.8555233Z } 2023-01-11T21:41:23.8555323Z } 2023-01-11T21:41:23.8555409Z } 2023-01-11T21:41:23.8555502Z } 2023-01-11T21:41:23.8555588Z } 2023-01-11T21:41:23.8555792Z #pragma omp for 2023-01-11T21:41:23.8555921Z for(long i0=0; i0<1024; i0+=1) 2023-01-11T21:41:23.8556017Z { 2023-01-11T21:41:23.8556201Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8556405Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.8556545Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8556680Z tmp2.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8556769Z } 2023-01-11T21:41:23.8556897Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8557018Z for(long i0=8192; i0<8192; i0+=1) 2023-01-11T21:41:23.8557101Z { 2023-01-11T21:41:23.8557198Z auto tmp0 = out_ptr0[i0]; 2023-01-11T21:41:23.8557332Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.8557454Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8557568Z out_ptr1[i0] = tmp2; 2023-01-11T21:41:23.8557660Z } 2023-01-11T21:41:23.8557851Z #pragma omp for 2023-01-11T21:41:23.8557963Z for(long i0=0; i0<1024; i0+=1) 2023-01-11T21:41:23.8558059Z { 2023-01-11T21:41:23.8558179Z #pragma GCC ivdep 2023-01-11T21:41:23.8558300Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:23.8558393Z { 2023-01-11T21:41:23.8558506Z #pragma GCC ivdep 2023-01-11T21:41:23.8558628Z for(long i2=0; i2<2; i2+=1) 2023-01-11T21:41:23.8558703Z { 2023-01-11T21:41:23.8558791Z { 2023-01-11T21:41:23.8558883Z { 2023-01-11T21:41:23.8559018Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:23.8559163Z auto tmp3 = in_ptr1[i2 + (2*i0)]; 2023-01-11T21:41:23.8559306Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.8559432Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8559568Z auto tmp4 = static_cast(1); 2023-01-11T21:41:23.8559703Z auto tmp5 = tmp3 + tmp4; 2023-01-11T21:41:23.8559853Z out_ptr1[i2 + (2*tmp2) + (8*i0)] = tmp5; 2023-01-11T21:41:23.8559947Z } 2023-01-11T21:41:23.8560042Z } 2023-01-11T21:41:23.8560133Z } 2023-01-11T21:41:23.8560221Z } 2023-01-11T21:41:23.8560290Z } 2023-01-11T21:41:23.8560376Z } 2023-01-11T21:41:23.8560462Z } 2023-01-11T21:41:23.8560593Z ''') 2023-01-11T21:41:23.8560601Z 2023-01-11T21:41:23.8560608Z 2023-01-11T21:41:23.8560739Z async_compile.wait(globals()) 2023-01-11T21:41:23.8560842Z del async_compile 2023-01-11T21:41:23.8560849Z 2023-01-11T21:41:23.8560942Z def call(args): 2023-01-11T21:41:23.8561039Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.8561137Z args.clear() 2023-01-11T21:41:23.8561450Z buf1 = empty_strided((1024, 4, 2), (8, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8561708Z kernel_cpp_0(c_void_p(arg1_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8561795Z del arg0_1 2023-01-11T21:41:23.8561880Z del arg1_1 2023-01-11T21:41:23.8561967Z del arg2_1 2023-01-11T21:41:23.8562043Z return (buf1, ) 2023-01-11T21:41:23.8562067Z 2023-01-11T21:41:23.8562073Z 2023-01-11T21:41:23.8562176Z if __name__ == "__main__": 2023-01-11T21:41:23.8562338Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8562525Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8562831Z arg0_1 = rand_strided((1024, 4, 2), (8, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8563113Z arg1_1 = rand_strided((3, 
), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8563398Z arg2_1 = rand_strided((1024, 1, 2), (2, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8563576Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8563670Z 2023-01-11T21:41:23.8563773Z ok (1.796s) 2023-01-11T21:41:23.8564470Z test_index_put_as_masked_fill_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8564639Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8565030Z [2023-01-11 21:34:52,652] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 236 2023-01-11T21:41:23.8565409Z [2023-01-11 21:34:54,231] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 236 2023-01-11T21:41:23.8566099Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8566298Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8566690Z [2023-01-11 21:34:54,280] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 237 2023-01-11T21:41:23.8567095Z [2023-01-11 21:34:55,892] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 237 2023-01-11T21:41:23.8567105Z 2023-01-11T21:41:23.8567249Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8567348Z import torch 2023-01-11T21:41:23.8567452Z import random 2023-01-11T21:41:23.8567598Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8567775Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8567787Z 2023-01-11T21:41:23.8567904Z aten = torch.ops.aten 2023-01-11T21:41:23.8568106Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8568248Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8568256Z 2023-01-11T21:41:23.8568262Z 2023-01-11T21:41:23.8568465Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8568769Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8568945Z extern "C" void kernel(const bool* __restrict__ in_ptr0, 2023-01-11T21:41:23.8569090Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8569241Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.8569371Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8569457Z { 2023-01-11T21:41:23.8569595Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8569674Z { 2023-01-11T21:41:23.8569777Z #pragma omp for 2023-01-11T21:41:23.8569877Z for(long i0=0; i0<1024; i0+=1) 2023-01-11T21:41:23.8569960Z { 2023-01-11T21:41:23.8570098Z float g_tmp_buffer_in_ptr0[8] = {0}; 2023-01-11T21:41:23.8570278Z flag_to_float(in_ptr0 + 8*i0, g_tmp_buffer_in_ptr0, 8); 2023-01-11T21:41:23.8570496Z auto tmp0 = at::vec::Vectorized::loadu(g_tmp_buffer_in_ptr0); 2023-01-11T21:41:23.8570666Z auto tmp1 = 
at::vec::Vectorized(in_ptr1[0]); 2023-01-11T21:41:23.8570843Z auto tmp2 = at::vec::Vectorized::loadu(in_ptr2 + 8*i0); 2023-01-11T21:41:23.8570987Z auto tmp3 = decltype(tmp1)::blendv(tmp2, tmp1, tmp0); 2023-01-11T21:41:23.8571113Z tmp3.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8571209Z } 2023-01-11T21:41:23.8571350Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8571483Z for(long i0=8192; i0<8192; i0+=1) 2023-01-11T21:41:23.8571574Z { 2023-01-11T21:41:23.8571692Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8571877Z auto tmp1 = in_ptr1[0]; 2023-01-11T21:41:23.8571988Z auto tmp2 = in_ptr2[i0]; 2023-01-11T21:41:23.8572117Z auto tmp3 = tmp0 ? tmp1 : tmp2; 2023-01-11T21:41:23.8572222Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:23.8572304Z } 2023-01-11T21:41:23.8572380Z } 2023-01-11T21:41:23.8572457Z } 2023-01-11T21:41:23.8572568Z ''') 2023-01-11T21:41:23.8572575Z 2023-01-11T21:41:23.8572582Z 2023-01-11T21:41:23.8572709Z async_compile.wait(globals()) 2023-01-11T21:41:23.8572808Z del async_compile 2023-01-11T21:41:23.8572816Z 2023-01-11T21:41:23.8572917Z def call(args): 2023-01-11T21:41:23.8573036Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.8573138Z args.clear() 2023-01-11T21:41:23.8573461Z buf0 = empty_strided((1024, 4, 2), (8, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8573730Z kernel_cpp_0(c_void_p(arg1_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.8573830Z del arg0_1 2023-01-11T21:41:23.8574068Z del arg1_1 2023-01-11T21:41:23.8574165Z del arg2_1 2023-01-11T21:41:23.8574273Z return (buf0, ) 2023-01-11T21:41:23.8574281Z 2023-01-11T21:41:23.8574287Z 2023-01-11T21:41:23.8574393Z if __name__ == "__main__": 2023-01-11T21:41:23.8574559Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8574752Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8575043Z arg0_1 = rand_strided((1024, 4, 2), (8, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8575330Z arg1_1 = rand_strided((1024, 4, 2), (8, 2, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.8575601Z arg2_1 = rand_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8575783Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8575792Z 2023-01-11T21:41:23.8575799Z 2023-01-11T21:41:23.8575940Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8576042Z import torch 2023-01-11T21:41:23.8576148Z import random 2023-01-11T21:41:23.8576294Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8576435Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8576456Z 2023-01-11T21:41:23.8576548Z aten = torch.ops.aten 2023-01-11T21:41:23.8576743Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8576870Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8576877Z 2023-01-11T21:41:23.8576882Z 2023-01-11T21:41:23.8577075Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8577351Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8577511Z extern "C" void kernel(const bool* __restrict__ in_ptr0, 2023-01-11T21:41:23.8577659Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8577789Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.8577921Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8578013Z { 2023-01-11T21:41:23.8578146Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8578243Z { 
2023-01-11T21:41:23.8578356Z #pragma omp for 2023-01-11T21:41:23.8578473Z for(long i0=0; i0<1024; i0+=1) 2023-01-11T21:41:23.8578550Z { 2023-01-11T21:41:23.8578694Z float g_tmp_buffer_in_ptr0[8] = {0}; 2023-01-11T21:41:23.8578881Z flag_to_float(in_ptr0 + 8*i0, g_tmp_buffer_in_ptr0, 8); 2023-01-11T21:41:23.8579075Z auto tmp0 = at::vec::Vectorized::loadu(g_tmp_buffer_in_ptr0); 2023-01-11T21:41:23.8579242Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.8579404Z auto tmp2 = at::vec::Vectorized(in_ptr2[0]); 2023-01-11T21:41:23.8579512Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:23.8579682Z auto tmp4 = decltype(tmp3)::blendv(tmp1, tmp3, tmp0); 2023-01-11T21:41:23.8579795Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8579976Z } 2023-01-11T21:41:23.8580115Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8580239Z for(long i0=8192; i0<8192; i0+=1) 2023-01-11T21:41:23.8580319Z { 2023-01-11T21:41:23.8580434Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8580552Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.8580649Z auto tmp2 = in_ptr2[0]; 2023-01-11T21:41:23.8580756Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:23.8580876Z auto tmp4 = tmp0 ? tmp3 : tmp1; 2023-01-11T21:41:23.8580983Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:23.8581065Z } 2023-01-11T21:41:23.8581151Z } 2023-01-11T21:41:23.8581242Z } 2023-01-11T21:41:23.8581367Z ''') 2023-01-11T21:41:23.8581377Z 2023-01-11T21:41:23.8581384Z 2023-01-11T21:41:23.8581525Z async_compile.wait(globals()) 2023-01-11T21:41:23.8581632Z del async_compile 2023-01-11T21:41:23.8581639Z 2023-01-11T21:41:23.8581735Z def call(args): 2023-01-11T21:41:23.8581930Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.8582046Z args.clear() 2023-01-11T21:41:23.8582510Z buf0 = empty_strided((1024, 4, 2), (8, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8582813Z kernel_cpp_0(c_void_p(arg1_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.8582912Z del arg0_1 2023-01-11T21:41:23.8583010Z del arg1_1 2023-01-11T21:41:23.8583109Z del arg2_1 2023-01-11T21:41:23.8583211Z return (buf0, ) 2023-01-11T21:41:23.8583220Z 2023-01-11T21:41:23.8583226Z 2023-01-11T21:41:23.8583334Z if __name__ == "__main__": 2023-01-11T21:41:23.8583523Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8583723Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8584053Z arg0_1 = rand_strided((1024, 4, 2), (8, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8584356Z arg1_1 = rand_strided((1024, 4, 2), (8, 2, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.8584633Z arg2_1 = rand_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8584804Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8584812Z 2023-01-11T21:41:23.8584901Z ok (3.291s) 2023-01-11T21:41:23.8585619Z test_index_put_fallback1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8585804Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8586246Z [2023-01-11 21:34:55,943] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 238 2023-01-11T21:41:23.8586730Z [2023-01-11 21:34:57,738] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 238 2023-01-11T21:41:23.8587478Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8587667Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8588113Z [2023-01-11 21:34:57,787] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 239 2023-01-11T21:41:23.8588570Z [2023-01-11 21:34:57,794] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 239 2023-01-11T21:41:23.8588579Z 2023-01-11T21:41:23.8588728Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8588840Z import torch 2023-01-11T21:41:23.8588952Z import random 2023-01-11T21:41:23.8589264Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8589447Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8589455Z 2023-01-11T21:41:23.8589552Z aten = torch.ops.aten 2023-01-11T21:41:23.8589744Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8589876Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8589884Z 2023-01-11T21:41:23.8589890Z 2023-01-11T21:41:23.8590101Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8590383Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8590548Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8590688Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8590772Z { 2023-01-11T21:41:23.8590869Z #pragma GCC ivdep 2023-01-11T21:41:23.8590982Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.8591069Z { 2023-01-11T21:41:23.8591168Z { 2023-01-11T21:41:23.8591341Z { 2023-01-11T21:41:23.8591470Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8591590Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:23.8591658Z } 2023-01-11T21:41:23.8591739Z } 2023-01-11T21:41:23.8591825Z } 2023-01-11T21:41:23.8591916Z } 2023-01-11T21:41:23.8592036Z ''') 2023-01-11T21:41:23.8592045Z 2023-01-11T21:41:23.8592050Z 2023-01-11T21:41:23.8592174Z async_compile.wait(globals()) 2023-01-11T21:41:23.8592284Z del async_compile 2023-01-11T21:41:23.8592291Z 2023-01-11T21:41:23.8592372Z def call(args): 2023-01-11T21:41:23.8592483Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.8592583Z args.clear() 2023-01-11T21:41:23.8592867Z buf0 = empty_strided((3, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8593068Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.8593169Z del arg0_1 2023-01-11T21:41:23.8593322Z aten.index_put_(buf0, [arg1_1], arg2_1, False) 2023-01-11T21:41:23.8593409Z del arg1_1 2023-01-11T21:41:23.8593507Z del arg2_1 2023-01-11T21:41:23.8593611Z return (buf0, ) 2023-01-11T21:41:23.8593619Z 2023-01-11T21:41:23.8593626Z 
2023-01-11T21:41:23.8593806Z if __name__ == "__main__": 2023-01-11T21:41:23.8593976Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8594160Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8594450Z arg0_1 = rand_strided((3, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8594741Z arg1_1 = rand_strided((3, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.8595018Z arg2_1 = rand_strided((2, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8595201Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8595210Z 2023-01-11T21:41:23.8595217Z 2023-01-11T21:41:23.8595358Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8595458Z import torch 2023-01-11T21:41:23.8595567Z import random 2023-01-11T21:41:23.8595733Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8595920Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8595927Z 2023-01-11T21:41:23.8596041Z aten = torch.ops.aten 2023-01-11T21:41:23.8596211Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8596346Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8596353Z 2023-01-11T21:41:23.8596359Z 2023-01-11T21:41:23.8596554Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8596859Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8597027Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8597175Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8597260Z { 2023-01-11T21:41:23.8597356Z #pragma GCC ivdep 2023-01-11T21:41:23.8597469Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.8597553Z { 2023-01-11T21:41:23.8597725Z { 2023-01-11T21:41:23.8597815Z { 2023-01-11T21:41:23.8597944Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8598067Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:23.8598143Z } 2023-01-11T21:41:23.8598228Z } 2023-01-11T21:41:23.8598315Z } 2023-01-11T21:41:23.8598404Z } 2023-01-11T21:41:23.8598529Z ''') 2023-01-11T21:41:23.8598537Z 2023-01-11T21:41:23.8598543Z 2023-01-11T21:41:23.8598674Z async_compile.wait(globals()) 2023-01-11T21:41:23.8598776Z del async_compile 2023-01-11T21:41:23.8598783Z 2023-01-11T21:41:23.8598864Z def call(args): 2023-01-11T21:41:23.8598981Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.8599083Z args.clear() 2023-01-11T21:41:23.8599371Z buf0 = empty_strided((3, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8599561Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.8599662Z del arg0_1 2023-01-11T21:41:23.8599884Z aten.index_put_(buf0, [arg1_1], arg2_1, True) 2023-01-11T21:41:23.8599972Z del arg1_1 2023-01-11T21:41:23.8600064Z del arg2_1 2023-01-11T21:41:23.8600161Z return (buf0, ) 2023-01-11T21:41:23.8600169Z 2023-01-11T21:41:23.8600174Z 2023-01-11T21:41:23.8600283Z if __name__ == "__main__": 2023-01-11T21:41:23.8600445Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8600635Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8600931Z arg0_1 = rand_strided((3, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8601196Z arg1_1 = rand_strided((3, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.8601455Z arg2_1 = rand_strided((2, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8601630Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8601637Z 
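The two wrapper modules above (for test_index_put_fallback1_cpu) show Inductor taking the fallback path for a boolean-mask put: a small generated kernel copies arg0_1 into buf0, and the scatter itself is delegated to the eager op via aten.index_put_(buf0, [arg1_1], arg2_1, False), with the second variant passing accumulate=True. The test body itself is not part of this log, so the function below is only an assumed illustration of the operation being exercised, using the same shapes as the rand_strided inputs above ((3,) float data, (3,) bool mask, (2,) values).

import torch

def fn(x, mask, vals):
    # boolean-mask index_put_: each True position in `mask` receives the next
    # value from `vals`; accumulate=True adds into the destination instead of
    # overwriting it (illustrative only -- not the actual test body)
    a = x.clone().index_put_((mask,), vals, accumulate=False)
    b = x.clone().index_put_((mask,), vals, accumulate=True)
    return a, b

x = torch.randn(3)
mask = torch.tensor([True, False, True])
vals = torch.randn(2)  # one value per True entry in the mask
eager = fn(x, mask, vals)
compiled = torch.compile(fn)(x, mask, vals)  # torch.compile drives the same Inductor backend in PyTorch 2.x
torch.testing.assert_close(eager, compiled)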
2023-01-11T21:41:23.8601726Z ok (1.901s) 2023-01-11T21:41:23.8602464Z test_index_put_fallback2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8602655Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8603048Z [2023-01-11 21:34:57,846] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 240 2023-01-11T21:41:23.8603432Z [2023-01-11 21:34:59,407] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 240 2023-01-11T21:41:23.8604049Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8604230Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8604603Z [2023-01-11 21:34:59,458] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 241 2023-01-11T21:41:23.8604981Z [2023-01-11 21:34:59,467] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 241 2023-01-11T21:41:23.8604992Z 2023-01-11T21:41:23.8605110Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8605204Z import torch 2023-01-11T21:41:23.8605298Z import random 2023-01-11T21:41:23.8605459Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8605629Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8605636Z 2023-01-11T21:41:23.8605741Z aten = torch.ops.aten 2023-01-11T21:41:23.8605929Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8606128Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8606152Z 2023-01-11T21:41:23.8606158Z 2023-01-11T21:41:23.8606341Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8606636Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8606798Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8606935Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8607017Z { 2023-01-11T21:41:23.8607162Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8607246Z { 2023-01-11T21:41:23.8607340Z #pragma omp for 2023-01-11T21:41:23.8607460Z for(long i0=0; i0<6; i0+=1) 2023-01-11T21:41:23.8607551Z { 2023-01-11T21:41:23.8607641Z { 2023-01-11T21:41:23.8607734Z { 2023-01-11T21:41:23.8607863Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8607974Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:23.8608042Z } 2023-01-11T21:41:23.8608204Z } 2023-01-11T21:41:23.8608295Z } 2023-01-11T21:41:23.8608379Z } 2023-01-11T21:41:23.8608462Z } 2023-01-11T21:41:23.8608586Z ''') 2023-01-11T21:41:23.8608594Z 2023-01-11T21:41:23.8608600Z 2023-01-11T21:41:23.8608729Z async_compile.wait(globals()) 2023-01-11T21:41:23.8608814Z del async_compile 2023-01-11T21:41:23.8608821Z 2023-01-11T21:41:23.8608922Z def call(args): 2023-01-11T21:41:23.8609044Z arg0_1, arg1_1, arg2_1, arg3_1 = args 2023-01-11T21:41:23.8609145Z 
args.clear() 2023-01-11T21:41:23.8609437Z buf0 = empty_strided((1, 2, 3), (6, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8609615Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.8609700Z del arg0_1 2023-01-11T21:41:23.8609840Z aten.index_put_(buf0, [None,arg1_1,arg2_1], arg3_1, False) 2023-01-11T21:41:23.8609928Z del arg1_1 2023-01-11T21:41:23.8610013Z del arg2_1 2023-01-11T21:41:23.8610097Z del arg3_1 2023-01-11T21:41:23.8610193Z return (buf0, ) 2023-01-11T21:41:23.8610201Z 2023-01-11T21:41:23.8610206Z 2023-01-11T21:41:23.8610304Z if __name__ == "__main__": 2023-01-11T21:41:23.8610455Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8610601Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8610904Z arg0_1 = rand_strided((1, 2, 3), (6, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8611163Z arg1_1 = rand_strided((2, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8611446Z arg2_1 = rand_strided((3, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.8611724Z arg3_1 = rand_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8611901Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1])) 2023-01-11T21:41:23.8611909Z 2023-01-11T21:41:23.8611915Z 2023-01-11T21:41:23.8612042Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8612134Z import torch 2023-01-11T21:41:23.8612222Z import random 2023-01-11T21:41:23.8612378Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8612546Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8612553Z 2023-01-11T21:41:23.8612659Z aten = torch.ops.aten 2023-01-11T21:41:23.8612846Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8612979Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8612987Z 2023-01-11T21:41:23.8612994Z 2023-01-11T21:41:23.8613193Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8613491Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8613643Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8613778Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8613858Z { 2023-01-11T21:41:23.8613996Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8614081Z { 2023-01-11T21:41:23.8614194Z #pragma omp for 2023-01-11T21:41:23.8614401Z for(long i0=0; i0<6; i0+=1) 2023-01-11T21:41:23.8614478Z { 2023-01-11T21:41:23.8614560Z { 2023-01-11T21:41:23.8614648Z { 2023-01-11T21:41:23.8614768Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8614878Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:23.8614972Z } 2023-01-11T21:41:23.8615062Z } 2023-01-11T21:41:23.8615134Z } 2023-01-11T21:41:23.8615224Z } 2023-01-11T21:41:23.8615310Z } 2023-01-11T21:41:23.8615435Z ''') 2023-01-11T21:41:23.8615445Z 2023-01-11T21:41:23.8615452Z 2023-01-11T21:41:23.8615584Z async_compile.wait(globals()) 2023-01-11T21:41:23.8615687Z del async_compile 2023-01-11T21:41:23.8615694Z 2023-01-11T21:41:23.8615790Z def call(args): 2023-01-11T21:41:23.8615893Z arg0_1, arg1_1, arg2_1, arg3_1 = args 2023-01-11T21:41:23.8615988Z args.clear() 2023-01-11T21:41:23.8616293Z buf0 = empty_strided((1, 2, 3), (6, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8616560Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.8616661Z del arg0_1 2023-01-11T21:41:23.8616833Z aten.index_put_(buf0, [None,arg1_1,arg2_1], 
arg3_1, True) 2023-01-11T21:41:23.8616923Z del arg1_1 2023-01-11T21:41:23.8617001Z del arg2_1 2023-01-11T21:41:23.8617095Z del arg3_1 2023-01-11T21:41:23.8617198Z return (buf0, ) 2023-01-11T21:41:23.8617206Z 2023-01-11T21:41:23.8617213Z 2023-01-11T21:41:23.8617327Z if __name__ == "__main__": 2023-01-11T21:41:23.8617504Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8617688Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8618002Z arg0_1 = rand_strided((1, 2, 3), (6, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8618290Z arg1_1 = rand_strided((2, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8618550Z arg2_1 = rand_strided((3, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.8618838Z arg3_1 = rand_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8619027Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1])) 2023-01-11T21:41:23.8619035Z 2023-01-11T21:41:23.8619129Z ok (1.672s) 2023-01-11T21:41:23.8619857Z test_index_select_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8620041Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8620436Z [2023-01-11 21:34:59,509] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 242 2023-01-11T21:41:23.8620845Z [2023-01-11 21:35:01,244] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 242 2023-01-11T21:41:23.8621472Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8621654Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8622032Z [2023-01-11 21:35:01,287] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 243 2023-01-11T21:41:23.8622566Z [2023-01-11 21:35:02,956] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 243 2023-01-11T21:41:23.8622576Z 2023-01-11T21:41:23.8622712Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8622814Z import torch 2023-01-11T21:41:23.8622914Z import random 2023-01-11T21:41:23.8623204Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8623385Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8623393Z 2023-01-11T21:41:23.8623509Z aten = torch.ops.aten 2023-01-11T21:41:23.8623696Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8623833Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8623840Z 2023-01-11T21:41:23.8623847Z 2023-01-11T21:41:23.8624078Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8624369Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8624534Z extern "C" void kernel(const int* __restrict__ in_ptr0, 2023-01-11T21:41:23.8624677Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8624813Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8624941Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.8625155Z float* __restrict__ out_ptr2) 2023-01-11T21:41:23.8625248Z { 2023-01-11T21:41:23.8625390Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8625479Z { 2023-01-11T21:41:23.8625590Z #pragma omp for 2023-01-11T21:41:23.8625707Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:23.8625797Z { 2023-01-11T21:41:23.8625894Z #pragma GCC ivdep 2023-01-11T21:41:23.8626009Z for(long i1=0; i1<64; i1+=1) 2023-01-11T21:41:23.8626099Z { 2023-01-11T21:41:23.8626188Z { 2023-01-11T21:41:23.8626279Z { 2023-01-11T21:41:23.8626414Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8626560Z auto tmp1 = in_ptr1[i1 + (64*tmp0)]; 2023-01-11T21:41:23.8626681Z out_ptr0[i1 + (64*i0)] = tmp1; 2023-01-11T21:41:23.8626773Z } 2023-01-11T21:41:23.8626857Z } 2023-01-11T21:41:23.8626944Z } 2023-01-11T21:41:23.8627033Z } 2023-01-11T21:41:23.8627152Z #pragma omp for 2023-01-11T21:41:23.8627249Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8627334Z { 2023-01-11T21:41:23.8627443Z #pragma GCC ivdep 2023-01-11T21:41:23.8627561Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:23.8627646Z { 2023-01-11T21:41:23.8627758Z #pragma GCC ivdep 2023-01-11T21:41:23.8627878Z for(long i2=0; i2<8; i2+=1) 2023-01-11T21:41:23.8627951Z { 2023-01-11T21:41:23.8628043Z { 2023-01-11T21:41:23.8628138Z { 2023-01-11T21:41:23.8628282Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:23.8628443Z auto tmp1 = in_ptr1[i2 + (8*tmp0) + (64*i0)]; 2023-01-11T21:41:23.8628596Z out_ptr1[i2 + (8*i1) + (32*i0)] = tmp1; 2023-01-11T21:41:23.8628691Z } 2023-01-11T21:41:23.8628768Z } 2023-01-11T21:41:23.8628873Z } 2023-01-11T21:41:23.8628965Z } 2023-01-11T21:41:23.8629049Z } 2023-01-11T21:41:23.8629167Z #pragma omp for 2023-01-11T21:41:23.8629286Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8629378Z { 2023-01-11T21:41:23.8629476Z #pragma GCC ivdep 2023-01-11T21:41:23.8629594Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:23.8629685Z { 2023-01-11T21:41:23.8629804Z #pragma GCC ivdep 2023-01-11T21:41:23.8629928Z 
for(long i2=0; i2<4; i2+=1) 2023-01-11T21:41:23.8630024Z { 2023-01-11T21:41:23.8630121Z { 2023-01-11T21:41:23.8630200Z { 2023-01-11T21:41:23.8630329Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:23.8630457Z auto tmp1 = in_ptr0[i2]; 2023-01-11T21:41:23.8630616Z auto tmp2 = in_ptr1[tmp1 + (8*tmp0) + (64*i0)]; 2023-01-11T21:41:23.8630845Z out_ptr2[i2 + (4*i1) + (16*i0)] = tmp2; 2023-01-11T21:41:23.8630943Z } 2023-01-11T21:41:23.8631038Z } 2023-01-11T21:41:23.8631117Z } 2023-01-11T21:41:23.8631206Z } 2023-01-11T21:41:23.8631290Z } 2023-01-11T21:41:23.8631383Z } 2023-01-11T21:41:23.8631469Z } 2023-01-11T21:41:23.8631604Z ''') 2023-01-11T21:41:23.8631614Z 2023-01-11T21:41:23.8631620Z 2023-01-11T21:41:23.8631755Z async_compile.wait(globals()) 2023-01-11T21:41:23.8631840Z del async_compile 2023-01-11T21:41:23.8631846Z 2023-01-11T21:41:23.8631945Z def call(args): 2023-01-11T21:41:23.8632054Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.8632154Z args.clear() 2023-01-11T21:41:23.8632465Z buf0 = empty_strided((4, 8, 8), (64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8632765Z buf1 = empty_strided((8, 4, 8), (32, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8633156Z buf2 = empty_strided((8, 4, 4), (16, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8633465Z kernel_cpp_0(c_void_p(arg1_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.8633562Z del arg0_1 2023-01-11T21:41:23.8633661Z del arg1_1 2023-01-11T21:41:23.8633873Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:23.8633880Z 2023-01-11T21:41:23.8633887Z 2023-01-11T21:41:23.8633992Z if __name__ == "__main__": 2023-01-11T21:41:23.8634152Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8634317Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8642662Z arg0_1 = rand_strided((8, 8, 8), (64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8643082Z arg1_1 = rand_strided((4, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:23.8643258Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.8643266Z 2023-01-11T21:41:23.8643280Z 2023-01-11T21:41:23.8643419Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8643520Z import torch 2023-01-11T21:41:23.8643623Z import random 2023-01-11T21:41:23.8643791Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8643967Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8643976Z 2023-01-11T21:41:23.8644089Z aten = torch.ops.aten 2023-01-11T21:41:23.8644270Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8644401Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8644409Z 2023-01-11T21:41:23.8644414Z 2023-01-11T21:41:23.8644622Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8644917Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8645090Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.8645248Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8645393Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8645533Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.8645659Z float* __restrict__ out_ptr2) 2023-01-11T21:41:23.8645746Z { 2023-01-11T21:41:23.8645886Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8645972Z { 2023-01-11T21:41:23.8646081Z #pragma omp for 2023-01-11T21:41:23.8646194Z for(long i0=0; 
i0<4; i0+=1) 2023-01-11T21:41:23.8646278Z { 2023-01-11T21:41:23.8646372Z #pragma GCC ivdep 2023-01-11T21:41:23.8646488Z for(long i1=0; i1<64; i1+=1) 2023-01-11T21:41:23.8646576Z { 2023-01-11T21:41:23.8646665Z { 2023-01-11T21:41:23.8646752Z { 2023-01-11T21:41:23.8646880Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8647008Z auto tmp1 = in_ptr1[i1 + (64*tmp0)]; 2023-01-11T21:41:23.8647145Z out_ptr0[i1 + (64*i0)] = tmp1; 2023-01-11T21:41:23.8647354Z } 2023-01-11T21:41:23.8647440Z } 2023-01-11T21:41:23.8647526Z } 2023-01-11T21:41:23.8647609Z } 2023-01-11T21:41:23.8647713Z #pragma omp for 2023-01-11T21:41:23.8647807Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8647890Z { 2023-01-11T21:41:23.8648000Z #pragma GCC ivdep 2023-01-11T21:41:23.8648110Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:23.8648198Z { 2023-01-11T21:41:23.8648306Z #pragma GCC ivdep 2023-01-11T21:41:23.8648424Z for(long i2=0; i2<8; i2+=1) 2023-01-11T21:41:23.8648495Z { 2023-01-11T21:41:23.8648584Z { 2023-01-11T21:41:23.8648676Z { 2023-01-11T21:41:23.8648801Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:23.8648952Z auto tmp1 = in_ptr1[i2 + (8*tmp0) + (64*i0)]; 2023-01-11T21:41:23.8649152Z out_ptr1[i2 + (8*i1) + (32*i0)] = tmp1; 2023-01-11T21:41:23.8649252Z } 2023-01-11T21:41:23.8649325Z } 2023-01-11T21:41:23.8649410Z } 2023-01-11T21:41:23.8649492Z } 2023-01-11T21:41:23.8649577Z } 2023-01-11T21:41:23.8649682Z #pragma omp for 2023-01-11T21:41:23.8649789Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8649874Z { 2023-01-11T21:41:23.8649964Z #pragma GCC ivdep 2023-01-11T21:41:23.8650074Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:23.8650158Z { 2023-01-11T21:41:23.8650269Z #pragma GCC ivdep 2023-01-11T21:41:23.8650387Z for(long i2=0; i2<4; i2+=1) 2023-01-11T21:41:23.8650471Z { 2023-01-11T21:41:23.8650546Z { 2023-01-11T21:41:23.8650639Z { 2023-01-11T21:41:23.8650770Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:23.8650897Z auto tmp1 = in_ptr0[i2]; 2023-01-11T21:41:23.8651057Z auto tmp2 = in_ptr1[tmp1 + (8*tmp0) + (64*i0)]; 2023-01-11T21:41:23.8651193Z out_ptr2[i2 + (4*i1) + (16*i0)] = tmp2; 2023-01-11T21:41:23.8651284Z } 2023-01-11T21:41:23.8651372Z } 2023-01-11T21:41:23.8651443Z } 2023-01-11T21:41:23.8651528Z } 2023-01-11T21:41:23.8651613Z } 2023-01-11T21:41:23.8651700Z } 2023-01-11T21:41:23.8651785Z } 2023-01-11T21:41:23.8651915Z ''') 2023-01-11T21:41:23.8651924Z 2023-01-11T21:41:23.8651931Z 2023-01-11T21:41:23.8652048Z async_compile.wait(globals()) 2023-01-11T21:41:23.8652145Z del async_compile 2023-01-11T21:41:23.8652153Z 2023-01-11T21:41:23.8652249Z def call(args): 2023-01-11T21:41:23.8652347Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.8652445Z args.clear() 2023-01-11T21:41:23.8652763Z buf0 = empty_strided((4, 8, 8), (64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8653090Z buf1 = empty_strided((8, 4, 8), (32, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8653386Z buf2 = empty_strided((8, 4, 4), (16, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8653696Z kernel_cpp_0(c_void_p(arg1_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.8653793Z del arg0_1 2023-01-11T21:41:23.8653884Z del arg1_1 2023-01-11T21:41:23.8654002Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:23.8654011Z 2023-01-11T21:41:23.8654017Z 2023-01-11T21:41:23.8654121Z if __name__ == "__main__": 2023-01-11T21:41:23.8654279Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8654458Z from torch._inductor.utils 
import print_performance 2023-01-11T21:41:23.8654766Z arg0_1 = rand_strided((8, 8, 8), (64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8655038Z arg1_1 = rand_strided((4, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8655299Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.8655307Z 2023-01-11T21:41:23.8655402Z ok (3.490s) 2023-01-11T21:41:23.8656100Z test_indirect_load_broadcast_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8656276Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8656665Z [2023-01-11 21:35:03,049] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 244 2023-01-11T21:41:23.8657055Z [2023-01-11 21:35:04,652] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 244 2023-01-11T21:41:23.8657128Z 2023-01-11T21:41:23.8657265Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8657362Z import torch 2023-01-11T21:41:23.8657443Z import random 2023-01-11T21:41:23.8657601Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8657764Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8657773Z 2023-01-11T21:41:23.8657880Z aten = torch.ops.aten 2023-01-11T21:41:23.8658065Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8658187Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8658194Z 2023-01-11T21:41:23.8658201Z 2023-01-11T21:41:23.8658397Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8658684Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8658833Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.8658975Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8659121Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.8659258Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8659351Z { 2023-01-11T21:41:23.8659491Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8659580Z { 2023-01-11T21:41:23.8659676Z #pragma omp for 2023-01-11T21:41:23.8659789Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.8659875Z { 2023-01-11T21:41:23.8659983Z #pragma GCC ivdep 2023-01-11T21:41:23.8660100Z for(long i1=0; i1<21; i1+=1) 2023-01-11T21:41:23.8660187Z { 2023-01-11T21:41:23.8660279Z { 2023-01-11T21:41:23.8660358Z { 2023-01-11T21:41:23.8660495Z auto tmp0 = in_ptr0[i0 + (32*i1)]; 2023-01-11T21:41:23.8660630Z auto tmp2 = in_ptr2[i0]; 2023-01-11T21:41:23.8660780Z auto tmp1 = in_ptr1[i1 + (512*tmp0)]; 2023-01-11T21:41:23.8660916Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:23.8661062Z out_ptr0[i1 + (21*i0)] = tmp3; 2023-01-11T21:41:23.8661154Z } 2023-01-11T21:41:23.8661231Z } 2023-01-11T21:41:23.8661317Z } 2023-01-11T21:41:23.8661407Z } 2023-01-11T21:41:23.8661491Z } 2023-01-11T21:41:23.8661576Z } 2023-01-11T21:41:23.8661700Z ''') 2023-01-11T21:41:23.8661707Z 2023-01-11T21:41:23.8661713Z 2023-01-11T21:41:23.8661834Z async_compile.wait(globals()) 2023-01-11T21:41:23.8661917Z del async_compile 2023-01-11T21:41:23.8661923Z 2023-01-11T21:41:23.8662013Z def call(args): 2023-01-11T21:41:23.8662124Z arg0_1, 
arg1_1, arg2_1 = args 2023-01-11T21:41:23.8662218Z args.clear() 2023-01-11T21:41:23.8662676Z buf0 = empty_strided((32, 21), (21, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8662938Z kernel_cpp_0(c_void_p(arg2_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.8663141Z del arg0_1 2023-01-11T21:41:23.8663219Z del arg1_1 2023-01-11T21:41:23.8663311Z del arg2_1 2023-01-11T21:41:23.8663413Z return (buf0, ) 2023-01-11T21:41:23.8663421Z 2023-01-11T21:41:23.8663427Z 2023-01-11T21:41:23.8663537Z if __name__ == "__main__": 2023-01-11T21:41:23.8663705Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8663887Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8664182Z arg0_1 = rand_strided((32, 1), (1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8664482Z arg1_1 = rand_strided((9521, 512), (512, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8664753Z arg2_1 = rand_strided((32, 21), (1, 32), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8664932Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8664940Z 2023-01-11T21:41:23.8665034Z ok (1.800s) 2023-01-11T21:41:23.8665833Z test_inf_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8666033Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8666436Z [2023-01-11 21:35:04,786] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 245 2023-01-11T21:41:23.8666832Z [2023-01-11 21:35:06,451] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 245 2023-01-11T21:41:23.8666842Z 2023-01-11T21:41:23.8666971Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8667068Z import torch 2023-01-11T21:41:23.8667157Z import random 2023-01-11T21:41:23.8667298Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8667480Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8667491Z 2023-01-11T21:41:23.8667596Z aten = torch.ops.aten 2023-01-11T21:41:23.8667785Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8667907Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8667914Z 2023-01-11T21:41:23.8667919Z 2023-01-11T21:41:23.8668123Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8668418Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8668583Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8668703Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8668836Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.8668963Z float* __restrict__ out_ptr2) 2023-01-11T21:41:23.8669055Z { 2023-01-11T21:41:23.8669199Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8669290Z { 2023-01-11T21:41:23.8669399Z #pragma omp for 2023-01-11T21:41:23.8669507Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.8669594Z { 2023-01-11T21:41:23.8669789Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8670001Z auto tmp1 = 
at::vec::Vectorized(std::numeric_limits::infinity()); 2023-01-11T21:41:23.8670118Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8670444Z auto tmp3 = at::vec::Vectorized(-std::numeric_limits::infinity()); 2023-01-11T21:41:23.8670559Z auto tmp4 = tmp0 + tmp3; 2023-01-11T21:41:23.8670657Z auto tmp5 = tmp0 * tmp3; 2023-01-11T21:41:23.8670782Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8670903Z tmp4.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8671033Z tmp5.store(out_ptr2 + 8*i0); 2023-01-11T21:41:23.8671122Z } 2023-01-11T21:41:23.8671259Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8671375Z for(long i0=8; i0<8; i0+=1) 2023-01-11T21:41:23.8671529Z { 2023-01-11T21:41:23.8671644Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8671816Z auto tmp1 = std::numeric_limits::infinity(); 2023-01-11T21:41:23.8671941Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8672286Z auto tmp3 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.8672405Z auto tmp4 = tmp0 + tmp3; 2023-01-11T21:41:23.8672527Z auto tmp5 = tmp0 * tmp3; 2023-01-11T21:41:23.8672622Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8672739Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:23.8672851Z out_ptr2[i0] = tmp5; 2023-01-11T21:41:23.8672940Z } 2023-01-11T21:41:23.8673031Z } 2023-01-11T21:41:23.8673110Z } 2023-01-11T21:41:23.8673225Z ''') 2023-01-11T21:41:23.8673233Z 2023-01-11T21:41:23.8673238Z 2023-01-11T21:41:23.8673349Z async_compile.wait(globals()) 2023-01-11T21:41:23.8673448Z del async_compile 2023-01-11T21:41:23.8673460Z 2023-01-11T21:41:23.8673625Z def call(args): 2023-01-11T21:41:23.8673792Z arg0_1, = args 2023-01-11T21:41:23.8673892Z args.clear() 2023-01-11T21:41:23.8674186Z buf0 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8674476Z buf1 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8674738Z buf2 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8674980Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.8675072Z del arg0_1 2023-01-11T21:41:23.8675189Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:23.8675198Z 2023-01-11T21:41:23.8675204Z 2023-01-11T21:41:23.8675309Z if __name__ == "__main__": 2023-01-11T21:41:23.8675460Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8675628Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8675912Z arg0_1 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8676055Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8676062Z 2023-01-11T21:41:23.8676133Z ok (1.695s) 2023-01-11T21:41:23.8676832Z test_inplace_activations_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8677015Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8677415Z [2023-01-11 21:35:06,603] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 246 2023-01-11T21:41:23.8677820Z [2023-01-11 21:35:08,372] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 246 2023-01-11T21:41:23.8677831Z 2023-01-11T21:41:23.8677957Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8678048Z import torch 2023-01-11T21:41:23.8678143Z import random 2023-01-11T21:41:23.8678297Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8678442Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8678449Z 2023-01-11T21:41:23.8678552Z aten = torch.ops.aten 2023-01-11T21:41:23.8678741Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8678872Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8678880Z 2023-01-11T21:41:23.8678887Z 2023-01-11T21:41:23.8679079Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8679374Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8679538Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8679677Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8679884Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.8680020Z float* __restrict__ out_ptr2, 2023-01-11T21:41:23.8680193Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.8680395Z float* __restrict__ out_ptr4, 2023-01-11T21:41:23.8680596Z float* __restrict__ out_ptr5, 2023-01-11T21:41:23.8680778Z float* __restrict__ out_ptr6) 2023-01-11T21:41:23.8680903Z { 2023-01-11T21:41:23.8681087Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8681212Z { 2023-01-11T21:41:23.8681365Z #pragma omp for 2023-01-11T21:41:23.8681528Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.8681654Z { 2023-01-11T21:41:23.8681782Z { 2023-01-11T21:41:23.8681908Z { 2023-01-11T21:41:23.8682066Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8682265Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.8682523Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8682732Z auto tmp3 = static_cast(3); 2023-01-11T21:41:23.8682901Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:23.8683104Z auto tmp5 = static_cast(0.0); 2023-01-11T21:41:23.8683349Z auto tmp6 = (tmp5 != tmp5) ? tmp5 : std::max(tmp4, tmp5); 2023-01-11T21:41:23.8683540Z auto tmp7 = static_cast(6.0); 2023-01-11T21:41:23.8683778Z auto tmp8 = (tmp7 != tmp7) ? tmp7 : std::min(tmp6, tmp7); 2023-01-11T21:41:23.8683938Z auto tmp9 = tmp2 * tmp8; 2023-01-11T21:41:23.8684122Z auto tmp10 = static_cast(6); 2023-01-11T21:41:23.8684285Z auto tmp11 = tmp9 / tmp10; 2023-01-11T21:41:23.8684595Z auto tmp12 = static_cast(-1.0); 2023-01-11T21:41:23.8684821Z auto tmp13 = (tmp12 != tmp12) ? tmp12 : std::max(tmp2, tmp12); 2023-01-11T21:41:23.8685016Z auto tmp14 = static_cast(1.0); 2023-01-11T21:41:23.8685237Z auto tmp15 = (tmp14 != tmp14) ? tmp14 : std::min(tmp13, tmp14); 2023-01-11T21:41:23.8685426Z auto tmp16 = static_cast(0); 2023-01-11T21:41:23.8685588Z auto tmp17 = tmp2 > tmp16; 2023-01-11T21:41:23.8685781Z auto tmp18 = static_cast(0.01); 2023-01-11T21:41:23.8685949Z auto tmp19 = tmp2 * tmp18; 2023-01-11T21:41:23.8686135Z auto tmp20 = tmp17 ? 
tmp2 : tmp19; 2023-01-11T21:41:23.8686420Z auto tmp21 = std::exp(-tmp2); 2023-01-11T21:41:23.8686585Z auto tmp22 = 1 / (1 + tmp21); 2023-01-11T21:41:23.8686748Z auto tmp23 = tmp2 * tmp22; 2023-01-11T21:41:23.8686934Z auto tmp24 = std::log1p(tmp2); 2023-01-11T21:41:23.8687130Z auto tmp25 = static_cast(0); 2023-01-11T21:41:23.8687338Z auto tmp26 = static_cast(99.0); 2023-01-11T21:41:23.8687526Z auto tmp27 = tmp25 ? tmp26 : tmp2; 2023-01-11T21:41:23.8687717Z auto tmp28 = static_cast(1); 2023-01-11T21:41:23.8687904Z auto tmp29 = tmp28 ? tmp26 : tmp2; 2023-01-11T21:41:23.8688049Z out_ptr0[i0] = tmp11; 2023-01-11T21:41:23.8688211Z out_ptr1[i0] = tmp15; 2023-01-11T21:41:23.8688377Z out_ptr2[i0] = tmp20; 2023-01-11T21:41:23.8688534Z out_ptr3[i0] = tmp23; 2023-01-11T21:41:23.8688692Z out_ptr4[i0] = tmp24; 2023-01-11T21:41:23.8688846Z out_ptr5[i0] = tmp27; 2023-01-11T21:41:23.8689002Z out_ptr6[i0] = tmp29; 2023-01-11T21:41:23.8689118Z } 2023-01-11T21:41:23.8689244Z } 2023-01-11T21:41:23.8689371Z } 2023-01-11T21:41:23.8689493Z } 2023-01-11T21:41:23.8689702Z } 2023-01-11T21:41:23.8689866Z ''') 2023-01-11T21:41:23.8689876Z 2023-01-11T21:41:23.8689882Z 2023-01-11T21:41:23.8690064Z async_compile.wait(globals()) 2023-01-11T21:41:23.8690195Z del async_compile 2023-01-11T21:41:23.8690217Z 2023-01-11T21:41:23.8690336Z def call(args): 2023-01-11T21:41:23.8690469Z arg0_1, = args 2023-01-11T21:41:23.8690605Z args.clear() 2023-01-11T21:41:23.8690976Z buf0 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8691339Z buf1 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8691685Z buf2 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8692036Z buf3 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8692362Z buf4 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8692694Z buf5 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8693112Z buf6 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8693647Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(buf5.data_ptr()), c_void_p(buf6.data_ptr())) 2023-01-11T21:41:23.8693763Z del arg0_1 2023-01-11T21:41:23.8693955Z return (buf0, buf1, buf2, buf3, buf4, buf5, buf6, ) 2023-01-11T21:41:23.8693965Z 2023-01-11T21:41:23.8693973Z 2023-01-11T21:41:23.8694112Z if __name__ == "__main__": 2023-01-11T21:41:23.8694327Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8694547Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8694911Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8695119Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8695127Z 2023-01-11T21:41:23.8695261Z ok (1.923s) 2023-01-11T21:41:23.8695912Z test_inplace_add_cpu (__main__.CpuTests) ... 
[2023-01-11 21:35:08,393] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 247 2023-01-11T21:41:23.8696429Z [2023-01-11 21:35:10,091] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 247 2023-01-11T21:41:23.8696438Z 2023-01-11T21:41:23.8696618Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8696758Z import torch 2023-01-11T21:41:23.8696873Z import random 2023-01-11T21:41:23.8697040Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8697255Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8697263Z 2023-01-11T21:41:23.8697413Z aten = torch.ops.aten 2023-01-11T21:41:23.8697660Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8697838Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8697847Z 2023-01-11T21:41:23.8697854Z 2023-01-11T21:41:23.8698126Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8698538Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8698777Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8698964Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8699155Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8699277Z { 2023-01-11T21:41:23.8699463Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8699576Z { 2023-01-11T21:41:23.8699722Z #pragma omp for 2023-01-11T21:41:23.8699878Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.8699987Z { 2023-01-11T21:41:23.8700247Z auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8700504Z auto tmp1 = at::vec::Vectorized<float>::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.8700667Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8700826Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8701040Z } 2023-01-11T21:41:23.8701226Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8701361Z for(long i0=16; i0<16; i0+=1) 2023-01-11T21:41:23.8701478Z { 2023-01-11T21:41:23.8701641Z auto tmp0 = out_ptr0[i0]; 2023-01-11T21:41:23.8701798Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.8701958Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8702112Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8702228Z } 2023-01-11T21:41:23.8702464Z } 2023-01-11T21:41:23.8702556Z } 2023-01-11T21:41:23.8702722Z ''') 2023-01-11T21:41:23.8702732Z 2023-01-11T21:41:23.8702739Z 2023-01-11T21:41:23.8702914Z async_compile.wait(globals()) 2023-01-11T21:41:23.8703049Z del async_compile 2023-01-11T21:41:23.8703057Z 2023-01-11T21:41:23.8703191Z def call(args): 2023-01-11T21:41:23.8703332Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.8703451Z args.clear() 2023-01-11T21:41:23.8703889Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg0_1.data_ptr())) 2023-01-11T21:41:23.8704025Z del arg1_1 2023-01-11T21:41:23.8704174Z return (arg0_1, ) 2023-01-11T21:41:23.8704182Z 2023-01-11T21:41:23.8704189Z 2023-01-11T21:41:23.8704339Z if __name__ == "__main__": 2023-01-11T21:41:23.8704555Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8704778Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8705118Z arg0_1 = rand_strided((4, 4), (4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8705435Z arg1_1 = rand_strided((4, 4), (4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8705651Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.8705659Z 2023-01-11T21:41:23.8705791Z ok (1.716s) 2023-01-11T21:41:23.8706427Z 
test_inplace_mixed_dtype_ops_cpu (__main__.CpuTests) ... [2023-01-11 21:35:10,139] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 248 2023-01-11T21:41:23.8706891Z [2023-01-11 21:35:11,800] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 248 2023-01-11T21:41:23.8706899Z 2023-01-11T21:41:23.8707044Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8707152Z import torch 2023-01-11T21:41:23.8707255Z import random 2023-01-11T21:41:23.8707411Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8707592Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8707600Z 2023-01-11T21:41:23.8707743Z aten = torch.ops.aten 2023-01-11T21:41:23.8707983Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8708137Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8708145Z 2023-01-11T21:41:23.8708150Z 2023-01-11T21:41:23.8708397Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8708772Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8708981Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.8709164Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8709347Z const double* __restrict__ in_ptr1) 2023-01-11T21:41:23.8709459Z { 2023-01-11T21:41:23.8709621Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8709723Z { 2023-01-11T21:41:23.8709857Z #pragma omp for 2023-01-11T21:41:23.8709996Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.8710095Z { 2023-01-11T21:41:23.8710198Z { 2023-01-11T21:41:23.8710308Z { 2023-01-11T21:41:23.8710458Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8710601Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.8710772Z auto tmp2 = static_cast(tmp1); 2023-01-11T21:41:23.8710927Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:23.8711093Z auto tmp4 = static_cast(tmp3); 2023-01-11T21:41:23.8711250Z auto tmp5 = tmp4 + tmp1; 2023-01-11T21:41:23.8711534Z auto tmp6 = static_cast(tmp5); 2023-01-11T21:41:23.8711693Z auto tmp7 = static_cast(tmp6); 2023-01-11T21:41:23.8711844Z auto tmp8 = tmp7 * tmp1; 2023-01-11T21:41:23.8712036Z auto tmp9 = static_cast(tmp8); 2023-01-11T21:41:23.8712203Z in_out_ptr0[i0] = tmp9; 2023-01-11T21:41:23.8712319Z } 2023-01-11T21:41:23.8712434Z } 2023-01-11T21:41:23.8712533Z } 2023-01-11T21:41:23.8712643Z } 2023-01-11T21:41:23.8712736Z } 2023-01-11T21:41:23.8712881Z ''') 2023-01-11T21:41:23.8712891Z 2023-01-11T21:41:23.8712898Z 2023-01-11T21:41:23.8713046Z async_compile.wait(globals()) 2023-01-11T21:41:23.8713124Z del async_compile 2023-01-11T21:41:23.8713131Z 2023-01-11T21:41:23.8713222Z def call(args): 2023-01-11T21:41:23.8713316Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.8713405Z args.clear() 2023-01-11T21:41:23.8713828Z buf0 = empty_strided((4, 4), (4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8713980Z buf1 = buf0; del buf0 # reuse 2023-01-11T21:41:23.8714250Z kernel_cpp_0(c_void_p(buf1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr())) 2023-01-11T21:41:23.8714328Z del arg0_1 2023-01-11T21:41:23.8714425Z del arg1_1 2023-01-11T21:41:23.8714533Z return (buf1, ) 2023-01-11T21:41:23.8714541Z 2023-01-11T21:41:23.8714546Z 2023-01-11T21:41:23.8714661Z if __name__ == "__main__": 2023-01-11T21:41:23.8714853Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8715063Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8715402Z arg0_1 = rand_strided((4, 
4), (4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8715766Z arg1_1 = rand_strided((4, 4), (4, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.8715960Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.8715976Z 2023-01-11T21:41:23.8716072Z ok (1.709s) 2023-01-11T21:41:23.8716678Z test_input_mutation1_cpu (__main__.CpuTests) ... [2023-01-11 21:35:11,820] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 249 2023-01-11T21:41:23.8717056Z [2023-01-11 21:35:11,833] torch._inductor.scheduler: [DEBUG] remove_buffer('buf0') 2023-01-11T21:41:23.8717428Z [2023-01-11 21:35:11,835] torch._inductor.scheduler: [DEBUG] remove_buffer('buf0') 2023-01-11T21:41:23.8717878Z [2023-01-11 21:35:13,436] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 249 2023-01-11T21:41:23.8717889Z 2023-01-11T21:41:23.8718052Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8718159Z import torch 2023-01-11T21:41:23.8718267Z import random 2023-01-11T21:41:23.8718470Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8718691Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8718699Z 2023-01-11T21:41:23.8718841Z aten = torch.ops.aten 2023-01-11T21:41:23.8719103Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8719256Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8719264Z 2023-01-11T21:41:23.8719271Z 2023-01-11T21:41:23.8719503Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8719823Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8720001Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8720169Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.8720332Z float* __restrict__ out_ptr2) 2023-01-11T21:41:23.8720435Z { 2023-01-11T21:41:23.8720609Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8720698Z { 2023-01-11T21:41:23.8720833Z #pragma omp for 2023-01-11T21:41:23.8720954Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8721070Z { 2023-01-11T21:41:23.8721311Z auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8721694Z auto tmp1 = at::vec::Vectorized<float>(static_cast<float>(1)); 2023-01-11T21:41:23.8721848Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8721990Z auto tmp3 = tmp2 * tmp2; 2023-01-11T21:41:23.8722208Z auto tmp4 = at::vec::Vectorized<float>(static_cast<float>(2)); 2023-01-11T21:41:23.8722352Z auto tmp5 = tmp2 + tmp4; 2023-01-11T21:41:23.8722474Z auto tmp6 = tmp3 / tmp5; 2023-01-11T21:41:23.8722625Z tmp2.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8722764Z tmp6.store(out_ptr2 + 8*i0); 2023-01-11T21:41:23.8722862Z } 2023-01-11T21:41:23.8723007Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8723134Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.8723232Z { 2023-01-11T21:41:23.8723357Z auto tmp0 = out_ptr1[i0]; 2023-01-11T21:41:23.8723517Z auto tmp1 = static_cast<float>(1); 2023-01-11T21:41:23.8723733Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8723889Z auto tmp3 = tmp2 * tmp2; 2023-01-11T21:41:23.8724058Z auto tmp4 = static_cast<float>(2); 2023-01-11T21:41:23.8724205Z auto tmp5 = tmp2 + tmp4; 2023-01-11T21:41:23.8724355Z auto tmp6 = tmp3 / tmp5; 2023-01-11T21:41:23.8724488Z out_ptr1[i0] = tmp2; 2023-01-11T21:41:23.8724637Z out_ptr2[i0] = tmp6; 2023-01-11T21:41:23.8724755Z } 2023-01-11T21:41:23.8724869Z } 2023-01-11T21:41:23.8724985Z } 2023-01-11T21:41:23.8725152Z ''') 2023-01-11T21:41:23.8725165Z 
2023-01-11T21:41:23.8725172Z 2023-01-11T21:41:23.8725343Z async_compile.wait(globals()) 2023-01-11T21:41:23.8725471Z del async_compile 2023-01-11T21:41:23.8725479Z 2023-01-11T21:41:23.8725601Z def call(args): 2023-01-11T21:41:23.8725729Z arg0_1, = args 2023-01-11T21:41:23.8725850Z args.clear() 2023-01-11T21:41:23.8726182Z buf2 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8726455Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:23.8726558Z del arg0_1 2023-01-11T21:41:23.8726655Z return (buf2, ) 2023-01-11T21:41:23.8726663Z 2023-01-11T21:41:23.8726684Z 2023-01-11T21:41:23.8726783Z if __name__ == "__main__": 2023-01-11T21:41:23.8726966Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8727165Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8727478Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8727664Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8727672Z 2023-01-11T21:41:23.8727785Z ok (1.636s) 2023-01-11T21:41:23.8728400Z test_input_mutation2_cpu (__main__.CpuTests) ... [2023-01-11 21:35:13,505] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 250 2023-01-11T21:41:23.8728768Z [2023-01-11 21:35:13,508] torch._inductor.graph: [WARNING] Creating implicit fallback for: 2023-01-11T21:41:23.8728941Z target: aten.expand_copy.default 2023-01-11T21:41:23.8729081Z args[0]: TensorBox(StorageBox( 2023-01-11T21:41:23.8729194Z Pointwise( 2023-01-11T21:41:23.8729353Z 'cpu', 2023-01-11T21:41:23.8729495Z torch.float32, 2023-01-11T21:41:23.8729662Z tmp0 = constant(66.0, torch.float32) 2023-01-11T21:41:23.8729773Z return tmp0 2023-01-11T21:41:23.8729890Z , 2023-01-11T21:41:23.8730019Z ranges=[1], 2023-01-11T21:41:23.8730217Z origins={lift_fresh_copy, _tensor_constant0} 2023-01-11T21:41:23.8730328Z ) 2023-01-11T21:41:23.8730423Z )) 2023-01-11T21:41:23.8730537Z args[1]: [64] 2023-01-11T21:41:23.8730996Z [2023-01-11 21:35:13,516] torch._inductor.ir: [WARNING] Using FallbackKernel: torch.ops.aten.expand_copy.default 2023-01-11T21:41:23.8731461Z [2023-01-11 21:35:15,200] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 250 2023-01-11T21:41:23.8731470Z 2023-01-11T21:41:23.8731743Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8731870Z import torch 2023-01-11T21:41:23.8731987Z import random 2023-01-11T21:41:23.8732195Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8732430Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8732438Z 2023-01-11T21:41:23.8732595Z aten = torch.ops.aten 2023-01-11T21:41:23.8732841Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8733021Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8733031Z 2023-01-11T21:41:23.8733038Z 2023-01-11T21:41:23.8733307Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8733687Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8733918Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8734106Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8734289Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8734492Z { 2023-01-11T21:41:23.8734678Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8734805Z { 2023-01-11T21:41:23.8734959Z #pragma omp for 2023-01-11T21:41:23.8735118Z 
for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8735246Z { 2023-01-11T21:41:23.8735503Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8735759Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.8735912Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8736095Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8736225Z } 2023-01-11T21:41:23.8736413Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8736572Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.8736700Z { 2023-01-11T21:41:23.8736851Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8737042Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.8737216Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8737376Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8737502Z } 2023-01-11T21:41:23.8737660Z #pragma omp single 2023-01-11T21:41:23.8737787Z { 2023-01-11T21:41:23.8737901Z { 2023-01-11T21:41:23.8738031Z { 2023-01-11T21:41:23.8738235Z auto tmp0 = static_cast(66.0); 2023-01-11T21:41:23.8738397Z out_ptr1[0] = tmp0; 2023-01-11T21:41:23.8738527Z } 2023-01-11T21:41:23.8738657Z } 2023-01-11T21:41:23.8738785Z } 2023-01-11T21:41:23.8738895Z } 2023-01-11T21:41:23.8739021Z } 2023-01-11T21:41:23.8739196Z ''') 2023-01-11T21:41:23.8739206Z 2023-01-11T21:41:23.8739212Z 2023-01-11T21:41:23.8739471Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.8739851Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8740088Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8740280Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8740391Z { 2023-01-11T21:41:23.8740583Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8740711Z { 2023-01-11T21:41:23.8740866Z #pragma omp for 2023-01-11T21:41:23.8741027Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8741156Z { 2023-01-11T21:41:23.8741416Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8741656Z auto tmp1 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.8741824Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8742001Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8742132Z } 2023-01-11T21:41:23.8742457Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8742627Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.8742754Z { 2023-01-11T21:41:23.8742905Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8743214Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.8743379Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8743536Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8743663Z } 2023-01-11T21:41:23.8743788Z } 2023-01-11T21:41:23.8743913Z } 2023-01-11T21:41:23.8744075Z ''') 2023-01-11T21:41:23.8744084Z 2023-01-11T21:41:23.8744090Z 2023-01-11T21:41:23.8744271Z async_compile.wait(globals()) 2023-01-11T21:41:23.8744419Z del async_compile 2023-01-11T21:41:23.8744427Z 2023-01-11T21:41:23.8744571Z def call(args): 2023-01-11T21:41:23.8744724Z primals_1, = args 2023-01-11T21:41:23.8744868Z args.clear() 2023-01-11T21:41:23.8745245Z buf0 = empty_strided((1, 64), (64, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8745603Z buf1 = empty_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8745910Z kernel_cpp_0(c_void_p(primals_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8746152Z del primals_1 2023-01-11T21:41:23.8746399Z buf2 = torch.ops.aten.expand_copy.default(buf1, [64]) 2023-01-11T21:41:23.8746532Z del buf1 2023-01-11T21:41:23.8746671Z buf3 = buf2 2023-01-11T21:41:23.8746857Z 
assert_size_stride(buf3, (64, ), (1, )) 2023-01-11T21:41:23.8746992Z del buf2 2023-01-11T21:41:23.8747351Z buf4 = empty_strided((1, 64), (64, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8747603Z kernel_cpp_1(c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:23.8747808Z return (as_strided(buf3, (1, 64), (64, 1)), buf0, buf4, ) 2023-01-11T21:41:23.8747817Z 2023-01-11T21:41:23.8747825Z 2023-01-11T21:41:23.8747974Z if __name__ == "__main__": 2023-01-11T21:41:23.8748193Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8748433Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8748816Z primals_1 = rand_strided((1, 64), (64, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8749047Z print_performance(lambda: call([primals_1])) 2023-01-11T21:41:23.8749055Z 2023-01-11T21:41:23.8749175Z ok (1.765s) 2023-01-11T21:41:23.8749821Z test_input_mutation3_cpu (__main__.CpuTests) ... [2023-01-11 21:35:15,252] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 251 2023-01-11T21:41:23.8750333Z [2023-01-11 21:35:16,918] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 251 2023-01-11T21:41:23.8750342Z 2023-01-11T21:41:23.8750529Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8750670Z import torch 2023-01-11T21:41:23.8750813Z import random 2023-01-11T21:41:23.8751038Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8751273Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8751281Z 2023-01-11T21:41:23.8751424Z aten = torch.ops.aten 2023-01-11T21:41:23.8751684Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8751874Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8751883Z 2023-01-11T21:41:23.8751890Z 2023-01-11T21:41:23.8752152Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8752533Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8752764Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8752946Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8753063Z { 2023-01-11T21:41:23.8753228Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8753345Z { 2023-01-11T21:41:23.8753448Z #pragma omp for 2023-01-11T21:41:23.8753565Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8753666Z { 2023-01-11T21:41:23.8753938Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8754142Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.8754253Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8754465Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8754555Z } 2023-01-11T21:41:23.8754711Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8754825Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.8754920Z { 2023-01-11T21:41:23.8755045Z auto tmp0 = out_ptr0[i0]; 2023-01-11T21:41:23.8755188Z auto tmp1 = static_cast(1); 2023-01-11T21:41:23.8755316Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8755440Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8755535Z } 2023-01-11T21:41:23.8755635Z #pragma omp for 2023-01-11T21:41:23.8755755Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8755848Z { 2023-01-11T21:41:23.8756035Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8756239Z auto tmp1 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.8756365Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.8756549Z tmp2.store(out_ptr0 
+ 8*i0); 2023-01-11T21:41:23.8756630Z } 2023-01-11T21:41:23.8756754Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8756863Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.8756928Z { 2023-01-11T21:41:23.8757048Z auto tmp0 = out_ptr0[i0]; 2023-01-11T21:41:23.8757189Z auto tmp1 = static_cast(2); 2023-01-11T21:41:23.8757306Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.8757421Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8757513Z } 2023-01-11T21:41:23.8757621Z #pragma omp for 2023-01-11T21:41:23.8757719Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8757807Z { 2023-01-11T21:41:23.8758004Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8758205Z auto tmp1 = decltype(tmp0)(1)/(decltype(tmp0)(1) + tmp0.neg().exp()); 2023-01-11T21:41:23.8758332Z tmp1.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8758414Z } 2023-01-11T21:41:23.8758542Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8758633Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.8758721Z { 2023-01-11T21:41:23.8758845Z auto tmp0 = out_ptr0[i0]; 2023-01-11T21:41:23.8759076Z auto tmp1 = std::exp(-tmp0); 2023-01-11T21:41:23.8759196Z auto tmp2 = 1 / (1 + tmp1); 2023-01-11T21:41:23.8759311Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8759392Z } 2023-01-11T21:41:23.8759479Z #pragma omp for 2023-01-11T21:41:23.8759594Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8759676Z { 2023-01-11T21:41:23.8759856Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8760036Z auto tmp1 = at::vec::Vectorized(static_cast(3)); 2023-01-11T21:41:23.8760147Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8760271Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8760337Z } 2023-01-11T21:41:23.8760462Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8760569Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.8760650Z { 2023-01-11T21:41:23.8760763Z auto tmp0 = out_ptr0[i0]; 2023-01-11T21:41:23.8760895Z auto tmp1 = static_cast(3); 2023-01-11T21:41:23.8761008Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.8761096Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8761181Z } 2023-01-11T21:41:23.8761284Z #pragma omp for 2023-01-11T21:41:23.8761393Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8761477Z { 2023-01-11T21:41:23.8761660Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8761836Z auto tmp1 = at::vec::Vectorized(static_cast(4)); 2023-01-11T21:41:23.8761934Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.8762145Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8762229Z } 2023-01-11T21:41:23.8762360Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8762472Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.8762558Z { 2023-01-11T21:41:23.8762672Z auto tmp0 = out_ptr0[i0]; 2023-01-11T21:41:23.8762784Z auto tmp1 = static_cast(4); 2023-01-11T21:41:23.8762890Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.8762988Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8763066Z } 2023-01-11T21:41:23.8763163Z #pragma omp for 2023-01-11T21:41:23.8763266Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8763342Z { 2023-01-11T21:41:23.8763510Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8763682Z auto tmp1 = at::vec::clamp_min(tmp0, decltype(tmp0)(0)); 2023-01-11T21:41:23.8763800Z tmp1.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8763881Z } 2023-01-11T21:41:23.8764065Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8764180Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.8764258Z { 2023-01-11T21:41:23.8764365Z auto tmp0 = out_ptr0[i0]; 2023-01-11T21:41:23.8764483Z 
auto tmp1 = tmp0 * (tmp0>0); 2023-01-11T21:41:23.8764598Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.8764690Z } 2023-01-11T21:41:23.8764774Z } 2023-01-11T21:41:23.8764857Z } 2023-01-11T21:41:23.8764969Z ''') 2023-01-11T21:41:23.8764993Z 2023-01-11T21:41:23.8764998Z 2023-01-11T21:41:23.8765110Z async_compile.wait(globals()) 2023-01-11T21:41:23.8765218Z del async_compile 2023-01-11T21:41:23.8765226Z 2023-01-11T21:41:23.8765321Z def call(args): 2023-01-11T21:41:23.8765413Z arg0_1, = args 2023-01-11T21:41:23.8765508Z args.clear() 2023-01-11T21:41:23.8765696Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg0_1.data_ptr())) 2023-01-11T21:41:23.8765832Z return (as_strided(arg0_1, (64, ), (1, )), ) 2023-01-11T21:41:23.8765849Z 2023-01-11T21:41:23.8765854Z 2023-01-11T21:41:23.8765959Z if __name__ == "__main__": 2023-01-11T21:41:23.8766106Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8766274Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8766575Z arg0_1 = rand_strided((1, 64), (64, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8766727Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8766734Z 2023-01-11T21:41:23.8766828Z ok (1.717s) 2023-01-11T21:41:23.8767320Z test_input_mutation4_cpu (__main__.CpuTests) ... [2023-01-11 21:35:16,934] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 252 2023-01-11T21:41:23.8767688Z [2023-01-11 21:35:18,507] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 252 2023-01-11T21:41:23.8767696Z 2023-01-11T21:41:23.8767820Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8767894Z import torch 2023-01-11T21:41:23.8767995Z import random 2023-01-11T21:41:23.8768152Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8768312Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8768318Z 2023-01-11T21:41:23.8768427Z aten = torch.ops.aten 2023-01-11T21:41:23.8768605Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8768724Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8768731Z 2023-01-11T21:41:23.8768736Z 2023-01-11T21:41:23.8768922Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8769184Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8769344Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8769476Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8769560Z { 2023-01-11T21:41:23.8769690Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8769770Z { 2023-01-11T21:41:23.8769962Z #pragma omp for 2023-01-11T21:41:23.8770055Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8770134Z { 2023-01-11T21:41:23.8770315Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8770493Z auto tmp1 = at::vec::clamp_min(tmp0, decltype(tmp0)(0)); 2023-01-11T21:41:23.8770626Z tmp1.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8770721Z } 2023-01-11T21:41:23.8770855Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8770954Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.8771038Z { 2023-01-11T21:41:23.8771154Z auto tmp0 = out_ptr0[i0]; 2023-01-11T21:41:23.8771271Z auto tmp1 = tmp0 * (tmp0>0); 2023-01-11T21:41:23.8771381Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.8771463Z } 2023-01-11T21:41:23.8771548Z } 2023-01-11T21:41:23.8771612Z } 2023-01-11T21:41:23.8771733Z ''') 2023-01-11T21:41:23.8771741Z 2023-01-11T21:41:23.8771747Z 
2023-01-11T21:41:23.8771960Z async_compile.wait(globals()) 2023-01-11T21:41:23.8772068Z del async_compile 2023-01-11T21:41:23.8772074Z 2023-01-11T21:41:23.8772169Z def call(args): 2023-01-11T21:41:23.8772261Z arg0_1, = args 2023-01-11T21:41:23.8772353Z args.clear() 2023-01-11T21:41:23.8772523Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg0_1.data_ptr())) 2023-01-11T21:41:23.8772619Z return (arg0_1, ) 2023-01-11T21:41:23.8772625Z 2023-01-11T21:41:23.8772631Z 2023-01-11T21:41:23.8772729Z if __name__ == "__main__": 2023-01-11T21:41:23.8772879Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8773040Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8773331Z arg0_1 = rand_strided((1, 64), (64, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8773473Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8773480Z 2023-01-11T21:41:23.8773567Z ok (1.589s) 2023-01-11T21:41:23.8774253Z test_invalid_operand_issue1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8774409Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8774784Z [2023-01-11 21:35:19,343] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 253 2023-01-11T21:41:23.8775177Z [2023-01-11 21:35:21,531] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 253 2023-01-11T21:41:23.8775187Z 2023-01-11T21:41:23.8775319Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8775413Z import torch 2023-01-11T21:41:23.8775514Z import random 2023-01-11T21:41:23.8775677Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8775855Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8775862Z 2023-01-11T21:41:23.8775952Z aten = torch.ops.aten 2023-01-11T21:41:23.8776135Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8776258Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8776265Z 2023-01-11T21:41:23.8776270Z 2023-01-11T21:41:23.8776474Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8776753Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8776910Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.8777052Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:23.8777188Z const long* __restrict__ in_ptr2, 2023-01-11T21:41:23.8777312Z const float* __restrict__ in_ptr3, 2023-01-11T21:41:23.8777443Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8777527Z { 2023-01-11T21:41:23.8777755Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8777841Z { 2023-01-11T21:41:23.8777948Z #pragma omp for 2023-01-11T21:41:23.8778055Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.8778123Z { 2023-01-11T21:41:23.8778227Z #pragma GCC ivdep 2023-01-11T21:41:23.8778340Z for(long i1=0; i1<128; i1+=1) 2023-01-11T21:41:23.8778425Z { 2023-01-11T21:41:23.8778536Z #pragma GCC ivdep 2023-01-11T21:41:23.8778663Z for(long i2=0; i2<768; i2+=1) 2023-01-11T21:41:23.8778750Z { 2023-01-11T21:41:23.8778825Z { 2023-01-11T21:41:23.8778910Z { 2023-01-11T21:41:23.8779039Z auto tmp3 = 
in_ptr0[i0]; 2023-01-11T21:41:23.8779176Z auto tmp8 = in_ptr2[i1 + (128*i0)]; 2023-01-11T21:41:23.8779316Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:23.8779536Z auto tmp1 = static_cast(0); 2023-01-11T21:41:23.8779679Z auto tmp2 = tmp0 == tmp1; 2023-01-11T21:41:23.8779804Z auto tmp4 = static_cast(1); 2023-01-11T21:41:23.8779929Z auto tmp5 = tmp0 >= tmp4; 2023-01-11T21:41:23.8780041Z auto tmp6 = 0; 2023-01-11T21:41:23.8780143Z if(tmp5) 2023-01-11T21:41:23.8780238Z { 2023-01-11T21:41:23.8780530Z auto tmp7 = in_ptr1[(-1) + i1 + (127*i0)]; 2023-01-11T21:41:23.8780651Z tmp6 = tmp7; 2023-01-11T21:41:23.8780732Z } 2023-01-11T21:41:23.8780883Z auto tmp9 = tmp5 ? tmp6 : tmp8; 2023-01-11T21:41:23.8781032Z auto tmp10 = tmp2 ? tmp3 : tmp9; 2023-01-11T21:41:23.8781187Z auto tmp11 = in_ptr3[i2 + (768*tmp10)]; 2023-01-11T21:41:23.8781339Z out_ptr0[i2 + (768*i1) + (98304*i0)] = tmp11; 2023-01-11T21:41:23.8781435Z } 2023-01-11T21:41:23.8781524Z } 2023-01-11T21:41:23.8781625Z } 2023-01-11T21:41:23.8781719Z } 2023-01-11T21:41:23.8781822Z } 2023-01-11T21:41:23.8781905Z } 2023-01-11T21:41:23.8781987Z } 2023-01-11T21:41:23.8782102Z ''') 2023-01-11T21:41:23.8782112Z 2023-01-11T21:41:23.8782117Z 2023-01-11T21:41:23.8782249Z async_compile.wait(globals()) 2023-01-11T21:41:23.8782490Z del async_compile 2023-01-11T21:41:23.8782497Z 2023-01-11T21:41:23.8782585Z def call(args): 2023-01-11T21:41:23.8782734Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1 = args 2023-01-11T21:41:23.8782845Z args.clear() 2023-01-11T21:41:23.8783166Z buf0 = empty_strided((8, 128, 768), (98304, 768, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8783458Z kernel_cpp_0(c_void_p(arg3_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(arg4_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.8783553Z del arg0_1 2023-01-11T21:41:23.8783640Z del arg2_1 2023-01-11T21:41:23.8783711Z del arg3_1 2023-01-11T21:41:23.8783803Z del arg4_1 2023-01-11T21:41:23.8783898Z return (buf0, ) 2023-01-11T21:41:23.8783905Z 2023-01-11T21:41:23.8783910Z 2023-01-11T21:41:23.8784010Z if __name__ == "__main__": 2023-01-11T21:41:23.8784165Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8784337Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8784630Z arg0_1 = rand_strided((50005, 768), (768, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8784901Z arg1_1 = rand_strided((8, 128), (128, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8785161Z arg2_1 = rand_strided((8, 127), (127, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8785424Z arg3_1 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8785828Z arg4_1 = rand_strided((8, 128), (128, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.8786024Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1])) 2023-01-11T21:41:23.8786034Z 2023-01-11T21:41:23.8786142Z ok (3.820s) 2023-01-11T21:41:23.8786841Z test_isinf2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8787006Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8787374Z [2023-01-11 21:35:22,359] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 254 2023-01-11T21:41:23.8787805Z [2023-01-11 21:35:23,975] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 254 2023-01-11T21:41:23.8787820Z 2023-01-11T21:41:23.8787946Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8788022Z import torch 2023-01-11T21:41:23.8788113Z import random 2023-01-11T21:41:23.8788264Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8788417Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8788423Z 2023-01-11T21:41:23.8788525Z aten = torch.ops.aten 2023-01-11T21:41:23.8788700Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8788818Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8788826Z 2023-01-11T21:41:23.8788832Z 2023-01-11T21:41:23.8789000Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8789268Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8789429Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8789558Z bool* __restrict__ out_ptr0) 2023-01-11T21:41:23.8789643Z { 2023-01-11T21:41:23.8789769Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8789849Z { 2023-01-11T21:41:23.8789936Z #pragma omp for 2023-01-11T21:41:23.8790042Z for(long i0=0; i0<5; i0+=1) 2023-01-11T21:41:23.8790124Z { 2023-01-11T21:41:23.8790207Z { 2023-01-11T21:41:23.8790292Z { 2023-01-11T21:41:23.8790411Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8790550Z auto tmp1 = static_cast(i0); 2023-01-11T21:41:23.8790670Z auto tmp2 = static_cast(2); 2023-01-11T21:41:23.8790786Z auto tmp3 = tmp1 < tmp2; 2023-01-11T21:41:23.8790917Z auto tmp4 = static_cast(1); 2023-01-11T21:41:23.8791037Z auto tmp5 = tmp1 < tmp4; 2023-01-11T21:41:23.8791174Z auto tmp6 = static_cast(1.0); 2023-01-11T21:41:23.8791353Z auto tmp7 = std::numeric_limits::infinity(); 2023-01-11T21:41:23.8791483Z auto tmp8 = tmp5 ? tmp6 : tmp7; 2023-01-11T21:41:23.8791616Z auto tmp9 = static_cast(3); 2023-01-11T21:41:23.8791737Z auto tmp10 = tmp1 < tmp9; 2023-01-11T21:41:23.8791881Z auto tmp11 = static_cast(2.0); 2023-01-11T21:41:23.8792024Z auto tmp12 = static_cast(4); 2023-01-11T21:41:23.8792152Z auto tmp13 = tmp1 < tmp12; 2023-01-11T21:41:23.8792511Z auto tmp14 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.8792706Z auto tmp15 = std::numeric_limits::quiet_NaN(); 2023-01-11T21:41:23.8792855Z auto tmp16 = tmp13 ? tmp14 : tmp15; 2023-01-11T21:41:23.8792974Z auto tmp17 = tmp10 ? tmp11 : tmp16; 2023-01-11T21:41:23.8793114Z auto tmp18 = tmp3 ? 
tmp8 : tmp17; 2023-01-11T21:41:23.8793306Z auto tmp19 = tmp0 == tmp18; 2023-01-11T21:41:23.8793418Z out_ptr0[i0] = tmp19; 2023-01-11T21:41:23.8793502Z } 2023-01-11T21:41:23.8793588Z } 2023-01-11T21:41:23.8793678Z } 2023-01-11T21:41:23.8793814Z } 2023-01-11T21:41:23.8793894Z } 2023-01-11T21:41:23.8794006Z ''') 2023-01-11T21:41:23.8794014Z 2023-01-11T21:41:23.8794019Z 2023-01-11T21:41:23.8794138Z async_compile.wait(globals()) 2023-01-11T21:41:23.8794233Z del async_compile 2023-01-11T21:41:23.8794239Z 2023-01-11T21:41:23.8794330Z def call(args): 2023-01-11T21:41:23.8794418Z arg0_1, = args 2023-01-11T21:41:23.8794504Z args.clear() 2023-01-11T21:41:23.8794792Z buf0 = empty_strided((5, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.8794978Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.8795071Z del arg0_1 2023-01-11T21:41:23.8795170Z return (buf0, ) 2023-01-11T21:41:23.8795177Z 2023-01-11T21:41:23.8795243Z 2023-01-11T21:41:23.8795351Z if __name__ == "__main__": 2023-01-11T21:41:23.8795511Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8795682Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8795950Z arg0_1 = rand_strided((5, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8796097Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8796103Z 2023-01-11T21:41:23.8796200Z ok (1.649s) 2023-01-11T21:41:23.8796862Z test_isinf_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8797039Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8797426Z [2023-01-11 21:35:23,993] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 255 2023-01-11T21:41:23.8797823Z [2023-01-11 21:35:25,555] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 255 2023-01-11T21:41:23.8798456Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8798630Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8799031Z [2023-01-11 21:35:25,571] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 256 2023-01-11T21:41:23.8799446Z [2023-01-11 21:35:27,131] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 256 2023-01-11T21:41:23.8799459Z 2023-01-11T21:41:23.8799583Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8799686Z import torch 2023-01-11T21:41:23.8799787Z import random 2023-01-11T21:41:23.8799954Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8800135Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8800142Z 2023-01-11T21:41:23.8800259Z aten = torch.ops.aten 2023-01-11T21:41:23.8800464Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8800591Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8800613Z 2023-01-11T21:41:23.8800619Z 2023-01-11T21:41:23.8800812Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8801115Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8801290Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8801432Z bool* __restrict__ out_ptr0, 2023-01-11T21:41:23.8801647Z bool* __restrict__ out_ptr1) 2023-01-11T21:41:23.8801734Z { 2023-01-11T21:41:23.8801880Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8801958Z { 2023-01-11T21:41:23.8802069Z #pragma omp for 2023-01-11T21:41:23.8802186Z for(long i0=0; i0<5; i0+=1) 2023-01-11T21:41:23.8802274Z { 2023-01-11T21:41:23.8802363Z { 2023-01-11T21:41:23.8802452Z { 2023-01-11T21:41:23.8802583Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8802720Z auto tmp1 = std::isinf(tmp0); 2023-01-11T21:41:23.8802861Z auto tmp2 = std::isnan(tmp0); 2023-01-11T21:41:23.8802977Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.8803093Z out_ptr1[i0] = tmp2; 2023-01-11T21:41:23.8803182Z } 2023-01-11T21:41:23.8803270Z } 2023-01-11T21:41:23.8803361Z } 2023-01-11T21:41:23.8803433Z } 2023-01-11T21:41:23.8803525Z } 2023-01-11T21:41:23.8803700Z ''') 2023-01-11T21:41:23.8803710Z 2023-01-11T21:41:23.8803716Z 2023-01-11T21:41:23.8803851Z async_compile.wait(globals()) 2023-01-11T21:41:23.8803955Z del async_compile 2023-01-11T21:41:23.8803962Z 2023-01-11T21:41:23.8804065Z def call(args): 2023-01-11T21:41:23.8804161Z arg0_1, = args 2023-01-11T21:41:23.8804247Z args.clear() 2023-01-11T21:41:23.8804528Z buf0 = empty_strided((5, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.8804807Z buf1 = empty_strided((5, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.8805042Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8805142Z del arg0_1 2023-01-11T21:41:23.8805249Z return (buf0, buf1, ) 2023-01-11T21:41:23.8805257Z 2023-01-11T21:41:23.8805263Z 2023-01-11T21:41:23.8805369Z if __name__ == "__main__": 2023-01-11T21:41:23.8805530Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8805699Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8805992Z arg0_1 = rand_strided((5, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8806145Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8806152Z 2023-01-11T21:41:23.8806158Z 2023-01-11T21:41:23.8806291Z from 
ctypes import c_void_p, c_long 2023-01-11T21:41:23.8806387Z import torch 2023-01-11T21:41:23.8806486Z import random 2023-01-11T21:41:23.8806647Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8806818Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8806825Z 2023-01-11T21:41:23.8806920Z aten = torch.ops.aten 2023-01-11T21:41:23.8807109Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8807237Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8807244Z 2023-01-11T21:41:23.8807250Z 2023-01-11T21:41:23.8807442Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8807741Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8807921Z extern "C" void kernel(const double* __restrict__ in_ptr0, 2023-01-11T21:41:23.8808058Z bool* __restrict__ out_ptr0, 2023-01-11T21:41:23.8808195Z bool* __restrict__ out_ptr1) 2023-01-11T21:41:23.8808264Z { 2023-01-11T21:41:23.8808402Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8808487Z { 2023-01-11T21:41:23.8808596Z #pragma omp for 2023-01-11T21:41:23.8808714Z for(long i0=0; i0<5; i0+=1) 2023-01-11T21:41:23.8808799Z { 2023-01-11T21:41:23.8808869Z { 2023-01-11T21:41:23.8808960Z { 2023-01-11T21:41:23.8809089Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8809234Z auto tmp1 = std::isinf(tmp0); 2023-01-11T21:41:23.8809373Z auto tmp2 = std::isnan(tmp0); 2023-01-11T21:41:23.8809493Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.8809684Z out_ptr1[i0] = tmp2; 2023-01-11T21:41:23.8809759Z } 2023-01-11T21:41:23.8809850Z } 2023-01-11T21:41:23.8809933Z } 2023-01-11T21:41:23.8810017Z } 2023-01-11T21:41:23.8810098Z } 2023-01-11T21:41:23.8810215Z ''') 2023-01-11T21:41:23.8810224Z 2023-01-11T21:41:23.8810229Z 2023-01-11T21:41:23.8810357Z async_compile.wait(globals()) 2023-01-11T21:41:23.8810444Z del async_compile 2023-01-11T21:41:23.8810465Z 2023-01-11T21:41:23.8810546Z def call(args): 2023-01-11T21:41:23.8810642Z arg0_1, = args 2023-01-11T21:41:23.8810745Z args.clear() 2023-01-11T21:41:23.8811025Z buf0 = empty_strided((5, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.8811297Z buf1 = empty_strided((5, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.8811533Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8811631Z del arg0_1 2023-01-11T21:41:23.8811724Z return (buf0, buf1, ) 2023-01-11T21:41:23.8811789Z 2023-01-11T21:41:23.8811797Z 2023-01-11T21:41:23.8811905Z if __name__ == "__main__": 2023-01-11T21:41:23.8812068Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8812242Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8812529Z arg0_1 = rand_strided((5, ), (1, ), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.8812684Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8812692Z 2023-01-11T21:41:23.8812783Z ok (3.173s) 2023-01-11T21:41:23.8813286Z test_kernel_names_cpu (__main__.CpuTests) ... 
[2023-01-11 21:35:27,170] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 257 2023-01-11T21:41:23.8813684Z [2023-01-11 21:35:28,791] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 257 2023-01-11T21:41:23.8813708Z 2023-01-11T21:41:23.8813827Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8813925Z import torch 2023-01-11T21:41:23.8814034Z import random 2023-01-11T21:41:23.8814201Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8814378Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8814384Z 2023-01-11T21:41:23.8814496Z aten = torch.ops.aten 2023-01-11T21:41:23.8814695Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8814810Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8814817Z 2023-01-11T21:41:23.8814823Z 2023-01-11T21:41:23.8815012Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8815316Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8815489Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8815626Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.8815712Z { 2023-01-11T21:41:23.8815850Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8815932Z { 2023-01-11T21:41:23.8816028Z #pragma omp for 2023-01-11T21:41:23.8816145Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.8816239Z { 2023-01-11T21:41:23.8816433Z auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8816627Z auto tmp1 = at::vec::Vectorized<float>(static_cast<float>(2)); 2023-01-11T21:41:23.8816746Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.8816876Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8816947Z } 2023-01-11T21:41:23.8817080Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8817195Z for(long i0=8; i0<8; i0+=1) 2023-01-11T21:41:23.8817280Z { 2023-01-11T21:41:23.8817396Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8817535Z auto tmp1 = static_cast<float>(2); 2023-01-11T21:41:23.8817651Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.8817745Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8817831Z } 2023-01-11T21:41:23.8817987Z } 2023-01-11T21:41:23.8818075Z } 2023-01-11T21:41:23.8818192Z ''') 2023-01-11T21:41:23.8818201Z 2023-01-11T21:41:23.8818207Z 2023-01-11T21:41:23.8818339Z async_compile.wait(globals()) 2023-01-11T21:41:23.8818441Z del async_compile 2023-01-11T21:41:23.8818448Z 2023-01-11T21:41:23.8818531Z def call(args): 2023-01-11T21:41:23.8818626Z arg0_1, = args 2023-01-11T21:41:23.8818727Z args.clear() 2023-01-11T21:41:23.8819011Z buf0 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8819204Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.8819300Z del arg0_1 2023-01-11T21:41:23.8819397Z return (buf0, ) 2023-01-11T21:41:23.8819403Z 2023-01-11T21:41:23.8819408Z 2023-01-11T21:41:23.8819499Z if __name__ == "__main__": 2023-01-11T21:41:23.8819663Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8819842Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8820192Z arg0_1 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8820356Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8820364Z 2023-01-11T21:41:23.8820456Z ok (1.642s) 2023-01-11T21:41:23.8821154Z test_kwargs_cpu (__main__.CpuTests) ... 
/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8821335Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8821731Z [2023-01-11 21:35:28,832] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 258 2023-01-11T21:41:23.8822047Z [2023-01-11 21:35:28,834] torch._inductor.graph: [WARNING] Creating implicit fallback for: 2023-01-11T21:41:23.8822207Z target: aten._histogramdd_bin_edges.default 2023-01-11T21:41:23.8822476Z args[0]: TensorBox(StorageBox( 2023-01-11T21:41:23.8822844Z InputBuffer(name='arg0_1', layout=FixedLayout('cpu', torch.float32, size=[4, 2], stride=[2, 1])) 2023-01-11T21:41:23.8822933Z )) 2023-01-11T21:41:23.8823028Z args[1]: [3, 3] 2023-01-11T21:41:23.8823243Z kwargs: {'weight': TensorBox(StorageBox( 2023-01-11T21:41:23.8823591Z InputBuffer(name='arg1_1', layout=FixedLayout('cpu', torch.float32, size=[4], stride=[1])) 2023-01-11T21:41:23.8823662Z ))} 2023-01-11T21:41:23.8824074Z [2023-01-11 21:35:28,852] torch._inductor.ir: [WARNING] Using FallbackKernel: torch.ops.aten._histogramdd_bin_edges.default 2023-01-11T21:41:23.8824399Z [2023-01-11 21:35:28,855] torch._inductor.graph: [WARNING] Creating implicit fallback for: 2023-01-11T21:41:23.8824555Z target: aten._histogramdd_from_bin_cts.default 2023-01-11T21:41:23.8824684Z args[0]: TensorBox(StorageBox( 2023-01-11T21:41:23.8825046Z InputBuffer(name='arg0_1', layout=FixedLayout('cpu', torch.float32, size=[4, 2], stride=[2, 1])) 2023-01-11T21:41:23.8825130Z )) 2023-01-11T21:41:23.8825208Z args[1]: [3, 3] 2023-01-11T21:41:23.8825415Z kwargs: {'weight': TensorBox(StorageBox( 2023-01-11T21:41:23.8825750Z InputBuffer(name='arg1_1', layout=FixedLayout('cpu', torch.float32, size=[4], stride=[1])) 2023-01-11T21:41:23.8825834Z ))} 2023-01-11T21:41:23.8826255Z [2023-01-11 21:35:28,872] torch._inductor.ir: [WARNING] Using FallbackKernel: torch.ops.aten._histogramdd_from_bin_cts.default 2023-01-11T21:41:23.8826645Z [2023-01-11 21:35:28,876] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 258 2023-01-11T21:41:23.8826653Z 2023-01-11T21:41:23.8826786Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8826889Z import torch 2023-01-11T21:41:23.8826970Z import random 2023-01-11T21:41:23.8827133Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8827305Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8827417Z 2023-01-11T21:41:23.8827524Z aten = torch.ops.aten 2023-01-11T21:41:23.8827713Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8827842Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8827850Z 2023-01-11T21:41:23.8827857Z 2023-01-11T21:41:23.8827980Z async_compile.wait(globals()) 2023-01-11T21:41:23.8828083Z del async_compile 2023-01-11T21:41:23.8828090Z 2023-01-11T21:41:23.8828175Z def call(args): 2023-01-11T21:41:23.8828283Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.8828384Z args.clear() 2023-01-11T21:41:23.8828600Z buf0 = torch.ops.aten._histogramdd_bin_edges.default(arg0_1, [3, 3], weight=arg1_1) 2023-01-11T21:41:23.8828697Z buf1 = buf0[0] 2023-01-11T21:41:23.8828826Z assert_size_stride(buf1, (4, ), (1, )) 
2023-01-11T21:41:23.8828925Z buf2 = buf0[1] 2023-01-11T21:41:23.8829037Z assert_size_stride(buf2, (4, ), (1, )) 2023-01-11T21:41:23.8829125Z del buf0 2023-01-11T21:41:23.8829429Z buf3 = torch.ops.aten._histogramdd_from_bin_cts.default(arg0_1, [3, 3], weight=arg1_1) 2023-01-11T21:41:23.8829531Z del arg0_1 2023-01-11T21:41:23.8829622Z del arg1_1 2023-01-11T21:41:23.8829718Z buf4 = buf3 2023-01-11T21:41:23.8829854Z assert_size_stride(buf4, (3, 3), (3, 1)) 2023-01-11T21:41:23.8829927Z del buf3 2023-01-11T21:41:23.8830039Z return (buf4, buf1, buf2, ) 2023-01-11T21:41:23.8830047Z 2023-01-11T21:41:23.8830053Z 2023-01-11T21:41:23.8830158Z if __name__ == "__main__": 2023-01-11T21:41:23.8830324Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8830500Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8830796Z arg0_1 = rand_strided((4, 2), (2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8831077Z arg1_1 = rand_strided((4, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8831228Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.8831250Z 2023-01-11T21:41:23.8831331Z ok (0.086s) 2023-01-11T21:41:23.8832093Z test_l1_loss_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8832282Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8832739Z [2023-01-11 21:35:28,903] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 259 2023-01-11T21:41:23.8833166Z [2023-01-11 21:35:30,568] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 259 2023-01-11T21:41:23.8833175Z 2023-01-11T21:41:23.8833329Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8833445Z import torch 2023-01-11T21:41:23.8833553Z import random 2023-01-11T21:41:23.8833818Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8833997Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8834005Z 2023-01-11T21:41:23.8834128Z aten = torch.ops.aten 2023-01-11T21:41:23.8834340Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8834478Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8834486Z 2023-01-11T21:41:23.8834492Z 2023-01-11T21:41:23.8834723Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8835072Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8835256Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.8835418Z float* __restrict__ in_out_ptr1, 2023-01-11T21:41:23.8835568Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8835741Z const float* __restrict__ in_ptr1) 2023-01-11T21:41:23.8835843Z { 2023-01-11T21:41:23.8835984Z auto out_ptr0 = in_out_ptr0; 2023-01-11T21:41:23.8836212Z auto out_ptr1 = in_out_ptr1; 2023-01-11T21:41:23.8836307Z { 2023-01-11T21:41:23.8836633Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:23.8836742Z float tmp4 = 0; 2023-01-11T21:41:23.8836931Z auto tmp4_vec = at::vec::Vectorized(tmp4); 2023-01-11T21:41:23.8837033Z float tmp6 = 0; 
2023-01-11T21:41:23.8837194Z auto tmp6_vec = at::vec::Vectorized(tmp6); 2023-01-11T21:41:23.8837364Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8837467Z { 2023-01-11T21:41:23.8837696Z #pragma omp for reduction(+:tmp4_vec) reduction(+:tmp6_vec) 2023-01-11T21:41:23.8837818Z for(long i0=0; i0<192; i0+=1) 2023-01-11T21:41:23.8837924Z { 2023-01-11T21:41:23.8838155Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8838445Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.8838687Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.8838826Z auto tmp3 = tmp2.abs(); 2023-01-11T21:41:23.8838978Z auto tmp5 = tmp2 * tmp2; 2023-01-11T21:41:23.8839109Z tmp4_vec += tmp3; 2023-01-11T21:41:23.8839227Z tmp6_vec += tmp5; 2023-01-11T21:41:23.8839334Z } 2023-01-11T21:41:23.8839657Z tmp4 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp4_vec); 2023-01-11T21:41:23.8839947Z tmp6 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp6_vec); 2023-01-11T21:41:23.8840162Z #pragma omp for simd simdlen(4) reduction(+:tmp4) reduction(+:tmp6) 2023-01-11T21:41:23.8840288Z for(long i0=1536; i0<1536; i0+=1) 2023-01-11T21:41:23.8840378Z { 2023-01-11T21:41:23.8840520Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8840633Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.8840834Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.8840969Z auto tmp3 = std::abs(tmp2); 2023-01-11T21:41:23.8841105Z auto tmp5 = tmp2 * tmp2; 2023-01-11T21:41:23.8841209Z tmp4 += tmp3; 2023-01-11T21:41:23.8841311Z tmp6 += tmp5; 2023-01-11T21:41:23.8841399Z } 2023-01-11T21:41:23.8841468Z } 2023-01-11T21:41:23.8841574Z out_ptr0[0] = tmp4; 2023-01-11T21:41:23.8841682Z out_ptr1[0] = tmp6; 2023-01-11T21:41:23.8841765Z } 2023-01-11T21:41:23.8841850Z { 2023-01-11T21:41:23.8841934Z { 2023-01-11T21:41:23.8842036Z auto tmp0 = out_ptr0[0]; 2023-01-11T21:41:23.8842194Z auto tmp1 = static_cast(1536); 2023-01-11T21:41:23.8842311Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.8842431Z in_out_ptr0[0] = tmp2; 2023-01-11T21:41:23.8842523Z } 2023-01-11T21:41:23.8842605Z } 2023-01-11T21:41:23.8842687Z { 2023-01-11T21:41:23.8842760Z { 2023-01-11T21:41:23.8842877Z auto tmp0 = out_ptr1[0]; 2023-01-11T21:41:23.8843024Z auto tmp1 = static_cast(1536); 2023-01-11T21:41:23.8843151Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.8843269Z in_out_ptr1[0] = tmp2; 2023-01-11T21:41:23.8843363Z } 2023-01-11T21:41:23.8843451Z } 2023-01-11T21:41:23.8843519Z } 2023-01-11T21:41:23.8843639Z ''') 2023-01-11T21:41:23.8843647Z 2023-01-11T21:41:23.8843654Z 2023-01-11T21:41:23.8843787Z async_compile.wait(globals()) 2023-01-11T21:41:23.8843897Z del async_compile 2023-01-11T21:41:23.8843905Z 2023-01-11T21:41:23.8844009Z def call(args): 2023-01-11T21:41:23.8844124Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.8844223Z args.clear() 2023-01-11T21:41:23.8844494Z buf0 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8844864Z buf1 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8844998Z buf2 = buf0; del buf0 # reuse 2023-01-11T21:41:23.8845133Z buf3 = buf1; del buf1 # reuse 2023-01-11T21:41:23.8845454Z kernel_cpp_0(c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr())) 2023-01-11T21:41:23.8845552Z del arg0_1 2023-01-11T21:41:23.8845646Z del arg1_1 2023-01-11T21:41:23.8845741Z return (buf2, buf3, ) 2023-01-11T21:41:23.8845764Z 2023-01-11T21:41:23.8845769Z 
2023-01-11T21:41:23.8845865Z if __name__ == "__main__": 2023-01-11T21:41:23.8846034Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8846212Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8846545Z arg0_1 = rand_strided((2, 3, 16, 16), (768, 256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8846946Z arg1_1 = rand_strided((2, 3, 16, 16), (768, 256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8847127Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.8847135Z 2023-01-11T21:41:23.8847237Z ok (1.693s) 2023-01-11T21:41:23.8848038Z test_layer_norm_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8848241Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8848660Z [2023-01-11 21:35:30,641] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 260 2023-01-11T21:41:23.8849080Z [2023-01-11 21:35:32,826] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 260 2023-01-11T21:41:23.8849097Z 2023-01-11T21:41:23.8849233Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8849339Z import torch 2023-01-11T21:41:23.8849445Z import random 2023-01-11T21:41:23.8849627Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8849820Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8849828Z 2023-01-11T21:41:23.8849952Z aten = torch.ops.aten 2023-01-11T21:41:23.8850151Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8850292Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8850300Z 2023-01-11T21:41:23.8850308Z 2023-01-11T21:41:23.8850517Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8850836Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8851019Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.8851187Z float* __restrict__ in_out_ptr1, 2023-01-11T21:41:23.8851356Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8851514Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.8851657Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.8851811Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8851955Z float* __restrict__ out_ptr3) 2023-01-11T21:41:23.8852049Z { 2023-01-11T21:41:23.8852179Z auto out_ptr2 = in_out_ptr0; 2023-01-11T21:41:23.8852309Z auto out_ptr1 = in_out_ptr1; 2023-01-11T21:41:23.8852467Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8852547Z { 2023-01-11T21:41:23.8852660Z #pragma omp for 2023-01-11T21:41:23.8852789Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.8852877Z { 2023-01-11T21:41:23.8852961Z { 2023-01-11T21:41:23.8853254Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:23.8853375Z float tmp1 = 0; 2023-01-11T21:41:23.8853626Z auto tmp1_vec = at::vec::Vectorized(tmp1); 2023-01-11T21:41:23.8853763Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:23.8853863Z { 2023-01-11T21:41:23.8854085Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (32*i0)); 
2023-01-11T21:41:23.8854204Z tmp1_vec += tmp0; 2023-01-11T21:41:23.8854304Z } 2023-01-11T21:41:23.8854613Z tmp1 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp1_vec); 2023-01-11T21:41:23.8854811Z #pragma omp simd simdlen(4) reduction(+:tmp1) 2023-01-11T21:41:23.8854930Z for(long i1=32; i1<32; i1+=1) 2023-01-11T21:41:23.8855019Z { 2023-01-11T21:41:23.8855158Z auto tmp0 = in_ptr0[i1 + (32*i0)]; 2023-01-11T21:41:23.8855263Z tmp1 += tmp0; 2023-01-11T21:41:23.8855448Z } 2023-01-11T21:41:23.8855572Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.8855661Z } 2023-01-11T21:41:23.8855739Z } 2023-01-11T21:41:23.8855840Z #pragma omp for 2023-01-11T21:41:23.8855950Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.8856037Z { 2023-01-11T21:41:23.8856121Z { 2023-01-11T21:41:23.8856397Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:23.8856502Z float tmp6 = 0; 2023-01-11T21:41:23.8856651Z auto tmp6_vec = at::vec::Vectorized(tmp6); 2023-01-11T21:41:23.8856760Z float tmp7 = 0; 2023-01-11T21:41:23.8856935Z auto tmp7_vec = at::vec::Vectorized(tmp7); 2023-01-11T21:41:23.8857060Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:23.8857152Z { 2023-01-11T21:41:23.8857380Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (32*i0)); 2023-01-11T21:41:23.8857571Z auto tmp1 = at::vec::Vectorized(out_ptr0[i0]); 2023-01-11T21:41:23.8857761Z auto tmp2 = at::vec::Vectorized(static_cast(32)); 2023-01-11T21:41:23.8857877Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:23.8858079Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:23.8858199Z auto tmp5 = tmp4.pow(2); 2023-01-11T21:41:23.8858317Z tmp6_vec += tmp5; 2023-01-11T21:41:23.8858427Z tmp7_vec += tmp0; 2023-01-11T21:41:23.8858519Z } 2023-01-11T21:41:23.8858815Z tmp6 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp6_vec); 2023-01-11T21:41:23.8859073Z tmp7 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp7_vec); 2023-01-11T21:41:23.8859273Z #pragma omp simd simdlen(4) reduction(+:tmp6) reduction(+:tmp7) 2023-01-11T21:41:23.8859403Z for(long i1=32; i1<32; i1+=1) 2023-01-11T21:41:23.8859495Z { 2023-01-11T21:41:23.8859641Z auto tmp0 = in_ptr0[i1 + (32*i0)]; 2023-01-11T21:41:23.8859781Z auto tmp1 = out_ptr0[i0]; 2023-01-11T21:41:23.8859942Z auto tmp2 = static_cast(32); 2023-01-11T21:41:23.8860076Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:23.8860288Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:23.8860393Z auto tmp5 = tmp4 * tmp4; 2023-01-11T21:41:23.8860504Z tmp6 += tmp5; 2023-01-11T21:41:23.8860633Z tmp7 += tmp0; 2023-01-11T21:41:23.8860738Z } 2023-01-11T21:41:23.8860870Z out_ptr1[i0] = tmp6; 2023-01-11T21:41:23.8860997Z out_ptr2[i0] = tmp7; 2023-01-11T21:41:23.8861163Z } 2023-01-11T21:41:23.8861231Z } 2023-01-11T21:41:23.8861332Z #pragma omp for 2023-01-11T21:41:23.8861439Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.8861522Z { 2023-01-11T21:41:23.8861702Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr2 + 8*i0); 2023-01-11T21:41:23.8861880Z auto tmp1 = at::vec::Vectorized(static_cast(32)); 2023-01-11T21:41:23.8861993Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.8862106Z tmp2.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.8862189Z } 2023-01-11T21:41:23.8862472Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8862591Z for(long i0=16; i0<16; i0+=1) 2023-01-11T21:41:23.8862671Z { 2023-01-11T21:41:23.8862783Z auto tmp0 = out_ptr2[i0]; 2023-01-11T21:41:23.8862916Z auto tmp1 = 
static_cast(32); 2023-01-11T21:41:23.8863012Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.8863205Z in_out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8863290Z } 2023-01-11T21:41:23.8863392Z #pragma omp for 2023-01-11T21:41:23.8863499Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.8863580Z { 2023-01-11T21:41:23.8863744Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8863923Z auto tmp1 = at::vec::Vectorized(static_cast(32)); 2023-01-11T21:41:23.8864031Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.8864332Z auto tmp3 = at::vec::Vectorized(static_cast(1e-05)); 2023-01-11T21:41:23.8864441Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:23.8864552Z auto tmp5 = tmp4.rsqrt(); 2023-01-11T21:41:23.8864677Z tmp5.store(in_out_ptr1 + 8*i0); 2023-01-11T21:41:23.8864760Z } 2023-01-11T21:41:23.8864872Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8864976Z for(long i0=16; i0<16; i0+=1) 2023-01-11T21:41:23.8865061Z { 2023-01-11T21:41:23.8865171Z auto tmp0 = out_ptr1[i0]; 2023-01-11T21:41:23.8865303Z auto tmp1 = static_cast(32); 2023-01-11T21:41:23.8865412Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.8865615Z auto tmp3 = static_cast(1e-05); 2023-01-11T21:41:23.8865707Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:23.8865830Z auto tmp5 = 1 / std::sqrt(tmp4); 2023-01-11T21:41:23.8865938Z in_out_ptr1[i0] = tmp5; 2023-01-11T21:41:23.8866018Z } 2023-01-11T21:41:23.8866118Z #pragma omp for 2023-01-11T21:41:23.8866220Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.8866304Z { 2023-01-11T21:41:23.8866396Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:23.8866478Z { 2023-01-11T21:41:23.8866665Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (32*i0)); 2023-01-11T21:41:23.8866835Z auto tmp1 = at::vec::Vectorized(in_out_ptr0[i0]); 2023-01-11T21:41:23.8867010Z auto tmp3 = at::vec::Vectorized(in_out_ptr1[i0]); 2023-01-11T21:41:23.8867187Z auto tmp5 = at::vec::Vectorized::loadu(in_ptr1 + 8*i1); 2023-01-11T21:41:23.8867361Z auto tmp7 = at::vec::Vectorized::loadu(in_ptr2 + 8*i1); 2023-01-11T21:41:23.8867542Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.8867642Z auto tmp4 = tmp2 * tmp3; 2023-01-11T21:41:23.8867755Z auto tmp6 = tmp4 * tmp5; 2023-01-11T21:41:23.8867868Z auto tmp8 = tmp6 + tmp7; 2023-01-11T21:41:23.8868037Z auto tmp9 = at::vec::clamp_min(tmp8, decltype(tmp8)(0)); 2023-01-11T21:41:23.8868171Z tmp9.store(out_ptr3 + (8*i1) + (32*i0)); 2023-01-11T21:41:23.8868257Z } 2023-01-11T21:41:23.8868377Z #pragma omp simd simdlen(4) 2023-01-11T21:41:23.8868477Z for(long i1=32; i1<32; i1+=1) 2023-01-11T21:41:23.8868644Z { 2023-01-11T21:41:23.8868775Z auto tmp0 = in_ptr0[i1 + (32*i0)]; 2023-01-11T21:41:23.8868897Z auto tmp1 = in_out_ptr0[i0]; 2023-01-11T21:41:23.8869018Z auto tmp3 = in_out_ptr1[i0]; 2023-01-11T21:41:23.8869132Z auto tmp5 = in_ptr1[i1]; 2023-01-11T21:41:23.8869243Z auto tmp7 = in_ptr2[i1]; 2023-01-11T21:41:23.8869408Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.8869522Z auto tmp4 = tmp2 * tmp3; 2023-01-11T21:41:23.8869633Z auto tmp6 = tmp4 * tmp5; 2023-01-11T21:41:23.8869748Z auto tmp8 = tmp6 + tmp7; 2023-01-11T21:41:23.8869869Z auto tmp9 = tmp8 * (tmp8>0); 2023-01-11T21:41:23.8869992Z out_ptr3[i1 + (32*i0)] = tmp9; 2023-01-11T21:41:23.8870075Z } 2023-01-11T21:41:23.8870144Z } 2023-01-11T21:41:23.8870225Z } 2023-01-11T21:41:23.8870306Z } 2023-01-11T21:41:23.8870415Z ''') 2023-01-11T21:41:23.8870427Z 2023-01-11T21:41:23.8870480Z 2023-01-11T21:41:23.8870604Z async_compile.wait(globals()) 2023-01-11T21:41:23.8870701Z del async_compile 2023-01-11T21:41:23.8870708Z 
2023-01-11T21:41:23.8870804Z def call(args): 2023-01-11T21:41:23.8870928Z primals_1, primals_2, primals_3 = args 2023-01-11T21:41:23.8871026Z args.clear() 2023-01-11T21:41:23.8871318Z buf0 = empty_strided((16, 1), (1, 16), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8871587Z buf1 = empty_strided((16, 1), (1, 16), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8871853Z buf2 = empty_strided((16, 1), (1, 16), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8871999Z buf3 = as_strided(buf2, (16, 1), (1, 1)); del buf2 # reuse 2023-01-11T21:41:23.8872146Z buf4 = as_strided(buf1, (16, 1), (1, 1)); del buf1 # reuse 2023-01-11T21:41:23.8872417Z buf5 = empty_strided((16, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8872780Z kernel_cpp_0(c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(primals_3.data_ptr()), c_void_p(primals_1.data_ptr()), c_void_p(primals_2.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf5.data_ptr())) 2023-01-11T21:41:23.8872956Z return (buf5, primals_1, primals_2, primals_3, buf3, buf4, ) 2023-01-11T21:41:23.8872962Z 2023-01-11T21:41:23.8872968Z 2023-01-11T21:41:23.8873068Z if __name__ == "__main__": 2023-01-11T21:41:23.8873219Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8873384Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8873663Z primals_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8874013Z primals_2 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8874298Z primals_3 = rand_strided((16, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8874479Z print_performance(lambda: call([primals_1, primals_2, primals_3])) 2023-01-11T21:41:23.8874486Z 2023-01-11T21:41:23.8874561Z ok (2.258s) 2023-01-11T21:41:23.8875223Z test_leaky_relu_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8875393Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8875755Z [2023-01-11 21:35:32,865] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 261 2023-01-11T21:41:23.8876121Z [2023-01-11 21:35:34,654] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 261 2023-01-11T21:41:23.8876128Z 2023-01-11T21:41:23.8876253Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8876345Z import torch 2023-01-11T21:41:23.8876437Z import random 2023-01-11T21:41:23.8876661Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8876809Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8876816Z 2023-01-11T21:41:23.8876920Z aten = torch.ops.aten 2023-01-11T21:41:23.8877098Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8877223Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8877230Z 2023-01-11T21:41:23.8877235Z 2023-01-11T21:41:23.8877420Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8877692Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8877849Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8877984Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8878096Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8878174Z { 2023-01-11T21:41:23.8878303Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8878384Z { 2023-01-11T21:41:23.8878534Z #pragma omp for 2023-01-11T21:41:23.8878644Z for(long i0=0; i0<256; i0+=1) 2023-01-11T21:41:23.8878725Z { 2023-01-11T21:41:23.8878794Z { 2023-01-11T21:41:23.8878877Z { 2023-01-11T21:41:23.8878997Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8879130Z auto tmp1 = static_cast(0); 2023-01-11T21:41:23.8879247Z auto tmp2 = tmp0 > tmp1; 2023-01-11T21:41:23.8879386Z auto tmp3 = static_cast(0.2); 2023-01-11T21:41:23.8879504Z auto tmp4 = tmp0 * tmp3; 2023-01-11T21:41:23.8879617Z auto tmp5 = tmp2 ? tmp0 : tmp4; 2023-01-11T21:41:23.8879749Z auto tmp6 = static_cast(2); 2023-01-11T21:41:23.8879866Z auto tmp7 = tmp5 + tmp6; 2023-01-11T21:41:23.8879998Z auto tmp8 = static_cast(1); 2023-01-11T21:41:23.8880114Z auto tmp9 = tmp0 + tmp8; 2023-01-11T21:41:23.8880237Z auto tmp10 = tmp9 > tmp1; 2023-01-11T21:41:23.8880377Z auto tmp11 = static_cast(0.01); 2023-01-11T21:41:23.8880480Z auto tmp12 = tmp9 * tmp11; 2023-01-11T21:41:23.8880611Z auto tmp13 = tmp10 ? 
tmp9 : tmp12; 2023-01-11T21:41:23.8880719Z out_ptr0[i0] = tmp7; 2023-01-11T21:41:23.8880829Z out_ptr1[i0] = tmp13; 2023-01-11T21:41:23.8880913Z } 2023-01-11T21:41:23.8880998Z } 2023-01-11T21:41:23.8881079Z } 2023-01-11T21:41:23.8881142Z } 2023-01-11T21:41:23.8881221Z } 2023-01-11T21:41:23.8881331Z ''') 2023-01-11T21:41:23.8881338Z 2023-01-11T21:41:23.8881343Z 2023-01-11T21:41:23.8881461Z async_compile.wait(globals()) 2023-01-11T21:41:23.8881556Z del async_compile 2023-01-11T21:41:23.8881561Z 2023-01-11T21:41:23.8881652Z def call(args): 2023-01-11T21:41:23.8881740Z arg0_1, = args 2023-01-11T21:41:23.8881820Z args.clear() 2023-01-11T21:41:23.8882098Z buf0 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8882366Z buf1 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8882586Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8882676Z del arg0_1 2023-01-11T21:41:23.8882778Z return (buf0, buf1, ) 2023-01-11T21:41:23.8882785Z 2023-01-11T21:41:23.8882791Z 2023-01-11T21:41:23.8882888Z if __name__ == "__main__": 2023-01-11T21:41:23.8883040Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8883193Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8883476Z arg0_1 = rand_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8883626Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8883632Z 2023-01-11T21:41:23.8883721Z ok (1.828s) 2023-01-11T21:41:23.8884480Z test_lgamma_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8884671Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8885135Z [2023-01-11 21:35:34,688] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 262 2023-01-11T21:41:23.8885545Z [2023-01-11 21:35:36,287] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 262 2023-01-11T21:41:23.8885553Z 2023-01-11T21:41:23.8885683Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8885778Z import torch 2023-01-11T21:41:23.8885860Z import random 2023-01-11T21:41:23.8886026Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8886290Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8886299Z 2023-01-11T21:41:23.8886425Z aten = torch.ops.aten 2023-01-11T21:41:23.8886649Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8886791Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8886798Z 2023-01-11T21:41:23.8886806Z 2023-01-11T21:41:23.8887032Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8887397Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8887619Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.8887804Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.8887982Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.8888092Z { 2023-01-11T21:41:23.8888274Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8888382Z { 2023-01-11T21:41:23.8888504Z #pragma omp for 2023-01-11T21:41:23.8888662Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:23.8888776Z { 2023-01-11T21:41:23.8889030Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.8889186Z auto tmp1 = tmp0.lgamma(); 2023-01-11T21:41:23.8889434Z auto tmp2 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.8889583Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:23.8889816Z auto tmp4 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.8889925Z auto tmp5 = tmp0 + tmp4; 2023-01-11T21:41:23.8890053Z auto tmp6 = tmp5.cos(); 2023-01-11T21:41:23.8890187Z tmp3.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.8890312Z tmp6.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.8890407Z } 2023-01-11T21:41:23.8890537Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8890648Z for(long i0=256; i0<256; i0+=1) 2023-01-11T21:41:23.8890734Z { 2023-01-11T21:41:23.8890870Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.8891031Z auto tmp1 = std::lgamma(tmp0); 2023-01-11T21:41:23.8891199Z auto tmp2 = static_cast(2); 2023-01-11T21:41:23.8891340Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:23.8891508Z auto tmp4 = static_cast(1); 2023-01-11T21:41:23.8891643Z auto tmp5 = tmp0 + tmp4; 2023-01-11T21:41:23.8891777Z auto tmp6 = std::cos(tmp5); 2023-01-11T21:41:23.8891888Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:23.8891998Z out_ptr1[i0] = tmp6; 2023-01-11T21:41:23.8892084Z } 2023-01-11T21:41:23.8892171Z } 2023-01-11T21:41:23.8892255Z } 2023-01-11T21:41:23.8892382Z ''') 2023-01-11T21:41:23.8892390Z 2023-01-11T21:41:23.8892395Z 2023-01-11T21:41:23.8892507Z async_compile.wait(globals()) 2023-01-11T21:41:23.8892608Z del async_compile 2023-01-11T21:41:23.8892615Z 2023-01-11T21:41:23.8892711Z def call(args): 2023-01-11T21:41:23.8892883Z arg0_1, = args 2023-01-11T21:41:23.8892984Z args.clear() 2023-01-11T21:41:23.8893283Z buf0 = empty_strided((16, 16), (16, 1), device='cpu', 
dtype=torch.float32) 2023-01-11T21:41:23.8893582Z buf1 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8893810Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8893903Z del arg0_1 2023-01-11T21:41:23.8894012Z return (buf0, buf1, ) 2023-01-11T21:41:23.8894020Z 2023-01-11T21:41:23.8894026Z 2023-01-11T21:41:23.8894132Z if __name__ == "__main__": 2023-01-11T21:41:23.8894297Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8894473Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8894771Z arg0_1 = rand_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8894930Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.8894937Z 2023-01-11T21:41:23.8895081Z ok (1.633s) 2023-01-11T21:41:23.8895802Z test_linear1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8895986Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8896382Z [2023-01-11 21:35:36,340] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 263 2023-01-11T21:41:23.8896789Z [2023-01-11 21:35:36,348] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 263 2023-01-11T21:41:23.8896797Z 2023-01-11T21:41:23.8896934Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8897035Z import torch 2023-01-11T21:41:23.8897136Z import random 2023-01-11T21:41:23.8897312Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8897469Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8897491Z 2023-01-11T21:41:23.8897584Z aten = torch.ops.aten 2023-01-11T21:41:23.8897778Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8897908Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8897915Z 2023-01-11T21:41:23.8897922Z 2023-01-11T21:41:23.8898118Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8898423Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8898590Z extern "C" void kernel(float* __restrict__ in_out_ptr0) 2023-01-11T21:41:23.8898674Z { 2023-01-11T21:41:23.8898800Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8898884Z { 2023-01-11T21:41:23.8898994Z #pragma omp for 2023-01-11T21:41:23.8899108Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:23.8899195Z { 2023-01-11T21:41:23.8899407Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.8899597Z auto tmp1 = decltype(tmp0)(1)/(decltype(tmp0)(1) + tmp0.neg().exp()); 2023-01-11T21:41:23.8899731Z tmp1.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.8899802Z } 2023-01-11T21:41:23.8899937Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8900052Z for(long i0=32; i0<32; i0+=1) 2023-01-11T21:41:23.8900143Z { 2023-01-11T21:41:23.8900273Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:23.8900478Z auto tmp1 = std::exp(-tmp0); 2023-01-11T21:41:23.8900581Z auto tmp2 = 1 / (1 + tmp1); 2023-01-11T21:41:23.8900700Z in_out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.8900785Z } 2023-01-11T21:41:23.8900870Z } 
2023-01-11T21:41:23.8900951Z } 2023-01-11T21:41:23.8901064Z ''') 2023-01-11T21:41:23.8901073Z 2023-01-11T21:41:23.8901078Z 2023-01-11T21:41:23.8901205Z async_compile.wait(globals()) 2023-01-11T21:41:23.8901371Z del async_compile 2023-01-11T21:41:23.8901392Z 2023-01-11T21:41:23.8901478Z def call(args): 2023-01-11T21:41:23.8901620Z primals_1, primals_2, primals_3 = args 2023-01-11T21:41:23.8901721Z args.clear() 2023-01-11T21:41:23.8902017Z buf0 = empty_strided((2, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8902254Z aten.addmm.out(primals_2, primals_3, as_strided(primals_1, (8, 16), (1, 8)), beta=1, alpha=1, out=buf0) 2023-01-11T21:41:23.8902491Z del primals_1 2023-01-11T21:41:23.8902594Z del primals_2 2023-01-11T21:41:23.8902699Z buf1 = buf0; del buf0 # reuse 2023-01-11T21:41:23.8902848Z kernel_cpp_0(c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.8902977Z return (buf1, primals_3, buf1, ) 2023-01-11T21:41:23.8902985Z 2023-01-11T21:41:23.8902992Z 2023-01-11T21:41:23.8903101Z if __name__ == "__main__": 2023-01-11T21:41:23.8903266Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8903535Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8903849Z primals_1 = rand_strided((16, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8904152Z primals_2 = rand_strided((16, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8904434Z primals_3 = rand_strided((2, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8904636Z print_performance(lambda: call([primals_1, primals_2, primals_3])) 2023-01-11T21:41:23.8904643Z 2023-01-11T21:41:23.8904737Z ok (0.061s) 2023-01-11T21:41:23.8905454Z test_linear2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8905640Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8906041Z [2023-01-11 21:35:36,557] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 264 2023-01-11T21:41:23.8906450Z [2023-01-11 21:35:36,603] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 264 2023-01-11T21:41:23.8906458Z 2023-01-11T21:41:23.8906594Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8906698Z import torch 2023-01-11T21:41:23.8906783Z import random 2023-01-11T21:41:23.8906950Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8907129Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8907137Z 2023-01-11T21:41:23.8907256Z aten = torch.ops.aten 2023-01-11T21:41:23.8907452Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8907586Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8907594Z 2023-01-11T21:41:23.8907600Z 2023-01-11T21:41:23.8907801Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.8908108Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8908260Z extern "C" void kernel(float* __restrict__ in_out_ptr0) 2023-01-11T21:41:23.8908347Z { 2023-01-11T21:41:23.8908488Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8908574Z { 2023-01-11T21:41:23.8908683Z #pragma omp for 2023-01-11T21:41:23.8908800Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.8908887Z { 2023-01-11T21:41:23.8909079Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.8909264Z auto tmp1 = at::vec::clamp_min(tmp0, decltype(tmp0)(0)); 2023-01-11T21:41:23.8909401Z tmp1.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.8909489Z } 2023-01-11T21:41:23.8909628Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8909749Z for(long i0=16; i0<16; i0+=1) 2023-01-11T21:41:23.8909835Z { 2023-01-11T21:41:23.8910056Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:23.8910185Z auto tmp1 = tmp0 * (tmp0>0); 2023-01-11T21:41:23.8910303Z in_out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.8910392Z } 2023-01-11T21:41:23.8910477Z } 2023-01-11T21:41:23.8910563Z } 2023-01-11T21:41:23.8910685Z ''') 2023-01-11T21:41:23.8910692Z 2023-01-11T21:41:23.8910700Z 2023-01-11T21:41:23.8910882Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.8911186Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8911357Z extern "C" void kernel(float* __restrict__ in_out_ptr0) 2023-01-11T21:41:23.8911442Z { 2023-01-11T21:41:23.8911585Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8911669Z { 2023-01-11T21:41:23.8911780Z #pragma omp for 2023-01-11T21:41:23.8911881Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.8911969Z { 2023-01-11T21:41:23.8912242Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.8912438Z auto tmp1 = at::vec::clamp_min(tmp0, decltype(tmp0)(0)); 2023-01-11T21:41:23.8912571Z tmp1.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.8912664Z } 2023-01-11T21:41:23.8912800Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8912903Z for(long i0=16; i0<16; i0+=1) 2023-01-11T21:41:23.8912992Z { 2023-01-11T21:41:23.8913119Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:23.8913245Z auto tmp1 = tmp0 * (tmp0>0); 2023-01-11T21:41:23.8913363Z in_out_ptr0[i0] = tmp1; 
2023-01-11T21:41:23.8913452Z } 2023-01-11T21:41:23.8913540Z } 2023-01-11T21:41:23.8913613Z } 2023-01-11T21:41:23.8913804Z ''') 2023-01-11T21:41:23.8913813Z 2023-01-11T21:41:23.8913819Z 2023-01-11T21:41:23.8914020Z kernel_cpp_2 = async_compile.cpp(''' 2023-01-11T21:41:23.8914321Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8914503Z extern "C" void kernel(float* __restrict__ in_out_ptr0) 2023-01-11T21:41:23.8914589Z { 2023-01-11T21:41:23.8914732Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8914819Z { 2023-01-11T21:41:23.8914914Z #pragma omp for 2023-01-11T21:41:23.8915032Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.8915123Z { 2023-01-11T21:41:23.8915322Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.8915503Z auto tmp1 = at::vec::clamp_min(tmp0, decltype(tmp0)(0)); 2023-01-11T21:41:23.8915643Z tmp1.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.8915732Z } 2023-01-11T21:41:23.8915860Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.8915976Z for(long i0=16; i0<16; i0+=1) 2023-01-11T21:41:23.8916063Z { 2023-01-11T21:41:23.8916191Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:23.8916317Z auto tmp1 = tmp0 * (tmp0>0); 2023-01-11T21:41:23.8916440Z in_out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.8916524Z } 2023-01-11T21:41:23.8916600Z } 2023-01-11T21:41:23.8916685Z } 2023-01-11T21:41:23.8916806Z ''') 2023-01-11T21:41:23.8916814Z 2023-01-11T21:41:23.8916820Z 2023-01-11T21:41:23.8917021Z kernel_cpp_3 = async_compile.cpp(''' 2023-01-11T21:41:23.8917324Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.8917496Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.8917636Z bool* __restrict__ out_ptr0) 2023-01-11T21:41:23.8917703Z { 2023-01-11T21:41:23.8917849Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.8917935Z { 2023-01-11T21:41:23.8918044Z #pragma omp for 2023-01-11T21:41:23.8918162Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.8918249Z { 2023-01-11T21:41:23.8918342Z { 2023-01-11T21:41:23.8918420Z { 2023-01-11T21:41:23.8918641Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:23.8918778Z auto tmp1 = tmp0 * (tmp0>0); 2023-01-11T21:41:23.8918932Z auto tmp2 = static_cast(0); 2023-01-11T21:41:23.8919066Z auto tmp3 = tmp1 <= tmp2; 2023-01-11T21:41:23.8919198Z in_out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.8919323Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:23.8919401Z } 2023-01-11T21:41:23.8919493Z } 2023-01-11T21:41:23.8919581Z } 2023-01-11T21:41:23.8919666Z } 2023-01-11T21:41:23.8919747Z } 2023-01-11T21:41:23.8919865Z ''') 2023-01-11T21:41:23.8919874Z 2023-01-11T21:41:23.8919881Z 2023-01-11T21:41:23.8920015Z async_compile.wait(globals()) 2023-01-11T21:41:23.8920106Z del async_compile 2023-01-11T21:41:23.8920112Z 2023-01-11T21:41:23.8920214Z def call(args): 2023-01-11T21:41:23.8920474Z primals_1, primals_2, primals_3, primals_4, primals_5, primals_6, primals_7, primals_8, primals_9 = args 2023-01-11T21:41:23.8920644Z args.clear() 2023-01-11T21:41:23.8920950Z buf0 = empty_strided((2, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8921194Z aten.addmm.out(primals_2, primals_9, as_strided(primals_1, (8, 8), (1, 8)), beta=1, alpha=1, out=buf0) 2023-01-11T21:41:23.8921298Z del primals_1 2023-01-11T21:41:23.8921383Z del primals_2 2023-01-11T21:41:23.8921501Z buf1 = buf0; del buf0 # reuse 2023-01-11T21:41:23.8921647Z kernel_cpp_0(c_void_p(buf1.data_ptr())) 
2023-01-11T21:41:23.8921947Z buf2 = empty_strided((2, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8922181Z aten.addmm.out(primals_4, buf1, as_strided(primals_3, (8, 8), (1, 8)), beta=1, alpha=1, out=buf2) 2023-01-11T21:41:23.8922284Z del primals_4 2023-01-11T21:41:23.8922403Z buf3 = buf2; del buf2 # reuse 2023-01-11T21:41:23.8922536Z kernel_cpp_1(c_void_p(buf3.data_ptr())) 2023-01-11T21:41:23.8922836Z buf4 = empty_strided((2, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8923066Z aten.addmm.out(primals_6, buf3, as_strided(primals_5, (8, 8), (1, 8)), beta=1, alpha=1, out=buf4) 2023-01-11T21:41:23.8923167Z del primals_6 2023-01-11T21:41:23.8923285Z buf5 = buf4; del buf4 # reuse 2023-01-11T21:41:23.8923430Z kernel_cpp_2(c_void_p(buf5.data_ptr())) 2023-01-11T21:41:23.8923720Z buf6 = empty_strided((2, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8923947Z aten.addmm.out(primals_8, buf5, as_strided(primals_7, (8, 8), (1, 8)), beta=1, alpha=1, out=buf6) 2023-01-11T21:41:23.8924037Z del primals_8 2023-01-11T21:41:23.8924156Z buf7 = buf6; del buf6 # reuse 2023-01-11T21:41:23.8924438Z buf8 = empty_strided((2, 8), (8, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.8924628Z kernel_cpp_3(c_void_p(buf7.data_ptr()), c_void_p(buf8.data_ptr())) 2023-01-11T21:41:23.8924927Z return (buf7, primals_9, buf1, buf3, buf5, buf8, as_strided(primals_7, (8, 8), (8, 1)), as_strided(primals_5, (8, 8), (8, 1)), as_strided(primals_3, (8, 8), (8, 1)), ) 2023-01-11T21:41:23.8924940Z 2023-01-11T21:41:23.8924946Z 2023-01-11T21:41:23.8925052Z if __name__ == "__main__": 2023-01-11T21:41:23.8925212Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8925389Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8925684Z primals_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8925986Z primals_2 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8926294Z primals_3 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8926590Z primals_4 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8926886Z primals_5 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8927187Z primals_6 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8927560Z primals_7 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8927853Z primals_8 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8928138Z primals_9 = rand_strided((2, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.8928453Z print_performance(lambda: call([primals_1, primals_2, primals_3, primals_4, primals_5, primals_6, primals_7, primals_8, primals_9])) 2023-01-11T21:41:23.8928462Z 2023-01-11T21:41:23.8928554Z ok (0.258s) 2023-01-11T21:41:23.8929287Z test_linear_binary_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8929530Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8929946Z [2023-01-11 21:35:36,705] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 265 2023-01-11T21:41:23.8930366Z [2023-01-11 21:35:36,707] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 265 2023-01-11T21:41:23.8931029Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8931209Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8931608Z [2023-01-11 21:35:36,739] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 266 2023-01-11T21:41:23.8932016Z [2023-01-11 21:35:36,741] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 266 2023-01-11T21:41:23.8932666Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8932847Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8933228Z [2023-01-11 21:35:36,825] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 267 2023-01-11T21:41:23.8933641Z [2023-01-11 21:35:36,828] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 267 2023-01-11T21:41:23.8934302Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8934489Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8934873Z [2023-01-11 21:35:36,856] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 268 2023-01-11T21:41:23.8935272Z [2023-01-11 21:35:36,858] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 268 2023-01-11T21:41:23.8935943Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8936132Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8936608Z [2023-01-11 21:35:36,944] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 269 2023-01-11T21:41:23.8937012Z [2023-01-11 21:35:36,946] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 269 2023-01-11T21:41:23.8937683Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8937869Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8938251Z [2023-01-11 21:35:36,977] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 270 2023-01-11T21:41:23.8938697Z [2023-01-11 21:35:36,979] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 270 2023-01-11T21:41:23.8939361Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8939547Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8939939Z [2023-01-11 21:35:37,068] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 271 2023-01-11T21:41:23.8940349Z [2023-01-11 21:35:37,070] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 271 2023-01-11T21:41:23.8941001Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8941187Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8941576Z [2023-01-11 21:35:37,101] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 272 2023-01-11T21:41:23.8941585Z 2023-01-11T21:41:23.8941720Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8941819Z import torch 2023-01-11T21:41:23.8941905Z import random 2023-01-11T21:41:23.8942075Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8942250Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8942257Z 2023-01-11T21:41:23.8942547Z aten = torch.ops.aten 2023-01-11T21:41:23.8942746Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8942880Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8942897Z 2023-01-11T21:41:23.8942903Z 2023-01-11T21:41:23.8943032Z async_compile.wait(globals()) 2023-01-11T21:41:23.8943120Z del async_compile 2023-01-11T21:41:23.8943142Z 2023-01-11T21:41:23.8943227Z def call(args): 2023-01-11T21:41:23.8943357Z arg0_1, arg1_1, arg2_1, arg3_1 = args 2023-01-11T21:41:23.8943460Z args.clear() 2023-01-11T21:41:23.8943811Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg2_1, arg3_1, arg0_1, arg1_1, 'add') 2023-01-11T21:41:23.8943906Z del arg0_1 2023-01-11T21:41:23.8943996Z del arg1_1 2023-01-11T21:41:23.8944075Z del arg2_1 2023-01-11T21:41:23.8944163Z del arg3_1 2023-01-11T21:41:23.8944263Z return (buf0, ) 2023-01-11T21:41:23.8944271Z 2023-01-11T21:41:23.8944277Z 2023-01-11T21:41:23.8944382Z if __name__ == "__main__": 2023-01-11T21:41:23.8944552Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8944735Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8945160Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8945463Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8945765Z arg2_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8946079Z arg3_1 = rand_strided((2, 3, 30), (90, 30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8946271Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1])) 2023-01-11T21:41:23.8946280Z 2023-01-11T21:41:23.8946286Z 2023-01-11T21:41:23.8946421Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8946524Z import torch 2023-01-11T21:41:23.8946629Z import random 2023-01-11T21:41:23.8946803Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8946984Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8946992Z 2023-01-11T21:41:23.8947091Z aten = torch.ops.aten 2023-01-11T21:41:23.8947378Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8947519Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8947526Z 2023-01-11T21:41:23.8947532Z 2023-01-11T21:41:23.8947663Z async_compile.wait(globals()) 2023-01-11T21:41:23.8947767Z del async_compile 2023-01-11T21:41:23.8947774Z 2023-01-11T21:41:23.8947875Z def call(args): 2023-01-11T21:41:23.8947991Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.8948095Z args.clear() 2023-01-11T21:41:23.8948429Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg1_1, arg2_1, arg0_1, None, 'add') 2023-01-11T21:41:23.8948523Z del arg0_1 2023-01-11T21:41:23.8948618Z del arg1_1 2023-01-11T21:41:23.8948711Z del arg2_1 
2023-01-11T21:41:23.8948814Z return (buf0, ) 2023-01-11T21:41:23.8948821Z 2023-01-11T21:41:23.8948828Z 2023-01-11T21:41:23.8948934Z if __name__ == "__main__": 2023-01-11T21:41:23.8949098Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8949260Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8949579Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8949894Z arg1_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8950211Z arg2_1 = rand_strided((2, 3, 30), (90, 30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8950388Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8950397Z 2023-01-11T21:41:23.8950403Z 2023-01-11T21:41:23.8950543Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8950640Z import torch 2023-01-11T21:41:23.8950740Z import random 2023-01-11T21:41:23.8950895Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8951067Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8951074Z 2023-01-11T21:41:23.8951184Z aten = torch.ops.aten 2023-01-11T21:41:23.8951379Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8951507Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8951522Z 2023-01-11T21:41:23.8951528Z 2023-01-11T21:41:23.8951656Z async_compile.wait(globals()) 2023-01-11T21:41:23.8951756Z del async_compile 2023-01-11T21:41:23.8951763Z 2023-01-11T21:41:23.8951862Z def call(args): 2023-01-11T21:41:23.8951972Z arg0_1, arg1_1, arg2_1, arg3_1 = args 2023-01-11T21:41:23.8952069Z args.clear() 2023-01-11T21:41:23.8952417Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg2_1, arg3_1, arg0_1, arg1_1, 'add') 2023-01-11T21:41:23.8952516Z del arg0_1 2023-01-11T21:41:23.8952609Z del arg1_1 2023-01-11T21:41:23.8952709Z del arg2_1 2023-01-11T21:41:23.8952802Z del arg3_1 2023-01-11T21:41:23.8952885Z return (buf0, ) 2023-01-11T21:41:23.8952893Z 2023-01-11T21:41:23.8952914Z 2023-01-11T21:41:23.8953006Z if __name__ == "__main__": 2023-01-11T21:41:23.8953169Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8953345Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8953792Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8954114Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8954455Z arg2_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8954795Z arg3_1 = rand_strided((2, 30), (30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8954991Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1])) 2023-01-11T21:41:23.8955016Z 2023-01-11T21:41:23.8955023Z 2023-01-11T21:41:23.8955164Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8955280Z import torch 2023-01-11T21:41:23.8955393Z import random 2023-01-11T21:41:23.8955590Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8955801Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8955808Z 2023-01-11T21:41:23.8955937Z aten = torch.ops.aten 2023-01-11T21:41:23.8956225Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8956366Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8956389Z 2023-01-11T21:41:23.8956396Z 2023-01-11T21:41:23.8956527Z async_compile.wait(globals()) 2023-01-11T21:41:23.8956646Z del async_compile 
2023-01-11T21:41:23.8956654Z 2023-01-11T21:41:23.8956761Z def call(args): 2023-01-11T21:41:23.8956894Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.8957001Z args.clear() 2023-01-11T21:41:23.8957396Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg1_1, arg2_1, arg0_1, None, 'add') 2023-01-11T21:41:23.8957505Z del arg0_1 2023-01-11T21:41:23.8957597Z del arg1_1 2023-01-11T21:41:23.8957708Z del arg2_1 2023-01-11T21:41:23.8957823Z return (buf0, ) 2023-01-11T21:41:23.8957830Z 2023-01-11T21:41:23.8957837Z 2023-01-11T21:41:23.8957957Z if __name__ == "__main__": 2023-01-11T21:41:23.8958146Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8958353Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8958713Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8959041Z arg1_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8959384Z arg2_1 = rand_strided((2, 30), (30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8959586Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8959594Z 2023-01-11T21:41:23.8959601Z 2023-01-11T21:41:23.8959762Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8959875Z import torch 2023-01-11T21:41:23.8959990Z import random 2023-01-11T21:41:23.8960184Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8960389Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8960397Z 2023-01-11T21:41:23.8960513Z aten = torch.ops.aten 2023-01-11T21:41:23.8960742Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8960898Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8960914Z 2023-01-11T21:41:23.8960922Z 2023-01-11T21:41:23.8961072Z async_compile.wait(globals()) 2023-01-11T21:41:23.8961193Z del async_compile 2023-01-11T21:41:23.8961200Z 2023-01-11T21:41:23.8961316Z def call(args): 2023-01-11T21:41:23.8961457Z arg0_1, arg1_1, arg2_1, arg3_1 = args 2023-01-11T21:41:23.8961572Z args.clear() 2023-01-11T21:41:23.8961947Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg2_1, arg3_1, arg0_1, arg1_1, 'add') 2023-01-11T21:41:23.8962053Z del arg0_1 2023-01-11T21:41:23.8962154Z del arg1_1 2023-01-11T21:41:23.8962258Z del arg2_1 2023-01-11T21:41:23.8962368Z del arg3_1 2023-01-11T21:41:23.8962485Z return (buf0, ) 2023-01-11T21:41:23.8962493Z 2023-01-11T21:41:23.8962500Z 2023-01-11T21:41:23.8962625Z if __name__ == "__main__": 2023-01-11T21:41:23.8962798Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8963000Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8963360Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8963759Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8964123Z arg2_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8964482Z arg3_1 = rand_strided((2, 3, 30), (90, 30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8964703Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1])) 2023-01-11T21:41:23.8964712Z 2023-01-11T21:41:23.8964720Z 2023-01-11T21:41:23.8964879Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8964995Z import torch 2023-01-11T21:41:23.8965096Z import random 2023-01-11T21:41:23.8965287Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8965492Z from torch._inductor.codecache import 
AsyncCompile 2023-01-11T21:41:23.8965500Z 2023-01-11T21:41:23.8965628Z aten = torch.ops.aten 2023-01-11T21:41:23.8965900Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8966058Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8966066Z 2023-01-11T21:41:23.8966074Z 2023-01-11T21:41:23.8966222Z async_compile.wait(globals()) 2023-01-11T21:41:23.8966340Z del async_compile 2023-01-11T21:41:23.8966349Z 2023-01-11T21:41:23.8966446Z def call(args): 2023-01-11T21:41:23.8966577Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.8966693Z args.clear() 2023-01-11T21:41:23.8967092Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg1_1, arg2_1, arg0_1, None, 'add') 2023-01-11T21:41:23.8967200Z del arg0_1 2023-01-11T21:41:23.8967309Z del arg1_1 2023-01-11T21:41:23.8967416Z del arg2_1 2023-01-11T21:41:23.8967521Z return (buf0, ) 2023-01-11T21:41:23.8967529Z 2023-01-11T21:41:23.8967537Z 2023-01-11T21:41:23.8967659Z if __name__ == "__main__": 2023-01-11T21:41:23.8967850Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8968058Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8968418Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8968774Z arg1_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8969136Z arg2_1 = rand_strided((2, 3, 30), (90, 30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8969339Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8969347Z 2023-01-11T21:41:23.8969354Z 2023-01-11T21:41:23.8969494Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8969605Z import torch 2023-01-11T21:41:23.8969724Z import random 2023-01-11T21:41:23.8969916Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8970123Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8970131Z 2023-01-11T21:41:23.8970262Z aten = torch.ops.aten 2023-01-11T21:41:23.8970487Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8970626Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8970656Z 2023-01-11T21:41:23.8970663Z 2023-01-11T21:41:23.8970797Z async_compile.wait(globals()) 2023-01-11T21:41:23.8970918Z del async_compile 2023-01-11T21:41:23.8970925Z 2023-01-11T21:41:23.8971043Z def call(args): 2023-01-11T21:41:23.8971189Z arg0_1, arg1_1, arg2_1, arg3_1 = args 2023-01-11T21:41:23.8971308Z args.clear() 2023-01-11T21:41:23.8971716Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg2_1, arg3_1, arg0_1, arg1_1, 'add') 2023-01-11T21:41:23.8971824Z del arg0_1 2023-01-11T21:41:23.8971917Z del arg1_1 2023-01-11T21:41:23.8972025Z del arg2_1 2023-01-11T21:41:23.8972128Z del arg3_1 2023-01-11T21:41:23.8972244Z return (buf0, ) 2023-01-11T21:41:23.8972252Z 2023-01-11T21:41:23.8972259Z 2023-01-11T21:41:23.8972384Z if __name__ == "__main__": 2023-01-11T21:41:23.8972579Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8972787Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8973197Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8973519Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8973858Z arg2_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8974199Z arg3_1 = rand_strided((2, 30), (30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8974420Z 
print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1])) 2023-01-11T21:41:23.8974427Z 2023-01-11T21:41:23.8974899Z [2023-01-11 21:35:37,103] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 272 2023-01-11T21:41:23.8975730Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8975947Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8976407Z [2023-01-11 21:35:37,190] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 273 2023-01-11T21:41:23.8976878Z [2023-01-11 21:35:37,193] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 273 2023-01-11T21:41:23.8977644Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8977831Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8978288Z [2023-01-11 21:35:37,224] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 274 2023-01-11T21:41:23.8978758Z [2023-01-11 21:35:37,226] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 274 2023-01-11T21:41:23.8979524Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8979731Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8980176Z [2023-01-11 21:35:37,321] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 275 2023-01-11T21:41:23.8980637Z [2023-01-11 21:35:37,323] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 275 2023-01-11T21:41:23.8981397Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8981610Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8982055Z [2023-01-11 21:35:37,353] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 276 2023-01-11T21:41:23.8982655Z [2023-01-11 21:35:37,356] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 276 2023-01-11T21:41:23.8983421Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. 
This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8983717Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8984162Z [2023-01-11 21:35:37,443] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 277 2023-01-11T21:41:23.8984630Z [2023-01-11 21:35:37,445] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 277 2023-01-11T21:41:23.8985405Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8985613Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8986114Z [2023-01-11 21:35:37,476] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 278 2023-01-11T21:41:23.8986591Z [2023-01-11 21:35:37,478] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 278 2023-01-11T21:41:23.8987359Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.8987576Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.8988034Z [2023-01-11 21:35:37,564] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 279 2023-01-11T21:41:23.8988045Z 2023-01-11T21:41:23.8988203Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8988302Z import torch 2023-01-11T21:41:23.8988416Z import random 2023-01-11T21:41:23.8988609Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8988825Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8988834Z 2023-01-11T21:41:23.8988963Z aten = torch.ops.aten 2023-01-11T21:41:23.8989193Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8989348Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8989357Z 2023-01-11T21:41:23.8989364Z 2023-01-11T21:41:23.8989511Z async_compile.wait(globals()) 2023-01-11T21:41:23.8989614Z del async_compile 2023-01-11T21:41:23.8989637Z 2023-01-11T21:41:23.8989742Z def call(args): 2023-01-11T21:41:23.8989879Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.8989992Z args.clear() 2023-01-11T21:41:23.8990388Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg1_1, arg2_1, arg0_1, None, 'add') 2023-01-11T21:41:23.8990499Z del arg0_1 2023-01-11T21:41:23.8990608Z del arg1_1 2023-01-11T21:41:23.8990700Z del arg2_1 2023-01-11T21:41:23.8990816Z return (buf0, ) 2023-01-11T21:41:23.8990830Z 2023-01-11T21:41:23.8990840Z 2023-01-11T21:41:23.8990967Z if __name__ == "__main__": 2023-01-11T21:41:23.8991161Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8991373Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8991725Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8992070Z arg1_1 = rand_strided((2, 
10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8992411Z arg2_1 = rand_strided((2, 30), (30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8992601Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.8992623Z 2023-01-11T21:41:23.8992630Z 2023-01-11T21:41:23.8992768Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8992879Z import torch 2023-01-11T21:41:23.8992989Z import random 2023-01-11T21:41:23.8993182Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8993390Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8993494Z 2023-01-11T21:41:23.8993619Z aten = torch.ops.aten 2023-01-11T21:41:23.8993908Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8994044Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8994053Z 2023-01-11T21:41:23.8994075Z 2023-01-11T21:41:23.8994208Z async_compile.wait(globals()) 2023-01-11T21:41:23.8994331Z del async_compile 2023-01-11T21:41:23.8994338Z 2023-01-11T21:41:23.8994456Z def call(args): 2023-01-11T21:41:23.8994600Z arg0_1, arg1_1, arg2_1, arg3_1 = args 2023-01-11T21:41:23.8994716Z args.clear() 2023-01-11T21:41:23.8995117Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg2_1, arg3_1, arg0_1, arg1_1, 'add') 2023-01-11T21:41:23.8995226Z del arg0_1 2023-01-11T21:41:23.8995318Z del arg1_1 2023-01-11T21:41:23.8995423Z del arg2_1 2023-01-11T21:41:23.8995521Z del arg3_1 2023-01-11T21:41:23.8995633Z return (buf0, ) 2023-01-11T21:41:23.8995641Z 2023-01-11T21:41:23.8995648Z 2023-01-11T21:41:23.8995821Z if __name__ == "__main__": 2023-01-11T21:41:23.8996017Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.8996224Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.8996559Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8996901Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8997258Z arg2_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8997619Z arg3_1 = rand_strided((2, 3, 30), (90, 30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.8997836Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1])) 2023-01-11T21:41:23.8997845Z 2023-01-11T21:41:23.8997852Z 2023-01-11T21:41:23.8998004Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.8998117Z import torch 2023-01-11T21:41:23.8998233Z import random 2023-01-11T21:41:23.8998417Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.8998627Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.8998634Z 2023-01-11T21:41:23.8998760Z aten = torch.ops.aten 2023-01-11T21:41:23.8998984Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.8999134Z async_compile = AsyncCompile() 2023-01-11T21:41:23.8999143Z 2023-01-11T21:41:23.8999151Z 2023-01-11T21:41:23.8999302Z async_compile.wait(globals()) 2023-01-11T21:41:23.8999425Z del async_compile 2023-01-11T21:41:23.8999433Z 2023-01-11T21:41:23.8999554Z def call(args): 2023-01-11T21:41:23.8999674Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.8999789Z args.clear() 2023-01-11T21:41:23.9000176Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg1_1, arg2_1, arg0_1, None, 'add') 2023-01-11T21:41:23.9000285Z del arg0_1 2023-01-11T21:41:23.9000391Z del arg1_1 2023-01-11T21:41:23.9000494Z del arg2_1 2023-01-11T21:41:23.9000610Z return (buf0, ) 
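[editor's sketch] The generated wrapper logged above is easier to read when reassembled onto separate lines. A minimal, self-contained version of the no-bias 'add' variant follows; it only restates what the log already shows, with the argument roles inferred from the shapes (arg0_1 is the (30, 10) weight, arg1_1 the (2, 10) input, arg2_1 the (2, 30) tensor consumed by the 'add' epilogue).

# Reassembled from the inductor output above (no-bias 'add' variant); roles inferred from shapes.
import torch
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance

def call(args):
    arg0_1, arg1_1, arg2_1 = args
    args.clear()
    # fused linear + elementwise add via the oneDNN (mkldnn) pointwise-binary linear op
    buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg1_1, arg2_1, arg0_1, None, 'add')
    del arg0_1, arg1_1, arg2_1
    return (buf0, )

if __name__ == "__main__":
    arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16)
    arg1_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16)
    arg2_1 = rand_strided((2, 30), (30, 1), device='cpu', dtype=torch.bfloat16)
    print_performance(lambda: call([arg0_1, arg1_1, arg2_1]))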
2023-01-11T21:41:23.9000623Z 2023-01-11T21:41:23.9000634Z 2023-01-11T21:41:23.9000755Z if __name__ == "__main__": 2023-01-11T21:41:23.9000929Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9001135Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9001481Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9001843Z arg1_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9002197Z arg2_1 = rand_strided((2, 3, 30), (90, 30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9002397Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9002405Z 2023-01-11T21:41:23.9002411Z 2023-01-11T21:41:23.9002570Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9002687Z import torch 2023-01-11T21:41:23.9002790Z import random 2023-01-11T21:41:23.9002986Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9003196Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9003256Z 2023-01-11T21:41:23.9003385Z aten = torch.ops.aten 2023-01-11T21:41:23.9003617Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9003765Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9003771Z 2023-01-11T21:41:23.9003777Z 2023-01-11T21:41:23.9003922Z async_compile.wait(globals()) 2023-01-11T21:41:23.9004040Z del async_compile 2023-01-11T21:41:23.9004048Z 2023-01-11T21:41:23.9004150Z def call(args): 2023-01-11T21:41:23.9004295Z arg0_1, arg1_1, arg2_1, arg3_1 = args 2023-01-11T21:41:23.9004411Z args.clear() 2023-01-11T21:41:23.9004811Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg2_1, arg3_1, arg0_1, arg1_1, 'add') 2023-01-11T21:41:23.9004908Z del arg0_1 2023-01-11T21:41:23.9005018Z del arg1_1 2023-01-11T21:41:23.9005119Z del arg2_1 2023-01-11T21:41:23.9005210Z del arg3_1 2023-01-11T21:41:23.9005324Z return (buf0, ) 2023-01-11T21:41:23.9005331Z 2023-01-11T21:41:23.9005343Z 2023-01-11T21:41:23.9005506Z if __name__ == "__main__": 2023-01-11T21:41:23.9005697Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9005901Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9006255Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9006591Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9006918Z arg2_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9007263Z arg3_1 = rand_strided((2, 30), (30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9007481Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1])) 2023-01-11T21:41:23.9007490Z 2023-01-11T21:41:23.9007497Z 2023-01-11T21:41:23.9007654Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9007765Z import torch 2023-01-11T21:41:23.9007885Z import random 2023-01-11T21:41:23.9008085Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9008295Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9008304Z 2023-01-11T21:41:23.9008415Z aten = torch.ops.aten 2023-01-11T21:41:23.9008638Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9008788Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9008795Z 2023-01-11T21:41:23.9008802Z 2023-01-11T21:41:23.9008948Z async_compile.wait(globals()) 2023-01-11T21:41:23.9009062Z del async_compile 2023-01-11T21:41:23.9009070Z 
2023-01-11T21:41:23.9009180Z def call(args): 2023-01-11T21:41:23.9009311Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9009432Z args.clear() 2023-01-11T21:41:23.9009807Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg1_1, arg2_1, arg0_1, None, 'add') 2023-01-11T21:41:23.9009913Z del arg0_1 2023-01-11T21:41:23.9010020Z del arg1_1 2023-01-11T21:41:23.9010125Z del arg2_1 2023-01-11T21:41:23.9010237Z return (buf0, ) 2023-01-11T21:41:23.9010244Z 2023-01-11T21:41:23.9010258Z 2023-01-11T21:41:23.9010379Z if __name__ == "__main__": 2023-01-11T21:41:23.9010571Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9010776Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9011113Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9011453Z arg1_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9011785Z arg2_1 = rand_strided((2, 30), (30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9011991Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9011999Z 2023-01-11T21:41:23.9012006Z 2023-01-11T21:41:23.9012162Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9012277Z import torch 2023-01-11T21:41:23.9012391Z import random 2023-01-11T21:41:23.9012580Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9012774Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9012838Z 2023-01-11T21:41:23.9012964Z aten = torch.ops.aten 2023-01-11T21:41:23.9013194Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9013341Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9013350Z 2023-01-11T21:41:23.9013357Z 2023-01-11T21:41:23.9013506Z async_compile.wait(globals()) 2023-01-11T21:41:23.9013627Z del async_compile 2023-01-11T21:41:23.9013634Z 2023-01-11T21:41:23.9013745Z def call(args): 2023-01-11T21:41:23.9013891Z arg0_1, arg1_1, arg2_1, arg3_1 = args 2023-01-11T21:41:23.9013991Z args.clear() 2023-01-11T21:41:23.9014397Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg2_1, arg3_1, arg0_1, arg1_1, 'add') 2023-01-11T21:41:23.9014506Z del arg0_1 2023-01-11T21:41:23.9014615Z del arg1_1 2023-01-11T21:41:23.9014717Z del arg2_1 2023-01-11T21:41:23.9014816Z del arg3_1 2023-01-11T21:41:23.9014913Z return (buf0, ) 2023-01-11T21:41:23.9014934Z 2023-01-11T21:41:23.9014941Z 2023-01-11T21:41:23.9015114Z if __name__ == "__main__": 2023-01-11T21:41:23.9015306Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9015512Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9015857Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9016195Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9016552Z arg2_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9016901Z arg3_1 = rand_strided((2, 3, 30), (90, 30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9017118Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1])) 2023-01-11T21:41:23.9017127Z 2023-01-11T21:41:23.9017135Z 2023-01-11T21:41:23.9017277Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9017389Z import torch 2023-01-11T21:41:23.9017508Z import random 2023-01-11T21:41:23.9017704Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9017916Z from torch._inductor.codecache import AsyncCompile 
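[editor's sketch] The other wrapper shape logged above differs only in that it passes an explicit (30,)-element bias as the fourth argument (instead of None) and feeds a 3-D input. Reassembled in the same way, with roles again inferred from the shapes (arg0_1 weight, arg1_1 bias, arg2_1 input, arg3_1 the tensor fused by the 'add' epilogue):

# Reassembled from the inductor output above (bias + 3-D input 'add' variant).
import torch
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance

def call(args):
    arg0_1, arg1_1, arg2_1, arg3_1 = args
    args.clear()
    buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg2_1, arg3_1, arg0_1, arg1_1, 'add')
    del arg0_1, arg1_1, arg2_1, arg3_1
    return (buf0, )

if __name__ == "__main__":
    arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16)
    arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16)
    arg2_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16)
    arg3_1 = rand_strided((2, 3, 30), (90, 30, 1), device='cpu', dtype=torch.bfloat16)
    print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1]))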
2023-01-11T21:41:23.9017924Z 2023-01-11T21:41:23.9018053Z aten = torch.ops.aten 2023-01-11T21:41:23.9018279Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9018414Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9018437Z 2023-01-11T21:41:23.9018444Z 2023-01-11T21:41:23.9018578Z async_compile.wait(globals()) 2023-01-11T21:41:23.9018691Z del async_compile 2023-01-11T21:41:23.9018698Z 2023-01-11T21:41:23.9018811Z def call(args): 2023-01-11T21:41:23.9018937Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9019053Z args.clear() 2023-01-11T21:41:23.9019455Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg1_1, arg2_1, arg0_1, None, 'add') 2023-01-11T21:41:23.9019567Z del arg0_1 2023-01-11T21:41:23.9019664Z del arg1_1 2023-01-11T21:41:23.9019772Z del arg2_1 2023-01-11T21:41:23.9019894Z return (buf0, ) 2023-01-11T21:41:23.9019901Z 2023-01-11T21:41:23.9019912Z 2023-01-11T21:41:23.9020037Z if __name__ == "__main__": 2023-01-11T21:41:23.9020228Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9020430Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9020778Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9021139Z arg1_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9021483Z arg2_1 = rand_strided((2, 3, 30), (90, 30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9021692Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9021700Z 2023-01-11T21:41:23.9022171Z [2023-01-11 21:35:37,566] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 279 2023-01-11T21:41:23.9023073Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9023371Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9023827Z [2023-01-11 21:35:37,595] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 280 2023-01-11T21:41:23.9024289Z [2023-01-11 21:35:37,597] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 280 2023-01-11T21:41:23.9025045Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9025305Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9025765Z [2023-01-11 21:35:37,688] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 281 2023-01-11T21:41:23.9026235Z [2023-01-11 21:35:37,691] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 281 2023-01-11T21:41:23.9026997Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9027209Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9027667Z [2023-01-11 21:35:37,724] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 282 2023-01-11T21:41:23.9028135Z [2023-01-11 21:35:37,727] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 282 2023-01-11T21:41:23.9028915Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9029123Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9029586Z [2023-01-11 21:35:37,812] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 283 2023-01-11T21:41:23.9030053Z [2023-01-11 21:35:37,815] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 283 2023-01-11T21:41:23.9030830Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9031039Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9031489Z [2023-01-11 21:35:37,845] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 284 2023-01-11T21:41:23.9031953Z [2023-01-11 21:35:37,847] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 284 2023-01-11T21:41:23.9032740Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9032942Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9033440Z [2023-01-11 21:35:37,934] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 285 2023-01-11T21:41:23.9033968Z [2023-01-11 21:35:37,936] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 285 2023-01-11T21:41:23.9034721Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9034925Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9035376Z [2023-01-11 21:35:37,967] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 286 2023-01-11T21:41:23.9035386Z 2023-01-11T21:41:23.9035589Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9035703Z import torch 2023-01-11T21:41:23.9035816Z import random 2023-01-11T21:41:23.9035997Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9036202Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9036211Z 2023-01-11T21:41:23.9036334Z aten = torch.ops.aten 2023-01-11T21:41:23.9036559Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9036710Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9036718Z 2023-01-11T21:41:23.9036726Z 2023-01-11T21:41:23.9036872Z async_compile.wait(globals()) 2023-01-11T21:41:23.9036986Z del async_compile 2023-01-11T21:41:23.9036994Z 2023-01-11T21:41:23.9037108Z def call(args): 2023-01-11T21:41:23.9037238Z arg0_1, arg1_1, arg2_1, arg3_1 = args 2023-01-11T21:41:23.9037352Z args.clear() 2023-01-11T21:41:23.9037751Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg2_1, arg3_1, arg0_1, arg1_1, 'add') 2023-01-11T21:41:23.9037866Z del arg0_1 2023-01-11T21:41:23.9037980Z del arg1_1 2023-01-11T21:41:23.9038085Z del arg2_1 2023-01-11T21:41:23.9038192Z del arg3_1 2023-01-11T21:41:23.9038297Z return (buf0, ) 2023-01-11T21:41:23.9038304Z 2023-01-11T21:41:23.9038322Z 2023-01-11T21:41:23.9038429Z if __name__ == "__main__": 2023-01-11T21:41:23.9038616Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9038822Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9039173Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9039517Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9039855Z arg2_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9040192Z arg3_1 = rand_strided((2, 30), (30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9040390Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1])) 2023-01-11T21:41:23.9040417Z 2023-01-11T21:41:23.9040429Z 2023-01-11T21:41:23.9040569Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9040683Z import torch 2023-01-11T21:41:23.9040797Z import random 2023-01-11T21:41:23.9040996Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9041204Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9041212Z 2023-01-11T21:41:23.9041344Z aten = torch.ops.aten 2023-01-11T21:41:23.9041571Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9041711Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9041718Z 2023-01-11T21:41:23.9041739Z 2023-01-11T21:41:23.9041869Z async_compile.wait(globals()) 2023-01-11T21:41:23.9041987Z del async_compile 2023-01-11T21:41:23.9041994Z 2023-01-11T21:41:23.9042108Z def call(args): 2023-01-11T21:41:23.9042239Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9042356Z args.clear() 2023-01-11T21:41:23.9042754Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg1_1, arg2_1, arg0_1, None, 'add') 2023-01-11T21:41:23.9042921Z del arg0_1 2023-01-11T21:41:23.9043014Z del arg1_1 2023-01-11T21:41:23.9043117Z del arg2_1 
2023-01-11T21:41:23.9043233Z return (buf0, ) 2023-01-11T21:41:23.9043241Z 2023-01-11T21:41:23.9043248Z 2023-01-11T21:41:23.9043368Z if __name__ == "__main__": 2023-01-11T21:41:23.9043558Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9043767Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9044115Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9044440Z arg1_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9044781Z arg2_1 = rand_strided((2, 30), (30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9044986Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9044993Z 2023-01-11T21:41:23.9045000Z 2023-01-11T21:41:23.9045157Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9045315Z import torch 2023-01-11T21:41:23.9045433Z import random 2023-01-11T21:41:23.9045630Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9045832Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9045840Z 2023-01-11T21:41:23.9045954Z aten = torch.ops.aten 2023-01-11T21:41:23.9046183Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9046331Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9046340Z 2023-01-11T21:41:23.9046347Z 2023-01-11T21:41:23.9046494Z async_compile.wait(globals()) 2023-01-11T21:41:23.9046610Z del async_compile 2023-01-11T21:41:23.9046618Z 2023-01-11T21:41:23.9046732Z def call(args): 2023-01-11T21:41:23.9046878Z arg0_1, arg1_1, arg2_1, arg3_1 = args 2023-01-11T21:41:23.9046993Z args.clear() 2023-01-11T21:41:23.9047383Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg2_1, arg3_1, arg0_1, arg1_1, 'sub') 2023-01-11T21:41:23.9047495Z del arg0_1 2023-01-11T21:41:23.9047611Z del arg1_1 2023-01-11T21:41:23.9047719Z del arg2_1 2023-01-11T21:41:23.9047825Z del arg3_1 2023-01-11T21:41:23.9047942Z return (buf0, ) 2023-01-11T21:41:23.9047950Z 2023-01-11T21:41:23.9047957Z 2023-01-11T21:41:23.9048080Z if __name__ == "__main__": 2023-01-11T21:41:23.9048261Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9048463Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9048813Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9049144Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9049495Z arg2_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9049855Z arg3_1 = rand_strided((2, 3, 30), (90, 30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9050073Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1])) 2023-01-11T21:41:23.9050085Z 2023-01-11T21:41:23.9050096Z 2023-01-11T21:41:23.9050251Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9050366Z import torch 2023-01-11T21:41:23.9050469Z import random 2023-01-11T21:41:23.9050660Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9050860Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9050868Z 2023-01-11T21:41:23.9050992Z aten = torch.ops.aten 2023-01-11T21:41:23.9051214Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9051366Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9051374Z 2023-01-11T21:41:23.9051380Z 2023-01-11T21:41:23.9051525Z async_compile.wait(globals()) 2023-01-11T21:41:23.9051629Z del async_compile 
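[editor's sketch] From this point the logged wrappers are identical except that the fused epilogue is 'sub' rather than 'add'. The actual test source is not part of this log; the following is a hypothetical illustration of the kind of eager-mode module whose linear-plus-elementwise pattern inductor's CPU backend can lower to torch.ops.mkldnn._linear_pointwise.binary. Whether the fusion actually fires depends on the inductor CPU backend configuration and bfloat16 support.

# Hypothetical illustration only (not the test source): linear followed by add/sub,
# the pattern seen fused in the generated wrappers above.
import torch

class LinearBinary(torch.nn.Module):
    def __init__(self, binary_op, bias=True):
        super().__init__()
        self.linear = torch.nn.Linear(10, 30, bias=bias)
        self.binary_op = binary_op  # 'add' or 'sub'

    def forward(self, x, y):
        out = self.linear(x)
        return out + y if self.binary_op == 'add' else out - y

if __name__ == "__main__":
    mod = LinearBinary('sub').to(torch.bfloat16).eval()
    x = torch.randn(2, 10, dtype=torch.bfloat16)
    y = torch.randn(2, 30, dtype=torch.bfloat16)
    with torch.no_grad():
        compiled = torch.compile(mod)  # inductor may emit the fused mkldnn op on CPU
        print(compiled(x, y).shape)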
2023-01-11T21:41:23.9051651Z 2023-01-11T21:41:23.9051749Z def call(args): 2023-01-11T21:41:23.9051880Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9051998Z args.clear() 2023-01-11T21:41:23.9052399Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg1_1, arg2_1, arg0_1, None, 'sub') 2023-01-11T21:41:23.9052561Z del arg0_1 2023-01-11T21:41:23.9052667Z del arg1_1 2023-01-11T21:41:23.9052763Z del arg2_1 2023-01-11T21:41:23.9052877Z return (buf0, ) 2023-01-11T21:41:23.9052885Z 2023-01-11T21:41:23.9052892Z 2023-01-11T21:41:23.9053016Z if __name__ == "__main__": 2023-01-11T21:41:23.9053205Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9053411Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9053764Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9054121Z arg1_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9054475Z arg2_1 = rand_strided((2, 3, 30), (90, 30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9054667Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9054688Z 2023-01-11T21:41:23.9054695Z 2023-01-11T21:41:23.9054887Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9055005Z import torch 2023-01-11T21:41:23.9055116Z import random 2023-01-11T21:41:23.9055307Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9055514Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9055522Z 2023-01-11T21:41:23.9055652Z aten = torch.ops.aten 2023-01-11T21:41:23.9055881Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9056019Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9056041Z 2023-01-11T21:41:23.9056048Z 2023-01-11T21:41:23.9056180Z async_compile.wait(globals()) 2023-01-11T21:41:23.9056298Z del async_compile 2023-01-11T21:41:23.9056305Z 2023-01-11T21:41:23.9056424Z def call(args): 2023-01-11T21:41:23.9056567Z arg0_1, arg1_1, arg2_1, arg3_1 = args 2023-01-11T21:41:23.9056680Z args.clear() 2023-01-11T21:41:23.9057071Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg2_1, arg3_1, arg0_1, arg1_1, 'sub') 2023-01-11T21:41:23.9057179Z del arg0_1 2023-01-11T21:41:23.9057271Z del arg1_1 2023-01-11T21:41:23.9057377Z del arg2_1 2023-01-11T21:41:23.9057482Z del arg3_1 2023-01-11T21:41:23.9057598Z return (buf0, ) 2023-01-11T21:41:23.9057606Z 2023-01-11T21:41:23.9057613Z 2023-01-11T21:41:23.9057739Z if __name__ == "__main__": 2023-01-11T21:41:23.9057929Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9058139Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9058481Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9058822Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9059158Z arg2_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9059486Z arg3_1 = rand_strided((2, 30), (30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9059699Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1])) 2023-01-11T21:41:23.9059710Z 2023-01-11T21:41:23.9059721Z 2023-01-11T21:41:23.9059879Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9059993Z import torch 2023-01-11T21:41:23.9060109Z import random 2023-01-11T21:41:23.9060284Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9060483Z from torch._inductor.codecache import 
AsyncCompile 2023-01-11T21:41:23.9060491Z 2023-01-11T21:41:23.9060615Z aten = torch.ops.aten 2023-01-11T21:41:23.9060839Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9060983Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9060991Z 2023-01-11T21:41:23.9060999Z 2023-01-11T21:41:23.9061145Z async_compile.wait(globals()) 2023-01-11T21:41:23.9061263Z del async_compile 2023-01-11T21:41:23.9061270Z 2023-01-11T21:41:23.9061377Z def call(args): 2023-01-11T21:41:23.9061493Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9061610Z args.clear() 2023-01-11T21:41:23.9061997Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg1_1, arg2_1, arg0_1, None, 'sub') 2023-01-11T21:41:23.9062160Z del arg0_1 2023-01-11T21:41:23.9062273Z del arg1_1 2023-01-11T21:41:23.9062508Z del arg2_1 2023-01-11T21:41:23.9062620Z return (buf0, ) 2023-01-11T21:41:23.9062629Z 2023-01-11T21:41:23.9062636Z 2023-01-11T21:41:23.9062759Z if __name__ == "__main__": 2023-01-11T21:41:23.9062933Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9063135Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9063488Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9063826Z arg1_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9064168Z arg2_1 = rand_strided((2, 30), (30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9064372Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9064379Z 2023-01-11T21:41:23.9064386Z 2023-01-11T21:41:23.9064542Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9064738Z import torch 2023-01-11T21:41:23.9064834Z import random 2023-01-11T21:41:23.9065024Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9065227Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9065235Z 2023-01-11T21:41:23.9065361Z aten = torch.ops.aten 2023-01-11T21:41:23.9065583Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9065738Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9065745Z 2023-01-11T21:41:23.9065752Z 2023-01-11T21:41:23.9065897Z async_compile.wait(globals()) 2023-01-11T21:41:23.9066012Z del async_compile 2023-01-11T21:41:23.9066020Z 2023-01-11T21:41:23.9066117Z def call(args): 2023-01-11T21:41:23.9066261Z arg0_1, arg1_1, arg2_1, arg3_1 = args 2023-01-11T21:41:23.9066376Z args.clear() 2023-01-11T21:41:23.9066778Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg2_1, arg3_1, arg0_1, arg1_1, 'sub') 2023-01-11T21:41:23.9066891Z del arg0_1 2023-01-11T21:41:23.9067006Z del arg1_1 2023-01-11T21:41:23.9067110Z del arg2_1 2023-01-11T21:41:23.9067204Z del arg3_1 2023-01-11T21:41:23.9067321Z return (buf0, ) 2023-01-11T21:41:23.9067329Z 2023-01-11T21:41:23.9067337Z 2023-01-11T21:41:23.9067465Z if __name__ == "__main__": 2023-01-11T21:41:23.9067655Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9067860Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9068219Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9068556Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9068914Z arg2_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9069251Z arg3_1 = rand_strided((2, 3, 30), (90, 30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9069468Z 
print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1])) 2023-01-11T21:41:23.9069480Z 2023-01-11T21:41:23.9069957Z [2023-01-11 21:35:37,969] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 286 2023-01-11T21:41:23.9070725Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9070931Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9071377Z [2023-01-11 21:35:38,053] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 287 2023-01-11T21:41:23.9071832Z [2023-01-11 21:35:38,055] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 287 2023-01-11T21:41:23.9072591Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9072870Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9073325Z [2023-01-11 21:35:38,084] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 288 2023-01-11T21:41:23.9073849Z [2023-01-11 21:35:38,085] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 288 2023-01-11T21:41:23.9074643Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9074863Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9075324Z [2023-01-11 21:35:38,200] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 289 2023-01-11T21:41:23.9075791Z [2023-01-11 21:35:38,204] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 289 2023-01-11T21:41:23.9076541Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9076749Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9077196Z [2023-01-11 21:35:38,238] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 290 2023-01-11T21:41:23.9077667Z [2023-01-11 21:35:38,240] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 290 2023-01-11T21:41:23.9078433Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. 
This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9078639Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9079080Z [2023-01-11 21:35:38,325] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 291 2023-01-11T21:41:23.9079544Z [2023-01-11 21:35:38,327] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 291 2023-01-11T21:41:23.9080286Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9080502Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9080957Z [2023-01-11 21:35:38,355] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 292 2023-01-11T21:41:23.9081416Z [2023-01-11 21:35:38,356] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 292 2023-01-11T21:41:23.9081426Z 2023-01-11T21:41:23.9081584Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9081702Z import torch 2023-01-11T21:41:23.9081815Z import random 2023-01-11T21:41:23.9082007Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9082208Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9082261Z 2023-01-11T21:41:23.9082377Z aten = torch.ops.aten 2023-01-11T21:41:23.9082600Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9082751Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9082759Z 2023-01-11T21:41:23.9082766Z 2023-01-11T21:41:23.9082917Z async_compile.wait(globals()) 2023-01-11T21:41:23.9083039Z del async_compile 2023-01-11T21:41:23.9083047Z 2023-01-11T21:41:23.9083162Z def call(args): 2023-01-11T21:41:23.9083291Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9083405Z args.clear() 2023-01-11T21:41:23.9083786Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg1_1, arg2_1, arg0_1, None, 'sub') 2023-01-11T21:41:23.9083895Z del arg0_1 2023-01-11T21:41:23.9084008Z del arg1_1 2023-01-11T21:41:23.9084116Z del arg2_1 2023-01-11T21:41:23.9084231Z return (buf0, ) 2023-01-11T21:41:23.9084239Z 2023-01-11T21:41:23.9084246Z 2023-01-11T21:41:23.9084370Z if __name__ == "__main__": 2023-01-11T21:41:23.9084620Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9084817Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9085164Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9085521Z arg1_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9085867Z arg2_1 = rand_strided((2, 3, 30), (90, 30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9086069Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9086077Z 2023-01-11T21:41:23.9086083Z 2023-01-11T21:41:23.9086236Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9086351Z import torch 2023-01-11T21:41:23.9086465Z import random 2023-01-11T21:41:23.9086642Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9086841Z from torch._inductor.codecache import 
AsyncCompile 2023-01-11T21:41:23.9086847Z 2023-01-11T21:41:23.9086947Z aten = torch.ops.aten 2023-01-11T21:41:23.9087130Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9087251Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9087258Z 2023-01-11T21:41:23.9087264Z 2023-01-11T21:41:23.9087381Z async_compile.wait(globals()) 2023-01-11T21:41:23.9087473Z del async_compile 2023-01-11T21:41:23.9087479Z 2023-01-11T21:41:23.9087571Z def call(args): 2023-01-11T21:41:23.9087671Z arg0_1, arg1_1, arg2_1, arg3_1 = args 2023-01-11T21:41:23.9087765Z args.clear() 2023-01-11T21:41:23.9088085Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg2_1, arg3_1, arg0_1, arg1_1, 'sub') 2023-01-11T21:41:23.9088173Z del arg0_1 2023-01-11T21:41:23.9088261Z del arg1_1 2023-01-11T21:41:23.9088348Z del arg2_1 2023-01-11T21:41:23.9088430Z del arg3_1 2023-01-11T21:41:23.9088507Z return (buf0, ) 2023-01-11T21:41:23.9088513Z 2023-01-11T21:41:23.9088518Z 2023-01-11T21:41:23.9088616Z if __name__ == "__main__": 2023-01-11T21:41:23.9088767Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9088939Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9089220Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9089481Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9089755Z arg2_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9090019Z arg3_1 = rand_strided((2, 30), (30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9090175Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1])) 2023-01-11T21:41:23.9090183Z 2023-01-11T21:41:23.9090204Z 2023-01-11T21:41:23.9090313Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9090403Z import torch 2023-01-11T21:41:23.9090494Z import random 2023-01-11T21:41:23.9090646Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9090807Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9090877Z 2023-01-11T21:41:23.9090983Z aten = torch.ops.aten 2023-01-11T21:41:23.9091163Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9091267Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9091273Z 2023-01-11T21:41:23.9091292Z 2023-01-11T21:41:23.9091392Z async_compile.wait(globals()) 2023-01-11T21:41:23.9091483Z del async_compile 2023-01-11T21:41:23.9091489Z 2023-01-11T21:41:23.9091581Z def call(args): 2023-01-11T21:41:23.9091684Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9091775Z args.clear() 2023-01-11T21:41:23.9092089Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg1_1, arg2_1, arg0_1, None, 'sub') 2023-01-11T21:41:23.9092176Z del arg0_1 2023-01-11T21:41:23.9092252Z del arg1_1 2023-01-11T21:41:23.9092336Z del arg2_1 2023-01-11T21:41:23.9092428Z return (buf0, ) 2023-01-11T21:41:23.9092434Z 2023-01-11T21:41:23.9092439Z 2023-01-11T21:41:23.9092536Z if __name__ == "__main__": 2023-01-11T21:41:23.9092726Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9092892Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9093163Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9093415Z arg1_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9093688Z arg2_1 = rand_strided((2, 30), (30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9093850Z 
print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9093856Z 2023-01-11T21:41:23.9093862Z 2023-01-11T21:41:23.9093988Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9094077Z import torch 2023-01-11T21:41:23.9094170Z import random 2023-01-11T21:41:23.9094321Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9094477Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9094483Z 2023-01-11T21:41:23.9094569Z aten = torch.ops.aten 2023-01-11T21:41:23.9094751Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9094871Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9094878Z 2023-01-11T21:41:23.9094883Z 2023-01-11T21:41:23.9094997Z async_compile.wait(globals()) 2023-01-11T21:41:23.9095092Z del async_compile 2023-01-11T21:41:23.9095098Z 2023-01-11T21:41:23.9095187Z def call(args): 2023-01-11T21:41:23.9095302Z arg0_1, arg1_1, arg2_1, arg3_1 = args 2023-01-11T21:41:23.9095394Z args.clear() 2023-01-11T21:41:23.9095691Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg2_1, arg3_1, arg0_1, arg1_1, 'sub') 2023-01-11T21:41:23.9095777Z del arg0_1 2023-01-11T21:41:23.9095865Z del arg1_1 2023-01-11T21:41:23.9095952Z del arg2_1 2023-01-11T21:41:23.9096038Z del arg3_1 2023-01-11T21:41:23.9096126Z return (buf0, ) 2023-01-11T21:41:23.9096132Z 2023-01-11T21:41:23.9096138Z 2023-01-11T21:41:23.9096235Z if __name__ == "__main__": 2023-01-11T21:41:23.9096370Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9096537Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9096810Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9097079Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9097383Z arg2_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9097701Z arg3_1 = rand_strided((2, 3, 30), (90, 30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9097907Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1])) 2023-01-11T21:41:23.9097916Z 2023-01-11T21:41:23.9097923Z 2023-01-11T21:41:23.9098061Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9098146Z import torch 2023-01-11T21:41:23.9098250Z import random 2023-01-11T21:41:23.9098433Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9098615Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9098691Z 2023-01-11T21:41:23.9098813Z aten = torch.ops.aten 2023-01-11T21:41:23.9099023Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9099157Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9099165Z 2023-01-11T21:41:23.9099171Z 2023-01-11T21:41:23.9099300Z async_compile.wait(globals()) 2023-01-11T21:41:23.9099391Z del async_compile 2023-01-11T21:41:23.9099413Z 2023-01-11T21:41:23.9099499Z def call(args): 2023-01-11T21:41:23.9099615Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9099717Z args.clear() 2023-01-11T21:41:23.9100073Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg1_1, arg2_1, arg0_1, None, 'sub') 2023-01-11T21:41:23.9100171Z del arg0_1 2023-01-11T21:41:23.9100267Z del arg1_1 2023-01-11T21:41:23.9100350Z del arg2_1 2023-01-11T21:41:23.9100452Z return (buf0, ) 2023-01-11T21:41:23.9100459Z 2023-01-11T21:41:23.9100465Z 2023-01-11T21:41:23.9100576Z if __name__ == "__main__": 2023-01-11T21:41:23.9100806Z from torch._dynamo.testing import rand_strided 
2023-01-11T21:41:23.9100999Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9101308Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9101630Z arg1_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9101935Z arg2_1 = rand_strided((2, 3, 30), (90, 30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9102107Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9102130Z 2023-01-11T21:41:23.9102135Z 2023-01-11T21:41:23.9102258Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9102494Z import torch 2023-01-11T21:41:23.9102604Z import random 2023-01-11T21:41:23.9102789Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9102975Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9102983Z 2023-01-11T21:41:23.9103100Z aten = torch.ops.aten 2023-01-11T21:41:23.9103315Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9103437Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9103459Z 2023-01-11T21:41:23.9103465Z 2023-01-11T21:41:23.9103589Z async_compile.wait(globals()) 2023-01-11T21:41:23.9103695Z del async_compile 2023-01-11T21:41:23.9103703Z 2023-01-11T21:41:23.9103801Z def call(args): 2023-01-11T21:41:23.9103931Z arg0_1, arg1_1, arg2_1, arg3_1 = args 2023-01-11T21:41:23.9104040Z args.clear() 2023-01-11T21:41:23.9104400Z buf0 = torch.ops.mkldnn._linear_pointwise.binary(arg2_1, arg3_1, arg0_1, arg1_1, 'sub') 2023-01-11T21:41:23.9104506Z del arg0_1 2023-01-11T21:41:23.9104594Z del arg1_1 2023-01-11T21:41:23.9104699Z del arg2_1 2023-01-11T21:41:23.9104800Z del arg3_1 2023-01-11T21:41:23.9104909Z return (buf0, ) 2023-01-11T21:41:23.9104917Z 2023-01-11T21:41:23.9104924Z 2023-01-11T21:41:23.9105031Z if __name__ == "__main__": 2023-01-11T21:41:23.9105209Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9105416Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9105736Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9106048Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9106363Z arg2_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9106665Z arg3_1 = rand_strided((2, 30), (30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9106860Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1])) 2023-01-11T21:41:23.9106867Z 2023-01-11T21:41:23.9106874Z 2023-01-11T21:41:23.9107010Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9107115Z import torch 2023-01-11T21:41:23.9107217Z import random 2023-01-11T21:41:23.9107373Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9107561Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9107681Z 2023-01-11T21:41:23.9107810Z aten = torch.ops.aten 2023-01-11T21:41:23.9108018Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9108162Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9108169Z 2023-01-11T21:41:23.9108175Z 2023-01-11T21:41:23.9108320Z async_compile.wait(globals()) 2023-01-11T21:41:23.9108430Z del async_compile 2023-01-11T21:41:23.9108438Z 2023-01-11T21:41:23.9108549Z def call(args): 2023-01-11T21:41:23.9108661Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9108780Z args.clear() 2023-01-11T21:41:23.9109163Z buf0 = 
torch.ops.mkldnn._linear_pointwise.binary(arg1_1, arg2_1, arg0_1, None, 'sub') 2023-01-11T21:41:23.9109269Z del arg0_1 2023-01-11T21:41:23.9109370Z del arg1_1 2023-01-11T21:41:23.9109475Z del arg2_1 2023-01-11T21:41:23.9109588Z return (buf0, ) 2023-01-11T21:41:23.9109595Z 2023-01-11T21:41:23.9109601Z 2023-01-11T21:41:23.9109719Z if __name__ == "__main__": 2023-01-11T21:41:23.9109959Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9110165Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9110504Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9110834Z arg1_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9111160Z arg2_1 = rand_strided((2, 30), (30, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9111357Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9111366Z 2023-01-11T21:41:23.9111475Z ok (1.750s) 2023-01-11T21:41:23.9112293Z test_linear_packed_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9112496Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9112926Z [2023-01-11 21:35:38,391] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 293 2023-01-11T21:41:23.9113373Z [2023-01-11 21:35:38,410] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 293 2023-01-11T21:41:23.9114200Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9114401Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9114844Z [2023-01-11 21:35:38,437] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 294 2023-01-11T21:41:23.9115305Z [2023-01-11 21:35:38,449] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 294 2023-01-11T21:41:23.9116075Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9116282Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9116736Z [2023-01-11 21:35:38,482] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 295 2023-01-11T21:41:23.9117195Z [2023-01-11 21:35:38,500] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 295 2023-01-11T21:41:23.9117964Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. 
This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9118273Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9118723Z [2023-01-11 21:35:38,525] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 296 2023-01-11T21:41:23.9119184Z [2023-01-11 21:35:38,537] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 296 2023-01-11T21:41:23.9119194Z 2023-01-11T21:41:23.9119349Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9119461Z import torch 2023-01-11T21:41:23.9119574Z import random 2023-01-11T21:41:23.9119762Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9119967Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9119978Z 2023-01-11T21:41:23.9120177Z aten = torch.ops.aten 2023-01-11T21:41:23.9120394Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9120543Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9120551Z 2023-01-11T21:41:23.9120559Z 2023-01-11T21:41:23.9120707Z async_compile.wait(globals()) 2023-01-11T21:41:23.9120824Z del async_compile 2023-01-11T21:41:23.9120832Z 2023-01-11T21:41:23.9120948Z def call(args): 2023-01-11T21:41:23.9121089Z arg0_1, arg1_1, arg2_1, arg3_1 = args 2023-01-11T21:41:23.9121197Z args.clear() 2023-01-11T21:41:23.9121383Z buf0 = torch.ops.mkl._mkl_linear(arg3_1, arg2_1, arg0_1, arg1_1, 6) 2023-01-11T21:41:23.9121490Z del arg0_1 2023-01-11T21:41:23.9121597Z del arg1_1 2023-01-11T21:41:23.9121706Z del arg2_1 2023-01-11T21:41:23.9121808Z del arg3_1 2023-01-11T21:41:23.9121923Z return (buf0, ) 2023-01-11T21:41:23.9121930Z 2023-01-11T21:41:23.9121937Z 2023-01-11T21:41:23.9122055Z if __name__ == "__main__": 2023-01-11T21:41:23.9122253Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9122445Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9122813Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9123142Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9123479Z arg2_1 = rand_strided((1982689, 1), (1, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9123826Z arg3_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9124036Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1])) 2023-01-11T21:41:23.9124044Z 2023-01-11T21:41:23.9124051Z 2023-01-11T21:41:23.9124213Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9124328Z import torch 2023-01-11T21:41:23.9124429Z import random 2023-01-11T21:41:23.9124622Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9124833Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9124846Z 2023-01-11T21:41:23.9124976Z aten = torch.ops.aten 2023-01-11T21:41:23.9134111Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9134313Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9134321Z 2023-01-11T21:41:23.9134327Z 2023-01-11T21:41:23.9134459Z async_compile.wait(globals()) 2023-01-11T21:41:23.9134558Z del async_compile 2023-01-11T21:41:23.9134565Z 2023-01-11T21:41:23.9134645Z def call(args): 2023-01-11T21:41:23.9134759Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9134859Z args.clear() 2023-01-11T21:41:23.9135035Z buf0 = 
torch.ops.mkl._mkl_linear(arg2_1, arg1_1, arg0_1, None, 6) 2023-01-11T21:41:23.9135127Z del arg0_1 2023-01-11T21:41:23.9135218Z del arg1_1 2023-01-11T21:41:23.9135312Z del arg2_1 2023-01-11T21:41:23.9135405Z return (buf0, ) 2023-01-11T21:41:23.9135414Z 2023-01-11T21:41:23.9135420Z 2023-01-11T21:41:23.9135531Z if __name__ == "__main__": 2023-01-11T21:41:23.9135825Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9136025Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9136387Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9136692Z arg1_1 = rand_strided((1982689, 1), (1, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9137002Z arg2_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9137184Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9137192Z 2023-01-11T21:41:23.9137198Z 2023-01-11T21:41:23.9137316Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9137415Z import torch 2023-01-11T21:41:23.9137514Z import random 2023-01-11T21:41:23.9137680Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9137854Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9137862Z 2023-01-11T21:41:23.9137971Z aten = torch.ops.aten 2023-01-11T21:41:23.9138246Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9138401Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9138409Z 2023-01-11T21:41:23.9138415Z 2023-01-11T21:41:23.9138528Z async_compile.wait(globals()) 2023-01-11T21:41:23.9138634Z del async_compile 2023-01-11T21:41:23.9138642Z 2023-01-11T21:41:23.9138747Z def call(args): 2023-01-11T21:41:23.9138872Z arg0_1, arg1_1, arg2_1, arg3_1 = args 2023-01-11T21:41:23.9138969Z args.clear() 2023-01-11T21:41:23.9139149Z buf0 = torch.ops.mkl._mkl_linear(arg3_1, arg2_1, arg0_1, arg1_1, 2) 2023-01-11T21:41:23.9139240Z del arg0_1 2023-01-11T21:41:23.9139319Z del arg1_1 2023-01-11T21:41:23.9139414Z del arg2_1 2023-01-11T21:41:23.9139508Z del arg3_1 2023-01-11T21:41:23.9139608Z return (buf0, ) 2023-01-11T21:41:23.9139615Z 2023-01-11T21:41:23.9139621Z 2023-01-11T21:41:23.9139729Z if __name__ == "__main__": 2023-01-11T21:41:23.9139899Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9140086Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9140398Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9140681Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9140988Z arg2_1 = rand_strided((1982689, 1), (1, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9141279Z arg3_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9141467Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1])) 2023-01-11T21:41:23.9141474Z 2023-01-11T21:41:23.9141480Z 2023-01-11T21:41:23.9141605Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9141706Z import torch 2023-01-11T21:41:23.9141810Z import random 2023-01-11T21:41:23.9141982Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9142144Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9142155Z 2023-01-11T21:41:23.9142273Z aten = torch.ops.aten 2023-01-11T21:41:23.9142626Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9142759Z async_compile = AsyncCompile() 
2023-01-11T21:41:23.9142767Z 2023-01-11T21:41:23.9142773Z 2023-01-11T21:41:23.9142904Z async_compile.wait(globals()) 2023-01-11T21:41:23.9152995Z del async_compile 2023-01-11T21:41:23.9153006Z 2023-01-11T21:41:23.9153109Z def call(args): 2023-01-11T21:41:23.9153227Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9153311Z args.clear() 2023-01-11T21:41:23.9153496Z buf0 = torch.ops.mkl._mkl_linear(arg2_1, arg1_1, arg0_1, None, 2) 2023-01-11T21:41:23.9153591Z del arg0_1 2023-01-11T21:41:23.9153684Z del arg1_1 2023-01-11T21:41:23.9153848Z del arg2_1 2023-01-11T21:41:23.9153951Z return (buf0, ) 2023-01-11T21:41:23.9153959Z 2023-01-11T21:41:23.9153966Z 2023-01-11T21:41:23.9154072Z if __name__ == "__main__": 2023-01-11T21:41:23.9154230Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9154534Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9154874Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9155181Z arg1_1 = rand_strided((1982689, 1), (1, 0), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9155475Z arg2_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9155649Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9155656Z 2023-01-11T21:41:23.9155751Z ok (0.180s) 2023-01-11T21:41:23.9156447Z test_linear_unary_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9156741Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9157119Z [2023-01-11 21:35:38,620] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 297 2023-01-11T21:41:23.9157506Z [2023-01-11 21:35:38,623] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 297 2023-01-11T21:41:23.9158150Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9158325Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9158712Z [2023-01-11 21:35:38,651] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 298 2023-01-11T21:41:23.9159113Z [2023-01-11 21:35:38,653] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 298 2023-01-11T21:41:23.9159744Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9159925Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9160318Z [2023-01-11 21:35:38,739] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 299 2023-01-11T21:41:23.9160720Z [2023-01-11 21:35:38,742] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 299 2023-01-11T21:41:23.9161354Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9161541Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9161921Z [2023-01-11 21:35:38,770] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 300 2023-01-11T21:41:23.9162301Z [2023-01-11 21:35:38,772] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 300 2023-01-11T21:41:23.9162937Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9163219Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9163602Z [2023-01-11 21:35:38,858] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 301 2023-01-11T21:41:23.9163994Z [2023-01-11 21:35:38,860] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 301 2023-01-11T21:41:23.9164618Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9164794Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9165181Z [2023-01-11 21:35:38,888] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 302 2023-01-11T21:41:23.9165630Z [2023-01-11 21:35:38,891] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 302 2023-01-11T21:41:23.9166287Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9166472Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9166860Z [2023-01-11 21:35:38,973] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 303 2023-01-11T21:41:23.9167243Z [2023-01-11 21:35:38,976] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 303 2023-01-11T21:41:23.9167879Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9168064Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9168460Z [2023-01-11 21:35:39,003] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 304 2023-01-11T21:41:23.9168846Z [2023-01-11 21:35:39,005] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 304 2023-01-11T21:41:23.9169508Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9169694Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9170080Z [2023-01-11 21:35:39,090] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 305 2023-01-11T21:41:23.9170089Z 2023-01-11T21:41:23.9170219Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9170318Z import torch 2023-01-11T21:41:23.9170403Z import random 2023-01-11T21:41:23.9170564Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9170732Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9170740Z 2023-01-11T21:41:23.9170849Z aten = torch.ops.aten 2023-01-11T21:41:23.9171049Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9171182Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9171190Z 2023-01-11T21:41:23.9171197Z 2023-01-11T21:41:23.9171327Z async_compile.wait(globals()) 2023-01-11T21:41:23.9171435Z del async_compile 2023-01-11T21:41:23.9171508Z 2023-01-11T21:41:23.9171597Z def call(args): 2023-01-11T21:41:23.9171711Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9171812Z args.clear() 2023-01-11T21:41:23.9172142Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'relu', [], '') 2023-01-11T21:41:23.9172237Z del arg0_1 2023-01-11T21:41:23.9172330Z del arg1_1 2023-01-11T21:41:23.9172420Z del arg2_1 2023-01-11T21:41:23.9172505Z return (buf0, ) 2023-01-11T21:41:23.9172513Z 2023-01-11T21:41:23.9172535Z 2023-01-11T21:41:23.9172626Z if __name__ == "__main__": 2023-01-11T21:41:23.9172785Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9172963Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9173260Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9173545Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 
2023-01-11T21:41:23.9173898Z arg2_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9174079Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9174086Z 2023-01-11T21:41:23.9174092Z 2023-01-11T21:41:23.9174228Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9174316Z import torch 2023-01-11T21:41:23.9174419Z import random 2023-01-11T21:41:23.9174581Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9174759Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9174766Z 2023-01-11T21:41:23.9174879Z aten = torch.ops.aten 2023-01-11T21:41:23.9175077Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9175206Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9175213Z 2023-01-11T21:41:23.9175218Z 2023-01-11T21:41:23.9175327Z async_compile.wait(globals()) 2023-01-11T21:41:23.9175431Z del async_compile 2023-01-11T21:41:23.9175438Z 2023-01-11T21:41:23.9175535Z def call(args): 2023-01-11T21:41:23.9175642Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9175753Z args.clear() 2023-01-11T21:41:23.9176091Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'relu', [], '') 2023-01-11T21:41:23.9176193Z del arg0_1 2023-01-11T21:41:23.9176274Z del arg1_1 2023-01-11T21:41:23.9176380Z return (buf0, ) 2023-01-11T21:41:23.9176387Z 2023-01-11T21:41:23.9176393Z 2023-01-11T21:41:23.9176507Z if __name__ == "__main__": 2023-01-11T21:41:23.9176674Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9176847Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9177155Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9177468Z arg1_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9177638Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9177646Z 2023-01-11T21:41:23.9177652Z 2023-01-11T21:41:23.9177789Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9177880Z import torch 2023-01-11T21:41:23.9177982Z import random 2023-01-11T21:41:23.9178157Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9178336Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9178343Z 2023-01-11T21:41:23.9178459Z aten = torch.ops.aten 2023-01-11T21:41:23.9178657Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9178787Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9178794Z 2023-01-11T21:41:23.9178800Z 2023-01-11T21:41:23.9178929Z async_compile.wait(globals()) 2023-01-11T21:41:23.9179018Z del async_compile 2023-01-11T21:41:23.9179025Z 2023-01-11T21:41:23.9179129Z def call(args): 2023-01-11T21:41:23.9179250Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9179350Z args.clear() 2023-01-11T21:41:23.9179677Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'relu', [], '') 2023-01-11T21:41:23.9179777Z del arg0_1 2023-01-11T21:41:23.9179874Z del arg1_1 2023-01-11T21:41:23.9180025Z del arg2_1 2023-01-11T21:41:23.9180123Z return (buf0, ) 2023-01-11T21:41:23.9180130Z 2023-01-11T21:41:23.9180135Z 2023-01-11T21:41:23.9180245Z if __name__ == "__main__": 2023-01-11T21:41:23.9180413Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9180604Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9180929Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 
2023-01-11T21:41:23.9181235Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9181552Z arg2_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9181730Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9181738Z 2023-01-11T21:41:23.9181745Z 2023-01-11T21:41:23.9181886Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9181986Z import torch 2023-01-11T21:41:23.9182094Z import random 2023-01-11T21:41:23.9182606Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9182806Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9182813Z 2023-01-11T21:41:23.9182936Z aten = torch.ops.aten 2023-01-11T21:41:23.9183133Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9183271Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9183278Z 2023-01-11T21:41:23.9183285Z 2023-01-11T21:41:23.9183421Z async_compile.wait(globals()) 2023-01-11T21:41:23.9183533Z del async_compile 2023-01-11T21:41:23.9183540Z 2023-01-11T21:41:23.9183646Z def call(args): 2023-01-11T21:41:23.9183767Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9183878Z args.clear() 2023-01-11T21:41:23.9184242Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'relu', [], '') 2023-01-11T21:41:23.9184327Z del arg0_1 2023-01-11T21:41:23.9184427Z del arg1_1 2023-01-11T21:41:23.9184536Z return (buf0, ) 2023-01-11T21:41:23.9184544Z 2023-01-11T21:41:23.9184555Z 2023-01-11T21:41:23.9184670Z if __name__ == "__main__": 2023-01-11T21:41:23.9184838Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9185027Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9185352Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9185674Z arg1_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9185839Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9185846Z 2023-01-11T21:41:23.9185866Z 2023-01-11T21:41:23.9185992Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9186099Z import torch 2023-01-11T21:41:23.9186208Z import random 2023-01-11T21:41:23.9186385Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9186574Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9186582Z 2023-01-11T21:41:23.9186695Z aten = torch.ops.aten 2023-01-11T21:41:23.9186910Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9187041Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9187050Z 2023-01-11T21:41:23.9187057Z 2023-01-11T21:41:23.9187192Z async_compile.wait(globals()) 2023-01-11T21:41:23.9187301Z del async_compile 2023-01-11T21:41:23.9187308Z 2023-01-11T21:41:23.9187418Z def call(args): 2023-01-11T21:41:23.9187539Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9187638Z args.clear() 2023-01-11T21:41:23.9187999Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'sigmoid', [], '') 2023-01-11T21:41:23.9188083Z del arg0_1 2023-01-11T21:41:23.9188185Z del arg1_1 2023-01-11T21:41:23.9188285Z del arg2_1 2023-01-11T21:41:23.9188399Z return (buf0, ) 2023-01-11T21:41:23.9188406Z 2023-01-11T21:41:23.9188411Z 2023-01-11T21:41:23.9188523Z if __name__ == "__main__": 2023-01-11T21:41:23.9188705Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9188902Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9189371Z arg0_1 = 
rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9189678Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9190023Z arg2_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9190222Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9190231Z 2023-01-11T21:41:23.9190238Z 2023-01-11T21:41:23.9190407Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9190517Z import torch 2023-01-11T21:41:23.9190631Z import random 2023-01-11T21:41:23.9190813Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9191008Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9191017Z 2023-01-11T21:41:23.9191125Z aten = torch.ops.aten 2023-01-11T21:41:23.9191338Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9191492Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9191600Z 2023-01-11T21:41:23.9191609Z 2023-01-11T21:41:23.9191756Z async_compile.wait(globals()) 2023-01-11T21:41:23.9191878Z del async_compile 2023-01-11T21:41:23.9191886Z 2023-01-11T21:41:23.9191994Z def call(args): 2023-01-11T21:41:23.9192105Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9192213Z args.clear() 2023-01-11T21:41:23.9192550Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'sigmoid', [], '') 2023-01-11T21:41:23.9192657Z del arg0_1 2023-01-11T21:41:23.9192759Z del arg1_1 2023-01-11T21:41:23.9192864Z return (buf0, ) 2023-01-11T21:41:23.9192872Z 2023-01-11T21:41:23.9192879Z 2023-01-11T21:41:23.9192998Z if __name__ == "__main__": 2023-01-11T21:41:23.9193178Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9193374Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9193690Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9194120Z arg1_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9194304Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9194312Z 2023-01-11T21:41:23.9194318Z 2023-01-11T21:41:23.9194469Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9194576Z import torch 2023-01-11T21:41:23.9194687Z import random 2023-01-11T21:41:23.9194868Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9195064Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9195071Z 2023-01-11T21:41:23.9195182Z aten = torch.ops.aten 2023-01-11T21:41:23.9195409Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9195565Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9195572Z 2023-01-11T21:41:23.9195579Z 2023-01-11T21:41:23.9195721Z async_compile.wait(globals()) 2023-01-11T21:41:23.9195839Z del async_compile 2023-01-11T21:41:23.9195847Z 2023-01-11T21:41:23.9195997Z def call(args): 2023-01-11T21:41:23.9196130Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9196242Z args.clear() 2023-01-11T21:41:23.9196603Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'sigmoid', [], '') 2023-01-11T21:41:23.9196710Z del arg0_1 2023-01-11T21:41:23.9196814Z del arg1_1 2023-01-11T21:41:23.9196913Z del arg2_1 2023-01-11T21:41:23.9197019Z return (buf0, ) 2023-01-11T21:41:23.9197026Z 2023-01-11T21:41:23.9197033Z 2023-01-11T21:41:23.9197152Z if __name__ == "__main__": 2023-01-11T21:41:23.9197325Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9197506Z from 
torch._inductor.utils import print_performance 2023-01-11T21:41:23.9197846Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9198180Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9198515Z arg2_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9198811Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9198820Z 2023-01-11T21:41:23.9198828Z 2023-01-11T21:41:23.9198984Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9199099Z import torch 2023-01-11T21:41:23.9199217Z import random 2023-01-11T21:41:23.9199391Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9199597Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9199606Z 2023-01-11T21:41:23.9199727Z aten = torch.ops.aten 2023-01-11T21:41:23.9199921Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9200055Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9200061Z 2023-01-11T21:41:23.9200068Z 2023-01-11T21:41:23.9200198Z async_compile.wait(globals()) 2023-01-11T21:41:23.9200305Z del async_compile 2023-01-11T21:41:23.9200312Z 2023-01-11T21:41:23.9200411Z def call(args): 2023-01-11T21:41:23.9200504Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9200602Z args.clear() 2023-01-11T21:41:23.9201043Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'sigmoid', [], '') 2023-01-11T21:41:23.9201141Z del arg0_1 2023-01-11T21:41:23.9201236Z del arg1_1 2023-01-11T21:41:23.9201338Z return (buf0, ) 2023-01-11T21:41:23.9201345Z 2023-01-11T21:41:23.9201351Z 2023-01-11T21:41:23.9201458Z if __name__ == "__main__": 2023-01-11T21:41:23.9201618Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9201785Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9202091Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9202398Z arg1_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9202566Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9202575Z 2023-01-11T21:41:23.9202994Z [2023-01-11 21:35:39,092] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 305 2023-01-11T21:41:23.9203662Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9203848Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9204248Z [2023-01-11 21:35:39,123] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 306 2023-01-11T21:41:23.9204660Z [2023-01-11 21:35:39,125] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 306 2023-01-11T21:41:23.9205321Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9205493Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9205877Z [2023-01-11 21:35:39,210] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 307 2023-01-11T21:41:23.9206280Z [2023-01-11 21:35:39,212] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 307 2023-01-11T21:41:23.9206928Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9207108Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9207581Z [2023-01-11 21:35:39,240] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 308 2023-01-11T21:41:23.9207988Z [2023-01-11 21:35:39,243] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 308 2023-01-11T21:41:23.9208622Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9208803Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9209191Z [2023-01-11 21:35:39,356] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 309 2023-01-11T21:41:23.9209595Z [2023-01-11 21:35:39,359] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 309 2023-01-11T21:41:23.9210313Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9210493Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9210870Z [2023-01-11 21:35:39,418] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 310 2023-01-11T21:41:23.9211267Z [2023-01-11 21:35:39,420] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 310 2023-01-11T21:41:23.9211923Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9212099Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9212480Z [2023-01-11 21:35:39,538] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 311 2023-01-11T21:41:23.9212879Z [2023-01-11 21:35:39,541] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 311 2023-01-11T21:41:23.9213513Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9213688Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9214080Z [2023-01-11 21:35:39,624] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 312 2023-01-11T21:41:23.9214476Z [2023-01-11 21:35:39,627] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 312 2023-01-11T21:41:23.9215117Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9215296Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9215662Z [2023-01-11 21:35:39,875] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 313 2023-01-11T21:41:23.9215672Z 2023-01-11T21:41:23.9215800Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9215896Z import torch 2023-01-11T21:41:23.9216069Z import random 2023-01-11T21:41:23.9216234Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9216407Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9216415Z 2023-01-11T21:41:23.9216526Z aten = torch.ops.aten 2023-01-11T21:41:23.9216705Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9216834Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9216840Z 2023-01-11T21:41:23.9216848Z 2023-01-11T21:41:23.9216977Z async_compile.wait(globals()) 2023-01-11T21:41:23.9217084Z del async_compile 2023-01-11T21:41:23.9217090Z 2023-01-11T21:41:23.9217187Z def call(args): 2023-01-11T21:41:23.9217307Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9217407Z args.clear() 2023-01-11T21:41:23.9217730Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'tanh', [], '') 2023-01-11T21:41:23.9217812Z del arg0_1 2023-01-11T21:41:23.9217903Z del arg1_1 2023-01-11T21:41:23.9217996Z del arg2_1 2023-01-11T21:41:23.9218150Z return (buf0, ) 2023-01-11T21:41:23.9218158Z 2023-01-11T21:41:23.9218164Z 2023-01-11T21:41:23.9218273Z if __name__ == "__main__": 2023-01-11T21:41:23.9218437Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9218616Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9218918Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9219194Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 
2023-01-11T21:41:23.9219502Z arg2_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9219679Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9219686Z 2023-01-11T21:41:23.9219692Z 2023-01-11T21:41:23.9219826Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9219926Z import torch 2023-01-11T21:41:23.9220028Z import random 2023-01-11T21:41:23.9220189Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9220352Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9220371Z 2023-01-11T21:41:23.9220467Z aten = torch.ops.aten 2023-01-11T21:41:23.9220662Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9220790Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9220798Z 2023-01-11T21:41:23.9220804Z 2023-01-11T21:41:23.9220933Z async_compile.wait(globals()) 2023-01-11T21:41:23.9221035Z del async_compile 2023-01-11T21:41:23.9221041Z 2023-01-11T21:41:23.9221140Z def call(args): 2023-01-11T21:41:23.9221248Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9221333Z args.clear() 2023-01-11T21:41:23.9221654Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'tanh', [], '') 2023-01-11T21:41:23.9221747Z del arg0_1 2023-01-11T21:41:23.9221841Z del arg1_1 2023-01-11T21:41:23.9221943Z return (buf0, ) 2023-01-11T21:41:23.9221949Z 2023-01-11T21:41:23.9221954Z 2023-01-11T21:41:23.9222064Z if __name__ == "__main__": 2023-01-11T21:41:23.9222228Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9222547Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9222841Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9223146Z arg1_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9223311Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9223319Z 2023-01-11T21:41:23.9223325Z 2023-01-11T21:41:23.9223458Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9223556Z import torch 2023-01-11T21:41:23.9223655Z import random 2023-01-11T21:41:23.9223815Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9223986Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9223993Z 2023-01-11T21:41:23.9224088Z aten = torch.ops.aten 2023-01-11T21:41:23.9224282Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9224519Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9224526Z 2023-01-11T21:41:23.9224532Z 2023-01-11T21:41:23.9224661Z async_compile.wait(globals()) 2023-01-11T21:41:23.9224766Z del async_compile 2023-01-11T21:41:23.9224773Z 2023-01-11T21:41:23.9224873Z def call(args): 2023-01-11T21:41:23.9224986Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9225082Z args.clear() 2023-01-11T21:41:23.9225391Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'tanh', [], '') 2023-01-11T21:41:23.9225484Z del arg0_1 2023-01-11T21:41:23.9225581Z del arg1_1 2023-01-11T21:41:23.9225672Z del arg2_1 2023-01-11T21:41:23.9225771Z return (buf0, ) 2023-01-11T21:41:23.9225778Z 2023-01-11T21:41:23.9225784Z 2023-01-11T21:41:23.9225891Z if __name__ == "__main__": 2023-01-11T21:41:23.9226053Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9226215Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9226585Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 
2023-01-11T21:41:23.9226878Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9227170Z arg2_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9227348Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9227356Z 2023-01-11T21:41:23.9227362Z 2023-01-11T21:41:23.9227494Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9227593Z import torch 2023-01-11T21:41:23.9227694Z import random 2023-01-11T21:41:23.9227842Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9228012Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9228019Z 2023-01-11T21:41:23.9228126Z aten = torch.ops.aten 2023-01-11T21:41:23.9228317Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9228445Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9228455Z 2023-01-11T21:41:23.9228465Z 2023-01-11T21:41:23.9228590Z async_compile.wait(globals()) 2023-01-11T21:41:23.9228690Z del async_compile 2023-01-11T21:41:23.9228697Z 2023-01-11T21:41:23.9228794Z def call(args): 2023-01-11T21:41:23.9228886Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9228981Z args.clear() 2023-01-11T21:41:23.9229299Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'tanh', [], '') 2023-01-11T21:41:23.9229393Z del arg0_1 2023-01-11T21:41:23.9229493Z del arg1_1 2023-01-11T21:41:23.9229597Z return (buf0, ) 2023-01-11T21:41:23.9229605Z 2023-01-11T21:41:23.9229610Z 2023-01-11T21:41:23.9229715Z if __name__ == "__main__": 2023-01-11T21:41:23.9229860Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9230037Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9230333Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9230630Z arg1_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9230796Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9230804Z 2023-01-11T21:41:23.9230810Z 2023-01-11T21:41:23.9230939Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9231041Z import torch 2023-01-11T21:41:23.9231139Z import random 2023-01-11T21:41:23.9231287Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9231458Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9231465Z 2023-01-11T21:41:23.9231573Z aten = torch.ops.aten 2023-01-11T21:41:23.9231764Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9231892Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9231899Z 2023-01-11T21:41:23.9231905Z 2023-01-11T21:41:23.9232028Z async_compile.wait(globals()) 2023-01-11T21:41:23.9232133Z del async_compile 2023-01-11T21:41:23.9232140Z 2023-01-11T21:41:23.9232239Z def call(args): 2023-01-11T21:41:23.9232338Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9232519Z args.clear() 2023-01-11T21:41:23.9232863Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'hardswish', [], '') 2023-01-11T21:41:23.9232955Z del arg0_1 2023-01-11T21:41:23.9233048Z del arg1_1 2023-01-11T21:41:23.9233141Z del arg2_1 2023-01-11T21:41:23.9233240Z return (buf0, ) 2023-01-11T21:41:23.9233246Z 2023-01-11T21:41:23.9233252Z 2023-01-11T21:41:23.9233359Z if __name__ == "__main__": 2023-01-11T21:41:23.9233508Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9233679Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9234077Z arg0_1 = 
rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9234367Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9234672Z arg2_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9234914Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9234927Z 2023-01-11T21:41:23.9234934Z 2023-01-11T21:41:23.9235071Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9235170Z import torch 2023-01-11T21:41:23.9235256Z import random 2023-01-11T21:41:23.9235418Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9235588Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9235595Z 2023-01-11T21:41:23.9235706Z aten = torch.ops.aten 2023-01-11T21:41:23.9235896Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9236025Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9236033Z 2023-01-11T21:41:23.9236039Z 2023-01-11T21:41:23.9236166Z async_compile.wait(globals()) 2023-01-11T21:41:23.9236251Z del async_compile 2023-01-11T21:41:23.9236273Z 2023-01-11T21:41:23.9236353Z def call(args): 2023-01-11T21:41:23.9236455Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9236554Z args.clear() 2023-01-11T21:41:23.9236889Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'hardswish', [], '') 2023-01-11T21:41:23.9236988Z del arg0_1 2023-01-11T21:41:23.9237085Z del arg1_1 2023-01-11T21:41:23.9237171Z return (buf0, ) 2023-01-11T21:41:23.9237193Z 2023-01-11T21:41:23.9237199Z 2023-01-11T21:41:23.9237291Z if __name__ == "__main__": 2023-01-11T21:41:23.9237455Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9237631Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9237927Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9238228Z arg1_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9238390Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9238398Z 2023-01-11T21:41:23.9238403Z 2023-01-11T21:41:23.9238533Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9238628Z import torch 2023-01-11T21:41:23.9238710Z import random 2023-01-11T21:41:23.9238879Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9239050Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9239056Z 2023-01-11T21:41:23.9239168Z aten = torch.ops.aten 2023-01-11T21:41:23.9239356Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9239484Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9239491Z 2023-01-11T21:41:23.9239496Z 2023-01-11T21:41:23.9239621Z async_compile.wait(globals()) 2023-01-11T21:41:23.9239722Z del async_compile 2023-01-11T21:41:23.9239729Z 2023-01-11T21:41:23.9239812Z def call(args): 2023-01-11T21:41:23.9239925Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9240022Z args.clear() 2023-01-11T21:41:23.9240358Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'hardswish', [], '') 2023-01-11T21:41:23.9240456Z del arg0_1 2023-01-11T21:41:23.9240551Z del arg1_1 2023-01-11T21:41:23.9240647Z del arg2_1 2023-01-11T21:41:23.9240806Z return (buf0, ) 2023-01-11T21:41:23.9240816Z 2023-01-11T21:41:23.9240822Z 2023-01-11T21:41:23.9240926Z if __name__ == "__main__": 2023-01-11T21:41:23.9241093Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9241268Z from 
torch._inductor.utils import print_performance 2023-01-11T21:41:23.9241564Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9241855Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9242149Z arg2_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9242321Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9242329Z 2023-01-11T21:41:23.9242334Z 2023-01-11T21:41:23.9242453Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9242549Z import torch 2023-01-11T21:41:23.9242645Z import random 2023-01-11T21:41:23.9242806Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9243042Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9243050Z 2023-01-11T21:41:23.9243161Z aten = torch.ops.aten 2023-01-11T21:41:23.9243356Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9243485Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9243491Z 2023-01-11T21:41:23.9243497Z 2023-01-11T21:41:23.9243609Z async_compile.wait(globals()) 2023-01-11T21:41:23.9243709Z del async_compile 2023-01-11T21:41:23.9243715Z 2023-01-11T21:41:23.9243813Z def call(args): 2023-01-11T21:41:23.9243917Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9244016Z args.clear() 2023-01-11T21:41:23.9244348Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'hardswish', [], '') 2023-01-11T21:41:23.9244444Z del arg0_1 2023-01-11T21:41:23.9244521Z del arg1_1 2023-01-11T21:41:23.9244621Z return (buf0, ) 2023-01-11T21:41:23.9244628Z 2023-01-11T21:41:23.9244633Z 2023-01-11T21:41:23.9244735Z if __name__ == "__main__": 2023-01-11T21:41:23.9244903Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9245074Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9245366Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9245664Z arg1_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9245832Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9245838Z 2023-01-11T21:41:23.9246228Z [2023-01-11 21:35:39,878] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 313 2023-01-11T21:41:23.9246873Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9247061Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9247452Z [2023-01-11 21:35:39,908] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 314 2023-01-11T21:41:23.9247851Z [2023-01-11 21:35:39,910] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 314 2023-01-11T21:41:23.9248505Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9248688Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9249086Z [2023-01-11 21:35:39,994] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 315 2023-01-11T21:41:23.9249559Z [2023-01-11 21:35:39,997] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 315 2023-01-11T21:41:23.9250208Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9250388Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9250842Z [2023-01-11 21:35:40,025] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 316 2023-01-11T21:41:23.9251310Z [2023-01-11 21:35:40,027] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 316 2023-01-11T21:41:23.9252158Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9252382Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9252852Z [2023-01-11 21:35:40,114] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 317 2023-01-11T21:41:23.9253346Z [2023-01-11 21:35:40,117] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 317 2023-01-11T21:41:23.9254164Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9254398Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9254863Z [2023-01-11 21:35:40,148] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 318 2023-01-11T21:41:23.9255350Z [2023-01-11 21:35:40,150] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 318 2023-01-11T21:41:23.9256164Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9256382Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9256857Z [2023-01-11 21:35:40,236] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 319 2023-01-11T21:41:23.9257346Z [2023-01-11 21:35:40,239] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 319 2023-01-11T21:41:23.9258129Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9258348Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9258829Z [2023-01-11 21:35:40,268] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 320 2023-01-11T21:41:23.9259314Z [2023-01-11 21:35:40,271] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 320 2023-01-11T21:41:23.9260126Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9260395Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9260867Z [2023-01-11 21:35:40,356] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 321 2023-01-11T21:41:23.9260877Z 2023-01-11T21:41:23.9261045Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9261167Z import torch 2023-01-11T21:41:23.9261282Z import random 2023-01-11T21:41:23.9261471Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9261688Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9261696Z 2023-01-11T21:41:23.9261831Z aten = torch.ops.aten 2023-01-11T21:41:23.9262072Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9262294Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9262303Z 2023-01-11T21:41:23.9262448Z 2023-01-11T21:41:23.9262609Z async_compile.wait(globals()) 2023-01-11T21:41:23.9262727Z del async_compile 2023-01-11T21:41:23.9262735Z 2023-01-11T21:41:23.9262860Z def call(args): 2023-01-11T21:41:23.9262981Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9263101Z args.clear() 2023-01-11T21:41:23.9263517Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'leaky_relu', [0.1], '') 2023-01-11T21:41:23.9263635Z del arg0_1 2023-01-11T21:41:23.9263752Z del arg1_1 2023-01-11T21:41:23.9263862Z del arg2_1 2023-01-11T21:41:23.9263983Z return (buf0, ) 2023-01-11T21:41:23.9263992Z 2023-01-11T21:41:23.9263999Z 2023-01-11T21:41:23.9264111Z if __name__ == "__main__": 2023-01-11T21:41:23.9264312Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9264534Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9264913Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9265262Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 
2023-01-11T21:41:23.9265631Z arg2_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9265841Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9265849Z 2023-01-11T21:41:23.9265857Z 2023-01-11T21:41:23.9266017Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9266124Z import torch 2023-01-11T21:41:23.9266247Z import random 2023-01-11T21:41:23.9266453Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9266675Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9266683Z 2023-01-11T21:41:23.9266822Z aten = torch.ops.aten 2023-01-11T21:41:23.9267064Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9267225Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9267233Z 2023-01-11T21:41:23.9267245Z 2023-01-11T21:41:23.9267400Z async_compile.wait(globals()) 2023-01-11T21:41:23.9267513Z del async_compile 2023-01-11T21:41:23.9267521Z 2023-01-11T21:41:23.9267639Z def call(args): 2023-01-11T21:41:23.9267766Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9267885Z args.clear() 2023-01-11T21:41:23.9268295Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'leaky_relu', [0.1], '') 2023-01-11T21:41:23.9268411Z del arg0_1 2023-01-11T21:41:23.9268523Z del arg1_1 2023-01-11T21:41:23.9268633Z return (buf0, ) 2023-01-11T21:41:23.9268641Z 2023-01-11T21:41:23.9268663Z 2023-01-11T21:41:23.9268776Z if __name__ == "__main__": 2023-01-11T21:41:23.9268971Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9269181Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9269547Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9269926Z arg1_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9270213Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9270222Z 2023-01-11T21:41:23.9270229Z 2023-01-11T21:41:23.9270394Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9270510Z import torch 2023-01-11T21:41:23.9270619Z import random 2023-01-11T21:41:23.9270818Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9271030Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9271038Z 2023-01-11T21:41:23.9271169Z aten = torch.ops.aten 2023-01-11T21:41:23.9271403Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9271568Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9271577Z 2023-01-11T21:41:23.9271584Z 2023-01-11T21:41:23.9271738Z async_compile.wait(globals()) 2023-01-11T21:41:23.9271847Z del async_compile 2023-01-11T21:41:23.9271869Z 2023-01-11T21:41:23.9271978Z def call(args): 2023-01-11T21:41:23.9272116Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9272295Z args.clear() 2023-01-11T21:41:23.9272719Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'leaky_relu', [0.1], '') 2023-01-11T21:41:23.9272839Z del arg0_1 2023-01-11T21:41:23.9272955Z del arg1_1 2023-01-11T21:41:23.9273054Z del arg2_1 2023-01-11T21:41:23.9273180Z return (buf0, ) 2023-01-11T21:41:23.9273187Z 2023-01-11T21:41:23.9273193Z 2023-01-11T21:41:23.9273324Z if __name__ == "__main__": 2023-01-11T21:41:23.9273521Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9273803Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9274165Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 
2023-01-11T21:41:23.9274514Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9274872Z arg2_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9275076Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9275103Z 2023-01-11T21:41:23.9275111Z 2023-01-11T21:41:23.9275262Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9275382Z import torch 2023-01-11T21:41:23.9275503Z import random 2023-01-11T21:41:23.9275705Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9275919Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9275928Z 2023-01-11T21:41:23.9276064Z aten = torch.ops.aten 2023-01-11T21:41:23.9276295Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9276437Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9276460Z 2023-01-11T21:41:23.9276468Z 2023-01-11T21:41:23.9276608Z async_compile.wait(globals()) 2023-01-11T21:41:23.9276730Z del async_compile 2023-01-11T21:41:23.9276738Z 2023-01-11T21:41:23.9276862Z def call(args): 2023-01-11T21:41:23.9276993Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9277115Z args.clear() 2023-01-11T21:41:23.9277530Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'leaky_relu', [0.1], '') 2023-01-11T21:41:23.9277653Z del arg0_1 2023-01-11T21:41:23.9277752Z del arg1_1 2023-01-11T21:41:23.9277876Z return (buf0, ) 2023-01-11T21:41:23.9277884Z 2023-01-11T21:41:23.9277891Z 2023-01-11T21:41:23.9278022Z if __name__ == "__main__": 2023-01-11T21:41:23.9278223Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9278442Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9278801Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9279163Z arg1_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9279360Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9279369Z 2023-01-11T21:41:23.9279377Z 2023-01-11T21:41:23.9279523Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9279647Z import torch 2023-01-11T21:41:23.9279765Z import random 2023-01-11T21:41:23.9280022Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9280241Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9280249Z 2023-01-11T21:41:23.9280387Z aten = torch.ops.aten 2023-01-11T21:41:23.9280627Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9280781Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9280789Z 2023-01-11T21:41:23.9280797Z 2023-01-11T21:41:23.9280932Z async_compile.wait(globals()) 2023-01-11T21:41:23.9281060Z del async_compile 2023-01-11T21:41:23.9281068Z 2023-01-11T21:41:23.9281187Z def call(args): 2023-01-11T21:41:23.9281329Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9281447Z args.clear() 2023-01-11T21:41:23.9281879Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'hardtanh', [-0.5, 4], '') 2023-01-11T21:41:23.9282000Z del arg0_1 2023-01-11T21:41:23.9282097Z del arg1_1 2023-01-11T21:41:23.9282211Z del arg2_1 2023-01-11T21:41:23.9282331Z return (buf0, ) 2023-01-11T21:41:23.9282380Z 2023-01-11T21:41:23.9282388Z 2023-01-11T21:41:23.9282522Z if __name__ == "__main__": 2023-01-11T21:41:23.9282724Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9282939Z from torch._inductor.utils import print_performance 
2023-01-11T21:41:23.9283311Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9283669Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9284025Z arg2_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9284241Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9284249Z 2023-01-11T21:41:23.9284257Z 2023-01-11T21:41:23.9284426Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9284547Z import torch 2023-01-11T21:41:23.9284670Z import random 2023-01-11T21:41:23.9284875Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9285103Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9285112Z 2023-01-11T21:41:23.9285243Z aten = torch.ops.aten 2023-01-11T21:41:23.9285470Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9285630Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9285639Z 2023-01-11T21:41:23.9285646Z 2023-01-11T21:41:23.9285803Z async_compile.wait(globals()) 2023-01-11T21:41:23.9285928Z del async_compile 2023-01-11T21:41:23.9285936Z 2023-01-11T21:41:23.9286058Z def call(args): 2023-01-11T21:41:23.9286189Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9286312Z args.clear() 2023-01-11T21:41:23.9286730Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'hardtanh', [-0.5, 4], '') 2023-01-11T21:41:23.9286827Z del arg0_1 2023-01-11T21:41:23.9286942Z del arg1_1 2023-01-11T21:41:23.9287066Z return (buf0, ) 2023-01-11T21:41:23.9287074Z 2023-01-11T21:41:23.9287082Z 2023-01-11T21:41:23.9287217Z if __name__ == "__main__": 2023-01-11T21:41:23.9287426Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9287643Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9288014Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9288374Z arg1_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9288576Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9288584Z 2023-01-11T21:41:23.9288591Z 2023-01-11T21:41:23.9288757Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9288876Z import torch 2023-01-11T21:41:23.9288997Z import random 2023-01-11T21:41:23.9289201Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9289414Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9289422Z 2023-01-11T21:41:23.9289553Z aten = torch.ops.aten 2023-01-11T21:41:23.9289774Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9289981Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9289990Z 2023-01-11T21:41:23.9289997Z 2023-01-11T21:41:23.9290149Z async_compile.wait(globals()) 2023-01-11T21:41:23.9290275Z del async_compile 2023-01-11T21:41:23.9290283Z 2023-01-11T21:41:23.9290399Z def call(args): 2023-01-11T21:41:23.9290544Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9290665Z args.clear() 2023-01-11T21:41:23.9291092Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'hardtanh', [-0.5, 4], '') 2023-01-11T21:41:23.9291191Z del arg0_1 2023-01-11T21:41:23.9291309Z del arg1_1 2023-01-11T21:41:23.9291422Z del arg2_1 2023-01-11T21:41:23.9291546Z return (buf0, ) 2023-01-11T21:41:23.9291552Z 2023-01-11T21:41:23.9291558Z 2023-01-11T21:41:23.9291691Z if __name__ == "__main__": 2023-01-11T21:41:23.9291889Z from torch._dynamo.testing 
import rand_strided 2023-01-11T21:41:23.9292108Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9292498Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9292858Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9293213Z arg2_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9293427Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9293436Z 2023-01-11T21:41:23.9293442Z 2023-01-11T21:41:23.9293607Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9293728Z import torch 2023-01-11T21:41:23.9293849Z import random 2023-01-11T21:41:23.9294046Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9294238Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9294262Z 2023-01-11T21:41:23.9294383Z aten = torch.ops.aten 2023-01-11T21:41:23.9294619Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9294774Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9294782Z 2023-01-11T21:41:23.9294793Z 2023-01-11T21:41:23.9294950Z async_compile.wait(globals()) 2023-01-11T21:41:23.9295078Z del async_compile 2023-01-11T21:41:23.9295086Z 2023-01-11T21:41:23.9295204Z def call(args): 2023-01-11T21:41:23.9295333Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9295432Z args.clear() 2023-01-11T21:41:23.9295858Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'hardtanh', [-0.5, 4], '') 2023-01-11T21:41:23.9295973Z del arg0_1 2023-01-11T21:41:23.9296087Z del arg1_1 2023-01-11T21:41:23.9296202Z return (buf0, ) 2023-01-11T21:41:23.9296210Z 2023-01-11T21:41:23.9296217Z 2023-01-11T21:41:23.9296348Z if __name__ == "__main__": 2023-01-11T21:41:23.9296545Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9296761Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9297110Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9297474Z arg1_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9297679Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9297688Z 2023-01-11T21:41:23.9298175Z [2023-01-11 21:35:40,359] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 321 2023-01-11T21:41:23.9298985Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9299205Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9299674Z [2023-01-11 21:35:40,390] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 322 2023-01-11T21:41:23.9300165Z [2023-01-11 21:35:40,392] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 322 2023-01-11T21:41:23.9301062Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9301279Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9301762Z [2023-01-11 21:35:40,479] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 323 2023-01-11T21:41:23.9302239Z [2023-01-11 21:35:40,482] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 323 2023-01-11T21:41:23.9303235Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9303465Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9303946Z [2023-01-11 21:35:40,509] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 324 2023-01-11T21:41:23.9304434Z [2023-01-11 21:35:40,512] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 324 2023-01-11T21:41:23.9305239Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9305460Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9305941Z [2023-01-11 21:35:40,599] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 325 2023-01-11T21:41:23.9306422Z [2023-01-11 21:35:40,602] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 325 2023-01-11T21:41:23.9307221Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9307441Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9307916Z [2023-01-11 21:35:40,635] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 326 2023-01-11T21:41:23.9308396Z [2023-01-11 21:35:40,637] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 326 2023-01-11T21:41:23.9309218Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9309435Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9309914Z [2023-01-11 21:35:40,726] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 327 2023-01-11T21:41:23.9310406Z [2023-01-11 21:35:40,729] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 327 2023-01-11T21:41:23.9311220Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9311493Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9311979Z [2023-01-11 21:35:40,761] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 328 2023-01-11T21:41:23.9312468Z [2023-01-11 21:35:40,764] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 328 2023-01-11T21:41:23.9313273Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9313580Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9314154Z [2023-01-11 21:35:40,856] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 329 2023-01-11T21:41:23.9314166Z 2023-01-11T21:41:23.9314310Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9314430Z import torch 2023-01-11T21:41:23.9314555Z import random 2023-01-11T21:41:23.9314762Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9314978Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9314986Z 2023-01-11T21:41:23.9315123Z aten = torch.ops.aten 2023-01-11T21:41:23.9315360Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9315510Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9315533Z 2023-01-11T21:41:23.9315541Z 2023-01-11T21:41:23.9315681Z async_compile.wait(globals()) 2023-01-11T21:41:23.9315806Z del async_compile 2023-01-11T21:41:23.9315815Z 2023-01-11T21:41:23.9315933Z def call(args): 2023-01-11T21:41:23.9316075Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9316206Z args.clear() 2023-01-11T21:41:23.9316605Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'gelu', [], 'none') 2023-01-11T21:41:23.9316721Z del arg0_1 2023-01-11T21:41:23.9316813Z del arg1_1 2023-01-11T21:41:23.9316928Z del arg2_1 2023-01-11T21:41:23.9317052Z return (buf0, ) 2023-01-11T21:41:23.9317060Z 2023-01-11T21:41:23.9317067Z 2023-01-11T21:41:23.9317199Z if __name__ == "__main__": 2023-01-11T21:41:23.9317399Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9317611Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9317983Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9318339Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 
2023-01-11T21:41:23.9318701Z arg2_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9318921Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9318934Z 2023-01-11T21:41:23.9318942Z 2023-01-11T21:41:23.9319106Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9319227Z import torch 2023-01-11T21:41:23.9319347Z import random 2023-01-11T21:41:23.9319545Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9319764Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9319772Z 2023-01-11T21:41:23.9319908Z aten = torch.ops.aten 2023-01-11T21:41:23.9320133Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9320289Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9320297Z 2023-01-11T21:41:23.9320305Z 2023-01-11T21:41:23.9320458Z async_compile.wait(globals()) 2023-01-11T21:41:23.9320585Z del async_compile 2023-01-11T21:41:23.9320594Z 2023-01-11T21:41:23.9320719Z def call(args): 2023-01-11T21:41:23.9320850Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9320961Z args.clear() 2023-01-11T21:41:23.9321350Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'gelu', [], 'none') 2023-01-11T21:41:23.9321515Z del arg0_1 2023-01-11T21:41:23.9321633Z del arg1_1 2023-01-11T21:41:23.9321759Z return (buf0, ) 2023-01-11T21:41:23.9321767Z 2023-01-11T21:41:23.9321775Z 2023-01-11T21:41:23.9321899Z if __name__ == "__main__": 2023-01-11T21:41:23.9322100Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9322319Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9322687Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9323051Z arg1_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9323252Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9323261Z 2023-01-11T21:41:23.9323268Z 2023-01-11T21:41:23.9323426Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9323544Z import torch 2023-01-11T21:41:23.9323667Z import random 2023-01-11T21:41:23.9323911Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9324128Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9324136Z 2023-01-11T21:41:23.9324271Z aten = torch.ops.aten 2023-01-11T21:41:23.9324499Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9324656Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9324664Z 2023-01-11T21:41:23.9324672Z 2023-01-11T21:41:23.9324819Z async_compile.wait(globals()) 2023-01-11T21:41:23.9324946Z del async_compile 2023-01-11T21:41:23.9324954Z 2023-01-11T21:41:23.9325074Z def call(args): 2023-01-11T21:41:23.9325217Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9325338Z args.clear() 2023-01-11T21:41:23.9325738Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'gelu', [], 'none') 2023-01-11T21:41:23.9325840Z del arg0_1 2023-01-11T21:41:23.9325950Z del arg1_1 2023-01-11T21:41:23.9326065Z del arg2_1 2023-01-11T21:41:23.9326186Z return (buf0, ) 2023-01-11T21:41:23.9326203Z 2023-01-11T21:41:23.9326211Z 2023-01-11T21:41:23.9326346Z if __name__ == "__main__": 2023-01-11T21:41:23.9326543Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9326763Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9327116Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 
2023-01-11T21:41:23.9327466Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9327830Z arg2_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9328045Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9328053Z 2023-01-11T21:41:23.9328060Z 2023-01-11T21:41:23.9328229Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9328348Z import torch 2023-01-11T21:41:23.9328467Z import random 2023-01-11T21:41:23.9328673Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9328881Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9328889Z 2023-01-11T21:41:23.9329028Z aten = torch.ops.aten 2023-01-11T21:41:23.9329265Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9329428Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9329437Z 2023-01-11T21:41:23.9329444Z 2023-01-11T21:41:23.9329599Z async_compile.wait(globals()) 2023-01-11T21:41:23.9329731Z del async_compile 2023-01-11T21:41:23.9329739Z 2023-01-11T21:41:23.9329862Z def call(args): 2023-01-11T21:41:23.9329990Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9330089Z args.clear() 2023-01-11T21:41:23.9330481Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'gelu', [], 'none') 2023-01-11T21:41:23.9330594Z del arg0_1 2023-01-11T21:41:23.9330712Z del arg1_1 2023-01-11T21:41:23.9330831Z return (buf0, ) 2023-01-11T21:41:23.9330839Z 2023-01-11T21:41:23.9330846Z 2023-01-11T21:41:23.9330978Z if __name__ == "__main__": 2023-01-11T21:41:23.9331229Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9331430Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9331791Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9332157Z arg1_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9332357Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9332365Z 2023-01-11T21:41:23.9332373Z 2023-01-11T21:41:23.9332536Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9332655Z import torch 2023-01-11T21:41:23.9332774Z import random 2023-01-11T21:41:23.9332982Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9333178Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9333199Z 2023-01-11T21:41:23.9333321Z aten = torch.ops.aten 2023-01-11T21:41:23.9333554Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9333770Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9333784Z 2023-01-11T21:41:23.9333791Z 2023-01-11T21:41:23.9333942Z async_compile.wait(globals()) 2023-01-11T21:41:23.9334072Z del async_compile 2023-01-11T21:41:23.9334081Z 2023-01-11T21:41:23.9334197Z def call(args): 2023-01-11T21:41:23.9334335Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9334437Z args.clear() 2023-01-11T21:41:23.9334839Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'gelu', [], 'tanh') 2023-01-11T21:41:23.9334956Z del arg0_1 2023-01-11T21:41:23.9335067Z del arg1_1 2023-01-11T21:41:23.9335184Z del arg2_1 2023-01-11T21:41:23.9335308Z return (buf0, ) 2023-01-11T21:41:23.9335316Z 2023-01-11T21:41:23.9335324Z 2023-01-11T21:41:23.9335454Z if __name__ == "__main__": 2023-01-11T21:41:23.9335653Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9335851Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9336219Z arg0_1 
= rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9336569Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9336942Z arg2_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9337159Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9337167Z 2023-01-11T21:41:23.9337175Z 2023-01-11T21:41:23.9337339Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9337457Z import torch 2023-01-11T21:41:23.9337580Z import random 2023-01-11T21:41:23.9337764Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9337979Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9337987Z 2023-01-11T21:41:23.9338123Z aten = torch.ops.aten 2023-01-11T21:41:23.9338360Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9338522Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9338530Z 2023-01-11T21:41:23.9338541Z 2023-01-11T21:41:23.9338705Z async_compile.wait(globals()) 2023-01-11T21:41:23.9338834Z del async_compile 2023-01-11T21:41:23.9338842Z 2023-01-11T21:41:23.9338966Z def call(args): 2023-01-11T21:41:23.9339080Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9339200Z args.clear() 2023-01-11T21:41:23.9339601Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'gelu', [], 'tanh') 2023-01-11T21:41:23.9339718Z del arg0_1 2023-01-11T21:41:23.9339834Z del arg1_1 2023-01-11T21:41:23.9339950Z return (buf0, ) 2023-01-11T21:41:23.9339958Z 2023-01-11T21:41:23.9339965Z 2023-01-11T21:41:23.9340093Z if __name__ == "__main__": 2023-01-11T21:41:23.9340274Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9340492Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9340858Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9341234Z arg1_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9341475Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9341484Z 2023-01-11T21:41:23.9341491Z 2023-01-11T21:41:23.9341654Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9341774Z import torch 2023-01-11T21:41:23.9341896Z import random 2023-01-11T21:41:23.9342083Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9342296Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9342304Z 2023-01-11T21:41:23.9342543Z aten = torch.ops.aten 2023-01-11T21:41:23.9342781Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9342940Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9342949Z 2023-01-11T21:41:23.9342955Z 2023-01-11T21:41:23.9343111Z async_compile.wait(globals()) 2023-01-11T21:41:23.9343236Z del async_compile 2023-01-11T21:41:23.9343244Z 2023-01-11T21:41:23.9343366Z def call(args): 2023-01-11T21:41:23.9343482Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9343671Z args.clear() 2023-01-11T21:41:23.9344080Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'gelu', [], 'tanh') 2023-01-11T21:41:23.9344192Z del arg0_1 2023-01-11T21:41:23.9344310Z del arg1_1 2023-01-11T21:41:23.9344424Z del arg2_1 2023-01-11T21:41:23.9344546Z return (buf0, ) 2023-01-11T21:41:23.9344554Z 2023-01-11T21:41:23.9344562Z 2023-01-11T21:41:23.9344668Z if __name__ == "__main__": 2023-01-11T21:41:23.9344868Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9345085Z from 
torch._inductor.utils import print_performance 2023-01-11T21:41:23.9345447Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9345799Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9346155Z arg2_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9346368Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9346385Z 2023-01-11T21:41:23.9346393Z 2023-01-11T21:41:23.9346555Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9346661Z import torch 2023-01-11T21:41:23.9346782Z import random 2023-01-11T21:41:23.9346989Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9347206Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9347215Z 2023-01-11T21:41:23.9347355Z aten = torch.ops.aten 2023-01-11T21:41:23.9347586Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9347743Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9347751Z 2023-01-11T21:41:23.9347758Z 2023-01-11T21:41:23.9347914Z async_compile.wait(globals()) 2023-01-11T21:41:23.9348021Z del async_compile 2023-01-11T21:41:23.9348029Z 2023-01-11T21:41:23.9348147Z def call(args): 2023-01-11T21:41:23.9348271Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9348393Z args.clear() 2023-01-11T21:41:23.9348788Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'gelu', [], 'tanh') 2023-01-11T21:41:23.9348906Z del arg0_1 2023-01-11T21:41:23.9349024Z del arg1_1 2023-01-11T21:41:23.9349128Z return (buf0, ) 2023-01-11T21:41:23.9349135Z 2023-01-11T21:41:23.9349158Z 2023-01-11T21:41:23.9349272Z if __name__ == "__main__": 2023-01-11T21:41:23.9349469Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9349687Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9350054Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9350412Z arg1_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9350612Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9350620Z 2023-01-11T21:41:23.9351112Z [2023-01-11 21:35:40,859] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 329 2023-01-11T21:41:23.9351912Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9352195Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9352654Z [2023-01-11 21:35:40,891] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 330 2023-01-11T21:41:23.9353148Z [2023-01-11 21:35:40,894] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 330 2023-01-11T21:41:23.9354060Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9354287Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9354760Z [2023-01-11 21:35:40,980] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 331 2023-01-11T21:41:23.9355249Z [2023-01-11 21:35:40,983] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 331 2023-01-11T21:41:23.9356041Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9356262Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9356745Z [2023-01-11 21:35:41,013] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 332 2023-01-11T21:41:23.9357236Z [2023-01-11 21:35:41,016] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 332 2023-01-11T21:41:23.9358048Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9358266Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9358722Z [2023-01-11 21:35:41,128] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 333 2023-01-11T21:41:23.9359193Z [2023-01-11 21:35:41,130] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 333 2023-01-11T21:41:23.9359992Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9360215Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9360680Z [2023-01-11 21:35:41,186] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 334 2023-01-11T21:41:23.9361164Z [2023-01-11 21:35:41,189] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 334 2023-01-11T21:41:23.9361969Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9362236Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9362706Z [2023-01-11 21:35:41,313] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 335 2023-01-11T21:41:23.9363193Z [2023-01-11 21:35:41,316] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 335 2023-01-11T21:41:23.9364003Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9364228Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9364724Z [2023-01-11 21:35:41,391] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 336 2023-01-11T21:41:23.9365218Z [2023-01-11 21:35:41,393] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 336 2023-01-11T21:41:23.9365227Z 2023-01-11T21:41:23.9365388Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9365510Z import torch 2023-01-11T21:41:23.9365628Z import random 2023-01-11T21:41:23.9365832Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9366051Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9366060Z 2023-01-11T21:41:23.9366199Z aten = torch.ops.aten 2023-01-11T21:41:23.9366424Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9366581Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9366589Z 2023-01-11T21:41:23.9366597Z 2023-01-11T21:41:23.9366749Z async_compile.wait(globals()) 2023-01-11T21:41:23.9366876Z del async_compile 2023-01-11T21:41:23.9366883Z 2023-01-11T21:41:23.9366998Z def call(args): 2023-01-11T21:41:23.9367151Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9367270Z args.clear() 2023-01-11T21:41:23.9367691Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'hardtanh', [0, 6], '') 2023-01-11T21:41:23.9367797Z del arg0_1 2023-01-11T21:41:23.9367913Z del arg1_1 2023-01-11T21:41:23.9368026Z del arg2_1 2023-01-11T21:41:23.9368150Z return (buf0, ) 2023-01-11T21:41:23.9368158Z 2023-01-11T21:41:23.9368165Z 2023-01-11T21:41:23.9368295Z if __name__ == "__main__": 2023-01-11T21:41:23.9368492Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9368711Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9369059Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9369413Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9369780Z arg2_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9370006Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9370014Z 2023-01-11T21:41:23.9370021Z 2023-01-11T21:41:23.9370188Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9370303Z import torch 2023-01-11T21:41:23.9370428Z import random 2023-01-11T21:41:23.9370623Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9370818Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9370826Z 2023-01-11T21:41:23.9370960Z aten = 
torch.ops.aten 2023-01-11T21:41:23.9371197Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9371359Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9371368Z 2023-01-11T21:41:23.9371375Z 2023-01-11T21:41:23.9371525Z async_compile.wait(globals()) 2023-01-11T21:41:23.9371651Z del async_compile 2023-01-11T21:41:23.9371659Z 2023-01-11T21:41:23.9371783Z def call(args): 2023-01-11T21:41:23.9371910Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9372017Z args.clear() 2023-01-11T21:41:23.9372476Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'hardtanh', [0, 6], '') 2023-01-11T21:41:23.9372588Z del arg0_1 2023-01-11T21:41:23.9372702Z del arg1_1 2023-01-11T21:41:23.9372826Z return (buf0, ) 2023-01-11T21:41:23.9372834Z 2023-01-11T21:41:23.9372841Z 2023-01-11T21:41:23.9372974Z if __name__ == "__main__": 2023-01-11T21:41:23.9373168Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9373370Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9373728Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9374107Z arg1_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9374306Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9374314Z 2023-01-11T21:41:23.9374321Z 2023-01-11T21:41:23.9374481Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9374604Z import torch 2023-01-11T21:41:23.9374731Z import random 2023-01-11T21:41:23.9374974Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9375171Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9375195Z 2023-01-11T21:41:23.9375311Z aten = torch.ops.aten 2023-01-11T21:41:23.9375550Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9375711Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9375718Z 2023-01-11T21:41:23.9375725Z 2023-01-11T21:41:23.9375885Z async_compile.wait(globals()) 2023-01-11T21:41:23.9376013Z del async_compile 2023-01-11T21:41:23.9376021Z 2023-01-11T21:41:23.9376141Z def call(args): 2023-01-11T21:41:23.9376280Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9376387Z args.clear() 2023-01-11T21:41:23.9376798Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'hardtanh', [0, 6], '') 2023-01-11T21:41:23.9376917Z del arg0_1 2023-01-11T21:41:23.9377030Z del arg1_1 2023-01-11T21:41:23.9377149Z del arg2_1 2023-01-11T21:41:23.9377279Z return (buf0, ) 2023-01-11T21:41:23.9377288Z 2023-01-11T21:41:23.9377295Z 2023-01-11T21:41:23.9377424Z if __name__ == "__main__": 2023-01-11T21:41:23.9377627Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9377832Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9378182Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9378533Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9378883Z arg2_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9379095Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9379103Z 2023-01-11T21:41:23.9379111Z 2023-01-11T21:41:23.9379280Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9379398Z import torch 2023-01-11T21:41:23.9379519Z import random 2023-01-11T21:41:23.9379706Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9379924Z from torch._inductor.codecache 
import AsyncCompile 2023-01-11T21:41:23.9379933Z 2023-01-11T21:41:23.9380068Z aten = torch.ops.aten 2023-01-11T21:41:23.9380305Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9380463Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9380472Z 2023-01-11T21:41:23.9380479Z 2023-01-11T21:41:23.9380637Z async_compile.wait(globals()) 2023-01-11T21:41:23.9380758Z del async_compile 2023-01-11T21:41:23.9380766Z 2023-01-11T21:41:23.9380884Z def call(args): 2023-01-11T21:41:23.9380991Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9381111Z args.clear() 2023-01-11T21:41:23.9381511Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'hardtanh', [0, 6], '') 2023-01-11T21:41:23.9381624Z del arg0_1 2023-01-11T21:41:23.9381741Z del arg1_1 2023-01-11T21:41:23.9381860Z return (buf0, ) 2023-01-11T21:41:23.9381868Z 2023-01-11T21:41:23.9381875Z 2023-01-11T21:41:23.9382003Z if __name__ == "__main__": 2023-01-11T21:41:23.9382233Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9382566Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9382928Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9383287Z arg1_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9383481Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9383489Z 2023-01-11T21:41:23.9383497Z 2023-01-11T21:41:23.9383661Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9383786Z import torch 2023-01-11T21:41:23.9383899Z import random 2023-01-11T21:41:23.9384086Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9384297Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9384306Z 2023-01-11T21:41:23.9384439Z aten = torch.ops.aten 2023-01-11T21:41:23.9384677Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9384912Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9384922Z 2023-01-11T21:41:23.9384929Z 2023-01-11T21:41:23.9385086Z async_compile.wait(globals()) 2023-01-11T21:41:23.9385213Z del async_compile 2023-01-11T21:41:23.9385221Z 2023-01-11T21:41:23.9385339Z def call(args): 2023-01-11T21:41:23.9385467Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9385585Z args.clear() 2023-01-11T21:41:23.9385978Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'swish', [], '') 2023-01-11T21:41:23.9386092Z del arg0_1 2023-01-11T21:41:23.9386203Z del arg1_1 2023-01-11T21:41:23.9386315Z del arg2_1 2023-01-11T21:41:23.9386434Z return (buf0, ) 2023-01-11T21:41:23.9386442Z 2023-01-11T21:41:23.9386447Z 2023-01-11T21:41:23.9386559Z if __name__ == "__main__": 2023-01-11T21:41:23.9386748Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9386956Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9387328Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9387679Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9388056Z arg2_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9388266Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9388274Z 2023-01-11T21:41:23.9388282Z 2023-01-11T21:41:23.9388446Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9388552Z import torch 2023-01-11T21:41:23.9388665Z import random 2023-01-11T21:41:23.9388860Z from torch import empty_strided, 
as_strided, device 2023-01-11T21:41:23.9389079Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9389086Z 2023-01-11T21:41:23.9389225Z aten = torch.ops.aten 2023-01-11T21:41:23.9389462Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9389620Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9389632Z 2023-01-11T21:41:23.9389644Z 2023-01-11T21:41:23.9389799Z async_compile.wait(globals()) 2023-01-11T21:41:23.9389911Z del async_compile 2023-01-11T21:41:23.9389919Z 2023-01-11T21:41:23.9390032Z def call(args): 2023-01-11T21:41:23.9390163Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9390277Z args.clear() 2023-01-11T21:41:23.9390670Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'swish', [], '') 2023-01-11T21:41:23.9390777Z del arg0_1 2023-01-11T21:41:23.9390892Z del arg1_1 2023-01-11T21:41:23.9390990Z return (buf0, ) 2023-01-11T21:41:23.9390998Z 2023-01-11T21:41:23.9391018Z 2023-01-11T21:41:23.9391134Z if __name__ == "__main__": 2023-01-11T21:41:23.9391327Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9391538Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9391894Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9392270Z arg1_1 = rand_strided((2, 3, 10), (30, 10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9392531Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9392539Z 2023-01-11T21:41:23.9392547Z 2023-01-11T21:41:23.9392706Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9392808Z import torch 2023-01-11T21:41:23.9392929Z import random 2023-01-11T21:41:23.9393129Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9393340Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9393349Z 2023-01-11T21:41:23.9393491Z aten = torch.ops.aten 2023-01-11T21:41:23.9393785Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9393940Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9393948Z 2023-01-11T21:41:23.9393956Z 2023-01-11T21:41:23.9394109Z async_compile.wait(globals()) 2023-01-11T21:41:23.9394219Z del async_compile 2023-01-11T21:41:23.9394243Z 2023-01-11T21:41:23.9394344Z def call(args): 2023-01-11T21:41:23.9394486Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9394654Z args.clear() 2023-01-11T21:41:23.9395042Z buf0 = torch.ops.mkldnn._linear_pointwise(arg2_1, arg0_1, arg1_1, 'swish', [], '') 2023-01-11T21:41:23.9395163Z del arg0_1 2023-01-11T21:41:23.9395276Z del arg1_1 2023-01-11T21:41:23.9395376Z del arg2_1 2023-01-11T21:41:23.9395494Z return (buf0, ) 2023-01-11T21:41:23.9395503Z 2023-01-11T21:41:23.9395510Z 2023-01-11T21:41:23.9395640Z if __name__ == "__main__": 2023-01-11T21:41:23.9395830Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9396043Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9396403Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9396758Z arg1_1 = rand_strided((30, ), (1, ), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9397115Z arg2_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9397321Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9397345Z 2023-01-11T21:41:23.9397352Z 2023-01-11T21:41:23.9397504Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9397619Z import torch 2023-01-11T21:41:23.9397740Z 
import random 2023-01-11T21:41:23.9397940Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9398148Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9398156Z 2023-01-11T21:41:23.9398287Z aten = torch.ops.aten 2023-01-11T21:41:23.9398528Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9398670Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9398679Z 2023-01-11T21:41:23.9398699Z 2023-01-11T21:41:23.9398833Z async_compile.wait(globals()) 2023-01-11T21:41:23.9398956Z del async_compile 2023-01-11T21:41:23.9398965Z 2023-01-11T21:41:23.9399081Z def call(args): 2023-01-11T21:41:23.9399206Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9399322Z args.clear() 2023-01-11T21:41:23.9399715Z buf0 = torch.ops.mkldnn._linear_pointwise(arg1_1, arg0_1, None, 'swish', [], '') 2023-01-11T21:41:23.9399824Z del arg0_1 2023-01-11T21:41:23.9399920Z del arg1_1 2023-01-11T21:41:23.9400037Z return (buf0, ) 2023-01-11T21:41:23.9400045Z 2023-01-11T21:41:23.9400052Z 2023-01-11T21:41:23.9400182Z if __name__ == "__main__": 2023-01-11T21:41:23.9400378Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9400588Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9400948Z arg0_1 = rand_strided((30, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9401303Z arg1_1 = rand_strided((2, 10), (10, 1), device='cpu', dtype=torch.bfloat16) 2023-01-11T21:41:23.9401484Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9401507Z 2023-01-11T21:41:23.9401604Z ok (2.862s) 2023-01-11T21:41:23.9402497Z test_linspace1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9402768Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9403243Z [2023-01-11 21:35:41,455] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 337 2023-01-11T21:41:23.9403731Z [2023-01-11 21:35:43,118] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 337 2023-01-11T21:41:23.9403740Z 2023-01-11T21:41:23.9403907Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9404027Z import torch 2023-01-11T21:41:23.9404154Z import random 2023-01-11T21:41:23.9404355Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9404546Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9404554Z 2023-01-11T21:41:23.9404752Z aten = torch.ops.aten 2023-01-11T21:41:23.9404987Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9405151Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9405159Z 2023-01-11T21:41:23.9405167Z 2023-01-11T21:41:23.9405405Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9405775Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9405982Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.9406154Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.9406242Z { 2023-01-11T21:41:23.9406412Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9406517Z { 2023-01-11T21:41:23.9406644Z #pragma omp for 2023-01-11T21:41:23.9406786Z for(long i0=0; i0<7; i0+=1) 2023-01-11T21:41:23.9406891Z { 2023-01-11T21:41:23.9406986Z { 2023-01-11T21:41:23.9407096Z { 2023-01-11T21:41:23.9407263Z auto tmp4 = in_ptr0[i0]; 2023-01-11T21:41:23.9407448Z auto tmp0 = static_cast(0.125); 2023-01-11T21:41:23.9407631Z auto tmp1 = static_cast(i0); 2023-01-11T21:41:23.9407791Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.9407943Z auto tmp3 = tmp2 + tmp0; 2023-01-11T21:41:23.9408092Z auto tmp5 = tmp3 + tmp4; 2023-01-11T21:41:23.9408224Z out_ptr0[i0] = tmp5; 2023-01-11T21:41:23.9408336Z } 2023-01-11T21:41:23.9408449Z } 2023-01-11T21:41:23.9408550Z } 2023-01-11T21:41:23.9408656Z } 2023-01-11T21:41:23.9408754Z } 2023-01-11T21:41:23.9408880Z ''') 2023-01-11T21:41:23.9408889Z 2023-01-11T21:41:23.9408911Z 2023-01-11T21:41:23.9409055Z async_compile.wait(globals()) 2023-01-11T21:41:23.9409182Z del async_compile 2023-01-11T21:41:23.9409191Z 2023-01-11T21:41:23.9409317Z def call(args): 2023-01-11T21:41:23.9409432Z arg0_1, = args 2023-01-11T21:41:23.9409564Z args.clear() 2023-01-11T21:41:23.9409914Z buf0 = empty_strided((1, 7), (7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9410152Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.9410252Z del arg0_1 2023-01-11T21:41:23.9410374Z return (buf0, ) 2023-01-11T21:41:23.9410383Z 2023-01-11T21:41:23.9410391Z 2023-01-11T21:41:23.9410525Z if __name__ == "__main__": 2023-01-11T21:41:23.9410728Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9410943Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9411297Z arg0_1 = rand_strided((1, 7), (7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9411487Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9411496Z 2023-01-11T21:41:23.9411608Z ok (1.719s) 2023-01-11T21:41:23.9412478Z test_linspace2_cpu (__main__.CpuTests) ... 
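The test_linspace1_cpu listing above fuses the linspace construction and the subsequent add into a single loop: each output element is 0.125 * i0 + 0.125 plus the matching input element. A minimal eager-mode sketch of the same arithmetic, with the linspace arguments inferred from the generated kernel rather than quoted from the test itself:

import torch

def linspace1_reference(x: torch.Tensor) -> torch.Tensor:
    # out[i] = 0.125 * i + 0.125 + x[i], mirroring the fused loop in kernel_cpp_0 above
    steps = x.shape[-1]  # 7 in the compiled graph shown
    ramp = 0.125 * torch.arange(steps, dtype=x.dtype, device=x.device) + 0.125
    return ramp + x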
/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9412739Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9413217Z [2023-01-11 21:35:43,150] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 338 2023-01-11T21:41:23.9413716Z [2023-01-11 21:35:44,698] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 338 2023-01-11T21:41:23.9413724Z 2023-01-11T21:41:23.9413894Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9414017Z import torch 2023-01-11T21:41:23.9414132Z import random 2023-01-11T21:41:23.9414336Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9414549Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9414598Z 2023-01-11T21:41:23.9414721Z aten = torch.ops.aten 2023-01-11T21:41:23.9414963Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9415121Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9415129Z 2023-01-11T21:41:23.9415136Z 2023-01-11T21:41:23.9415375Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9415740Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9415947Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.9416123Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.9416218Z { 2023-01-11T21:41:23.9416311Z { 2023-01-11T21:41:23.9416413Z { 2023-01-11T21:41:23.9416563Z auto tmp4 = in_ptr0[0]; 2023-01-11T21:41:23.9416735Z auto tmp0 = static_cast(0.0); 2023-01-11T21:41:23.9416907Z auto tmp1 = static_cast(0); 2023-01-11T21:41:23.9417057Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.9417190Z auto tmp3 = tmp2 + tmp1; 2023-01-11T21:41:23.9417330Z auto tmp5 = tmp3 + tmp4; 2023-01-11T21:41:23.9417464Z out_ptr0[0] = tmp5; 2023-01-11T21:41:23.9417566Z } 2023-01-11T21:41:23.9417671Z } 2023-01-11T21:41:23.9417774Z } 2023-01-11T21:41:23.9417910Z ''') 2023-01-11T21:41:23.9417918Z 2023-01-11T21:41:23.9417926Z 2023-01-11T21:41:23.9418066Z async_compile.wait(globals()) 2023-01-11T21:41:23.9418195Z del async_compile 2023-01-11T21:41:23.9418203Z 2023-01-11T21:41:23.9418326Z def call(args): 2023-01-11T21:41:23.9418447Z arg0_1, = args 2023-01-11T21:41:23.9418566Z args.clear() 2023-01-11T21:41:23.9418926Z buf0 = empty_strided((1, 1), (1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9419164Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.9419278Z del arg0_1 2023-01-11T21:41:23.9419383Z return (buf0, ) 2023-01-11T21:41:23.9419392Z 2023-01-11T21:41:23.9419403Z 2023-01-11T21:41:23.9419538Z if __name__ == "__main__": 2023-01-11T21:41:23.9419736Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9419955Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9420309Z arg0_1 = rand_strided((1, 1), (1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9420494Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9420502Z 2023-01-11T21:41:23.9420614Z ok (1.580s) 2023-01-11T21:41:23.9421499Z test_linspace3_cpu (__main__.CpuTests) ... 
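Each generated wrapper above follows the same calling convention: call() receives its inputs in a list, immediately empties that list (args.clear() plus del) so input storage can be released as early as possible, allocates outputs with empty_strided, and hands raw data pointers to the compiled C++ kernel through ctypes. A sketch of that shape, with a hypothetical pure-Python stand-in for the compiled kernel:

import ctypes
import torch
from torch import empty_strided

def fake_kernel(in_ptr, out_ptr):
    # stand-in for kernel_cpp_0, which in the real listings is C++ compiled via AsyncCompile
    pass

def call(args):
    arg0_1, = args
    args.clear()                         # drop the caller's reference to the input
    buf0 = empty_strided((1, 1), (1, 1), device='cpu', dtype=torch.float32)
    fake_kernel(ctypes.c_void_p(arg0_1.data_ptr()), ctypes.c_void_p(buf0.data_ptr()))
    del arg0_1                           # the input tensor may now be freed before call() returns
    return (buf0, )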
/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9421711Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9422227Z [2023-01-11 21:35:44,727] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 339 2023-01-11T21:41:23.9422839Z [2023-01-11 21:35:44,729] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 339 2023-01-11T21:41:23.9422848Z 2023-01-11T21:41:23.9423013Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9423124Z import torch 2023-01-11T21:41:23.9423243Z import random 2023-01-11T21:41:23.9423440Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9423647Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9423655Z 2023-01-11T21:41:23.9423795Z aten = torch.ops.aten 2023-01-11T21:41:23.9424020Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9424176Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9424185Z 2023-01-11T21:41:23.9424193Z 2023-01-11T21:41:23.9424351Z async_compile.wait(globals()) 2023-01-11T21:41:23.9424479Z del async_compile 2023-01-11T21:41:23.9424557Z 2023-01-11T21:41:23.9424682Z def call(args): 2023-01-11T21:41:23.9425027Z buf0 = empty_strided((0, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9425151Z return (buf0, ) 2023-01-11T21:41:23.9425160Z 2023-01-11T21:41:23.9425167Z 2023-01-11T21:41:23.9425296Z if __name__ == "__main__": 2023-01-11T21:41:23.9425480Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9425699Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9425870Z print_performance(lambda: call([])) 2023-01-11T21:41:23.9425878Z 2023-01-11T21:41:23.9425991Z ok (0.031s) 2023-01-11T21:41:23.9426618Z test_list_clearing_cpu (__main__.CpuTests) ... 
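test_linspace3_cpu above is the degenerate case: a linspace with zero steps yields an empty tensor, so the compiled graph contains no kernel at all, only the empty_strided((0, ), (1, )) allocation in call(). The eager equivalent (the endpoints below are assumed for illustration; only steps=0 matters):

import torch

out = torch.linspace(0, 2, steps=0, device='cpu')
assert out.numel() == 0  # nothing to compute, hence no kernel_cpp_0 in the listing above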
[2023-01-11 21:35:44,758] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph None 2023-01-11T21:41:23.9427108Z [2023-01-11 21:35:46,376] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph None 2023-01-11T21:41:23.9427117Z 2023-01-11T21:41:23.9427292Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9427408Z import torch 2023-01-11T21:41:23.9427512Z import random 2023-01-11T21:41:23.9427709Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9427915Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9427924Z 2023-01-11T21:41:23.9428054Z aten = torch.ops.aten 2023-01-11T21:41:23.9428285Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9428448Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9428456Z 2023-01-11T21:41:23.9428463Z 2023-01-11T21:41:23.9428702Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9429067Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9429263Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.9429443Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.9429616Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.9429727Z { 2023-01-11T21:41:23.9429900Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9430007Z { 2023-01-11T21:41:23.9430140Z #pragma omp for 2023-01-11T21:41:23.9430271Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:23.9430375Z { 2023-01-11T21:41:23.9430625Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.9430857Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.9431003Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.9431162Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.9431272Z } 2023-01-11T21:41:23.9431438Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.9431558Z for(long i0=24; i0<25; i0+=1) 2023-01-11T21:41:23.9431663Z { 2023-01-11T21:41:23.9431804Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.9431948Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.9432157Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.9432292Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.9432385Z } 2023-01-11T21:41:23.9432484Z } 2023-01-11T21:41:23.9432587Z } 2023-01-11T21:41:23.9432724Z ''') 2023-01-11T21:41:23.9432732Z 2023-01-11T21:41:23.9432740Z 2023-01-11T21:41:23.9432897Z async_compile.wait(globals()) 2023-01-11T21:41:23.9433019Z del async_compile 2023-01-11T21:41:23.9433028Z 2023-01-11T21:41:23.9433150Z def call(args): 2023-01-11T21:41:23.9433264Z x_1, y_1 = args 2023-01-11T21:41:23.9433368Z args.clear() 2023-01-11T21:41:23.9433771Z buf0 = empty_strided((5, 5), (5, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9434047Z kernel_cpp_0(c_void_p(x_1.data_ptr()), c_void_p(y_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.9434159Z del x_1 2023-01-11T21:41:23.9434267Z del y_1 2023-01-11T21:41:23.9434613Z buf1 = empty_strided((5, 5), (5, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9434812Z aten.mm.out(buf0, buf0, out=buf1) 2023-01-11T21:41:23.9434910Z return (buf1, ) 2023-01-11T21:41:23.9434919Z 2023-01-11T21:41:23.9434926Z 2023-01-11T21:41:23.9435056Z if __name__ == "__main__": 2023-01-11T21:41:23.9435248Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9435459Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9435803Z x_1 = rand_strided((5, 5), (5, 1), device='cpu', dtype=torch.float32) 
2023-01-11T21:41:23.9436131Z y_1 = rand_strided((5, 5), (5, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9436325Z print_performance(lambda: call([x_1, y_1])) 2023-01-11T21:41:23.9436332Z 2023-01-11T21:41:23.9436443Z ok (1.647s) 2023-01-11T21:41:23.9437303Z test_log1p_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9437533Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9438011Z [2023-01-11 21:35:46,394] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 340 2023-01-11T21:41:23.9438502Z [2023-01-11 21:35:48,014] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 340 2023-01-11T21:41:23.9439299Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9439514Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9440003Z [2023-01-11 21:35:48,032] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 341 2023-01-11T21:41:23.9440486Z [2023-01-11 21:35:49,654] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 341 2023-01-11T21:41:23.9441295Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9441515Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9441984Z [2023-01-11 21:35:49,672] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 342 2023-01-11T21:41:23.9442467Z [2023-01-11 21:35:51,319] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 342 2023-01-11T21:41:23.9443299Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9443520Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9443995Z [2023-01-11 21:35:51,346] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 343 2023-01-11T21:41:23.9444480Z [2023-01-11 21:35:52,923] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 343 2023-01-11T21:41:23.9445317Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. 
It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9445548Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9446018Z [2023-01-11 21:35:52,941] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 344 2023-01-11T21:41:23.9446028Z 2023-01-11T21:41:23.9446197Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9446316Z import torch 2023-01-11T21:41:23.9446434Z import random 2023-01-11T21:41:23.9446624Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9446840Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9446848Z 2023-01-11T21:41:23.9446981Z aten = torch.ops.aten 2023-01-11T21:41:23.9447218Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9447379Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9447387Z 2023-01-11T21:41:23.9447395Z 2023-01-11T21:41:23.9447639Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9448018Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9448229Z extern "C" void kernel(const half* __restrict__ in_ptr0, 2023-01-11T21:41:23.9448380Z half* __restrict__ out_ptr0, 2023-01-11T21:41:23.9448543Z half* __restrict__ out_ptr1) 2023-01-11T21:41:23.9448653Z { 2023-01-11T21:41:23.9448823Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9448926Z { 2023-01-11T21:41:23.9449051Z #pragma omp for 2023-01-11T21:41:23.9449190Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.9449284Z { 2023-01-11T21:41:23.9449390Z { 2023-01-11T21:41:23.9449500Z { 2023-01-11T21:41:23.9449693Z auto tmp0 = static_cast(in_ptr0[i0]); 2023-01-11T21:41:23.9449870Z auto tmp1 = std::log1p(tmp0); 2023-01-11T21:41:23.9450048Z auto tmp2 = static_cast(2); 2023-01-11T21:41:23.9450203Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.9450334Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.9450478Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.9450584Z } 2023-01-11T21:41:23.9450695Z } 2023-01-11T21:41:23.9450801Z } 2023-01-11T21:41:23.9450907Z } 2023-01-11T21:41:23.9451015Z } 2023-01-11T21:41:23.9451135Z ''') 2023-01-11T21:41:23.9451143Z 2023-01-11T21:41:23.9451151Z 2023-01-11T21:41:23.9451304Z async_compile.wait(globals()) 2023-01-11T21:41:23.9451431Z del async_compile 2023-01-11T21:41:23.9451439Z 2023-01-11T21:41:23.9451560Z def call(args): 2023-01-11T21:41:23.9451679Z arg0_1, = args 2023-01-11T21:41:23.9451799Z args.clear() 2023-01-11T21:41:23.9452150Z buf0 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float16) 2023-01-11T21:41:23.9452479Z buf1 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float16) 2023-01-11T21:41:23.9452811Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9452928Z del arg0_1 2023-01-11T21:41:23.9453059Z return (buf0, buf1, ) 2023-01-11T21:41:23.9453066Z 2023-01-11T21:41:23.9453072Z 2023-01-11T21:41:23.9453199Z if __name__ == "__main__": 2023-01-11T21:41:23.9453395Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9453613Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9453970Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float16) 2023-01-11T21:41:23.9454141Z print_performance(lambda: call([arg0_1])) 
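The test_log1p_cpu listing above (and the variants that follow for other sizes and dtypes) computes two outputs in one pass over the input: log1p(x) and 2 * log1p(x) share a single loop and are stored to separate buffers, with integer inputs converted to floating point first. A minimal eager-mode sketch of the fused pair:

import torch

def log1p_pair(x: torch.Tensor):
    # one traversal of x yields both results, mirroring the shared loop in kernel_cpp_0
    y = torch.log1p(x if x.is_floating_point() else x.float())
    return y, 2 * y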
2023-01-11T21:41:23.9454166Z 2023-01-11T21:41:23.9454174Z 2023-01-11T21:41:23.9454319Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9454441Z import torch 2023-01-11T21:41:23.9454564Z import random 2023-01-11T21:41:23.9454766Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9455051Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9455064Z 2023-01-11T21:41:23.9455197Z aten = torch.ops.aten 2023-01-11T21:41:23.9455431Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9455575Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9455582Z 2023-01-11T21:41:23.9455604Z 2023-01-11T21:41:23.9455820Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9456169Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9456361Z extern "C" void kernel(const half* __restrict__ in_ptr0, 2023-01-11T21:41:23.9456519Z half* __restrict__ out_ptr0, 2023-01-11T21:41:23.9456674Z half* __restrict__ out_ptr1) 2023-01-11T21:41:23.9456776Z { 2023-01-11T21:41:23.9456939Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9457021Z { 2023-01-11T21:41:23.9457154Z #pragma omp for 2023-01-11T21:41:23.9457288Z for(long i0=0; i0<201; i0+=1) 2023-01-11T21:41:23.9457397Z { 2023-01-11T21:41:23.9457497Z { 2023-01-11T21:41:23.9457602Z { 2023-01-11T21:41:23.9457785Z auto tmp0 = static_cast(in_ptr0[i0]); 2023-01-11T21:41:23.9457955Z auto tmp1 = std::log1p(tmp0); 2023-01-11T21:41:23.9458134Z auto tmp2 = static_cast(2); 2023-01-11T21:41:23.9458284Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.9458424Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.9458563Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.9458670Z } 2023-01-11T21:41:23.9458757Z } 2023-01-11T21:41:23.9458864Z } 2023-01-11T21:41:23.9458962Z } 2023-01-11T21:41:23.9459064Z } 2023-01-11T21:41:23.9459197Z ''') 2023-01-11T21:41:23.9459204Z 2023-01-11T21:41:23.9459210Z 2023-01-11T21:41:23.9459363Z async_compile.wait(globals()) 2023-01-11T21:41:23.9459483Z del async_compile 2023-01-11T21:41:23.9459495Z 2023-01-11T21:41:23.9459612Z def call(args): 2023-01-11T21:41:23.9459707Z arg0_1, = args 2023-01-11T21:41:23.9459814Z args.clear() 2023-01-11T21:41:23.9460146Z buf0 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float16) 2023-01-11T21:41:23.9460488Z buf1 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float16) 2023-01-11T21:41:23.9460771Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9460881Z del arg0_1 2023-01-11T21:41:23.9460995Z return (buf0, buf1, ) 2023-01-11T21:41:23.9461003Z 2023-01-11T21:41:23.9461010Z 2023-01-11T21:41:23.9461118Z if __name__ == "__main__": 2023-01-11T21:41:23.9461305Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9461502Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9461837Z arg0_1 = rand_strided((201, ), (1, ), device='cpu', dtype=torch.float16) 2023-01-11T21:41:23.9462016Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9462079Z 2023-01-11T21:41:23.9462087Z 2023-01-11T21:41:23.9462241Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9462472Z import torch 2023-01-11T21:41:23.9462594Z import random 2023-01-11T21:41:23.9462772Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9462980Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9462988Z 2023-01-11T21:41:23.9463115Z aten = torch.ops.aten 
2023-01-11T21:41:23.9463347Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9463497Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9463505Z 2023-01-11T21:41:23.9463512Z 2023-01-11T21:41:23.9463739Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9464095Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9464299Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.9464448Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.9464684Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.9464778Z { 2023-01-11T21:41:23.9464948Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9465053Z { 2023-01-11T21:41:23.9465185Z #pragma omp for 2023-01-11T21:41:23.9465316Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.9465403Z { 2023-01-11T21:41:23.9465632Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.9465783Z auto tmp1 = tmp0.log1p(); 2023-01-11T21:41:23.9466012Z auto tmp2 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.9466152Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.9466307Z tmp1.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.9466454Z tmp3.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.9466538Z } 2023-01-11T21:41:23.9466693Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.9466826Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.9466939Z { 2023-01-11T21:41:23.9467080Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.9467232Z auto tmp1 = std::log1p(tmp0); 2023-01-11T21:41:23.9467402Z auto tmp2 = static_cast(2); 2023-01-11T21:41:23.9467522Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.9467651Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.9467780Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.9467875Z } 2023-01-11T21:41:23.9467968Z } 2023-01-11T21:41:23.9468064Z } 2023-01-11T21:41:23.9468188Z ''') 2023-01-11T21:41:23.9468197Z 2023-01-11T21:41:23.9468204Z 2023-01-11T21:41:23.9468335Z async_compile.wait(globals()) 2023-01-11T21:41:23.9468456Z del async_compile 2023-01-11T21:41:23.9468465Z 2023-01-11T21:41:23.9468578Z def call(args): 2023-01-11T21:41:23.9468689Z arg0_1, = args 2023-01-11T21:41:23.9468806Z args.clear() 2023-01-11T21:41:23.9469141Z buf0 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9469485Z buf1 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9469768Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9469865Z del arg0_1 2023-01-11T21:41:23.9469988Z return (buf0, buf1, ) 2023-01-11T21:41:23.9469995Z 2023-01-11T21:41:23.9470002Z 2023-01-11T21:41:23.9470124Z if __name__ == "__main__": 2023-01-11T21:41:23.9470319Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9470529Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9470874Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9471055Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9471061Z 2023-01-11T21:41:23.9471069Z 2023-01-11T21:41:23.9471231Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9471327Z import torch 2023-01-11T21:41:23.9471431Z import random 2023-01-11T21:41:23.9471709Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9471914Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9471922Z 2023-01-11T21:41:23.9472050Z aten = torch.ops.aten 2023-01-11T21:41:23.9472284Z 
assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9472440Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9472448Z 2023-01-11T21:41:23.9472455Z 2023-01-11T21:41:23.9472687Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9473027Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9473229Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.9473386Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.9473552Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.9473652Z { 2023-01-11T21:41:23.9473875Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9473989Z { 2023-01-11T21:41:23.9474165Z #pragma omp for 2023-01-11T21:41:23.9474295Z for(long i0=0; i0<25; i0+=1) 2023-01-11T21:41:23.9474400Z { 2023-01-11T21:41:23.9474625Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.9474769Z auto tmp1 = tmp0.log1p(); 2023-01-11T21:41:23.9474990Z auto tmp2 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.9475131Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.9475267Z tmp1.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.9475418Z tmp3.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.9475519Z } 2023-01-11T21:41:23.9475681Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.9475817Z for(long i0=200; i0<201; i0+=1) 2023-01-11T21:41:23.9475917Z { 2023-01-11T21:41:23.9476058Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.9476199Z auto tmp1 = std::log1p(tmp0); 2023-01-11T21:41:23.9476367Z auto tmp2 = static_cast(2); 2023-01-11T21:41:23.9476508Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.9476641Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.9476768Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.9476872Z } 2023-01-11T21:41:23.9476975Z } 2023-01-11T21:41:23.9477060Z } 2023-01-11T21:41:23.9477201Z ''') 2023-01-11T21:41:23.9477211Z 2023-01-11T21:41:23.9477218Z 2023-01-11T21:41:23.9477366Z async_compile.wait(globals()) 2023-01-11T21:41:23.9477485Z del async_compile 2023-01-11T21:41:23.9477493Z 2023-01-11T21:41:23.9477611Z def call(args): 2023-01-11T21:41:23.9477723Z arg0_1, = args 2023-01-11T21:41:23.9477837Z args.clear() 2023-01-11T21:41:23.9478171Z buf0 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9478497Z buf1 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9478779Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9478891Z del arg0_1 2023-01-11T21:41:23.9479018Z return (buf0, buf1, ) 2023-01-11T21:41:23.9479025Z 2023-01-11T21:41:23.9479032Z 2023-01-11T21:41:23.9479161Z if __name__ == "__main__": 2023-01-11T21:41:23.9479353Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9479568Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9479891Z arg0_1 = rand_strided((201, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9480071Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9480077Z 2023-01-11T21:41:23.9480548Z [2023-01-11 21:35:54,650] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 344 2023-01-11T21:41:23.9481321Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9481577Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9482027Z [2023-01-11 21:35:54,668] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 345 2023-01-11T21:41:23.9482494Z [2023-01-11 21:35:56,303] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 345 2023-01-11T21:41:23.9483256Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9483461Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9483961Z [2023-01-11 21:35:56,322] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 346 2023-01-11T21:41:23.9484421Z [2023-01-11 21:35:57,885] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 346 2023-01-11T21:41:23.9485169Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9485358Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9485796Z [2023-01-11 21:35:57,902] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 347 2023-01-11T21:41:23.9486251Z [2023-01-11 21:35:59,533] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 347 2023-01-11T21:41:23.9486999Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9487209Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9487650Z [2023-01-11 21:35:59,556] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 348 2023-01-11T21:41:23.9487658Z 2023-01-11T21:41:23.9487811Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9487930Z import torch 2023-01-11T21:41:23.9488045Z import random 2023-01-11T21:41:23.9488223Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9488426Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9488434Z 2023-01-11T21:41:23.9488567Z aten = torch.ops.aten 2023-01-11T21:41:23.9488789Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9488941Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9488950Z 2023-01-11T21:41:23.9488957Z 2023-01-11T21:41:23.9489185Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9489538Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9489744Z extern "C" void kernel(const double* __restrict__ in_ptr0, 2023-01-11T21:41:23.9489906Z double* __restrict__ out_ptr0, 2023-01-11T21:41:23.9490055Z double* __restrict__ out_ptr1) 2023-01-11T21:41:23.9490152Z { 2023-01-11T21:41:23.9490311Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9490411Z { 2023-01-11T21:41:23.9490533Z #pragma omp for 2023-01-11T21:41:23.9490667Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.9490754Z { 2023-01-11T21:41:23.9490852Z { 2023-01-11T21:41:23.9491004Z { 2023-01-11T21:41:23.9491152Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.9491320Z auto tmp1 = std::log1p(tmp0); 2023-01-11T21:41:23.9491487Z auto tmp2 = static_cast(2); 2023-01-11T21:41:23.9491637Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.9491755Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.9491893Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.9491998Z } 2023-01-11T21:41:23.9492098Z } 2023-01-11T21:41:23.9492195Z } 2023-01-11T21:41:23.9492288Z } 2023-01-11T21:41:23.9492385Z } 2023-01-11T21:41:23.9492505Z ''') 2023-01-11T21:41:23.9492513Z 2023-01-11T21:41:23.9492521Z 2023-01-11T21:41:23.9492669Z async_compile.wait(globals()) 2023-01-11T21:41:23.9492786Z del async_compile 2023-01-11T21:41:23.9492794Z 2023-01-11T21:41:23.9492909Z def call(args): 2023-01-11T21:41:23.9493024Z arg0_1, = args 2023-01-11T21:41:23.9493177Z args.clear() 2023-01-11T21:41:23.9493510Z buf0 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.9493816Z buf1 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.9494087Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9494196Z del arg0_1 2023-01-11T21:41:23.9494323Z return (buf0, buf1, ) 2023-01-11T21:41:23.9494331Z 2023-01-11T21:41:23.9494340Z 2023-01-11T21:41:23.9494460Z if __name__ == "__main__": 2023-01-11T21:41:23.9494643Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9494852Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9495184Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.9495353Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9495377Z 2023-01-11T21:41:23.9495383Z 2023-01-11T21:41:23.9495528Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9495638Z import torch 
2023-01-11T21:41:23.9495753Z import random 2023-01-11T21:41:23.9495938Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9496141Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9496147Z 2023-01-11T21:41:23.9496276Z aten = torch.ops.aten 2023-01-11T21:41:23.9496502Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9496637Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9496645Z 2023-01-11T21:41:23.9496667Z 2023-01-11T21:41:23.9496883Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9497238Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9497442Z extern "C" void kernel(const double* __restrict__ in_ptr0, 2023-01-11T21:41:23.9497606Z double* __restrict__ out_ptr0, 2023-01-11T21:41:23.9497768Z double* __restrict__ out_ptr1) 2023-01-11T21:41:23.9497874Z { 2023-01-11T21:41:23.9498035Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9498118Z { 2023-01-11T21:41:23.9498246Z #pragma omp for 2023-01-11T21:41:23.9498388Z for(long i0=0; i0<201; i0+=1) 2023-01-11T21:41:23.9498497Z { 2023-01-11T21:41:23.9498602Z { 2023-01-11T21:41:23.9498706Z { 2023-01-11T21:41:23.9498841Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.9499006Z auto tmp1 = std::log1p(tmp0); 2023-01-11T21:41:23.9499184Z auto tmp2 = static_cast(2); 2023-01-11T21:41:23.9499332Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.9499474Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.9499603Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.9499709Z } 2023-01-11T21:41:23.9499810Z } 2023-01-11T21:41:23.9499898Z } 2023-01-11T21:41:23.9500001Z } 2023-01-11T21:41:23.9500142Z } 2023-01-11T21:41:23.9500278Z ''') 2023-01-11T21:41:23.9500286Z 2023-01-11T21:41:23.9500294Z 2023-01-11T21:41:23.9500443Z async_compile.wait(globals()) 2023-01-11T21:41:23.9500564Z del async_compile 2023-01-11T21:41:23.9500572Z 2023-01-11T21:41:23.9500689Z def call(args): 2023-01-11T21:41:23.9500790Z arg0_1, = args 2023-01-11T21:41:23.9500905Z args.clear() 2023-01-11T21:41:23.9501232Z buf0 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.9501574Z buf1 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.9501854Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9501966Z del arg0_1 2023-01-11T21:41:23.9502095Z return (buf0, buf1, ) 2023-01-11T21:41:23.9502104Z 2023-01-11T21:41:23.9502111Z 2023-01-11T21:41:23.9502240Z if __name__ == "__main__": 2023-01-11T21:41:23.9502567Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9502861Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9503214Z arg0_1 = rand_strided((201, ), (1, ), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.9503395Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9503403Z 2023-01-11T21:41:23.9503411Z 2023-01-11T21:41:23.9503564Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9503677Z import torch 2023-01-11T21:41:23.9503796Z import random 2023-01-11T21:41:23.9503983Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9504191Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9504200Z 2023-01-11T21:41:23.9504332Z aten = torch.ops.aten 2023-01-11T21:41:23.9504561Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9504719Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9504727Z 
2023-01-11T21:41:23.9504735Z 2023-01-11T21:41:23.9504976Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9505340Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9505537Z extern "C" void kernel(const int* __restrict__ in_ptr0, 2023-01-11T21:41:23.9505682Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.9505847Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.9505946Z { 2023-01-11T21:41:23.9506115Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9506223Z { 2023-01-11T21:41:23.9506358Z #pragma omp for 2023-01-11T21:41:23.9506497Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.9506589Z { 2023-01-11T21:41:23.9506691Z { 2023-01-11T21:41:23.9506800Z { 2023-01-11T21:41:23.9506952Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.9507131Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.9507302Z auto tmp2 = std::log1p(tmp1); 2023-01-11T21:41:23.9507478Z auto tmp3 = static_cast(2); 2023-01-11T21:41:23.9507622Z auto tmp4 = tmp2 * tmp3; 2023-01-11T21:41:23.9507763Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.9507900Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:23.9508007Z } 2023-01-11T21:41:23.9508116Z } 2023-01-11T21:41:23.9508216Z } 2023-01-11T21:41:23.9508315Z } 2023-01-11T21:41:23.9508399Z } 2023-01-11T21:41:23.9508533Z ''') 2023-01-11T21:41:23.9508541Z 2023-01-11T21:41:23.9508549Z 2023-01-11T21:41:23.9508704Z async_compile.wait(globals()) 2023-01-11T21:41:23.9508827Z del async_compile 2023-01-11T21:41:23.9508835Z 2023-01-11T21:41:23.9508954Z def call(args): 2023-01-11T21:41:23.9509071Z arg0_1, = args 2023-01-11T21:41:23.9509189Z args.clear() 2023-01-11T21:41:23.9509514Z buf0 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9509847Z buf1 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9510199Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9510320Z del arg0_1 2023-01-11T21:41:23.9510455Z return (buf0, buf1, ) 2023-01-11T21:41:23.9510463Z 2023-01-11T21:41:23.9510471Z 2023-01-11T21:41:23.9510603Z if __name__ == "__main__": 2023-01-11T21:41:23.9510802Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9511011Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9511340Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:23.9511531Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9511539Z 2023-01-11T21:41:23.9511546Z 2023-01-11T21:41:23.9511702Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9511826Z import torch 2023-01-11T21:41:23.9511948Z import random 2023-01-11T21:41:23.9512152Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9512363Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9512420Z 2023-01-11T21:41:23.9512548Z aten = torch.ops.aten 2023-01-11T21:41:23.9512765Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9512916Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9512925Z 2023-01-11T21:41:23.9512932Z 2023-01-11T21:41:23.9513162Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9513528Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9513788Z extern "C" void kernel(const int* __restrict__ in_ptr0, 2023-01-11T21:41:23.9513952Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.9514114Z float* __restrict__ out_ptr1) 
2023-01-11T21:41:23.9514211Z { 2023-01-11T21:41:23.9514369Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9514468Z { 2023-01-11T21:41:23.9514592Z #pragma omp for 2023-01-11T21:41:23.9514733Z for(long i0=0; i0<201; i0+=1) 2023-01-11T21:41:23.9514841Z { 2023-01-11T21:41:23.9514945Z { 2023-01-11T21:41:23.9515038Z { 2023-01-11T21:41:23.9515190Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.9515370Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.9515540Z auto tmp2 = std::log1p(tmp1); 2023-01-11T21:41:23.9515717Z auto tmp3 = static_cast(2); 2023-01-11T21:41:23.9515861Z auto tmp4 = tmp2 * tmp3; 2023-01-11T21:41:23.9515994Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.9516133Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:23.9516226Z } 2023-01-11T21:41:23.9516331Z } 2023-01-11T21:41:23.9516434Z } 2023-01-11T21:41:23.9516537Z } 2023-01-11T21:41:23.9516630Z } 2023-01-11T21:41:23.9516756Z ''') 2023-01-11T21:41:23.9516764Z 2023-01-11T21:41:23.9516771Z 2023-01-11T21:41:23.9516908Z async_compile.wait(globals()) 2023-01-11T21:41:23.9517025Z del async_compile 2023-01-11T21:41:23.9517042Z 2023-01-11T21:41:23.9517156Z def call(args): 2023-01-11T21:41:23.9517277Z arg0_1, = args 2023-01-11T21:41:23.9517394Z args.clear() 2023-01-11T21:41:23.9517744Z buf0 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9518090Z buf1 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9518369Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9518471Z del arg0_1 2023-01-11T21:41:23.9518606Z return (buf0, buf1, ) 2023-01-11T21:41:23.9518614Z 2023-01-11T21:41:23.9518622Z 2023-01-11T21:41:23.9518750Z if __name__ == "__main__": 2023-01-11T21:41:23.9518950Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9519165Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9519506Z arg0_1 = rand_strided((201, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:23.9519745Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9519754Z 2023-01-11T21:41:23.9520243Z [2023-01-11 21:36:01,293] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 348 2023-01-11T21:41:23.9521020Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9521219Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9521682Z [2023-01-11 21:36:01,310] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 349 2023-01-11T21:41:23.9522151Z [2023-01-11 21:36:02,916] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 349 2023-01-11T21:41:23.9522165Z 2023-01-11T21:41:23.9522381Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9522501Z import torch 2023-01-11T21:41:23.9522621Z import random 2023-01-11T21:41:23.9522819Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9523034Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9523042Z 2023-01-11T21:41:23.9523160Z aten = torch.ops.aten 2023-01-11T21:41:23.9523400Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9523554Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9523562Z 2023-01-11T21:41:23.9523569Z 2023-01-11T21:41:23.9523807Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9524162Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9524363Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.9524531Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.9524700Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.9524785Z { 2023-01-11T21:41:23.9524956Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9525058Z { 2023-01-11T21:41:23.9525194Z #pragma omp for 2023-01-11T21:41:23.9525334Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.9525431Z { 2023-01-11T21:41:23.9525535Z { 2023-01-11T21:41:23.9525635Z { 2023-01-11T21:41:23.9525792Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.9525976Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.9526147Z auto tmp2 = std::log1p(tmp1); 2023-01-11T21:41:23.9526322Z auto tmp3 = static_cast(2); 2023-01-11T21:41:23.9526474Z auto tmp4 = tmp2 * tmp3; 2023-01-11T21:41:23.9526614Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.9526735Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:23.9526845Z } 2023-01-11T21:41:23.9526950Z } 2023-01-11T21:41:23.9527061Z } 2023-01-11T21:41:23.9527162Z } 2023-01-11T21:41:23.9527265Z } 2023-01-11T21:41:23.9527382Z ''') 2023-01-11T21:41:23.9527406Z 2023-01-11T21:41:23.9527414Z 2023-01-11T21:41:23.9527549Z async_compile.wait(globals()) 2023-01-11T21:41:23.9527667Z del async_compile 2023-01-11T21:41:23.9527675Z 2023-01-11T21:41:23.9527788Z def call(args): 2023-01-11T21:41:23.9527905Z arg0_1, = args 2023-01-11T21:41:23.9528026Z args.clear() 2023-01-11T21:41:23.9528368Z buf0 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9528706Z buf1 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9528971Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9529091Z del arg0_1 2023-01-11T21:41:23.9529220Z return (buf0, buf1, ) 2023-01-11T21:41:23.9529229Z 2023-01-11T21:41:23.9529236Z 2023-01-11T21:41:23.9529362Z if __name__ == "__main__": 2023-01-11T21:41:23.9529613Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9529829Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9530169Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9530356Z 
print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9530366Z 2023-01-11T21:41:23.9530373Z 2023-01-11T21:41:23.9530525Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9530626Z import torch 2023-01-11T21:41:23.9530751Z import random 2023-01-11T21:41:23.9530946Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9531157Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9531166Z 2023-01-11T21:41:23.9531294Z aten = torch.ops.aten 2023-01-11T21:41:23.9531527Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9531685Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9531692Z 2023-01-11T21:41:23.9531702Z 2023-01-11T21:41:23.9531968Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9532329Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9532532Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.9532700Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.9532865Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.9532971Z { 2023-01-11T21:41:23.9533138Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9533227Z { 2023-01-11T21:41:23.9533354Z #pragma omp for 2023-01-11T21:41:23.9533498Z for(long i0=0; i0<201; i0+=1) 2023-01-11T21:41:23.9533600Z { 2023-01-11T21:41:23.9533711Z { 2023-01-11T21:41:23.9533818Z { 2023-01-11T21:41:23.9533968Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.9534132Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:23.9534305Z auto tmp2 = std::log1p(tmp1); 2023-01-11T21:41:23.9534484Z auto tmp3 = static_cast(2); 2023-01-11T21:41:23.9534635Z auto tmp4 = tmp2 * tmp3; 2023-01-11T21:41:23.9534777Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.9534917Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:23.9535029Z } 2023-01-11T21:41:23.9535113Z } 2023-01-11T21:41:23.9535215Z } 2023-01-11T21:41:23.9535319Z } 2023-01-11T21:41:23.9535420Z } 2023-01-11T21:41:23.9535561Z ''') 2023-01-11T21:41:23.9535570Z 2023-01-11T21:41:23.9535577Z 2023-01-11T21:41:23.9535730Z async_compile.wait(globals()) 2023-01-11T21:41:23.9535854Z del async_compile 2023-01-11T21:41:23.9535862Z 2023-01-11T21:41:23.9535965Z def call(args): 2023-01-11T21:41:23.9536084Z arg0_1, = args 2023-01-11T21:41:23.9536206Z args.clear() 2023-01-11T21:41:23.9536553Z buf0 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9536896Z buf1 = empty_strided((201, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9537179Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9537296Z del arg0_1 2023-01-11T21:41:23.9537426Z return (buf0, buf1, ) 2023-01-11T21:41:23.9537435Z 2023-01-11T21:41:23.9537442Z 2023-01-11T21:41:23.9537555Z if __name__ == "__main__": 2023-01-11T21:41:23.9537753Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9537954Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9538292Z arg0_1 = rand_strided((201, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9538475Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9538483Z 2023-01-11T21:41:23.9538595Z ok (16.541s) 2023-01-11T21:41:23.9539429Z test_log2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. 
This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9539691Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9540158Z [2023-01-11 21:36:02,938] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 350 2023-01-11T21:41:23.9540625Z [2023-01-11 21:36:04,580] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 350 2023-01-11T21:41:23.9540649Z 2023-01-11T21:41:23.9540796Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9540914Z import torch 2023-01-11T21:41:23.9541034Z import random 2023-01-11T21:41:23.9541238Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9541450Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9541457Z 2023-01-11T21:41:23.9541657Z aten = torch.ops.aten 2023-01-11T21:41:23.9541891Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9542034Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9542042Z 2023-01-11T21:41:23.9542065Z 2023-01-11T21:41:23.9542278Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9542766Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9542967Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.9543132Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.9543296Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.9543399Z { 2023-01-11T21:41:23.9543567Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9543655Z { 2023-01-11T21:41:23.9543780Z #pragma omp for 2023-01-11T21:41:23.9543918Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.9544025Z { 2023-01-11T21:41:23.9544263Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.9544404Z auto tmp1 = tmp0.log(); 2023-01-11T21:41:23.9544645Z auto tmp2 = at::vec::Vectorized(static_cast(1.4426950408889634)); 2023-01-11T21:41:23.9544779Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.9545011Z auto tmp4 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:23.9545154Z auto tmp5 = tmp0 + tmp4; 2023-01-11T21:41:23.9545295Z auto tmp6 = tmp5.log(); 2023-01-11T21:41:23.9545439Z auto tmp7 = tmp6 * tmp2; 2023-01-11T21:41:23.9545665Z auto tmp8 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:23.9545886Z auto tmp9 = tmp7 - tmp8; 2023-01-11T21:41:23.9546028Z tmp3.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.9546182Z tmp9.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.9546286Z } 2023-01-11T21:41:23.9546445Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.9546595Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.9546701Z { 2023-01-11T21:41:23.9546846Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.9546978Z auto tmp1 = std::log(tmp0); 2023-01-11T21:41:23.9547164Z auto tmp2 = static_cast(1.4426950408889634); 2023-01-11T21:41:23.9547309Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.9547478Z auto tmp4 = static_cast(1); 2023-01-11T21:41:23.9547621Z auto tmp5 = tmp0 + tmp4; 2023-01-11T21:41:23.9547775Z auto tmp6 = std::log(tmp5); 2023-01-11T21:41:23.9547917Z auto tmp7 = tmp6 * tmp2; 2023-01-11T21:41:23.9548074Z auto tmp8 = static_cast(2); 2023-01-11T21:41:23.9548294Z auto tmp9 = tmp7 - tmp8; 2023-01-11T21:41:23.9548426Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:23.9548561Z out_ptr1[i0] = tmp9; 2023-01-11T21:41:23.9548674Z } 2023-01-11T21:41:23.9548853Z } 2023-01-11T21:41:23.9548964Z } 
2023-01-11T21:41:23.9549084Z ''') 2023-01-11T21:41:23.9549093Z 2023-01-11T21:41:23.9549116Z 2023-01-11T21:41:23.9549258Z async_compile.wait(globals()) 2023-01-11T21:41:23.9549380Z del async_compile 2023-01-11T21:41:23.9549388Z 2023-01-11T21:41:23.9549506Z def call(args): 2023-01-11T21:41:23.9549618Z arg0_1, = args 2023-01-11T21:41:23.9549738Z args.clear() 2023-01-11T21:41:23.9550077Z buf0 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9550420Z buf1 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9550691Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9550807Z del arg0_1 2023-01-11T21:41:23.9550940Z return (buf0, buf1, ) 2023-01-11T21:41:23.9550948Z 2023-01-11T21:41:23.9550955Z 2023-01-11T21:41:23.9551084Z if __name__ == "__main__": 2023-01-11T21:41:23.9551279Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9551564Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9551906Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9552089Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9552098Z 2023-01-11T21:41:23.9552196Z ok (1.683s) 2023-01-11T21:41:23.9553037Z test_log_fp64_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9553255Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9553791Z [2023-01-11 21:36:04,666] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 351 2023-01-11T21:41:23.9554283Z [2023-01-11 21:36:06,524] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 351 2023-01-11T21:41:23.9554292Z 2023-01-11T21:41:23.9554447Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9554569Z import torch 2023-01-11T21:41:23.9554685Z import random 2023-01-11T21:41:23.9554885Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9555081Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9555090Z 2023-01-11T21:41:23.9555215Z aten = torch.ops.aten 2023-01-11T21:41:23.9555452Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9555612Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9555620Z 2023-01-11T21:41:23.9555627Z 2023-01-11T21:41:23.9555859Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9556222Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9556445Z extern "C" void kernel(const double* __restrict__ in_ptr0, 2023-01-11T21:41:23.9556622Z double* __restrict__ out_ptr0, 2023-01-11T21:41:23.9556767Z double* __restrict__ out_ptr1) 2023-01-11T21:41:23.9556869Z { 2023-01-11T21:41:23.9557039Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9557142Z { 2023-01-11T21:41:23.9557267Z #pragma omp for 2023-01-11T21:41:23.9557406Z for(long i0=0; i0<1024; i0+=1) 2023-01-11T21:41:23.9557512Z { 2023-01-11T21:41:23.9557601Z { 2023-01-11T21:41:23.9557708Z { 2023-01-11T21:41:23.9557862Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.9558034Z auto tmp1 = 
std::log(tmp0); 2023-01-11T21:41:23.9558223Z auto tmp2 = static_cast(1.4426950408889634); 2023-01-11T21:41:23.9558379Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:23.9558515Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.9558640Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.9558804Z } 2023-01-11T21:41:23.9558917Z } 2023-01-11T21:41:23.9559024Z } 2023-01-11T21:41:23.9559129Z } 2023-01-11T21:41:23.9559225Z } 2023-01-11T21:41:23.9559348Z ''') 2023-01-11T21:41:23.9559356Z 2023-01-11T21:41:23.9559378Z 2023-01-11T21:41:23.9559516Z async_compile.wait(globals()) 2023-01-11T21:41:23.9559636Z del async_compile 2023-01-11T21:41:23.9559644Z 2023-01-11T21:41:23.9559768Z def call(args): 2023-01-11T21:41:23.9559889Z arg0_1, = args 2023-01-11T21:41:23.9560008Z args.clear() 2023-01-11T21:41:23.9560353Z buf0 = empty_strided((1024, ), (1, ), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.9560703Z buf1 = empty_strided((1024, ), (1, ), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.9560966Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9561085Z del arg0_1 2023-01-11T21:41:23.9561215Z return (buf0, buf1, ) 2023-01-11T21:41:23.9561268Z 2023-01-11T21:41:23.9561276Z 2023-01-11T21:41:23.9561404Z if __name__ == "__main__": 2023-01-11T21:41:23.9561599Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9561808Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9562152Z arg0_1 = rand_strided((1024, ), (1, ), device='cpu', dtype=torch.float64) 2023-01-11T21:41:23.9562336Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9562343Z 2023-01-11T21:41:23.9562441Z ok (1.919s) 2023-01-11T21:41:23.9563301Z test_log_softmax_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9563516Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9563986Z [2023-01-11 21:36:06,573] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 352 2023-01-11T21:41:23.9563995Z 2023-01-11T21:41:23.9564155Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9564273Z import torch 2023-01-11T21:41:23.9564393Z import random 2023-01-11T21:41:23.9564590Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9564803Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9564811Z 2023-01-11T21:41:23.9564931Z aten = torch.ops.aten 2023-01-11T21:41:23.9565161Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9565318Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9565325Z 2023-01-11T21:41:23.9565333Z 2023-01-11T21:41:23.9565564Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9565930Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9566142Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.9566317Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.9566485Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.9566636Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.9566791Z float* __restrict__ out_ptr2, 2023-01-11T21:41:23.9566943Z float* __restrict__ out_ptr3, 2023-01-11T21:41:23.9567099Z float* __restrict__ out_ptr4, 2023-01-11T21:41:23.9567255Z float* __restrict__ out_ptr5, 2023-01-11T21:41:23.9567419Z float* __restrict__ out_ptr6, 2023-01-11T21:41:23.9567578Z float* __restrict__ out_ptr7, 2023-01-11T21:41:23.9567723Z float* __restrict__ out_ptr8) 2023-01-11T21:41:23.9567828Z { 2023-01-11T21:41:23.9567997Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9568139Z { 2023-01-11T21:41:23.9568273Z #pragma omp for 2023-01-11T21:41:23.9568415Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.9568524Z { 2023-01-11T21:41:23.9568607Z { 2023-01-11T21:41:23.9569303Z #pragma omp declare reduction(max:at::vec::Vectorized:omp_out = at::vec::maximum(omp_out, omp_in)) initializer(omp_priv={{-std::numeric_limits::infinity()}}) 2023-01-11T21:41:23.9569705Z float tmp3 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9569908Z auto tmp3_vec = at::vec::Vectorized(tmp3); 2023-01-11T21:41:23.9570054Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:23.9570159Z { 2023-01-11T21:41:23.9570395Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i0) + (8*i1)); 2023-01-11T21:41:23.9570644Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + (8*i0) + (8*i1)); 2023-01-11T21:41:23.9570842Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.9571030Z tmp3_vec = at::vec::maximum(tmp3_vec, tmp2); 2023-01-11T21:41:23.9571130Z } 2023-01-11T21:41:23.9571471Z tmp3 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return at::vec::maximum(x, y);}, tmp3_vec); 2023-01-11T21:41:23.9571657Z #pragma omp simd simdlen(4) reduction(max:tmp3) 2023-01-11T21:41:23.9571792Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:23.9571890Z { 2023-01-11T21:41:23.9572034Z auto tmp0 = in_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:23.9572188Z auto tmp1 = in_ptr1[i1 + (8*i0)]; 2023-01-11T21:41:23.9572314Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.9572463Z tmp3 = std::max(tmp3, tmp2); 2023-01-11T21:41:23.9572565Z } 2023-01-11T21:41:23.9572687Z out_ptr0[i0] = tmp3; 
2023-01-11T21:41:23.9572788Z } 2023-01-11T21:41:23.9572876Z } 2023-01-11T21:41:23.9572972Z #pragma omp for 2023-01-11T21:41:23.9573095Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.9573189Z { 2023-01-11T21:41:23.9573283Z { 2023-01-11T21:41:23.9573589Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:23.9573709Z float tmp6 = 0; 2023-01-11T21:41:23.9573896Z auto tmp6_vec = at::vec::Vectorized(tmp6); 2023-01-11T21:41:23.9574546Z #pragma omp declare reduction(max:at::vec::Vectorized:omp_out = at::vec::maximum(omp_out, omp_in)) initializer(omp_priv={{-std::numeric_limits::infinity()}}) 2023-01-11T21:41:23.9574886Z float tmp7 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9575056Z auto tmp7_vec = at::vec::Vectorized(tmp7); 2023-01-11T21:41:23.9575193Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:23.9575289Z { 2023-01-11T21:41:23.9575502Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i0) + (8*i1)); 2023-01-11T21:41:23.9575718Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + (8*i0) + (8*i1)); 2023-01-11T21:41:23.9575913Z auto tmp3 = at::vec::Vectorized(out_ptr0[i0]); 2023-01-11T21:41:23.9576049Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.9576258Z auto tmp4 = tmp2 - tmp3; 2023-01-11T21:41:23.9576385Z auto tmp5 = tmp4.exp(); 2023-01-11T21:41:23.9576504Z tmp6_vec += tmp5; 2023-01-11T21:41:23.9576686Z tmp7_vec = at::vec::maximum(tmp7_vec, tmp1); 2023-01-11T21:41:23.9576778Z } 2023-01-11T21:41:23.9577088Z tmp6 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp6_vec); 2023-01-11T21:41:23.9577513Z tmp7 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return at::vec::maximum(x, y);}, tmp7_vec); 2023-01-11T21:41:23.9577730Z #pragma omp simd simdlen(4) reduction(+:tmp6) reduction(max:tmp7) 2023-01-11T21:41:23.9577862Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:23.9577941Z { 2023-01-11T21:41:23.9578085Z auto tmp0 = in_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:23.9578234Z auto tmp1 = in_ptr1[i1 + (8*i0)]; 2023-01-11T21:41:23.9578371Z auto tmp3 = out_ptr0[i0]; 2023-01-11T21:41:23.9578508Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.9578723Z auto tmp4 = tmp2 - tmp3; 2023-01-11T21:41:23.9578875Z auto tmp5 = std::exp(tmp4); 2023-01-11T21:41:23.9578988Z tmp6 += tmp5; 2023-01-11T21:41:23.9579119Z tmp7 = std::max(tmp7, tmp1); 2023-01-11T21:41:23.9579280Z } 2023-01-11T21:41:23.9579396Z out_ptr1[i0] = tmp6; 2023-01-11T21:41:23.9579515Z out_ptr2[i0] = tmp7; 2023-01-11T21:41:23.9579606Z } 2023-01-11T21:41:23.9579693Z } 2023-01-11T21:41:23.9579801Z #pragma omp for 2023-01-11T21:41:23.9579923Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.9580011Z { 2023-01-11T21:41:23.9580101Z { 2023-01-11T21:41:23.9580193Z { 2023-01-11T21:41:23.9580527Z float tmp1 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9580656Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:23.9580735Z { 2023-01-11T21:41:23.9580829Z { 2023-01-11T21:41:23.9580966Z auto tmp0 = in_ptr0[i0 + (8*i1)]; 2023-01-11T21:41:23.9581112Z tmp1 = std::max(tmp1, tmp0); 2023-01-11T21:41:23.9581211Z } 2023-01-11T21:41:23.9581306Z } 2023-01-11T21:41:23.9581420Z out_ptr3[i0] = tmp1; 2023-01-11T21:41:23.9581489Z } 2023-01-11T21:41:23.9581579Z } 2023-01-11T21:41:23.9581659Z } 2023-01-11T21:41:23.9581767Z #pragma omp for 2023-01-11T21:41:23.9581877Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.9581957Z { 2023-01-11T21:41:23.9582036Z { 2023-01-11T21:41:23.9582108Z { 2023-01-11T21:41:23.9582215Z float tmp4 = 0; 
2023-01-11T21:41:23.9582495Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:23.9582589Z { 2023-01-11T21:41:23.9582680Z { 2023-01-11T21:41:23.9582816Z auto tmp0 = in_ptr0[i0 + (8*i1)]; 2023-01-11T21:41:23.9582946Z auto tmp1 = out_ptr3[i0]; 2023-01-11T21:41:23.9583160Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.9583319Z auto tmp3 = std::exp(tmp2); 2023-01-11T21:41:23.9583434Z tmp4 += tmp3; 2023-01-11T21:41:23.9583524Z } 2023-01-11T21:41:23.9583613Z } 2023-01-11T21:41:23.9583725Z out_ptr4[i0] = tmp4; 2023-01-11T21:41:23.9583818Z } 2023-01-11T21:41:23.9583892Z } 2023-01-11T21:41:23.9583972Z } 2023-01-11T21:41:23.9584077Z #pragma omp for 2023-01-11T21:41:23.9584192Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.9584278Z { 2023-01-11T21:41:23.9584363Z { 2023-01-11T21:41:23.9584634Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:23.9584735Z float tmp4 = 0; 2023-01-11T21:41:23.9584910Z auto tmp4_vec = at::vec::Vectorized(tmp4); 2023-01-11T21:41:23.9585040Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:23.9585248Z { 2023-01-11T21:41:23.9585457Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr1 + (8*i0) + (8*i1)); 2023-01-11T21:41:23.9585640Z auto tmp1 = at::vec::Vectorized(out_ptr2[i0]); 2023-01-11T21:41:23.9585858Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.9585990Z auto tmp3 = tmp2.exp(); 2023-01-11T21:41:23.9586095Z tmp4_vec += tmp3; 2023-01-11T21:41:23.9586192Z } 2023-01-11T21:41:23.9586480Z tmp4 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp4_vec); 2023-01-11T21:41:23.9586664Z #pragma omp simd simdlen(4) reduction(+:tmp4) 2023-01-11T21:41:23.9586790Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:23.9586883Z { 2023-01-11T21:41:23.9587021Z auto tmp0 = in_ptr1[i1 + (8*i0)]; 2023-01-11T21:41:23.9587225Z auto tmp1 = out_ptr2[i0]; 2023-01-11T21:41:23.9587439Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.9587578Z auto tmp3 = std::exp(tmp2); 2023-01-11T21:41:23.9587689Z tmp4 += tmp3; 2023-01-11T21:41:23.9587779Z } 2023-01-11T21:41:23.9587891Z out_ptr5[i0] = tmp4; 2023-01-11T21:41:23.9587974Z } 2023-01-11T21:41:23.9588045Z } 2023-01-11T21:41:23.9588155Z #pragma omp for 2023-01-11T21:41:23.9588266Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.9588360Z { 2023-01-11T21:41:23.9588472Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:23.9588560Z { 2023-01-11T21:41:23.9588766Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i0) + (8*i1)); 2023-01-11T21:41:23.9588957Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + (8*i0) + (8*i1)); 2023-01-11T21:41:23.9589143Z auto tmp3 = at::vec::Vectorized(out_ptr0[i0]); 2023-01-11T21:41:23.9589317Z auto tmp5 = at::vec::Vectorized(out_ptr1[i0]); 2023-01-11T21:41:23.9589510Z auto tmp8 = at::vec::Vectorized::loadu(out_ptr3 + 8*i1); 2023-01-11T21:41:23.9589706Z auto tmp10 = at::vec::Vectorized::loadu(out_ptr4 + 8*i1); 2023-01-11T21:41:23.9589890Z auto tmp13 = at::vec::Vectorized(out_ptr2[i0]); 2023-01-11T21:41:23.9590084Z auto tmp15 = at::vec::Vectorized(out_ptr5[i0]); 2023-01-11T21:41:23.9590220Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.9590432Z auto tmp4 = tmp2 - tmp3; 2023-01-11T21:41:23.9590537Z auto tmp6 = tmp5.log(); 2023-01-11T21:41:23.9590729Z auto tmp7 = tmp4 - tmp6; 2023-01-11T21:41:23.9590927Z auto tmp9 = tmp0 - tmp8; 2023-01-11T21:41:23.9591069Z auto tmp11 = tmp10.log(); 2023-01-11T21:41:23.9591294Z auto tmp12 = tmp9 - tmp11; 2023-01-11T21:41:23.9591508Z auto tmp14 = tmp1 - tmp13; 2023-01-11T21:41:23.9591639Z auto tmp16 = tmp15.log(); 
2023-01-11T21:41:23.9591827Z auto tmp17 = tmp14 - tmp16; 2023-01-11T21:41:23.9591982Z tmp7.store(out_ptr6 + (8*i0) + (8*i1)); 2023-01-11T21:41:23.9592139Z tmp12.store(out_ptr7 + (8*i0) + (8*i1)); 2023-01-11T21:41:23.9592303Z tmp17.store(out_ptr8 + (8*i0) + (8*i1)); 2023-01-11T21:41:23.9592409Z } 2023-01-11T21:41:23.9592562Z #pragma omp simd simdlen(4) 2023-01-11T21:41:23.9592695Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:23.9592782Z { 2023-01-11T21:41:23.9592936Z auto tmp0 = in_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:23.9593095Z auto tmp1 = in_ptr1[i1 + (8*i0)]; 2023-01-11T21:41:23.9593236Z auto tmp3 = out_ptr0[i0]; 2023-01-11T21:41:23.9593391Z auto tmp5 = out_ptr1[i0]; 2023-01-11T21:41:23.9593614Z auto tmp8 = out_ptr3[i1]; 2023-01-11T21:41:23.9593823Z auto tmp10 = out_ptr4[i1]; 2023-01-11T21:41:23.9593955Z auto tmp13 = out_ptr2[i0]; 2023-01-11T21:41:23.9594091Z auto tmp15 = out_ptr5[i0]; 2023-01-11T21:41:23.9594232Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.9594458Z auto tmp4 = tmp2 - tmp3; 2023-01-11T21:41:23.9594614Z auto tmp6 = std::log(tmp5); 2023-01-11T21:41:23.9594820Z auto tmp7 = tmp4 - tmp6; 2023-01-11T21:41:23.9595024Z auto tmp9 = tmp0 - tmp8; 2023-01-11T21:41:23.9595159Z auto tmp11 = std::log(tmp10); 2023-01-11T21:41:23.9595367Z auto tmp12 = tmp9 - tmp11; 2023-01-11T21:41:23.9595582Z auto tmp14 = tmp1 - tmp13; 2023-01-11T21:41:23.9595731Z auto tmp16 = std::log(tmp15); 2023-01-11T21:41:23.9595943Z auto tmp17 = tmp14 - tmp16; 2023-01-11T21:41:23.9596160Z out_ptr6[i1 + (8*i0)] = tmp7; 2023-01-11T21:41:23.9596308Z out_ptr7[i1 + (8*i0)] = tmp12; 2023-01-11T21:41:23.9596452Z out_ptr8[i1 + (8*i0)] = tmp17; 2023-01-11T21:41:23.9596542Z } 2023-01-11T21:41:23.9596645Z } 2023-01-11T21:41:23.9596729Z } 2023-01-11T21:41:23.9596826Z } 2023-01-11T21:41:23.9596954Z ''') 2023-01-11T21:41:23.9596964Z 2023-01-11T21:41:23.9596970Z 2023-01-11T21:41:23.9597115Z async_compile.wait(globals()) 2023-01-11T21:41:23.9597237Z del async_compile 2023-01-11T21:41:23.9597246Z 2023-01-11T21:41:23.9597357Z def call(args): 2023-01-11T21:41:23.9597484Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9597602Z args.clear() 2023-01-11T21:41:23.9597944Z buf0 = empty_strided((8, 1), (1, 8), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9598286Z buf1 = empty_strided((8, 1), (1, 8), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9598626Z buf6 = empty_strided((8, 1), (1, 8), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9598966Z buf3 = empty_strided((1, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9599279Z buf4 = empty_strided((1, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9599605Z buf7 = empty_strided((8, 1), (1, 8), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9599942Z buf2 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9600274Z buf5 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9600606Z buf8 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9601221Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf6.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(buf7.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf5.data_ptr()), c_void_p(buf8.data_ptr())) 2023-01-11T21:41:23.9601350Z del arg0_1 2023-01-11T21:41:23.9601459Z del arg1_1 2023-01-11T21:41:23.9601579Z return (buf2, buf5, buf8, ) 2023-01-11T21:41:23.9601603Z 
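The kernel above is the CPU lowering compiled for test_log_softmax_cpu (graph 352): each reduction loop first takes a row or column maximum, a second pass accumulates exp(value - max), and the final loop writes value - max - log(sum). A minimal eager-mode sketch of that numerically stable pattern (the names below are illustrative, not taken from the test source):

import torch

def stable_log_softmax(t, dim):
    # same max-shift / log-sum-exp trick as the reduction loops above
    m = t.amax(dim=dim, keepdim=True)
    return (t - m) - (t - m).exp().sum(dim=dim, keepdim=True).log()

a = torch.randn(8, 8)
b = torch.randn(8, 8)
# buf2, buf5, buf8 returned by call() appear to correspond to:
out_sum = stable_log_softmax(a + b, dim=-1)   # out_ptr6
out_a   = stable_log_softmax(a, dim=0)        # out_ptr7
out_b   = stable_log_softmax(b, dim=-1)       # out_ptr8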
2023-01-11T21:41:23.9601610Z 2023-01-11T21:41:23.9601717Z if __name__ == "__main__": 2023-01-11T21:41:23.9601914Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9602122Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9602460Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9602804Z arg1_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9603006Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9603480Z [2023-01-11 21:36:08,277] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 352 2023-01-11T21:41:23.9603490Z 2023-01-11T21:41:23.9603590Z ok (1.753s) 2023-01-11T21:41:23.9604434Z test_logsumexp_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9604759Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9605220Z [2023-01-11 21:36:08,328] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 353 2023-01-11T21:41:23.9605701Z [2023-01-11 21:36:10,028] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 353 2023-01-11T21:41:23.9605711Z 2023-01-11T21:41:23.9605871Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9605993Z import torch 2023-01-11T21:41:23.9606115Z import random 2023-01-11T21:41:23.9606322Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9606593Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9606603Z 2023-01-11T21:41:23.9606716Z aten = torch.ops.aten 2023-01-11T21:41:23.9606954Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9607111Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9607120Z 2023-01-11T21:41:23.9607128Z 2023-01-11T21:41:23.9607365Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9607730Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9607941Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.9608117Z float* __restrict__ in_out_ptr1, 2023-01-11T21:41:23.9608297Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.9608446Z float* __restrict__ out_ptr1, 2023-01-11T21:41:23.9608610Z float* __restrict__ out_ptr3) 2023-01-11T21:41:23.9608709Z { 2023-01-11T21:41:23.9608863Z auto out_ptr0 = in_out_ptr0; 2023-01-11T21:41:23.9609004Z auto out_ptr2 = in_out_ptr1; 2023-01-11T21:41:23.9609172Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9609270Z { 2023-01-11T21:41:23.9609385Z #pragma omp for 2023-01-11T21:41:23.9609523Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.9609623Z { 2023-01-11T21:41:23.9609734Z { 2023-01-11T21:41:23.9610408Z #pragma omp declare reduction(max:at::vec::Vectorized:omp_out = at::vec::maximum(omp_out, omp_in)) initializer(omp_priv={{-std::numeric_limits::infinity()}}) 2023-01-11T21:41:23.9610790Z float tmp1 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9611003Z auto tmp1_vec = at::vec::Vectorized(tmp1); 2023-01-11T21:41:23.9611151Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:23.9611248Z { 
2023-01-11T21:41:23.9611497Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i0) + (8*i1)); 2023-01-11T21:41:23.9611700Z tmp1_vec = at::vec::maximum(tmp1_vec, tmp0); 2023-01-11T21:41:23.9611815Z } 2023-01-11T21:41:23.9612187Z tmp1 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return at::vec::maximum(x, y);}, tmp1_vec); 2023-01-11T21:41:23.9612392Z #pragma omp simd simdlen(4) reduction(max:tmp1) 2023-01-11T21:41:23.9612536Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:23.9612639Z { 2023-01-11T21:41:23.9612787Z auto tmp0 = in_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:23.9612945Z tmp1 = std::max(tmp1, tmp0); 2023-01-11T21:41:23.9613046Z } 2023-01-11T21:41:23.9613162Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:23.9613256Z } 2023-01-11T21:41:23.9613355Z } 2023-01-11T21:41:23.9613488Z #pragma omp for 2023-01-11T21:41:23.9613607Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.9613771Z { 2023-01-11T21:41:23.9613873Z { 2023-01-11T21:41:23.9614199Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:23.9614338Z float tmp4 = 0; 2023-01-11T21:41:23.9614545Z auto tmp4_vec = at::vec::Vectorized(tmp4); 2023-01-11T21:41:23.9614692Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:23.9614776Z { 2023-01-11T21:41:23.9615021Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i0) + (8*i1)); 2023-01-11T21:41:23.9615238Z auto tmp1 = at::vec::Vectorized(out_ptr0[i0]); 2023-01-11T21:41:23.9615481Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.9615636Z auto tmp3 = tmp2.exp(); 2023-01-11T21:41:23.9615774Z tmp4_vec += tmp3; 2023-01-11T21:41:23.9615881Z } 2023-01-11T21:41:23.9616266Z tmp4 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp4_vec); 2023-01-11T21:41:23.9616456Z #pragma omp simd simdlen(4) reduction(+:tmp4) 2023-01-11T21:41:23.9616599Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:23.9616700Z { 2023-01-11T21:41:23.9616863Z auto tmp0 = in_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:23.9617017Z auto tmp1 = out_ptr0[i0]; 2023-01-11T21:41:23.9617262Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.9617595Z auto tmp3 = std::exp(tmp2); 2023-01-11T21:41:23.9617728Z tmp4 += tmp3; 2023-01-11T21:41:23.9617823Z } 2023-01-11T21:41:23.9617961Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:23.9618070Z } 2023-01-11T21:41:23.9618174Z } 2023-01-11T21:41:23.9618303Z #pragma omp for 2023-01-11T21:41:23.9618450Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.9618545Z { 2023-01-11T21:41:23.9618645Z { 2023-01-11T21:41:23.9618744Z { 2023-01-11T21:41:23.9618896Z auto tmp0 = out_ptr1[i0]; 2023-01-11T21:41:23.9619047Z auto tmp2 = out_ptr0[i0]; 2023-01-11T21:41:23.9619208Z auto tmp1 = std::log(tmp0); 2023-01-11T21:41:23.9619368Z auto tmp3 = std::abs(tmp2); 2023-01-11T21:41:23.9619562Z auto tmp4 = std::numeric_limits::infinity(); 2023-01-11T21:41:23.9619713Z auto tmp5 = tmp3 == tmp4; 2023-01-11T21:41:23.9619890Z auto tmp6 = static_cast(0.0); 2023-01-11T21:41:23.9620050Z auto tmp7 = tmp5 ? 
tmp6 : tmp2; 2023-01-11T21:41:23.9620201Z auto tmp8 = tmp1 + tmp7; 2023-01-11T21:41:23.9620341Z in_out_ptr0[i0] = tmp8; 2023-01-11T21:41:23.9620451Z } 2023-01-11T21:41:23.9620550Z } 2023-01-11T21:41:23.9620649Z } 2023-01-11T21:41:23.9620769Z #pragma omp for 2023-01-11T21:41:23.9620906Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.9621015Z { 2023-01-11T21:41:23.9621115Z { 2023-01-11T21:41:23.9621217Z { 2023-01-11T21:41:23.9621589Z float tmp1 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9621745Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:23.9621856Z { 2023-01-11T21:41:23.9621959Z { 2023-01-11T21:41:23.9622115Z auto tmp0 = in_ptr0[i0 + (8*i1)]; 2023-01-11T21:41:23.9622289Z tmp1 = std::max(tmp1, tmp0); 2023-01-11T21:41:23.9622534Z } 2023-01-11T21:41:23.9622636Z } 2023-01-11T21:41:23.9622780Z out_ptr2[i0] = tmp1; 2023-01-11T21:41:23.9622883Z } 2023-01-11T21:41:23.9623087Z } 2023-01-11T21:41:23.9623186Z } 2023-01-11T21:41:23.9623312Z #pragma omp for 2023-01-11T21:41:23.9623441Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.9623531Z { 2023-01-11T21:41:23.9623634Z { 2023-01-11T21:41:23.9623735Z { 2023-01-11T21:41:23.9623866Z float tmp4 = 0; 2023-01-11T21:41:23.9624023Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:23.9624137Z { 2023-01-11T21:41:23.9624252Z { 2023-01-11T21:41:23.9624409Z auto tmp0 = in_ptr0[i0 + (8*i1)]; 2023-01-11T21:41:23.9624574Z auto tmp1 = out_ptr2[i0]; 2023-01-11T21:41:23.9624830Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.9625002Z auto tmp3 = std::exp(tmp2); 2023-01-11T21:41:23.9625140Z tmp4 += tmp3; 2023-01-11T21:41:23.9625255Z } 2023-01-11T21:41:23.9625421Z } 2023-01-11T21:41:23.9625552Z out_ptr3[i0] = tmp4; 2023-01-11T21:41:23.9625655Z } 2023-01-11T21:41:23.9625758Z } 2023-01-11T21:41:23.9625866Z } 2023-01-11T21:41:23.9626001Z #pragma omp for 2023-01-11T21:41:23.9626133Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.9626221Z { 2023-01-11T21:41:23.9626322Z { 2023-01-11T21:41:23.9626435Z { 2023-01-11T21:41:23.9626590Z auto tmp0 = out_ptr3[i0]; 2023-01-11T21:41:23.9626743Z auto tmp2 = out_ptr2[i0]; 2023-01-11T21:41:23.9626906Z auto tmp1 = std::log(tmp0); 2023-01-11T21:41:23.9627068Z auto tmp3 = std::abs(tmp2); 2023-01-11T21:41:23.9627285Z auto tmp4 = std::numeric_limits::infinity(); 2023-01-11T21:41:23.9627424Z auto tmp5 = tmp3 == tmp4; 2023-01-11T21:41:23.9627612Z auto tmp6 = static_cast(0.0); 2023-01-11T21:41:23.9627780Z auto tmp7 = tmp5 ? 
tmp6 : tmp2; 2023-01-11T21:41:23.9627928Z auto tmp8 = tmp1 + tmp7; 2023-01-11T21:41:23.9628104Z auto tmp9 = static_cast(2); 2023-01-11T21:41:23.9628347Z auto tmp10 = tmp8 - tmp9; 2023-01-11T21:41:23.9628492Z in_out_ptr1[i0] = tmp10; 2023-01-11T21:41:23.9628583Z } 2023-01-11T21:41:23.9628687Z } 2023-01-11T21:41:23.9628797Z } 2023-01-11T21:41:23.9628903Z } 2023-01-11T21:41:23.9629002Z } 2023-01-11T21:41:23.9629132Z ''') 2023-01-11T21:41:23.9629142Z 2023-01-11T21:41:23.9629149Z 2023-01-11T21:41:23.9629306Z async_compile.wait(globals()) 2023-01-11T21:41:23.9629412Z del async_compile 2023-01-11T21:41:23.9629420Z 2023-01-11T21:41:23.9629540Z def call(args): 2023-01-11T21:41:23.9629659Z arg0_1, = args 2023-01-11T21:41:23.9629777Z args.clear() 2023-01-11T21:41:23.9630116Z buf0 = empty_strided((8, 1), (1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9630441Z buf1 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9630618Z buf2 = as_strided(buf0, (8, ), (1, )); del buf0 # reuse 2023-01-11T21:41:23.9630936Z buf3 = empty_strided((1, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9631261Z buf4 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9631442Z buf5 = as_strided(buf3, (8, ), (1, )); del buf3 # reuse 2023-01-11T21:41:23.9631809Z kernel_cpp_0(c_void_p(buf2.data_ptr()), c_void_p(buf5.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:23.9631924Z del arg0_1 2023-01-11T21:41:23.9632056Z return (buf2, buf5, ) 2023-01-11T21:41:23.9632063Z 2023-01-11T21:41:23.9632070Z 2023-01-11T21:41:23.9632201Z if __name__ == "__main__": 2023-01-11T21:41:23.9632403Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9632657Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9633003Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9633189Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9633197Z 2023-01-11T21:41:23.9633312Z ok (1.776s) 2023-01-11T21:41:23.9634222Z test_long_tensor_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9634441Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9634915Z [2023-01-11 21:36:10,079] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 354 2023-01-11T21:41:23.9635450Z [2023-01-11 21:36:11,749] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 354 2023-01-11T21:41:23.9635462Z 2023-01-11T21:41:23.9635616Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9635738Z import torch 2023-01-11T21:41:23.9635844Z import random 2023-01-11T21:41:23.9636046Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9636253Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9636261Z 2023-01-11T21:41:23.9636390Z aten = torch.ops.aten 2023-01-11T21:41:23.9636626Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9636785Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9636793Z 2023-01-11T21:41:23.9636799Z 2023-01-11T21:41:23.9637029Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9637395Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9637584Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.9637752Z long* __restrict__ out_ptr0, 2023-01-11T21:41:23.9637917Z long* __restrict__ out_ptr1) 2023-01-11T21:41:23.9638017Z { 2023-01-11T21:41:23.9638184Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9638290Z { 2023-01-11T21:41:23.9638416Z #pragma omp for 2023-01-11T21:41:23.9638535Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:23.9638634Z { 2023-01-11T21:41:23.9638744Z { 2023-01-11T21:41:23.9638852Z { 2023-01-11T21:41:23.9639003Z auto tmp1 = in_ptr0[i0]; 2023-01-11T21:41:23.9639178Z auto tmp0 = static_cast(294); 2023-01-11T21:41:23.9639411Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:23.9639570Z auto tmp3 = static_cast(295); 2023-01-11T21:41:23.9639723Z auto tmp4 = tmp3 + tmp1; 2023-01-11T21:41:23.9639861Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.9640009Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:23.9640115Z } 2023-01-11T21:41:23.9640222Z } 2023-01-11T21:41:23.9640302Z } 2023-01-11T21:41:23.9640408Z } 2023-01-11T21:41:23.9640508Z } 2023-01-11T21:41:23.9640642Z ''') 2023-01-11T21:41:23.9640651Z 2023-01-11T21:41:23.9640658Z 2023-01-11T21:41:23.9640812Z async_compile.wait(globals()) 2023-01-11T21:41:23.9640934Z del async_compile 2023-01-11T21:41:23.9640942Z 2023-01-11T21:41:23.9641061Z def call(args): 2023-01-11T21:41:23.9641173Z arg0_1, = args 2023-01-11T21:41:23.9641279Z args.clear() 2023-01-11T21:41:23.9641613Z buf0 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9641938Z buf1 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9642221Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9642384Z del arg0_1 2023-01-11T21:41:23.9642520Z return (buf0, buf1, ) 2023-01-11T21:41:23.9642528Z 2023-01-11T21:41:23.9642535Z 2023-01-11T21:41:23.9642666Z if __name__ == "__main__": 2023-01-11T21:41:23.9642843Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9643057Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9643397Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9643583Z 
print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9643590Z 2023-01-11T21:41:23.9643699Z ok (1.697s) 2023-01-11T21:41:23.9644300Z test_lowmem_dropout1_cpu (__main__.CpuTests) ... [2023-01-11 21:36:11,779] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 355 2023-01-11T21:41:23.9644723Z [2023-01-11 21:36:11,786] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 355 2023-01-11T21:41:23.9645168Z [2023-01-11 21:36:11,788] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling BACKWARDS graph 355 2023-01-11T21:41:23.9645591Z [2023-01-11 21:36:11,796] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling BACKWARDS graph 355 2023-01-11T21:41:23.9645976Z [2023-01-11 21:36:11,951] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 356 2023-01-11T21:41:23.9646368Z [2023-01-11 21:36:11,952] torch._inductor.lowering: [WARNING] using triton random, expect difference from eager 2023-01-11T21:41:23.9646773Z [2023-01-11 21:36:11,961] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 356 2023-01-11T21:41:23.9647164Z [2023-01-11 21:36:11,964] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling BACKWARDS graph 356 2023-01-11T21:41:23.9647172Z 2023-01-11T21:41:23.9647307Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9647413Z import torch 2023-01-11T21:41:23.9647516Z import random 2023-01-11T21:41:23.9647682Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9647853Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9647873Z 2023-01-11T21:41:23.9647971Z aten = torch.ops.aten 2023-01-11T21:41:23.9648167Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9648303Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9648311Z 2023-01-11T21:41:23.9648317Z 2023-01-11T21:41:23.9648513Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9648811Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9648981Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.9649131Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.9649261Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.9649348Z { 2023-01-11T21:41:23.9649491Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9649582Z { 2023-01-11T21:41:23.9649691Z #pragma omp for 2023-01-11T21:41:23.9649813Z for(long i0=0; i0<12500; i0+=1) 2023-01-11T21:41:23.9649904Z { 2023-01-11T21:41:23.9650098Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.9650284Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.9650412Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.9650551Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.9650641Z } 2023-01-11T21:41:23.9650780Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.9650916Z for(long i0=100000; i0<100000; i0+=1) 2023-01-11T21:41:23.9650992Z { 2023-01-11T21:41:23.9651112Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.9651233Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.9651346Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.9651459Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.9651545Z } 2023-01-11T21:41:23.9651633Z } 2023-01-11T21:41:23.9651777Z } 2023-01-11T21:41:23.9651910Z ''') 2023-01-11T21:41:23.9651918Z 2023-01-11T21:41:23.9651924Z 2023-01-11T21:41:23.9652054Z async_compile.wait(globals()) 
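In kernel_cpp_0 just above (forward of graph 355), the loop over the 100000-element buffers is split into an 8-lane vectorized main loop (12500 iterations of at::vec::Vectorized loads and stores; the <float> template arguments on Vectorized and static_cast have been stripped by the log rendering) and a scalar remainder loop. The remainder loop's bounds are equal because 8 divides 100000 exactly, so it does nothing. A small sketch of that split, assuming float32 and the 8-lane width shown:

# loop-split arithmetic behind the generated main/tail loops (illustrative)
n, lanes = 100000, 8                 # elements, float32 lanes per vector load
main_iters = n // lanes              # 12500, the bound of the vectorized loop
tail_start = main_iters * lanes      # 100000, so the scalar tail loop is empty
assert (main_iters, tail_start) == (12500, 100000)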
2023-01-11T21:41:23.9652168Z del async_compile 2023-01-11T21:41:23.9652174Z 2023-01-11T21:41:23.9652275Z def call(args): 2023-01-11T21:41:23.9652403Z primals_1, primals_2 = args 2023-01-11T21:41:23.9652506Z args.clear() 2023-01-11T21:41:23.9652803Z buf0 = empty_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9653049Z kernel_cpp_0(c_void_p(primals_1.data_ptr()), c_void_p(primals_2.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.9653148Z del primals_2 2023-01-11T21:41:23.9653275Z return (buf0, primals_1, ) 2023-01-11T21:41:23.9653282Z 2023-01-11T21:41:23.9653288Z 2023-01-11T21:41:23.9653399Z if __name__ == "__main__": 2023-01-11T21:41:23.9653573Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9653751Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9654128Z primals_1 = rand_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9654439Z primals_2 = rand_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9654606Z print_performance(lambda: call([primals_1, primals_2])) 2023-01-11T21:41:23.9654613Z 2023-01-11T21:41:23.9654619Z 2023-01-11T21:41:23.9654751Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9654846Z import torch 2023-01-11T21:41:23.9654941Z import random 2023-01-11T21:41:23.9655106Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9655274Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9655282Z 2023-01-11T21:41:23.9655392Z aten = torch.ops.aten 2023-01-11T21:41:23.9655570Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9655698Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9655706Z 2023-01-11T21:41:23.9655711Z 2023-01-11T21:41:23.9655914Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9656225Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9656401Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.9656553Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.9656694Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.9656782Z { 2023-01-11T21:41:23.9656906Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9656993Z { 2023-01-11T21:41:23.9657103Z #pragma omp for 2023-01-11T21:41:23.9657222Z for(long i0=0; i0<12500; i0+=1) 2023-01-11T21:41:23.9657314Z { 2023-01-11T21:41:23.9657507Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.9657701Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.9657826Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.9657942Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.9658037Z } 2023-01-11T21:41:23.9658170Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.9658303Z for(long i0=100000; i0<100000; i0+=1) 2023-01-11T21:41:23.9658394Z { 2023-01-11T21:41:23.9658514Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.9658617Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.9658732Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:23.9658845Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.9658937Z } 2023-01-11T21:41:23.9659023Z } 2023-01-11T21:41:23.9659108Z } 2023-01-11T21:41:23.9659229Z ''') 2023-01-11T21:41:23.9659236Z 2023-01-11T21:41:23.9659243Z 2023-01-11T21:41:23.9659355Z async_compile.wait(globals()) 2023-01-11T21:41:23.9659460Z del async_compile 2023-01-11T21:41:23.9659468Z 2023-01-11T21:41:23.9659570Z def call(args): 2023-01-11T21:41:23.9659695Z primals_1, tangents_1 = 
args 2023-01-11T21:41:23.9659802Z args.clear() 2023-01-11T21:41:23.9660187Z buf0 = empty_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9660452Z kernel_cpp_0(c_void_p(tangents_1.data_ptr()), c_void_p(primals_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.9660556Z del primals_1 2023-01-11T21:41:23.9660640Z del tangents_1 2023-01-11T21:41:23.9660747Z return (None, buf0, ) 2023-01-11T21:41:23.9660754Z 2023-01-11T21:41:23.9660760Z 2023-01-11T21:41:23.9660863Z if __name__ == "__main__": 2023-01-11T21:41:23.9661037Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9661232Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9661537Z primals_1 = rand_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9661861Z tangents_1 = rand_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9662066Z print_performance(lambda: call([primals_1, tangents_1])) 2023-01-11T21:41:23.9662074Z 2023-01-11T21:41:23.9662081Z 2023-01-11T21:41:23.9662271Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9671514Z import torch 2023-01-11T21:41:23.9671637Z import random 2023-01-11T21:41:23.9671812Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9671991Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9671999Z 2023-01-11T21:41:23.9672115Z aten = torch.ops.aten 2023-01-11T21:41:23.9672313Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9672432Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9672678Z seed_cpu_None = None # 9130db9322feaa41c28986790b86d7dd047e77339ff46fce775dbaa5929b26ce 2023-01-11T21:41:23.9672687Z 2023-01-11T21:41:23.9672693Z 2023-01-11T21:41:23.9672929Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9673230Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9673399Z extern "C" void kernel(const long* __restrict__ seed0, 2023-01-11T21:41:23.9673559Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.9673708Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.9673925Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.9674020Z { 2023-01-11T21:41:23.9674150Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9674235Z { 2023-01-11T21:41:23.9674346Z #pragma omp for 2023-01-11T21:41:23.9674466Z for(long i0=0; i0<100000; i0+=1) 2023-01-11T21:41:23.9674553Z { 2023-01-11T21:41:23.9674639Z { 2023-01-11T21:41:23.9674714Z { 2023-01-11T21:41:23.9674834Z auto tmp0 = seed0[0]; 2023-01-11T21:41:23.9674966Z auto tmp6 = in_ptr1[i0]; 2023-01-11T21:41:23.9675094Z auto tmp7 = in_ptr2[i0]; 2023-01-11T21:41:23.9675239Z auto tmp1 = static_cast(i0); 2023-01-11T21:41:23.9675440Z auto tmp2 = static_cast(normalized_rand_cpu(tmp0, tmp1));; 2023-01-11T21:41:23.9675599Z auto tmp3 = static_cast(0.33); 2023-01-11T21:41:23.9675714Z auto tmp4 = tmp2 > tmp3; 2023-01-11T21:41:23.9675865Z auto tmp5 = static_cast(tmp4); 2023-01-11T21:41:23.9675989Z auto tmp8 = tmp6 * tmp7; 2023-01-11T21:41:23.9676115Z auto tmp9 = tmp5 * tmp8; 2023-01-11T21:41:23.9676280Z auto tmp10 = static_cast(1.492537313432836); 2023-01-11T21:41:23.9676406Z auto tmp11 = tmp9 * tmp10; 2023-01-11T21:41:23.9676524Z out_ptr0[i0] = tmp11; 2023-01-11T21:41:23.9676625Z } 2023-01-11T21:41:23.9676722Z } 2023-01-11T21:41:23.9676819Z } 2023-01-11T21:41:23.9676925Z } 2023-01-11T21:41:23.9677030Z } 2023-01-11T21:41:23.9677161Z ''') 2023-01-11T21:41:23.9677168Z 
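kernel_cpp_0 above (forward of graph 356, the lowmem dropout test) derives one uniform value per element from the saved integer seed via normalized_rand_cpu(seed, index), keeps elements where that value exceeds p = 0.33, multiplies by the product of the two inputs, and rescales by 1 / (1 - 0.33) ≈ 1.492537313432836. A rough eager-mode equivalent of the dropout part, with torch.rand_like standing in for the seed-driven helper:

import torch

p = 0.33
x = torch.randn(100000)
keep = (torch.rand_like(x) > p).to(x.dtype)   # mask; the kernel rebuilds this from the seed
out = keep * x * (1.0 / (1.0 - p))            # scale factor 1.492537313432836...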
2023-01-11T21:41:23.9677174Z 2023-01-11T21:41:23.9677307Z async_compile.wait(globals()) 2023-01-11T21:41:23.9677538Z del async_compile 2023-01-11T21:41:23.9677549Z 2023-01-11T21:41:23.9677629Z def call(args): 2023-01-11T21:41:23.9677755Z primals_1, primals_2 = args 2023-01-11T21:41:23.9677852Z args.clear() 2023-01-11T21:41:23.9678048Z torch.randint(2**31, size=(), dtype=torch.int64, out=seed_cpu_None) 2023-01-11T21:41:23.9678418Z buf0 = empty_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9678780Z kernel_cpp_0(c_void_p(seed_cpu_None.data_ptr()), c_void_p(primals_1.data_ptr()), c_void_p(primals_2.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.9678905Z del primals_2 2023-01-11T21:41:23.9679088Z return (buf0, primals_1, seed_cpu_None.clone(), ) 2023-01-11T21:41:23.9679112Z 2023-01-11T21:41:23.9679120Z 2023-01-11T21:41:23.9679231Z if __name__ == "__main__": 2023-01-11T21:41:23.9679432Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9679649Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9680089Z seed_cpu_None = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9680472Z primals_1 = rand_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9680842Z primals_2 = rand_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9681069Z print_performance(lambda: call([primals_1, primals_2])) 2023-01-11T21:41:23.9681077Z 2023-01-11T21:41:23.9681586Z [2023-01-11 21:36:11,974] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling BACKWARDS graph 356 2023-01-11T21:41:23.9681595Z 2023-01-11T21:41:23.9681745Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9681863Z import torch 2023-01-11T21:41:23.9681990Z import random 2023-01-11T21:41:23.9682196Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9682416Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9682424Z 2023-01-11T21:41:23.9682556Z aten = torch.ops.aten 2023-01-11T21:41:23.9682798Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9682947Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9682971Z 2023-01-11T21:41:23.9682978Z 2023-01-11T21:41:23.9683205Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9683587Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9683797Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.9683982Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.9684164Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:23.9684338Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.9684438Z { 2023-01-11T21:41:23.9684599Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9684700Z { 2023-01-11T21:41:23.9684836Z #pragma omp for 2023-01-11T21:41:23.9684981Z for(long i0=0; i0<100000; i0+=1) 2023-01-11T21:41:23.9685091Z { 2023-01-11T21:41:23.9685211Z { 2023-01-11T21:41:23.9685322Z { 2023-01-11T21:41:23.9685465Z auto tmp0 = in_ptr0[0]; 2023-01-11T21:41:23.9685594Z auto tmp6 = in_ptr1[i0]; 2023-01-11T21:41:23.9685723Z auto tmp10 = in_ptr2[i0]; 2023-01-11T21:41:23.9685864Z auto tmp1 = static_cast(i0); 2023-01-11T21:41:23.9686061Z auto tmp2 = static_cast(normalized_rand_cpu(tmp0, tmp1));; 2023-01-11T21:41:23.9686211Z auto tmp3 = static_cast(0.33); 2023-01-11T21:41:23.9686340Z auto tmp4 = tmp2 > tmp3; 2023-01-11T21:41:23.9686479Z auto tmp5 = static_cast(tmp4); 
2023-01-11T21:41:23.9686612Z auto tmp7 = tmp5 * tmp6; 2023-01-11T21:41:23.9686778Z auto tmp8 = static_cast(1.492537313432836); 2023-01-11T21:41:23.9686903Z auto tmp9 = tmp7 * tmp8; 2023-01-11T21:41:23.9687035Z auto tmp11 = tmp9 * tmp10; 2023-01-11T21:41:23.9687207Z out_ptr0[i0] = tmp11; 2023-01-11T21:41:23.9687299Z } 2023-01-11T21:41:23.9687369Z } 2023-01-11T21:41:23.9687460Z } 2023-01-11T21:41:23.9687548Z } 2023-01-11T21:41:23.9687636Z } 2023-01-11T21:41:23.9687757Z ''') 2023-01-11T21:41:23.9687766Z 2023-01-11T21:41:23.9687772Z 2023-01-11T21:41:23.9687905Z async_compile.wait(globals()) 2023-01-11T21:41:23.9688011Z del async_compile 2023-01-11T21:41:23.9688019Z 2023-01-11T21:41:23.9688122Z def call(args): 2023-01-11T21:41:23.9688266Z primals_1, philox_seed_like, tangents_1 = args 2023-01-11T21:41:23.9688368Z args.clear() 2023-01-11T21:41:23.9688667Z buf0 = empty_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9689022Z kernel_cpp_0(c_void_p(philox_seed_like.data_ptr()), c_void_p(tangents_1.data_ptr()), c_void_p(primals_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.9689141Z del philox_seed_like 2023-01-11T21:41:23.9689293Z del primals_1 2023-01-11T21:41:23.9689392Z del tangents_1 2023-01-11T21:41:23.9689483Z return (None, buf0, ) 2023-01-11T21:41:23.9689490Z 2023-01-11T21:41:23.9689497Z 2023-01-11T21:41:23.9689606Z if __name__ == "__main__": 2023-01-11T21:41:23.9689770Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9689946Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9690255Z primals_1 = rand_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9690561Z philox_seed_like = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9690865Z tangents_1 = rand_strided((100000, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9691080Z print_performance(lambda: call([primals_1, philox_seed_like, tangents_1])) 2023-01-11T21:41:23.9691086Z 2023-01-11T21:41:23.9691165Z ok (0.230s) 2023-01-11T21:41:23.9691691Z test_lowmem_dropout2_cpu (__main__.CpuTests) ... 
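The pair of graphs above (356 forward and backward) shows the low-memory dropout strategy: the forward call() returns seed_cpu_None.clone() rather than the boolean mask, and the backward kernel takes philox_seed_like and recomputes the identical mask from it before scaling tangents_1. Schematically, with torch.Generator standing in for the Philox-based normalized_rand_cpu helper used by the generated code:

import torch

def lowmem_dropout_fwd(x, p, seed):
    g = torch.Generator().manual_seed(int(seed))
    keep = (torch.rand(x.shape, generator=g) > p).to(x.dtype)
    return keep * x / (1 - p)            # only the scalar seed is saved for backward

def lowmem_dropout_bwd(grad_out, p, seed):
    g = torch.Generator().manual_seed(int(seed))
    keep = (torch.rand(grad_out.shape, generator=g) > p).to(grad_out.dtype)
    return keep * grad_out / (1 - p)     # identical mask, regenerated instead of stored

Saving a single int64 instead of a 100000-element mask is what makes the scheme "lowmem"; the dropout2 test that follows exercises the same idea with two dropout sites.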
[2023-01-11 21:36:12,220] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 357 2023-01-11T21:41:23.9692111Z [2023-01-11 21:36:12,222] torch._inductor.lowering: [WARNING] using triton random, expect difference from eager 2023-01-11T21:41:23.9692552Z [2023-01-11 21:36:13,866] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 357 2023-01-11T21:41:23.9692938Z [2023-01-11 21:36:13,869] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling BACKWARDS graph 357 2023-01-11T21:41:23.9693330Z [2023-01-11 21:36:15,598] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling BACKWARDS graph 357 2023-01-11T21:41:23.9693338Z 2023-01-11T21:41:23.9693463Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9693555Z import torch 2023-01-11T21:41:23.9693653Z import random 2023-01-11T21:41:23.9693803Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9693978Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9693989Z 2023-01-11T21:41:23.9694102Z aten = torch.ops.aten 2023-01-11T21:41:23.9694293Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9694423Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9694656Z seed_cpu_None = None # 9130db9322feaa41c28986790b86d7dd047e77339ff46fce775dbaa5929b26ce 2023-01-11T21:41:23.9694664Z 2023-01-11T21:41:23.9694671Z 2023-01-11T21:41:23.9694871Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9695188Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9695375Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.9695516Z const long* __restrict__ seed0) 2023-01-11T21:41:23.9695600Z { 2023-01-11T21:41:23.9695735Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9695821Z { 2023-01-11T21:41:23.9695930Z #pragma omp for 2023-01-11T21:41:23.9696048Z for(long i0=0; i0<256; i0+=1) 2023-01-11T21:41:23.9696200Z { 2023-01-11T21:41:23.9696285Z { 2023-01-11T21:41:23.9696378Z { 2023-01-11T21:41:23.9696498Z auto tmp0 = seed0[0]; 2023-01-11T21:41:23.9696634Z auto tmp6 = in_out_ptr0[i0]; 2023-01-11T21:41:23.9696775Z auto tmp1 = static_cast(i0); 2023-01-11T21:41:23.9696969Z auto tmp2 = static_cast(normalized_rand_cpu(tmp0, tmp1));; 2023-01-11T21:41:23.9697103Z auto tmp3 = static_cast(0.5); 2023-01-11T21:41:23.9697226Z auto tmp4 = tmp2 > tmp3; 2023-01-11T21:41:23.9697373Z auto tmp5 = static_cast(tmp4); 2023-01-11T21:41:23.9697499Z auto tmp7 = tmp5 * tmp6; 2023-01-11T21:41:23.9697645Z auto tmp8 = static_cast(2.0); 2023-01-11T21:41:23.9697769Z auto tmp9 = tmp7 * tmp8; 2023-01-11T21:41:23.9697944Z in_out_ptr0[i0] = tmp9; 2023-01-11T21:41:23.9698025Z } 2023-01-11T21:41:23.9698112Z } 2023-01-11T21:41:23.9698201Z } 2023-01-11T21:41:23.9698288Z } 2023-01-11T21:41:23.9698370Z } 2023-01-11T21:41:23.9698490Z ''') 2023-01-11T21:41:23.9698497Z 2023-01-11T21:41:23.9698503Z 2023-01-11T21:41:23.9698698Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.9698974Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9699138Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.9699278Z const long* __restrict__ seed0) 2023-01-11T21:41:23.9699359Z { 2023-01-11T21:41:23.9699495Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9699581Z { 2023-01-11T21:41:23.9699682Z #pragma omp for 2023-01-11T21:41:23.9699782Z for(long i0=0; i0<256; 
i0+=1) 2023-01-11T21:41:23.9699863Z { 2023-01-11T21:41:23.9699950Z { 2023-01-11T21:41:23.9700047Z { 2023-01-11T21:41:23.9700163Z auto tmp0 = seed0[0]; 2023-01-11T21:41:23.9700293Z auto tmp6 = in_out_ptr0[i0]; 2023-01-11T21:41:23.9700440Z auto tmp1 = static_cast(256 + i0); 2023-01-11T21:41:23.9700620Z auto tmp2 = static_cast(normalized_rand_cpu(tmp0, tmp1));; 2023-01-11T21:41:23.9700766Z auto tmp3 = static_cast(0.5); 2023-01-11T21:41:23.9700890Z auto tmp4 = tmp2 > tmp3; 2023-01-11T21:41:23.9701032Z auto tmp5 = static_cast(tmp4); 2023-01-11T21:41:23.9701158Z auto tmp7 = tmp5 * tmp6; 2023-01-11T21:41:23.9701298Z auto tmp8 = static_cast(2.0); 2023-01-11T21:41:23.9701417Z auto tmp9 = tmp7 * tmp8; 2023-01-11T21:41:23.9701523Z in_out_ptr0[i0] = tmp9; 2023-01-11T21:41:23.9701606Z } 2023-01-11T21:41:23.9701701Z } 2023-01-11T21:41:23.9701790Z } 2023-01-11T21:41:23.9701874Z } 2023-01-11T21:41:23.9701957Z } 2023-01-11T21:41:23.9702073Z ''') 2023-01-11T21:41:23.9702081Z 2023-01-11T21:41:23.9702086Z 2023-01-11T21:41:23.9702199Z async_compile.wait(globals()) 2023-01-11T21:41:23.9702298Z del async_compile 2023-01-11T21:41:23.9702305Z 2023-01-11T21:41:23.9702546Z def call(args): 2023-01-11T21:41:23.9702694Z primals_1, primals_2, primals_3 = args 2023-01-11T21:41:23.9702794Z args.clear() 2023-01-11T21:41:23.9702983Z torch.randint(2**31, size=(), dtype=torch.int64, out=seed_cpu_None) 2023-01-11T21:41:23.9703286Z buf0 = empty_strided((8, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9703472Z aten.mm.out(primals_3, as_strided(primals_1, (32, 32), (1, 32)), out=buf0) 2023-01-11T21:41:23.9703560Z del primals_1 2023-01-11T21:41:23.9703677Z buf1 = buf0; del buf0 # reuse 2023-01-11T21:41:23.9703884Z kernel_cpp_0(c_void_p(buf1.data_ptr()), c_void_p(seed_cpu_None.data_ptr())) 2023-01-11T21:41:23.9704273Z buf2 = empty_strided((8, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9704445Z aten.mm.out(buf1, as_strided(primals_2, (32, 32), (1, 32)), out=buf2) 2023-01-11T21:41:23.9704560Z buf3 = buf2; del buf2 # reuse 2023-01-11T21:41:23.9704762Z kernel_cpp_1(c_void_p(buf3.data_ptr()), c_void_p(seed_cpu_None.data_ptr())) 2023-01-11T21:41:23.9704970Z return (buf3, primals_3, seed_cpu_None.clone(), buf1, as_strided(primals_2, (32, 32), (32, 1)), ) 2023-01-11T21:41:23.9704994Z 2023-01-11T21:41:23.9705000Z 2023-01-11T21:41:23.9705086Z if __name__ == "__main__": 2023-01-11T21:41:23.9705242Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9705409Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9705692Z seed_cpu_None = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9705988Z primals_1 = rand_strided((32, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9706361Z primals_2 = rand_strided((32, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9706654Z primals_3 = rand_strided((8, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9706848Z print_performance(lambda: call([primals_1, primals_2, primals_3])) 2023-01-11T21:41:23.9706855Z 2023-01-11T21:41:23.9706861Z 2023-01-11T21:41:23.9706976Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9707068Z import torch 2023-01-11T21:41:23.9707163Z import random 2023-01-11T21:41:23.9707321Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9707482Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9707489Z 2023-01-11T21:41:23.9707594Z aten = torch.ops.aten 
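In the dropout2 forward graph above, both kernels read the same seed but index the random stream at disjoint offsets: kernel_cpp_0 uses i0 (0..255) and kernel_cpp_1 uses 256 + i0, so the two dropout sites get independent masks while only one scalar seed appears among the saved outputs. A toy illustration of that offsetting (the real kernels call normalized_rand_cpu(seed, offset + i), not a torch.Generator):

import torch

def site_rand(seed, offset, n):
    # stand-in for normalized_rand_cpu(seed, offset + i): one stream, sliced per site
    g = torch.Generator().manual_seed(int(seed))
    return torch.rand(offset + n, generator=g)[offset:]

seed = int(torch.randint(2**31, size=()).item())
mask_site0 = site_rand(seed, 0, 256) > 0.5     # first dropout (indices i0)
mask_site1 = site_rand(seed, 256, 256) > 0.5   # second dropout (indices 256 + i0)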
2023-01-11T21:41:23.9707772Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9707880Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9707900Z 2023-01-11T21:41:23.9707906Z 2023-01-11T21:41:23.9708086Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9708372Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9708538Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.9708685Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.9708826Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.9708912Z { 2023-01-11T21:41:23.9709056Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9709129Z { 2023-01-11T21:41:23.9709233Z #pragma omp for 2023-01-11T21:41:23.9709348Z for(long i0=0; i0<256; i0+=1) 2023-01-11T21:41:23.9709432Z { 2023-01-11T21:41:23.9709520Z { 2023-01-11T21:41:23.9709611Z { 2023-01-11T21:41:23.9709738Z auto tmp0 = in_ptr0[0]; 2023-01-11T21:41:23.9709855Z auto tmp6 = in_ptr1[i0]; 2023-01-11T21:41:23.9710007Z auto tmp1 = static_cast(256 + i0); 2023-01-11T21:41:23.9710221Z auto tmp2 = static_cast(normalized_rand_cpu(tmp0, tmp1));; 2023-01-11T21:41:23.9710370Z auto tmp3 = static_cast(0.5); 2023-01-11T21:41:23.9710498Z auto tmp4 = tmp2 > tmp3; 2023-01-11T21:41:23.9710651Z auto tmp5 = static_cast(tmp4); 2023-01-11T21:41:23.9710780Z auto tmp7 = tmp5 * tmp6; 2023-01-11T21:41:23.9710915Z auto tmp8 = static_cast(2.0); 2023-01-11T21:41:23.9711040Z auto tmp9 = tmp7 * tmp8; 2023-01-11T21:41:23.9711157Z out_ptr0[i0] = tmp9; 2023-01-11T21:41:23.9711247Z } 2023-01-11T21:41:23.9711336Z } 2023-01-11T21:41:23.9711422Z } 2023-01-11T21:41:23.9711506Z } 2023-01-11T21:41:23.9711574Z } 2023-01-11T21:41:23.9711696Z ''') 2023-01-11T21:41:23.9711704Z 2023-01-11T21:41:23.9711712Z 2023-01-11T21:41:23.9711915Z kernel_cpp_1 = async_compile.cpp(''' 2023-01-11T21:41:23.9712288Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9712454Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.9712602Z const long* __restrict__ in_ptr0) 2023-01-11T21:41:23.9712686Z { 2023-01-11T21:41:23.9712810Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9712895Z { 2023-01-11T21:41:23.9713002Z #pragma omp for 2023-01-11T21:41:23.9713113Z for(long i0=0; i0<256; i0+=1) 2023-01-11T21:41:23.9713199Z { 2023-01-11T21:41:23.9713286Z { 2023-01-11T21:41:23.9713376Z { 2023-01-11T21:41:23.9713489Z auto tmp0 = in_ptr0[0]; 2023-01-11T21:41:23.9713621Z auto tmp6 = in_out_ptr0[i0]; 2023-01-11T21:41:23.9713840Z auto tmp1 = static_cast(i0); 2023-01-11T21:41:23.9714039Z auto tmp2 = static_cast(normalized_rand_cpu(tmp0, tmp1));; 2023-01-11T21:41:23.9714245Z auto tmp3 = static_cast(0.5); 2023-01-11T21:41:23.9714375Z auto tmp4 = tmp2 > tmp3; 2023-01-11T21:41:23.9714529Z auto tmp5 = static_cast(tmp4); 2023-01-11T21:41:23.9714643Z auto tmp7 = tmp5 * tmp6; 2023-01-11T21:41:23.9714793Z auto tmp8 = static_cast(2.0); 2023-01-11T21:41:23.9714918Z auto tmp9 = tmp7 * tmp8; 2023-01-11T21:41:23.9715045Z in_out_ptr0[i0] = tmp9; 2023-01-11T21:41:23.9715135Z } 2023-01-11T21:41:23.9715224Z } 2023-01-11T21:41:23.9715312Z } 2023-01-11T21:41:23.9715383Z } 2023-01-11T21:41:23.9715463Z } 2023-01-11T21:41:23.9715590Z ''') 2023-01-11T21:41:23.9715598Z 2023-01-11T21:41:23.9715605Z 2023-01-11T21:41:23.9715740Z async_compile.wait(globals()) 2023-01-11T21:41:23.9715845Z del async_compile 2023-01-11T21:41:23.9715852Z 2023-01-11T21:41:23.9715960Z 
def call(args): 2023-01-11T21:41:23.9716159Z primals_3, philox_seed_like, mul_1, permute_4, tangents_1 = args 2023-01-11T21:41:23.9716260Z args.clear() 2023-01-11T21:41:23.9716550Z buf0 = empty_strided((8, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9716826Z kernel_cpp_0(c_void_p(philox_seed_like.data_ptr()), c_void_p(tangents_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.9716935Z del tangents_1 2023-01-11T21:41:23.9717238Z buf1 = empty_strided((32, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9717409Z aten.mm.out(as_strided(buf0, (32, 8), (1, 32)), mul_1, out=buf1) 2023-01-11T21:41:23.9717502Z del mul_1 2023-01-11T21:41:23.9717793Z buf2 = empty_strided((8, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9717925Z aten.mm.out(buf0, permute_4, out=buf2) 2023-01-11T21:41:23.9718015Z del buf0 2023-01-11T21:41:23.9718119Z del permute_4 2023-01-11T21:41:23.9718240Z buf3 = buf2; del buf2 # reuse 2023-01-11T21:41:23.9718462Z kernel_cpp_1(c_void_p(buf3.data_ptr()), c_void_p(philox_seed_like.data_ptr())) 2023-01-11T21:41:23.9718578Z del philox_seed_like 2023-01-11T21:41:23.9718883Z buf4 = empty_strided((32, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9719057Z aten.mm.out(as_strided(buf3, (32, 8), (1, 32)), primals_3, out=buf4) 2023-01-11T21:41:23.9719149Z del buf3 2023-01-11T21:41:23.9719251Z del primals_3 2023-01-11T21:41:23.9719444Z return (as_strided(buf4, (32, 32), (32, 1)), as_strided(buf1, (32, 32), (32, 1)), None, ) 2023-01-11T21:41:23.9719451Z 2023-01-11T21:41:23.9719456Z 2023-01-11T21:41:23.9719565Z if __name__ == "__main__": 2023-01-11T21:41:23.9719742Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9719938Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9720272Z primals_3 = rand_strided((8, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9720671Z philox_seed_like = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9720986Z mul_1 = rand_strided((8, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9721320Z permute_4 = rand_strided((32, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9721643Z tangents_1 = rand_strided((8, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9721912Z print_performance(lambda: call([primals_3, philox_seed_like, mul_1, permute_4, tangents_1])) 2023-01-11T21:41:23.9721921Z 2023-01-11T21:41:23.9722020Z ok (3.617s) 2023-01-11T21:41:23.9722802Z test_masked_fill_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9723068Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9723504Z [2023-01-11 21:36:15,637] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 358 2023-01-11T21:41:23.9723945Z [2023-01-11 21:36:17,255] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 358 2023-01-11T21:41:23.9723954Z 2023-01-11T21:41:23.9724084Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9724189Z import torch 2023-01-11T21:41:23.9724296Z import random 2023-01-11T21:41:23.9724478Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9724669Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9724677Z 2023-01-11T21:41:23.9724798Z aten = torch.ops.aten 2023-01-11T21:41:23.9725011Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9725137Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9725158Z 2023-01-11T21:41:23.9725169Z 2023-01-11T21:41:23.9725372Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9725699Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9725882Z extern "C" void kernel(const bool* __restrict__ in_ptr0, 2023-01-11T21:41:23.9726042Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.9726189Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.9726336Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.9726423Z { 2023-01-11T21:41:23.9726558Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9726649Z { 2023-01-11T21:41:23.9726766Z #pragma omp for 2023-01-11T21:41:23.9726889Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.9726984Z { 2023-01-11T21:41:23.9727103Z #pragma GCC ivdep 2023-01-11T21:41:23.9727228Z for(long i1=0; i1<16; i1+=1) 2023-01-11T21:41:23.9727305Z { 2023-01-11T21:41:23.9727405Z { 2023-01-11T21:41:23.9727502Z { 2023-01-11T21:41:23.9727647Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:23.9727802Z auto tmp2 = in_ptr1[i1 + (16*i0)]; 2023-01-11T21:41:23.9728090Z auto tmp1 = static_cast<float>(-10000.0); 2023-01-11T21:41:23.9728246Z auto tmp3 = tmp0 ? tmp1 : tmp2; 2023-01-11T21:41:23.9728395Z auto tmp4 = static_cast<float>(2); 2023-01-11T21:41:23.9728541Z auto tmp5 = tmp3 + tmp4; 2023-01-11T21:41:23.9728679Z auto tmp6 = tmp0 == 0; 2023-01-11T21:41:23.9728851Z auto tmp7 = static_cast<float>(667.0); 2023-01-11T21:41:23.9729018Z auto tmp8 = static_cast<float>(2.0); 2023-01-11T21:41:23.9729158Z auto tmp9 = tmp2 / tmp8; 2023-01-11T21:41:23.9729313Z auto tmp10 = tmp6 ?
tmp7 : tmp9; 2023-01-11T21:41:23.9729525Z out_ptr0[i1 + (16*i0)] = tmp5; 2023-01-11T21:41:23.9729671Z out_ptr1[i1 + (16*i0)] = tmp10; 2023-01-11T21:41:23.9729768Z } 2023-01-11T21:41:23.9729866Z } 2023-01-11T21:41:23.9729963Z } 2023-01-11T21:41:23.9730056Z } 2023-01-11T21:41:23.9730146Z } 2023-01-11T21:41:23.9730223Z } 2023-01-11T21:41:23.9730351Z ''') 2023-01-11T21:41:23.9730360Z 2023-01-11T21:41:23.9730367Z 2023-01-11T21:41:23.9730510Z async_compile.wait(globals()) 2023-01-11T21:41:23.9730620Z del async_compile 2023-01-11T21:41:23.9730628Z 2023-01-11T21:41:23.9730735Z def call(args): 2023-01-11T21:41:23.9730848Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9730956Z args.clear() 2023-01-11T21:41:23.9731267Z buf0 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9731587Z buf1 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9731958Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9732064Z del arg0_1 2023-01-11T21:41:23.9732167Z del arg1_1 2023-01-11T21:41:23.9732285Z return (buf0, buf1, ) 2023-01-11T21:41:23.9732293Z 2023-01-11T21:41:23.9732299Z 2023-01-11T21:41:23.9732415Z if __name__ == "__main__": 2023-01-11T21:41:23.9732596Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9732774Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9733090Z arg0_1 = rand_strided((1, 16), (16, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.9733413Z arg1_1 = rand_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9733593Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9733602Z 2023-01-11T21:41:23.9733701Z ok (1.658s) 2023-01-11T21:41:23.9734281Z test_masked_fill_promotion_cpu (__main__.CpuTests) ... 
[2023-01-11 21:36:17,280] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 359 2023-01-11T21:41:23.9734719Z [2023-01-11 21:36:18,807] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 359 2023-01-11T21:41:23.9735135Z [2023-01-11 21:36:18,830] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 360 2023-01-11T21:41:23.9735569Z [2023-01-11 21:36:20,397] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 360 2023-01-11T21:41:23.9735576Z 2023-01-11T21:41:23.9735703Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9735808Z import torch 2023-01-11T21:41:23.9735909Z import random 2023-01-11T21:41:23.9736090Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9736281Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9736289Z 2023-01-11T21:41:23.9736410Z aten = torch.ops.aten 2023-01-11T21:41:23.9736628Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9736774Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9736782Z 2023-01-11T21:41:23.9736789Z 2023-01-11T21:41:23.9736994Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9737329Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9737515Z extern "C" void kernel(const bool* __restrict__ in_ptr0, 2023-01-11T21:41:23.9737680Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.9737834Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.9737926Z { 2023-01-11T21:41:23.9738078Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9738153Z { 2023-01-11T21:41:23.9738269Z #pragma omp for 2023-01-11T21:41:23.9738396Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.9738492Z { 2023-01-11T21:41:23.9738618Z for(long i1=0; i1<2; i1+=1) 2023-01-11T21:41:23.9738717Z { 2023-01-11T21:41:23.9738965Z float g_tmp_buffer_in_ptr0[8] = {0}; 2023-01-11T21:41:23.9739145Z flag_to_float(in_ptr0 + 8*i1, g_tmp_buffer_in_ptr0, 8); 2023-01-11T21:41:23.9739384Z auto tmp0 = at::vec::Vectorized<float>::loadu(g_tmp_buffer_in_ptr0); 2023-01-11T21:41:23.9739607Z auto tmp2 = at::vec::Vectorized<float>::loadu(in_ptr1 + (8*i1) + (16*i0)); 2023-01-11T21:41:23.9739824Z auto tmp1 = at::vec::Vectorized<float>(static_cast<float>(3.5)); 2023-01-11T21:41:23.9740027Z auto tmp3 = decltype(tmp1)::blendv(tmp2, tmp1, tmp0); 2023-01-11T21:41:23.9740187Z tmp3.store(out_ptr0 + (8*i1) + (16*i0)); 2023-01-11T21:41:23.9740281Z } 2023-01-11T21:41:23.9740425Z #pragma omp simd simdlen(4) 2023-01-11T21:41:23.9740542Z for(long i1=16; i1<16; i1+=1) 2023-01-11T21:41:23.9740639Z { 2023-01-11T21:41:23.9740773Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:23.9741001Z auto tmp2 = in_ptr1[i1 + (16*i0)]; 2023-01-11T21:41:23.9741169Z auto tmp1 = static_cast<float>(3.5); 2023-01-11T21:41:23.9741317Z auto tmp3 = tmp0 ?
tmp1 : tmp2; 2023-01-11T21:41:23.9741455Z out_ptr0[i1 + (16*i0)] = tmp3; 2023-01-11T21:41:23.9741537Z } 2023-01-11T21:41:23.9741629Z } 2023-01-11T21:41:23.9741725Z } 2023-01-11T21:41:23.9741818Z } 2023-01-11T21:41:23.9741955Z ''') 2023-01-11T21:41:23.9741964Z 2023-01-11T21:41:23.9741970Z 2023-01-11T21:41:23.9742114Z async_compile.wait(globals()) 2023-01-11T21:41:23.9742222Z del async_compile 2023-01-11T21:41:23.9742230Z 2023-01-11T21:41:23.9742550Z def call(args): 2023-01-11T21:41:23.9742670Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9742777Z args.clear() 2023-01-11T21:41:23.9743111Z buf0 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9743366Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.9743474Z del arg0_1 2023-01-11T21:41:23.9743572Z del arg1_1 2023-01-11T21:41:23.9743665Z return (buf0, ) 2023-01-11T21:41:23.9743687Z 2023-01-11T21:41:23.9743693Z 2023-01-11T21:41:23.9743790Z if __name__ == "__main__": 2023-01-11T21:41:23.9743968Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9744158Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9744476Z arg0_1 = rand_strided((1, 16), (16, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.9744804Z arg1_1 = rand_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9744983Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9744992Z 2023-01-11T21:41:23.9744998Z 2023-01-11T21:41:23.9745146Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9745252Z import torch 2023-01-11T21:41:23.9745344Z import random 2023-01-11T21:41:23.9745525Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9745724Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9745733Z 2023-01-11T21:41:23.9745853Z aten = torch.ops.aten 2023-01-11T21:41:23.9746067Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9746207Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9746214Z 2023-01-11T21:41:23.9746221Z 2023-01-11T21:41:23.9746440Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9746774Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9746948Z extern "C" void kernel(const bool* __restrict__ in_ptr0, 2023-01-11T21:41:23.9747107Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:23.9747258Z long* __restrict__ out_ptr0) 2023-01-11T21:41:23.9747351Z { 2023-01-11T21:41:23.9747505Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9747598Z { 2023-01-11T21:41:23.9747714Z #pragma omp for 2023-01-11T21:41:23.9747943Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.9748040Z { 2023-01-11T21:41:23.9748165Z #pragma GCC ivdep 2023-01-11T21:41:23.9748300Z for(long i1=0; i1<16; i1+=1) 2023-01-11T21:41:23.9748396Z { 2023-01-11T21:41:23.9748492Z { 2023-01-11T21:41:23.9748583Z { 2023-01-11T21:41:23.9748727Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:23.9748882Z auto tmp3 = in_ptr1[i1 + (16*i0)]; 2023-01-11T21:41:23.9749052Z auto tmp1 = static_cast<float>(3.5); 2023-01-11T21:41:23.9749222Z auto tmp2 = static_cast<long>(tmp1); 2023-01-11T21:41:23.9749377Z auto tmp4 = tmp0 ?
tmp2 : tmp3; 2023-01-11T21:41:23.9749524Z out_ptr0[i1 + (16*i0)] = tmp4; 2023-01-11T21:41:23.9749625Z } 2023-01-11T21:41:23.9749705Z } 2023-01-11T21:41:23.9749802Z } 2023-01-11T21:41:23.9750014Z } 2023-01-11T21:41:23.9750113Z } 2023-01-11T21:41:23.9750204Z } 2023-01-11T21:41:23.9750338Z ''') 2023-01-11T21:41:23.9750346Z 2023-01-11T21:41:23.9750353Z 2023-01-11T21:41:23.9750494Z async_compile.wait(globals()) 2023-01-11T21:41:23.9750591Z del async_compile 2023-01-11T21:41:23.9750599Z 2023-01-11T21:41:23.9750705Z def call(args): 2023-01-11T21:41:23.9750819Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9750927Z args.clear() 2023-01-11T21:41:23.9751250Z buf0 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9751509Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.9751608Z del arg0_1 2023-01-11T21:41:23.9751694Z del arg1_1 2023-01-11T21:41:23.9751799Z return (buf0, ) 2023-01-11T21:41:23.9751807Z 2023-01-11T21:41:23.9751813Z 2023-01-11T21:41:23.9751927Z if __name__ == "__main__": 2023-01-11T21:41:23.9752105Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9752301Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9752624Z arg0_1 = rand_strided((1, 16), (16, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:23.9752942Z arg1_1 = rand_strided((16, 16), (16, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9753117Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9753125Z 2023-01-11T21:41:23.9753211Z ok (3.141s) 2023-01-11T21:41:23.9754064Z test_max_min_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9754266Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9754707Z [2023-01-11 21:36:20,422] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 361 2023-01-11T21:41:23.9755152Z [2023-01-11 21:36:22,068] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 361 2023-01-11T21:41:23.9755864Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9756059Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9756486Z [2023-01-11 21:36:22,085] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 362 2023-01-11T21:41:23.9756934Z [2023-01-11 21:36:22,095] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 362 2023-01-11T21:41:23.9757025Z 2023-01-11T21:41:23.9757175Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9757282Z import torch 2023-01-11T21:41:23.9757375Z import random 2023-01-11T21:41:23.9757555Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9757748Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9757756Z 2023-01-11T21:41:23.9757879Z aten = torch.ops.aten 2023-01-11T21:41:23.9758092Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9758234Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9758241Z 2023-01-11T21:41:23.9758248Z 2023-01-11T21:41:23.9758467Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9758800Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9758973Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.9759134Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.9759351Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.9759503Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.9759594Z { 2023-01-11T21:41:23.9759747Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9759839Z { 2023-01-11T21:41:23.9759943Z #pragma omp for 2023-01-11T21:41:23.9760067Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.9760161Z { 2023-01-11T21:41:23.9760374Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.9760580Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.9760751Z auto tmp2 = at::vec::maximum(tmp0, tmp1); 2023-01-11T21:41:23.9760920Z auto tmp3 = at::vec::minimum(tmp0, tmp1); 2023-01-11T21:41:23.9761049Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.9761188Z tmp3.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.9761280Z } 2023-01-11T21:41:23.9761430Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.9761562Z for(long i0=8; i0<8; i0+=1) 2023-01-11T21:41:23.9761657Z { 2023-01-11T21:41:23.9761789Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.9761906Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.9762102Z auto tmp2 = (tmp1 != tmp1) ? tmp1 : std::max(tmp0, tmp1); 2023-01-11T21:41:23.9762295Z auto tmp3 = (tmp1 != tmp1) ? 
tmp1 : std::min(tmp0, tmp1); 2023-01-11T21:41:23.9762421Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.9762540Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.9762635Z } 2023-01-11T21:41:23.9762728Z } 2023-01-11T21:41:23.9762808Z } 2023-01-11T21:41:23.9762943Z ''') 2023-01-11T21:41:23.9762953Z 2023-01-11T21:41:23.9762960Z 2023-01-11T21:41:23.9763101Z async_compile.wait(globals()) 2023-01-11T21:41:23.9763216Z del async_compile 2023-01-11T21:41:23.9763223Z 2023-01-11T21:41:23.9763332Z def call(args): 2023-01-11T21:41:23.9763446Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9763557Z args.clear() 2023-01-11T21:41:23.9763880Z buf0 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9764183Z buf1 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9764498Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9764603Z del arg0_1 2023-01-11T21:41:23.9764707Z del arg1_1 2023-01-11T21:41:23.9764828Z return (buf0, buf1, ) 2023-01-11T21:41:23.9764835Z 2023-01-11T21:41:23.9764842Z 2023-01-11T21:41:23.9764958Z if __name__ == "__main__": 2023-01-11T21:41:23.9765144Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9765347Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9765660Z arg0_1 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9765976Z arg1_1 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9766247Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9766254Z 2023-01-11T21:41:23.9766261Z 2023-01-11T21:41:23.9766409Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9766518Z import torch 2023-01-11T21:41:23.9766628Z import random 2023-01-11T21:41:23.9766808Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9766986Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9767007Z 2023-01-11T21:41:23.9767113Z aten = torch.ops.aten 2023-01-11T21:41:23.9767323Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9767467Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9767475Z 2023-01-11T21:41:23.9767482Z 2023-01-11T21:41:23.9767704Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9768032Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9768221Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.9768456Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.9768621Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.9768764Z float* __restrict__ out_ptr1) 2023-01-11T21:41:23.9768862Z { 2023-01-11T21:41:23.9769020Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9769116Z { 2023-01-11T21:41:23.9769239Z #pragma omp for 2023-01-11T21:41:23.9769367Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.9769450Z { 2023-01-11T21:41:23.9769668Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.9769883Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:23.9770052Z auto tmp2 = at::vec::maximum(tmp0, tmp1); 2023-01-11T21:41:23.9770219Z auto tmp3 = at::vec::minimum(tmp0, tmp1); 2023-01-11T21:41:23.9770362Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:23.9770510Z tmp3.store(out_ptr1 + 8*i0); 2023-01-11T21:41:23.9770605Z } 2023-01-11T21:41:23.9770736Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.9770862Z 
for(long i0=8; i0<8; i0+=1) 2023-01-11T21:41:23.9770955Z { 2023-01-11T21:41:23.9771087Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.9771216Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:23.9771405Z auto tmp2 = (tmp1 != tmp1) ? tmp1 : std::max(tmp0, tmp1); 2023-01-11T21:41:23.9771595Z auto tmp3 = (tmp1 != tmp1) ? tmp1 : std::min(tmp0, tmp1); 2023-01-11T21:41:23.9771706Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.9771826Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:23.9771924Z } 2023-01-11T21:41:23.9772019Z } 2023-01-11T21:41:23.9772112Z } 2023-01-11T21:41:23.9772250Z ''') 2023-01-11T21:41:23.9772260Z 2023-01-11T21:41:23.9772266Z 2023-01-11T21:41:23.9772407Z async_compile.wait(globals()) 2023-01-11T21:41:23.9772503Z del async_compile 2023-01-11T21:41:23.9772532Z 2023-01-11T21:41:23.9772625Z def call(args): 2023-01-11T21:41:23.9772740Z arg0_1, arg1_1 = args 2023-01-11T21:41:23.9772855Z args.clear() 2023-01-11T21:41:23.9773179Z buf0 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9773499Z buf1 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9773809Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9773915Z del arg0_1 2023-01-11T21:41:23.9774003Z del arg1_1 2023-01-11T21:41:23.9774125Z return (buf0, buf1, ) 2023-01-11T21:41:23.9774132Z 2023-01-11T21:41:23.9774140Z 2023-01-11T21:41:23.9774261Z if __name__ == "__main__": 2023-01-11T21:41:23.9774444Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9774643Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9774973Z arg0_1 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9775371Z arg1_1 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9775541Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:23.9775563Z 2023-01-11T21:41:23.9775646Z ok (1.699s) 2023-01-11T21:41:23.9776466Z test_max_pool2d1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9776679Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9777144Z [2023-01-11 21:36:22,117] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 363 2023-01-11T21:41:23.9777673Z [2023-01-11 21:36:23,787] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 363 2023-01-11T21:41:23.9777688Z 2023-01-11T21:41:23.9777841Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9777956Z import torch 2023-01-11T21:41:23.9778068Z import random 2023-01-11T21:41:23.9778258Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9778444Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9778451Z 2023-01-11T21:41:23.9778574Z aten = torch.ops.aten 2023-01-11T21:41:23.9778797Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9778948Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9778955Z 2023-01-11T21:41:23.9778962Z 2023-01-11T21:41:23.9779189Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9779533Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9779731Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.9779896Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.9780038Z long* __restrict__ out_ptr1) 2023-01-11T21:41:23.9780130Z { 2023-01-11T21:41:23.9780288Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9780388Z { 2023-01-11T21:41:23.9780510Z #pragma omp for 2023-01-11T21:41:23.9780638Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.9780736Z { 2023-01-11T21:41:23.9780847Z #pragma GCC ivdep 2023-01-11T21:41:23.9780975Z for(long i1=0; i1<7; i1+=1) 2023-01-11T21:41:23.9781075Z { 2023-01-11T21:41:23.9781203Z #pragma GCC ivdep 2023-01-11T21:41:23.9781344Z for(long i2=0; i2<7; i2+=1) 2023-01-11T21:41:23.9781444Z { 2023-01-11T21:41:23.9781537Z { 2023-01-11T21:41:23.9781644Z { 2023-01-11T21:41:23.9781821Z auto tmp0 = in_ptr0[(2*i2) + (32*i1) + (256*i0)]; 2023-01-11T21:41:23.9782009Z auto tmp1 = in_ptr0[1 + (2*i2) + (32*i1) + (256*i0)]; 2023-01-11T21:41:23.9782187Z auto tmp3 = in_ptr0[2 + (2*i2) + (32*i1) + (256*i0)]; 2023-01-11T21:41:23.9782523Z auto tmp5 = in_ptr0[16 + (2*i2) + (32*i1) + (256*i0)]; 2023-01-11T21:41:23.9782705Z auto tmp7 = in_ptr0[17 + (2*i2) + (32*i1) + (256*i0)]; 2023-01-11T21:41:23.9782873Z auto tmp9 = in_ptr0[18 + (2*i2) + (32*i1) + (256*i0)]; 2023-01-11T21:41:23.9783033Z auto tmp11 = in_ptr0[32 + (2*i2) + (32*i1) + (256*i0)]; 2023-01-11T21:41:23.9783206Z auto tmp13 = in_ptr0[33 + (2*i2) + (32*i1) + (256*i0)]; 2023-01-11T21:41:23.9783374Z auto tmp15 = in_ptr0[34 + (2*i2) + (32*i1) + (256*i0)]; 2023-01-11T21:41:23.9783584Z auto tmp2 = (tmp0 != tmp0) ? tmp0 : std::max(tmp1, tmp0); 2023-01-11T21:41:23.9783891Z auto tmp4 = (tmp2 != tmp2) ? tmp2 : std::max(tmp3, tmp2); 2023-01-11T21:41:23.9784082Z auto tmp6 = (tmp4 != tmp4) ? tmp4 : std::max(tmp5, tmp4); 2023-01-11T21:41:23.9784274Z auto tmp8 = (tmp6 != tmp6) ? tmp6 : std::max(tmp7, tmp6); 2023-01-11T21:41:23.9784474Z auto tmp10 = (tmp8 != tmp8) ? tmp8 : std::max(tmp9, tmp8); 2023-01-11T21:41:23.9784683Z auto tmp12 = (tmp10 != tmp10) ? tmp10 : std::max(tmp11, tmp10); 2023-01-11T21:41:23.9784887Z auto tmp14 = (tmp12 != tmp12) ? tmp12 : std::max(tmp13, tmp12); 2023-01-11T21:41:23.9785074Z auto tmp16 = (tmp14 != tmp14) ? 
tmp14 : std::max(tmp15, tmp14); 2023-01-11T21:41:23.9785256Z auto tmp17 = static_cast((2*i2) + (32*i1)); 2023-01-11T21:41:23.9785438Z auto tmp18 = static_cast(1 + (2*i2) + (32*i1)); 2023-01-11T21:41:23.9785660Z auto tmp19 = tmp1 > tmp0; 2023-01-11T21:41:23.9785823Z auto tmp20 = tmp19 ? tmp18 : tmp17; 2023-01-11T21:41:23.9786006Z auto tmp21 = static_cast(2 + (2*i2) + (32*i1)); 2023-01-11T21:41:23.9786149Z auto tmp22 = tmp3 > tmp2; 2023-01-11T21:41:23.9786305Z auto tmp23 = tmp22 ? tmp21 : tmp20; 2023-01-11T21:41:23.9786475Z auto tmp24 = static_cast(16 + (2*i2) + (32*i1)); 2023-01-11T21:41:23.9786618Z auto tmp25 = tmp5 > tmp4; 2023-01-11T21:41:23.9786780Z auto tmp26 = tmp25 ? tmp24 : tmp23; 2023-01-11T21:41:23.9786964Z auto tmp27 = static_cast(17 + (2*i2) + (32*i1)); 2023-01-11T21:41:23.9787108Z auto tmp28 = tmp7 > tmp6; 2023-01-11T21:41:23.9787264Z auto tmp29 = tmp28 ? tmp27 : tmp26; 2023-01-11T21:41:23.9787453Z auto tmp30 = static_cast(18 + (2*i2) + (32*i1)); 2023-01-11T21:41:23.9787584Z auto tmp31 = tmp9 > tmp8; 2023-01-11T21:41:23.9787743Z auto tmp32 = tmp31 ? tmp30 : tmp29; 2023-01-11T21:41:23.9787923Z auto tmp33 = static_cast(32 + (2*i2) + (32*i1)); 2023-01-11T21:41:23.9788074Z auto tmp34 = tmp11 > tmp10; 2023-01-11T21:41:23.9788233Z auto tmp35 = tmp34 ? tmp33 : tmp32; 2023-01-11T21:41:23.9788416Z auto tmp36 = static_cast(33 + (2*i2) + (32*i1)); 2023-01-11T21:41:23.9788561Z auto tmp37 = tmp13 > tmp12; 2023-01-11T21:41:23.9788718Z auto tmp38 = tmp37 ? tmp36 : tmp35; 2023-01-11T21:41:23.9788883Z auto tmp39 = static_cast(34 + (2*i2) + (32*i1)); 2023-01-11T21:41:23.9789032Z auto tmp40 = tmp15 > tmp14; 2023-01-11T21:41:23.9789198Z auto tmp41 = tmp40 ? tmp39 : tmp38; 2023-01-11T21:41:23.9789356Z out_ptr0[i2 + (7*i1) + (49*i0)] = tmp16; 2023-01-11T21:41:23.9789514Z out_ptr1[i2 + (7*i1) + (49*i0)] = tmp41; 2023-01-11T21:41:23.9789618Z } 2023-01-11T21:41:23.9789717Z } 2023-01-11T21:41:23.9789797Z } 2023-01-11T21:41:23.9789890Z } 2023-01-11T21:41:23.9789983Z } 2023-01-11T21:41:23.9790076Z } 2023-01-11T21:41:23.9790166Z } 2023-01-11T21:41:23.9790313Z ''') 2023-01-11T21:41:23.9790324Z 2023-01-11T21:41:23.9790330Z 2023-01-11T21:41:23.9790474Z async_compile.wait(globals()) 2023-01-11T21:41:23.9790569Z del async_compile 2023-01-11T21:41:23.9790590Z 2023-01-11T21:41:23.9790681Z def call(args): 2023-01-11T21:41:23.9790787Z arg0_1, = args 2023-01-11T21:41:23.9790890Z args.clear() 2023-01-11T21:41:23.9791320Z buf0 = empty_strided((2, 4, 7, 7), (196, 49, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9791670Z buf1 = empty_strided((2, 4, 7, 7), (196, 49, 7, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9791935Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9792037Z del arg0_1 2023-01-11T21:41:23.9792142Z return (buf0, buf1, ) 2023-01-11T21:41:23.9792150Z 2023-01-11T21:41:23.9792157Z 2023-01-11T21:41:23.9792273Z if __name__ == "__main__": 2023-01-11T21:41:23.9792458Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9792657Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9793023Z arg0_1 = rand_strided((2, 4, 16, 16), (1024, 256, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9793193Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9793202Z 2023-01-11T21:41:23.9793302Z ok (1.691s) 2023-01-11T21:41:23.9794257Z test_max_pool2d2_cpu (__main__.CpuTests) ... 
/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9794465Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9794896Z [2023-01-11 21:36:23,862] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 364 2023-01-11T21:41:23.9795340Z [2023-01-11 21:36:25,525] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 364 2023-01-11T21:41:23.9795348Z 2023-01-11T21:41:23.9795495Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9795604Z import torch 2023-01-11T21:41:23.9795712Z import random 2023-01-11T21:41:23.9795899Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9796101Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9796110Z 2023-01-11T21:41:23.9796234Z aten = torch.ops.aten 2023-01-11T21:41:23.9796437Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9796578Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9796585Z 2023-01-11T21:41:23.9796592Z 2023-01-11T21:41:23.9796814Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9797148Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9797333Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.9797488Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.9797636Z long* __restrict__ out_ptr1) 2023-01-11T21:41:23.9797729Z { 2023-01-11T21:41:23.9797869Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9797959Z { 2023-01-11T21:41:23.9798075Z #pragma omp for 2023-01-11T21:41:23.9798208Z for(long i0=0; i0<1024; i0+=1) 2023-01-11T21:41:23.9798304Z { 2023-01-11T21:41:23.9798425Z #pragma GCC ivdep 2023-01-11T21:41:23.9798553Z for(long i1=0; i1<27; i1+=1) 2023-01-11T21:41:23.9798636Z { 2023-01-11T21:41:23.9798760Z #pragma GCC ivdep 2023-01-11T21:41:23.9798896Z for(long i2=0; i2<27; i2+=1) 2023-01-11T21:41:23.9798996Z { 2023-01-11T21:41:23.9799101Z { 2023-01-11T21:41:23.9799204Z { 2023-01-11T21:41:23.9799365Z auto tmp0 = in_ptr0[(2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.9799543Z auto tmp1 = in_ptr0[1 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.9799717Z auto tmp3 = in_ptr0[2 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.9799893Z auto tmp5 = in_ptr0[55 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.9800154Z auto tmp7 = in_ptr0[56 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.9800325Z auto tmp9 = in_ptr0[57 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.9800504Z auto tmp11 = in_ptr0[110 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.9800679Z auto tmp13 = in_ptr0[111 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.9800854Z auto tmp15 = in_ptr0[112 + (2*i2) + (110*i1) + (3025*i0)]; 2023-01-11T21:41:23.9801045Z auto tmp2 = (tmp0 != tmp0) ? tmp0 : std::max(tmp1, tmp0); 2023-01-11T21:41:23.9801238Z auto tmp4 = (tmp2 != tmp2) ? tmp2 : std::max(tmp3, tmp2); 2023-01-11T21:41:23.9801427Z auto tmp6 = (tmp4 != tmp4) ? tmp4 : std::max(tmp5, tmp4); 2023-01-11T21:41:23.9801616Z auto tmp8 = (tmp6 != tmp6) ? tmp6 : std::max(tmp7, tmp6); 2023-01-11T21:41:23.9801899Z auto tmp10 = (tmp8 != tmp8) ? 
tmp8 : std::max(tmp9, tmp8); 2023-01-11T21:41:23.9802107Z auto tmp12 = (tmp10 != tmp10) ? tmp10 : std::max(tmp11, tmp10); 2023-01-11T21:41:23.9802308Z auto tmp14 = (tmp12 != tmp12) ? tmp12 : std::max(tmp13, tmp12); 2023-01-11T21:41:23.9802509Z auto tmp16 = (tmp14 != tmp14) ? tmp14 : std::max(tmp15, tmp14); 2023-01-11T21:41:23.9802689Z auto tmp17 = static_cast((2*i2) + (110*i1)); 2023-01-11T21:41:23.9802858Z auto tmp18 = static_cast(1 + (2*i2) + (110*i1)); 2023-01-11T21:41:23.9803006Z auto tmp19 = tmp1 > tmp0; 2023-01-11T21:41:23.9803163Z auto tmp20 = tmp19 ? tmp18 : tmp17; 2023-01-11T21:41:23.9803350Z auto tmp21 = static_cast(2 + (2*i2) + (110*i1)); 2023-01-11T21:41:23.9803502Z auto tmp22 = tmp3 > tmp2; 2023-01-11T21:41:23.9803662Z auto tmp23 = tmp22 ? tmp21 : tmp20; 2023-01-11T21:41:23.9803849Z auto tmp24 = static_cast(55 + (2*i2) + (110*i1)); 2023-01-11T21:41:23.9803994Z auto tmp25 = tmp5 > tmp4; 2023-01-11T21:41:23.9804139Z auto tmp26 = tmp25 ? tmp24 : tmp23; 2023-01-11T21:41:23.9804322Z auto tmp27 = static_cast(56 + (2*i2) + (110*i1)); 2023-01-11T21:41:23.9804468Z auto tmp28 = tmp7 > tmp6; 2023-01-11T21:41:23.9804625Z auto tmp29 = tmp28 ? tmp27 : tmp26; 2023-01-11T21:41:23.9804809Z auto tmp30 = static_cast(57 + (2*i2) + (110*i1)); 2023-01-11T21:41:23.9804953Z auto tmp31 = tmp9 > tmp8; 2023-01-11T21:41:23.9805108Z auto tmp32 = tmp31 ? tmp30 : tmp29; 2023-01-11T21:41:23.9805299Z auto tmp33 = static_cast(110 + (2*i2) + (110*i1)); 2023-01-11T21:41:23.9805434Z auto tmp34 = tmp11 > tmp10; 2023-01-11T21:41:23.9805591Z auto tmp35 = tmp34 ? tmp33 : tmp32; 2023-01-11T21:41:23.9805775Z auto tmp36 = static_cast(111 + (2*i2) + (110*i1)); 2023-01-11T21:41:23.9805923Z auto tmp37 = tmp13 > tmp12; 2023-01-11T21:41:23.9806074Z auto tmp38 = tmp37 ? tmp36 : tmp35; 2023-01-11T21:41:23.9806254Z auto tmp39 = static_cast(112 + (2*i2) + (110*i1)); 2023-01-11T21:41:23.9806399Z auto tmp40 = tmp15 > tmp14; 2023-01-11T21:41:23.9806541Z auto tmp41 = tmp40 ? 
tmp39 : tmp38; 2023-01-11T21:41:23.9806702Z out_ptr0[i2 + (27*i1) + (729*i0)] = tmp16; 2023-01-11T21:41:23.9806849Z out_ptr1[i2 + (27*i1) + (729*i0)] = tmp41; 2023-01-11T21:41:23.9807031Z } 2023-01-11T21:41:23.9807128Z } 2023-01-11T21:41:23.9807218Z } 2023-01-11T21:41:23.9807310Z } 2023-01-11T21:41:23.9807380Z } 2023-01-11T21:41:23.9807460Z } 2023-01-11T21:41:23.9807543Z } 2023-01-11T21:41:23.9807678Z ''') 2023-01-11T21:41:23.9807687Z 2023-01-11T21:41:23.9807693Z 2023-01-11T21:41:23.9807829Z async_compile.wait(globals()) 2023-01-11T21:41:23.9807936Z del async_compile 2023-01-11T21:41:23.9807943Z 2023-01-11T21:41:23.9808043Z def call(args): 2023-01-11T21:41:23.9808143Z arg0_1, = args 2023-01-11T21:41:23.9808226Z args.clear() 2023-01-11T21:41:23.9808568Z buf0 = empty_strided((16, 64, 27, 27), (46656, 729, 27, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9808896Z buf1 = empty_strided((16, 64, 27, 27), (46656, 729, 27, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9809201Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9809304Z del arg0_1 2023-01-11T21:41:23.9809414Z return (buf0, buf1, ) 2023-01-11T21:41:23.9809421Z 2023-01-11T21:41:23.9809427Z 2023-01-11T21:41:23.9809536Z if __name__ == "__main__": 2023-01-11T21:41:23.9809704Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9809864Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9810209Z arg0_1 = rand_strided((16, 64, 55, 55), (193600, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9810365Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9810372Z 2023-01-11T21:41:23.9810467Z ok (1.800s) 2023-01-11T21:41:23.9811184Z test_max_pool2d3_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9811378Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9811789Z [2023-01-11 21:36:25,610] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 365 2023-01-11T21:41:23.9811796Z 2023-01-11T21:41:23.9811931Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9812034Z import torch 2023-01-11T21:41:23.9812130Z import random 2023-01-11T21:41:23.9812310Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9812514Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9812523Z 2023-01-11T21:41:23.9812646Z aten = torch.ops.aten 2023-01-11T21:41:23.9812870Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9813019Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9813028Z 2023-01-11T21:41:23.9813034Z 2023-01-11T21:41:23.9813260Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9813571Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9813738Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.9813883Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.9814033Z long* __restrict__ out_ptr1) 2023-01-11T21:41:23.9814124Z { 2023-01-11T21:41:23.9814271Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9814364Z { 2023-01-11T21:41:23.9814482Z #pragma omp for 2023-01-11T21:41:23.9814594Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:23.9814687Z { 2023-01-11T21:41:23.9814816Z #pragma GCC ivdep 2023-01-11T21:41:23.9814943Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:23.9815042Z { 2023-01-11T21:41:23.9815137Z { 2023-01-11T21:41:23.9815234Z { 2023-01-11T21:41:23.9815586Z auto tmp0 = static_cast((-1) + (2*i0)); 2023-01-11T21:41:23.9815748Z auto tmp1 = static_cast(0); 2023-01-11T21:41:23.9815894Z auto tmp2 = tmp0 >= tmp1; 2023-01-11T21:41:23.9816050Z auto tmp3 = static_cast(8); 2023-01-11T21:41:23.9816196Z auto tmp4 = tmp0 < tmp3; 2023-01-11T21:41:23.9816326Z auto tmp5 = tmp2 & tmp4; 2023-01-11T21:41:23.9816615Z auto tmp6 = static_cast((-1) + (2*i1)); 2023-01-11T21:41:23.9816749Z auto tmp7 = tmp6 >= tmp1; 2023-01-11T21:41:23.9816881Z auto tmp8 = tmp6 < tmp3; 2023-01-11T21:41:23.9817014Z auto tmp9 = tmp7 & tmp8; 2023-01-11T21:41:23.9817150Z auto tmp10 = tmp5 & tmp9; 2023-01-11T21:41:23.9817520Z float tmp11 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9817702Z if(tmp10) 2023-01-11T21:41:23.9817801Z { 2023-01-11T21:41:23.9818073Z auto tmp12 = in_ptr0[(-9) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.9818204Z tmp11 = tmp12; 2023-01-11T21:41:23.9818310Z } 2023-01-11T21:41:23.9818481Z auto tmp13 = static_cast(2*i1); 2023-01-11T21:41:23.9818629Z auto tmp14 = tmp13 >= tmp1; 2023-01-11T21:41:23.9818775Z auto tmp15 = tmp13 < tmp3; 2023-01-11T21:41:23.9818920Z auto tmp16 = tmp14 & tmp15; 2023-01-11T21:41:23.9819063Z auto tmp17 = tmp5 & tmp16; 2023-01-11T21:41:23.9819418Z float tmp18 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9819533Z if(tmp17) 2023-01-11T21:41:23.9819636Z { 2023-01-11T21:41:23.9819919Z auto tmp19 = in_ptr0[(-8) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.9820051Z tmp18 = tmp19; 2023-01-11T21:41:23.9820152Z } 2023-01-11T21:41:23.9820355Z auto tmp20 = (tmp11 != tmp11) ? 
tmp11 : std::max(tmp18, tmp11); 2023-01-11T21:41:23.9820516Z auto tmp21 = static_cast(1 + (2*i1)); 2023-01-11T21:41:23.9820661Z auto tmp22 = tmp21 >= tmp1; 2023-01-11T21:41:23.9820800Z auto tmp23 = tmp21 < tmp3; 2023-01-11T21:41:23.9820936Z auto tmp24 = tmp22 & tmp23; 2023-01-11T21:41:23.9821077Z auto tmp25 = tmp5 & tmp24; 2023-01-11T21:41:23.9821435Z float tmp26 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9821548Z if(tmp25) 2023-01-11T21:41:23.9821650Z { 2023-01-11T21:41:23.9821913Z auto tmp27 = in_ptr0[(-7) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.9822043Z tmp26 = tmp27; 2023-01-11T21:41:23.9822148Z } 2023-01-11T21:41:23.9822492Z auto tmp28 = (tmp20 != tmp20) ? tmp20 : std::max(tmp26, tmp20); 2023-01-11T21:41:23.9822665Z auto tmp29 = static_cast(2*i0); 2023-01-11T21:41:23.9822810Z auto tmp30 = tmp29 >= tmp1; 2023-01-11T21:41:23.9822955Z auto tmp31 = tmp29 < tmp3; 2023-01-11T21:41:23.9823084Z auto tmp32 = tmp30 & tmp31; 2023-01-11T21:41:23.9823227Z auto tmp33 = tmp32 & tmp9; 2023-01-11T21:41:23.9823593Z float tmp34 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9823708Z if(tmp33) 2023-01-11T21:41:23.9823810Z { 2023-01-11T21:41:23.9824085Z auto tmp35 = in_ptr0[(-1) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.9824316Z tmp34 = tmp35; 2023-01-11T21:41:23.9824420Z } 2023-01-11T21:41:23.9824606Z auto tmp36 = (tmp28 != tmp28) ? tmp28 : std::max(tmp34, tmp28); 2023-01-11T21:41:23.9824748Z auto tmp37 = tmp32 & tmp16; 2023-01-11T21:41:23.9825098Z float tmp38 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9825207Z if(tmp37) 2023-01-11T21:41:23.9825304Z { 2023-01-11T21:41:23.9825457Z auto tmp39 = in_ptr0[(2*i1) + (16*i0)]; 2023-01-11T21:41:23.9825582Z tmp38 = tmp39; 2023-01-11T21:41:23.9825671Z } 2023-01-11T21:41:23.9825866Z auto tmp40 = (tmp36 != tmp36) ? tmp36 : std::max(tmp38, tmp36); 2023-01-11T21:41:23.9825995Z auto tmp41 = tmp32 & tmp24; 2023-01-11T21:41:23.9826371Z float tmp42 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9826474Z if(tmp41) 2023-01-11T21:41:23.9826563Z { 2023-01-11T21:41:23.9826715Z auto tmp43 = in_ptr0[1 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.9826824Z tmp42 = tmp43; 2023-01-11T21:41:23.9826901Z } 2023-01-11T21:41:23.9827073Z auto tmp44 = (tmp40 != tmp40) ? tmp40 : std::max(tmp42, tmp40); 2023-01-11T21:41:23.9827220Z auto tmp45 = static_cast(1 + (2*i0)); 2023-01-11T21:41:23.9827342Z auto tmp46 = tmp45 >= tmp1; 2023-01-11T21:41:23.9827461Z auto tmp47 = tmp45 < tmp3; 2023-01-11T21:41:23.9827590Z auto tmp48 = tmp46 & tmp47; 2023-01-11T21:41:23.9827716Z auto tmp49 = tmp48 & tmp9; 2023-01-11T21:41:23.9828025Z float tmp50 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9828134Z if(tmp49) 2023-01-11T21:41:23.9828227Z { 2023-01-11T21:41:23.9828379Z auto tmp51 = in_ptr0[7 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.9828495Z tmp50 = tmp51; 2023-01-11T21:41:23.9828588Z } 2023-01-11T21:41:23.9828772Z auto tmp52 = (tmp44 != tmp44) ? tmp44 : std::max(tmp50, tmp44); 2023-01-11T21:41:23.9828906Z auto tmp53 = tmp48 & tmp16; 2023-01-11T21:41:23.9829228Z float tmp54 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9829336Z if(tmp53) 2023-01-11T21:41:23.9829431Z { 2023-01-11T21:41:23.9829591Z auto tmp55 = in_ptr0[8 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.9829713Z tmp54 = tmp55; 2023-01-11T21:41:23.9829818Z } 2023-01-11T21:41:23.9830022Z auto tmp56 = (tmp52 != tmp52) ? 
tmp52 : std::max(tmp54, tmp52); 2023-01-11T21:41:23.9830166Z auto tmp57 = tmp48 & tmp24; 2023-01-11T21:41:23.9830513Z float tmp58 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9830627Z if(tmp57) 2023-01-11T21:41:23.9830729Z { 2023-01-11T21:41:23.9830895Z auto tmp59 = in_ptr0[9 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.9831018Z tmp58 = tmp59; 2023-01-11T21:41:23.9831117Z } 2023-01-11T21:41:23.9831315Z auto tmp60 = (tmp56 != tmp56) ? tmp56 : std::max(tmp58, tmp56); 2023-01-11T21:41:23.9831654Z float tmp61 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9831766Z if(tmp10) 2023-01-11T21:41:23.9831868Z { 2023-01-11T21:41:23.9832147Z auto tmp62 = in_ptr0[(-9) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.9832355Z tmp61 = tmp62; 2023-01-11T21:41:23.9832461Z } 2023-01-11T21:41:23.9832756Z auto tmp63 = static_cast((-9) + (2*i1) + (16*i0)); 2023-01-11T21:41:23.9833111Z float tmp64 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9833211Z if(tmp17) 2023-01-11T21:41:23.9833314Z { 2023-01-11T21:41:23.9833591Z auto tmp65 = in_ptr0[(-8) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.9833718Z tmp64 = tmp65; 2023-01-11T21:41:23.9833917Z } 2023-01-11T21:41:23.9834219Z auto tmp66 = static_cast((-8) + (2*i1) + (16*i0)); 2023-01-11T21:41:23.9834364Z auto tmp67 = tmp64 > tmp61; 2023-01-11T21:41:23.9834508Z auto tmp68 = tmp67 ? tmp66 : tmp63; 2023-01-11T21:41:23.9834783Z auto tmp69 = (tmp61 != tmp61) ? tmp61 : std::max(tmp64, tmp61); 2023-01-11T21:41:23.9835150Z float tmp70 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9835265Z if(tmp25) 2023-01-11T21:41:23.9835371Z { 2023-01-11T21:41:23.9835647Z auto tmp71 = in_ptr0[(-7) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.9835773Z tmp70 = tmp71; 2023-01-11T21:41:23.9835875Z } 2023-01-11T21:41:23.9836154Z auto tmp72 = static_cast((-7) + (2*i1) + (16*i0)); 2023-01-11T21:41:23.9836296Z auto tmp73 = tmp70 > tmp69; 2023-01-11T21:41:23.9836456Z auto tmp74 = tmp73 ? tmp72 : tmp68; 2023-01-11T21:41:23.9836661Z auto tmp75 = (tmp69 != tmp69) ? tmp69 : std::max(tmp70, tmp69); 2023-01-11T21:41:23.9837011Z float tmp76 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9837122Z if(tmp33) 2023-01-11T21:41:23.9837225Z { 2023-01-11T21:41:23.9837488Z auto tmp77 = in_ptr0[(-1) + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.9837615Z tmp76 = tmp77; 2023-01-11T21:41:23.9837718Z } 2023-01-11T21:41:23.9838013Z auto tmp78 = static_cast((-1) + (2*i1) + (16*i0)); 2023-01-11T21:41:23.9838154Z auto tmp79 = tmp76 > tmp75; 2023-01-11T21:41:23.9838303Z auto tmp80 = tmp79 ? tmp78 : tmp74; 2023-01-11T21:41:23.9838500Z auto tmp81 = (tmp75 != tmp75) ? tmp75 : std::max(tmp76, tmp75); 2023-01-11T21:41:23.9838855Z float tmp82 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9838953Z if(tmp37) 2023-01-11T21:41:23.9839058Z { 2023-01-11T21:41:23.9839228Z auto tmp83 = in_ptr0[(2*i1) + (16*i0)]; 2023-01-11T21:41:23.9839350Z tmp82 = tmp83; 2023-01-11T21:41:23.9839451Z } 2023-01-11T21:41:23.9839628Z auto tmp84 = static_cast((2*i1) + (16*i0)); 2023-01-11T21:41:23.9839769Z auto tmp85 = tmp82 > tmp81; 2023-01-11T21:41:23.9839912Z auto tmp86 = tmp85 ? tmp84 : tmp80; 2023-01-11T21:41:23.9840111Z auto tmp87 = (tmp81 != tmp81) ? 
tmp81 : std::max(tmp82, tmp81); 2023-01-11T21:41:23.9840466Z float tmp88 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9840577Z if(tmp41) 2023-01-11T21:41:23.9840680Z { 2023-01-11T21:41:23.9840839Z auto tmp89 = in_ptr0[1 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.9840966Z tmp88 = tmp89; 2023-01-11T21:41:23.9841144Z } 2023-01-11T21:41:23.9841309Z auto tmp90 = static_cast(1 + (2*i1) + (16*i0)); 2023-01-11T21:41:23.9841448Z auto tmp91 = tmp88 > tmp87; 2023-01-11T21:41:23.9841604Z auto tmp92 = tmp91 ? tmp90 : tmp86; 2023-01-11T21:41:23.9841806Z auto tmp93 = (tmp87 != tmp87) ? tmp87 : std::max(tmp88, tmp87); 2023-01-11T21:41:23.9842168Z float tmp94 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9842285Z if(tmp49) 2023-01-11T21:41:23.9842386Z { 2023-01-11T21:41:23.9842549Z auto tmp95 = in_ptr0[7 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.9842664Z tmp94 = tmp95; 2023-01-11T21:41:23.9842761Z } 2023-01-11T21:41:23.9842940Z auto tmp96 = static_cast(7 + (2*i1) + (16*i0)); 2023-01-11T21:41:23.9843149Z auto tmp97 = tmp94 > tmp93; 2023-01-11T21:41:23.9843308Z auto tmp98 = tmp97 ? tmp96 : tmp92; 2023-01-11T21:41:23.9843501Z auto tmp99 = (tmp93 != tmp93) ? tmp93 : std::max(tmp94, tmp93); 2023-01-11T21:41:23.9843863Z float tmp100 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9843970Z if(tmp53) 2023-01-11T21:41:23.9844055Z { 2023-01-11T21:41:23.9844219Z auto tmp101 = in_ptr0[8 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.9844349Z tmp100 = tmp101; 2023-01-11T21:41:23.9844451Z } 2023-01-11T21:41:23.9844633Z auto tmp102 = static_cast(8 + (2*i1) + (16*i0)); 2023-01-11T21:41:23.9844784Z auto tmp103 = tmp100 > tmp99; 2023-01-11T21:41:23.9844946Z auto tmp104 = tmp103 ? tmp102 : tmp98; 2023-01-11T21:41:23.9845146Z auto tmp105 = (tmp99 != tmp99) ? tmp99 : std::max(tmp100, tmp99); 2023-01-11T21:41:23.9845511Z float tmp106 = -std::numeric_limits::infinity(); 2023-01-11T21:41:23.9845629Z if(tmp57) 2023-01-11T21:41:23.9845730Z { 2023-01-11T21:41:23.9845895Z auto tmp107 = in_ptr0[9 + (2*i1) + (16*i0)]; 2023-01-11T21:41:23.9846024Z tmp106 = tmp107; 2023-01-11T21:41:23.9846125Z } 2023-01-11T21:41:23.9846302Z auto tmp108 = static_cast(9 + (2*i1) + (16*i0)); 2023-01-11T21:41:23.9846433Z auto tmp109 = tmp106 > tmp105; 2023-01-11T21:41:23.9846592Z auto tmp110 = tmp109 ? tmp108 : tmp104; 2023-01-11T21:41:23.9846800Z auto tmp111 = (tmp105 != tmp105) ? 
tmp105 : std::max(tmp106, tmp105); 2023-01-11T21:41:23.9846943Z out_ptr0[i1 + (4*i0)] = tmp60; 2023-01-11T21:41:23.9847095Z out_ptr1[i1 + (4*i0)] = tmp110; 2023-01-11T21:41:23.9847197Z } 2023-01-11T21:41:23.9847294Z } 2023-01-11T21:41:23.9847374Z } 2023-01-11T21:41:23.9847470Z } 2023-01-11T21:41:23.9847562Z } 2023-01-11T21:41:23.9847650Z } 2023-01-11T21:41:23.9847776Z ''') 2023-01-11T21:41:23.9847787Z 2023-01-11T21:41:23.9847793Z 2023-01-11T21:41:23.9847934Z async_compile.wait(globals()) 2023-01-11T21:41:23.9848042Z del async_compile 2023-01-11T21:41:23.9848049Z 2023-01-11T21:41:23.9848141Z def call(args): 2023-01-11T21:41:23.9848244Z arg0_1, = args 2023-01-11T21:41:23.9848352Z args.clear() 2023-01-11T21:41:23.9848700Z buf0 = empty_strided((1, 1, 4, 4), (16, 16, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9849035Z buf1 = empty_strided((1, 1, 4, 4), (16, 16, 4, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9849302Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9849476Z del arg0_1 2023-01-11T21:41:23.9849594Z return (buf0, buf1, ) 2023-01-11T21:41:23.9849604Z 2023-01-11T21:41:23.9849610Z 2023-01-11T21:41:23.9849711Z if __name__ == "__main__": 2023-01-11T21:41:23.9849889Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9850083Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9850433Z arg0_1 = rand_strided((1, 1, 8, 8), (64, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9850597Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9851037Z [2023-01-11 21:36:27,324] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 365 2023-01-11T21:41:23.9851045Z 2023-01-11T21:41:23.9851147Z ok (1.737s) 2023-01-11T21:41:23.9851957Z test_max_pool2d4_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9852155Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9852539Z [2023-01-11 21:36:27,364] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 366 2023-01-11T21:41:23.9852951Z [2023-01-11 21:36:29,060] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 366 2023-01-11T21:41:23.9852959Z 2023-01-11T21:41:23.9853102Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9853209Z import torch 2023-01-11T21:41:23.9853318Z import random 2023-01-11T21:41:23.9853488Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9853683Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9853691Z 2023-01-11T21:41:23.9853827Z aten = torch.ops.aten 2023-01-11T21:41:23.9854038Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9854185Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9854194Z 2023-01-11T21:41:23.9854202Z 2023-01-11T21:41:23.9854426Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9854758Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9854928Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.9855070Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.9855223Z long* __restrict__ out_ptr1) 2023-01-11T21:41:23.9855311Z { 2023-01-11T21:41:23.9855442Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9855531Z { 2023-01-11T21:41:23.9855645Z #pragma omp for 2023-01-11T21:41:23.9855770Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:23.9855865Z { 2023-01-11T21:41:23.9855996Z #pragma GCC ivdep 2023-01-11T21:41:23.9856116Z for(long i1=0; i1<55; i1+=1) 2023-01-11T21:41:23.9856187Z { 2023-01-11T21:41:23.9856301Z #pragma GCC ivdep 2023-01-11T21:41:23.9856434Z for(long i2=0; i2<55; i2+=1) 2023-01-11T21:41:23.9856534Z { 2023-01-11T21:41:23.9856637Z { 2023-01-11T21:41:23.9856738Z { 2023-01-11T21:41:23.9856894Z auto tmp0 = in_ptr0[(2*i2) + (222*i1) + (12321*i0)]; 2023-01-11T21:41:23.9857068Z auto tmp1 = in_ptr0[1 + (2*i2) + (222*i1) + (12321*i0)]; 2023-01-11T21:41:23.9857236Z auto tmp3 = in_ptr0[2 + (2*i2) + (222*i1) + (12321*i0)]; 2023-01-11T21:41:23.9857404Z auto tmp5 = in_ptr0[111 + (2*i2) + (222*i1) + (12321*i0)]; 2023-01-11T21:41:23.9857571Z auto tmp7 = in_ptr0[112 + (2*i2) + (222*i1) + (12321*i0)]; 2023-01-11T21:41:23.9857815Z auto tmp9 = in_ptr0[113 + (2*i2) + (222*i1) + (12321*i0)]; 2023-01-11T21:41:23.9857979Z auto tmp11 = in_ptr0[222 + (2*i2) + (222*i1) + (12321*i0)]; 2023-01-11T21:41:23.9858146Z auto tmp13 = in_ptr0[223 + (2*i2) + (222*i1) + (12321*i0)]; 2023-01-11T21:41:23.9858315Z auto tmp15 = in_ptr0[224 + (2*i2) + (222*i1) + (12321*i0)]; 2023-01-11T21:41:23.9858495Z auto tmp2 = (tmp0 != tmp0) ? tmp0 : std::max(tmp1, tmp0); 2023-01-11T21:41:23.9858678Z auto tmp4 = (tmp2 != tmp2) ? tmp2 : std::max(tmp3, tmp2); 2023-01-11T21:41:23.9858860Z auto tmp6 = (tmp4 != tmp4) ? tmp4 : std::max(tmp5, tmp4); 2023-01-11T21:41:23.9859042Z auto tmp8 = (tmp6 != tmp6) ? tmp6 : std::max(tmp7, tmp6); 2023-01-11T21:41:23.9859221Z auto tmp10 = (tmp8 != tmp8) ? tmp8 : std::max(tmp9, tmp8); 2023-01-11T21:41:23.9859490Z auto tmp12 = (tmp10 != tmp10) ? tmp10 : std::max(tmp11, tmp10); 2023-01-11T21:41:23.9859686Z auto tmp14 = (tmp12 != tmp12) ? 
tmp12 : std::max(tmp13, tmp12); 2023-01-11T21:41:23.9859874Z auto tmp16 = (tmp14 != tmp14) ? tmp14 : std::max(tmp15, tmp14); 2023-01-11T21:41:23.9860045Z auto tmp17 = static_cast((2*i2) + (222*i1)); 2023-01-11T21:41:23.9860216Z auto tmp18 = static_cast(1 + (2*i2) + (222*i1)); 2023-01-11T21:41:23.9860367Z auto tmp19 = tmp1 > tmp0; 2023-01-11T21:41:23.9860529Z auto tmp20 = tmp19 ? tmp18 : tmp17; 2023-01-11T21:41:23.9860708Z auto tmp21 = static_cast(2 + (2*i2) + (222*i1)); 2023-01-11T21:41:23.9860850Z auto tmp22 = tmp3 > tmp2; 2023-01-11T21:41:23.9861007Z auto tmp23 = tmp22 ? tmp21 : tmp20; 2023-01-11T21:41:23.9861199Z auto tmp24 = static_cast(111 + (2*i2) + (222*i1)); 2023-01-11T21:41:23.9861339Z auto tmp25 = tmp5 > tmp4; 2023-01-11T21:41:23.9861484Z auto tmp26 = tmp25 ? tmp24 : tmp23; 2023-01-11T21:41:23.9861671Z auto tmp27 = static_cast(112 + (2*i2) + (222*i1)); 2023-01-11T21:41:23.9861819Z auto tmp28 = tmp7 > tmp6; 2023-01-11T21:41:23.9861976Z auto tmp29 = tmp28 ? tmp27 : tmp26; 2023-01-11T21:41:23.9862158Z auto tmp30 = static_cast(113 + (2*i2) + (222*i1)); 2023-01-11T21:41:23.9862304Z auto tmp31 = tmp9 > tmp8; 2023-01-11T21:41:23.9862624Z auto tmp32 = tmp31 ? tmp30 : tmp29; 2023-01-11T21:41:23.9862812Z auto tmp33 = static_cast(222 + (2*i2) + (222*i1)); 2023-01-11T21:41:23.9862957Z auto tmp34 = tmp11 > tmp10; 2023-01-11T21:41:23.9863115Z auto tmp35 = tmp34 ? tmp33 : tmp32; 2023-01-11T21:41:23.9863298Z auto tmp36 = static_cast(223 + (2*i2) + (222*i1)); 2023-01-11T21:41:23.9863447Z auto tmp37 = tmp13 > tmp12; 2023-01-11T21:41:23.9863605Z auto tmp38 = tmp37 ? tmp36 : tmp35; 2023-01-11T21:41:23.9863786Z auto tmp39 = static_cast(224 + (2*i2) + (222*i1)); 2023-01-11T21:41:23.9863935Z auto tmp40 = tmp15 > tmp14; 2023-01-11T21:41:23.9864090Z auto tmp41 = tmp40 ? 
tmp39 : tmp38; 2023-01-11T21:41:23.9864237Z out_ptr0[i2 + (55*i1) + (3025*i0)] = tmp16; 2023-01-11T21:41:23.9864397Z out_ptr1[i2 + (55*i1) + (3025*i0)] = tmp41; 2023-01-11T21:41:23.9864500Z } 2023-01-11T21:41:23.9864719Z } 2023-01-11T21:41:23.9864815Z } 2023-01-11T21:41:23.9864910Z } 2023-01-11T21:41:23.9864990Z } 2023-01-11T21:41:23.9865083Z } 2023-01-11T21:41:23.9865172Z } 2023-01-11T21:41:23.9865320Z ''') 2023-01-11T21:41:23.9865330Z 2023-01-11T21:41:23.9865336Z 2023-01-11T21:41:23.9865477Z async_compile.wait(globals()) 2023-01-11T21:41:23.9865589Z del async_compile 2023-01-11T21:41:23.9865596Z 2023-01-11T21:41:23.9865701Z def call(args): 2023-01-11T21:41:23.9865803Z arg0_1, = args 2023-01-11T21:41:23.9865893Z args.clear() 2023-01-11T21:41:23.9866256Z buf0 = empty_strided((2, 8, 55, 55), (24200, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9866607Z buf1 = empty_strided((2, 8, 55, 55), (24200, 3025, 55, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9866865Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9866970Z del arg0_1 2023-01-11T21:41:23.9867181Z return (buf0, buf1, ) 2023-01-11T21:41:23.9867190Z 2023-01-11T21:41:23.9867197Z 2023-01-11T21:41:23.9867316Z if __name__ == "__main__": 2023-01-11T21:41:23.9867494Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9867676Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9868053Z arg0_1 = rand_strided((2, 8, 111, 111), (98568, 12321, 111, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9868222Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9868230Z 2023-01-11T21:41:23.9868327Z ok (1.738s) 2023-01-11T21:41:23.9869098Z test_max_pool2d5_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
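The out_ptr0/out_ptr1 pairs written by these kernels hold the pooled value and its flat input offset (the static_cast expressions such as `1 + (2*i2) + (222*i1)` above). A minimal sketch of one value/index update step, with illustrative names; note the index is chosen against the old running maximum before the maximum itself is updated, exactly as in the tmp19/tmp20 style statements above.

#include <algorithm>

// One window element of max_pool2d_with_indices: update the running
// (max value, argmax offset) pair. The index comparison uses the previous
// maximum; the NaN-sticky max update then follows.
static void argmax_step(float candidate, long flat_offset,
                        float& running_max, long& running_index) {
    running_index = (candidate > running_max) ? flat_offset : running_index;
    running_max   = (running_max != running_max)
                        ? running_max
                        : std::max(candidate, running_max);
}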
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9869305Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9869747Z [2023-01-11 21:36:29,133] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 367 2023-01-11T21:41:23.9870199Z [2023-01-11 21:36:30,823] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 367 2023-01-11T21:41:23.9870208Z 2023-01-11T21:41:23.9870356Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9870450Z import torch 2023-01-11T21:41:23.9870553Z import random 2023-01-11T21:41:23.9870738Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9870927Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9870935Z 2023-01-11T21:41:23.9871059Z aten = torch.ops.aten 2023-01-11T21:41:23.9871269Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9871410Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9871418Z 2023-01-11T21:41:23.9871429Z 2023-01-11T21:41:23.9871653Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9871966Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9872156Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.9872311Z float* __restrict__ out_ptr0, 2023-01-11T21:41:23.9872457Z long* __restrict__ out_ptr1) 2023-01-11T21:41:23.9872548Z { 2023-01-11T21:41:23.9872701Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9872794Z { 2023-01-11T21:41:23.9872894Z #pragma omp for 2023-01-11T21:41:23.9873020Z for(long i0=0; i0<1024; i0+=1) 2023-01-11T21:41:23.9873113Z { 2023-01-11T21:41:23.9873232Z #pragma GCC ivdep 2023-01-11T21:41:23.9873360Z for(long i1=0; i1<18; i1+=1) 2023-01-11T21:41:23.9873459Z { 2023-01-11T21:41:23.9873581Z #pragma GCC ivdep 2023-01-11T21:41:23.9873704Z for(long i2=0; i2<18; i2+=1) 2023-01-11T21:41:23.9873958Z { 2023-01-11T21:41:23.9874060Z { 2023-01-11T21:41:23.9874162Z { 2023-01-11T21:41:23.9874337Z auto tmp0 = in_ptr0[(3*i2) + (165*i1) + (3025*i0)]; 2023-01-11T21:41:23.9874514Z auto tmp1 = in_ptr0[1 + (3*i2) + (165*i1) + (3025*i0)]; 2023-01-11T21:41:23.9874687Z auto tmp3 = in_ptr0[2 + (3*i2) + (165*i1) + (3025*i0)]; 2023-01-11T21:41:23.9874851Z auto tmp5 = in_ptr0[55 + (3*i2) + (165*i1) + (3025*i0)]; 2023-01-11T21:41:23.9875021Z auto tmp7 = in_ptr0[56 + (3*i2) + (165*i1) + (3025*i0)]; 2023-01-11T21:41:23.9875189Z auto tmp9 = in_ptr0[57 + (3*i2) + (165*i1) + (3025*i0)]; 2023-01-11T21:41:23.9875366Z auto tmp11 = in_ptr0[110 + (3*i2) + (165*i1) + (3025*i0)]; 2023-01-11T21:41:23.9875598Z auto tmp13 = in_ptr0[111 + (3*i2) + (165*i1) + (3025*i0)]; 2023-01-11T21:41:23.9875784Z auto tmp15 = in_ptr0[112 + (3*i2) + (165*i1) + (3025*i0)]; 2023-01-11T21:41:23.9875993Z auto tmp2 = (tmp0 != tmp0) ? tmp0 : std::max(tmp1, tmp0); 2023-01-11T21:41:23.9876186Z auto tmp4 = (tmp2 != tmp2) ? tmp2 : std::max(tmp3, tmp2); 2023-01-11T21:41:23.9876373Z auto tmp6 = (tmp4 != tmp4) ? tmp4 : std::max(tmp5, tmp4); 2023-01-11T21:41:23.9876546Z auto tmp8 = (tmp6 != tmp6) ? tmp6 : std::max(tmp7, tmp6); 2023-01-11T21:41:23.9876740Z auto tmp10 = (tmp8 != tmp8) ? tmp8 : std::max(tmp9, tmp8); 2023-01-11T21:41:23.9876935Z auto tmp12 = (tmp10 != tmp10) ? tmp10 : std::max(tmp11, tmp10); 2023-01-11T21:41:23.9877130Z auto tmp14 = (tmp12 != tmp12) ? 
tmp12 : std::max(tmp13, tmp12); 2023-01-11T21:41:23.9877329Z auto tmp16 = (tmp14 != tmp14) ? tmp14 : std::max(tmp15, tmp14); 2023-01-11T21:41:23.9877506Z auto tmp17 = static_cast((3*i2) + (165*i1)); 2023-01-11T21:41:23.9877680Z auto tmp18 = static_cast(1 + (3*i2) + (165*i1)); 2023-01-11T21:41:23.9877824Z auto tmp19 = tmp1 > tmp0; 2023-01-11T21:41:23.9877981Z auto tmp20 = tmp19 ? tmp18 : tmp17; 2023-01-11T21:41:23.9878149Z auto tmp21 = static_cast(2 + (3*i2) + (165*i1)); 2023-01-11T21:41:23.9878291Z auto tmp22 = tmp3 > tmp2; 2023-01-11T21:41:23.9878441Z auto tmp23 = tmp22 ? tmp21 : tmp20; 2023-01-11T21:41:23.9878611Z auto tmp24 = static_cast(55 + (3*i2) + (165*i1)); 2023-01-11T21:41:23.9878748Z auto tmp25 = tmp5 > tmp4; 2023-01-11T21:41:23.9878903Z auto tmp26 = tmp25 ? tmp24 : tmp23; 2023-01-11T21:41:23.9879076Z auto tmp27 = static_cast(56 + (3*i2) + (165*i1)); 2023-01-11T21:41:23.9879206Z auto tmp28 = tmp7 > tmp6; 2023-01-11T21:41:23.9879352Z auto tmp29 = tmp28 ? tmp27 : tmp26; 2023-01-11T21:41:23.9879521Z auto tmp30 = static_cast(57 + (3*i2) + (165*i1)); 2023-01-11T21:41:23.9879662Z auto tmp31 = tmp9 > tmp8; 2023-01-11T21:41:23.9879807Z auto tmp32 = tmp31 ? tmp30 : tmp29; 2023-01-11T21:41:23.9879981Z auto tmp33 = static_cast(110 + (3*i2) + (165*i1)); 2023-01-11T21:41:23.9880122Z auto tmp34 = tmp11 > tmp10; 2023-01-11T21:41:23.9880267Z auto tmp35 = tmp34 ? tmp33 : tmp32; 2023-01-11T21:41:23.9880426Z auto tmp36 = static_cast(111 + (3*i2) + (165*i1)); 2023-01-11T21:41:23.9880694Z auto tmp37 = tmp13 > tmp12; 2023-01-11T21:41:23.9880844Z auto tmp38 = tmp37 ? tmp36 : tmp35; 2023-01-11T21:41:23.9881016Z auto tmp39 = static_cast(112 + (3*i2) + (165*i1)); 2023-01-11T21:41:23.9881151Z auto tmp40 = tmp15 > tmp14; 2023-01-11T21:41:23.9881294Z auto tmp41 = tmp40 ? 
tmp39 : tmp38; 2023-01-11T21:41:23.9881440Z out_ptr0[i2 + (18*i1) + (324*i0)] = tmp16; 2023-01-11T21:41:23.9881589Z out_ptr1[i2 + (18*i1) + (324*i0)] = tmp41; 2023-01-11T21:41:23.9881672Z } 2023-01-11T21:41:23.9881767Z } 2023-01-11T21:41:23.9881870Z } 2023-01-11T21:41:23.9881967Z } 2023-01-11T21:41:23.9882050Z } 2023-01-11T21:41:23.9882141Z } 2023-01-11T21:41:23.9882225Z } 2023-01-11T21:41:23.9882346Z ''') 2023-01-11T21:41:23.9882358Z 2023-01-11T21:41:23.9882425Z 2023-01-11T21:41:23.9882557Z async_compile.wait(globals()) 2023-01-11T21:41:23.9882659Z del async_compile 2023-01-11T21:41:23.9882666Z 2023-01-11T21:41:23.9882763Z def call(args): 2023-01-11T21:41:23.9882867Z arg0_1, = args 2023-01-11T21:41:23.9882965Z args.clear() 2023-01-11T21:41:23.9883312Z buf0 = empty_strided((16, 64, 18, 18), (20736, 324, 18, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9883632Z buf1 = empty_strided((16, 64, 18, 18), (20736, 324, 18, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9883880Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:23.9883975Z del arg0_1 2023-01-11T21:41:23.9884081Z return (buf0, buf1, ) 2023-01-11T21:41:23.9884089Z 2023-01-11T21:41:23.9884094Z 2023-01-11T21:41:23.9884197Z if __name__ == "__main__": 2023-01-11T21:41:23.9884361Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9884545Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9884881Z arg0_1 = rand_strided((16, 64, 55, 55), (193600, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9885023Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9885043Z 2023-01-11T21:41:23.9885123Z ok (1.817s) 2023-01-11T21:41:23.9885831Z test_max_pool2d6_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9886013Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9886428Z [2023-01-11 21:36:30,960] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 368 2023-01-11T21:41:23.9886816Z [2023-01-11 21:36:30,966] torch._inductor.ir: [WARNING] Using FallbackKernel: aten.max_pool2d_with_indices 2023-01-11T21:41:23.9887214Z [2023-01-11 21:36:30,968] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 368 2023-01-11T21:41:23.9887222Z 2023-01-11T21:41:23.9887360Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9887460Z import torch 2023-01-11T21:41:23.9887562Z import random 2023-01-11T21:41:23.9887720Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9887896Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9887903Z 2023-01-11T21:41:23.9888012Z aten = torch.ops.aten 2023-01-11T21:41:23.9888208Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9888336Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9888342Z 2023-01-11T21:41:23.9888349Z 2023-01-11T21:41:23.9888475Z async_compile.wait(globals()) 2023-01-11T21:41:23.9888570Z del async_compile 2023-01-11T21:41:23.9888653Z 2023-01-11T21:41:23.9888757Z def call(args): 2023-01-11T21:41:23.9888846Z arg0_1, = args 2023-01-11T21:41:23.9888943Z args.clear() 2023-01-11T21:41:23.9889124Z buf0 = aten.max_pool2d_with_indices(arg0_1, [13, 13], [13, 13], [0, 0], 1, False) 2023-01-11T21:41:23.9889220Z del arg0_1 2023-01-11T21:41:23.9889316Z buf1 = buf0[0] 2023-01-11T21:41:23.9889474Z assert_size_stride(buf1, (16, 64, 4, 4), (1024, 16, 4, 1)) 2023-01-11T21:41:23.9889574Z buf2 = buf0[1] 2023-01-11T21:41:23.9889718Z assert_size_stride(buf2, (16, 64, 4, 4), (1024, 16, 4, 1)) 2023-01-11T21:41:23.9889805Z del buf0 2023-01-11T21:41:23.9889918Z return (buf1, buf2, ) 2023-01-11T21:41:23.9889924Z 2023-01-11T21:41:23.9889931Z 2023-01-11T21:41:23.9890040Z if __name__ == "__main__": 2023-01-11T21:41:23.9890202Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9890385Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9890817Z arg0_1 = rand_strided((16, 64, 55, 55), (193600, 3025, 55, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9890972Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:23.9890993Z 2023-01-11T21:41:23.9891075Z ok (0.129s) 2023-01-11T21:41:23.9891842Z test_max_pool2d_with_indices_backward2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9892027Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9892439Z [2023-01-11 21:36:31,034] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 369 2023-01-11T21:41:23.9892853Z [2023-01-11 21:36:32,688] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 369 2023-01-11T21:41:23.9892866Z 2023-01-11T21:41:23.9892994Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9893097Z import torch 2023-01-11T21:41:23.9893197Z import random 2023-01-11T21:41:23.9893360Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9893520Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9893528Z 2023-01-11T21:41:23.9893634Z aten = torch.ops.aten 2023-01-11T21:41:23.9893833Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9893968Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9893975Z 2023-01-11T21:41:23.9893980Z 2023-01-11T21:41:23.9894199Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9894531Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9894717Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.9894879Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.9895031Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.9895121Z { 2023-01-11T21:41:23.9895276Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9895367Z { 2023-01-11T21:41:23.9895490Z #pragma omp for 2023-01-11T21:41:23.9895617Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.9895709Z { 2023-01-11T21:41:23.9895818Z #pragma GCC ivdep 2023-01-11T21:41:23.9895948Z for(long i1=0; i1<40; i1+=1) 2023-01-11T21:41:23.9896043Z { 2023-01-11T21:41:23.9896166Z #pragma GCC ivdep 2023-01-11T21:41:23.9896300Z for(long i2=0; i2<56; i2+=1) 2023-01-11T21:41:23.9896397Z { 2023-01-11T21:41:23.9896482Z { 2023-01-11T21:41:23.9896584Z { 2023-01-11T21:41:23.9896757Z auto tmp0 = static_cast(i2 + (56*i1)); 2023-01-11T21:41:23.9896932Z auto tmp1 = static_cast((i1 / 2)); 2023-01-11T21:41:23.9897187Z auto tmp2 = static_cast((i2 / 2)); 2023-01-11T21:41:23.9897365Z auto tmp3 = static_cast(1 + (((1 + i1) / 2))); 2023-01-11T21:41:23.9897543Z auto tmp4 = static_cast(1 + (((1 + i2) / 2))); 2023-01-11T21:41:23.9897703Z auto tmp5 = static_cast(0); 2023-01-11T21:41:23.9897895Z auto tmp6 = (tmp5 != tmp5) ? tmp5 : std::max(tmp1, tmp5); 2023-01-11T21:41:23.9898088Z auto tmp7 = (tmp5 != tmp5) ? tmp5 : std::max(tmp2, tmp5); 2023-01-11T21:41:23.9898253Z auto tmp8 = static_cast(21); 2023-01-11T21:41:23.9898450Z auto tmp9 = (tmp8 != tmp8) ? tmp8 : std::min(tmp3, tmp8); 2023-01-11T21:41:23.9898612Z auto tmp10 = static_cast(29); 2023-01-11T21:41:23.9898869Z auto tmp11 = (tmp10 != tmp10) ? tmp10 : std::min(tmp4, tmp10); 2023-01-11T21:41:23.9899027Z auto tmp12 = tmp6 + tmp5; 2023-01-11T21:41:23.9899171Z auto tmp13 = tmp7 + tmp5; 2023-01-11T21:41:23.9899319Z auto tmp14 = static_cast(1); 2023-01-11T21:41:23.9899550Z auto tmp15 = tmp9 - tmp14; 2023-01-11T21:41:23.9899741Z auto tmp16 = (tmp15 != tmp15) ? tmp15 : std::min(tmp12, tmp15); 2023-01-11T21:41:23.9899968Z auto tmp17 = tmp11 - tmp14; 2023-01-11T21:41:23.9900157Z auto tmp18 = (tmp17 != tmp17) ? 
tmp17 : std::min(tmp13, tmp17); 2023-01-11T21:41:23.9900325Z auto tmp19 = in_ptr0[tmp18 + (29*tmp16) + (609*i0)]; 2023-01-11T21:41:23.9900503Z auto tmp20 = in_ptr1[tmp18 + (29*tmp16) + (609*i0)]; 2023-01-11T21:41:23.9900644Z auto tmp21 = tmp19 == tmp0; 2023-01-11T21:41:23.9900811Z auto tmp22 = static_cast(0.0); 2023-01-11T21:41:23.9900944Z auto tmp23 = tmp21 ? tmp20 : tmp22; 2023-01-11T21:41:23.9901082Z auto tmp24 = tmp7 + tmp14; 2023-01-11T21:41:23.9901272Z auto tmp25 = (tmp17 != tmp17) ? tmp17 : std::min(tmp24, tmp17); 2023-01-11T21:41:23.9901442Z auto tmp26 = in_ptr0[tmp25 + (29*tmp16) + (609*i0)]; 2023-01-11T21:41:23.9901607Z auto tmp27 = in_ptr1[tmp25 + (29*tmp16) + (609*i0)]; 2023-01-11T21:41:23.9901742Z auto tmp28 = tmp26 == tmp0; 2023-01-11T21:41:23.9901882Z auto tmp29 = tmp12 < tmp9; 2023-01-11T21:41:23.9902019Z auto tmp30 = tmp24 < tmp11; 2023-01-11T21:41:23.9902171Z auto tmp31 = tmp29 & tmp30; 2023-01-11T21:41:23.9902460Z auto tmp32 = tmp31 & tmp28; 2023-01-11T21:41:23.9902626Z auto tmp33 = tmp23 + tmp27; 2023-01-11T21:41:23.9902792Z auto tmp34 = tmp32 ? tmp33 : tmp23; 2023-01-11T21:41:23.9902942Z auto tmp35 = tmp6 + tmp14; 2023-01-11T21:41:23.9903155Z auto tmp36 = (tmp15 != tmp15) ? tmp15 : std::min(tmp35, tmp15); 2023-01-11T21:41:23.9903340Z auto tmp37 = in_ptr0[tmp18 + (29*tmp36) + (609*i0)]; 2023-01-11T21:41:23.9903511Z auto tmp38 = in_ptr1[tmp18 + (29*tmp36) + (609*i0)]; 2023-01-11T21:41:23.9903661Z auto tmp39 = tmp37 == tmp0; 2023-01-11T21:41:23.9903812Z auto tmp40 = tmp35 < tmp9; 2023-01-11T21:41:23.9903960Z auto tmp41 = tmp13 < tmp11; 2023-01-11T21:41:23.9904107Z auto tmp42 = tmp40 & tmp41; 2023-01-11T21:41:23.9904253Z auto tmp43 = tmp42 & tmp39; 2023-01-11T21:41:23.9904519Z auto tmp44 = tmp34 + tmp38; 2023-01-11T21:41:23.9904677Z auto tmp45 = tmp43 ? tmp44 : tmp34; 2023-01-11T21:41:23.9904845Z auto tmp46 = in_ptr0[tmp25 + (29*tmp36) + (609*i0)]; 2023-01-11T21:41:23.9905022Z auto tmp47 = in_ptr1[tmp25 + (29*tmp36) + (609*i0)]; 2023-01-11T21:41:23.9905171Z auto tmp48 = tmp46 == tmp0; 2023-01-11T21:41:23.9905312Z auto tmp49 = tmp40 & tmp30; 2023-01-11T21:41:23.9905458Z auto tmp50 = tmp49 & tmp48; 2023-01-11T21:41:23.9905602Z auto tmp51 = tmp45 + tmp47; 2023-01-11T21:41:23.9905761Z auto tmp52 = tmp50 ? 
tmp51 : tmp45; 2023-01-11T21:41:23.9905905Z out_ptr0[i2 + (56*i1) + (2240*i0)] = tmp52; 2023-01-11T21:41:23.9906009Z } 2023-01-11T21:41:23.9906188Z } 2023-01-11T21:41:23.9906296Z } 2023-01-11T21:41:23.9906391Z } 2023-01-11T21:41:23.9906486Z } 2023-01-11T21:41:23.9906580Z } 2023-01-11T21:41:23.9906656Z } 2023-01-11T21:41:23.9906793Z ''') 2023-01-11T21:41:23.9906802Z 2023-01-11T21:41:23.9906810Z 2023-01-11T21:41:23.9906950Z async_compile.wait(globals()) 2023-01-11T21:41:23.9907063Z del async_compile 2023-01-11T21:41:23.9907070Z 2023-01-11T21:41:23.9907176Z def call(args): 2023-01-11T21:41:23.9907294Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9907399Z args.clear() 2023-01-11T21:41:23.9907787Z buf0 = empty_strided((2, 4, 40, 56), (8960, 2240, 56, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9908046Z kernel_cpp_0(c_void_p(arg2_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.9908154Z del arg0_1 2023-01-11T21:41:23.9908254Z del arg2_1 2023-01-11T21:41:23.9908364Z return (buf0, ) 2023-01-11T21:41:23.9908377Z 2023-01-11T21:41:23.9908388Z 2023-01-11T21:41:23.9908503Z if __name__ == "__main__": 2023-01-11T21:41:23.9908683Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9908879Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9909229Z arg0_1 = rand_strided((2, 4, 21, 29), (2436, 609, 29, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9909585Z arg1_1 = rand_strided((2, 4, 40, 56), (8960, 2240, 56, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9909937Z arg2_1 = rand_strided((2, 4, 21, 29), (2436, 609, 29, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9910126Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9910135Z 2023-01-11T21:41:23.9910234Z ok (1.680s) 2023-01-11T21:41:23.9911058Z test_max_pool2d_with_indices_backward3_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
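The backward kernel above inverts that bookkeeping: for each input position it forms its own flat offset (tmp0), looks up the argmax offsets recorded for the nearby output windows, and adds an output gradient only when the offset matches and the window coordinates are in range (the tmp29 & tmp30 style masks). A minimal sketch of one such guarded accumulation, with illustrative names:

// One candidate output window of max_pool2d_with_indices_backward:
// add its gradient to the accumulator only if this input element was the
// recorded argmax of that window and the window coordinates are valid.
static float accumulate_from_window(float acc,
                                    long recorded_argmax, long my_flat_offset,
                                    bool window_in_range, float grad_out_value) {
    bool is_argmax = (recorded_argmax == my_flat_offset);
    return (window_in_range && is_argmax) ? acc + grad_out_value : acc;
}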
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9911256Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9911695Z [2023-01-11 21:36:32,953] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 370 2023-01-11T21:41:23.9912133Z [2023-01-11 21:36:34,567] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 370 2023-01-11T21:41:23.9912140Z 2023-01-11T21:41:23.9912278Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9912364Z import torch 2023-01-11T21:41:23.9912465Z import random 2023-01-11T21:41:23.9912639Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9912815Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9912822Z 2023-01-11T21:41:23.9912933Z aten = torch.ops.aten 2023-01-11T21:41:23.9913209Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9913343Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9913352Z 2023-01-11T21:41:23.9913358Z 2023-01-11T21:41:23.9913560Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9913956Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9914147Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.9914314Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.9914472Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.9914566Z { 2023-01-11T21:41:23.9914725Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9914818Z { 2023-01-11T21:41:23.9914917Z #pragma omp for 2023-01-11T21:41:23.9915045Z for(long i0=0; i0<8192; i0+=1) 2023-01-11T21:41:23.9915141Z { 2023-01-11T21:41:23.9915265Z #pragma GCC ivdep 2023-01-11T21:41:23.9915490Z for(long i1=0; i1<37; i1+=1) 2023-01-11T21:41:23.9915591Z { 2023-01-11T21:41:23.9915717Z #pragma GCC ivdep 2023-01-11T21:41:23.9915841Z for(long i2=0; i2<38; i2+=1) 2023-01-11T21:41:23.9915939Z { 2023-01-11T21:41:23.9916041Z { 2023-01-11T21:41:23.9916140Z { 2023-01-11T21:41:23.9916316Z auto tmp0 = static_cast(i2 + (38*i1)); 2023-01-11T21:41:23.9916494Z auto tmp1 = static_cast(((1 + i1) / 2)); 2023-01-11T21:41:23.9916670Z auto tmp2 = static_cast(((1 + i2) / 2)); 2023-01-11T21:41:23.9916831Z auto tmp3 = static_cast(1 + (i1 / 2)); 2023-01-11T21:41:23.9917002Z auto tmp4 = static_cast(1 + (i2 / 2)); 2023-01-11T21:41:23.9917166Z auto tmp5 = static_cast(0); 2023-01-11T21:41:23.9917389Z auto tmp6 = (tmp5 != tmp5) ? tmp5 : std::max(tmp1, tmp5); 2023-01-11T21:41:23.9917589Z auto tmp7 = (tmp5 != tmp5) ? tmp5 : std::max(tmp2, tmp5); 2023-01-11T21:41:23.9917756Z auto tmp8 = static_cast(19); 2023-01-11T21:41:23.9917957Z auto tmp9 = (tmp8 != tmp8) ? tmp8 : std::min(tmp3, tmp8); 2023-01-11T21:41:23.9918158Z auto tmp10 = (tmp8 != tmp8) ? tmp8 : std::min(tmp4, tmp8); 2023-01-11T21:41:23.9918295Z auto tmp11 = tmp6 + tmp5; 2023-01-11T21:41:23.9918415Z auto tmp12 = tmp7 + tmp5; 2023-01-11T21:41:23.9918568Z auto tmp13 = static_cast(1); 2023-01-11T21:41:23.9918812Z auto tmp14 = tmp9 - tmp13; 2023-01-11T21:41:23.9919007Z auto tmp15 = (tmp14 != tmp14) ? tmp14 : std::min(tmp11, tmp14); 2023-01-11T21:41:23.9919241Z auto tmp16 = tmp10 - tmp13; 2023-01-11T21:41:23.9919435Z auto tmp17 = (tmp16 != tmp16) ? 
tmp16 : std::min(tmp12, tmp16); 2023-01-11T21:41:23.9919609Z auto tmp18 = in_ptr0[tmp17 + (19*tmp15) + (361*i0)]; 2023-01-11T21:41:23.9919781Z auto tmp19 = in_ptr1[tmp17 + (19*tmp15) + (361*i0)]; 2023-01-11T21:41:23.9919912Z auto tmp20 = tmp18 == tmp0; 2023-01-11T21:41:23.9920077Z auto tmp21 = static_cast(0.0); 2023-01-11T21:41:23.9920234Z auto tmp22 = tmp20 ? tmp19 : tmp21; 2023-01-11T21:41:23.9920382Z out_ptr0[i2 + (38*i1) + (1406*i0)] = tmp22; 2023-01-11T21:41:23.9920480Z } 2023-01-11T21:41:23.9920577Z } 2023-01-11T21:41:23.9920672Z } 2023-01-11T21:41:23.9920757Z } 2023-01-11T21:41:23.9920857Z } 2023-01-11T21:41:23.9921031Z } 2023-01-11T21:41:23.9921136Z } 2023-01-11T21:41:23.9921279Z ''') 2023-01-11T21:41:23.9921288Z 2023-01-11T21:41:23.9921296Z 2023-01-11T21:41:23.9921443Z async_compile.wait(globals()) 2023-01-11T21:41:23.9921555Z del async_compile 2023-01-11T21:41:23.9921562Z 2023-01-11T21:41:23.9921657Z def call(args): 2023-01-11T21:41:23.9921784Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9921894Z args.clear() 2023-01-11T21:41:23.9922278Z buf0 = empty_strided((32, 256, 37, 38), (359936, 1406, 38, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9922541Z kernel_cpp_0(c_void_p(arg2_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.9922648Z del arg0_1 2023-01-11T21:41:23.9922748Z del arg2_1 2023-01-11T21:41:23.9922846Z return (buf0, ) 2023-01-11T21:41:23.9922854Z 2023-01-11T21:41:23.9922875Z 2023-01-11T21:41:23.9922979Z if __name__ == "__main__": 2023-01-11T21:41:23.9923163Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9923440Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9923833Z arg0_1 = rand_strided((32, 256, 19, 19), (92416, 361, 19, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9924207Z arg1_1 = rand_strided((32, 256, 37, 38), (359936, 1406, 38, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9924575Z arg2_1 = rand_strided((32, 256, 19, 19), (92416, 361, 19, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9924756Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9924762Z 2023-01-11T21:41:23.9924848Z ok (2.439s) 2023-01-11T21:41:23.9925541Z test_max_pool2d_with_indices_backward4_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9925722Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9926106Z [2023-01-11 21:36:35,151] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 371 2023-01-11T21:41:23.9926113Z 2023-01-11T21:41:23.9926250Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9926352Z import torch 2023-01-11T21:41:23.9926460Z import random 2023-01-11T21:41:23.9926648Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9926845Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9926852Z 2023-01-11T21:41:23.9926972Z aten = torch.ops.aten 2023-01-11T21:41:23.9927169Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9927316Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9927325Z 2023-01-11T21:41:23.9927331Z 2023-01-11T21:41:23.9927558Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9927899Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9928085Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.9928255Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.9928411Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.9928502Z { 2023-01-11T21:41:23.9928642Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9928738Z { 2023-01-11T21:41:23.9928858Z #pragma omp for 2023-01-11T21:41:23.9928988Z for(long i0=0; i0<128; i0+=1) 2023-01-11T21:41:23.9929084Z { 2023-01-11T21:41:23.9929207Z #pragma GCC ivdep 2023-01-11T21:41:23.9929321Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:23.9929413Z { 2023-01-11T21:41:23.9929539Z #pragma GCC ivdep 2023-01-11T21:41:23.9929676Z for(long i2=0; i2<4; i2+=1) 2023-01-11T21:41:23.9929773Z { 2023-01-11T21:41:23.9929962Z { 2023-01-11T21:41:23.9930066Z { 2023-01-11T21:41:23.9930230Z auto tmp0 = static_cast(i2 + (4*i1)); 2023-01-11T21:41:23.9930522Z auto tmp1 = static_cast((-2) + i1); 2023-01-11T21:41:23.9930783Z auto tmp2 = static_cast((-2) + i2); 2023-01-11T21:41:23.9930935Z auto tmp3 = static_cast(3 + i1); 2023-01-11T21:41:23.9931094Z auto tmp4 = static_cast(3 + i2); 2023-01-11T21:41:23.9931248Z auto tmp5 = static_cast(0); 2023-01-11T21:41:23.9931444Z auto tmp6 = (tmp5 != tmp5) ? tmp5 : std::max(tmp1, tmp5); 2023-01-11T21:41:23.9931622Z auto tmp7 = (tmp5 != tmp5) ? tmp5 : std::max(tmp2, tmp5); 2023-01-11T21:41:23.9931758Z auto tmp8 = static_cast(3); 2023-01-11T21:41:23.9932017Z auto tmp9 = (tmp8 != tmp8) ? tmp8 : std::min(tmp3, tmp8); 2023-01-11T21:41:23.9932173Z auto tmp10 = static_cast(4); 2023-01-11T21:41:23.9932365Z auto tmp11 = (tmp10 != tmp10) ? tmp10 : std::min(tmp4, tmp10); 2023-01-11T21:41:23.9932503Z auto tmp12 = tmp6 + tmp5; 2023-01-11T21:41:23.9932642Z auto tmp13 = tmp7 + tmp5; 2023-01-11T21:41:23.9932789Z auto tmp14 = static_cast(1); 2023-01-11T21:41:23.9933038Z auto tmp15 = tmp9 - tmp14; 2023-01-11T21:41:23.9933236Z auto tmp16 = (tmp15 != tmp15) ? tmp15 : std::min(tmp12, tmp15); 2023-01-11T21:41:23.9933479Z auto tmp17 = tmp11 - tmp14; 2023-01-11T21:41:23.9933686Z auto tmp18 = (tmp17 != tmp17) ? 
tmp17 : std::min(tmp13, tmp17); 2023-01-11T21:41:23.9933872Z auto tmp19 = in_ptr0[tmp18 + (4*tmp16) + (12*i0)]; 2023-01-11T21:41:23.9934061Z auto tmp20 = in_ptr1[tmp18 + (4*tmp16) + (12*i0)]; 2023-01-11T21:41:23.9934209Z auto tmp21 = tmp19 == tmp0; 2023-01-11T21:41:23.9934387Z auto tmp22 = static_cast(0.0); 2023-01-11T21:41:23.9934549Z auto tmp23 = tmp21 ? tmp20 : tmp22; 2023-01-11T21:41:23.9934689Z auto tmp24 = tmp7 + tmp14; 2023-01-11T21:41:23.9934896Z auto tmp25 = (tmp17 != tmp17) ? tmp17 : std::min(tmp24, tmp17); 2023-01-11T21:41:23.9935078Z auto tmp26 = in_ptr0[tmp25 + (4*tmp16) + (12*i0)]; 2023-01-11T21:41:23.9935265Z auto tmp27 = in_ptr1[tmp25 + (4*tmp16) + (12*i0)]; 2023-01-11T21:41:23.9935411Z auto tmp28 = tmp26 == tmp0; 2023-01-11T21:41:23.9935555Z auto tmp29 = tmp12 < tmp9; 2023-01-11T21:41:23.9935713Z auto tmp30 = tmp24 < tmp11; 2023-01-11T21:41:23.9935858Z auto tmp31 = tmp29 & tmp30; 2023-01-11T21:41:23.9935988Z auto tmp32 = tmp31 & tmp28; 2023-01-11T21:41:23.9936133Z auto tmp33 = tmp23 + tmp27; 2023-01-11T21:41:23.9936295Z auto tmp34 = tmp32 ? tmp33 : tmp23; 2023-01-11T21:41:23.9936462Z auto tmp35 = static_cast(2); 2023-01-11T21:41:23.9936608Z auto tmp36 = tmp7 + tmp35; 2023-01-11T21:41:23.9936812Z auto tmp37 = (tmp17 != tmp17) ? tmp17 : std::min(tmp36, tmp17); 2023-01-11T21:41:23.9936984Z auto tmp38 = in_ptr0[tmp37 + (4*tmp16) + (12*i0)]; 2023-01-11T21:41:23.9937150Z auto tmp39 = in_ptr1[tmp37 + (4*tmp16) + (12*i0)]; 2023-01-11T21:41:23.9937273Z auto tmp40 = tmp38 == tmp0; 2023-01-11T21:41:23.9937520Z auto tmp41 = tmp36 < tmp11; 2023-01-11T21:41:23.9937656Z auto tmp42 = tmp29 & tmp41; 2023-01-11T21:41:23.9937795Z auto tmp43 = tmp42 & tmp40; 2023-01-11T21:41:23.9937932Z auto tmp44 = tmp34 + tmp39; 2023-01-11T21:41:23.9938075Z auto tmp45 = tmp43 ? tmp44 : tmp34; 2023-01-11T21:41:23.9938204Z auto tmp46 = tmp7 + tmp8; 2023-01-11T21:41:23.9938386Z auto tmp47 = (tmp17 != tmp17) ? tmp17 : std::min(tmp46, tmp17); 2023-01-11T21:41:23.9938539Z auto tmp48 = in_ptr0[tmp47 + (4*tmp16) + (12*i0)]; 2023-01-11T21:41:23.9938696Z auto tmp49 = in_ptr1[tmp47 + (4*tmp16) + (12*i0)]; 2023-01-11T21:41:23.9938831Z auto tmp50 = tmp48 == tmp0; 2023-01-11T21:41:23.9938970Z auto tmp51 = tmp46 < tmp11; 2023-01-11T21:41:23.9939175Z auto tmp52 = tmp29 & tmp51; 2023-01-11T21:41:23.9939316Z auto tmp53 = tmp52 & tmp50; 2023-01-11T21:41:23.9939452Z auto tmp54 = tmp45 + tmp49; 2023-01-11T21:41:23.9939587Z auto tmp55 = tmp53 ? tmp54 : tmp45; 2023-01-11T21:41:23.9939731Z auto tmp56 = tmp7 + tmp10; 2023-01-11T21:41:23.9939924Z auto tmp57 = (tmp17 != tmp17) ? tmp17 : std::min(tmp56, tmp17); 2023-01-11T21:41:23.9940101Z auto tmp58 = in_ptr0[tmp57 + (4*tmp16) + (12*i0)]; 2023-01-11T21:41:23.9940262Z auto tmp59 = in_ptr1[tmp57 + (4*tmp16) + (12*i0)]; 2023-01-11T21:41:23.9940391Z auto tmp60 = tmp58 == tmp0; 2023-01-11T21:41:23.9940527Z auto tmp61 = tmp56 < tmp11; 2023-01-11T21:41:23.9940664Z auto tmp62 = tmp29 & tmp61; 2023-01-11T21:41:23.9940798Z auto tmp63 = tmp62 & tmp60; 2023-01-11T21:41:23.9940942Z auto tmp64 = tmp55 + tmp59; 2023-01-11T21:41:23.9941102Z auto tmp65 = tmp63 ? tmp64 : tmp55; 2023-01-11T21:41:23.9941240Z auto tmp66 = tmp6 + tmp14; 2023-01-11T21:41:23.9941430Z auto tmp67 = (tmp15 != tmp15) ? 
tmp15 : std::min(tmp66, tmp15); 2023-01-11T21:41:23.9941585Z auto tmp68 = in_ptr0[tmp18 + (4*tmp67) + (12*i0)]; 2023-01-11T21:41:23.9941740Z auto tmp69 = in_ptr1[tmp18 + (4*tmp67) + (12*i0)]; 2023-01-11T21:41:23.9941871Z auto tmp70 = tmp68 == tmp0; 2023-01-11T21:41:23.9941987Z auto tmp71 = tmp66 < tmp9; 2023-01-11T21:41:23.9942116Z auto tmp72 = tmp13 < tmp11; 2023-01-11T21:41:23.9942244Z auto tmp73 = tmp71 & tmp72; 2023-01-11T21:41:23.9942547Z auto tmp74 = tmp73 & tmp70; 2023-01-11T21:41:23.9942684Z auto tmp75 = tmp65 + tmp69; 2023-01-11T21:41:23.9942825Z auto tmp76 = tmp74 ? tmp75 : tmp65; 2023-01-11T21:41:23.9942985Z auto tmp77 = in_ptr0[tmp25 + (4*tmp67) + (12*i0)]; 2023-01-11T21:41:23.9943145Z auto tmp78 = in_ptr1[tmp25 + (4*tmp67) + (12*i0)]; 2023-01-11T21:41:23.9943264Z auto tmp79 = tmp77 == tmp0; 2023-01-11T21:41:23.9943394Z auto tmp80 = tmp71 & tmp30; 2023-01-11T21:41:23.9943526Z auto tmp81 = tmp80 & tmp79; 2023-01-11T21:41:23.9943652Z auto tmp82 = tmp76 + tmp78; 2023-01-11T21:41:23.9943789Z auto tmp83 = tmp81 ? tmp82 : tmp76; 2023-01-11T21:41:23.9943950Z auto tmp84 = in_ptr0[tmp37 + (4*tmp67) + (12*i0)]; 2023-01-11T21:41:23.9944234Z auto tmp85 = in_ptr1[tmp37 + (4*tmp67) + (12*i0)]; 2023-01-11T21:41:23.9944352Z auto tmp86 = tmp84 == tmp0; 2023-01-11T21:41:23.9944484Z auto tmp87 = tmp71 & tmp41; 2023-01-11T21:41:23.9944611Z auto tmp88 = tmp87 & tmp86; 2023-01-11T21:41:23.9944738Z auto tmp89 = tmp83 + tmp85; 2023-01-11T21:41:23.9944880Z auto tmp90 = tmp88 ? tmp89 : tmp83; 2023-01-11T21:41:23.9945038Z auto tmp91 = in_ptr0[tmp47 + (4*tmp67) + (12*i0)]; 2023-01-11T21:41:23.9945197Z auto tmp92 = in_ptr1[tmp47 + (4*tmp67) + (12*i0)]; 2023-01-11T21:41:23.9945326Z auto tmp93 = tmp91 == tmp0; 2023-01-11T21:41:23.9945439Z auto tmp94 = tmp71 & tmp51; 2023-01-11T21:41:23.9945566Z auto tmp95 = tmp94 & tmp93; 2023-01-11T21:41:23.9945772Z auto tmp96 = tmp90 + tmp92; 2023-01-11T21:41:23.9945923Z auto tmp97 = tmp95 ? tmp96 : tmp90; 2023-01-11T21:41:23.9946085Z auto tmp98 = in_ptr0[tmp57 + (4*tmp67) + (12*i0)]; 2023-01-11T21:41:23.9946251Z auto tmp99 = in_ptr1[tmp57 + (4*tmp67) + (12*i0)]; 2023-01-11T21:41:23.9946388Z auto tmp100 = tmp98 == tmp0; 2023-01-11T21:41:23.9946511Z auto tmp101 = tmp71 & tmp61; 2023-01-11T21:41:23.9946651Z auto tmp102 = tmp101 & tmp100; 2023-01-11T21:41:23.9946781Z auto tmp103 = tmp97 + tmp99; 2023-01-11T21:41:23.9946926Z auto tmp104 = tmp102 ? tmp103 : tmp97; 2023-01-11T21:41:23.9947055Z auto tmp105 = tmp6 + tmp35; 2023-01-11T21:41:23.9947250Z auto tmp106 = (tmp15 != tmp15) ? tmp15 : std::min(tmp105, tmp15); 2023-01-11T21:41:23.9947422Z auto tmp107 = in_ptr0[tmp18 + (4*tmp106) + (12*i0)]; 2023-01-11T21:41:23.9947587Z auto tmp108 = in_ptr1[tmp18 + (4*tmp106) + (12*i0)]; 2023-01-11T21:41:23.9947711Z auto tmp109 = tmp107 == tmp0; 2023-01-11T21:41:23.9947849Z auto tmp110 = tmp105 < tmp9; 2023-01-11T21:41:23.9947986Z auto tmp111 = tmp110 & tmp72; 2023-01-11T21:41:23.9948127Z auto tmp112 = tmp111 & tmp109; 2023-01-11T21:41:23.9948263Z auto tmp113 = tmp104 + tmp108; 2023-01-11T21:41:23.9948413Z auto tmp114 = tmp112 ? tmp113 : tmp104; 2023-01-11T21:41:23.9948577Z auto tmp115 = in_ptr0[tmp25 + (4*tmp106) + (12*i0)]; 2023-01-11T21:41:23.9948741Z auto tmp116 = in_ptr1[tmp25 + (4*tmp106) + (12*i0)]; 2023-01-11T21:41:23.9948864Z auto tmp117 = tmp115 == tmp0; 2023-01-11T21:41:23.9948999Z auto tmp118 = tmp110 & tmp30; 2023-01-11T21:41:23.9949139Z auto tmp119 = tmp118 & tmp117; 2023-01-11T21:41:23.9949272Z auto tmp120 = tmp114 + tmp116; 2023-01-11T21:41:23.9949420Z auto tmp121 = tmp119 ? 
tmp120 : tmp114; 2023-01-11T21:41:23.9949583Z auto tmp122 = in_ptr0[tmp37 + (4*tmp106) + (12*i0)]; 2023-01-11T21:41:23.9949742Z auto tmp123 = in_ptr1[tmp37 + (4*tmp106) + (12*i0)]; 2023-01-11T21:41:23.9949872Z auto tmp124 = tmp122 == tmp0; 2023-01-11T21:41:23.9949993Z auto tmp125 = tmp110 & tmp41; 2023-01-11T21:41:23.9950128Z auto tmp126 = tmp125 & tmp124; 2023-01-11T21:41:23.9950267Z auto tmp127 = tmp121 + tmp123; 2023-01-11T21:41:23.9950418Z auto tmp128 = tmp126 ? tmp127 : tmp121; 2023-01-11T21:41:23.9950660Z auto tmp129 = in_ptr0[tmp47 + (4*tmp106) + (12*i0)]; 2023-01-11T21:41:23.9950817Z auto tmp130 = in_ptr1[tmp47 + (4*tmp106) + (12*i0)]; 2023-01-11T21:41:23.9950953Z auto tmp131 = tmp129 == tmp0; 2023-01-11T21:41:23.9951071Z auto tmp132 = tmp110 & tmp51; 2023-01-11T21:41:23.9951208Z auto tmp133 = tmp132 & tmp131; 2023-01-11T21:41:23.9951340Z auto tmp134 = tmp128 + tmp130; 2023-01-11T21:41:23.9951485Z auto tmp135 = tmp133 ? tmp134 : tmp128; 2023-01-11T21:41:23.9951647Z auto tmp136 = in_ptr0[tmp57 + (4*tmp106) + (12*i0)]; 2023-01-11T21:41:23.9951804Z auto tmp137 = in_ptr1[tmp57 + (4*tmp106) + (12*i0)]; 2023-01-11T21:41:23.9951938Z auto tmp138 = tmp136 == tmp0; 2023-01-11T21:41:23.9952137Z auto tmp139 = tmp110 & tmp61; 2023-01-11T21:41:23.9952266Z auto tmp140 = tmp139 & tmp138; 2023-01-11T21:41:23.9952405Z auto tmp141 = tmp135 + tmp137; 2023-01-11T21:41:23.9952553Z auto tmp142 = tmp140 ? tmp141 : tmp135; 2023-01-11T21:41:23.9952692Z auto tmp143 = tmp6 + tmp8; 2023-01-11T21:41:23.9952882Z auto tmp144 = (tmp15 != tmp15) ? tmp15 : std::min(tmp143, tmp15); 2023-01-11T21:41:23.9953041Z auto tmp145 = in_ptr0[tmp18 + (4*tmp144) + (12*i0)]; 2023-01-11T21:41:23.9953203Z auto tmp146 = in_ptr1[tmp18 + (4*tmp144) + (12*i0)]; 2023-01-11T21:41:23.9953340Z auto tmp147 = tmp145 == tmp0; 2023-01-11T21:41:23.9953455Z auto tmp148 = tmp143 < tmp9; 2023-01-11T21:41:23.9953591Z auto tmp149 = tmp148 & tmp72; 2023-01-11T21:41:23.9953812Z auto tmp150 = tmp149 & tmp147; 2023-01-11T21:41:23.9953953Z auto tmp151 = tmp142 + tmp146; 2023-01-11T21:41:23.9954103Z auto tmp152 = tmp150 ? tmp151 : tmp142; 2023-01-11T21:41:23.9954263Z auto tmp153 = in_ptr0[tmp25 + (4*tmp144) + (12*i0)]; 2023-01-11T21:41:23.9954426Z auto tmp154 = in_ptr1[tmp25 + (4*tmp144) + (12*i0)]; 2023-01-11T21:41:23.9954566Z auto tmp155 = tmp153 == tmp0; 2023-01-11T21:41:23.9954683Z auto tmp156 = tmp148 & tmp30; 2023-01-11T21:41:23.9954823Z auto tmp157 = tmp156 & tmp155; 2023-01-11T21:41:23.9954963Z auto tmp158 = tmp152 + tmp154; 2023-01-11T21:41:23.9955113Z auto tmp159 = tmp157 ? tmp158 : tmp152; 2023-01-11T21:41:23.9955275Z auto tmp160 = in_ptr0[tmp37 + (4*tmp144) + (12*i0)]; 2023-01-11T21:41:23.9955442Z auto tmp161 = in_ptr1[tmp37 + (4*tmp144) + (12*i0)]; 2023-01-11T21:41:23.9955578Z auto tmp162 = tmp160 == tmp0; 2023-01-11T21:41:23.9955698Z auto tmp163 = tmp148 & tmp41; 2023-01-11T21:41:23.9955838Z auto tmp164 = tmp163 & tmp162; 2023-01-11T21:41:23.9955971Z auto tmp165 = tmp159 + tmp161; 2023-01-11T21:41:23.9956128Z auto tmp166 = tmp164 ? tmp165 : tmp159; 2023-01-11T21:41:23.9956297Z auto tmp167 = in_ptr0[tmp47 + (4*tmp144) + (12*i0)]; 2023-01-11T21:41:23.9956462Z auto tmp168 = in_ptr1[tmp47 + (4*tmp144) + (12*i0)]; 2023-01-11T21:41:23.9956603Z auto tmp169 = tmp167 == tmp0; 2023-01-11T21:41:23.9956744Z auto tmp170 = tmp148 & tmp51; 2023-01-11T21:41:23.9956951Z auto tmp171 = tmp170 & tmp169; 2023-01-11T21:41:23.9957092Z auto tmp172 = tmp166 + tmp168; 2023-01-11T21:41:23.9957255Z auto tmp173 = tmp171 ? 
tmp172 : tmp166; 2023-01-11T21:41:23.9957426Z auto tmp174 = in_ptr0[tmp57 + (4*tmp144) + (12*i0)]; 2023-01-11T21:41:23.9957596Z auto tmp175 = in_ptr1[tmp57 + (4*tmp144) + (12*i0)]; 2023-01-11T21:41:23.9957745Z auto tmp176 = tmp174 == tmp0; 2023-01-11T21:41:23.9957892Z auto tmp177 = tmp148 & tmp61; 2023-01-11T21:41:23.9958045Z auto tmp178 = tmp177 & tmp176; 2023-01-11T21:41:23.9958182Z auto tmp179 = tmp173 + tmp175; 2023-01-11T21:41:23.9958348Z auto tmp180 = tmp178 ? tmp179 : tmp173; 2023-01-11T21:41:23.9958508Z auto tmp181 = tmp6 + tmp10; 2023-01-11T21:41:23.9958775Z auto tmp182 = (tmp15 != tmp15) ? tmp15 : std::min(tmp181, tmp15); 2023-01-11T21:41:23.9958951Z auto tmp183 = in_ptr0[tmp18 + (4*tmp182) + (12*i0)]; 2023-01-11T21:41:23.9959139Z auto tmp184 = in_ptr1[tmp18 + (4*tmp182) + (12*i0)]; 2023-01-11T21:41:23.9959293Z auto tmp185 = tmp183 == tmp0; 2023-01-11T21:41:23.9959443Z auto tmp186 = tmp181 < tmp9; 2023-01-11T21:41:23.9959578Z auto tmp187 = tmp186 & tmp72; 2023-01-11T21:41:23.9959731Z auto tmp188 = tmp187 & tmp185; 2023-01-11T21:41:23.9959887Z auto tmp189 = tmp180 + tmp184; 2023-01-11T21:41:23.9960048Z auto tmp190 = tmp188 ? tmp189 : tmp180; 2023-01-11T21:41:23.9960227Z auto tmp191 = in_ptr0[tmp25 + (4*tmp182) + (12*i0)]; 2023-01-11T21:41:23.9960416Z auto tmp192 = in_ptr1[tmp25 + (4*tmp182) + (12*i0)]; 2023-01-11T21:41:23.9960563Z auto tmp193 = tmp191 == tmp0; 2023-01-11T21:41:23.9960691Z auto tmp194 = tmp186 & tmp30; 2023-01-11T21:41:23.9960842Z auto tmp195 = tmp194 & tmp193; 2023-01-11T21:41:23.9960980Z auto tmp196 = tmp190 + tmp192; 2023-01-11T21:41:23.9961126Z auto tmp197 = tmp195 ? tmp196 : tmp190; 2023-01-11T21:41:23.9961304Z auto tmp198 = in_ptr0[tmp37 + (4*tmp182) + (12*i0)]; 2023-01-11T21:41:23.9961485Z auto tmp199 = in_ptr1[tmp37 + (4*tmp182) + (12*i0)]; 2023-01-11T21:41:23.9961636Z auto tmp200 = tmp198 == tmp0; 2023-01-11T21:41:23.9961773Z auto tmp201 = tmp186 & tmp41; 2023-01-11T21:41:23.9961900Z auto tmp202 = tmp201 & tmp200; 2023-01-11T21:41:23.9962037Z auto tmp203 = tmp197 + tmp199; 2023-01-11T21:41:23.9962190Z auto tmp204 = tmp202 ? tmp203 : tmp197; 2023-01-11T21:41:23.9962364Z auto tmp205 = in_ptr0[tmp47 + (4*tmp182) + (12*i0)]; 2023-01-11T21:41:23.9962537Z auto tmp206 = in_ptr1[tmp47 + (4*tmp182) + (12*i0)]; 2023-01-11T21:41:23.9962678Z auto tmp207 = tmp205 == tmp0; 2023-01-11T21:41:23.9962845Z auto tmp208 = tmp186 & tmp51; 2023-01-11T21:41:23.9963002Z auto tmp209 = tmp208 & tmp207; 2023-01-11T21:41:23.9963155Z auto tmp210 = tmp204 + tmp206; 2023-01-11T21:41:23.9963339Z auto tmp211 = tmp209 ? tmp210 : tmp204; 2023-01-11T21:41:23.9963544Z auto tmp212 = in_ptr0[tmp57 + (4*tmp182) + (12*i0)]; 2023-01-11T21:41:23.9963721Z auto tmp213 = in_ptr1[tmp57 + (4*tmp182) + (12*i0)]; 2023-01-11T21:41:23.9963943Z auto tmp214 = tmp212 == tmp0; 2023-01-11T21:41:23.9964115Z auto tmp215 = tmp186 & tmp61; 2023-01-11T21:41:23.9964265Z auto tmp216 = tmp215 & tmp214; 2023-01-11T21:41:23.9964403Z auto tmp217 = tmp211 + tmp213; 2023-01-11T21:41:23.9964565Z auto tmp218 = tmp216 ? 
tmp217 : tmp211; 2023-01-11T21:41:23.9964714Z out_ptr0[i2 + (4*i1) + (12*i0)] = tmp218; 2023-01-11T21:41:23.9964814Z } 2023-01-11T21:41:23.9964914Z } 2023-01-11T21:41:23.9965016Z } 2023-01-11T21:41:23.9965115Z } 2023-01-11T21:41:23.9965192Z } 2023-01-11T21:41:23.9965290Z } 2023-01-11T21:41:23.9965388Z } 2023-01-11T21:41:23.9965558Z ''') 2023-01-11T21:41:23.9965567Z 2023-01-11T21:41:23.9965574Z 2023-01-11T21:41:23.9965725Z async_compile.wait(globals()) 2023-01-11T21:41:23.9965838Z del async_compile 2023-01-11T21:41:23.9965936Z 2023-01-11T21:41:23.9966046Z def call(args): 2023-01-11T21:41:23.9966173Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9966273Z args.clear() 2023-01-11T21:41:23.9966640Z buf0 = empty_strided((2, 64, 3, 4), (768, 12, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9966909Z kernel_cpp_0(c_void_p(arg2_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.9967021Z del arg0_1 2023-01-11T21:41:23.9967130Z del arg2_1 2023-01-11T21:41:23.9967245Z return (buf0, ) 2023-01-11T21:41:23.9967252Z 2023-01-11T21:41:23.9967258Z 2023-01-11T21:41:23.9967380Z if __name__ == "__main__": 2023-01-11T21:41:23.9967557Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9967773Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9968125Z arg0_1 = rand_strided((2, 64, 3, 4), (768, 12, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9968490Z arg1_1 = rand_strided((2, 64, 3, 4), (768, 12, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9968829Z arg2_1 = rand_strided((2, 64, 3, 4), (768, 12, 4, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9969030Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9969492Z [2023-01-11 21:36:36,967] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 371 2023-01-11T21:41:23.9969502Z 2023-01-11T21:41:23.9969604Z ok (1.850s) 2023-01-11T21:41:23.9970446Z test_max_pool2d_with_indices_backward5_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9970662Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9971072Z [2023-01-11 21:36:37,003] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 372 2023-01-11T21:41:23.9971472Z [2023-01-11 21:36:37,018] torch._inductor.ir: [WARNING] Using FallbackKernel: aten.max_pool2d_with_indices_backward 2023-01-11T21:41:23.9971875Z [2023-01-11 21:36:37,021] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 372 2023-01-11T21:41:23.9971884Z 2023-01-11T21:41:23.9972033Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9972143Z import torch 2023-01-11T21:41:23.9972254Z import random 2023-01-11T21:41:23.9972432Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9972637Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9972644Z 2023-01-11T21:41:23.9972765Z aten = torch.ops.aten 2023-01-11T21:41:23.9973007Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9973248Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9973256Z 2023-01-11T21:41:23.9973262Z 2023-01-11T21:41:23.9973411Z async_compile.wait(globals()) 2023-01-11T21:41:23.9973526Z del async_compile 2023-01-11T21:41:23.9973534Z 2023-01-11T21:41:23.9973646Z def call(args): 2023-01-11T21:41:23.9973779Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9973901Z args.clear() 2023-01-11T21:41:23.9974132Z buf0 = aten.max_pool2d_with_indices_backward(arg0_1, arg1_1, [13, 13], [1, 1], [2, 2], [1, 1], False, arg2_1) 2023-01-11T21:41:23.9974238Z del arg0_1 2023-01-11T21:41:23.9974333Z del arg1_1 2023-01-11T21:41:23.9974425Z del arg2_1 2023-01-11T21:41:23.9974521Z buf1 = buf0 2023-01-11T21:41:23.9974680Z assert_size_stride(buf1, (2, 64, 20, 20), (25600, 400, 20, 1)) 2023-01-11T21:41:23.9974774Z del buf0 2023-01-11T21:41:23.9974857Z return (buf1, ) 2023-01-11T21:41:23.9974864Z 2023-01-11T21:41:23.9974870Z 2023-01-11T21:41:23.9974980Z if __name__ == "__main__": 2023-01-11T21:41:23.9975209Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9975392Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9975754Z arg0_1 = rand_strided((2, 64, 12, 12), (9216, 144, 12, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9976103Z arg1_1 = rand_strided((2, 64, 20, 20), (25600, 400, 20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9976440Z arg2_1 = rand_strided((2, 64, 12, 12), (9216, 144, 12, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9976631Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9976641Z 2023-01-11T21:41:23.9976727Z ok (0.045s) 2023-01-11T21:41:23.9977565Z test_max_pool2d_with_indices_backward_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9977774Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9978227Z [2023-01-11 21:36:37,047] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 373 2023-01-11T21:41:23.9978680Z [2023-01-11 21:36:38,621] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 373 2023-01-11T21:41:23.9978689Z 2023-01-11T21:41:23.9978844Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9978956Z import torch 2023-01-11T21:41:23.9979070Z import random 2023-01-11T21:41:23.9979260Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9979443Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9979468Z 2023-01-11T21:41:23.9979575Z aten = torch.ops.aten 2023-01-11T21:41:23.9979798Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9979958Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9979966Z 2023-01-11T21:41:23.9979974Z 2023-01-11T21:41:23.9980198Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9980541Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9980728Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:23.9980885Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:23.9981019Z float* __restrict__ out_ptr0) 2023-01-11T21:41:23.9981110Z { 2023-01-11T21:41:23.9981250Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9981344Z { 2023-01-11T21:41:23.9981464Z #pragma omp for 2023-01-11T21:41:23.9981601Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.9981695Z { 2023-01-11T21:41:23.9981808Z #pragma GCC ivdep 2023-01-11T21:41:23.9981950Z for(long i1=0; i1<18; i1+=1) 2023-01-11T21:41:23.9982058Z { 2023-01-11T21:41:23.9982305Z #pragma GCC ivdep 2023-01-11T21:41:23.9982757Z for(long i2=0; i2<14; i2+=1) 2023-01-11T21:41:23.9982865Z { 2023-01-11T21:41:23.9982934Z { 2023-01-11T21:41:23.9982990Z { 2023-01-11T21:41:23.9983111Z auto tmp0 = static_cast(i2 + (14*i1)); 2023-01-11T21:41:23.9983222Z auto tmp1 = static_cast((i1 / 2)); 2023-01-11T21:41:23.9983329Z auto tmp2 = static_cast((i2 / 2)); 2023-01-11T21:41:23.9983442Z auto tmp3 = static_cast(1 + (i1 / 2)); 2023-01-11T21:41:23.9983554Z auto tmp4 = static_cast(1 + (i2 / 2)); 2023-01-11T21:41:23.9983659Z auto tmp5 = static_cast(0); 2023-01-11T21:41:23.9983798Z auto tmp6 = (tmp5 != tmp5) ? tmp5 : std::max(tmp1, tmp5); 2023-01-11T21:41:23.9984024Z auto tmp7 = (tmp5 != tmp5) ? tmp5 : std::max(tmp2, tmp5); 2023-01-11T21:41:23.9984134Z auto tmp8 = static_cast(9); 2023-01-11T21:41:23.9984261Z auto tmp9 = (tmp8 != tmp8) ? tmp8 : std::min(tmp3, tmp8); 2023-01-11T21:41:23.9984367Z auto tmp10 = static_cast(7); 2023-01-11T21:41:23.9984498Z auto tmp11 = (tmp10 != tmp10) ? tmp10 : std::min(tmp4, tmp10); 2023-01-11T21:41:23.9984597Z auto tmp12 = tmp6 + tmp5; 2023-01-11T21:41:23.9984691Z auto tmp13 = tmp7 + tmp5; 2023-01-11T21:41:23.9984796Z auto tmp14 = static_cast(1); 2023-01-11T21:41:23.9984960Z auto tmp15 = tmp9 - tmp14; 2023-01-11T21:41:23.9985089Z auto tmp16 = (tmp15 != tmp15) ? tmp15 : std::min(tmp12, tmp15); 2023-01-11T21:41:23.9985238Z auto tmp17 = tmp11 - tmp14; 2023-01-11T21:41:23.9985373Z auto tmp18 = (tmp17 != tmp17) ? 
tmp17 : std::min(tmp13, tmp17); 2023-01-11T21:41:23.9985494Z auto tmp19 = in_ptr0[tmp18 + (7*tmp16) + (63*i0)]; 2023-01-11T21:41:23.9985612Z auto tmp20 = in_ptr1[tmp18 + (7*tmp16) + (63*i0)]; 2023-01-11T21:41:23.9985710Z auto tmp21 = tmp19 == tmp0; 2023-01-11T21:41:23.9985821Z auto tmp22 = static_cast(0.0); 2023-01-11T21:41:23.9985912Z auto tmp23 = tmp21 ? tmp20 : tmp22; 2023-01-11T21:41:23.9986019Z out_ptr0[i2 + (14*i1) + (252*i0)] = tmp23; 2023-01-11T21:41:23.9986087Z } 2023-01-11T21:41:23.9986155Z } 2023-01-11T21:41:23.9986220Z } 2023-01-11T21:41:23.9986286Z } 2023-01-11T21:41:23.9986348Z } 2023-01-11T21:41:23.9986395Z } 2023-01-11T21:41:23.9986458Z } 2023-01-11T21:41:23.9986540Z ''') 2023-01-11T21:41:23.9986547Z 2023-01-11T21:41:23.9986551Z 2023-01-11T21:41:23.9986643Z async_compile.wait(globals()) 2023-01-11T21:41:23.9986715Z del async_compile 2023-01-11T21:41:23.9986720Z 2023-01-11T21:41:23.9986790Z def call(args): 2023-01-11T21:41:23.9986874Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:23.9986932Z args.clear() 2023-01-11T21:41:23.9987154Z buf0 = empty_strided((2, 4, 18, 14), (1008, 252, 14, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9987320Z kernel_cpp_0(c_void_p(arg2_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:23.9987389Z del arg0_1 2023-01-11T21:41:23.9987454Z del arg2_1 2023-01-11T21:41:23.9987526Z return (buf0, ) 2023-01-11T21:41:23.9987531Z 2023-01-11T21:41:23.9987535Z 2023-01-11T21:41:23.9987613Z if __name__ == "__main__": 2023-01-11T21:41:23.9987730Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:23.9987844Z from torch._inductor.utils import print_performance 2023-01-11T21:41:23.9988103Z arg0_1 = rand_strided((2, 4, 9, 7), (252, 63, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9988320Z arg1_1 = rand_strided((2, 4, 18, 14), (1008, 252, 14, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:23.9988532Z arg2_1 = rand_strided((2, 4, 9, 7), (252, 63, 7, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:23.9988652Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:23.9988656Z 2023-01-11T21:41:23.9988724Z ok (1.598s) 2023-01-11T21:41:23.9989188Z test_mean_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
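Before those index lookups, each backward kernel clamps the candidate window coordinates into the valid output range with the same guarded min/max pattern (e.g. `std::max(tmp1, tmp5)` against 0 and `std::min(tmp12, tmp15)` against the clamped end minus one above). A minimal sketch of that bound computation, with illustrative names:

#include <algorithm>

// Clamp the k-th candidate output coordinate of a window into [lo, hi):
// first clamp the window start to lo, then cap the coordinate at hi - 1,
// mirroring the max(...) / min(... - 1) pairs in the backward kernels above.
static long clamp_window_coord(long window_start, long k, long lo, long hi) {
    long start = std::max(window_start, lo);   // cf. std::max(tmp1, tmp5)
    long coord = start + k;                    // k-th candidate in the window
    return std::min(coord, hi - 1);            // cf. std::min(tmp12, tmp15)
}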
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:23.9989341Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:23.9989608Z [2023-01-11 21:36:38,644] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 374 2023-01-11T21:41:23.9989863Z [2023-01-11 21:36:40,337] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 374 2023-01-11T21:41:23.9989882Z 2023-01-11T21:41:23.9989962Z from ctypes import c_void_p, c_long 2023-01-11T21:41:23.9990031Z import torch 2023-01-11T21:41:23.9990101Z import random 2023-01-11T21:41:23.9990217Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:23.9990335Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:23.9990340Z 2023-01-11T21:41:23.9990417Z aten = torch.ops.aten 2023-01-11T21:41:23.9990552Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:23.9990630Z async_compile = AsyncCompile() 2023-01-11T21:41:23.9990635Z 2023-01-11T21:41:23.9990650Z 2023-01-11T21:41:23.9990772Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:23.9990982Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:23.9991097Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:23.9991198Z float* __restrict__ in_out_ptr1, 2023-01-11T21:41:23.9991301Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:23.9991398Z float* __restrict__ out_ptr2, 2023-01-11T21:41:23.9991492Z float* __restrict__ out_ptr3) 2023-01-11T21:41:23.9991541Z { 2023-01-11T21:41:23.9991625Z auto out_ptr1 = in_out_ptr0; 2023-01-11T21:41:23.9991706Z auto out_ptr0 = in_out_ptr1; 2023-01-11T21:41:23.9991766Z { 2023-01-11T21:41:23.9991950Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:23.9992022Z float tmp1 = 0; 2023-01-11T21:41:23.9992135Z auto tmp1_vec = at::vec::Vectorized(tmp1); 2023-01-11T21:41:23.9992229Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9992291Z { 2023-01-11T21:41:23.9992396Z #pragma omp for reduction(+:tmp1_vec) 2023-01-11T21:41:23.9992478Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.9992541Z { 2023-01-11T21:41:23.9992674Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:23.9992753Z tmp1_vec += tmp0; 2023-01-11T21:41:23.9992803Z } 2023-01-11T21:41:23.9992997Z tmp1 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp1_vec); 2023-01-11T21:41:23.9993119Z #pragma omp for simd simdlen(4) reduction(+:tmp1) 2023-01-11T21:41:23.9993203Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:23.9993264Z { 2023-01-11T21:41:23.9993351Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:23.9993423Z tmp1 += tmp0; 2023-01-11T21:41:23.9993516Z } 2023-01-11T21:41:23.9993565Z } 2023-01-11T21:41:23.9993641Z out_ptr0[0] = tmp1; 2023-01-11T21:41:23.9993702Z } 2023-01-11T21:41:23.9993867Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:23.9993930Z { 2023-01-11T21:41:23.9994005Z #pragma omp for 2023-01-11T21:41:23.9994073Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:23.9994134Z { 2023-01-11T21:41:23.9994198Z { 2023-01-11T21:41:23.9994387Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:23.9994466Z float tmp1 = 0; 2023-01-11T21:41:23.9994587Z auto tmp1_vec = at::vec::Vectorized(tmp1); 2023-01-11T21:41:23.9994676Z for(long 
i1=0; i1<1; i1+=1) 2023-01-11T21:41:23.9994741Z { 2023-01-11T21:41:23.9994899Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i0) + (8*i1)); 2023-01-11T21:41:23.9994985Z tmp1_vec += tmp0; 2023-01-11T21:41:23.9995049Z } 2023-01-11T21:41:23.9995246Z tmp1 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp1_vec); 2023-01-11T21:41:23.9995362Z #pragma omp simd simdlen(4) reduction(+:tmp1) 2023-01-11T21:41:23.9995449Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:23.9995512Z { 2023-01-11T21:41:23.9995609Z auto tmp0 = in_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:23.9995673Z tmp1 += tmp0; 2023-01-11T21:41:23.9995736Z } 2023-01-11T21:41:23.9995815Z out_ptr1[i0] = tmp1; 2023-01-11T21:41:23.9995877Z } 2023-01-11T21:41:23.9995938Z } 2023-01-11T21:41:23.9996013Z #pragma omp for 2023-01-11T21:41:23.9996080Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:23.9996139Z { 2023-01-11T21:41:23.9996273Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr1 + 8*i0); 2023-01-11T21:41:23.9996406Z auto tmp1 = at::vec::Vectorized(static_cast(8)); 2023-01-11T21:41:23.9996491Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.9996583Z tmp2.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:23.9996645Z } 2023-01-11T21:41:23.9996739Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:23.9996806Z for(long i0=8; i0<8; i0+=1) 2023-01-11T21:41:23.9996865Z { 2023-01-11T21:41:23.9996950Z auto tmp0 = out_ptr1[i0]; 2023-01-11T21:41:23.9997047Z auto tmp1 = static_cast(8); 2023-01-11T21:41:23.9997128Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:23.9997206Z in_out_ptr0[i0] = tmp2; 2023-01-11T21:41:23.9997267Z } 2023-01-11T21:41:23.9997329Z #pragma omp for 2023-01-11T21:41:23.9997407Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:23.9997470Z { 2023-01-11T21:41:23.9997552Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:23.9997613Z { 2023-01-11T21:41:23.9997751Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (32*i0)); 2023-01-11T21:41:23.9997890Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr0 + 8 + (8*i1) + (32*i0)); 2023-01-11T21:41:23.9998017Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr0 + 16 + (8*i1) + (32*i0)); 2023-01-11T21:41:23.9998155Z auto tmp5 = at::vec::Vectorized::loadu(in_ptr0 + 24 + (8*i1) + (32*i0)); 2023-01-11T21:41:23.9998241Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.9998326Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:23.9998409Z auto tmp6 = tmp4 + tmp5; 2023-01-11T21:41:23.9998541Z auto tmp7 = at::vec::Vectorized(static_cast(4)); 2023-01-11T21:41:23.9998623Z auto tmp8 = tmp6 / tmp7; 2023-01-11T21:41:23.9998755Z tmp8.store(out_ptr2 + (8*i0) + (8*i1)); 2023-01-11T21:41:23.9998806Z } 2023-01-11T21:41:23.9998893Z #pragma omp simd simdlen(4) 2023-01-11T21:41:23.9998972Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:23.9999033Z { 2023-01-11T21:41:23.9999129Z auto tmp0 = in_ptr0[i1 + (32*i0)]; 2023-01-11T21:41:23.9999228Z auto tmp1 = in_ptr0[8 + i1 + (32*i0)]; 2023-01-11T21:41:23.9999325Z auto tmp3 = in_ptr0[16 + i1 + (32*i0)]; 2023-01-11T21:41:23.9999409Z auto tmp5 = in_ptr0[24 + i1 + (32*i0)]; 2023-01-11T21:41:23.9999494Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:23.9999578Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:23.9999661Z auto tmp6 = tmp4 + tmp5; 2023-01-11T21:41:23.9999760Z auto tmp7 = static_cast(4); 2023-01-11T21:41:23.9999845Z auto tmp8 = tmp6 / tmp7; 2023-01-11T21:41:23.9999967Z out_ptr2[i1 + (8*i0)] = tmp8; 2023-01-11T21:41:24.0000021Z } 2023-01-11T21:41:24.0000081Z } 2023-01-11T21:41:24.0000156Z #pragma omp for 2023-01-11T21:41:24.0000236Z for(long i0=0; i0<4; i0+=1) 
2023-01-11T21:41:24.0000297Z { 2023-01-11T21:41:24.0000428Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0000561Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr0 + 32 + (8*i0)); 2023-01-11T21:41:24.0000633Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0000761Z auto tmp3 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:24.0000844Z auto tmp4 = tmp2 / tmp3; 2023-01-11T21:41:24.0000932Z tmp4.store(out_ptr3 + 8*i0); 2023-01-11T21:41:24.0000993Z } 2023-01-11T21:41:24.0001086Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0001168Z for(long i0=32; i0<32; i0+=1) 2023-01-11T21:41:24.0001216Z { 2023-01-11T21:41:24.0001302Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0001388Z auto tmp1 = in_ptr0[32 + i0]; 2023-01-11T21:41:24.0001471Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0001569Z auto tmp3 = static_cast(2); 2023-01-11T21:41:24.0001649Z auto tmp4 = tmp2 / tmp3; 2023-01-11T21:41:24.0001726Z out_ptr3[i0] = tmp4; 2023-01-11T21:41:24.0001776Z } 2023-01-11T21:41:24.0001852Z #pragma omp single 2023-01-11T21:41:24.0001913Z { 2023-01-11T21:41:24.0001973Z { 2023-01-11T21:41:24.0002036Z { 2023-01-11T21:41:24.0002126Z auto tmp0 = out_ptr0[0]; 2023-01-11T21:41:24.0002227Z auto tmp1 = static_cast(64); 2023-01-11T21:41:24.0002304Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0002390Z in_out_ptr1[0] = tmp2; 2023-01-11T21:41:24.0002454Z } 2023-01-11T21:41:24.0002517Z } 2023-01-11T21:41:24.0002584Z } 2023-01-11T21:41:24.0002645Z } 2023-01-11T21:41:24.0002692Z } 2023-01-11T21:41:24.0002780Z ''') 2023-01-11T21:41:24.0002786Z 2023-01-11T21:41:24.0002790Z 2023-01-11T21:41:24.0002880Z async_compile.wait(globals()) 2023-01-11T21:41:24.0002952Z del async_compile 2023-01-11T21:41:24.0002956Z 2023-01-11T21:41:24.0003026Z def call(args): 2023-01-11T21:41:24.0003094Z arg0_1, = args 2023-01-11T21:41:24.0003163Z args.clear() 2023-01-11T21:41:24.0003351Z buf0 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0003540Z buf1 = empty_strided((1, 2, 4), (8, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0003623Z buf2 = buf1; del buf1 # reuse 2023-01-11T21:41:24.0003830Z buf3 = empty_strided((1, 2, 1, 8), (16, 8, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0004021Z buf4 = empty_strided((4, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0004104Z buf5 = buf0; del buf0 # reuse 2023-01-11T21:41:24.0004347Z kernel_cpp_0(c_void_p(buf2.data_ptr()), c_void_p(buf5.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:24.0004414Z del arg0_1 2023-01-11T21:41:24.0004498Z return (buf5, buf2, buf3, buf4, ) 2023-01-11T21:41:24.0004503Z 2023-01-11T21:41:24.0004509Z 2023-01-11T21:41:24.0004572Z if __name__ == "__main__": 2023-01-11T21:41:24.0004685Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0004807Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0005017Z arg0_1 = rand_strided((1, 2, 4, 8), (64, 32, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0005123Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0005128Z 2023-01-11T21:41:24.0005192Z ok (1.716s) 2023-01-11T21:41:24.0005700Z test_min_max_reduction_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
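The test_mean_cpu kernel above reduces a (1, 2, 4, 8) CPU input with vectorized OpenMP sum reductions and then divides by the reduced element counts. A minimal sketch of how comparable generated C++ can be dumped outside the test harness, assuming TORCH_COMPILE_DEBUG and torch.compile; the function below only approximates the kind of graph exercised, it is not the test body, and the debug output layout varies between releases.

import os
os.environ["TORCH_COMPILE_DEBUG"] = "1"  # must be set before compilation runs

import torch

def f(x):
    # Several mean() reductions so the generated kernel has multiple outputs,
    # loosely mirroring the buf2/buf3/buf4/buf5 buffers in the logged call().
    return x.mean(), x.mean(dim=-1), x.mean(dim=(1, 2)), x.mean(dim=1)

compiled = torch.compile(f)
compiled(torch.randn(1, 2, 4, 8))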
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0005828Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0006093Z [2023-01-11 21:36:40,364] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 375 2023-01-11T21:41:24.0006347Z [2023-01-11 21:36:41,985] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 375 2023-01-11T21:41:24.0006364Z 2023-01-11T21:41:24.0006443Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0006511Z import torch 2023-01-11T21:41:24.0006580Z import random 2023-01-11T21:41:24.0006692Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0006812Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0006817Z 2023-01-11T21:41:24.0006899Z aten = torch.ops.aten 2023-01-11T21:41:24.0007033Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0007111Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0007115Z 2023-01-11T21:41:24.0007132Z 2023-01-11T21:41:24.0007253Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0007457Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0007575Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0007678Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0007776Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0007870Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.0007963Z float* __restrict__ out_ptr2) 2023-01-11T21:41:24.0008012Z { 2023-01-11T21:41:24.0008072Z { 2023-01-11T21:41:24.0008441Z #pragma omp declare reduction(max:at::vec::Vectorized:omp_out = at::vec::maximum(omp_out, omp_in)) initializer(omp_priv={{-std::numeric_limits::infinity()}}) 2023-01-11T21:41:24.0008656Z float tmp3 = -std::numeric_limits::infinity(); 2023-01-11T21:41:24.0008772Z auto tmp3_vec = at::vec::Vectorized(tmp3); 2023-01-11T21:41:24.0009014Z #pragma omp declare reduction(min:at::vec::Vectorized:omp_out = at::vec::minimum(omp_out, omp_in)) initializer(omp_priv={{std::numeric_limits::infinity()}}) 2023-01-11T21:41:24.0009134Z float tmp4 = std::numeric_limits::infinity(); 2023-01-11T21:41:24.0009248Z auto tmp4_vec = at::vec::Vectorized(tmp4); 2023-01-11T21:41:24.0009440Z float tmp7 = -std::numeric_limits::infinity(); 2023-01-11T21:41:24.0009541Z auto tmp7_vec = at::vec::Vectorized(tmp7); 2023-01-11T21:41:24.0009642Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0009705Z { 2023-01-11T21:41:24.0009871Z #pragma omp for reduction(max:tmp3_vec) reduction(min:tmp4_vec) reduction(max:tmp7_vec) 2023-01-11T21:41:24.0009987Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0010051Z { 2023-01-11T21:41:24.0010184Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0010318Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.0010392Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0010519Z auto tmp5 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0010602Z auto tmp6 = tmp0 + tmp5; 2023-01-11T21:41:24.0010715Z tmp3_vec = at::vec::maximum(tmp3_vec, tmp2); 2023-01-11T21:41:24.0010828Z tmp4_vec = at::vec::minimum(tmp4_vec, tmp2); 2023-01-11T21:41:24.0010935Z tmp7_vec = at::vec::maximum(tmp7_vec, tmp6); 2023-01-11T21:41:24.0010996Z } 2023-01-11T21:41:24.0011234Z tmp3 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return at::vec::maximum(x, 
y);}, tmp3_vec); 2023-01-11T21:41:24.0011443Z tmp4 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return at::vec::minimum(x, y);}, tmp4_vec); 2023-01-11T21:41:24.0011647Z tmp7 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return at::vec::maximum(x, y);}, tmp7_vec); 2023-01-11T21:41:24.0011803Z #pragma omp for simd simdlen(4) reduction(max:tmp3) reduction(min:tmp4) reduction(max:tmp7) 2023-01-11T21:41:24.0011887Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:24.0011951Z { 2023-01-11T21:41:24.0012036Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0012120Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.0012206Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0012307Z auto tmp5 = static_cast(1); 2023-01-11T21:41:24.0012384Z auto tmp6 = tmp0 + tmp5; 2023-01-11T21:41:24.0012478Z tmp3 = std::max(tmp3, tmp2); 2023-01-11T21:41:24.0012596Z tmp4 = std::min(tmp4, tmp2); 2023-01-11T21:41:24.0012726Z tmp7 = std::max(tmp7, tmp6); 2023-01-11T21:41:24.0012814Z } 2023-01-11T21:41:24.0012888Z } 2023-01-11T21:41:24.0012965Z out_ptr0[0] = tmp3; 2023-01-11T21:41:24.0013027Z out_ptr1[0] = tmp4; 2023-01-11T21:41:24.0013099Z out_ptr2[0] = tmp7; 2023-01-11T21:41:24.0013160Z } 2023-01-11T21:41:24.0013218Z } 2023-01-11T21:41:24.0013300Z ''') 2023-01-11T21:41:24.0013305Z 2023-01-11T21:41:24.0013309Z 2023-01-11T21:41:24.0013399Z async_compile.wait(globals()) 2023-01-11T21:41:24.0013471Z del async_compile 2023-01-11T21:41:24.0013475Z 2023-01-11T21:41:24.0013532Z def call(args): 2023-01-11T21:41:24.0013608Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0013679Z args.clear() 2023-01-11T21:41:24.0013875Z buf0 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0014056Z buf1 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0014252Z buf2 = empty_strided((1, 1), (1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0014466Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:24.0014536Z del arg0_1 2023-01-11T21:41:24.0014590Z del arg1_1 2023-01-11T21:41:24.0014674Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:24.0014679Z 2023-01-11T21:41:24.0014683Z 2023-01-11T21:41:24.0014761Z if __name__ == "__main__": 2023-01-11T21:41:24.0014877Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0015002Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0015199Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0015440Z arg1_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0015556Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0015561Z 2023-01-11T21:41:24.0015615Z ok (1.656s) 2023-01-11T21:41:24.0016099Z test_misaligned_address_issue1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
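The test_min_max_reduction_cpu kernel above fuses three reductions over the two (8, 8) inputs in a single pass. A rough eager-mode reading reconstructed from the generated code (an approximation, not the test's source): tmp3 and tmp4 reduce max and min over a + b, and tmp7 reduces max over a + 1.

import torch

a = torch.randn(8, 8)     # arg0_1 in the logged call()
b = torch.randn(8, 8)     # arg1_1
out_ptr0 = (a + b).max()  # tmp3
out_ptr1 = (a + b).min()  # tmp4
out_ptr2 = (a + 1).max()  # tmp7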
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0016225Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0016490Z [2023-01-11 21:36:42,019] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 376 2023-01-11T21:41:24.0016925Z [2023-01-11 21:36:43,567] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 376 2023-01-11T21:41:24.0016940Z 2023-01-11T21:41:24.0017076Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0017152Z import torch 2023-01-11T21:41:24.0017224Z import random 2023-01-11T21:41:24.0017339Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0017447Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0017452Z 2023-01-11T21:41:24.0017531Z aten = torch.ops.aten 2023-01-11T21:41:24.0017665Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0017758Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0017763Z 2023-01-11T21:41:24.0017767Z 2023-01-11T21:41:24.0017906Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0018110Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0018231Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:24.0018336Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0018427Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0018488Z { 2023-01-11T21:41:24.0018551Z { 2023-01-11T21:41:24.0018615Z { 2023-01-11T21:41:24.0018700Z auto tmp0 = in_ptr0[0]; 2023-01-11T21:41:24.0018788Z auto tmp1 = in_ptr1[tmp0]; 2023-01-11T21:41:24.0018872Z out_ptr0[0] = tmp1; 2023-01-11T21:41:24.0018923Z } 2023-01-11T21:41:24.0018985Z } 2023-01-11T21:41:24.0019047Z } 2023-01-11T21:41:24.0019126Z ''') 2023-01-11T21:41:24.0019131Z 2023-01-11T21:41:24.0019135Z 2023-01-11T21:41:24.0019224Z async_compile.wait(globals()) 2023-01-11T21:41:24.0019298Z del async_compile 2023-01-11T21:41:24.0019302Z 2023-01-11T21:41:24.0019374Z def call(args): 2023-01-11T21:41:24.0019435Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0019507Z args.clear() 2023-01-11T21:41:24.0019703Z buf0 = empty_strided((1, 1), (1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0019868Z kernel_cpp_0(c_void_p(arg1_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0019937Z del arg0_1 2023-01-11T21:41:24.0020004Z del arg1_1 2023-01-11T21:41:24.0020106Z return (buf0, ) 2023-01-11T21:41:24.0020112Z 2023-01-11T21:41:24.0020117Z 2023-01-11T21:41:24.0020215Z if __name__ == "__main__": 2023-01-11T21:41:24.0020366Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0020533Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0020832Z arg0_1 = rand_strided((1, 1000), (1000, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0021120Z arg1_1 = rand_strided((1, 1), (1, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0021238Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0021243Z 2023-01-11T21:41:24.0021310Z ok (1.574s) 2023-01-11T21:41:24.0021780Z test_mm_views_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
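The test_misaligned_address_issue1_cpu kernel above is a single-element gather: it reads one int64 index and uses it to pick one float out of the (1, 1000) input. Restated in eager mode, reconstructed from the generated code, so an approximation rather than the test's source:

import torch

values = torch.randn(1, 1000)            # arg0_1 in the logged call()
index = torch.randint(0, 1000, (1, 1))   # arg1_1, dtype torch.int64
out = values.reshape(-1)[index.item()].reshape(1, 1)  # matches buf0's (1, 1) shape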
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0021961Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0022230Z [2023-01-11 21:36:43,587] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 377 2023-01-11T21:41:24.0022620Z [2023-01-11 21:36:43,589] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 377 2023-01-11T21:41:24.0022626Z 2023-01-11T21:41:24.0022721Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0022790Z import torch 2023-01-11T21:41:24.0022860Z import random 2023-01-11T21:41:24.0022977Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0023098Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0023106Z 2023-01-11T21:41:24.0023244Z aten = torch.ops.aten 2023-01-11T21:41:24.0023367Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0023458Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0023462Z 2023-01-11T21:41:24.0023467Z 2023-01-11T21:41:24.0023556Z async_compile.wait(globals()) 2023-01-11T21:41:24.0023628Z del async_compile 2023-01-11T21:41:24.0023633Z 2023-01-11T21:41:24.0023702Z def call(args): 2023-01-11T21:41:24.0023777Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0023848Z args.clear() 2023-01-11T21:41:24.0024050Z buf0 = empty_strided((32, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0024159Z aten.mm.out(arg0_1, as_strided(arg1_1, (32, 32), (32, 1)), out=buf0) 2023-01-11T21:41:24.0024226Z del arg0_1 2023-01-11T21:41:24.0024295Z del arg1_1 2023-01-11T21:41:24.0024366Z return (buf0, ) 2023-01-11T21:41:24.0024371Z 2023-01-11T21:41:24.0024375Z 2023-01-11T21:41:24.0024451Z if __name__ == "__main__": 2023-01-11T21:41:24.0024569Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0024692Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0024895Z arg0_1 = rand_strided((32, 32), (1, 32), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0025091Z arg1_1 = rand_strided((32, 1, 32), (32, 1024, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0025207Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0025212Z 2023-01-11T21:41:24.0025280Z ok (0.023s) 2023-01-11T21:41:24.0025753Z test_move_arange_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
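For test_mm_views_cpu above, Inductor emits no C++ kernel at all: call() lowers the matmul of two non-contiguous views directly to aten.mm.out. An eager restatement of those views, reconstructed from the logged shapes and strides (an approximation, not the test's source):

import torch

a = torch.randn(32, 32).t()   # shape (32, 32), strides (1, 32), like arg0_1
b = torch.randn(32, 1, 32)    # arg1_1; as_strided(..., (32, 32), (32, 1)) reads it as a plain (32, 32) matrix
out = torch.mm(a, b.reshape(32, 32))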
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0025881Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0026143Z [2023-01-11 21:36:43,622] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 378 2023-01-11T21:41:24.0026407Z [2023-01-11 21:36:45,226] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 378 2023-01-11T21:41:24.0026413Z 2023-01-11T21:41:24.0026505Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0026572Z import torch 2023-01-11T21:41:24.0026631Z import random 2023-01-11T21:41:24.0026742Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0026860Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0026865Z 2023-01-11T21:41:24.0026941Z aten = torch.ops.aten 2023-01-11T21:41:24.0027072Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0027163Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0027168Z 2023-01-11T21:41:24.0027172Z 2023-01-11T21:41:24.0027303Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0027595Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0027758Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0027893Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0027970Z { 2023-01-11T21:41:24.0028072Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0028134Z { 2023-01-11T21:41:24.0028211Z #pragma omp for 2023-01-11T21:41:24.0028291Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:24.0028341Z { 2023-01-11T21:41:24.0028407Z { 2023-01-11T21:41:24.0028470Z { 2023-01-11T21:41:24.0028563Z auto tmp2 = in_ptr0[i0]; 2023-01-11T21:41:24.0028665Z auto tmp0 = static_cast(i0); 2023-01-11T21:41:24.0028770Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.0028861Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.0028968Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.0029033Z } 2023-01-11T21:41:24.0029098Z } 2023-01-11T21:41:24.0029158Z } 2023-01-11T21:41:24.0029219Z } 2023-01-11T21:41:24.0029281Z } 2023-01-11T21:41:24.0029352Z ''') 2023-01-11T21:41:24.0029368Z 2023-01-11T21:41:24.0029373Z 2023-01-11T21:41:24.0029450Z async_compile.wait(globals()) 2023-01-11T21:41:24.0029519Z del async_compile 2023-01-11T21:41:24.0029524Z 2023-01-11T21:41:24.0029593Z def call(args): 2023-01-11T21:41:24.0029660Z arg0_1, = args 2023-01-11T21:41:24.0029729Z args.clear() 2023-01-11T21:41:24.0029924Z buf0 = empty_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0030055Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0030112Z del arg0_1 2023-01-11T21:41:24.0030180Z return (buf0, ) 2023-01-11T21:41:24.0030184Z 2023-01-11T21:41:24.0030188Z 2023-01-11T21:41:24.0030261Z if __name__ == "__main__": 2023-01-11T21:41:24.0030378Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0030500Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0030693Z arg0_1 = rand_strided((32, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0030799Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0030804Z 2023-01-11T21:41:24.0030869Z ok (1.669s) 2023-01-11T21:41:24.0030993Z test_multi_device_cpu (__main__.CpuTests) ... skip: requires cuda (0.001s) 2023-01-11T21:41:24.0031153Z test_multi_gpu_device_cpu (__main__.CpuTests) ... 
skip: requires multiple cuda devices (0.001s) 2023-01-11T21:41:24.0031296Z test_multilayer_low_prec_cpu (__main__.CpuTests) ... skip: requires CUDA (0.001s) 2023-01-11T21:41:24.0031439Z test_nan_to_num_cpu (__main__.CpuTests) ... skip: Skipping due to op bugs (0.001s) 2023-01-11T21:41:24.0031902Z test_narrow_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0032029Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0032292Z [2023-01-11 21:36:45,308] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 379 2023-01-11T21:41:24.0032557Z [2023-01-11 21:36:46,840] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 379 2023-01-11T21:41:24.0032562Z 2023-01-11T21:41:24.0032654Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0032721Z import torch 2023-01-11T21:41:24.0032778Z import random 2023-01-11T21:41:24.0032892Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0033011Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0033043Z 2023-01-11T21:41:24.0033124Z aten = torch.ops.aten 2023-01-11T21:41:24.0033256Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0033346Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0033351Z 2023-01-11T21:41:24.0033355Z 2023-01-11T21:41:24.0033488Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0033690Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0033854Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0033954Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0034014Z { 2023-01-11T21:41:24.0034114Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0034176Z { 2023-01-11T21:41:24.0034251Z #pragma omp for 2023-01-11T21:41:24.0034319Z for(long i0=0; i0<128; i0+=1) 2023-01-11T21:41:24.0034381Z { 2023-01-11T21:41:24.0034560Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 640 + (8*i0)); 2023-01-11T21:41:24.0034696Z auto tmp1 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:24.0034782Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0034912Z auto tmp3 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0034996Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0035085Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0035134Z } 2023-01-11T21:41:24.0035225Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0035310Z for(long i0=1024; i0<1024; i0+=1) 2023-01-11T21:41:24.0035371Z { 2023-01-11T21:41:24.0035460Z auto tmp0 = in_ptr0[640 + i0]; 2023-01-11T21:41:24.0035558Z auto tmp1 = static_cast(2); 2023-01-11T21:41:24.0035642Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0035725Z auto tmp3 = static_cast(1); 2023-01-11T21:41:24.0035807Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0035891Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.0035953Z } 2023-01-11T21:41:24.0036013Z } 2023-01-11T21:41:24.0036072Z } 2023-01-11T21:41:24.0036152Z ''') 2023-01-11T21:41:24.0036157Z 2023-01-11T21:41:24.0036162Z 2023-01-11T21:41:24.0036237Z async_compile.wait(globals()) 2023-01-11T21:41:24.0036308Z del async_compile 
2023-01-11T21:41:24.0036313Z 2023-01-11T21:41:24.0036385Z def call(args): 2023-01-11T21:41:24.0036453Z arg0_1, = args 2023-01-11T21:41:24.0036523Z args.clear() 2023-01-11T21:41:24.0036725Z buf0 = empty_strided((16, 64), (64, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0036855Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0036950Z return (as_strided(arg0_1, (64, 16), (64, 1), 10), buf0, ) 2023-01-11T21:41:24.0036968Z 2023-01-11T21:41:24.0036972Z 2023-01-11T21:41:24.0037035Z if __name__ == "__main__": 2023-01-11T21:41:24.0037145Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0037271Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0037468Z arg0_1 = rand_strided((64, 64), (64, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0037574Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0037579Z 2023-01-11T21:41:24.0037645Z ok (1.577s) 2023-01-11T21:41:24.0038123Z test_new_empty_strided_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0038248Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0038507Z [2023-01-11 21:36:46,886] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 380 2023-01-11T21:41:24.0038791Z [2023-01-11 21:36:48,742] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 380 2023-01-11T21:41:24.0038808Z 2023-01-11T21:41:24.0038888Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0038957Z import torch 2023-01-11T21:41:24.0039027Z import random 2023-01-11T21:41:24.0039142Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0039260Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0039265Z 2023-01-11T21:41:24.0039341Z aten = torch.ops.aten 2023-01-11T21:41:24.0039472Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0039549Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0039554Z 2023-01-11T21:41:24.0039559Z 2023-01-11T21:41:24.0039690Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0039892Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0040004Z extern "C" void kernel(float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0040095Z { 2023-01-11T21:41:24.0040192Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0040252Z { 2023-01-11T21:41:24.0040316Z #pragma omp for 2023-01-11T21:41:24.0040397Z for(long i0=0; i0<2048; i0+=1) 2023-01-11T21:41:24.0040458Z { 2023-01-11T21:41:24.0040592Z auto tmp0 = at::vec::Vectorized(static_cast(123)); 2023-01-11T21:41:24.0040681Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0040741Z } 2023-01-11T21:41:24.0040833Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0040909Z for(long i0=16384; i0<16384; i0+=1) 2023-01-11T21:41:24.0040969Z { 2023-01-11T21:41:24.0041068Z auto tmp0 = static_cast(123); 2023-01-11T21:41:24.0041147Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:24.0041207Z } 2023-01-11T21:41:24.0041266Z } 2023-01-11T21:41:24.0041324Z } 2023-01-11T21:41:24.0041390Z ''') 2023-01-11T21:41:24.0041397Z 
2023-01-11T21:41:24.0041404Z 2023-01-11T21:41:24.0041490Z async_compile.wait(globals()) 2023-01-11T21:41:24.0041560Z del async_compile 2023-01-11T21:41:24.0041565Z 2023-01-11T21:41:24.0041634Z def call(args): 2023-01-11T21:41:24.0041700Z arg0_1, = args 2023-01-11T21:41:24.0041768Z args.clear() 2023-01-11T21:41:24.0041985Z buf0 = empty_strided((1, 128, 128), (16384, 128, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0042084Z kernel_cpp_0(c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0042142Z return (buf0, ) 2023-01-11T21:41:24.0042147Z 2023-01-11T21:41:24.0042151Z 2023-01-11T21:41:24.0042225Z if __name__ == "__main__": 2023-01-11T21:41:24.0042338Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0042459Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0042651Z arg0_1 = rand_strided((55, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0042757Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0042765Z 2023-01-11T21:41:24.0042831Z ok (1.903s) 2023-01-11T21:41:24.0043296Z test_new_ones_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0043420Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0043668Z [2023-01-11 21:36:48,845] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 381 2023-01-11T21:41:24.0043929Z [2023-01-11 21:36:51,103] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 381 2023-01-11T21:41:24.0043934Z 2023-01-11T21:41:24.0044027Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0044094Z import torch 2023-01-11T21:41:24.0044192Z import random 2023-01-11T21:41:24.0044308Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0044428Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0044433Z 2023-01-11T21:41:24.0044509Z aten = torch.ops.aten 2023-01-11T21:41:24.0044627Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0044716Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0044721Z 2023-01-11T21:41:24.0044725Z 2023-01-11T21:41:24.0044857Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0045060Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0045173Z extern "C" void kernel(float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0045268Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0045328Z { 2023-01-11T21:41:24.0045377Z { 2023-01-11T21:41:24.0045438Z { 2023-01-11T21:41:24.0045537Z auto tmp0 = static_cast(1); 2023-01-11T21:41:24.0045644Z out_ptr0[0] = tmp0; 2023-01-11T21:41:24.0045706Z } 2023-01-11T21:41:24.0045765Z } 2023-01-11T21:41:24.0045823Z { 2023-01-11T21:41:24.0045871Z { 2023-01-11T21:41:24.0045966Z auto tmp0 = static_cast(0); 2023-01-11T21:41:24.0046042Z out_ptr1[0] = tmp0; 2023-01-11T21:41:24.0046103Z } 2023-01-11T21:41:24.0046162Z } 2023-01-11T21:41:24.0046219Z } 2023-01-11T21:41:24.0046285Z ''') 2023-01-11T21:41:24.0046301Z 2023-01-11T21:41:24.0046306Z 2023-01-11T21:41:24.0046382Z async_compile.wait(globals()) 2023-01-11T21:41:24.0046452Z del async_compile 
2023-01-11T21:41:24.0046457Z 2023-01-11T21:41:24.0046524Z def call(args): 2023-01-11T21:41:24.0046592Z arg0_1, = args 2023-01-11T21:41:24.0046659Z args.clear() 2023-01-11T21:41:24.0046844Z buf0 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0047022Z buf1 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0047146Z kernel_cpp_0(c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0047222Z return (buf0, buf1, ) 2023-01-11T21:41:24.0047227Z 2023-01-11T21:41:24.0047231Z 2023-01-11T21:41:24.0047306Z if __name__ == "__main__": 2023-01-11T21:41:24.0047416Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0047536Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0047728Z arg0_1 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0047835Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0047840Z 2023-01-11T21:41:24.0047906Z ok (2.359s) 2023-01-11T21:41:24.0048379Z test_nll_loss_forward_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0048493Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0048752Z [2023-01-11 21:36:51,185] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 382 2023-01-11T21:41:24.0049017Z [2023-01-11 21:36:53,639] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 382 2023-01-11T21:41:24.0049022Z 2023-01-11T21:41:24.0049119Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0049189Z import torch 2023-01-11T21:41:24.0049260Z import random 2023-01-11T21:41:24.0049375Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0049493Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0049498Z 2023-01-11T21:41:24.0049563Z aten = torch.ops.aten 2023-01-11T21:41:24.0049699Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0049791Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0049828Z 2023-01-11T21:41:24.0049832Z 2023-01-11T21:41:24.0049967Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0050174Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0050292Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:24.0050399Z const long* __restrict__ in_ptr0, 2023-01-11T21:41:24.0050505Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0050594Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0050655Z { 2023-01-11T21:41:24.0050741Z auto out_ptr0 = in_out_ptr0; 2023-01-11T21:41:24.0050803Z { 2023-01-11T21:41:24.0050868Z { 2023-01-11T21:41:24.0050944Z float tmp3 = 0; 2023-01-11T21:41:24.0051051Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0051103Z { 2023-01-11T21:41:24.0051207Z #pragma omp for reduction(+:tmp3) 2023-01-11T21:41:24.0051327Z for(long i0=0; i0<5; i0+=1) 2023-01-11T21:41:24.0051394Z { 2023-01-11T21:41:24.0051461Z { 2023-01-11T21:41:24.0051557Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0051663Z auto tmp1 = in_ptr1[tmp0 + (5*i0)]; 2023-01-11T21:41:24.0051780Z auto tmp2 = -tmp1; 
2023-01-11T21:41:24.0051859Z tmp3 += tmp2; 2023-01-11T21:41:24.0051926Z } 2023-01-11T21:41:24.0051992Z } 2023-01-11T21:41:24.0052055Z } 2023-01-11T21:41:24.0052133Z out_ptr0[0] = tmp3; 2023-01-11T21:41:24.0052197Z } 2023-01-11T21:41:24.0052244Z } 2023-01-11T21:41:24.0052304Z { 2023-01-11T21:41:24.0052367Z { 2023-01-11T21:41:24.0052453Z auto tmp0 = out_ptr0[0]; 2023-01-11T21:41:24.0052553Z auto tmp1 = static_cast(5); 2023-01-11T21:41:24.0052638Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0052709Z in_out_ptr0[0] = tmp2; 2023-01-11T21:41:24.0052771Z } 2023-01-11T21:41:24.0052832Z } 2023-01-11T21:41:24.0052892Z { 2023-01-11T21:41:24.0052954Z { 2023-01-11T21:41:24.0053056Z auto tmp0 = static_cast(5.0); 2023-01-11T21:41:24.0053134Z out_ptr1[0] = tmp0; 2023-01-11T21:41:24.0053184Z } 2023-01-11T21:41:24.0053245Z } 2023-01-11T21:41:24.0053306Z } 2023-01-11T21:41:24.0053384Z ''') 2023-01-11T21:41:24.0053389Z 2023-01-11T21:41:24.0053394Z 2023-01-11T21:41:24.0053483Z async_compile.wait(globals()) 2023-01-11T21:41:24.0053555Z del async_compile 2023-01-11T21:41:24.0053560Z 2023-01-11T21:41:24.0053629Z def call(args): 2023-01-11T21:41:24.0053690Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0053759Z args.clear() 2023-01-11T21:41:24.0053943Z buf0 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0054028Z buf1 = buf0; del buf0 # reuse 2023-01-11T21:41:24.0054210Z buf2 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0054399Z kernel_cpp_0(c_void_p(buf1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:24.0054466Z del arg0_1 2023-01-11T21:41:24.0054519Z del arg1_1 2023-01-11T21:41:24.0054593Z return (buf1, buf2, ) 2023-01-11T21:41:24.0054598Z 2023-01-11T21:41:24.0054602Z 2023-01-11T21:41:24.0054679Z if __name__ == "__main__": 2023-01-11T21:41:24.0054791Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0054913Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0055106Z arg0_1 = rand_strided((5, 5), (5, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0055295Z arg1_1 = rand_strided((5, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0055409Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0055414Z 2023-01-11T21:41:24.0055479Z ok (2.536s) 2023-01-11T21:41:24.0055985Z test_no_mega_fusion_during_lowering_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
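The test_nll_loss_forward_cpu kernel above sums -arg0_1[i, target[i]] over the five samples, divides by 5, and writes the constant total weight 5.0 into the second output. An eager restatement, reconstructed from the generated code (an approximation; buf1 corresponds to the mean loss and buf2 to the total weight):

import torch
import torch.nn.functional as F

logp = torch.log_softmax(torch.randn(5, 5), dim=1)  # plays the role of arg0_1
target = torch.randint(0, 5, (5,))                  # arg1_1, dtype torch.int64
loss = F.nll_loss(logp, target)  # equals (-logp[torch.arange(5), target]).mean()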
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0056112Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0056372Z [2023-01-11 21:36:53,938] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 383 2023-01-11T21:41:24.0056377Z 2023-01-11T21:41:24.0056471Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0056538Z import torch 2023-01-11T21:41:24.0056607Z import random 2023-01-11T21:41:24.0056722Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0056840Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0056874Z 2023-01-11T21:41:24.0056951Z aten = torch.ops.aten 2023-01-11T21:41:24.0057069Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0057158Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0057162Z 2023-01-11T21:41:24.0057167Z 2023-01-11T21:41:24.0057299Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0057502Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0057616Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:24.0057719Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0057822Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0057921Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:24.0058011Z const float* __restrict__ in_ptr3, 2023-01-11T21:41:24.0058109Z const float* __restrict__ in_ptr4, 2023-01-11T21:41:24.0058214Z const float* __restrict__ in_ptr5, 2023-01-11T21:41:24.0058311Z const float* __restrict__ in_ptr6, 2023-01-11T21:41:24.0058410Z const float* __restrict__ in_ptr7, 2023-01-11T21:41:24.0058506Z const float* __restrict__ in_ptr8, 2023-01-11T21:41:24.0058602Z const float* __restrict__ in_ptr9, 2023-01-11T21:41:24.0058694Z const float* __restrict__ in_ptr10, 2023-01-11T21:41:24.0058795Z const float* __restrict__ in_ptr11, 2023-01-11T21:41:24.0058894Z const float* __restrict__ in_ptr12, 2023-01-11T21:41:24.0058994Z const float* __restrict__ in_ptr13, 2023-01-11T21:41:24.0059093Z const float* __restrict__ in_ptr14, 2023-01-11T21:41:24.0059192Z const float* __restrict__ in_ptr15, 2023-01-11T21:41:24.0059292Z const float* __restrict__ in_ptr16, 2023-01-11T21:41:24.0059394Z const float* __restrict__ in_ptr17, 2023-01-11T21:41:24.0059480Z const float* __restrict__ in_ptr18, 2023-01-11T21:41:24.0059577Z const float* __restrict__ in_ptr19, 2023-01-11T21:41:24.0059676Z const float* __restrict__ in_ptr20, 2023-01-11T21:41:24.0059774Z const float* __restrict__ in_ptr21, 2023-01-11T21:41:24.0059872Z const float* __restrict__ in_ptr22, 2023-01-11T21:41:24.0059969Z const float* __restrict__ in_ptr23, 2023-01-11T21:41:24.0060068Z const float* __restrict__ in_ptr24, 2023-01-11T21:41:24.0060155Z const float* __restrict__ in_ptr25, 2023-01-11T21:41:24.0060251Z const float* __restrict__ in_ptr26, 2023-01-11T21:41:24.0060348Z const float* __restrict__ in_ptr27, 2023-01-11T21:41:24.0060448Z const float* __restrict__ in_ptr28, 2023-01-11T21:41:24.0060588Z const float* __restrict__ in_ptr29, 2023-01-11T21:41:24.0060685Z const float* __restrict__ in_ptr30, 2023-01-11T21:41:24.0060781Z const float* __restrict__ in_ptr31, 2023-01-11T21:41:24.0060877Z const float* __restrict__ in_ptr32, 2023-01-11T21:41:24.0060961Z const float* __restrict__ in_ptr33, 2023-01-11T21:41:24.0061058Z const float* __restrict__ in_ptr34, 2023-01-11T21:41:24.0061154Z const float* 
__restrict__ in_ptr35, 2023-01-11T21:41:24.0061256Z const float* __restrict__ in_ptr36, 2023-01-11T21:41:24.0061353Z const float* __restrict__ in_ptr37, 2023-01-11T21:41:24.0061450Z const float* __restrict__ in_ptr38, 2023-01-11T21:41:24.0061546Z const float* __restrict__ in_ptr39, 2023-01-11T21:41:24.0061663Z const float* __restrict__ in_ptr40, 2023-01-11T21:41:24.0061761Z const float* __restrict__ in_ptr41, 2023-01-11T21:41:24.0061859Z const float* __restrict__ in_ptr42, 2023-01-11T21:41:24.0061957Z const float* __restrict__ in_ptr43, 2023-01-11T21:41:24.0062056Z const float* __restrict__ in_ptr44, 2023-01-11T21:41:24.0062154Z const float* __restrict__ in_ptr45, 2023-01-11T21:41:24.0062251Z const float* __restrict__ in_ptr46, 2023-01-11T21:41:24.0062495Z const float* __restrict__ in_ptr47, 2023-01-11T21:41:24.0062618Z const float* __restrict__ in_ptr48, 2023-01-11T21:41:24.0062717Z const float* __restrict__ in_ptr49) 2023-01-11T21:41:24.0062777Z { 2023-01-11T21:41:24.0062864Z auto out_ptr0 = in_out_ptr0; 2023-01-11T21:41:24.0062960Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0063029Z { 2023-01-11T21:41:24.0063106Z #pragma omp for 2023-01-11T21:41:24.0063174Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0063237Z { 2023-01-11T21:41:24.0063373Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0063503Z auto tmp2 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.0063629Z auto tmp4 = at::vec::Vectorized::loadu(in_ptr2 + 8*i0); 2023-01-11T21:41:24.0063754Z auto tmp6 = at::vec::Vectorized::loadu(in_ptr3 + 8*i0); 2023-01-11T21:41:24.0063877Z auto tmp8 = at::vec::Vectorized::loadu(in_ptr4 + 8*i0); 2023-01-11T21:41:24.0064006Z auto tmp10 = at::vec::Vectorized::loadu(in_ptr5 + 8*i0); 2023-01-11T21:41:24.0064121Z auto tmp12 = at::vec::Vectorized::loadu(in_ptr6 + 8*i0); 2023-01-11T21:41:24.0064244Z auto tmp14 = at::vec::Vectorized::loadu(in_ptr7 + 8*i0); 2023-01-11T21:41:24.0064391Z auto tmp16 = at::vec::Vectorized::loadu(in_ptr8 + 8*i0); 2023-01-11T21:41:24.0064514Z auto tmp1 = tmp0 + tmp0; 2023-01-11T21:41:24.0064599Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.0064678Z auto tmp5 = tmp3 + tmp4; 2023-01-11T21:41:24.0064757Z auto tmp7 = tmp5 + tmp6; 2023-01-11T21:41:24.0064836Z auto tmp9 = tmp7 + tmp8; 2023-01-11T21:41:24.0064908Z auto tmp11 = tmp9 + tmp10; 2023-01-11T21:41:24.0064995Z auto tmp13 = tmp11 + tmp12; 2023-01-11T21:41:24.0065079Z auto tmp15 = tmp13 + tmp14; 2023-01-11T21:41:24.0065161Z auto tmp17 = tmp15 + tmp16; 2023-01-11T21:41:24.0065249Z tmp17.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0065310Z } 2023-01-11T21:41:24.0065401Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0065470Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:24.0065530Z { 2023-01-11T21:41:24.0065683Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0065764Z auto tmp2 = in_ptr1[i0]; 2023-01-11T21:41:24.0065844Z auto tmp4 = in_ptr2[i0]; 2023-01-11T21:41:24.0065924Z auto tmp6 = in_ptr3[i0]; 2023-01-11T21:41:24.0065990Z auto tmp8 = in_ptr4[i0]; 2023-01-11T21:41:24.0066073Z auto tmp10 = in_ptr5[i0]; 2023-01-11T21:41:24.0066155Z auto tmp12 = in_ptr6[i0]; 2023-01-11T21:41:24.0066236Z auto tmp14 = in_ptr7[i0]; 2023-01-11T21:41:24.0066317Z auto tmp16 = in_ptr8[i0]; 2023-01-11T21:41:24.0066398Z auto tmp1 = tmp0 + tmp0; 2023-01-11T21:41:24.0066477Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.0066544Z auto tmp5 = tmp3 + tmp4; 2023-01-11T21:41:24.0066622Z auto tmp7 = tmp5 + tmp6; 2023-01-11T21:41:24.0066701Z auto tmp9 = tmp7 + tmp8; 2023-01-11T21:41:24.0066786Z auto 
tmp11 = tmp9 + tmp10; 2023-01-11T21:41:24.0066907Z auto tmp13 = tmp11 + tmp12; 2023-01-11T21:41:24.0066994Z auto tmp15 = tmp13 + tmp14; 2023-01-11T21:41:24.0067074Z auto tmp17 = tmp15 + tmp16; 2023-01-11T21:41:24.0067143Z out_ptr0[i0] = tmp17; 2023-01-11T21:41:24.0067205Z } 2023-01-11T21:41:24.0067280Z #pragma omp for 2023-01-11T21:41:24.0067359Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0067423Z { 2023-01-11T21:41:24.0067552Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0067680Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr9 + 8*i0); 2023-01-11T21:41:24.0067804Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr10 + 8*i0); 2023-01-11T21:41:24.0067919Z auto tmp5 = at::vec::Vectorized::loadu(in_ptr11 + 8*i0); 2023-01-11T21:41:24.0068041Z auto tmp7 = at::vec::Vectorized::loadu(in_ptr12 + 8*i0); 2023-01-11T21:41:24.0068166Z auto tmp9 = at::vec::Vectorized::loadu(in_ptr13 + 8*i0); 2023-01-11T21:41:24.0068298Z auto tmp11 = at::vec::Vectorized::loadu(in_ptr14 + 8*i0); 2023-01-11T21:41:24.0068428Z auto tmp13 = at::vec::Vectorized::loadu(in_ptr15 + 8*i0); 2023-01-11T21:41:24.0068556Z auto tmp15 = at::vec::Vectorized::loadu(in_ptr16 + 8*i0); 2023-01-11T21:41:24.0068640Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0068722Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0068790Z auto tmp6 = tmp4 + tmp5; 2023-01-11T21:41:24.0068869Z auto tmp8 = tmp6 + tmp7; 2023-01-11T21:41:24.0068951Z auto tmp10 = tmp8 + tmp9; 2023-01-11T21:41:24.0069034Z auto tmp12 = tmp10 + tmp11; 2023-01-11T21:41:24.0069118Z auto tmp14 = tmp12 + tmp13; 2023-01-11T21:41:24.0069201Z auto tmp16 = tmp14 + tmp15; 2023-01-11T21:41:24.0069297Z tmp16.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.0069346Z } 2023-01-11T21:41:24.0069442Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0069522Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:24.0069583Z { 2023-01-11T21:41:24.0069665Z auto tmp0 = out_ptr0[i0]; 2023-01-11T21:41:24.0069747Z auto tmp1 = in_ptr9[i0]; 2023-01-11T21:41:24.0069829Z auto tmp3 = in_ptr10[i0]; 2023-01-11T21:41:24.0069898Z auto tmp5 = in_ptr11[i0]; 2023-01-11T21:41:24.0069979Z auto tmp7 = in_ptr12[i0]; 2023-01-11T21:41:24.0070059Z auto tmp9 = in_ptr13[i0]; 2023-01-11T21:41:24.0070143Z auto tmp11 = in_ptr14[i0]; 2023-01-11T21:41:24.0070228Z auto tmp13 = in_ptr15[i0]; 2023-01-11T21:41:24.0070313Z auto tmp15 = in_ptr16[i0]; 2023-01-11T21:41:24.0070394Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0070463Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0070542Z auto tmp6 = tmp4 + tmp5; 2023-01-11T21:41:24.0070622Z auto tmp8 = tmp6 + tmp7; 2023-01-11T21:41:24.0070736Z auto tmp10 = tmp8 + tmp9; 2023-01-11T21:41:24.0070819Z auto tmp12 = tmp10 + tmp11; 2023-01-11T21:41:24.0070902Z auto tmp14 = tmp12 + tmp13; 2023-01-11T21:41:24.0070987Z auto tmp16 = tmp14 + tmp15; 2023-01-11T21:41:24.0071058Z in_out_ptr0[i0] = tmp16; 2023-01-11T21:41:24.0071118Z } 2023-01-11T21:41:24.0071193Z #pragma omp for 2023-01-11T21:41:24.0076741Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0076823Z { 2023-01-11T21:41:24.0076977Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.0077114Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr17 + 8*i0); 2023-01-11T21:41:24.0077230Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr18 + 8*i0); 2023-01-11T21:41:24.0077356Z auto tmp5 = at::vec::Vectorized::loadu(in_ptr19 + 8*i0); 2023-01-11T21:41:24.0077550Z auto tmp7 = at::vec::Vectorized::loadu(in_ptr20 + 8*i0); 2023-01-11T21:41:24.0077678Z auto tmp9 = at::vec::Vectorized::loadu(in_ptr21 + 8*i0); 
2023-01-11T21:41:24.0077810Z auto tmp11 = at::vec::Vectorized::loadu(in_ptr22 + 8*i0); 2023-01-11T21:41:24.0077940Z auto tmp13 = at::vec::Vectorized::loadu(in_ptr23 + 8*i0); 2023-01-11T21:41:24.0078066Z auto tmp15 = at::vec::Vectorized::loadu(in_ptr24 + 8*i0); 2023-01-11T21:41:24.0078151Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0078234Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0078302Z auto tmp6 = tmp4 + tmp5; 2023-01-11T21:41:24.0078384Z auto tmp8 = tmp6 + tmp7; 2023-01-11T21:41:24.0078467Z auto tmp10 = tmp8 + tmp9; 2023-01-11T21:41:24.0078553Z auto tmp12 = tmp10 + tmp11; 2023-01-11T21:41:24.0078639Z auto tmp14 = tmp12 + tmp13; 2023-01-11T21:41:24.0078722Z auto tmp16 = tmp14 + tmp15; 2023-01-11T21:41:24.0078825Z tmp16.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.0078875Z } 2023-01-11T21:41:24.0078970Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0079052Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:24.0079115Z { 2023-01-11T21:41:24.0079201Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:24.0079286Z auto tmp1 = in_ptr17[i0]; 2023-01-11T21:41:24.0079368Z auto tmp3 = in_ptr18[i0]; 2023-01-11T21:41:24.0079438Z auto tmp5 = in_ptr19[i0]; 2023-01-11T21:41:24.0079520Z auto tmp7 = in_ptr20[i0]; 2023-01-11T21:41:24.0079600Z auto tmp9 = in_ptr21[i0]; 2023-01-11T21:41:24.0079685Z auto tmp11 = in_ptr22[i0]; 2023-01-11T21:41:24.0079768Z auto tmp13 = in_ptr23[i0]; 2023-01-11T21:41:24.0079849Z auto tmp15 = in_ptr24[i0]; 2023-01-11T21:41:24.0079933Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0080002Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0080088Z auto tmp6 = tmp4 + tmp5; 2023-01-11T21:41:24.0080168Z auto tmp8 = tmp6 + tmp7; 2023-01-11T21:41:24.0080249Z auto tmp10 = tmp8 + tmp9; 2023-01-11T21:41:24.0080335Z auto tmp12 = tmp10 + tmp11; 2023-01-11T21:41:24.0080419Z auto tmp14 = tmp12 + tmp13; 2023-01-11T21:41:24.0080502Z auto tmp16 = tmp14 + tmp15; 2023-01-11T21:41:24.0080571Z in_out_ptr0[i0] = tmp16; 2023-01-11T21:41:24.0080634Z } 2023-01-11T21:41:24.0080710Z #pragma omp for 2023-01-11T21:41:24.0080791Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0080853Z { 2023-01-11T21:41:24.0080989Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.0081120Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr25 + 8*i0); 2023-01-11T21:41:24.0081236Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr26 + 8*i0); 2023-01-11T21:41:24.0081364Z auto tmp5 = at::vec::Vectorized::loadu(in_ptr27 + 8*i0); 2023-01-11T21:41:24.0081527Z auto tmp7 = at::vec::Vectorized::loadu(in_ptr28 + 8*i0); 2023-01-11T21:41:24.0081650Z auto tmp9 = at::vec::Vectorized::loadu(in_ptr29 + 8*i0); 2023-01-11T21:41:24.0081780Z auto tmp11 = at::vec::Vectorized::loadu(in_ptr30 + 8*i0); 2023-01-11T21:41:24.0081911Z auto tmp13 = at::vec::Vectorized::loadu(in_ptr31 + 8*i0); 2023-01-11T21:41:24.0082038Z auto tmp15 = at::vec::Vectorized::loadu(in_ptr32 + 8*i0); 2023-01-11T21:41:24.0082124Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0082208Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0082277Z auto tmp6 = tmp4 + tmp5; 2023-01-11T21:41:24.0082358Z auto tmp8 = tmp6 + tmp7; 2023-01-11T21:41:24.0082443Z auto tmp10 = tmp8 + tmp9; 2023-01-11T21:41:24.0082529Z auto tmp12 = tmp10 + tmp11; 2023-01-11T21:41:24.0082613Z auto tmp14 = tmp12 + tmp13; 2023-01-11T21:41:24.0082729Z auto tmp16 = tmp14 + tmp15; 2023-01-11T21:41:24.0082830Z tmp16.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.0082880Z } 2023-01-11T21:41:24.0082974Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0083056Z for(long i0=64; i0<64; i0+=1) 
2023-01-11T21:41:24.0083119Z { 2023-01-11T21:41:24.0083207Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:24.0083294Z auto tmp1 = in_ptr25[i0]; 2023-01-11T21:41:24.0083364Z auto tmp3 = in_ptr26[i0]; 2023-01-11T21:41:24.0083446Z auto tmp5 = in_ptr27[i0]; 2023-01-11T21:41:24.0083529Z auto tmp7 = in_ptr28[i0]; 2023-01-11T21:41:24.0083610Z auto tmp9 = in_ptr29[i0]; 2023-01-11T21:41:24.0083697Z auto tmp11 = in_ptr30[i0]; 2023-01-11T21:41:24.0083785Z auto tmp13 = in_ptr31[i0]; 2023-01-11T21:41:24.0083871Z auto tmp15 = in_ptr32[i0]; 2023-01-11T21:41:24.0083944Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0084029Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0084111Z auto tmp6 = tmp4 + tmp5; 2023-01-11T21:41:24.0084193Z auto tmp8 = tmp6 + tmp7; 2023-01-11T21:41:24.0084277Z auto tmp10 = tmp8 + tmp9; 2023-01-11T21:41:24.0084362Z auto tmp12 = tmp10 + tmp11; 2023-01-11T21:41:24.0084446Z auto tmp14 = tmp12 + tmp13; 2023-01-11T21:41:24.0084518Z auto tmp16 = tmp14 + tmp15; 2023-01-11T21:41:24.0084602Z in_out_ptr0[i0] = tmp16; 2023-01-11T21:41:24.0084668Z } 2023-01-11T21:41:24.0084745Z #pragma omp for 2023-01-11T21:41:24.0084827Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0084889Z { 2023-01-11T21:41:24.0085026Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.0085142Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr33 + 8*i0); 2023-01-11T21:41:24.0085273Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr34 + 8*i0); 2023-01-11T21:41:24.0085401Z auto tmp5 = at::vec::Vectorized::loadu(in_ptr35 + 8*i0); 2023-01-11T21:41:24.0085526Z auto tmp7 = at::vec::Vectorized::loadu(in_ptr36 + 8*i0); 2023-01-11T21:41:24.0085649Z auto tmp9 = at::vec::Vectorized::loadu(in_ptr37 + 8*i0); 2023-01-11T21:41:24.0085781Z auto tmp11 = at::vec::Vectorized::loadu(in_ptr38 + 8*i0); 2023-01-11T21:41:24.0085912Z auto tmp13 = at::vec::Vectorized::loadu(in_ptr39 + 8*i0); 2023-01-11T21:41:24.0086040Z auto tmp15 = at::vec::Vectorized::loadu(in_ptr40 + 8*i0); 2023-01-11T21:41:24.0086126Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0086196Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0086283Z auto tmp6 = tmp4 + tmp5; 2023-01-11T21:41:24.0086365Z auto tmp8 = tmp6 + tmp7; 2023-01-11T21:41:24.0086447Z auto tmp10 = tmp8 + tmp9; 2023-01-11T21:41:24.0086563Z auto tmp12 = tmp10 + tmp11; 2023-01-11T21:41:24.0086646Z auto tmp14 = tmp12 + tmp13; 2023-01-11T21:41:24.0086733Z auto tmp16 = tmp14 + tmp15; 2023-01-11T21:41:24.0086817Z tmp16.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.0086880Z } 2023-01-11T21:41:24.0086976Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0087057Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:24.0087120Z { 2023-01-11T21:41:24.0087208Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:24.0087291Z auto tmp1 = in_ptr33[i0]; 2023-01-11T21:41:24.0087361Z auto tmp3 = in_ptr34[i0]; 2023-01-11T21:41:24.0087444Z auto tmp5 = in_ptr35[i0]; 2023-01-11T21:41:24.0087526Z auto tmp7 = in_ptr36[i0]; 2023-01-11T21:41:24.0087609Z auto tmp9 = in_ptr37[i0]; 2023-01-11T21:41:24.0087693Z auto tmp11 = in_ptr38[i0]; 2023-01-11T21:41:24.0087779Z auto tmp13 = in_ptr39[i0]; 2023-01-11T21:41:24.0087890Z auto tmp15 = in_ptr40[i0]; 2023-01-11T21:41:24.0087962Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0088044Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0088127Z auto tmp6 = tmp4 + tmp5; 2023-01-11T21:41:24.0088208Z auto tmp8 = tmp6 + tmp7; 2023-01-11T21:41:24.0088288Z auto tmp10 = tmp8 + tmp9; 2023-01-11T21:41:24.0088370Z auto tmp12 = tmp10 + tmp11; 2023-01-11T21:41:24.0088455Z auto tmp14 = tmp12 + tmp13; 
2023-01-11T21:41:24.0088528Z auto tmp16 = tmp14 + tmp15; 2023-01-11T21:41:24.0088608Z in_out_ptr0[i0] = tmp16; 2023-01-11T21:41:24.0088671Z } 2023-01-11T21:41:24.0088746Z #pragma omp for 2023-01-11T21:41:24.0088827Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0088891Z { 2023-01-11T21:41:24.0089026Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.0089144Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr41 + 8*i0); 2023-01-11T21:41:24.0089273Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr42 + 8*i0); 2023-01-11T21:41:24.0089396Z auto tmp5 = at::vec::Vectorized::loadu(in_ptr43 + 8*i0); 2023-01-11T21:41:24.0089518Z auto tmp7 = at::vec::Vectorized::loadu(in_ptr44 + 8*i0); 2023-01-11T21:41:24.0089639Z auto tmp9 = at::vec::Vectorized::loadu(in_ptr45 + 8*i0); 2023-01-11T21:41:24.0089768Z auto tmp11 = at::vec::Vectorized::loadu(in_ptr46 + 8*i0); 2023-01-11T21:41:24.0089894Z auto tmp13 = at::vec::Vectorized::loadu(in_ptr47 + 8*i0); 2023-01-11T21:41:24.0090021Z auto tmp15 = at::vec::Vectorized::loadu(in_ptr48 + 8*i0); 2023-01-11T21:41:24.0090104Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0090174Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0090255Z auto tmp6 = tmp4 + tmp5; 2023-01-11T21:41:24.0090340Z auto tmp8 = tmp6 + tmp7; 2023-01-11T21:41:24.0090422Z auto tmp10 = tmp8 + tmp9; 2023-01-11T21:41:24.0090504Z auto tmp12 = tmp10 + tmp11; 2023-01-11T21:41:24.0090587Z auto tmp14 = tmp12 + tmp13; 2023-01-11T21:41:24.0090669Z auto tmp16 = tmp14 + tmp15; 2023-01-11T21:41:24.0090754Z tmp16.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.0090815Z } 2023-01-11T21:41:24.0090908Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0090988Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:24.0091052Z { 2023-01-11T21:41:24.0091139Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:24.0091224Z auto tmp1 = in_ptr41[i0]; 2023-01-11T21:41:24.0091294Z auto tmp3 = in_ptr42[i0]; 2023-01-11T21:41:24.0091374Z auto tmp5 = in_ptr43[i0]; 2023-01-11T21:41:24.0091454Z auto tmp7 = in_ptr44[i0]; 2023-01-11T21:41:24.0091533Z auto tmp9 = in_ptr45[i0]; 2023-01-11T21:41:24.0091646Z auto tmp11 = in_ptr46[i0]; 2023-01-11T21:41:24.0091728Z auto tmp13 = in_ptr47[i0]; 2023-01-11T21:41:24.0091811Z auto tmp15 = in_ptr48[i0]; 2023-01-11T21:41:24.0091882Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0091962Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0092041Z auto tmp6 = tmp4 + tmp5; 2023-01-11T21:41:24.0092120Z auto tmp8 = tmp6 + tmp7; 2023-01-11T21:41:24.0092201Z auto tmp10 = tmp8 + tmp9; 2023-01-11T21:41:24.0092284Z auto tmp12 = tmp10 + tmp11; 2023-01-11T21:41:24.0092366Z auto tmp14 = tmp12 + tmp13; 2023-01-11T21:41:24.0092436Z auto tmp16 = tmp14 + tmp15; 2023-01-11T21:41:24.0092518Z in_out_ptr0[i0] = tmp16; 2023-01-11T21:41:24.0092577Z } 2023-01-11T21:41:24.0092650Z #pragma omp for 2023-01-11T21:41:24.0092729Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0092789Z { 2023-01-11T21:41:24.0092952Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.0093069Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr49 + 8*i0); 2023-01-11T21:41:24.0093150Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0093244Z tmp2.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.0093305Z } 2023-01-11T21:41:24.0093396Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0093475Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:24.0093533Z { 2023-01-11T21:41:24.0093609Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:24.0093689Z auto tmp1 = in_ptr49[i0]; 2023-01-11T21:41:24.0093771Z auto tmp2 = tmp0 + tmp1; 
2023-01-11T21:41:24.0093852Z in_out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.0093912Z } 2023-01-11T21:41:24.0093972Z } 2023-01-11T21:41:24.0094020Z } 2023-01-11T21:41:24.0094128Z ''') 2023-01-11T21:41:24.0094135Z 2023-01-11T21:41:24.0094140Z 2023-01-11T21:41:24.0094235Z async_compile.wait(globals()) 2023-01-11T21:41:24.0094311Z del async_compile 2023-01-11T21:41:24.0094316Z 2023-01-11T21:41:24.0094386Z def call(args): 2023-01-11T21:41:24.0094716Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1, arg8_1, arg9_1, arg10_1, arg11_1, arg12_1, arg13_1, arg14_1, arg15_1, arg16_1, arg17_1, arg18_1, arg19_1, arg20_1, arg21_1, arg22_1, arg23_1, arg24_1, arg25_1, arg26_1, arg27_1, arg28_1, arg29_1, arg30_1, arg31_1, arg32_1, arg33_1, arg34_1, arg35_1, arg36_1, arg37_1, arg38_1, arg39_1, arg40_1, arg41_1, arg42_1, arg43_1, arg44_1, arg45_1, arg46_1, arg47_1, arg48_1, arg49_1 = args 2023-01-11T21:41:24.0094788Z args.clear() 2023-01-11T21:41:24.0094993Z buf0 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0095077Z buf1 = buf0; del buf0 # reuse 2023-01-11T21:41:24.0095146Z buf2 = buf1; del buf1 # reuse 2023-01-11T21:41:24.0095224Z buf3 = buf2; del buf2 # reuse 2023-01-11T21:41:24.0095304Z buf4 = buf3; del buf3 # reuse 2023-01-11T21:41:24.0095387Z buf5 = buf4; del buf4 # reuse 2023-01-11T21:41:24.0095466Z buf6 = buf5; del buf5 # reuse 2023-01-11T21:41:24.0096712Z kernel_cpp_0(c_void_p(buf6.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(arg3_1.data_ptr()), c_void_p(arg4_1.data_ptr()), c_void_p(arg5_1.data_ptr()), c_void_p(arg6_1.data_ptr()), c_void_p(arg7_1.data_ptr()), c_void_p(arg8_1.data_ptr()), c_void_p(arg9_1.data_ptr()), c_void_p(arg10_1.data_ptr()), c_void_p(arg11_1.data_ptr()), c_void_p(arg12_1.data_ptr()), c_void_p(arg13_1.data_ptr()), c_void_p(arg14_1.data_ptr()), c_void_p(arg15_1.data_ptr()), c_void_p(arg16_1.data_ptr()), c_void_p(arg17_1.data_ptr()), c_void_p(arg18_1.data_ptr()), c_void_p(arg19_1.data_ptr()), c_void_p(arg20_1.data_ptr()), c_void_p(arg21_1.data_ptr()), c_void_p(arg22_1.data_ptr()), c_void_p(arg23_1.data_ptr()), c_void_p(arg24_1.data_ptr()), c_void_p(arg25_1.data_ptr()), c_void_p(arg26_1.data_ptr()), c_void_p(arg27_1.data_ptr()), c_void_p(arg28_1.data_ptr()), c_void_p(arg29_1.data_ptr()), c_void_p(arg30_1.data_ptr()), c_void_p(arg31_1.data_ptr()), c_void_p(arg32_1.data_ptr()), c_void_p(arg33_1.data_ptr()), c_void_p(arg34_1.data_ptr()), c_void_p(arg35_1.data_ptr()), c_void_p(arg36_1.data_ptr()), c_void_p(arg37_1.data_ptr()), c_void_p(arg38_1.data_ptr()), c_void_p(arg39_1.data_ptr()), c_void_p(arg40_1.data_ptr()), c_void_p(arg41_1.data_ptr()), c_void_p(arg42_1.data_ptr()), c_void_p(arg43_1.data_ptr()), c_void_p(arg44_1.data_ptr()), c_void_p(arg45_1.data_ptr()), c_void_p(arg46_1.data_ptr()), c_void_p(arg47_1.data_ptr()), c_void_p(arg48_1.data_ptr()), c_void_p(arg49_1.data_ptr())) 2023-01-11T21:41:24.0096817Z del arg0_1 2023-01-11T21:41:24.0096882Z del arg10_1 2023-01-11T21:41:24.0096946Z del arg11_1 2023-01-11T21:41:24.0097009Z del arg12_1 2023-01-11T21:41:24.0097072Z del arg13_1 2023-01-11T21:41:24.0097136Z del arg14_1 2023-01-11T21:41:24.0097189Z del arg15_1 2023-01-11T21:41:24.0097281Z del arg16_1 2023-01-11T21:41:24.0097347Z del arg17_1 2023-01-11T21:41:24.0097410Z del arg18_1 2023-01-11T21:41:24.0097473Z del arg19_1 2023-01-11T21:41:24.0097537Z del arg1_1 2023-01-11T21:41:24.0097589Z del arg20_1 2023-01-11T21:41:24.0097653Z del arg21_1 2023-01-11T21:41:24.0097718Z del 
arg22_1 2023-01-11T21:41:24.0097783Z del arg23_1 2023-01-11T21:41:24.0097848Z del arg24_1 2023-01-11T21:41:24.0097913Z del arg25_1 2023-01-11T21:41:24.0097977Z del arg26_1 2023-01-11T21:41:24.0098029Z del arg27_1 2023-01-11T21:41:24.0098095Z del arg28_1 2023-01-11T21:41:24.0098162Z del arg29_1 2023-01-11T21:41:24.0098228Z del arg2_1 2023-01-11T21:41:24.0098293Z del arg30_1 2023-01-11T21:41:24.0098356Z del arg31_1 2023-01-11T21:41:24.0098407Z del arg32_1 2023-01-11T21:41:24.0098470Z del arg33_1 2023-01-11T21:41:24.0098534Z del arg34_1 2023-01-11T21:41:24.0098599Z del arg35_1 2023-01-11T21:41:24.0098662Z del arg36_1 2023-01-11T21:41:24.0098733Z del arg37_1 2023-01-11T21:41:24.0098797Z del arg38_1 2023-01-11T21:41:24.0098850Z del arg39_1 2023-01-11T21:41:24.0098914Z del arg3_1 2023-01-11T21:41:24.0098979Z del arg40_1 2023-01-11T21:41:24.0099042Z del arg41_1 2023-01-11T21:41:24.0099104Z del arg42_1 2023-01-11T21:41:24.0099169Z del arg43_1 2023-01-11T21:41:24.0099221Z del arg44_1 2023-01-11T21:41:24.0099284Z del arg45_1 2023-01-11T21:41:24.0099347Z del arg46_1 2023-01-11T21:41:24.0099409Z del arg47_1 2023-01-11T21:41:24.0099472Z del arg48_1 2023-01-11T21:41:24.0099536Z del arg49_1 2023-01-11T21:41:24.0099600Z del arg4_1 2023-01-11T21:41:24.0099652Z del arg5_1 2023-01-11T21:41:24.0099716Z del arg6_1 2023-01-11T21:41:24.0099783Z del arg7_1 2023-01-11T21:41:24.0099845Z del arg8_1 2023-01-11T21:41:24.0099910Z del arg9_1 2023-01-11T21:41:24.0099979Z return (buf6, ) 2023-01-11T21:41:24.0099984Z 2023-01-11T21:41:24.0099988Z 2023-01-11T21:41:24.0100065Z if __name__ == "__main__": 2023-01-11T21:41:24.0100170Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0100293Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0100495Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0100687Z arg1_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0100873Z arg2_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0101058Z arg3_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0101240Z arg4_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0101423Z arg5_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0101594Z arg6_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0101777Z arg7_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0102004Z arg8_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0102187Z arg9_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0102546Z arg10_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0102793Z arg11_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0102983Z arg12_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0103159Z arg13_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0103343Z arg14_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0103527Z arg15_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0103711Z arg16_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0103960Z arg17_1 = rand_strided((64, ), (1, ), 
device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0104147Z arg18_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0104331Z arg19_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0104517Z arg20_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0104689Z arg21_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0104871Z arg22_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0105053Z arg23_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0105236Z arg24_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0105422Z arg25_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0105605Z arg26_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0105794Z arg27_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0105976Z arg28_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0106148Z arg29_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0106329Z arg30_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0106511Z arg31_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0106693Z arg32_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0106875Z arg33_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0107057Z arg34_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0107241Z arg35_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0107423Z arg36_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0107599Z arg37_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0107784Z arg38_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0107964Z arg39_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0108147Z arg40_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0108329Z arg41_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0108512Z arg42_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0108698Z arg43_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0108879Z arg44_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0109050Z arg45_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0109235Z arg46_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0109468Z arg47_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0109652Z arg48_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0109834Z arg49_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0110196Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1, arg8_1, arg9_1, arg10_1, arg11_1, arg12_1, arg13_1, arg14_1, arg15_1, arg16_1, arg17_1, arg18_1, arg19_1, arg20_1, arg21_1, 
arg22_1, arg23_1, arg24_1, arg25_1, arg26_1, arg27_1, arg28_1, arg29_1, arg30_1, arg31_1, arg32_1, arg33_1, arg34_1, arg35_1, arg36_1, arg37_1, arg38_1, arg39_1, arg40_1, arg41_1, arg42_1, arg43_1, arg44_1, arg45_1, arg46_1, arg47_1, arg48_1, arg49_1])) 2023-01-11T21:41:24.0110473Z [2023-01-11 21:36:55,820] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 383 2023-01-11T21:41:24.0110482Z 2023-01-11T21:41:24.0110587Z --> 7 2023-01-11T21:41:24.0110654Z ok (2.187s) 2023-01-11T21:41:24.0111130Z test_no_op_reduction_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0111255Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0111504Z [2023-01-11 21:36:55,847] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 384 2023-01-11T21:41:24.0111766Z [2023-01-11 21:36:57,494] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 384 2023-01-11T21:41:24.0111773Z 2023-01-11T21:41:24.0111865Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0111938Z import torch 2023-01-11T21:41:24.0112008Z import random 2023-01-11T21:41:24.0112123Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0112241Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0112246Z 2023-01-11T21:41:24.0112322Z aten = torch.ops.aten 2023-01-11T21:41:24.0112443Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0112532Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0112538Z 2023-01-11T21:41:24.0112542Z 2023-01-11T21:41:24.0112677Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0112878Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0112996Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0113096Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0113192Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0113251Z { 2023-01-11T21:41:24.0113341Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0113401Z { 2023-01-11T21:41:24.0113476Z #pragma omp for 2023-01-11T21:41:24.0113557Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:24.0113618Z { 2023-01-11T21:41:24.0113814Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0113949Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0114021Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0114113Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0114201Z tmp2.store(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0114264Z } 2023-01-11T21:41:24.0114357Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0114438Z for(long i0=8; i0<8; i0+=1) 2023-01-11T21:41:24.0114501Z { 2023-01-11T21:41:24.0114571Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0114670Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.0114789Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0114871Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:24.0114950Z out_ptr1[i0] = tmp2; 2023-01-11T21:41:24.0115012Z } 2023-01-11T21:41:24.0115073Z } 2023-01-11T21:41:24.0115120Z } 2023-01-11T21:41:24.0115204Z ''') 2023-01-11T21:41:24.0115210Z 
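The kernel above (graph 384, test_no_op_reduction_cpu) copies the 8-element input unchanged into out_ptr0 and writes input + 1 into out_ptr1, consistent with a reduction over a size-1 dimension, which lowers to a plain copy. A minimal eager-mode sketch with the same shapes, assuming a PyTorch build that provides torch.compile (the function name and body are an illustration, not the actual test source):

    import torch

    def no_op_reduction_sketch(x):
        # x has shape (8, 1, 1); summing the trailing size-1 dim is just a copy,
        # which is why the generated kernel stores tmp0 (the raw load) to out_ptr0
        return x.sum(dim=-1), x + 1

    compiled = torch.compile(no_op_reduction_sketch)
    out0, out1 = compiled(torch.randn(8, 1, 1))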
2023-01-11T21:41:24.0115214Z 2023-01-11T21:41:24.0115305Z async_compile.wait(globals()) 2023-01-11T21:41:24.0115379Z del async_compile 2023-01-11T21:41:24.0115383Z 2023-01-11T21:41:24.0115455Z def call(args): 2023-01-11T21:41:24.0115525Z arg0_1, = args 2023-01-11T21:41:24.0115598Z args.clear() 2023-01-11T21:41:24.0115783Z buf0 = empty_strided((8, 1), (1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0115986Z buf1 = empty_strided((8, 1, 1), (1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0116151Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0116223Z del arg0_1 2023-01-11T21:41:24.0116329Z return (buf0, buf1, ) 2023-01-11T21:41:24.0116335Z 2023-01-11T21:41:24.0116340Z 2023-01-11T21:41:24.0116417Z if __name__ == "__main__": 2023-01-11T21:41:24.0116530Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0116655Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0116847Z arg0_1 = rand_strided((8, 1, 1), (1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0116955Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0116960Z 2023-01-11T21:41:24.0117028Z ok (1.668s) 2023-01-11T21:41:24.0117365Z test_output_strides_cpu (__main__.CpuTests) ... [2023-01-11 21:36:57,510] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 385 2023-01-11T21:41:24.0117633Z [2023-01-11 21:36:59,139] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 385 2023-01-11T21:41:24.0117890Z [2023-01-11 21:36:59,151] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 386 2023-01-11T21:41:24.0118158Z [2023-01-11 21:36:59,153] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 386 2023-01-11T21:41:24.0118412Z [2023-01-11 21:36:59,192] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 387 2023-01-11T21:41:24.0118674Z [2023-01-11 21:36:59,196] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 387 2023-01-11T21:41:24.0119106Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:3148: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0119210Z self.assertEqual(inp.storage(), out.storage()) 2023-01-11T21:41:24.0119876Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:1904: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0119971Z device=typed_storage.device, 2023-01-11T21:41:24.0119976Z 2023-01-11T21:41:24.0120069Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0120139Z import torch 2023-01-11T21:41:24.0120211Z import random 2023-01-11T21:41:24.0120327Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0120450Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0120455Z 2023-01-11T21:41:24.0120535Z aten = torch.ops.aten 2023-01-11T21:41:24.0120657Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0120748Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0120753Z 2023-01-11T21:41:24.0120788Z 2023-01-11T21:41:24.0120928Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0121136Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0121254Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0121354Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0121417Z { 2023-01-11T21:41:24.0121503Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0121563Z { 2023-01-11T21:41:24.0121640Z #pragma omp for 2023-01-11T21:41:24.0121724Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0121787Z { 2023-01-11T21:41:24.0121864Z #pragma GCC ivdep 2023-01-11T21:41:24.0121946Z for(long i1=0; i1<16; i1+=1) 2023-01-11T21:41:24.0121998Z { 2023-01-11T21:41:24.0122080Z #pragma GCC ivdep 2023-01-11T21:41:24.0122169Z for(long i2=0; i2<4; i2+=1) 2023-01-11T21:41:24.0122237Z { 2023-01-11T21:41:24.0122334Z { 2023-01-11T21:41:24.0122401Z { 2023-01-11T21:41:24.0122511Z auto tmp0 = in_ptr0[i1 + (16*i2) + (64*i0)]; 2023-01-11T21:41:24.0122602Z out_ptr0[i2 + (4*i1) + (64*i0)] = tmp0; 2023-01-11T21:41:24.0122670Z } 2023-01-11T21:41:24.0122739Z } 2023-01-11T21:41:24.0122801Z } 2023-01-11T21:41:24.0122864Z } 2023-01-11T21:41:24.0122923Z } 2023-01-11T21:41:24.0122981Z } 2023-01-11T21:41:24.0123026Z } 2023-01-11T21:41:24.0123104Z ''') 2023-01-11T21:41:24.0123110Z 2023-01-11T21:41:24.0123114Z 2023-01-11T21:41:24.0123201Z async_compile.wait(globals()) 2023-01-11T21:41:24.0123272Z del async_compile 2023-01-11T21:41:24.0123276Z 2023-01-11T21:41:24.0123344Z def call(args): 2023-01-11T21:41:24.0123411Z arg0_1, = args 2023-01-11T21:41:24.0123479Z args.clear() 2023-01-11T21:41:24.0123680Z buf0 = empty_strided((4, 4, 4, 4), (64, 16, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0123813Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0123880Z del arg0_1 2023-01-11T21:41:24.0123953Z return (buf0, ) 2023-01-11T21:41:24.0123957Z 2023-01-11T21:41:24.0123961Z 2023-01-11T21:41:24.0124035Z if __name__ == "__main__": 2023-01-11T21:41:24.0124146Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0124267Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0124477Z arg0_1 = rand_strided((4, 4, 4, 4), (64, 16, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0124572Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0124577Z 2023-01-11T21:41:24.0124592Z 2023-01-11T21:41:24.0124672Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0124742Z import torch 2023-01-11T21:41:24.0124813Z import random 2023-01-11T21:41:24.0124926Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0125050Z from torch._inductor.codecache import AsyncCompile 
2023-01-11T21:41:24.0125055Z 2023-01-11T21:41:24.0125131Z aten = torch.ops.aten 2023-01-11T21:41:24.0125262Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0125340Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0125344Z 2023-01-11T21:41:24.0125349Z 2023-01-11T21:41:24.0125433Z async_compile.wait(globals()) 2023-01-11T21:41:24.0125504Z del async_compile 2023-01-11T21:41:24.0125508Z 2023-01-11T21:41:24.0125577Z def call(args): 2023-01-11T21:41:24.0125643Z arg0_1, = args 2023-01-11T21:41:24.0125711Z args.clear() 2023-01-11T21:41:24.0125807Z return (as_strided(arg0_1, (64, 4), (4, 1)), ) 2023-01-11T21:41:24.0125812Z 2023-01-11T21:41:24.0125816Z 2023-01-11T21:41:24.0125887Z if __name__ == "__main__": 2023-01-11T21:41:24.0125983Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0126106Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0126346Z arg0_1 = rand_strided((4, 4, 4, 4), (64, 16, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0126454Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0126458Z 2023-01-11T21:41:24.0126462Z 2023-01-11T21:41:24.0126553Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0126620Z import torch 2023-01-11T21:41:24.0126689Z import random 2023-01-11T21:41:24.0126802Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0126908Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0126913Z 2023-01-11T21:41:24.0126989Z aten = torch.ops.aten 2023-01-11T21:41:24.0127120Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0127210Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0127214Z 2023-01-11T21:41:24.0127218Z 2023-01-11T21:41:24.0127303Z async_compile.wait(globals()) 2023-01-11T21:41:24.0127372Z del async_compile 2023-01-11T21:41:24.0127376Z 2023-01-11T21:41:24.0127445Z def call(args): 2023-01-11T21:41:24.0127539Z arg0_1, = args 2023-01-11T21:41:24.0127596Z args.clear() 2023-01-11T21:41:24.0127699Z return (as_strided(arg0_1, (4, 4, 1), (4, 16, 0), 3), ) 2023-01-11T21:41:24.0127704Z 2023-01-11T21:41:24.0127708Z 2023-01-11T21:41:24.0127778Z if __name__ == "__main__": 2023-01-11T21:41:24.0127887Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0128005Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0128216Z arg0_1 = rand_strided((4, 4, 4, 4), (64, 16, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0128323Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0128327Z 2023-01-11T21:41:24.0128389Z ok (1.701s) 2023-01-11T21:41:24.0128848Z test_permute_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0128977Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0129238Z [2023-01-11 21:36:59,218] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 388 2023-01-11T21:41:24.0129504Z [2023-01-11 21:37:01,158] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 388 2023-01-11T21:41:24.0129509Z 2023-01-11T21:41:24.0129601Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0129668Z import torch 2023-01-11T21:41:24.0129735Z import random 2023-01-11T21:41:24.0129846Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0129963Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0129967Z 2023-01-11T21:41:24.0130032Z aten = torch.ops.aten 2023-01-11T21:41:24.0130162Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0130256Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0130261Z 2023-01-11T21:41:24.0130266Z 2023-01-11T21:41:24.0130399Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0130601Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0130718Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0130817Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0130911Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0130958Z { 2023-01-11T21:41:24.0131056Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0131116Z { 2023-01-11T21:41:24.0131192Z #pragma omp for 2023-01-11T21:41:24.0131272Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0131332Z { 2023-01-11T21:41:24.0131464Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0131587Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0131703Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0131830Z auto tmp3 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:24.0131913Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0131994Z auto tmp5 = tmp0 + tmp3; 2023-01-11T21:41:24.0132082Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0132170Z tmp5.store(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0132221Z } 2023-01-11T21:41:24.0132312Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0132393Z for(long i0=32; i0<32; i0+=1) 2023-01-11T21:41:24.0132454Z { 2023-01-11T21:41:24.0132535Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0132631Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.0132712Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0132796Z auto tmp3 = static_cast(2); 2023-01-11T21:41:24.0132905Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0132991Z auto tmp5 = tmp0 + tmp3; 2023-01-11T21:41:24.0133068Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.0133144Z out_ptr1[i0] = tmp5; 2023-01-11T21:41:24.0133205Z } 2023-01-11T21:41:24.0133265Z } 2023-01-11T21:41:24.0133311Z } 2023-01-11T21:41:24.0133389Z ''') 2023-01-11T21:41:24.0133394Z 2023-01-11T21:41:24.0133399Z 2023-01-11T21:41:24.0133490Z async_compile.wait(globals()) 2023-01-11T21:41:24.0133561Z del async_compile 2023-01-11T21:41:24.0133566Z 2023-01-11T21:41:24.0133635Z def call(args): 2023-01-11T21:41:24.0133703Z arg0_1, = args 2023-01-11T21:41:24.0133772Z args.clear() 2023-01-11T21:41:24.0133977Z buf0 = empty_strided((2, 2, 2, 2, 2), (4, 8, 1, 16, 2), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0134190Z buf1 = empty_strided((2, 2, 2, 2, 2), (4, 8, 1, 16, 2), 
device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0134356Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0134425Z del arg0_1 2023-01-11T21:41:24.0134501Z return (buf0, buf1, ) 2023-01-11T21:41:24.0134506Z 2023-01-11T21:41:24.0134510Z 2023-01-11T21:41:24.0134583Z if __name__ == "__main__": 2023-01-11T21:41:24.0134696Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0134817Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0135021Z arg0_1 = rand_strided((2, 2, 2, 2, 2), (16, 8, 4, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0135126Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0135131Z 2023-01-11T21:41:24.0135195Z ok (1.963s) 2023-01-11T21:41:24.0135660Z test_pow1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0135788Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0136048Z [2023-01-11 21:37:01,317] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 389 2023-01-11T21:41:24.0136311Z [2023-01-11 21:37:03,472] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 389 2023-01-11T21:41:24.0136316Z 2023-01-11T21:41:24.0136412Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0136478Z import torch 2023-01-11T21:41:24.0136534Z import random 2023-01-11T21:41:24.0136646Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0136765Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0136770Z 2023-01-11T21:41:24.0136845Z aten = torch.ops.aten 2023-01-11T21:41:24.0136977Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0137100Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0137105Z 2023-01-11T21:41:24.0137109Z 2023-01-11T21:41:24.0137243Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0137446Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0137560Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0137646Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0137740Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.0137833Z float* __restrict__ out_ptr2, 2023-01-11T21:41:24.0137925Z float* __restrict__ out_ptr3, 2023-01-11T21:41:24.0138013Z float* __restrict__ out_ptr4, 2023-01-11T21:41:24.0138102Z float* __restrict__ out_ptr5, 2023-01-11T21:41:24.0138191Z float* __restrict__ out_ptr6, 2023-01-11T21:41:24.0138269Z float* __restrict__ out_ptr7, 2023-01-11T21:41:24.0138390Z float* __restrict__ out_ptr8, 2023-01-11T21:41:24.0138482Z float* __restrict__ out_ptr9, 2023-01-11T21:41:24.0138578Z float* __restrict__ out_ptr10, 2023-01-11T21:41:24.0138674Z float* __restrict__ out_ptr11, 2023-01-11T21:41:24.0138767Z float* __restrict__ out_ptr12, 2023-01-11T21:41:24.0138860Z float* __restrict__ out_ptr13, 2023-01-11T21:41:24.0138941Z float* __restrict__ out_ptr14, 2023-01-11T21:41:24.0139034Z float* __restrict__ out_ptr15) 2023-01-11T21:41:24.0139094Z { 2023-01-11T21:41:24.0139191Z #pragma omp parallel num_threads(4) 
2023-01-11T21:41:24.0139252Z { 2023-01-11T21:41:24.0139328Z #pragma omp for 2023-01-11T21:41:24.0139409Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:24.0139459Z { 2023-01-11T21:41:24.0139590Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0139688Z auto tmp1 = tmp0.reciprocal(); 2023-01-11T21:41:24.0139771Z auto tmp2 = tmp1 * tmp1; 2023-01-11T21:41:24.0139849Z auto tmp3 = tmp2 * tmp2; 2023-01-11T21:41:24.0139928Z auto tmp4 = tmp3 * tmp3; 2023-01-11T21:41:24.0140006Z auto tmp5 = tmp2 * tmp1; 2023-01-11T21:41:24.0140073Z auto tmp6 = tmp5 * tmp5; 2023-01-11T21:41:24.0140150Z auto tmp7 = tmp6 * tmp1; 2023-01-11T21:41:24.0140227Z auto tmp8 = tmp3 * tmp1; 2023-01-11T21:41:24.0140356Z auto tmp9 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0140436Z auto tmp10 = tmp0 * tmp0; 2023-01-11T21:41:24.0140519Z auto tmp11 = tmp10 * tmp0; 2023-01-11T21:41:24.0140602Z auto tmp12 = tmp10 * tmp10; 2023-01-11T21:41:24.0140674Z auto tmp13 = tmp12 * tmp0; 2023-01-11T21:41:24.0140755Z auto tmp14 = tmp11 * tmp11; 2023-01-11T21:41:24.0140840Z auto tmp15 = tmp14 * tmp0; 2023-01-11T21:41:24.0140923Z auto tmp16 = tmp12 * tmp12; 2023-01-11T21:41:24.0141011Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0141101Z tmp7.store(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0141187Z tmp6.store(out_ptr2 + 8*i0); 2023-01-11T21:41:24.0141262Z tmp8.store(out_ptr3 + 8*i0); 2023-01-11T21:41:24.0141347Z tmp3.store(out_ptr4 + 8*i0); 2023-01-11T21:41:24.0141430Z tmp5.store(out_ptr5 + 8*i0); 2023-01-11T21:41:24.0141515Z tmp2.store(out_ptr6 + 8*i0); 2023-01-11T21:41:24.0141597Z tmp1.store(out_ptr7 + 8*i0); 2023-01-11T21:41:24.0141680Z tmp9.store(out_ptr8 + 8*i0); 2023-01-11T21:41:24.0141769Z tmp10.store(out_ptr9 + 8*i0); 2023-01-11T21:41:24.0141847Z tmp11.store(out_ptr10 + 8*i0); 2023-01-11T21:41:24.0141936Z tmp12.store(out_ptr11 + 8*i0); 2023-01-11T21:41:24.0142022Z tmp13.store(out_ptr12 + 8*i0); 2023-01-11T21:41:24.0142156Z tmp14.store(out_ptr13 + 8*i0); 2023-01-11T21:41:24.0142243Z tmp15.store(out_ptr14 + 8*i0); 2023-01-11T21:41:24.0142455Z tmp16.store(out_ptr15 + 8*i0); 2023-01-11T21:41:24.0142547Z } 2023-01-11T21:41:24.0142628Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0142712Z for(long i0=256; i0<256; i0+=1) 2023-01-11T21:41:24.0142772Z { 2023-01-11T21:41:24.0142856Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0142935Z auto tmp1 = 1 / tmp0; 2023-01-11T21:41:24.0143018Z auto tmp2 = tmp1 * tmp1; 2023-01-11T21:41:24.0143099Z auto tmp3 = tmp2 * tmp2; 2023-01-11T21:41:24.0143167Z auto tmp4 = tmp3 * tmp3; 2023-01-11T21:41:24.0143246Z auto tmp5 = tmp2 * tmp1; 2023-01-11T21:41:24.0143327Z auto tmp6 = tmp5 * tmp5; 2023-01-11T21:41:24.0143405Z auto tmp7 = tmp6 * tmp1; 2023-01-11T21:41:24.0143484Z auto tmp8 = tmp3 * tmp1; 2023-01-11T21:41:24.0143641Z auto tmp9 = static_cast(1); 2023-01-11T21:41:24.0143725Z auto tmp10 = tmp0 * tmp0; 2023-01-11T21:41:24.0143797Z auto tmp11 = tmp10 * tmp0; 2023-01-11T21:41:24.0143880Z auto tmp12 = tmp10 * tmp10; 2023-01-11T21:41:24.0143963Z auto tmp13 = tmp12 * tmp0; 2023-01-11T21:41:24.0144044Z auto tmp14 = tmp11 * tmp11; 2023-01-11T21:41:24.0144124Z auto tmp15 = tmp14 * tmp0; 2023-01-11T21:41:24.0144205Z auto tmp16 = tmp12 * tmp12; 2023-01-11T21:41:24.0144285Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.0144349Z out_ptr1[i0] = tmp7; 2023-01-11T21:41:24.0144426Z out_ptr2[i0] = tmp6; 2023-01-11T21:41:24.0144504Z out_ptr3[i0] = tmp8; 2023-01-11T21:41:24.0144579Z out_ptr4[i0] = tmp3; 2023-01-11T21:41:24.0144655Z out_ptr5[i0] = tmp5; 2023-01-11T21:41:24.0144731Z 
out_ptr6[i0] = tmp2; 2023-01-11T21:41:24.0144811Z out_ptr7[i0] = tmp1; 2023-01-11T21:41:24.0144874Z out_ptr8[i0] = tmp9; 2023-01-11T21:41:24.0144952Z out_ptr9[i0] = tmp10; 2023-01-11T21:41:24.0145033Z out_ptr10[i0] = tmp11; 2023-01-11T21:41:24.0145114Z out_ptr11[i0] = tmp12; 2023-01-11T21:41:24.0145194Z out_ptr12[i0] = tmp13; 2023-01-11T21:41:24.0145273Z out_ptr13[i0] = tmp14; 2023-01-11T21:41:24.0145350Z out_ptr14[i0] = tmp15; 2023-01-11T21:41:24.0145413Z out_ptr15[i0] = tmp16; 2023-01-11T21:41:24.0145478Z } 2023-01-11T21:41:24.0145542Z } 2023-01-11T21:41:24.0145603Z } 2023-01-11T21:41:24.0145698Z ''') 2023-01-11T21:41:24.0145703Z 2023-01-11T21:41:24.0145708Z 2023-01-11T21:41:24.0145801Z async_compile.wait(globals()) 2023-01-11T21:41:24.0145875Z del async_compile 2023-01-11T21:41:24.0145880Z 2023-01-11T21:41:24.0145939Z def call(args): 2023-01-11T21:41:24.0146009Z arg0_1, = args 2023-01-11T21:41:24.0146083Z args.clear() 2023-01-11T21:41:24.0146294Z buf0 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0146493Z buf1 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0146693Z buf2 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0146886Z buf3 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0147066Z buf4 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0147261Z buf5 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0147453Z buf6 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0147647Z buf7 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0147840Z buf8 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0148034Z buf9 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0148273Z buf10 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0148471Z buf11 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0148669Z buf12 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0148853Z buf13 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0149047Z buf14 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0149240Z buf15 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0149771Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(buf5.data_ptr()), c_void_p(buf6.data_ptr()), c_void_p(buf7.data_ptr()), c_void_p(buf8.data_ptr()), c_void_p(buf9.data_ptr()), c_void_p(buf10.data_ptr()), c_void_p(buf11.data_ptr()), c_void_p(buf12.data_ptr()), c_void_p(buf13.data_ptr()), c_void_p(buf14.data_ptr()), c_void_p(buf15.data_ptr())) 2023-01-11T21:41:24.0149941Z return (buf0, buf1, buf2, buf3, buf4, buf5, buf6, buf7, buf8, arg0_1, buf9, buf10, buf11, buf12, buf13, buf14, buf15, ) 2023-01-11T21:41:24.0149946Z 2023-01-11T21:41:24.0149950Z 2023-01-11T21:41:24.0150027Z if __name__ == "__main__": 2023-01-11T21:41:24.0150141Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0150265Z from torch._inductor.utils import print_performance 
2023-01-11T21:41:24.0150463Z arg0_1 = rand_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0150571Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0150576Z 2023-01-11T21:41:24.0150629Z ok (2.325s) 2023-01-11T21:41:24.0151096Z test_pow2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0151225Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0151490Z [2023-01-11 21:37:03,532] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 390 2023-01-11T21:41:24.0151754Z [2023-01-11 21:37:06,418] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 390 2023-01-11T21:41:24.0151760Z 2023-01-11T21:41:24.0151854Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0151928Z import torch 2023-01-11T21:41:24.0151998Z import random 2023-01-11T21:41:24.0152114Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0152222Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0152231Z 2023-01-11T21:41:24.0152310Z aten = torch.ops.aten 2023-01-11T21:41:24.0152443Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0152535Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0152540Z 2023-01-11T21:41:24.0152545Z 2023-01-11T21:41:24.0152678Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0152882Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0153000Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0153103Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0153188Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0153250Z { 2023-01-11T21:41:24.0153347Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0153409Z { 2023-01-11T21:41:24.0153487Z #pragma omp for 2023-01-11T21:41:24.0153566Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:24.0153627Z { 2023-01-11T21:41:24.0153839Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0153977Z auto tmp0 = at::vec::Vectorized(static_cast(1000)); 2023-01-11T21:41:24.0154065Z auto tmp2 = tmp0.pow(tmp1); 2023-01-11T21:41:24.0154150Z auto tmp3 = tmp1.pow(tmp0); 2023-01-11T21:41:24.0154241Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0154328Z tmp3.store(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0154390Z } 2023-01-11T21:41:24.0154471Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0154551Z for(long i0=256; i0<256; i0+=1) 2023-01-11T21:41:24.0154612Z { 2023-01-11T21:41:24.0154694Z auto tmp1 = in_ptr0[i0]; 2023-01-11T21:41:24.0154793Z auto tmp0 = static_cast(1000); 2023-01-11T21:41:24.0154893Z auto tmp2 = std::pow(tmp0, tmp1); 2023-01-11T21:41:24.0154992Z auto tmp3 = std::pow(tmp1, tmp0); 2023-01-11T21:41:24.0155095Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.0155175Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:24.0155236Z } 2023-01-11T21:41:24.0155296Z } 2023-01-11T21:41:24.0155354Z } 2023-01-11T21:41:24.0155434Z ''') 2023-01-11T21:41:24.0155440Z 2023-01-11T21:41:24.0155444Z 2023-01-11T21:41:24.0155531Z async_compile.wait(globals()) 2023-01-11T21:41:24.0155590Z del async_compile 
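The pow2 kernel above evaluates pow(1000, x) and pow(x, 1000) elementwise over a (16, 16) float tensor; the vectorized path uses Vectorized pow and the (empty) scalar tail would fall back to std::pow. A minimal eager-mode sketch of the same computation, assuming torch.compile is available (the function name is illustrative):

    import torch

    def pow2_sketch(x):
        # matches tmp2 = pow(1000, x) and tmp3 = pow(x, 1000) in the kernel above
        return torch.pow(1000, x), torch.pow(x, 1000)

    compiled = torch.compile(pow2_sketch)
    a, b = compiled(torch.rand(16, 16))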
2023-01-11T21:41:24.0155595Z 2023-01-11T21:41:24.0155666Z def call(args): 2023-01-11T21:41:24.0155732Z arg0_1, = args 2023-01-11T21:41:24.0155801Z args.clear() 2023-01-11T21:41:24.0156001Z buf0 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0156199Z buf1 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0156362Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0156426Z del arg0_1 2023-01-11T21:41:24.0156490Z return (buf0, buf1, ) 2023-01-11T21:41:24.0156499Z 2023-01-11T21:41:24.0156504Z 2023-01-11T21:41:24.0156577Z if __name__ == "__main__": 2023-01-11T21:41:24.0156688Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0156809Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0157009Z arg0_1 = rand_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0157113Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0157117Z 2023-01-11T21:41:24.0157183Z ok (2.942s) 2023-01-11T21:41:24.0157501Z test_pow3_cpu (__main__.CpuTests) ... [2023-01-11 21:37:06,456] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 391 2023-01-11T21:41:24.0157757Z [2023-01-11 21:37:09,163] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 391 2023-01-11T21:41:24.0157774Z 2023-01-11T21:41:24.0157855Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0157923Z import torch 2023-01-11T21:41:24.0157995Z import random 2023-01-11T21:41:24.0158109Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0158228Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0158233Z 2023-01-11T21:41:24.0158308Z aten = torch.ops.aten 2023-01-11T21:41:24.0158437Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0158516Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0158520Z 2023-01-11T21:41:24.0158524Z 2023-01-11T21:41:24.0158655Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0158861Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0158978Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0159079Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0159138Z { 2023-01-11T21:41:24.0159199Z { 2023-01-11T21:41:24.0159248Z { 2023-01-11T21:41:24.0159328Z auto tmp1 = in_ptr0[0]; 2023-01-11T21:41:24.0159475Z auto tmp0 = static_cast(0.12300000339746475); 2023-01-11T21:41:24.0159557Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0159648Z auto tmp3 = std::sqrt(tmp2); 2023-01-11T21:41:24.0159725Z out_ptr0[0] = tmp3; 2023-01-11T21:41:24.0159786Z } 2023-01-11T21:41:24.0159834Z } 2023-01-11T21:41:24.0159891Z } 2023-01-11T21:41:24.0159968Z ''') 2023-01-11T21:41:24.0159973Z 2023-01-11T21:41:24.0159977Z 2023-01-11T21:41:24.0160063Z async_compile.wait(globals()) 2023-01-11T21:41:24.0160135Z del async_compile 2023-01-11T21:41:24.0160140Z 2023-01-11T21:41:24.0160208Z def call(args): 2023-01-11T21:41:24.0160274Z arg0_1, = args 2023-01-11T21:41:24.0160333Z args.clear() 2023-01-11T21:41:24.0160517Z buf0 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0160647Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0160714Z del arg0_1 2023-01-11T21:41:24.0160783Z return (buf0, ) 2023-01-11T21:41:24.0160819Z 
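The pow3 kernel above is fully scalar because the input is a 0-d tensor: it computes sqrt(x + 0.123), a power of 0.5 lowered to std::sqrt, with 0.12300000339746475 being 0.123 rounded to float32. A minimal eager-mode sketch, assuming torch.compile (names are illustrative):

    import torch

    def pow3_sketch(x):
        # (x + 0.123) ** 0.5 corresponds to std::sqrt(tmp2) in the kernel above
        return (x + 0.123) ** 0.5

    compiled = torch.compile(pow3_sketch)
    y = compiled(torch.rand(()))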
2023-01-11T21:41:24.0160824Z 2023-01-11T21:41:24.0160898Z if __name__ == "__main__": 2023-01-11T21:41:24.0161010Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0161132Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0161307Z arg0_1 = rand_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0161413Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0161418Z 2023-01-11T21:41:24.0161486Z ok (2.738s) 2023-01-11T21:41:24.0161827Z test_profiler_mark_wrapper_call_cpu (__main__.CpuTests) ... STAGE:2023-01-11 21:37:09 1454:1454 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:41:24.0162084Z [2023-01-11 21:37:09,177] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 392 2023-01-11T21:41:24.0162348Z [2023-01-11 21:37:09,184] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 392 2023-01-11T21:41:24.0162602Z STAGE:2023-01-11 21:37:09 1454:1454 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:41:24.0162857Z STAGE:2023-01-11 21:37:09 1454:1454 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:41:24.0162863Z 2023-01-11T21:41:24.0162956Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0163012Z import torch 2023-01-11T21:41:24.0163080Z import random 2023-01-11T21:41:24.0163193Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0163311Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0163316Z 2023-01-11T21:41:24.0163393Z aten = torch.ops.aten 2023-01-11T21:41:24.0163522Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0163613Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0163617Z 2023-01-11T21:41:24.0163622Z 2023-01-11T21:41:24.0163753Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0163945Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0164065Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0164166Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0164263Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0164322Z { 2023-01-11T21:41:24.0164418Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0164479Z { 2023-01-11T21:41:24.0164542Z #pragma omp for 2023-01-11T21:41:24.0164623Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:24.0164683Z { 2023-01-11T21:41:24.0164814Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0164942Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.0165025Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0165116Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0165164Z } 2023-01-11T21:41:24.0165258Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0165370Z for(long i0=96; i0<100; i0+=1) 2023-01-11T21:41:24.0165429Z { 2023-01-11T21:41:24.0165509Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0165588Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.0165670Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0165737Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.0165796Z } 2023-01-11T21:41:24.0165855Z } 2023-01-11T21:41:24.0165912Z } 2023-01-11T21:41:24.0165990Z ''') 2023-01-11T21:41:24.0165995Z 2023-01-11T21:41:24.0166000Z 2023-01-11T21:41:24.0166085Z async_compile.wait(globals()) 2023-01-11T21:41:24.0166157Z del async_compile 2023-01-11T21:41:24.0166161Z 2023-01-11T21:41:24.0166218Z def call(args): 
2023-01-11T21:41:24.0166289Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0166355Z args.clear() 2023-01-11T21:41:24.0166468Z from torch.profiler import record_function 2023-01-11T21:41:24.0166629Z with record_function('inductor_wrapper_call'): 2023-01-11T21:41:24.0166862Z buf0 = empty_strided((100, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0167025Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0167081Z del arg0_1 2023-01-11T21:41:24.0167145Z del arg1_1 2023-01-11T21:41:24.0167214Z return (buf0, ) 2023-01-11T21:41:24.0167220Z 2023-01-11T21:41:24.0167224Z 2023-01-11T21:41:24.0167298Z if __name__ == "__main__": 2023-01-11T21:41:24.0167410Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0167531Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0167728Z arg0_1 = rand_strided((100, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0167920Z arg1_1 = rand_strided((100, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0168022Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0168038Z 2023-01-11T21:41:24.0168095Z ok (0.022s) 2023-01-11T21:41:24.0168446Z test_rand_like_deterministic_cpu (__main__.CpuTests) ... [2023-01-11 21:37:09,265] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 393 2023-01-11T21:41:24.0168704Z [2023-01-11 21:37:09,265] torch._inductor.lowering: [WARNING] using triton random, expect difference from eager 2023-01-11T21:41:24.0168969Z [2023-01-11 21:37:10,851] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 393 2023-01-11T21:41:24.0168974Z 2023-01-11T21:41:24.0169067Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0169135Z import torch 2023-01-11T21:41:24.0169205Z import random 2023-01-11T21:41:24.0169316Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0169422Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0169427Z 2023-01-11T21:41:24.0169504Z aten = torch.ops.aten 2023-01-11T21:41:24.0169635Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0169731Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0169889Z seed_cpu_None = None # 9130db9322feaa41c28986790b86d7dd047e77339ff46fce775dbaa5929b26ce 2023-01-11T21:41:24.0169894Z 2023-01-11T21:41:24.0169899Z 2023-01-11T21:41:24.0170031Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0170236Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0170348Z extern "C" void kernel(const long* __restrict__ seed0, 2023-01-11T21:41:24.0170436Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0170530Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0170586Z { 2023-01-11T21:41:24.0170683Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0170743Z { 2023-01-11T21:41:24.0170817Z #pragma omp for 2023-01-11T21:41:24.0170899Z for(long i0=0; i0<1024; i0+=1) 2023-01-11T21:41:24.0170949Z { 2023-01-11T21:41:24.0171011Z { 2023-01-11T21:41:24.0171107Z { 2023-01-11T21:41:24.0171192Z auto tmp0 = seed0[0]; 2023-01-11T21:41:24.0171292Z auto tmp1 = static_cast(i0); 2023-01-11T21:41:24.0171427Z auto tmp2 = static_cast(normalized_rand_cpu(tmp0, tmp1));; 2023-01-11T21:41:24.0171532Z auto tmp3 = static_cast(1024 + i0); 2023-01-11T21:41:24.0171654Z auto tmp4 = static_cast(normalized_rand_cpu(tmp0, tmp3));; 2023-01-11T21:41:24.0171740Z 
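// The deterministic rand_like lowering draws both outputs from a single CPU seed:
// element i0 of out_ptr0 uses offset i0 and element i0 of out_ptr1 uses offset
// 1024 + i0 into normalized_rand_cpu, so results are reproducible for a fixed seed.
// This is inductor's own RNG rather than eager's, matching the
// "expect difference from eager" warning logged above.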
out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.0171820Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:24.0171884Z } 2023-01-11T21:41:24.0171945Z } 2023-01-11T21:41:24.0172003Z } 2023-01-11T21:41:24.0172060Z } 2023-01-11T21:41:24.0172106Z } 2023-01-11T21:41:24.0172183Z ''') 2023-01-11T21:41:24.0172188Z 2023-01-11T21:41:24.0172192Z 2023-01-11T21:41:24.0172281Z async_compile.wait(globals()) 2023-01-11T21:41:24.0172384Z del async_compile 2023-01-11T21:41:24.0172389Z 2023-01-11T21:41:24.0172459Z def call(args): 2023-01-11T21:41:24.0172525Z arg0_1, = args 2023-01-11T21:41:24.0172593Z args.clear() 2023-01-11T21:41:24.0172712Z torch.randint(2**31, size=(), dtype=torch.int64, out=seed_cpu_None) 2023-01-11T21:41:24.0172906Z buf0 = empty_strided((1024, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0173098Z buf1 = empty_strided((1024, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0173270Z kernel_cpp_0(c_void_p(seed_cpu_None.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0173346Z return (buf0, buf1, ) 2023-01-11T21:41:24.0173352Z 2023-01-11T21:41:24.0173357Z 2023-01-11T21:41:24.0173432Z if __name__ == "__main__": 2023-01-11T21:41:24.0173546Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0173666Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0173850Z seed_cpu_None = rand_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0174047Z arg0_1 = rand_strided((1024, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0174151Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0174156Z 2023-01-11T21:41:24.0174219Z ok (1.666s) 2023-01-11T21:41:24.0174685Z test_reduction1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0174810Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0175070Z [2023-01-11 21:37:10,874] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 394 2023-01-11T21:41:24.0175339Z [2023-01-11 21:37:12,403] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 394 2023-01-11T21:41:24.0175346Z 2023-01-11T21:41:24.0175436Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0175505Z import torch 2023-01-11T21:41:24.0175561Z import random 2023-01-11T21:41:24.0175674Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0175795Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0175799Z 2023-01-11T21:41:24.0175876Z aten = torch.ops.aten 2023-01-11T21:41:24.0176007Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0176097Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0176102Z 2023-01-11T21:41:24.0176106Z 2023-01-11T21:41:24.0176237Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0176429Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0176547Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0176676Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0176772Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.0176864Z float* __restrict__ out_ptr2, 2023-01-11T21:41:24.0176957Z long* __restrict__ out_ptr3, 2023-01-11T21:41:24.0177048Z long* __restrict__ out_ptr4) 2023-01-11T21:41:24.0177107Z { 2023-01-11T21:41:24.0177155Z { 2023-01-11T21:41:24.0177215Z { 2023-01-11T21:41:24.0177290Z float tmp1 = 0; 2023-01-11T21:41:24.0177508Z float tmp2 = -std::numeric_limits::infinity(); 2023-01-11T21:41:24.0177631Z float tmp3 = std::numeric_limits::infinity(); 2023-01-11T21:41:24.0177747Z struct IndexValue_9 {size_t index; float value;}; 2023-01-11T21:41:24.0177966Z IndexValue_9 tmp4{0, -std::numeric_limits::infinity()}; 2023-01-11T21:41:24.0178118Z #pragma omp declare reduction(argmax : struct IndexValue_9 :\ 2023-01-11T21:41:24.0178268Z omp_out.value = omp_in.value < omp_out.value ? omp_out.value : omp_in.value,\ 2023-01-11T21:41:24.0178411Z omp_out.index = omp_in.value < omp_out.value ? omp_out.index : omp_in.index)\ 2023-01-11T21:41:24.0178643Z initializer(omp_priv = {0, -std::numeric_limits::infinity()}) 2023-01-11T21:41:24.0178760Z struct IndexValue_10 {size_t index; float value;}; 2023-01-11T21:41:24.0178891Z IndexValue_10 tmp5{0, std::numeric_limits::infinity()}; 2023-01-11T21:41:24.0179028Z #pragma omp declare reduction(argmin : struct IndexValue_10 :\ 2023-01-11T21:41:24.0179173Z omp_out.value = omp_in.value > omp_out.value ? omp_out.value : omp_in.value,\ 2023-01-11T21:41:24.0179317Z omp_out.index = omp_in.value > omp_out.value ? 
omp_out.index : omp_in.index)\ 2023-01-11T21:41:24.0179443Z initializer(omp_priv = {0, std::numeric_limits::infinity()}) 2023-01-11T21:41:24.0179530Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:24.0179592Z { 2023-01-11T21:41:24.0179656Z { 2023-01-11T21:41:24.0179746Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0179825Z tmp1 += tmp0; 2023-01-11T21:41:24.0179926Z tmp2 = std::max(tmp2, tmp0); 2023-01-11T21:41:24.0180009Z tmp3 = std::min(tmp3, tmp0); 2023-01-11T21:41:24.0180102Z if (tmp4.value < tmp0) { 2023-01-11T21:41:24.0180211Z tmp4.index = i0; tmp4.value = tmp0; 2023-01-11T21:41:24.0180280Z } 2023-01-11T21:41:24.0180370Z if (tmp5.value > tmp0) { 2023-01-11T21:41:24.0180475Z tmp5.index = i0; tmp5.value = tmp0; 2023-01-11T21:41:24.0180542Z } 2023-01-11T21:41:24.0180593Z } 2023-01-11T21:41:24.0180656Z } 2023-01-11T21:41:24.0180735Z out_ptr0[0] = tmp1; 2023-01-11T21:41:24.0180815Z out_ptr1[0] = tmp2; 2023-01-11T21:41:24.0180892Z out_ptr2[0] = tmp3; 2023-01-11T21:41:24.0180976Z out_ptr3[0] = tmp4.index; 2023-01-11T21:41:24.0181058Z out_ptr4[0] = tmp5.index; 2023-01-11T21:41:24.0181107Z } 2023-01-11T21:41:24.0181168Z } 2023-01-11T21:41:24.0181230Z } 2023-01-11T21:41:24.0181310Z ''') 2023-01-11T21:41:24.0181315Z 2023-01-11T21:41:24.0181320Z 2023-01-11T21:41:24.0181409Z async_compile.wait(globals()) 2023-01-11T21:41:24.0181480Z del async_compile 2023-01-11T21:41:24.0181485Z 2023-01-11T21:41:24.0181555Z def call(args): 2023-01-11T21:41:24.0181609Z arg0_1, = args 2023-01-11T21:41:24.0181679Z args.clear() 2023-01-11T21:41:24.0181866Z buf0 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0182046Z buf1 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0182223Z buf2 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0182614Z buf3 = empty_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0182796Z buf4 = empty_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0183035Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:24.0183090Z del arg0_1 2023-01-11T21:41:24.0183188Z return (buf0, buf1, buf2, buf3, buf4, ) 2023-01-11T21:41:24.0183193Z 2023-01-11T21:41:24.0183197Z 2023-01-11T21:41:24.0183274Z if __name__ == "__main__": 2023-01-11T21:41:24.0183391Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0183513Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0183707Z arg0_1 = rand_strided((3, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0183815Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0183822Z 2023-01-11T21:41:24.0183947Z ok (1.552s) 2023-01-11T21:41:24.0184418Z test_reduction2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0184530Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0184797Z [2023-01-11 21:37:12,432] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 395 2023-01-11T21:41:24.0185066Z [2023-01-11 21:37:14,022] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 395 2023-01-11T21:41:24.0185072Z 2023-01-11T21:41:24.0185165Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0185234Z import torch 2023-01-11T21:41:24.0185306Z import random 2023-01-11T21:41:24.0185424Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0185545Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0185551Z 2023-01-11T21:41:24.0185616Z aten = torch.ops.aten 2023-01-11T21:41:24.0185747Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0185838Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0185843Z 2023-01-11T21:41:24.0185847Z 2023-01-11T21:41:24.0185980Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0186189Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0186305Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0186404Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0186501Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.0186582Z float* __restrict__ out_ptr2, 2023-01-11T21:41:24.0186682Z long* __restrict__ out_ptr3) 2023-01-11T21:41:24.0186744Z { 2023-01-11T21:41:24.0186801Z { 2023-01-11T21:41:24.0186860Z { 2023-01-11T21:41:24.0186932Z float tmp1 = 0; 2023-01-11T21:41:24.0187135Z float tmp2 = -std::numeric_limits::infinity(); 2023-01-11T21:41:24.0187265Z float tmp3 = std::numeric_limits::infinity(); 2023-01-11T21:41:24.0187409Z struct IndexValue_11 {size_t index; float value;}; 2023-01-11T21:41:24.0187580Z IndexValue_11 tmp4{0, std::numeric_limits::infinity()}; 2023-01-11T21:41:24.0187763Z #pragma omp declare reduction(argmin : struct IndexValue_11 :\ 2023-01-11T21:41:24.0187960Z omp_out.value = omp_in.value > omp_out.value ? omp_out.value : omp_in.value,\ 2023-01-11T21:41:24.0188158Z omp_out.index = omp_in.value > omp_out.value ? 
omp_out.index : omp_in.index)\ 2023-01-11T21:41:24.0188352Z initializer(omp_priv = {0, std::numeric_limits::infinity()}) 2023-01-11T21:41:24.0188562Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0188636Z { 2023-01-11T21:41:24.0188886Z #pragma omp for reduction(+:tmp1) reduction(max:tmp2) reduction(min:tmp3) reduction(argmin:tmp4) 2023-01-11T21:41:24.0188998Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0189079Z { 2023-01-11T21:41:24.0189169Z { 2023-01-11T21:41:24.0189300Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0189410Z tmp1 += tmp0; 2023-01-11T21:41:24.0189549Z tmp2 = std::max(tmp2, tmp0); 2023-01-11T21:41:24.0189671Z tmp3 = std::min(tmp3, tmp0); 2023-01-11T21:41:24.0189808Z if (tmp4.value > tmp0) { 2023-01-11T21:41:24.0189954Z tmp4.index = i0; tmp4.value = tmp0; 2023-01-11T21:41:24.0190048Z } 2023-01-11T21:41:24.0190191Z } 2023-01-11T21:41:24.0190280Z } 2023-01-11T21:41:24.0190368Z } 2023-01-11T21:41:24.0190463Z out_ptr0[0] = tmp1; 2023-01-11T21:41:24.0190562Z out_ptr1[0] = tmp2; 2023-01-11T21:41:24.0190668Z out_ptr2[0] = tmp3; 2023-01-11T21:41:24.0190779Z out_ptr3[0] = tmp4.index; 2023-01-11T21:41:24.0190867Z } 2023-01-11T21:41:24.0190948Z } 2023-01-11T21:41:24.0191019Z } 2023-01-11T21:41:24.0191148Z ''') 2023-01-11T21:41:24.0191156Z 2023-01-11T21:41:24.0191161Z 2023-01-11T21:41:24.0191293Z async_compile.wait(globals()) 2023-01-11T21:41:24.0191393Z del async_compile 2023-01-11T21:41:24.0191400Z 2023-01-11T21:41:24.0191504Z def call(args): 2023-01-11T21:41:24.0191612Z arg0_1, = args 2023-01-11T21:41:24.0191720Z args.clear() 2023-01-11T21:41:24.0192007Z buf0 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0192285Z buf1 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0192579Z buf2 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0192853Z buf3 = empty_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0193159Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr())) 2023-01-11T21:41:24.0193274Z del arg0_1 2023-01-11T21:41:24.0193392Z return (buf0, buf1, buf2, buf3, ) 2023-01-11T21:41:24.0193400Z 2023-01-11T21:41:24.0193405Z 2023-01-11T21:41:24.0193516Z if __name__ == "__main__": 2023-01-11T21:41:24.0193637Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0193815Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0194017Z arg0_1 = rand_strided((4, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0194124Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0194129Z 2023-01-11T21:41:24.0194195Z ok (1.619s) 2023-01-11T21:41:24.0194670Z test_reduction3_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0194797Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0195063Z [2023-01-11 21:37:14,042] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 396 2023-01-11T21:41:24.0195329Z [2023-01-11 21:37:15,583] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 396 2023-01-11T21:41:24.0195335Z 2023-01-11T21:41:24.0195430Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0195515Z import torch 2023-01-11T21:41:24.0195610Z import random 2023-01-11T21:41:24.0195860Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0196035Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0196043Z 2023-01-11T21:41:24.0196160Z aten = torch.ops.aten 2023-01-11T21:41:24.0196344Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0196470Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0196479Z 2023-01-11T21:41:24.0196486Z 2023-01-11T21:41:24.0196700Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0196898Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0197019Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0197118Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0197213Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.0197305Z float* __restrict__ out_ptr2, 2023-01-11T21:41:24.0197398Z long* __restrict__ out_ptr3) 2023-01-11T21:41:24.0197514Z { 2023-01-11T21:41:24.0197564Z { 2023-01-11T21:41:24.0197624Z { 2023-01-11T21:41:24.0197696Z float tmp1 = 0; 2023-01-11T21:41:24.0197918Z float tmp2 = -std::numeric_limits::infinity(); 2023-01-11T21:41:24.0198038Z float tmp3 = std::numeric_limits::infinity(); 2023-01-11T21:41:24.0198156Z struct IndexValue_12 {size_t index; float value;}; 2023-01-11T21:41:24.0198378Z IndexValue_12 tmp4{0, -std::numeric_limits::infinity()}; 2023-01-11T21:41:24.0198519Z #pragma omp declare reduction(argmax : struct IndexValue_12 :\ 2023-01-11T21:41:24.0198655Z omp_out.value = omp_in.value < omp_out.value ? omp_out.value : omp_in.value,\ 2023-01-11T21:41:24.0198795Z omp_out.index = omp_in.value < omp_out.value ? 
omp_out.index : omp_in.index)\ 2023-01-11T21:41:24.0199026Z initializer(omp_priv = {0, -std::numeric_limits::infinity()}) 2023-01-11T21:41:24.0199137Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0199201Z { 2023-01-11T21:41:24.0199375Z #pragma omp for reduction(+:tmp1) reduction(max:tmp2) reduction(min:tmp3) reduction(argmax:tmp4) 2023-01-11T21:41:24.0199463Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0199528Z { 2023-01-11T21:41:24.0199581Z { 2023-01-11T21:41:24.0199675Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0199754Z tmp1 += tmp0; 2023-01-11T21:41:24.0199853Z tmp2 = std::max(tmp2, tmp0); 2023-01-11T21:41:24.0199948Z tmp3 = std::min(tmp3, tmp0); 2023-01-11T21:41:24.0200038Z if (tmp4.value < tmp0) { 2023-01-11T21:41:24.0200144Z tmp4.index = i0; tmp4.value = tmp0; 2023-01-11T21:41:24.0200199Z } 2023-01-11T21:41:24.0200260Z } 2023-01-11T21:41:24.0200328Z } 2023-01-11T21:41:24.0200388Z } 2023-01-11T21:41:24.0200464Z out_ptr0[0] = tmp1; 2023-01-11T21:41:24.0200542Z out_ptr1[0] = tmp2; 2023-01-11T21:41:24.0200614Z out_ptr2[0] = tmp3; 2023-01-11T21:41:24.0200686Z out_ptr3[0] = tmp4.index; 2023-01-11T21:41:24.0200745Z } 2023-01-11T21:41:24.0200803Z } 2023-01-11T21:41:24.0200858Z } 2023-01-11T21:41:24.0200936Z ''') 2023-01-11T21:41:24.0200941Z 2023-01-11T21:41:24.0200945Z 2023-01-11T21:41:24.0201034Z async_compile.wait(globals()) 2023-01-11T21:41:24.0201102Z del async_compile 2023-01-11T21:41:24.0201107Z 2023-01-11T21:41:24.0201163Z def call(args): 2023-01-11T21:41:24.0201229Z arg0_1, = args 2023-01-11T21:41:24.0201298Z args.clear() 2023-01-11T21:41:24.0201484Z buf0 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0201664Z buf1 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0201873Z buf2 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0202048Z buf3 = empty_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0202249Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr())) 2023-01-11T21:41:24.0202315Z del arg0_1 2023-01-11T21:41:24.0202400Z return (buf0, buf1, buf2, buf3, ) 2023-01-11T21:41:24.0202406Z 2023-01-11T21:41:24.0202410Z 2023-01-11T21:41:24.0202483Z if __name__ == "__main__": 2023-01-11T21:41:24.0202595Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0202715Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0202906Z arg0_1 = rand_strided((4, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0203012Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0203017Z 2023-01-11T21:41:24.0203080Z ok (1.561s) 2023-01-11T21:41:24.0203330Z test_reduction4_cpu (__main__.CpuTests) ... skip: Non-deterministic CPU results (0.001s) 2023-01-11T21:41:24.0203818Z test_reflection_pad2d_backward_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0203944Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0204206Z [2023-01-11 21:37:15,606] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 397 2023-01-11T21:41:24.0204472Z [2023-01-11 21:37:17,171] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 397 2023-01-11T21:41:24.0204900Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0205024Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0205283Z [2023-01-11 21:37:17,192] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 398 2023-01-11T21:41:24.0205548Z [2023-01-11 21:37:18,808] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 398 2023-01-11T21:41:24.0205972Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0206099Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0206353Z [2023-01-11 21:37:18,831] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 399 2023-01-11T21:41:24.0206358Z 2023-01-11T21:41:24.0206439Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0206509Z import torch 2023-01-11T21:41:24.0206579Z import random 2023-01-11T21:41:24.0206691Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0206809Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0206814Z 2023-01-11T21:41:24.0206890Z aten = torch.ops.aten 2023-01-11T21:41:24.0207021Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0207099Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0207115Z 2023-01-11T21:41:24.0207119Z 2023-01-11T21:41:24.0207239Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0207442Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0207600Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0207697Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0207756Z { 2023-01-11T21:41:24.0207850Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0207911Z { 2023-01-11T21:41:24.0207975Z #pragma omp for 2023-01-11T21:41:24.0208053Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0208114Z { 2023-01-11T21:41:24.0208187Z #pragma GCC ivdep 2023-01-11T21:41:24.0208270Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:24.0208332Z { 2023-01-11T21:41:24.0208393Z { 2023-01-11T21:41:24.0208446Z { 2023-01-11T21:41:24.0208548Z auto tmp0 = static_cast(i0); 2023-01-11T21:41:24.0208648Z auto tmp1 = static_cast(i1); 2023-01-11T21:41:24.0208778Z auto tmp2 = in_ptr0[tmp1 + (8*tmp0)]; 2023-01-11T21:41:24.0208873Z out_ptr0[i1 + (8*i0)] = tmp2; 
2023-01-11T21:41:24.0208938Z } 2023-01-11T21:41:24.0208998Z } 2023-01-11T21:41:24.0209048Z } 2023-01-11T21:41:24.0209107Z } 2023-01-11T21:41:24.0209164Z } 2023-01-11T21:41:24.0209220Z } 2023-01-11T21:41:24.0209296Z ''') 2023-01-11T21:41:24.0209301Z 2023-01-11T21:41:24.0209305Z 2023-01-11T21:41:24.0209390Z async_compile.wait(globals()) 2023-01-11T21:41:24.0209459Z del async_compile 2023-01-11T21:41:24.0209464Z 2023-01-11T21:41:24.0209519Z def call(args): 2023-01-11T21:41:24.0209591Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0209659Z args.clear() 2023-01-11T21:41:24.0209868Z buf0 = empty_strided((1, 1, 8, 8), (64, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0210001Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0210067Z del arg0_1 2023-01-11T21:41:24.0210139Z return (buf0, ) 2023-01-11T21:41:24.0210144Z 2023-01-11T21:41:24.0210148Z 2023-01-11T21:41:24.0210211Z if __name__ == "__main__": 2023-01-11T21:41:24.0210323Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0210445Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0210658Z arg0_1 = rand_strided((1, 1, 8, 8), (64, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0210865Z arg1_1 = rand_strided((1, 1, 8, 8), (64, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0210976Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0210981Z 2023-01-11T21:41:24.0210985Z 2023-01-11T21:41:24.0211075Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0211147Z import torch 2023-01-11T21:41:24.0211205Z import random 2023-01-11T21:41:24.0211318Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0211435Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0211442Z 2023-01-11T21:41:24.0211521Z aten = torch.ops.aten 2023-01-11T21:41:24.0211652Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0211741Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0211746Z 2023-01-11T21:41:24.0211751Z 2023-01-11T21:41:24.0211883Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0212085Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0212193Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0212291Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0212350Z { 2023-01-11T21:41:24.0212446Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0212503Z { 2023-01-11T21:41:24.0212578Z #pragma omp for 2023-01-11T21:41:24.0212655Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0212708Z { 2023-01-11T21:41:24.0212787Z #pragma GCC ivdep 2023-01-11T21:41:24.0212898Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:24.0212961Z { 2023-01-11T21:41:24.0213025Z { 2023-01-11T21:41:24.0213089Z { 2023-01-11T21:41:24.0213193Z auto tmp0 = static_cast(1 + i0); 2023-01-11T21:41:24.0213288Z auto tmp1 = static_cast(1 + i1); 2023-01-11T21:41:24.0213393Z auto tmp2 = in_ptr0[tmp1 + (10*tmp0)]; 2023-01-11T21:41:24.0213492Z auto tmp3 = static_cast(i1); 2023-01-11T21:41:24.0213581Z auto tmp4 = tmp3 >= 1; 2023-01-11T21:41:24.0213670Z auto tmp5 = tmp3 <= 1; 2023-01-11T21:41:24.0213762Z auto tmp6 = tmp4 & tmp5; 2023-01-11T21:41:24.0213845Z float tmp7 = 0.0; 2023-01-11T21:41:24.0213907Z if(tmp6) 2023-01-11T21:41:24.0213971Z { 2023-01-11T21:41:24.0214110Z auto tmp8 = static_cast(1 + i0); 2023-01-11T21:41:24.0214288Z auto tmp9 = static_cast(1 + ((-1)*i1)); 
2023-01-11T21:41:24.0214395Z auto tmp10 = in_ptr0[tmp9 + (10*tmp8)]; 2023-01-11T21:41:24.0214474Z tmp7 = tmp10; 2023-01-11T21:41:24.0214539Z } 2023-01-11T21:41:24.0214621Z auto tmp11 = tmp2 + tmp7; 2023-01-11T21:41:24.0214710Z auto tmp12 = tmp3 >= 6; 2023-01-11T21:41:24.0214795Z auto tmp13 = tmp3 <= 6; 2023-01-11T21:41:24.0214888Z auto tmp14 = tmp12 & tmp13; 2023-01-11T21:41:24.0214972Z float tmp15 = 0.0; 2023-01-11T21:41:24.0215046Z if(tmp14) 2023-01-11T21:41:24.0215111Z { 2023-01-11T21:41:24.0215215Z auto tmp16 = static_cast(1 + i0); 2023-01-11T21:41:24.0215382Z auto tmp17 = static_cast(15 + ((-1)*i1)); 2023-01-11T21:41:24.0215492Z auto tmp18 = in_ptr0[tmp17 + (10*tmp16)]; 2023-01-11T21:41:24.0215576Z tmp15 = tmp18; 2023-01-11T21:41:24.0215640Z } 2023-01-11T21:41:24.0215735Z auto tmp19 = tmp11 + tmp15; 2023-01-11T21:41:24.0215837Z auto tmp20 = static_cast(i0); 2023-01-11T21:41:24.0215929Z auto tmp21 = tmp20 >= 1; 2023-01-11T21:41:24.0216008Z auto tmp22 = tmp20 <= 1; 2023-01-11T21:41:24.0216100Z auto tmp23 = tmp21 & tmp22; 2023-01-11T21:41:24.0216183Z float tmp24 = 0.0; 2023-01-11T21:41:24.0216259Z if(tmp23) 2023-01-11T21:41:24.0216328Z { 2023-01-11T21:41:24.0216503Z auto tmp25 = static_cast(1 + ((-1)*i0)); 2023-01-11T21:41:24.0216612Z auto tmp26 = static_cast(1 + i1); 2023-01-11T21:41:24.0216710Z auto tmp27 = in_ptr0[tmp26 + (10*tmp25)]; 2023-01-11T21:41:24.0216792Z tmp24 = tmp27; 2023-01-11T21:41:24.0216859Z } 2023-01-11T21:41:24.0216951Z auto tmp28 = tmp19 + tmp24; 2023-01-11T21:41:24.0217042Z auto tmp29 = tmp20 >= 6; 2023-01-11T21:41:24.0217134Z auto tmp30 = tmp20 <= 6; 2023-01-11T21:41:24.0217231Z auto tmp31 = tmp29 & tmp30; 2023-01-11T21:41:24.0217301Z float tmp32 = 0.0; 2023-01-11T21:41:24.0217377Z if(tmp31) 2023-01-11T21:41:24.0217444Z { 2023-01-11T21:41:24.0217623Z auto tmp33 = static_cast(15 + ((-1)*i0)); 2023-01-11T21:41:24.0217731Z auto tmp34 = static_cast(1 + i1); 2023-01-11T21:41:24.0217841Z auto tmp35 = in_ptr0[tmp34 + (10*tmp33)]; 2023-01-11T21:41:24.0217950Z tmp32 = tmp35; 2023-01-11T21:41:24.0218016Z } 2023-01-11T21:41:24.0218097Z auto tmp36 = tmp28 + tmp32; 2023-01-11T21:41:24.0218190Z auto tmp37 = tmp23 & tmp6; 2023-01-11T21:41:24.0218273Z float tmp38 = 0.0; 2023-01-11T21:41:24.0218348Z if(tmp37) 2023-01-11T21:41:24.0218415Z { 2023-01-11T21:41:24.0218590Z auto tmp39 = static_cast(1 + ((-1)*i0)); 2023-01-11T21:41:24.0218765Z auto tmp40 = static_cast(1 + ((-1)*i1)); 2023-01-11T21:41:24.0218862Z auto tmp41 = in_ptr0[tmp40 + (10*tmp39)]; 2023-01-11T21:41:24.0218943Z tmp38 = tmp41; 2023-01-11T21:41:24.0219009Z } 2023-01-11T21:41:24.0219103Z auto tmp42 = tmp36 + tmp38; 2023-01-11T21:41:24.0219223Z auto tmp43 = tmp23 & tmp14; 2023-01-11T21:41:24.0219309Z float tmp44 = 0.0; 2023-01-11T21:41:24.0219384Z if(tmp43) 2023-01-11T21:41:24.0219439Z { 2023-01-11T21:41:24.0219614Z auto tmp45 = static_cast(1 + ((-1)*i0)); 2023-01-11T21:41:24.0219793Z auto tmp46 = static_cast(15 + ((-1)*i1)); 2023-01-11T21:41:24.0219898Z auto tmp47 = in_ptr0[tmp46 + (10*tmp45)]; 2023-01-11T21:41:24.0219979Z tmp44 = tmp47; 2023-01-11T21:41:24.0220046Z } 2023-01-11T21:41:24.0220137Z auto tmp48 = tmp42 + tmp44; 2023-01-11T21:41:24.0220217Z auto tmp49 = tmp31 & tmp6; 2023-01-11T21:41:24.0220295Z float tmp50 = 0.0; 2023-01-11T21:41:24.0220367Z if(tmp49) 2023-01-11T21:41:24.0220439Z { 2023-01-11T21:41:24.0220616Z auto tmp51 = static_cast(15 + ((-1)*i0)); 2023-01-11T21:41:24.0220788Z auto tmp52 = static_cast(1 + ((-1)*i1)); 2023-01-11T21:41:24.0220892Z auto tmp53 = in_ptr0[tmp52 + (10*tmp51)]; 
2023-01-11T21:41:24.0220973Z tmp50 = tmp53; 2023-01-11T21:41:24.0221027Z } 2023-01-11T21:41:24.0221118Z auto tmp54 = tmp48 + tmp50; 2023-01-11T21:41:24.0221209Z auto tmp55 = tmp31 & tmp14; 2023-01-11T21:41:24.0221292Z float tmp56 = 0.0; 2023-01-11T21:41:24.0221363Z if(tmp55) 2023-01-11T21:41:24.0221427Z { 2023-01-11T21:41:24.0221599Z auto tmp57 = static_cast(15 + ((-1)*i0)); 2023-01-11T21:41:24.0221761Z auto tmp58 = static_cast(15 + ((-1)*i1)); 2023-01-11T21:41:24.0221872Z auto tmp59 = in_ptr0[tmp58 + (10*tmp57)]; 2023-01-11T21:41:24.0221952Z tmp56 = tmp59; 2023-01-11T21:41:24.0222015Z } 2023-01-11T21:41:24.0222105Z auto tmp60 = tmp54 + tmp56; 2023-01-11T21:41:24.0222199Z out_ptr0[i1 + (8*i0)] = tmp60; 2023-01-11T21:41:24.0222265Z } 2023-01-11T21:41:24.0222316Z } 2023-01-11T21:41:24.0222542Z } 2023-01-11T21:41:24.0222624Z } 2023-01-11T21:41:24.0222686Z } 2023-01-11T21:41:24.0222746Z } 2023-01-11T21:41:24.0222827Z ''') 2023-01-11T21:41:24.0222833Z 2023-01-11T21:41:24.0222837Z 2023-01-11T21:41:24.0222930Z async_compile.wait(globals()) 2023-01-11T21:41:24.0222988Z del async_compile 2023-01-11T21:41:24.0222993Z 2023-01-11T21:41:24.0223062Z def call(args): 2023-01-11T21:41:24.0223135Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0223205Z args.clear() 2023-01-11T21:41:24.0223502Z buf0 = empty_strided((1, 1, 8, 8), (64, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0223636Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0223703Z del arg0_1 2023-01-11T21:41:24.0223759Z return (buf0, ) 2023-01-11T21:41:24.0223780Z 2023-01-11T21:41:24.0223784Z 2023-01-11T21:41:24.0223845Z if __name__ == "__main__": 2023-01-11T21:41:24.0223955Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0224073Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0224294Z arg0_1 = rand_strided((1, 1, 10, 10), (100, 100, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0224505Z arg1_1 = rand_strided((1, 1, 8, 8), (64, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0224616Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0224622Z 2023-01-11T21:41:24.0224932Z [2023-01-11 21:37:20,418] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 399 2023-01-11T21:41:24.0224941Z 2023-01-11T21:41:24.0225031Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0225087Z import torch 2023-01-11T21:41:24.0225156Z import random 2023-01-11T21:41:24.0225265Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0225382Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0225386Z 2023-01-11T21:41:24.0225461Z aten = torch.ops.aten 2023-01-11T21:41:24.0225590Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0225679Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0225684Z 2023-01-11T21:41:24.0225688Z 2023-01-11T21:41:24.0225818Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0226008Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0226119Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0226220Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0226281Z { 2023-01-11T21:41:24.0226375Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0226435Z { 2023-01-11T21:41:24.0226507Z #pragma omp for 2023-01-11T21:41:24.0226575Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0226636Z { 
2023-01-11T21:41:24.0226713Z #pragma GCC ivdep 2023-01-11T21:41:24.0226793Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:24.0226854Z { 2023-01-11T21:41:24.0226917Z { 2023-01-11T21:41:24.0226984Z { 2023-01-11T21:41:24.0227078Z auto tmp0 = static_cast(3 + i0); 2023-01-11T21:41:24.0227181Z auto tmp1 = static_cast(1 + i1); 2023-01-11T21:41:24.0227284Z auto tmp2 = in_ptr0[tmp1 + (11*tmp0)]; 2023-01-11T21:41:24.0227384Z auto tmp3 = static_cast(i1); 2023-01-11T21:41:24.0227472Z auto tmp4 = tmp3 >= 1; 2023-01-11T21:41:24.0227565Z auto tmp5 = tmp3 <= 1; 2023-01-11T21:41:24.0227657Z auto tmp6 = tmp4 & tmp5; 2023-01-11T21:41:24.0227729Z float tmp7 = 0.0; 2023-01-11T21:41:24.0227799Z if(tmp6) 2023-01-11T21:41:24.0227863Z { 2023-01-11T21:41:24.0227967Z auto tmp8 = static_cast(3 + i0); 2023-01-11T21:41:24.0228144Z auto tmp9 = static_cast(1 + ((-1)*i1)); 2023-01-11T21:41:24.0228249Z auto tmp10 = in_ptr0[tmp9 + (11*tmp8)]; 2023-01-11T21:41:24.0228328Z tmp7 = tmp10; 2023-01-11T21:41:24.0228383Z } 2023-01-11T21:41:24.0228474Z auto tmp11 = tmp2 + tmp7; 2023-01-11T21:41:24.0228562Z auto tmp12 = tmp3 >= 5; 2023-01-11T21:41:24.0228647Z auto tmp13 = tmp3 <= 6; 2023-01-11T21:41:24.0228776Z auto tmp14 = tmp12 & tmp13; 2023-01-11T21:41:24.0228858Z float tmp15 = 0.0; 2023-01-11T21:41:24.0228932Z if(tmp14) 2023-01-11T21:41:24.0228986Z { 2023-01-11T21:41:24.0229091Z auto tmp16 = static_cast(3 + i0); 2023-01-11T21:41:24.0229268Z auto tmp17 = static_cast(15 + ((-1)*i1)); 2023-01-11T21:41:24.0229377Z auto tmp18 = in_ptr0[tmp17 + (11*tmp16)]; 2023-01-11T21:41:24.0229454Z tmp15 = tmp18; 2023-01-11T21:41:24.0229517Z } 2023-01-11T21:41:24.0229606Z auto tmp19 = tmp11 + tmp15; 2023-01-11T21:41:24.0229710Z auto tmp20 = static_cast(i0); 2023-01-11T21:41:24.0229789Z auto tmp21 = tmp20 >= 1; 2023-01-11T21:41:24.0229877Z auto tmp22 = tmp20 <= 3; 2023-01-11T21:41:24.0229999Z auto tmp23 = tmp21 & tmp22; 2023-01-11T21:41:24.0230082Z float tmp24 = 0.0; 2023-01-11T21:41:24.0230154Z if(tmp23) 2023-01-11T21:41:24.0230219Z { 2023-01-11T21:41:24.0230394Z auto tmp25 = static_cast(3 + ((-1)*i0)); 2023-01-11T21:41:24.0230490Z auto tmp26 = static_cast(1 + i1); 2023-01-11T21:41:24.0230598Z auto tmp27 = in_ptr0[tmp26 + (11*tmp25)]; 2023-01-11T21:41:24.0230677Z tmp24 = tmp27; 2023-01-11T21:41:24.0230745Z } 2023-01-11T21:41:24.0230835Z auto tmp28 = tmp19 + tmp24; 2023-01-11T21:41:24.0230924Z auto tmp29 = tmp20 >= 3; 2023-01-11T21:41:24.0231013Z auto tmp30 = tmp20 <= 6; 2023-01-11T21:41:24.0231094Z auto tmp31 = tmp29 & tmp30; 2023-01-11T21:41:24.0231178Z float tmp32 = 0.0; 2023-01-11T21:41:24.0231251Z if(tmp31) 2023-01-11T21:41:24.0231315Z { 2023-01-11T21:41:24.0231489Z auto tmp33 = static_cast(17 + ((-1)*i0)); 2023-01-11T21:41:24.0231593Z auto tmp34 = static_cast(1 + i1); 2023-01-11T21:41:24.0231700Z auto tmp35 = in_ptr0[tmp34 + (11*tmp33)]; 2023-01-11T21:41:24.0231768Z tmp32 = tmp35; 2023-01-11T21:41:24.0231832Z } 2023-01-11T21:41:24.0231920Z auto tmp36 = tmp28 + tmp32; 2023-01-11T21:41:24.0232013Z auto tmp37 = tmp23 & tmp6; 2023-01-11T21:41:24.0232093Z float tmp38 = 0.0; 2023-01-11T21:41:24.0232163Z if(tmp37) 2023-01-11T21:41:24.0232226Z { 2023-01-11T21:41:24.0232386Z auto tmp39 = static_cast(3 + ((-1)*i0)); 2023-01-11T21:41:24.0232558Z auto tmp40 = static_cast(1 + ((-1)*i1)); 2023-01-11T21:41:24.0232666Z auto tmp41 = in_ptr0[tmp40 + (11*tmp39)]; 2023-01-11T21:41:24.0232744Z tmp38 = tmp41; 2023-01-11T21:41:24.0232808Z } 2023-01-11T21:41:24.0232898Z auto tmp42 = tmp36 + tmp38; 2023-01-11T21:41:24.0232986Z auto tmp43 = tmp23 & tmp14; 
2023-01-11T21:41:24.0233065Z float tmp44 = 0.0; 2023-01-11T21:41:24.0233127Z if(tmp43) 2023-01-11T21:41:24.0233189Z { 2023-01-11T21:41:24.0233359Z auto tmp45 = static_cast(3 + ((-1)*i0)); 2023-01-11T21:41:24.0233538Z auto tmp46 = static_cast(15 + ((-1)*i1)); 2023-01-11T21:41:24.0233644Z auto tmp47 = in_ptr0[tmp46 + (11*tmp45)]; 2023-01-11T21:41:24.0233821Z tmp44 = tmp47; 2023-01-11T21:41:24.0233890Z } 2023-01-11T21:41:24.0233969Z auto tmp48 = tmp42 + tmp44; 2023-01-11T21:41:24.0234058Z auto tmp49 = tmp31 & tmp6; 2023-01-11T21:41:24.0234138Z float tmp50 = 0.0; 2023-01-11T21:41:24.0234211Z if(tmp49) 2023-01-11T21:41:24.0234275Z { 2023-01-11T21:41:24.0234452Z auto tmp51 = static_cast(17 + ((-1)*i0)); 2023-01-11T21:41:24.0234624Z auto tmp52 = static_cast(1 + ((-1)*i1)); 2023-01-11T21:41:24.0234719Z auto tmp53 = in_ptr0[tmp52 + (11*tmp51)]; 2023-01-11T21:41:24.0234797Z tmp50 = tmp53; 2023-01-11T21:41:24.0234862Z } 2023-01-11T21:41:24.0234955Z auto tmp54 = tmp48 + tmp50; 2023-01-11T21:41:24.0235075Z auto tmp55 = tmp31 & tmp14; 2023-01-11T21:41:24.0235159Z float tmp56 = 0.0; 2023-01-11T21:41:24.0235230Z if(tmp55) 2023-01-11T21:41:24.0235294Z { 2023-01-11T21:41:24.0235455Z auto tmp57 = static_cast(17 + ((-1)*i0)); 2023-01-11T21:41:24.0235626Z auto tmp58 = static_cast(15 + ((-1)*i1)); 2023-01-11T21:41:24.0235735Z auto tmp59 = in_ptr0[tmp58 + (11*tmp57)]; 2023-01-11T21:41:24.0235813Z tmp56 = tmp59; 2023-01-11T21:41:24.0235877Z } 2023-01-11T21:41:24.0235964Z auto tmp60 = tmp54 + tmp56; 2023-01-11T21:41:24.0236058Z out_ptr0[i1 + (8*i0)] = tmp60; 2023-01-11T21:41:24.0236112Z } 2023-01-11T21:41:24.0236173Z } 2023-01-11T21:41:24.0236233Z } 2023-01-11T21:41:24.0236293Z } 2023-01-11T21:41:24.0236354Z } 2023-01-11T21:41:24.0236413Z } 2023-01-11T21:41:24.0236478Z ''') 2023-01-11T21:41:24.0236494Z 2023-01-11T21:41:24.0236499Z 2023-01-11T21:41:24.0236575Z async_compile.wait(globals()) 2023-01-11T21:41:24.0236643Z del async_compile 2023-01-11T21:41:24.0236649Z 2023-01-11T21:41:24.0236713Z def call(args): 2023-01-11T21:41:24.0236789Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0236855Z args.clear() 2023-01-11T21:41:24.0237069Z buf0 = empty_strided((1, 1, 8, 8), (64, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0237198Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0237253Z del arg0_1 2023-01-11T21:41:24.0237320Z return (buf0, ) 2023-01-11T21:41:24.0237325Z 2023-01-11T21:41:24.0237330Z 2023-01-11T21:41:24.0237403Z if __name__ == "__main__": 2023-01-11T21:41:24.0237510Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0237630Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0237851Z arg0_1 = rand_strided((1, 1, 15, 11), (165, 165, 11, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0238060Z arg1_1 = rand_strided((1, 1, 8, 8), (64, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0238172Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0238176Z 2023-01-11T21:41:24.0238230Z ok (4.834s) 2023-01-11T21:41:24.0238703Z test_reflection_pad2d_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0238827Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0239093Z [2023-01-11 21:37:20,448] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 400 2023-01-11T21:41:24.0239386Z [2023-01-11 21:37:22,080] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 400 2023-01-11T21:41:24.0239392Z 2023-01-11T21:41:24.0239481Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0239547Z import torch 2023-01-11T21:41:24.0239614Z import random 2023-01-11T21:41:24.0239724Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0239830Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0239843Z 2023-01-11T21:41:24.0239907Z aten = torch.ops.aten 2023-01-11T21:41:24.0240036Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0240124Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0240129Z 2023-01-11T21:41:24.0240133Z 2023-01-11T21:41:24.0240261Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0240464Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0240607Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0240704Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0240787Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0240843Z { 2023-01-11T21:41:24.0240935Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0240994Z { 2023-01-11T21:41:24.0241068Z #pragma omp for 2023-01-11T21:41:24.0241149Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.0241208Z { 2023-01-11T21:41:24.0241274Z #pragma GCC ivdep 2023-01-11T21:41:24.0241355Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.0241415Z { 2023-01-11T21:41:24.0241475Z { 2023-01-11T21:41:24.0241539Z { 2023-01-11T21:41:24.0241640Z auto tmp0 = static_cast(7); 2023-01-11T21:41:24.0241740Z auto tmp1 = static_cast(i0); 2023-01-11T21:41:24.0241829Z auto tmp2 = static_cast(1); 2023-01-11T21:41:24.0241971Z auto tmp3 = tmp1 - tmp2; 2023-01-11T21:41:24.0242074Z auto tmp4 = std::abs(tmp3); 2023-01-11T21:41:24.0242208Z auto tmp5 = tmp0 - tmp4; 2023-01-11T21:41:24.0242303Z auto tmp6 = std::abs(tmp5); 2023-01-11T21:41:24.0242437Z auto tmp7 = tmp0 - tmp6; 2023-01-11T21:41:24.0242536Z auto tmp8 = static_cast(i1); 2023-01-11T21:41:24.0242661Z auto tmp9 = tmp8 - tmp2; 2023-01-11T21:41:24.0242757Z auto tmp10 = std::abs(tmp9); 2023-01-11T21:41:24.0242895Z auto tmp11 = tmp0 - tmp10; 2023-01-11T21:41:24.0242991Z auto tmp12 = std::abs(tmp11); 2023-01-11T21:41:24.0243131Z auto tmp13 = tmp0 - tmp12; 2023-01-11T21:41:24.0243231Z auto tmp14 = in_ptr0[tmp13 + (8*tmp7)]; 2023-01-11T21:41:24.0243329Z out_ptr0[i1 + (10*i0)] = tmp14; 2023-01-11T21:41:24.0243392Z } 2023-01-11T21:41:24.0243443Z } 2023-01-11T21:41:24.0243503Z } 2023-01-11T21:41:24.0243562Z } 2023-01-11T21:41:24.0243636Z #pragma omp for 2023-01-11T21:41:24.0243713Z for(long i0=0; i0<15; i0+=1) 2023-01-11T21:41:24.0243772Z { 2023-01-11T21:41:24.0243837Z #pragma GCC ivdep 2023-01-11T21:41:24.0243917Z for(long i1=0; i1<11; i1+=1) 2023-01-11T21:41:24.0243979Z { 2023-01-11T21:41:24.0244039Z { 2023-01-11T21:41:24.0244101Z { 2023-01-11T21:41:24.0244200Z auto tmp0 = static_cast(7); 2023-01-11T21:41:24.0244299Z auto tmp1 = static_cast(i0); 2023-01-11T21:41:24.0244387Z auto tmp2 = static_cast(3); 2023-01-11T21:41:24.0244524Z auto tmp3 = 
tmp1 - tmp2; 2023-01-11T21:41:24.0244649Z auto tmp4 = std::abs(tmp3); 2023-01-11T21:41:24.0244784Z auto tmp5 = tmp0 - tmp4; 2023-01-11T21:41:24.0244878Z auto tmp6 = std::abs(tmp5); 2023-01-11T21:41:24.0245012Z auto tmp7 = tmp0 - tmp6; 2023-01-11T21:41:24.0245107Z auto tmp8 = static_cast(i1); 2023-01-11T21:41:24.0245206Z auto tmp9 = static_cast(1); 2023-01-11T21:41:24.0245332Z auto tmp10 = tmp8 - tmp9; 2023-01-11T21:41:24.0245429Z auto tmp11 = std::abs(tmp10); 2023-01-11T21:41:24.0245570Z auto tmp12 = tmp0 - tmp11; 2023-01-11T21:41:24.0245666Z auto tmp13 = std::abs(tmp12); 2023-01-11T21:41:24.0245803Z auto tmp14 = tmp0 - tmp13; 2023-01-11T21:41:24.0245906Z auto tmp15 = in_ptr0[tmp14 + (8*tmp7)]; 2023-01-11T21:41:24.0246025Z out_ptr1[i1 + (11*i0)] = tmp15; 2023-01-11T21:41:24.0246082Z } 2023-01-11T21:41:24.0246145Z } 2023-01-11T21:41:24.0246208Z } 2023-01-11T21:41:24.0246270Z } 2023-01-11T21:41:24.0246333Z } 2023-01-11T21:41:24.0246392Z } 2023-01-11T21:41:24.0246470Z ''') 2023-01-11T21:41:24.0246475Z 2023-01-11T21:41:24.0246479Z 2023-01-11T21:41:24.0246556Z async_compile.wait(globals()) 2023-01-11T21:41:24.0246627Z del async_compile 2023-01-11T21:41:24.0246632Z 2023-01-11T21:41:24.0246702Z def call(args): 2023-01-11T21:41:24.0246769Z arg0_1, = args 2023-01-11T21:41:24.0246839Z args.clear() 2023-01-11T21:41:24.0247055Z buf0 = empty_strided((1, 1, 10, 10), (100, 100, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0247267Z buf1 = empty_strided((1, 1, 15, 11), (165, 165, 11, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0247432Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0247492Z del arg0_1 2023-01-11T21:41:24.0247569Z return (buf0, buf1, ) 2023-01-11T21:41:24.0247574Z 2023-01-11T21:41:24.0247578Z 2023-01-11T21:41:24.0247654Z if __name__ == "__main__": 2023-01-11T21:41:24.0247767Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0247888Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0248096Z arg0_1 = rand_strided((1, 1, 8, 8), (64, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0248202Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0248207Z 2023-01-11T21:41:24.0248271Z ok (1.661s) 2023-01-11T21:41:24.0248722Z test_relu_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0248851Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0249110Z [2023-01-11 21:37:22,103] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 401 2023-01-11T21:41:24.0249374Z [2023-01-11 21:37:22,116] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 401 2023-01-11T21:41:24.0249379Z 2023-01-11T21:41:24.0249471Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0249540Z import torch 2023-01-11T21:41:24.0249609Z import random 2023-01-11T21:41:24.0249720Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0249839Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0249844Z 2023-01-11T21:41:24.0249907Z aten = torch.ops.aten 2023-01-11T21:41:24.0250038Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0250130Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0250168Z 2023-01-11T21:41:24.0250172Z 2023-01-11T21:41:24.0250305Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0250505Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0250620Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0250724Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0250824Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0250906Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0250969Z { 2023-01-11T21:41:24.0251066Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0251126Z { 2023-01-11T21:41:24.0251200Z #pragma omp for 2023-01-11T21:41:24.0251280Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0251339Z { 2023-01-11T21:41:24.0251459Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0251613Z auto tmp2 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.0251740Z auto tmp1 = at::vec::clamp_min(tmp0, decltype(tmp0)(0)); 2023-01-11T21:41:24.0251819Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:24.0251937Z auto tmp4 = at::vec::clamp_min(tmp3, decltype(tmp3)(0)); 2023-01-11T21:41:24.0252065Z auto tmp5 = at::vec::Vectorized(static_cast(10)); 2023-01-11T21:41:24.0252147Z auto tmp6 = tmp4 / tmp5; 2023-01-11T21:41:24.0252235Z tmp1.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0252311Z tmp6.store(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0252369Z } 2023-01-11T21:41:24.0252460Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0252535Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:24.0252593Z { 2023-01-11T21:41:24.0252671Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0252739Z auto tmp2 = in_ptr1[i0]; 2023-01-11T21:41:24.0252826Z auto tmp1 = tmp0 * (tmp0>0); 2023-01-11T21:41:24.0252905Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:24.0252990Z auto tmp4 = tmp3 * (tmp3>0); 2023-01-11T21:41:24.0253088Z auto tmp5 = static_cast(10); 2023-01-11T21:41:24.0253165Z auto tmp6 = tmp4 / tmp5; 2023-01-11T21:41:24.0253239Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:24.0253303Z out_ptr1[i0] = tmp6; 2023-01-11T21:41:24.0253364Z } 2023-01-11T21:41:24.0253422Z } 2023-01-11T21:41:24.0253481Z } 2023-01-11T21:41:24.0253557Z ''') 2023-01-11T21:41:24.0253562Z 2023-01-11T21:41:24.0253566Z 2023-01-11T21:41:24.0253651Z async_compile.wait(globals()) 2023-01-11T21:41:24.0253718Z del async_compile 2023-01-11T21:41:24.0253723Z 2023-01-11T21:41:24.0253786Z def call(args): 2023-01-11T21:41:24.0253847Z arg0_1, arg1_1 
= args 2023-01-11T21:41:24.0253913Z args.clear() 2023-01-11T21:41:24.0254104Z buf0 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0254297Z buf1 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0254483Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0254554Z del arg0_1 2023-01-11T21:41:24.0254616Z del arg1_1 2023-01-11T21:41:24.0254678Z return (buf0, buf1, ) 2023-01-11T21:41:24.0254682Z 2023-01-11T21:41:24.0254686Z 2023-01-11T21:41:24.0254756Z if __name__ == "__main__": 2023-01-11T21:41:24.0254866Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0254984Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0255179Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0255368Z arg1_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0255479Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0255484Z 2023-01-11T21:41:24.0255583Z ok (0.036s) 2023-01-11T21:41:24.0256054Z test_remainder_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0256167Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0256424Z [2023-01-11 21:37:22,144] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 402 2023-01-11T21:41:24.0256690Z [2023-01-11 21:37:23,673] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 402 2023-01-11T21:41:24.0256695Z 2023-01-11T21:41:24.0256782Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0256850Z import torch 2023-01-11T21:41:24.0256914Z import random 2023-01-11T21:41:24.0257059Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0257175Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0257180Z 2023-01-11T21:41:24.0257244Z aten = torch.ops.aten 2023-01-11T21:41:24.0257375Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0257462Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0257466Z 2023-01-11T21:41:24.0257471Z 2023-01-11T21:41:24.0257601Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0257801Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0257912Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0258015Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0258108Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0258191Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.0258281Z float* __restrict__ out_ptr2) 2023-01-11T21:41:24.0258341Z { 2023-01-11T21:41:24.0258433Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0258488Z { 2023-01-11T21:41:24.0258560Z #pragma omp for 2023-01-11T21:41:24.0258638Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:24.0258687Z { 2023-01-11T21:41:24.0258746Z { 2023-01-11T21:41:24.0258809Z { 2023-01-11T21:41:24.0258896Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0258983Z auto tmp1 = in_ptr1[i0]; 
2023-01-11T21:41:24.0259075Z auto tmp2 = mod(tmp0, tmp1); 2023-01-11T21:41:24.0259159Z auto tmp3 = tmp2 + tmp1; 2023-01-11T21:41:24.0259266Z auto tmp4 = ((tmp2 != 0) & ((tmp2 < 0) != (tmp1 < 0))) ? tmp3 : tmp2; 2023-01-11T21:41:24.0259365Z auto tmp5 = static_cast(1); 2023-01-11T21:41:24.0259452Z auto tmp6 = tmp0 + tmp5; 2023-01-11T21:41:24.0259585Z auto tmp7 = tmp1 - tmp5; 2023-01-11T21:41:24.0259676Z auto tmp8 = mod(tmp6, tmp7); 2023-01-11T21:41:24.0259759Z auto tmp9 = tmp8 + tmp7; 2023-01-11T21:41:24.0259883Z auto tmp10 = ((tmp8 != 0) & ((tmp8 < 0) != (tmp7 < 0))) ? tmp9 : tmp8; 2023-01-11T21:41:24.0260008Z auto tmp11 = tmp0 - tmp5; 2023-01-11T21:41:24.0260094Z auto tmp12 = tmp1 + tmp5; 2023-01-11T21:41:24.0260186Z auto tmp13 = mod(tmp11, tmp12); 2023-01-11T21:41:24.0260274Z auto tmp14 = tmp13 + tmp12; 2023-01-11T21:41:24.0260394Z auto tmp15 = ((tmp13 != 0) & ((tmp13 < 0) != (tmp12 < 0))) ? tmp14 : tmp13; 2023-01-11T21:41:24.0260474Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.0260554Z out_ptr1[i0] = tmp10; 2023-01-11T21:41:24.0260634Z out_ptr2[i0] = tmp15; 2023-01-11T21:41:24.0260685Z } 2023-01-11T21:41:24.0260789Z } 2023-01-11T21:41:24.0260849Z } 2023-01-11T21:41:24.0260909Z } 2023-01-11T21:41:24.0260966Z } 2023-01-11T21:41:24.0261041Z ''') 2023-01-11T21:41:24.0261046Z 2023-01-11T21:41:24.0261050Z 2023-01-11T21:41:24.0261136Z async_compile.wait(globals()) 2023-01-11T21:41:24.0261194Z del async_compile 2023-01-11T21:41:24.0261199Z 2023-01-11T21:41:24.0261272Z def call(args): 2023-01-11T21:41:24.0261342Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0261410Z args.clear() 2023-01-11T21:41:24.0261601Z buf0 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0261792Z buf1 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0261975Z buf2 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0262178Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:24.0262274Z del arg0_1 2023-01-11T21:41:24.0262473Z del arg1_1 2023-01-11T21:41:24.0262594Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:24.0262600Z 2023-01-11T21:41:24.0262604Z 2023-01-11T21:41:24.0262680Z if __name__ == "__main__": 2023-01-11T21:41:24.0262794Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0262915Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0263113Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0263293Z arg1_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0263403Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0263408Z 2023-01-11T21:41:24.0263469Z ok (1.558s) 2023-01-11T21:41:24.0263940Z test_repeat_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0264070Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0264328Z [2023-01-11 21:37:23,693] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 403 2023-01-11T21:41:24.0264592Z [2023-01-11 21:37:25,455] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 403 2023-01-11T21:41:24.0264597Z 2023-01-11T21:41:24.0264686Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0264749Z import torch 2023-01-11T21:41:24.0264805Z import random 2023-01-11T21:41:24.0264918Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0265034Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0265040Z 2023-01-11T21:41:24.0265116Z aten = torch.ops.aten 2023-01-11T21:41:24.0265249Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0265339Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0265344Z 2023-01-11T21:41:24.0265348Z 2023-01-11T21:41:24.0265479Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0265677Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0265781Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0265874Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0265971Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.0266062Z float* __restrict__ out_ptr2) 2023-01-11T21:41:24.0266120Z { 2023-01-11T21:41:24.0266215Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0266271Z { 2023-01-11T21:41:24.0266348Z #pragma omp for collapse(2) 2023-01-11T21:41:24.0266423Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0266481Z { 2023-01-11T21:41:24.0266617Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:24.0266681Z { 2023-01-11T21:41:24.0266761Z #pragma GCC ivdep 2023-01-11T21:41:24.0266846Z for(long i2=0; i2<12; i2+=1) 2023-01-11T21:41:24.0266897Z { 2023-01-11T21:41:24.0266982Z for(long i3=0; i3<1; i3+=1) 2023-01-11T21:41:24.0267045Z { 2023-01-11T21:41:24.0267198Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i3) + (8*(i2 % 4)) + (32*(i1 % 2))); 2023-01-11T21:41:24.0267317Z tmp0.store(out_ptr0 + (8*i2) + (8*i3) + (96*i1) + (384*i0)); 2023-01-11T21:41:24.0267380Z } 2023-01-11T21:41:24.0267476Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.0267562Z for(long i3=8; i3<8; i3+=1) 2023-01-11T21:41:24.0267615Z { 2023-01-11T21:41:24.0267725Z auto tmp0 = in_ptr0[i3 + (8*(i2 % 4)) + (32*(i1 % 2))]; 2023-01-11T21:41:24.0267865Z out_ptr0[i3 + (8*i2) + (96*i1) + (384*i0)] = tmp0; 2023-01-11T21:41:24.0267928Z } 2023-01-11T21:41:24.0267991Z } 2023-01-11T21:41:24.0268052Z } 2023-01-11T21:41:24.0268110Z } 2023-01-11T21:41:24.0268173Z #pragma omp for 2023-01-11T21:41:24.0268250Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0268310Z { 2023-01-11T21:41:24.0268391Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:24.0268451Z { 2023-01-11T21:41:24.0268582Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i1); 2023-01-11T21:41:24.0268680Z tmp0.store(out_ptr1 + (8*i1) + (64*i0)); 2023-01-11T21:41:24.0268730Z } 2023-01-11T21:41:24.0268820Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.0268900Z for(long i1=64; i1<64; i1+=1) 2023-01-11T21:41:24.0268960Z { 2023-01-11T21:41:24.0269045Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:24.0269136Z out_ptr1[i1 + (64*i0)] = tmp0; 2023-01-11T21:41:24.0269186Z } 2023-01-11T21:41:24.0269245Z } 
2023-01-11T21:41:24.0269319Z #pragma omp for 2023-01-11T21:41:24.0269399Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0269459Z { 2023-01-11T21:41:24.0269536Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:24.0269598Z { 2023-01-11T21:41:24.0269719Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i1); 2023-01-11T21:41:24.0269820Z tmp0.store(out_ptr2 + (8*i1) + (64*i0)); 2023-01-11T21:41:24.0269882Z } 2023-01-11T21:41:24.0269971Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.0270051Z for(long i1=64; i1<64; i1+=1) 2023-01-11T21:41:24.0270110Z { 2023-01-11T21:41:24.0270194Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:24.0270272Z out_ptr2[i1 + (64*i0)] = tmp0; 2023-01-11T21:41:24.0270335Z } 2023-01-11T21:41:24.0270392Z } 2023-01-11T21:41:24.0270451Z } 2023-01-11T21:41:24.0270510Z } 2023-01-11T21:41:24.0270586Z ''') 2023-01-11T21:41:24.0270592Z 2023-01-11T21:41:24.0270596Z 2023-01-11T21:41:24.0270682Z async_compile.wait(globals()) 2023-01-11T21:41:24.0270740Z del async_compile 2023-01-11T21:41:24.0270754Z 2023-01-11T21:41:24.0270812Z def call(args): 2023-01-11T21:41:24.0270875Z arg0_1, = args 2023-01-11T21:41:24.0270942Z args.clear() 2023-01-11T21:41:24.0271159Z buf0 = empty_strided((2, 4, 12, 8), (384, 96, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0271367Z buf1 = empty_strided((8, 2, 4, 8), (64, 32, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0271589Z buf2 = empty_strided((2, 1, 1, 2, 4, 8), (64, 64, 64, 32, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0271775Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:24.0271860Z del arg0_1 2023-01-11T21:41:24.0271940Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:24.0271945Z 2023-01-11T21:41:24.0271949Z 2023-01-11T21:41:24.0272021Z if __name__ == "__main__": 2023-01-11T21:41:24.0272131Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0272250Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0272460Z arg0_1 = rand_strided((1, 2, 4, 8), (64, 32, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0272562Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0272567Z 2023-01-11T21:41:24.0272631Z ok (1.781s) 2023-01-11T21:41:24.0273122Z test_roi_align_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0273238Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0273497Z [2023-01-11 21:37:27,326] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 404 2023-01-11T21:41:24.0273814Z [2023-01-11 21:37:27,743] torch._inductor.ir: [WARNING] Using FallbackKernel: torch.ops.torchvision.roi_align 2023-01-11T21:41:24.0274081Z [2023-01-11 21:37:27,745] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 404 2023-01-11T21:41:24.0274086Z 2023-01-11T21:41:24.0274176Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0274242Z import torch 2023-01-11T21:41:24.0274310Z import random 2023-01-11T21:41:24.0274421Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0274527Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0274542Z 2023-01-11T21:41:24.0274607Z aten = torch.ops.aten 2023-01-11T21:41:24.0274737Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0274828Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0274833Z 2023-01-11T21:41:24.0274837Z 2023-01-11T21:41:24.0274923Z async_compile.wait(globals()) 2023-01-11T21:41:24.0274992Z del async_compile 2023-01-11T21:41:24.0274997Z 2023-01-11T21:41:24.0275065Z def call(args): 2023-01-11T21:41:24.0275134Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0275191Z args.clear() 2023-01-11T21:41:24.0275331Z buf0 = torch.ops.torchvision.roi_align(arg0_1, arg1_1, 0.25, 7, 7, 2, False) 2023-01-11T21:41:24.0275394Z del arg0_1 2023-01-11T21:41:24.0275458Z del arg1_1 2023-01-11T21:41:24.0275524Z buf1 = buf0 2023-01-11T21:41:24.0275630Z assert_size_stride(buf1, (2292, 256, 7, 7), (12544, 49, 7, 1)) 2023-01-11T21:41:24.0275693Z del buf0 2023-01-11T21:41:24.0275751Z return (buf1, ) 2023-01-11T21:41:24.0275756Z 2023-01-11T21:41:24.0275768Z 2023-01-11T21:41:24.0275833Z if __name__ == "__main__": 2023-01-11T21:41:24.0275941Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0276060Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0276294Z arg0_1 = rand_strided((4, 256, 296, 304), (23035904, 89984, 304, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0276493Z arg1_1 = rand_strided((2292, 5), (5, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0276604Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0276608Z 2023-01-11T21:41:24.0276670Z ok (5.252s) 2023-01-11T21:41:24.0277128Z test_roll_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0277277Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0277535Z [2023-01-11 21:37:30,740] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 405 2023-01-11T21:41:24.0277797Z [2023-01-11 21:37:32,471] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 405 2023-01-11T21:41:24.0277802Z 2023-01-11T21:41:24.0277893Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0277959Z import torch 2023-01-11T21:41:24.0278030Z import random 2023-01-11T21:41:24.0278146Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0278266Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0278271Z 2023-01-11T21:41:24.0278336Z aten = torch.ops.aten 2023-01-11T21:41:24.0278469Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0278563Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0278567Z 2023-01-11T21:41:24.0278572Z 2023-01-11T21:41:24.0278733Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0278937Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0279053Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0279153Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0279250Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0279298Z { 2023-01-11T21:41:24.0279395Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0279456Z { 2023-01-11T21:41:24.0279546Z #pragma omp for collapse(2) 2023-01-11T21:41:24.0279627Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0279690Z { 2023-01-11T21:41:24.0279772Z for(long i1=0; i1<56; i1+=1) 2023-01-11T21:41:24.0279821Z { 2023-01-11T21:41:24.0279900Z #pragma GCC ivdep 2023-01-11T21:41:24.0279988Z for(long i2=0; i2<56; i2+=1) 2023-01-11T21:41:24.0280055Z { 2023-01-11T21:41:24.0280149Z for(long i3=0; i3<2; i3+=1) 2023-01-11T21:41:24.0280214Z { 2023-01-11T21:41:24.0280379Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i3) + (16*((46 + i2) % 56)) + (896*((3 + i1) % 56)) + (50176*i0)); 2023-01-11T21:41:24.0280497Z tmp0.store(out_ptr0 + (8*i3) + (16*i2) + (896*i1) + (50176*i0)); 2023-01-11T21:41:24.0280550Z } 2023-01-11T21:41:24.0280647Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.0280738Z for(long i3=16; i3<16; i3+=1) 2023-01-11T21:41:24.0280804Z { 2023-01-11T21:41:24.0280929Z auto tmp0 = in_ptr0[i3 + (16*((46 + i2) % 56)) + (896*((3 + i1) % 56)) + (50176*i0)]; 2023-01-11T21:41:24.0281038Z out_ptr0[i3 + (16*i2) + (896*i1) + (50176*i0)] = tmp0; 2023-01-11T21:41:24.0281102Z } 2023-01-11T21:41:24.0281154Z } 2023-01-11T21:41:24.0281223Z } 2023-01-11T21:41:24.0281286Z } 2023-01-11T21:41:24.0281376Z #pragma omp for collapse(2) 2023-01-11T21:41:24.0281457Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0281518Z { 2023-01-11T21:41:24.0281600Z for(long i1=0; i1<56; i1+=1) 2023-01-11T21:41:24.0281650Z { 2023-01-11T21:41:24.0281731Z #pragma GCC ivdep 2023-01-11T21:41:24.0281821Z for(long i2=0; i2<56; i2+=1) 2023-01-11T21:41:24.0281885Z { 2023-01-11T21:41:24.0281966Z #pragma GCC ivdep 2023-01-11T21:41:24.0282055Z for(long i3=0; i3<16; i3+=1) 2023-01-11T21:41:24.0282108Z { 2023-01-11T21:41:24.0282175Z { 2023-01-11T21:41:24.0282244Z { 2023-01-11T21:41:24.0282367Z auto tmp0 = in_ptr0[(100347 + i3 + (16*i2) + (896*i1) + (50176*i0)) % 100352]; 2023-01-11T21:41:24.0282516Z out_ptr1[i3 + 
(16*i2) + (896*i1) + (50176*i0)] = tmp0; 2023-01-11T21:41:24.0282584Z } 2023-01-11T21:41:24.0282650Z } 2023-01-11T21:41:24.0282712Z } 2023-01-11T21:41:24.0282763Z } 2023-01-11T21:41:24.0282824Z } 2023-01-11T21:41:24.0282880Z } 2023-01-11T21:41:24.0282940Z } 2023-01-11T21:41:24.0282999Z } 2023-01-11T21:41:24.0283075Z ''') 2023-01-11T21:41:24.0283080Z 2023-01-11T21:41:24.0283084Z 2023-01-11T21:41:24.0283170Z async_compile.wait(globals()) 2023-01-11T21:41:24.0283229Z del async_compile 2023-01-11T21:41:24.0283233Z 2023-01-11T21:41:24.0283301Z def call(args): 2023-01-11T21:41:24.0283370Z arg0_1, = args 2023-01-11T21:41:24.0283438Z args.clear() 2023-01-11T21:41:24.0283658Z buf0 = empty_strided((2, 56, 56, 16), (50176, 896, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0283903Z buf1 = empty_strided((2, 56, 56, 16), (50176, 896, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0284069Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0284124Z del arg0_1 2023-01-11T21:41:24.0284197Z return (buf0, buf1, ) 2023-01-11T21:41:24.0284203Z 2023-01-11T21:41:24.0284207Z 2023-01-11T21:41:24.0284280Z if __name__ == "__main__": 2023-01-11T21:41:24.0284393Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0284513Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0284734Z arg0_1 = rand_strided((2, 56, 56, 16), (50176, 896, 16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0284837Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0284842Z 2023-01-11T21:41:24.0284910Z ok (1.767s) 2023-01-11T21:41:24.0285387Z test_round_correctness_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0285502Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0285759Z [2023-01-11 21:37:32,487] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 406 2023-01-11T21:41:24.0286021Z [2023-01-11 21:37:34,001] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 406 2023-01-11T21:41:24.0286027Z 2023-01-11T21:41:24.0286117Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0286183Z import torch 2023-01-11T21:41:24.0286251Z import random 2023-01-11T21:41:24.0286364Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0286481Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0286486Z 2023-01-11T21:41:24.0286554Z aten = torch.ops.aten 2023-01-11T21:41:24.0286684Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0286772Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0286777Z 2023-01-11T21:41:24.0286781Z 2023-01-11T21:41:24.0286913Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0287117Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0287235Z extern "C" void kernel(const double* __restrict__ in_ptr0, 2023-01-11T21:41:24.0287333Z double* __restrict__ out_ptr0) 2023-01-11T21:41:24.0287389Z { 2023-01-11T21:41:24.0287473Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0287527Z { 2023-01-11T21:41:24.0287601Z #pragma omp for 2023-01-11T21:41:24.0287686Z for(long i0=0; i0<200; i0+=1) 2023-01-11T21:41:24.0287746Z { 2023-01-11T21:41:24.0287806Z { 2023-01-11T21:41:24.0287868Z { 2023-01-11T21:41:24.0287981Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0288085Z auto tmp1 = std::nearbyint(tmp0); 2023-01-11T21:41:24.0288165Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:24.0288228Z } 2023-01-11T21:41:24.0288288Z } 2023-01-11T21:41:24.0288346Z } 2023-01-11T21:41:24.0288402Z } 2023-01-11T21:41:24.0288448Z } 2023-01-11T21:41:24.0288525Z ''') 2023-01-11T21:41:24.0288531Z 2023-01-11T21:41:24.0288535Z 2023-01-11T21:41:24.0288620Z async_compile.wait(globals()) 2023-01-11T21:41:24.0288688Z del async_compile 2023-01-11T21:41:24.0288692Z 2023-01-11T21:41:24.0288761Z def call(args): 2023-01-11T21:41:24.0288827Z arg0_1, = args 2023-01-11T21:41:24.0288896Z args.clear() 2023-01-11T21:41:24.0289079Z buf0 = empty_strided((200, ), (1, ), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.0289210Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0289275Z del arg0_1 2023-01-11T21:41:24.0289371Z return (buf0, ) 2023-01-11T21:41:24.0289376Z 2023-01-11T21:41:24.0289380Z 2023-01-11T21:41:24.0289455Z if __name__ == "__main__": 2023-01-11T21:41:24.0289569Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0289689Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0289881Z arg0_1 = rand_strided((200, ), (1, ), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.0289976Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0289980Z 2023-01-11T21:41:24.0290043Z ok (1.527s) 2023-01-11T21:41:24.0290502Z test_round_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. 
This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0290632Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0290893Z [2023-01-11 21:37:34,024] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 407 2023-01-11T21:41:24.0291159Z [2023-01-11 21:37:35,575] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 407 2023-01-11T21:41:24.0291165Z 2023-01-11T21:41:24.0291256Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0291322Z import torch 2023-01-11T21:41:24.0291388Z import random 2023-01-11T21:41:24.0291490Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0291607Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0291612Z 2023-01-11T21:41:24.0291687Z aten = torch.ops.aten 2023-01-11T21:41:24.0291819Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0291907Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0291912Z 2023-01-11T21:41:24.0291916Z 2023-01-11T21:41:24.0292051Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0292254Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0292370Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0292460Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0292556Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0292651Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.0292742Z float* __restrict__ out_ptr2) 2023-01-11T21:41:24.0292800Z { 2023-01-11T21:41:24.0292892Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0292952Z { 2023-01-11T21:41:24.0293016Z #pragma omp for 2023-01-11T21:41:24.0293094Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0293155Z { 2023-01-11T21:41:24.0293286Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0293371Z auto tmp1 = tmp0.round(); 2023-01-11T21:41:24.0293537Z auto tmp2 = at::vec::Vectorized(static_cast(100.0)); 2023-01-11T21:41:24.0293618Z auto tmp3 = tmp0 * tmp2; 2023-01-11T21:41:24.0293690Z auto tmp4 = tmp3.round(); 2023-01-11T21:41:24.0293825Z auto tmp5 = at::vec::Vectorized(static_cast(0.01)); 2023-01-11T21:41:24.0293905Z auto tmp6 = tmp4 * tmp5; 2023-01-11T21:41:24.0293993Z tmp1.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0294079Z tmp6.store(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0294139Z } 2023-01-11T21:41:24.0294229Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0294307Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:24.0294356Z { 2023-01-11T21:41:24.0294436Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0294533Z auto tmp1 = std::nearbyint(tmp0); 2023-01-11T21:41:24.0294632Z auto tmp2 = static_cast(100.0); 2023-01-11T21:41:24.0294739Z auto tmp3 = tmp0 * tmp2; 2023-01-11T21:41:24.0294835Z auto tmp4 = std::nearbyint(tmp3); 2023-01-11T21:41:24.0294932Z auto tmp5 = static_cast(0.01); 2023-01-11T21:41:24.0295002Z auto tmp6 = tmp4 * tmp5; 2023-01-11T21:41:24.0295078Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:24.0295155Z out_ptr1[i0] = tmp6; 2023-01-11T21:41:24.0295214Z } 2023-01-11T21:41:24.0295287Z #pragma omp for 2023-01-11T21:41:24.0295365Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0295413Z { 2023-01-11T21:41:24.0295542Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.0295672Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 
2023-01-11T21:41:24.0295752Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0295834Z auto tmp3 = tmp2.round(); 2023-01-11T21:41:24.0295921Z tmp3.store(out_ptr2 + 8*i0); 2023-01-11T21:41:24.0295982Z } 2023-01-11T21:41:24.0296073Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0296141Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:24.0296201Z { 2023-01-11T21:41:24.0296280Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:24.0296375Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.0296454Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0296551Z auto tmp3 = std::nearbyint(tmp2); 2023-01-11T21:41:24.0296626Z out_ptr2[i0] = tmp3; 2023-01-11T21:41:24.0296675Z } 2023-01-11T21:41:24.0296733Z } 2023-01-11T21:41:24.0296788Z } 2023-01-11T21:41:24.0296864Z ''') 2023-01-11T21:41:24.0296869Z 2023-01-11T21:41:24.0296874Z 2023-01-11T21:41:24.0296960Z async_compile.wait(globals()) 2023-01-11T21:41:24.0297028Z del async_compile 2023-01-11T21:41:24.0297034Z 2023-01-11T21:41:24.0297101Z def call(args): 2023-01-11T21:41:24.0297162Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0297229Z args.clear() 2023-01-11T21:41:24.0297427Z buf0 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0297620Z buf2 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0297805Z buf1 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0298015Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0298082Z del arg0_1 2023-01-11T21:41:24.0298135Z del arg1_1 2023-01-11T21:41:24.0298215Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:24.0298219Z 2023-01-11T21:41:24.0298223Z 2023-01-11T21:41:24.0298297Z if __name__ == "__main__": 2023-01-11T21:41:24.0298407Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0298526Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0298718Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0298956Z arg1_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0299067Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0299072Z 2023-01-11T21:41:24.0299136Z ok (1.574s) 2023-01-11T21:41:24.0299583Z test_rsqrt_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0299706Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0299966Z [2023-01-11 21:37:35,596] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 408 2023-01-11T21:41:24.0300256Z [2023-01-11 21:37:37,160] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 408 2023-01-11T21:41:24.0300264Z 2023-01-11T21:41:24.0300358Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0300425Z import torch 2023-01-11T21:41:24.0300493Z import random 2023-01-11T21:41:24.0300605Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0300725Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0300730Z 2023-01-11T21:41:24.0300793Z aten = torch.ops.aten 2023-01-11T21:41:24.0300928Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0301015Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0301020Z 2023-01-11T21:41:24.0301024Z 2023-01-11T21:41:24.0301153Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0301356Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0301472Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0301571Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0301665Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0301713Z { 2023-01-11T21:41:24.0301809Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0301867Z { 2023-01-11T21:41:24.0301940Z #pragma omp for 2023-01-11T21:41:24.0302017Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0302074Z { 2023-01-11T21:41:24.0302207Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0302280Z auto tmp1 = tmp0.rsqrt(); 2023-01-11T21:41:24.0302576Z auto tmp2 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0302664Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:24.0302747Z auto tmp4 = tmp3.rsqrt(); 2023-01-11T21:41:24.0302874Z auto tmp5 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:24.0303001Z auto tmp6 = tmp4 - tmp5; 2023-01-11T21:41:24.0303092Z tmp1.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0303171Z tmp6.store(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0303231Z } 2023-01-11T21:41:24.0303323Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0303403Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:24.0303463Z { 2023-01-11T21:41:24.0303544Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0303638Z auto tmp1 = 1 / std::sqrt(tmp0); 2023-01-11T21:41:24.0303723Z auto tmp2 = static_cast(1); 2023-01-11T21:41:24.0303806Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:24.0303899Z auto tmp4 = 1 / std::sqrt(tmp3); 2023-01-11T21:41:24.0303995Z auto tmp5 = static_cast(2); 2023-01-11T21:41:24.0304116Z auto tmp6 = tmp4 - tmp5; 2023-01-11T21:41:24.0304193Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:24.0304268Z out_ptr1[i0] = tmp6; 2023-01-11T21:41:24.0304316Z } 2023-01-11T21:41:24.0304375Z } 2023-01-11T21:41:24.0304489Z } 2023-01-11T21:41:24.0304568Z ''') 2023-01-11T21:41:24.0304574Z 2023-01-11T21:41:24.0304578Z 2023-01-11T21:41:24.0304666Z async_compile.wait(globals()) 2023-01-11T21:41:24.0304738Z del async_compile 2023-01-11T21:41:24.0304742Z 2023-01-11T21:41:24.0304810Z def call(args): 2023-01-11T21:41:24.0304865Z arg0_1, = args 2023-01-11T21:41:24.0304933Z args.clear() 2023-01-11T21:41:24.0305125Z buf0 = empty_strided((64, ), (1, ), device='cpu', 
dtype=torch.float32) 2023-01-11T21:41:24.0305311Z buf1 = empty_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0305475Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0305543Z del arg0_1 2023-01-11T21:41:24.0305621Z return (buf0, buf1, ) 2023-01-11T21:41:24.0305626Z 2023-01-11T21:41:24.0305630Z 2023-01-11T21:41:24.0305704Z if __name__ == "__main__": 2023-01-11T21:41:24.0305804Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0305964Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0306159Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0306264Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0306268Z 2023-01-11T21:41:24.0306331Z ok (1.585s) 2023-01-11T21:41:24.0306796Z test_scatter1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0306918Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0307174Z [2023-01-11 21:37:37,183] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 409 2023-01-11T21:41:24.0307440Z [2023-01-11 21:37:38,732] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 409 2023-01-11T21:41:24.0307446Z 2023-01-11T21:41:24.0307539Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0307594Z import torch 2023-01-11T21:41:24.0307659Z import random 2023-01-11T21:41:24.0307770Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0307887Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0307892Z 2023-01-11T21:41:24.0307967Z aten = torch.ops.aten 2023-01-11T21:41:24.0308098Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0308187Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0308192Z 2023-01-11T21:41:24.0308196Z 2023-01-11T21:41:24.0308315Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0308517Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0308632Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0308743Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:24.0308846Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:24.0308942Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0309001Z { 2023-01-11T21:41:24.0309096Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0309144Z { 2023-01-11T21:41:24.0309218Z #pragma omp for 2023-01-11T21:41:24.0309300Z for(long i0=0; i0<6; i0+=1) 2023-01-11T21:41:24.0309363Z { 2023-01-11T21:41:24.0309427Z { 2023-01-11T21:41:24.0309490Z { 2023-01-11T21:41:24.0309567Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0309648Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:24.0309711Z } 2023-01-11T21:41:24.0309773Z } 2023-01-11T21:41:24.0309837Z } 2023-01-11T21:41:24.0309914Z #pragma omp single 2023-01-11T21:41:24.0309975Z { 2023-01-11T21:41:24.0310056Z { 2023-01-11T21:41:24.0310117Z { 2023-01-11T21:41:24.0310207Z auto tmp0 = in_ptr1[0]; 2023-01-11T21:41:24.0310296Z auto tmp1 = in_ptr2[0]; 2023-01-11T21:41:24.0310383Z 
out_ptr0[tmp0] = tmp1; 2023-01-11T21:41:24.0310446Z } 2023-01-11T21:41:24.0310509Z } 2023-01-11T21:41:24.0310557Z } 2023-01-11T21:41:24.0310617Z } 2023-01-11T21:41:24.0310676Z } 2023-01-11T21:41:24.0310754Z ''') 2023-01-11T21:41:24.0310759Z 2023-01-11T21:41:24.0310763Z 2023-01-11T21:41:24.0310850Z async_compile.wait(globals()) 2023-01-11T21:41:24.0310921Z del async_compile 2023-01-11T21:41:24.0310926Z 2023-01-11T21:41:24.0310995Z def call(args): 2023-01-11T21:41:24.0311063Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:24.0311134Z args.clear() 2023-01-11T21:41:24.0311329Z buf0 = empty_strided((2, 3), (3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0311546Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0311616Z del arg0_1 2023-01-11T21:41:24.0311681Z del arg1_1 2023-01-11T21:41:24.0311746Z del arg2_1 2023-01-11T21:41:24.0311804Z return (buf0, ) 2023-01-11T21:41:24.0311809Z 2023-01-11T21:41:24.0311813Z 2023-01-11T21:41:24.0311889Z if __name__ == "__main__": 2023-01-11T21:41:24.0312001Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0312123Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0312318Z arg0_1 = rand_strided((2, 3), (3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0312514Z arg1_1 = rand_strided((1, 1), (1, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0312699Z arg2_1 = rand_strided((2, 3), (3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0312820Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:24.0312825Z 2023-01-11T21:41:24.0312882Z ok (1.573s) 2023-01-11T21:41:24.0313348Z test_scatter2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0313475Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0313789Z [2023-01-11 21:37:38,755] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 410 2023-01-11T21:41:24.0314059Z [2023-01-11 21:37:40,299] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 410 2023-01-11T21:41:24.0314064Z 2023-01-11T21:41:24.0314154Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0314225Z import torch 2023-01-11T21:41:24.0314296Z import random 2023-01-11T21:41:24.0314412Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0314520Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0314525Z 2023-01-11T21:41:24.0314599Z aten = torch.ops.aten 2023-01-11T21:41:24.0314729Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0314816Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0314821Z 2023-01-11T21:41:24.0314825Z 2023-01-11T21:41:24.0314953Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0315158Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0315275Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0315379Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:24.0315470Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:24.0315566Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0315626Z { 2023-01-11T21:41:24.0315760Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0315817Z { 2023-01-11T21:41:24.0315893Z #pragma omp for 2023-01-11T21:41:24.0315972Z for(long i0=0; i0<4096; i0+=1) 2023-01-11T21:41:24.0316021Z { 2023-01-11T21:41:24.0316150Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0316238Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0316302Z } 2023-01-11T21:41:24.0316393Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0316478Z for(long i0=32768; i0<32768; i0+=1) 2023-01-11T21:41:24.0316539Z { 2023-01-11T21:41:24.0316608Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0316684Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:24.0316746Z } 2023-01-11T21:41:24.0316818Z #pragma omp for 2023-01-11T21:41:24.0316896Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:24.0316954Z { 2023-01-11T21:41:24.0317029Z #pragma GCC ivdep 2023-01-11T21:41:24.0317130Z for(long i1=0; i1<512; i1+=1) 2023-01-11T21:41:24.0317193Z { 2023-01-11T21:41:24.0317257Z { 2023-01-11T21:41:24.0317322Z { 2023-01-11T21:41:24.0317423Z auto tmp0 = in_ptr1[i1 + (512*i0)]; 2023-01-11T21:41:24.0317522Z auto tmp1 = in_ptr2[i1 + (512*i0)]; 2023-01-11T21:41:24.0317633Z atomic_add(&out_ptr0[i1 + (512*tmp0)], tmp1); 2023-01-11T21:41:24.0317686Z } 2023-01-11T21:41:24.0317746Z } 2023-01-11T21:41:24.0317809Z } 2023-01-11T21:41:24.0317866Z } 2023-01-11T21:41:24.0317923Z } 2023-01-11T21:41:24.0317983Z } 2023-01-11T21:41:24.0318048Z ''') 2023-01-11T21:41:24.0318053Z 2023-01-11T21:41:24.0318069Z 2023-01-11T21:41:24.0318145Z async_compile.wait(globals()) 2023-01-11T21:41:24.0318213Z del async_compile 2023-01-11T21:41:24.0318217Z 2023-01-11T21:41:24.0318283Z def call(args): 2023-01-11T21:41:24.0318363Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:24.0318429Z args.clear() 2023-01-11T21:41:24.0318627Z buf0 = empty_strided((64, 512), (512, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0318812Z 
kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0318867Z del arg0_1 2023-01-11T21:41:24.0318929Z del arg1_1 2023-01-11T21:41:24.0318991Z del arg2_1 2023-01-11T21:41:24.0319058Z return (buf0, ) 2023-01-11T21:41:24.0319063Z 2023-01-11T21:41:24.0319067Z 2023-01-11T21:41:24.0319139Z if __name__ == "__main__": 2023-01-11T21:41:24.0319251Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0319371Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0319571Z arg0_1 = rand_strided((64, 512), (512, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0319760Z arg1_1 = rand_strided((64, 512), (512, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0319959Z arg2_1 = rand_strided((64, 512), (512, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0320077Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:24.0320082Z 2023-01-11T21:41:24.0320143Z ok (1.569s) 2023-01-11T21:41:24.0320611Z test_scatter3_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0320737Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0320997Z [2023-01-11 21:37:40,323] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 411 2023-01-11T21:41:24.0321264Z [2023-01-11 21:37:41,901] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 411 2023-01-11T21:41:24.0321303Z 2023-01-11T21:41:24.0321393Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0321450Z import torch 2023-01-11T21:41:24.0321519Z import random 2023-01-11T21:41:24.0321634Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0321752Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0321757Z 2023-01-11T21:41:24.0321831Z aten = torch.ops.aten 2023-01-11T21:41:24.0321960Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0322051Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0322056Z 2023-01-11T21:41:24.0322060Z 2023-01-11T21:41:24.0322190Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0322382Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0322498Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0322628Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:24.0322726Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0322785Z { 2023-01-11T21:41:24.0322877Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0322935Z { 2023-01-11T21:41:24.0322998Z #pragma omp for 2023-01-11T21:41:24.0323075Z for(long i0=0; i0<235; i0+=1) 2023-01-11T21:41:24.0323135Z { 2023-01-11T21:41:24.0323266Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0323354Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0323415Z } 2023-01-11T21:41:24.0323505Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0323577Z for(long i0=1880; i0<1885; i0+=1) 2023-01-11T21:41:24.0323635Z { 2023-01-11T21:41:24.0323714Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0323791Z 
out_ptr0[i0] = tmp0; 2023-01-11T21:41:24.0323851Z } 2023-01-11T21:41:24.0323933Z #pragma omp for 2023-01-11T21:41:24.0324015Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0324063Z { 2023-01-11T21:41:24.0324123Z { 2023-01-11T21:41:24.0324184Z { 2023-01-11T21:41:24.0324270Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:24.0324372Z auto tmp1 = static_cast(0.8); 2023-01-11T21:41:24.0324472Z atomic_add(&out_ptr0[tmp0], tmp1); 2023-01-11T21:41:24.0324536Z } 2023-01-11T21:41:24.0324587Z } 2023-01-11T21:41:24.0324644Z } 2023-01-11T21:41:24.0324702Z } 2023-01-11T21:41:24.0324758Z } 2023-01-11T21:41:24.0324833Z ''') 2023-01-11T21:41:24.0324838Z 2023-01-11T21:41:24.0324842Z 2023-01-11T21:41:24.0324929Z async_compile.wait(globals()) 2023-01-11T21:41:24.0324996Z del async_compile 2023-01-11T21:41:24.0325001Z 2023-01-11T21:41:24.0325057Z def call(args): 2023-01-11T21:41:24.0325130Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0325204Z args.clear() 2023-01-11T21:41:24.0325412Z buf0 = empty_strided((5, 29, 13), (377, 13, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0325572Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0325637Z del arg0_1 2023-01-11T21:41:24.0325701Z del arg1_1 2023-01-11T21:41:24.0325758Z return (buf0, ) 2023-01-11T21:41:24.0325763Z 2023-01-11T21:41:24.0325779Z 2023-01-11T21:41:24.0325840Z if __name__ == "__main__": 2023-01-11T21:41:24.0325950Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0326069Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0326280Z arg0_1 = rand_strided((5, 29, 13), (377, 13, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0326477Z arg1_1 = rand_strided((1, 1, 4), (4, 4, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0326588Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0326622Z 2023-01-11T21:41:24.0326688Z ok (1.600s) 2023-01-11T21:41:24.0327152Z test_scatter4_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0327266Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0327526Z [2023-01-11 21:37:41,921] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 412 2023-01-11T21:41:24.0327791Z [2023-01-11 21:37:43,501] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 412 2023-01-11T21:41:24.0327796Z 2023-01-11T21:41:24.0327886Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0327952Z import torch 2023-01-11T21:41:24.0328022Z import random 2023-01-11T21:41:24.0328162Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0328282Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0328286Z 2023-01-11T21:41:24.0328351Z aten = torch.ops.aten 2023-01-11T21:41:24.0328478Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0328566Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0328572Z 2023-01-11T21:41:24.0328576Z 2023-01-11T21:41:24.0328707Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0328908Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0329025Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0329128Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:24.0329228Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:24.0329317Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0329377Z { 2023-01-11T21:41:24.0329473Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0329531Z { 2023-01-11T21:41:24.0329607Z #pragma omp for 2023-01-11T21:41:24.0329687Z for(long i0=0; i0<24304; i0+=1) 2023-01-11T21:41:24.0329747Z { 2023-01-11T21:41:24.0329867Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0329955Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0330016Z } 2023-01-11T21:41:24.0330107Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0330194Z for(long i0=194432; i0<194432; i0+=1) 2023-01-11T21:41:24.0330255Z { 2023-01-11T21:41:24.0330333Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0330400Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:24.0330460Z } 2023-01-11T21:41:24.0330533Z #pragma omp for 2023-01-11T21:41:24.0330613Z for(long i0=0; i0<992; i0+=1) 2023-01-11T21:41:24.0330672Z { 2023-01-11T21:41:24.0330733Z { 2023-01-11T21:41:24.0330800Z { 2023-01-11T21:41:24.0330879Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:24.0330966Z auto tmp1 = in_ptr2[i0]; 2023-01-11T21:41:24.0331062Z out_ptr0[i0 + (992*tmp0)] = tmp1; 2023-01-11T21:41:24.0331127Z } 2023-01-11T21:41:24.0331190Z } 2023-01-11T21:41:24.0331250Z } 2023-01-11T21:41:24.0331300Z } 2023-01-11T21:41:24.0331358Z } 2023-01-11T21:41:24.0331434Z ''') 2023-01-11T21:41:24.0331440Z 2023-01-11T21:41:24.0331444Z 2023-01-11T21:41:24.0331529Z async_compile.wait(globals()) 2023-01-11T21:41:24.0331599Z del async_compile 2023-01-11T21:41:24.0331604Z 2023-01-11T21:41:24.0331673Z def call(args): 2023-01-11T21:41:24.0331753Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:24.0331819Z args.clear() 2023-01-11T21:41:24.0332011Z buf0 = empty_strided((196, 992), (992, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0332231Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0332299Z del arg0_1 2023-01-11T21:41:24.0332363Z del arg1_1 
2023-01-11T21:41:24.0332426Z del arg2_1 2023-01-11T21:41:24.0332494Z return (buf0, ) 2023-01-11T21:41:24.0332499Z 2023-01-11T21:41:24.0332503Z 2023-01-11T21:41:24.0332576Z if __name__ == "__main__": 2023-01-11T21:41:24.0332678Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0332798Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0333004Z arg0_1 = rand_strided((196, 992), (992, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0333200Z arg1_1 = rand_strided((1, 992), (992, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0333398Z arg2_1 = rand_strided((1, 992), (992, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0333517Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:24.0333525Z 2023-01-11T21:41:24.0333614Z ok (1.603s) 2023-01-11T21:41:24.0333764Z test_scatter_add1_cpu (__main__.CpuTests) ... skip: Flaky test, needs debugging (0.000s) 2023-01-11T21:41:24.0334227Z test_scatter_add2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0334353Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0334602Z [2023-01-11 21:37:43,527] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 413 2023-01-11T21:41:24.0334864Z [2023-01-11 21:37:45,114] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 413 2023-01-11T21:41:24.0334873Z 2023-01-11T21:41:24.0334964Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0335032Z import torch 2023-01-11T21:41:24.0335099Z import random 2023-01-11T21:41:24.0335211Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0335330Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0335334Z 2023-01-11T21:41:24.0335409Z aten = torch.ops.aten 2023-01-11T21:41:24.0335528Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0335617Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0335622Z 2023-01-11T21:41:24.0335626Z 2023-01-11T21:41:24.0335756Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0335956Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0336073Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0336174Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:24.0336277Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:24.0336376Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0336424Z { 2023-01-11T21:41:24.0336516Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0336577Z { 2023-01-11T21:41:24.0336651Z #pragma omp for 2023-01-11T21:41:24.0336727Z for(long i0=0; i0<6; i0+=1) 2023-01-11T21:41:24.0336787Z { 2023-01-11T21:41:24.0336837Z { 2023-01-11T21:41:24.0336898Z { 2023-01-11T21:41:24.0336986Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0337067Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:24.0337128Z } 2023-01-11T21:41:24.0337188Z } 2023-01-11T21:41:24.0337248Z } 2023-01-11T21:41:24.0337310Z #pragma omp for 2023-01-11T21:41:24.0337388Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0337448Z { 2023-01-11T21:41:24.0337524Z #pragma GCC ivdep 
2023-01-11T21:41:24.0337603Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:24.0337705Z { 2023-01-11T21:41:24.0337766Z { 2023-01-11T21:41:24.0337819Z { 2023-01-11T21:41:24.0337916Z auto tmp0 = in_ptr1[i1 + (3*i0)]; 2023-01-11T21:41:24.0338014Z auto tmp1 = in_ptr2[i1 + (3*i0)]; 2023-01-11T21:41:24.0338124Z atomic_add(&out_ptr0[i1 + (3*tmp0)], tmp1); 2023-01-11T21:41:24.0338187Z } 2023-01-11T21:41:24.0338247Z } 2023-01-11T21:41:24.0338305Z } 2023-01-11T21:41:24.0338355Z } 2023-01-11T21:41:24.0338411Z } 2023-01-11T21:41:24.0338467Z } 2023-01-11T21:41:24.0338548Z ''') 2023-01-11T21:41:24.0338554Z 2023-01-11T21:41:24.0338558Z 2023-01-11T21:41:24.0338646Z async_compile.wait(globals()) 2023-01-11T21:41:24.0338718Z del async_compile 2023-01-11T21:41:24.0338723Z 2023-01-11T21:41:24.0338792Z def call(args): 2023-01-11T21:41:24.0338859Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:24.0338962Z args.clear() 2023-01-11T21:41:24.0339158Z buf0 = empty_strided((2, 3), (3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0339346Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0339414Z del arg0_1 2023-01-11T21:41:24.0339482Z del arg1_1 2023-01-11T21:41:24.0339545Z del arg2_1 2023-01-11T21:41:24.0339605Z return (buf0, ) 2023-01-11T21:41:24.0339609Z 2023-01-11T21:41:24.0339613Z 2023-01-11T21:41:24.0339688Z if __name__ == "__main__": 2023-01-11T21:41:24.0339802Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0339922Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0340119Z arg0_1 = rand_strided((2, 3), (3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0340311Z arg1_1 = rand_strided((2, 3), (3, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0340504Z arg2_1 = rand_strided((2, 3), (3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0340629Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:24.0340634Z 2023-01-11T21:41:24.0340689Z ok (1.609s) 2023-01-11T21:41:24.0341158Z test_scatter_add3_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0341283Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0341542Z [2023-01-11 21:37:45,136] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 414 2023-01-11T21:41:24.0341806Z [2023-01-11 21:37:46,666] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 414 2023-01-11T21:41:24.0341815Z 2023-01-11T21:41:24.0341908Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0341977Z import torch 2023-01-11T21:41:24.0342047Z import random 2023-01-11T21:41:24.0342158Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0342265Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0342270Z 2023-01-11T21:41:24.0342480Z aten = torch.ops.aten 2023-01-11T21:41:24.0342629Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0342721Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0342726Z 2023-01-11T21:41:24.0342730Z 2023-01-11T21:41:24.0342867Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0343071Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0343189Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0343294Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:24.0343445Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:24.0343544Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0343605Z { 2023-01-11T21:41:24.0343701Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0343761Z { 2023-01-11T21:41:24.0343840Z #pragma omp for 2023-01-11T21:41:24.0343921Z for(long i0=0; i0<235; i0+=1) 2023-01-11T21:41:24.0343970Z { 2023-01-11T21:41:24.0344105Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0344194Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0344255Z } 2023-01-11T21:41:24.0344347Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0344432Z for(long i0=1880; i0<1885; i0+=1) 2023-01-11T21:41:24.0344492Z { 2023-01-11T21:41:24.0344562Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0344642Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:24.0344703Z } 2023-01-11T21:41:24.0344819Z #pragma omp for 2023-01-11T21:41:24.0344901Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0344958Z { 2023-01-11T21:41:24.0345021Z { 2023-01-11T21:41:24.0345072Z { 2023-01-11T21:41:24.0345164Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:24.0345254Z auto tmp1 = in_ptr2[i0]; 2023-01-11T21:41:24.0345355Z atomic_add(&out_ptr0[tmp0], tmp1); 2023-01-11T21:41:24.0345418Z } 2023-01-11T21:41:24.0345480Z } 2023-01-11T21:41:24.0345528Z } 2023-01-11T21:41:24.0345592Z } 2023-01-11T21:41:24.0345647Z } 2023-01-11T21:41:24.0345724Z ''') 2023-01-11T21:41:24.0345729Z 2023-01-11T21:41:24.0345734Z 2023-01-11T21:41:24.0345818Z async_compile.wait(globals()) 2023-01-11T21:41:24.0345886Z del async_compile 2023-01-11T21:41:24.0345891Z 2023-01-11T21:41:24.0345957Z def call(args): 2023-01-11T21:41:24.0346035Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:24.0346096Z args.clear() 2023-01-11T21:41:24.0346303Z buf0 = empty_strided((5, 29, 13), (377, 13, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0346490Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0346557Z del arg0_1 2023-01-11T21:41:24.0346624Z del arg1_1 
2023-01-11T21:41:24.0346688Z del arg2_1 2023-01-11T21:41:24.0346758Z return (buf0, ) 2023-01-11T21:41:24.0346763Z 2023-01-11T21:41:24.0346767Z 2023-01-11T21:41:24.0346827Z if __name__ == "__main__": 2023-01-11T21:41:24.0346935Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0347053Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0347260Z arg0_1 = rand_strided((5, 29, 13), (377, 13, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0347462Z arg1_1 = rand_strided((1, 1, 4), (4, 4, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0347668Z arg2_1 = rand_strided((1, 1, 10), (10, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0347789Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:24.0347794Z 2023-01-11T21:41:24.0347857Z ok (1.552s) 2023-01-11T21:41:24.0348334Z test_scatter_reduce1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0348457Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0348653Z [W TensorAdvancedIndexing.cpp:1739] Warning: scatter_reduce() is in beta and the API may change at any time. (function operator()) 2023-01-11T21:41:24.0348914Z [2023-01-11 21:37:46,688] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 415 2023-01-11T21:41:24.0349212Z [2023-01-11 21:37:46,700] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 415 2023-01-11T21:41:24.0349217Z 2023-01-11T21:41:24.0349306Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0349374Z import torch 2023-01-11T21:41:24.0349441Z import random 2023-01-11T21:41:24.0349551Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0349667Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0349672Z 2023-01-11T21:41:24.0349736Z aten = torch.ops.aten 2023-01-11T21:41:24.0349866Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0349957Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0349962Z 2023-01-11T21:41:24.0349966Z 2023-01-11T21:41:24.0350097Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0350302Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0350449Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0350549Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:24.0350652Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:24.0350739Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0350799Z { 2023-01-11T21:41:24.0350893Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0350952Z { 2023-01-11T21:41:24.0351027Z #pragma omp for 2023-01-11T21:41:24.0351106Z for(long i0=0; i0<235; i0+=1) 2023-01-11T21:41:24.0351164Z { 2023-01-11T21:41:24.0351284Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0351372Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0351431Z } 2023-01-11T21:41:24.0351522Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0351605Z for(long i0=1880; i0<1885; i0+=1) 2023-01-11T21:41:24.0351667Z { 2023-01-11T21:41:24.0351750Z auto tmp0 = 
in_ptr0[i0]; 2023-01-11T21:41:24.0351817Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:24.0351876Z } 2023-01-11T21:41:24.0351949Z #pragma omp for 2023-01-11T21:41:24.0352028Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0352085Z { 2023-01-11T21:41:24.0352144Z { 2023-01-11T21:41:24.0352194Z { 2023-01-11T21:41:24.0352278Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:24.0352368Z auto tmp1 = in_ptr2[i0]; 2023-01-11T21:41:24.0352469Z atomic_add(&out_ptr0[tmp0], tmp1); 2023-01-11T21:41:24.0352530Z } 2023-01-11T21:41:24.0352592Z } 2023-01-11T21:41:24.0352650Z } 2023-01-11T21:41:24.0352697Z } 2023-01-11T21:41:24.0352753Z } 2023-01-11T21:41:24.0352827Z ''') 2023-01-11T21:41:24.0352832Z 2023-01-11T21:41:24.0352836Z 2023-01-11T21:41:24.0352923Z async_compile.wait(globals()) 2023-01-11T21:41:24.0352996Z del async_compile 2023-01-11T21:41:24.0353001Z 2023-01-11T21:41:24.0353067Z def call(args): 2023-01-11T21:41:24.0353145Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:24.0353202Z args.clear() 2023-01-11T21:41:24.0353406Z buf0 = empty_strided((5, 29, 13), (377, 13, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0353588Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0353654Z del arg0_1 2023-01-11T21:41:24.0353716Z del arg1_1 2023-01-11T21:41:24.0353836Z del arg2_1 2023-01-11T21:41:24.0353907Z return (buf0, ) 2023-01-11T21:41:24.0353912Z 2023-01-11T21:41:24.0353916Z 2023-01-11T21:41:24.0353989Z if __name__ == "__main__": 2023-01-11T21:41:24.0354091Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0354212Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0354423Z arg0_1 = rand_strided((5, 29, 13), (377, 13, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0354659Z arg1_1 = rand_strided((1, 1, 4), (4, 4, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0354859Z arg2_1 = rand_strided((1, 1, 10), (10, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0354979Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:24.0354984Z 2023-01-11T21:41:24.0355050Z ok (0.034s) 2023-01-11T21:41:24.0355522Z test_scatter_reduce2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0355644Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0355919Z [2023-01-11 21:37:46,721] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 416 2023-01-11T21:41:24.0356186Z [2023-01-11 21:37:48,322] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 416 2023-01-11T21:41:24.0356191Z 2023-01-11T21:41:24.0356282Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0356351Z import torch 2023-01-11T21:41:24.0356420Z import random 2023-01-11T21:41:24.0356533Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0356649Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0356655Z 2023-01-11T21:41:24.0356729Z aten = torch.ops.aten 2023-01-11T21:41:24.0356848Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0356938Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0356943Z 2023-01-11T21:41:24.0356948Z 2023-01-11T21:41:24.0357080Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0357283Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0357405Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0357506Z const long* __restrict__ in_ptr1, 2023-01-11T21:41:24.0357607Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:24.0357705Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0357753Z { 2023-01-11T21:41:24.0357846Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0357904Z { 2023-01-11T21:41:24.0357978Z #pragma omp for 2023-01-11T21:41:24.0358055Z for(long i0=0; i0<6; i0+=1) 2023-01-11T21:41:24.0358114Z { 2023-01-11T21:41:24.0358175Z { 2023-01-11T21:41:24.0358225Z { 2023-01-11T21:41:24.0358315Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0358392Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:24.0358454Z } 2023-01-11T21:41:24.0358514Z } 2023-01-11T21:41:24.0358576Z } 2023-01-11T21:41:24.0358641Z #pragma omp for 2023-01-11T21:41:24.0358718Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0358780Z { 2023-01-11T21:41:24.0358857Z #pragma GCC ivdep 2023-01-11T21:41:24.0358936Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:24.0358996Z { 2023-01-11T21:41:24.0359056Z { 2023-01-11T21:41:24.0359108Z { 2023-01-11T21:41:24.0359205Z auto tmp0 = in_ptr1[i1 + (3*i0)]; 2023-01-11T21:41:24.0359308Z auto tmp1 = static_cast(0); 2023-01-11T21:41:24.0359404Z out_ptr0[i1 + (3*tmp0)] = tmp1; 2023-01-11T21:41:24.0359467Z } 2023-01-11T21:41:24.0359528Z } 2023-01-11T21:41:24.0359590Z } 2023-01-11T21:41:24.0359638Z } 2023-01-11T21:41:24.0359709Z #pragma omp for 2023-01-11T21:41:24.0359785Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0359875Z { 2023-01-11T21:41:24.0359954Z #pragma GCC ivdep 2023-01-11T21:41:24.0360032Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:24.0360081Z { 2023-01-11T21:41:24.0360141Z { 2023-01-11T21:41:24.0360203Z { 2023-01-11T21:41:24.0360298Z auto tmp0 = in_ptr1[i1 + (3*i0)]; 2023-01-11T21:41:24.0360394Z auto tmp1 = in_ptr2[i1 + (3*i0)]; 2023-01-11T21:41:24.0360503Z atomic_add(&out_ptr0[i1 + (3*tmp0)], tmp1); 2023-01-11T21:41:24.0360566Z } 2023-01-11T21:41:24.0360616Z } 2023-01-11T21:41:24.0360675Z } 2023-01-11T21:41:24.0360734Z } 2023-01-11T21:41:24.0360794Z } 2023-01-11T21:41:24.0360852Z } 2023-01-11T21:41:24.0360927Z ''') 2023-01-11T21:41:24.0360932Z 2023-01-11T21:41:24.0360936Z 
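The two scatter kernels above show how inductor lowers scatter_reduce on CPU: the self tensor is copied flat and the scattered updates are applied with atomic_add; in this second kernel the scattered slots are zeroed before accumulating, which is consistent with a sum reduction that excludes the original values. A minimal repro sketch follows; the function body and shapes are assumptions rather than the actual test, and it assumes a torch 2.x build with the inductor backend available:

import torch

def fn(x, index, src):
    # sum-reduce src into x along dim 0; include_self=False is an assumption that
    # matches the zero-then-accumulate pattern in the generated kernel above
    return x.scatter_reduce(0, index, src, reduce="sum", include_self=False)

compiled = torch.compile(fn, backend="inductor")
x = torch.randn(2, 3)
index = torch.randint(0, 2, (2, 3))
src = torch.randn(2, 3)
print(torch.allclose(compiled(x, index, src), fn(x, index, src)))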
2023-01-11T21:41:24.0361021Z async_compile.wait(globals()) 2023-01-11T21:41:24.0361080Z del async_compile 2023-01-11T21:41:24.0361102Z 2023-01-11T21:41:24.0361185Z def call(args): 2023-01-11T21:41:24.0361263Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:24.0361332Z args.clear() 2023-01-11T21:41:24.0361526Z buf0 = empty_strided((2, 3), (3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0361713Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0361778Z del arg0_1 2023-01-11T21:41:24.0361843Z del arg1_1 2023-01-11T21:41:24.0361895Z del arg2_1 2023-01-11T21:41:24.0361961Z return (buf0, ) 2023-01-11T21:41:24.0361967Z 2023-01-11T21:41:24.0361971Z 2023-01-11T21:41:24.0362043Z if __name__ == "__main__": 2023-01-11T21:41:24.0362155Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0367553Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0367796Z arg0_1 = rand_strided((2, 3), (3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0367995Z arg1_1 = rand_strided((2, 3), (3, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0368180Z arg2_1 = rand_strided((2, 3), (3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0368329Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:24.0368337Z 2023-01-11T21:41:24.0368429Z ok (1.637s) 2023-01-11T21:41:24.0368952Z test_scheduler_vertical_fusion1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0369080Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0369349Z [2023-01-11 21:37:48,462] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 417 2023-01-11T21:41:24.0369621Z [2023-01-11 21:37:50,100] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 417 2023-01-11T21:41:24.0369627Z 2023-01-11T21:41:24.0369716Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0369785Z import torch 2023-01-11T21:41:24.0369854Z import random 2023-01-11T21:41:24.0369955Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0370073Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0370078Z 2023-01-11T21:41:24.0370157Z aten = torch.ops.aten 2023-01-11T21:41:24.0370289Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0370380Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0370386Z 2023-01-11T21:41:24.0370390Z 2023-01-11T21:41:24.0370523Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0370727Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0370846Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:24.0371006Z float* __restrict__ in_out_ptr1, 2023-01-11T21:41:24.0371110Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0371213Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0371311Z const float* __restrict__ in_ptr2) 2023-01-11T21:41:24.0371371Z { 2023-01-11T21:41:24.0371456Z auto out_ptr1 = in_out_ptr1; 2023-01-11T21:41:24.0371553Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0371601Z { 2023-01-11T21:41:24.0371676Z #pragma omp for 2023-01-11T21:41:24.0371760Z for(long i0=0; i0<135252; i0+=1) 2023-01-11T21:41:24.0371823Z { 2023-01-11T21:41:24.0371962Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0372090Z auto tmp8 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.0372347Z auto tmp1 = at::vec::Vectorized(static_cast(-1.061519070296458e-11)); 2023-01-11T21:41:24.0372435Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0372645Z auto tmp3 = at::vec::Vectorized(static_cast(-1.988366587925593e-08)); 2023-01-11T21:41:24.0372728Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0372806Z auto tmp5 = tmp0 * tmp4; 2023-01-11T21:41:24.0373024Z auto tmp6 = at::vec::Vectorized(static_cast(-3.087032500374211e-07)); 2023-01-11T21:41:24.0373104Z auto tmp7 = tmp5 + tmp6; 2023-01-11T21:41:24.0373320Z auto tmp9 = at::vec::Vectorized(static_cast(1.55093272922008e-10)); 2023-01-11T21:41:24.0373403Z auto tmp10 = tmp8 * tmp9; 2023-01-11T21:41:24.0373485Z auto tmp11 = tmp7 + tmp10; 2023-01-11T21:41:24.0373572Z auto tmp12 = tmp11.reciprocal(); 2023-01-11T21:41:24.0373703Z auto tmp13 = at::vec::Vectorized(static_cast(1.0)); 2023-01-11T21:41:24.0373794Z auto tmp14 = tmp12 * tmp13; 2023-01-11T21:41:24.0373889Z tmp11.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.0373980Z tmp14.store(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0374042Z } 2023-01-11T21:41:24.0374137Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0374216Z for(long i0=1082016; i0<1082016; i0+=1) 2023-01-11T21:41:24.0374278Z { 2023-01-11T21:41:24.0374360Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0374441Z auto tmp8 = in_ptr1[i0]; 2023-01-11T21:41:24.0374614Z auto tmp1 = static_cast(-1.061519070296458e-11); 
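// The tmp2..tmp14 chain below is the scalar form of the fused elementwise
// expression (a Horner-style polynomial in tmp0 plus a term in tmp8, then its
// reciprocal); keeping all of these pointwise ops in one loop instead of
// materializing each intermediate is the vertical fusion the test name refers to.
// With all 1082016 elements covered exactly by the vectorized loop above, this
// remainder loop has a zero trip count.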
2023-01-11T21:41:24.0374696Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0374866Z auto tmp3 = static_cast(-1.988366587925593e-08); 2023-01-11T21:41:24.0374936Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0375017Z auto tmp5 = tmp0 * tmp4; 2023-01-11T21:41:24.0375184Z auto tmp6 = static_cast(-3.087032500374211e-07); 2023-01-11T21:41:24.0375270Z auto tmp7 = tmp5 + tmp6; 2023-01-11T21:41:24.0375436Z auto tmp9 = static_cast(1.55093272922008e-10); 2023-01-11T21:41:24.0375548Z auto tmp10 = tmp8 * tmp9; 2023-01-11T21:41:24.0375665Z auto tmp11 = tmp7 + tmp10; 2023-01-11T21:41:24.0375733Z auto tmp12 = 1 / tmp11; 2023-01-11T21:41:24.0375832Z auto tmp13 = static_cast(1.0); 2023-01-11T21:41:24.0375916Z auto tmp14 = tmp12 * tmp13; 2023-01-11T21:41:24.0375996Z in_out_ptr0[i0] = tmp11; 2023-01-11T21:41:24.0376074Z out_ptr1[i0] = tmp14; 2023-01-11T21:41:24.0376136Z } 2023-01-11T21:41:24.0376211Z #pragma omp for 2023-01-11T21:41:24.0376279Z for(long i0=0; i0<41616; i0+=1) 2023-01-11T21:41:24.0376340Z { 2023-01-11T21:41:24.0376421Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:24.0376483Z { 2023-01-11T21:41:24.0376631Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + (8*i1) + (26*i0)); 2023-01-11T21:41:24.0376799Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr2 + 8*i1); 2023-01-11T21:41:24.0376887Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0376981Z tmp2.store(in_out_ptr0 + (8*i1) + (26*i0)); 2023-01-11T21:41:24.0377048Z } 2023-01-11T21:41:24.0377139Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.0377223Z for(long i1=24; i1<26; i1+=1) 2023-01-11T21:41:24.0377287Z { 2023-01-11T21:41:24.0377388Z auto tmp0 = in_out_ptr0[i1 + (26*i0)]; 2023-01-11T21:41:24.0377473Z auto tmp1 = in_ptr2[i1]; 2023-01-11T21:41:24.0377547Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0377641Z in_out_ptr0[i1 + (26*i0)] = tmp2; 2023-01-11T21:41:24.0377705Z } 2023-01-11T21:41:24.0377765Z } 2023-01-11T21:41:24.0377840Z #pragma omp for 2023-01-11T21:41:24.0377950Z for(long i0=0; i0<135252; i0+=1) 2023-01-11T21:41:24.0378012Z { 2023-01-11T21:41:24.0378132Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0378267Z auto tmp1 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.0378349Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0378446Z tmp2.store(in_out_ptr1 + 8*i0); 2023-01-11T21:41:24.0378506Z } 2023-01-11T21:41:24.0378599Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0378688Z for(long i0=1082016; i0<1082016; i0+=1) 2023-01-11T21:41:24.0378738Z { 2023-01-11T21:41:24.0378822Z auto tmp0 = out_ptr1[i0]; 2023-01-11T21:41:24.0378907Z auto tmp1 = in_out_ptr0[i0]; 2023-01-11T21:41:24.0378997Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0379115Z in_out_ptr1[i0] = tmp2; 2023-01-11T21:41:24.0379202Z } 2023-01-11T21:41:24.0379285Z } 2023-01-11T21:41:24.0379340Z } 2023-01-11T21:41:24.0379464Z ''') 2023-01-11T21:41:24.0379473Z 2023-01-11T21:41:24.0379480Z 2023-01-11T21:41:24.0379575Z async_compile.wait(globals()) 2023-01-11T21:41:24.0379646Z del async_compile 2023-01-11T21:41:24.0379651Z 2023-01-11T21:41:24.0379720Z def call(args): 2023-01-11T21:41:24.0379796Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:24.0379864Z args.clear() 2023-01-11T21:41:24.0380081Z buf0 = empty_strided((204, 204, 26), (5304, 26, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0380154Z buf1 = buf0; del buf0 # reuse 2023-01-11T21:41:24.0380366Z buf2 = empty_strided((204, 204, 26), (5304, 26, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0380447Z buf3 = buf1; del buf1 # reuse 
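# buf0 -> buf1 -> buf3 and buf2 -> buf4 are rebindings of the same storage, not
# copies: inductor reuses dead buffers and passes them to the kernel as
# in_out_ptr0 / in_out_ptr1 so intermediates are overwritten in place.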
2023-01-11T21:41:24.0380524Z buf4 = buf2; del buf2 # reuse 2023-01-11T21:41:24.0380737Z kernel_cpp_0(c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(arg2_1.data_ptr())) 2023-01-11T21:41:24.0380804Z del arg0_1 2023-01-11T21:41:24.0380870Z del arg1_1 2023-01-11T21:41:24.0380924Z del arg2_1 2023-01-11T21:41:24.0380992Z return (buf4, ) 2023-01-11T21:41:24.0380998Z 2023-01-11T21:41:24.0381002Z 2023-01-11T21:41:24.0381074Z if __name__ == "__main__": 2023-01-11T21:41:24.0381186Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0381306Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0381517Z arg0_1 = rand_strided((204, 204, 26), (5304, 26, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0381726Z arg1_1 = rand_strided((204, 204, 26), (5304, 26, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0381919Z arg2_1 = rand_strided((26, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0382027Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:24.0382043Z 2023-01-11T21:41:24.0382096Z ok (1.802s) 2023-01-11T21:41:24.0382773Z test_select_scatter_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0382999Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0383265Z [2023-01-11 21:37:50,172] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 418 2023-01-11T21:41:24.0383530Z [2023-01-11 21:37:51,986] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 418 2023-01-11T21:41:24.0383535Z 2023-01-11T21:41:24.0383627Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0383696Z import torch 2023-01-11T21:41:24.0383765Z import random 2023-01-11T21:41:24.0383866Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0384023Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0384029Z 2023-01-11T21:41:24.0384106Z aten = torch.ops.aten 2023-01-11T21:41:24.0384239Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0384330Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0384335Z 2023-01-11T21:41:24.0384339Z 2023-01-11T21:41:24.0384475Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0384678Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0384794Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0384892Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0384981Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:24.0385077Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0385171Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0385229Z { 2023-01-11T21:41:24.0385327Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0385387Z { 2023-01-11T21:41:24.0385453Z #pragma omp for 2023-01-11T21:41:24.0385534Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0385595Z { 2023-01-11T21:41:24.0385676Z #pragma GCC ivdep 2023-01-11T21:41:24.0385760Z for(long i1=0; i1<197; i1+=1) 2023-01-11T21:41:24.0385822Z { 
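// In this loop the ternary below writes the (8, 38) source row when i1 == 0 and
// copies the original (8, 197, 38) input everywhere else, which is how
// select_scatter along dim 1 at index 0 is lowered to an elementwise select here.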
2023-01-11T21:41:24.0385900Z #pragma GCC ivdep 2023-01-11T21:41:24.0385976Z for(long i2=0; i2<38; i2+=1) 2023-01-11T21:41:24.0386040Z { 2023-01-11T21:41:24.0386103Z { 2023-01-11T21:41:24.0386167Z { 2023-01-11T21:41:24.0386270Z auto tmp3 = in_ptr0[i2 + (38*i0)]; 2023-01-11T21:41:24.0386379Z auto tmp4 = in_ptr1[i2 + (38*i1) + (7486*i0)]; 2023-01-11T21:41:24.0386484Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.0386579Z auto tmp1 = static_cast(0); 2023-01-11T21:41:24.0386672Z auto tmp2 = tmp0 == tmp1; 2023-01-11T21:41:24.0386769Z auto tmp5 = tmp2 ? tmp3 : tmp4; 2023-01-11T21:41:24.0386871Z out_ptr0[i2 + (38*i1) + (7486*i0)] = tmp5; 2023-01-11T21:41:24.0386937Z } 2023-01-11T21:41:24.0387000Z } 2023-01-11T21:41:24.0387061Z } 2023-01-11T21:41:24.0387111Z } 2023-01-11T21:41:24.0387170Z } 2023-01-11T21:41:24.0387242Z #pragma omp for 2023-01-11T21:41:24.0387320Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0387380Z { 2023-01-11T21:41:24.0387454Z #pragma GCC ivdep 2023-01-11T21:41:24.0387539Z for(long i1=0; i1<7486; i1+=1) 2023-01-11T21:41:24.0387589Z { 2023-01-11T21:41:24.0387650Z { 2023-01-11T21:41:24.0387712Z { 2023-01-11T21:41:24.0387838Z auto tmp3 = in_ptr2[i1]; 2023-01-11T21:41:24.0387940Z auto tmp4 = in_ptr1[i1 + (7486*i0)]; 2023-01-11T21:41:24.0388042Z auto tmp0 = static_cast(i0); 2023-01-11T21:41:24.0388141Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.0388222Z auto tmp2 = tmp0 == tmp1; 2023-01-11T21:41:24.0388318Z auto tmp5 = tmp2 ? tmp3 : tmp4; 2023-01-11T21:41:24.0388412Z out_ptr1[i1 + (7486*i0)] = tmp5; 2023-01-11T21:41:24.0388473Z } 2023-01-11T21:41:24.0388538Z } 2023-01-11T21:41:24.0388599Z } 2023-01-11T21:41:24.0388659Z } 2023-01-11T21:41:24.0388707Z } 2023-01-11T21:41:24.0388765Z } 2023-01-11T21:41:24.0388842Z ''') 2023-01-11T21:41:24.0388847Z 2023-01-11T21:41:24.0388852Z 2023-01-11T21:41:24.0388939Z async_compile.wait(globals()) 2023-01-11T21:41:24.0389009Z del async_compile 2023-01-11T21:41:24.0389016Z 2023-01-11T21:41:24.0389111Z def call(args): 2023-01-11T21:41:24.0389192Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:24.0389250Z args.clear() 2023-01-11T21:41:24.0389461Z buf0 = empty_strided((8, 197, 38), (7486, 38, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0389670Z buf1 = empty_strided((8, 197, 38), (7486, 38, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0389882Z kernel_cpp_0(c_void_p(arg1_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0389945Z del arg0_1 2023-01-11T21:41:24.0390012Z del arg1_1 2023-01-11T21:41:24.0390075Z del arg2_1 2023-01-11T21:41:24.0390148Z return (buf0, buf1, ) 2023-01-11T21:41:24.0390153Z 2023-01-11T21:41:24.0390158Z 2023-01-11T21:41:24.0390220Z if __name__ == "__main__": 2023-01-11T21:41:24.0390330Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0390452Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0390662Z arg0_1 = rand_strided((8, 197, 38), (7486, 38, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0390858Z arg1_1 = rand_strided((8, 38), (38, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0391055Z arg2_1 = rand_strided((197, 38), (38, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0391174Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:24.0391179Z 2023-01-11T21:41:24.0391242Z ok (1.850s) 2023-01-11T21:41:24.0391705Z test_sgn_cpu (__main__.CpuTests) ... 
/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0391820Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0392081Z [2023-01-11 21:37:52,012] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 419 2023-01-11T21:41:24.0392345Z [2023-01-11 21:37:53,676] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 419 2023-01-11T21:41:24.0392351Z 2023-01-11T21:41:24.0392442Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0392512Z import torch 2023-01-11T21:41:24.0392577Z import random 2023-01-11T21:41:24.0392689Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0392807Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0392812Z 2023-01-11T21:41:24.0392878Z aten = torch.ops.aten 2023-01-11T21:41:24.0393012Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0393102Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0393107Z 2023-01-11T21:41:24.0393111Z 2023-01-11T21:41:24.0393242Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0393479Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0393599Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0393695Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0393849Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0393898Z { 2023-01-11T21:41:24.0393994Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0394055Z { 2023-01-11T21:41:24.0394130Z #pragma omp for 2023-01-11T21:41:24.0394209Z for(long i0=0; i0<5; i0+=1) 2023-01-11T21:41:24.0394269Z { 2023-01-11T21:41:24.0394405Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0394562Z auto tmp1 = decltype(tmp0)::blendv(decltype(tmp0)(0), decltype(tmp0)(1), decltype(tmp0)(0) < tmp0); 2023-01-11T21:41:24.0394759Z auto tmp2 = decltype(tmp0)::blendv(decltype(tmp0)(0), decltype(tmp0)(1), tmp0 < decltype(tmp0)(0)); 2023-01-11T21:41:24.0394893Z auto tmp3 = tmp1 - tmp2; 2023-01-11T21:41:24.0395023Z auto tmp4 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0395107Z auto tmp5 = tmp0 + tmp4; 2023-01-11T21:41:24.0395273Z auto tmp6 = decltype(tmp5)::blendv(decltype(tmp5)(0), decltype(tmp5)(1), decltype(tmp5)(0) < tmp5); 2023-01-11T21:41:24.0395436Z auto tmp7 = decltype(tmp5)::blendv(decltype(tmp5)(0), decltype(tmp5)(1), tmp5 < decltype(tmp5)(0)); 2023-01-11T21:41:24.0395558Z auto tmp8 = tmp6 - tmp7; 2023-01-11T21:41:24.0395679Z auto tmp9 = tmp8 - tmp4; 2023-01-11T21:41:24.0395757Z tmp3.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0395845Z tmp9.store(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0395904Z } 2023-01-11T21:41:24.0395994Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0396074Z for(long i0=40; i0<41; i0+=1) 2023-01-11T21:41:24.0396137Z { 2023-01-11T21:41:24.0396220Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0396293Z auto tmp1 = tmp0 > 0 ? 1 : 0; 2023-01-11T21:41:24.0396377Z auto tmp2 = tmp0 < 0 ? 
1 : 0; 2023-01-11T21:41:24.0396497Z auto tmp3 = tmp1 - tmp2; 2023-01-11T21:41:24.0396594Z auto tmp4 = static_cast(1); 2023-01-11T21:41:24.0396676Z auto tmp5 = tmp0 + tmp4; 2023-01-11T21:41:24.0396755Z auto tmp6 = tmp5 > 0 ? 1 : 0; 2023-01-11T21:41:24.0396836Z auto tmp7 = tmp5 < 0 ? 1 : 0; 2023-01-11T21:41:24.0396946Z auto tmp8 = tmp6 - tmp7; 2023-01-11T21:41:24.0397068Z auto tmp9 = tmp8 - tmp4; 2023-01-11T21:41:24.0397147Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.0397222Z out_ptr1[i0] = tmp9; 2023-01-11T21:41:24.0397282Z } 2023-01-11T21:41:24.0397339Z } 2023-01-11T21:41:24.0397396Z } 2023-01-11T21:41:24.0397462Z ''') 2023-01-11T21:41:24.0397467Z 2023-01-11T21:41:24.0397474Z 2023-01-11T21:41:24.0397565Z async_compile.wait(globals()) 2023-01-11T21:41:24.0397641Z del async_compile 2023-01-11T21:41:24.0397646Z 2023-01-11T21:41:24.0397716Z def call(args): 2023-01-11T21:41:24.0397784Z arg0_1, = args 2023-01-11T21:41:24.0397850Z args.clear() 2023-01-11T21:41:24.0398044Z buf0 = empty_strided((41, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0398223Z buf1 = empty_strided((41, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0398384Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0398449Z del arg0_1 2023-01-11T21:41:24.0398521Z return (buf0, buf1, ) 2023-01-11T21:41:24.0398527Z 2023-01-11T21:41:24.0398531Z 2023-01-11T21:41:24.0398606Z if __name__ == "__main__": 2023-01-11T21:41:24.0398718Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0398837Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0399061Z arg0_1 = rand_strided((41, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0399158Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0399163Z 2023-01-11T21:41:24.0399228Z ok (1.698s) 2023-01-11T21:41:24.0399704Z test_sgn_extremal_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0399829Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0400088Z [2023-01-11 21:37:53,702] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 420 2023-01-11T21:41:24.0400355Z [2023-01-11 21:37:55,243] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 420 2023-01-11T21:41:24.0400389Z 2023-01-11T21:41:24.0400480Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0400548Z import torch 2023-01-11T21:41:24.0400616Z import random 2023-01-11T21:41:24.0400716Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0400836Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0400841Z 2023-01-11T21:41:24.0400916Z aten = torch.ops.aten 2023-01-11T21:41:24.0401048Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0401137Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0401141Z 2023-01-11T21:41:24.0401146Z 2023-01-11T21:41:24.0401278Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0401480Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0401595Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0401682Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0401742Z { 2023-01-11T21:41:24.0401842Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0401902Z { 2023-01-11T21:41:24.0401975Z #pragma omp for 2023-01-11T21:41:24.0402054Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0402115Z { 2023-01-11T21:41:24.0402165Z { 2023-01-11T21:41:24.0402225Z { 2023-01-11T21:41:24.0402319Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0402408Z auto tmp1 = tmp0 > 0 ? 1 : 0; 2023-01-11T21:41:24.0402495Z auto tmp2 = tmp0 < 0 ? 1 : 0; 2023-01-11T21:41:24.0402626Z auto tmp3 = tmp1 - tmp2; 2023-01-11T21:41:24.0402708Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.0402759Z } 2023-01-11T21:41:24.0402821Z } 2023-01-11T21:41:24.0402883Z } 2023-01-11T21:41:24.0402945Z } 2023-01-11T21:41:24.0403002Z } 2023-01-11T21:41:24.0403078Z ''') 2023-01-11T21:41:24.0403083Z 2023-01-11T21:41:24.0403090Z 2023-01-11T21:41:24.0403179Z async_compile.wait(globals()) 2023-01-11T21:41:24.0403238Z del async_compile 2023-01-11T21:41:24.0403243Z 2023-01-11T21:41:24.0403310Z def call(args): 2023-01-11T21:41:24.0403375Z arg0_1, = args 2023-01-11T21:41:24.0403442Z args.clear() 2023-01-11T21:41:24.0403630Z buf0 = empty_strided((4, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0403761Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0403826Z del arg0_1 2023-01-11T21:41:24.0403883Z return (buf0, ) 2023-01-11T21:41:24.0403888Z 2023-01-11T21:41:24.0403901Z 2023-01-11T21:41:24.0403963Z if __name__ == "__main__": 2023-01-11T21:41:24.0404074Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0404194Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0404382Z arg0_1 = rand_strided((4, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0404486Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0404523Z 2023-01-11T21:41:24.0404586Z ok (1.556s) 2023-01-11T21:41:24.0405066Z test_shape_prop_torch_ones_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. 
It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0405190Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0405437Z [2023-01-11 21:37:55,689] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 421 2023-01-11T21:41:24.0405704Z [2023-01-11 21:37:57,210] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 421 2023-01-11T21:41:24.0405709Z 2023-01-11T21:41:24.0405801Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0405899Z import torch 2023-01-11T21:41:24.0405970Z import random 2023-01-11T21:41:24.0406087Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0406206Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0406211Z 2023-01-11T21:41:24.0406288Z aten = torch.ops.aten 2023-01-11T21:41:24.0406408Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0406499Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0406503Z 2023-01-11T21:41:24.0406508Z 2023-01-11T21:41:24.0406643Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0406848Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0406966Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0407067Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0407128Z { 2023-01-11T21:41:24.0407225Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0407273Z { 2023-01-11T21:41:24.0407356Z #pragma omp for 2023-01-11T21:41:24.0407438Z for(long i0=0; i0<3145728; i0+=1) 2023-01-11T21:41:24.0407500Z { 2023-01-11T21:41:24.0407630Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0407762Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0407845Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0407922Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0407985Z } 2023-01-11T21:41:24.0408077Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0408166Z for(long i0=25165824; i0<25165824; i0+=1) 2023-01-11T21:41:24.0408226Z { 2023-01-11T21:41:24.0408309Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0408406Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.0408476Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0408554Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.0408615Z } 2023-01-11T21:41:24.0408675Z } 2023-01-11T21:41:24.0408731Z } 2023-01-11T21:41:24.0408806Z ''') 2023-01-11T21:41:24.0408812Z 2023-01-11T21:41:24.0408816Z 2023-01-11T21:41:24.0408903Z async_compile.wait(globals()) 2023-01-11T21:41:24.0408961Z del async_compile 2023-01-11T21:41:24.0408966Z 2023-01-11T21:41:24.0409036Z def call(args): 2023-01-11T21:41:24.0409104Z arg0_1, = args 2023-01-11T21:41:24.0409174Z args.clear() 2023-01-11T21:41:24.0409406Z buf0 = empty_strided((8, 12, 512, 512), (3145728, 262144, 512, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0409538Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0409604Z del arg0_1 2023-01-11T21:41:24.0409661Z return (buf0, ) 2023-01-11T21:41:24.0409675Z 2023-01-11T21:41:24.0409679Z 2023-01-11T21:41:24.0409740Z if __name__ == "__main__": 2023-01-11T21:41:24.0409851Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0409973Z from 
torch._inductor.utils import print_performance 2023-01-11T21:41:24.0410235Z arg0_1 = rand_strided((8, 12, 512, 512), (3145728, 262144, 512, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0410342Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0410347Z 2023-01-11T21:41:24.0410410Z ok (3.163s) 2023-01-11T21:41:24.0410878Z test_sigmoid_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0411004Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0411263Z [2023-01-11 21:37:58,427] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 422 2023-01-11T21:41:24.0411540Z [2023-01-11 21:37:59,995] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 422 2023-01-11T21:41:24.0411561Z 2023-01-11T21:41:24.0411644Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0411712Z import torch 2023-01-11T21:41:24.0411781Z import random 2023-01-11T21:41:24.0411891Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0412005Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0412009Z 2023-01-11T21:41:24.0412085Z aten = torch.ops.aten 2023-01-11T21:41:24.0412218Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0412296Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0412301Z 2023-01-11T21:41:24.0412305Z 2023-01-11T21:41:24.0412440Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0412639Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0412755Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0412859Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0412958Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0413053Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0413112Z { 2023-01-11T21:41:24.0413197Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0413256Z { 2023-01-11T21:41:24.0413330Z #pragma omp for 2023-01-11T21:41:24.0413407Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0413467Z { 2023-01-11T21:41:24.0413600Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0413728Z auto tmp2 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.0413849Z auto tmp1 = decltype(tmp0)(1)/(decltype(tmp0)(1) + tmp0.neg().exp()); 2023-01-11T21:41:24.0413931Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:24.0414058Z auto tmp4 = decltype(tmp3)(1)/(decltype(tmp3)(1) + tmp3.neg().exp()); 2023-01-11T21:41:24.0414151Z tmp1.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0414241Z tmp4.store(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0414301Z } 2023-01-11T21:41:24.0414393Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0414462Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:24.0414521Z { 2023-01-11T21:41:24.0414602Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0414678Z auto tmp3 = in_ptr1[i0]; 2023-01-11T21:41:24.0414808Z auto tmp1 = std::exp(-tmp0); 2023-01-11T21:41:24.0414887Z auto tmp2 = 1 / (1 + tmp1); 2023-01-11T21:41:24.0414971Z auto tmp4 = tmp0 + tmp3; 2023-01-11T21:41:24.0415090Z auto tmp5 = std::exp(-tmp4); 
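// 1 / (1 + exp(-x)) below is the scalar sigmoid, matching the vectorized
// neg().exp() form in the loop above; since all 8*8 = 64 elements fit the 8-wide
// vector loop exactly, this remainder loop (i0 = 64; i0 < 64) never executes.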
2023-01-11T21:41:24.0415170Z auto tmp6 = 1 / (1 + tmp5); 2023-01-11T21:41:24.0415246Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.0415321Z out_ptr1[i0] = tmp6; 2023-01-11T21:41:24.0415379Z } 2023-01-11T21:41:24.0415463Z } 2023-01-11T21:41:24.0415522Z } 2023-01-11T21:41:24.0415586Z ''') 2023-01-11T21:41:24.0415592Z 2023-01-11T21:41:24.0415596Z 2023-01-11T21:41:24.0415682Z async_compile.wait(globals()) 2023-01-11T21:41:24.0415751Z del async_compile 2023-01-11T21:41:24.0415757Z 2023-01-11T21:41:24.0415823Z def call(args): 2023-01-11T21:41:24.0415894Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0415962Z args.clear() 2023-01-11T21:41:24.0416155Z buf0 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0416335Z buf1 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0416523Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0416588Z del arg0_1 2023-01-11T21:41:24.0416656Z del arg1_1 2023-01-11T21:41:24.0416730Z return (buf0, buf1, ) 2023-01-11T21:41:24.0416734Z 2023-01-11T21:41:24.0416739Z 2023-01-11T21:41:24.0416841Z if __name__ == "__main__": 2023-01-11T21:41:24.0416953Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0417076Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0417260Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0417452Z arg1_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0417563Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0417568Z 2023-01-11T21:41:24.0417632Z ok (1.591s) 2023-01-11T21:41:24.0418096Z test_signbit_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0418221Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0418480Z [2023-01-11 21:38:00,034] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 423 2023-01-11T21:41:24.0418741Z [2023-01-11 21:38:01,773] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 423 2023-01-11T21:41:24.0418746Z 2023-01-11T21:41:24.0418838Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0418903Z import torch 2023-01-11T21:41:24.0418960Z import random 2023-01-11T21:41:24.0419071Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0419187Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0419191Z 2023-01-11T21:41:24.0419269Z aten = torch.ops.aten 2023-01-11T21:41:24.0419400Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0419489Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0419495Z 2023-01-11T21:41:24.0419499Z 2023-01-11T21:41:24.0419627Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0419834Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0419938Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0420033Z bool* __restrict__ out_ptr0, 2023-01-11T21:41:24.0420126Z long* __restrict__ out_ptr1) 2023-01-11T21:41:24.0420184Z { 2023-01-11T21:41:24.0420280Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0420338Z { 2023-01-11T21:41:24.0420411Z #pragma omp for 2023-01-11T21:41:24.0420480Z for(long i0=0; i0<72; i0+=1) 2023-01-11T21:41:24.0420540Z { 2023-01-11T21:41:24.0420600Z { 2023-01-11T21:41:24.0420661Z { 2023-01-11T21:41:24.0420750Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0420851Z auto tmp1 = std::signbit(tmp0); 2023-01-11T21:41:24.0420961Z auto tmp2 = -tmp0; 2023-01-11T21:41:24.0421103Z auto tmp3 = std::signbit(tmp2); 2023-01-11T21:41:24.0421189Z auto tmp4 = tmp3 == 0; 2023-01-11T21:41:24.0421291Z auto tmp5 = static_cast(tmp4); 2023-01-11T21:41:24.0421392Z auto tmp6 = static_cast(1); 2023-01-11T21:41:24.0421479Z auto tmp7 = tmp5 & tmp6; 2023-01-11T21:41:24.0421559Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:24.0421638Z out_ptr1[i0] = tmp7; 2023-01-11T21:41:24.0421689Z } 2023-01-11T21:41:24.0421752Z } 2023-01-11T21:41:24.0421812Z } 2023-01-11T21:41:24.0421869Z } 2023-01-11T21:41:24.0421927Z } 2023-01-11T21:41:24.0422004Z ''') 2023-01-11T21:41:24.0422009Z 2023-01-11T21:41:24.0422013Z 2023-01-11T21:41:24.0422098Z async_compile.wait(globals()) 2023-01-11T21:41:24.0422158Z del async_compile 2023-01-11T21:41:24.0422162Z 2023-01-11T21:41:24.0422229Z def call(args): 2023-01-11T21:41:24.0422294Z arg0_1, = args 2023-01-11T21:41:24.0422576Z args.clear() 2023-01-11T21:41:24.0422796Z buf0 = empty_strided((1, 2, 6, 6), (72, 36, 6, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:24.0423000Z buf1 = empty_strided((1, 2, 6, 6), (72, 36, 6, 1), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0423163Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0423216Z del arg0_1 2023-01-11T21:41:24.0423291Z return (buf0, buf1, ) 2023-01-11T21:41:24.0423296Z 2023-01-11T21:41:24.0423300Z 2023-01-11T21:41:24.0423373Z if __name__ == "__main__": 2023-01-11T21:41:24.0423485Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0423606Z from 
torch._inductor.utils import print_performance 2023-01-11T21:41:24.0423815Z arg0_1 = rand_strided((1, 2, 6, 6), (72, 36, 6, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0423921Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0423928Z 2023-01-11T21:41:24.0423995Z ok (1.777s) 2023-01-11T21:41:24.0424458Z test_silu_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0424570Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0424833Z [2023-01-11 21:38:01,795] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 424 2023-01-11T21:41:24.0425095Z [2023-01-11 21:38:01,805] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 424 2023-01-11T21:41:24.0425100Z 2023-01-11T21:41:24.0425190Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0425257Z import torch 2023-01-11T21:41:24.0425322Z import random 2023-01-11T21:41:24.0425439Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0425557Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0425562Z 2023-01-11T21:41:24.0425625Z aten = torch.ops.aten 2023-01-11T21:41:24.0425755Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0425843Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0425848Z 2023-01-11T21:41:24.0425852Z 2023-01-11T21:41:24.0425983Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0426185Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0426299Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0426397Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0426456Z { 2023-01-11T21:41:24.0426539Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0426598Z { 2023-01-11T21:41:24.0426673Z #pragma omp for 2023-01-11T21:41:24.0426797Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0426858Z { 2023-01-11T21:41:24.0426992Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0427124Z auto tmp1 = decltype(tmp0)(1)/(decltype(tmp0)(1) + tmp0.neg().exp()); 2023-01-11T21:41:24.0427196Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0427285Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0427345Z } 2023-01-11T21:41:24.0427438Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0427519Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:24.0427576Z { 2023-01-11T21:41:24.0427656Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0427777Z auto tmp1 = std::exp(-tmp0); 2023-01-11T21:41:24.0427856Z auto tmp2 = 1 / (1 + tmp1); 2023-01-11T21:41:24.0427935Z auto tmp3 = tmp0 * tmp2; 2023-01-11T21:41:24.0428013Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.0428074Z } 2023-01-11T21:41:24.0428164Z } 2023-01-11T21:41:24.0428223Z } 2023-01-11T21:41:24.0428287Z ''') 2023-01-11T21:41:24.0428292Z 2023-01-11T21:41:24.0428296Z 2023-01-11T21:41:24.0428380Z async_compile.wait(globals()) 2023-01-11T21:41:24.0428449Z del async_compile 2023-01-11T21:41:24.0428454Z 2023-01-11T21:41:24.0428520Z def call(args): 2023-01-11T21:41:24.0428587Z arg0_1, = args 2023-01-11T21:41:24.0428655Z args.clear() 
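    # Typical inductor CPU wrapper: allocate the output with empty_strided, pass
    # raw data_ptr()s to the compiled C++ kernel through ctypes, free the input
    # as soon as it is dead, and return the output buffer.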
2023-01-11T21:41:24.0428847Z buf0 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0428966Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0429029Z del arg0_1 2023-01-11T21:41:24.0429094Z return (buf0, ) 2023-01-11T21:41:24.0429099Z 2023-01-11T21:41:24.0429103Z 2023-01-11T21:41:24.0429174Z if __name__ == "__main__": 2023-01-11T21:41:24.0429284Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0429404Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0429601Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0429707Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0429711Z 2023-01-11T21:41:24.0429765Z ok (0.030s) 2023-01-11T21:41:24.0430235Z test_simplify_loops_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0430359Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0430617Z [2023-01-11 21:38:01,818] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 425 2023-01-11T21:41:24.0430880Z [2023-01-11 21:38:03,396] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 425 2023-01-11T21:41:24.0430888Z 2023-01-11T21:41:24.0430979Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0431045Z import torch 2023-01-11T21:41:24.0431112Z import random 2023-01-11T21:41:24.0431223Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0431331Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0431346Z 2023-01-11T21:41:24.0431410Z aten = torch.ops.aten 2023-01-11T21:41:24.0431540Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0431627Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0431632Z 2023-01-11T21:41:24.0431636Z 2023-01-11T21:41:24.0431767Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0431968Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0432083Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0432181Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0432300Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0432359Z { 2023-01-11T21:41:24.0432453Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0432513Z { 2023-01-11T21:41:24.0432601Z #pragma omp for collapse(2) 2023-01-11T21:41:24.0432680Z for(long i0=0; i0<6; i0+=1) 2023-01-11T21:41:24.0432741Z { 2023-01-11T21:41:24.0432810Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:24.0432870Z { 2023-01-11T21:41:24.0432955Z for(long i2=0; i2<3; i2+=1) 2023-01-11T21:41:24.0433017Z { 2023-01-11T21:41:24.0433162Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i2) + (30*i1) + (120*i0)); 2023-01-11T21:41:24.0433307Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + (8*i2) + (30*i0) + (180*i1)); 2023-01-11T21:41:24.0433399Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0433531Z tmp2.store(out_ptr0 + (8*i2) + (30*i1) + (120*i0)); 2023-01-11T21:41:24.0433587Z } 2023-01-11T21:41:24.0433680Z #pragma omp simd simdlen(4) 
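// Scalar remainder for i2 = 24..29: the last 6 of each 30-element contiguous
// inner run do not fill another 8-wide vector, so they are handled here after
// the vectorized i2 loop above.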
2023-01-11T21:41:24.0433847Z for(long i2=24; i2<30; i2+=1) 2023-01-11T21:41:24.0433913Z { 2023-01-11T21:41:24.0434021Z auto tmp0 = in_ptr0[i2 + (30*i1) + (120*i0)]; 2023-01-11T21:41:24.0434122Z auto tmp1 = in_ptr1[i2 + (30*i0) + (180*i1)]; 2023-01-11T21:41:24.0434211Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0434300Z out_ptr0[i2 + (30*i1) + (120*i0)] = tmp2; 2023-01-11T21:41:24.0434361Z } 2023-01-11T21:41:24.0434421Z } 2023-01-11T21:41:24.0434482Z } 2023-01-11T21:41:24.0434541Z } 2023-01-11T21:41:24.0434600Z } 2023-01-11T21:41:24.0434669Z ''') 2023-01-11T21:41:24.0434686Z 2023-01-11T21:41:24.0434690Z 2023-01-11T21:41:24.0434766Z async_compile.wait(globals()) 2023-01-11T21:41:24.0434839Z del async_compile 2023-01-11T21:41:24.0434844Z 2023-01-11T21:41:24.0434909Z def call(args): 2023-01-11T21:41:24.0434982Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0435050Z args.clear() 2023-01-11T21:41:24.0435274Z buf0 = empty_strided((2, 3, 4, 5, 6), (360, 120, 30, 6, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0435431Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0435485Z del arg0_1 2023-01-11T21:41:24.0435548Z del arg1_1 2023-01-11T21:41:24.0435615Z return (buf0, ) 2023-01-11T21:41:24.0435620Z 2023-01-11T21:41:24.0435624Z 2023-01-11T21:41:24.0435696Z if __name__ == "__main__": 2023-01-11T21:41:24.0435805Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0435927Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0436150Z arg0_1 = rand_strided((2, 3, 4, 5, 6), (360, 120, 30, 6, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0436371Z arg1_1 = rand_strided((2, 3, 4, 5, 6), (90, 30, 180, 6, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0436474Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0436488Z 2023-01-11T21:41:24.0436542Z ok (1.593s) 2023-01-11T21:41:24.0436999Z test_sin_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0437120Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0437386Z [2023-01-11 21:38:03,430] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 426 2023-01-11T21:41:24.0437652Z [2023-01-11 21:38:05,000] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 426 2023-01-11T21:41:24.0437696Z 2023-01-11T21:41:24.0437789Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0437858Z import torch 2023-01-11T21:41:24.0437926Z import random 2023-01-11T21:41:24.0438028Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0438147Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0438152Z 2023-01-11T21:41:24.0438228Z aten = torch.ops.aten 2023-01-11T21:41:24.0438360Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0438451Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0438456Z 2023-01-11T21:41:24.0438460Z 2023-01-11T21:41:24.0438592Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0438794Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0438912Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0439012Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0439129Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0439190Z { 2023-01-11T21:41:24.0439286Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0439346Z { 2023-01-11T21:41:24.0439417Z #pragma omp for 2023-01-11T21:41:24.0439498Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:24.0439547Z { 2023-01-11T21:41:24.0439680Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0439763Z auto tmp1 = tmp0.sin(); 2023-01-11T21:41:24.0439894Z auto tmp2 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:24.0439975Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.0440103Z auto tmp4 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0440185Z auto tmp5 = tmp0 + tmp4; 2023-01-11T21:41:24.0440264Z auto tmp6 = tmp5.sin(); 2023-01-11T21:41:24.0440341Z tmp3.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0440433Z tmp6.store(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0440496Z } 2023-01-11T21:41:24.0440588Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0440670Z for(long i0=256; i0<256; i0+=1) 2023-01-11T21:41:24.0440728Z { 2023-01-11T21:41:24.0440809Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0440884Z auto tmp1 = std::sin(tmp0); 2023-01-11T21:41:24.0440980Z auto tmp2 = static_cast(2); 2023-01-11T21:41:24.0441060Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.0441156Z auto tmp4 = static_cast(1); 2023-01-11T21:41:24.0441239Z auto tmp5 = tmp0 + tmp4; 2023-01-11T21:41:24.0441325Z auto tmp6 = std::sin(tmp5); 2023-01-11T21:41:24.0441403Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.0441467Z out_ptr1[i0] = tmp6; 2023-01-11T21:41:24.0441530Z } 2023-01-11T21:41:24.0441590Z } 2023-01-11T21:41:24.0441651Z } 2023-01-11T21:41:24.0441731Z ''') 2023-01-11T21:41:24.0441736Z 2023-01-11T21:41:24.0441740Z 2023-01-11T21:41:24.0441826Z async_compile.wait(globals()) 2023-01-11T21:41:24.0441897Z del async_compile 2023-01-11T21:41:24.0441902Z 2023-01-11T21:41:24.0441958Z def call(args): 2023-01-11T21:41:24.0442027Z arg0_1, = args 2023-01-11T21:41:24.0442097Z args.clear() 2023-01-11T21:41:24.0442298Z buf0 = empty_strided((16, 16), (16, 1), device='cpu', 
dtype=torch.float32) 2023-01-11T21:41:24.0442493Z buf1 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0442657Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0442722Z del arg0_1 2023-01-11T21:41:24.0442786Z return (buf0, buf1, ) 2023-01-11T21:41:24.0442804Z 2023-01-11T21:41:24.0442808Z 2023-01-11T21:41:24.0442870Z if __name__ == "__main__": 2023-01-11T21:41:24.0442982Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0443134Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0443330Z arg0_1 = rand_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0443436Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0443440Z 2023-01-11T21:41:24.0443500Z ok (1.604s) 2023-01-11T21:41:24.0443973Z test_sizehint_issue1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0444097Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0444358Z [2023-01-11 21:38:05,161] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 427 2023-01-11T21:41:24.0444639Z [2023-01-11 21:38:06,739] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 427 2023-01-11T21:41:24.0444657Z 2023-01-11T21:41:24.0444740Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0444805Z import torch 2023-01-11T21:41:24.0444870Z import random 2023-01-11T21:41:24.0444982Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0445099Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0445104Z 2023-01-11T21:41:24.0445182Z aten = torch.ops.aten 2023-01-11T21:41:24.0445312Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0445390Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0445395Z 2023-01-11T21:41:24.0445399Z 2023-01-11T21:41:24.0445529Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0445729Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0445845Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0445948Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0446006Z { 2023-01-11T21:41:24.0446100Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0446148Z { 2023-01-11T21:41:24.0446234Z #pragma omp for collapse(2) 2023-01-11T21:41:24.0446312Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0446371Z { 2023-01-11T21:41:24.0446454Z for(long i1=0; i1<384; i1+=1) 2023-01-11T21:41:24.0446514Z { 2023-01-11T21:41:24.0446592Z #pragma GCC ivdep 2023-01-11T21:41:24.0446672Z for(long i2=0; i2<196; i2+=1) 2023-01-11T21:41:24.0446732Z { 2023-01-11T21:41:24.0446794Z { 2023-01-11T21:41:24.0446858Z { 2023-01-11T21:41:24.0446969Z auto tmp0 = static_cast(4*(i2 / 14)); 2023-01-11T21:41:24.0447080Z auto tmp1 = static_cast((i1 / 4) % 4); 2023-01-11T21:41:24.0447177Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0447275Z auto tmp3 = static_cast(4*(i2 % 14)); 2023-01-11T21:41:24.0447383Z auto tmp4 = static_cast(i1 % 4); 2023-01-11T21:41:24.0447472Z auto 
tmp5 = tmp3 + tmp4; 2023-01-11T21:41:24.0447594Z auto tmp6 = in_ptr0[tmp5 + (56*tmp2) + (3136*(i1 / 16)) + (75264*i0)]; 2023-01-11T21:41:24.0447694Z out_ptr0[i2 + (196*i1) + (75264*i0)] = tmp6; 2023-01-11T21:41:24.0447759Z } 2023-01-11T21:41:24.0447822Z } 2023-01-11T21:41:24.0447882Z } 2023-01-11T21:41:24.0447932Z } 2023-01-11T21:41:24.0447990Z } 2023-01-11T21:41:24.0448049Z } 2023-01-11T21:41:24.0448104Z } 2023-01-11T21:41:24.0448180Z ''') 2023-01-11T21:41:24.0448185Z 2023-01-11T21:41:24.0448190Z 2023-01-11T21:41:24.0448276Z async_compile.wait(globals()) 2023-01-11T21:41:24.0448405Z del async_compile 2023-01-11T21:41:24.0448410Z 2023-01-11T21:41:24.0448467Z def call(args): 2023-01-11T21:41:24.0448532Z arg0_1, = args 2023-01-11T21:41:24.0448601Z args.clear() 2023-01-11T21:41:24.0448816Z buf0 = empty_strided((2, 384, 196), (75264, 196, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0448944Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0449008Z del arg0_1 2023-01-11T21:41:24.0449076Z return (buf0, ) 2023-01-11T21:41:24.0449081Z 2023-01-11T21:41:24.0449085Z 2023-01-11T21:41:24.0449146Z if __name__ == "__main__": 2023-01-11T21:41:24.0449257Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0449376Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0449598Z arg0_1 = rand_strided((2, 24, 56, 56), (75264, 3136, 56, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0449705Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0449712Z 2023-01-11T21:41:24.0449804Z ok (1.742s) 2023-01-11T21:41:24.0450273Z test_slice1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0450398Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0450657Z [2023-01-11 21:38:06,786] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 428 2023-01-11T21:41:24.0450907Z [2023-01-11 21:38:08,341] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 428 2023-01-11T21:41:24.0450922Z 2023-01-11T21:41:24.0451002Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0451067Z import torch 2023-01-11T21:41:24.0451138Z import random 2023-01-11T21:41:24.0451249Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0451366Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0451371Z 2023-01-11T21:41:24.0451443Z aten = torch.ops.aten 2023-01-11T21:41:24.0451572Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0451651Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0451655Z 2023-01-11T21:41:24.0451667Z 2023-01-11T21:41:24.0451786Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0451982Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0452099Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0452196Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0452286Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0452342Z { 2023-01-11T21:41:24.0452436Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0452488Z { 2023-01-11T21:41:24.0452576Z #pragma omp for collapse(2) 2023-01-11T21:41:24.0452653Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0452712Z { 2023-01-11T21:41:24.0452796Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.0452857Z { 2023-01-11T21:41:24.0452919Z { 2023-01-11T21:41:24.0452972Z { 2023-01-11T21:41:24.0453075Z auto tmp0 = in_ptr0[(2*i1) + (40*i0)]; 2023-01-11T21:41:24.0453177Z auto tmp1 = in_ptr0[20 + (2*i1) + (40*i0)]; 2023-01-11T21:41:24.0453267Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0453369Z auto tmp3 = static_cast(1); 2023-01-11T21:41:24.0453461Z auto tmp4 = tmp0 + tmp3; 2023-01-11T21:41:24.0453551Z auto tmp5 = tmp1 + tmp3; 2023-01-11T21:41:24.0453630Z auto tmp6 = tmp4 + tmp5; 2023-01-11T21:41:24.0453760Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.0453853Z out_ptr1[i1 + (10*i0)] = tmp6; 2023-01-11T21:41:24.0453917Z } 2023-01-11T21:41:24.0453977Z } 2023-01-11T21:41:24.0454038Z } 2023-01-11T21:41:24.0454100Z } 2023-01-11T21:41:24.0454148Z } 2023-01-11T21:41:24.0454204Z } 2023-01-11T21:41:24.0454281Z ''') 2023-01-11T21:41:24.0454286Z 2023-01-11T21:41:24.0454290Z 2023-01-11T21:41:24.0454376Z async_compile.wait(globals()) 2023-01-11T21:41:24.0454444Z del async_compile 2023-01-11T21:41:24.0454449Z 2023-01-11T21:41:24.0454516Z def call(args): 2023-01-11T21:41:24.0454582Z arg0_1, = args 2023-01-11T21:41:24.0454638Z args.clear() 2023-01-11T21:41:24.0454834Z buf0 = empty_strided((2, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0455027Z buf1 = empty_strided((2, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0455213Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0455281Z del arg0_1 2023-01-11T21:41:24.0455354Z return (buf0, buf1, ) 2023-01-11T21:41:24.0455359Z 2023-01-11T21:41:24.0455363Z 2023-01-11T21:41:24.0455438Z if __name__ == "__main__": 
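# The block below is the standalone repro/benchmark harness that inductor appends to each of these code dumps:
# rand_strided builds an input with the exact size/stride the graph was guarded on, call() allocates the output
# buffers with empty_strided and dispatches to the compiled C++ kernel through raw ctypes pointers, and
# print_performance benchmarks the wrapper call and prints the measured time.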
2023-01-11T21:41:24.0455547Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0455656Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0455858Z arg0_1 = rand_strided((2, 20, 2), (40, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0455961Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0455966Z 2023-01-11T21:41:24.0456029Z ok (1.599s) 2023-01-11T21:41:24.0456491Z test_slice2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0456616Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0456871Z [2023-01-11 21:38:08,385] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 429 2023-01-11T21:41:24.0457132Z [2023-01-11 21:38:09,973] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 429 2023-01-11T21:41:24.0457137Z 2023-01-11T21:41:24.0457226Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0457281Z import torch 2023-01-11T21:41:24.0457350Z import random 2023-01-11T21:41:24.0457459Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0457580Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0457585Z 2023-01-11T21:41:24.0457660Z aten = torch.ops.aten 2023-01-11T21:41:24.0457790Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0457881Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0457886Z 2023-01-11T21:41:24.0457891Z 2023-01-11T21:41:24.0458022Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0458214Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0458331Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0458432Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0458527Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0458587Z { 2023-01-11T21:41:24.0458682Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0458740Z { 2023-01-11T21:41:24.0458804Z #pragma omp for 2023-01-11T21:41:24.0458883Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.0458943Z { 2023-01-11T21:41:24.0459006Z { 2023-01-11T21:41:24.0459070Z { 2023-01-11T21:41:24.0459166Z auto tmp0 = in_ptr0[1 + (4*i0)]; 2023-01-11T21:41:24.0459304Z auto tmp1 = in_ptr0[42 + (4*i0)]; 2023-01-11T21:41:24.0459383Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0459485Z auto tmp3 = static_cast(1); 2023-01-11T21:41:24.0459573Z auto tmp4 = tmp0 + tmp3; 2023-01-11T21:41:24.0459672Z auto tmp5 = static_cast(2); 2023-01-11T21:41:24.0459758Z auto tmp6 = tmp1 + tmp5; 2023-01-11T21:41:24.0459845Z auto tmp7 = tmp4 + tmp6; 2023-01-11T21:41:24.0459926Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.0459996Z out_ptr1[i0] = tmp7; 2023-01-11T21:41:24.0460059Z } 2023-01-11T21:41:24.0460119Z } 2023-01-11T21:41:24.0460178Z } 2023-01-11T21:41:24.0460236Z } 2023-01-11T21:41:24.0460295Z } 2023-01-11T21:41:24.0460361Z ''') 2023-01-11T21:41:24.0460376Z 2023-01-11T21:41:24.0460380Z 2023-01-11T21:41:24.0460456Z async_compile.wait(globals()) 2023-01-11T21:41:24.0460557Z del async_compile 2023-01-11T21:41:24.0460562Z 2023-01-11T21:41:24.0460632Z def call(args): 
2023-01-11T21:41:24.0460697Z arg0_1, = args 2023-01-11T21:41:24.0460765Z args.clear() 2023-01-11T21:41:24.0460963Z buf0 = empty_strided((1, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0461158Z buf1 = empty_strided((1, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0461307Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0461374Z del arg0_1 2023-01-11T21:41:24.0461449Z return (buf0, buf1, ) 2023-01-11T21:41:24.0461454Z 2023-01-11T21:41:24.0461458Z 2023-01-11T21:41:24.0461533Z if __name__ == "__main__": 2023-01-11T21:41:24.0461644Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0461764Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0461970Z arg0_1 = rand_strided((2, 20, 2), (40, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0462075Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0462080Z 2023-01-11T21:41:24.0462132Z ok (1.632s) 2023-01-11T21:41:24.0462788Z test_slice_mutation1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0462917Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0463178Z [2023-01-11 21:38:10,023] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 430 2023-01-11T21:41:24.0463439Z [2023-01-11 21:38:11,596] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 430 2023-01-11T21:41:24.0463448Z 2023-01-11T21:41:24.0463541Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0463610Z import torch 2023-01-11T21:41:24.0463679Z import random 2023-01-11T21:41:24.0463793Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0463900Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0463916Z 2023-01-11T21:41:24.0463980Z aten = torch.ops.aten 2023-01-11T21:41:24.0464109Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0464199Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0464204Z 2023-01-11T21:41:24.0464208Z 2023-01-11T21:41:24.0464339Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0464542Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0464656Z extern "C" void kernel(float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0464749Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.0464840Z float* __restrict__ out_ptr2, 2023-01-11T21:41:24.0464983Z float* __restrict__ out_ptr3) 2023-01-11T21:41:24.0465038Z { 2023-01-11T21:41:24.0465135Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0465196Z { 2023-01-11T21:41:24.0465272Z #pragma omp for 2023-01-11T21:41:24.0465356Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0465404Z { 2023-01-11T21:41:24.0465540Z auto tmp0 = at::vec::Vectorized(static_cast(0)); 2023-01-11T21:41:24.0465671Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0465752Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0465841Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0465925Z tmp2.store(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0465986Z } 
2023-01-11T21:41:24.0466080Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0466148Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:24.0466211Z { 2023-01-11T21:41:24.0466342Z auto tmp0 = static_cast(0); 2023-01-11T21:41:24.0466438Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.0466519Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0466596Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:24.0466660Z out_ptr1[i0] = tmp2; 2023-01-11T21:41:24.0466719Z } 2023-01-11T21:41:24.0466790Z #pragma omp for 2023-01-11T21:41:24.0466868Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0466926Z { 2023-01-11T21:41:24.0466987Z { 2023-01-11T21:41:24.0467046Z { 2023-01-11T21:41:24.0467140Z auto tmp0 = static_cast(3.0); 2023-01-11T21:41:24.0467227Z out_ptr0[3 + (8*i0)] = tmp0; 2023-01-11T21:41:24.0467289Z } 2023-01-11T21:41:24.0467349Z } 2023-01-11T21:41:24.0467407Z } 2023-01-11T21:41:24.0467481Z #pragma omp for 2023-01-11T21:41:24.0467566Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0467616Z { 2023-01-11T21:41:24.0467745Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0467834Z tmp0.store(out_ptr2 + 8*i0); 2023-01-11T21:41:24.0467895Z } 2023-01-11T21:41:24.0467984Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0468063Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:24.0468125Z { 2023-01-11T21:41:24.0468195Z auto tmp0 = out_ptr0[i0]; 2023-01-11T21:41:24.0468273Z out_ptr2[i0] = tmp0; 2023-01-11T21:41:24.0468334Z } 2023-01-11T21:41:24.0468408Z #pragma omp for 2023-01-11T21:41:24.0468488Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:24.0468549Z { 2023-01-11T21:41:24.0468683Z auto tmp0 = at::vec::Vectorized(static_cast(4.0)); 2023-01-11T21:41:24.0468768Z tmp0.store(out_ptr0 + 32 + (8*i0)); 2023-01-11T21:41:24.0468828Z } 2023-01-11T21:41:24.0468926Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0469004Z for(long i0=8; i0<8; i0+=1) 2023-01-11T21:41:24.0469064Z { 2023-01-11T21:41:24.0469164Z auto tmp0 = static_cast(4.0); 2023-01-11T21:41:24.0469246Z out_ptr0[32 + i0] = tmp0; 2023-01-11T21:41:24.0469295Z } 2023-01-11T21:41:24.0469367Z #pragma omp for 2023-01-11T21:41:24.0469445Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0469505Z { 2023-01-11T21:41:24.0469635Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0469769Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0469854Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0469929Z tmp2.store(out_ptr3 + 8*i0); 2023-01-11T21:41:24.0469990Z } 2023-01-11T21:41:24.0470081Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0470161Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:24.0470257Z { 2023-01-11T21:41:24.0470339Z auto tmp0 = out_ptr0[i0]; 2023-01-11T21:41:24.0470433Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.0470504Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0470579Z out_ptr3[i0] = tmp2; 2023-01-11T21:41:24.0470639Z } 2023-01-11T21:41:24.0470699Z } 2023-01-11T21:41:24.0470756Z } 2023-01-11T21:41:24.0470836Z ''') 2023-01-11T21:41:24.0470841Z 2023-01-11T21:41:24.0470845Z 2023-01-11T21:41:24.0470928Z async_compile.wait(globals()) 2023-01-11T21:41:24.0470986Z del async_compile 2023-01-11T21:41:24.0470992Z 2023-01-11T21:41:24.0471058Z def call(args): 2023-01-11T21:41:24.0471125Z arg0_1, = args 2023-01-11T21:41:24.0471191Z args.clear() 2023-01-11T21:41:24.0471386Z buf0 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0471575Z buf1 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0471797Z buf3 = 
empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0471971Z buf5 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0472159Z kernel_cpp_0(c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf5.data_ptr())) 2023-01-11T21:41:24.0472244Z return (buf0, buf1, buf3, buf5, ) 2023-01-11T21:41:24.0472249Z 2023-01-11T21:41:24.0472253Z 2023-01-11T21:41:24.0472329Z if __name__ == "__main__": 2023-01-11T21:41:24.0472440Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0472560Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0472753Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0472860Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0472864Z 2023-01-11T21:41:24.0472929Z ok (1.624s) 2023-01-11T21:41:24.0473254Z test_slice_mutation2_cpu (__main__.CpuTests) ... [2023-01-11 21:38:11,633] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 431 2023-01-11T21:41:24.0473525Z [2023-01-11 21:38:13,244] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 431 2023-01-11T21:41:24.0473530Z 2023-01-11T21:41:24.0473620Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0473688Z import torch 2023-01-11T21:41:24.0473813Z import random 2023-01-11T21:41:24.0473927Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0474045Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0474050Z 2023-01-11T21:41:24.0474124Z aten = torch.ops.aten 2023-01-11T21:41:24.0474244Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0474331Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0474336Z 2023-01-11T21:41:24.0474340Z 2023-01-11T21:41:24.0474471Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0474674Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0474792Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0474887Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0474981Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.0475072Z float* __restrict__ out_ptr2) 2023-01-11T21:41:24.0475121Z { 2023-01-11T21:41:24.0475212Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0475269Z { 2023-01-11T21:41:24.0475342Z #pragma omp for 2023-01-11T21:41:24.0475420Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0475480Z { 2023-01-11T21:41:24.0475603Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 20 + (8*i0)); 2023-01-11T21:41:24.0475733Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0475815Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0475899Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0475993Z } 2023-01-11T21:41:24.0476081Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0476193Z for(long i0=16; i0<20; i0+=1) 2023-01-11T21:41:24.0476275Z { 2023-01-11T21:41:24.0476357Z auto tmp0 = in_ptr0[20 + i0]; 2023-01-11T21:41:24.0476453Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.0476532Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0476608Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.0476667Z } 2023-01-11T21:41:24.0476737Z #pragma omp for 2023-01-11T21:41:24.0476804Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0476862Z { 2023-01-11T21:41:24.0476989Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0477088Z 
tmp0.store(out_ptr1 + 20 + (8*i0)); 2023-01-11T21:41:24.0477147Z } 2023-01-11T21:41:24.0477238Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0477347Z for(long i0=16; i0<20; i0+=1) 2023-01-11T21:41:24.0477399Z { 2023-01-11T21:41:24.0477478Z auto tmp0 = out_ptr0[i0]; 2023-01-11T21:41:24.0477558Z out_ptr1[20 + i0] = tmp0; 2023-01-11T21:41:24.0477614Z } 2023-01-11T21:41:24.0477681Z #pragma omp for 2023-01-11T21:41:24.0477757Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:24.0477814Z { 2023-01-11T21:41:24.0477936Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr1 + 1 + (8*i0)); 2023-01-11T21:41:24.0478064Z auto tmp1 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:24.0478147Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0478230Z tmp2.store(out_ptr2 + 8*i0); 2023-01-11T21:41:24.0478284Z } 2023-01-11T21:41:24.0478372Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0478450Z for(long i0=8; i0<9; i0+=1) 2023-01-11T21:41:24.0478497Z { 2023-01-11T21:41:24.0478581Z auto tmp0 = out_ptr1[1 + i0]; 2023-01-11T21:41:24.0478680Z auto tmp1 = static_cast(2); 2023-01-11T21:41:24.0478759Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0478833Z out_ptr2[i0] = tmp2; 2023-01-11T21:41:24.0478891Z } 2023-01-11T21:41:24.0478962Z #pragma omp for 2023-01-11T21:41:24.0479028Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:24.0479088Z { 2023-01-11T21:41:24.0479211Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr2 + 8*i0); 2023-01-11T21:41:24.0479299Z tmp0.store(out_ptr1 + 2 + (8*i0)); 2023-01-11T21:41:24.0479360Z } 2023-01-11T21:41:24.0479446Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0479520Z for(long i0=8; i0<9; i0+=1) 2023-01-11T21:41:24.0479567Z { 2023-01-11T21:41:24.0479647Z auto tmp0 = out_ptr2[i0]; 2023-01-11T21:41:24.0479721Z out_ptr1[2 + i0] = tmp0; 2023-01-11T21:41:24.0479782Z } 2023-01-11T21:41:24.0479839Z } 2023-01-11T21:41:24.0479899Z } 2023-01-11T21:41:24.0479982Z ''') 2023-01-11T21:41:24.0479988Z 2023-01-11T21:41:24.0479992Z 2023-01-11T21:41:24.0480068Z async_compile.wait(globals()) 2023-01-11T21:41:24.0480139Z del async_compile 2023-01-11T21:41:24.0480144Z 2023-01-11T21:41:24.0480215Z def call(args): 2023-01-11T21:41:24.0480281Z arg0_1, = args 2023-01-11T21:41:24.0480347Z args.clear() 2023-01-11T21:41:24.0480545Z buf0 = empty_strided((1, 20), (20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0480736Z buf2 = empty_strided((1, 9), (9, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0480919Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:24.0480974Z del arg0_1 2023-01-11T21:41:24.0481034Z return () 2023-01-11T21:41:24.0481039Z 2023-01-11T21:41:24.0481043Z 2023-01-11T21:41:24.0481114Z if __name__ == "__main__": 2023-01-11T21:41:24.0481227Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0481381Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0481577Z arg0_1 = rand_strided((1, 64), (64, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0481682Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0481686Z 2023-01-11T21:41:24.0481749Z ok (1.646s) 2023-01-11T21:41:24.0482210Z test_slice_scatter2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0482333Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0482592Z [2023-01-11 21:38:13,281] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 432 2023-01-11T21:41:24.0482888Z [2023-01-11 21:38:14,840] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 432 2023-01-11T21:41:24.0482894Z 2023-01-11T21:41:24.0482986Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0483053Z import torch 2023-01-11T21:41:24.0483120Z import random 2023-01-11T21:41:24.0483233Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0483348Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0483353Z 2023-01-11T21:41:24.0483417Z aten = torch.ops.aten 2023-01-11T21:41:24.0483547Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0483634Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0483640Z 2023-01-11T21:41:24.0483644Z 2023-01-11T21:41:24.0483778Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0483982Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0484100Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0484197Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0484256Z { 2023-01-11T21:41:24.0484339Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0484397Z { 2023-01-11T21:41:24.0484469Z #pragma omp for 2023-01-11T21:41:24.0484549Z for(long i0=0; i0<75648; i0+=1) 2023-01-11T21:41:24.0484609Z { 2023-01-11T21:41:24.0484739Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0484826Z tmp0.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0484874Z } 2023-01-11T21:41:24.0484964Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0485053Z for(long i0=605184; i0<605184; i0+=1) 2023-01-11T21:41:24.0485111Z { 2023-01-11T21:41:24.0485190Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0485267Z out_ptr0[i0] = tmp0; 2023-01-11T21:41:24.0485316Z } 2023-01-11T21:41:24.0485372Z } 2023-01-11T21:41:24.0485431Z } 2023-01-11T21:41:24.0485509Z ''') 2023-01-11T21:41:24.0485514Z 2023-01-11T21:41:24.0485518Z 2023-01-11T21:41:24.0485605Z async_compile.wait(globals()) 2023-01-11T21:41:24.0485673Z del async_compile 2023-01-11T21:41:24.0485678Z 2023-01-11T21:41:24.0485744Z def call(args): 2023-01-11T21:41:24.0485815Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0485873Z args.clear() 2023-01-11T21:41:24.0486087Z buf0 = empty_strided((8, 197, 384), (75648, 384, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0486212Z kernel_cpp_0(c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0486277Z del arg1_1 2023-01-11T21:41:24.0486346Z return (buf0, ) 2023-01-11T21:41:24.0486351Z 2023-01-11T21:41:24.0486355Z 2023-01-11T21:41:24.0486428Z if __name__ == "__main__": 2023-01-11T21:41:24.0486538Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0486650Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0486894Z arg0_1 = rand_strided((8, 197, 384), (75648, 384, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0487103Z arg1_1 = rand_strided((8, 197, 384), (75648, 384, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0487216Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0487221Z 2023-01-11T21:41:24.0487281Z ok 
(1.611s) 2023-01-11T21:41:24.0487747Z test_slice_scatter_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0487871Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0488128Z [2023-01-11 21:38:14,890] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 433 2023-01-11T21:41:24.0488429Z [2023-01-11 21:38:16,491] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 433 2023-01-11T21:41:24.0488435Z 2023-01-11T21:41:24.0488528Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0488584Z import torch 2023-01-11T21:41:24.0488650Z import random 2023-01-11T21:41:24.0488763Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0488880Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0488885Z 2023-01-11T21:41:24.0488959Z aten = torch.ops.aten 2023-01-11T21:41:24.0489089Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0489177Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0489182Z 2023-01-11T21:41:24.0489187Z 2023-01-11T21:41:24.0489319Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0489513Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0489631Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0489734Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0489832Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0489924Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0489981Z { 2023-01-11T21:41:24.0490075Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0490123Z { 2023-01-11T21:41:24.0490193Z #pragma omp for 2023-01-11T21:41:24.0490273Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:24.0490332Z { 2023-01-11T21:41:24.0490410Z #pragma GCC ivdep 2023-01-11T21:41:24.0490494Z for(long i1=0; i1<100; i1+=1) 2023-01-11T21:41:24.0490556Z { 2023-01-11T21:41:24.0490608Z { 2023-01-11T21:41:24.0490667Z { 2023-01-11T21:41:24.0490769Z auto tmp8 = in_ptr1[i1 + (100*i0)]; 2023-01-11T21:41:24.0490869Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.0490972Z auto tmp1 = static_cast(10); 2023-01-11T21:41:24.0491063Z auto tmp2 = tmp0 >= tmp1; 2023-01-11T21:41:24.0491158Z auto tmp3 = static_cast(90); 2023-01-11T21:41:24.0491238Z auto tmp4 = tmp0 < tmp3; 2023-01-11T21:41:24.0491329Z auto tmp5 = tmp2 & tmp4; 2023-01-11T21:41:24.0491409Z float tmp6 = 0.0; 2023-01-11T21:41:24.0491482Z if(tmp5) 2023-01-11T21:41:24.0491547Z { 2023-01-11T21:41:24.0491715Z auto tmp7 = in_ptr0[(-10) + i1 + (80*i0)]; 2023-01-11T21:41:24.0491796Z tmp6 = tmp7; 2023-01-11T21:41:24.0491850Z } 2023-01-11T21:41:24.0491945Z auto tmp9 = tmp5 ? 
tmp6 : tmp8; 2023-01-11T21:41:24.0492051Z auto tmp10 = static_cast(i1 % 2); 2023-01-11T21:41:24.0492180Z auto tmp11 = static_cast(0); 2023-01-11T21:41:24.0492276Z auto tmp12 = tmp10 == tmp11; 2023-01-11T21:41:24.0492366Z auto tmp13 = tmp5 & tmp12; 2023-01-11T21:41:24.0492449Z float tmp14 = 0.0; 2023-01-11T21:41:24.0492511Z if(tmp13) 2023-01-11T21:41:24.0492575Z { 2023-01-11T21:41:24.0492752Z auto tmp15 = in_ptr0[(-5) + (80*i0) + (i1 / 2)]; 2023-01-11T21:41:24.0492831Z tmp14 = tmp15; 2023-01-11T21:41:24.0492894Z } 2023-01-11T21:41:24.0492992Z auto tmp16 = tmp13 ? tmp14 : tmp8; 2023-01-11T21:41:24.0493085Z out_ptr0[i1 + (100*i0)] = tmp9; 2023-01-11T21:41:24.0493178Z out_ptr1[i1 + (100*i0)] = tmp16; 2023-01-11T21:41:24.0493231Z } 2023-01-11T21:41:24.0493292Z } 2023-01-11T21:41:24.0493382Z } 2023-01-11T21:41:24.0493444Z } 2023-01-11T21:41:24.0493502Z } 2023-01-11T21:41:24.0493559Z } 2023-01-11T21:41:24.0493625Z ''') 2023-01-11T21:41:24.0493630Z 2023-01-11T21:41:24.0493644Z 2023-01-11T21:41:24.0493723Z async_compile.wait(globals()) 2023-01-11T21:41:24.0493791Z del async_compile 2023-01-11T21:41:24.0493796Z 2023-01-11T21:41:24.0493862Z def call(args): 2023-01-11T21:41:24.0493930Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0494000Z args.clear() 2023-01-11T21:41:24.0494209Z buf0 = empty_strided((4, 8, 100), (800, 100, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0494416Z buf1 = empty_strided((4, 8, 100), (800, 100, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0494594Z kernel_cpp_0(c_void_p(arg1_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0494657Z del arg0_1 2023-01-11T21:41:24.0494720Z del arg1_1 2023-01-11T21:41:24.0494799Z return (buf0, buf1, ) 2023-01-11T21:41:24.0494804Z 2023-01-11T21:41:24.0494808Z 2023-01-11T21:41:24.0494882Z if __name__ == "__main__": 2023-01-11T21:41:24.0494993Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0495113Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0495319Z arg0_1 = rand_strided((4, 8, 100), (800, 100, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0495512Z arg1_1 = rand_strided((4, 8, 80), (640, 80, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0495625Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0495630Z 2023-01-11T21:41:24.0495694Z ok (1.639s) 2023-01-11T21:41:24.0496160Z test_softmax_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0496284Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0496543Z [2023-01-11 21:38:16,546] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 434 2023-01-11T21:41:24.0496548Z 2023-01-11T21:41:24.0496640Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0496710Z import torch 2023-01-11T21:41:24.0496780Z import random 2023-01-11T21:41:24.0496882Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0496997Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0497002Z 2023-01-11T21:41:24.0497076Z aten = torch.ops.aten 2023-01-11T21:41:24.0497205Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0497292Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0497297Z 2023-01-11T21:41:24.0497301Z 2023-01-11T21:41:24.0497433Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0497678Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0497792Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:24.0497880Z float* __restrict__ in_out_ptr1, 2023-01-11T21:41:24.0497978Z float* __restrict__ in_out_ptr2, 2023-01-11T21:41:24.0498076Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0498176Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0498270Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0498359Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.0498450Z float* __restrict__ out_ptr2, 2023-01-11T21:41:24.0498530Z float* __restrict__ out_ptr6, 2023-01-11T21:41:24.0498619Z float* __restrict__ out_ptr7, 2023-01-11T21:41:24.0498735Z float* __restrict__ out_ptr8) 2023-01-11T21:41:24.0498796Z { 2023-01-11T21:41:24.0498879Z auto out_ptr3 = in_out_ptr0; 2023-01-11T21:41:24.0498961Z auto out_ptr4 = in_out_ptr1; 2023-01-11T21:41:24.0499041Z auto out_ptr5 = in_out_ptr2; 2023-01-11T21:41:24.0499125Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0499183Z { 2023-01-11T21:41:24.0499254Z #pragma omp for 2023-01-11T21:41:24.0499333Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0499396Z { 2023-01-11T21:41:24.0499458Z { 2023-01-11T21:41:24.0499836Z #pragma omp declare reduction(max:at::vec::Vectorized:omp_out = at::vec::maximum(omp_out, omp_in)) initializer(omp_priv={{-std::numeric_limits::infinity()}}) 2023-01-11T21:41:24.0500065Z float tmp3 = -std::numeric_limits::infinity(); 2023-01-11T21:41:24.0500174Z auto tmp3_vec = at::vec::Vectorized(tmp3); 2023-01-11T21:41:24.0500379Z float tmp4 = -std::numeric_limits::infinity(); 2023-01-11T21:41:24.0500502Z auto tmp4_vec = at::vec::Vectorized(tmp4); 2023-01-11T21:41:24.0500591Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.0500654Z { 2023-01-11T21:41:24.0500793Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0500925Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0501015Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0501118Z tmp3_vec = at::vec::maximum(tmp3_vec, tmp2); 2023-01-11T21:41:24.0501232Z tmp4_vec = at::vec::maximum(tmp4_vec, tmp1); 2023-01-11T21:41:24.0501296Z } 2023-01-11T21:41:24.0501505Z tmp3 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return at::vec::maximum(x, y);}, tmp3_vec); 2023-01-11T21:41:24.0501709Z tmp4 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, 
at::vec::Vectorized&y) {return at::vec::maximum(x, y);}, tmp4_vec); 2023-01-11T21:41:24.0501852Z #pragma omp simd simdlen(4) reduction(max:tmp3) reduction(max:tmp4) 2023-01-11T21:41:24.0501937Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:24.0502000Z { 2023-01-11T21:41:24.0502098Z auto tmp0 = in_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:24.0502181Z auto tmp1 = in_ptr1[i1 + (8*i0)]; 2023-01-11T21:41:24.0502270Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0502532Z tmp3 = std::max(tmp3, tmp2); 2023-01-11T21:41:24.0502666Z tmp4 = std::max(tmp4, tmp1); 2023-01-11T21:41:24.0502752Z } 2023-01-11T21:41:24.0502838Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.0502915Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:24.0503026Z } 2023-01-11T21:41:24.0503088Z } 2023-01-11T21:41:24.0503163Z #pragma omp for 2023-01-11T21:41:24.0503244Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0503303Z { 2023-01-11T21:41:24.0503364Z { 2023-01-11T21:41:24.0503427Z { 2023-01-11T21:41:24.0503639Z float tmp1 = -std::numeric_limits::infinity(); 2023-01-11T21:41:24.0503729Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:24.0503793Z { 2023-01-11T21:41:24.0503861Z { 2023-01-11T21:41:24.0503963Z auto tmp0 = in_ptr0[i0 + (8*i1)]; 2023-01-11T21:41:24.0504061Z tmp1 = std::max(tmp1, tmp0); 2023-01-11T21:41:24.0504127Z } 2023-01-11T21:41:24.0504180Z } 2023-01-11T21:41:24.0504264Z out_ptr2[i0] = tmp1; 2023-01-11T21:41:24.0504326Z } 2023-01-11T21:41:24.0504388Z } 2023-01-11T21:41:24.0504484Z } 2023-01-11T21:41:24.0504558Z #pragma omp for 2023-01-11T21:41:24.0504635Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0504684Z { 2023-01-11T21:41:24.0504744Z { 2023-01-11T21:41:24.0504931Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0505009Z float tmp12 = 0; 2023-01-11T21:41:24.0505129Z auto tmp12_vec = at::vec::Vectorized(tmp12); 2023-01-11T21:41:24.0505215Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.0505278Z { 2023-01-11T21:41:24.0505404Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0505537Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0505659Z auto tmp3 = at::vec::Vectorized(out_ptr0[i0]); 2023-01-11T21:41:24.0505795Z auto tmp6 = at::vec::Vectorized::loadu(out_ptr2 + 8*i1); 2023-01-11T21:41:24.0505916Z auto tmp9 = at::vec::Vectorized(out_ptr1[i0]); 2023-01-11T21:41:24.0506005Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0506139Z auto tmp4 = tmp2 - tmp3; 2023-01-11T21:41:24.0506229Z auto tmp5 = tmp4.exp(); 2023-01-11T21:41:24.0506348Z auto tmp7 = tmp0 - tmp6; 2023-01-11T21:41:24.0506436Z auto tmp8 = tmp7.exp(); 2023-01-11T21:41:24.0506569Z auto tmp10 = tmp1 - tmp9; 2023-01-11T21:41:24.0506656Z auto tmp11 = tmp10.exp(); 2023-01-11T21:41:24.0506760Z tmp5.store(out_ptr3 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0506859Z tmp8.store(out_ptr4 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0506963Z tmp11.store(out_ptr5 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0507048Z tmp12_vec += tmp5; 2023-01-11T21:41:24.0507099Z } 2023-01-11T21:41:24.0507293Z tmp12 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp12_vec); 2023-01-11T21:41:24.0507407Z #pragma omp simd simdlen(4) reduction(+:tmp12) 2023-01-11T21:41:24.0507491Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:24.0507555Z { 2023-01-11T21:41:24.0507649Z auto tmp0 = in_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:24.0507741Z auto tmp1 = in_ptr1[i1 + (8*i0)]; 2023-01-11T21:41:24.0507818Z auto tmp3 = out_ptr0[i0]; 
2023-01-11T21:41:24.0507908Z auto tmp6 = out_ptr2[i1]; 2023-01-11T21:41:24.0507992Z auto tmp9 = out_ptr1[i0]; 2023-01-11T21:41:24.0508077Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0508211Z auto tmp4 = tmp2 - tmp3; 2023-01-11T21:41:24.0508336Z auto tmp5 = std::exp(tmp4); 2023-01-11T21:41:24.0508467Z auto tmp7 = tmp0 - tmp6; 2023-01-11T21:41:24.0508561Z auto tmp8 = std::exp(tmp7); 2023-01-11T21:41:24.0508683Z auto tmp10 = tmp1 - tmp9; 2023-01-11T21:41:24.0508777Z auto tmp11 = std::exp(tmp10); 2023-01-11T21:41:24.0508868Z out_ptr3[i1 + (8*i0)] = tmp5; 2023-01-11T21:41:24.0508955Z out_ptr4[i1 + (8*i0)] = tmp8; 2023-01-11T21:41:24.0509044Z out_ptr5[i1 + (8*i0)] = tmp11; 2023-01-11T21:41:24.0509120Z tmp12 += tmp5; 2023-01-11T21:41:24.0509179Z } 2023-01-11T21:41:24.0509246Z out_ptr6[i0] = tmp12; 2023-01-11T21:41:24.0509308Z } 2023-01-11T21:41:24.0509369Z } 2023-01-11T21:41:24.0509441Z #pragma omp for 2023-01-11T21:41:24.0509519Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0509581Z { 2023-01-11T21:41:24.0509686Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.0509738Z { 2023-01-11T21:41:24.0509876Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr3 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0509998Z auto tmp1 = at::vec::Vectorized(out_ptr6[i0]); 2023-01-11T21:41:24.0510081Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0510183Z tmp2.store(in_out_ptr0 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0510243Z } 2023-01-11T21:41:24.0510331Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.0510398Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:24.0510458Z { 2023-01-11T21:41:24.0510548Z auto tmp0 = out_ptr3[i1 + (8*i0)]; 2023-01-11T21:41:24.0510634Z auto tmp1 = out_ptr6[i0]; 2023-01-11T21:41:24.0510715Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0510805Z in_out_ptr0[i1 + (8*i0)] = tmp2; 2023-01-11T21:41:24.0510871Z } 2023-01-11T21:41:24.0510921Z } 2023-01-11T21:41:24.0510994Z #pragma omp for 2023-01-11T21:41:24.0511072Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0511133Z { 2023-01-11T21:41:24.0511193Z { 2023-01-11T21:41:24.0511257Z { 2023-01-11T21:41:24.0511322Z float tmp1 = 0; 2023-01-11T21:41:24.0511409Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:24.0511471Z { 2023-01-11T21:41:24.0511535Z { 2023-01-11T21:41:24.0511637Z auto tmp0 = out_ptr4[i0 + (8*i1)]; 2023-01-11T21:41:24.0511715Z tmp1 += tmp0; 2023-01-11T21:41:24.0511779Z } 2023-01-11T21:41:24.0511832Z } 2023-01-11T21:41:24.0511907Z out_ptr7[i0] = tmp1; 2023-01-11T21:41:24.0511969Z } 2023-01-11T21:41:24.0512028Z } 2023-01-11T21:41:24.0512091Z } 2023-01-11T21:41:24.0512164Z #pragma omp for 2023-01-11T21:41:24.0512241Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0512289Z { 2023-01-11T21:41:24.0512364Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.0512422Z { 2023-01-11T21:41:24.0512562Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr4 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0512694Z auto tmp1 = at::vec::Vectorized::loadu(out_ptr7 + 8*i1); 2023-01-11T21:41:24.0512776Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0512878Z tmp2.store(in_out_ptr1 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0512939Z } 2023-01-11T21:41:24.0513016Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.0513093Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:24.0513153Z { 2023-01-11T21:41:24.0513247Z auto tmp0 = out_ptr4[i1 + (8*i0)]; 2023-01-11T21:41:24.0513334Z auto tmp1 = out_ptr7[i1]; 2023-01-11T21:41:24.0513446Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0513526Z in_out_ptr1[i1 + (8*i0)] = tmp2; 2023-01-11T21:41:24.0513583Z } 2023-01-11T21:41:24.0513644Z } 
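            // The loops above and below all follow the same numerically stable pattern:
            // a max reduction per row (out_ptr0/out_ptr1) or per column (out_ptr2), exp of
            // the max-shifted values (out_ptr3..out_ptr5), a sum reduction (out_ptr6..out_ptr8),
            // and a final divide that writes the normalized results back through the
            // in_out_ptr aliases declared at the top of the kernel.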
2023-01-11T21:41:24.0513717Z #pragma omp for 2023-01-11T21:41:24.0513854Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0513915Z { 2023-01-11T21:41:24.0513973Z { 2023-01-11T21:41:24.0514148Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0514221Z float tmp1 = 0; 2023-01-11T21:41:24.0514339Z auto tmp1_vec = at::vec::Vectorized(tmp1); 2023-01-11T21:41:24.0514422Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.0514484Z { 2023-01-11T21:41:24.0514623Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr5 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0514738Z tmp1_vec += tmp0; 2023-01-11T21:41:24.0514800Z } 2023-01-11T21:41:24.0514983Z tmp1 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp1_vec); 2023-01-11T21:41:24.0515098Z #pragma omp simd simdlen(4) reduction(+:tmp1) 2023-01-11T21:41:24.0515182Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:24.0515244Z { 2023-01-11T21:41:24.0515342Z auto tmp0 = out_ptr5[i1 + (8*i0)]; 2023-01-11T21:41:24.0515415Z tmp1 += tmp0; 2023-01-11T21:41:24.0515472Z } 2023-01-11T21:41:24.0515548Z out_ptr8[i0] = tmp1; 2023-01-11T21:41:24.0515597Z } 2023-01-11T21:41:24.0515658Z } 2023-01-11T21:41:24.0515728Z #pragma omp for 2023-01-11T21:41:24.0515807Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0515867Z { 2023-01-11T21:41:24.0515944Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.0515996Z { 2023-01-11T21:41:24.0516128Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr5 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0516247Z auto tmp1 = at::vec::Vectorized(out_ptr8[i0]); 2023-01-11T21:41:24.0516331Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0516432Z tmp2.store(in_out_ptr2 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0516491Z } 2023-01-11T21:41:24.0516578Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.0516655Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:24.0516703Z { 2023-01-11T21:41:24.0516796Z auto tmp0 = out_ptr5[i1 + (8*i0)]; 2023-01-11T21:41:24.0516879Z auto tmp1 = out_ptr8[i0]; 2023-01-11T21:41:24.0516962Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0517052Z in_out_ptr2[i1 + (8*i0)] = tmp2; 2023-01-11T21:41:24.0517113Z } 2023-01-11T21:41:24.0517166Z } 2023-01-11T21:41:24.0517224Z } 2023-01-11T21:41:24.0517276Z } 2023-01-11T21:41:24.0517358Z ''') 2023-01-11T21:41:24.0517364Z 2023-01-11T21:41:24.0517368Z 2023-01-11T21:41:24.0517455Z async_compile.wait(globals()) 2023-01-11T21:41:24.0517526Z del async_compile 2023-01-11T21:41:24.0517531Z 2023-01-11T21:41:24.0517596Z def call(args): 2023-01-11T21:41:24.0517668Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0517728Z args.clear() 2023-01-11T21:41:24.0517923Z buf0 = empty_strided((8, 1), (1, 8), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0518116Z buf8 = empty_strided((8, 1), (1, 8), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0518302Z buf4 = empty_strided((1, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0518487Z buf1 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0518671Z buf5 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0518886Z buf9 = empty_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0519057Z buf2 = empty_strided((8, 1), (1, 8), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0519140Z buf3 = buf1; del buf1 # reuse 2023-01-11T21:41:24.0519323Z buf6 = empty_strided((1, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0519405Z buf7 = buf5; 
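# inductor reuses intermediate buffers where it can: buf3 = buf1, buf7 = buf5 and buf11 = buf9
# hand the exp() intermediates back to the kernel as in_out_ptr0..in_out_ptr2, so the final
# normalized outputs are written in place rather than into fresh allocations.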
del buf5 # reuse 2023-01-11T21:41:24.0519597Z buf10 = empty_strided((8, 1), (1, 8), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0519676Z buf11 = buf9; del buf9 # reuse 2023-01-11T21:41:24.0520032Z kernel_cpp_0(c_void_p(buf3.data_ptr()), c_void_p(buf7.data_ptr()), c_void_p(buf11.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf8.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf6.data_ptr()), c_void_p(buf10.data_ptr())) 2023-01-11T21:41:24.0520099Z del arg0_1 2023-01-11T21:41:24.0520163Z del arg1_1 2023-01-11T21:41:24.0520264Z return (buf3, buf7, buf11, ) 2023-01-11T21:41:24.0520270Z 2023-01-11T21:41:24.0520274Z 2023-01-11T21:41:24.0520349Z if __name__ == "__main__": 2023-01-11T21:41:24.0520463Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0520584Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0520782Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0520972Z arg1_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0521082Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0521351Z [2023-01-11 21:38:18,265] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 434 2023-01-11T21:41:24.0521356Z 2023-01-11T21:41:24.0521410Z ok (1.772s) 2023-01-11T21:41:24.0521884Z test_softmax_one_kernel_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0522005Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0522259Z [2023-01-11 21:38:18,289] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 435 2023-01-11T21:41:24.0522523Z [2023-01-11 21:38:19,885] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 435 2023-01-11T21:41:24.0522528Z 2023-01-11T21:41:24.0522618Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0522684Z import torch 2023-01-11T21:41:24.0522752Z import random 2023-01-11T21:41:24.0522861Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0522968Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0522975Z 2023-01-11T21:41:24.0523049Z aten = torch.ops.aten 2023-01-11T21:41:24.0523178Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0523267Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0523271Z 2023-01-11T21:41:24.0523276Z 2023-01-11T21:41:24.0523405Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0523608Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0523722Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:24.0523823Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0523909Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0523999Z float* __restrict__ out_ptr2) 2023-01-11T21:41:24.0524056Z { 2023-01-11T21:41:24.0524139Z auto out_ptr1 = in_out_ptr0; 2023-01-11T21:41:24.0524233Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0524289Z { 2023-01-11T21:41:24.0524396Z 
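      // This test lowers to a single fused kernel (out_ptr1 aliases in_out_ptr0): one
      // reduction pass computes the per-row max into out_ptr0, a second pass applies exp
      // to the scaled values and accumulates the row sums into out_ptr2, and a final pass
      // divides by those sums, overwriting the exp buffer in place through in_out_ptr0.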
#pragma omp for 2023-01-11T21:41:24.0524464Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:24.0524523Z { 2023-01-11T21:41:24.0524583Z { 2023-01-11T21:41:24.0524952Z #pragma omp declare reduction(max:at::vec::Vectorized:omp_out = at::vec::maximum(omp_out, omp_in)) initializer(omp_priv={{-std::numeric_limits::infinity()}}) 2023-01-11T21:41:24.0525163Z float tmp1 = -std::numeric_limits::infinity(); 2023-01-11T21:41:24.0525279Z auto tmp1_vec = at::vec::Vectorized(tmp1); 2023-01-11T21:41:24.0525363Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:24.0525422Z { 2023-01-11T21:41:24.0525552Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (32*i0)); 2023-01-11T21:41:24.0525664Z tmp1_vec = at::vec::maximum(tmp1_vec, tmp0); 2023-01-11T21:41:24.0525725Z } 2023-01-11T21:41:24.0525966Z tmp1 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return at::vec::maximum(x, y);}, tmp1_vec); 2023-01-11T21:41:24.0526084Z #pragma omp simd simdlen(4) reduction(max:tmp1) 2023-01-11T21:41:24.0526169Z for(long i1=32; i1<32; i1+=1) 2023-01-11T21:41:24.0526229Z { 2023-01-11T21:41:24.0526325Z auto tmp0 = in_ptr0[i1 + (32*i0)]; 2023-01-11T21:41:24.0526410Z tmp1 = std::max(tmp1, tmp0); 2023-01-11T21:41:24.0526472Z } 2023-01-11T21:41:24.0526552Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:24.0526613Z } 2023-01-11T21:41:24.0526671Z } 2023-01-11T21:41:24.0526745Z #pragma omp for 2023-01-11T21:41:24.0526821Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:24.0526871Z { 2023-01-11T21:41:24.0526929Z { 2023-01-11T21:41:24.0527114Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0527190Z float tmp4 = 0; 2023-01-11T21:41:24.0527303Z auto tmp4_vec = at::vec::Vectorized(tmp4); 2023-01-11T21:41:24.0527387Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:24.0527451Z { 2023-01-11T21:41:24.0527577Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (32*i0)); 2023-01-11T21:41:24.0527702Z auto tmp1 = at::vec::Vectorized(out_ptr0[i0]); 2023-01-11T21:41:24.0527794Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0527882Z auto tmp3 = tmp2.exp(); 2023-01-11T21:41:24.0527981Z tmp3.store(out_ptr1 + (8*i1) + (32*i0)); 2023-01-11T21:41:24.0528059Z tmp4_vec += tmp3; 2023-01-11T21:41:24.0528121Z } 2023-01-11T21:41:24.0528319Z tmp4 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp4_vec); 2023-01-11T21:41:24.0528426Z #pragma omp simd simdlen(4) reduction(+:tmp4) 2023-01-11T21:41:24.0528513Z for(long i1=32; i1<32; i1+=1) 2023-01-11T21:41:24.0528573Z { 2023-01-11T21:41:24.0528669Z auto tmp0 = in_ptr0[i1 + (32*i0)]; 2023-01-11T21:41:24.0528755Z auto tmp1 = out_ptr0[i0]; 2023-01-11T21:41:24.0528843Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0528939Z auto tmp3 = std::exp(tmp2); 2023-01-11T21:41:24.0529033Z out_ptr1[i1 + (32*i0)] = tmp3; 2023-01-11T21:41:24.0529097Z tmp4 += tmp3; 2023-01-11T21:41:24.0529161Z } 2023-01-11T21:41:24.0529241Z out_ptr2[i0] = tmp4; 2023-01-11T21:41:24.0529304Z } 2023-01-11T21:41:24.0529365Z } 2023-01-11T21:41:24.0529439Z #pragma omp for 2023-01-11T21:41:24.0529506Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:24.0529600Z { 2023-01-11T21:41:24.0529679Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:24.0529738Z { 2023-01-11T21:41:24.0529875Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr1 + (8*i1) + (32*i0)); 2023-01-11T21:41:24.0529997Z auto tmp1 = at::vec::Vectorized(out_ptr2[i0]); 2023-01-11T21:41:24.0530084Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0530187Z tmp2.store(in_out_ptr0 
+ (8*i1) + (32*i0)); 2023-01-11T21:41:24.0530238Z } 2023-01-11T21:41:24.0530326Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.0530408Z for(long i1=32; i1<32; i1+=1) 2023-01-11T21:41:24.0530468Z { 2023-01-11T21:41:24.0530560Z auto tmp0 = out_ptr1[i1 + (32*i0)]; 2023-01-11T21:41:24.0530643Z auto tmp1 = out_ptr2[i0]; 2023-01-11T21:41:24.0530727Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0530863Z in_out_ptr0[i1 + (32*i0)] = tmp2; 2023-01-11T21:41:24.0530926Z } 2023-01-11T21:41:24.0530987Z } 2023-01-11T21:41:24.0531046Z } 2023-01-11T21:41:24.0531105Z } 2023-01-11T21:41:24.0531181Z ''') 2023-01-11T21:41:24.0531186Z 2023-01-11T21:41:24.0531190Z 2023-01-11T21:41:24.0531276Z async_compile.wait(globals()) 2023-01-11T21:41:24.0531333Z del async_compile 2023-01-11T21:41:24.0531338Z 2023-01-11T21:41:24.0531407Z def call(args): 2023-01-11T21:41:24.0531474Z arg0_1, = args 2023-01-11T21:41:24.0531544Z args.clear() 2023-01-11T21:41:24.0531742Z buf0 = empty_strided((16, 1), (1, 16), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0531941Z buf1 = empty_strided((16, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0532136Z buf2 = empty_strided((16, 1), (1, 16), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0532209Z buf3 = buf1; del buf1 # reuse 2023-01-11T21:41:24.0532395Z kernel_cpp_0(c_void_p(buf3.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:24.0532464Z del arg0_1 2023-01-11T21:41:24.0532534Z return (buf3, ) 2023-01-11T21:41:24.0532539Z 2023-01-11T21:41:24.0532543Z 2023-01-11T21:41:24.0532616Z if __name__ == "__main__": 2023-01-11T21:41:24.0532726Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0532846Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0533044Z arg0_1 = rand_strided((16, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0533138Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0533142Z 2023-01-11T21:41:24.0533204Z ok (1.620s) 2023-01-11T21:41:24.0533664Z test_sort_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0533793Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0534053Z [2023-01-11 21:38:19,901] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 436 2023-01-11T21:41:24.0534267Z [2023-01-11 21:38:19,906] torch._inductor.ir: [WARNING] Using FallbackKernel: aten.sort 2023-01-11T21:41:24.0534528Z [2023-01-11 21:38:19,909] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 436 2023-01-11T21:41:24.0534534Z 2023-01-11T21:41:24.0534624Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0534689Z import torch 2023-01-11T21:41:24.0534745Z import random 2023-01-11T21:41:24.0534859Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0534976Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0534982Z 2023-01-11T21:41:24.0535056Z aten = torch.ops.aten 2023-01-11T21:41:24.0535225Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0535311Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0535317Z 2023-01-11T21:41:24.0535321Z 2023-01-11T21:41:24.0535406Z async_compile.wait(globals()) 2023-01-11T21:41:24.0535475Z del async_compile 2023-01-11T21:41:24.0535479Z 2023-01-11T21:41:24.0535536Z def call(args): 2023-01-11T21:41:24.0535603Z arg0_1, = args 2023-01-11T21:41:24.0535673Z args.clear() 2023-01-11T21:41:24.0535747Z buf0 = aten.sort(arg0_1) 2023-01-11T21:41:24.0535809Z del arg0_1 2023-01-11T21:41:24.0535874Z buf1 = buf0[0] 2023-01-11T21:41:24.0535978Z assert_size_stride(buf1, (1, 1, 8, 8), (64, 64, 8, 1)) 2023-01-11T21:41:24.0536033Z buf2 = buf0[1] 2023-01-11T21:41:24.0536133Z assert_size_stride(buf2, (1, 1, 8, 8), (64, 64, 8, 1)) 2023-01-11T21:41:24.0536194Z del buf0 2023-01-11T21:41:24.0536267Z return (buf1, buf2, ) 2023-01-11T21:41:24.0536272Z 2023-01-11T21:41:24.0536276Z 2023-01-11T21:41:24.0536376Z if __name__ == "__main__": 2023-01-11T21:41:24.0536486Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0536605Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0536814Z arg0_1 = rand_strided((1, 1, 8, 8), (64, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0536908Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0536913Z 2023-01-11T21:41:24.0536979Z ok (0.023s) 2023-01-11T21:41:24.0537435Z test_split_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0537558Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0537820Z [2023-01-11 21:38:19,930] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 437 2023-01-11T21:41:24.0538083Z [2023-01-11 21:38:19,933] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 437 2023-01-11T21:41:24.0538510Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0538630Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0538884Z [2023-01-11 21:38:19,955] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 438 2023-01-11T21:41:24.0539143Z [2023-01-11 21:38:21,549] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 438 2023-01-11T21:41:24.0539154Z 2023-01-11T21:41:24.0539246Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0539303Z import torch 2023-01-11T21:41:24.0539366Z import random 2023-01-11T21:41:24.0539480Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0539598Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0539603Z 2023-01-11T21:41:24.0539676Z aten = torch.ops.aten 2023-01-11T21:41:24.0539805Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0539894Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0539899Z 2023-01-11T21:41:24.0539903Z 2023-01-11T21:41:24.0539985Z async_compile.wait(globals()) 2023-01-11T21:41:24.0540044Z del async_compile 2023-01-11T21:41:24.0540049Z 2023-01-11T21:41:24.0540117Z def call(args): 2023-01-11T21:41:24.0540185Z arg0_1, = args 2023-01-11T21:41:24.0540254Z args.clear() 2023-01-11T21:41:24.0540446Z return (as_strided(arg0_1, (2, 2, 3), (20, 10, 1)), as_strided(arg0_1, (2, 2, 3), (20, 10, 1), 3), as_strided(arg0_1, (2, 2, 3), (20, 10, 1), 6), as_strided(arg0_1, (2, 2, 1), (20, 10, 1), 9), ) 2023-01-11T21:41:24.0540480Z 2023-01-11T21:41:24.0540484Z 2023-01-11T21:41:24.0540556Z if __name__ == "__main__": 2023-01-11T21:41:24.0540662Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0540777Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0540971Z arg0_1 = rand_strided((2, 2, 10), (20, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0541076Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0541081Z 2023-01-11T21:41:24.0541085Z 2023-01-11T21:41:24.0541174Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0541242Z import torch 2023-01-11T21:41:24.0541312Z import random 2023-01-11T21:41:24.0541421Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0541535Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0541539Z 2023-01-11T21:41:24.0541645Z aten = torch.ops.aten 2023-01-11T21:41:24.0541765Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0541852Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0541857Z 2023-01-11T21:41:24.0541861Z 2023-01-11T21:41:24.0541992Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0542193Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0542313Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0542579Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0542652Z { 2023-01-11T21:41:24.0542748Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0542797Z { 2023-01-11T21:41:24.0542871Z #pragma omp for 2023-01-11T21:41:24.0542949Z for(long i0=0; i0<5; i0+=1) 2023-01-11T21:41:24.0543008Z { 2023-01-11T21:41:24.0543141Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0543274Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0543355Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0543431Z 
tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0543490Z } 2023-01-11T21:41:24.0543581Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0543661Z for(long i0=40; i0<40; i0+=1) 2023-01-11T21:41:24.0543723Z { 2023-01-11T21:41:24.0543803Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0543897Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.0543969Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0544044Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.0544106Z } 2023-01-11T21:41:24.0544165Z } 2023-01-11T21:41:24.0544222Z } 2023-01-11T21:41:24.0544302Z ''') 2023-01-11T21:41:24.0544308Z 2023-01-11T21:41:24.0544312Z 2023-01-11T21:41:24.0544399Z async_compile.wait(globals()) 2023-01-11T21:41:24.0544458Z del async_compile 2023-01-11T21:41:24.0544465Z 2023-01-11T21:41:24.0544532Z def call(args): 2023-01-11T21:41:24.0544597Z arg0_1, = args 2023-01-11T21:41:24.0544667Z args.clear() 2023-01-11T21:41:24.0544873Z buf0 = empty_strided((2, 2, 10), (20, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0545002Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0545068Z del arg0_1 2023-01-11T21:41:24.0545241Z return (as_strided(buf0, (2, 2, 3), (20, 10, 1)), as_strided(buf0, (2, 2, 3), (20, 10, 1), 3), as_strided(buf0, (2, 2, 3), (20, 10, 1), 6), as_strided(buf0, (2, 2, 1), (20, 10, 0), 9), ) 2023-01-11T21:41:24.0545255Z 2023-01-11T21:41:24.0545259Z 2023-01-11T21:41:24.0545322Z if __name__ == "__main__": 2023-01-11T21:41:24.0545432Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0545547Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0545754Z arg0_1 = rand_strided((2, 2, 10), (20, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0545923Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0545929Z 2023-01-11T21:41:24.0545994Z ok (1.642s) 2023-01-11T21:41:24.0546466Z test_split_with_sizes_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0546587Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0546849Z [2023-01-11 21:38:21,582] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 439 2023-01-11T21:41:24.0547103Z [2023-01-11 21:38:23,208] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 439 2023-01-11T21:41:24.0547566Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0547694Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0547953Z [2023-01-11 21:38:23,238] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 440 2023-01-11T21:41:24.0548215Z [2023-01-11 21:38:24,866] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 440 2023-01-11T21:41:24.0548647Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0548767Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0549018Z [2023-01-11 21:38:24,900] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 441 2023-01-11T21:41:24.0549024Z 2023-01-11T21:41:24.0549116Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0549183Z import torch 2023-01-11T21:41:24.0549239Z import random 2023-01-11T21:41:24.0549351Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0549467Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0549472Z 2023-01-11T21:41:24.0549547Z aten = torch.ops.aten 2023-01-11T21:41:24.0549676Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0549765Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0549770Z 2023-01-11T21:41:24.0549774Z 2023-01-11T21:41:24.0549905Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0550113Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0550229Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0550315Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0550408Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.0550497Z float* __restrict__ out_ptr2) 2023-01-11T21:41:24.0550555Z { 2023-01-11T21:41:24.0550650Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0550709Z { 2023-01-11T21:41:24.0550772Z #pragma omp for 2023-01-11T21:41:24.0550850Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0550909Z { 2023-01-11T21:41:24.0550984Z #pragma GCC ivdep 2023-01-11T21:41:24.0551062Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:24.0551120Z { 2023-01-11T21:41:24.0551181Z { 2023-01-11T21:41:24.0551234Z { 2023-01-11T21:41:24.0551370Z auto tmp0 = in_ptr0[i1 + (10*i0)]; 2023-01-11T21:41:24.0551474Z auto tmp1 = static_cast(2.0); 2023-01-11T21:41:24.0551565Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0551669Z auto tmp3 = static_cast(1.0); 2023-01-11T21:41:24.0551761Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0551855Z out_ptr0[i1 + (3*i0)] = tmp4; 2023-01-11T21:41:24.0551908Z } 2023-01-11T21:41:24.0551969Z } 2023-01-11T21:41:24.0552029Z } 2023-01-11T21:41:24.0552090Z } 2023-01-11T21:41:24.0552167Z #pragma omp for 2023-01-11T21:41:24.0552245Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0552303Z { 2023-01-11T21:41:24.0552368Z #pragma GCC ivdep 2023-01-11T21:41:24.0552446Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:24.0552507Z { 2023-01-11T21:41:24.0552570Z { 2023-01-11T21:41:24.0552661Z { 2023-01-11T21:41:24.0552759Z auto tmp0 = in_ptr0[3 + i1 + (10*i0)]; 2023-01-11T21:41:24.0552866Z auto tmp1 = 
static_cast(2.0); 2023-01-11T21:41:24.0552944Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0553046Z auto tmp3 = static_cast(1.0); 2023-01-11T21:41:24.0553134Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0553227Z out_ptr1[i1 + (3*i0)] = tmp4; 2023-01-11T21:41:24.0553292Z } 2023-01-11T21:41:24.0553351Z } 2023-01-11T21:41:24.0553416Z } 2023-01-11T21:41:24.0553465Z } 2023-01-11T21:41:24.0553539Z #pragma omp for 2023-01-11T21:41:24.0553616Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0553675Z { 2023-01-11T21:41:24.0553812Z #pragma GCC ivdep 2023-01-11T21:41:24.0553897Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:24.0553950Z { 2023-01-11T21:41:24.0554010Z { 2023-01-11T21:41:24.0554071Z { 2023-01-11T21:41:24.0554173Z auto tmp0 = in_ptr0[6 + i1 + (10*i0)]; 2023-01-11T21:41:24.0554279Z auto tmp1 = static_cast(2.0); 2023-01-11T21:41:24.0554366Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0554470Z auto tmp3 = static_cast(1.0); 2023-01-11T21:41:24.0554559Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0554639Z out_ptr2[i1 + (4*i0)] = tmp4; 2023-01-11T21:41:24.0554704Z } 2023-01-11T21:41:24.0554770Z } 2023-01-11T21:41:24.0554829Z } 2023-01-11T21:41:24.0554889Z } 2023-01-11T21:41:24.0554947Z } 2023-01-11T21:41:24.0554993Z } 2023-01-11T21:41:24.0555072Z ''') 2023-01-11T21:41:24.0555077Z 2023-01-11T21:41:24.0555081Z 2023-01-11T21:41:24.0555171Z async_compile.wait(globals()) 2023-01-11T21:41:24.0555239Z del async_compile 2023-01-11T21:41:24.0555245Z 2023-01-11T21:41:24.0555310Z def call(args): 2023-01-11T21:41:24.0555374Z arg0_1, = args 2023-01-11T21:41:24.0555436Z args.clear() 2023-01-11T21:41:24.0555634Z buf0 = empty_strided((2, 2, 3), (6, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0555822Z buf1 = empty_strided((2, 2, 3), (6, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0556015Z buf2 = empty_strided((2, 2, 4), (8, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0556202Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:24.0556266Z del arg0_1 2023-01-11T21:41:24.0556345Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:24.0556350Z 2023-01-11T21:41:24.0556354Z 2023-01-11T21:41:24.0556425Z if __name__ == "__main__": 2023-01-11T21:41:24.0556539Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0556687Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0556879Z arg0_1 = rand_strided((2, 2, 10), (20, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0556983Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0556988Z 2023-01-11T21:41:24.0556992Z 2023-01-11T21:41:24.0557081Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0557148Z import torch 2023-01-11T21:41:24.0557213Z import random 2023-01-11T21:41:24.0557324Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0557440Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0557444Z 2023-01-11T21:41:24.0557517Z aten = torch.ops.aten 2023-01-11T21:41:24.0557636Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0557723Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0557728Z 2023-01-11T21:41:24.0557733Z 2023-01-11T21:41:24.0557863Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0558097Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0558215Z extern "C" void kernel(const float* __restrict__ 
in_ptr0, 2023-01-11T21:41:24.0558310Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0558403Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.0558493Z float* __restrict__ out_ptr2) 2023-01-11T21:41:24.0558541Z { 2023-01-11T21:41:24.0558633Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0558694Z { 2023-01-11T21:41:24.0558768Z #pragma omp for 2023-01-11T21:41:24.0558847Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0558908Z { 2023-01-11T21:41:24.0558975Z #pragma GCC ivdep 2023-01-11T21:41:24.0559056Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:24.0559113Z { 2023-01-11T21:41:24.0559175Z { 2023-01-11T21:41:24.0559238Z { 2023-01-11T21:41:24.0559341Z auto tmp0 = in_ptr0[i1 + (10*i0)]; 2023-01-11T21:41:24.0559448Z auto tmp1 = static_cast(2.0); 2023-01-11T21:41:24.0559528Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0559631Z auto tmp3 = static_cast(1.0); 2023-01-11T21:41:24.0559718Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0559808Z out_ptr0[i1 + (4*i0)] = tmp4; 2023-01-11T21:41:24.0559873Z } 2023-01-11T21:41:24.0559938Z } 2023-01-11T21:41:24.0559998Z } 2023-01-11T21:41:24.0560047Z } 2023-01-11T21:41:24.0560121Z #pragma omp for 2023-01-11T21:41:24.0560201Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0560262Z { 2023-01-11T21:41:24.0560340Z #pragma GCC ivdep 2023-01-11T21:41:24.0560420Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:24.0560481Z { 2023-01-11T21:41:24.0560537Z { 2023-01-11T21:41:24.0560599Z { 2023-01-11T21:41:24.0560702Z auto tmp0 = in_ptr0[4 + i1 + (10*i0)]; 2023-01-11T21:41:24.0560807Z auto tmp1 = static_cast(2.0); 2023-01-11T21:41:24.0560898Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0561003Z auto tmp3 = static_cast(1.0); 2023-01-11T21:41:24.0561094Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0561176Z out_ptr1[i1 + (3*i0)] = tmp4; 2023-01-11T21:41:24.0561240Z } 2023-01-11T21:41:24.0561303Z } 2023-01-11T21:41:24.0561364Z } 2023-01-11T21:41:24.0561425Z } 2023-01-11T21:41:24.0561497Z #pragma omp for 2023-01-11T21:41:24.0561576Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0561625Z { 2023-01-11T21:41:24.0561701Z #pragma GCC ivdep 2023-01-11T21:41:24.0561812Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:24.0561874Z { 2023-01-11T21:41:24.0561935Z { 2023-01-11T21:41:24.0561998Z { 2023-01-11T21:41:24.0562087Z auto tmp0 = in_ptr0[7 + i1 + (10*i0)]; 2023-01-11T21:41:24.0562191Z auto tmp1 = static_cast(2.0); 2023-01-11T21:41:24.0562281Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0562387Z auto tmp3 = static_cast(1.0); 2023-01-11T21:41:24.0562478Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0562570Z out_ptr2[i1 + (3*i0)] = tmp4; 2023-01-11T21:41:24.0562637Z } 2023-01-11T21:41:24.0562689Z } 2023-01-11T21:41:24.0562750Z } 2023-01-11T21:41:24.0562811Z } 2023-01-11T21:41:24.0562872Z } 2023-01-11T21:41:24.0562931Z } 2023-01-11T21:41:24.0563010Z ''') 2023-01-11T21:41:24.0563015Z 2023-01-11T21:41:24.0563021Z 2023-01-11T21:41:24.0563144Z async_compile.wait(globals()) 2023-01-11T21:41:24.0563205Z del async_compile 2023-01-11T21:41:24.0563220Z 2023-01-11T21:41:24.0563276Z def call(args): 2023-01-11T21:41:24.0563341Z arg0_1, = args 2023-01-11T21:41:24.0563411Z args.clear() 2023-01-11T21:41:24.0563613Z buf0 = empty_strided((2, 2, 4), (8, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0563811Z buf1 = empty_strided((2, 2, 3), (6, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0564009Z buf2 = empty_strided((2, 2, 3), (6, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0564198Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), 
c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:24.0564253Z del arg0_1 2023-01-11T21:41:24.0564332Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:24.0564337Z 2023-01-11T21:41:24.0564341Z 2023-01-11T21:41:24.0564413Z if __name__ == "__main__": 2023-01-11T21:41:24.0564533Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0564657Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0564859Z arg0_1 = rand_strided((2, 2, 10), (20, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0564966Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0564971Z 2023-01-11T21:41:24.0565237Z [2023-01-11 21:38:26,542] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 441 2023-01-11T21:41:24.0565242Z 2023-01-11T21:41:24.0565334Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0565389Z import torch 2023-01-11T21:41:24.0565455Z import random 2023-01-11T21:41:24.0565567Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0565685Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0565689Z 2023-01-11T21:41:24.0565762Z aten = torch.ops.aten 2023-01-11T21:41:24.0565892Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0565985Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0565991Z 2023-01-11T21:41:24.0565995Z 2023-01-11T21:41:24.0566124Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0566316Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0566430Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0566526Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0566622Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.0566714Z float* __restrict__ out_ptr2, 2023-01-11T21:41:24.0566803Z float* __restrict__ out_ptr3) 2023-01-11T21:41:24.0566861Z { 2023-01-11T21:41:24.0566946Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0567003Z { 2023-01-11T21:41:24.0567077Z #pragma omp for 2023-01-11T21:41:24.0567154Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0567212Z { 2023-01-11T21:41:24.0567305Z { 2023-01-11T21:41:24.0567357Z { 2023-01-11T21:41:24.0567446Z auto tmp0 = in_ptr0[10*i0]; 2023-01-11T21:41:24.0567550Z auto tmp1 = static_cast(2.0); 2023-01-11T21:41:24.0567636Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0567737Z auto tmp3 = static_cast(1.0); 2023-01-11T21:41:24.0567824Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0567904Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.0567962Z } 2023-01-11T21:41:24.0568012Z } 2023-01-11T21:41:24.0568070Z } 2023-01-11T21:41:24.0568143Z #pragma omp for 2023-01-11T21:41:24.0568221Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0568280Z { 2023-01-11T21:41:24.0568355Z #pragma GCC ivdep 2023-01-11T21:41:24.0568423Z for(long i1=0; i1<2; i1+=1) 2023-01-11T21:41:24.0568482Z { 2023-01-11T21:41:24.0568585Z { 2023-01-11T21:41:24.0568650Z { 2023-01-11T21:41:24.0568752Z auto tmp0 = in_ptr0[1 + i1 + (10*i0)]; 2023-01-11T21:41:24.0568856Z auto tmp1 = static_cast(2.0); 2023-01-11T21:41:24.0568947Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0569040Z auto tmp3 = static_cast(1.0); 2023-01-11T21:41:24.0569128Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0569218Z out_ptr1[i1 + (2*i0)] = tmp4; 2023-01-11T21:41:24.0569282Z } 2023-01-11T21:41:24.0569343Z } 2023-01-11T21:41:24.0569404Z } 2023-01-11T21:41:24.0569462Z } 2023-01-11T21:41:24.0569524Z #pragma omp for 
2023-01-11T21:41:24.0569602Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0569659Z { 2023-01-11T21:41:24.0569737Z #pragma GCC ivdep 2023-01-11T21:41:24.0569819Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:24.0569880Z { 2023-01-11T21:41:24.0569943Z { 2023-01-11T21:41:24.0570019Z { 2023-01-11T21:41:24.0570164Z auto tmp0 = in_ptr0[3 + i1 + (10*i0)]; 2023-01-11T21:41:24.0570277Z auto tmp1 = static_cast(2.0); 2023-01-11T21:41:24.0570367Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0570471Z auto tmp3 = static_cast(1.0); 2023-01-11T21:41:24.0570558Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0570651Z out_ptr2[i1 + (3*i0)] = tmp4; 2023-01-11T21:41:24.0570703Z } 2023-01-11T21:41:24.0570765Z } 2023-01-11T21:41:24.0570823Z } 2023-01-11T21:41:24.0570881Z } 2023-01-11T21:41:24.0570956Z #pragma omp for 2023-01-11T21:41:24.0571034Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0571086Z { 2023-01-11T21:41:24.0571183Z #pragma GCC ivdep 2023-01-11T21:41:24.0571296Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:24.0571361Z { 2023-01-11T21:41:24.0571422Z { 2023-01-11T21:41:24.0571483Z { 2023-01-11T21:41:24.0571581Z auto tmp0 = in_ptr0[6 + i1 + (10*i0)]; 2023-01-11T21:41:24.0571674Z auto tmp1 = static_cast(2.0); 2023-01-11T21:41:24.0571762Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0571870Z auto tmp3 = static_cast(1.0); 2023-01-11T21:41:24.0571959Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0572050Z out_ptr3[i1 + (4*i0)] = tmp4; 2023-01-11T21:41:24.0572112Z } 2023-01-11T21:41:24.0572176Z } 2023-01-11T21:41:24.0572225Z } 2023-01-11T21:41:24.0572283Z } 2023-01-11T21:41:24.0572340Z } 2023-01-11T21:41:24.0572443Z } 2023-01-11T21:41:24.0572526Z ''') 2023-01-11T21:41:24.0572532Z 2023-01-11T21:41:24.0572537Z 2023-01-11T21:41:24.0572623Z async_compile.wait(globals()) 2023-01-11T21:41:24.0572689Z del async_compile 2023-01-11T21:41:24.0572694Z 2023-01-11T21:41:24.0572764Z def call(args): 2023-01-11T21:41:24.0572819Z arg0_1, = args 2023-01-11T21:41:24.0572886Z args.clear() 2023-01-11T21:41:24.0573086Z buf0 = empty_strided((2, 2, 1), (2, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0573286Z buf1 = empty_strided((2, 2, 2), (4, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0573480Z buf2 = empty_strided((2, 2, 3), (6, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0573676Z buf3 = empty_strided((2, 2, 4), (8, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0573884Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr())) 2023-01-11T21:41:24.0573970Z del arg0_1 2023-01-11T21:41:24.0574058Z return (buf0, buf1, buf2, buf3, ) 2023-01-11T21:41:24.0574064Z 2023-01-11T21:41:24.0574068Z 2023-01-11T21:41:24.0574142Z if __name__ == "__main__": 2023-01-11T21:41:24.0574252Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0574369Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0574572Z arg0_1 = rand_strided((2, 2, 10), (20, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0574677Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0574681Z 2023-01-11T21:41:24.0574744Z ok (4.993s) 2023-01-11T21:41:24.0575212Z test_squeeze1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0575339Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0575588Z [2023-01-11 21:38:26,565] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 442 2023-01-11T21:41:24.0575850Z [2023-01-11 21:38:28,130] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 442 2023-01-11T21:41:24.0575855Z 2023-01-11T21:41:24.0575943Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0576010Z import torch 2023-01-11T21:41:24.0576078Z import random 2023-01-11T21:41:24.0576189Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0576307Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0576312Z 2023-01-11T21:41:24.0576385Z aten = torch.ops.aten 2023-01-11T21:41:24.0576506Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0576592Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0576599Z 2023-01-11T21:41:24.0576605Z 2023-01-11T21:41:24.0576734Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0576939Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0577055Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0577151Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0577247Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0577296Z { 2023-01-11T21:41:24.0577390Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0577449Z { 2023-01-11T21:41:24.0577522Z #pragma omp for 2023-01-11T21:41:24.0577605Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:24.0577667Z { 2023-01-11T21:41:24.0577806Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0577933Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0578005Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0578164Z auto tmp3 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:24.0578249Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0578332Z auto tmp5 = tmp0 + tmp3; 2023-01-11T21:41:24.0578419Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0578506Z tmp5.store(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0578564Z } 2023-01-11T21:41:24.0578643Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0578722Z for(long i0=8; i0<8; i0+=1) 2023-01-11T21:41:24.0578782Z { 2023-01-11T21:41:24.0578864Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0578960Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.0579041Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0579137Z auto tmp3 = static_cast(2); 2023-01-11T21:41:24.0579208Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0579290Z auto tmp5 = tmp0 + tmp3; 2023-01-11T21:41:24.0579396Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.0579476Z out_ptr1[i0] = tmp5; 2023-01-11T21:41:24.0579536Z } 2023-01-11T21:41:24.0579596Z } 2023-01-11T21:41:24.0579644Z } 2023-01-11T21:41:24.0579723Z ''') 2023-01-11T21:41:24.0579728Z 2023-01-11T21:41:24.0579733Z 2023-01-11T21:41:24.0579815Z async_compile.wait(globals()) 2023-01-11T21:41:24.0579883Z del async_compile 2023-01-11T21:41:24.0579887Z 2023-01-11T21:41:24.0579955Z def call(args): 2023-01-11T21:41:24.0580020Z arg0_1, = args 2023-01-11T21:41:24.0580088Z args.clear() 2023-01-11T21:41:24.0580292Z buf0 = empty_strided((2, 2, 2), (4, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0580481Z buf1 = empty_strided((2, 2, 2), (4, 2, 1), device='cpu', 
dtype=torch.float32) 2023-01-11T21:41:24.0580651Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0580749Z del arg0_1 2023-01-11T21:41:24.0580862Z return (buf0, buf1, ) 2023-01-11T21:41:24.0580868Z 2023-01-11T21:41:24.0580873Z 2023-01-11T21:41:24.0580970Z if __name__ == "__main__": 2023-01-11T21:41:24.0581116Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0581238Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0581469Z arg0_1 = rand_strided((1, 2, 1, 2, 2, 1, 1), (8, 4, 4, 2, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0581564Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0581569Z 2023-01-11T21:41:24.0581631Z ok (1.587s) 2023-01-11T21:41:24.0582098Z test_squeeze2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0582225Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0582659Z [2023-01-11 21:38:28,155] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 443 2023-01-11T21:41:24.0582926Z [2023-01-11 21:38:29,708] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 443 2023-01-11T21:41:24.0582931Z 2023-01-11T21:41:24.0583024Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0583093Z import torch 2023-01-11T21:41:24.0583162Z import random 2023-01-11T21:41:24.0583262Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0583381Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0583386Z 2023-01-11T21:41:24.0583463Z aten = torch.ops.aten 2023-01-11T21:41:24.0583589Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0583679Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0583684Z 2023-01-11T21:41:24.0583688Z 2023-01-11T21:41:24.0583895Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0584100Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0584217Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0584304Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0584395Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0584453Z { 2023-01-11T21:41:24.0584548Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0584609Z { 2023-01-11T21:41:24.0584684Z #pragma omp for 2023-01-11T21:41:24.0584763Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0584812Z { 2023-01-11T21:41:24.0584946Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0585077Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0585159Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0585324Z auto tmp3 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:24.0585408Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0585489Z auto tmp5 = tmp0 + tmp3; 2023-01-11T21:41:24.0585575Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0585652Z tmp5.store(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0585710Z } 2023-01-11T21:41:24.0585800Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0585879Z for(long i0=16; i0<16; i0+=1) 
2023-01-11T21:41:24.0585939Z { 2023-01-11T21:41:24.0586018Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0586104Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.0586184Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0586278Z auto tmp3 = static_cast(2); 2023-01-11T21:41:24.0586358Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0586437Z auto tmp5 = tmp0 + tmp3; 2023-01-11T21:41:24.0586514Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.0586595Z out_ptr1[i0] = tmp5; 2023-01-11T21:41:24.0586644Z } 2023-01-11T21:41:24.0586702Z } 2023-01-11T21:41:24.0586757Z } 2023-01-11T21:41:24.0586834Z ''') 2023-01-11T21:41:24.0586839Z 2023-01-11T21:41:24.0586844Z 2023-01-11T21:41:24.0586929Z async_compile.wait(globals()) 2023-01-11T21:41:24.0586998Z del async_compile 2023-01-11T21:41:24.0587003Z 2023-01-11T21:41:24.0587070Z def call(args): 2023-01-11T21:41:24.0587136Z arg0_1, = args 2023-01-11T21:41:24.0587193Z args.clear() 2023-01-11T21:41:24.0587409Z buf0 = empty_strided((1, 2, 2, 2, 2), (16, 8, 4, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0587629Z buf1 = empty_strided((2, 1, 2, 2, 2, 1), (8, 8, 4, 2, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0587784Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0587851Z del arg0_1 2023-01-11T21:41:24.0587924Z return (buf0, buf1, ) 2023-01-11T21:41:24.0587934Z 2023-01-11T21:41:24.0587938Z 2023-01-11T21:41:24.0588010Z if __name__ == "__main__": 2023-01-11T21:41:24.0588110Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0588229Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0588454Z arg0_1 = rand_strided((1, 2, 1, 2, 2, 2, 1), (16, 8, 8, 4, 2, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0588559Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0588564Z 2023-01-11T21:41:24.0588629Z ok (1.578s) 2023-01-11T21:41:24.0589089Z test_stack_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0589244Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0589506Z [2023-01-11 21:38:29,730] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 444 2023-01-11T21:41:24.0589774Z [2023-01-11 21:38:31,812] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 444 2023-01-11T21:41:24.0589779Z 2023-01-11T21:41:24.0589870Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0589926Z import torch 2023-01-11T21:41:24.0589996Z import random 2023-01-11T21:41:24.0590110Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0590227Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0590232Z 2023-01-11T21:41:24.0590308Z aten = torch.ops.aten 2023-01-11T21:41:24.0590439Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0590528Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0590533Z 2023-01-11T21:41:24.0590537Z 2023-01-11T21:41:24.0590700Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0590893Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0591010Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0591111Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0591206Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0591299Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0591359Z { 2023-01-11T21:41:24.0591455Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0591503Z { 2023-01-11T21:41:24.0591578Z #pragma omp for 2023-01-11T21:41:24.0591656Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:24.0591716Z { 2023-01-11T21:41:24.0591794Z #pragma GCC ivdep 2023-01-11T21:41:24.0591877Z for(long i1=0; i1<16; i1+=1) 2023-01-11T21:41:24.0591939Z { 2023-01-11T21:41:24.0591990Z { 2023-01-11T21:41:24.0592057Z { 2023-01-11T21:41:24.0592152Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:24.0592250Z out_ptr0[(2*i1) + (32*i0)] = tmp0; 2023-01-11T21:41:24.0592315Z } 2023-01-11T21:41:24.0592377Z } 2023-01-11T21:41:24.0592439Z } 2023-01-11T21:41:24.0592488Z } 2023-01-11T21:41:24.0592562Z #pragma omp for 2023-01-11T21:41:24.0592642Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:24.0592704Z { 2023-01-11T21:41:24.0592782Z #pragma GCC ivdep 2023-01-11T21:41:24.0592866Z for(long i1=0; i1<16; i1+=1) 2023-01-11T21:41:24.0592917Z { 2023-01-11T21:41:24.0592979Z { 2023-01-11T21:41:24.0593047Z { 2023-01-11T21:41:24.0593140Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:24.0593239Z out_ptr1[(2*i1) + (32*i0)] = tmp0; 2023-01-11T21:41:24.0593304Z } 2023-01-11T21:41:24.0593373Z } 2023-01-11T21:41:24.0593423Z } 2023-01-11T21:41:24.0593484Z } 2023-01-11T21:41:24.0593543Z } 2023-01-11T21:41:24.0593602Z } 2023-01-11T21:41:24.0593683Z ''') 2023-01-11T21:41:24.0593689Z 2023-01-11T21:41:24.0593693Z 2023-01-11T21:41:24.0593839Z async_compile.wait(globals()) 2023-01-11T21:41:24.0593910Z del async_compile 2023-01-11T21:41:24.0593915Z 2023-01-11T21:41:24.0593972Z def call(args): 2023-01-11T21:41:24.0594044Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0594114Z args.clear() 2023-01-11T21:41:24.0594325Z buf2 = empty_strided((12, 16, 2), (32, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0594430Z buf0 = as_strided(buf2, (12, 16, 1), (32, 2, 1)) # alias 2023-01-11T21:41:24.0594538Z buf1 = as_strided(buf2, (12, 16, 1), (32, 2, 1), 1) # alias 
2023-01-11T21:41:24.0594726Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0594832Z del arg0_1 2023-01-11T21:41:24.0594885Z del arg1_1 2023-01-11T21:41:24.0594953Z return (buf2, ) 2023-01-11T21:41:24.0594958Z 2023-01-11T21:41:24.0594962Z 2023-01-11T21:41:24.0595035Z if __name__ == "__main__": 2023-01-11T21:41:24.0595147Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0595570Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0595770Z arg0_1 = rand_strided((1, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0595964Z arg1_1 = rand_strided((12, 1), (1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0596065Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0596079Z 2023-01-11T21:41:24.0596132Z ok (2.105s) 2023-01-11T21:41:24.0596631Z test_std_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0596758Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0597018Z [2023-01-11 21:38:31,882] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 445 2023-01-11T21:41:24.0597023Z 2023-01-11T21:41:24.0597118Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0597184Z import torch 2023-01-11T21:41:24.0597251Z import random 2023-01-11T21:41:24.0597363Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0597470Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0597484Z 2023-01-11T21:41:24.0597549Z aten = torch.ops.aten 2023-01-11T21:41:24.0597682Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0597769Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0597776Z 2023-01-11T21:41:24.0597783Z 2023-01-11T21:41:24.0597911Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0598112Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0598229Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:24.0598328Z float* __restrict__ in_out_ptr1, 2023-01-11T21:41:24.0598424Z float* __restrict__ in_out_ptr2, 2023-01-11T21:41:24.0598509Z float* __restrict__ in_out_ptr3, 2023-01-11T21:41:24.0598603Z float* __restrict__ in_out_ptr4, 2023-01-11T21:41:24.0598695Z float* __restrict__ in_out_ptr5, 2023-01-11T21:41:24.0598792Z float* __restrict__ in_out_ptr6, 2023-01-11T21:41:24.0598895Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0598990Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0599086Z float* __restrict__ out_ptr2, 2023-01-11T21:41:24.0599166Z float* __restrict__ out_ptr4, 2023-01-11T21:41:24.0599256Z float* __restrict__ out_ptr5, 2023-01-11T21:41:24.0599349Z float* __restrict__ out_ptr7, 2023-01-11T21:41:24.0599445Z float* __restrict__ out_ptr10, 2023-01-11T21:41:24.0599537Z float* __restrict__ out_ptr12, 2023-01-11T21:41:24.0599630Z float* __restrict__ out_ptr14) 2023-01-11T21:41:24.0599689Z { 2023-01-11T21:41:24.0599762Z auto out_ptr6 = in_out_ptr0; 2023-01-11T21:41:24.0599841Z auto out_ptr8 = in_out_ptr1; 
2023-01-11T21:41:24.0599923Z auto out_ptr11 = in_out_ptr2; 2023-01-11T21:41:24.0600002Z auto out_ptr13 = in_out_ptr3; 2023-01-11T21:41:24.0600080Z auto out_ptr1 = in_out_ptr4; 2023-01-11T21:41:24.0600158Z auto out_ptr3 = in_out_ptr5; 2023-01-11T21:41:24.0600234Z auto out_ptr9 = in_out_ptr6; 2023-01-11T21:41:24.0600315Z { 2023-01-11T21:41:24.0600499Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0600572Z float tmp1 = 0; 2023-01-11T21:41:24.0600686Z auto tmp1_vec = at::vec::Vectorized(tmp1); 2023-01-11T21:41:24.0600785Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0600843Z { 2023-01-11T21:41:24.0600947Z #pragma omp for reduction(+:tmp1_vec) 2023-01-11T21:41:24.0601019Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:24.0601082Z { 2023-01-11T21:41:24.0601208Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0601287Z tmp1_vec += tmp0; 2023-01-11T21:41:24.0601348Z } 2023-01-11T21:41:24.0601541Z tmp1 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp1_vec); 2023-01-11T21:41:24.0601689Z #pragma omp for simd simdlen(4) reduction(+:tmp1) 2023-01-11T21:41:24.0601775Z for(long i0=256; i0<256; i0+=1) 2023-01-11T21:41:24.0601825Z { 2023-01-11T21:41:24.0601908Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0601980Z tmp1 += tmp0; 2023-01-11T21:41:24.0602040Z } 2023-01-11T21:41:24.0602098Z } 2023-01-11T21:41:24.0602173Z out_ptr0[0] = tmp1; 2023-01-11T21:41:24.0602234Z } 2023-01-11T21:41:24.0602281Z { 2023-01-11T21:41:24.0602461Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0602531Z float tmp6 = 0; 2023-01-11T21:41:24.0602644Z auto tmp6_vec = at::vec::Vectorized(tmp6); 2023-01-11T21:41:24.0602716Z float tmp7 = 0; 2023-01-11T21:41:24.0602827Z auto tmp7_vec = at::vec::Vectorized(tmp7); 2023-01-11T21:41:24.0602928Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0602981Z { 2023-01-11T21:41:24.0603109Z #pragma omp for reduction(+:tmp6_vec) reduction(+:tmp7_vec) 2023-01-11T21:41:24.0603192Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:24.0603256Z { 2023-01-11T21:41:24.0603519Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0603642Z auto tmp1 = at::vec::Vectorized(out_ptr0[0]); 2023-01-11T21:41:24.0603775Z auto tmp2 = at::vec::Vectorized(static_cast(256)); 2023-01-11T21:41:24.0603861Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:24.0603983Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.0604069Z auto tmp5 = tmp4.pow(2); 2023-01-11T21:41:24.0604148Z tmp6_vec += tmp5; 2023-01-11T21:41:24.0604224Z tmp7_vec += tmp0; 2023-01-11T21:41:24.0604283Z } 2023-01-11T21:41:24.0604474Z tmp6 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp6_vec); 2023-01-11T21:41:24.0604667Z tmp7 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp7_vec); 2023-01-11T21:41:24.0604808Z #pragma omp for simd simdlen(4) reduction(+:tmp6) reduction(+:tmp7) 2023-01-11T21:41:24.0604883Z for(long i0=256; i0<256; i0+=1) 2023-01-11T21:41:24.0604943Z { 2023-01-11T21:41:24.0605027Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0605111Z auto tmp1 = out_ptr0[0]; 2023-01-11T21:41:24.0605212Z auto tmp2 = static_cast(256); 2023-01-11T21:41:24.0605297Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:24.0605421Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.0605492Z auto tmp5 = tmp4 * tmp4; 2023-01-11T21:41:24.0605564Z tmp6 
+= tmp5; 2023-01-11T21:41:24.0605713Z tmp7 += tmp0; 2023-01-11T21:41:24.0605813Z } 2023-01-11T21:41:24.0605875Z } 2023-01-11T21:41:24.0605948Z out_ptr1[0] = tmp6; 2023-01-11T21:41:24.0606019Z out_ptr2[0] = tmp7; 2023-01-11T21:41:24.0606067Z } 2023-01-11T21:41:24.0606126Z { 2023-01-11T21:41:24.0606308Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0606381Z float tmp6 = 0; 2023-01-11T21:41:24.0606493Z auto tmp6_vec = at::vec::Vectorized(tmp6); 2023-01-11T21:41:24.0606566Z float tmp7 = 0; 2023-01-11T21:41:24.0606679Z auto tmp7_vec = at::vec::Vectorized(tmp7); 2023-01-11T21:41:24.0606769Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0606830Z { 2023-01-11T21:41:24.0606960Z #pragma omp for reduction(+:tmp6_vec) reduction(+:tmp7_vec) 2023-01-11T21:41:24.0607043Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:24.0607109Z { 2023-01-11T21:41:24.0607272Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0607393Z auto tmp1 = at::vec::Vectorized(out_ptr2[0]); 2023-01-11T21:41:24.0607522Z auto tmp2 = at::vec::Vectorized(static_cast(256)); 2023-01-11T21:41:24.0607596Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:24.0607724Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.0607805Z auto tmp5 = tmp4.pow(2); 2023-01-11T21:41:24.0607883Z tmp6_vec += tmp5; 2023-01-11T21:41:24.0607960Z tmp7_vec += tmp0; 2023-01-11T21:41:24.0608020Z } 2023-01-11T21:41:24.0608209Z tmp6 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp6_vec); 2023-01-11T21:41:24.0608396Z tmp7 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp7_vec); 2023-01-11T21:41:24.0608525Z #pragma omp for simd simdlen(4) reduction(+:tmp6) reduction(+:tmp7) 2023-01-11T21:41:24.0608610Z for(long i0=256; i0<256; i0+=1) 2023-01-11T21:41:24.0608671Z { 2023-01-11T21:41:24.0608753Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0608838Z auto tmp1 = out_ptr2[0]; 2023-01-11T21:41:24.0608936Z auto tmp2 = static_cast(256); 2023-01-11T21:41:24.0609020Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:24.0609134Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.0609215Z auto tmp5 = tmp4 * tmp4; 2023-01-11T21:41:24.0609285Z tmp6 += tmp5; 2023-01-11T21:41:24.0609355Z tmp7 += tmp0; 2023-01-11T21:41:24.0609414Z } 2023-01-11T21:41:24.0609473Z } 2023-01-11T21:41:24.0609544Z out_ptr3[0] = tmp6; 2023-01-11T21:41:24.0609604Z out_ptr4[0] = tmp7; 2023-01-11T21:41:24.0609664Z } 2023-01-11T21:41:24.0609759Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0609821Z { 2023-01-11T21:41:24.0609893Z #pragma omp for 2023-01-11T21:41:24.0609970Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:24.0610020Z { 2023-01-11T21:41:24.0610078Z { 2023-01-11T21:41:24.0610263Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0610338Z float tmp1 = 0; 2023-01-11T21:41:24.0610455Z auto tmp1_vec = at::vec::Vectorized(tmp1); 2023-01-11T21:41:24.0610541Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.0610604Z { 2023-01-11T21:41:24.0610744Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0610812Z tmp1_vec += tmp0; 2023-01-11T21:41:24.0610872Z } 2023-01-11T21:41:24.0611067Z tmp1 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp1_vec); 2023-01-11T21:41:24.0611227Z #pragma omp simd simdlen(4) reduction(+:tmp1) 2023-01-11T21:41:24.0611310Z for(long i1=8; i1<8; i1+=1) 
2023-01-11T21:41:24.0611369Z { 2023-01-11T21:41:24.0611467Z auto tmp0 = in_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:24.0611540Z tmp1 += tmp0; 2023-01-11T21:41:24.0611594Z } 2023-01-11T21:41:24.0611671Z out_ptr5[i0] = tmp1; 2023-01-11T21:41:24.0611732Z } 2023-01-11T21:41:24.0611792Z } 2023-01-11T21:41:24.0611867Z #pragma omp for 2023-01-11T21:41:24.0612047Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:24.0612097Z { 2023-01-11T21:41:24.0612159Z { 2023-01-11T21:41:24.0612347Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0612461Z float tmp6 = 0; 2023-01-11T21:41:24.0612580Z auto tmp6_vec = at::vec::Vectorized(tmp6); 2023-01-11T21:41:24.0612653Z float tmp7 = 0; 2023-01-11T21:41:24.0612771Z auto tmp7_vec = at::vec::Vectorized(tmp7); 2023-01-11T21:41:24.0612859Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.0612909Z { 2023-01-11T21:41:24.0613049Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0613173Z auto tmp1 = at::vec::Vectorized(out_ptr5[i0]); 2023-01-11T21:41:24.0613305Z auto tmp2 = at::vec::Vectorized(static_cast(8)); 2023-01-11T21:41:24.0613394Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:24.0613529Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.0613618Z auto tmp5 = tmp4.pow(2); 2023-01-11T21:41:24.0613702Z tmp6_vec += tmp5; 2023-01-11T21:41:24.0613767Z tmp7_vec += tmp0; 2023-01-11T21:41:24.0613828Z } 2023-01-11T21:41:24.0614134Z tmp6 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp6_vec); 2023-01-11T21:41:24.0614324Z tmp7 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp7_vec); 2023-01-11T21:41:24.0614460Z #pragma omp simd simdlen(4) reduction(+:tmp6) reduction(+:tmp7) 2023-01-11T21:41:24.0614544Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:24.0614603Z { 2023-01-11T21:41:24.0614701Z auto tmp0 = in_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:24.0614780Z auto tmp1 = out_ptr5[i0]; 2023-01-11T21:41:24.0614879Z auto tmp2 = static_cast(8); 2023-01-11T21:41:24.0614969Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:24.0615105Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.0615193Z auto tmp5 = tmp4 * tmp4; 2023-01-11T21:41:24.0615267Z tmp6 += tmp5; 2023-01-11T21:41:24.0615341Z tmp7 += tmp0; 2023-01-11T21:41:24.0615392Z } 2023-01-11T21:41:24.0615539Z out_ptr6[i0] = tmp6; 2023-01-11T21:41:24.0615618Z out_ptr7[i0] = tmp7; 2023-01-11T21:41:24.0615679Z } 2023-01-11T21:41:24.0615738Z } 2023-01-11T21:41:24.0615811Z #pragma omp for 2023-01-11T21:41:24.0615887Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0615936Z { 2023-01-11T21:41:24.0616069Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr6 + 8*i0); 2023-01-11T21:41:24.0616196Z auto tmp1 = at::vec::Vectorized(static_cast(7)); 2023-01-11T21:41:24.0616278Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0616439Z tmp2.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.0616500Z } 2023-01-11T21:41:24.0616590Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0616659Z for(long i0=32; i0<32; i0+=1) 2023-01-11T21:41:24.0616789Z { 2023-01-11T21:41:24.0616872Z auto tmp0 = out_ptr6[i0]; 2023-01-11T21:41:24.0616969Z auto tmp1 = static_cast(7); 2023-01-11T21:41:24.0617050Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0617130Z in_out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.0617190Z } 2023-01-11T21:41:24.0617252Z #pragma omp for 2023-01-11T21:41:24.0617333Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:24.0617392Z { 2023-01-11T21:41:24.0617451Z { 2023-01-11T21:41:24.0617634Z #pragma omp 
declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0617710Z float tmp6 = 0; 2023-01-11T21:41:24.0617862Z auto tmp6_vec = at::vec::Vectorized(tmp6); 2023-01-11T21:41:24.0618012Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.0618065Z { 2023-01-11T21:41:24.0618204Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0618328Z auto tmp1 = at::vec::Vectorized(out_ptr7[i0]); 2023-01-11T21:41:24.0618460Z auto tmp2 = at::vec::Vectorized(static_cast(8)); 2023-01-11T21:41:24.0618547Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:24.0618683Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.0618770Z auto tmp5 = tmp4.pow(2); 2023-01-11T21:41:24.0618838Z tmp6_vec += tmp5; 2023-01-11T21:41:24.0618903Z } 2023-01-11T21:41:24.0619096Z tmp6 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp6_vec); 2023-01-11T21:41:24.0619220Z #pragma omp simd simdlen(4) reduction(+:tmp6) 2023-01-11T21:41:24.0619425Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:24.0619490Z { 2023-01-11T21:41:24.0619587Z auto tmp0 = in_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:24.0619678Z auto tmp1 = out_ptr7[i0]; 2023-01-11T21:41:24.0619767Z auto tmp2 = static_cast(8); 2023-01-11T21:41:24.0619857Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:24.0619993Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.0620079Z auto tmp5 = tmp4 * tmp4; 2023-01-11T21:41:24.0620151Z tmp6 += tmp5; 2023-01-11T21:41:24.0620215Z } 2023-01-11T21:41:24.0620294Z out_ptr8[i0] = tmp6; 2023-01-11T21:41:24.0620345Z } 2023-01-11T21:41:24.0620406Z } 2023-01-11T21:41:24.0620483Z #pragma omp for 2023-01-11T21:41:24.0620566Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0620805Z { 2023-01-11T21:41:24.0620938Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr8 + 8*i0); 2023-01-11T21:41:24.0621115Z auto tmp1 = at::vec::Vectorized(static_cast(8)); 2023-01-11T21:41:24.0621186Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0621280Z tmp2.store(in_out_ptr1 + 8*i0); 2023-01-11T21:41:24.0621393Z } 2023-01-11T21:41:24.0621486Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0621566Z for(long i0=32; i0<32; i0+=1) 2023-01-11T21:41:24.0621628Z { 2023-01-11T21:41:24.0621710Z auto tmp0 = out_ptr8[i0]; 2023-01-11T21:41:24.0621795Z auto tmp1 = static_cast(8); 2023-01-11T21:41:24.0621874Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0621953Z in_out_ptr1[i0] = tmp2; 2023-01-11T21:41:24.0622185Z } 2023-01-11T21:41:24.0622247Z } 2023-01-11T21:41:24.0622305Z { 2023-01-11T21:41:24.0622697Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0622760Z float tmp6 = 0; 2023-01-11T21:41:24.0622876Z auto tmp6_vec = at::vec::Vectorized(tmp6); 2023-01-11T21:41:24.0622972Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0623033Z { 2023-01-11T21:41:24.0623138Z #pragma omp for reduction(+:tmp6_vec) 2023-01-11T21:41:24.0623229Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:24.0623291Z { 2023-01-11T21:41:24.0623412Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0623535Z auto tmp1 = at::vec::Vectorized(out_ptr4[0]); 2023-01-11T21:41:24.0623668Z auto tmp2 = at::vec::Vectorized(static_cast(256)); 2023-01-11T21:41:24.0623753Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:24.0623893Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.0624038Z auto tmp5 = tmp4.pow(2); 2023-01-11T21:41:24.0624115Z tmp6_vec += tmp5; 2023-01-11T21:41:24.0624177Z } 2023-01-11T21:41:24.0624358Z tmp6 = 
at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp6_vec); 2023-01-11T21:41:24.0624478Z #pragma omp for simd simdlen(4) reduction(+:tmp6) 2023-01-11T21:41:24.0624568Z for(long i0=256; i0<256; i0+=1) 2023-01-11T21:41:24.0624631Z { 2023-01-11T21:41:24.0624714Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0624798Z auto tmp1 = out_ptr4[0]; 2023-01-11T21:41:24.0624899Z auto tmp2 = static_cast(256); 2023-01-11T21:41:24.0625082Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:24.0625213Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.0625296Z auto tmp5 = tmp4 * tmp4; 2023-01-11T21:41:24.0625374Z tmp6 += tmp5; 2023-01-11T21:41:24.0625439Z } 2023-01-11T21:41:24.0625499Z } 2023-01-11T21:41:24.0625574Z out_ptr9[0] = tmp6; 2023-01-11T21:41:24.0625623Z } 2023-01-11T21:41:24.0625717Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0625776Z { 2023-01-11T21:41:24.0625850Z #pragma omp for 2023-01-11T21:41:24.0625929Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:24.0625987Z { 2023-01-11T21:41:24.0626048Z { 2023-01-11T21:41:24.0626099Z { 2023-01-11T21:41:24.0626176Z float tmp1 = 0; 2023-01-11T21:41:24.0626269Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:24.0626333Z { 2023-01-11T21:41:24.0626400Z { 2023-01-11T21:41:24.0626501Z auto tmp0 = in_ptr0[i0 + (32*i1)]; 2023-01-11T21:41:24.0626579Z tmp1 += tmp0; 2023-01-11T21:41:24.0626633Z } 2023-01-11T21:41:24.0626702Z } 2023-01-11T21:41:24.0626782Z out_ptr10[i0] = tmp1; 2023-01-11T21:41:24.0626845Z } 2023-01-11T21:41:24.0626906Z } 2023-01-11T21:41:24.0626966Z } 2023-01-11T21:41:24.0627029Z #pragma omp for 2023-01-11T21:41:24.0627104Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:24.0627164Z { 2023-01-11T21:41:24.0627223Z { 2023-01-11T21:41:24.0627283Z { 2023-01-11T21:41:24.0627357Z float tmp6 = 0; 2023-01-11T21:41:24.0627433Z float tmp7 = 0; 2023-01-11T21:41:24.0627509Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:24.0627574Z { 2023-01-11T21:41:24.0627637Z { 2023-01-11T21:41:24.0627739Z auto tmp0 = in_ptr0[i0 + (32*i1)]; 2023-01-11T21:41:24.0627835Z auto tmp1 = out_ptr10[i0]; 2023-01-11T21:41:24.0627986Z auto tmp2 = static_cast(8); 2023-01-11T21:41:24.0628083Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:24.0628216Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.0628305Z auto tmp5 = tmp4 * tmp4; 2023-01-11T21:41:24.0628386Z tmp6 += tmp5; 2023-01-11T21:41:24.0628465Z tmp7 += tmp0; 2023-01-11T21:41:24.0628527Z } 2023-01-11T21:41:24.0628590Z } 2023-01-11T21:41:24.0628672Z out_ptr11[i0] = tmp6; 2023-01-11T21:41:24.0628741Z out_ptr12[i0] = tmp7; 2023-01-11T21:41:24.0628805Z } 2023-01-11T21:41:24.0628866Z } 2023-01-11T21:41:24.0628926Z } 2023-01-11T21:41:24.0628999Z #pragma omp for 2023-01-11T21:41:24.0629078Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0629139Z { 2023-01-11T21:41:24.0629296Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr11 + 8*i0); 2023-01-11T21:41:24.0629425Z auto tmp1 = at::vec::Vectorized(static_cast(7)); 2023-01-11T21:41:24.0629507Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0629586Z auto tmp3 = tmp2.sqrt(); 2023-01-11T21:41:24.0629678Z tmp3.store(in_out_ptr2 + 8*i0); 2023-01-11T21:41:24.0629735Z } 2023-01-11T21:41:24.0629825Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0629893Z for(long i0=32; i0<32; i0+=1) 2023-01-11T21:41:24.0629957Z { 2023-01-11T21:41:24.0630041Z auto tmp0 = out_ptr11[i0]; 2023-01-11T21:41:24.0630136Z auto tmp1 = static_cast(7); 2023-01-11T21:41:24.0630218Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0630303Z auto tmp3 = std::sqrt(tmp2); 2023-01-11T21:41:24.0630382Z 
in_out_ptr2[i0] = tmp3; 2023-01-11T21:41:24.0630430Z } 2023-01-11T21:41:24.0630499Z #pragma omp for 2023-01-11T21:41:24.0630583Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:24.0630721Z { 2023-01-11T21:41:24.0630783Z { 2023-01-11T21:41:24.0630843Z { 2023-01-11T21:41:24.0630919Z float tmp6 = 0; 2023-01-11T21:41:24.0630996Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:24.0631058Z { 2023-01-11T21:41:24.0631124Z { 2023-01-11T21:41:24.0631226Z auto tmp0 = in_ptr0[i0 + (32*i1)]; 2023-01-11T21:41:24.0631322Z auto tmp1 = out_ptr12[i0]; 2023-01-11T21:41:24.0631423Z auto tmp2 = static_cast(8); 2023-01-11T21:41:24.0631515Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:24.0631646Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.0631733Z auto tmp5 = tmp4 * tmp4; 2023-01-11T21:41:24.0631812Z tmp6 += tmp5; 2023-01-11T21:41:24.0631883Z } 2023-01-11T21:41:24.0631945Z } 2023-01-11T21:41:24.0632025Z out_ptr13[i0] = tmp6; 2023-01-11T21:41:24.0632087Z } 2023-01-11T21:41:24.0632137Z } 2023-01-11T21:41:24.0632196Z } 2023-01-11T21:41:24.0632270Z #pragma omp for 2023-01-11T21:41:24.0632348Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0632406Z { 2023-01-11T21:41:24.0632601Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr13 + 8*i0); 2023-01-11T21:41:24.0632730Z auto tmp1 = at::vec::Vectorized(static_cast(8)); 2023-01-11T21:41:24.0632802Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0632880Z auto tmp3 = tmp2.sqrt(); 2023-01-11T21:41:24.0632973Z tmp3.store(in_out_ptr3 + 8*i0); 2023-01-11T21:41:24.0633028Z } 2023-01-11T21:41:24.0633411Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0633531Z for(long i0=32; i0<32; i0+=1) 2023-01-11T21:41:24.0633593Z { 2023-01-11T21:41:24.0633666Z auto tmp0 = out_ptr13[i0]; 2023-01-11T21:41:24.0633819Z auto tmp1 = static_cast(8); 2023-01-11T21:41:24.0633903Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0633991Z auto tmp3 = std::sqrt(tmp2); 2023-01-11T21:41:24.0634124Z in_out_ptr3[i0] = tmp3; 2023-01-11T21:41:24.0634184Z } 2023-01-11T21:41:24.0634258Z #pragma omp for 2023-01-11T21:41:24.0634391Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0634449Z { 2023-01-11T21:41:24.0634528Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.0634589Z { 2023-01-11T21:41:24.0634727Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (32*i0)); 2023-01-11T21:41:24.0634865Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr0 + 8 + (8*i1) + (32*i0)); 2023-01-11T21:41:24.0635043Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr0 + 16 + (8*i1) + (32*i0)); 2023-01-11T21:41:24.0635249Z auto tmp5 = at::vec::Vectorized::loadu(in_ptr0 + 24 + (8*i1) + (32*i0)); 2023-01-11T21:41:24.0635326Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0635408Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0635490Z auto tmp6 = tmp4 + tmp5; 2023-01-11T21:41:24.0635619Z auto tmp7 = at::vec::Vectorized(static_cast(4)); 2023-01-11T21:41:24.0635702Z auto tmp8 = tmp6 / tmp7; 2023-01-11T21:41:24.0635837Z auto tmp9 = tmp0 - tmp8; 2023-01-11T21:41:24.0635923Z auto tmp10 = tmp9.pow(2); 2023-01-11T21:41:24.0636039Z auto tmp11 = tmp1 - tmp8; 2023-01-11T21:41:24.0636128Z auto tmp12 = tmp11.pow(2); 2023-01-11T21:41:24.0636216Z auto tmp13 = tmp10 + tmp12; 2023-01-11T21:41:24.0636340Z auto tmp14 = tmp3 - tmp8; 2023-01-11T21:41:24.0636433Z auto tmp15 = tmp14.pow(2); 2023-01-11T21:41:24.0636520Z auto tmp16 = tmp13 + tmp15; 2023-01-11T21:41:24.0636647Z auto tmp17 = tmp5 - tmp8; 2023-01-11T21:41:24.0636722Z auto tmp18 = tmp17.pow(2); 2023-01-11T21:41:24.0636808Z auto tmp19 = tmp16 + tmp18; 2023-01-11T21:41:24.0636940Z auto tmp20 = 
at::vec::Vectorized(static_cast(3)); 2023-01-11T21:41:24.0637027Z auto tmp21 = tmp19 / tmp20; 2023-01-11T21:41:24.0637111Z auto tmp22 = tmp21.sqrt(); 2023-01-11T21:41:24.0637211Z tmp22.store(out_ptr14 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0637271Z } 2023-01-11T21:41:24.0637350Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.0637428Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:24.0637489Z { 2023-01-11T21:41:24.0637581Z auto tmp0 = in_ptr0[i1 + (32*i0)]; 2023-01-11T21:41:24.0637682Z auto tmp1 = in_ptr0[8 + i1 + (32*i0)]; 2023-01-11T21:41:24.0637778Z auto tmp3 = in_ptr0[16 + i1 + (32*i0)]; 2023-01-11T21:41:24.0637874Z auto tmp5 = in_ptr0[24 + i1 + (32*i0)]; 2023-01-11T21:41:24.0637957Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0638089Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0638176Z auto tmp6 = tmp4 + tmp5; 2023-01-11T21:41:24.0638275Z auto tmp7 = static_cast(4); 2023-01-11T21:41:24.0638362Z auto tmp8 = tmp6 / tmp7; 2023-01-11T21:41:24.0638487Z auto tmp9 = tmp0 - tmp8; 2023-01-11T21:41:24.0638695Z auto tmp10 = tmp9 * tmp9; 2023-01-11T21:41:24.0638823Z auto tmp11 = tmp1 - tmp8; 2023-01-11T21:41:24.0638900Z auto tmp12 = tmp11 * tmp11; 2023-01-11T21:41:24.0638986Z auto tmp13 = tmp10 + tmp12; 2023-01-11T21:41:24.0639115Z auto tmp14 = tmp3 - tmp8; 2023-01-11T21:41:24.0639239Z auto tmp15 = tmp14 * tmp14; 2023-01-11T21:41:24.0639328Z auto tmp16 = tmp13 + tmp15; 2023-01-11T21:41:24.0639454Z auto tmp17 = tmp5 - tmp8; 2023-01-11T21:41:24.0639541Z auto tmp18 = tmp17 * tmp17; 2023-01-11T21:41:24.0639616Z auto tmp19 = tmp16 + tmp18; 2023-01-11T21:41:24.0639716Z auto tmp20 = static_cast(3); 2023-01-11T21:41:24.0639800Z auto tmp21 = tmp19 / tmp20; 2023-01-11T21:41:24.0639897Z auto tmp22 = std::sqrt(tmp21); 2023-01-11T21:41:24.0639987Z out_ptr14[i1 + (8*i0)] = tmp22; 2023-01-11T21:41:24.0640049Z } 2023-01-11T21:41:24.0640113Z } 2023-01-11T21:41:24.0640178Z #pragma omp single 2023-01-11T21:41:24.0640237Z { 2023-01-11T21:41:24.0640299Z { 2023-01-11T21:41:24.0640361Z { 2023-01-11T21:41:24.0640449Z auto tmp0 = out_ptr1[0]; 2023-01-11T21:41:24.0640582Z auto tmp1 = static_cast(255); 2023-01-11T21:41:24.0640661Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0646376Z in_out_ptr4[0] = tmp2; 2023-01-11T21:41:24.0646444Z } 2023-01-11T21:41:24.0646506Z } 2023-01-11T21:41:24.0646569Z } 2023-01-11T21:41:24.0646648Z #pragma omp single 2023-01-11T21:41:24.0646711Z { 2023-01-11T21:41:24.0646764Z { 2023-01-11T21:41:24.0646826Z { 2023-01-11T21:41:24.0647040Z auto tmp0 = out_ptr3[0]; 2023-01-11T21:41:24.0647146Z auto tmp1 = static_cast(256); 2023-01-11T21:41:24.0647234Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0647320Z in_out_ptr5[0] = tmp2; 2023-01-11T21:41:24.0647384Z } 2023-01-11T21:41:24.0647434Z } 2023-01-11T21:41:24.0647493Z } 2023-01-11T21:41:24.0647571Z #pragma omp single 2023-01-11T21:41:24.0647640Z { 2023-01-11T21:41:24.0647699Z { 2023-01-11T21:41:24.0647759Z { 2023-01-11T21:41:24.0647837Z auto tmp0 = out_ptr9[0]; 2023-01-11T21:41:24.0647941Z auto tmp1 = static_cast(256); 2023-01-11T21:41:24.0648026Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0648127Z auto tmp3 = std::sqrt(tmp2); 2023-01-11T21:41:24.0648213Z in_out_ptr6[0] = tmp3; 2023-01-11T21:41:24.0648275Z } 2023-01-11T21:41:24.0648335Z } 2023-01-11T21:41:24.0648383Z } 2023-01-11T21:41:24.0648440Z } 2023-01-11T21:41:24.0648499Z } 2023-01-11T21:41:24.0648597Z ''') 2023-01-11T21:41:24.0648603Z 2023-01-11T21:41:24.0648607Z 2023-01-11T21:41:24.0648696Z async_compile.wait(globals()) 2023-01-11T21:41:24.0649050Z del async_compile 
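Note on the kernel above (added for context, not part of the log): it shows the shape of TorchInductor's variance/standard-deviation lowering — per-output vectorized accumulator loops over at::vec::Vectorized<float>, a horizontal vec_reduce_all, a scalar tail loop, and divisors of N (biased) or N-1 (Bessel-corrected), with a final sqrt for the std outputs. The exact test body is not visible in the log, so the Python sketch below is only an assumption inferred from the (2, 4, 4, 8) input and the divide-by-7/8/255/256 patterns; the function name and the particular var/std calls are hypothetical.

import torch

def fn(x):
    # Hypothetical reproduction of the reduction patterns visible in the kernel:
    # mean-centred squared deviations divided by N (biased) or N-1 (Bessel),
    # with and without a final sqrt.
    return (
        x.var(dim=-1, unbiased=False),   # divide by 8, no sqrt
        x.std(dim=-1),                   # divide by 7, then sqrt
        x.std(dim=-1, unbiased=False),   # divide by 8, then sqrt
        x.var(),                         # divide by 255 over all 256 elements
        x.std(unbiased=False),           # divide by 256, then sqrt
    )

x = torch.randn(2, 4, 4, 8)
compiled = torch.compile(fn)  # routes through the same TorchInductor backend that emitted the kernel above
compiled(x)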
2023-01-11T21:41:24.0649056Z 2023-01-11T21:41:24.0649125Z def call(args): 2023-01-11T21:41:24.0649246Z arg0_1, = args 2023-01-11T21:41:24.0649311Z args.clear() 2023-01-11T21:41:24.0649525Z buf0 = empty_strided((1, 1, 1, 1), (1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0649712Z buf1 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0649920Z buf2 = empty_strided((1, 1, 1, 1), (1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0650098Z buf3 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0650308Z buf10 = empty_strided((1, 1, 1, 1), (1, 1, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0650513Z buf4 = empty_strided((2, 4, 4, 1), (16, 4, 1, 32), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0650712Z buf5 = empty_strided((2, 4, 4), (16, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0650905Z buf7 = empty_strided((2, 4, 4, 1), (16, 4, 1, 32), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0650986Z buf6 = buf5; del buf5 # reuse 2023-01-11T21:41:24.0651262Z buf8 = empty_strided((2, 4, 4), (16, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0651343Z buf9 = buf8; del buf8 # reuse 2023-01-11T21:41:24.0651528Z buf11 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0651736Z buf12 = empty_strided((1, 1, 4, 8), (32, 32, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0651930Z buf13 = empty_strided((4, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0652124Z buf15 = empty_strided((1, 1, 4, 8), (32, 32, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0652206Z buf14 = buf13; del buf13 # reuse 2023-01-11T21:41:24.0652396Z buf16 = empty_strided((4, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0652479Z buf17 = buf16; del buf16 # reuse 2023-01-11T21:41:24.0652684Z buf18 = empty_strided((2, 4, 1, 8), (32, 8, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0652769Z buf19 = buf1; del buf1 # reuse 2023-01-11T21:41:24.0652887Z buf20 = buf3; del buf3 # reuse 2023-01-11T21:41:24.0652971Z buf21 = buf11; del buf11 # reuse 2023-01-11T21:41:24.0653426Z kernel_cpp_0(c_void_p(buf6.data_ptr()), c_void_p(buf9.data_ptr()), c_void_p(buf14.data_ptr()), c_void_p(buf17.data_ptr()), c_void_p(buf19.data_ptr()), c_void_p(buf20.data_ptr()), c_void_p(buf21.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf10.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(buf7.data_ptr()), c_void_p(buf12.data_ptr()), c_void_p(buf15.data_ptr()), c_void_p(buf18.data_ptr())) 2023-01-11T21:41:24.0653494Z del arg0_1 2023-01-11T21:41:24.0653612Z return (buf19, buf20, buf6, buf9, buf21, buf14, buf17, buf18, ) 2023-01-11T21:41:24.0653618Z 2023-01-11T21:41:24.0653623Z 2023-01-11T21:41:24.0653697Z if __name__ == "__main__": 2023-01-11T21:41:24.0653811Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0653933Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0654146Z arg0_1 = rand_strided((2, 4, 4, 8), (128, 32, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0654252Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0654523Z [2023-01-11 21:38:34,169] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 445 2023-01-11T21:41:24.0654528Z 2023-01-11T21:41:24.0654596Z ok (2.357s) 2023-01-11T21:41:24.0654917Z test_strided_inputs_cpu 
(__main__.CpuTests) ... [2023-01-11 21:38:34,186] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 446 2023-01-11T21:41:24.0655185Z [2023-01-11 21:38:36,174] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 446 2023-01-11T21:41:24.0655190Z 2023-01-11T21:41:24.0655286Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0655356Z import torch 2023-01-11T21:41:24.0655424Z import random 2023-01-11T21:41:24.0655539Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0655665Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0655670Z 2023-01-11T21:41:24.0655747Z aten = torch.ops.aten 2023-01-11T21:41:24.0655867Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0655956Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0655962Z 2023-01-11T21:41:24.0655966Z 2023-01-11T21:41:24.0656100Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0656304Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0656421Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0656524Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0656623Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0656683Z { 2023-01-11T21:41:24.0656768Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0656828Z { 2023-01-11T21:41:24.0656903Z #pragma omp for 2023-01-11T21:41:24.0657040Z for(long i0=0; i0<128; i0+=1) 2023-01-11T21:41:24.0657104Z { 2023-01-11T21:41:24.0657169Z { 2023-01-11T21:41:24.0657220Z { 2023-01-11T21:41:24.0657312Z auto tmp0 = in_ptr0[2*i0]; 2023-01-11T21:41:24.0657399Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.0657489Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0657570Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.0657634Z } 2023-01-11T21:41:24.0657776Z } 2023-01-11T21:41:24.0657826Z } 2023-01-11T21:41:24.0657885Z } 2023-01-11T21:41:24.0657944Z } 2023-01-11T21:41:24.0658023Z ''') 2023-01-11T21:41:24.0658028Z 2023-01-11T21:41:24.0658032Z 2023-01-11T21:41:24.0658124Z async_compile.wait(globals()) 2023-01-11T21:41:24.0658194Z del async_compile 2023-01-11T21:41:24.0658198Z 2023-01-11T21:41:24.0658265Z def call(args): 2023-01-11T21:41:24.0658326Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0658430Z args.clear() 2023-01-11T21:41:24.0658630Z buf0 = empty_strided((8, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0658789Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0658858Z del arg0_1 2023-01-11T21:41:24.0658922Z del arg1_1 2023-01-11T21:41:24.0658991Z return (buf0, ) 2023-01-11T21:41:24.0658996Z 2023-01-11T21:41:24.0659000Z 2023-01-11T21:41:24.0659074Z if __name__ == "__main__": 2023-01-11T21:41:24.0659175Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0659294Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0659553Z arg0_1 = rand_strided((8, 16), (32, 2), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0659750Z arg1_1 = rand_strided((8, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0659864Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0659871Z 2023-01-11T21:41:24.0660083Z ok (2.003s) 2023-01-11T21:41:24.0660547Z test_sum1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. 
It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0660674Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0660936Z [2023-01-11 21:38:36,203] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 447 2023-01-11T21:41:24.0661263Z [2023-01-11 21:38:38,311] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 447 2023-01-11T21:41:24.0661279Z 2023-01-11T21:41:24.0661360Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0661426Z import torch 2023-01-11T21:41:24.0661494Z import random 2023-01-11T21:41:24.0661614Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0661732Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0661738Z 2023-01-11T21:41:24.0661813Z aten = torch.ops.aten 2023-01-11T21:41:24.0662006Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0662086Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0662091Z 2023-01-11T21:41:24.0662095Z 2023-01-11T21:41:24.0662228Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0662613Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0662735Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0662837Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0662935Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0662994Z { 2023-01-11T21:41:24.0663079Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0663294Z { 2023-01-11T21:41:24.0663371Z #pragma omp for 2023-01-11T21:41:24.0663451Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0663513Z { 2023-01-11T21:41:24.0663573Z { 2023-01-11T21:41:24.0663766Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0663842Z float tmp3 = 0; 2023-01-11T21:41:24.0663951Z auto tmp3_vec = at::vec::Vectorized(tmp3); 2023-01-11T21:41:24.0664040Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.0664102Z { 2023-01-11T21:41:24.0664242Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0664377Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0664470Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0664606Z tmp3_vec += tmp2; 2023-01-11T21:41:24.0664804Z } 2023-01-11T21:41:24.0665003Z tmp3 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp3_vec); 2023-01-11T21:41:24.0665121Z #pragma omp simd simdlen(4) reduction(+:tmp3) 2023-01-11T21:41:24.0665204Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:24.0665268Z { 2023-01-11T21:41:24.0665368Z auto tmp0 = in_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:24.0665461Z auto tmp1 = in_ptr1[i1 + (8*i0)]; 2023-01-11T21:41:24.0665551Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0665615Z tmp3 += tmp2; 2023-01-11T21:41:24.0665675Z } 2023-01-11T21:41:24.0665755Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.0665815Z } 2023-01-11T21:41:24.0665876Z } 2023-01-11T21:41:24.0665936Z } 2023-01-11T21:41:24.0665994Z } 2023-01-11T21:41:24.0666073Z ''') 2023-01-11T21:41:24.0666081Z 2023-01-11T21:41:24.0666086Z 2023-01-11T21:41:24.0666173Z async_compile.wait(globals()) 2023-01-11T21:41:24.0666241Z del async_compile 
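Context note (added, not part of the log): the kernel above is TorchInductor's fused elementwise-plus-reduction lowering — the add and the sum over the last dimension happen in one pass, accumulating into an 8-wide at::vec::Vectorized<float>, then reducing horizontally with vec_reduce_all; the scalar tail loop is empty here because the reduced dimension (8) is exactly one vector wide. A hedged sketch of the eager computation this most likely corresponds to, inferred from the (8, 8) inputs and the (8,) output buffer in the wrapper below rather than from the test source:

import torch

def fn(a, b):
    # Assumed test body: fused elementwise add + sum over the last dim,
    # matching the single-pass add-then-accumulate structure of the kernel above.
    return (a + b).sum(dim=-1)

a = torch.randn(8, 8)
b = torch.randn(8, 8)
out = torch.compile(fn)(a, b)  # inductor backend; generates a kernel of the same shape as above
assert out.shape == (8,)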
2023-01-11T21:41:24.0666246Z 2023-01-11T21:41:24.0666312Z def call(args): 2023-01-11T21:41:24.0666385Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0666452Z args.clear() 2023-01-11T21:41:24.0666643Z buf0 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0666792Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0666856Z del arg0_1 2023-01-11T21:41:24.0666920Z del arg1_1 2023-01-11T21:41:24.0666987Z return (buf0, ) 2023-01-11T21:41:24.0666992Z 2023-01-11T21:41:24.0666996Z 2023-01-11T21:41:24.0667072Z if __name__ == "__main__": 2023-01-11T21:41:24.0667183Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0667304Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0667504Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0667683Z arg1_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0667793Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0667798Z 2023-01-11T21:41:24.0667865Z ok (2.138s) 2023-01-11T21:41:24.0668323Z test_sum2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0668507Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0668775Z [2023-01-11 21:38:38,343] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 448 2023-01-11T21:41:24.0669076Z [2023-01-11 21:38:40,281] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 448 2023-01-11T21:41:24.0669082Z 2023-01-11T21:41:24.0669172Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0669239Z import torch 2023-01-11T21:41:24.0669296Z import random 2023-01-11T21:41:24.0669404Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0669520Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0669525Z 2023-01-11T21:41:24.0669603Z aten = torch.ops.aten 2023-01-11T21:41:24.0669733Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0669824Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0669829Z 2023-01-11T21:41:24.0669833Z 2023-01-11T21:41:24.0669966Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0670237Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0670375Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0670480Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0670577Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0670672Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0670731Z { 2023-01-11T21:41:24.0670827Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0670887Z { 2023-01-11T21:41:24.0670950Z #pragma omp for 2023-01-11T21:41:24.0671028Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0671088Z { 2023-01-11T21:41:24.0671166Z #pragma GCC ivdep 2023-01-11T21:41:24.0671248Z for(long i1=0; i1<21; i1+=1) 2023-01-11T21:41:24.0671309Z { 2023-01-11T21:41:24.0671370Z { 2023-01-11T21:41:24.0671423Z { 2023-01-11T21:41:24.0671506Z float tmp3 = 0; 
2023-01-11T21:41:24.0671600Z for(long i2=0; i2<27; i2+=1) 2023-01-11T21:41:24.0671670Z { 2023-01-11T21:41:24.0671745Z { 2023-01-11T21:41:24.0672064Z auto tmp0 = in_ptr0[i1 + (21*i2) + (567*i0)]; 2023-01-11T21:41:24.0672179Z auto tmp1 = in_ptr1[i1 + (21*i2) + (567*i0)]; 2023-01-11T21:41:24.0672265Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0672352Z tmp3 += tmp2; 2023-01-11T21:41:24.0672416Z } 2023-01-11T21:41:24.0672483Z } 2023-01-11T21:41:24.0672577Z out_ptr0[i1 + (21*i0)] = tmp3; 2023-01-11T21:41:24.0672640Z } 2023-01-11T21:41:24.0672703Z } 2023-01-11T21:41:24.0672754Z } 2023-01-11T21:41:24.0672816Z } 2023-01-11T21:41:24.0672889Z #pragma omp for 2023-01-11T21:41:24.0672971Z for(long i0=0; i0<216; i0+=1) 2023-01-11T21:41:24.0673032Z { 2023-01-11T21:41:24.0673095Z { 2023-01-11T21:41:24.0673288Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0673469Z float tmp3 = 0; 2023-01-11T21:41:24.0673589Z auto tmp3_vec = at::vec::Vectorized(tmp3); 2023-01-11T21:41:24.0673673Z for(long i1=0; i1<2; i1+=1) 2023-01-11T21:41:24.0673786Z { 2023-01-11T21:41:24.0674145Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (21*i0)); 2023-01-11T21:41:24.0674283Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + (8*i1) + (21*i0)); 2023-01-11T21:41:24.0674372Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0674451Z tmp3_vec += tmp2; 2023-01-11T21:41:24.0674502Z } 2023-01-11T21:41:24.0674701Z tmp3 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp3_vec); 2023-01-11T21:41:24.0674864Z #pragma omp simd simdlen(4) reduction(+:tmp3) 2023-01-11T21:41:24.0674951Z for(long i1=16; i1<21; i1+=1) 2023-01-11T21:41:24.0675013Z { 2023-01-11T21:41:24.0675109Z auto tmp0 = in_ptr0[i1 + (21*i0)]; 2023-01-11T21:41:24.0675203Z auto tmp1 = in_ptr1[i1 + (21*i0)]; 2023-01-11T21:41:24.0675292Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0675356Z tmp3 += tmp2; 2023-01-11T21:41:24.0675418Z } 2023-01-11T21:41:24.0675495Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:24.0675556Z } 2023-01-11T21:41:24.0675617Z } 2023-01-11T21:41:24.0675678Z } 2023-01-11T21:41:24.0675725Z } 2023-01-11T21:41:24.0675818Z ''') 2023-01-11T21:41:24.0675823Z 2023-01-11T21:41:24.0675828Z 2023-01-11T21:41:24.0675915Z async_compile.wait(globals()) 2023-01-11T21:41:24.0675985Z del async_compile 2023-01-11T21:41:24.0675992Z 2023-01-11T21:41:24.0676087Z def call(args): 2023-01-11T21:41:24.0676161Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0676228Z args.clear() 2023-01-11T21:41:24.0676428Z buf0 = empty_strided((8, 21), (21, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0676621Z buf1 = empty_strided((8, 9, 3), (27, 3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0676805Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0676869Z del arg0_1 2023-01-11T21:41:24.0676931Z del arg1_1 2023-01-11T21:41:24.0677008Z return (buf0, buf1, ) 2023-01-11T21:41:24.0677013Z 2023-01-11T21:41:24.0677017Z 2023-01-11T21:41:24.0677091Z if __name__ == "__main__": 2023-01-11T21:41:24.0677202Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0677323Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0677527Z arg0_1 = rand_strided((8, 9, 3, 21), (567, 63, 21, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0677740Z arg1_1 = rand_strided((8, 9, 3, 21), (567, 63, 21, 1), device='cpu', dtype=torch.float32) 
2023-01-11T21:41:24.0677854Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0677859Z 2023-01-11T21:41:24.0677921Z ok (1.969s) 2023-01-11T21:41:24.0678386Z test_sum3_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0678509Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0678770Z [2023-01-11 21:38:40,311] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 449 2023-01-11T21:41:24.0679035Z [2023-01-11 21:38:41,879] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 449 2023-01-11T21:41:24.0679040Z 2023-01-11T21:41:24.0679132Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0679189Z import torch 2023-01-11T21:41:24.0679254Z import random 2023-01-11T21:41:24.0679366Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0679484Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0679489Z 2023-01-11T21:41:24.0679564Z aten = torch.ops.aten 2023-01-11T21:41:24.0679694Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0679782Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0679787Z 2023-01-11T21:41:24.0679791Z 2023-01-11T21:41:24.0679919Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0680111Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0680226Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0680358Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0680456Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0680549Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.0680640Z float* __restrict__ out_ptr2) 2023-01-11T21:41:24.0680695Z { 2023-01-11T21:41:24.0680781Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0680841Z { 2023-01-11T21:41:24.0680913Z #pragma omp for 2023-01-11T21:41:24.0680991Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.0681050Z { 2023-01-11T21:41:24.0681110Z { 2023-01-11T21:41:24.0681295Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0681370Z float tmp3 = 0; 2023-01-11T21:41:24.0681478Z auto tmp3_vec = at::vec::Vectorized(tmp3); 2023-01-11T21:41:24.0681673Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.0681736Z { 2023-01-11T21:41:24.0681876Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (10*i0)); 2023-01-11T21:41:24.0682011Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i1); 2023-01-11T21:41:24.0682101Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0682203Z tmp2.store(out_ptr0 + (8*i1) + (10*i0)); 2023-01-11T21:41:24.0682272Z tmp3_vec += tmp2; 2023-01-11T21:41:24.0682334Z } 2023-01-11T21:41:24.0682525Z tmp3 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp3_vec); 2023-01-11T21:41:24.0682641Z #pragma omp simd simdlen(4) reduction(+:tmp3) 2023-01-11T21:41:24.0682727Z for(long i1=8; i1<10; i1+=1) 2023-01-11T21:41:24.0682789Z { 2023-01-11T21:41:24.0682884Z auto tmp0 = in_ptr0[i1 + (10*i0)]; 2023-01-11T21:41:24.0682973Z auto tmp1 = in_ptr1[i1]; 
2023-01-11T21:41:24.0683048Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0683137Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.0683302Z tmp3 += tmp2; 2023-01-11T21:41:24.0683367Z } 2023-01-11T21:41:24.0683445Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:24.0683504Z } 2023-01-11T21:41:24.0683919Z } 2023-01-11T21:41:24.0683986Z #pragma omp for 2023-01-11T21:41:24.0684064Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:24.0684124Z { 2023-01-11T21:41:24.0684255Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.0684388Z auto tmp1 = at::vec::Vectorized(static_cast(10)); 2023-01-11T21:41:24.0684469Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0684559Z tmp2.store(out_ptr2 + 8*i0); 2023-01-11T21:41:24.0684612Z } 2023-01-11T21:41:24.0684702Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0684779Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:24.0684840Z { 2023-01-11T21:41:24.0684920Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:24.0685019Z auto tmp1 = static_cast(10); 2023-01-11T21:41:24.0685097Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0685163Z out_ptr2[i0] = tmp2; 2023-01-11T21:41:24.0685221Z } 2023-01-11T21:41:24.0685348Z } 2023-01-11T21:41:24.0685404Z } 2023-01-11T21:41:24.0685484Z ''') 2023-01-11T21:41:24.0685489Z 2023-01-11T21:41:24.0685605Z 2023-01-11T21:41:24.0685693Z async_compile.wait(globals()) 2023-01-11T21:41:24.0685764Z del async_compile 2023-01-11T21:41:24.0685769Z 2023-01-11T21:41:24.0685827Z def call(args): 2023-01-11T21:41:24.0685900Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0685969Z args.clear() 2023-01-11T21:41:24.0686171Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0686398Z buf1 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0686588Z buf2 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0686805Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:24.0686871Z del arg0_1 2023-01-11T21:41:24.0686925Z del arg1_1 2023-01-11T21:41:24.0687006Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:24.0687011Z 2023-01-11T21:41:24.0687016Z 2023-01-11T21:41:24.0687090Z if __name__ == "__main__": 2023-01-11T21:41:24.0687205Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0687326Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0687525Z arg0_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0687753Z arg1_1 = rand_strided((1, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0687858Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0687875Z 2023-01-11T21:41:24.0687928Z ok (1.604s) 2023-01-11T21:41:24.0688388Z test_sum4_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0688515Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0688775Z [2023-01-11 21:38:41,922] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 450 2023-01-11T21:41:24.0689038Z [2023-01-11 21:38:43,664] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 450 2023-01-11T21:41:24.0689048Z 2023-01-11T21:41:24.0689143Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0689209Z import torch 2023-01-11T21:41:24.0689277Z import random 2023-01-11T21:41:24.0689387Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0689493Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0689498Z 2023-01-11T21:41:24.0689573Z aten = torch.ops.aten 2023-01-11T21:41:24.0689703Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0689794Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0689799Z 2023-01-11T21:41:24.0689804Z 2023-01-11T21:41:24.0689934Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0690136Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0690251Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0690349Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0690438Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.0690530Z float* __restrict__ out_ptr2, 2023-01-11T21:41:24.0690621Z float* __restrict__ out_ptr3, 2023-01-11T21:41:24.0690713Z float* __restrict__ out_ptr4) 2023-01-11T21:41:24.0690774Z { 2023-01-11T21:41:24.0690870Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0690931Z { 2023-01-11T21:41:24.0690995Z #pragma omp for 2023-01-11T21:41:24.0691076Z for(long i0=0; i0<128; i0+=1) 2023-01-11T21:41:24.0691136Z { 2023-01-11T21:41:24.0691267Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0691400Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0691485Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0691571Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0691620Z } 2023-01-11T21:41:24.0691747Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0691829Z for(long i0=1024; i0<1024; i0+=1) 2023-01-11T21:41:24.0691885Z { 2023-01-11T21:41:24.0691968Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0692066Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.0692149Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0692216Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.0692274Z } 2023-01-11T21:41:24.0692348Z #pragma omp for 2023-01-11T21:41:24.0692427Z for(long i0=0; i0<128; i0+=1) 2023-01-11T21:41:24.0692560Z { 2023-01-11T21:41:24.0692622Z { 2023-01-11T21:41:24.0692809Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0692875Z float tmp1 = 0; 2023-01-11T21:41:24.0692994Z auto tmp1_vec = at::vec::Vectorized(tmp1); 2023-01-11T21:41:24.0693079Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.0693174Z { 2023-01-11T21:41:24.0693317Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr0 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0693394Z tmp1_vec += tmp0; 2023-01-11T21:41:24.0693456Z } 2023-01-11T21:41:24.0693649Z tmp1 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp1_vec); 2023-01-11T21:41:24.0693755Z #pragma omp simd simdlen(4) reduction(+:tmp1) 
2023-01-11T21:41:24.0693841Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:24.0693902Z { 2023-01-11T21:41:24.0693999Z auto tmp0 = out_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:24.0694076Z tmp1 += tmp0; 2023-01-11T21:41:24.0694136Z } 2023-01-11T21:41:24.0694217Z out_ptr1[i0] = tmp1; 2023-01-11T21:41:24.0694332Z } 2023-01-11T21:41:24.0694393Z } 2023-01-11T21:41:24.0694470Z #pragma omp for 2023-01-11T21:41:24.0694547Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:24.0694606Z { 2023-01-11T21:41:24.0694739Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0694990Z auto tmp1 = at::vec::Vectorized(static_cast(3)); 2023-01-11T21:41:24.0695061Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0695150Z tmp2.store(out_ptr2 + 8*i0); 2023-01-11T21:41:24.0695210Z } 2023-01-11T21:41:24.0695414Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0695493Z for(long i0=128; i0<128; i0+=1) 2023-01-11T21:41:24.0695553Z { 2023-01-11T21:41:24.0695636Z auto tmp0 = out_ptr1[i0]; 2023-01-11T21:41:24.0695804Z auto tmp1 = static_cast(3); 2023-01-11T21:41:24.0695887Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0695962Z out_ptr2[i0] = tmp2; 2023-01-11T21:41:24.0696027Z } 2023-01-11T21:41:24.0696103Z #pragma omp for 2023-01-11T21:41:24.0696182Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:24.0696243Z { 2023-01-11T21:41:24.0696294Z { 2023-01-11T21:41:24.0696477Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0696552Z float tmp1 = 0; 2023-01-11T21:41:24.0696673Z auto tmp1_vec = at::vec::Vectorized(tmp1); 2023-01-11T21:41:24.0696758Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.0696982Z { 2023-01-11T21:41:24.0697291Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr2 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0697371Z tmp1_vec += tmp0; 2023-01-11T21:41:24.0697423Z } 2023-01-11T21:41:24.0697619Z tmp1 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp1_vec); 2023-01-11T21:41:24.0697784Z #pragma omp simd simdlen(4) reduction(+:tmp1) 2023-01-11T21:41:24.0697874Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:24.0697936Z { 2023-01-11T21:41:24.0698035Z auto tmp0 = out_ptr2[i1 + (8*i0)]; 2023-01-11T21:41:24.0698111Z tmp1 += tmp0; 2023-01-11T21:41:24.0698162Z } 2023-01-11T21:41:24.0698239Z out_ptr3[i0] = tmp1; 2023-01-11T21:41:24.0698299Z } 2023-01-11T21:41:24.0698357Z } 2023-01-11T21:41:24.0698431Z #pragma omp for 2023-01-11T21:41:24.0698507Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0698567Z { 2023-01-11T21:41:24.0698684Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr3 + 8*i0); 2023-01-11T21:41:24.0698812Z auto tmp1 = at::vec::Vectorized(static_cast(5)); 2023-01-11T21:41:24.0698894Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0699014Z tmp2.store(out_ptr4 + 8*i0); 2023-01-11T21:41:24.0699074Z } 2023-01-11T21:41:24.0699163Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0699242Z for(long i0=16; i0<16; i0+=1) 2023-01-11T21:41:24.0699290Z { 2023-01-11T21:41:24.0699370Z auto tmp0 = out_ptr3[i0]; 2023-01-11T21:41:24.0699467Z auto tmp1 = static_cast(5); 2023-01-11T21:41:24.0699551Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0699629Z out_ptr4[i0] = tmp2; 2023-01-11T21:41:24.0699688Z } 2023-01-11T21:41:24.0699748Z } 2023-01-11T21:41:24.0699796Z } 2023-01-11T21:41:24.0699878Z ''') 2023-01-11T21:41:24.0699884Z 2023-01-11T21:41:24.0699889Z 2023-01-11T21:41:24.0699975Z async_compile.wait(globals()) 2023-01-11T21:41:24.0700042Z del async_compile 2023-01-11T21:41:24.0700047Z 
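Context note (added, not part of the log): the kernel above chains several stages inside a single C++ function — an elementwise add, a last-dim sum, another add, another last-dim sum, and a final add — with each stage writing its own output buffer, all five of which are returned by the wrapper below. The constants 1, 3, 5 and the (1, 16, 8, 8) input shape come from the kernel and wrapper; the function name and return order in this sketch are assumptions.

import torch

def fn(x):
    a = x + 1           # out_ptr0, shape (1, 16, 8, 8)
    b = a.sum(dim=-1)   # out_ptr1, shape (1, 16, 8)
    c = b + 3           # out_ptr2
    d = c.sum(dim=-1)   # out_ptr3, shape (1, 16)
    e = d + 5           # out_ptr4
    return e, d, c, b, a

x = torch.randn(1, 16, 8, 8)
torch.compile(fn)(x)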
2023-01-11T21:41:24.0700115Z def call(args): 2023-01-11T21:41:24.0700180Z arg0_1, = args 2023-01-11T21:41:24.0700248Z args.clear() 2023-01-11T21:41:24.0700459Z buf0 = empty_strided((1, 16, 8, 8), (1024, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0700664Z buf1 = empty_strided((1, 16, 8), (128, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0700861Z buf2 = empty_strided((1, 16, 8), (128, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0701053Z buf3 = empty_strided((1, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0701244Z buf4 = empty_strided((1, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0701476Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:24.0701542Z del arg0_1 2023-01-11T21:41:24.0701634Z return (buf4, buf3, buf2, buf1, buf0, ) 2023-01-11T21:41:24.0701640Z 2023-01-11T21:41:24.0701644Z 2023-01-11T21:41:24.0701718Z if __name__ == "__main__": 2023-01-11T21:41:24.0701824Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0701945Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0702158Z arg0_1 = rand_strided((1, 16, 8, 8), (1024, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0702265Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0702270Z 2023-01-11T21:41:24.0702452Z ok (1.779s) 2023-01-11T21:41:24.0702996Z test_sum5_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0703121Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0703394Z [2023-01-11 21:38:43,687] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 451 2023-01-11T21:41:24.0703729Z [2023-01-11 21:38:45,260] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 451 2023-01-11T21:41:24.0703735Z 2023-01-11T21:41:24.0703817Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0703883Z import torch 2023-01-11T21:41:24.0703952Z import random 2023-01-11T21:41:24.0704064Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0704179Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0704184Z 2023-01-11T21:41:24.0704260Z aten = torch.ops.aten 2023-01-11T21:41:24.0704392Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0704479Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0704484Z 2023-01-11T21:41:24.0704489Z 2023-01-11T21:41:24.0704609Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0704807Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0704965Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:24.0705068Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0705163Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0705316Z { 2023-01-11T21:41:24.0705401Z auto out_ptr1 = in_out_ptr0; 2023-01-11T21:41:24.0705485Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0705544Z { 2023-01-11T21:41:24.0705619Z #pragma omp for 2023-01-11T21:41:24.0705699Z for(long i0=0; i0<136; i0+=1) 2023-01-11T21:41:24.0705756Z { 2023-01-11T21:41:24.0705817Z { 2023-01-11T21:41:24.0706006Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0706073Z float tmp3 = 0; 2023-01-11T21:41:24.0706189Z auto tmp3_vec = at::vec::Vectorized(tmp3); 2023-01-11T21:41:24.0706276Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.0706344Z { 2023-01-11T21:41:24.0706483Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (9*i0)); 2023-01-11T21:41:24.0706615Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0706704Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0706840Z tmp3_vec += tmp2; 2023-01-11T21:41:24.0706892Z } 2023-01-11T21:41:24.0707245Z tmp3 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp3_vec); 2023-01-11T21:41:24.0707366Z #pragma omp simd simdlen(4) reduction(+:tmp3) 2023-01-11T21:41:24.0707452Z for(long i1=8; i1<9; i1+=1) 2023-01-11T21:41:24.0707516Z { 2023-01-11T21:41:24.0707611Z auto tmp0 = in_ptr0[i1 + (9*i0)]; 2023-01-11T21:41:24.0707711Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.0707807Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0707871Z tmp3 += tmp2; 2023-01-11T21:41:24.0707933Z } 2023-01-11T21:41:24.0708016Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.0708080Z } 2023-01-11T21:41:24.0708142Z } 2023-01-11T21:41:24.0708217Z #pragma omp for 2023-01-11T21:41:24.0708360Z for(long i0=0; i0<17; i0+=1) 2023-01-11T21:41:24.0708410Z { 2023-01-11T21:41:24.0708473Z { 2023-01-11T21:41:24.0708770Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0708846Z float tmp3 = 0; 2023-01-11T21:41:24.0708964Z auto tmp3_vec = 
at::vec::Vectorized(tmp3); 2023-01-11T21:41:24.0709052Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.0709116Z { 2023-01-11T21:41:24.0709248Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr0 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0709415Z auto tmp1 = at::vec::Vectorized(static_cast(3)); 2023-01-11T21:41:24.0709505Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0709585Z tmp3_vec += tmp2; 2023-01-11T21:41:24.0709762Z } 2023-01-11T21:41:24.0709954Z tmp3 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp3_vec); 2023-01-11T21:41:24.0710073Z #pragma omp simd simdlen(4) reduction(+:tmp3) 2023-01-11T21:41:24.0710162Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:24.0710215Z { 2023-01-11T21:41:24.0710313Z auto tmp0 = out_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:24.0710413Z auto tmp1 = static_cast(3); 2023-01-11T21:41:24.0710502Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0710649Z tmp3 += tmp2; 2023-01-11T21:41:24.0710711Z } 2023-01-11T21:41:24.0710790Z out_ptr1[i0] = tmp3; 2023-01-11T21:41:24.0710840Z } 2023-01-11T21:41:24.0710900Z } 2023-01-11T21:41:24.0710975Z #pragma omp for 2023-01-11T21:41:24.0711053Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0711114Z { 2023-01-11T21:41:24.0711244Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0711377Z auto tmp1 = at::vec::Vectorized(static_cast(5)); 2023-01-11T21:41:24.0711511Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0711603Z tmp2.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.0711664Z } 2023-01-11T21:41:24.0711756Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0711835Z for(long i0=16; i0<17; i0+=1) 2023-01-11T21:41:24.0711895Z { 2023-01-11T21:41:24.0711976Z auto tmp0 = out_ptr1[i0]; 2023-01-11T21:41:24.0712066Z auto tmp1 = static_cast(5); 2023-01-11T21:41:24.0712144Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0712218Z in_out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.0712278Z } 2023-01-11T21:41:24.0712336Z } 2023-01-11T21:41:24.0712391Z } 2023-01-11T21:41:24.0712473Z ''') 2023-01-11T21:41:24.0712479Z 2023-01-11T21:41:24.0712484Z 2023-01-11T21:41:24.0712559Z async_compile.wait(globals()) 2023-01-11T21:41:24.0712627Z del async_compile 2023-01-11T21:41:24.0712633Z 2023-01-11T21:41:24.0712699Z def call(args): 2023-01-11T21:41:24.0712768Z arg0_1, = args 2023-01-11T21:41:24.0712839Z args.clear() 2023-01-11T21:41:24.0713045Z buf0 = empty_strided((1, 17, 8), (136, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0713236Z buf1 = empty_strided((1, 17), (17, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0713317Z buf2 = buf1; del buf1 # reuse 2023-01-11T21:41:24.0713469Z kernel_cpp_0(c_void_p(buf2.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0713536Z del arg0_1 2023-01-11T21:41:24.0713600Z return (buf2, ) 2023-01-11T21:41:24.0713605Z 2023-01-11T21:41:24.0713609Z 2023-01-11T21:41:24.0713685Z if __name__ == "__main__": 2023-01-11T21:41:24.0713854Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0713978Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0714197Z arg0_1 = rand_strided((1, 17, 8, 9), (1224, 72, 9, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0714291Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0714308Z 2023-01-11T21:41:24.0714361Z ok (1.595s) 2023-01-11T21:41:24.0714828Z test_sum_dtype_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. 
It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0714990Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0715330Z [2023-01-11 21:38:45,282] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 452 2023-01-11T21:41:24.0715596Z [2023-01-11 21:38:45,300] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 452 2023-01-11T21:41:24.0715601Z 2023-01-11T21:41:24.0715694Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0715760Z import torch 2023-01-11T21:41:24.0715827Z import random 2023-01-11T21:41:24.0715939Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0716046Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0716050Z 2023-01-11T21:41:24.0716127Z aten = torch.ops.aten 2023-01-11T21:41:24.0716257Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0716381Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0716386Z 2023-01-11T21:41:24.0716391Z 2023-01-11T21:41:24.0716524Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0716727Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0716842Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0716940Z double* __restrict__ out_ptr0, 2023-01-11T21:41:24.0717025Z double* __restrict__ out_ptr1, 2023-01-11T21:41:24.0717118Z double* __restrict__ out_ptr2) 2023-01-11T21:41:24.0717176Z { 2023-01-11T21:41:24.0717273Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0717329Z { 2023-01-11T21:41:24.0717402Z #pragma omp for 2023-01-11T21:41:24.0717481Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:24.0717529Z { 2023-01-11T21:41:24.0717588Z { 2023-01-11T21:41:24.0717649Z { 2023-01-11T21:41:24.0717732Z double tmp2 = 0; 2023-01-11T21:41:24.0717820Z for(long i1=0; i1<32; i1+=1) 2023-01-11T21:41:24.0717885Z { 2023-01-11T21:41:24.0717940Z { 2023-01-11T21:41:24.0718043Z auto tmp0 = in_ptr0[i1 + (32*i0)]; 2023-01-11T21:41:24.0718153Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.0718235Z tmp2 += tmp1; 2023-01-11T21:41:24.0718302Z } 2023-01-11T21:41:24.0718433Z } 2023-01-11T21:41:24.0718649Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.0718700Z } 2023-01-11T21:41:24.0718763Z } 2023-01-11T21:41:24.0718823Z } 2023-01-11T21:41:24.0718882Z } 2023-01-11T21:41:24.0718940Z { 2023-01-11T21:41:24.0719000Z { 2023-01-11T21:41:24.0719079Z double tmp2 = 0; 2023-01-11T21:41:24.0719175Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0719238Z { 2023-01-11T21:41:24.0719339Z #pragma omp for reduction(+:tmp2) 2023-01-11T21:41:24.0719428Z for(long i0=0; i0<1024; i0+=1) 2023-01-11T21:41:24.0719492Z { 2023-01-11T21:41:24.0719556Z { 2023-01-11T21:41:24.0719647Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0719744Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.0719822Z tmp2 += tmp1; 2023-01-11T21:41:24.0719887Z } 2023-01-11T21:41:24.0719951Z } 2023-01-11T21:41:24.0720014Z } 2023-01-11T21:41:24.0720090Z out_ptr1[0] = tmp2; 2023-01-11T21:41:24.0720152Z } 2023-01-11T21:41:24.0720201Z } 2023-01-11T21:41:24.0720296Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0720356Z { 2023-01-11T21:41:24.0720430Z #pragma omp for 2023-01-11T21:41:24.0720650Z for(long i0=0; i0<32; 
i0+=1) 2023-01-11T21:41:24.0720764Z { 2023-01-11T21:41:24.0720830Z #pragma GCC ivdep 2023-01-11T21:41:24.0720911Z for(long i1=0; i1<32; i1+=1) 2023-01-11T21:41:24.0720972Z { 2023-01-11T21:41:24.0721034Z { 2023-01-11T21:41:24.0721096Z { 2023-01-11T21:41:24.0721195Z auto tmp0 = in_ptr0[i1 + (32*i0)]; 2023-01-11T21:41:24.0721289Z auto tmp2 = out_ptr0[i1]; 2023-01-11T21:41:24.0721369Z auto tmp4 = out_ptr1[0]; 2023-01-11T21:41:24.0721476Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.0721568Z auto tmp3 = tmp1 * tmp2; 2023-01-11T21:41:24.0721658Z auto tmp5 = tmp3 + tmp4; 2023-01-11T21:41:24.0721751Z out_ptr2[i1 + (32*i0)] = tmp5; 2023-01-11T21:41:24.0721816Z } 2023-01-11T21:41:24.0721879Z } 2023-01-11T21:41:24.0721959Z } 2023-01-11T21:41:24.0722022Z } 2023-01-11T21:41:24.0722082Z } 2023-01-11T21:41:24.0722141Z } 2023-01-11T21:41:24.0722225Z ''') 2023-01-11T21:41:24.0722230Z 2023-01-11T21:41:24.0722235Z 2023-01-11T21:41:24.0722319Z async_compile.wait(globals()) 2023-01-11T21:41:24.0722390Z del async_compile 2023-01-11T21:41:24.0722395Z 2023-01-11T21:41:24.0722451Z def call(args): 2023-01-11T21:41:24.0722517Z arg0_1, = args 2023-01-11T21:41:24.0722585Z args.clear() 2023-01-11T21:41:24.0722777Z buf0 = empty_strided((32, ), (1, ), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.0722958Z buf1 = empty_strided((), (), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.0723150Z buf2 = empty_strided((32, 32), (32, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.0723336Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:24.0723404Z del arg0_1 2023-01-11T21:41:24.0723464Z return (buf2, ) 2023-01-11T21:41:24.0723469Z 2023-01-11T21:41:24.0723473Z 2023-01-11T21:41:24.0723546Z if __name__ == "__main__": 2023-01-11T21:41:24.0723658Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0723778Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0723973Z arg0_1 = rand_strided((32, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0724078Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0724083Z 2023-01-11T21:41:24.0724336Z ok (0.039s) 2023-01-11T21:41:24.0724797Z test_sum_int_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0724927Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0725177Z [2023-01-11 21:38:45,320] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 453 2023-01-11T21:41:24.0725446Z [2023-01-11 21:38:45,333] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 453 2023-01-11T21:41:24.0725878Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0725999Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0726252Z [2023-01-11 21:38:45,351] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 454 2023-01-11T21:41:24.0726550Z [2023-01-11 21:38:45,365] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 454 2023-01-11T21:41:24.0726974Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0727095Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0727351Z [2023-01-11 21:38:45,384] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 455 2023-01-11T21:41:24.0727612Z [2023-01-11 21:38:45,396] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 455 2023-01-11T21:41:24.0727617Z 2023-01-11T21:41:24.0727708Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0727769Z import torch 2023-01-11T21:41:24.0727865Z import random 2023-01-11T21:41:24.0727977Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0728094Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0728099Z 2023-01-11T21:41:24.0728239Z aten = torch.ops.aten 2023-01-11T21:41:24.0728370Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0728459Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0728464Z 2023-01-11T21:41:24.0728468Z 2023-01-11T21:41:24.0728603Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0728794Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0728903Z extern "C" void kernel(long* __restrict__ in_out_ptr0, 2023-01-11T21:41:24.0729004Z const bool* __restrict__ in_ptr0, 2023-01-11T21:41:24.0729099Z long* __restrict__ out_ptr1) 2023-01-11T21:41:24.0729157Z { 2023-01-11T21:41:24.0729238Z auto out_ptr0 = in_out_ptr0; 2023-01-11T21:41:24.0729302Z { 2023-01-11T21:41:24.0729351Z { 2023-01-11T21:41:24.0729422Z long tmp2 = 0; 2023-01-11T21:41:24.0729493Z long tmp3 = 0; 2023-01-11T21:41:24.0729593Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0729656Z { 2023-01-11T21:41:24.0729778Z #pragma omp for reduction(+:tmp2) reduction(+:tmp3) 2023-01-11T21:41:24.0729862Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:24.0729914Z { 2023-01-11T21:41:24.0729980Z { 2023-01-11T21:41:24.0730071Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0730178Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.0730253Z tmp2 += tmp1; 2023-01-11T21:41:24.0730329Z tmp3 += tmp1; 2023-01-11T21:41:24.0730389Z } 2023-01-11T21:41:24.0730440Z } 2023-01-11T21:41:24.0730501Z } 2023-01-11T21:41:24.0730582Z out_ptr0[0] = tmp2; 2023-01-11T21:41:24.0730654Z out_ptr1[0] = tmp3; 2023-01-11T21:41:24.0730715Z } 2023-01-11T21:41:24.0730774Z } 2023-01-11T21:41:24.0730821Z { 2023-01-11T21:41:24.0730879Z { 2023-01-11T21:41:24.0731023Z auto tmp0 = out_ptr0[0]; 2023-01-11T21:41:24.0731103Z auto tmp3 = out_ptr1[0]; 2023-01-11T21:41:24.0731197Z auto tmp1 = static_cast(2); 2023-01-11T21:41:24.0731275Z auto tmp2 = tmp0 * tmp1; 
2023-01-11T21:41:24.0731352Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0731419Z in_out_ptr0[0] = tmp4; 2023-01-11T21:41:24.0731476Z } 2023-01-11T21:41:24.0731532Z } 2023-01-11T21:41:24.0731589Z } 2023-01-11T21:41:24.0731665Z ''') 2023-01-11T21:41:24.0731671Z 2023-01-11T21:41:24.0731675Z 2023-01-11T21:41:24.0731759Z async_compile.wait(globals()) 2023-01-11T21:41:24.0731827Z del async_compile 2023-01-11T21:41:24.0731865Z 2023-01-11T21:41:24.0731924Z def call(args): 2023-01-11T21:41:24.0732278Z arg0_1, = args 2023-01-11T21:41:24.0732409Z args.clear() 2023-01-11T21:41:24.0732593Z buf0 = empty_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0732771Z buf1 = empty_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0732852Z buf2 = buf0; del buf0 # reuse 2023-01-11T21:41:24.0733013Z kernel_cpp_0(c_void_p(buf2.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0733069Z del arg0_1 2023-01-11T21:41:24.0733136Z return (buf2, ) 2023-01-11T21:41:24.0733141Z 2023-01-11T21:41:24.0733146Z 2023-01-11T21:41:24.0733219Z if __name__ == "__main__": 2023-01-11T21:41:24.0733332Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0733569Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0733759Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:24.0733898Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0733903Z 2023-01-11T21:41:24.0733908Z 2023-01-11T21:41:24.0733999Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0734065Z import torch 2023-01-11T21:41:24.0734121Z import random 2023-01-11T21:41:24.0734231Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0734351Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0734355Z 2023-01-11T21:41:24.0734431Z aten = torch.ops.aten 2023-01-11T21:41:24.0734562Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0734650Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0734655Z 2023-01-11T21:41:24.0734659Z 2023-01-11T21:41:24.0734793Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0735001Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0735104Z extern "C" void kernel(long* __restrict__ in_out_ptr0, 2023-01-11T21:41:24.0735220Z const unsigned char* __restrict__ in_ptr0, 2023-01-11T21:41:24.0735313Z long* __restrict__ out_ptr1) 2023-01-11T21:41:24.0735370Z { 2023-01-11T21:41:24.0735455Z auto out_ptr0 = in_out_ptr0; 2023-01-11T21:41:24.0735512Z { 2023-01-11T21:41:24.0735560Z { 2023-01-11T21:41:24.0735631Z long tmp2 = 0; 2023-01-11T21:41:24.0735700Z long tmp3 = 0; 2023-01-11T21:41:24.0735802Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0735863Z { 2023-01-11T21:41:24.0735986Z #pragma omp for reduction(+:tmp2) reduction(+:tmp3) 2023-01-11T21:41:24.0736071Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:24.0736123Z { 2023-01-11T21:41:24.0736184Z { 2023-01-11T21:41:24.0736275Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0736379Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.0736454Z tmp2 += tmp1; 2023-01-11T21:41:24.0736531Z tmp3 += tmp1; 2023-01-11T21:41:24.0736593Z } 2023-01-11T21:41:24.0736644Z } 2023-01-11T21:41:24.0736703Z } 2023-01-11T21:41:24.0736777Z out_ptr0[0] = tmp2; 2023-01-11T21:41:24.0736853Z out_ptr1[0] = tmp3; 2023-01-11T21:41:24.0736911Z } 2023-01-11T21:41:24.0736966Z } 2023-01-11T21:41:24.0737021Z { 
2023-01-11T21:41:24.0737070Z { 2023-01-11T21:41:24.0737156Z auto tmp0 = out_ptr0[0]; 2023-01-11T21:41:24.0737234Z auto tmp3 = out_ptr1[0]; 2023-01-11T21:41:24.0737329Z auto tmp1 = static_cast(2); 2023-01-11T21:41:24.0737408Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0737486Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0737565Z in_out_ptr0[0] = tmp4; 2023-01-11T21:41:24.0737614Z } 2023-01-11T21:41:24.0737672Z } 2023-01-11T21:41:24.0737726Z } 2023-01-11T21:41:24.0737850Z ''') 2023-01-11T21:41:24.0737857Z 2023-01-11T21:41:24.0737861Z 2023-01-11T21:41:24.0737947Z async_compile.wait(globals()) 2023-01-11T21:41:24.0738017Z del async_compile 2023-01-11T21:41:24.0738022Z 2023-01-11T21:41:24.0738090Z def call(args): 2023-01-11T21:41:24.0738146Z arg0_1, = args 2023-01-11T21:41:24.0738214Z args.clear() 2023-01-11T21:41:24.0738392Z buf0 = empty_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0738567Z buf1 = empty_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0738651Z buf2 = buf0; del buf0 # reuse 2023-01-11T21:41:24.0738880Z kernel_cpp_0(c_void_p(buf2.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0738943Z del arg0_1 2023-01-11T21:41:24.0739001Z return (buf2, ) 2023-01-11T21:41:24.0739006Z 2023-01-11T21:41:24.0739020Z 2023-01-11T21:41:24.0739082Z if __name__ == "__main__": 2023-01-11T21:41:24.0739188Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0739342Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0739535Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.uint8) 2023-01-11T21:41:24.0739640Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0739645Z 2023-01-11T21:41:24.0739649Z 2023-01-11T21:41:24.0739740Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0739804Z import torch 2023-01-11T21:41:24.0739860Z import random 2023-01-11T21:41:24.0739971Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0740084Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0740089Z 2023-01-11T21:41:24.0740165Z aten = torch.ops.aten 2023-01-11T21:41:24.0740296Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0740383Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0740388Z 2023-01-11T21:41:24.0740392Z 2023-01-11T21:41:24.0740523Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0740730Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0740832Z extern "C" void kernel(long* __restrict__ in_out_ptr0, 2023-01-11T21:41:24.0740931Z const int* __restrict__ in_ptr0, 2023-01-11T21:41:24.0741023Z long* __restrict__ out_ptr1) 2023-01-11T21:41:24.0741081Z { 2023-01-11T21:41:24.0741164Z auto out_ptr0 = in_out_ptr0; 2023-01-11T21:41:24.0741222Z { 2023-01-11T21:41:24.0741281Z { 2023-01-11T21:41:24.0741342Z long tmp2 = 0; 2023-01-11T21:41:24.0741498Z long tmp3 = 0; 2023-01-11T21:41:24.0741603Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0741665Z { 2023-01-11T21:41:24.0741789Z #pragma omp for reduction(+:tmp2) reduction(+:tmp3) 2023-01-11T21:41:24.0741874Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:24.0741935Z { 2023-01-11T21:41:24.0741988Z { 2023-01-11T21:41:24.0742083Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0742186Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.0742261Z tmp2 += tmp1; 2023-01-11T21:41:24.0742459Z tmp3 += tmp1; 2023-01-11T21:41:24.0742557Z } 2023-01-11T21:41:24.0742627Z } 
2023-01-11T21:41:24.0742677Z } 2023-01-11T21:41:24.0742754Z out_ptr0[0] = tmp2; 2023-01-11T21:41:24.0742827Z out_ptr1[0] = tmp3; 2023-01-11T21:41:24.0742888Z } 2023-01-11T21:41:24.0742946Z } 2023-01-11T21:41:24.0743004Z { 2023-01-11T21:41:24.0743051Z { 2023-01-11T21:41:24.0743133Z auto tmp0 = out_ptr0[0]; 2023-01-11T21:41:24.0743215Z auto tmp3 = out_ptr1[0]; 2023-01-11T21:41:24.0743309Z auto tmp1 = static_cast(2); 2023-01-11T21:41:24.0743388Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0743467Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0743616Z in_out_ptr0[0] = tmp4; 2023-01-11T21:41:24.0743665Z } 2023-01-11T21:41:24.0743722Z } 2023-01-11T21:41:24.0743780Z } 2023-01-11T21:41:24.0743864Z ''') 2023-01-11T21:41:24.0743870Z 2023-01-11T21:41:24.0743875Z 2023-01-11T21:41:24.0743962Z async_compile.wait(globals()) 2023-01-11T21:41:24.0744114Z del async_compile 2023-01-11T21:41:24.0744120Z 2023-01-11T21:41:24.0744188Z def call(args): 2023-01-11T21:41:24.0744345Z arg0_1, = args 2023-01-11T21:41:24.0744412Z args.clear() 2023-01-11T21:41:24.0744597Z buf0 = empty_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0744772Z buf1 = empty_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0744854Z buf2 = buf0; del buf0 # reuse 2023-01-11T21:41:24.0745015Z kernel_cpp_0(c_void_p(buf2.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0745308Z del arg0_1 2023-01-11T21:41:24.0745378Z return (buf2, ) 2023-01-11T21:41:24.0745430Z 2023-01-11T21:41:24.0745436Z 2023-01-11T21:41:24.0745499Z if __name__ == "__main__": 2023-01-11T21:41:24.0745668Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0745824Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0746027Z arg0_1 = rand_strided((64, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:24.0746161Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0746166Z 2023-01-11T21:41:24.0746229Z ok (0.096s) 2023-01-11T21:41:24.0746701Z test_sum_keepdims_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0746826Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0747093Z [2023-01-11 21:38:45,413] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 456 2023-01-11T21:41:24.0747346Z [2023-01-11 21:38:45,423] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 456 2023-01-11T21:41:24.0747361Z 2023-01-11T21:41:24.0747575Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0747644Z import torch 2023-01-11T21:41:24.0747711Z import random 2023-01-11T21:41:24.0747825Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0747942Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0747947Z 2023-01-11T21:41:24.0748024Z aten = torch.ops.aten 2023-01-11T21:41:24.0748154Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0748232Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0748237Z 2023-01-11T21:41:24.0748251Z 2023-01-11T21:41:24.0748372Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0748578Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0748701Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0748805Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0748902Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0748961Z { 2023-01-11T21:41:24.0749056Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0749104Z { 2023-01-11T21:41:24.0749178Z #pragma omp for 2023-01-11T21:41:24.0749259Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0749319Z { 2023-01-11T21:41:24.0749382Z { 2023-01-11T21:41:24.0749575Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0749655Z float tmp3 = 0; 2023-01-11T21:41:24.0749763Z auto tmp3_vec = at::vec::Vectorized(tmp3); 2023-01-11T21:41:24.0749891Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.0749955Z { 2023-01-11T21:41:24.0750098Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0750239Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.0750330Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0750410Z tmp3_vec += tmp2; 2023-01-11T21:41:24.0750475Z } 2023-01-11T21:41:24.0750655Z tmp3 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp3_vec); 2023-01-11T21:41:24.0750842Z #pragma omp simd simdlen(4) reduction(+:tmp3) 2023-01-11T21:41:24.0750926Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:24.0750989Z { 2023-01-11T21:41:24.0751088Z auto tmp0 = in_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:24.0751221Z auto tmp1 = in_ptr1[i1 + (8*i0)]; 2023-01-11T21:41:24.0751315Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0751379Z tmp3 += tmp2; 2023-01-11T21:41:24.0751442Z } 2023-01-11T21:41:24.0751523Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.0751584Z } 2023-01-11T21:41:24.0751643Z } 2023-01-11T21:41:24.0751700Z } 2023-01-11T21:41:24.0751758Z } 2023-01-11T21:41:24.0751827Z ''') 2023-01-11T21:41:24.0751833Z 2023-01-11T21:41:24.0751837Z 2023-01-11T21:41:24.0751925Z async_compile.wait(globals()) 2023-01-11T21:41:24.0751996Z del async_compile 2023-01-11T21:41:24.0752000Z 2023-01-11T21:41:24.0752069Z def call(args): 2023-01-11T21:41:24.0752142Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0752210Z args.clear() 
2023-01-11T21:41:24.0752404Z buf0 = empty_strided((8, 1), (1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0752555Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0752624Z del arg0_1 2023-01-11T21:41:24.0752689Z del arg1_1 2023-01-11T21:41:24.0752758Z return (buf0, ) 2023-01-11T21:41:24.0752763Z 2023-01-11T21:41:24.0752767Z 2023-01-11T21:41:24.0752840Z if __name__ == "__main__": 2023-01-11T21:41:24.0752951Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0753071Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0753266Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0753448Z arg1_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0753562Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0753567Z 2023-01-11T21:41:24.0753633Z ok (0.027s) 2023-01-11T21:41:24.0754153Z test_tanh_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0754282Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0754544Z [2023-01-11 21:38:45,455] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 457 2023-01-11T21:41:24.0754807Z [2023-01-11 21:38:47,032] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 457 2023-01-11T21:41:24.0754813Z 2023-01-11T21:41:24.0754901Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0754969Z import torch 2023-01-11T21:41:24.0755025Z import random 2023-01-11T21:41:24.0755140Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0755260Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0755265Z 2023-01-11T21:41:24.0755375Z aten = torch.ops.aten 2023-01-11T21:41:24.0755507Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0755595Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0755600Z 2023-01-11T21:41:24.0755605Z 2023-01-11T21:41:24.0755733Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0755938Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0756055Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0756142Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0756234Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0756292Z { 2023-01-11T21:41:24.0756387Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0756445Z { 2023-01-11T21:41:24.0756518Z #pragma omp for 2023-01-11T21:41:24.0756588Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:24.0756647Z { 2023-01-11T21:41:24.0756808Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0756895Z auto tmp1 = tmp0.tanh(); 2023-01-11T21:41:24.0757025Z auto tmp2 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:24.0757106Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.0757231Z auto tmp4 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0757314Z auto tmp5 = tmp0 + tmp4; 2023-01-11T21:41:24.0757382Z auto tmp6 = tmp5.tanh(); 2023-01-11T21:41:24.0757467Z 
tmp3.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0757553Z tmp6.store(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0757612Z } 2023-01-11T21:41:24.0757702Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0757783Z for(long i0=256; i0<256; i0+=1) 2023-01-11T21:41:24.0757842Z { 2023-01-11T21:41:24.0757911Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0757998Z auto tmp1 = std::tanh(tmp0); 2023-01-11T21:41:24.0758095Z auto tmp2 = static_cast(2); 2023-01-11T21:41:24.0758174Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.0758270Z auto tmp4 = static_cast(1); 2023-01-11T21:41:24.0758349Z auto tmp5 = tmp0 + tmp4; 2023-01-11T21:41:24.0758435Z auto tmp6 = std::tanh(tmp5); 2023-01-11T21:41:24.0758501Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.0758575Z out_ptr1[i0] = tmp6; 2023-01-11T21:41:24.0758635Z } 2023-01-11T21:41:24.0758696Z } 2023-01-11T21:41:24.0758749Z } 2023-01-11T21:41:24.0758823Z ''') 2023-01-11T21:41:24.0758829Z 2023-01-11T21:41:24.0758833Z 2023-01-11T21:41:24.0758921Z async_compile.wait(globals()) 2023-01-11T21:41:24.0758980Z del async_compile 2023-01-11T21:41:24.0758984Z 2023-01-11T21:41:24.0759051Z def call(args): 2023-01-11T21:41:24.0759115Z arg0_1, = args 2023-01-11T21:41:24.0759183Z args.clear() 2023-01-11T21:41:24.0759381Z buf0 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0759574Z buf1 = empty_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0759737Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0759793Z del arg0_1 2023-01-11T21:41:24.0759866Z return (buf0, buf1, ) 2023-01-11T21:41:24.0759872Z 2023-01-11T21:41:24.0759876Z 2023-01-11T21:41:24.0759947Z if __name__ == "__main__": 2023-01-11T21:41:24.0760056Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0760172Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0760372Z arg0_1 = rand_strided((16, 16), (16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0760478Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0760483Z 2023-01-11T21:41:24.0760545Z ok (1.610s) 2023-01-11T21:41:24.0761013Z test_tensor1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0761169Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0761421Z [2023-01-11 21:38:47,052] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 458 2023-01-11T21:41:24.0761685Z [2023-01-11 21:38:48,624] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 458 2023-01-11T21:41:24.0761690Z 2023-01-11T21:41:24.0761779Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0761847Z import torch 2023-01-11T21:41:24.0761914Z import random 2023-01-11T21:41:24.0762026Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0762170Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0762178Z 2023-01-11T21:41:24.0762252Z aten = torch.ops.aten 2023-01-11T21:41:24.0762372Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0762459Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0762463Z 2023-01-11T21:41:24.0762468Z 2023-01-11T21:41:24.0762600Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0762803Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0762921Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0763019Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0763113Z long* __restrict__ out_ptr1) 2023-01-11T21:41:24.0763169Z { 2023-01-11T21:41:24.0763253Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0763311Z { 2023-01-11T21:41:24.0763386Z #pragma omp for 2023-01-11T21:41:24.0763463Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.0763525Z { 2023-01-11T21:41:24.0763586Z { 2023-01-11T21:41:24.0763637Z { 2023-01-11T21:41:24.0763725Z auto tmp2 = in_ptr0[i0]; 2023-01-11T21:41:24.0763824Z auto tmp0 = static_cast(1); 2023-01-11T21:41:24.0763930Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.0764016Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.0764096Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.0764159Z } 2023-01-11T21:41:24.0764210Z } 2023-01-11T21:41:24.0764267Z } 2023-01-11T21:41:24.0764342Z #pragma omp single 2023-01-11T21:41:24.0764400Z { 2023-01-11T21:41:24.0764462Z { 2023-01-11T21:41:24.0764523Z { 2023-01-11T21:41:24.0764625Z auto tmp0 = static_cast(5); 2023-01-11T21:41:24.0764693Z out_ptr1[0] = tmp0; 2023-01-11T21:41:24.0764752Z } 2023-01-11T21:41:24.0764817Z } 2023-01-11T21:41:24.0764875Z } 2023-01-11T21:41:24.0764934Z } 2023-01-11T21:41:24.0764989Z } 2023-01-11T21:41:24.0765054Z ''') 2023-01-11T21:41:24.0765059Z 2023-01-11T21:41:24.0765072Z 2023-01-11T21:41:24.0765148Z async_compile.wait(globals()) 2023-01-11T21:41:24.0765221Z del async_compile 2023-01-11T21:41:24.0765225Z 2023-01-11T21:41:24.0765293Z def call(args): 2023-01-11T21:41:24.0765356Z arg0_1, = args 2023-01-11T21:41:24.0765426Z args.clear() 2023-01-11T21:41:24.0765936Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0766114Z buf1 = empty_strided((), (), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0766266Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0766332Z del arg0_1 2023-01-11T21:41:24.0766406Z return (buf0, buf1, ) 2023-01-11T21:41:24.0766411Z 2023-01-11T21:41:24.0766415Z 2023-01-11T21:41:24.0766526Z if __name__ == "__main__": 2023-01-11T21:41:24.0766639Z from torch._dynamo.testing import rand_strided 
2023-01-11T21:41:24.0766758Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0766953Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0767065Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0767070Z 2023-01-11T21:41:24.0767124Z ok (1.592s) 2023-01-11T21:41:24.0767967Z test_tensor2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0768157Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0768453Z [2023-01-11 21:38:48,642] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 459 2023-01-11T21:41:24.0768752Z [2023-01-11 21:38:50,192] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 459 2023-01-11T21:41:24.0768760Z 2023-01-11T21:41:24.0768866Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0768936Z import torch 2023-01-11T21:41:24.0769004Z import random 2023-01-11T21:41:24.0769118Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0769225Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0769230Z 2023-01-11T21:41:24.0769307Z aten = torch.ops.aten 2023-01-11T21:41:24.0769437Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0769527Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0769684Z constant0 = None # 4ebd4ff1c68a89413a036eaaf84436373c4ec2939ac1d7f84e9908772a109281 2023-01-11T21:41:24.0769689Z 2023-01-11T21:41:24.0769694Z 2023-01-11T21:41:24.0769833Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0770036Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0770152Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:24.0770243Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0770338Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0770392Z { 2023-01-11T21:41:24.0770490Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0770550Z { 2023-01-11T21:41:24.0770623Z #pragma omp for 2023-01-11T21:41:24.0770701Z for(long i0=0; i0<19; i0+=1) 2023-01-11T21:41:24.0770752Z { 2023-01-11T21:41:24.0770810Z { 2023-01-11T21:41:24.0770871Z { 2023-01-11T21:41:24.0770959Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0771046Z auto tmp2 = in_ptr1[0]; 2023-01-11T21:41:24.0771148Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.0771241Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.0771313Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.0771375Z } 2023-01-11T21:41:24.0771435Z } 2023-01-11T21:41:24.0771495Z } 2023-01-11T21:41:24.0771550Z } 2023-01-11T21:41:24.0771608Z } 2023-01-11T21:41:24.0771674Z ''') 2023-01-11T21:41:24.0771689Z 2023-01-11T21:41:24.0771693Z 2023-01-11T21:41:24.0771770Z async_compile.wait(globals()) 2023-01-11T21:41:24.0771839Z del async_compile 2023-01-11T21:41:24.0771844Z 2023-01-11T21:41:24.0771914Z def call(args): 2023-01-11T21:41:24.0771982Z arg0_1, = args 2023-01-11T21:41:24.0772050Z args.clear() 2023-01-11T21:41:24.0772242Z buf0 = empty_strided((19, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0772408Z kernel_cpp_0(c_void_p(constant0.data_ptr()), 
c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0772464Z del arg0_1 2023-01-11T21:41:24.0772571Z return (buf0, ) 2023-01-11T21:41:24.0772578Z 2023-01-11T21:41:24.0772582Z 2023-01-11T21:41:24.0772655Z if __name__ == "__main__": 2023-01-11T21:41:24.0772769Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0772888Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0773084Z constant0 = rand_strided((19, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0773274Z arg0_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0773378Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0773383Z 2023-01-11T21:41:24.0773436Z ok (1.568s) 2023-01-11T21:41:24.0773925Z test_tensor3_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0774053Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0774314Z [2023-01-11 21:38:50,222] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 460 2023-01-11T21:41:24.0774574Z [2023-01-11 21:38:51,763] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 460 2023-01-11T21:41:24.0774579Z 2023-01-11T21:41:24.0774669Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0774735Z import torch 2023-01-11T21:41:24.0774806Z import random 2023-01-11T21:41:24.0774919Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0775025Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0775029Z 2023-01-11T21:41:24.0775105Z aten = torch.ops.aten 2023-01-11T21:41:24.0775238Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0775329Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0775338Z 2023-01-11T21:41:24.0775342Z 2023-01-11T21:41:24.0775473Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0775678Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0775879Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0775974Z long* __restrict__ out_ptr0, 2023-01-11T21:41:24.0776055Z long* __restrict__ out_ptr1, 2023-01-11T21:41:24.0776149Z float* __restrict__ out_ptr2) 2023-01-11T21:41:24.0776207Z { 2023-01-11T21:41:24.0776282Z #pragma GCC ivdep 2023-01-11T21:41:24.0776359Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0776417Z { 2023-01-11T21:41:24.0776477Z { 2023-01-11T21:41:24.0776527Z { 2023-01-11T21:41:24.0776624Z auto tmp0 = static_cast(i0); 2023-01-11T21:41:24.0776719Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.0776808Z auto tmp2 = tmp0 < tmp1; 2023-01-11T21:41:24.0776901Z auto tmp3 = static_cast(2); 2023-01-11T21:41:24.0776994Z auto tmp4 = tmp2 ? 
tmp1 : tmp3; 2023-01-11T21:41:24.0777079Z auto tmp5 = tmp4 + tmp1; 2023-01-11T21:41:24.0777147Z out_ptr0[i0] = tmp5; 2023-01-11T21:41:24.0777207Z } 2023-01-11T21:41:24.0777267Z } 2023-01-11T21:41:24.0777327Z } 2023-01-11T21:41:24.0777400Z #pragma GCC ivdep 2023-01-11T21:41:24.0777475Z for(long i0=0; i0<3; i0+=1) 2023-01-11T21:41:24.0777531Z { 2023-01-11T21:41:24.0777579Z { 2023-01-11T21:41:24.0777638Z { 2023-01-11T21:41:24.0777732Z auto tmp0 = static_cast(i0); 2023-01-11T21:41:24.0777826Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.0777908Z auto tmp2 = tmp0 < tmp1; 2023-01-11T21:41:24.0777999Z auto tmp3 = static_cast(2); 2023-01-11T21:41:24.0778121Z auto tmp4 = tmp0 < tmp3; 2023-01-11T21:41:24.0778211Z auto tmp5 = static_cast(3); 2023-01-11T21:41:24.0778303Z auto tmp6 = tmp4 ? tmp3 : tmp5; 2023-01-11T21:41:24.0778391Z auto tmp7 = tmp2 ? tmp1 : tmp6; 2023-01-11T21:41:24.0778471Z auto tmp8 = tmp7 + tmp3; 2023-01-11T21:41:24.0778553Z out_ptr1[i0] = tmp8; 2023-01-11T21:41:24.0778614Z } 2023-01-11T21:41:24.0778664Z } 2023-01-11T21:41:24.0778722Z } 2023-01-11T21:41:24.0778815Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0778874Z { 2023-01-11T21:41:24.0778947Z #pragma omp for 2023-01-11T21:41:24.0779026Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.0779088Z { 2023-01-11T21:41:24.0779137Z { 2023-01-11T21:41:24.0779197Z { 2023-01-11T21:41:24.0779285Z auto tmp12 = in_ptr0[i0]; 2023-01-11T21:41:24.0779385Z auto tmp0 = static_cast(i0); 2023-01-11T21:41:24.0779579Z auto tmp1 = static_cast(2); 2023-01-11T21:41:24.0779669Z auto tmp2 = tmp0 < tmp1; 2023-01-11T21:41:24.0779767Z auto tmp3 = static_cast(1); 2023-01-11T21:41:24.0779843Z auto tmp4 = tmp0 < tmp3; 2023-01-11T21:41:24.0779935Z auto tmp5 = tmp4 ? tmp3 : tmp1; 2023-01-11T21:41:24.0780033Z auto tmp6 = static_cast(3); 2023-01-11T21:41:24.0780117Z auto tmp7 = tmp0 < tmp6; 2023-01-11T21:41:24.0780214Z auto tmp8 = static_cast(4); 2023-01-11T21:41:24.0780308Z auto tmp9 = tmp7 ? tmp6 : tmp8; 2023-01-11T21:41:24.0780510Z auto tmp10 = tmp2 ? 
tmp5 : tmp9; 2023-01-11T21:41:24.0780606Z auto tmp11 = static_cast(tmp10); 2023-01-11T21:41:24.0780696Z auto tmp13 = tmp11 + tmp12; 2023-01-11T21:41:24.0780779Z out_ptr2[i0] = tmp13; 2023-01-11T21:41:24.0780846Z } 2023-01-11T21:41:24.0781080Z } 2023-01-11T21:41:24.0781143Z } 2023-01-11T21:41:24.0781200Z } 2023-01-11T21:41:24.0781247Z } 2023-01-11T21:41:24.0781328Z ''') 2023-01-11T21:41:24.0781333Z 2023-01-11T21:41:24.0781338Z 2023-01-11T21:41:24.0781426Z async_compile.wait(globals()) 2023-01-11T21:41:24.0781498Z del async_compile 2023-01-11T21:41:24.0781503Z 2023-01-11T21:41:24.0781570Z def call(args): 2023-01-11T21:41:24.0781637Z arg0_1, = args 2023-01-11T21:41:24.0781705Z args.clear() 2023-01-11T21:41:24.0781880Z buf0 = empty_strided((2, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0782127Z buf1 = empty_strided((3, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0782508Z buf2 = empty_strided((4, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0782741Z buf3 = empty_strided((0, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0782936Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:24.0783007Z del arg0_1 2023-01-11T21:41:24.0783093Z return (buf3, buf0, buf1, buf2, ) 2023-01-11T21:41:24.0783099Z 2023-01-11T21:41:24.0783103Z 2023-01-11T21:41:24.0783178Z if __name__ == "__main__": 2023-01-11T21:41:24.0783288Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0783397Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0783587Z arg0_1 = rand_strided((4, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0783693Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0783697Z 2023-01-11T21:41:24.0783764Z ok (1.571s) 2023-01-11T21:41:24.0784243Z test_tmp_not_defined_issue1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0784555Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0784933Z Failed to collect metadata on function, produced code may be suboptimal. Known situations this can occur are inference mode only compilation involving resize_ or prims (!schema.hasAnyAliasInfo() INTERNAL ASSERT FAILED); if your situation looks different please file a bug to PyTorch. 
2023-01-11T21:41:24.0785069Z Traceback (most recent call last): 2023-01-11T21:41:24.0785353Z File "/opt/conda/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1273, in aot_wrapper_dedupe 2023-01-11T21:41:24.0785671Z fw_metadata, _out, _num_aliasing_metadata_outs = run_functionalized_fw_and_collect_metadata( 2023-01-11T21:41:24.0785972Z File "/opt/conda/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 289, in inner 2023-01-11T21:41:24.0786047Z outs = f(*f_args) 2023-01-11T21:41:24.0786311Z File "/opt/conda/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2327, in functional_call 2023-01-11T21:41:24.0786431Z out = Interpreter(mod).run(*args[params_len:], **kwargs) 2023-01-11T21:41:24.0786658Z File "/opt/conda/lib/python3.10/site-packages/torch/fx/interpreter.py", line 136, in run 2023-01-11T21:41:24.0786753Z self.env[node] = self.run_node(node) 2023-01-11T21:41:24.0786986Z File "/opt/conda/lib/python3.10/site-packages/torch/fx/interpreter.py", line 177, in run_node 2023-01-11T21:41:24.0787289Z return getattr(self, n.op)(n.target, args, kwargs) 2023-01-11T21:41:24.0787537Z File "/opt/conda/lib/python3.10/site-packages/torch/fx/interpreter.py", line 249, in call_function 2023-01-11T21:41:24.0787622Z return target(*args, **kwargs) 2023-01-11T21:41:24.0787839Z File "/opt/conda/lib/python3.10/site-packages/torch/_ops.py", line 284, in __call__ 2023-01-11T21:41:24.0787942Z return self._op(*args, **kwargs or {}) 2023-01-11T21:41:24.0788203Z File "/opt/conda/lib/python3.10/site-packages/torch/_inductor/overrides.py", line 36, in __torch_function__ 2023-01-11T21:41:24.0788286Z return func(*args, **kwargs) 2023-01-11T21:41:24.0788503Z File "/opt/conda/lib/python3.10/site-packages/torch/_ops.py", line 284, in __call__ 2023-01-11T21:41:24.0788587Z return self._op(*args, **kwargs or {}) 2023-01-11T21:41:24.0788831Z File "/opt/conda/lib/python3.10/site-packages/torch/_prims/__init__.py", line 285, in _autograd_impl 2023-01-11T21:41:24.0788950Z return backwards_not_supported(_prim)(*args, **kwargs) 2023-01-11T21:41:24.0789207Z File "/opt/conda/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 309, in _autograd_impl 2023-01-11T21:41:24.0789302Z return redispatch_prim(args, kwargs) 2023-01-11T21:41:24.0789563Z File "/opt/conda/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 279, in redispatch_prim 2023-01-11T21:41:24.0789653Z return prim(*args, **kwargs) 2023-01-11T21:41:24.0789868Z File "/opt/conda/lib/python3.10/site-packages/torch/_ops.py", line 284, in __call__ 2023-01-11T21:41:24.0789953Z return self._op(*args, **kwargs or {}) 2023-01-11T21:41:24.0790448Z RuntimeError: !schema.hasAnyAliasInfo() INTERNAL ASSERT FAILED at "/var/lib/jenkins/workspace/aten/src/ATen/FunctionalizeFallbackKernel.cpp":32, please report a bug to PyTorch. 
mutating and aliasing ops should all have codegen'd kernels 2023-01-11T21:41:24.0790454Z 2023-01-11T21:41:24.0790689Z While executing %broadcast_in_dim_default : [#users=1] = call_function[target=torch.ops.prims.broadcast_in_dim.default](args = (%var_default_1, [1, 512, 1], [0, 1]), kwargs = {}) 2023-01-11T21:41:24.0790763Z Original traceback: 2023-01-11T21:41:24.0790834Z Module stack: {} 2023-01-11T21:41:24.0790995Z File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 4723, in forward 2023-01-11T21:41:24.0791143Z broadcast_in_dim_default_2 = torch.ops.prims.broadcast_in_dim.default( 2023-01-11T21:41:24.0791336Z | File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 318, in run 2023-01-11T21:41:24.0791417Z return model(*ex, **kwargs) 2023-01-11T21:41:24.0791422Z 2023-01-11T21:41:24.0791672Z [2023-01-11 21:38:51,997] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 461 2023-01-11T21:41:24.0791687Z 2023-01-11T21:41:24.0791769Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0791838Z import torch 2023-01-11T21:41:24.0791903Z import random 2023-01-11T21:41:24.0792015Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0792131Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0792136Z 2023-01-11T21:41:24.0792213Z aten = torch.ops.aten 2023-01-11T21:41:24.0792343Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0792422Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0792429Z 2023-01-11T21:41:24.0792472Z 2023-01-11T21:41:24.0792594Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0792793Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0792906Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:24.0793006Z float* __restrict__ in_out_ptr1, 2023-01-11T21:41:24.0793106Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0793207Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0793303Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:24.0793390Z const float* __restrict__ in_ptr3, 2023-01-11T21:41:24.0793487Z const float* __restrict__ in_ptr4, 2023-01-11T21:41:24.0793584Z const float* __restrict__ in_ptr5, 2023-01-11T21:41:24.0793678Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0793831Z float* __restrict__ out_ptr2, 2023-01-11T21:41:24.0793925Z float* __restrict__ out_ptr3, 2023-01-11T21:41:24.0794014Z float* __restrict__ out_ptr5) 2023-01-11T21:41:24.0794062Z { 2023-01-11T21:41:24.0794237Z auto out_ptr1 = in_out_ptr0; 2023-01-11T21:41:24.0794317Z auto out_ptr4 = in_out_ptr1; 2023-01-11T21:41:24.0794414Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0794474Z { 2023-01-11T21:41:24.0794546Z #pragma omp for 2023-01-11T21:41:24.0794624Z for(long i0=0; i0<512; i0+=1) 2023-01-11T21:41:24.0794674Z { 2023-01-11T21:41:24.0794734Z { 2023-01-11T21:41:24.0794929Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0795004Z float tmp1 = 0; 2023-01-11T21:41:24.0795123Z auto tmp1_vec = at::vec::Vectorized(tmp1); 2023-01-11T21:41:24.0795216Z for(long i1=0; i1<128; i1+=1) 2023-01-11T21:41:24.0795281Z { 2023-01-11T21:41:24.0795427Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (1024*i0)); 2023-01-11T21:41:24.0795494Z tmp1_vec += tmp0; 2023-01-11T21:41:24.0795556Z } 2023-01-11T21:41:24.0795750Z tmp1 = at::vec::vec_reduce_all([](at::vec::Vectorized& 
x, at::vec::Vectorized&y) {return x + y;}, tmp1_vec); 2023-01-11T21:41:24.0795868Z #pragma omp simd simdlen(4) reduction(+:tmp1) 2023-01-11T21:41:24.0796148Z for(long i1=1024; i1<1024; i1+=1) 2023-01-11T21:41:24.0796209Z { 2023-01-11T21:41:24.0796309Z auto tmp0 = in_ptr0[i1 + (1024*i0)]; 2023-01-11T21:41:24.0796372Z tmp1 += tmp0; 2023-01-11T21:41:24.0796433Z } 2023-01-11T21:41:24.0796510Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:24.0796574Z } 2023-01-11T21:41:24.0796675Z } 2023-01-11T21:41:24.0796949Z #pragma omp for 2023-01-11T21:41:24.0797099Z for(long i0=0; i0<512; i0+=1) 2023-01-11T21:41:24.0797149Z { 2023-01-11T21:41:24.0797208Z { 2023-01-11T21:41:24.0797396Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0797473Z float tmp6 = 0; 2023-01-11T21:41:24.0797590Z auto tmp6_vec = at::vec::Vectorized(tmp6); 2023-01-11T21:41:24.0797675Z for(long i1=0; i1<128; i1+=1) 2023-01-11T21:41:24.0797737Z { 2023-01-11T21:41:24.0797875Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (1024*i0)); 2023-01-11T21:41:24.0797987Z auto tmp1 = at::vec::Vectorized(out_ptr0[i0]); 2023-01-11T21:41:24.0798121Z auto tmp2 = at::vec::Vectorized(static_cast(1024)); 2023-01-11T21:41:24.0798249Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:24.0798395Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.0798482Z auto tmp5 = tmp4.pow(2); 2023-01-11T21:41:24.0798562Z tmp6_vec += tmp5; 2023-01-11T21:41:24.0798632Z } 2023-01-11T21:41:24.0798825Z tmp6 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp6_vec); 2023-01-11T21:41:24.0798932Z #pragma omp simd simdlen(4) reduction(+:tmp6) 2023-01-11T21:41:24.0799019Z for(long i1=1024; i1<1024; i1+=1) 2023-01-11T21:41:24.0799081Z { 2023-01-11T21:41:24.0799179Z auto tmp0 = in_ptr0[i1 + (1024*i0)]; 2023-01-11T21:41:24.0799268Z auto tmp1 = out_ptr0[i0]; 2023-01-11T21:41:24.0799367Z auto tmp2 = static_cast(1024); 2023-01-11T21:41:24.0799452Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:24.0799580Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.0799666Z auto tmp5 = tmp4 * tmp4; 2023-01-11T21:41:24.0799742Z tmp6 += tmp5; 2023-01-11T21:41:24.0799806Z } 2023-01-11T21:41:24.0799883Z out_ptr1[i0] = tmp6; 2023-01-11T21:41:24.0799943Z } 2023-01-11T21:41:24.0800004Z } 2023-01-11T21:41:24.0800067Z #pragma omp for 2023-01-11T21:41:24.0800210Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:24.0800271Z { 2023-01-11T21:41:24.0800403Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0800536Z auto tmp1 = at::vec::Vectorized(static_cast(1024)); 2023-01-11T21:41:24.0800738Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0800831Z tmp2.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.0800881Z } 2023-01-11T21:41:24.0800973Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0801058Z for(long i0=512; i0<512; i0+=1) 2023-01-11T21:41:24.0801117Z { 2023-01-11T21:41:24.0801197Z auto tmp0 = out_ptr1[i0]; 2023-01-11T21:41:24.0801298Z auto tmp1 = static_cast(1024); 2023-01-11T21:41:24.0801377Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0801447Z in_out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.0801505Z } 2023-01-11T21:41:24.0801580Z #pragma omp for 2023-01-11T21:41:24.0801661Z for(long i0=0; i0<512; i0+=1) 2023-01-11T21:41:24.0801720Z { 2023-01-11T21:41:24.0801780Z { 2023-01-11T21:41:24.0801963Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0802028Z float tmp9 = 0; 2023-01-11T21:41:24.0802146Z auto tmp9_vec = 
at::vec::Vectorized(tmp9); 2023-01-11T21:41:24.0802234Z for(long i1=0; i1<128; i1+=1) 2023-01-11T21:41:24.0802362Z { 2023-01-11T21:41:24.0802504Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr1 + (8*i1) + (1024*i0)); 2023-01-11T21:41:24.0802629Z auto tmp1 = at::vec::Vectorized(in_ptr2[i0]); 2023-01-11T21:41:24.0802749Z auto tmp3 = at::vec::Vectorized(in_ptr3[i0]); 2023-01-11T21:41:24.0802882Z auto tmp5 = at::vec::Vectorized::loadu(in_ptr4 + 8*i1); 2023-01-11T21:41:24.0803010Z auto tmp7 = at::vec::Vectorized::loadu(in_ptr5 + 8*i1); 2023-01-11T21:41:24.0803137Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:24.0803225Z auto tmp4 = tmp2 * tmp3; 2023-01-11T21:41:24.0803313Z auto tmp6 = tmp4 * tmp5; 2023-01-11T21:41:24.0803398Z auto tmp8 = tmp6 + tmp7; 2023-01-11T21:41:24.0803499Z tmp8.store(out_ptr2 + (8*i1) + (1024*i0)); 2023-01-11T21:41:24.0803611Z tmp9_vec += tmp8; 2023-01-11T21:41:24.0803676Z } 2023-01-11T21:41:24.0803858Z tmp9 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp9_vec); 2023-01-11T21:41:24.0803974Z #pragma omp simd simdlen(4) reduction(+:tmp9) 2023-01-11T21:41:24.0804064Z for(long i1=1024; i1<1024; i1+=1) 2023-01-11T21:41:24.0804126Z { 2023-01-11T21:41:24.0804220Z auto tmp0 = in_ptr1[i1 + (1024*i0)]; 2023-01-11T21:41:24.0804304Z auto tmp1 = in_ptr2[i0]; 2023-01-11T21:41:24.0804389Z auto tmp3 = in_ptr3[i0]; 2023-01-11T21:41:24.0804474Z auto tmp5 = in_ptr4[i1]; 2023-01-11T21:41:24.0804548Z auto tmp7 = in_ptr5[i1]; 2023-01-11T21:41:24.0804679Z auto tmp2 = tmp0 - tmp1; 2023-01-11T21:41:24.0804762Z auto tmp4 = tmp2 * tmp3; 2023-01-11T21:41:24.0804850Z auto tmp6 = tmp4 * tmp5; 2023-01-11T21:41:24.0804932Z auto tmp8 = tmp6 + tmp7; 2023-01-11T21:41:24.0805023Z out_ptr2[i1 + (1024*i0)] = tmp8; 2023-01-11T21:41:24.0805094Z tmp9 += tmp8; 2023-01-11T21:41:24.0805145Z } 2023-01-11T21:41:24.0805222Z out_ptr3[i0] = tmp9; 2023-01-11T21:41:24.0805285Z } 2023-01-11T21:41:24.0805343Z } 2023-01-11T21:41:24.0805415Z #pragma omp for 2023-01-11T21:41:24.0805494Z for(long i0=0; i0<512; i0+=1) 2023-01-11T21:41:24.0805544Z { 2023-01-11T21:41:24.0805602Z { 2023-01-11T21:41:24.0805789Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0805864Z float tmp6 = 0; 2023-01-11T21:41:24.0805981Z auto tmp6_vec = at::vec::Vectorized(tmp6); 2023-01-11T21:41:24.0806055Z float tmp7 = 0; 2023-01-11T21:41:24.0806176Z auto tmp7_vec = at::vec::Vectorized(tmp7); 2023-01-11T21:41:24.0806262Z for(long i1=0; i1<128; i1+=1) 2023-01-11T21:41:24.0806313Z { 2023-01-11T21:41:24.0806453Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr2 + (8*i1) + (1024*i0)); 2023-01-11T21:41:24.0806576Z auto tmp1 = at::vec::Vectorized(out_ptr3[i0]); 2023-01-11T21:41:24.0806709Z auto tmp2 = at::vec::Vectorized(static_cast(1024)); 2023-01-11T21:41:24.0806797Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:24.0806932Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.0807018Z auto tmp5 = tmp4.pow(2); 2023-01-11T21:41:24.0807097Z tmp6_vec += tmp5; 2023-01-11T21:41:24.0807163Z tmp7_vec += tmp0; 2023-01-11T21:41:24.0807291Z } 2023-01-11T21:41:24.0807486Z tmp6 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp6_vec); 2023-01-11T21:41:24.0807706Z tmp7 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp7_vec); 2023-01-11T21:41:24.0807843Z #pragma omp simd simdlen(4) reduction(+:tmp6) reduction(+:tmp7) 2023-01-11T21:41:24.0807933Z for(long i1=1024; i1<1024; i1+=1) 
2023-01-11T21:41:24.0807999Z { 2023-01-11T21:41:24.0808097Z auto tmp0 = out_ptr2[i1 + (1024*i0)]; 2023-01-11T21:41:24.0808176Z auto tmp1 = out_ptr3[i0]; 2023-01-11T21:41:24.0808276Z auto tmp2 = static_cast(1024); 2023-01-11T21:41:24.0808364Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:24.0808496Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.0808581Z auto tmp5 = tmp4 * tmp4; 2023-01-11T21:41:24.0808685Z tmp6 += tmp5; 2023-01-11T21:41:24.0808760Z tmp7 += tmp0; 2023-01-11T21:41:24.0808811Z } 2023-01-11T21:41:24.0808886Z out_ptr4[i0] = tmp6; 2023-01-11T21:41:24.0808963Z out_ptr5[i0] = tmp7; 2023-01-11T21:41:24.0809025Z } 2023-01-11T21:41:24.0809083Z } 2023-01-11T21:41:24.0809154Z #pragma omp for 2023-01-11T21:41:24.0809230Z for(long i0=0; i0<64; i0+=1) 2023-01-11T21:41:24.0809278Z { 2023-01-11T21:41:24.0809405Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr4 + 8*i0); 2023-01-11T21:41:24.0809538Z auto tmp1 = at::vec::Vectorized(static_cast(1024)); 2023-01-11T21:41:24.0809618Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0809883Z auto tmp3 = at::vec::Vectorized(static_cast(1e-05)); 2023-01-11T21:41:24.0809965Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0810061Z tmp4.store(in_out_ptr1 + 8*i0); 2023-01-11T21:41:24.0810123Z } 2023-01-11T21:41:24.0810265Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0810343Z for(long i0=512; i0<512; i0+=1) 2023-01-11T21:41:24.0810404Z { 2023-01-11T21:41:24.0810487Z auto tmp0 = out_ptr4[i0]; 2023-01-11T21:41:24.0810584Z auto tmp1 = static_cast(1024); 2023-01-11T21:41:24.0810665Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0810805Z auto tmp3 = static_cast(1e-05); 2023-01-11T21:41:24.0810880Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0810960Z in_out_ptr1[i0] = tmp4; 2023-01-11T21:41:24.0811021Z } 2023-01-11T21:41:24.0811080Z } 2023-01-11T21:41:24.0811136Z } 2023-01-11T21:41:24.0811210Z ''') 2023-01-11T21:41:24.0811216Z 2023-01-11T21:41:24.0811220Z 2023-01-11T21:41:24.0811306Z async_compile.wait(globals()) 2023-01-11T21:41:24.0811365Z del async_compile 2023-01-11T21:41:24.0811370Z 2023-01-11T21:41:24.0811440Z def call(args): 2023-01-11T21:41:24.0811542Z arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1 = args 2023-01-11T21:41:24.0811610Z args.clear() 2023-01-11T21:41:24.0811957Z buf0 = empty_strided((1, 512, 1), (512, 1, 512), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0812158Z buf1 = empty_strided((1, 512), (512, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0812238Z buf2 = buf1; del buf1 # reuse 2023-01-11T21:41:24.0812456Z buf3 = empty_strided((1, 512, 1024), (524288, 1024, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0812715Z buf4 = empty_strided((1, 512, 1), (512, 1, 512), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0812910Z buf5 = empty_strided((1, 512), (512, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0813103Z buf6 = empty_strided((1, 512), (512, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0813214Z buf7 = as_strided(buf5, (1, 512, 1), (512, 1, 1)); del buf5 # reuse 2023-01-11T21:41:24.0813628Z kernel_cpp_0(c_void_p(buf2.data_ptr()), c_void_p(buf7.data_ptr()), c_void_p(arg3_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(arg4_1.data_ptr()), c_void_p(arg5_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(buf6.data_ptr())) 2023-01-11T21:41:24.0813764Z del arg0_1 2023-01-11T21:41:24.0813827Z del arg1_1 2023-01-11T21:41:24.0813887Z del arg2_1 
2023-01-11T21:41:24.0813940Z del arg3_1 2023-01-11T21:41:24.0814001Z del arg4_1 2023-01-11T21:41:24.0814062Z del arg5_1 2023-01-11T21:41:24.0814301Z return (buf2, buf6, buf7, ) 2023-01-11T21:41:24.0814307Z 2023-01-11T21:41:24.0814312Z 2023-01-11T21:41:24.0814388Z if __name__ == "__main__": 2023-01-11T21:41:24.0814500Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0814620Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0814839Z arg0_1 = rand_strided((1024, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0815036Z arg1_1 = rand_strided((1024, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0815251Z arg2_1 = rand_strided((1, 512, 1024), (524288, 1024, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0815464Z arg3_1 = rand_strided((1, 512, 1024), (524288, 1024, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0815670Z arg4_1 = rand_strided((1, 512, 1), (512, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0815871Z arg5_1 = rand_strided((1, 512, 1), (512, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0816008Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1])) 2023-01-11T21:41:24.0816337Z [2023-01-11 21:38:53,613] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 461 2023-01-11T21:41:24.0816343Z 2023-01-11T21:41:24.0816406Z ok (1.863s) 2023-01-11T21:41:24.0816876Z test_tmp_not_defined_issue2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0817002Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0817263Z [2023-01-11 21:38:53,695] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 462 2023-01-11T21:41:24.0817520Z [2023-01-11 21:38:55,232] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 462 2023-01-11T21:41:24.0817525Z 2023-01-11T21:41:24.0817616Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0817683Z import torch 2023-01-11T21:41:24.0817752Z import random 2023-01-11T21:41:24.0817866Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0817989Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0817996Z 2023-01-11T21:41:24.0818062Z aten = torch.ops.aten 2023-01-11T21:41:24.0818195Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0818287Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0818292Z 2023-01-11T21:41:24.0818296Z 2023-01-11T21:41:24.0818430Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0818632Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0818745Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0818846Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0818945Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:24.0819032Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0819089Z { 2023-01-11T21:41:24.0819150Z { 2023-01-11T21:41:24.0819388Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.0819464Z float tmp5 = 0; 2023-01-11T21:41:24.0819580Z auto tmp5_vec = at::vec::Vectorized(tmp5); 2023-01-11T21:41:24.0819682Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0819731Z { 2023-01-11T21:41:24.0819834Z #pragma omp for reduction(+:tmp5_vec) 2023-01-11T21:41:24.0819921Z for(long i0=0; i0<17600; i0+=1) 2023-01-11T21:41:24.0819981Z { 2023-01-11T21:41:24.0820113Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0820233Z auto tmp1 = at::vec::Vectorized(in_ptr1[0]); 2023-01-11T21:41:24.0820364Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr2 + 8*i0); 2023-01-11T21:41:24.0820453Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0820527Z auto tmp4 = tmp2 * tmp3; 2023-01-11T21:41:24.0820633Z tmp5_vec += tmp4; 2023-01-11T21:41:24.0820694Z } 2023-01-11T21:41:24.0820881Z tmp5 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp5_vec); 2023-01-11T21:41:24.0821002Z #pragma omp for simd simdlen(4) reduction(+:tmp5) 2023-01-11T21:41:24.0821094Z for(long i0=140800; i0<140800; i0+=1) 2023-01-11T21:41:24.0821156Z { 2023-01-11T21:41:24.0821229Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0821314Z auto tmp1 = in_ptr1[0]; 2023-01-11T21:41:24.0821398Z auto tmp3 = in_ptr2[i0]; 2023-01-11T21:41:24.0821482Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.0821564Z auto tmp4 = tmp2 * tmp3; 2023-01-11T21:41:24.0821637Z tmp5 += tmp4; 2023-01-11T21:41:24.0821699Z } 2023-01-11T21:41:24.0821748Z } 2023-01-11T21:41:24.0821820Z out_ptr0[0] = tmp5; 2023-01-11T21:41:24.0821885Z } 2023-01-11T21:41:24.0821942Z } 2023-01-11T21:41:24.0822018Z ''') 2023-01-11T21:41:24.0822023Z 2023-01-11T21:41:24.0822027Z 2023-01-11T21:41:24.0822113Z async_compile.wait(globals()) 2023-01-11T21:41:24.0822185Z del 
async_compile 2023-01-11T21:41:24.0822190Z 2023-01-11T21:41:24.0822246Z def call(args): 2023-01-11T21:41:24.0822523Z primals_1, primals_2, primals_3 = args 2023-01-11T21:41:24.0822592Z args.clear() 2023-01-11T21:41:24.0822885Z buf0 = empty_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0823087Z kernel_cpp_0(c_void_p(primals_3.data_ptr()), c_void_p(primals_2.data_ptr()), c_void_p(primals_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0823200Z return (buf0, primals_1, primals_2, primals_3, ) 2023-01-11T21:41:24.0823205Z 2023-01-11T21:41:24.0823209Z 2023-01-11T21:41:24.0823281Z if __name__ == "__main__": 2023-01-11T21:41:24.0823393Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0823506Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0823953Z primals_1 = rand_strided((1, 88, 40, 40), (140800, 1600, 40, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0824144Z primals_2 = rand_strided((), (), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0824370Z primals_3 = rand_strided((1, 88, 40, 40), (140800, 1600, 40, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0824515Z print_performance(lambda: call([primals_1, primals_2, primals_3])) 2023-01-11T21:41:24.0824523Z 2023-01-11T21:41:24.0824617Z ok (1.608s) 2023-01-11T21:41:24.0824764Z test_to_device_constant_cpu (__main__.CpuTests) ... skip: requires cuda (0.001s) 2023-01-11T21:41:24.0824894Z test_to_device_cpu (__main__.CpuTests) ... skip: requires cuda (0.000s) 2023-01-11T21:41:24.0825363Z test_to_dtype_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0825546Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0826027Z [2023-01-11 21:38:55,333] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 463 2023-01-11T21:41:24.0826293Z [2023-01-11 21:38:56,877] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 463 2023-01-11T21:41:24.0826299Z 2023-01-11T21:41:24.0826390Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0826458Z import torch 2023-01-11T21:41:24.0826525Z import random 2023-01-11T21:41:24.0826635Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0826751Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0826756Z 2023-01-11T21:41:24.0826829Z aten = torch.ops.aten 2023-01-11T21:41:24.0826994Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0827084Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0827089Z 2023-01-11T21:41:24.0827093Z 2023-01-11T21:41:24.0827224Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0827422Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0827537Z extern "C" void kernel(const double* __restrict__ in_ptr0, 2023-01-11T21:41:24.0827634Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0827727Z bool* __restrict__ out_ptr1) 2023-01-11T21:41:24.0827786Z { 2023-01-11T21:41:24.0827871Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0827929Z { 2023-01-11T21:41:24.0828002Z #pragma omp for 2023-01-11T21:41:24.0828082Z for(long i0=0; i0<40; i0+=1) 2023-01-11T21:41:24.0828141Z { 2023-01-11T21:41:24.0828201Z { 2023-01-11T21:41:24.0828255Z { 2023-01-11T21:41:24.0828347Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0828449Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.0828537Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0828642Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:24.0828744Z auto tmp4 = static_cast(tmp0); 2023-01-11T21:41:24.0828824Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.0828894Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:24.0828957Z } 2023-01-11T21:41:24.0829019Z } 2023-01-11T21:41:24.0829077Z } 2023-01-11T21:41:24.0829136Z } 2023-01-11T21:41:24.0829194Z } 2023-01-11T21:41:24.0829267Z ''') 2023-01-11T21:41:24.0829272Z 2023-01-11T21:41:24.0829276Z 2023-01-11T21:41:24.0829352Z async_compile.wait(globals()) 2023-01-11T21:41:24.0829421Z del async_compile 2023-01-11T21:41:24.0829427Z 2023-01-11T21:41:24.0829496Z def call(args): 2023-01-11T21:41:24.0829567Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0829637Z args.clear() 2023-01-11T21:41:24.0829840Z buf0 = empty_strided((2, 2, 10), (20, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0830036Z buf1 = empty_strided((2, 2, 10), (20, 10, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:24.0830199Z kernel_cpp_0(c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0830277Z return (arg0_1, buf0, arg1_1, buf1, ) 2023-01-11T21:41:24.0830282Z 2023-01-11T21:41:24.0830296Z 2023-01-11T21:41:24.0830358Z if __name__ == "__main__": 2023-01-11T21:41:24.0830464Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0830585Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0830790Z arg0_1 = rand_strided((2, 2, 10), (20, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0830991Z 
arg1_1 = rand_strided((2, 2, 10), (20, 10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.0831132Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0831138Z 2023-01-11T21:41:24.0831200Z ok (1.641s) 2023-01-11T21:41:24.0831660Z test_topk_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0831772Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0832033Z [2023-01-11 21:38:56,895] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 464 2023-01-11T21:41:24.0832243Z [2023-01-11 21:38:56,900] torch._inductor.ir: [WARNING] Using FallbackKernel: aten.topk 2023-01-11T21:41:24.0832533Z [2023-01-11 21:38:56,903] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 464 2023-01-11T21:41:24.0832541Z 2023-01-11T21:41:24.0832632Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0832699Z import torch 2023-01-11T21:41:24.0832766Z import random 2023-01-11T21:41:24.0832879Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0832986Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0833000Z 2023-01-11T21:41:24.0833064Z aten = torch.ops.aten 2023-01-11T21:41:24.0833192Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0833281Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0833286Z 2023-01-11T21:41:24.0833291Z 2023-01-11T21:41:24.0833376Z async_compile.wait(globals()) 2023-01-11T21:41:24.0833446Z del async_compile 2023-01-11T21:41:24.0833451Z 2023-01-11T21:41:24.0833516Z def call(args): 2023-01-11T21:41:24.0833580Z arg0_1, = args 2023-01-11T21:41:24.0833638Z args.clear() 2023-01-11T21:41:24.0833780Z buf0 = aten.topk(arg0_1, 2) 2023-01-11T21:41:24.0833849Z del arg0_1 2023-01-11T21:41:24.0834000Z buf1 = buf0[0] 2023-01-11T21:41:24.0834104Z assert_size_stride(buf1, (1, 1, 8, 2), (16, 16, 2, 1)) 2023-01-11T21:41:24.0834168Z buf2 = buf0[1] 2023-01-11T21:41:24.0834271Z assert_size_stride(buf2, (1, 1, 8, 2), (16, 16, 2, 1)) 2023-01-11T21:41:24.0834325Z del buf0 2023-01-11T21:41:24.0834401Z return (buf1, buf2, ) 2023-01-11T21:41:24.0834406Z 2023-01-11T21:41:24.0834410Z 2023-01-11T21:41:24.0834486Z if __name__ == "__main__": 2023-01-11T21:41:24.0834597Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0834718Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0834934Z arg0_1 = rand_strided((1, 1, 8, 8), (64, 64, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0835041Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0835045Z 2023-01-11T21:41:24.0835108Z ok (0.024s) 2023-01-11T21:41:24.0835777Z test_transpose_add_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0835892Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0836153Z [2023-01-11 21:38:56,917] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 465 2023-01-11T21:41:24.0836422Z [2023-01-11 21:38:58,454] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 465 2023-01-11T21:41:24.0836428Z 2023-01-11T21:41:24.0836522Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0836591Z import torch 2023-01-11T21:41:24.0836659Z import random 2023-01-11T21:41:24.0836772Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0837132Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0837138Z 2023-01-11T21:41:24.0837202Z aten = torch.ops.aten 2023-01-11T21:41:24.0837332Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0837421Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0837426Z 2023-01-11T21:41:24.0837430Z 2023-01-11T21:41:24.0837613Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0837988Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0838109Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0838212Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0838308Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0838357Z { 2023-01-11T21:41:24.0838457Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0838517Z { 2023-01-11T21:41:24.0838591Z #pragma omp for 2023-01-11T21:41:24.0838712Z for(long i0=0; i0<32; i0+=1) 2023-01-11T21:41:24.0838772Z { 2023-01-11T21:41:24.0838918Z #pragma GCC ivdep 2023-01-11T21:41:24.0838991Z for(long i1=0; i1<16; i1+=1) 2023-01-11T21:41:24.0839050Z { 2023-01-11T21:41:24.0839111Z { 2023-01-11T21:41:24.0839173Z { 2023-01-11T21:41:24.0839274Z auto tmp0 = in_ptr0[i0 + (32*i1)]; 2023-01-11T21:41:24.0839375Z auto tmp1 = in_ptr1[i1 + (16*i0)]; 2023-01-11T21:41:24.0839527Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0839611Z out_ptr0[i0 + (32*i1)] = tmp2; 2023-01-11T21:41:24.0839673Z } 2023-01-11T21:41:24.0839736Z } 2023-01-11T21:41:24.0839798Z } 2023-01-11T21:41:24.0839858Z } 2023-01-11T21:41:24.0839920Z } 2023-01-11T21:41:24.0839966Z } 2023-01-11T21:41:24.0840045Z ''') 2023-01-11T21:41:24.0840053Z 2023-01-11T21:41:24.0840060Z 2023-01-11T21:41:24.0840146Z async_compile.wait(globals()) 2023-01-11T21:41:24.0840215Z del async_compile 2023-01-11T21:41:24.0840220Z 2023-01-11T21:41:24.0840287Z def call(args): 2023-01-11T21:41:24.0840357Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0840423Z args.clear() 2023-01-11T21:41:24.0840621Z buf0 = empty_strided((32, 16), (1, 32), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0840771Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0840835Z del arg0_1 2023-01-11T21:41:24.0840898Z del arg1_1 2023-01-11T21:41:24.0840966Z return (buf0, ) 2023-01-11T21:41:24.0840970Z 2023-01-11T21:41:24.0840974Z 2023-01-11T21:41:24.0841047Z if __name__ == "__main__": 2023-01-11T21:41:24.0841158Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0841279Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0841479Z arg0_1 = rand_strided((16, 32), (32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0841665Z arg1_1 = rand_strided((32, 16), 
(16, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0841777Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0841782Z 2023-01-11T21:41:24.0841845Z ok (1.552s) 2023-01-11T21:41:24.0842312Z test_transpose_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0842435Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0842690Z [2023-01-11 21:38:58,479] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 466 2023-01-11T21:41:24.0842987Z [2023-01-11 21:38:58,493] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 466 2023-01-11T21:41:24.0842992Z 2023-01-11T21:41:24.0843080Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0843148Z import torch 2023-01-11T21:41:24.0843205Z import random 2023-01-11T21:41:24.0843314Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0843434Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0843439Z 2023-01-11T21:41:24.0843516Z aten = torch.ops.aten 2023-01-11T21:41:24.0843644Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0843731Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0843735Z 2023-01-11T21:41:24.0843739Z 2023-01-11T21:41:24.0843872Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0844076Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0844181Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0844311Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0844407Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0844499Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0844553Z { 2023-01-11T21:41:24.0844645Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0844706Z { 2023-01-11T21:41:24.0844769Z #pragma omp for 2023-01-11T21:41:24.0844848Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0844906Z { 2023-01-11T21:41:24.0844981Z #pragma GCC ivdep 2023-01-11T21:41:24.0845063Z for(long i1=0; i1<8; i1+=1) 2023-01-11T21:41:24.0845122Z { 2023-01-11T21:41:24.0845269Z { 2023-01-11T21:41:24.0845322Z { 2023-01-11T21:41:24.0845420Z auto tmp0 = in_ptr0[i0 + (8*i1)]; 2023-01-11T21:41:24.0845516Z auto tmp1 = in_ptr1[i1 + (8*i0)]; 2023-01-11T21:41:24.0845613Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0845704Z out_ptr0[i0 + (8*i1)] = tmp2; 2023-01-11T21:41:24.0845769Z } 2023-01-11T21:41:24.0845831Z } 2023-01-11T21:41:24.0845880Z } 2023-01-11T21:41:24.0845941Z } 2023-01-11T21:41:24.0846015Z #pragma omp for 2023-01-11T21:41:24.0846093Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0846150Z { 2023-01-11T21:41:24.0846286Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.0846418Z auto tmp1 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:24.0846493Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0846622Z auto tmp3 = at::vec::Vectorized(static_cast(10)); 2023-01-11T21:41:24.0846702Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0846794Z tmp4.store(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0846851Z } 2023-01-11T21:41:24.0846945Z #pragma omp for simd simdlen(4) 
2023-01-11T21:41:24.0847189Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:24.0847238Z { 2023-01-11T21:41:24.0847316Z auto tmp0 = in_ptr1[i0]; 2023-01-11T21:41:24.0847411Z auto tmp1 = static_cast(2); 2023-01-11T21:41:24.0847491Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0847586Z auto tmp3 = static_cast(10); 2023-01-11T21:41:24.0847665Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0847740Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:24.0847789Z } 2023-01-11T21:41:24.0847846Z } 2023-01-11T21:41:24.0847904Z } 2023-01-11T21:41:24.0847982Z ''') 2023-01-11T21:41:24.0847988Z 2023-01-11T21:41:24.0847992Z 2023-01-11T21:41:24.0848078Z async_compile.wait(globals()) 2023-01-11T21:41:24.0848147Z del async_compile 2023-01-11T21:41:24.0848151Z 2023-01-11T21:41:24.0848218Z def call(args): 2023-01-11T21:41:24.0848279Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0848382Z args.clear() 2023-01-11T21:41:24.0848573Z buf0 = empty_strided((8, 8), (1, 8), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0848870Z buf1 = empty_strided((8, 8), (1, 8), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0849146Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0849211Z del arg0_1 2023-01-11T21:41:24.0849275Z del arg1_1 2023-01-11T21:41:24.0849338Z return (buf0, buf1, ) 2023-01-11T21:41:24.0849354Z 2023-01-11T21:41:24.0849359Z 2023-01-11T21:41:24.0849420Z if __name__ == "__main__": 2023-01-11T21:41:24.0849534Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0849654Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0849879Z arg0_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0850215Z arg1_1 = rand_strided((8, 8), (8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0850364Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0850370Z 2023-01-11T21:41:24.0850432Z ok (0.038s) 2023-01-11T21:41:24.0850780Z test_transposed_propagates_cpu (__main__.CpuTests) ... 
[2023-01-11 21:38:58,507] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 467 2023-01-11T21:41:24.0851199Z [2023-01-11 21:39:00,023] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 467 2023-01-11T21:41:24.0851214Z 2023-01-11T21:41:24.0851294Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0851358Z import torch 2023-01-11T21:41:24.0851427Z import random 2023-01-11T21:41:24.0851602Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0851719Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0851724Z 2023-01-11T21:41:24.0851798Z aten = torch.ops.aten 2023-01-11T21:41:24.0851930Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0852013Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0852019Z 2023-01-11T21:41:24.0852023Z 2023-01-11T21:41:24.0852159Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0852365Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0852480Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0852577Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.0852674Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.0852733Z { 2023-01-11T21:41:24.0852818Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0852878Z { 2023-01-11T21:41:24.0852954Z #pragma omp for 2023-01-11T21:41:24.0853036Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0853096Z { 2023-01-11T21:41:24.0853229Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0853361Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.0853446Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0853523Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0853583Z } 2023-01-11T21:41:24.0853673Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0853752Z for(long i0=64; i0<64; i0+=1) 2023-01-11T21:41:24.0853813Z { 2023-01-11T21:41:24.0853895Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0853962Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.0854041Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0854118Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.0854175Z } 2023-01-11T21:41:24.0854232Z } 2023-01-11T21:41:24.0854290Z } 2023-01-11T21:41:24.0854370Z ''') 2023-01-11T21:41:24.0854375Z 2023-01-11T21:41:24.0854379Z 2023-01-11T21:41:24.0854466Z async_compile.wait(globals()) 2023-01-11T21:41:24.0854526Z del async_compile 2023-01-11T21:41:24.0854531Z 2023-01-11T21:41:24.0854632Z def call(args): 2023-01-11T21:41:24.0854709Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0854782Z args.clear() 2023-01-11T21:41:24.0854996Z buf0 = empty_strided((1, 4, 4, 4), (64, 4, 1, 16), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0855159Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.0855226Z del arg0_1 2023-01-11T21:41:24.0855279Z del arg1_1 2023-01-11T21:41:24.0855349Z return (buf0, ) 2023-01-11T21:41:24.0855354Z 2023-01-11T21:41:24.0855358Z 2023-01-11T21:41:24.0855427Z if __name__ == "__main__": 2023-01-11T21:41:24.0855538Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0855660Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0855871Z arg0_1 = rand_strided((1, 4, 4, 4), (64, 4, 1, 16), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0856073Z arg1_1 = rand_strided((4, 4, 4), (4, 1, 16), device='cpu', dtype=torch.float32) 
2023-01-11T21:41:24.0856214Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0856220Z 2023-01-11T21:41:24.0856274Z ok (1.529s) 2023-01-11T21:41:24.0856408Z test_triton_conv_cpu (__main__.CpuTests) ... skip: requires cuda (0.001s) 2023-01-11T21:41:24.0856735Z test_triton_mm2_cpu (__main__.CpuTests) ... [2023-01-11 21:39:00,079] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 468 2023-01-11T21:41:24.0857000Z [2023-01-11 21:39:01,619] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 468 2023-01-11T21:41:24.0857005Z 2023-01-11T21:41:24.0857096Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0857164Z import torch 2023-01-11T21:41:24.0857233Z import random 2023-01-11T21:41:24.0857345Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0857452Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0857456Z 2023-01-11T21:41:24.0857527Z aten = torch.ops.aten 2023-01-11T21:41:24.0857661Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0857748Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0857753Z 2023-01-11T21:41:24.0857757Z 2023-01-11T21:41:24.0857891Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0858088Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0858203Z extern "C" void kernel(float* __restrict__ in_out_ptr0) 2023-01-11T21:41:24.0858259Z { 2023-01-11T21:41:24.0858342Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0858401Z { 2023-01-11T21:41:24.0858473Z #pragma omp for 2023-01-11T21:41:24.0858559Z for(long i0=0; i0<131072; i0+=1) 2023-01-11T21:41:24.0858619Z { 2023-01-11T21:41:24.0858753Z auto tmp0 = at::vec::Vectorized::loadu(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.0858877Z auto tmp1 = at::vec::clamp_min(tmp0, decltype(tmp0)(0)); 2023-01-11T21:41:24.0858963Z tmp1.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.0859023Z } 2023-01-11T21:41:24.0859115Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0859202Z for(long i0=1048576; i0<1048576; i0+=1) 2023-01-11T21:41:24.0859260Z { 2023-01-11T21:41:24.0859345Z auto tmp0 = in_out_ptr0[i0]; 2023-01-11T21:41:24.0859432Z auto tmp1 = tmp0 * (tmp0>0); 2023-01-11T21:41:24.0859504Z in_out_ptr0[i0] = tmp1; 2023-01-11T21:41:24.0859559Z } 2023-01-11T21:41:24.0859619Z } 2023-01-11T21:41:24.0859677Z } 2023-01-11T21:41:24.0859753Z ''') 2023-01-11T21:41:24.0859758Z 2023-01-11T21:41:24.0859762Z 2023-01-11T21:41:24.0859847Z async_compile.wait(globals()) 2023-01-11T21:41:24.0859915Z del async_compile 2023-01-11T21:41:24.0859920Z 2023-01-11T21:41:24.0859976Z def call(args): 2023-01-11T21:41:24.0860048Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.0860114Z args.clear() 2023-01-11T21:41:24.0860322Z buf0 = empty_strided((1024, 1024), (1024, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0860458Z aten.mm.out(arg0_1, arg1_1, out=buf0) 2023-01-11T21:41:24.0860526Z del arg0_1 2023-01-11T21:41:24.0860589Z del arg1_1 2023-01-11T21:41:24.0860660Z buf1 = buf0; del buf0 # reuse 2023-01-11T21:41:24.0860758Z kernel_cpp_0(c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0860824Z return (buf1, ) 2023-01-11T21:41:24.0860829Z 2023-01-11T21:41:24.0860833Z 2023-01-11T21:41:24.0860908Z if __name__ == "__main__": 2023-01-11T21:41:24.0861014Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0861134Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0861340Z arg0_1 = 
rand_strided((1024, 1024), (1024, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0861536Z arg1_1 = rand_strided((1024, 1024), (1024, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0861637Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.0861642Z 2023-01-11T21:41:24.0861705Z ok (1.609s) 2023-01-11T21:41:24.0862191Z test_triu_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0862317Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0862728Z [2023-01-11 21:39:01,676] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 469 2023-01-11T21:41:24.0862993Z [2023-01-11 21:39:03,226] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 469 2023-01-11T21:41:24.0862999Z 2023-01-11T21:41:24.0863090Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0863156Z import torch 2023-01-11T21:41:24.0863225Z import random 2023-01-11T21:41:24.0863331Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0863446Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0863451Z 2023-01-11T21:41:24.0863526Z aten = torch.ops.aten 2023-01-11T21:41:24.0863656Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0863744Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0863749Z 2023-01-11T21:41:24.0863753Z 2023-01-11T21:41:24.0863885Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0864084Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0864200Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0864287Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0864382Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.0864471Z float* __restrict__ out_ptr2) 2023-01-11T21:41:24.0864527Z { 2023-01-11T21:41:24.0864627Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0864685Z { 2023-01-11T21:41:24.0864771Z #pragma omp for collapse(2) 2023-01-11T21:41:24.0864839Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0864896Z { 2023-01-11T21:41:24.0864980Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.0865039Z { 2023-01-11T21:41:24.0865234Z #pragma GCC ivdep 2023-01-11T21:41:24.0865321Z for(long i2=0; i2<10; i2+=1) 2023-01-11T21:41:24.0865386Z { 2023-01-11T21:41:24.0865439Z { 2023-01-11T21:41:24.0865505Z { 2023-01-11T21:41:24.0865618Z auto tmp3 = in_ptr0[i2 + (10*i1) + (100*i0)]; 2023-01-11T21:41:24.0865805Z auto tmp0 = static_cast((-1) + i2 + ((-1)*i1)); 2023-01-11T21:41:24.0865911Z auto tmp1 = static_cast(0); 2023-01-11T21:41:24.0866006Z auto tmp2 = tmp0 >= tmp1; 2023-01-11T21:41:24.0866182Z auto tmp4 = static_cast(0); 2023-01-11T21:41:24.0866272Z auto tmp5 = tmp2 ? tmp3 : tmp4; 2023-01-11T21:41:24.0866836Z auto tmp6 = static_cast(i2 + ((-1)*i1)); 2023-01-11T21:41:24.0866995Z auto tmp7 = tmp6 >= tmp1; 2023-01-11T21:41:24.0867150Z auto tmp8 = tmp7 ? tmp3 : tmp4; 2023-01-11T21:41:24.0867337Z auto tmp9 = static_cast((-2) + i2 + ((-1)*i1)); 2023-01-11T21:41:24.0867429Z auto tmp10 = tmp9 >= tmp1; 2023-01-11T21:41:24.0867531Z auto tmp11 = tmp10 ? 
tmp3 : tmp4; 2023-01-11T21:41:24.0867628Z out_ptr0[i2 + (10*i1) + (100*i0)] = tmp5; 2023-01-11T21:41:24.0867716Z out_ptr1[i2 + (10*i1) + (100*i0)] = tmp8; 2023-01-11T21:41:24.0867814Z out_ptr2[i2 + (10*i1) + (100*i0)] = tmp11; 2023-01-11T21:41:24.0867923Z } 2023-01-11T21:41:24.0867989Z } 2023-01-11T21:41:24.0868052Z } 2023-01-11T21:41:24.0868113Z } 2023-01-11T21:41:24.0868175Z } 2023-01-11T21:41:24.0868223Z } 2023-01-11T21:41:24.0868279Z } 2023-01-11T21:41:24.0868357Z ''') 2023-01-11T21:41:24.0868363Z 2023-01-11T21:41:24.0868367Z 2023-01-11T21:41:24.0868453Z async_compile.wait(globals()) 2023-01-11T21:41:24.0868525Z del async_compile 2023-01-11T21:41:24.0868530Z 2023-01-11T21:41:24.0868598Z def call(args): 2023-01-11T21:41:24.0868664Z arg0_1, = args 2023-01-11T21:41:24.0868723Z args.clear() 2023-01-11T21:41:24.0868930Z buf0 = empty_strided((2, 10, 10), (100, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0869135Z buf1 = empty_strided((2, 10, 10), (100, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0869336Z buf2 = empty_strided((2, 10, 10), (100, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0869525Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr())) 2023-01-11T21:41:24.0869591Z del arg0_1 2023-01-11T21:41:24.0869668Z return (buf0, buf1, buf2, ) 2023-01-11T21:41:24.0869673Z 2023-01-11T21:41:24.0869677Z 2023-01-11T21:41:24.0869751Z if __name__ == "__main__": 2023-01-11T21:41:24.0869852Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0869971Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0870176Z arg0_1 = rand_strided((2, 10, 10), (100, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0870288Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0870293Z 2023-01-11T21:41:24.0870357Z ok (1.595s) 2023-01-11T21:41:24.0870826Z test_unbind_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0870951Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0871208Z [2023-01-11 21:39:03,250] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 470 2023-01-11T21:41:24.0871475Z [2023-01-11 21:39:03,257] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 470 2023-01-11T21:41:24.0871480Z 2023-01-11T21:41:24.0871573Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0871629Z import torch 2023-01-11T21:41:24.0871698Z import random 2023-01-11T21:41:24.0871808Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0871924Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0871928Z 2023-01-11T21:41:24.0872004Z aten = torch.ops.aten 2023-01-11T21:41:24.0872166Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0872254Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0872259Z 2023-01-11T21:41:24.0872264Z 2023-01-11T21:41:24.0872347Z async_compile.wait(globals()) 2023-01-11T21:41:24.0872405Z del async_compile 2023-01-11T21:41:24.0872410Z 2023-01-11T21:41:24.0872476Z def call(args): 2023-01-11T21:41:24.0872539Z arg0_1, = args 2023-01-11T21:41:24.0872606Z args.clear() 2023-01-11T21:41:24.0872881Z return (as_strided(arg0_1, (4, 4), (4, 1)), as_strided(arg0_1, (4, 4), (4, 1), 16), as_strided(arg0_1, (4, 4), (4, 1), 32), as_strided(arg0_1, (4, 4), (4, 1), 48), as_strided(arg0_1, (4, 4), (16, 4)), as_strided(arg0_1, (4, 4), (16, 4), 1), as_strided(arg0_1, (4, 4), (16, 4), 2), as_strided(arg0_1, (4, 4), (16, 4), 3), ) 2023-01-11T21:41:24.0872887Z 2023-01-11T21:41:24.0872890Z 2023-01-11T21:41:24.0872963Z if __name__ == "__main__": 2023-01-11T21:41:24.0873073Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0873224Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0873419Z arg0_1 = rand_strided((4, 4, 4), (16, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0873521Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0873525Z 2023-01-11T21:41:24.0873587Z ok (0.031s) 2023-01-11T21:41:24.0874125Z test_unroll_small_reduction_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0874251Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0874515Z [2023-01-11 21:39:03,299] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 471 2023-01-11T21:41:24.0874781Z [2023-01-11 21:39:04,873] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 471 2023-01-11T21:41:24.0875209Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0875334Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0875588Z [2023-01-11 21:39:04,913] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 472 2023-01-11T21:41:24.0875594Z 2023-01-11T21:41:24.0875684Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0875741Z import torch 2023-01-11T21:41:24.0875809Z import random 2023-01-11T21:41:24.0875923Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0876045Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0876050Z 2023-01-11T21:41:24.0876125Z aten = torch.ops.aten 2023-01-11T21:41:24.0876256Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0876346Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0876352Z 2023-01-11T21:41:24.0876356Z 2023-01-11T21:41:24.0876486Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0876678Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0876794Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0876890Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0876985Z long* __restrict__ out_ptr1, 2023-01-11T21:41:24.0877078Z float* __restrict__ out_ptr2, 2023-01-11T21:41:24.0877167Z long* __restrict__ out_ptr3, 2023-01-11T21:41:24.0877293Z float* __restrict__ out_ptr4, 2023-01-11T21:41:24.0877449Z bool* __restrict__ out_ptr5, 2023-01-11T21:41:24.0877538Z bool* __restrict__ out_ptr6, 2023-01-11T21:41:24.0877628Z long* __restrict__ out_ptr7, 2023-01-11T21:41:24.0877714Z long* __restrict__ out_ptr8, 2023-01-11T21:41:24.0877807Z float* __restrict__ out_ptr9, 2023-01-11T21:41:24.0878176Z float* __restrict__ out_ptr10) 2023-01-11T21:41:24.0878237Z { 2023-01-11T21:41:24.0878333Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0878380Z { 2023-01-11T21:41:24.0878452Z #pragma omp for 2023-01-11T21:41:24.0878529Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0878588Z { 2023-01-11T21:41:24.0878648Z { 2023-01-11T21:41:24.0878709Z { 2023-01-11T21:41:24.0878791Z auto tmp0 = in_ptr0[3*i0]; 2023-01-11T21:41:24.0878916Z auto tmp1 = in_ptr0[1 + (3*i0)]; 2023-01-11T21:41:24.0879010Z auto tmp3 = in_ptr0[2 + (3*i0)]; 2023-01-11T21:41:24.0879140Z auto tmp2 = (tmp1 != tmp1) ? tmp1 : std::min(tmp0, tmp1); 2023-01-11T21:41:24.0879261Z auto tmp4 = (tmp3 != tmp3) ? tmp3 : std::min(tmp2, tmp3); 2023-01-11T21:41:24.0879519Z auto tmp5 = static_cast(0); 2023-01-11T21:41:24.0879617Z auto tmp6 = static_cast(1); 2023-01-11T21:41:24.0879708Z auto tmp7 = tmp1 < tmp0; 2023-01-11T21:41:24.0880004Z auto tmp8 = tmp7 ? tmp6 : tmp5; 2023-01-11T21:41:24.0880126Z auto tmp9 = static_cast(2); 2023-01-11T21:41:24.0880216Z auto tmp10 = tmp3 < tmp2; 2023-01-11T21:41:24.0880309Z auto tmp11 = tmp10 ? tmp9 : tmp8; 2023-01-11T21:41:24.0880438Z auto tmp12 = (tmp1 != tmp1) ? tmp1 : std::max(tmp0, tmp1); 2023-01-11T21:41:24.0880567Z auto tmp13 = (tmp3 != tmp3) ? tmp3 : std::max(tmp12, tmp3); 2023-01-11T21:41:24.0880654Z auto tmp14 = tmp1 > tmp0; 2023-01-11T21:41:24.0880747Z auto tmp15 = tmp14 ? tmp6 : tmp5; 2023-01-11T21:41:24.0880826Z auto tmp16 = tmp3 > tmp12; 2023-01-11T21:41:24.0880923Z auto tmp17 = tmp16 ? 
tmp9 : tmp15; 2023-01-11T21:41:24.0881011Z auto tmp18 = tmp0 + tmp1; 2023-01-11T21:41:24.0881097Z auto tmp19 = tmp18 + tmp3; 2023-01-11T21:41:24.0881198Z auto tmp20 = static_cast(1); 2023-01-11T21:41:24.0881285Z auto tmp21 = tmp0 > tmp20; 2023-01-11T21:41:24.0881556Z auto tmp22 = static_cast(tmp21); 2023-01-11T21:41:24.0881651Z auto tmp23 = static_cast(tmp22); 2023-01-11T21:41:24.0881740Z auto tmp24 = tmp1 > tmp20; 2023-01-11T21:41:24.0881852Z auto tmp25 = static_cast(tmp24); 2023-01-11T21:41:24.0881954Z auto tmp26 = static_cast(tmp25); 2023-01-11T21:41:24.0882044Z auto tmp27 = tmp23 || tmp26; 2023-01-11T21:41:24.0882129Z auto tmp28 = tmp3 > tmp20; 2023-01-11T21:41:24.0882232Z auto tmp29 = static_cast(tmp28); 2023-01-11T21:41:24.0882322Z auto tmp30 = static_cast(tmp29); 2023-01-11T21:41:24.0882452Z auto tmp31 = tmp27 || tmp30; 2023-01-11T21:41:24.0882600Z auto tmp32 = static_cast(0); 2023-01-11T21:41:24.0882687Z auto tmp33 = tmp0 > tmp32; 2023-01-11T21:41:24.0882774Z auto tmp34 = tmp33 == 0; 2023-01-11T21:41:24.0882874Z auto tmp35 = static_cast(tmp34); 2023-01-11T21:41:24.0882976Z auto tmp36 = static_cast(tmp35); 2023-01-11T21:41:24.0883065Z auto tmp37 = tmp1 > tmp32; 2023-01-11T21:41:24.0883187Z auto tmp38 = tmp37 == 0; 2023-01-11T21:41:24.0883287Z auto tmp39 = static_cast(tmp38); 2023-01-11T21:41:24.0883386Z auto tmp40 = static_cast(tmp39); 2023-01-11T21:41:24.0883476Z auto tmp41 = tmp36 || tmp40; 2023-01-11T21:41:24.0883561Z auto tmp42 = tmp3 > tmp32; 2023-01-11T21:41:24.0883650Z auto tmp43 = tmp42 == 0; 2023-01-11T21:41:24.0883750Z auto tmp44 = static_cast(tmp43); 2023-01-11T21:41:24.0883839Z auto tmp45 = static_cast(tmp44); 2023-01-11T21:41:24.0883927Z auto tmp46 = tmp41 || tmp45; 2023-01-11T21:41:24.0884011Z auto tmp47 = tmp46 == 0; 2023-01-11T21:41:24.0884093Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.0884173Z out_ptr1[i0] = tmp11; 2023-01-11T21:41:24.0884251Z out_ptr2[i0] = tmp13; 2023-01-11T21:41:24.0884362Z out_ptr3[i0] = tmp17; 2023-01-11T21:41:24.0884432Z out_ptr4[i0] = tmp19; 2023-01-11T21:41:24.0884511Z out_ptr5[i0] = tmp31; 2023-01-11T21:41:24.0884588Z out_ptr6[i0] = tmp47; 2023-01-11T21:41:24.0884666Z out_ptr7[i0] = tmp11; 2023-01-11T21:41:24.0884747Z out_ptr8[i0] = tmp17; 2023-01-11T21:41:24.0884823Z out_ptr9[i0] = tmp4; 2023-01-11T21:41:24.0884904Z out_ptr10[i0] = tmp13; 2023-01-11T21:41:24.0884955Z } 2023-01-11T21:41:24.0885016Z } 2023-01-11T21:41:24.0885076Z } 2023-01-11T21:41:24.0885137Z } 2023-01-11T21:41:24.0885198Z } 2023-01-11T21:41:24.0885290Z ''') 2023-01-11T21:41:24.0885296Z 2023-01-11T21:41:24.0885300Z 2023-01-11T21:41:24.0885389Z async_compile.wait(globals()) 2023-01-11T21:41:24.0885450Z del async_compile 2023-01-11T21:41:24.0885454Z 2023-01-11T21:41:24.0885527Z def call(args): 2023-01-11T21:41:24.0885591Z arg0_1, = args 2023-01-11T21:41:24.0885658Z args.clear() 2023-01-11T21:41:24.0885854Z buf0 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0886043Z buf1 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0886231Z buf2 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0886406Z buf3 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0886592Z buf4 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0886776Z buf5 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:24.0886955Z buf6 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:24.0887134Z 
buf7 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0887314Z buf8 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0887504Z buf9 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0887693Z buf10 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0888066Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(buf5.data_ptr()), c_void_p(buf6.data_ptr()), c_void_p(buf7.data_ptr()), c_void_p(buf8.data_ptr()), c_void_p(buf9.data_ptr()), c_void_p(buf10.data_ptr())) 2023-01-11T21:41:24.0888122Z del arg0_1 2023-01-11T21:41:24.0888249Z return (buf0, buf1, buf2, buf3, buf4, buf5, buf6, buf7, buf8, buf9, buf10, ) 2023-01-11T21:41:24.0888254Z 2023-01-11T21:41:24.0888259Z 2023-01-11T21:41:24.0888335Z if __name__ == "__main__": 2023-01-11T21:41:24.0888448Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0888569Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0888796Z arg0_1 = rand_strided((8, 3), (3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0888903Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0888908Z 2023-01-11T21:41:24.0889174Z [2023-01-11 21:39:06,464] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 472 2023-01-11T21:41:24.0889180Z 2023-01-11T21:41:24.0889272Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0889329Z import torch 2023-01-11T21:41:24.0889398Z import random 2023-01-11T21:41:24.0889510Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0889628Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0889632Z 2023-01-11T21:41:24.0889709Z aten = torch.ops.aten 2023-01-11T21:41:24.0889842Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0889932Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0889937Z 2023-01-11T21:41:24.0889941Z 2023-01-11T21:41:24.0890137Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0890329Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0890439Z extern "C" void kernel(bool* __restrict__ in_out_ptr0, 2023-01-11T21:41:24.0890541Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0890638Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0890731Z long* __restrict__ out_ptr1, 2023-01-11T21:41:24.0890827Z float* __restrict__ out_ptr2, 2023-01-11T21:41:24.0890917Z long* __restrict__ out_ptr3, 2023-01-11T21:41:24.0890998Z float* __restrict__ out_ptr4, 2023-01-11T21:41:24.0891088Z bool* __restrict__ out_ptr5, 2023-01-11T21:41:24.0891175Z long* __restrict__ out_ptr7, 2023-01-11T21:41:24.0891261Z long* __restrict__ out_ptr8, 2023-01-11T21:41:24.0891359Z float* __restrict__ out_ptr9, 2023-01-11T21:41:24.0891459Z float* __restrict__ out_ptr10) 2023-01-11T21:41:24.0891516Z { 2023-01-11T21:41:24.0891588Z auto out_ptr6 = in_out_ptr0; 2023-01-11T21:41:24.0891683Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0891744Z { 2023-01-11T21:41:24.0891816Z #pragma omp for 2023-01-11T21:41:24.0891895Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0891955Z { 2023-01-11T21:41:24.0892019Z { 2023-01-11T21:41:24.0892069Z { 2023-01-11T21:41:24.0892194Z float tmp1 = std::numeric_limits::infinity(); 2023-01-11T21:41:24.0892316Z struct 
IndexValue_13 {size_t index; float value;}; 2023-01-11T21:41:24.0892453Z IndexValue_13 tmp2{0, std::numeric_limits::infinity()}; 2023-01-11T21:41:24.0892588Z #pragma omp declare reduction(argmin : struct IndexValue_13 :\ 2023-01-11T21:41:24.0892742Z omp_out.value = omp_in.value > omp_out.value ? omp_out.value : omp_in.value,\ 2023-01-11T21:41:24.0892885Z omp_out.index = omp_in.value > omp_out.value ? omp_out.index : omp_in.index)\ 2023-01-11T21:41:24.0893027Z initializer(omp_priv = {0, std::numeric_limits::infinity()}) 2023-01-11T21:41:24.0893261Z float tmp3 = -std::numeric_limits::infinity(); 2023-01-11T21:41:24.0893371Z struct IndexValue_14 {size_t index; float value;}; 2023-01-11T21:41:24.0893605Z IndexValue_14 tmp4{0, -std::numeric_limits::infinity()}; 2023-01-11T21:41:24.0893738Z #pragma omp declare reduction(argmax : struct IndexValue_14 :\ 2023-01-11T21:41:24.0893885Z omp_out.value = omp_in.value < omp_out.value ? omp_out.value : omp_in.value,\ 2023-01-11T21:41:24.0894031Z omp_out.index = omp_in.value < omp_out.value ? omp_out.index : omp_in.index)\ 2023-01-11T21:41:24.0894294Z initializer(omp_priv = {0, -std::numeric_limits::infinity()}) 2023-01-11T21:41:24.0894376Z float tmp5 = 0; 2023-01-11T21:41:24.0894453Z bool tmp10 = 0; 2023-01-11T21:41:24.0894519Z bool tmp16 = 0; 2023-01-11T21:41:24.0894638Z struct IndexValue_15 {size_t index; float value;}; 2023-01-11T21:41:24.0894774Z IndexValue_15 tmp17{0, std::numeric_limits::infinity()}; 2023-01-11T21:41:24.0894910Z #pragma omp declare reduction(argmin : struct IndexValue_15 :\ 2023-01-11T21:41:24.0895055Z omp_out.value = omp_in.value > omp_out.value ? omp_out.value : omp_in.value,\ 2023-01-11T21:41:24.0895200Z omp_out.index = omp_in.value > omp_out.value ? omp_out.index : omp_in.index)\ 2023-01-11T21:41:24.0895342Z initializer(omp_priv = {0, std::numeric_limits::infinity()}) 2023-01-11T21:41:24.0895494Z struct IndexValue_16 {size_t index; float value;}; 2023-01-11T21:41:24.0895727Z IndexValue_16 tmp18{0, -std::numeric_limits::infinity()}; 2023-01-11T21:41:24.0895853Z #pragma omp declare reduction(argmax : struct IndexValue_16 :\ 2023-01-11T21:41:24.0896000Z omp_out.value = omp_in.value < omp_out.value ? omp_out.value : omp_in.value,\ 2023-01-11T21:41:24.0896138Z omp_out.index = omp_in.value < omp_out.value ? 
omp_out.index : omp_in.index)\ 2023-01-11T21:41:24.0896375Z initializer(omp_priv = {0, -std::numeric_limits::infinity()}) 2023-01-11T21:41:24.0896503Z float tmp19 = std::numeric_limits::infinity(); 2023-01-11T21:41:24.0896713Z float tmp20 = -std::numeric_limits::infinity(); 2023-01-11T21:41:24.0896800Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:24.0896869Z { 2023-01-11T21:41:24.0896924Z { 2023-01-11T21:41:24.0897028Z auto tmp0 = in_ptr0[i1 + (3*i0)]; 2023-01-11T21:41:24.0897130Z auto tmp6 = static_cast(1); 2023-01-11T21:41:24.0897223Z auto tmp7 = tmp0 > tmp6; 2023-01-11T21:41:24.0897331Z auto tmp8 = static_cast(tmp7); 2023-01-11T21:41:24.0897437Z auto tmp9 = static_cast(tmp8); 2023-01-11T21:41:24.0897544Z auto tmp11 = static_cast(0); 2023-01-11T21:41:24.0897629Z auto tmp12 = tmp0 > tmp11; 2023-01-11T21:41:24.0897720Z auto tmp13 = tmp12 == 0; 2023-01-11T21:41:24.0897824Z auto tmp14 = static_cast(tmp13); 2023-01-11T21:41:24.0897924Z auto tmp15 = static_cast(tmp14); 2023-01-11T21:41:24.0898030Z tmp1 = std::min(tmp1, tmp0); 2023-01-11T21:41:24.0898122Z if (tmp2.value > tmp0) { 2023-01-11T21:41:24.0898229Z tmp2.index = i1; tmp2.value = tmp0; 2023-01-11T21:41:24.0898295Z } 2023-01-11T21:41:24.0898383Z tmp3 = std::max(tmp3, tmp0); 2023-01-11T21:41:24.0898477Z if (tmp4.value < tmp0) { 2023-01-11T21:41:24.0898584Z tmp4.index = i1; tmp4.value = tmp0; 2023-01-11T21:41:24.0898653Z } 2023-01-11T21:41:24.0898732Z tmp5 += tmp0; 2023-01-11T21:41:24.0898821Z tmp10 = tmp10 || tmp9; 2023-01-11T21:41:24.0898907Z tmp16 = tmp16 || tmp15; 2023-01-11T21:41:24.0898988Z if (tmp17.value > tmp0) { 2023-01-11T21:41:24.0899093Z tmp17.index = i1; tmp17.value = tmp0; 2023-01-11T21:41:24.0899210Z } 2023-01-11T21:41:24.0899304Z if (tmp18.value < tmp0) { 2023-01-11T21:41:24.0899411Z tmp18.index = i1; tmp18.value = tmp0; 2023-01-11T21:41:24.0899478Z } 2023-01-11T21:41:24.0899580Z tmp19 = std::min(tmp19, tmp0); 2023-01-11T21:41:24.0899683Z tmp20 = std::max(tmp20, tmp0); 2023-01-11T21:41:24.0899738Z } 2023-01-11T21:41:24.0899800Z } 2023-01-11T21:41:24.0899881Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:24.0899976Z out_ptr1[i0] = tmp2.index; 2023-01-11T21:41:24.0900054Z out_ptr2[i0] = tmp3; 2023-01-11T21:41:24.0900143Z out_ptr3[i0] = tmp4.index; 2023-01-11T21:41:24.0900221Z out_ptr4[i0] = tmp5; 2023-01-11T21:41:24.0900292Z out_ptr5[i0] = tmp10; 2023-01-11T21:41:24.0900400Z out_ptr6[i0] = tmp16; 2023-01-11T21:41:24.0900497Z out_ptr7[i0] = tmp17.index; 2023-01-11T21:41:24.0900587Z out_ptr8[i0] = tmp18.index; 2023-01-11T21:41:24.0900668Z out_ptr9[i0] = tmp19; 2023-01-11T21:41:24.0900751Z out_ptr10[i0] = tmp20; 2023-01-11T21:41:24.0900815Z } 2023-01-11T21:41:24.0900864Z } 2023-01-11T21:41:24.0900922Z } 2023-01-11T21:41:24.0900993Z #pragma omp for 2023-01-11T21:41:24.0901074Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0901134Z { 2023-01-11T21:41:24.0901194Z { 2023-01-11T21:41:24.0901245Z { 2023-01-11T21:41:24.0901332Z auto tmp0 = out_ptr6[i0]; 2023-01-11T21:41:24.0901416Z auto tmp1 = tmp0 == 0; 2023-01-11T21:41:24.0901502Z in_out_ptr0[i0] = tmp1; 2023-01-11T21:41:24.0901563Z } 2023-01-11T21:41:24.0901627Z } 2023-01-11T21:41:24.0901687Z } 2023-01-11T21:41:24.0901736Z } 2023-01-11T21:41:24.0901798Z } 2023-01-11T21:41:24.0901874Z ''') 2023-01-11T21:41:24.0901879Z 2023-01-11T21:41:24.0901884Z 2023-01-11T21:41:24.0901971Z async_compile.wait(globals()) 2023-01-11T21:41:24.0902038Z del async_compile 2023-01-11T21:41:24.0902043Z 2023-01-11T21:41:24.0902110Z def call(args): 2023-01-11T21:41:24.0902176Z arg0_1, = args 
2023-01-11T21:41:24.0902232Z args.clear() 2023-01-11T21:41:24.0902557Z buf0 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0902743Z buf1 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0902932Z buf2 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0903114Z buf3 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0903301Z buf4 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0903488Z buf5 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:24.0903665Z buf6 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:24.0903835Z buf8 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0904014Z buf9 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.0904204Z buf10 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0904391Z buf11 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0904474Z buf7 = buf6; del buf6 # reuse 2023-01-11T21:41:24.0904848Z kernel_cpp_0(c_void_p(buf7.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(buf5.data_ptr()), c_void_p(buf8.data_ptr()), c_void_p(buf9.data_ptr()), c_void_p(buf10.data_ptr()), c_void_p(buf11.data_ptr())) 2023-01-11T21:41:24.0904969Z del arg0_1 2023-01-11T21:41:24.0905099Z return (buf0, buf1, buf2, buf3, buf4, buf5, buf7, buf8, buf9, buf10, buf11, ) 2023-01-11T21:41:24.0905105Z 2023-01-11T21:41:24.0905109Z 2023-01-11T21:41:24.0905184Z if __name__ == "__main__": 2023-01-11T21:41:24.0905285Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0905402Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0905596Z arg0_1 = rand_strided((8, 3), (3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0905701Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0905706Z 2023-01-11T21:41:24.0905771Z ok (3.207s) 2023-01-11T21:41:24.0905908Z test_unspec_inputs_cpu (__main__.CpuTests) ... skip: requires cuda (0.001s) 2023-01-11T21:41:24.0906414Z test_unsqueeze_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0906541Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0906804Z [2023-01-11 21:39:06,499] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 473 2023-01-11T21:41:24.0907060Z [2023-01-11 21:39:08,020] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 473 2023-01-11T21:41:24.0907078Z 2023-01-11T21:41:24.0907158Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0907224Z import torch 2023-01-11T21:41:24.0907291Z import random 2023-01-11T21:41:24.0907403Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0907523Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0907528Z 2023-01-11T21:41:24.0907600Z aten = torch.ops.aten 2023-01-11T21:41:24.0907735Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0907813Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0907817Z 2023-01-11T21:41:24.0907832Z 2023-01-11T21:41:24.0907952Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0908154Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0908273Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0908369Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0908461Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.0908550Z float* __restrict__ out_ptr2, 2023-01-11T21:41:24.0908647Z float* __restrict__ out_ptr3) 2023-01-11T21:41:24.0908694Z { 2023-01-11T21:41:24.0908784Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0908841Z { 2023-01-11T21:41:24.0908913Z #pragma omp for 2023-01-11T21:41:24.0908994Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0909053Z { 2023-01-11T21:41:24.0909189Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0909309Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0909391Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0909517Z auto tmp3 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:24.0909598Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0909676Z auto tmp5 = tmp0 + tmp3; 2023-01-11T21:41:24.0909766Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0909851Z tmp5.store(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0909926Z tmp4.store(out_ptr2 + 8*i0); 2023-01-11T21:41:24.0910010Z tmp5.store(out_ptr3 + 8*i0); 2023-01-11T21:41:24.0910069Z } 2023-01-11T21:41:24.0910157Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0910238Z for(long i0=16; i0<16; i0+=1) 2023-01-11T21:41:24.0910326Z { 2023-01-11T21:41:24.0910407Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0910493Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.0910574Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0910670Z auto tmp3 = static_cast(2); 2023-01-11T21:41:24.0910751Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0910828Z auto tmp5 = tmp0 + tmp3; 2023-01-11T21:41:24.0910905Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.0910980Z out_ptr1[i0] = tmp5; 2023-01-11T21:41:24.0911044Z out_ptr2[i0] = tmp4; 2023-01-11T21:41:24.0911117Z out_ptr3[i0] = tmp5; 2023-01-11T21:41:24.0911173Z } 2023-01-11T21:41:24.0911234Z } 2023-01-11T21:41:24.0911287Z } 2023-01-11T21:41:24.0911361Z ''') 2023-01-11T21:41:24.0911367Z 2023-01-11T21:41:24.0911371Z 2023-01-11T21:41:24.0911456Z async_compile.wait(globals()) 2023-01-11T21:41:24.0911518Z del async_compile 2023-01-11T21:41:24.0911549Z 
2023-01-11T21:41:24.0911620Z def call(args): 2023-01-11T21:41:24.0911684Z arg0_1, = args 2023-01-11T21:41:24.0911750Z args.clear() 2023-01-11T21:41:24.0911968Z buf0 = empty_strided((2, 2, 2, 2, 1), (8, 4, 2, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0912175Z buf1 = empty_strided((2, 2, 1, 2, 2), (8, 4, 4, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0912381Z buf2 = empty_strided((1, 2, 2, 2, 2), (16, 8, 4, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0912587Z buf3 = empty_strided((2, 2, 2, 1, 2), (8, 4, 2, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0912785Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr())) 2023-01-11T21:41:24.0912851Z del arg0_1 2023-01-11T21:41:24.0912938Z return (buf0, buf1, buf2, buf3, ) 2023-01-11T21:41:24.0912943Z 2023-01-11T21:41:24.0912949Z 2023-01-11T21:41:24.0913023Z if __name__ == "__main__": 2023-01-11T21:41:24.0913133Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0913251Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0913455Z arg0_1 = rand_strided((2, 2, 2, 2), (8, 4, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0913559Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0913564Z 2023-01-11T21:41:24.0913617Z ok (1.555s) 2023-01-11T21:41:24.0914148Z test_unsqueeze_inplace_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0914278Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0914543Z [2023-01-11 21:39:08,054] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 474 2023-01-11T21:41:24.0914807Z [2023-01-11 21:39:09,568] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 474 2023-01-11T21:41:24.0914813Z 2023-01-11T21:41:24.0914903Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0914971Z import torch 2023-01-11T21:41:24.0915038Z import random 2023-01-11T21:41:24.0915151Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0915257Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0915276Z 2023-01-11T21:41:24.0915340Z aten = torch.ops.aten 2023-01-11T21:41:24.0915471Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0915558Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0915563Z 2023-01-11T21:41:24.0915568Z 2023-01-11T21:41:24.0915695Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0915934Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0916050Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0916145Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0916228Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0916285Z { 2023-01-11T21:41:24.0916378Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0916435Z { 2023-01-11T21:41:24.0916505Z #pragma omp for 2023-01-11T21:41:24.0916584Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0916643Z { 2023-01-11T21:41:24.0916764Z auto 
tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.0916892Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.0916973Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0917097Z auto tmp3 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:24.0917211Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0917300Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.0917386Z tmp4.store(out_ptr1 + 8*i0); 2023-01-11T21:41:24.0917445Z } 2023-01-11T21:41:24.0917524Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0917603Z for(long i0=16; i0<16; i0+=1) 2023-01-11T21:41:24.0917661Z { 2023-01-11T21:41:24.0917740Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.0917838Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.0917918Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0918014Z auto tmp3 = static_cast(2); 2023-01-11T21:41:24.0918084Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0918160Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.0918235Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:24.0918291Z } 2023-01-11T21:41:24.0918351Z } 2023-01-11T21:41:24.0918408Z } 2023-01-11T21:41:24.0918475Z ''') 2023-01-11T21:41:24.0918482Z 2023-01-11T21:41:24.0918499Z 2023-01-11T21:41:24.0918576Z async_compile.wait(globals()) 2023-01-11T21:41:24.0918647Z del async_compile 2023-01-11T21:41:24.0918652Z 2023-01-11T21:41:24.0918720Z def call(args): 2023-01-11T21:41:24.0918787Z arg0_1, = args 2023-01-11T21:41:24.0918855Z args.clear() 2023-01-11T21:41:24.0919071Z buf0 = empty_strided((2, 2, 1, 2, 2), (8, 4, 4, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0919283Z buf1 = empty_strided((1, 2, 2, 2, 2), (16, 8, 4, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0919434Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0919500Z del arg0_1 2023-01-11T21:41:24.0919577Z return (buf0, buf1, ) 2023-01-11T21:41:24.0919582Z 2023-01-11T21:41:24.0919587Z 2023-01-11T21:41:24.0919662Z if __name__ == "__main__": 2023-01-11T21:41:24.0919773Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0919900Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0920108Z arg0_1 = rand_strided((2, 2, 2, 2), (8, 4, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0920214Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0920219Z 2023-01-11T21:41:24.0920272Z ok (1.547s) 2023-01-11T21:41:24.0920743Z test_upsample_bicubic2d_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0920869Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0921126Z [2023-01-11 21:39:11,190] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 475 2023-01-11T21:41:24.0921159Z 2023-01-11T21:41:24.0921253Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0921321Z import torch 2023-01-11T21:41:24.0921391Z import random 2023-01-11T21:41:24.0921507Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0921624Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0921628Z 2023-01-11T21:41:24.0921693Z aten = torch.ops.aten 2023-01-11T21:41:24.0921820Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0921911Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0921916Z 2023-01-11T21:41:24.0921920Z 2023-01-11T21:41:24.0922052Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0922253Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0922368Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0922460Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.0922586Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0922637Z { 2023-01-11T21:41:24.0922737Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0922797Z { 2023-01-11T21:41:24.0922874Z #pragma omp for 2023-01-11T21:41:24.0922954Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:24.0923014Z { 2023-01-11T21:41:24.0923090Z #pragma GCC ivdep 2023-01-11T21:41:24.0923164Z for(long i1=0; i1<128; i1+=1) 2023-01-11T21:41:24.0923226Z { 2023-01-11T21:41:24.0923308Z #pragma GCC ivdep 2023-01-11T21:41:24.0923396Z for(long i2=0; i2<128; i2+=1) 2023-01-11T21:41:24.0923458Z { 2023-01-11T21:41:24.0923519Z { 2023-01-11T21:41:24.0923574Z { 2023-01-11T21:41:24.0923681Z auto tmp0 = static_cast(i2); 2023-01-11T21:41:24.0923778Z auto tmp1 = 0.2440944881889764 * tmp0; 2023-01-11T21:41:24.0923884Z auto tmp2 = std::floor(tmp1); 2023-01-11T21:41:24.0924029Z auto tmp3 = tmp1 - tmp2; 2023-01-11T21:41:24.0924134Z auto tmp4 = static_cast(i1); 2023-01-11T21:41:24.0924233Z auto tmp5 = 0.49606299212598426 * tmp4; 2023-01-11T21:41:24.0924334Z auto tmp6 = std::floor(tmp5); 2023-01-11T21:41:24.0924465Z auto tmp7 = tmp5 - tmp6; 2023-01-11T21:41:24.0924565Z auto tmp8 = static_cast(tmp6); 2023-01-11T21:41:24.0924668Z auto tmp9 = static_cast(tmp2); 2023-01-11T21:41:24.0924812Z auto tmp10 = tmp8 + -1; 2023-01-11T21:41:24.0924902Z auto tmp11 = tmp8 + 0; 2023-01-11T21:41:24.0924987Z auto tmp12 = tmp8 + 1; 2023-01-11T21:41:24.0925069Z auto tmp13 = tmp8 + 2; 2023-01-11T21:41:24.0925211Z auto tmp14 = tmp9 + -1; 2023-01-11T21:41:24.0925285Z auto tmp15 = tmp9 + 0; 2023-01-11T21:41:24.0925369Z auto tmp16 = tmp9 + 1; 2023-01-11T21:41:24.0925453Z auto tmp17 = tmp9 + 2; 2023-01-11T21:41:24.0925582Z auto tmp18 = (tmp10 != tmp10) ? tmp10 : std::min(63, tmp10); 2023-01-11T21:41:24.0925705Z auto tmp19 = (tmp18 != tmp18) ? tmp18 : std::max(0, tmp18); 2023-01-11T21:41:24.0925828Z auto tmp20 = (tmp14 != tmp14) ? tmp14 : std::min(31, tmp14); 2023-01-11T21:41:24.0925952Z auto tmp21 = (tmp20 != tmp20) ? tmp20 : std::max(0, tmp20); 2023-01-11T21:41:24.0926067Z auto tmp22 = in_ptr0[tmp21 + (32*tmp19) + (2048*i0)]; 2023-01-11T21:41:24.0926178Z auto tmp23 = (tmp15 != tmp15) ? 
tmp15 : std::min(31, tmp15); 2023-01-11T21:41:24.0926333Z auto tmp24 = (tmp23 != tmp23) ? tmp23 : std::max(0, tmp23); 2023-01-11T21:41:24.0926449Z auto tmp25 = in_ptr0[tmp24 + (32*tmp19) + (2048*i0)]; 2023-01-11T21:41:24.0926570Z auto tmp26 = (tmp16 != tmp16) ? tmp16 : std::min(31, tmp16); 2023-01-11T21:41:24.0926690Z auto tmp27 = (tmp26 != tmp26) ? tmp26 : std::max(0, tmp26); 2023-01-11T21:41:24.0926804Z auto tmp28 = in_ptr0[tmp27 + (32*tmp19) + (2048*i0)]; 2023-01-11T21:41:24.0926926Z auto tmp29 = (tmp17 != tmp17) ? tmp17 : std::min(31, tmp17); 2023-01-11T21:41:24.0927045Z auto tmp30 = (tmp29 != tmp29) ? tmp29 : std::max(0, tmp29); 2023-01-11T21:41:24.0927157Z auto tmp31 = in_ptr0[tmp30 + (32*tmp19) + (2048*i0)]; 2023-01-11T21:41:24.0927238Z auto tmp32 = tmp3 + 1.0; 2023-01-11T21:41:24.0927413Z auto tmp33 = -0.75 * tmp32; 2023-01-11T21:41:24.0927561Z auto tmp34 = tmp33 - -3.75; 2023-01-11T21:41:24.0927653Z auto tmp35 = tmp34 * tmp32; 2023-01-11T21:41:24.0927798Z auto tmp36 = tmp35 + -6.0; 2023-01-11T21:41:24.0927889Z auto tmp37 = tmp36 * tmp32; 2023-01-11T21:41:24.0928031Z auto tmp38 = tmp37 - -3.0; 2023-01-11T21:41:24.0928121Z auto tmp39 = 1.25 * tmp3; 2023-01-11T21:41:24.0928253Z auto tmp40 = tmp39 - 2.25; 2023-01-11T21:41:24.0928346Z auto tmp41 = tmp40 * tmp3; 2023-01-11T21:41:24.0928438Z auto tmp42 = tmp41 * tmp3; 2023-01-11T21:41:24.0928527Z auto tmp43 = tmp42 + 1.0; 2023-01-11T21:41:24.0928669Z auto tmp44 = 1.0 - tmp3; 2023-01-11T21:41:24.0928759Z auto tmp45 = 1.25 * tmp44; 2023-01-11T21:41:24.0928903Z auto tmp46 = tmp45 - 2.25; 2023-01-11T21:41:24.0928987Z auto tmp47 = tmp46 * tmp44; 2023-01-11T21:41:24.0929074Z auto tmp48 = tmp47 * tmp44; 2023-01-11T21:41:24.0929166Z auto tmp49 = tmp48 + 1.0; 2023-01-11T21:41:24.0929255Z auto tmp50 = tmp44 + 1.0; 2023-01-11T21:41:24.0929397Z auto tmp51 = -0.75 * tmp50; 2023-01-11T21:41:24.0929540Z auto tmp52 = tmp51 - -3.75; 2023-01-11T21:41:24.0929627Z auto tmp53 = tmp52 * tmp50; 2023-01-11T21:41:24.0929757Z auto tmp54 = tmp53 + -6.0; 2023-01-11T21:41:24.0929849Z auto tmp55 = tmp54 * tmp50; 2023-01-11T21:41:24.0929991Z auto tmp56 = tmp55 - -3.0; 2023-01-11T21:41:24.0930081Z auto tmp57 = tmp22 * tmp38; 2023-01-11T21:41:24.0930176Z auto tmp58 = tmp25 * tmp43; 2023-01-11T21:41:24.0930266Z auto tmp59 = tmp28 * tmp49; 2023-01-11T21:41:24.0930356Z auto tmp60 = tmp31 * tmp56; 2023-01-11T21:41:24.0930443Z auto tmp61 = tmp59 + tmp60; 2023-01-11T21:41:24.0930521Z auto tmp62 = tmp58 + tmp61; 2023-01-11T21:41:24.0930607Z auto tmp63 = tmp57 + tmp62; 2023-01-11T21:41:24.0930731Z auto tmp64 = (tmp11 != tmp11) ? tmp11 : std::min(63, tmp11); 2023-01-11T21:41:24.0930853Z auto tmp65 = (tmp64 != tmp64) ? tmp64 : std::max(0, tmp64); 2023-01-11T21:41:24.0930968Z auto tmp66 = in_ptr0[tmp21 + (32*tmp65) + (2048*i0)]; 2023-01-11T21:41:24.0931081Z auto tmp67 = in_ptr0[tmp24 + (32*tmp65) + (2048*i0)]; 2023-01-11T21:41:24.0931192Z auto tmp68 = in_ptr0[tmp27 + (32*tmp65) + (2048*i0)]; 2023-01-11T21:41:24.0931334Z auto tmp69 = in_ptr0[tmp30 + (32*tmp65) + (2048*i0)]; 2023-01-11T21:41:24.0931414Z auto tmp70 = tmp66 * tmp38; 2023-01-11T21:41:24.0931504Z auto tmp71 = tmp67 * tmp43; 2023-01-11T21:41:24.0931593Z auto tmp72 = tmp68 * tmp49; 2023-01-11T21:41:24.0931683Z auto tmp73 = tmp69 * tmp56; 2023-01-11T21:41:24.0931770Z auto tmp74 = tmp72 + tmp73; 2023-01-11T21:41:24.0931857Z auto tmp75 = tmp71 + tmp74; 2023-01-11T21:41:24.0931942Z auto tmp76 = tmp70 + tmp75; 2023-01-11T21:41:24.0932064Z auto tmp77 = (tmp12 != tmp12) ? 
tmp12 : std::min(63, tmp12); 2023-01-11T21:41:24.0932176Z auto tmp78 = (tmp77 != tmp77) ? tmp77 : std::max(0, tmp77); 2023-01-11T21:41:24.0932326Z auto tmp79 = in_ptr0[tmp21 + (32*tmp78) + (2048*i0)]; 2023-01-11T21:41:24.0932441Z auto tmp80 = in_ptr0[tmp24 + (32*tmp78) + (2048*i0)]; 2023-01-11T21:41:24.0932553Z auto tmp81 = in_ptr0[tmp27 + (32*tmp78) + (2048*i0)]; 2023-01-11T21:41:24.0932662Z auto tmp82 = in_ptr0[tmp30 + (32*tmp78) + (2048*i0)]; 2023-01-11T21:41:24.0932754Z auto tmp83 = tmp79 * tmp38; 2023-01-11T21:41:24.0932842Z auto tmp84 = tmp80 * tmp43; 2023-01-11T21:41:24.0932934Z auto tmp85 = tmp81 * tmp49; 2023-01-11T21:41:24.0933015Z auto tmp86 = tmp82 * tmp56; 2023-01-11T21:41:24.0933102Z auto tmp87 = tmp85 + tmp86; 2023-01-11T21:41:24.0933193Z auto tmp88 = tmp84 + tmp87; 2023-01-11T21:41:24.0933280Z auto tmp89 = tmp83 + tmp88; 2023-01-11T21:41:24.0933410Z auto tmp90 = (tmp13 != tmp13) ? tmp13 : std::min(63, tmp13); 2023-01-11T21:41:24.0933534Z auto tmp91 = (tmp90 != tmp90) ? tmp90 : std::max(0, tmp90); 2023-01-11T21:41:24.0933647Z auto tmp92 = in_ptr0[tmp21 + (32*tmp91) + (2048*i0)]; 2023-01-11T21:41:24.0933759Z auto tmp93 = in_ptr0[tmp24 + (32*tmp91) + (2048*i0)]; 2023-01-11T21:41:24.0933858Z auto tmp94 = in_ptr0[tmp27 + (32*tmp91) + (2048*i0)]; 2023-01-11T21:41:24.0933967Z auto tmp95 = in_ptr0[tmp30 + (32*tmp91) + (2048*i0)]; 2023-01-11T21:41:24.0934057Z auto tmp96 = tmp92 * tmp38; 2023-01-11T21:41:24.0934147Z auto tmp97 = tmp93 * tmp43; 2023-01-11T21:41:24.0934238Z auto tmp98 = tmp94 * tmp49; 2023-01-11T21:41:24.0934325Z auto tmp99 = tmp95 * tmp56; 2023-01-11T21:41:24.0934425Z auto tmp100 = tmp98 + tmp99; 2023-01-11T21:41:24.0934510Z auto tmp101 = tmp97 + tmp100; 2023-01-11T21:41:24.0934602Z auto tmp102 = tmp96 + tmp101; 2023-01-11T21:41:24.0934698Z auto tmp103 = tmp7 + 1.0; 2023-01-11T21:41:24.0934845Z auto tmp104 = -0.75 * tmp103; 2023-01-11T21:41:24.0934992Z auto tmp105 = tmp104 - -3.75; 2023-01-11T21:41:24.0935088Z auto tmp106 = tmp105 * tmp103; 2023-01-11T21:41:24.0935234Z auto tmp107 = tmp106 + -6.0; 2023-01-11T21:41:24.0935328Z auto tmp108 = tmp107 * tmp103; 2023-01-11T21:41:24.0935465Z auto tmp109 = tmp108 - -3.0; 2023-01-11T21:41:24.0935557Z auto tmp110 = 1.25 * tmp7; 2023-01-11T21:41:24.0935700Z auto tmp111 = tmp110 - 2.25; 2023-01-11T21:41:24.0935843Z auto tmp112 = tmp111 * tmp7; 2023-01-11T21:41:24.0935934Z auto tmp113 = tmp112 * tmp7; 2023-01-11T21:41:24.0936026Z auto tmp114 = tmp113 + 1.0; 2023-01-11T21:41:24.0936170Z auto tmp115 = 1.0 - tmp7; 2023-01-11T21:41:24.0936250Z auto tmp116 = 1.25 * tmp115; 2023-01-11T21:41:24.0936389Z auto tmp117 = tmp116 - 2.25; 2023-01-11T21:41:24.0936484Z auto tmp118 = tmp117 * tmp115; 2023-01-11T21:41:24.0936576Z auto tmp119 = tmp118 * tmp115; 2023-01-11T21:41:24.0936667Z auto tmp120 = tmp119 + 1.0; 2023-01-11T21:41:24.0936755Z auto tmp121 = tmp115 + 1.0; 2023-01-11T21:41:24.0936900Z auto tmp122 = -0.75 * tmp121; 2023-01-11T21:41:24.0937049Z auto tmp123 = tmp122 - -3.75; 2023-01-11T21:41:24.0937172Z auto tmp124 = tmp123 * tmp121; 2023-01-11T21:41:24.0937320Z auto tmp125 = tmp124 + -6.0; 2023-01-11T21:41:24.0937415Z auto tmp126 = tmp125 * tmp121; 2023-01-11T21:41:24.0937562Z auto tmp127 = tmp126 - -3.0; 2023-01-11T21:41:24.0937655Z auto tmp128 = tmp63 * tmp109; 2023-01-11T21:41:24.0937745Z auto tmp129 = tmp76 * tmp114; 2023-01-11T21:41:24.0937837Z auto tmp130 = tmp89 * tmp120; 2023-01-11T21:41:24.0937923Z auto tmp131 = tmp102 * tmp127; 2023-01-11T21:41:24.0938016Z auto tmp132 = tmp130 + tmp131; 2023-01-11T21:41:24.0938111Z auto 
tmp133 = tmp129 + tmp132; 2023-01-11T21:41:24.0938203Z auto tmp134 = tmp128 + tmp133; 2023-01-11T21:41:24.0938307Z out_ptr0[i2 + (128*i1) + (16384*i0)] = tmp134; 2023-01-11T21:41:24.0938377Z } 2023-01-11T21:41:24.0938438Z } 2023-01-11T21:41:24.0938489Z } 2023-01-11T21:41:24.0938551Z } 2023-01-11T21:41:24.0938611Z } 2023-01-11T21:41:24.0938687Z #pragma omp for 2023-01-11T21:41:24.0938767Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:24.0938827Z { 2023-01-11T21:41:24.0938907Z #pragma GCC ivdep 2023-01-11T21:41:24.0938981Z for(long i1=0; i1<128; i1+=1) 2023-01-11T21:41:24.0939040Z { 2023-01-11T21:41:24.0939121Z #pragma GCC ivdep 2023-01-11T21:41:24.0939207Z for(long i2=0; i2<256; i2+=1) 2023-01-11T21:41:24.0939266Z { 2023-01-11T21:41:24.0939327Z { 2023-01-11T21:41:24.0939393Z { 2023-01-11T21:41:24.0939489Z auto tmp0 = static_cast(i2); 2023-01-11T21:41:24.0939582Z auto tmp1 = tmp0 + 0.5; 2023-01-11T21:41:24.0939668Z auto tmp2 = 0.125 * tmp1; 2023-01-11T21:41:24.0939808Z auto tmp3 = tmp2 - 0.5; 2023-01-11T21:41:24.0939909Z auto tmp4 = std::floor(tmp3); 2023-01-11T21:41:24.0940051Z auto tmp5 = tmp3 - tmp4; 2023-01-11T21:41:24.0940151Z auto tmp6 = static_cast(i1); 2023-01-11T21:41:24.0940229Z auto tmp7 = tmp6 + 0.5; 2023-01-11T21:41:24.0940313Z auto tmp8 = 0.5 * tmp7; 2023-01-11T21:41:24.0940447Z auto tmp9 = tmp8 - 0.5; 2023-01-11T21:41:24.0940551Z auto tmp10 = std::floor(tmp9); 2023-01-11T21:41:24.0940698Z auto tmp11 = tmp9 - tmp10; 2023-01-11T21:41:24.0940801Z auto tmp12 = static_cast(tmp10); 2023-01-11T21:41:24.0940906Z auto tmp13 = static_cast(tmp4); 2023-01-11T21:41:24.0941074Z auto tmp14 = tmp12 + -1; 2023-01-11T21:41:24.0941152Z auto tmp15 = tmp12 + 0; 2023-01-11T21:41:24.0941239Z auto tmp16 = tmp12 + 1; 2023-01-11T21:41:24.0941327Z auto tmp17 = tmp12 + 2; 2023-01-11T21:41:24.0941468Z auto tmp18 = tmp13 + -1; 2023-01-11T21:41:24.0941552Z auto tmp19 = tmp13 + 0; 2023-01-11T21:41:24.0941638Z auto tmp20 = tmp13 + 1; 2023-01-11T21:41:24.0941726Z auto tmp21 = tmp13 + 2; 2023-01-11T21:41:24.0941839Z auto tmp22 = (tmp14 != tmp14) ? tmp14 : std::min(63, tmp14); 2023-01-11T21:41:24.0941962Z auto tmp23 = (tmp22 != tmp22) ? tmp22 : std::max(0, tmp22); 2023-01-11T21:41:24.0942109Z auto tmp24 = (tmp18 != tmp18) ? tmp18 : std::min(31, tmp18); 2023-01-11T21:41:24.0942232Z auto tmp25 = (tmp24 != tmp24) ? tmp24 : std::max(0, tmp24); 2023-01-11T21:41:24.0942491Z auto tmp26 = in_ptr0[tmp25 + (32*tmp23) + (2048*i0)]; 2023-01-11T21:41:24.0942644Z auto tmp27 = (tmp19 != tmp19) ? tmp19 : std::min(31, tmp19); 2023-01-11T21:41:24.0942762Z auto tmp28 = (tmp27 != tmp27) ? tmp27 : std::max(0, tmp27); 2023-01-11T21:41:24.0948045Z auto tmp29 = in_ptr0[tmp28 + (32*tmp23) + (2048*i0)]; 2023-01-11T21:41:24.0948198Z auto tmp30 = (tmp20 != tmp20) ? tmp20 : std::min(31, tmp20); 2023-01-11T21:41:24.0948326Z auto tmp31 = (tmp30 != tmp30) ? tmp30 : std::max(0, tmp30); 2023-01-11T21:41:24.0948431Z auto tmp32 = in_ptr0[tmp31 + (32*tmp23) + (2048*i0)]; 2023-01-11T21:41:24.0948560Z auto tmp33 = (tmp21 != tmp21) ? tmp21 : std::min(31, tmp21); 2023-01-11T21:41:24.0948684Z auto tmp34 = (tmp33 != tmp33) ? 
tmp33 : std::max(0, tmp33); 2023-01-11T21:41:24.0948798Z auto tmp35 = in_ptr0[tmp34 + (32*tmp23) + (2048*i0)]; 2023-01-11T21:41:24.0948892Z auto tmp36 = tmp5 + 1.0; 2023-01-11T21:41:24.0949063Z auto tmp37 = -0.75 * tmp36; 2023-01-11T21:41:24.0949213Z auto tmp38 = tmp37 - -3.75; 2023-01-11T21:41:24.0949304Z auto tmp39 = tmp38 * tmp36; 2023-01-11T21:41:24.0949439Z auto tmp40 = tmp39 + -6.0; 2023-01-11T21:41:24.0949531Z auto tmp41 = tmp40 * tmp36; 2023-01-11T21:41:24.0949675Z auto tmp42 = tmp41 - -3.0; 2023-01-11T21:41:24.0949767Z auto tmp43 = 1.25 * tmp5; 2023-01-11T21:41:24.0949911Z auto tmp44 = tmp43 - 2.25; 2023-01-11T21:41:24.0950007Z auto tmp45 = tmp44 * tmp5; 2023-01-11T21:41:24.0950097Z auto tmp46 = tmp45 * tmp5; 2023-01-11T21:41:24.0950179Z auto tmp47 = tmp46 + 1.0; 2023-01-11T21:41:24.0950317Z auto tmp48 = 1.0 - tmp5; 2023-01-11T21:41:24.0950406Z auto tmp49 = 1.25 * tmp48; 2023-01-11T21:41:24.0950548Z auto tmp50 = tmp49 - 2.25; 2023-01-11T21:41:24.0950643Z auto tmp51 = tmp50 * tmp48; 2023-01-11T21:41:24.0950734Z auto tmp52 = tmp51 * tmp48; 2023-01-11T21:41:24.0950829Z auto tmp53 = tmp52 + 1.0; 2023-01-11T21:41:24.0950906Z auto tmp54 = tmp48 + 1.0; 2023-01-11T21:41:24.0951047Z auto tmp55 = -0.75 * tmp54; 2023-01-11T21:41:24.0951191Z auto tmp56 = tmp55 - -3.75; 2023-01-11T21:41:24.0951379Z auto tmp57 = tmp56 * tmp54; 2023-01-11T21:41:24.0951524Z auto tmp58 = tmp57 + -6.0; 2023-01-11T21:41:24.0951615Z auto tmp59 = tmp58 * tmp54; 2023-01-11T21:41:24.0951754Z auto tmp60 = tmp59 - -3.0; 2023-01-11T21:41:24.0951847Z auto tmp61 = tmp26 * tmp42; 2023-01-11T21:41:24.0951929Z auto tmp62 = tmp29 * tmp47; 2023-01-11T21:41:24.0952020Z auto tmp63 = tmp32 * tmp53; 2023-01-11T21:41:24.0952112Z auto tmp64 = tmp35 * tmp60; 2023-01-11T21:41:24.0952205Z auto tmp65 = tmp63 + tmp64; 2023-01-11T21:41:24.0952295Z auto tmp66 = tmp62 + tmp65; 2023-01-11T21:41:24.0952384Z auto tmp67 = tmp61 + tmp66; 2023-01-11T21:41:24.0952510Z auto tmp68 = (tmp15 != tmp15) ? tmp15 : std::min(63, tmp15); 2023-01-11T21:41:24.0952681Z auto tmp69 = (tmp68 != tmp68) ? tmp68 : std::max(0, tmp68); 2023-01-11T21:41:24.0952790Z auto tmp70 = in_ptr0[tmp25 + (32*tmp69) + (2048*i0)]; 2023-01-11T21:41:24.0952905Z auto tmp71 = in_ptr0[tmp28 + (32*tmp69) + (2048*i0)]; 2023-01-11T21:41:24.0953020Z auto tmp72 = in_ptr0[tmp31 + (32*tmp69) + (2048*i0)]; 2023-01-11T21:41:24.0953134Z auto tmp73 = in_ptr0[tmp34 + (32*tmp69) + (2048*i0)]; 2023-01-11T21:41:24.0953226Z auto tmp74 = tmp70 * tmp42; 2023-01-11T21:41:24.0953318Z auto tmp75 = tmp71 * tmp47; 2023-01-11T21:41:24.0953409Z auto tmp76 = tmp72 * tmp53; 2023-01-11T21:41:24.0953500Z auto tmp77 = tmp73 * tmp60; 2023-01-11T21:41:24.0953581Z auto tmp78 = tmp76 + tmp77; 2023-01-11T21:41:24.0953677Z auto tmp79 = tmp75 + tmp78; 2023-01-11T21:41:24.0953846Z auto tmp80 = tmp74 + tmp79; 2023-01-11T21:41:24.0953978Z auto tmp81 = (tmp16 != tmp16) ? tmp16 : std::min(63, tmp16); 2023-01-11T21:41:24.0954104Z auto tmp82 = (tmp81 != tmp81) ? 
tmp81 : std::max(0, tmp81); 2023-01-11T21:41:24.0954220Z auto tmp83 = in_ptr0[tmp25 + (32*tmp82) + (2048*i0)]; 2023-01-11T21:41:24.0954328Z auto tmp84 = in_ptr0[tmp28 + (32*tmp82) + (2048*i0)]; 2023-01-11T21:41:24.0954442Z auto tmp85 = in_ptr0[tmp31 + (32*tmp82) + (2048*i0)]; 2023-01-11T21:41:24.0954543Z auto tmp86 = in_ptr0[tmp34 + (32*tmp82) + (2048*i0)]; 2023-01-11T21:41:24.0954636Z auto tmp87 = tmp83 * tmp42; 2023-01-11T21:41:24.0954728Z auto tmp88 = tmp84 * tmp47; 2023-01-11T21:41:24.0954825Z auto tmp89 = tmp85 * tmp53; 2023-01-11T21:41:24.0954921Z auto tmp90 = tmp86 * tmp60; 2023-01-11T21:41:24.0955007Z auto tmp91 = tmp89 + tmp90; 2023-01-11T21:41:24.0955098Z auto tmp92 = tmp88 + tmp91; 2023-01-11T21:41:24.0955182Z auto tmp93 = tmp87 + tmp92; 2023-01-11T21:41:24.0955303Z auto tmp94 = (tmp17 != tmp17) ? tmp17 : std::min(63, tmp17); 2023-01-11T21:41:24.0955427Z auto tmp95 = (tmp94 != tmp94) ? tmp94 : std::max(0, tmp94); 2023-01-11T21:41:24.0955536Z auto tmp96 = in_ptr0[tmp25 + (32*tmp95) + (2048*i0)]; 2023-01-11T21:41:24.0955649Z auto tmp97 = in_ptr0[tmp28 + (32*tmp95) + (2048*i0)]; 2023-01-11T21:41:24.0955762Z auto tmp98 = in_ptr0[tmp31 + (32*tmp95) + (2048*i0)]; 2023-01-11T21:41:24.0955877Z auto tmp99 = in_ptr0[tmp34 + (32*tmp95) + (2048*i0)]; 2023-01-11T21:41:24.0956009Z auto tmp100 = tmp96 * tmp42; 2023-01-11T21:41:24.0956105Z auto tmp101 = tmp97 * tmp47; 2023-01-11T21:41:24.0956187Z auto tmp102 = tmp98 * tmp53; 2023-01-11T21:41:24.0956281Z auto tmp103 = tmp99 * tmp60; 2023-01-11T21:41:24.0956379Z auto tmp104 = tmp102 + tmp103; 2023-01-11T21:41:24.0956474Z auto tmp105 = tmp101 + tmp104; 2023-01-11T21:41:24.0956568Z auto tmp106 = tmp100 + tmp105; 2023-01-11T21:41:24.0956663Z auto tmp107 = tmp11 + 1.0; 2023-01-11T21:41:24.0956816Z auto tmp108 = -0.75 * tmp107; 2023-01-11T21:41:24.0956955Z auto tmp109 = tmp108 - -3.75; 2023-01-11T21:41:24.0957052Z auto tmp110 = tmp109 * tmp107; 2023-01-11T21:41:24.0957228Z auto tmp111 = tmp110 + -6.0; 2023-01-11T21:41:24.0957323Z auto tmp112 = tmp111 * tmp107; 2023-01-11T21:41:24.0957474Z auto tmp113 = tmp112 - -3.0; 2023-01-11T21:41:24.0957565Z auto tmp114 = 1.25 * tmp11; 2023-01-11T21:41:24.0957711Z auto tmp115 = tmp114 - 2.25; 2023-01-11T21:41:24.0957796Z auto tmp116 = tmp115 * tmp11; 2023-01-11T21:41:24.0957889Z auto tmp117 = tmp116 * tmp11; 2023-01-11T21:41:24.0957981Z auto tmp118 = tmp117 + 1.0; 2023-01-11T21:41:24.0958125Z auto tmp119 = 1.0 - tmp11; 2023-01-11T21:41:24.0958217Z auto tmp120 = 1.25 * tmp119; 2023-01-11T21:41:24.0958365Z auto tmp121 = tmp120 - 2.25; 2023-01-11T21:41:24.0958462Z auto tmp122 = tmp121 * tmp119; 2023-01-11T21:41:24.0958563Z auto tmp123 = tmp122 * tmp119; 2023-01-11T21:41:24.0958643Z auto tmp124 = tmp123 + 1.0; 2023-01-11T21:41:24.0958736Z auto tmp125 = tmp119 + 1.0; 2023-01-11T21:41:24.0958885Z auto tmp126 = -0.75 * tmp125; 2023-01-11T21:41:24.0959032Z auto tmp127 = tmp126 - -3.75; 2023-01-11T21:41:24.0959131Z auto tmp128 = tmp127 * tmp125; 2023-01-11T21:41:24.0959278Z auto tmp129 = tmp128 + -6.0; 2023-01-11T21:41:24.0959372Z auto tmp130 = tmp129 * tmp125; 2023-01-11T21:41:24.0959508Z auto tmp131 = tmp130 - -3.0; 2023-01-11T21:41:24.0959600Z auto tmp132 = tmp67 * tmp113; 2023-01-11T21:41:24.0959693Z auto tmp133 = tmp80 * tmp118; 2023-01-11T21:41:24.0959786Z auto tmp134 = tmp93 * tmp124; 2023-01-11T21:41:24.0959885Z auto tmp135 = tmp106 * tmp131; 2023-01-11T21:41:24.0959980Z auto tmp136 = tmp134 + tmp135; 2023-01-11T21:41:24.0960071Z auto tmp137 = tmp133 + tmp136; 2023-01-11T21:41:24.0960163Z auto tmp138 = tmp132 + 
tmp137; 2023-01-11T21:41:24.0960258Z out_ptr1[i2 + (256*i1) + (32768*i0)] = tmp138; 2023-01-11T21:41:24.0960322Z } 2023-01-11T21:41:24.0960386Z } 2023-01-11T21:41:24.0960447Z } 2023-01-11T21:41:24.0960510Z } 2023-01-11T21:41:24.0960571Z } 2023-01-11T21:41:24.0960625Z } 2023-01-11T21:41:24.0960672Z } 2023-01-11T21:41:24.0960750Z ''') 2023-01-11T21:41:24.0960757Z 2023-01-11T21:41:24.0960762Z 2023-01-11T21:41:24.0960849Z async_compile.wait(globals()) 2023-01-11T21:41:24.0960919Z del async_compile 2023-01-11T21:41:24.0960924Z 2023-01-11T21:41:24.0961022Z def call(args): 2023-01-11T21:41:24.0961092Z arg0_1, = args 2023-01-11T21:41:24.0961161Z args.clear() 2023-01-11T21:41:24.0961382Z buf0 = empty_strided((4, 3, 128, 128), (49152, 16384, 128, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0961600Z buf1 = empty_strided((4, 3, 128, 256), (98304, 32768, 256, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0961763Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.0961829Z del arg0_1 2023-01-11T21:41:24.0961903Z return (buf0, buf1, ) 2023-01-11T21:41:24.0961908Z 2023-01-11T21:41:24.0961913Z 2023-01-11T21:41:24.0961983Z if __name__ == "__main__": 2023-01-11T21:41:24.0962094Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0962219Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0962429Z arg0_1 = rand_strided((4, 3, 64, 32), (6144, 2048, 32, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0962567Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0962834Z [2023-01-11 21:39:12,969] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 475 2023-01-11T21:41:24.0962839Z 2023-01-11T21:41:24.0962905Z ok (3.413s) 2023-01-11T21:41:24.0963383Z test_upsample_bilinear2d_a_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0963508Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0963770Z [2023-01-11 21:39:13,693] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 476 2023-01-11T21:41:24.0963775Z 2023-01-11T21:41:24.0963875Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0963941Z import torch 2023-01-11T21:41:24.0963998Z import random 2023-01-11T21:41:24.0964110Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0964224Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0964230Z 2023-01-11T21:41:24.0964306Z aten = torch.ops.aten 2023-01-11T21:41:24.0964442Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0964532Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0964537Z 2023-01-11T21:41:24.0964541Z 2023-01-11T21:41:24.0964670Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0964871Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0964982Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:24.0965070Z float* __restrict__ in_out_ptr1, 2023-01-11T21:41:24.0965169Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0965270Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.0965362Z float* __restrict__ out_ptr3) 2023-01-11T21:41:24.0965425Z { 2023-01-11T21:41:24.0965508Z auto out_ptr0 = in_out_ptr0; 2023-01-11T21:41:24.0965584Z auto out_ptr2 = in_out_ptr1; 2023-01-11T21:41:24.0965670Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0965731Z { 2023-01-11T21:41:24.0965805Z #pragma omp for 2023-01-11T21:41:24.0965888Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0965947Z { 2023-01-11T21:41:24.0966023Z #pragma GCC ivdep 2023-01-11T21:41:24.0966096Z for(long i1=0; i1<45; i1+=1) 2023-01-11T21:41:24.0966155Z { 2023-01-11T21:41:24.0966231Z #pragma GCC ivdep 2023-01-11T21:41:24.0966320Z for(long i2=0; i2<45; i2+=1) 2023-01-11T21:41:24.0966389Z { 2023-01-11T21:41:24.0966451Z { 2023-01-11T21:41:24.0966519Z { 2023-01-11T21:41:24.0966644Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.0966751Z auto tmp1 = static_cast(0.5); 2023-01-11T21:41:24.0966844Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0966958Z auto tmp3 = static_cast(0.8222222222222222); 2023-01-11T21:41:24.0967054Z auto tmp4 = tmp2 * tmp3; 2023-01-11T21:41:24.0967194Z auto tmp5 = tmp4 - tmp1; 2023-01-11T21:41:24.0967304Z auto tmp6 = static_cast(0.0); 2023-01-11T21:41:24.0967432Z auto tmp7 = (tmp6 != tmp6) ? tmp6 : std::max(tmp5, tmp6); 2023-01-11T21:41:24.0967524Z auto tmp8 = std::floor(tmp7); 2023-01-11T21:41:24.0967630Z auto tmp9 = static_cast(tmp8); 2023-01-11T21:41:24.0967732Z auto tmp10 = static_cast(i2); 2023-01-11T21:41:24.0967864Z auto tmp11 = tmp10 + tmp1; 2023-01-11T21:41:24.0967979Z auto tmp12 = static_cast(0.8444444444444444); 2023-01-11T21:41:24.0968073Z auto tmp13 = tmp11 * tmp12; 2023-01-11T21:41:24.0968219Z auto tmp14 = tmp13 - tmp1; 2023-01-11T21:41:24.0968348Z auto tmp15 = (tmp6 != tmp6) ? 
tmp6 : std::max(tmp14, tmp6); 2023-01-11T21:41:24.0968442Z auto tmp16 = std::floor(tmp15); 2023-01-11T21:41:24.0968550Z auto tmp17 = static_cast(tmp16); 2023-01-11T21:41:24.0968662Z auto tmp18 = in_ptr0[tmp17 + (38*tmp9) + (1406*i0)]; 2023-01-11T21:41:24.0968768Z auto tmp19 = static_cast(1.0); 2023-01-11T21:41:24.0968874Z auto tmp20 = static_cast(tmp9); 2023-01-11T21:41:24.0969019Z auto tmp21 = tmp7 - tmp20; 2023-01-11T21:41:24.0969168Z auto tmp22 = tmp19 - tmp21; 2023-01-11T21:41:24.0969250Z auto tmp23 = tmp18 * tmp22; 2023-01-11T21:41:24.0969351Z auto tmp24 = std::ceil(tmp7); 2023-01-11T21:41:24.0969459Z auto tmp25 = static_cast(36.0); 2023-01-11T21:41:24.0969587Z auto tmp26 = (tmp25 != tmp25) ? tmp25 : std::min(tmp24, tmp25); 2023-01-11T21:41:24.0969696Z auto tmp27 = static_cast(tmp26); 2023-01-11T21:41:24.0969808Z auto tmp28 = in_ptr0[tmp17 + (38*tmp27) + (1406*i0)]; 2023-01-11T21:41:24.0969902Z auto tmp29 = tmp28 * tmp21; 2023-01-11T21:41:24.0969996Z auto tmp30 = tmp23 + tmp29; 2023-01-11T21:41:24.0970089Z out_ptr0[i2 + (45*i1) + (2025*i0)] = tmp30; 2023-01-11T21:41:24.0970159Z } 2023-01-11T21:41:24.0970224Z } 2023-01-11T21:41:24.0970287Z } 2023-01-11T21:41:24.0970346Z } 2023-01-11T21:41:24.0970408Z } 2023-01-11T21:41:24.0970482Z #pragma omp for 2023-01-11T21:41:24.0970550Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0970610Z { 2023-01-11T21:41:24.0970687Z #pragma GCC ivdep 2023-01-11T21:41:24.0970766Z for(long i1=0; i1<45; i1+=1) 2023-01-11T21:41:24.0970830Z { 2023-01-11T21:41:24.0970913Z #pragma GCC ivdep 2023-01-11T21:41:24.0970998Z for(long i2=0; i2<45; i2+=1) 2023-01-11T21:41:24.0971049Z { 2023-01-11T21:41:24.0971109Z { 2023-01-11T21:41:24.0971175Z { 2023-01-11T21:41:24.0971279Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.0971387Z auto tmp1 = static_cast(0.5); 2023-01-11T21:41:24.0971512Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0971624Z auto tmp3 = static_cast(0.8222222222222222); 2023-01-11T21:41:24.0971705Z auto tmp4 = tmp2 * tmp3; 2023-01-11T21:41:24.0971846Z auto tmp5 = tmp4 - tmp1; 2023-01-11T21:41:24.0971949Z auto tmp6 = static_cast(0.0); 2023-01-11T21:41:24.0972074Z auto tmp7 = (tmp6 != tmp6) ? tmp6 : std::max(tmp5, tmp6); 2023-01-11T21:41:24.0972174Z auto tmp8 = std::floor(tmp7); 2023-01-11T21:41:24.0972280Z auto tmp9 = static_cast(tmp8); 2023-01-11T21:41:24.0972385Z auto tmp10 = static_cast(i2); 2023-01-11T21:41:24.0972482Z auto tmp11 = tmp10 + tmp1; 2023-01-11T21:41:24.0972585Z auto tmp12 = static_cast(0.8444444444444444); 2023-01-11T21:41:24.0972707Z auto tmp13 = tmp11 * tmp12; 2023-01-11T21:41:24.0972853Z auto tmp14 = tmp13 - tmp1; 2023-01-11T21:41:24.0972980Z auto tmp15 = (tmp6 != tmp6) ? tmp6 : std::max(tmp14, tmp6); 2023-01-11T21:41:24.0973088Z auto tmp16 = std::ceil(tmp15); 2023-01-11T21:41:24.0973191Z auto tmp17 = static_cast(37.0); 2023-01-11T21:41:24.0973319Z auto tmp18 = (tmp17 != tmp17) ? tmp17 : std::min(tmp16, tmp17); 2023-01-11T21:41:24.0973428Z auto tmp19 = static_cast(tmp18); 2023-01-11T21:41:24.0973530Z auto tmp20 = in_ptr0[tmp19 + (38*tmp9) + (1406*i0)]; 2023-01-11T21:41:24.0973635Z auto tmp21 = static_cast(1.0); 2023-01-11T21:41:24.0973744Z auto tmp22 = static_cast(tmp9); 2023-01-11T21:41:24.0973892Z auto tmp23 = tmp7 - tmp22; 2023-01-11T21:41:24.0974039Z auto tmp24 = tmp21 - tmp23; 2023-01-11T21:41:24.0974130Z auto tmp25 = tmp20 * tmp24; 2023-01-11T21:41:24.0974230Z auto tmp26 = std::ceil(tmp7); 2023-01-11T21:41:24.0974334Z auto tmp27 = static_cast(36.0); 2023-01-11T21:41:24.0974449Z auto tmp28 = (tmp27 != tmp27) ? 
tmp27 : std::min(tmp26, tmp27); 2023-01-11T21:41:24.0974556Z auto tmp29 = static_cast(tmp28); 2023-01-11T21:41:24.0974671Z auto tmp30 = in_ptr0[tmp19 + (38*tmp29) + (1406*i0)]; 2023-01-11T21:41:24.0974766Z auto tmp31 = tmp30 * tmp23; 2023-01-11T21:41:24.0974858Z auto tmp32 = tmp25 + tmp31; 2023-01-11T21:41:24.0974958Z out_ptr1[i2 + (45*i1) + (2025*i0)] = tmp32; 2023-01-11T21:41:24.0975031Z } 2023-01-11T21:41:24.0975085Z } 2023-01-11T21:41:24.0975147Z } 2023-01-11T21:41:24.0975208Z } 2023-01-11T21:41:24.0975267Z } 2023-01-11T21:41:24.0975341Z #pragma omp for 2023-01-11T21:41:24.0975423Z for(long i0=0; i0<360; i0+=1) 2023-01-11T21:41:24.0975481Z { 2023-01-11T21:41:24.0975546Z #pragma GCC ivdep 2023-01-11T21:41:24.0975628Z for(long i1=0; i1<45; i1+=1) 2023-01-11T21:41:24.0975690Z { 2023-01-11T21:41:24.0975749Z { 2023-01-11T21:41:24.0975812Z { 2023-01-11T21:41:24.0975912Z auto tmp0 = out_ptr0[i1 + (45*i0)]; 2023-01-11T21:41:24.0976013Z auto tmp16 = out_ptr1[i1 + (45*i0)]; 2023-01-11T21:41:24.0976106Z auto tmp1 = static_cast(1.0); 2023-01-11T21:41:24.0976209Z auto tmp2 = static_cast(i1); 2023-01-11T21:41:24.0976387Z auto tmp3 = static_cast(0.5); 2023-01-11T21:41:24.0976479Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.0976591Z auto tmp5 = static_cast(0.8444444444444444); 2023-01-11T21:41:24.0976685Z auto tmp6 = tmp4 * tmp5; 2023-01-11T21:41:24.0976823Z auto tmp7 = tmp6 - tmp3; 2023-01-11T21:41:24.0976919Z auto tmp8 = static_cast(0.0); 2023-01-11T21:41:24.0977044Z auto tmp9 = (tmp8 != tmp8) ? tmp8 : std::max(tmp7, tmp8); 2023-01-11T21:41:24.0977145Z auto tmp10 = std::floor(tmp9); 2023-01-11T21:41:24.0977251Z auto tmp11 = static_cast(tmp10); 2023-01-11T21:41:24.0977355Z auto tmp12 = static_cast(tmp11); 2023-01-11T21:41:24.0977497Z auto tmp13 = tmp9 - tmp12; 2023-01-11T21:41:24.0977635Z auto tmp14 = tmp1 - tmp13; 2023-01-11T21:41:24.0977751Z auto tmp15 = tmp0 * tmp14; 2023-01-11T21:41:24.0977833Z auto tmp17 = tmp16 * tmp13; 2023-01-11T21:41:24.0977921Z auto tmp18 = tmp15 + tmp17; 2023-01-11T21:41:24.0978017Z in_out_ptr0[i1 + (45*i0)] = tmp18; 2023-01-11T21:41:24.0978080Z } 2023-01-11T21:41:24.0978142Z } 2023-01-11T21:41:24.0978199Z } 2023-01-11T21:41:24.0978258Z } 2023-01-11T21:41:24.0978322Z #pragma omp for 2023-01-11T21:41:24.0978402Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0978462Z { 2023-01-11T21:41:24.0978538Z #pragma GCC ivdep 2023-01-11T21:41:24.0978622Z for(long i1=0; i1<74; i1+=1) 2023-01-11T21:41:24.0978683Z { 2023-01-11T21:41:24.0978765Z #pragma GCC ivdep 2023-01-11T21:41:24.0978842Z for(long i2=0; i2<76; i2+=1) 2023-01-11T21:41:24.0978900Z { 2023-01-11T21:41:24.0978969Z { 2023-01-11T21:41:24.0979034Z { 2023-01-11T21:41:24.0979137Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.0979250Z auto tmp1 = static_cast(0.4931506849315068); 2023-01-11T21:41:24.0979341Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0979430Z auto tmp3 = std::floor(tmp2); 2023-01-11T21:41:24.0979536Z auto tmp4 = static_cast(tmp3); 2023-01-11T21:41:24.0979640Z auto tmp5 = static_cast(i2); 2023-01-11T21:41:24.0979753Z auto tmp6 = static_cast(0.49333333333333335); 2023-01-11T21:41:24.0979846Z auto tmp7 = tmp5 * tmp6; 2023-01-11T21:41:24.0979946Z auto tmp8 = std::floor(tmp7); 2023-01-11T21:41:24.0980050Z auto tmp9 = static_cast(tmp8); 2023-01-11T21:41:24.0980158Z auto tmp10 = in_ptr0[tmp9 + (38*tmp4) + (1406*i0)]; 2023-01-11T21:41:24.0980263Z auto tmp11 = static_cast(1.0); 2023-01-11T21:41:24.0980372Z auto tmp12 = static_cast(tmp4); 2023-01-11T21:41:24.0980598Z auto tmp13 = tmp2 - tmp12; 
2023-01-11T21:41:24.0980780Z auto tmp14 = tmp11 - tmp13; 2023-01-11T21:41:24.0980871Z auto tmp15 = tmp10 * tmp14; 2023-01-11T21:41:24.0980971Z auto tmp16 = std::ceil(tmp2); 2023-01-11T21:41:24.0981074Z auto tmp17 = static_cast(36.0); 2023-01-11T21:41:24.0981193Z auto tmp18 = (tmp17 != tmp17) ? tmp17 : std::min(tmp16, tmp17); 2023-01-11T21:41:24.0981298Z auto tmp19 = static_cast(tmp18); 2023-01-11T21:41:24.0981416Z auto tmp20 = in_ptr0[tmp9 + (38*tmp19) + (1406*i0)]; 2023-01-11T21:41:24.0981545Z auto tmp21 = tmp20 * tmp13; 2023-01-11T21:41:24.0981638Z auto tmp22 = tmp15 + tmp21; 2023-01-11T21:41:24.0981742Z auto tmp23 = static_cast(tmp9); 2023-01-11T21:41:24.0981889Z auto tmp24 = tmp7 - tmp23; 2023-01-11T21:41:24.0982034Z auto tmp25 = tmp11 - tmp24; 2023-01-11T21:41:24.0982116Z auto tmp26 = tmp22 * tmp25; 2023-01-11T21:41:24.0982217Z out_ptr2[i2 + (76*i1) + (5624*i0)] = tmp26; 2023-01-11T21:41:24.0982283Z } 2023-01-11T21:41:24.0982560Z } 2023-01-11T21:41:24.0982624Z } 2023-01-11T21:41:24.0982686Z } 2023-01-11T21:41:24.0982747Z } 2023-01-11T21:41:24.0982812Z #pragma omp for 2023-01-11T21:41:24.0982891Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.0983013Z { 2023-01-11T21:41:24.0983094Z #pragma GCC ivdep 2023-01-11T21:41:24.0983178Z for(long i1=0; i1<74; i1+=1) 2023-01-11T21:41:24.0983238Z { 2023-01-11T21:41:24.0983324Z #pragma GCC ivdep 2023-01-11T21:41:24.0983454Z for(long i2=0; i2<76; i2+=1) 2023-01-11T21:41:24.0983546Z { 2023-01-11T21:41:24.0983612Z { 2023-01-11T21:41:24.0983676Z { 2023-01-11T21:41:24.0983782Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.0983895Z auto tmp1 = static_cast(0.4931506849315068); 2023-01-11T21:41:24.0983989Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0984081Z auto tmp3 = std::floor(tmp2); 2023-01-11T21:41:24.0984188Z auto tmp4 = static_cast(tmp3); 2023-01-11T21:41:24.0984295Z auto tmp5 = static_cast(i2); 2023-01-11T21:41:24.0984414Z auto tmp6 = static_cast(0.49333333333333335); 2023-01-11T21:41:24.0984507Z auto tmp7 = tmp5 * tmp6; 2023-01-11T21:41:24.0984607Z auto tmp8 = std::ceil(tmp7); 2023-01-11T21:41:24.0984711Z auto tmp9 = static_cast(37.0); 2023-01-11T21:41:24.0984828Z auto tmp10 = (tmp9 != tmp9) ? tmp9 : std::min(tmp8, tmp9); 2023-01-11T21:41:24.0984938Z auto tmp11 = static_cast(tmp10); 2023-01-11T21:41:24.0985053Z auto tmp12 = in_ptr0[tmp11 + (38*tmp4) + (1406*i0)]; 2023-01-11T21:41:24.0985162Z auto tmp13 = static_cast(1.0); 2023-01-11T21:41:24.0985271Z auto tmp14 = static_cast(tmp4); 2023-01-11T21:41:24.0985427Z auto tmp15 = tmp2 - tmp14; 2023-01-11T21:41:24.0985578Z auto tmp16 = tmp13 - tmp15; 2023-01-11T21:41:24.0985671Z auto tmp17 = tmp12 * tmp16; 2023-01-11T21:41:24.0985761Z auto tmp18 = std::ceil(tmp2); 2023-01-11T21:41:24.0985866Z auto tmp19 = static_cast(36.0); 2023-01-11T21:41:24.0985997Z auto tmp20 = (tmp19 != tmp19) ? 
tmp19 : std::min(tmp18, tmp19); 2023-01-11T21:41:24.0986102Z auto tmp21 = static_cast(tmp20); 2023-01-11T21:41:24.0986219Z auto tmp22 = in_ptr0[tmp11 + (38*tmp21) + (1406*i0)]; 2023-01-11T21:41:24.0986313Z auto tmp23 = tmp22 * tmp15; 2023-01-11T21:41:24.0986406Z auto tmp24 = tmp17 + tmp23; 2023-01-11T21:41:24.0986508Z auto tmp25 = std::floor(tmp7); 2023-01-11T21:41:24.0986603Z auto tmp26 = static_cast(tmp25); 2023-01-11T21:41:24.0986757Z auto tmp27 = static_cast(tmp26); 2023-01-11T21:41:24.0986905Z auto tmp28 = tmp7 - tmp27; 2023-01-11T21:41:24.0986998Z auto tmp29 = tmp24 * tmp28; 2023-01-11T21:41:24.0987100Z out_ptr3[i2 + (76*i1) + (5624*i0)] = tmp29; 2023-01-11T21:41:24.0987169Z } 2023-01-11T21:41:24.0987232Z } 2023-01-11T21:41:24.0987284Z } 2023-01-11T21:41:24.0987344Z } 2023-01-11T21:41:24.0987404Z } 2023-01-11T21:41:24.0987478Z #pragma omp for 2023-01-11T21:41:24.0987560Z for(long i0=0; i0<5624; i0+=1) 2023-01-11T21:41:24.0987621Z { 2023-01-11T21:41:24.0987754Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr2 + 8*i0); 2023-01-11T21:41:24.0987872Z auto tmp1 = at::vec::Vectorized::loadu(out_ptr3 + 8*i0); 2023-01-11T21:41:24.0987956Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0988078Z tmp2.store(in_out_ptr1 + 8*i0); 2023-01-11T21:41:24.0988141Z } 2023-01-11T21:41:24.0988233Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.0988318Z for(long i0=44992; i0<44992; i0+=1) 2023-01-11T21:41:24.0988378Z { 2023-01-11T21:41:24.0988450Z auto tmp0 = out_ptr2[i0]; 2023-01-11T21:41:24.0988533Z auto tmp1 = out_ptr3[i0]; 2023-01-11T21:41:24.0988612Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.0988693Z in_out_ptr1[i0] = tmp2; 2023-01-11T21:41:24.0988755Z } 2023-01-11T21:41:24.0988814Z } 2023-01-11T21:41:24.0988871Z } 2023-01-11T21:41:24.0988937Z ''') 2023-01-11T21:41:24.0988944Z 2023-01-11T21:41:24.0988960Z 2023-01-11T21:41:24.0989037Z async_compile.wait(globals()) 2023-01-11T21:41:24.0989105Z del async_compile 2023-01-11T21:41:24.0989110Z 2023-01-11T21:41:24.0989178Z def call(args): 2023-01-11T21:41:24.0989243Z arg0_1, = args 2023-01-11T21:41:24.0989318Z args.clear() 2023-01-11T21:41:24.0989544Z buf0 = empty_strided((2, 4, 45, 45), (8100, 2025, 45, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0989762Z buf1 = empty_strided((2, 4, 45, 45), (8100, 2025, 45, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0989834Z buf2 = buf0; del buf0 # reuse 2023-01-11T21:41:24.0990050Z buf3 = empty_strided((2, 4, 74, 76), (22496, 5624, 76, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0990269Z buf4 = empty_strided((2, 4, 74, 76), (22496, 5624, 76, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0990350Z buf5 = buf3; del buf3 # reuse 2023-01-11T21:41:24.0990561Z kernel_cpp_0(c_void_p(buf2.data_ptr()), c_void_p(buf5.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:24.0990625Z del arg0_1 2023-01-11T21:41:24.0990698Z return (buf2, buf5, ) 2023-01-11T21:41:24.0990703Z 2023-01-11T21:41:24.0990707Z 2023-01-11T21:41:24.0990784Z if __name__ == "__main__": 2023-01-11T21:41:24.0990885Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.0991006Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.0991225Z arg0_1 = rand_strided((2, 4, 37, 38), (5624, 1406, 38, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.0991333Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.0991602Z [2023-01-11 21:39:15,547] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done 
compiling FORWARDS graph 476 2023-01-11T21:41:24.0991607Z 2023-01-11T21:41:24.0991670Z ok (2.568s) 2023-01-11T21:41:24.0992148Z test_upsample_bilinear2d_b_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.0992303Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.0992562Z [2023-01-11 21:39:15,831] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 477 2023-01-11T21:41:24.0992824Z [2023-01-11 21:39:17,456] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 477 2023-01-11T21:41:24.0992829Z 2023-01-11T21:41:24.0992910Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.0992979Z import torch 2023-01-11T21:41:24.0993045Z import random 2023-01-11T21:41:24.0993156Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.0993277Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.0993282Z 2023-01-11T21:41:24.0993357Z aten = torch.ops.aten 2023-01-11T21:41:24.0993487Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.0993566Z async_compile = AsyncCompile() 2023-01-11T21:41:24.0993584Z 2023-01-11T21:41:24.0993616Z 2023-01-11T21:41:24.0993795Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.0993999Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.0994115Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:24.0994216Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.0994314Z float* __restrict__ out_ptr1) 2023-01-11T21:41:24.0994373Z { 2023-01-11T21:41:24.0994459Z auto out_ptr0 = in_out_ptr0; 2023-01-11T21:41:24.0994543Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.0994601Z { 2023-01-11T21:41:24.0994690Z #pragma omp for collapse(2) 2023-01-11T21:41:24.0994769Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0994829Z { 2023-01-11T21:41:24.0994911Z for(long i1=0; i1<80; i1+=1) 2023-01-11T21:41:24.0994975Z { 2023-01-11T21:41:24.0995043Z #pragma GCC ivdep 2023-01-11T21:41:24.0995134Z for(long i2=0; i2<118; i2+=1) 2023-01-11T21:41:24.0995196Z { 2023-01-11T21:41:24.0995259Z { 2023-01-11T21:41:24.0995325Z { 2023-01-11T21:41:24.0995432Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.0995546Z auto tmp1 = static_cast(0.4936708860759494); 2023-01-11T21:41:24.0995631Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0995733Z auto tmp3 = std::floor(tmp2); 2023-01-11T21:41:24.0995838Z auto tmp4 = static_cast(tmp3); 2023-01-11T21:41:24.0995948Z auto tmp5 = static_cast(i2); 2023-01-11T21:41:24.0996062Z auto tmp6 = static_cast(0.49572649572649574); 2023-01-11T21:41:24.0996154Z auto tmp7 = tmp5 * tmp6; 2023-01-11T21:41:24.0996256Z auto tmp8 = std::floor(tmp7); 2023-01-11T21:41:24.0996353Z auto tmp9 = static_cast(tmp8); 2023-01-11T21:41:24.0996466Z auto tmp10 = in_ptr0[tmp9 + (59*tmp4) + (2360*i0)]; 2023-01-11T21:41:24.0996568Z auto tmp11 = static_cast(1.0); 2023-01-11T21:41:24.0996673Z auto tmp12 = static_cast(tmp4); 2023-01-11T21:41:24.0996819Z auto tmp13 = tmp2 - tmp12; 2023-01-11T21:41:24.0996961Z auto tmp14 = tmp11 - tmp13; 2023-01-11T21:41:24.0997056Z auto tmp15 = tmp10 * tmp14; 
2023-01-11T21:41:24.0997155Z auto tmp16 = std::ceil(tmp2); 2023-01-11T21:41:24.0997248Z auto tmp17 = static_cast(39.0); 2023-01-11T21:41:24.0997379Z auto tmp18 = (tmp17 != tmp17) ? tmp17 : std::min(tmp16, tmp17); 2023-01-11T21:41:24.0997518Z auto tmp19 = static_cast(tmp18); 2023-01-11T21:41:24.0997634Z auto tmp20 = in_ptr0[tmp9 + (59*tmp19) + (2360*i0)]; 2023-01-11T21:41:24.0997727Z auto tmp21 = tmp20 * tmp13; 2023-01-11T21:41:24.0997820Z auto tmp22 = tmp15 + tmp21; 2023-01-11T21:41:24.0997928Z auto tmp23 = static_cast(tmp9); 2023-01-11T21:41:24.0998072Z auto tmp24 = tmp7 - tmp23; 2023-01-11T21:41:24.0998206Z auto tmp25 = tmp11 - tmp24; 2023-01-11T21:41:24.0998297Z auto tmp26 = tmp22 * tmp25; 2023-01-11T21:41:24.0998401Z out_ptr0[i2 + (118*i1) + (9440*i0)] = tmp26; 2023-01-11T21:41:24.0998469Z } 2023-01-11T21:41:24.0998533Z } 2023-01-11T21:41:24.0998594Z } 2023-01-11T21:41:24.0998656Z } 2023-01-11T21:41:24.0998709Z } 2023-01-11T21:41:24.0998822Z #pragma omp for collapse(2) 2023-01-11T21:41:24.0998900Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.0998960Z { 2023-01-11T21:41:24.0999041Z for(long i1=0; i1<80; i1+=1) 2023-01-11T21:41:24.0999099Z { 2023-01-11T21:41:24.0999182Z #pragma GCC ivdep 2023-01-11T21:41:24.0999259Z for(long i2=0; i2<118; i2+=1) 2023-01-11T21:41:24.0999321Z { 2023-01-11T21:41:24.0999384Z { 2023-01-11T21:41:24.0999450Z { 2023-01-11T21:41:24.0999552Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.0999671Z auto tmp1 = static_cast(0.4936708860759494); 2023-01-11T21:41:24.0999765Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.0999853Z auto tmp3 = std::floor(tmp2); 2023-01-11T21:41:24.0999960Z auto tmp4 = static_cast(tmp3); 2023-01-11T21:41:24.1000065Z auto tmp5 = static_cast(i2); 2023-01-11T21:41:24.1000178Z auto tmp6 = static_cast(0.49572649572649574); 2023-01-11T21:41:24.1000269Z auto tmp7 = tmp5 * tmp6; 2023-01-11T21:41:24.1000363Z auto tmp8 = std::ceil(tmp7); 2023-01-11T21:41:24.1000473Z auto tmp9 = static_cast(58.0); 2023-01-11T21:41:24.1000600Z auto tmp10 = (tmp9 != tmp9) ? tmp9 : std::min(tmp8, tmp9); 2023-01-11T21:41:24.1000695Z auto tmp11 = static_cast(tmp10); 2023-01-11T21:41:24.1000809Z auto tmp12 = in_ptr0[tmp11 + (59*tmp4) + (2360*i0)]; 2023-01-11T21:41:24.1000916Z auto tmp13 = static_cast(1.0); 2023-01-11T21:41:24.1001023Z auto tmp14 = static_cast(tmp4); 2023-01-11T21:41:24.1001174Z auto tmp15 = tmp2 - tmp14; 2023-01-11T21:41:24.1001318Z auto tmp16 = tmp13 - tmp15; 2023-01-11T21:41:24.1001410Z auto tmp17 = tmp12 * tmp16; 2023-01-11T21:41:24.1001499Z auto tmp18 = std::ceil(tmp2); 2023-01-11T21:41:24.1001605Z auto tmp19 = static_cast(39.0); 2023-01-11T21:41:24.1001735Z auto tmp20 = (tmp19 != tmp19) ? 
tmp19 : std::min(tmp18, tmp19); 2023-01-11T21:41:24.1001841Z auto tmp21 = static_cast(tmp20); 2023-01-11T21:41:24.1001952Z auto tmp22 = in_ptr0[tmp11 + (59*tmp21) + (2360*i0)]; 2023-01-11T21:41:24.1002042Z auto tmp23 = tmp22 * tmp15; 2023-01-11T21:41:24.1002133Z auto tmp24 = tmp17 + tmp23; 2023-01-11T21:41:24.1002233Z auto tmp25 = std::floor(tmp7); 2023-01-11T21:41:24.1002359Z auto tmp26 = static_cast(tmp25); 2023-01-11T21:41:24.1002468Z auto tmp27 = static_cast(tmp26); 2023-01-11T21:41:24.1002611Z auto tmp28 = tmp7 - tmp27; 2023-01-11T21:41:24.1002701Z auto tmp29 = tmp24 * tmp28; 2023-01-11T21:41:24.1002801Z out_ptr1[i2 + (118*i1) + (9440*i0)] = tmp29; 2023-01-11T21:41:24.1002869Z } 2023-01-11T21:41:24.1002934Z } 2023-01-11T21:41:24.1002997Z } 2023-01-11T21:41:24.1003047Z } 2023-01-11T21:41:24.1003109Z } 2023-01-11T21:41:24.1003183Z #pragma omp for 2023-01-11T21:41:24.1003263Z for(long i0=0; i0<2360; i0+=1) 2023-01-11T21:41:24.1003320Z { 2023-01-11T21:41:24.1003448Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1003577Z auto tmp1 = at::vec::Vectorized::loadu(out_ptr1 + 8*i0); 2023-01-11T21:41:24.1003679Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1003773Z tmp2.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.1003833Z } 2023-01-11T21:41:24.1003939Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1004066Z for(long i0=18880; i0<18880; i0+=1) 2023-01-11T21:41:24.1004156Z { 2023-01-11T21:41:24.1004233Z auto tmp0 = out_ptr0[i0]; 2023-01-11T21:41:24.1004314Z auto tmp1 = out_ptr1[i0]; 2023-01-11T21:41:24.1004394Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1004479Z in_out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1004540Z } 2023-01-11T21:41:24.1004598Z } 2023-01-11T21:41:24.1004656Z } 2023-01-11T21:41:24.1004727Z ''') 2023-01-11T21:41:24.1004733Z 2023-01-11T21:41:24.1004746Z 2023-01-11T21:41:24.1004825Z async_compile.wait(globals()) 2023-01-11T21:41:24.1004895Z del async_compile 2023-01-11T21:41:24.1004899Z 2023-01-11T21:41:24.1004969Z def call(args): 2023-01-11T21:41:24.1005042Z arg0_1, = args 2023-01-11T21:41:24.1005112Z args.clear() 2023-01-11T21:41:24.1005340Z buf0 = empty_strided((1, 2, 80, 118), (18880, 9440, 118, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1005560Z buf1 = empty_strided((1, 2, 80, 118), (18880, 9440, 118, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1005634Z buf2 = buf0; del buf0 # reuse 2023-01-11T21:41:24.1005788Z kernel_cpp_0(c_void_p(buf2.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.1005854Z del arg0_1 2023-01-11T21:41:24.1005925Z return (buf2, ) 2023-01-11T21:41:24.1005930Z 2023-01-11T21:41:24.1005934Z 2023-01-11T21:41:24.1006005Z if __name__ == "__main__": 2023-01-11T21:41:24.1006114Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1006232Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1006452Z arg0_1 = rand_strided((1, 2, 40, 59), (4720, 2360, 59, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1006551Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.1006555Z 2023-01-11T21:41:24.1006620Z ok (1.908s) 2023-01-11T21:41:24.1007101Z test_upsample_nearest1d_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1007227Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1007486Z [2023-01-11 21:39:17,747] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 478 2023-01-11T21:41:24.1007752Z [2023-01-11 21:39:19,314] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 478 2023-01-11T21:41:24.1007792Z 2023-01-11T21:41:24.1007888Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1007958Z import torch 2023-01-11T21:41:24.1008026Z import random 2023-01-11T21:41:24.1008128Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1008248Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1008253Z 2023-01-11T21:41:24.1008329Z aten = torch.ops.aten 2023-01-11T21:41:24.1008461Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1008546Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1008552Z 2023-01-11T21:41:24.1008556Z 2023-01-11T21:41:24.1008686Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1008888Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1009004Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1009090Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.1009183Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.1009304Z float* __restrict__ out_ptr2, 2023-01-11T21:41:24.1009395Z float* __restrict__ out_ptr3, 2023-01-11T21:41:24.1009483Z float* __restrict__ out_ptr4) 2023-01-11T21:41:24.1009542Z { 2023-01-11T21:41:24.1009639Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1009688Z { 2023-01-11T21:41:24.1009759Z #pragma omp for 2023-01-11T21:41:24.1009837Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.1009900Z { 2023-01-11T21:41:24.1009976Z #pragma GCC ivdep 2023-01-11T21:41:24.1010058Z for(long i1=0; i1<74; i1+=1) 2023-01-11T21:41:24.1010118Z { 2023-01-11T21:41:24.1010168Z { 2023-01-11T21:41:24.1010230Z { 2023-01-11T21:41:24.1010335Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.1010439Z auto tmp1 = static_cast(0.5); 2023-01-11T21:41:24.1010534Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1010637Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:24.1010737Z auto tmp4 = in_ptr0[tmp3 + (37*i0)]; 2023-01-11T21:41:24.1010820Z out_ptr0[i1 + (74*i0)] = tmp4; 2023-01-11T21:41:24.1010911Z out_ptr1[i1 + (74*i0)] = tmp4; 2023-01-11T21:41:24.1010975Z } 2023-01-11T21:41:24.1011036Z } 2023-01-11T21:41:24.1011100Z } 2023-01-11T21:41:24.1011159Z } 2023-01-11T21:41:24.1011233Z #pragma omp for 2023-01-11T21:41:24.1011301Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.1011361Z { 2023-01-11T21:41:24.1011438Z #pragma GCC ivdep 2023-01-11T21:41:24.1011520Z for(long i1=0; i1<70; i1+=1) 2023-01-11T21:41:24.1011580Z { 2023-01-11T21:41:24.1011640Z { 2023-01-11T21:41:24.1011705Z { 2023-01-11T21:41:24.1011802Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.1011912Z auto tmp1 = static_cast(0.5285714285714286); 2023-01-11T21:41:24.1012005Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1012109Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:24.1012209Z auto tmp4 = in_ptr0[tmp3 + (37*i0)]; 2023-01-11T21:41:24.1012300Z out_ptr2[i1 + (70*i0)] = tmp4; 2023-01-11T21:41:24.1012360Z } 2023-01-11T21:41:24.1012411Z } 2023-01-11T21:41:24.1012470Z } 2023-01-11T21:41:24.1012529Z } 2023-01-11T21:41:24.1012603Z #pragma omp for 
2023-01-11T21:41:24.1012681Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.1012741Z { 2023-01-11T21:41:24.1012807Z #pragma GCC ivdep 2023-01-11T21:41:24.1012885Z for(long i1=0; i1<45; i1+=1) 2023-01-11T21:41:24.1012993Z { 2023-01-11T21:41:24.1013054Z { 2023-01-11T21:41:24.1013117Z { 2023-01-11T21:41:24.1013219Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.1013329Z auto tmp1 = static_cast(0.8222222222222222); 2023-01-11T21:41:24.1013409Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1013510Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:24.1013610Z auto tmp4 = in_ptr0[tmp3 + (37*i0)]; 2023-01-11T21:41:24.1013703Z out_ptr3[i1 + (45*i0)] = tmp4; 2023-01-11T21:41:24.1013766Z } 2023-01-11T21:41:24.1013830Z } 2023-01-11T21:41:24.1013891Z } 2023-01-11T21:41:24.1013939Z } 2023-01-11T21:41:24.1014014Z #pragma omp for 2023-01-11T21:41:24.1014092Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.1014153Z { 2023-01-11T21:41:24.1014230Z #pragma GCC ivdep 2023-01-11T21:41:24.1014342Z for(long i1=0; i1<36; i1+=1) 2023-01-11T21:41:24.1014410Z { 2023-01-11T21:41:24.1014461Z { 2023-01-11T21:41:24.1014526Z { 2023-01-11T21:41:24.1014659Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.1014827Z auto tmp1 = static_cast(1.0277777777777777); 2023-01-11T21:41:24.1014961Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1015119Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:24.1015267Z auto tmp4 = in_ptr0[tmp3 + (37*i0)]; 2023-01-11T21:41:24.1015348Z out_ptr4[i1 + (36*i0)] = tmp4; 2023-01-11T21:41:24.1015415Z } 2023-01-11T21:41:24.1015475Z } 2023-01-11T21:41:24.1015537Z } 2023-01-11T21:41:24.1015599Z } 2023-01-11T21:41:24.1015663Z } 2023-01-11T21:41:24.1015722Z } 2023-01-11T21:41:24.1015806Z ''') 2023-01-11T21:41:24.1015812Z 2023-01-11T21:41:24.1015817Z 2023-01-11T21:41:24.1015907Z async_compile.wait(globals()) 2023-01-11T21:41:24.1015977Z del async_compile 2023-01-11T21:41:24.1015982Z 2023-01-11T21:41:24.1016055Z def call(args): 2023-01-11T21:41:24.1016121Z arg0_1, = args 2023-01-11T21:41:24.1016190Z args.clear() 2023-01-11T21:41:24.1016399Z buf0 = empty_strided((2, 4, 74), (296, 74, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1016589Z buf4 = empty_strided((2, 4, 74), (296, 74, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1016788Z buf1 = empty_strided((2, 4, 70), (280, 70, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1016981Z buf2 = empty_strided((2, 4, 45), (180, 45, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1017175Z buf3 = empty_strided((2, 4, 36), (144, 36, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1017414Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr())) 2023-01-11T21:41:24.1017483Z del arg0_1 2023-01-11T21:41:24.1017577Z return (buf0, buf1, buf2, buf3, buf4, ) 2023-01-11T21:41:24.1017582Z 2023-01-11T21:41:24.1017587Z 2023-01-11T21:41:24.1017662Z if __name__ == "__main__": 2023-01-11T21:41:24.1017778Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1017888Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1018096Z arg0_1 = rand_strided((2, 4, 37), (148, 37, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1018203Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.1018208Z 2023-01-11T21:41:24.1018273Z ok (1.858s) 2023-01-11T21:41:24.1018764Z test_upsample_nearest2d_backward_cpu (__main__.CpuTests) ... 
/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1018930Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1019196Z [2023-01-11 21:39:19,341] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 479 2023-01-11T21:41:24.1019202Z 2023-01-11T21:41:24.1019295Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1019363Z import torch 2023-01-11T21:41:24.1019420Z import random 2023-01-11T21:41:24.1019533Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1019649Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1019654Z 2023-01-11T21:41:24.1019730Z aten = torch.ops.aten 2023-01-11T21:41:24.1019859Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1019947Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1019983Z 2023-01-11T21:41:24.1019988Z 2023-01-11T21:41:24.1020120Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1020321Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1020426Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1020522Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.1020616Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.1020708Z float* __restrict__ out_ptr2, 2023-01-11T21:41:24.1020796Z float* __restrict__ out_ptr3, 2023-01-11T21:41:24.1020889Z float* __restrict__ out_ptr4) 2023-01-11T21:41:24.1020947Z { 2023-01-11T21:41:24.1021031Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1021093Z { 2023-01-11T21:41:24.1021165Z #pragma omp for 2023-01-11T21:41:24.1021247Z for(long i0=0; i0<27; i0+=1) 2023-01-11T21:41:24.1021309Z { 2023-01-11T21:41:24.1021384Z #pragma GCC ivdep 2023-01-11T21:41:24.1021466Z for(long i1=0; i1<6; i1+=1) 2023-01-11T21:41:24.1021516Z { 2023-01-11T21:41:24.1021577Z { 2023-01-11T21:41:24.1021640Z { 2023-01-11T21:41:24.1021742Z auto tmp0 = in_ptr0[(2*i1) + (24*i0)]; 2023-01-11T21:41:24.1021847Z auto tmp1 = in_ptr0[1 + (2*i1) + (24*i0)]; 2023-01-11T21:41:24.1021948Z auto tmp3 = in_ptr0[12 + (2*i1) + (24*i0)]; 2023-01-11T21:41:24.1022048Z auto tmp5 = in_ptr0[13 + (2*i1) + (24*i0)]; 2023-01-11T21:41:24.1022129Z auto tmp2 = tmp1 + tmp0; 2023-01-11T21:41:24.1022221Z auto tmp4 = tmp3 + tmp2; 2023-01-11T21:41:24.1022307Z auto tmp6 = tmp5 + tmp4; 2023-01-11T21:41:24.1022511Z auto tmp7 = static_cast(1.0); 2023-01-11T21:41:24.1022604Z auto tmp8 = tmp6 * tmp7; 2023-01-11T21:41:24.1022698Z out_ptr0[i1 + (6*i0)] = tmp8; 2023-01-11T21:41:24.1022762Z } 2023-01-11T21:41:24.1022812Z } 2023-01-11T21:41:24.1022872Z } 2023-01-11T21:41:24.1022935Z } 2023-01-11T21:41:24.1023009Z #pragma omp for 2023-01-11T21:41:24.1023089Z for(long i0=0; i0<9; i0+=1) 2023-01-11T21:41:24.1023149Z { 2023-01-11T21:41:24.1023225Z #pragma GCC ivdep 2023-01-11T21:41:24.1023295Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:24.1023354Z { 2023-01-11T21:41:24.1023434Z #pragma GCC ivdep 2023-01-11T21:41:24.1023520Z for(long i2=0; i2<5; i2+=1) 2023-01-11T21:41:24.1023583Z { 2023-01-11T21:41:24.1023646Z { 2023-01-11T21:41:24.1023711Z { 2023-01-11T21:41:24.1023816Z auto tmp0 = static_cast(((3 + (6*i1)) / 4)); 
2023-01-11T21:41:24.1023985Z auto tmp1 = static_cast(((9 + (6*i1)) / 4)); 2023-01-11T21:41:24.1024080Z auto tmp2 = tmp0 < tmp1; 2023-01-11T21:41:24.1024194Z auto tmp3 = static_cast(((4 + (12*i2)) / 5)); 2023-01-11T21:41:24.1024307Z auto tmp4 = static_cast(((16 + (12*i2)) / 5)); 2023-01-11T21:41:24.1024403Z auto tmp5 = tmp3 < tmp4; 2023-01-11T21:41:24.1024493Z auto tmp6 = tmp2 & tmp5; 2023-01-11T21:41:24.1024567Z float tmp7 = 0.0; 2023-01-11T21:41:24.1024641Z if(tmp6) 2023-01-11T21:41:24.1024708Z { 2023-01-11T21:41:24.1024833Z auto tmp8 = in_ptr0[(12*(((3 + (6*i1)) / 4))) + (72*i0) + (((4 + (12*i2)) / 5))]; 2023-01-11T21:41:24.1024915Z tmp7 = tmp8; 2023-01-11T21:41:24.1025022Z } 2023-01-11T21:41:24.1025142Z auto tmp9 = static_cast(1 + (((4 + (12*i2)) / 5))); 2023-01-11T21:41:24.1025237Z auto tmp10 = tmp9 < tmp4; 2023-01-11T21:41:24.1025320Z auto tmp11 = tmp2 & tmp10; 2023-01-11T21:41:24.1025407Z float tmp12 = 0.0; 2023-01-11T21:41:24.1025481Z if(tmp11) 2023-01-11T21:41:24.1025549Z { 2023-01-11T21:41:24.1025687Z auto tmp13 = in_ptr0[1 + (12*(((3 + (6*i1)) / 4))) + (72*i0) + (((4 + (12*i2)) / 5))]; 2023-01-11T21:41:24.1025817Z tmp12 = tmp13; 2023-01-11T21:41:24.1025916Z } 2023-01-11T21:41:24.1026034Z auto tmp14 = tmp12 + tmp7; 2023-01-11T21:41:24.1026186Z auto tmp15 = static_cast(2 + (((4 + (12*i2)) / 5))); 2023-01-11T21:41:24.1026281Z auto tmp16 = tmp15 < tmp4; 2023-01-11T21:41:24.1026375Z auto tmp17 = tmp2 & tmp16; 2023-01-11T21:41:24.1026462Z float tmp18 = 0.0; 2023-01-11T21:41:24.1026537Z if(tmp17) 2023-01-11T21:41:24.1026604Z { 2023-01-11T21:41:24.1026727Z auto tmp19 = in_ptr0[2 + (12*(((3 + (6*i1)) / 4))) + (72*i0) + (((4 + (12*i2)) / 5))]; 2023-01-11T21:41:24.1026800Z tmp18 = tmp19; 2023-01-11T21:41:24.1026865Z } 2023-01-11T21:41:24.1026958Z auto tmp20 = tmp18 + tmp14; 2023-01-11T21:41:24.1027073Z auto tmp21 = static_cast(1 + (((3 + (6*i1)) / 4))); 2023-01-11T21:41:24.1027168Z auto tmp22 = tmp21 < tmp1; 2023-01-11T21:41:24.1027259Z auto tmp23 = tmp22 & tmp5; 2023-01-11T21:41:24.1027347Z float tmp24 = 0.0; 2023-01-11T21:41:24.1027410Z if(tmp23) 2023-01-11T21:41:24.1027475Z { 2023-01-11T21:41:24.1027601Z auto tmp25 = in_ptr0[12 + (12*(((3 + (6*i1)) / 4))) + (72*i0) + (((4 + (12*i2)) / 5))]; 2023-01-11T21:41:24.1027686Z tmp24 = tmp25; 2023-01-11T21:41:24.1027752Z } 2023-01-11T21:41:24.1027844Z auto tmp26 = tmp24 + tmp20; 2023-01-11T21:41:24.1027938Z auto tmp27 = tmp22 & tmp10; 2023-01-11T21:41:24.1028019Z float tmp28 = 0.0; 2023-01-11T21:41:24.1028082Z if(tmp27) 2023-01-11T21:41:24.1028145Z { 2023-01-11T21:41:24.1028273Z auto tmp29 = in_ptr0[13 + (12*(((3 + (6*i1)) / 4))) + (72*i0) + (((4 + (12*i2)) / 5))]; 2023-01-11T21:41:24.1028356Z tmp28 = tmp29; 2023-01-11T21:41:24.1028459Z } 2023-01-11T21:41:24.1028554Z auto tmp30 = tmp28 + tmp26; 2023-01-11T21:41:24.1028647Z auto tmp31 = tmp22 & tmp16; 2023-01-11T21:41:24.1028721Z float tmp32 = 0.0; 2023-01-11T21:41:24.1028793Z if(tmp31) 2023-01-11T21:41:24.1028858Z { 2023-01-11T21:41:24.1028986Z auto tmp33 = in_ptr0[14 + (12*(((3 + (6*i1)) / 4))) + (72*i0) + (((4 + (12*i2)) / 5))]; 2023-01-11T21:41:24.1029070Z tmp32 = tmp33; 2023-01-11T21:41:24.1029134Z } 2023-01-11T21:41:24.1029225Z auto tmp34 = tmp32 + tmp30; 2023-01-11T21:41:24.1029317Z out_ptr1[i2 + (5*i1) + (20*i0)] = tmp34; 2023-01-11T21:41:24.1029386Z } 2023-01-11T21:41:24.1029451Z } 2023-01-11T21:41:24.1029544Z } 2023-01-11T21:41:24.1029605Z } 2023-01-11T21:41:24.1029665Z } 2023-01-11T21:41:24.1029741Z #pragma omp for 2023-01-11T21:41:24.1029809Z for(long i0=0; i0<9; i0+=1) 
2023-01-11T21:41:24.1029870Z { 2023-01-11T21:41:24.1029944Z #pragma GCC ivdep 2023-01-11T21:41:24.1030024Z for(long i1=0; i1<2; i1+=1) 2023-01-11T21:41:24.1030088Z { 2023-01-11T21:41:24.1030166Z #pragma GCC ivdep 2023-01-11T21:41:24.1030252Z for(long i2=0; i2<8; i2+=1) 2023-01-11T21:41:24.1030302Z { 2023-01-11T21:41:24.1030363Z { 2023-01-11T21:41:24.1030426Z { 2023-01-11T21:41:24.1030532Z auto tmp0 = static_cast(3*i1); 2023-01-11T21:41:24.1030644Z auto tmp1 = static_cast(3 + (3*i1)); 2023-01-11T21:41:24.1030737Z auto tmp2 = tmp0 < tmp1; 2023-01-11T21:41:24.1030857Z auto tmp3 = static_cast(((7 + (12*i2)) / 8)); 2023-01-11T21:41:24.1030962Z auto tmp4 = static_cast(((19 + (12*i2)) / 8)); 2023-01-11T21:41:24.1031054Z auto tmp5 = tmp3 < tmp4; 2023-01-11T21:41:24.1031141Z auto tmp6 = tmp2 & tmp5; 2023-01-11T21:41:24.1031225Z float tmp7 = 0.0; 2023-01-11T21:41:24.1031298Z if(tmp6) 2023-01-11T21:41:24.1031365Z { 2023-01-11T21:41:24.1031480Z auto tmp8 = in_ptr0[(36*i1) + (72*i0) + (((7 + (12*i2)) / 8))]; 2023-01-11T21:41:24.1031553Z tmp7 = tmp8; 2023-01-11T21:41:24.1031622Z } 2023-01-11T21:41:24.1031738Z auto tmp9 = static_cast(1 + (((7 + (12*i2)) / 8))); 2023-01-11T21:41:24.1031835Z auto tmp10 = tmp9 < tmp4; 2023-01-11T21:41:24.1031930Z auto tmp11 = tmp2 & tmp10; 2023-01-11T21:41:24.1032015Z float tmp12 = 0.0; 2023-01-11T21:41:24.1032090Z if(tmp11) 2023-01-11T21:41:24.1032155Z { 2023-01-11T21:41:24.1032262Z auto tmp13 = in_ptr0[1 + (36*i1) + (72*i0) + (((7 + (12*i2)) / 8))]; 2023-01-11T21:41:24.1032346Z tmp12 = tmp13; 2023-01-11T21:41:24.1032413Z } 2023-01-11T21:41:24.1032508Z auto tmp14 = tmp12 + tmp7; 2023-01-11T21:41:24.1032619Z auto tmp15 = static_cast(1 + (3*i1)); 2023-01-11T21:41:24.1032710Z auto tmp16 = tmp15 < tmp1; 2023-01-11T21:41:24.1032803Z auto tmp17 = tmp16 & tmp5; 2023-01-11T21:41:24.1032878Z float tmp18 = 0.0; 2023-01-11T21:41:24.1032982Z if(tmp17) 2023-01-11T21:41:24.1033048Z { 2023-01-11T21:41:24.1033165Z auto tmp19 = in_ptr0[12 + (36*i1) + (72*i0) + (((7 + (12*i2)) / 8))]; 2023-01-11T21:41:24.1033249Z tmp18 = tmp19; 2023-01-11T21:41:24.1033315Z } 2023-01-11T21:41:24.1033410Z auto tmp20 = tmp18 + tmp14; 2023-01-11T21:41:24.1033493Z auto tmp21 = tmp16 & tmp10; 2023-01-11T21:41:24.1033578Z float tmp22 = 0.0; 2023-01-11T21:41:24.1033651Z if(tmp21) 2023-01-11T21:41:24.1033717Z { 2023-01-11T21:41:24.1033917Z auto tmp23 = in_ptr0[13 + (36*i1) + (72*i0) + (((7 + (12*i2)) / 8))]; 2023-01-11T21:41:24.1033999Z tmp22 = tmp23; 2023-01-11T21:41:24.1034065Z } 2023-01-11T21:41:24.1034191Z auto tmp24 = tmp22 + tmp20; 2023-01-11T21:41:24.1034292Z auto tmp25 = static_cast(2 + (3*i1)); 2023-01-11T21:41:24.1034386Z auto tmp26 = tmp25 < tmp1; 2023-01-11T21:41:24.1034478Z auto tmp27 = tmp26 & tmp5; 2023-01-11T21:41:24.1034563Z float tmp28 = 0.0; 2023-01-11T21:41:24.1034639Z if(tmp27) 2023-01-11T21:41:24.1034706Z { 2023-01-11T21:41:24.1034825Z auto tmp29 = in_ptr0[24 + (36*i1) + (72*i0) + (((7 + (12*i2)) / 8))]; 2023-01-11T21:41:24.1034895Z tmp28 = tmp29; 2023-01-11T21:41:24.1034961Z } 2023-01-11T21:41:24.1035055Z auto tmp30 = tmp28 + tmp24; 2023-01-11T21:41:24.1035149Z auto tmp31 = tmp26 & tmp10; 2023-01-11T21:41:24.1035235Z float tmp32 = 0.0; 2023-01-11T21:41:24.1035311Z if(tmp31) 2023-01-11T21:41:24.1035378Z { 2023-01-11T21:41:24.1035483Z auto tmp33 = in_ptr0[25 + (36*i1) + (72*i0) + (((7 + (12*i2)) / 8))]; 2023-01-11T21:41:24.1035564Z tmp32 = tmp33; 2023-01-11T21:41:24.1035628Z } 2023-01-11T21:41:24.1035721Z auto tmp34 = tmp32 + tmp30; 2023-01-11T21:41:24.1035805Z float tmp35 = 0.0; 
2023-01-11T21:41:24.1035878Z if(tmp6) 2023-01-11T21:41:24.1035947Z { 2023-01-11T21:41:24.1036053Z auto tmp36 = in_ptr0[(36*i1) + (72*i0) + (((7 + (12*i2)) / 8))]; 2023-01-11T21:41:24.1036139Z tmp35 = tmp36; 2023-01-11T21:41:24.1036202Z } 2023-01-11T21:41:24.1036286Z float tmp37 = 0.0; 2023-01-11T21:41:24.1036365Z if(tmp11) 2023-01-11T21:41:24.1036432Z { 2023-01-11T21:41:24.1036550Z auto tmp38 = in_ptr0[1 + (36*i1) + (72*i0) + (((7 + (12*i2)) / 8))]; 2023-01-11T21:41:24.1036630Z tmp37 = tmp38; 2023-01-11T21:41:24.1036685Z } 2023-01-11T21:41:24.1036778Z auto tmp39 = tmp37 + tmp35; 2023-01-11T21:41:24.1036862Z float tmp40 = 0.0; 2023-01-11T21:41:24.1036933Z if(tmp17) 2023-01-11T21:41:24.1036998Z { 2023-01-11T21:41:24.1037114Z auto tmp41 = in_ptr0[12 + (36*i1) + (72*i0) + (((7 + (12*i2)) / 8))]; 2023-01-11T21:41:24.1037193Z tmp40 = tmp41; 2023-01-11T21:41:24.1037247Z } 2023-01-11T21:41:24.1037342Z auto tmp42 = tmp40 + tmp39; 2023-01-11T21:41:24.1037459Z float tmp43 = 0.0; 2023-01-11T21:41:24.1037534Z if(tmp21) 2023-01-11T21:41:24.1037599Z { 2023-01-11T21:41:24.1037715Z auto tmp44 = in_ptr0[13 + (36*i1) + (72*i0) + (((7 + (12*i2)) / 8))]; 2023-01-11T21:41:24.1037794Z tmp43 = tmp44; 2023-01-11T21:41:24.1037849Z } 2023-01-11T21:41:24.1037941Z auto tmp45 = tmp43 + tmp42; 2023-01-11T21:41:24.1038025Z float tmp46 = 0.0; 2023-01-11T21:41:24.1038099Z if(tmp27) 2023-01-11T21:41:24.1038165Z { 2023-01-11T21:41:24.1038283Z auto tmp47 = in_ptr0[24 + (36*i1) + (72*i0) + (((7 + (12*i2)) / 8))]; 2023-01-11T21:41:24.1038364Z tmp46 = tmp47; 2023-01-11T21:41:24.1038428Z } 2023-01-11T21:41:24.1038541Z auto tmp48 = tmp46 + tmp45; 2023-01-11T21:41:24.1038626Z float tmp49 = 0.0; 2023-01-11T21:41:24.1038700Z if(tmp31) 2023-01-11T21:41:24.1038764Z { 2023-01-11T21:41:24.1038882Z auto tmp50 = in_ptr0[25 + (36*i1) + (72*i0) + (((7 + (12*i2)) / 8))]; 2023-01-11T21:41:24.1038962Z tmp49 = tmp50; 2023-01-11T21:41:24.1039028Z } 2023-01-11T21:41:24.1039111Z auto tmp51 = tmp49 + tmp48; 2023-01-11T21:41:24.1039210Z out_ptr2[i2 + (8*i1) + (16*i0)] = tmp34; 2023-01-11T21:41:24.1039314Z out_ptr3[i2 + (8*i1) + (16*i0)] = tmp51; 2023-01-11T21:41:24.1039379Z } 2023-01-11T21:41:24.1039445Z } 2023-01-11T21:41:24.1039507Z } 2023-01-11T21:41:24.1039571Z } 2023-01-11T21:41:24.1039621Z } 2023-01-11T21:41:24.1039694Z #pragma omp for 2023-01-11T21:41:24.1039772Z for(long i0=0; i0<9; i0+=1) 2023-01-11T21:41:24.1039834Z { 2023-01-11T21:41:24.1039908Z #pragma GCC ivdep 2023-01-11T21:41:24.1039988Z for(long i1=0; i1<4; i1+=1) 2023-01-11T21:41:24.1040039Z { 2023-01-11T21:41:24.1040118Z #pragma GCC ivdep 2023-01-11T21:41:24.1040204Z for(long i2=0; i2<7; i2+=1) 2023-01-11T21:41:24.1040264Z { 2023-01-11T21:41:24.1040327Z { 2023-01-11T21:41:24.1040393Z { 2023-01-11T21:41:24.1040509Z auto tmp0 = static_cast(((3 + (6*i1)) / 4)); 2023-01-11T21:41:24.1040613Z auto tmp1 = static_cast(((9 + (6*i1)) / 4)); 2023-01-11T21:41:24.1040709Z auto tmp2 = tmp0 < tmp1; 2023-01-11T21:41:24.1040826Z auto tmp3 = static_cast(((6 + (12*i2)) / 7)); 2023-01-11T21:41:24.1040944Z auto tmp4 = static_cast(((18 + (12*i2)) / 7)); 2023-01-11T21:41:24.1041039Z auto tmp5 = tmp3 < tmp4; 2023-01-11T21:41:24.1041131Z auto tmp6 = tmp2 & tmp5; 2023-01-11T21:41:24.1041218Z float tmp7 = 0.0; 2023-01-11T21:41:24.1041297Z if(tmp6) 2023-01-11T21:41:24.1041353Z { 2023-01-11T21:41:24.1041480Z auto tmp8 = in_ptr0[(12*(((3 + (6*i1)) / 4))) + (72*i0) + (((6 + (12*i2)) / 7))]; 2023-01-11T21:41:24.1041565Z tmp7 = tmp8; 2023-01-11T21:41:24.1041635Z } 2023-01-11T21:41:24.1041752Z auto tmp9 = 
static_cast(1 + (((6 + (12*i2)) / 7))); 2023-01-11T21:41:24.1041844Z auto tmp10 = tmp9 < tmp4; 2023-01-11T21:41:24.1041977Z auto tmp11 = tmp2 & tmp10; 2023-01-11T21:41:24.1042052Z float tmp12 = 0.0; 2023-01-11T21:41:24.1042126Z if(tmp11) 2023-01-11T21:41:24.1042193Z { 2023-01-11T21:41:24.1042320Z auto tmp13 = in_ptr0[1 + (12*(((3 + (6*i1)) / 4))) + (72*i0) + (((6 + (12*i2)) / 7))]; 2023-01-11T21:41:24.1042407Z tmp12 = tmp13; 2023-01-11T21:41:24.1042475Z } 2023-01-11T21:41:24.1042568Z auto tmp14 = tmp12 + tmp7; 2023-01-11T21:41:24.1042686Z auto tmp15 = static_cast(1 + (((3 + (6*i1)) / 4))); 2023-01-11T21:41:24.1042768Z auto tmp16 = tmp15 < tmp1; 2023-01-11T21:41:24.1042863Z auto tmp17 = tmp16 & tmp5; 2023-01-11T21:41:24.1042949Z float tmp18 = 0.0; 2023-01-11T21:41:24.1043056Z if(tmp17) 2023-01-11T21:41:24.1043124Z { 2023-01-11T21:41:24.1043250Z auto tmp19 = in_ptr0[12 + (12*(((3 + (6*i1)) / 4))) + (72*i0) + (((6 + (12*i2)) / 7))]; 2023-01-11T21:41:24.1043336Z tmp18 = tmp19; 2023-01-11T21:41:24.1043391Z } 2023-01-11T21:41:24.1043486Z auto tmp20 = tmp18 + tmp14; 2023-01-11T21:41:24.1043578Z auto tmp21 = tmp16 & tmp10; 2023-01-11T21:41:24.1043664Z float tmp22 = 0.0; 2023-01-11T21:41:24.1043737Z if(tmp21) 2023-01-11T21:41:24.1043806Z { 2023-01-11T21:41:24.1043933Z auto tmp23 = in_ptr0[13 + (12*(((3 + (6*i1)) / 4))) + (72*i0) + (((6 + (12*i2)) / 7))]; 2023-01-11T21:41:24.1044017Z tmp22 = tmp23; 2023-01-11T21:41:24.1044074Z } 2023-01-11T21:41:24.1044171Z auto tmp24 = tmp22 + tmp20; 2023-01-11T21:41:24.1044275Z out_ptr4[i2 + (7*i1) + (28*i0)] = tmp24; 2023-01-11T21:41:24.1044343Z } 2023-01-11T21:41:24.1044405Z } 2023-01-11T21:41:24.1044469Z } 2023-01-11T21:41:24.1044531Z } 2023-01-11T21:41:24.1044580Z } 2023-01-11T21:41:24.1044641Z } 2023-01-11T21:41:24.1044699Z } 2023-01-11T21:41:24.1044799Z ''') 2023-01-11T21:41:24.1044805Z 2023-01-11T21:41:24.1044810Z 2023-01-11T21:41:24.1044898Z async_compile.wait(globals()) 2023-01-11T21:41:24.1044968Z del async_compile 2023-01-11T21:41:24.1044973Z 2023-01-11T21:41:24.1045038Z def call(args): 2023-01-11T21:41:24.1045093Z arg0_1, = args 2023-01-11T21:41:24.1045161Z args.clear() 2023-01-11T21:41:24.1045376Z buf0 = empty_strided((3, 3, 3, 6), (54, 18, 6, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1045591Z buf1 = empty_strided((3, 3, 4, 5), (60, 20, 5, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1045797Z buf2 = empty_strided((3, 3, 2, 8), (48, 16, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1046002Z buf3 = empty_strided((3, 3, 2, 8), (48, 16, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1046200Z buf4 = empty_strided((3, 3, 4, 7), (84, 28, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1046436Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr())) 2023-01-11T21:41:24.1046493Z del arg0_1 2023-01-11T21:41:24.1046588Z return (buf0, buf1, buf2, buf3, buf4, ) 2023-01-11T21:41:24.1046593Z 2023-01-11T21:41:24.1046598Z 2023-01-11T21:41:24.1046672Z if __name__ == "__main__": 2023-01-11T21:41:24.1046785Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1046906Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1047160Z arg0_1 = rand_strided((3, 3, 6, 12), (216, 72, 12, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1047262Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.1047535Z [2023-01-11 21:39:21,302] 
torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 479 2023-01-11T21:41:24.1047540Z 2023-01-11T21:41:24.1047594Z ok (1.988s) 2023-01-11T21:41:24.1048067Z test_upsample_nearest2d_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1048194Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1048477Z [2023-01-11 21:39:21,735] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 480 2023-01-11T21:41:24.1048745Z [2023-01-11 21:39:23,315] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 480 2023-01-11T21:41:24.1048750Z 2023-01-11T21:41:24.1048840Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1048907Z import torch 2023-01-11T21:41:24.1048978Z import random 2023-01-11T21:41:24.1049091Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1049198Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1049212Z 2023-01-11T21:41:24.1049275Z aten = torch.ops.aten 2023-01-11T21:41:24.1049404Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1049493Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1049497Z 2023-01-11T21:41:24.1049502Z 2023-01-11T21:41:24.1049632Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1049836Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1049951Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1050052Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.1050144Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.1050225Z float* __restrict__ out_ptr2, 2023-01-11T21:41:24.1050317Z float* __restrict__ out_ptr3, 2023-01-11T21:41:24.1050409Z float* __restrict__ out_ptr4) 2023-01-11T21:41:24.1050470Z { 2023-01-11T21:41:24.1050567Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1050628Z { 2023-01-11T21:41:24.1050692Z #pragma omp for 2023-01-11T21:41:24.1050768Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.1050828Z { 2023-01-11T21:41:24.1050907Z #pragma GCC ivdep 2023-01-11T21:41:24.1050989Z for(long i1=0; i1<74; i1+=1) 2023-01-11T21:41:24.1051048Z { 2023-01-11T21:41:24.1051132Z #pragma GCC ivdep 2023-01-11T21:41:24.1051209Z for(long i2=0; i2<76; i2+=1) 2023-01-11T21:41:24.1051266Z { 2023-01-11T21:41:24.1051332Z { 2023-01-11T21:41:24.1051399Z { 2023-01-11T21:41:24.1051507Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.1051612Z auto tmp1 = static_cast(0.5); 2023-01-11T21:41:24.1051705Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1051800Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:24.1051903Z auto tmp4 = static_cast(i2); 2023-01-11T21:41:24.1051993Z auto tmp5 = tmp4 * tmp1; 2023-01-11T21:41:24.1052096Z auto tmp6 = static_cast(tmp5); 2023-01-11T21:41:24.1052210Z auto tmp7 = in_ptr0[tmp6 + (38*tmp3) + (1406*i0)]; 2023-01-11T21:41:24.1052343Z out_ptr0[i2 + (76*i1) + (5624*i0)] = tmp7; 2023-01-11T21:41:24.1052445Z out_ptr1[i2 + (76*i1) + (5624*i0)] = tmp7; 2023-01-11T21:41:24.1052509Z } 2023-01-11T21:41:24.1052562Z } 2023-01-11T21:41:24.1052622Z } 2023-01-11T21:41:24.1052683Z } 
2023-01-11T21:41:24.1052741Z } 2023-01-11T21:41:24.1052816Z #pragma omp for 2023-01-11T21:41:24.1052890Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.1052939Z { 2023-01-11T21:41:24.1053016Z #pragma GCC ivdep 2023-01-11T21:41:24.1053099Z for(long i1=0; i1<70; i1+=1) 2023-01-11T21:41:24.1053159Z { 2023-01-11T21:41:24.1053235Z #pragma GCC ivdep 2023-01-11T21:41:24.1053324Z for(long i2=0; i2<75; i2+=1) 2023-01-11T21:41:24.1053386Z { 2023-01-11T21:41:24.1053439Z { 2023-01-11T21:41:24.1053506Z { 2023-01-11T21:41:24.1053672Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.1053787Z auto tmp1 = static_cast(0.5285714285714286); 2023-01-11T21:41:24.1053880Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1053980Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:24.1054082Z auto tmp4 = static_cast(i2); 2023-01-11T21:41:24.1054196Z auto tmp5 = static_cast(0.5066666666666667); 2023-01-11T21:41:24.1054278Z auto tmp6 = tmp4 * tmp5; 2023-01-11T21:41:24.1054379Z auto tmp7 = static_cast(tmp6); 2023-01-11T21:41:24.1054493Z auto tmp8 = in_ptr0[tmp7 + (38*tmp3) + (1406*i0)]; 2023-01-11T21:41:24.1054596Z out_ptr2[i2 + (75*i1) + (5250*i0)] = tmp8; 2023-01-11T21:41:24.1054661Z } 2023-01-11T21:41:24.1054727Z } 2023-01-11T21:41:24.1054789Z } 2023-01-11T21:41:24.1054839Z } 2023-01-11T21:41:24.1054898Z } 2023-01-11T21:41:24.1054973Z #pragma omp for 2023-01-11T21:41:24.1055048Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.1055109Z { 2023-01-11T21:41:24.1055187Z #pragma GCC ivdep 2023-01-11T21:41:24.1055258Z for(long i1=0; i1<45; i1+=1) 2023-01-11T21:41:24.1055319Z { 2023-01-11T21:41:24.1055399Z #pragma GCC ivdep 2023-01-11T21:41:24.1055487Z for(long i2=0; i2<74; i2+=1) 2023-01-11T21:41:24.1055550Z { 2023-01-11T21:41:24.1055612Z { 2023-01-11T21:41:24.1055676Z { 2023-01-11T21:41:24.1055768Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.1055876Z auto tmp1 = static_cast(0.8222222222222222); 2023-01-11T21:41:24.1055972Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1056078Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:24.1056179Z auto tmp4 = static_cast(i2); 2023-01-11T21:41:24.1056293Z auto tmp5 = static_cast(0.5135135135135135); 2023-01-11T21:41:24.1056386Z auto tmp6 = tmp4 * tmp5; 2023-01-11T21:41:24.1056487Z auto tmp7 = static_cast(tmp6); 2023-01-11T21:41:24.1056591Z auto tmp8 = in_ptr0[tmp7 + (38*tmp3) + (1406*i0)]; 2023-01-11T21:41:24.1056692Z out_ptr3[i2 + (74*i1) + (3330*i0)] = tmp8; 2023-01-11T21:41:24.1056757Z } 2023-01-11T21:41:24.1056821Z } 2023-01-11T21:41:24.1056882Z } 2023-01-11T21:41:24.1056942Z } 2023-01-11T21:41:24.1057002Z } 2023-01-11T21:41:24.1057066Z #pragma omp for 2023-01-11T21:41:24.1057174Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.1057230Z { 2023-01-11T21:41:24.1057304Z #pragma GCC ivdep 2023-01-11T21:41:24.1057385Z for(long i1=0; i1<36; i1+=1) 2023-01-11T21:41:24.1057445Z { 2023-01-11T21:41:24.1057513Z #pragma GCC ivdep 2023-01-11T21:41:24.1057605Z for(long i2=0; i2<39; i2+=1) 2023-01-11T21:41:24.1057665Z { 2023-01-11T21:41:24.1057730Z { 2023-01-11T21:41:24.1057799Z { 2023-01-11T21:41:24.1057902Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.1058017Z auto tmp1 = static_cast(1.0277777777777777); 2023-01-11T21:41:24.1058110Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1058202Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:24.1058331Z auto tmp4 = static_cast(i2); 2023-01-11T21:41:24.1058445Z auto tmp5 = static_cast(0.9743589743589743); 2023-01-11T21:41:24.1058542Z auto tmp6 = tmp4 * tmp5; 2023-01-11T21:41:24.1058645Z auto tmp7 = static_cast(tmp6); 2023-01-11T21:41:24.1058757Z 
auto tmp8 = in_ptr0[tmp7 + (38*tmp3) + (1406*i0)]; 2023-01-11T21:41:24.1058862Z out_ptr4[i2 + (39*i1) + (1404*i0)] = tmp8; 2023-01-11T21:41:24.1058918Z } 2023-01-11T21:41:24.1058983Z } 2023-01-11T21:41:24.1059046Z } 2023-01-11T21:41:24.1059106Z } 2023-01-11T21:41:24.1059166Z } 2023-01-11T21:41:24.1059225Z } 2023-01-11T21:41:24.1059282Z } 2023-01-11T21:41:24.1059353Z ''') 2023-01-11T21:41:24.1059358Z 2023-01-11T21:41:24.1059363Z 2023-01-11T21:41:24.1059450Z async_compile.wait(globals()) 2023-01-11T21:41:24.1059522Z del async_compile 2023-01-11T21:41:24.1059531Z 2023-01-11T21:41:24.1059598Z def call(args): 2023-01-11T21:41:24.1059666Z arg0_1, = args 2023-01-11T21:41:24.1059736Z args.clear() 2023-01-11T21:41:24.1059956Z buf0 = empty_strided((2, 4, 74, 76), (22496, 5624, 76, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1060163Z buf4 = empty_strided((2, 4, 74, 76), (22496, 5624, 76, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1060380Z buf1 = empty_strided((2, 4, 70, 75), (21000, 5250, 75, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1060592Z buf2 = empty_strided((2, 4, 45, 74), (13320, 3330, 74, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1060809Z buf3 = empty_strided((2, 4, 36, 39), (5616, 1404, 39, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1061043Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr())) 2023-01-11T21:41:24.1061114Z del arg0_1 2023-01-11T21:41:24.1061211Z return (buf0, buf1, buf2, buf3, buf4, ) 2023-01-11T21:41:24.1061217Z 2023-01-11T21:41:24.1061221Z 2023-01-11T21:41:24.1061295Z if __name__ == "__main__": 2023-01-11T21:41:24.1061406Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1061515Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1061728Z arg0_1 = rand_strided((2, 4, 37, 38), (5624, 1406, 38, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1061834Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.1061839Z 2023-01-11T21:41:24.1061905Z ok (2.017s) 2023-01-11T21:41:24.1062522Z test_upsample_nearest3d_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1062728Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1062998Z [2023-01-11 21:39:23,910] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 481 2023-01-11T21:41:24.1063262Z [2023-01-11 21:39:25,552] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 481 2023-01-11T21:41:24.1063267Z 2023-01-11T21:41:24.1063357Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1063424Z import torch 2023-01-11T21:41:24.1063481Z import random 2023-01-11T21:41:24.1063595Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1063715Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1063719Z 2023-01-11T21:41:24.1063793Z aten = torch.ops.aten 2023-01-11T21:41:24.1063922Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1064016Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1064067Z 2023-01-11T21:41:24.1064072Z 2023-01-11T21:41:24.1064205Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1064405Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1064511Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1064608Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.1064703Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.1064794Z float* __restrict__ out_ptr2, 2023-01-11T21:41:24.1064885Z float* __restrict__ out_ptr3, 2023-01-11T21:41:24.1064973Z float* __restrict__ out_ptr4) 2023-01-11T21:41:24.1065032Z { 2023-01-11T21:41:24.1065116Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1065174Z { 2023-01-11T21:41:24.1065248Z #pragma omp for 2023-01-11T21:41:24.1065327Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.1065393Z { 2023-01-11T21:41:24.1065470Z #pragma GCC ivdep 2023-01-11T21:41:24.1065542Z for(long i1=0; i1<74; i1+=1) 2023-01-11T21:41:24.1065604Z { 2023-01-11T21:41:24.1065686Z #pragma GCC ivdep 2023-01-11T21:41:24.1065771Z for(long i2=0; i2<76; i2+=1) 2023-01-11T21:41:24.1065833Z { 2023-01-11T21:41:24.1065912Z #pragma GCC ivdep 2023-01-11T21:41:24.1065999Z for(long i3=0; i3<78; i3+=1) 2023-01-11T21:41:24.1066052Z { 2023-01-11T21:41:24.1066117Z { 2023-01-11T21:41:24.1066184Z { 2023-01-11T21:41:24.1066294Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.1066404Z auto tmp1 = static_cast(0.5); 2023-01-11T21:41:24.1066499Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1066612Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:24.1066718Z auto tmp4 = static_cast(i2); 2023-01-11T21:41:24.1066802Z auto tmp5 = tmp4 * tmp1; 2023-01-11T21:41:24.1066909Z auto tmp6 = static_cast(tmp5); 2023-01-11T21:41:24.1067012Z auto tmp7 = static_cast(i3); 2023-01-11T21:41:24.1067105Z auto tmp8 = tmp7 * tmp1; 2023-01-11T21:41:24.1067210Z auto tmp9 = static_cast(tmp8); 2023-01-11T21:41:24.1067336Z auto tmp10 = in_ptr0[tmp9 + (39*tmp6) + (1482*tmp3) + (54834*i0)]; 2023-01-11T21:41:24.1067445Z out_ptr0[i3 + (78*i2) + (5928*i1) + (438672*i0)] = tmp10; 2023-01-11T21:41:24.1067553Z out_ptr1[i3 + (78*i2) + (5928*i1) + (438672*i0)] = tmp10; 2023-01-11T21:41:24.1067611Z } 2023-01-11T21:41:24.1067711Z } 2023-01-11T21:41:24.1067775Z } 2023-01-11T21:41:24.1067835Z } 2023-01-11T21:41:24.1067895Z } 2023-01-11T21:41:24.1067954Z } 2023-01-11T21:41:24.1068018Z #pragma omp for 2023-01-11T21:41:24.1068096Z for(long i0=0; i0<8; i0+=1) 
2023-01-11T21:41:24.1068154Z { 2023-01-11T21:41:24.1068233Z #pragma GCC ivdep 2023-01-11T21:41:24.1068315Z for(long i1=0; i1<70; i1+=1) 2023-01-11T21:41:24.1068375Z { 2023-01-11T21:41:24.1068456Z #pragma GCC ivdep 2023-01-11T21:41:24.1068532Z for(long i2=0; i2<75; i2+=1) 2023-01-11T21:41:24.1068594Z { 2023-01-11T21:41:24.1068676Z #pragma GCC ivdep 2023-01-11T21:41:24.1068765Z for(long i3=0; i3<80; i3+=1) 2023-01-11T21:41:24.1068827Z { 2023-01-11T21:41:24.1068890Z { 2023-01-11T21:41:24.1068993Z { 2023-01-11T21:41:24.1069090Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.1069205Z auto tmp1 = static_cast(0.5285714285714286); 2023-01-11T21:41:24.1069300Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1069409Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:24.1069513Z auto tmp4 = static_cast(i2); 2023-01-11T21:41:24.1069631Z auto tmp5 = static_cast(0.5066666666666667); 2023-01-11T21:41:24.1069724Z auto tmp6 = tmp4 * tmp5; 2023-01-11T21:41:24.1069832Z auto tmp7 = static_cast(tmp6); 2023-01-11T21:41:24.1069925Z auto tmp8 = static_cast(i3); 2023-01-11T21:41:24.1070034Z auto tmp9 = static_cast(0.4875); 2023-01-11T21:41:24.1070137Z auto tmp10 = tmp8 * tmp9; 2023-01-11T21:41:24.1070247Z auto tmp11 = static_cast(tmp10); 2023-01-11T21:41:24.1070371Z auto tmp12 = in_ptr0[tmp11 + (39*tmp7) + (1482*tmp3) + (54834*i0)]; 2023-01-11T21:41:24.1070481Z out_ptr2[i3 + (80*i2) + (6000*i1) + (420000*i0)] = tmp12; 2023-01-11T21:41:24.1070550Z } 2023-01-11T21:41:24.1070605Z } 2023-01-11T21:41:24.1070671Z } 2023-01-11T21:41:24.1070734Z } 2023-01-11T21:41:24.1070797Z } 2023-01-11T21:41:24.1070859Z } 2023-01-11T21:41:24.1070934Z #pragma omp for 2023-01-11T21:41:24.1071012Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.1071061Z { 2023-01-11T21:41:24.1071137Z #pragma GCC ivdep 2023-01-11T21:41:24.1071217Z for(long i1=0; i1<45; i1+=1) 2023-01-11T21:41:24.1071285Z { 2023-01-11T21:41:24.1071367Z #pragma GCC ivdep 2023-01-11T21:41:24.1071453Z for(long i2=0; i2<74; i2+=1) 2023-01-11T21:41:24.1071516Z { 2023-01-11T21:41:24.1071585Z #pragma GCC ivdep 2023-01-11T21:41:24.1071673Z for(long i3=0; i3<103; i3+=1) 2023-01-11T21:41:24.1071738Z { 2023-01-11T21:41:24.1071805Z { 2023-01-11T21:41:24.1071873Z { 2023-01-11T21:41:24.1071983Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.1072097Z auto tmp1 = static_cast(0.8222222222222222); 2023-01-11T21:41:24.1072182Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1072286Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:24.1072392Z auto tmp4 = static_cast(i2); 2023-01-11T21:41:24.1072536Z auto tmp5 = static_cast(0.5135135135135135); 2023-01-11T21:41:24.1072633Z auto tmp6 = tmp4 * tmp5; 2023-01-11T21:41:24.1072741Z auto tmp7 = static_cast(tmp6); 2023-01-11T21:41:24.1072851Z auto tmp8 = static_cast(i3); 2023-01-11T21:41:24.1072954Z auto tmp9 = static_cast(0.3786407766990291); 2023-01-11T21:41:24.1073052Z auto tmp10 = tmp8 * tmp9; 2023-01-11T21:41:24.1073163Z auto tmp11 = static_cast(tmp10); 2023-01-11T21:41:24.1073290Z auto tmp12 = in_ptr0[tmp11 + (39*tmp7) + (1482*tmp3) + (54834*i0)]; 2023-01-11T21:41:24.1073404Z out_ptr3[i3 + (103*i2) + (7622*i1) + (342990*i0)] = tmp12; 2023-01-11T21:41:24.1073472Z } 2023-01-11T21:41:24.1073566Z } 2023-01-11T21:41:24.1073631Z } 2023-01-11T21:41:24.1073682Z } 2023-01-11T21:41:24.1073805Z } 2023-01-11T21:41:24.1073866Z } 2023-01-11T21:41:24.1073942Z #pragma omp for 2023-01-11T21:41:24.1074019Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.1074078Z { 2023-01-11T21:41:24.1074158Z #pragma GCC ivdep 2023-01-11T21:41:24.1074229Z for(long i1=0; i1<36; i1+=1) 
2023-01-11T21:41:24.1074292Z { 2023-01-11T21:41:24.1074371Z #pragma GCC ivdep 2023-01-11T21:41:24.1074456Z for(long i2=0; i2<39; i2+=1) 2023-01-11T21:41:24.1074521Z { 2023-01-11T21:41:24.1074600Z #pragma GCC ivdep 2023-01-11T21:41:24.1074678Z for(long i3=0; i3<40; i3+=1) 2023-01-11T21:41:24.1074741Z { 2023-01-11T21:41:24.1074808Z { 2023-01-11T21:41:24.1074873Z { 2023-01-11T21:41:24.1074984Z auto tmp0 = static_cast(i1); 2023-01-11T21:41:24.1075099Z auto tmp1 = static_cast(1.0277777777777777); 2023-01-11T21:41:24.1075196Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1075301Z auto tmp3 = static_cast(tmp2); 2023-01-11T21:41:24.1075395Z auto tmp4 = static_cast(i2); 2023-01-11T21:41:24.1075508Z auto tmp5 = static_cast(0.9743589743589743); 2023-01-11T21:41:24.1075604Z auto tmp6 = tmp4 * tmp5; 2023-01-11T21:41:24.1075709Z auto tmp7 = static_cast(tmp6); 2023-01-11T21:41:24.1075812Z auto tmp8 = static_cast(i3); 2023-01-11T21:41:24.1075921Z auto tmp9 = static_cast(0.975); 2023-01-11T21:41:24.1076023Z auto tmp10 = tmp8 * tmp9; 2023-01-11T21:41:24.1076121Z auto tmp11 = static_cast(tmp10); 2023-01-11T21:41:24.1076246Z auto tmp12 = in_ptr0[tmp11 + (39*tmp7) + (1482*tmp3) + (54834*i0)]; 2023-01-11T21:41:24.1076357Z out_ptr4[i3 + (40*i2) + (1560*i1) + (56160*i0)] = tmp12; 2023-01-11T21:41:24.1076425Z } 2023-01-11T21:41:24.1076487Z } 2023-01-11T21:41:24.1076550Z } 2023-01-11T21:41:24.1076610Z } 2023-01-11T21:41:24.1076671Z } 2023-01-11T21:41:24.1076719Z } 2023-01-11T21:41:24.1076773Z } 2023-01-11T21:41:24.1076832Z } 2023-01-11T21:41:24.1076912Z ''') 2023-01-11T21:41:24.1076917Z 2023-01-11T21:41:24.1076922Z 2023-01-11T21:41:24.1077010Z async_compile.wait(globals()) 2023-01-11T21:41:24.1077082Z del async_compile 2023-01-11T21:41:24.1077087Z 2023-01-11T21:41:24.1077154Z def call(args): 2023-01-11T21:41:24.1077246Z arg0_1, = args 2023-01-11T21:41:24.1077315Z args.clear() 2023-01-11T21:41:24.1077556Z buf0 = empty_strided((2, 4, 74, 76, 78), (1754688, 438672, 5928, 78, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1077789Z buf4 = empty_strided((2, 4, 74, 76, 78), (1754688, 438672, 5928, 78, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1078019Z buf1 = empty_strided((2, 4, 70, 75, 80), (1680000, 420000, 6000, 80, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1078253Z buf2 = empty_strided((2, 4, 45, 74, 103), (1371960, 342990, 7622, 103, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1078483Z buf3 = empty_strided((2, 4, 36, 39, 40), (224640, 56160, 1560, 40, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1078717Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr())) 2023-01-11T21:41:24.1078804Z del arg0_1 2023-01-11T21:41:24.1078898Z return (buf0, buf1, buf2, buf3, buf4, ) 2023-01-11T21:41:24.1078903Z 2023-01-11T21:41:24.1078907Z 2023-01-11T21:41:24.1078984Z if __name__ == "__main__": 2023-01-11T21:41:24.1079096Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1079218Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1079448Z arg0_1 = rand_strided((2, 4, 37, 38, 39), (219336, 54834, 1482, 39, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1079553Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.1079557Z 2023-01-11T21:41:24.1079621Z ok (2.469s) 2023-01-11T21:41:24.1080084Z test_var_mean_cpu (__main__.CpuTests) ... 
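test_var_mean_cpu (FORWARDS graph 482, dumped below) compiles a pair of unbiased var/mean reductions; the divisors 7 and 15 in the kernel are N - 1 for 8 and 16 reduced elements, and the output buffers have shapes (1, 2, 4) and (1, 4). A minimal sketch of a call that produces this kind of graph, assuming the (1, 2, 4, 8) input shown in the dump; the reduction dims are inferred from the buffer shapes, not taken from the test source:

import torch

def fn(x):
    # unbiased var/mean over the last dim and over dims (1, 3)
    return torch.var_mean(x, dim=-1), torch.var_mean(x, dim=(1, 3))

compiled = torch.compile(fn, backend="inductor")
x = torch.randn(1, 2, 4, 8)
(var1, mean1), (var2, mean2) = compiled(x)

# sanity check against eager: divisors are 8 - 1 = 7 and 2 * 8 - 1 = 15
torch.testing.assert_close((var1, mean1), torch.var_mean(x, dim=-1))
torch.testing.assert_close((var2, mean2), torch.var_mean(x, dim=(1, 3)))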
/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1080210Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1080460Z [2023-01-11 21:39:25,810] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 482 2023-01-11T21:41:24.1080465Z 2023-01-11T21:41:24.1080555Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1080621Z import torch 2023-01-11T21:41:24.1080688Z import random 2023-01-11T21:41:24.1080798Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1080914Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1080919Z 2023-01-11T21:41:24.1080995Z aten = torch.ops.aten 2023-01-11T21:41:24.1081115Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1081206Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1081211Z 2023-01-11T21:41:24.1081215Z 2023-01-11T21:41:24.1081345Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1081553Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1081666Z extern "C" void kernel(float* __restrict__ in_out_ptr0, 2023-01-11T21:41:24.1081768Z float* __restrict__ in_out_ptr1, 2023-01-11T21:41:24.1081864Z float* __restrict__ in_out_ptr2, 2023-01-11T21:41:24.1081961Z float* __restrict__ in_out_ptr3, 2023-01-11T21:41:24.1082051Z const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1082148Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.1082240Z float* __restrict__ out_ptr3) 2023-01-11T21:41:24.1082295Z { 2023-01-11T21:41:24.1082380Z auto out_ptr1 = in_out_ptr0; 2023-01-11T21:41:24.1082459Z auto out_ptr2 = in_out_ptr1; 2023-01-11T21:41:24.1082535Z auto out_ptr4 = in_out_ptr2; 2023-01-11T21:41:24.1082603Z auto out_ptr5 = in_out_ptr3; 2023-01-11T21:41:24.1082749Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1082807Z { 2023-01-11T21:41:24.1082881Z #pragma omp for 2023-01-11T21:41:24.1082960Z for(long i0=0; i0<8; i0+=1) 2023-01-11T21:41:24.1083020Z { 2023-01-11T21:41:24.1083082Z { 2023-01-11T21:41:24.1083265Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.1083342Z float tmp1 = 0; 2023-01-11T21:41:24.1083461Z auto tmp1_vec = at::vec::Vectorized(tmp1); 2023-01-11T21:41:24.1083548Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.1083613Z { 2023-01-11T21:41:24.1083754Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.1083836Z tmp1_vec += tmp0; 2023-01-11T21:41:24.1083902Z } 2023-01-11T21:41:24.1084110Z tmp1 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp1_vec); 2023-01-11T21:41:24.1084232Z #pragma omp simd simdlen(4) reduction(+:tmp1) 2023-01-11T21:41:24.1084318Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:24.1084381Z { 2023-01-11T21:41:24.1084477Z auto tmp0 = in_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:24.1084552Z tmp1 += tmp0; 2023-01-11T21:41:24.1084614Z } 2023-01-11T21:41:24.1084682Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:24.1084742Z } 2023-01-11T21:41:24.1084802Z } 2023-01-11T21:41:24.1084878Z #pragma omp for 2023-01-11T21:41:24.1084957Z for(long i0=0; i0<8; i0+=1) 
2023-01-11T21:41:24.1085014Z { 2023-01-11T21:41:24.1085076Z { 2023-01-11T21:41:24.1085248Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.1085331Z float tmp6 = 0; 2023-01-11T21:41:24.1085450Z auto tmp6_vec = at::vec::Vectorized(tmp6); 2023-01-11T21:41:24.1085526Z float tmp7 = 0; 2023-01-11T21:41:24.1085643Z auto tmp7_vec = at::vec::Vectorized(tmp7); 2023-01-11T21:41:24.1085729Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.1085790Z { 2023-01-11T21:41:24.1085929Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i0) + (8*i1)); 2023-01-11T21:41:24.1086044Z auto tmp1 = at::vec::Vectorized(out_ptr0[i0]); 2023-01-11T21:41:24.1086179Z auto tmp2 = at::vec::Vectorized(static_cast(8)); 2023-01-11T21:41:24.1086268Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:24.1086406Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.1086492Z auto tmp5 = tmp4.pow(2); 2023-01-11T21:41:24.1086573Z tmp6_vec += tmp5; 2023-01-11T21:41:24.1086656Z tmp7_vec += tmp0; 2023-01-11T21:41:24.1086718Z } 2023-01-11T21:41:24.1086900Z tmp6 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp6_vec); 2023-01-11T21:41:24.1087093Z tmp7 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp7_vec); 2023-01-11T21:41:24.1087231Z #pragma omp simd simdlen(4) reduction(+:tmp6) reduction(+:tmp7) 2023-01-11T21:41:24.1087317Z for(long i1=8; i1<8; i1+=1) 2023-01-11T21:41:24.1087378Z { 2023-01-11T21:41:24.1087471Z auto tmp0 = in_ptr0[i1 + (8*i0)]; 2023-01-11T21:41:24.1087559Z auto tmp1 = out_ptr0[i0]; 2023-01-11T21:41:24.1087664Z auto tmp2 = static_cast(8); 2023-01-11T21:41:24.1087741Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:24.1087907Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.1087993Z auto tmp5 = tmp4 * tmp4; 2023-01-11T21:41:24.1088067Z tmp6 += tmp5; 2023-01-11T21:41:24.1088138Z tmp7 += tmp0; 2023-01-11T21:41:24.1088199Z } 2023-01-11T21:41:24.1088277Z out_ptr1[i0] = tmp6; 2023-01-11T21:41:24.1088343Z out_ptr2[i0] = tmp7; 2023-01-11T21:41:24.1088403Z } 2023-01-11T21:41:24.1088463Z } 2023-01-11T21:41:24.1088535Z #pragma omp for 2023-01-11T21:41:24.1088612Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:24.1088669Z { 2023-01-11T21:41:24.1088798Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr1 + 8*i0); 2023-01-11T21:41:24.1088918Z auto tmp1 = at::vec::Vectorized(static_cast(7)); 2023-01-11T21:41:24.1089002Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.1089094Z tmp2.store(in_out_ptr0 + 8*i0); 2023-01-11T21:41:24.1089184Z } 2023-01-11T21:41:24.1089273Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1089350Z for(long i0=8; i0<8; i0+=1) 2023-01-11T21:41:24.1089407Z { 2023-01-11T21:41:24.1089479Z auto tmp0 = out_ptr1[i0]; 2023-01-11T21:41:24.1089575Z auto tmp1 = static_cast(7); 2023-01-11T21:41:24.1089656Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.1089735Z in_out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1089794Z } 2023-01-11T21:41:24.1089865Z #pragma omp for 2023-01-11T21:41:24.1089941Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:24.1089990Z { 2023-01-11T21:41:24.1090116Z auto tmp0 = at::vec::Vectorized::loadu(out_ptr2 + 8*i0); 2023-01-11T21:41:24.1090242Z auto tmp1 = at::vec::Vectorized(static_cast(8)); 2023-01-11T21:41:24.1090323Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.1090414Z tmp2.store(in_out_ptr1 + 8*i0); 2023-01-11T21:41:24.1090479Z } 2023-01-11T21:41:24.1090570Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1090636Z for(long i0=8; i0<8; i0+=1) 2023-01-11T21:41:24.1090696Z 
{ 2023-01-11T21:41:24.1090777Z auto tmp0 = out_ptr2[i0]; 2023-01-11T21:41:24.1090872Z auto tmp1 = static_cast(8); 2023-01-11T21:41:24.1090953Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.1091031Z in_out_ptr1[i0] = tmp2; 2023-01-11T21:41:24.1091089Z } 2023-01-11T21:41:24.1091150Z #pragma omp for 2023-01-11T21:41:24.1091228Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.1091286Z { 2023-01-11T21:41:24.1091348Z { 2023-01-11T21:41:24.1091534Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.1091609Z float tmp1 = 0; 2023-01-11T21:41:24.1091727Z auto tmp1_vec = at::vec::Vectorized(tmp1); 2023-01-11T21:41:24.1091805Z for(long i1=0; i1<2; i1+=1) 2023-01-11T21:41:24.1091868Z { 2023-01-11T21:41:24.1091955Z for(long i2=0; i2<1; i2+=1) 2023-01-11T21:41:24.1092018Z { 2023-01-11T21:41:24.1092165Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i0) + (8*i2) + (32*i1)); 2023-01-11T21:41:24.1092245Z tmp1_vec += tmp0; 2023-01-11T21:41:24.1092311Z } 2023-01-11T21:41:24.1092504Z tmp1 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp1_vec); 2023-01-11T21:41:24.1092611Z #pragma omp simd simdlen(4) reduction(+:tmp1) 2023-01-11T21:41:24.1092698Z for(long i2=8; i2<8; i2+=1) 2023-01-11T21:41:24.1092759Z { 2023-01-11T21:41:24.1092864Z auto tmp0 = in_ptr0[i2 + (8*i0) + (32*i1)]; 2023-01-11T21:41:24.1092983Z tmp1 += tmp0; 2023-01-11T21:41:24.1093045Z } 2023-01-11T21:41:24.1093105Z } 2023-01-11T21:41:24.1093173Z out_ptr3[i0] = tmp1; 2023-01-11T21:41:24.1093239Z } 2023-01-11T21:41:24.1093298Z } 2023-01-11T21:41:24.1093370Z #pragma omp for 2023-01-11T21:41:24.1093449Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.1093507Z { 2023-01-11T21:41:24.1093570Z { 2023-01-11T21:41:24.1093741Z #pragma omp declare reduction(+:at::vec::Vectorized:omp_out += omp_in) initializer(omp_priv={{0}}) 2023-01-11T21:41:24.1093816Z float tmp6 = 0; 2023-01-11T21:41:24.1093932Z auto tmp6_vec = at::vec::Vectorized(tmp6); 2023-01-11T21:41:24.1094009Z float tmp7 = 0; 2023-01-11T21:41:24.1094124Z auto tmp7_vec = at::vec::Vectorized(tmp7); 2023-01-11T21:41:24.1094238Z for(long i1=0; i1<2; i1+=1) 2023-01-11T21:41:24.1094300Z { 2023-01-11T21:41:24.1094376Z for(long i2=0; i2<1; i2+=1) 2023-01-11T21:41:24.1094439Z { 2023-01-11T21:41:24.1094584Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i0) + (8*i2) + (32*i1)); 2023-01-11T21:41:24.1094709Z auto tmp1 = at::vec::Vectorized(out_ptr3[i0]); 2023-01-11T21:41:24.1094846Z auto tmp2 = at::vec::Vectorized(static_cast(16)); 2023-01-11T21:41:24.1094937Z auto tmp3 = tmp1 / tmp2; 2023-01-11T21:41:24.1095075Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.1095167Z auto tmp5 = tmp4.pow(2); 2023-01-11T21:41:24.1095238Z tmp6_vec += tmp5; 2023-01-11T21:41:24.1095320Z tmp7_vec += tmp0; 2023-01-11T21:41:24.1095382Z } 2023-01-11T21:41:24.1095578Z tmp6 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp6_vec); 2023-01-11T21:41:24.1095762Z tmp7 = at::vec::vec_reduce_all([](at::vec::Vectorized& x, at::vec::Vectorized&y) {return x + y;}, tmp7_vec); 2023-01-11T21:41:24.1095899Z #pragma omp simd simdlen(4) reduction(+:tmp6) reduction(+:tmp7) 2023-01-11T21:41:24.1095985Z for(long i2=8; i2<8; i2+=1) 2023-01-11T21:41:24.1096046Z { 2023-01-11T21:41:24.1096147Z auto tmp0 = in_ptr0[i2 + (8*i0) + (32*i1)]; 2023-01-11T21:41:24.1096229Z auto tmp1 = out_ptr3[i0]; 2023-01-11T21:41:24.1096330Z auto tmp2 = static_cast(16); 2023-01-11T21:41:24.1096420Z auto tmp3 = tmp1 / 
tmp2; 2023-01-11T21:41:24.1096557Z auto tmp4 = tmp0 - tmp3; 2023-01-11T21:41:24.1096651Z auto tmp5 = tmp4 * tmp4; 2023-01-11T21:41:24.1096727Z tmp6 += tmp5; 2023-01-11T21:41:24.1096800Z tmp7 += tmp0; 2023-01-11T21:41:24.1096852Z } 2023-01-11T21:41:24.1096911Z } 2023-01-11T21:41:24.1096991Z out_ptr4[i0] = tmp6; 2023-01-11T21:41:24.1097066Z out_ptr5[i0] = tmp7; 2023-01-11T21:41:24.1097127Z } 2023-01-11T21:41:24.1097186Z } 2023-01-11T21:41:24.1097259Z #pragma omp for 2023-01-11T21:41:24.1097326Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.1097385Z { 2023-01-11T21:41:24.1097447Z { 2023-01-11T21:41:24.1097509Z { 2023-01-11T21:41:24.1097599Z auto tmp0 = out_ptr4[i0]; 2023-01-11T21:41:24.1097700Z auto tmp1 = static_cast(15); 2023-01-11T21:41:24.1097777Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.1097865Z in_out_ptr2[i0] = tmp2; 2023-01-11T21:41:24.1097958Z } 2023-01-11T21:41:24.1098021Z } 2023-01-11T21:41:24.1098084Z } 2023-01-11T21:41:24.1098158Z #pragma omp for 2023-01-11T21:41:24.1098242Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.1098291Z { 2023-01-11T21:41:24.1098353Z { 2023-01-11T21:41:24.1098417Z { 2023-01-11T21:41:24.1098507Z auto tmp0 = out_ptr5[i0]; 2023-01-11T21:41:24.1098611Z auto tmp1 = static_cast(16); 2023-01-11T21:41:24.1098699Z auto tmp2 = tmp0 / tmp1; 2023-01-11T21:41:24.1098787Z in_out_ptr3[i0] = tmp2; 2023-01-11T21:41:24.1098839Z } 2023-01-11T21:41:24.1098902Z } 2023-01-11T21:41:24.1098965Z } 2023-01-11T21:41:24.1099027Z } 2023-01-11T21:41:24.1099087Z } 2023-01-11T21:41:24.1099165Z ''') 2023-01-11T21:41:24.1099171Z 2023-01-11T21:41:24.1099175Z 2023-01-11T21:41:24.1099292Z async_compile.wait(globals()) 2023-01-11T21:41:24.1099352Z del async_compile 2023-01-11T21:41:24.1099357Z 2023-01-11T21:41:24.1099422Z def call(args): 2023-01-11T21:41:24.1099489Z arg0_1, = args 2023-01-11T21:41:24.1099561Z args.clear() 2023-01-11T21:41:24.1099775Z buf0 = empty_strided((1, 2, 4, 1), (8, 4, 1, 8), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1099978Z buf1 = empty_strided((1, 2, 4), (8, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1100177Z buf2 = empty_strided((1, 2, 4), (8, 4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1100249Z buf3 = buf1; del buf1 # reuse 2023-01-11T21:41:24.1100328Z buf4 = buf2; del buf2 # reuse 2023-01-11T21:41:24.1100534Z buf5 = empty_strided((1, 1, 4, 1), (4, 4, 1, 4), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1100725Z buf6 = empty_strided((1, 4), (4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1100915Z buf7 = empty_strided((1, 4), (4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1101003Z buf8 = buf6; del buf6 # reuse 2023-01-11T21:41:24.1101083Z buf9 = buf7; del buf7 # reuse 2023-01-11T21:41:24.1101339Z kernel_cpp_0(c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(buf8.data_ptr()), c_void_p(buf9.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf5.data_ptr())) 2023-01-11T21:41:24.1101394Z del arg0_1 2023-01-11T21:41:24.1101480Z return (buf3, buf4, buf8, buf9, ) 2023-01-11T21:41:24.1101486Z 2023-01-11T21:41:24.1101490Z 2023-01-11T21:41:24.1101566Z if __name__ == "__main__": 2023-01-11T21:41:24.1101681Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1101807Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1102020Z arg0_1 = rand_strided((1, 2, 4, 8), (64, 32, 8, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1102126Z print_performance(lambda: call([arg0_1])) 
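The next test in the log, test_vdd_clamp_cpu (FORWARDS graph 483), lowers to a kernel that computes std::max against 3.0 and also stores a boolean (input >= 3) mask, which is what the backward pass of a clamp needs. A minimal, hypothetical sketch of the kind of op that produces such a kernel; the lower bound of 3 and the 16-element input are read off the dump, the rest is an assumption:

import torch

def fn(x):
    # clamp with a lower bound; its backward uses a (x >= 3) mask, matching the
    # bool out_ptr1 buffer in the generated kernel
    return torch.clamp(x, min=3.0)

compiled = torch.compile(fn, backend="inductor")
x = torch.randn(16, requires_grad=True)
compiled(x).sum().backward()   # gradient flows only where the mask is true
print(x.grad)                  # 1.0 where x >= 3, 0.0 elsewhere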
2023-01-11T21:41:24.1102529Z [2023-01-11 21:39:27,444] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 482 2023-01-11T21:41:24.1102541Z 2023-01-11T21:41:24.1102635Z ok (1.656s) 2023-01-11T21:41:24.1103100Z test_vdd_clamp_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1103226Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1103492Z [2023-01-11 21:39:27,481] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 483 2023-01-11T21:41:24.1103755Z [2023-01-11 21:39:28,997] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 483 2023-01-11T21:41:24.1103760Z 2023-01-11T21:41:24.1103913Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1103983Z import torch 2023-01-11T21:41:24.1104050Z import random 2023-01-11T21:41:24.1104167Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1104287Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1104292Z 2023-01-11T21:41:24.1104359Z aten = torch.ops.aten 2023-01-11T21:41:24.1104490Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1104579Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1104584Z 2023-01-11T21:41:24.1104588Z 2023-01-11T21:41:24.1104719Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1104922Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1105038Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1105136Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.1105230Z bool* __restrict__ out_ptr1) 2023-01-11T21:41:24.1105314Z { 2023-01-11T21:41:24.1105410Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1105466Z { 2023-01-11T21:41:24.1105541Z #pragma omp for 2023-01-11T21:41:24.1105619Z for(long i0=0; i0<16; i0+=1) 2023-01-11T21:41:24.1105681Z { 2023-01-11T21:41:24.1105732Z { 2023-01-11T21:41:24.1105793Z { 2023-01-11T21:41:24.1105883Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1105984Z auto tmp1 = static_cast(3.0); 2023-01-11T21:41:24.1106110Z auto tmp2 = (tmp1 != tmp1) ? 
tmp1 : std::max(tmp0, tmp1); 2023-01-11T21:41:24.1106210Z auto tmp3 = static_cast(3); 2023-01-11T21:41:24.1106300Z auto tmp4 = tmp0 >= tmp3; 2023-01-11T21:41:24.1106379Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1106451Z out_ptr1[i0] = tmp4; 2023-01-11T21:41:24.1106514Z } 2023-01-11T21:41:24.1106579Z } 2023-01-11T21:41:24.1106643Z } 2023-01-11T21:41:24.1106702Z } 2023-01-11T21:41:24.1106758Z } 2023-01-11T21:41:24.1106824Z ''') 2023-01-11T21:41:24.1106829Z 2023-01-11T21:41:24.1106843Z 2023-01-11T21:41:24.1106918Z async_compile.wait(globals()) 2023-01-11T21:41:24.1106987Z del async_compile 2023-01-11T21:41:24.1106991Z 2023-01-11T21:41:24.1107061Z def call(args): 2023-01-11T21:41:24.1107133Z primals_1, = args 2023-01-11T21:41:24.1107201Z args.clear() 2023-01-11T21:41:24.1107394Z buf0 = empty_strided((16, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1107576Z buf1 = empty_strided((16, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:24.1107733Z kernel_cpp_0(c_void_p(primals_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf1.data_ptr())) 2023-01-11T21:41:24.1107803Z del primals_1 2023-01-11T21:41:24.1107879Z return (buf0, buf1, ) 2023-01-11T21:41:24.1107884Z 2023-01-11T21:41:24.1107888Z 2023-01-11T21:41:24.1107964Z if __name__ == "__main__": 2023-01-11T21:41:24.1108076Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1108197Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1108394Z primals_1 = rand_strided((16, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1108503Z print_performance(lambda: call([primals_1])) 2023-01-11T21:41:24.1108508Z 2023-01-11T21:41:24.1108561Z ok (1.553s) 2023-01-11T21:41:24.1109033Z test_vertical_fusion1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1109161Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1109422Z [2023-01-11 21:39:29,069] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 484 2023-01-11T21:41:24.1109715Z [2023-01-11 21:39:30,613] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 484 2023-01-11T21:41:24.1109720Z 2023-01-11T21:41:24.1109813Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1109879Z import torch 2023-01-11T21:41:24.1109947Z import random 2023-01-11T21:41:24.1110058Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1110168Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1110172Z 2023-01-11T21:41:24.1110247Z aten = torch.ops.aten 2023-01-11T21:41:24.1110379Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1110470Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1110475Z 2023-01-11T21:41:24.1110479Z 2023-01-11T21:41:24.1110609Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1110845Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1110968Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1111067Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1111156Z const float* __restrict__ in_ptr2, 2023-01-11T21:41:24.1111251Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1111310Z { 2023-01-11T21:41:24.1111404Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1111466Z { 2023-01-11T21:41:24.1111536Z #pragma omp for 2023-01-11T21:41:24.1111617Z for(long i0=0; i0<41616; i0+=1) 2023-01-11T21:41:24.1111667Z { 2023-01-11T21:41:24.1111746Z for(long i1=0; i1<3; i1+=1) 2023-01-11T21:41:24.1111807Z { 2023-01-11T21:41:24.1111943Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (26*i0)); 2023-01-11T21:41:24.1112087Z auto tmp8 = at::vec::Vectorized::loadu(in_ptr1 + (8*i1) + (26*i0)); 2023-01-11T21:41:24.1112222Z auto tmp15 = at::vec::Vectorized::loadu(in_ptr2 + 8*i1); 2023-01-11T21:41:24.1112449Z auto tmp1 = at::vec::Vectorized(static_cast(-1.061519070296458e-11)); 2023-01-11T21:41:24.1112534Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1112748Z auto tmp3 = at::vec::Vectorized(static_cast(-1.988366587925593e-08)); 2023-01-11T21:41:24.1112833Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1112916Z auto tmp5 = tmp0 * tmp4; 2023-01-11T21:41:24.1113140Z auto tmp6 = at::vec::Vectorized(static_cast(-3.087032500374211e-07)); 2023-01-11T21:41:24.1113229Z auto tmp7 = tmp5 + tmp6; 2023-01-11T21:41:24.1113446Z auto tmp9 = at::vec::Vectorized(static_cast(1.55093272922008e-10)); 2023-01-11T21:41:24.1113530Z auto tmp10 = tmp8 * tmp9; 2023-01-11T21:41:24.1113624Z auto tmp11 = tmp7 + tmp10; 2023-01-11T21:41:24.1113714Z auto tmp12 = tmp11.reciprocal(); 2023-01-11T21:41:24.1113906Z auto tmp13 = at::vec::Vectorized(static_cast(1.0)); 2023-01-11T21:41:24.1113996Z auto tmp14 = tmp12 * tmp13; 2023-01-11T21:41:24.1114083Z auto tmp16 = tmp11 * tmp15; 2023-01-11T21:41:24.1114169Z auto tmp17 = tmp14 + tmp16; 2023-01-11T21:41:24.1114271Z tmp17.store(out_ptr0 + (8*i1) + (26*i0)); 2023-01-11T21:41:24.1114332Z } 2023-01-11T21:41:24.1114424Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.1114496Z for(long i1=24; i1<26; i1+=1) 2023-01-11T21:41:24.1114557Z { 2023-01-11T21:41:24.1114651Z auto tmp0 = in_ptr0[i1 + (26*i0)]; 
2023-01-11T21:41:24.1114743Z auto tmp8 = in_ptr1[i1 + (26*i0)]; 2023-01-11T21:41:24.1114827Z auto tmp15 = in_ptr2[i1]; 2023-01-11T21:41:24.1115042Z auto tmp1 = static_cast(-1.061519070296458e-11); 2023-01-11T21:41:24.1115126Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1115291Z auto tmp3 = static_cast(-1.988366587925593e-08); 2023-01-11T21:41:24.1115375Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1115460Z auto tmp5 = tmp0 * tmp4; 2023-01-11T21:41:24.1115635Z auto tmp6 = static_cast(-3.087032500374211e-07); 2023-01-11T21:41:24.1115719Z auto tmp7 = tmp5 + tmp6; 2023-01-11T21:41:24.1115893Z auto tmp9 = static_cast(1.55093272922008e-10); 2023-01-11T21:41:24.1115975Z auto tmp10 = tmp8 * tmp9; 2023-01-11T21:41:24.1116051Z auto tmp11 = tmp7 + tmp10; 2023-01-11T21:41:24.1116129Z auto tmp12 = 1 / tmp11; 2023-01-11T21:41:24.1116227Z auto tmp13 = static_cast(1.0); 2023-01-11T21:41:24.1116317Z auto tmp14 = tmp12 * tmp13; 2023-01-11T21:41:24.1116436Z auto tmp16 = tmp11 * tmp15; 2023-01-11T21:41:24.1116522Z auto tmp17 = tmp14 + tmp16; 2023-01-11T21:41:24.1116613Z out_ptr0[i1 + (26*i0)] = tmp17; 2023-01-11T21:41:24.1116663Z } 2023-01-11T21:41:24.1116723Z } 2023-01-11T21:41:24.1116781Z } 2023-01-11T21:41:24.1116836Z } 2023-01-11T21:41:24.1116918Z ''') 2023-01-11T21:41:24.1116924Z 2023-01-11T21:41:24.1116928Z 2023-01-11T21:41:24.1117014Z async_compile.wait(globals()) 2023-01-11T21:41:24.1117084Z del async_compile 2023-01-11T21:41:24.1117089Z 2023-01-11T21:41:24.1117146Z def call(args): 2023-01-11T21:41:24.1117227Z arg0_1, arg1_1, arg2_1 = args 2023-01-11T21:41:24.1117294Z args.clear() 2023-01-11T21:41:24.1117507Z buf0 = empty_strided((204, 204, 26), (5304, 26, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1117693Z kernel_cpp_0(c_void_p(arg1_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(arg2_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1117766Z del arg0_1 2023-01-11T21:41:24.1117829Z del arg1_1 2023-01-11T21:41:24.1117880Z del arg2_1 2023-01-11T21:41:24.1117948Z return (buf0, ) 2023-01-11T21:41:24.1117953Z 2023-01-11T21:41:24.1117957Z 2023-01-11T21:41:24.1118027Z if __name__ == "__main__": 2023-01-11T21:41:24.1118139Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1118258Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1118475Z arg0_1 = rand_strided((204, 204, 26), (5304, 26, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1118685Z arg1_1 = rand_strided((204, 204, 26), (5304, 26, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1118875Z arg2_1 = rand_strided((26, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1118991Z print_performance(lambda: call([arg0_1, arg1_1, arg2_1])) 2023-01-11T21:41:24.1118997Z 2023-01-11T21:41:24.1119050Z ok (1.651s) 2023-01-11T21:41:24.1119511Z test_views1_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1119642Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1119904Z [2023-01-11 21:39:30,666] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 485 2023-01-11T21:41:24.1120167Z [2023-01-11 21:39:32,178] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 485 2023-01-11T21:41:24.1120595Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1120764Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1121017Z [2023-01-11 21:39:32,198] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 486 2023-01-11T21:41:24.1121279Z [2023-01-11 21:39:33,736] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 486 2023-01-11T21:41:24.1121703Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1121827Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1122109Z [2023-01-11 21:39:33,754] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 487 2023-01-11T21:41:24.1122389Z [2023-01-11 21:39:35,267] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 487 2023-01-11T21:41:24.1123010Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1123235Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1123733Z [2023-01-11 21:39:35,288] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 488 2023-01-11T21:41:24.1124035Z [2023-01-11 21:39:36,911] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 488 2023-01-11T21:41:24.1124459Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1124585Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1124839Z [2023-01-11 21:39:36,934] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 489 2023-01-11T21:41:24.1124846Z 2023-01-11T21:41:24.1124939Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1125007Z import torch 2023-01-11T21:41:24.1125065Z import random 2023-01-11T21:41:24.1125176Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1125295Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1125301Z 2023-01-11T21:41:24.1125380Z aten = torch.ops.aten 2023-01-11T21:41:24.1125511Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1125599Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1125604Z 2023-01-11T21:41:24.1125609Z 2023-01-11T21:41:24.1125737Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1125945Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1126051Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1126149Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1126246Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1126305Z { 2023-01-11T21:41:24.1126400Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1126458Z { 2023-01-11T21:41:24.1126531Z #pragma omp for 2023-01-11T21:41:24.1126600Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.1126659Z { 2023-01-11T21:41:24.1126796Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1126969Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1127055Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1127144Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1127204Z } 2023-01-11T21:41:24.1127285Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1127366Z for(long i0=32; i0<35; i0+=1) 2023-01-11T21:41:24.1127426Z { 2023-01-11T21:41:24.1127507Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1127586Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1127664Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1127740Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1127789Z } 2023-01-11T21:41:24.1127849Z } 2023-01-11T21:41:24.1127905Z } 2023-01-11T21:41:24.1127982Z ''') 2023-01-11T21:41:24.1127987Z 2023-01-11T21:41:24.1127991Z 2023-01-11T21:41:24.1128078Z async_compile.wait(globals()) 2023-01-11T21:41:24.1128180Z del async_compile 2023-01-11T21:41:24.1128185Z 2023-01-11T21:41:24.1128254Z def call(args): 2023-01-11T21:41:24.1128326Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1128383Z args.clear() 2023-01-11T21:41:24.1128579Z buf0 = empty_strided((5, 7), (7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1128742Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1128810Z del arg0_1 2023-01-11T21:41:24.1128874Z del arg1_1 2023-01-11T21:41:24.1128943Z return (buf0, ) 2023-01-11T21:41:24.1128949Z 2023-01-11T21:41:24.1128953Z 2023-01-11T21:41:24.1129027Z if __name__ == "__main__": 2023-01-11T21:41:24.1129127Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1129246Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1129440Z arg0_1 = rand_strided((35, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1129637Z arg1_1 = rand_strided((5, 7), (7, 1), 
device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1129749Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1129754Z 2023-01-11T21:41:24.1129758Z 2023-01-11T21:41:24.1129849Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1129915Z import torch 2023-01-11T21:41:24.1129981Z import random 2023-01-11T21:41:24.1130081Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1130198Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1130203Z 2023-01-11T21:41:24.1130281Z aten = torch.ops.aten 2023-01-11T21:41:24.1130410Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1130498Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1130503Z 2023-01-11T21:41:24.1130507Z 2023-01-11T21:41:24.1130637Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1130840Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1130962Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1131052Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1131148Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1131206Z { 2023-01-11T21:41:24.1131302Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1131361Z { 2023-01-11T21:41:24.1131435Z #pragma omp for 2023-01-11T21:41:24.1131515Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.1131564Z { 2023-01-11T21:41:24.1131694Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1131822Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1131948Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1132030Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1132109Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1132200Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1132282Z } 2023-01-11T21:41:24.1132376Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1132456Z for(long i0=32; i0<35; i0+=1) 2023-01-11T21:41:24.1132518Z { 2023-01-11T21:41:24.1132596Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1132673Z auto tmp3 = in_ptr1[i0]; 2023-01-11T21:41:24.1132768Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.1132840Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1132918Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1132995Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.1133060Z } 2023-01-11T21:41:24.1133121Z } 2023-01-11T21:41:24.1133182Z } 2023-01-11T21:41:24.1133261Z ''') 2023-01-11T21:41:24.1133266Z 2023-01-11T21:41:24.1133270Z 2023-01-11T21:41:24.1133345Z async_compile.wait(globals()) 2023-01-11T21:41:24.1133415Z del async_compile 2023-01-11T21:41:24.1133420Z 2023-01-11T21:41:24.1133490Z def call(args): 2023-01-11T21:41:24.1133596Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1133669Z args.clear() 2023-01-11T21:41:24.1133862Z buf0 = empty_strided((5, 7), (7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1134023Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1134089Z del arg0_1 2023-01-11T21:41:24.1134142Z del arg1_1 2023-01-11T21:41:24.1134212Z return (buf0, ) 2023-01-11T21:41:24.1134217Z 2023-01-11T21:41:24.1134221Z 2023-01-11T21:41:24.1134299Z if __name__ == "__main__": 2023-01-11T21:41:24.1134413Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1134536Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1134730Z arg0_1 = rand_strided((35, ), (1, ), device='cpu', dtype=torch.float32) 
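# A note on the loop structure of the kernel above (a hedged reading of the generated
# code, not taken from the test source): the flat 35-element input is processed with
# 8-lane float vectors (the loads step by 8), so 4 full vector iterations cover
# elements 0..31 and the "#pragma omp simd" tail loop handles indices 32..34.
# Minimal arithmetic sketch of that split:
n, lanes = 35, 8
vec_iters, tail_start = n // lanes, (n // lanes) * lanes
assert (vec_iters, tail_start) == (4, 32)   # matches for(long i0=0; i0<4) and for(long i0=32; i0<35)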
2023-01-11T21:41:24.1134921Z arg1_1 = rand_strided((5, 7), (7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1135022Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1135045Z 2023-01-11T21:41:24.1135049Z 2023-01-11T21:41:24.1135129Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1135199Z import torch 2023-01-11T21:41:24.1135269Z import random 2023-01-11T21:41:24.1135387Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1135505Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1135510Z 2023-01-11T21:41:24.1135588Z aten = torch.ops.aten 2023-01-11T21:41:24.1135718Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1135797Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1135815Z 2023-01-11T21:41:24.1135819Z 2023-01-11T21:41:24.1135938Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1136140Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1136257Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1136361Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1136462Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1136524Z { 2023-01-11T21:41:24.1136622Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1136671Z { 2023-01-11T21:41:24.1136748Z #pragma omp for 2023-01-11T21:41:24.1136830Z for(long i0=0; i0<630; i0+=1) 2023-01-11T21:41:24.1136893Z { 2023-01-11T21:41:24.1137027Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1137154Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1137237Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1137313Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1137376Z } 2023-01-11T21:41:24.1137468Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1137553Z for(long i0=5040; i0<5040; i0+=1) 2023-01-11T21:41:24.1137615Z { 2023-01-11T21:41:24.1137698Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1137845Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1137913Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1137991Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1138058Z } 2023-01-11T21:41:24.1138117Z } 2023-01-11T21:41:24.1138175Z } 2023-01-11T21:41:24.1138253Z ''') 2023-01-11T21:41:24.1138258Z 2023-01-11T21:41:24.1138263Z 2023-01-11T21:41:24.1138353Z async_compile.wait(globals()) 2023-01-11T21:41:24.1138412Z del async_compile 2023-01-11T21:41:24.1138417Z 2023-01-11T21:41:24.1138486Z def call(args): 2023-01-11T21:41:24.1138555Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1138624Z args.clear() 2023-01-11T21:41:24.1138858Z buf0 = empty_strided((2, 3, 4, 5, 6, 7), (2520, 840, 210, 42, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1139018Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1139082Z del arg0_1 2023-01-11T21:41:24.1139138Z del arg1_1 2023-01-11T21:41:24.1139236Z return (buf0, ) 2023-01-11T21:41:24.1139241Z 2023-01-11T21:41:24.1139245Z 2023-01-11T21:41:24.1139319Z if __name__ == "__main__": 2023-01-11T21:41:24.1139432Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1139551Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1139745Z arg0_1 = rand_strided((5040, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1139977Z arg1_1 = rand_strided((2, 3, 4, 5, 6, 7), (2520, 840, 210, 42, 7, 1), device='cpu', dtype=torch.float32) 
2023-01-11T21:41:24.1140089Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1140094Z 2023-01-11T21:41:24.1140098Z 2023-01-11T21:41:24.1140185Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1140240Z import torch 2023-01-11T21:41:24.1140306Z import random 2023-01-11T21:41:24.1140418Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1140536Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1140546Z 2023-01-11T21:41:24.1140622Z aten = torch.ops.aten 2023-01-11T21:41:24.1140751Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1140838Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1140843Z 2023-01-11T21:41:24.1140847Z 2023-01-11T21:41:24.1140976Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1141166Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1141286Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1141387Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1141484Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1141544Z { 2023-01-11T21:41:24.1141638Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1141698Z { 2023-01-11T21:41:24.1141760Z #pragma omp for 2023-01-11T21:41:24.1141840Z for(long i0=0; i0<630; i0+=1) 2023-01-11T21:41:24.1141905Z { 2023-01-11T21:41:24.1142036Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1142163Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1142294Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1142522Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1142591Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1142680Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1142742Z } 2023-01-11T21:41:24.1142835Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1142919Z for(long i0=5040; i0<5040; i0+=1) 2023-01-11T21:41:24.1142980Z { 2023-01-11T21:41:24.1143061Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1143129Z auto tmp3 = in_ptr1[i0]; 2023-01-11T21:41:24.1143225Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.1143306Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1143444Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1143524Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.1143586Z } 2023-01-11T21:41:24.1143646Z } 2023-01-11T21:41:24.1143692Z } 2023-01-11T21:41:24.1143770Z ''') 2023-01-11T21:41:24.1143775Z 2023-01-11T21:41:24.1143779Z 2023-01-11T21:41:24.1143867Z async_compile.wait(globals()) 2023-01-11T21:41:24.1143938Z del async_compile 2023-01-11T21:41:24.1143944Z 2023-01-11T21:41:24.1144013Z def call(args): 2023-01-11T21:41:24.1144085Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1144153Z args.clear() 2023-01-11T21:41:24.1144373Z buf0 = empty_strided((2, 3, 4, 5, 6, 7), (2520, 840, 210, 42, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1144530Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1144596Z del arg0_1 2023-01-11T21:41:24.1144660Z del arg1_1 2023-01-11T21:41:24.1144731Z return (buf0, ) 2023-01-11T21:41:24.1144773Z 2023-01-11T21:41:24.1144778Z 2023-01-11T21:41:24.1144852Z if __name__ == "__main__": 2023-01-11T21:41:24.1144965Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1145086Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1145271Z arg0_1 = rand_strided((5040, ), (1, ), device='cpu', dtype=torch.float32) 
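# Each dump like the one above is a self-contained repro script: rand_strided builds
# inputs with the exact sizes/strides the graph was traced with, and print_performance
# times call(). A hedged sketch of how a similar pointwise graph could be sent through
# the same backend from eager code (the function name and shapes below are illustrative
# assumptions, not taken from the test):
import torch
import torch._dynamo as dynamo

def add_one_then_broadcast(a, b):
    # roughly what the preceding kernel computes: (a + 1), reinterpreted to b's shape, plus b
    return (a + 1).view(b.shape) + b

compiled = dynamo.optimize("inductor")(add_one_then_broadcast)
out = compiled(torch.randn(5040), torch.randn(2, 3, 4, 5, 6, 7))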
2023-01-11T21:41:24.1145499Z arg1_1 = rand_strided((2, 3, 4, 5, 6, 7), (2520, 840, 210, 42, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1145614Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1145619Z 2023-01-11T21:41:24.1145890Z [2023-01-11 21:39:36,944] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 489 2023-01-11T21:41:24.1146321Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1146448Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1146709Z [2023-01-11 21:39:36,965] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 490 2023-01-11T21:41:24.1146970Z [2023-01-11 21:39:36,977] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 490 2023-01-11T21:41:24.1147394Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1147518Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1147779Z [2023-01-11 21:39:36,994] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 491 2023-01-11T21:41:24.1148030Z [2023-01-11 21:39:38,505] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 491 2023-01-11T21:41:24.1148452Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1148576Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1148827Z [2023-01-11 21:39:38,525] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 492 2023-01-11T21:41:24.1149087Z [2023-01-11 21:39:40,035] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 492 2023-01-11T21:41:24.1149537Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1149662Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1149915Z [2023-01-11 21:39:40,053] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 493 2023-01-11T21:41:24.1149920Z 2023-01-11T21:41:24.1150010Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1150076Z import torch 2023-01-11T21:41:24.1150142Z import random 2023-01-11T21:41:24.1150243Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1150358Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1150367Z 2023-01-11T21:41:24.1150473Z aten = torch.ops.aten 2023-01-11T21:41:24.1150608Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1150698Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1150703Z 2023-01-11T21:41:24.1150707Z 2023-01-11T21:41:24.1150838Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1151041Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1151161Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1151254Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1151352Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1151412Z { 2023-01-11T21:41:24.1151507Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1151567Z { 2023-01-11T21:41:24.1151643Z #pragma omp for 2023-01-11T21:41:24.1151723Z for(long i0=0; i0<630; i0+=1) 2023-01-11T21:41:24.1151772Z { 2023-01-11T21:41:24.1151905Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1152032Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1152117Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1152204Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1152264Z } 2023-01-11T21:41:24.1152356Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1152429Z for(long i0=5040; i0<5040; i0+=1) 2023-01-11T21:41:24.1152489Z { 2023-01-11T21:41:24.1152569Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1152647Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1152726Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1152802Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1152865Z } 2023-01-11T21:41:24.1152912Z } 2023-01-11T21:41:24.1152969Z } 2023-01-11T21:41:24.1153045Z ''') 2023-01-11T21:41:24.1153050Z 2023-01-11T21:41:24.1153055Z 2023-01-11T21:41:24.1153145Z async_compile.wait(globals()) 2023-01-11T21:41:24.1153216Z del async_compile 2023-01-11T21:41:24.1153221Z 2023-01-11T21:41:24.1153291Z def call(args): 2023-01-11T21:41:24.1153362Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1153419Z args.clear() 2023-01-11T21:41:24.1153653Z buf0 = empty_strided((2, 3, 4, 5, 6, 7), (2520, 840, 210, 42, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1153873Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1153943Z del arg0_1 2023-01-11T21:41:24.1154008Z del arg1_1 2023-01-11T21:41:24.1154077Z return (buf0, ) 2023-01-11T21:41:24.1154082Z 2023-01-11T21:41:24.1154087Z 2023-01-11T21:41:24.1154162Z if __name__ == "__main__": 2023-01-11T21:41:24.1154276Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1154385Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1154604Z arg0_1 = rand_strided((6, 4, 5, 42), (840, 210, 42, 1), device='cpu', dtype=torch.float32) 
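# Both inputs of the wrapper above hold 5040 contiguous elements (6*4*5*42 == 2*3*4*5*6*7),
# so the view is metadata-only and the generated kernel can add them in one flat loop of
# 630 eight-element vector iterations. Quick check of that arithmetic (illustrative only):
import math
assert math.prod((6, 4, 5, 42)) == math.prod((2, 3, 4, 5, 6, 7)) == 5040
assert 5040 // 8 == 630   # matches for(long i0=0; i0<630; i0+=1)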
2023-01-11T21:41:24.1154874Z arg1_1 = rand_strided((2, 3, 4, 5, 6, 7), (2520, 840, 210, 42, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1154986Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1154992Z 2023-01-11T21:41:24.1154996Z 2023-01-11T21:41:24.1155086Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1155153Z import torch 2023-01-11T21:41:24.1155220Z import random 2023-01-11T21:41:24.1155332Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1155438Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1155443Z 2023-01-11T21:41:24.1155519Z aten = torch.ops.aten 2023-01-11T21:41:24.1155651Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1155740Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1155745Z 2023-01-11T21:41:24.1155749Z 2023-01-11T21:41:24.1155881Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1156113Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1156231Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1156333Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1156419Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1156477Z { 2023-01-11T21:41:24.1156572Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1156630Z { 2023-01-11T21:41:24.1156705Z #pragma omp for 2023-01-11T21:41:24.1156786Z for(long i0=0; i0<630; i0+=1) 2023-01-11T21:41:24.1156852Z { 2023-01-11T21:41:24.1156973Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1157098Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1157228Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1157312Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1157396Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1157486Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1157545Z } 2023-01-11T21:41:24.1157628Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1157711Z for(long i0=5040; i0<5040; i0+=1) 2023-01-11T21:41:24.1157769Z { 2023-01-11T21:41:24.1157849Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1157928Z auto tmp3 = in_ptr1[i0]; 2023-01-11T21:41:24.1158023Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.1158104Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1158172Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1158247Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.1158308Z } 2023-01-11T21:41:24.1158367Z } 2023-01-11T21:41:24.1158425Z } 2023-01-11T21:41:24.1158501Z ''') 2023-01-11T21:41:24.1158506Z 2023-01-11T21:41:24.1158510Z 2023-01-11T21:41:24.1158597Z async_compile.wait(globals()) 2023-01-11T21:41:24.1158660Z del async_compile 2023-01-11T21:41:24.1158676Z 2023-01-11T21:41:24.1158733Z def call(args): 2023-01-11T21:41:24.1158805Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1158870Z args.clear() 2023-01-11T21:41:24.1159104Z buf0 = empty_strided((2, 3, 4, 5, 6, 7), (2520, 840, 210, 42, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1159265Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1159335Z del arg0_1 2023-01-11T21:41:24.1159398Z del arg1_1 2023-01-11T21:41:24.1159456Z return (buf0, ) 2023-01-11T21:41:24.1159462Z 2023-01-11T21:41:24.1159467Z 2023-01-11T21:41:24.1159544Z if __name__ == "__main__": 2023-01-11T21:41:24.1159656Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1159777Z from torch._inductor.utils import 
print_performance 2023-01-11T21:41:24.1159993Z arg0_1 = rand_strided((6, 4, 5, 42), (840, 210, 42, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1160267Z arg1_1 = rand_strided((2, 3, 4, 5, 6, 7), (2520, 840, 210, 42, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1160380Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1160384Z 2023-01-11T21:41:24.1160389Z 2023-01-11T21:41:24.1160479Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1160535Z import torch 2023-01-11T21:41:24.1160604Z import random 2023-01-11T21:41:24.1160715Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1160834Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1160838Z 2023-01-11T21:41:24.1160915Z aten = torch.ops.aten 2023-01-11T21:41:24.1161046Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1161134Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1161139Z 2023-01-11T21:41:24.1161144Z 2023-01-11T21:41:24.1161276Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1161492Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1161610Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1161711Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1161810Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1161870Z { 2023-01-11T21:41:24.1161963Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1162020Z { 2023-01-11T21:41:24.1162083Z #pragma omp for 2023-01-11T21:41:24.1162162Z for(long i0=0; i0<125; i0+=1) 2023-01-11T21:41:24.1162222Z { 2023-01-11T21:41:24.1162352Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1162480Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1162563Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1162651Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1162700Z } 2023-01-11T21:41:24.1162795Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1162881Z for(long i0=1000; i0<1000; i0+=1) 2023-01-11T21:41:24.1162944Z { 2023-01-11T21:41:24.1163025Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1163104Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1163181Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1163247Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1163307Z } 2023-01-11T21:41:24.1163366Z } 2023-01-11T21:41:24.1163424Z } 2023-01-11T21:41:24.1163499Z ''') 2023-01-11T21:41:24.1163504Z 2023-01-11T21:41:24.1163509Z 2023-01-11T21:41:24.1163596Z async_compile.wait(globals()) 2023-01-11T21:41:24.1163667Z del async_compile 2023-01-11T21:41:24.1163672Z 2023-01-11T21:41:24.1163729Z def call(args): 2023-01-11T21:41:24.1163800Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1163868Z args.clear() 2023-01-11T21:41:24.1164074Z buf0 = empty_strided((10, 5, 20), (100, 20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1164238Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1164304Z del arg0_1 2023-01-11T21:41:24.1164368Z del arg1_1 2023-01-11T21:41:24.1164426Z return (buf0, ) 2023-01-11T21:41:24.1164440Z 2023-01-11T21:41:24.1164444Z 2023-01-11T21:41:24.1164505Z if __name__ == "__main__": 2023-01-11T21:41:24.1164619Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1164741Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1164941Z arg0_1 = rand_strided((50, 20), (20, 1), device='cpu', dtype=torch.float32) 
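# Hedged reading of the wrapper above: arg0_1 is a contiguous (50, 20) tensor that the
# traced graph reinterprets as (10, 5, 20) and adds elementwise to arg1_1, which is what
# the single fused add loop over 1000 elements implements. A quick eager sanity check of
# that interpretation (names and the check itself are illustrative, not part of the test):
import torch
a = torch.randn(50, 20)
b = torch.randn(10, 5, 20)
ref = a.view(10, 5, 20) + b
assert ref.shape == (10, 5, 20)   # matches buf0 = empty_strided((10, 5, 20), (100, 20, 1), ...)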
2023-01-11T21:41:24.1165153Z arg1_1 = rand_strided((10, 5, 20), (100, 20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1165268Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1165272Z 2023-01-11T21:41:24.1165276Z 2023-01-11T21:41:24.1165369Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1165443Z import torch 2023-01-11T21:41:24.1165535Z import random 2023-01-11T21:41:24.1165652Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1165771Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1165775Z 2023-01-11T21:41:24.1165851Z aten = torch.ops.aten 2023-01-11T21:41:24.1165986Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1166076Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1166080Z 2023-01-11T21:41:24.1166084Z 2023-01-11T21:41:24.1166217Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1166419Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1166525Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1166628Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1166728Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1166789Z { 2023-01-11T21:41:24.1166886Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1166978Z { 2023-01-11T21:41:24.1167057Z #pragma omp for 2023-01-11T21:41:24.1167127Z for(long i0=0; i0<125; i0+=1) 2023-01-11T21:41:24.1167188Z { 2023-01-11T21:41:24.1167320Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1167448Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1167578Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1167661Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1167742Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1167818Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1167878Z } 2023-01-11T21:41:24.1167972Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1168058Z for(long i0=1000; i0<1000; i0+=1) 2023-01-11T21:41:24.1168121Z { 2023-01-11T21:41:24.1168204Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1168295Z auto tmp3 = in_ptr1[i0]; 2023-01-11T21:41:24.1168381Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.1168465Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1168546Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1168626Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.1168686Z } 2023-01-11T21:41:24.1168746Z } 2023-01-11T21:41:24.1168805Z } 2023-01-11T21:41:24.1168873Z ''') 2023-01-11T21:41:24.1168878Z 2023-01-11T21:41:24.1168882Z 2023-01-11T21:41:24.1168970Z async_compile.wait(globals()) 2023-01-11T21:41:24.1169043Z del async_compile 2023-01-11T21:41:24.1169048Z 2023-01-11T21:41:24.1169119Z def call(args): 2023-01-11T21:41:24.1169190Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1169259Z args.clear() 2023-01-11T21:41:24.1169470Z buf0 = empty_strided((10, 5, 20), (100, 20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1169629Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1169688Z del arg0_1 2023-01-11T21:41:24.1169754Z del arg1_1 2023-01-11T21:41:24.1169826Z return (buf0, ) 2023-01-11T21:41:24.1169831Z 2023-01-11T21:41:24.1169835Z 2023-01-11T21:41:24.1169906Z if __name__ == "__main__": 2023-01-11T21:41:24.1170020Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1170139Z from torch._inductor.utils import print_performance 
2023-01-11T21:41:24.1170341Z arg0_1 = rand_strided((50, 20), (20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1170537Z arg1_1 = rand_strided((10, 5, 20), (100, 20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1170651Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1170656Z 2023-01-11T21:41:24.1170925Z [2023-01-11 21:39:41,557] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 493 2023-01-11T21:41:24.1171359Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1171511Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1171768Z [2023-01-11 21:39:41,578] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 494 2023-01-11T21:41:24.1172033Z [2023-01-11 21:39:43,112] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 494 2023-01-11T21:41:24.1172456Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1172609Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1172863Z [2023-01-11 21:39:43,129] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 495 2023-01-11T21:41:24.1173123Z [2023-01-11 21:39:43,138] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 495 2023-01-11T21:41:24.1173549Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1173661Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1173914Z [2023-01-11 21:39:43,156] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 496 2023-01-11T21:41:24.1174178Z [2023-01-11 21:39:43,177] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 496 2023-01-11T21:41:24.1174602Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1174724Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1174975Z [2023-01-11 21:39:43,194] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 497 2023-01-11T21:41:24.1174980Z 2023-01-11T21:41:24.1175072Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1175139Z import torch 2023-01-11T21:41:24.1175207Z import random 2023-01-11T21:41:24.1175323Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1175435Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1175440Z 2023-01-11T21:41:24.1175522Z aten = torch.ops.aten 2023-01-11T21:41:24.1175655Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1175745Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1175751Z 2023-01-11T21:41:24.1175755Z 2023-01-11T21:41:24.1175883Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1176088Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1176204Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1176308Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1176395Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1176455Z { 2023-01-11T21:41:24.1176551Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1176612Z { 2023-01-11T21:41:24.1176691Z #pragma omp for 2023-01-11T21:41:24.1176801Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:24.1176861Z { 2023-01-11T21:41:24.1176981Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1177112Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1177193Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1177285Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1177345Z } 2023-01-11T21:41:24.1177437Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1177515Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:24.1177563Z { 2023-01-11T21:41:24.1177642Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1177719Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1177797Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1177875Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1177935Z } 2023-01-11T21:41:24.1177992Z } 2023-01-11T21:41:24.1178042Z } 2023-01-11T21:41:24.1178142Z ''') 2023-01-11T21:41:24.1178148Z 2023-01-11T21:41:24.1178152Z 2023-01-11T21:41:24.1178243Z async_compile.wait(globals()) 2023-01-11T21:41:24.1178314Z del async_compile 2023-01-11T21:41:24.1178319Z 2023-01-11T21:41:24.1178388Z def call(args): 2023-01-11T21:41:24.1178461Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1178531Z args.clear() 2023-01-11T21:41:24.1178716Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1178876Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1178943Z del arg0_1 2023-01-11T21:41:24.1179010Z del arg1_1 2023-01-11T21:41:24.1179079Z return (buf0, ) 2023-01-11T21:41:24.1179084Z 2023-01-11T21:41:24.1179088Z 2023-01-11T21:41:24.1179162Z if __name__ == "__main__": 2023-01-11T21:41:24.1179274Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1179396Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1179595Z arg0_1 = rand_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1179786Z arg1_1 = rand_strided((10, ), (1, ), 
device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1179898Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1179903Z 2023-01-11T21:41:24.1179907Z 2023-01-11T21:41:24.1179998Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1180064Z import torch 2023-01-11T21:41:24.1180131Z import random 2023-01-11T21:41:24.1180251Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1180357Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1180373Z 2023-01-11T21:41:24.1180437Z aten = torch.ops.aten 2023-01-11T21:41:24.1180567Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1180656Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1180661Z 2023-01-11T21:41:24.1180666Z 2023-01-11T21:41:24.1180797Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1181008Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1181124Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1181225Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1181322Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1181369Z { 2023-01-11T21:41:24.1181463Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1181523Z { 2023-01-11T21:41:24.1181598Z #pragma omp for 2023-01-11T21:41:24.1181674Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:24.1181735Z { 2023-01-11T21:41:24.1181853Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1181983Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1182114Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1182231Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1182316Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1182561Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1182651Z } 2023-01-11T21:41:24.1182745Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1182812Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:24.1182872Z { 2023-01-11T21:41:24.1182952Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1183031Z auto tmp3 = in_ptr1[i0]; 2023-01-11T21:41:24.1183127Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.1183205Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1183286Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1183352Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.1183411Z } 2023-01-11T21:41:24.1183470Z } 2023-01-11T21:41:24.1183527Z } 2023-01-11T21:41:24.1183610Z ''') 2023-01-11T21:41:24.1183615Z 2023-01-11T21:41:24.1183619Z 2023-01-11T21:41:24.1183762Z async_compile.wait(globals()) 2023-01-11T21:41:24.1183832Z del async_compile 2023-01-11T21:41:24.1183837Z 2023-01-11T21:41:24.1183894Z def call(args): 2023-01-11T21:41:24.1183965Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1184036Z args.clear() 2023-01-11T21:41:24.1184232Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1184392Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1184458Z del arg0_1 2023-01-11T21:41:24.1184527Z del arg1_1 2023-01-11T21:41:24.1184585Z return (buf0, ) 2023-01-11T21:41:24.1184590Z 2023-01-11T21:41:24.1184607Z 2023-01-11T21:41:24.1184670Z if __name__ == "__main__": 2023-01-11T21:41:24.1184783Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1184902Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1185113Z arg0_1 = rand_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 
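# Note on the scalar tail loops in these kernels (a hedged observation from the generated
# code): when the flattened length is an exact multiple of the 8-lane vector width, the
# remainder loop has equal bounds (e.g. for(long i0=1000; i0<1000)) and runs zero times;
# for the 10-element kernel above it covers indices 8 and 9. Illustration:
for n in (10, 1000, 5040):
    start = (n // 8) * 8
    print(n, "tail indices:", list(range(start, n)))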
2023-01-11T21:41:24.1185311Z arg1_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1185424Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1185430Z 2023-01-11T21:41:24.1185434Z 2023-01-11T21:41:24.1185527Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1185583Z import torch 2023-01-11T21:41:24.1185651Z import random 2023-01-11T21:41:24.1185764Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1185882Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1185887Z 2023-01-11T21:41:24.1185961Z aten = torch.ops.aten 2023-01-11T21:41:24.1186091Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1186184Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1186188Z 2023-01-11T21:41:24.1186193Z 2023-01-11T21:41:24.1186324Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1186513Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1186632Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1186733Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1186831Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1186889Z { 2023-01-11T21:41:24.1186985Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1187045Z { 2023-01-11T21:41:24.1187108Z #pragma omp for 2023-01-11T21:41:24.1187188Z for(long i0=0; i0<125; i0+=1) 2023-01-11T21:41:24.1187250Z { 2023-01-11T21:41:24.1187382Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1187509Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1187591Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1187678Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1187739Z } 2023-01-11T21:41:24.1187820Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1187951Z for(long i0=1000; i0<1000; i0+=1) 2023-01-11T21:41:24.1188013Z { 2023-01-11T21:41:24.1188095Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1188175Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1188255Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1188321Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1188380Z } 2023-01-11T21:41:24.1188440Z } 2023-01-11T21:41:24.1188498Z } 2023-01-11T21:41:24.1188577Z ''') 2023-01-11T21:41:24.1188583Z 2023-01-11T21:41:24.1188587Z 2023-01-11T21:41:24.1188674Z async_compile.wait(globals()) 2023-01-11T21:41:24.1188744Z del async_compile 2023-01-11T21:41:24.1188749Z 2023-01-11T21:41:24.1188815Z def call(args): 2023-01-11T21:41:24.1188878Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1188947Z args.clear() 2023-01-11T21:41:24.1189148Z buf0 = empty_strided((10, 100), (100, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1189336Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1189406Z del arg0_1 2023-01-11T21:41:24.1189471Z del arg1_1 2023-01-11T21:41:24.1189541Z return (buf0, ) 2023-01-11T21:41:24.1189546Z 2023-01-11T21:41:24.1189550Z 2023-01-11T21:41:24.1189611Z if __name__ == "__main__": 2023-01-11T21:41:24.1189722Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1189842Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1190068Z arg0_1 = rand_strided((10, 1, 10, 1, 10), (100, 100, 10, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1190268Z arg1_1 = rand_strided((10, 100), (100, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1190382Z 
print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1190387Z 2023-01-11T21:41:24.1190391Z 2023-01-11T21:41:24.1190483Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1190549Z import torch 2023-01-11T21:41:24.1190605Z import random 2023-01-11T21:41:24.1190724Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1190841Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1190846Z 2023-01-11T21:41:24.1190922Z aten = torch.ops.aten 2023-01-11T21:41:24.1191051Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1191139Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1191144Z 2023-01-11T21:41:24.1191149Z 2023-01-11T21:41:24.1191277Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1191481Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1191586Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1191687Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1191782Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1191842Z { 2023-01-11T21:41:24.1191937Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1192000Z { 2023-01-11T21:41:24.1192075Z #pragma omp for 2023-01-11T21:41:24.1192143Z for(long i0=0; i0<125; i0+=1) 2023-01-11T21:41:24.1192205Z { 2023-01-11T21:41:24.1192334Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1192460Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1192587Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1192670Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1192749Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1192839Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1192889Z } 2023-01-11T21:41:24.1192980Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1193063Z for(long i0=1000; i0<1000; i0+=1) 2023-01-11T21:41:24.1193129Z { 2023-01-11T21:41:24.1193210Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1193289Z auto tmp3 = in_ptr1[i0]; 2023-01-11T21:41:24.1193418Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.1193489Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1193567Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1193645Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.1193704Z } 2023-01-11T21:41:24.1193825Z } 2023-01-11T21:41:24.1193889Z } 2023-01-11T21:41:24.1193957Z ''') 2023-01-11T21:41:24.1193962Z 2023-01-11T21:41:24.1193978Z 2023-01-11T21:41:24.1194055Z async_compile.wait(globals()) 2023-01-11T21:41:24.1194125Z del async_compile 2023-01-11T21:41:24.1194132Z 2023-01-11T21:41:24.1194199Z def call(args): 2023-01-11T21:41:24.1194271Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1194340Z args.clear() 2023-01-11T21:41:24.1194541Z buf0 = empty_strided((10, 100), (100, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1194702Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1194762Z del arg0_1 2023-01-11T21:41:24.1194859Z del arg1_1 2023-01-11T21:41:24.1194933Z return (buf0, ) 2023-01-11T21:41:24.1194938Z 2023-01-11T21:41:24.1194941Z 2023-01-11T21:41:24.1195016Z if __name__ == "__main__": 2023-01-11T21:41:24.1195128Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1195249Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1195480Z arg0_1 = rand_strided((10, 1, 10, 1, 10), (100, 100, 10, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1195680Z arg1_1 = 
rand_strided((10, 100), (100, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1195781Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1195786Z 2023-01-11T21:41:24.1196055Z [2023-01-11 21:39:44,718] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 497 2023-01-11T21:41:24.1196483Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1196610Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1196868Z [2023-01-11 21:39:44,738] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 498 2023-01-11T21:41:24.1197129Z [2023-01-11 21:39:46,284] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 498 2023-01-11T21:41:24.1197556Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1197685Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1197943Z [2023-01-11 21:39:46,301] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 499 2023-01-11T21:41:24.1198205Z [2023-01-11 21:39:46,309] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 499 2023-01-11T21:41:24.1198632Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1198756Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1198998Z [2023-01-11 21:39:46,326] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 500 2023-01-11T21:41:24.1199306Z [2023-01-11 21:39:46,339] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 500 2023-01-11T21:41:24.1199734Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1199858Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1200113Z [2023-01-11 21:39:46,355] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 501 2023-01-11T21:41:24.1200118Z 2023-01-11T21:41:24.1200215Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1200284Z import torch 2023-01-11T21:41:24.1200355Z import random 2023-01-11T21:41:24.1200470Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1200610Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1200617Z 2023-01-11T21:41:24.1200698Z aten = torch.ops.aten 2023-01-11T21:41:24.1200832Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1200927Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1200933Z 2023-01-11T21:41:24.1200937Z 2023-01-11T21:41:24.1201071Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1201273Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1201394Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1201497Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1201582Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1201643Z { 2023-01-11T21:41:24.1201743Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1201806Z { 2023-01-11T21:41:24.1201884Z #pragma omp for 2023-01-11T21:41:24.1201973Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.1202036Z { 2023-01-11T21:41:24.1202157Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1202288Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1202373Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1202462Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1202524Z } 2023-01-11T21:41:24.1202617Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1202697Z for(long i0=16; i0<16; i0+=1) 2023-01-11T21:41:24.1202748Z { 2023-01-11T21:41:24.1202831Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1202914Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1202996Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1203076Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1203138Z } 2023-01-11T21:41:24.1203198Z } 2023-01-11T21:41:24.1203246Z } 2023-01-11T21:41:24.1203326Z ''') 2023-01-11T21:41:24.1203334Z 2023-01-11T21:41:24.1203339Z 2023-01-11T21:41:24.1203428Z async_compile.wait(globals()) 2023-01-11T21:41:24.1203500Z del async_compile 2023-01-11T21:41:24.1203505Z 2023-01-11T21:41:24.1203575Z def call(args): 2023-01-11T21:41:24.1203647Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1203719Z args.clear() 2023-01-11T21:41:24.1203902Z buf0 = empty_strided((4, 4), (4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1204066Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1204135Z del arg0_1 2023-01-11T21:41:24.1204200Z del arg1_1 2023-01-11T21:41:24.1204271Z return (buf0, ) 2023-01-11T21:41:24.1204276Z 2023-01-11T21:41:24.1204280Z 2023-01-11T21:41:24.1204357Z if __name__ == "__main__": 2023-01-11T21:41:24.1204467Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1204589Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1204819Z arg0_1 = rand_strided((2, 2, 2, 2), (8, 4, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1205011Z arg1_1 = rand_strided((4, 4), 
(4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1205125Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1205130Z 2023-01-11T21:41:24.1205134Z 2023-01-11T21:41:24.1205230Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1205299Z import torch 2023-01-11T21:41:24.1205366Z import random 2023-01-11T21:41:24.1205481Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1205598Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1205603Z 2023-01-11T21:41:24.1205667Z aten = torch.ops.aten 2023-01-11T21:41:24.1205798Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1205886Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1205890Z 2023-01-11T21:41:24.1205895Z 2023-01-11T21:41:24.1206025Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1206257Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1206377Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1206477Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1206576Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1206624Z { 2023-01-11T21:41:24.1206720Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1206780Z { 2023-01-11T21:41:24.1206856Z #pragma omp for 2023-01-11T21:41:24.1206938Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.1206999Z { 2023-01-11T21:41:24.1207125Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1207242Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1207366Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1207457Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1207541Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1207630Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1207689Z } 2023-01-11T21:41:24.1207782Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1207852Z for(long i0=16; i0<16; i0+=1) 2023-01-11T21:41:24.1207911Z { 2023-01-11T21:41:24.1207990Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1208069Z auto tmp3 = in_ptr1[i0]; 2023-01-11T21:41:24.1208166Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.1208247Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1208331Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1208398Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.1208458Z } 2023-01-11T21:41:24.1208516Z } 2023-01-11T21:41:24.1208576Z } 2023-01-11T21:41:24.1208652Z ''') 2023-01-11T21:41:24.1208657Z 2023-01-11T21:41:24.1208662Z 2023-01-11T21:41:24.1208747Z async_compile.wait(globals()) 2023-01-11T21:41:24.1208821Z del async_compile 2023-01-11T21:41:24.1208827Z 2023-01-11T21:41:24.1208884Z def call(args): 2023-01-11T21:41:24.1208956Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1209022Z args.clear() 2023-01-11T21:41:24.1209215Z buf0 = empty_strided((4, 4), (4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1209375Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1209442Z del arg0_1 2023-01-11T21:41:24.1209506Z del arg1_1 2023-01-11T21:41:24.1209564Z return (buf0, ) 2023-01-11T21:41:24.1209581Z 2023-01-11T21:41:24.1209585Z 2023-01-11T21:41:24.1209648Z if __name__ == "__main__": 2023-01-11T21:41:24.1209760Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1209881Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1210087Z arg0_1 = rand_strided((2, 2, 2, 2), (8, 4, 2, 1), device='cpu', 
dtype=torch.float32) 2023-01-11T21:41:24.1210316Z arg1_1 = rand_strided((4, 4), (4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1210430Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1210435Z 2023-01-11T21:41:24.1210439Z 2023-01-11T21:41:24.1210531Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1210598Z import torch 2023-01-11T21:41:24.1210654Z import random 2023-01-11T21:41:24.1210766Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1210882Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1210887Z 2023-01-11T21:41:24.1210963Z aten = torch.ops.aten 2023-01-11T21:41:24.1211092Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1211180Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1211185Z 2023-01-11T21:41:24.1211189Z 2023-01-11T21:41:24.1211317Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1211519Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1211652Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1211752Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1211847Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1211910Z { 2023-01-11T21:41:24.1212006Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1212065Z { 2023-01-11T21:41:24.1212140Z #pragma omp for 2023-01-11T21:41:24.1212208Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.1212271Z { 2023-01-11T21:41:24.1212397Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1212529Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1212611Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1212701Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1212761Z } 2023-01-11T21:41:24.1212842Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1212926Z for(long i0=32; i0<35; i0+=1) 2023-01-11T21:41:24.1212989Z { 2023-01-11T21:41:24.1213072Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1213151Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1213230Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1213309Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1213357Z } 2023-01-11T21:41:24.1213415Z } 2023-01-11T21:41:24.1213474Z } 2023-01-11T21:41:24.1213551Z ''') 2023-01-11T21:41:24.1213556Z 2023-01-11T21:41:24.1213560Z 2023-01-11T21:41:24.1213648Z async_compile.wait(globals()) 2023-01-11T21:41:24.1213717Z del async_compile 2023-01-11T21:41:24.1213722Z 2023-01-11T21:41:24.1213790Z def call(args): 2023-01-11T21:41:24.1213850Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1213919Z args.clear() 2023-01-11T21:41:24.1214111Z buf0 = empty_strided((35, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1214269Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1214339Z del arg0_1 2023-01-11T21:41:24.1214404Z del arg1_1 2023-01-11T21:41:24.1214473Z return (buf0, ) 2023-01-11T21:41:24.1214478Z 2023-01-11T21:41:24.1214483Z 2023-01-11T21:41:24.1214556Z if __name__ == "__main__": 2023-01-11T21:41:24.1214656Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1214778Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1214972Z arg0_1 = rand_strided((5, 7), (7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1215163Z arg1_1 = rand_strided((35, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1215275Z print_performance(lambda: call([arg0_1, 
arg1_1])) 2023-01-11T21:41:24.1215280Z 2023-01-11T21:41:24.1215284Z 2023-01-11T21:41:24.1215374Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1215440Z import torch 2023-01-11T21:41:24.1215497Z import random 2023-01-11T21:41:24.1215607Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1215760Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1215765Z 2023-01-11T21:41:24.1215839Z aten = torch.ops.aten 2023-01-11T21:41:24.1215968Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1216056Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1216061Z 2023-01-11T21:41:24.1216065Z 2023-01-11T21:41:24.1216194Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1216394Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1216511Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1216601Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1216697Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1216756Z { 2023-01-11T21:41:24.1216851Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1216910Z { 2023-01-11T21:41:24.1216984Z #pragma omp for 2023-01-11T21:41:24.1217087Z for(long i0=0; i0<4; i0+=1) 2023-01-11T21:41:24.1217151Z { 2023-01-11T21:41:24.1217278Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1217408Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1217536Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1217620Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1217701Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1217789Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1217839Z } 2023-01-11T21:41:24.1217931Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1218013Z for(long i0=32; i0<35; i0+=1) 2023-01-11T21:41:24.1218070Z { 2023-01-11T21:41:24.1218151Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1218230Z auto tmp3 = in_ptr1[i0]; 2023-01-11T21:41:24.1218329Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.1218403Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1218483Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1218559Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.1218619Z } 2023-01-11T21:41:24.1218678Z } 2023-01-11T21:41:24.1218736Z } 2023-01-11T21:41:24.1218802Z ''') 2023-01-11T21:41:24.1218817Z 2023-01-11T21:41:24.1218821Z 2023-01-11T21:41:24.1218897Z async_compile.wait(globals()) 2023-01-11T21:41:24.1218967Z del async_compile 2023-01-11T21:41:24.1218972Z 2023-01-11T21:41:24.1219042Z def call(args): 2023-01-11T21:41:24.1219112Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1219180Z args.clear() 2023-01-11T21:41:24.1219373Z buf0 = empty_strided((35, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1219530Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1219584Z del arg0_1 2023-01-11T21:41:24.1219646Z del arg1_1 2023-01-11T21:41:24.1219720Z return (buf0, ) 2023-01-11T21:41:24.1219725Z 2023-01-11T21:41:24.1219730Z 2023-01-11T21:41:24.1219805Z if __name__ == "__main__": 2023-01-11T21:41:24.1219915Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1220037Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1220230Z arg0_1 = rand_strided((5, 7), (7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1220420Z arg1_1 = rand_strided((35, ), (1, ), device='cpu', dtype=torch.float32) 
2023-01-11T21:41:24.1220521Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1220526Z 2023-01-11T21:41:24.1220792Z [2023-01-11 21:39:46,362] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 501 2023-01-11T21:41:24.1221219Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1221373Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1221631Z [2023-01-11 21:39:46,380] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 502 2023-01-11T21:41:24.1221894Z [2023-01-11 21:39:46,406] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 502 2023-01-11T21:41:24.1222427Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1222550Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1222893Z [2023-01-11 21:39:46,423] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 503 2023-01-11T21:41:24.1223157Z [2023-01-11 21:39:46,433] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 503 2023-01-11T21:41:24.1228857Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1229013Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1229287Z [2023-01-11 21:39:46,452] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 504 2023-01-11T21:41:24.1229564Z [2023-01-11 21:39:46,498] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 504 2023-01-11T21:41:24.1230004Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1230130Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1230388Z [2023-01-11 21:39:46,516] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 505 2023-01-11T21:41:24.1230394Z 2023-01-11T21:41:24.1230487Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1230556Z import torch 2023-01-11T21:41:24.1230624Z import random 2023-01-11T21:41:24.1230737Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1230846Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1230854Z 2023-01-11T21:41:24.1230936Z aten = torch.ops.aten 2023-01-11T21:41:24.1231069Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1231158Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1231164Z 2023-01-11T21:41:24.1231168Z 2023-01-11T21:41:24.1231302Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1231504Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1231621Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1231723Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1231810Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1231869Z { 2023-01-11T21:41:24.1231966Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1232027Z { 2023-01-11T21:41:24.1232101Z #pragma omp for 2023-01-11T21:41:24.1232181Z for(long i0=0; i0<630; i0+=1) 2023-01-11T21:41:24.1232240Z { 2023-01-11T21:41:24.1232454Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1232583Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1232668Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1232756Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1232817Z } 2023-01-11T21:41:24.1232908Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1232993Z for(long i0=5040; i0<5040; i0+=1) 2023-01-11T21:41:24.1233042Z { 2023-01-11T21:41:24.1233125Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1233206Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1233285Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1233363Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1233422Z } 2023-01-11T21:41:24.1233479Z } 2023-01-11T21:41:24.1233525Z } 2023-01-11T21:41:24.1233601Z ''') 2023-01-11T21:41:24.1233606Z 2023-01-11T21:41:24.1233610Z 2023-01-11T21:41:24.1233806Z async_compile.wait(globals()) 2023-01-11T21:41:24.1233884Z del async_compile 2023-01-11T21:41:24.1233889Z 2023-01-11T21:41:24.1233957Z def call(args): 2023-01-11T21:41:24.1234031Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1234100Z args.clear() 2023-01-11T21:41:24.1234289Z buf0 = empty_strided((5040, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1234451Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1234515Z del arg0_1 2023-01-11T21:41:24.1234579Z del arg1_1 2023-01-11T21:41:24.1234648Z return (buf0, ) 2023-01-11T21:41:24.1234654Z 2023-01-11T21:41:24.1234658Z 2023-01-11T21:41:24.1234731Z if __name__ == "__main__": 2023-01-11T21:41:24.1234845Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1234965Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1235190Z arg0_1 = rand_strided((2, 3, 4, 5, 6, 7), (2520, 840, 210, 42, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1235387Z 
arg1_1 = rand_strided((5040, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1235499Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1235504Z 2023-01-11T21:41:24.1235508Z 2023-01-11T21:41:24.1235601Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1235668Z import torch 2023-01-11T21:41:24.1235736Z import random 2023-01-11T21:41:24.1235846Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1235961Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1235966Z 2023-01-11T21:41:24.1236030Z aten = torch.ops.aten 2023-01-11T21:41:24.1236160Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1236248Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1236253Z 2023-01-11T21:41:24.1236257Z 2023-01-11T21:41:24.1236389Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1236593Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1236711Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1236812Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1236906Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1236954Z { 2023-01-11T21:41:24.1237047Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1237105Z { 2023-01-11T21:41:24.1237179Z #pragma omp for 2023-01-11T21:41:24.1237259Z for(long i0=0; i0<630; i0+=1) 2023-01-11T21:41:24.1237319Z { 2023-01-11T21:41:24.1237449Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1237566Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1237690Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1237771Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1237853Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1237970Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1238030Z } 2023-01-11T21:41:24.1238122Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1238194Z for(long i0=5040; i0<5040; i0+=1) 2023-01-11T21:41:24.1238255Z { 2023-01-11T21:41:24.1238336Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1238416Z auto tmp3 = in_ptr1[i0]; 2023-01-11T21:41:24.1238512Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.1238591Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1238669Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1238735Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.1238793Z } 2023-01-11T21:41:24.1238852Z } 2023-01-11T21:41:24.1238909Z } 2023-01-11T21:41:24.1238985Z ''') 2023-01-11T21:41:24.1238990Z 2023-01-11T21:41:24.1238994Z 2023-01-11T21:41:24.1239082Z async_compile.wait(globals()) 2023-01-11T21:41:24.1239150Z del async_compile 2023-01-11T21:41:24.1239185Z 2023-01-11T21:41:24.1239254Z def call(args): 2023-01-11T21:41:24.1239315Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1239382Z args.clear() 2023-01-11T21:41:24.1239579Z buf0 = empty_strided((5040, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1239736Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1239800Z del arg0_1 2023-01-11T21:41:24.1239864Z del arg1_1 2023-01-11T21:41:24.1239921Z return (buf0, ) 2023-01-11T21:41:24.1239937Z 2023-01-11T21:41:24.1239941Z 2023-01-11T21:41:24.1240004Z if __name__ == "__main__": 2023-01-11T21:41:24.1240116Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1240237Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1240469Z arg0_1 = rand_strided((2, 3, 4, 5, 6, 7), 
(2520, 840, 210, 42, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1240729Z arg1_1 = rand_strided((5040, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1240865Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1240870Z 2023-01-11T21:41:24.1240874Z 2023-01-11T21:41:24.1240966Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1241033Z import torch 2023-01-11T21:41:24.1241089Z import random 2023-01-11T21:41:24.1241200Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1241317Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1241321Z 2023-01-11T21:41:24.1241395Z aten = torch.ops.aten 2023-01-11T21:41:24.1241522Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1241610Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1241614Z 2023-01-11T21:41:24.1241618Z 2023-01-11T21:41:24.1241750Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1241952Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1242060Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1242163Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1242260Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1242319Z { 2023-01-11T21:41:24.1242415Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1242473Z { 2023-01-11T21:41:24.1242547Z #pragma omp for 2023-01-11T21:41:24.1242616Z for(long i0=0; i0<630; i0+=1) 2023-01-11T21:41:24.1242675Z { 2023-01-11T21:41:24.1242809Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1242937Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1243021Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1243108Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1243167Z } 2023-01-11T21:41:24.1243249Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1243332Z for(long i0=5040; i0<5040; i0+=1) 2023-01-11T21:41:24.1243455Z { 2023-01-11T21:41:24.1243542Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1243620Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1243700Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1243778Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1243825Z } 2023-01-11T21:41:24.1243884Z } 2023-01-11T21:41:24.1243943Z } 2023-01-11T21:41:24.1244019Z ''') 2023-01-11T21:41:24.1244025Z 2023-01-11T21:41:24.1244029Z 2023-01-11T21:41:24.1244113Z async_compile.wait(globals()) 2023-01-11T21:41:24.1244187Z del async_compile 2023-01-11T21:41:24.1244192Z 2023-01-11T21:41:24.1244258Z def call(args): 2023-01-11T21:41:24.1244320Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1244388Z args.clear() 2023-01-11T21:41:24.1244605Z buf0 = empty_strided((6, 4, 5, 42), (840, 210, 42, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1244798Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1244867Z del arg0_1 2023-01-11T21:41:24.1244929Z del arg1_1 2023-01-11T21:41:24.1244997Z return (buf0, ) 2023-01-11T21:41:24.1245001Z 2023-01-11T21:41:24.1245005Z 2023-01-11T21:41:24.1245078Z if __name__ == "__main__": 2023-01-11T21:41:24.1245179Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1245299Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1245530Z arg0_1 = rand_strided((2, 3, 4, 5, 6, 7), (2520, 840, 210, 42, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1245745Z arg1_1 = rand_strided((6, 4, 5, 42), (840, 
210, 42, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1245856Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1245862Z 2023-01-11T21:41:24.1245866Z 2023-01-11T21:41:24.1245957Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1246023Z import torch 2023-01-11T21:41:24.1246091Z import random 2023-01-11T21:41:24.1246198Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1246316Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1246321Z 2023-01-11T21:41:24.1246397Z aten = torch.ops.aten 2023-01-11T21:41:24.1246525Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1246617Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1246621Z 2023-01-11T21:41:24.1246625Z 2023-01-11T21:41:24.1246757Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1246960Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1247076Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1247166Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1247263Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1247323Z { 2023-01-11T21:41:24.1247421Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1247479Z { 2023-01-11T21:41:24.1247557Z #pragma omp for 2023-01-11T21:41:24.1247636Z for(long i0=0; i0<630; i0+=1) 2023-01-11T21:41:24.1247685Z { 2023-01-11T21:41:24.1247815Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1247942Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1248066Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1248148Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1248228Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1248317Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1248367Z } 2023-01-11T21:41:24.1248458Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1248539Z for(long i0=5040; i0<5040; i0+=1) 2023-01-11T21:41:24.1248600Z { 2023-01-11T21:41:24.1248680Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1248759Z auto tmp3 = in_ptr1[i0]; 2023-01-11T21:41:24.1248890Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.1248961Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1249040Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1249117Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.1249176Z } 2023-01-11T21:41:24.1249237Z } 2023-01-11T21:41:24.1249296Z } 2023-01-11T21:41:24.1249372Z ''') 2023-01-11T21:41:24.1249377Z 2023-01-11T21:41:24.1249381Z 2023-01-11T21:41:24.1249458Z async_compile.wait(globals()) 2023-01-11T21:41:24.1249530Z del async_compile 2023-01-11T21:41:24.1249535Z 2023-01-11T21:41:24.1249603Z def call(args): 2023-01-11T21:41:24.1249674Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1249746Z args.clear() 2023-01-11T21:41:24.1249960Z buf0 = empty_strided((6, 4, 5, 42), (840, 210, 42, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1250120Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1250187Z del arg0_1 2023-01-11T21:41:24.1250266Z del arg1_1 2023-01-11T21:41:24.1250337Z return (buf0, ) 2023-01-11T21:41:24.1250342Z 2023-01-11T21:41:24.1250346Z 2023-01-11T21:41:24.1250420Z if __name__ == "__main__": 2023-01-11T21:41:24.1250537Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1250655Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1250890Z arg0_1 = rand_strided((2, 3, 4, 5, 6, 7), (2520, 
840, 210, 42, 7, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1251101Z arg1_1 = rand_strided((6, 4, 5, 42), (840, 210, 42, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1251214Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1251219Z 2023-01-11T21:41:24.1251475Z [2023-01-11 21:39:46,524] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 505 2023-01-11T21:41:24.1251906Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1252033Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1252287Z [2023-01-11 21:39:46,542] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 506 2023-01-11T21:41:24.1252551Z [2023-01-11 21:39:46,564] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 506 2023-01-11T21:41:24.1252978Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1253106Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1253356Z [2023-01-11 21:39:46,580] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 507 2023-01-11T21:41:24.1253612Z [2023-01-11 21:39:46,588] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 507 2023-01-11T21:41:24.1254034Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1254157Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1254411Z [2023-01-11 21:39:46,606] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 508 2023-01-11T21:41:24.1254695Z [2023-01-11 21:39:46,615] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 508 2023-01-11T21:41:24.1255119Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1255245Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1255501Z [2023-01-11 21:39:46,631] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 509 2023-01-11T21:41:24.1255506Z 2023-01-11T21:41:24.1255599Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1255668Z import torch 2023-01-11T21:41:24.1255737Z import random 2023-01-11T21:41:24.1255851Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1256000Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1256007Z 2023-01-11T21:41:24.1256073Z aten = torch.ops.aten 2023-01-11T21:41:24.1256205Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1256298Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1256303Z 2023-01-11T21:41:24.1256307Z 2023-01-11T21:41:24.1256440Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1256644Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1256762Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1256865Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1256964Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1257014Z { 2023-01-11T21:41:24.1257110Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1257169Z { 2023-01-11T21:41:24.1257247Z #pragma omp for 2023-01-11T21:41:24.1257332Z for(long i0=0; i0<125; i0+=1) 2023-01-11T21:41:24.1257395Z { 2023-01-11T21:41:24.1257526Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1257644Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1257728Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1257819Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1257882Z } 2023-01-11T21:41:24.1257973Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1258058Z for(long i0=1000; i0<1000; i0+=1) 2023-01-11T21:41:24.1258120Z { 2023-01-11T21:41:24.1258190Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1258271Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1258352Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1258429Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1258489Z } 2023-01-11T21:41:24.1258548Z } 2023-01-11T21:41:24.1258608Z } 2023-01-11T21:41:24.1258676Z ''') 2023-01-11T21:41:24.1258681Z 2023-01-11T21:41:24.1258685Z 2023-01-11T21:41:24.1258772Z async_compile.wait(globals()) 2023-01-11T21:41:24.1258843Z del async_compile 2023-01-11T21:41:24.1258848Z 2023-01-11T21:41:24.1258918Z def call(args): 2023-01-11T21:41:24.1258990Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1259058Z args.clear() 2023-01-11T21:41:24.1259258Z buf0 = empty_strided((50, 20), (20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1259408Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1259475Z del arg0_1 2023-01-11T21:41:24.1259538Z del arg1_1 2023-01-11T21:41:24.1259607Z return (buf0, ) 2023-01-11T21:41:24.1259611Z 2023-01-11T21:41:24.1259615Z 2023-01-11T21:41:24.1259688Z if __name__ == "__main__": 2023-01-11T21:41:24.1259798Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1259918Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1260155Z arg0_1 = rand_strided((10, 5, 20), (100, 20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1260340Z arg1_1 = 
rand_strided((50, 20), (20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1260453Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1260458Z 2023-01-11T21:41:24.1260462Z 2023-01-11T21:41:24.1260553Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1260619Z import torch 2023-01-11T21:41:24.1260687Z import random 2023-01-11T21:41:24.1260798Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1260914Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1260919Z 2023-01-11T21:41:24.1260992Z aten = torch.ops.aten 2023-01-11T21:41:24.1261111Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1261200Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1261205Z 2023-01-11T21:41:24.1261209Z 2023-01-11T21:41:24.1261366Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1261573Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1261688Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1261788Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1261884Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1261946Z { 2023-01-11T21:41:24.1262030Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1262089Z { 2023-01-11T21:41:24.1262165Z #pragma omp for 2023-01-11T21:41:24.1262247Z for(long i0=0; i0<125; i0+=1) 2023-01-11T21:41:24.1262307Z { 2023-01-11T21:41:24.1262671Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1262804Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1262919Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1263006Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1263085Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1263173Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1263237Z } 2023-01-11T21:41:24.1263327Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1263412Z for(long i0=1000; i0<1000; i0+=1) 2023-01-11T21:41:24.1263460Z { 2023-01-11T21:41:24.1263542Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1263621Z auto tmp3 = in_ptr1[i0]; 2023-01-11T21:41:24.1263722Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.1263802Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1263881Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1263959Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.1264007Z } 2023-01-11T21:41:24.1264066Z } 2023-01-11T21:41:24.1264122Z } 2023-01-11T21:41:24.1264205Z ''') 2023-01-11T21:41:24.1264210Z 2023-01-11T21:41:24.1264214Z 2023-01-11T21:41:24.1264305Z async_compile.wait(globals()) 2023-01-11T21:41:24.1264374Z del async_compile 2023-01-11T21:41:24.1264380Z 2023-01-11T21:41:24.1264447Z def call(args): 2023-01-11T21:41:24.1264510Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1264578Z args.clear() 2023-01-11T21:41:24.1264778Z buf0 = empty_strided((50, 20), (20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1264937Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1265004Z del arg0_1 2023-01-11T21:41:24.1265068Z del arg1_1 2023-01-11T21:41:24.1265137Z return (buf0, ) 2023-01-11T21:41:24.1265141Z 2023-01-11T21:41:24.1265147Z 2023-01-11T21:41:24.1265220Z if __name__ == "__main__": 2023-01-11T21:41:24.1265321Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1265442Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1265651Z arg0_1 = rand_strided((10, 5, 20), (100, 20, 1), 
device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1265910Z arg1_1 = rand_strided((50, 20), (20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1266023Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1266028Z 2023-01-11T21:41:24.1266032Z 2023-01-11T21:41:24.1266124Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1266191Z import torch 2023-01-11T21:41:24.1266259Z import random 2023-01-11T21:41:24.1266359Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1266476Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1266480Z 2023-01-11T21:41:24.1266556Z aten = torch.ops.aten 2023-01-11T21:41:24.1266687Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1266777Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1266782Z 2023-01-11T21:41:24.1266786Z 2023-01-11T21:41:24.1266918Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1267154Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1267273Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1267363Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1267459Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1267519Z { 2023-01-11T21:41:24.1267613Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1267672Z { 2023-01-11T21:41:24.1267747Z #pragma omp for 2023-01-11T21:41:24.1267825Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:24.1267875Z { 2023-01-11T21:41:24.1268003Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1268126Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1268209Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1268297Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1268355Z } 2023-01-11T21:41:24.1268446Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1268519Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:24.1268577Z { 2023-01-11T21:41:24.1268656Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1268735Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1268812Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1268890Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1268949Z } 2023-01-11T21:41:24.1268996Z } 2023-01-11T21:41:24.1269052Z } 2023-01-11T21:41:24.1269127Z ''') 2023-01-11T21:41:24.1269132Z 2023-01-11T21:41:24.1269136Z 2023-01-11T21:41:24.1269221Z async_compile.wait(globals()) 2023-01-11T21:41:24.1269290Z del async_compile 2023-01-11T21:41:24.1269295Z 2023-01-11T21:41:24.1269363Z def call(args): 2023-01-11T21:41:24.1269435Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1269492Z args.clear() 2023-01-11T21:41:24.1269697Z buf0 = empty_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1269861Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1269928Z del arg0_1 2023-01-11T21:41:24.1269991Z del arg1_1 2023-01-11T21:41:24.1270058Z return (buf0, ) 2023-01-11T21:41:24.1270063Z 2023-01-11T21:41:24.1270067Z 2023-01-11T21:41:24.1270139Z if __name__ == "__main__": 2023-01-11T21:41:24.1270249Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1270359Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1270550Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1270753Z arg1_1 = rand_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1270865Z 
print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1270870Z 2023-01-11T21:41:24.1270874Z 2023-01-11T21:41:24.1270964Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1271032Z import torch 2023-01-11T21:41:24.1271099Z import random 2023-01-11T21:41:24.1271232Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1271350Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1271355Z 2023-01-11T21:41:24.1271430Z aten = torch.ops.aten 2023-01-11T21:41:24.1271560Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1271649Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1271654Z 2023-01-11T21:41:24.1271658Z 2023-01-11T21:41:24.1271788Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1271990Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1272102Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1272203Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1272288Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1272344Z { 2023-01-11T21:41:24.1272439Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1272501Z { 2023-01-11T21:41:24.1272601Z #pragma omp for 2023-01-11T21:41:24.1272680Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:24.1272729Z { 2023-01-11T21:41:24.1272857Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1272982Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1273106Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1273187Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1273270Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1273358Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1273419Z } 2023-01-11T21:41:24.1273499Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1273576Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:24.1273634Z { 2023-01-11T21:41:24.1273713Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1273852Z auto tmp3 = in_ptr1[i0]; 2023-01-11T21:41:24.1273955Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.1274036Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1274106Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1274182Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.1274242Z } 2023-01-11T21:41:24.1274301Z } 2023-01-11T21:41:24.1274359Z } 2023-01-11T21:41:24.1274435Z ''') 2023-01-11T21:41:24.1274440Z 2023-01-11T21:41:24.1274444Z 2023-01-11T21:41:24.1274533Z async_compile.wait(globals()) 2023-01-11T21:41:24.1274592Z del async_compile 2023-01-11T21:41:24.1274597Z 2023-01-11T21:41:24.1274665Z def call(args): 2023-01-11T21:41:24.1274739Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1274808Z args.clear() 2023-01-11T21:41:24.1275014Z buf0 = empty_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1275173Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1275239Z del arg0_1 2023-01-11T21:41:24.1275295Z del arg1_1 2023-01-11T21:41:24.1275363Z return (buf0, ) 2023-01-11T21:41:24.1275368Z 2023-01-11T21:41:24.1275371Z 2023-01-11T21:41:24.1275445Z if __name__ == "__main__": 2023-01-11T21:41:24.1275557Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1275677Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1275869Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1276072Z arg1_1 = rand_strided((1, 10, 1), (10, 1, 1), 
device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1276184Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1276189Z 2023-01-11T21:41:24.1276447Z [2023-01-11 21:39:46,641] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 509 2023-01-11T21:41:24.1276877Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1277033Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1277293Z [2023-01-11 21:39:46,660] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 510 2023-01-11T21:41:24.1277556Z [2023-01-11 21:39:46,671] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 510 2023-01-11T21:41:24.1277983Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1278138Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1278393Z [2023-01-11 21:39:46,687] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 511 2023-01-11T21:41:24.1278659Z [2023-01-11 21:39:46,697] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 511 2023-01-11T21:41:24.1279083Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1279209Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1279462Z [2023-01-11 21:39:46,717] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 512 2023-01-11T21:41:24.1279712Z [2023-01-11 21:39:46,728] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 512 2023-01-11T21:41:24.1279730Z 2023-01-11T21:41:24.1279809Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1279875Z import torch 2023-01-11T21:41:24.1279942Z import random 2023-01-11T21:41:24.1280056Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1280177Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1280181Z 2023-01-11T21:41:24.1280256Z aten = torch.ops.aten 2023-01-11T21:41:24.1280390Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1280468Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1280474Z 2023-01-11T21:41:24.1280488Z 2023-01-11T21:41:24.1280608Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1280809Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1280925Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1281031Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1281127Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1281184Z { 2023-01-11T21:41:24.1281282Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1281331Z { 2023-01-11T21:41:24.1281405Z #pragma omp for 2023-01-11T21:41:24.1281487Z for(long i0=0; i0<125; i0+=1) 2023-01-11T21:41:24.1281547Z { 2023-01-11T21:41:24.1281676Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1281805Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1281888Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1281965Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1282025Z } 2023-01-11T21:41:24.1282115Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1282199Z for(long i0=1000; i0<1000; i0+=1) 2023-01-11T21:41:24.1282258Z { 2023-01-11T21:41:24.1282384Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1282463Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1282531Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1282610Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1282671Z } 2023-01-11T21:41:24.1282729Z } 2023-01-11T21:41:24.1282785Z } 2023-01-11T21:41:24.1282865Z ''') 2023-01-11T21:41:24.1282870Z 2023-01-11T21:41:24.1282875Z 2023-01-11T21:41:24.1282961Z async_compile.wait(globals()) 2023-01-11T21:41:24.1283020Z del async_compile 2023-01-11T21:41:24.1283025Z 2023-01-11T21:41:24.1283093Z def call(args): 2023-01-11T21:41:24.1283165Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1283233Z args.clear() 2023-01-11T21:41:24.1283460Z buf0 = empty_strided((10, 1, 10, 1, 10), (100, 100, 10, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1283621Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1283689Z del arg0_1 2023-01-11T21:41:24.1283773Z del arg1_1 2023-01-11T21:41:24.1283844Z return (buf0, ) 2023-01-11T21:41:24.1283849Z 2023-01-11T21:41:24.1283853Z 2023-01-11T21:41:24.1283929Z if __name__ == "__main__": 2023-01-11T21:41:24.1284041Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1284164Z from torch._inductor.utils import 
print_performance 2023-01-11T21:41:24.1284368Z arg0_1 = rand_strided((10, 100), (100, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1284593Z arg1_1 = rand_strided((10, 1, 10, 1, 10), (100, 100, 10, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1284704Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1284708Z 2023-01-11T21:41:24.1284712Z 2023-01-11T21:41:24.1284801Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1284857Z import torch 2023-01-11T21:41:24.1284925Z import random 2023-01-11T21:41:24.1285037Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1285159Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1285164Z 2023-01-11T21:41:24.1285238Z aten = torch.ops.aten 2023-01-11T21:41:24.1285368Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1285457Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1285461Z 2023-01-11T21:41:24.1285465Z 2023-01-11T21:41:24.1285596Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1285787Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1285902Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1286004Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1286100Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1286159Z { 2023-01-11T21:41:24.1286254Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1286313Z { 2023-01-11T21:41:24.1286375Z #pragma omp for 2023-01-11T21:41:24.1286462Z for(long i0=0; i0<125; i0+=1) 2023-01-11T21:41:24.1286525Z { 2023-01-11T21:41:24.1286655Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1286782Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1286906Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1286988Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1287057Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1287144Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1287205Z } 2023-01-11T21:41:24.1287296Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1287380Z for(long i0=1000; i0<1000; i0+=1) 2023-01-11T21:41:24.1287439Z { 2023-01-11T21:41:24.1287521Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1287589Z auto tmp3 = in_ptr1[i0]; 2023-01-11T21:41:24.1287686Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.1287802Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1287881Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1287959Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.1288019Z } 2023-01-11T21:41:24.1288077Z } 2023-01-11T21:41:24.1288124Z } 2023-01-11T21:41:24.1288200Z ''') 2023-01-11T21:41:24.1288205Z 2023-01-11T21:41:24.1288209Z 2023-01-11T21:41:24.1288298Z async_compile.wait(globals()) 2023-01-11T21:41:24.1288368Z del async_compile 2023-01-11T21:41:24.1288373Z 2023-01-11T21:41:24.1288442Z def call(args): 2023-01-11T21:41:24.1288512Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1288578Z args.clear() 2023-01-11T21:41:24.1288793Z buf0 = empty_strided((10, 1, 10, 1, 10), (100, 100, 10, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1288952Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1289018Z del arg0_1 2023-01-11T21:41:24.1289081Z del arg1_1 2023-01-11T21:41:24.1289192Z return (buf0, ) 2023-01-11T21:41:24.1289199Z 2023-01-11T21:41:24.1289203Z 2023-01-11T21:41:24.1289274Z if __name__ == "__main__": 
2023-01-11T21:41:24.1289383Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1289504Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1289693Z arg0_1 = rand_strided((10, 100), (100, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1289917Z arg1_1 = rand_strided((10, 1, 10, 1, 10), (100, 100, 10, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1290030Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1290035Z 2023-01-11T21:41:24.1290039Z 2023-01-11T21:41:24.1290129Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1290196Z import torch 2023-01-11T21:41:24.1290265Z import random 2023-01-11T21:41:24.1290381Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1290500Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1290509Z 2023-01-11T21:41:24.1290574Z aten = torch.ops.aten 2023-01-11T21:41:24.1290705Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1290795Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1290800Z 2023-01-11T21:41:24.1290804Z 2023-01-11T21:41:24.1290935Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1291138Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1291256Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1291357Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1291457Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1291505Z { 2023-01-11T21:41:24.1291600Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1291663Z { 2023-01-11T21:41:24.1291739Z #pragma omp for 2023-01-11T21:41:24.1291818Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.1291880Z { 2023-01-11T21:41:24.1292012Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1292130Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1292212Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1292298Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1292361Z } 2023-01-11T21:41:24.1292453Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1292533Z for(long i0=16; i0<16; i0+=1) 2023-01-11T21:41:24.1292592Z { 2023-01-11T21:41:24.1292662Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1292742Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1292821Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1292898Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1292957Z } 2023-01-11T21:41:24.1293018Z } 2023-01-11T21:41:24.1293074Z } 2023-01-11T21:41:24.1293138Z ''') 2023-01-11T21:41:24.1293143Z 2023-01-11T21:41:24.1293177Z 2023-01-11T21:41:24.1293268Z async_compile.wait(globals()) 2023-01-11T21:41:24.1293337Z del async_compile 2023-01-11T21:41:24.1293342Z 2023-01-11T21:41:24.1293411Z def call(args): 2023-01-11T21:41:24.1293483Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1293552Z args.clear() 2023-01-11T21:41:24.1293757Z buf0 = empty_strided((2, 2, 2, 2), (8, 4, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1293907Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1293973Z del arg0_1 2023-01-11T21:41:24.1294037Z del arg1_1 2023-01-11T21:41:24.1294106Z return (buf0, ) 2023-01-11T21:41:24.1294111Z 2023-01-11T21:41:24.1294115Z 2023-01-11T21:41:24.1294187Z if __name__ == "__main__": 2023-01-11T21:41:24.1294300Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1294421Z from torch._inductor.utils import 
print_performance 2023-01-11T21:41:24.1294640Z arg0_1 = rand_strided((4, 4), (4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1294838Z arg1_1 = rand_strided((2, 2, 2, 2), (8, 4, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1294949Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1294954Z 2023-01-11T21:41:24.1294958Z 2023-01-11T21:41:24.1295051Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1295119Z import torch 2023-01-11T21:41:24.1295190Z import random 2023-01-11T21:41:24.1295304Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1295420Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1295425Z 2023-01-11T21:41:24.1295499Z aten = torch.ops.aten 2023-01-11T21:41:24.1295619Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1295708Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1295713Z 2023-01-11T21:41:24.1295718Z 2023-01-11T21:41:24.1295849Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1296053Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1296170Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1296271Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1296366Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1296425Z { 2023-01-11T21:41:24.1296507Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1296567Z { 2023-01-11T21:41:24.1296640Z #pragma omp for 2023-01-11T21:41:24.1296719Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.1296778Z { 2023-01-11T21:41:24.1296908Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1297034Z auto tmp3 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1297149Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1297232Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1297315Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1297403Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1297463Z } 2023-01-11T21:41:24.1297555Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1297635Z for(long i0=16; i0<16; i0+=1) 2023-01-11T21:41:24.1297684Z { 2023-01-11T21:41:24.1297762Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1297839Z auto tmp3 = in_ptr1[i0]; 2023-01-11T21:41:24.1297935Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.1298014Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1298092Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1298167Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.1298215Z } 2023-01-11T21:41:24.1298272Z } 2023-01-11T21:41:24.1298331Z } 2023-01-11T21:41:24.1298408Z ''') 2023-01-11T21:41:24.1298413Z 2023-01-11T21:41:24.1298417Z 2023-01-11T21:41:24.1298501Z async_compile.wait(globals()) 2023-01-11T21:41:24.1298569Z del async_compile 2023-01-11T21:41:24.1298604Z 2023-01-11T21:41:24.1298670Z def call(args): 2023-01-11T21:41:24.1298731Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1298798Z args.clear() 2023-01-11T21:41:24.1299007Z buf0 = empty_strided((2, 2, 2, 2), (8, 4, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1299167Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1299233Z del arg0_1 2023-01-11T21:41:24.1299298Z del arg1_1 2023-01-11T21:41:24.1299366Z return (buf0, ) 2023-01-11T21:41:24.1299371Z 2023-01-11T21:41:24.1299375Z 2023-01-11T21:41:24.1299447Z if __name__ == "__main__": 2023-01-11T21:41:24.1299546Z from torch._dynamo.testing import 
rand_strided 2023-01-11T21:41:24.1299664Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1299858Z arg0_1 = rand_strided((4, 4), (4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1300089Z arg1_1 = rand_strided((2, 2, 2, 2), (8, 4, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1300202Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1300207Z 2023-01-11T21:41:24.1300271Z ok (16.080s) 2023-01-11T21:41:24.1300734Z test_views2_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1300861Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1301120Z [2023-01-11 21:39:46,745] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 513 2023-01-11T21:41:24.1301374Z [2023-01-11 21:39:48,253] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 513 2023-01-11T21:41:24.1301802Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1301928Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1302179Z [2023-01-11 21:39:48,271] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 514 2023-01-11T21:41:24.1302555Z [2023-01-11 21:39:49,791] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 514 2023-01-11T21:41:24.1302981Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1303105Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1303359Z [2023-01-11 21:39:49,807] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 515 2023-01-11T21:41:24.1303621Z [2023-01-11 21:39:51,343] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 515 2023-01-11T21:41:24.1304046Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1304168Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1304422Z [2023-01-11 21:39:51,362] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 516 2023-01-11T21:41:24.1304721Z [2023-01-11 21:39:52,871] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 516 2023-01-11T21:41:24.1305147Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1305271Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1305523Z [2023-01-11 21:39:52,888] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 517 2023-01-11T21:41:24.1305529Z 2023-01-11T21:41:24.1305620Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1305692Z import torch 2023-01-11T21:41:24.1305799Z import random 2023-01-11T21:41:24.1305912Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1306031Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1306036Z 2023-01-11T21:41:24.1306100Z aten = torch.ops.aten 2023-01-11T21:41:24.1306232Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1306322Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1306327Z 2023-01-11T21:41:24.1306331Z 2023-01-11T21:41:24.1306466Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1306670Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1306785Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1306883Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1306941Z { 2023-01-11T21:41:24.1307026Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1307087Z { 2023-01-11T21:41:24.1307162Z #pragma omp for 2023-01-11T21:41:24.1307245Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.1307308Z { 2023-01-11T21:41:24.1307440Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1307568Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1307640Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1307729Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1307788Z } 2023-01-11T21:41:24.1307883Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1307965Z for(long i0=16; i0<16; i0+=1) 2023-01-11T21:41:24.1308024Z { 2023-01-11T21:41:24.1308105Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1308191Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.1308272Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1308350Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1308408Z } 2023-01-11T21:41:24.1308466Z } 2023-01-11T21:41:24.1308525Z } 2023-01-11T21:41:24.1308603Z ''') 2023-01-11T21:41:24.1308608Z 2023-01-11T21:41:24.1308612Z 2023-01-11T21:41:24.1308689Z async_compile.wait(globals()) 2023-01-11T21:41:24.1308758Z del async_compile 2023-01-11T21:41:24.1308763Z 2023-01-11T21:41:24.1308831Z def call(args): 2023-01-11T21:41:24.1308898Z arg0_1, = args 2023-01-11T21:41:24.1308964Z args.clear() 2023-01-11T21:41:24.1309160Z buf0 = empty_strided((4, 4), (4, 1), device='cpu', dtype=torch.float32) 
2023-01-11T21:41:24.1309292Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1309346Z del arg0_1 2023-01-11T21:41:24.1309415Z return (buf0, ) 2023-01-11T21:41:24.1309420Z 2023-01-11T21:41:24.1309424Z 2023-01-11T21:41:24.1309498Z if __name__ == "__main__": 2023-01-11T21:41:24.1309611Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1309730Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1309937Z arg0_1 = rand_strided((2, 2, 2, 2), (8, 4, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1310077Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.1310082Z 2023-01-11T21:41:24.1310086Z 2023-01-11T21:41:24.1310178Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1310233Z import torch 2023-01-11T21:41:24.1310299Z import random 2023-01-11T21:41:24.1310410Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1310527Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1310532Z 2023-01-11T21:41:24.1310608Z aten = torch.ops.aten 2023-01-11T21:41:24.1310738Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1310828Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1310833Z 2023-01-11T21:41:24.1310837Z 2023-01-11T21:41:24.1310967Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1311159Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1311304Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1311403Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1311463Z { 2023-01-11T21:41:24.1311558Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1311617Z { 2023-01-11T21:41:24.1311690Z #pragma omp for 2023-01-11T21:41:24.1311759Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.1311818Z { 2023-01-11T21:41:24.1311948Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1312077Z auto tmp1 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:24.1312161Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1312287Z auto tmp3 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1312367Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1312455Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1312503Z } 2023-01-11T21:41:24.1312598Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1312680Z for(long i0=16; i0<16; i0+=1) 2023-01-11T21:41:24.1312739Z { 2023-01-11T21:41:24.1312819Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1312915Z auto tmp1 = static_cast(2); 2023-01-11T21:41:24.1312997Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1313083Z auto tmp3 = static_cast(1); 2023-01-11T21:41:24.1313162Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1313238Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.1313296Z } 2023-01-11T21:41:24.1313355Z } 2023-01-11T21:41:24.1313413Z } 2023-01-11T21:41:24.1313477Z ''') 2023-01-11T21:41:24.1313494Z 2023-01-11T21:41:24.1313498Z 2023-01-11T21:41:24.1313572Z async_compile.wait(globals()) 2023-01-11T21:41:24.1313642Z del async_compile 2023-01-11T21:41:24.1313647Z 2023-01-11T21:41:24.1313718Z def call(args): 2023-01-11T21:41:24.1313849Z arg0_1, = args 2023-01-11T21:41:24.1313920Z args.clear() 2023-01-11T21:41:24.1314124Z buf0 = empty_strided((4, 4), (4, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1314254Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1314309Z del arg0_1 2023-01-11T21:41:24.1314378Z return (buf0, ) 
2023-01-11T21:41:24.1314383Z 2023-01-11T21:41:24.1314387Z 2023-01-11T21:41:24.1314461Z if __name__ == "__main__": 2023-01-11T21:41:24.1314575Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1314696Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1314906Z arg0_1 = rand_strided((2, 2, 2, 2), (8, 4, 2, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1315012Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.1315017Z 2023-01-11T21:41:24.1315021Z 2023-01-11T21:41:24.1315115Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1315170Z import torch 2023-01-11T21:41:24.1315236Z import random 2023-01-11T21:41:24.1315351Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1315541Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1315546Z 2023-01-11T21:41:24.1315622Z aten = torch.ops.aten 2023-01-11T21:41:24.1315753Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1315842Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1315847Z 2023-01-11T21:41:24.1315851Z 2023-01-11T21:41:24.1315982Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1316171Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1316291Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1316389Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1316452Z { 2023-01-11T21:41:24.1316546Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1316606Z { 2023-01-11T21:41:24.1316680Z #pragma omp for 2023-01-11T21:41:24.1316749Z for(long i0=0; i0<125; i0+=1) 2023-01-11T21:41:24.1316842Z { 2023-01-11T21:41:24.1316973Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1317102Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1317186Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1317275Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1317337Z } 2023-01-11T21:41:24.1317417Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1317502Z for(long i0=1000; i0<1000; i0+=1) 2023-01-11T21:41:24.1317563Z { 2023-01-11T21:41:24.1317643Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1317740Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.1317821Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1317900Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1317949Z } 2023-01-11T21:41:24.1318008Z } 2023-01-11T21:41:24.1318065Z } 2023-01-11T21:41:24.1318141Z ''') 2023-01-11T21:41:24.1318152Z 2023-01-11T21:41:24.1318156Z 2023-01-11T21:41:24.1318242Z async_compile.wait(globals()) 2023-01-11T21:41:24.1318314Z del async_compile 2023-01-11T21:41:24.1318319Z 2023-01-11T21:41:24.1318386Z def call(args): 2023-01-11T21:41:24.1318442Z arg0_1, = args 2023-01-11T21:41:24.1318509Z args.clear() 2023-01-11T21:41:24.1318709Z buf0 = empty_strided((10, 100), (100, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1318840Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1318906Z del arg0_1 2023-01-11T21:41:24.1318975Z return (buf0, ) 2023-01-11T21:41:24.1318980Z 2023-01-11T21:41:24.1318984Z 2023-01-11T21:41:24.1319059Z if __name__ == "__main__": 2023-01-11T21:41:24.1319168Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1319278Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1319507Z arg0_1 = rand_strided((10, 1, 10, 1, 10), (100, 100, 10, 10, 1), device='cpu', dtype=torch.float32) 
2023-01-11T21:41:24.1319616Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.1319621Z 2023-01-11T21:41:24.1319625Z 2023-01-11T21:41:24.1319716Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1319782Z import torch 2023-01-11T21:41:24.1319850Z import random 2023-01-11T21:41:24.1319962Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1320083Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1320088Z 2023-01-11T21:41:24.1320152Z aten = torch.ops.aten 2023-01-11T21:41:24.1320282Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1320371Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1320375Z 2023-01-11T21:41:24.1320379Z 2023-01-11T21:41:24.1320510Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1320712Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1320826Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1320966Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1321023Z { 2023-01-11T21:41:24.1321108Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1321169Z { 2023-01-11T21:41:24.1321245Z #pragma omp for 2023-01-11T21:41:24.1321324Z for(long i0=0; i0<125; i0+=1) 2023-01-11T21:41:24.1321386Z { 2023-01-11T21:41:24.1321513Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1321643Z auto tmp1 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:24.1321714Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1321841Z auto tmp3 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1321926Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1322015Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1322073Z } 2023-01-11T21:41:24.1322165Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1322279Z for(long i0=1000; i0<1000; i0+=1) 2023-01-11T21:41:24.1322329Z { 2023-01-11T21:41:24.1322410Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1322506Z auto tmp1 = static_cast(2); 2023-01-11T21:41:24.1322588Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1322683Z auto tmp3 = static_cast(1); 2023-01-11T21:41:24.1322764Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1322845Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.1322893Z } 2023-01-11T21:41:24.1322953Z } 2023-01-11T21:41:24.1323009Z } 2023-01-11T21:41:24.1323087Z ''') 2023-01-11T21:41:24.1323093Z 2023-01-11T21:41:24.1323097Z 2023-01-11T21:41:24.1323182Z async_compile.wait(globals()) 2023-01-11T21:41:24.1323253Z del async_compile 2023-01-11T21:41:24.1323257Z 2023-01-11T21:41:24.1323323Z def call(args): 2023-01-11T21:41:24.1323379Z arg0_1, = args 2023-01-11T21:41:24.1323445Z args.clear() 2023-01-11T21:41:24.1323654Z buf0 = empty_strided((10, 100), (100, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1323786Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1323850Z del arg0_1 2023-01-11T21:41:24.1323918Z return (buf0, ) 2023-01-11T21:41:24.1323923Z 2023-01-11T21:41:24.1323927Z 2023-01-11T21:41:24.1324000Z if __name__ == "__main__": 2023-01-11T21:41:24.1324109Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1324218Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1324443Z arg0_1 = rand_strided((10, 1, 10, 1, 10), (100, 100, 10, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1324550Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.1324554Z 2023-01-11T21:41:24.1324822Z [2023-01-11 21:39:52,896] 
torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 517 2023-01-11T21:41:24.1325258Z /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1325387Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1325648Z [2023-01-11 21:39:52,913] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 518 2023-01-11T21:41:24.1325909Z [2023-01-11 21:39:52,921] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 518 2023-01-11T21:41:24.1325914Z 2023-01-11T21:41:24.1326006Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1326073Z import torch 2023-01-11T21:41:24.1326130Z import random 2023-01-11T21:41:24.1326242Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1326362Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1326396Z 2023-01-11T21:41:24.1326473Z aten = torch.ops.aten 2023-01-11T21:41:24.1326604Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1326692Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1326697Z 2023-01-11T21:41:24.1326701Z 2023-01-11T21:41:24.1326832Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1327022Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1327140Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1327237Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1327296Z { 2023-01-11T21:41:24.1327389Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1327447Z { 2023-01-11T21:41:24.1327521Z #pragma omp for 2023-01-11T21:41:24.1327591Z for(long i0=0; i0<125; i0+=1) 2023-01-11T21:41:24.1327650Z { 2023-01-11T21:41:24.1327808Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1327940Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1328022Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1328111Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1328172Z } 2023-01-11T21:41:24.1328264Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1328336Z for(long i0=1000; i0<1000; i0+=1) 2023-01-11T21:41:24.1328399Z { 2023-01-11T21:41:24.1328481Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1328580Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.1328666Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1328743Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1328804Z } 2023-01-11T21:41:24.1328853Z } 2023-01-11T21:41:24.1328915Z } 2023-01-11T21:41:24.1328992Z ''') 2023-01-11T21:41:24.1328997Z 2023-01-11T21:41:24.1329001Z 2023-01-11T21:41:24.1329086Z async_compile.wait(globals()) 2023-01-11T21:41:24.1329159Z del async_compile 2023-01-11T21:41:24.1329164Z 2023-01-11T21:41:24.1329232Z def call(args): 2023-01-11T21:41:24.1329298Z arg0_1, = args 2023-01-11T21:41:24.1329355Z args.clear() 2023-01-11T21:41:24.1329565Z buf0 = empty_strided((10, 5, 20), (100, 20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1329695Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1329762Z del arg0_1 2023-01-11T21:41:24.1329830Z return (buf0, ) 2023-01-11T21:41:24.1329835Z 
2023-01-11T21:41:24.1329839Z 2023-01-11T21:41:24.1329912Z if __name__ == "__main__": 2023-01-11T21:41:24.1330022Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1330131Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1330330Z arg0_1 = rand_strided((50, 20), (20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1330434Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.1330439Z 2023-01-11T21:41:24.1330445Z 2023-01-11T21:41:24.1330538Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1330604Z import torch 2023-01-11T21:41:24.1330672Z import random 2023-01-11T21:41:24.1330784Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1330901Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1330905Z 2023-01-11T21:41:24.1330969Z aten = torch.ops.aten 2023-01-11T21:41:24.1331098Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1331187Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1331192Z 2023-01-11T21:41:24.1331196Z 2023-01-11T21:41:24.1331327Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1331527Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1331641Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1331739Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1331797Z { 2023-01-11T21:41:24.1331915Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1331974Z { 2023-01-11T21:41:24.1332047Z #pragma omp for 2023-01-11T21:41:24.1332126Z for(long i0=0; i0<125; i0+=1) 2023-01-11T21:41:24.1332187Z { 2023-01-11T21:41:24.1332316Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1332445Z auto tmp1 = at::vec::Vectorized(static_cast(2)); 2023-01-11T21:41:24.1332517Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1332642Z auto tmp3 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1332722Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1332809Z tmp4.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1332868Z } 2023-01-11T21:41:24.1332958Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1333042Z for(long i0=1000; i0<1000; i0+=1) 2023-01-11T21:41:24.1333093Z { 2023-01-11T21:41:24.1333205Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1333304Z auto tmp1 = static_cast(2); 2023-01-11T21:41:24.1333384Z auto tmp2 = tmp0 * tmp1; 2023-01-11T21:41:24.1333479Z auto tmp3 = static_cast(1); 2023-01-11T21:41:24.1333558Z auto tmp4 = tmp2 + tmp3; 2023-01-11T21:41:24.1333635Z out_ptr0[i0] = tmp4; 2023-01-11T21:41:24.1333684Z } 2023-01-11T21:41:24.1333743Z } 2023-01-11T21:41:24.1333801Z } 2023-01-11T21:41:24.1333876Z ''') 2023-01-11T21:41:24.1333882Z 2023-01-11T21:41:24.1333886Z 2023-01-11T21:41:24.1333972Z async_compile.wait(globals()) 2023-01-11T21:41:24.1334043Z del async_compile 2023-01-11T21:41:24.1334047Z 2023-01-11T21:41:24.1334115Z def call(args): 2023-01-11T21:41:24.1334180Z arg0_1, = args 2023-01-11T21:41:24.1334236Z args.clear() 2023-01-11T21:41:24.1334446Z buf0 = empty_strided((10, 5, 20), (100, 20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1334580Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1334647Z del arg0_1 2023-01-11T21:41:24.1334717Z return (buf0, ) 2023-01-11T21:41:24.1334721Z 2023-01-11T21:41:24.1334725Z 2023-01-11T21:41:24.1334798Z if __name__ == "__main__": 2023-01-11T21:41:24.1334909Z from torch._dynamo.testing import rand_strided 
2023-01-11T21:41:24.1335018Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1335215Z arg0_1 = rand_strided((50, 20), (20, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1335318Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.1335323Z 2023-01-11T21:41:24.1335386Z ok (6.193s) 2023-01-11T21:41:24.1335851Z test_views3_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1335977Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1336239Z [2023-01-11 21:39:52,983] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 519 2023-01-11T21:41:24.1336502Z [2023-01-11 21:39:54,529] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 519 2023-01-11T21:41:24.1336507Z 2023-01-11T21:41:24.1336598Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1336665Z import torch 2023-01-11T21:41:24.1336722Z import random 2023-01-11T21:41:24.1336832Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1336951Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1336956Z 2023-01-11T21:41:24.1337031Z aten = torch.ops.aten 2023-01-11T21:41:24.1337160Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1337250Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1337286Z 2023-01-11T21:41:24.1337290Z 2023-01-11T21:41:24.1337422Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1337624Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1337730Z extern "C" void kernel(const long* __restrict__ in_ptr0, 2023-01-11T21:41:24.1337832Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1337929Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1337988Z { 2023-01-11T21:41:24.1338085Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1338145Z { 2023-01-11T21:41:24.1338218Z #pragma omp for 2023-01-11T21:41:24.1338288Z for(long i0=0; i0<744; i0+=1) 2023-01-11T21:41:24.1338348Z { 2023-01-11T21:41:24.1338424Z #pragma GCC ivdep 2023-01-11T21:41:24.1338511Z for(long i1=0; i1<192; i1+=1) 2023-01-11T21:41:24.1338575Z { 2023-01-11T21:41:24.1338638Z { 2023-01-11T21:41:24.1338723Z { 2023-01-11T21:41:24.1338830Z auto tmp0 = in_ptr0[(3*i0) + (i1 / 64)]; 2023-01-11T21:41:24.1338937Z auto tmp1 = in_ptr1[(64*tmp0) + (i1 % 64)]; 2023-01-11T21:41:24.1339030Z out_ptr0[i1 + (192*i0)] = tmp1; 2023-01-11T21:41:24.1339094Z } 2023-01-11T21:41:24.1339156Z } 2023-01-11T21:41:24.1339221Z } 2023-01-11T21:41:24.1339270Z } 2023-01-11T21:41:24.1339330Z } 2023-01-11T21:41:24.1339389Z } 2023-01-11T21:41:24.1339465Z ''') 2023-01-11T21:41:24.1339470Z 2023-01-11T21:41:24.1339474Z 2023-01-11T21:41:24.1339560Z async_compile.wait(globals()) 2023-01-11T21:41:24.1339629Z del async_compile 2023-01-11T21:41:24.1339634Z 2023-01-11T21:41:24.1339702Z def call(args): 2023-01-11T21:41:24.1339763Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1339830Z args.clear() 2023-01-11T21:41:24.1340060Z buf0 = empty_strided((1, 12, 62, 192), (142848, 11904, 192, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1340224Z 
kernel_cpp_0(c_void_p(arg1_1.data_ptr()), c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1340289Z del arg0_1 2023-01-11T21:41:24.1340352Z del arg1_1 2023-01-11T21:41:24.1340421Z return (buf0, ) 2023-01-11T21:41:24.1340426Z 2023-01-11T21:41:24.1340430Z 2023-01-11T21:41:24.1340503Z if __name__ == "__main__": 2023-01-11T21:41:24.1340603Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1340725Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1340923Z arg0_1 = rand_strided((64, 64), (64, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1341117Z arg1_1 = rand_strided((2232, ), (1, ), device='cpu', dtype=torch.int64) 2023-01-11T21:41:24.1341228Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1341233Z 2023-01-11T21:41:24.1341298Z ok (1.610s) 2023-01-11T21:41:24.1341643Z test_zero_dim_reductions_cpu (__main__.CpuTests) ... [2023-01-11 21:39:54,593] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 520 2023-01-11T21:41:24.1341910Z [2023-01-11 21:39:56,089] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 520 2023-01-11T21:41:24.1342164Z [2023-01-11 21:39:56,150] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 521 2023-01-11T21:41:24.1342572Z [2023-01-11 21:39:56,157] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 521 2023-01-11T21:41:24.1342583Z 2023-01-11T21:41:24.1342686Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1342755Z import torch 2023-01-11T21:41:24.1342825Z import random 2023-01-11T21:41:24.1342938Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1343058Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1343063Z 2023-01-11T21:41:24.1343137Z aten = torch.ops.aten 2023-01-11T21:41:24.1343325Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1343417Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1343422Z 2023-01-11T21:41:24.1343426Z 2023-01-11T21:41:24.1343565Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1343768Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1343878Z extern "C" void kernel(bool* __restrict__ out_ptr0) 2023-01-11T21:41:24.1343939Z { 2023-01-11T21:41:24.1344013Z #pragma GCC ivdep 2023-01-11T21:41:24.1344092Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.1344139Z { 2023-01-11T21:41:24.1344200Z { 2023-01-11T21:41:24.1344263Z { 2023-01-11T21:41:24.1344366Z auto tmp0 = static_cast(false); 2023-01-11T21:41:24.1344451Z auto tmp1 = tmp0 == 0; 2023-01-11T21:41:24.1344532Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:24.1344593Z } 2023-01-11T21:41:24.1344644Z } 2023-01-11T21:41:24.1344739Z } 2023-01-11T21:41:24.1344798Z } 2023-01-11T21:41:24.1344877Z ''') 2023-01-11T21:41:24.1344883Z 2023-01-11T21:41:24.1344887Z 2023-01-11T21:41:24.1344975Z async_compile.wait(globals()) 2023-01-11T21:41:24.1345045Z del async_compile 2023-01-11T21:41:24.1345050Z 2023-01-11T21:41:24.1345119Z def call(args): 2023-01-11T21:41:24.1345175Z arg0_1, = args 2023-01-11T21:41:24.1345240Z args.clear() 2023-01-11T21:41:24.1345429Z buf0 = empty_strided((2, 1), (1, 1), device='cpu', dtype=torch.bool) 2023-01-11T21:41:24.1345530Z kernel_cpp_0(c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1345600Z return (buf0, ) 2023-01-11T21:41:24.1345604Z 2023-01-11T21:41:24.1345609Z 2023-01-11T21:41:24.1345682Z if 
__name__ == "__main__": 2023-01-11T21:41:24.1345791Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1345901Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1346098Z arg0_1 = rand_strided((2, 0), (1, 1), device='cpu', dtype=torch.float16) 2023-01-11T21:41:24.1346205Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.1346210Z 2023-01-11T21:41:24.1346214Z 2023-01-11T21:41:24.1346305Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1346372Z import torch 2023-01-11T21:41:24.1346439Z import random 2023-01-11T21:41:24.1346550Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1346666Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1346670Z 2023-01-11T21:41:24.1346736Z aten = torch.ops.aten 2023-01-11T21:41:24.1346869Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1346958Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1346963Z 2023-01-11T21:41:24.1346968Z 2023-01-11T21:41:24.1347102Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1347305Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1347411Z extern "C" void kernel(bool* __restrict__ out_ptr0) 2023-01-11T21:41:24.1347475Z { 2023-01-11T21:41:24.1347550Z #pragma GCC ivdep 2023-01-11T21:41:24.1347617Z for(long i0=0; i0<2; i0+=1) 2023-01-11T21:41:24.1347678Z { 2023-01-11T21:41:24.1347738Z { 2023-01-11T21:41:24.1347798Z { 2023-01-11T21:41:24.1347901Z auto tmp0 = static_cast(false); 2023-01-11T21:41:24.1347984Z auto tmp1 = tmp0 == 0; 2023-01-11T21:41:24.1348064Z out_ptr0[i0] = tmp1; 2023-01-11T21:41:24.1348114Z } 2023-01-11T21:41:24.1348174Z } 2023-01-11T21:41:24.1348233Z } 2023-01-11T21:41:24.1348289Z } 2023-01-11T21:41:24.1348363Z ''') 2023-01-11T21:41:24.1348368Z 2023-01-11T21:41:24.1348372Z 2023-01-11T21:41:24.1348458Z async_compile.wait(globals()) 2023-01-11T21:41:24.1348528Z del async_compile 2023-01-11T21:41:24.1348533Z 2023-01-11T21:41:24.1348590Z def call(args): 2023-01-11T21:41:24.1348656Z arg0_1, = args 2023-01-11T21:41:24.1348725Z args.clear() 2023-01-11T21:41:24.1348940Z buf0 = empty_strided((2, ), (1, ), device='cpu', dtype=torch.bool) 2023-01-11T21:41:24.1349044Z kernel_cpp_0(c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1349111Z return (buf0, ) 2023-01-11T21:41:24.1349117Z 2023-01-11T21:41:24.1349121Z 2023-01-11T21:41:24.1349195Z if __name__ == "__main__": 2023-01-11T21:41:24.1349295Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1349413Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1349609Z arg0_1 = rand_strided((2, 0), (1, 1), device='cpu', dtype=torch.float16) 2023-01-11T21:41:24.1349714Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.1349719Z 2023-01-11T21:41:24.1349783Z ok (1.627s) 2023-01-11T21:41:24.1350276Z test_zeros_cpu (__main__.CpuTests) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1350403Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1350662Z [2023-01-11 21:39:56,267] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 522 2023-01-11T21:41:24.1350927Z [2023-01-11 21:39:57,826] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 522 2023-01-11T21:41:24.1350932Z 2023-01-11T21:41:24.1351028Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1351085Z import torch 2023-01-11T21:41:24.1351154Z import random 2023-01-11T21:41:24.1351269Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1351387Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1351392Z 2023-01-11T21:41:24.1351467Z aten = torch.ops.aten 2023-01-11T21:41:24.1351597Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1351693Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1351698Z 2023-01-11T21:41:24.1351702Z 2023-01-11T21:41:24.1351833Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1352024Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1352140Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1352237Z float* __restrict__ out_ptr0, 2023-01-11T21:41:24.1352333Z float* __restrict__ out_ptr1, 2023-01-11T21:41:24.1352425Z float* __restrict__ out_ptr2, 2023-01-11T21:41:24.1352515Z float* __restrict__ out_ptr3, 2023-01-11T21:41:24.1352606Z float* __restrict__ out_ptr4, 2023-01-11T21:41:24.1352684Z float* __restrict__ out_ptr5) 2023-01-11T21:41:24.1352744Z { 2023-01-11T21:41:24.1352840Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1352903Z { 2023-01-11T21:41:24.1352981Z #pragma omp for 2023-01-11T21:41:24.1353059Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:24.1353118Z { 2023-01-11T21:41:24.1353240Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1353370Z auto tmp1 = at::vec::Vectorized(static_cast(1)); 2023-01-11T21:41:24.1353453Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1353541Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1353629Z tmp2.store(out_ptr1 + 8*i0); 2023-01-11T21:41:24.1353689Z } 2023-01-11T21:41:24.1353841Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1353909Z for(long i0=8; i0<8; i0+=1) 2023-01-11T21:41:24.1353969Z { 2023-01-11T21:41:24.1354051Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1354149Z auto tmp1 = static_cast(1); 2023-01-11T21:41:24.1354230Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1354346Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1354424Z out_ptr1[i0] = tmp2; 2023-01-11T21:41:24.1354473Z } 2023-01-11T21:41:24.1354549Z #pragma omp for 2023-01-11T21:41:24.1354631Z for(long i0=0; i0<4096; i0+=1) 2023-01-11T21:41:24.1354691Z { 2023-01-11T21:41:24.1354823Z auto tmp0 = at::vec::Vectorized(static_cast(0)); 2023-01-11T21:41:24.1354910Z tmp0.store(out_ptr2 + 8*i0); 2023-01-11T21:41:24.1354971Z } 2023-01-11T21:41:24.1355050Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1355136Z for(long i0=32768; i0<32768; i0+=1) 2023-01-11T21:41:24.1355196Z { 2023-01-11T21:41:24.1355294Z auto tmp0 = static_cast(0); 2023-01-11T21:41:24.1355371Z out_ptr2[i0] = tmp0; 2023-01-11T21:41:24.1355431Z } 2023-01-11T21:41:24.1355504Z #pragma omp for 2023-01-11T21:41:24.1355573Z for(long i0=0; i0<4096; i0+=1) 2023-01-11T21:41:24.1355635Z { 
2023-01-11T21:41:24.1355791Z auto tmp0 = at::vec::Vectorized(static_cast(0)); 2023-01-11T21:41:24.1355879Z tmp0.store(out_ptr3 + 8*i0); 2023-01-11T21:41:24.1355940Z } 2023-01-11T21:41:24.1356030Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1356114Z for(long i0=32768; i0<32768; i0+=1) 2023-01-11T21:41:24.1356164Z { 2023-01-11T21:41:24.1356256Z auto tmp0 = static_cast(0); 2023-01-11T21:41:24.1356333Z out_ptr3[i0] = tmp0; 2023-01-11T21:41:24.1356393Z } 2023-01-11T21:41:24.1356468Z #pragma omp for 2023-01-11T21:41:24.1356546Z for(long i0=0; i0<6; i0+=1) 2023-01-11T21:41:24.1356595Z { 2023-01-11T21:41:24.1356656Z { 2023-01-11T21:41:24.1356719Z { 2023-01-11T21:41:24.1356821Z auto tmp0 = static_cast(0); 2023-01-11T21:41:24.1356904Z out_ptr4[i0] = tmp0; 2023-01-11T21:41:24.1356968Z } 2023-01-11T21:41:24.1357031Z } 2023-01-11T21:41:24.1357079Z } 2023-01-11T21:41:24.1357152Z #pragma omp for 2023-01-11T21:41:24.1357232Z for(long i0=0; i0<6; i0+=1) 2023-01-11T21:41:24.1357292Z { 2023-01-11T21:41:24.1357356Z { 2023-01-11T21:41:24.1357420Z { 2023-01-11T21:41:24.1357528Z auto tmp0 = static_cast(3.1416); 2023-01-11T21:41:24.1357598Z out_ptr5[i0] = tmp0; 2023-01-11T21:41:24.1357660Z } 2023-01-11T21:41:24.1357721Z } 2023-01-11T21:41:24.1357780Z } 2023-01-11T21:41:24.1357842Z } 2023-01-11T21:41:24.1357900Z } 2023-01-11T21:41:24.1357968Z ''') 2023-01-11T21:41:24.1357973Z 2023-01-11T21:41:24.1357988Z 2023-01-11T21:41:24.1358064Z async_compile.wait(globals()) 2023-01-11T21:41:24.1358133Z del async_compile 2023-01-11T21:41:24.1358138Z 2023-01-11T21:41:24.1358205Z def call(args): 2023-01-11T21:41:24.1358271Z arg0_1, = args 2023-01-11T21:41:24.1358344Z args.clear() 2023-01-11T21:41:24.1358536Z buf0 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1358723Z buf4 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1358932Z buf1 = empty_strided((1, 8, 64, 64), (32768, 4096, 64, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1359146Z buf2 = empty_strided((1, 8, 64, 64), (32768, 4096, 64, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1359335Z buf3 = empty_strided((2, 3), (3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1359523Z buf5 = empty_strided((2, 3), (3, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1359779Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()), c_void_p(buf4.data_ptr()), c_void_p(buf1.data_ptr()), c_void_p(buf2.data_ptr()), c_void_p(buf3.data_ptr()), c_void_p(buf5.data_ptr())) 2023-01-11T21:41:24.1359844Z del arg0_1 2023-01-11T21:41:24.1359993Z return (buf0, buf1, buf2, buf3, buf4, buf5, ) 2023-01-11T21:41:24.1359999Z 2023-01-11T21:41:24.1360003Z 2023-01-11T21:41:24.1360077Z if __name__ == "__main__": 2023-01-11T21:41:24.1360190Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1360300Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1360489Z arg0_1 = rand_strided((8, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1360594Z print_performance(lambda: call([arg0_1])) 2023-01-11T21:41:24.1360599Z 2023-01-11T21:41:24.1360664Z ok (1.670s) 2023-01-11T21:41:24.1360784Z test_print_pow (__main__.ExprPrinterTests) ... ok (0.003s) 2023-01-11T21:41:24.1361311Z test_cpu_broadcast1_broadcast1 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. 
It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1361440Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1361701Z [2023-01-11 21:39:57,845] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 523 2023-01-11T21:41:24.1361968Z [2023-01-11 21:39:57,853] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 523 2023-01-11T21:41:24.1361974Z 2023-01-11T21:41:24.1362055Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1362122Z import torch 2023-01-11T21:41:24.1362191Z import random 2023-01-11T21:41:24.1362301Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1362420Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1362426Z 2023-01-11T21:41:24.1362500Z aten = torch.ops.aten 2023-01-11T21:41:24.1362629Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1362723Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1362728Z 2023-01-11T21:41:24.1362733Z 2023-01-11T21:41:24.1362853Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1363053Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1363172Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1363272Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1363369Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1363427Z { 2023-01-11T21:41:24.1363523Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1363571Z { 2023-01-11T21:41:24.1363644Z #pragma omp for 2023-01-11T21:41:24.1363724Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:24.1363783Z { 2023-01-11T21:41:24.1363911Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1364038Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1364125Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1364213Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1364262Z } 2023-01-11T21:41:24.1364354Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1364433Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:24.1364493Z { 2023-01-11T21:41:24.1364573Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1364651Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1364718Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1364795Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1364854Z } 2023-01-11T21:41:24.1364912Z } 2023-01-11T21:41:24.1364970Z } 2023-01-11T21:41:24.1365046Z ''') 2023-01-11T21:41:24.1365051Z 2023-01-11T21:41:24.1365055Z 2023-01-11T21:41:24.1365141Z async_compile.wait(globals()) 2023-01-11T21:41:24.1365200Z del async_compile 2023-01-11T21:41:24.1365216Z 2023-01-11T21:41:24.1365272Z def call(args): 2023-01-11T21:41:24.1365379Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1365449Z args.clear() 2023-01-11T21:41:24.1365643Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1365804Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1365870Z del arg0_1 2023-01-11T21:41:24.1365934Z del arg1_1 2023-01-11T21:41:24.1365993Z return (buf0, ) 2023-01-11T21:41:24.1365998Z 2023-01-11T21:41:24.1366002Z 2023-01-11T21:41:24.1366077Z if __name__ == "__main__": 2023-01-11T21:41:24.1366188Z from 
torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1366308Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1366499Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1366688Z arg1_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1366804Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1366839Z 2023-01-11T21:41:24.1366904Z ok (0.021s) 2023-01-11T21:41:24.1367390Z test_cpu_broadcast1_broadcast2 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1367515Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1367773Z [2023-01-11 21:39:57,866] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 524 2023-01-11T21:41:24.1368037Z [2023-01-11 21:39:59,385] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 524 2023-01-11T21:41:24.1368042Z 2023-01-11T21:41:24.1368134Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1368205Z import torch 2023-01-11T21:41:24.1368273Z import random 2023-01-11T21:41:24.1368383Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1368499Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1368504Z 2023-01-11T21:41:24.1368568Z aten = torch.ops.aten 2023-01-11T21:41:24.1368699Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1368788Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1368793Z 2023-01-11T21:41:24.1368798Z 2023-01-11T21:41:24.1368931Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1369132Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1369246Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1369347Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1369442Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1369490Z { 2023-01-11T21:41:24.1369592Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1369651Z { 2023-01-11T21:41:24.1369726Z #pragma omp for 2023-01-11T21:41:24.1369805Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1369868Z { 2023-01-11T21:41:24.1369948Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.1369998Z { 2023-01-11T21:41:24.1370128Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i1); 2023-01-11T21:41:24.1370253Z auto tmp1 = at::vec::Vectorized(in_ptr1[i0]); 2023-01-11T21:41:24.1370339Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1370439Z tmp2.store(out_ptr0 + (8*i1) + (10*i0)); 2023-01-11T21:41:24.1370499Z } 2023-01-11T21:41:24.1370588Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.1370658Z for(long i1=8; i1<10; i1+=1) 2023-01-11T21:41:24.1370718Z { 2023-01-11T21:41:24.1370800Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:24.1370913Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1370996Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1371085Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1371146Z } 2023-01-11T21:41:24.1371197Z } 2023-01-11T21:41:24.1371258Z } 2023-01-11T21:41:24.1371316Z } 2023-01-11T21:41:24.1371393Z ''') 
2023-01-11T21:41:24.1371398Z 2023-01-11T21:41:24.1371402Z 2023-01-11T21:41:24.1371489Z async_compile.wait(globals()) 2023-01-11T21:41:24.1371560Z del async_compile 2023-01-11T21:41:24.1371565Z 2023-01-11T21:41:24.1371632Z def call(args): 2023-01-11T21:41:24.1371693Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1371764Z args.clear() 2023-01-11T21:41:24.1371974Z buf0 = empty_strided((1, 10, 10), (100, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1372131Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1372195Z del arg0_1 2023-01-11T21:41:24.1372288Z del arg1_1 2023-01-11T21:41:24.1372358Z return (buf0, ) 2023-01-11T21:41:24.1372362Z 2023-01-11T21:41:24.1372366Z 2023-01-11T21:41:24.1372443Z if __name__ == "__main__": 2023-01-11T21:41:24.1372544Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1372665Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1372860Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1373061Z arg1_1 = rand_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1373173Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1373178Z 2023-01-11T21:41:24.1373242Z ok (1.533s) 2023-01-11T21:41:24.1373738Z test_cpu_broadcast1_broadcast3 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1373865Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1374123Z [2023-01-11 21:39:59,400] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 525 2023-01-11T21:41:24.1374376Z [2023-01-11 21:40:00,925] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 525 2023-01-11T21:41:24.1374381Z 2023-01-11T21:41:24.1374471Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1374542Z import torch 2023-01-11T21:41:24.1374612Z import random 2023-01-11T21:41:24.1374724Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1374842Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1374847Z 2023-01-11T21:41:24.1374922Z aten = torch.ops.aten 2023-01-11T21:41:24.1375055Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1375135Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1375141Z 2023-01-11T21:41:24.1375145Z 2023-01-11T21:41:24.1375275Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1375477Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1375594Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1375697Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1375793Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1375854Z { 2023-01-11T21:41:24.1375937Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1375995Z { 2023-01-11T21:41:24.1376070Z #pragma omp for 2023-01-11T21:41:24.1376149Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:24.1376211Z { 2023-01-11T21:41:24.1376344Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 
2023-01-11T21:41:24.1376518Z auto tmp1 = at::vec::Vectorized(in_ptr1[0]); 2023-01-11T21:41:24.1376636Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1376746Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1376830Z } 2023-01-11T21:41:24.1376927Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1377008Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:24.1377070Z { 2023-01-11T21:41:24.1377153Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1377223Z auto tmp1 = in_ptr1[0]; 2023-01-11T21:41:24.1377305Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1377383Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1377445Z } 2023-01-11T21:41:24.1377506Z } 2023-01-11T21:41:24.1377565Z } 2023-01-11T21:41:24.1377645Z ''') 2023-01-11T21:41:24.1377651Z 2023-01-11T21:41:24.1377657Z 2023-01-11T21:41:24.1377733Z async_compile.wait(globals()) 2023-01-11T21:41:24.1377807Z del async_compile 2023-01-11T21:41:24.1377815Z 2023-01-11T21:41:24.1377920Z def call(args): 2023-01-11T21:41:24.1377995Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1378064Z args.clear() 2023-01-11T21:41:24.1378257Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1378417Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1378484Z del arg0_1 2023-01-11T21:41:24.1378539Z del arg1_1 2023-01-11T21:41:24.1378609Z return (buf0, ) 2023-01-11T21:41:24.1378615Z 2023-01-11T21:41:24.1378619Z 2023-01-11T21:41:24.1378694Z if __name__ == "__main__": 2023-01-11T21:41:24.1378804Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1378925Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1379117Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1379307Z arg1_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1379413Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1379429Z 2023-01-11T21:41:24.1379484Z ok (1.540s) 2023-01-11T21:41:24.1379977Z test_cpu_broadcast1_dense (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1380104Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1380364Z [2023-01-11 21:40:00,941] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 526 2023-01-11T21:41:24.1380628Z [2023-01-11 21:40:02,454] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 526 2023-01-11T21:41:24.1380633Z 2023-01-11T21:41:24.1380733Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1380802Z import torch 2023-01-11T21:41:24.1380870Z import random 2023-01-11T21:41:24.1380983Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1381088Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1381093Z 2023-01-11T21:41:24.1381169Z aten = torch.ops.aten 2023-01-11T21:41:24.1381299Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1381387Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1381392Z 2023-01-11T21:41:24.1381396Z 2023-01-11T21:41:24.1381526Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1381728Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1381847Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1381951Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1382037Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1382127Z { 2023-01-11T21:41:24.1382224Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1382284Z { 2023-01-11T21:41:24.1382462Z #pragma omp for 2023-01-11T21:41:24.1382545Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1382606Z { 2023-01-11T21:41:24.1382676Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.1382736Z { 2023-01-11T21:41:24.1382870Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i1); 2023-01-11T21:41:24.1383013Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + (8*i1) + (10*i0)); 2023-01-11T21:41:24.1383100Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1383204Z tmp2.store(out_ptr0 + (8*i1) + (10*i0)); 2023-01-11T21:41:24.1383266Z } 2023-01-11T21:41:24.1383343Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.1383426Z for(long i1=8; i1<10; i1+=1) 2023-01-11T21:41:24.1383489Z { 2023-01-11T21:41:24.1383625Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:24.1383720Z auto tmp1 = in_ptr1[i1 + (10*i0)]; 2023-01-11T21:41:24.1383802Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1383897Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1383947Z } 2023-01-11T21:41:24.1384008Z } 2023-01-11T21:41:24.1384069Z } 2023-01-11T21:41:24.1384129Z } 2023-01-11T21:41:24.1384208Z ''') 2023-01-11T21:41:24.1384213Z 2023-01-11T21:41:24.1384217Z 2023-01-11T21:41:24.1384305Z async_compile.wait(globals()) 2023-01-11T21:41:24.1384375Z del async_compile 2023-01-11T21:41:24.1384380Z 2023-01-11T21:41:24.1384436Z def call(args): 2023-01-11T21:41:24.1384509Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1384577Z args.clear() 2023-01-11T21:41:24.1384775Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1384943Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1385013Z del arg0_1 2023-01-11T21:41:24.1385080Z del arg1_1 2023-01-11T21:41:24.1385137Z return (buf0, ) 2023-01-11T21:41:24.1385154Z 2023-01-11T21:41:24.1385158Z 2023-01-11T21:41:24.1385219Z if __name__ == "__main__": 
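    # Standalone benchmark entry point: rebuild random inputs with the recorded
    # sizes and strides, then time call() on them.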
2023-01-11T21:41:24.1385329Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1385448Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1385641Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1385840Z arg1_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1385951Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1385956Z 2023-01-11T21:41:24.1386020Z ok (1.528s) 2023-01-11T21:41:24.1386512Z test_cpu_broadcast1_double (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1386638Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1386887Z [2023-01-11 21:40:02,468] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 527 2023-01-11T21:41:24.1387152Z [2023-01-11 21:40:04,013] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 527 2023-01-11T21:41:24.1387157Z 2023-01-11T21:41:24.1387271Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1387369Z import torch 2023-01-11T21:41:24.1387467Z import random 2023-01-11T21:41:24.1387620Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1387766Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1387772Z 2023-01-11T21:41:24.1387902Z aten = torch.ops.aten 2023-01-11T21:41:24.1388026Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1388115Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1388120Z 2023-01-11T21:41:24.1388125Z 2023-01-11T21:41:24.1388264Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1388469Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1388585Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1388690Z const double* __restrict__ in_ptr1, 2023-01-11T21:41:24.1388788Z double* __restrict__ out_ptr0) 2023-01-11T21:41:24.1388848Z { 2023-01-11T21:41:24.1388932Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1388993Z { 2023-01-11T21:41:24.1389069Z #pragma omp for 2023-01-11T21:41:24.1389150Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1389211Z { 2023-01-11T21:41:24.1389290Z #pragma GCC ivdep 2023-01-11T21:41:24.1389396Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1389458Z { 2023-01-11T21:41:24.1389521Z { 2023-01-11T21:41:24.1389587Z { 2023-01-11T21:41:24.1389681Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:24.1389781Z auto tmp2 = in_ptr1[i1 + (10*i0)]; 2023-01-11T21:41:24.1389891Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.1389970Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.1390063Z out_ptr0[i1 + (10*i0)] = tmp3; 2023-01-11T21:41:24.1390128Z } 2023-01-11T21:41:24.1390194Z } 2023-01-11T21:41:24.1390255Z } 2023-01-11T21:41:24.1390316Z } 2023-01-11T21:41:24.1390382Z } 2023-01-11T21:41:24.1390429Z } 2023-01-11T21:41:24.1390506Z ''') 2023-01-11T21:41:24.1390511Z 2023-01-11T21:41:24.1390515Z 2023-01-11T21:41:24.1390605Z async_compile.wait(globals()) 2023-01-11T21:41:24.1390680Z del async_compile 2023-01-11T21:41:24.1390685Z 
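# call() is the generated wrapper: it unpacks the inputs, allocates the output
# buffer, and dispatches the C++ kernel through raw ctypes pointers.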
2023-01-11T21:41:24.1390754Z def call(args): 2023-01-11T21:41:24.1390828Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1390900Z args.clear() 2023-01-11T21:41:24.1391086Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1391248Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1391314Z del arg0_1 2023-01-11T21:41:24.1391379Z del arg1_1 2023-01-11T21:41:24.1391448Z return (buf0, ) 2023-01-11T21:41:24.1391452Z 2023-01-11T21:41:24.1391456Z 2023-01-11T21:41:24.1391531Z if __name__ == "__main__": 2023-01-11T21:41:24.1391642Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1391762Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1391942Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1392145Z arg1_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1392257Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1392261Z 2023-01-11T21:41:24.1392327Z ok (1.560s) 2023-01-11T21:41:24.1392814Z test_cpu_broadcast1_int (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1392937Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1393195Z [2023-01-11 21:40:04,028] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 528 2023-01-11T21:41:24.1393464Z [2023-01-11 21:40:05,670] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 528 2023-01-11T21:41:24.1393500Z 2023-01-11T21:41:24.1393592Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1393659Z import torch 2023-01-11T21:41:24.1393715Z import random 2023-01-11T21:41:24.1393885Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1394004Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1394009Z 2023-01-11T21:41:24.1394084Z aten = torch.ops.aten 2023-01-11T21:41:24.1394216Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1394305Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1394310Z 2023-01-11T21:41:24.1394314Z 2023-01-11T21:41:24.1394452Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1394642Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1394758Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1394857Z const int* __restrict__ in_ptr1, 2023-01-11T21:41:24.1394988Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1395048Z { 2023-01-11T21:41:24.1395144Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1395205Z { 2023-01-11T21:41:24.1395268Z #pragma omp for 2023-01-11T21:41:24.1395348Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1395407Z { 2023-01-11T21:41:24.1395468Z { 2023-01-11T21:41:24.1395529Z { 2023-01-11T21:41:24.1395619Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1395706Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1395800Z auto tmp2 = static_cast(tmp1); 2023-01-11T21:41:24.1395888Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:24.1395971Z 
out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.1396033Z } 2023-01-11T21:41:24.1396094Z } 2023-01-11T21:41:24.1396153Z } 2023-01-11T21:41:24.1396214Z } 2023-01-11T21:41:24.1396263Z } 2023-01-11T21:41:24.1396340Z ''') 2023-01-11T21:41:24.1396345Z 2023-01-11T21:41:24.1396349Z 2023-01-11T21:41:24.1396434Z async_compile.wait(globals()) 2023-01-11T21:41:24.1396504Z del async_compile 2023-01-11T21:41:24.1396509Z 2023-01-11T21:41:24.1396577Z def call(args): 2023-01-11T21:41:24.1396649Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1396716Z args.clear() 2023-01-11T21:41:24.1396896Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1397054Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1397121Z del arg0_1 2023-01-11T21:41:24.1397185Z del arg1_1 2023-01-11T21:41:24.1397254Z return (buf0, ) 2023-01-11T21:41:24.1397259Z 2023-01-11T21:41:24.1397263Z 2023-01-11T21:41:24.1397337Z if __name__ == "__main__": 2023-01-11T21:41:24.1397448Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1397570Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1397752Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1397940Z arg1_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:24.1398098Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1398107Z 2023-01-11T21:41:24.1398190Z ok (1.657s) 2023-01-11T21:41:24.1398685Z test_cpu_broadcast1_strided (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1398813Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1399079Z [2023-01-11 21:40:05,687] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 529 2023-01-11T21:41:24.1399396Z [2023-01-11 21:40:07,278] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 529 2023-01-11T21:41:24.1399401Z 2023-01-11T21:41:24.1399495Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1399562Z import torch 2023-01-11T21:41:24.1399620Z import random 2023-01-11T21:41:24.1399730Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1399848Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1399853Z 2023-01-11T21:41:24.1399930Z aten = torch.ops.aten 2023-01-11T21:41:24.1400063Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1400153Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1400158Z 2023-01-11T21:41:24.1400162Z 2023-01-11T21:41:24.1400294Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1400498Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1400670Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1400776Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1400873Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1400932Z { 2023-01-11T21:41:24.1401027Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1401088Z { 2023-01-11T21:41:24.1401163Z #pragma omp for 2023-01-11T21:41:24.1401233Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1401293Z { 2023-01-11T21:41:24.1401371Z #pragma GCC ivdep 2023-01-11T21:41:24.1401455Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1401520Z { 2023-01-11T21:41:24.1401585Z { 2023-01-11T21:41:24.1401639Z { 2023-01-11T21:41:24.1401733Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:24.1401838Z auto tmp1 = in_ptr1[(2*i1) + (30*i0)]; 2023-01-11T21:41:24.1401936Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1402032Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1402095Z } 2023-01-11T21:41:24.1402160Z } 2023-01-11T21:41:24.1402208Z } 2023-01-11T21:41:24.1402271Z } 2023-01-11T21:41:24.1402332Z } 2023-01-11T21:41:24.1402392Z } 2023-01-11T21:41:24.1402470Z ''') 2023-01-11T21:41:24.1402475Z 2023-01-11T21:41:24.1402479Z 2023-01-11T21:41:24.1402566Z async_compile.wait(globals()) 2023-01-11T21:41:24.1402635Z del async_compile 2023-01-11T21:41:24.1402640Z 2023-01-11T21:41:24.1402696Z def call(args): 2023-01-11T21:41:24.1402769Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1402838Z args.clear() 2023-01-11T21:41:24.1403036Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1403193Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1403262Z del arg0_1 2023-01-11T21:41:24.1403328Z del arg1_1 2023-01-11T21:41:24.1403398Z return (buf0, ) 2023-01-11T21:41:24.1403418Z 2023-01-11T21:41:24.1403424Z 2023-01-11T21:41:24.1403514Z if __name__ == "__main__": 2023-01-11T21:41:24.1403664Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1403831Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1404053Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1404376Z arg1_1 = rand_strided((10, 10), (30, 2), 
device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1404649Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1404659Z 2023-01-11T21:41:24.1404790Z ok (1.608s) 2023-01-11T21:41:24.1405370Z test_cpu_broadcast1_transposed (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1405548Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1405829Z [2023-01-11 21:40:07,293] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 530 2023-01-11T21:41:24.1406119Z [2023-01-11 21:40:08,817] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 530 2023-01-11T21:41:24.1406125Z 2023-01-11T21:41:24.1406227Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1406295Z import torch 2023-01-11T21:41:24.1406352Z import random 2023-01-11T21:41:24.1406462Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1406580Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1406585Z 2023-01-11T21:41:24.1406660Z aten = torch.ops.aten 2023-01-11T21:41:24.1406890Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1406985Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1406990Z 2023-01-11T21:41:24.1406994Z 2023-01-11T21:41:24.1407129Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1407333Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1407439Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1407579Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1407695Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1407756Z { 2023-01-11T21:41:24.1407853Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1407913Z { 2023-01-11T21:41:24.1407988Z #pragma omp for 2023-01-11T21:41:24.1408057Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1408118Z { 2023-01-11T21:41:24.1408198Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.1408270Z { 2023-01-11T21:41:24.1408460Z auto tmp0 = at::vec::Vectorized(in_ptr0[i0]); 2023-01-11T21:41:24.1408644Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + (8*i1) + (10*i0)); 2023-01-11T21:41:24.1408731Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1408835Z tmp2.store(out_ptr0 + (8*i1) + (10*i0)); 2023-01-11T21:41:24.1408886Z } 2023-01-11T21:41:24.1408975Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.1409058Z for(long i1=8; i1<10; i1+=1) 2023-01-11T21:41:24.1409149Z { 2023-01-11T21:41:24.1409241Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1409336Z auto tmp1 = in_ptr1[i1 + (10*i0)]; 2023-01-11T21:41:24.1409407Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1409499Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1409558Z } 2023-01-11T21:41:24.1409620Z } 2023-01-11T21:41:24.1409687Z } 2023-01-11T21:41:24.1409748Z } 2023-01-11T21:41:24.1409828Z ''') 2023-01-11T21:41:24.1409834Z 2023-01-11T21:41:24.1409838Z 2023-01-11T21:41:24.1409923Z async_compile.wait(globals()) 2023-01-11T21:41:24.1409982Z del async_compile 2023-01-11T21:41:24.1409987Z 2023-01-11T21:41:24.1410056Z def call(args): 2023-01-11T21:41:24.1410130Z arg0_1, arg1_1 = args 
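    # Clear the argument list so the wrapper holds no extra references and the
    # inputs can be freed as soon as the kernel no longer needs them.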
2023-01-11T21:41:24.1410198Z args.clear() 2023-01-11T21:41:24.1410400Z buf0 = empty_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1410559Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1410625Z del arg0_1 2023-01-11T21:41:24.1410677Z del arg1_1 2023-01-11T21:41:24.1410746Z return (buf0, ) 2023-01-11T21:41:24.1410751Z 2023-01-11T21:41:24.1410755Z 2023-01-11T21:41:24.1410828Z if __name__ == "__main__": 2023-01-11T21:41:24.1410939Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1411062Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1411290Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1411486Z arg1_1 = rand_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1411598Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1411603Z 2023-01-11T21:41:24.1411656Z ok (1.539s) 2023-01-11T21:41:24.1412154Z test_cpu_broadcast2_broadcast1 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1412281Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1412569Z [2023-01-11 21:40:08,833] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 531 2023-01-11T21:41:24.1412836Z [2023-01-11 21:40:10,346] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 531 2023-01-11T21:41:24.1412842Z 2023-01-11T21:41:24.1412934Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1413001Z import torch 2023-01-11T21:41:24.1413070Z import random 2023-01-11T21:41:24.1413183Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1413289Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1413295Z 2023-01-11T21:41:24.1413373Z aten = torch.ops.aten 2023-01-11T21:41:24.1413503Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1413593Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1413598Z 2023-01-11T21:41:24.1413602Z 2023-01-11T21:41:24.1413736Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1413940Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1414060Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1414160Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1414246Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1414306Z { 2023-01-11T21:41:24.1414404Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1414466Z { 2023-01-11T21:41:24.1414543Z #pragma omp for 2023-01-11T21:41:24.1414623Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1414687Z { 2023-01-11T21:41:24.1414757Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.1414823Z { 2023-01-11T21:41:24.1414949Z auto tmp0 = at::vec::Vectorized(in_ptr0[i0]); 2023-01-11T21:41:24.1415083Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i1); 2023-01-11T21:41:24.1415170Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1415274Z tmp2.store(out_ptr0 + (8*i1) + (10*i0)); 2023-01-11T21:41:24.1415339Z } 
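            // Scalar tail loop: covers the remaining elements (i1 = 8..9) that
            // do not fill a whole vector register.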
2023-01-11T21:41:24.1415418Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.1415499Z for(long i1=8; i1<10; i1+=1) 2023-01-11T21:41:24.1415561Z { 2023-01-11T21:41:24.1415645Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1415727Z auto tmp1 = in_ptr1[i1]; 2023-01-11T21:41:24.1415809Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1415897Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1415947Z } 2023-01-11T21:41:24.1416008Z } 2023-01-11T21:41:24.1416068Z } 2023-01-11T21:41:24.1416129Z } 2023-01-11T21:41:24.1416204Z ''') 2023-01-11T21:41:24.1416209Z 2023-01-11T21:41:24.1416214Z 2023-01-11T21:41:24.1416301Z async_compile.wait(globals()) 2023-01-11T21:41:24.1416372Z del async_compile 2023-01-11T21:41:24.1416377Z 2023-01-11T21:41:24.1416433Z def call(args): 2023-01-11T21:41:24.1416505Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1416605Z args.clear() 2023-01-11T21:41:24.1416813Z buf0 = empty_strided((1, 10, 10), (100, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1416972Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1417037Z del arg0_1 2023-01-11T21:41:24.1417101Z del arg1_1 2023-01-11T21:41:24.1417160Z return (buf0, ) 2023-01-11T21:41:24.1417177Z 2023-01-11T21:41:24.1417181Z 2023-01-11T21:41:24.1417242Z if __name__ == "__main__": 2023-01-11T21:41:24.1417354Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1417477Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1417681Z arg0_1 = rand_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1417871Z arg1_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1417984Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1417991Z 2023-01-11T21:41:24.1418093Z ok (1.530s) 2023-01-11T21:41:24.1418589Z test_cpu_broadcast2_broadcast2 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1418715Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1418966Z [2023-01-11 21:40:10,367] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 532 2023-01-11T21:41:24.1419232Z [2023-01-11 21:40:10,379] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 532 2023-01-11T21:41:24.1419237Z 2023-01-11T21:41:24.1419329Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1419398Z import torch 2023-01-11T21:41:24.1419478Z import random 2023-01-11T21:41:24.1419590Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1419708Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1419713Z 2023-01-11T21:41:24.1419788Z aten = torch.ops.aten 2023-01-11T21:41:24.1419910Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1420001Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1420006Z 2023-01-11T21:41:24.1420011Z 2023-01-11T21:41:24.1420143Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1420345Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1420463Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1420565Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1420662Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1420722Z { 2023-01-11T21:41:24.1420811Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1420873Z { 2023-01-11T21:41:24.1420948Z #pragma omp for 2023-01-11T21:41:24.1421027Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:24.1421089Z { 2023-01-11T21:41:24.1421218Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1421346Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1421419Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1421505Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1421566Z } 2023-01-11T21:41:24.1421659Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1421739Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:24.1421802Z { 2023-01-11T21:41:24.1421884Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1421952Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1422030Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1422109Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1422198Z } 2023-01-11T21:41:24.1422257Z } 2023-01-11T21:41:24.1422315Z } 2023-01-11T21:41:24.1422555Z ''') 2023-01-11T21:41:24.1422562Z 2023-01-11T21:41:24.1422583Z 2023-01-11T21:41:24.1422682Z async_compile.wait(globals()) 2023-01-11T21:41:24.1422780Z del async_compile 2023-01-11T21:41:24.1422786Z 2023-01-11T21:41:24.1422877Z def call(args): 2023-01-11T21:41:24.1422970Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1423059Z args.clear() 2023-01-11T21:41:24.1423272Z buf0 = empty_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1423433Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1423489Z del arg0_1 2023-01-11T21:41:24.1423554Z del arg1_1 2023-01-11T21:41:24.1423627Z return (buf0, ) 2023-01-11T21:41:24.1423632Z 2023-01-11T21:41:24.1423636Z 2023-01-11T21:41:24.1423709Z if __name__ == "__main__": 2023-01-11T21:41:24.1423896Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1424020Z from torch._inductor.utils import print_performance 
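    # rand_strided recreates inputs with the exact sizes and strides captured at
    # trace time; print_performance runs call() on them and reports the timing.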
2023-01-11T21:41:24.1424227Z arg0_1 = rand_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1424424Z arg1_1 = rand_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1424524Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1424529Z 2023-01-11T21:41:24.1424594Z ok (0.032s) 2023-01-11T21:41:24.1425088Z test_cpu_broadcast2_broadcast3 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1425215Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1425477Z [2023-01-11 21:40:10,397] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 533 2023-01-11T21:41:24.1425741Z [2023-01-11 21:40:10,405] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 533 2023-01-11T21:41:24.1425746Z 2023-01-11T21:41:24.1425838Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1425908Z import torch 2023-01-11T21:41:24.1425977Z import random 2023-01-11T21:41:24.1426077Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1426194Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1426198Z 2023-01-11T21:41:24.1426276Z aten = torch.ops.aten 2023-01-11T21:41:24.1426407Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1426497Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1426502Z 2023-01-11T21:41:24.1426506Z 2023-01-11T21:41:24.1426638Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1426842Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1426959Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1427049Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1427146Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1427206Z { 2023-01-11T21:41:24.1427301Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1427361Z { 2023-01-11T21:41:24.1427435Z #pragma omp for 2023-01-11T21:41:24.1427515Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:24.1427563Z { 2023-01-11T21:41:24.1427697Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1427819Z auto tmp1 = at::vec::Vectorized(in_ptr1[0]); 2023-01-11T21:41:24.1427902Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1427991Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1428094Z } 2023-01-11T21:41:24.1428187Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1428254Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:24.1428314Z { 2023-01-11T21:41:24.1428393Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1428473Z auto tmp1 = in_ptr1[0]; 2023-01-11T21:41:24.1428552Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1428629Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1428689Z } 2023-01-11T21:41:24.1428735Z } 2023-01-11T21:41:24.1428793Z } 2023-01-11T21:41:24.1428870Z ''') 2023-01-11T21:41:24.1428875Z 2023-01-11T21:41:24.1428879Z 2023-01-11T21:41:24.1428966Z async_compile.wait(globals()) 2023-01-11T21:41:24.1429034Z del async_compile 2023-01-11T21:41:24.1429038Z 2023-01-11T21:41:24.1429108Z def call(args): 2023-01-11T21:41:24.1429182Z 
arg0_1, arg1_1 = args 2023-01-11T21:41:24.1429239Z args.clear() 2023-01-11T21:41:24.1429476Z buf0 = empty_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1429641Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1429708Z del arg0_1 2023-01-11T21:41:24.1429773Z del arg1_1 2023-01-11T21:41:24.1429843Z return (buf0, ) 2023-01-11T21:41:24.1429848Z 2023-01-11T21:41:24.1429852Z 2023-01-11T21:41:24.1429923Z if __name__ == "__main__": 2023-01-11T21:41:24.1430036Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1430145Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1430348Z arg0_1 = rand_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1430539Z arg1_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1430651Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1430656Z 2023-01-11T21:41:24.1430719Z ok (0.025s) 2023-01-11T21:41:24.1431217Z test_cpu_broadcast2_dense (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1431345Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1431605Z [2023-01-11 21:40:10,419] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 534 2023-01-11T21:41:24.1431871Z [2023-01-11 21:40:10,427] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 534 2023-01-11T21:41:24.1431876Z 2023-01-11T21:41:24.1431968Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1432025Z import torch 2023-01-11T21:41:24.1432092Z import random 2023-01-11T21:41:24.1432208Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1432332Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1432337Z 2023-01-11T21:41:24.1432413Z aten = torch.ops.aten 2023-01-11T21:41:24.1432545Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1432635Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1432639Z 2023-01-11T21:41:24.1432643Z 2023-01-11T21:41:24.1432763Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1432966Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1433082Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1433185Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1433284Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1433345Z { 2023-01-11T21:41:24.1433441Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1433500Z { 2023-01-11T21:41:24.1433564Z #pragma omp for 2023-01-11T21:41:24.1433679Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1433805Z { 2023-01-11T21:41:24.1433894Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.1433956Z { 2023-01-11T21:41:24.1434083Z auto tmp0 = at::vec::Vectorized(in_ptr0[i0]); 2023-01-11T21:41:24.1434222Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + (8*i1) + (10*i0)); 2023-01-11T21:41:24.1434296Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1434396Z tmp2.store(out_ptr0 + (8*i1) + 
(10*i0)); 2023-01-11T21:41:24.1434458Z } 2023-01-11T21:41:24.1434548Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.1434629Z for(long i1=8; i1<10; i1+=1) 2023-01-11T21:41:24.1434692Z { 2023-01-11T21:41:24.1434776Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1434859Z auto tmp1 = in_ptr1[i1 + (10*i0)]; 2023-01-11T21:41:24.1434946Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1435085Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1435147Z } 2023-01-11T21:41:24.1435207Z } 2023-01-11T21:41:24.1435267Z } 2023-01-11T21:41:24.1435622Z } 2023-01-11T21:41:24.1435750Z ''') 2023-01-11T21:41:24.1435756Z 2023-01-11T21:41:24.1435761Z 2023-01-11T21:41:24.1435854Z async_compile.wait(globals()) 2023-01-11T21:41:24.1435936Z del async_compile 2023-01-11T21:41:24.1435940Z 2023-01-11T21:41:24.1436012Z def call(args): 2023-01-11T21:41:24.1436094Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1436166Z args.clear() 2023-01-11T21:41:24.1436378Z buf0 = empty_strided((1, 10, 10), (100, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1436534Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1436603Z del arg0_1 2023-01-11T21:41:24.1436672Z del arg1_1 2023-01-11T21:41:24.1436742Z return (buf0, ) 2023-01-11T21:41:24.1436747Z 2023-01-11T21:41:24.1436754Z 2023-01-11T21:41:24.1436831Z if __name__ == "__main__": 2023-01-11T21:41:24.1436940Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1437067Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1437273Z arg0_1 = rand_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1437459Z arg1_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1437572Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1437577Z 2023-01-11T21:41:24.1437643Z ok (0.022s) 2023-01-11T21:41:24.1438138Z test_cpu_broadcast2_double (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1438267Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1438528Z [2023-01-11 21:40:10,441] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 535 2023-01-11T21:41:24.1438796Z [2023-01-11 21:40:11,954] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 535 2023-01-11T21:41:24.1438801Z 2023-01-11T21:41:24.1438896Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1438966Z import torch 2023-01-11T21:41:24.1439024Z import random 2023-01-11T21:41:24.1439139Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1439259Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1439264Z 2023-01-11T21:41:24.1439341Z aten = torch.ops.aten 2023-01-11T21:41:24.1439476Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1439567Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1439642Z 2023-01-11T21:41:24.1439649Z 2023-01-11T21:41:24.1439783Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1439991Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1440099Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1440207Z const double* __restrict__ in_ptr1, 2023-01-11T21:41:24.1440305Z double* __restrict__ out_ptr0) 2023-01-11T21:41:24.1440370Z { 2023-01-11T21:41:24.1440467Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1440533Z { 2023-01-11T21:41:24.1440611Z #pragma omp for 2023-01-11T21:41:24.1440680Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1440742Z { 2023-01-11T21:41:24.1440824Z #pragma GCC ivdep 2023-01-11T21:41:24.1440911Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1440974Z { 2023-01-11T21:41:24.1441039Z { 2023-01-11T21:41:24.1441106Z { 2023-01-11T21:41:24.1441230Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1441333Z auto tmp2 = in_ptr1[i1 + (10*i0)]; 2023-01-11T21:41:24.1441444Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.1441536Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.1441632Z out_ptr0[i1 + (10*i0)] = tmp3; 2023-01-11T21:41:24.1441700Z } 2023-01-11T21:41:24.1441765Z } 2023-01-11T21:41:24.1441817Z } 2023-01-11T21:41:24.1441878Z } 2023-01-11T21:41:24.1441939Z } 2023-01-11T21:41:24.1442000Z } 2023-01-11T21:41:24.1442082Z ''') 2023-01-11T21:41:24.1442086Z 2023-01-11T21:41:24.1442090Z 2023-01-11T21:41:24.1442179Z async_compile.wait(globals()) 2023-01-11T21:41:24.1442248Z del async_compile 2023-01-11T21:41:24.1442253Z 2023-01-11T21:41:24.1442309Z def call(args): 2023-01-11T21:41:24.1442382Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1442456Z args.clear() 2023-01-11T21:41:24.1442667Z buf0 = empty_strided((1, 10, 10), (100, 10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1442830Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1442897Z del arg0_1 2023-01-11T21:41:24.1442961Z del arg1_1 2023-01-11T21:41:24.1443018Z return (buf0, ) 2023-01-11T21:41:24.1443023Z 2023-01-11T21:41:24.1443039Z 2023-01-11T21:41:24.1443101Z if __name__ == "__main__": 2023-01-11T21:41:24.1443212Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1443337Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1443544Z arg0_1 = rand_strided((1, 10, 1), (10, 1, 1), device='cpu', 
dtype=torch.float32) 2023-01-11T21:41:24.1443741Z arg1_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1443853Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1443861Z 2023-01-11T21:41:24.1443927Z ok (1.527s) 2023-01-11T21:41:24.1444413Z test_cpu_broadcast2_int (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1444537Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1444786Z [2023-01-11 21:40:11,969] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 536 2023-01-11T21:41:24.1445051Z [2023-01-11 21:40:13,493] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 536 2023-01-11T21:41:24.1445056Z 2023-01-11T21:41:24.1445149Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1445219Z import torch 2023-01-11T21:41:24.1445326Z import random 2023-01-11T21:41:24.1445439Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1445558Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1445563Z 2023-01-11T21:41:24.1445638Z aten = torch.ops.aten 2023-01-11T21:41:24.1445759Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1445848Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1445852Z 2023-01-11T21:41:24.1445857Z 2023-01-11T21:41:24.1445990Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1446192Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1446310Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1446410Z const int* __restrict__ in_ptr1, 2023-01-11T21:41:24.1446512Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1446573Z { 2023-01-11T21:41:24.1446659Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1446749Z { 2023-01-11T21:41:24.1446828Z #pragma omp for 2023-01-11T21:41:24.1446907Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1446970Z { 2023-01-11T21:41:24.1447051Z #pragma GCC ivdep 2023-01-11T21:41:24.1447123Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1447186Z { 2023-01-11T21:41:24.1447248Z { 2023-01-11T21:41:24.1447315Z { 2023-01-11T21:41:24.1447411Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1447506Z auto tmp1 = in_ptr1[i1]; 2023-01-11T21:41:24.1447617Z auto tmp2 = static_cast(tmp1); 2023-01-11T21:41:24.1447696Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:24.1447790Z out_ptr0[i1 + (10*i0)] = tmp3; 2023-01-11T21:41:24.1447854Z } 2023-01-11T21:41:24.1447918Z } 2023-01-11T21:41:24.1447979Z } 2023-01-11T21:41:24.1448048Z } 2023-01-11T21:41:24.1448108Z } 2023-01-11T21:41:24.1448159Z } 2023-01-11T21:41:24.1448237Z ''') 2023-01-11T21:41:24.1448242Z 2023-01-11T21:41:24.1448246Z 2023-01-11T21:41:24.1448336Z async_compile.wait(globals()) 2023-01-11T21:41:24.1448407Z del async_compile 2023-01-11T21:41:24.1448411Z 2023-01-11T21:41:24.1448479Z def call(args): 2023-01-11T21:41:24.1448552Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1448623Z args.clear() 2023-01-11T21:41:24.1448819Z buf0 = empty_strided((1, 10, 10), (100, 10, 1), device='cpu', dtype=torch.float32) 
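    # buf0 is the pre-allocated broadcast output; the kernel writes into it
    # in place through the data pointers passed below.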
2023-01-11T21:41:24.1448982Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1449049Z del arg0_1 2023-01-11T21:41:24.1449113Z del arg1_1 2023-01-11T21:41:24.1449186Z return (buf0, ) 2023-01-11T21:41:24.1449190Z 2023-01-11T21:41:24.1449194Z 2023-01-11T21:41:24.1449268Z if __name__ == "__main__": 2023-01-11T21:41:24.1449382Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1449508Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1449700Z arg0_1 = rand_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1449891Z arg1_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:24.1450005Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1450009Z 2023-01-11T21:41:24.1450074Z ok (1.539s) 2023-01-11T21:41:24.1450571Z test_cpu_broadcast2_strided (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1450696Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1450989Z [2023-01-11 21:40:13,509] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 537 2023-01-11T21:41:24.1451254Z [2023-01-11 21:40:15,036] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 537 2023-01-11T21:41:24.1451259Z 2023-01-11T21:41:24.1451352Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1451419Z import torch 2023-01-11T21:41:24.1451476Z import random 2023-01-11T21:41:24.1451589Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1451708Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1451713Z 2023-01-11T21:41:24.1451792Z aten = torch.ops.aten 2023-01-11T21:41:24.1451923Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1452013Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1452018Z 2023-01-11T21:41:24.1452022Z 2023-01-11T21:41:24.1452152Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1452384Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1452503Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1452606Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1452706Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1452768Z { 2023-01-11T21:41:24.1452864Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1452924Z { 2023-01-11T21:41:24.1452986Z #pragma omp for 2023-01-11T21:41:24.1453066Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1453127Z { 2023-01-11T21:41:24.1453205Z #pragma GCC ivdep 2023-01-11T21:41:24.1453287Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1453349Z { 2023-01-11T21:41:24.1453412Z { 2023-01-11T21:41:24.1453465Z { 2023-01-11T21:41:24.1453560Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1453668Z auto tmp1 = in_ptr1[(2*i1) + (30*i0)]; 2023-01-11T21:41:24.1453765Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1453859Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1453924Z } 2023-01-11T21:41:24.1453987Z } 2023-01-11T21:41:24.1454040Z } 
2023-01-11T21:41:24.1454103Z } 2023-01-11T21:41:24.1454162Z } 2023-01-11T21:41:24.1454222Z } 2023-01-11T21:41:24.1454301Z ''') 2023-01-11T21:41:24.1454306Z 2023-01-11T21:41:24.1454311Z 2023-01-11T21:41:24.1454399Z async_compile.wait(globals()) 2023-01-11T21:41:24.1454468Z del async_compile 2023-01-11T21:41:24.1454472Z 2023-01-11T21:41:24.1454528Z def call(args): 2023-01-11T21:41:24.1454601Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1454671Z args.clear() 2023-01-11T21:41:24.1454877Z buf0 = empty_strided((1, 10, 10), (100, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1455043Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1455112Z del arg0_1 2023-01-11T21:41:24.1455177Z del arg1_1 2023-01-11T21:41:24.1455235Z return (buf0, ) 2023-01-11T21:41:24.1455251Z 2023-01-11T21:41:24.1455255Z 2023-01-11T21:41:24.1455319Z if __name__ == "__main__": 2023-01-11T21:41:24.1455432Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1455557Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1455767Z arg0_1 = rand_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1455970Z arg1_1 = rand_strided((10, 10), (30, 2), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1456085Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1456091Z 2023-01-11T21:41:24.1456156Z ok (1.543s) 2023-01-11T21:41:24.1456659Z test_cpu_broadcast2_transposed (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1456833Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1457081Z [2023-01-11 21:40:15,052] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 538 2023-01-11T21:41:24.1457343Z [2023-01-11 21:40:15,061] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 538 2023-01-11T21:41:24.1457348Z 2023-01-11T21:41:24.1457440Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1457508Z import torch 2023-01-11T21:41:24.1457578Z import random 2023-01-11T21:41:24.1457691Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1457810Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1457849Z 2023-01-11T21:41:24.1457927Z aten = torch.ops.aten 2023-01-11T21:41:24.1458047Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1458136Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1458141Z 2023-01-11T21:41:24.1458145Z 2023-01-11T21:41:24.1458276Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1458479Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1458595Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1458701Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1458799Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1458859Z { 2023-01-11T21:41:24.1458944Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1459003Z { 2023-01-11T21:41:24.1459078Z #pragma omp for 2023-01-11T21:41:24.1459160Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1459224Z { 2023-01-11T21:41:24.1459308Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.1459359Z { 2023-01-11T21:41:24.1459498Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i1); 2023-01-11T21:41:24.1459633Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + (8*i1) + (10*i0)); 2023-01-11T21:41:24.1459719Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1459822Z tmp2.store(out_ptr0 + (8*i1) + (10*i0)); 2023-01-11T21:41:24.1459883Z } 2023-01-11T21:41:24.1459971Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.1460052Z for(long i1=8; i1<10; i1+=1) 2023-01-11T21:41:24.1460101Z { 2023-01-11T21:41:24.1460184Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:24.1460278Z auto tmp1 = in_ptr1[i1 + (10*i0)]; 2023-01-11T21:41:24.1460359Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1460449Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1460516Z } 2023-01-11T21:41:24.1460576Z } 2023-01-11T21:41:24.1460624Z } 2023-01-11T21:41:24.1460682Z } 2023-01-11T21:41:24.1460758Z ''') 2023-01-11T21:41:24.1460763Z 2023-01-11T21:41:24.1460768Z 2023-01-11T21:41:24.1460853Z async_compile.wait(globals()) 2023-01-11T21:41:24.1460925Z del async_compile 2023-01-11T21:41:24.1460930Z 2023-01-11T21:41:24.1461002Z def call(args): 2023-01-11T21:41:24.1461073Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1461130Z args.clear() 2023-01-11T21:41:24.1461340Z buf0 = empty_strided((1, 10, 10), (100, 1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1461507Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1461575Z del arg0_1 2023-01-11T21:41:24.1461637Z del arg1_1 2023-01-11T21:41:24.1461708Z return (buf0, ) 2023-01-11T21:41:24.1461712Z 2023-01-11T21:41:24.1461721Z 2023-01-11T21:41:24.1461793Z if __name__ == "__main__": 
2023-01-11T21:41:24.1461943Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1462055Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1462260Z arg0_1 = rand_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1462586Z arg1_1 = rand_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1462702Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1462707Z 2023-01-11T21:41:24.1462773Z ok (0.024s) 2023-01-11T21:41:24.1463268Z test_cpu_broadcast3_broadcast1 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1463463Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1463728Z [2023-01-11 21:40:15,075] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 539 2023-01-11T21:41:24.1463991Z [2023-01-11 21:40:16,614] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 539 2023-01-11T21:41:24.1463996Z 2023-01-11T21:41:24.1464077Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1464143Z import torch 2023-01-11T21:41:24.1464211Z import random 2023-01-11T21:41:24.1464325Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1464444Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1464449Z 2023-01-11T21:41:24.1464528Z aten = torch.ops.aten 2023-01-11T21:41:24.1464660Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1464749Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1464754Z 2023-01-11T21:41:24.1464758Z 2023-01-11T21:41:24.1464885Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1465087Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1465204Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1465308Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1465409Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1465471Z { 2023-01-11T21:41:24.1465566Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1465614Z { 2023-01-11T21:41:24.1465690Z #pragma omp for 2023-01-11T21:41:24.1465770Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:24.1465835Z { 2023-01-11T21:41:24.1465958Z auto tmp0 = at::vec::Vectorized(in_ptr0[0]); 2023-01-11T21:41:24.1466086Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1466168Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1466261Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1466314Z } 2023-01-11T21:41:24.1466408Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1466488Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:24.1466550Z { 2023-01-11T21:41:24.1466630Z auto tmp0 = in_ptr0[0]; 2023-01-11T21:41:24.1466712Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1466781Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1466862Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1466926Z } 2023-01-11T21:41:24.1466988Z } 2023-01-11T21:41:24.1467047Z } 2023-01-11T21:41:24.1467123Z ''') 2023-01-11T21:41:24.1467127Z 2023-01-11T21:41:24.1467132Z 2023-01-11T21:41:24.1467220Z async_compile.wait(globals()) 
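# Once every kernel has been resolved, the AsyncCompile helper is dropped.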
2023-01-11T21:41:24.1467280Z del async_compile 2023-01-11T21:41:24.1467296Z 2023-01-11T21:41:24.1467354Z def call(args): 2023-01-11T21:41:24.1467427Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1467502Z args.clear() 2023-01-11T21:41:24.1467699Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1467908Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1467974Z del arg0_1 2023-01-11T21:41:24.1468041Z del arg1_1 2023-01-11T21:41:24.1468102Z return (buf0, ) 2023-01-11T21:41:24.1468107Z 2023-01-11T21:41:24.1468111Z 2023-01-11T21:41:24.1468185Z if __name__ == "__main__": 2023-01-11T21:41:24.1468297Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1468419Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1468612Z arg0_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1468804Z arg1_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1468918Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1468923Z 2023-01-11T21:41:24.1468993Z ok (1.554s) 2023-01-11T21:41:24.1469507Z test_cpu_broadcast3_broadcast2 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1469636Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1469895Z [2023-01-11 21:40:16,629] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 540 2023-01-11T21:41:24.1470161Z [2023-01-11 21:40:16,638] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 540 2023-01-11T21:41:24.1470167Z 2023-01-11T21:41:24.1470259Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1470326Z import torch 2023-01-11T21:41:24.1470394Z import random 2023-01-11T21:41:24.1470510Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1470635Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1470640Z 2023-01-11T21:41:24.1470704Z aten = torch.ops.aten 2023-01-11T21:41:24.1470835Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1470926Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1470931Z 2023-01-11T21:41:24.1470935Z 2023-01-11T21:41:24.1471068Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1471269Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1471385Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1471490Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1471591Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1471638Z { 2023-01-11T21:41:24.1471732Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1471794Z { 2023-01-11T21:41:24.1471869Z #pragma omp for 2023-01-11T21:41:24.1471952Z for(long i0=0; i0<1; i0+=1) 2023-01-11T21:41:24.1472014Z { 2023-01-11T21:41:24.1472135Z auto tmp0 = at::vec::Vectorized(in_ptr0[0]); 2023-01-11T21:41:24.1472253Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1472336Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1472425Z 
tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1472490Z } 2023-01-11T21:41:24.1472585Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1472667Z for(long i0=8; i0<10; i0+=1) 2023-01-11T21:41:24.1472727Z { 2023-01-11T21:41:24.1472797Z auto tmp0 = in_ptr0[0]; 2023-01-11T21:41:24.1472876Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1472956Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1473034Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1473094Z } 2023-01-11T21:41:24.1473152Z } 2023-01-11T21:41:24.1473237Z } 2023-01-11T21:41:24.1473315Z ''') 2023-01-11T21:41:24.1473320Z 2023-01-11T21:41:24.1473325Z 2023-01-11T21:41:24.1473413Z async_compile.wait(globals()) 2023-01-11T21:41:24.1473488Z del async_compile 2023-01-11T21:41:24.1473493Z 2023-01-11T21:41:24.1473563Z def call(args): 2023-01-11T21:41:24.1473634Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1473706Z args.clear() 2023-01-11T21:41:24.1473988Z buf0 = empty_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1474138Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1474205Z del arg0_1 2023-01-11T21:41:24.1474271Z del arg1_1 2023-01-11T21:41:24.1474343Z return (buf0, ) 2023-01-11T21:41:24.1474348Z 2023-01-11T21:41:24.1474353Z 2023-01-11T21:41:24.1474426Z if __name__ == "__main__": 2023-01-11T21:41:24.1474541Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1474664Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1474892Z arg0_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1475088Z arg1_1 = rand_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1475202Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1475207Z 2023-01-11T21:41:24.1475272Z ok (0.023s) 2023-01-11T21:41:24.1475770Z test_cpu_broadcast3_broadcast3 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
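The SweepInputsCpuTest cases in this log all follow the same pattern: an elementwise add over two inputs with different layouts is compiled by TorchInductor and the generated C++ kernel plus Python wrapper is dumped. A minimal sketch of the broadcast case just shown, assuming only the shapes that appear in the wrapper's rand_strided calls (this is not the actual test body):

import torch

def add(a, b):
    return a + b

# Compile the same elementwise add through the Inductor backend.
compiled_add = torch.compile(add, backend="inductor")

a = torch.randn(1)           # the (1,) "broadcast3" input from the wrapper above
b = torch.randn(1, 10, 1)    # the (1, 10, 1) "broadcast2" input
torch.testing.assert_close(compiled_add(a, b), add(a, b))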
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1475892Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1476153Z [2023-01-11 21:40:16,652] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 541 2023-01-11T21:41:24.1476420Z [2023-01-11 21:40:18,156] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 541 2023-01-11T21:41:24.1476425Z 2023-01-11T21:41:24.1476520Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1476590Z import torch 2023-01-11T21:41:24.1476646Z import random 2023-01-11T21:41:24.1476758Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1476879Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1476884Z 2023-01-11T21:41:24.1476962Z aten = torch.ops.aten 2023-01-11T21:41:24.1477094Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1477188Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1477193Z 2023-01-11T21:41:24.1477197Z 2023-01-11T21:41:24.1477332Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1477539Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1477648Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1477748Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1477847Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1477911Z { 2023-01-11T21:41:24.1477970Z { 2023-01-11T21:41:24.1478040Z { 2023-01-11T21:41:24.1478123Z auto tmp0 = in_ptr0[0]; 2023-01-11T21:41:24.1478193Z auto tmp1 = in_ptr1[0]; 2023-01-11T21:41:24.1478275Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1478353Z out_ptr0[0] = tmp2; 2023-01-11T21:41:24.1478412Z } 2023-01-11T21:41:24.1478475Z } 2023-01-11T21:41:24.1478532Z } 2023-01-11T21:41:24.1478608Z ''') 2023-01-11T21:41:24.1478612Z 2023-01-11T21:41:24.1478616Z 2023-01-11T21:41:24.1478691Z async_compile.wait(globals()) 2023-01-11T21:41:24.1478760Z del async_compile 2023-01-11T21:41:24.1478765Z 2023-01-11T21:41:24.1478834Z def call(args): 2023-01-11T21:41:24.1478961Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1479029Z args.clear() 2023-01-11T21:41:24.1479223Z buf0 = empty_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1479382Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1479449Z del arg0_1 2023-01-11T21:41:24.1479504Z del arg1_1 2023-01-11T21:41:24.1479572Z return (buf0, ) 2023-01-11T21:41:24.1479577Z 2023-01-11T21:41:24.1479581Z 2023-01-11T21:41:24.1479655Z if __name__ == "__main__": 2023-01-11T21:41:24.1479767Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1479886Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1480079Z arg0_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1480268Z arg1_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1480370Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1480388Z 2023-01-11T21:41:24.1480469Z ok (1.518s) 2023-01-11T21:41:24.1480962Z test_cpu_broadcast3_dense (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. 
This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1481089Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1481350Z [2023-01-11 21:40:18,170] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 542 2023-01-11T21:41:24.1481616Z [2023-01-11 21:40:19,830] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 542 2023-01-11T21:41:24.1481621Z 2023-01-11T21:41:24.1481712Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1481780Z import torch 2023-01-11T21:41:24.1481852Z import random 2023-01-11T21:41:24.1481955Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1482075Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1482079Z 2023-01-11T21:41:24.1482155Z aten = torch.ops.aten 2023-01-11T21:41:24.1482290Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1482380Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1482385Z 2023-01-11T21:41:24.1482389Z 2023-01-11T21:41:24.1482521Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1482725Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1482844Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1482946Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1483032Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1483090Z { 2023-01-11T21:41:24.1483188Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1483251Z { 2023-01-11T21:41:24.1483324Z #pragma omp for 2023-01-11T21:41:24.1483406Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:24.1483455Z { 2023-01-11T21:41:24.1483576Z auto tmp0 = at::vec::Vectorized(in_ptr0[0]); 2023-01-11T21:41:24.1483704Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1483788Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1483882Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1483943Z } 2023-01-11T21:41:24.1484037Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1484117Z for(long i0=96; i0<100; i0+=1) 2023-01-11T21:41:24.1484166Z { 2023-01-11T21:41:24.1484247Z auto tmp0 = in_ptr0[0]; 2023-01-11T21:41:24.1484328Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1484408Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1484486Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1484579Z } 2023-01-11T21:41:24.1484627Z } 2023-01-11T21:41:24.1484685Z } 2023-01-11T21:41:24.1484763Z ''') 2023-01-11T21:41:24.1484768Z 2023-01-11T21:41:24.1484773Z 2023-01-11T21:41:24.1484861Z async_compile.wait(globals()) 2023-01-11T21:41:24.1484936Z del async_compile 2023-01-11T21:41:24.1484940Z 2023-01-11T21:41:24.1485011Z def call(args): 2023-01-11T21:41:24.1485084Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1485153Z args.clear() 2023-01-11T21:41:24.1485339Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1485500Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1485567Z del arg0_1 2023-01-11T21:41:24.1485631Z del arg1_1 2023-01-11T21:41:24.1485701Z return (buf0, ) 2023-01-11T21:41:24.1485705Z 2023-01-11T21:41:24.1485709Z 2023-01-11T21:41:24.1485784Z if __name__ == "__main__": 2023-01-11T21:41:24.1485927Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1486040Z from 
torch._inductor.utils import print_performance 2023-01-11T21:41:24.1486232Z arg0_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1486428Z arg1_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1486542Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1486547Z 2023-01-11T21:41:24.1486610Z ok (1.674s) 2023-01-11T21:41:24.1487103Z test_cpu_broadcast3_double (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1487227Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1487489Z [2023-01-11 21:40:19,845] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 543 2023-01-11T21:41:24.1487752Z [2023-01-11 21:40:21,370] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 543 2023-01-11T21:41:24.1487757Z 2023-01-11T21:41:24.1487848Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1487905Z import torch 2023-01-11T21:41:24.1487975Z import random 2023-01-11T21:41:24.1488093Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1488210Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1488214Z 2023-01-11T21:41:24.1488290Z aten = torch.ops.aten 2023-01-11T21:41:24.1488422Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1488512Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1488516Z 2023-01-11T21:41:24.1488521Z 2023-01-11T21:41:24.1488651Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1488849Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1488966Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1489072Z const double* __restrict__ in_ptr1, 2023-01-11T21:41:24.1489172Z double* __restrict__ out_ptr0) 2023-01-11T21:41:24.1489231Z { 2023-01-11T21:41:24.1489327Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1489387Z { 2023-01-11T21:41:24.1489450Z #pragma omp for 2023-01-11T21:41:24.1489531Z for(long i0=0; i0<100; i0+=1) 2023-01-11T21:41:24.1489592Z { 2023-01-11T21:41:24.1489654Z { 2023-01-11T21:41:24.1489716Z { 2023-01-11T21:41:24.1489806Z auto tmp0 = in_ptr0[0]; 2023-01-11T21:41:24.1489896Z auto tmp2 = in_ptr1[i0]; 2023-01-11T21:41:24.1489991Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.1490083Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.1490243Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.1490305Z } 2023-01-11T21:41:24.1490367Z } 2023-01-11T21:41:24.1490427Z } 2023-01-11T21:41:24.1490487Z } 2023-01-11T21:41:24.1490533Z } 2023-01-11T21:41:24.1490614Z ''') 2023-01-11T21:41:24.1490619Z 2023-01-11T21:41:24.1490623Z 2023-01-11T21:41:24.1490710Z async_compile.wait(globals()) 2023-01-11T21:41:24.1490782Z del async_compile 2023-01-11T21:41:24.1490787Z 2023-01-11T21:41:24.1490855Z def call(args): 2023-01-11T21:41:24.1490930Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1490998Z args.clear() 2023-01-11T21:41:24.1491187Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1491350Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), 
c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1491416Z del arg0_1 2023-01-11T21:41:24.1491482Z del arg1_1 2023-01-11T21:41:24.1491556Z return (buf0, ) 2023-01-11T21:41:24.1491588Z 2023-01-11T21:41:24.1491593Z 2023-01-11T21:41:24.1491666Z if __name__ == "__main__": 2023-01-11T21:41:24.1491779Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1491900Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1492081Z arg0_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1492276Z arg1_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1492387Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1492392Z 2023-01-11T21:41:24.1492457Z ok (1.541s) 2023-01-11T21:41:24.1492951Z test_cpu_broadcast3_int (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1493078Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1493337Z [2023-01-11 21:40:21,385] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 544 2023-01-11T21:41:24.1493601Z [2023-01-11 21:40:22,902] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 544 2023-01-11T21:41:24.1493606Z 2023-01-11T21:41:24.1493697Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1493753Z import torch 2023-01-11T21:41:24.1493819Z import random 2023-01-11T21:41:24.1493935Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1494054Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1494059Z 2023-01-11T21:41:24.1494135Z aten = torch.ops.aten 2023-01-11T21:41:24.1494266Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1494355Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1494366Z 2023-01-11T21:41:24.1494370Z 2023-01-11T21:41:24.1494501Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1494691Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1494808Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1494909Z const int* __restrict__ in_ptr1, 2023-01-11T21:41:24.1495005Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1495065Z { 2023-01-11T21:41:24.1495160Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1495218Z { 2023-01-11T21:41:24.1495281Z #pragma omp for 2023-01-11T21:41:24.1495360Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1495419Z { 2023-01-11T21:41:24.1495479Z { 2023-01-11T21:41:24.1495542Z { 2023-01-11T21:41:24.1495630Z auto tmp0 = in_ptr0[0]; 2023-01-11T21:41:24.1495722Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1495847Z auto tmp2 = static_cast(tmp1); 2023-01-11T21:41:24.1495937Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:24.1496019Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.1496083Z } 2023-01-11T21:41:24.1496145Z } 2023-01-11T21:41:24.1496207Z } 2023-01-11T21:41:24.1496267Z } 2023-01-11T21:41:24.1496314Z } 2023-01-11T21:41:24.1496391Z ''') 2023-01-11T21:41:24.1496396Z 2023-01-11T21:41:24.1496400Z 2023-01-11T21:41:24.1496487Z 
async_compile.wait(globals()) 2023-01-11T21:41:24.1496558Z del async_compile 2023-01-11T21:41:24.1496562Z 2023-01-11T21:41:24.1496630Z def call(args): 2023-01-11T21:41:24.1496703Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1496771Z args.clear() 2023-01-11T21:41:24.1496956Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1497150Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1497220Z del arg0_1 2023-01-11T21:41:24.1497285Z del arg1_1 2023-01-11T21:41:24.1497355Z return (buf0, ) 2023-01-11T21:41:24.1497360Z 2023-01-11T21:41:24.1497365Z 2023-01-11T21:41:24.1497439Z if __name__ == "__main__": 2023-01-11T21:41:24.1497551Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1497671Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1497851Z arg0_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1498042Z arg1_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:24.1498154Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1498159Z 2023-01-11T21:41:24.1498225Z ok (1.532s) 2023-01-11T21:41:24.1498721Z test_cpu_broadcast3_strided (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1498848Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1499106Z [2023-01-11 21:40:22,917] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 545 2023-01-11T21:41:24.1499368Z [2023-01-11 21:40:24,448] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 545 2023-01-11T21:41:24.1499373Z 2023-01-11T21:41:24.1499470Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1499526Z import torch 2023-01-11T21:41:24.1499592Z import random 2023-01-11T21:41:24.1499704Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1499824Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1499829Z 2023-01-11T21:41:24.1499909Z aten = torch.ops.aten 2023-01-11T21:41:24.1500040Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1500129Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1500134Z 2023-01-11T21:41:24.1500138Z 2023-01-11T21:41:24.1500269Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1500460Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1500576Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1500678Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1500778Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1500841Z { 2023-01-11T21:41:24.1500935Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1500994Z { 2023-01-11T21:41:24.1501056Z #pragma omp for 2023-01-11T21:41:24.1501136Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1501198Z { 2023-01-11T21:41:24.1501274Z #pragma GCC ivdep 2023-01-11T21:41:24.1501393Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1501454Z { 2023-01-11T21:41:24.1501519Z { 2023-01-11T21:41:24.1501573Z { 
2023-01-11T21:41:24.1501664Z auto tmp0 = in_ptr0[0]; 2023-01-11T21:41:24.1501769Z auto tmp1 = in_ptr1[(2*i1) + (30*i0)]; 2023-01-11T21:41:24.1501865Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1501959Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1502023Z } 2023-01-11T21:41:24.1502086Z } 2023-01-11T21:41:24.1502135Z } 2023-01-11T21:41:24.1502194Z } 2023-01-11T21:41:24.1502255Z } 2023-01-11T21:41:24.1502313Z } 2023-01-11T21:41:24.1502552Z ''') 2023-01-11T21:41:24.1502561Z 2023-01-11T21:41:24.1502568Z 2023-01-11T21:41:24.1502670Z async_compile.wait(globals()) 2023-01-11T21:41:24.1502741Z del async_compile 2023-01-11T21:41:24.1502748Z 2023-01-11T21:41:24.1502864Z def call(args): 2023-01-11T21:41:24.1502942Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1503010Z args.clear() 2023-01-11T21:41:24.1503213Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1503379Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1503445Z del arg0_1 2023-01-11T21:41:24.1503510Z del arg1_1 2023-01-11T21:41:24.1503567Z return (buf0, ) 2023-01-11T21:41:24.1503572Z 2023-01-11T21:41:24.1503587Z 2023-01-11T21:41:24.1503649Z if __name__ == "__main__": 2023-01-11T21:41:24.1503760Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1503884Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1504076Z arg0_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1504275Z arg1_1 = rand_strided((10, 10), (30, 2), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1504394Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1504399Z 2023-01-11T21:41:24.1504468Z ok (1.546s) 2023-01-11T21:41:24.1504970Z test_cpu_broadcast3_transposed (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
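The "strided" input in the wrapper above is a non-contiguous (10, 10) view with strides (30, 2), which is why the generated kernel indexes in_ptr1 as (2*i1) + (30*i0) inside a plain #pragma GCC ivdep loop rather than the vectorized path. A sketch of how an equivalent layout can be built without the rand_strided test helper, using torch.as_strided over an explicit buffer:

import torch

# Contiguous backing buffer large enough for strides (30, 2):
# the furthest element touched is at offset 9*30 + 9*2 = 288.
buf = torch.randn(300)
strided = torch.as_strided(buf, (10, 10), (30, 2))

assert strided.shape == (10, 10)
assert strided.stride() == (30, 2)
assert not strided.is_contiguous()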
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1505094Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1505344Z [2023-01-11 21:40:24,463] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 546 2023-01-11T21:41:24.1505606Z [2023-01-11 21:40:24,471] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 546 2023-01-11T21:41:24.1505614Z 2023-01-11T21:41:24.1505712Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1505780Z import torch 2023-01-11T21:41:24.1505848Z import random 2023-01-11T21:41:24.1505964Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1506083Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1506089Z 2023-01-11T21:41:24.1506165Z aten = torch.ops.aten 2023-01-11T21:41:24.1506284Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1506374Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1506378Z 2023-01-11T21:41:24.1506382Z 2023-01-11T21:41:24.1506515Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1506718Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1506839Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1506939Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1507040Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1507129Z { 2023-01-11T21:41:24.1507224Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1507283Z { 2023-01-11T21:41:24.1507358Z #pragma omp for 2023-01-11T21:41:24.1507438Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:24.1507499Z { 2023-01-11T21:41:24.1507622Z auto tmp0 = at::vec::Vectorized(in_ptr0[0]); 2023-01-11T21:41:24.1507740Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1507822Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1507911Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1507972Z } 2023-01-11T21:41:24.1508070Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1508151Z for(long i0=96; i0<100; i0+=1) 2023-01-11T21:41:24.1508211Z { 2023-01-11T21:41:24.1508280Z auto tmp0 = in_ptr0[0]; 2023-01-11T21:41:24.1508360Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1508472Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1508552Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1508612Z } 2023-01-11T21:41:24.1508677Z } 2023-01-11T21:41:24.1508734Z } 2023-01-11T21:41:24.1508800Z ''') 2023-01-11T21:41:24.1508805Z 2023-01-11T21:41:24.1508809Z 2023-01-11T21:41:24.1508895Z async_compile.wait(globals()) 2023-01-11T21:41:24.1508965Z del async_compile 2023-01-11T21:41:24.1508970Z 2023-01-11T21:41:24.1509038Z def call(args): 2023-01-11T21:41:24.1509112Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1509179Z args.clear() 2023-01-11T21:41:24.1509378Z buf0 = empty_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1509539Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1509594Z del arg0_1 2023-01-11T21:41:24.1509658Z del arg1_1 2023-01-11T21:41:24.1509727Z return (buf0, ) 2023-01-11T21:41:24.1509732Z 2023-01-11T21:41:24.1509738Z 2023-01-11T21:41:24.1509815Z if __name__ == "__main__": 2023-01-11T21:41:24.1509926Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1510046Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1510238Z 
arg0_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1510423Z arg1_1 = rand_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1510536Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1510541Z 2023-01-11T21:41:24.1510606Z ok (0.023s) 2023-01-11T21:41:24.1511098Z test_cpu_dense_broadcast1 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1511227Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1511486Z [2023-01-11 21:40:24,485] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 547 2023-01-11T21:41:24.1511749Z [2023-01-11 21:40:26,063] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 547 2023-01-11T21:41:24.1511754Z 2023-01-11T21:41:24.1511845Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1511912Z import torch 2023-01-11T21:41:24.1511982Z import random 2023-01-11T21:41:24.1512083Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1512202Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1512207Z 2023-01-11T21:41:24.1512285Z aten = torch.ops.aten 2023-01-11T21:41:24.1512415Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1512504Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1512509Z 2023-01-11T21:41:24.1512548Z 2023-01-11T21:41:24.1512684Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1512886Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1513002Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1513091Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1513189Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1513249Z { 2023-01-11T21:41:24.1513345Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1513405Z { 2023-01-11T21:41:24.1513485Z #pragma omp for 2023-01-11T21:41:24.1513564Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1513614Z { 2023-01-11T21:41:24.1513694Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.1513817Z { 2023-01-11T21:41:24.1513960Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (10*i0)); 2023-01-11T21:41:24.1514128Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i1); 2023-01-11T21:41:24.1514219Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1514319Z tmp2.store(out_ptr0 + (8*i1) + (10*i0)); 2023-01-11T21:41:24.1514370Z } 2023-01-11T21:41:24.1514459Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.1514541Z for(long i1=8; i1<10; i1+=1) 2023-01-11T21:41:24.1514602Z { 2023-01-11T21:41:24.1514696Z auto tmp0 = in_ptr0[i1 + (10*i0)]; 2023-01-11T21:41:24.1514781Z auto tmp1 = in_ptr1[i1]; 2023-01-11T21:41:24.1514865Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1514944Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1515006Z } 2023-01-11T21:41:24.1515066Z } 2023-01-11T21:41:24.1515125Z } 2023-01-11T21:41:24.1515184Z } 2023-01-11T21:41:24.1520923Z ''') 2023-01-11T21:41:24.1520935Z 2023-01-11T21:41:24.1520940Z 2023-01-11T21:41:24.1521062Z async_compile.wait(globals()) 2023-01-11T21:41:24.1521131Z del 
async_compile 2023-01-11T21:41:24.1521136Z 2023-01-11T21:41:24.1521207Z def call(args): 2023-01-11T21:41:24.1521284Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1521360Z args.clear() 2023-01-11T21:41:24.1521581Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1521746Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1521812Z del arg0_1 2023-01-11T21:41:24.1521865Z del arg1_1 2023-01-11T21:41:24.1521933Z return (buf0, ) 2023-01-11T21:41:24.1521938Z 2023-01-11T21:41:24.1521942Z 2023-01-11T21:41:24.1522017Z if __name__ == "__main__": 2023-01-11T21:41:24.1522132Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1522256Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1522460Z arg0_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1522663Z arg1_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1522777Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1522782Z 2023-01-11T21:41:24.1522835Z ok (1.592s) 2023-01-11T21:41:24.1523333Z test_cpu_dense_broadcast2 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1523463Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1523730Z [2023-01-11 21:40:26,078] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 548 2023-01-11T21:41:24.1524001Z [2023-01-11 21:40:27,632] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 548 2023-01-11T21:41:24.1524101Z 2023-01-11T21:41:24.1524197Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1524268Z import torch 2023-01-11T21:41:24.1524338Z import random 2023-01-11T21:41:24.1524456Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1524564Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1524582Z 2023-01-11T21:41:24.1524648Z aten = torch.ops.aten 2023-01-11T21:41:24.1524783Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1524876Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1524881Z 2023-01-11T21:41:24.1524886Z 2023-01-11T21:41:24.1525021Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1525224Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1525345Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1525489Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1525592Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1525641Z { 2023-01-11T21:41:24.1525740Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1525802Z { 2023-01-11T21:41:24.1525877Z #pragma omp for 2023-01-11T21:41:24.1525958Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1526020Z { 2023-01-11T21:41:24.1526091Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.1526157Z { 2023-01-11T21:41:24.1526300Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (10*i0)); 2023-01-11T21:41:24.1526424Z auto tmp1 = at::vec::Vectorized(in_ptr1[i0]); 
2023-01-11T21:41:24.1526513Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1526617Z tmp2.store(out_ptr0 + (8*i1) + (10*i0)); 2023-01-11T21:41:24.1526681Z } 2023-01-11T21:41:24.1526774Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.1526851Z for(long i1=8; i1<10; i1+=1) 2023-01-11T21:41:24.1526917Z { 2023-01-11T21:41:24.1527016Z auto tmp0 = in_ptr0[i1 + (10*i0)]; 2023-01-11T21:41:24.1527104Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1527189Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1527282Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1527347Z } 2023-01-11T21:41:24.1527396Z } 2023-01-11T21:41:24.1527459Z } 2023-01-11T21:41:24.1527523Z } 2023-01-11T21:41:24.1527604Z ''') 2023-01-11T21:41:24.1527609Z 2023-01-11T21:41:24.1527613Z 2023-01-11T21:41:24.1527704Z async_compile.wait(globals()) 2023-01-11T21:41:24.1527778Z del async_compile 2023-01-11T21:41:24.1527782Z 2023-01-11T21:41:24.1527852Z def call(args): 2023-01-11T21:41:24.1527915Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1527985Z args.clear() 2023-01-11T21:41:24.1528198Z buf0 = empty_strided((1, 10, 10), (100, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1528366Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1528436Z del arg0_1 2023-01-11T21:41:24.1528502Z del arg1_1 2023-01-11T21:41:24.1528574Z return (buf0, ) 2023-01-11T21:41:24.1528579Z 2023-01-11T21:41:24.1528583Z 2023-01-11T21:41:24.1528645Z if __name__ == "__main__": 2023-01-11T21:41:24.1528758Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1528882Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1529086Z arg0_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1529292Z arg1_1 = rand_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1529407Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1529412Z 2023-01-11T21:41:24.1529481Z ok (1.570s) 2023-01-11T21:41:24.1529976Z test_cpu_dense_broadcast3 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1530136Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1530383Z [2023-01-11 21:40:27,648] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 549 2023-01-11T21:41:24.1530649Z [2023-01-11 21:40:29,362] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 549 2023-01-11T21:41:24.1530654Z 2023-01-11T21:41:24.1530749Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1530824Z import torch 2023-01-11T21:41:24.1530895Z import random 2023-01-11T21:41:24.1531012Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1531166Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1531171Z 2023-01-11T21:41:24.1531252Z aten = torch.ops.aten 2023-01-11T21:41:24.1531373Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1531466Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1531471Z 2023-01-11T21:41:24.1531475Z 2023-01-11T21:41:24.1531612Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1531819Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1531938Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1532047Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1532148Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1532209Z { 2023-01-11T21:41:24.1532294Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1532355Z { 2023-01-11T21:41:24.1532433Z #pragma omp for 2023-01-11T21:41:24.1532520Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:24.1532584Z { 2023-01-11T21:41:24.1532717Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1532837Z auto tmp1 = at::vec::Vectorized(in_ptr1[0]); 2023-01-11T21:41:24.1532908Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1532997Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1533062Z } 2023-01-11T21:41:24.1533154Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1533233Z for(long i0=96; i0<100; i0+=1) 2023-01-11T21:41:24.1533296Z { 2023-01-11T21:41:24.1533378Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1533447Z auto tmp1 = in_ptr1[0]; 2023-01-11T21:41:24.1533527Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1533608Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1533670Z } 2023-01-11T21:41:24.1533729Z } 2023-01-11T21:41:24.1533786Z } 2023-01-11T21:41:24.1533861Z ''') 2023-01-11T21:41:24.1533869Z 2023-01-11T21:41:24.1533875Z 2023-01-11T21:41:24.1533952Z async_compile.wait(globals()) 2023-01-11T21:41:24.1534025Z del async_compile 2023-01-11T21:41:24.1534030Z 2023-01-11T21:41:24.1534099Z def call(args): 2023-01-11T21:41:24.1534171Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1534239Z args.clear() 2023-01-11T21:41:24.1534440Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1534600Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1534655Z del arg0_1 2023-01-11T21:41:24.1534719Z del arg1_1 2023-01-11T21:41:24.1534788Z return (buf0, ) 2023-01-11T21:41:24.1534793Z 2023-01-11T21:41:24.1534797Z 2023-01-11T21:41:24.1534870Z if __name__ == "__main__": 2023-01-11T21:41:24.1534981Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1535101Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1535301Z 
arg0_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1535524Z arg1_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1535627Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1535643Z 2023-01-11T21:41:24.1535696Z ok (1.729s) 2023-01-11T21:41:24.1536179Z test_cpu_dense_dense (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1536303Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1536563Z [2023-01-11 21:40:29,380] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 550 2023-01-11T21:41:24.1536857Z [2023-01-11 21:40:29,389] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 550 2023-01-11T21:41:24.1536864Z 2023-01-11T21:41:24.1536956Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1537024Z import torch 2023-01-11T21:41:24.1537093Z import random 2023-01-11T21:41:24.1537194Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1537311Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1537316Z 2023-01-11T21:41:24.1537397Z aten = torch.ops.aten 2023-01-11T21:41:24.1537529Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1537618Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1537623Z 2023-01-11T21:41:24.1537627Z 2023-01-11T21:41:24.1537760Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1537961Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1538079Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1538184Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1538270Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1538331Z { 2023-01-11T21:41:24.1538427Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1538488Z { 2023-01-11T21:41:24.1538562Z #pragma omp for 2023-01-11T21:41:24.1538646Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:24.1538695Z { 2023-01-11T21:41:24.1538825Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1538952Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1539034Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1539122Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1539183Z } 2023-01-11T21:41:24.1539276Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1539357Z for(long i0=96; i0<100; i0+=1) 2023-01-11T21:41:24.1539408Z { 2023-01-11T21:41:24.1539494Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1539575Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1539653Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1539733Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1539792Z } 2023-01-11T21:41:24.1539840Z } 2023-01-11T21:41:24.1539901Z } 2023-01-11T21:41:24.1539979Z ''') 2023-01-11T21:41:24.1539984Z 2023-01-11T21:41:24.1539988Z 2023-01-11T21:41:24.1540075Z async_compile.wait(globals()) 2023-01-11T21:41:24.1540146Z del async_compile 2023-01-11T21:41:24.1540150Z 2023-01-11T21:41:24.1540218Z def call(args): 2023-01-11T21:41:24.1540291Z arg0_1, arg1_1 = args 
2023-01-11T21:41:24.1540359Z args.clear() 2023-01-11T21:41:24.1540545Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1540705Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1540808Z del arg0_1 2023-01-11T21:41:24.1540875Z del arg1_1 2023-01-11T21:41:24.1540945Z return (buf0, ) 2023-01-11T21:41:24.1540950Z 2023-01-11T21:41:24.1540954Z 2023-01-11T21:41:24.1541026Z if __name__ == "__main__": 2023-01-11T21:41:24.1541137Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1541247Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1541447Z arg0_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1541644Z arg1_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1541757Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1541762Z 2023-01-11T21:41:24.1541827Z ok (0.027s) 2023-01-11T21:41:24.1542482Z test_cpu_dense_double (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1542613Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1542879Z [2023-01-11 21:40:29,406] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 551 2023-01-11T21:41:24.1543146Z [2023-01-11 21:40:30,972] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 551 2023-01-11T21:41:24.1543152Z 2023-01-11T21:41:24.1543246Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1543302Z import torch 2023-01-11T21:41:24.1543370Z import random 2023-01-11T21:41:24.1543486Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1543605Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1543610Z 2023-01-11T21:41:24.1543687Z aten = torch.ops.aten 2023-01-11T21:41:24.1543825Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1543918Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1543923Z 2023-01-11T21:41:24.1543927Z 2023-01-11T21:41:24.1544060Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1544252Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1544370Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1544475Z const double* __restrict__ in_ptr1, 2023-01-11T21:41:24.1544574Z double* __restrict__ out_ptr0) 2023-01-11T21:41:24.1544633Z { 2023-01-11T21:41:24.1544728Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1544788Z { 2023-01-11T21:41:24.1544851Z #pragma omp for 2023-01-11T21:41:24.1544932Z for(long i0=0; i0<100; i0+=1) 2023-01-11T21:41:24.1544999Z { 2023-01-11T21:41:24.1545061Z { 2023-01-11T21:41:24.1545124Z { 2023-01-11T21:41:24.1545219Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1545308Z auto tmp2 = in_ptr1[i0]; 2023-01-11T21:41:24.1545404Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.1545492Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.1545575Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.1545639Z } 2023-01-11T21:41:24.1545699Z } 
2023-01-11T21:41:24.1545758Z } 2023-01-11T21:41:24.1545806Z } 2023-01-11T21:41:24.1545864Z } 2023-01-11T21:41:24.1545940Z ''') 2023-01-11T21:41:24.1545945Z 2023-01-11T21:41:24.1545949Z 2023-01-11T21:41:24.1546037Z async_compile.wait(globals()) 2023-01-11T21:41:24.1546108Z del async_compile 2023-01-11T21:41:24.1546113Z 2023-01-11T21:41:24.1546181Z def call(args): 2023-01-11T21:41:24.1546255Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1546325Z args.clear() 2023-01-11T21:41:24.1546517Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1546735Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1546803Z del arg0_1 2023-01-11T21:41:24.1546869Z del arg1_1 2023-01-11T21:41:24.1546941Z return (buf0, ) 2023-01-11T21:41:24.1546946Z 2023-01-11T21:41:24.1546950Z 2023-01-11T21:41:24.1547026Z if __name__ == "__main__": 2023-01-11T21:41:24.1547137Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1547245Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1547446Z arg0_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1547645Z arg1_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1547758Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1547763Z 2023-01-11T21:41:24.1547830Z ok (1.583s) 2023-01-11T21:41:24.1548351Z test_cpu_dense_int (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
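The "double" and "int" variants exercise PyTorch's type-promotion rules: the float32 + float64 kernel above casts the float operand up and its wrapper allocates a float64 output buffer, while the float32 + int32 case that follows casts the integer operand to float and keeps a float32 output. A short sketch of the promotion behaviour being relied on, independent of Inductor:

import torch

f32 = torch.randn(10, 10)
f64 = torch.randn(10, 10, dtype=torch.float64)
i32 = torch.randint(0, 10, (10,), dtype=torch.int32)

assert (f32 + f64).dtype == torch.float64   # mixed float widths promote upward
assert (f32 + i32).dtype == torch.float32   # integer operand is cast to float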
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1548480Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1548744Z [2023-01-11 21:40:30,987] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 552 2023-01-11T21:41:24.1549010Z [2023-01-11 21:40:32,571] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 552 2023-01-11T21:41:24.1549015Z 2023-01-11T21:41:24.1549108Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1549165Z import torch 2023-01-11T21:41:24.1549232Z import random 2023-01-11T21:41:24.1549345Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1549470Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1549476Z 2023-01-11T21:41:24.1549552Z aten = torch.ops.aten 2023-01-11T21:41:24.1549684Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1549773Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1549778Z 2023-01-11T21:41:24.1549782Z 2023-01-11T21:41:24.1549913Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1550104Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1550222Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1550323Z const int* __restrict__ in_ptr1, 2023-01-11T21:41:24.1550420Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1550479Z { 2023-01-11T21:41:24.1550575Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1550635Z { 2023-01-11T21:41:24.1550698Z #pragma omp for 2023-01-11T21:41:24.1550783Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1550844Z { 2023-01-11T21:41:24.1550923Z #pragma GCC ivdep 2023-01-11T21:41:24.1551006Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1551068Z { 2023-01-11T21:41:24.1551131Z { 2023-01-11T21:41:24.1551183Z { 2023-01-11T21:41:24.1551286Z auto tmp0 = in_ptr0[i1 + (10*i0)]; 2023-01-11T21:41:24.1551378Z auto tmp1 = in_ptr1[i1]; 2023-01-11T21:41:24.1551485Z auto tmp2 = static_cast(tmp1); 2023-01-11T21:41:24.1551577Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:24.1551670Z out_ptr0[i1 + (10*i0)] = tmp3; 2023-01-11T21:41:24.1551737Z } 2023-01-11T21:41:24.1551788Z } 2023-01-11T21:41:24.1551850Z } 2023-01-11T21:41:24.1551910Z } 2023-01-11T21:41:24.1551971Z } 2023-01-11T21:41:24.1552028Z } 2023-01-11T21:41:24.1552138Z ''') 2023-01-11T21:41:24.1552146Z 2023-01-11T21:41:24.1552151Z 2023-01-11T21:41:24.1552240Z async_compile.wait(globals()) 2023-01-11T21:41:24.1552299Z del async_compile 2023-01-11T21:41:24.1552304Z 2023-01-11T21:41:24.1552371Z def call(args): 2023-01-11T21:41:24.1552443Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1552513Z args.clear() 2023-01-11T21:41:24.1552713Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1552878Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1552943Z del arg0_1 2023-01-11T21:41:24.1552997Z del arg1_1 2023-01-11T21:41:24.1553065Z return (buf0, ) 2023-01-11T21:41:24.1553071Z 2023-01-11T21:41:24.1553075Z 2023-01-11T21:41:24.1553148Z if __name__ == "__main__": 2023-01-11T21:41:24.1553260Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1553380Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1553610Z arg0_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 
2023-01-11T21:41:24.1553873Z arg1_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:24.1553991Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1553996Z 2023-01-11T21:41:24.1554048Z ok (1.599s) 2023-01-11T21:41:24.1554536Z test_cpu_dense_strided (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1554664Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1554928Z [2023-01-11 21:40:32,587] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 553 2023-01-11T21:41:24.1555200Z [2023-01-11 21:40:34,145] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 553 2023-01-11T21:41:24.1555205Z 2023-01-11T21:41:24.1555297Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1555367Z import torch 2023-01-11T21:41:24.1555439Z import random 2023-01-11T21:41:24.1555555Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1555662Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1555679Z 2023-01-11T21:41:24.1555743Z aten = torch.ops.aten 2023-01-11T21:41:24.1555874Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1555965Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1555970Z 2023-01-11T21:41:24.1555974Z 2023-01-11T21:41:24.1556108Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1556313Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1556433Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1556538Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1556625Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1556683Z { 2023-01-11T21:41:24.1556778Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1556841Z { 2023-01-11T21:41:24.1556916Z #pragma omp for 2023-01-11T21:41:24.1556995Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1557056Z { 2023-01-11T21:41:24.1557121Z #pragma GCC ivdep 2023-01-11T21:41:24.1557204Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1557266Z { 2023-01-11T21:41:24.1557328Z { 2023-01-11T21:41:24.1557394Z { 2023-01-11T21:41:24.1557494Z auto tmp0 = in_ptr0[i1 + (10*i0)]; 2023-01-11T21:41:24.1557598Z auto tmp1 = in_ptr1[(2*i1) + (30*i0)]; 2023-01-11T21:41:24.1557679Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1557811Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1557876Z } 2023-01-11T21:41:24.1557939Z } 2023-01-11T21:41:24.1558000Z } 2023-01-11T21:41:24.1558060Z } 2023-01-11T21:41:24.1558119Z } 2023-01-11T21:41:24.1558165Z } 2023-01-11T21:41:24.1558243Z ''') 2023-01-11T21:41:24.1558248Z 2023-01-11T21:41:24.1558252Z 2023-01-11T21:41:24.1558340Z async_compile.wait(globals()) 2023-01-11T21:41:24.1558410Z del async_compile 2023-01-11T21:41:24.1558415Z 2023-01-11T21:41:24.1558486Z def call(args): 2023-01-11T21:41:24.1558561Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1558633Z args.clear() 2023-01-11T21:41:24.1558822Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1558986Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), 
c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1559055Z del arg0_1 2023-01-11T21:41:24.1559153Z del arg1_1 2023-01-11T21:41:24.1559226Z return (buf0, ) 2023-01-11T21:41:24.1559231Z 2023-01-11T21:41:24.1559235Z 2023-01-11T21:41:24.1559310Z if __name__ == "__main__": 2023-01-11T21:41:24.1559424Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1559547Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1559735Z arg0_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1559936Z arg1_1 = rand_strided((10, 10), (30, 2), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1560051Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1560055Z 2023-01-11T21:41:24.1560127Z ok (1.574s) 2023-01-11T21:41:24.1560626Z test_cpu_dense_transposed (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1560755Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1561016Z [2023-01-11 21:40:34,160] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 554 2023-01-11T21:41:24.1561283Z [2023-01-11 21:40:35,714] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 554 2023-01-11T21:41:24.1561289Z 2023-01-11T21:41:24.1561383Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1561440Z import torch 2023-01-11T21:41:24.1561510Z import random 2023-01-11T21:41:24.1561624Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1561745Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1561750Z 2023-01-11T21:41:24.1561829Z aten = torch.ops.aten 2023-01-11T21:41:24.1561963Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1562058Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1562063Z 2023-01-11T21:41:24.1562067Z 2023-01-11T21:41:24.1562202Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1562392Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1562513Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1562618Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1562719Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1562780Z { 2023-01-11T21:41:24.1562878Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1562937Z { 2023-01-11T21:41:24.1563001Z #pragma omp for 2023-01-11T21:41:24.1563083Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1563149Z { 2023-01-11T21:41:24.1563230Z #pragma GCC ivdep 2023-01-11T21:41:24.1563318Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1563432Z { 2023-01-11T21:41:24.1563498Z { 2023-01-11T21:41:24.1563550Z { 2023-01-11T21:41:24.1563654Z auto tmp0 = in_ptr0[i1 + (10*i0)]; 2023-01-11T21:41:24.1563755Z auto tmp1 = in_ptr1[i0 + (10*i1)]; 2023-01-11T21:41:24.1563849Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1563945Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1564012Z } 2023-01-11T21:41:24.1564075Z } 2023-01-11T21:41:24.1564125Z } 2023-01-11T21:41:24.1564188Z } 2023-01-11T21:41:24.1564253Z } 2023-01-11T21:41:24.1564315Z 
} 2023-01-11T21:41:24.1564394Z ''') 2023-01-11T21:41:24.1564399Z 2023-01-11T21:41:24.1564403Z 2023-01-11T21:41:24.1564492Z async_compile.wait(globals()) 2023-01-11T21:41:24.1564567Z del async_compile 2023-01-11T21:41:24.1564572Z 2023-01-11T21:41:24.1564628Z def call(args): 2023-01-11T21:41:24.1564702Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1564804Z args.clear() 2023-01-11T21:41:24.1565003Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1565168Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1565234Z del arg0_1 2023-01-11T21:41:24.1565300Z del arg1_1 2023-01-11T21:41:24.1565357Z return (buf0, ) 2023-01-11T21:41:24.1565363Z 2023-01-11T21:41:24.1565378Z 2023-01-11T21:41:24.1565440Z if __name__ == "__main__": 2023-01-11T21:41:24.1565554Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1565674Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1565872Z arg0_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1566068Z arg1_1 = rand_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1566180Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1566190Z 2023-01-11T21:41:24.1566255Z ok (1.569s) 2023-01-11T21:41:24.1566742Z test_cpu_double_broadcast1 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1566867Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1567112Z [2023-01-11 21:40:35,731] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 555 2023-01-11T21:41:24.1567378Z [2023-01-11 21:40:37,295] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 555 2023-01-11T21:41:24.1567383Z 2023-01-11T21:41:24.1567477Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1567549Z import torch 2023-01-11T21:41:24.1567620Z import random 2023-01-11T21:41:24.1567737Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1567857Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1567863Z 2023-01-11T21:41:24.1567938Z aten = torch.ops.aten 2023-01-11T21:41:24.1568060Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1568151Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1568155Z 2023-01-11T21:41:24.1568159Z 2023-01-11T21:41:24.1568292Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1568496Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1568617Z extern "C" void kernel(const double* __restrict__ in_ptr0, 2023-01-11T21:41:24.1568720Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1568822Z double* __restrict__ out_ptr0) 2023-01-11T21:41:24.1568883Z { 2023-01-11T21:41:24.1569001Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1569062Z { 2023-01-11T21:41:24.1569136Z #pragma omp for 2023-01-11T21:41:24.1569218Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1569280Z { 2023-01-11T21:41:24.1569364Z #pragma GCC ivdep 2023-01-11T21:41:24.1569439Z 
for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1569503Z { 2023-01-11T21:41:24.1569569Z { 2023-01-11T21:41:24.1569634Z { 2023-01-11T21:41:24.1569737Z auto tmp0 = in_ptr0[i1 + (10*i0)]; 2023-01-11T21:41:24.1569830Z auto tmp1 = in_ptr1[i1]; 2023-01-11T21:41:24.1569939Z auto tmp2 = static_cast(tmp1); 2023-01-11T21:41:24.1570018Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:24.1570111Z out_ptr0[i1 + (10*i0)] = tmp3; 2023-01-11T21:41:24.1570174Z } 2023-01-11T21:41:24.1570239Z } 2023-01-11T21:41:24.1570328Z } 2023-01-11T21:41:24.1570389Z } 2023-01-11T21:41:24.1570450Z } 2023-01-11T21:41:24.1570497Z } 2023-01-11T21:41:24.1570576Z ''') 2023-01-11T21:41:24.1570583Z 2023-01-11T21:41:24.1570587Z 2023-01-11T21:41:24.1570675Z async_compile.wait(globals()) 2023-01-11T21:41:24.1570746Z del async_compile 2023-01-11T21:41:24.1570751Z 2023-01-11T21:41:24.1570820Z def call(args): 2023-01-11T21:41:24.1570892Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1570960Z args.clear() 2023-01-11T21:41:24.1571146Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1571304Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1571369Z del arg0_1 2023-01-11T21:41:24.1571433Z del arg1_1 2023-01-11T21:41:24.1571501Z return (buf0, ) 2023-01-11T21:41:24.1571506Z 2023-01-11T21:41:24.1571510Z 2023-01-11T21:41:24.1571582Z if __name__ == "__main__": 2023-01-11T21:41:24.1571695Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1571816Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1572003Z arg0_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1572195Z arg1_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1572309Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1572314Z 2023-01-11T21:41:24.1572382Z ok (1.581s) 2023-01-11T21:41:24.1572869Z test_cpu_double_broadcast2 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1572998Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1573260Z [2023-01-11 21:40:37,311] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 556 2023-01-11T21:41:24.1573525Z [2023-01-11 21:40:38,818] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 556 2023-01-11T21:41:24.1573530Z 2023-01-11T21:41:24.1573622Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1573689Z import torch 2023-01-11T21:41:24.1573747Z import random 2023-01-11T21:41:24.1573861Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1573981Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1573986Z 2023-01-11T21:41:24.1574064Z aten = torch.ops.aten 2023-01-11T21:41:24.1574197Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1574287Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1574292Z 2023-01-11T21:41:24.1574296Z 2023-01-11T21:41:24.1574430Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1574662Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1574782Z extern "C" void kernel(const double* __restrict__ in_ptr0, 2023-01-11T21:41:24.1574884Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1574985Z double* __restrict__ out_ptr0) 2023-01-11T21:41:24.1575048Z { 2023-01-11T21:41:24.1575143Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1575202Z { 2023-01-11T21:41:24.1575267Z #pragma omp for 2023-01-11T21:41:24.1575350Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1575410Z { 2023-01-11T21:41:24.1575491Z #pragma GCC ivdep 2023-01-11T21:41:24.1575575Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1575641Z { 2023-01-11T21:41:24.1575705Z { 2023-01-11T21:41:24.1575760Z { 2023-01-11T21:41:24.1575860Z auto tmp0 = in_ptr0[i1 + (10*i0)]; 2023-01-11T21:41:24.1575989Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1576101Z auto tmp2 = static_cast(tmp1); 2023-01-11T21:41:24.1576195Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:24.1576288Z out_ptr0[i1 + (10*i0)] = tmp3; 2023-01-11T21:41:24.1576352Z } 2023-01-11T21:41:24.1576405Z } 2023-01-11T21:41:24.1576468Z } 2023-01-11T21:41:24.1576528Z } 2023-01-11T21:41:24.1576591Z } 2023-01-11T21:41:24.1576650Z } 2023-01-11T21:41:24.1576732Z ''') 2023-01-11T21:41:24.1576737Z 2023-01-11T21:41:24.1576741Z 2023-01-11T21:41:24.1576831Z async_compile.wait(globals()) 2023-01-11T21:41:24.1576891Z del async_compile 2023-01-11T21:41:24.1576907Z 2023-01-11T21:41:24.1576963Z def call(args): 2023-01-11T21:41:24.1577037Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1577106Z args.clear() 2023-01-11T21:41:24.1577319Z buf0 = empty_strided((1, 10, 10), (100, 10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1577484Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1577551Z del arg0_1 2023-01-11T21:41:24.1577619Z del arg1_1 2023-01-11T21:41:24.1577678Z return (buf0, ) 2023-01-11T21:41:24.1577683Z 2023-01-11T21:41:24.1577687Z 2023-01-11T21:41:24.1577764Z if __name__ == "__main__": 2023-01-11T21:41:24.1577877Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1578005Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1578203Z arg0_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 
2023-01-11T21:41:24.1578410Z arg1_1 = rand_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1578522Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1578527Z 2023-01-11T21:41:24.1578591Z ok (1.522s) 2023-01-11T21:41:24.1579077Z test_cpu_double_broadcast3 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1579201Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1579458Z [2023-01-11 21:40:38,833] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 557 2023-01-11T21:41:24.1579722Z [2023-01-11 21:40:40,409] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 557 2023-01-11T21:41:24.1579727Z 2023-01-11T21:41:24.1579821Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1579889Z import torch 2023-01-11T21:41:24.1579957Z import random 2023-01-11T21:41:24.1580073Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1580221Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1580227Z 2023-01-11T21:41:24.1580291Z aten = torch.ops.aten 2023-01-11T21:41:24.1580421Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1580511Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1580517Z 2023-01-11T21:41:24.1580521Z 2023-01-11T21:41:24.1580653Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1580855Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1580975Z extern "C" void kernel(const double* __restrict__ in_ptr0, 2023-01-11T21:41:24.1581078Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1581176Z double* __restrict__ out_ptr0) 2023-01-11T21:41:24.1581223Z { 2023-01-11T21:41:24.1581318Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1581378Z { 2023-01-11T21:41:24.1581514Z #pragma omp for 2023-01-11T21:41:24.1581596Z for(long i0=0; i0<100; i0+=1) 2023-01-11T21:41:24.1581656Z { 2023-01-11T21:41:24.1581705Z { 2023-01-11T21:41:24.1581767Z { 2023-01-11T21:41:24.1581859Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1581951Z auto tmp1 = in_ptr1[0]; 2023-01-11T21:41:24.1582058Z auto tmp2 = static_cast(tmp1); 2023-01-11T21:41:24.1582148Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:24.1582230Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.1582281Z } 2023-01-11T21:41:24.1582474Z } 2023-01-11T21:41:24.1582567Z } 2023-01-11T21:41:24.1582647Z } 2023-01-11T21:41:24.1582707Z } 2023-01-11T21:41:24.1582790Z ''') 2023-01-11T21:41:24.1582795Z 2023-01-11T21:41:24.1582799Z 2023-01-11T21:41:24.1582890Z async_compile.wait(globals()) 2023-01-11T21:41:24.1582948Z del async_compile 2023-01-11T21:41:24.1582968Z 2023-01-11T21:41:24.1583027Z def call(args): 2023-01-11T21:41:24.1583100Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1583170Z args.clear() 2023-01-11T21:41:24.1583371Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1583532Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1583598Z del arg0_1 2023-01-11T21:41:24.1583663Z del arg1_1 
2023-01-11T21:41:24.1583720Z return (buf0, ) 2023-01-11T21:41:24.1583724Z 2023-01-11T21:41:24.1583730Z 2023-01-11T21:41:24.1583803Z if __name__ == "__main__": 2023-01-11T21:41:24.1583917Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1584039Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1584238Z arg0_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1584429Z arg1_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1584543Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1584548Z 2023-01-11T21:41:24.1584614Z ok (1.591s) 2023-01-11T21:41:24.1585086Z test_cpu_double_dense (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1585209Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1585470Z [2023-01-11 21:40:40,425] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 558 2023-01-11T21:41:24.1585732Z [2023-01-11 21:40:41,995] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 558 2023-01-11T21:41:24.1585737Z 2023-01-11T21:41:24.1585896Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1585965Z import torch 2023-01-11T21:41:24.1586034Z import random 2023-01-11T21:41:24.1586148Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1586267Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1586272Z 2023-01-11T21:41:24.1586336Z aten = torch.ops.aten 2023-01-11T21:41:24.1586469Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1586566Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1586571Z 2023-01-11T21:41:24.1586575Z 2023-01-11T21:41:24.1586713Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1586916Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1587035Z extern "C" void kernel(const double* __restrict__ in_ptr0, 2023-01-11T21:41:24.1587138Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1587237Z double* __restrict__ out_ptr0) 2023-01-11T21:41:24.1587322Z { 2023-01-11T21:41:24.1587419Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1587483Z { 2023-01-11T21:41:24.1587557Z #pragma omp for 2023-01-11T21:41:24.1587638Z for(long i0=0; i0<100; i0+=1) 2023-01-11T21:41:24.1587698Z { 2023-01-11T21:41:24.1587758Z { 2023-01-11T21:41:24.1587809Z { 2023-01-11T21:41:24.1587899Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1587989Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1588094Z auto tmp2 = static_cast(tmp1); 2023-01-11T21:41:24.1588184Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:24.1588271Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.1588334Z } 2023-01-11T21:41:24.1588383Z } 2023-01-11T21:41:24.1588444Z } 2023-01-11T21:41:24.1588506Z } 2023-01-11T21:41:24.1588566Z } 2023-01-11T21:41:24.1588648Z ''') 2023-01-11T21:41:24.1588655Z 2023-01-11T21:41:24.1588662Z 2023-01-11T21:41:24.1588747Z async_compile.wait(globals()) 2023-01-11T21:41:24.1588818Z del async_compile 2023-01-11T21:41:24.1588823Z 2023-01-11T21:41:24.1588879Z def call(args): 
2023-01-11T21:41:24.1588955Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1589025Z args.clear() 2023-01-11T21:41:24.1589225Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1589382Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1589449Z del arg0_1 2023-01-11T21:41:24.1589515Z del arg1_1 2023-01-11T21:41:24.1589573Z return (buf0, ) 2023-01-11T21:41:24.1589578Z 2023-01-11T21:41:24.1589582Z 2023-01-11T21:41:24.1589655Z if __name__ == "__main__": 2023-01-11T21:41:24.1589766Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1589887Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1590087Z arg0_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1590286Z arg1_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1590401Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1590405Z 2023-01-11T21:41:24.1590471Z ok (1.585s) 2023-01-11T21:41:24.1590950Z test_cpu_double_double (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1591075Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1591336Z [2023-01-11 21:40:42,010] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 559 2023-01-11T21:41:24.1591633Z [2023-01-11 21:40:43,542] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 559 2023-01-11T21:41:24.1591638Z 2023-01-11T21:41:24.1591733Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1591803Z import torch 2023-01-11T21:41:24.1591873Z import random 2023-01-11T21:41:24.1591988Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1592110Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1592115Z 2023-01-11T21:41:24.1592180Z aten = torch.ops.aten 2023-01-11T21:41:24.1592314Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1592406Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1592411Z 2023-01-11T21:41:24.1592415Z 2023-01-11T21:41:24.1592550Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1592756Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1592877Z extern "C" void kernel(const double* __restrict__ in_ptr0, 2023-01-11T21:41:24.1593017Z const double* __restrict__ in_ptr1, 2023-01-11T21:41:24.1593121Z double* __restrict__ out_ptr0) 2023-01-11T21:41:24.1593169Z { 2023-01-11T21:41:24.1593268Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1593331Z { 2023-01-11T21:41:24.1593409Z #pragma omp for 2023-01-11T21:41:24.1593495Z for(long i0=0; i0<100; i0+=1) 2023-01-11T21:41:24.1593560Z { 2023-01-11T21:41:24.1593624Z { 2023-01-11T21:41:24.1593675Z { 2023-01-11T21:41:24.1593824Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1593917Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1594007Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1594092Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1594157Z } 2023-01-11T21:41:24.1594219Z } 
2023-01-11T21:41:24.1594268Z } 2023-01-11T21:41:24.1594333Z } 2023-01-11T21:41:24.1594395Z } 2023-01-11T21:41:24.1594477Z ''') 2023-01-11T21:41:24.1594482Z 2023-01-11T21:41:24.1594487Z 2023-01-11T21:41:24.1594575Z async_compile.wait(globals()) 2023-01-11T21:41:24.1594650Z del async_compile 2023-01-11T21:41:24.1594654Z 2023-01-11T21:41:24.1594724Z def call(args): 2023-01-11T21:41:24.1594785Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1594856Z args.clear() 2023-01-11T21:41:24.1595057Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1595218Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1595290Z del arg0_1 2023-01-11T21:41:24.1595356Z del arg1_1 2023-01-11T21:41:24.1595428Z return (buf0, ) 2023-01-11T21:41:24.1595433Z 2023-01-11T21:41:24.1595437Z 2023-01-11T21:41:24.1595499Z if __name__ == "__main__": 2023-01-11T21:41:24.1595613Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1595738Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1595939Z arg0_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1596136Z arg1_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1596251Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1596256Z 2023-01-11T21:41:24.1596322Z ok (1.547s) 2023-01-11T21:41:24.1596810Z test_cpu_double_int (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1596938Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1597190Z [2023-01-11 21:40:43,559] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 560 2023-01-11T21:41:24.1597487Z [2023-01-11 21:40:45,125] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 560 2023-01-11T21:41:24.1597493Z 2023-01-11T21:41:24.1597585Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1597657Z import torch 2023-01-11T21:41:24.1597724Z import random 2023-01-11T21:41:24.1597837Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1597956Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1597962Z 2023-01-11T21:41:24.1598039Z aten = torch.ops.aten 2023-01-11T21:41:24.1598159Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1598253Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1598257Z 2023-01-11T21:41:24.1598261Z 2023-01-11T21:41:24.1598393Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1598624Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1598751Z extern "C" void kernel(const double* __restrict__ in_ptr0, 2023-01-11T21:41:24.1598850Z const int* __restrict__ in_ptr1, 2023-01-11T21:41:24.1598948Z double* __restrict__ out_ptr0) 2023-01-11T21:41:24.1599007Z { 2023-01-11T21:41:24.1599091Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1599152Z { 2023-01-11T21:41:24.1599227Z #pragma omp for 2023-01-11T21:41:24.1599307Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1599367Z { 2023-01-11T21:41:24.1599443Z #pragma GCC ivdep 2023-01-11T21:41:24.1599526Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1599575Z { 2023-01-11T21:41:24.1599638Z { 2023-01-11T21:41:24.1599701Z { 2023-01-11T21:41:24.1599802Z auto tmp0 = in_ptr0[i1 + (10*i0)]; 2023-01-11T21:41:24.1599894Z auto tmp1 = in_ptr1[i1]; 2023-01-11T21:41:24.1600010Z auto tmp2 = static_cast(tmp1); 2023-01-11T21:41:24.1600104Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:24.1600187Z out_ptr0[i1 + (10*i0)] = tmp3; 2023-01-11T21:41:24.1600252Z } 2023-01-11T21:41:24.1600314Z } 2023-01-11T21:41:24.1600375Z } 2023-01-11T21:41:24.1600437Z } 2023-01-11T21:41:24.1600497Z } 2023-01-11T21:41:24.1600544Z } 2023-01-11T21:41:24.1600619Z ''') 2023-01-11T21:41:24.1600625Z 2023-01-11T21:41:24.1600629Z 2023-01-11T21:41:24.1600718Z async_compile.wait(globals()) 2023-01-11T21:41:24.1600789Z del async_compile 2023-01-11T21:41:24.1600794Z 2023-01-11T21:41:24.1600861Z def call(args): 2023-01-11T21:41:24.1600933Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1601001Z args.clear() 2023-01-11T21:41:24.1601204Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1601356Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1601425Z del arg0_1 2023-01-11T21:41:24.1601488Z del arg1_1 2023-01-11T21:41:24.1601556Z return (buf0, ) 2023-01-11T21:41:24.1601560Z 2023-01-11T21:41:24.1601565Z 2023-01-11T21:41:24.1601637Z if __name__ == "__main__": 2023-01-11T21:41:24.1601749Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1601872Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1602068Z arg0_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 
2023-01-11T21:41:24.1602245Z arg1_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:24.1602359Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1602364Z 2023-01-11T21:41:24.1602429Z ok (1.584s) 2023-01-11T21:41:24.1602922Z test_cpu_double_strided (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1603094Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1603353Z [2023-01-11 21:40:45,141] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 561 2023-01-11T21:41:24.1603617Z [2023-01-11 21:40:46,691] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 561 2023-01-11T21:41:24.1603623Z 2023-01-11T21:41:24.1603716Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1603785Z import torch 2023-01-11T21:41:24.1603842Z import random 2023-01-11T21:41:24.1603955Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1604074Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1604081Z 2023-01-11T21:41:24.1604187Z aten = torch.ops.aten 2023-01-11T21:41:24.1604324Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1604420Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1604425Z 2023-01-11T21:41:24.1604429Z 2023-01-11T21:41:24.1604563Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1604767Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1604873Z extern "C" void kernel(const double* __restrict__ in_ptr0, 2023-01-11T21:41:24.1604976Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1605076Z double* __restrict__ out_ptr0) 2023-01-11T21:41:24.1605137Z { 2023-01-11T21:41:24.1605232Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1605293Z { 2023-01-11T21:41:24.1605370Z #pragma omp for 2023-01-11T21:41:24.1605439Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1605499Z { 2023-01-11T21:41:24.1605582Z #pragma GCC ivdep 2023-01-11T21:41:24.1605665Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1605726Z { 2023-01-11T21:41:24.1605788Z { 2023-01-11T21:41:24.1605852Z { 2023-01-11T21:41:24.1605941Z auto tmp0 = in_ptr0[i1 + (10*i0)]; 2023-01-11T21:41:24.1606043Z auto tmp1 = in_ptr1[(2*i1) + (30*i0)]; 2023-01-11T21:41:24.1606152Z auto tmp2 = static_cast(tmp1); 2023-01-11T21:41:24.1606244Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:24.1606338Z out_ptr0[i1 + (10*i0)] = tmp3; 2023-01-11T21:41:24.1606402Z } 2023-01-11T21:41:24.1606465Z } 2023-01-11T21:41:24.1606514Z } 2023-01-11T21:41:24.1606574Z } 2023-01-11T21:41:24.1606633Z } 2023-01-11T21:41:24.1606691Z } 2023-01-11T21:41:24.1606768Z ''') 2023-01-11T21:41:24.1606773Z 2023-01-11T21:41:24.1606779Z 2023-01-11T21:41:24.1606868Z async_compile.wait(globals()) 2023-01-11T21:41:24.1606938Z del async_compile 2023-01-11T21:41:24.1606943Z 2023-01-11T21:41:24.1607005Z def call(args): 2023-01-11T21:41:24.1607080Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1607151Z args.clear() 2023-01-11T21:41:24.1607351Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1607513Z 
kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1607579Z del arg0_1 2023-01-11T21:41:24.1607645Z del arg1_1 2023-01-11T21:41:24.1607704Z return (buf0, ) 2023-01-11T21:41:24.1607709Z 2023-01-11T21:41:24.1607724Z 2023-01-11T21:41:24.1607787Z if __name__ == "__main__": 2023-01-11T21:41:24.1607900Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1608023Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1608224Z arg0_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1608453Z arg1_1 = rand_strided((10, 10), (30, 2), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1608567Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1608572Z 2023-01-11T21:41:24.1608637Z ok (1.566s) 2023-01-11T21:41:24.1609132Z test_cpu_double_transposed (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1609256Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1609503Z [2023-01-11 21:40:46,707] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 562 2023-01-11T21:41:24.1609801Z [2023-01-11 21:40:48,262] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 562 2023-01-11T21:41:24.1609809Z 2023-01-11T21:41:24.1609905Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1609972Z import torch 2023-01-11T21:41:24.1610041Z import random 2023-01-11T21:41:24.1610152Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1610270Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1610274Z 2023-01-11T21:41:24.1610351Z aten = torch.ops.aten 2023-01-11T21:41:24.1610470Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1610561Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1610566Z 2023-01-11T21:41:24.1610571Z 2023-01-11T21:41:24.1610702Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1610903Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1611021Z extern "C" void kernel(const double* __restrict__ in_ptr0, 2023-01-11T21:41:24.1611129Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1611227Z double* __restrict__ out_ptr0) 2023-01-11T21:41:24.1611277Z { 2023-01-11T21:41:24.1611372Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1611432Z { 2023-01-11T21:41:24.1611511Z #pragma omp for 2023-01-11T21:41:24.1611592Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1611654Z { 2023-01-11T21:41:24.1611732Z #pragma GCC ivdep 2023-01-11T21:41:24.1611803Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1611869Z { 2023-01-11T21:41:24.1611933Z { 2023-01-11T21:41:24.1611996Z { 2023-01-11T21:41:24.1612098Z auto tmp0 = in_ptr0[i1 + (10*i0)]; 2023-01-11T21:41:24.1612197Z auto tmp1 = in_ptr1[i0 + (10*i1)]; 2023-01-11T21:41:24.1612308Z auto tmp2 = static_cast(tmp1); 2023-01-11T21:41:24.1612394Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:24.1612486Z out_ptr0[i1 + (10*i0)] = tmp3; 2023-01-11T21:41:24.1612550Z } 2023-01-11T21:41:24.1612612Z } 
2023-01-11T21:41:24.1612673Z } 2023-01-11T21:41:24.1612732Z } 2023-01-11T21:41:24.1612792Z } 2023-01-11T21:41:24.1612839Z } 2023-01-11T21:41:24.1612919Z ''') 2023-01-11T21:41:24.1612923Z 2023-01-11T21:41:24.1612928Z 2023-01-11T21:41:24.1613014Z async_compile.wait(globals()) 2023-01-11T21:41:24.1613084Z del async_compile 2023-01-11T21:41:24.1613088Z 2023-01-11T21:41:24.1613158Z def call(args): 2023-01-11T21:41:24.1613231Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1613303Z args.clear() 2023-01-11T21:41:24.1613489Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1613648Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1613752Z del arg0_1 2023-01-11T21:41:24.1613820Z del arg1_1 2023-01-11T21:41:24.1613889Z return (buf0, ) 2023-01-11T21:41:24.1613894Z 2023-01-11T21:41:24.1613898Z 2023-01-11T21:41:24.1613971Z if __name__ == "__main__": 2023-01-11T21:41:24.1614082Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1614204Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1614390Z arg0_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1614586Z arg1_1 = rand_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1614699Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1614704Z 2023-01-11T21:41:24.1614768Z ok (1.570s) 2023-01-11T21:41:24.1615284Z test_cpu_int_broadcast1 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1615414Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1615672Z [2023-01-11 21:40:48,277] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 563 2023-01-11T21:41:24.1615933Z [2023-01-11 21:40:49,806] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 563 2023-01-11T21:41:24.1615938Z 2023-01-11T21:41:24.1616032Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1616088Z import torch 2023-01-11T21:41:24.1616155Z import random 2023-01-11T21:41:24.1616268Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1616385Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1616390Z 2023-01-11T21:41:24.1616465Z aten = torch.ops.aten 2023-01-11T21:41:24.1616602Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1616692Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1616698Z 2023-01-11T21:41:24.1616702Z 2023-01-11T21:41:24.1616835Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1617025Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1617139Z extern "C" void kernel(const int* __restrict__ in_ptr0, 2023-01-11T21:41:24.1617242Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1617342Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1617402Z { 2023-01-11T21:41:24.1617496Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1617556Z { 2023-01-11T21:41:24.1617619Z #pragma omp for 2023-01-11T21:41:24.1617699Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1617762Z { 2023-01-11T21:41:24.1617827Z { 2023-01-11T21:41:24.1617889Z { 2023-01-11T21:41:24.1617986Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1618078Z auto tmp2 = in_ptr1[i0]; 2023-01-11T21:41:24.1618172Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.1618262Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.1618344Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.1618408Z } 2023-01-11T21:41:24.1618468Z } 2023-01-11T21:41:24.1618528Z } 2023-01-11T21:41:24.1618589Z } 2023-01-11T21:41:24.1618635Z } 2023-01-11T21:41:24.1618713Z ''') 2023-01-11T21:41:24.1618718Z 2023-01-11T21:41:24.1618721Z 2023-01-11T21:41:24.1618808Z async_compile.wait(globals()) 2023-01-11T21:41:24.1618879Z del async_compile 2023-01-11T21:41:24.1618884Z 2023-01-11T21:41:24.1618952Z def call(args): 2023-01-11T21:41:24.1619024Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1619092Z args.clear() 2023-01-11T21:41:24.1619272Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1619469Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1619536Z del arg0_1 2023-01-11T21:41:24.1619599Z del arg1_1 2023-01-11T21:41:24.1619667Z return (buf0, ) 2023-01-11T21:41:24.1619671Z 2023-01-11T21:41:24.1619676Z 2023-01-11T21:41:24.1619750Z if __name__ == "__main__": 2023-01-11T21:41:24.1619863Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1619984Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1620161Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:24.1620354Z arg1_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1620465Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1620470Z 
2023-01-11T21:41:24.1620535Z ok (1.544s) 2023-01-11T21:41:24.1621051Z test_cpu_int_broadcast2 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1621182Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1621443Z [2023-01-11 21:40:49,822] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 564 2023-01-11T21:41:24.1621709Z [2023-01-11 21:40:51,386] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 564 2023-01-11T21:41:24.1621714Z 2023-01-11T21:41:24.1621806Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1621874Z import torch 2023-01-11T21:41:24.1621931Z import random 2023-01-11T21:41:24.1622044Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1622170Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1622176Z 2023-01-11T21:41:24.1622251Z aten = torch.ops.aten 2023-01-11T21:41:24.1622495Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1622586Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1622591Z 2023-01-11T21:41:24.1622595Z 2023-01-11T21:41:24.1622730Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1622921Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1623038Z extern "C" void kernel(const int* __restrict__ in_ptr0, 2023-01-11T21:41:24.1623143Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1623242Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1623303Z { 2023-01-11T21:41:24.1623399Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1623460Z { 2023-01-11T21:41:24.1623523Z #pragma omp for 2023-01-11T21:41:24.1623603Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1623671Z { 2023-01-11T21:41:24.1623752Z #pragma GCC ivdep 2023-01-11T21:41:24.1623837Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1623901Z { 2023-01-11T21:41:24.1623965Z { 2023-01-11T21:41:24.1624019Z { 2023-01-11T21:41:24.1624115Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:24.1624210Z auto tmp2 = in_ptr1[i0]; 2023-01-11T21:41:24.1624321Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.1624414Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.1624511Z out_ptr0[i1 + (10*i0)] = tmp3; 2023-01-11T21:41:24.1624579Z } 2023-01-11T21:41:24.1624631Z } 2023-01-11T21:41:24.1624698Z } 2023-01-11T21:41:24.1624761Z } 2023-01-11T21:41:24.1624823Z } 2023-01-11T21:41:24.1624884Z } 2023-01-11T21:41:24.1624963Z ''') 2023-01-11T21:41:24.1625026Z 2023-01-11T21:41:24.1625030Z 2023-01-11T21:41:24.1625120Z async_compile.wait(globals()) 2023-01-11T21:41:24.1625179Z del async_compile 2023-01-11T21:41:24.1625184Z 2023-01-11T21:41:24.1625256Z def call(args): 2023-01-11T21:41:24.1625331Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1625404Z args.clear() 2023-01-11T21:41:24.1625615Z buf0 = empty_strided((1, 10, 10), (100, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1625779Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1625846Z del arg0_1 2023-01-11T21:41:24.1625898Z del arg1_1 2023-01-11T21:41:24.1625969Z return (buf0, ) 
2023-01-11T21:41:24.1625974Z 2023-01-11T21:41:24.1625979Z 2023-01-11T21:41:24.1626055Z if __name__ == "__main__": 2023-01-11T21:41:24.1626167Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1626290Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1626522Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:24.1626731Z arg1_1 = rand_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1626845Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1626849Z 2023-01-11T21:41:24.1626916Z ok (1.580s) 2023-01-11T21:41:24.1627393Z test_cpu_int_broadcast3 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1627520Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1627783Z [2023-01-11 21:40:51,403] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 565 2023-01-11T21:41:24.1628052Z [2023-01-11 21:40:53,005] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 565 2023-01-11T21:41:24.1628058Z 2023-01-11T21:41:24.1628151Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1628224Z import torch 2023-01-11T21:41:24.1628294Z import random 2023-01-11T21:41:24.1628409Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1628531Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1628536Z 2023-01-11T21:41:24.1628601Z aten = torch.ops.aten 2023-01-11T21:41:24.1628735Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1628826Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1628831Z 2023-01-11T21:41:24.1628835Z 2023-01-11T21:41:24.1628968Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1629174Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1629292Z extern "C" void kernel(const int* __restrict__ in_ptr0, 2023-01-11T21:41:24.1629399Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1629496Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1629544Z { 2023-01-11T21:41:24.1629641Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1629706Z { 2023-01-11T21:41:24.1629783Z #pragma omp for 2023-01-11T21:41:24.1629861Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1629925Z { 2023-01-11T21:41:24.1629976Z { 2023-01-11T21:41:24.1630037Z { 2023-01-11T21:41:24.1630126Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1630217Z auto tmp2 = in_ptr1[0]; 2023-01-11T21:41:24.1630320Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.1630411Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.1630496Z out_ptr0[i0] = tmp3; 2023-01-11T21:41:24.1630550Z } 2023-01-11T21:41:24.1630646Z } 2023-01-11T21:41:24.1630706Z } 2023-01-11T21:41:24.1630765Z } 2023-01-11T21:41:24.1630825Z } 2023-01-11T21:41:24.1630902Z ''') 2023-01-11T21:41:24.1630907Z 2023-01-11T21:41:24.1630911Z 2023-01-11T21:41:24.1631000Z async_compile.wait(globals()) 2023-01-11T21:41:24.1631059Z del async_compile 2023-01-11T21:41:24.1631075Z 2023-01-11T21:41:24.1631132Z def call(args): 2023-01-11T21:41:24.1631204Z arg0_1, arg1_1 = args 
2023-01-11T21:41:24.1631273Z args.clear() 2023-01-11T21:41:24.1631469Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1631631Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1631699Z del arg0_1 2023-01-11T21:41:24.1631762Z del arg1_1 2023-01-11T21:41:24.1631820Z return (buf0, ) 2023-01-11T21:41:24.1631825Z 2023-01-11T21:41:24.1631829Z 2023-01-11T21:41:24.1631902Z if __name__ == "__main__": 2023-01-11T21:41:24.1632046Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1632168Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1632359Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:24.1632548Z arg1_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1632660Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1632665Z 2023-01-11T21:41:24.1632730Z ok (1.619s) 2023-01-11T21:41:24.1633199Z test_cpu_int_dense (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1633326Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1633586Z [2023-01-11 21:40:53,021] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 566 2023-01-11T21:41:24.1633942Z [2023-01-11 21:40:54,577] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 566 2023-01-11T21:41:24.1633949Z 2023-01-11T21:41:24.1634044Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1634113Z import torch 2023-01-11T21:41:24.1634183Z import random 2023-01-11T21:41:24.1634297Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1634417Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1634421Z 2023-01-11T21:41:24.1634485Z aten = torch.ops.aten 2023-01-11T21:41:24.1634617Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1634709Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1634714Z 2023-01-11T21:41:24.1634718Z 2023-01-11T21:41:24.1634857Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1635064Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1635178Z extern "C" void kernel(const int* __restrict__ in_ptr0, 2023-01-11T21:41:24.1635282Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1635380Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1635426Z { 2023-01-11T21:41:24.1635522Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1635583Z { 2023-01-11T21:41:24.1635658Z #pragma omp for 2023-01-11T21:41:24.1635740Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1635801Z { 2023-01-11T21:41:24.1635867Z #pragma GCC ivdep 2023-01-11T21:41:24.1635951Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1636014Z { 2023-01-11T21:41:24.1636077Z { 2023-01-11T21:41:24.1636141Z { 2023-01-11T21:41:24.1636236Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:24.1636379Z auto tmp2 = in_ptr1[i1 + (10*i0)]; 2023-01-11T21:41:24.1636475Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.1636566Z auto tmp3 = tmp1 + tmp2; 
2023-01-11T21:41:24.1636660Z out_ptr0[i1 + (10*i0)] = tmp3; 2023-01-11T21:41:24.1636725Z } 2023-01-11T21:41:24.1636787Z } 2023-01-11T21:41:24.1636849Z } 2023-01-11T21:41:24.1636910Z } 2023-01-11T21:41:24.1636959Z } 2023-01-11T21:41:24.1637016Z } 2023-01-11T21:41:24.1637094Z ''') 2023-01-11T21:41:24.1637100Z 2023-01-11T21:41:24.1637104Z 2023-01-11T21:41:24.1637192Z async_compile.wait(globals()) 2023-01-11T21:41:24.1637261Z del async_compile 2023-01-11T21:41:24.1637266Z 2023-01-11T21:41:24.1637332Z def call(args): 2023-01-11T21:41:24.1637405Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1637462Z args.clear() 2023-01-11T21:41:24.1637695Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1637856Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1637921Z del arg0_1 2023-01-11T21:41:24.1637984Z del arg1_1 2023-01-11T21:41:24.1638054Z return (buf0, ) 2023-01-11T21:41:24.1638059Z 2023-01-11T21:41:24.1638063Z 2023-01-11T21:41:24.1638135Z if __name__ == "__main__": 2023-01-11T21:41:24.1638251Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1638361Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1638555Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:24.1638752Z arg1_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1638863Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1638868Z 2023-01-11T21:41:24.1638933Z ok (1.572s) 2023-01-11T21:41:24.1639417Z test_cpu_int_double (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1639544Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1639805Z [2023-01-11 21:40:54,593] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 567 2023-01-11T21:41:24.1640069Z [2023-01-11 21:40:56,260] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 567 2023-01-11T21:41:24.1640074Z 2023-01-11T21:41:24.1640169Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1640226Z import torch 2023-01-11T21:41:24.1640294Z import random 2023-01-11T21:41:24.1640408Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1640533Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1640538Z 2023-01-11T21:41:24.1640617Z aten = torch.ops.aten 2023-01-11T21:41:24.1640749Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1640840Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1640845Z 2023-01-11T21:41:24.1640849Z 2023-01-11T21:41:24.1640983Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1641172Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1641288Z extern "C" void kernel(const int* __restrict__ in_ptr0, 2023-01-11T21:41:24.1641397Z const double* __restrict__ in_ptr1, 2023-01-11T21:41:24.1641497Z double* __restrict__ out_ptr0) 2023-01-11T21:41:24.1641556Z { 2023-01-11T21:41:24.1641650Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1641711Z { 2023-01-11T21:41:24.1641773Z #pragma omp for 2023-01-11T21:41:24.1641904Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1641965Z { 2023-01-11T21:41:24.1642042Z #pragma GCC ivdep 2023-01-11T21:41:24.1642126Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1642190Z { 2023-01-11T21:41:24.1642243Z { 2023-01-11T21:41:24.1642306Z { 2023-01-11T21:41:24.1642399Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:24.1642500Z auto tmp2 = in_ptr1[i1 + (10*i0)]; 2023-01-11T21:41:24.1642609Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.1642700Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.1642793Z out_ptr0[i1 + (10*i0)] = tmp3; 2023-01-11T21:41:24.1642847Z } 2023-01-11T21:41:24.1642911Z } 2023-01-11T21:41:24.1642973Z } 2023-01-11T21:41:24.1643035Z } 2023-01-11T21:41:24.1643096Z } 2023-01-11T21:41:24.1643155Z } 2023-01-11T21:41:24.1643234Z ''') 2023-01-11T21:41:24.1643268Z 2023-01-11T21:41:24.1643273Z 2023-01-11T21:41:24.1643349Z async_compile.wait(globals()) 2023-01-11T21:41:24.1643419Z del async_compile 2023-01-11T21:41:24.1643424Z 2023-01-11T21:41:24.1643492Z def call(args): 2023-01-11T21:41:24.1643565Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1643636Z args.clear() 2023-01-11T21:41:24.1643834Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1643993Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1644061Z del arg0_1 2023-01-11T21:41:24.1644112Z del arg1_1 2023-01-11T21:41:24.1644180Z return (buf0, ) 2023-01-11T21:41:24.1644184Z 2023-01-11T21:41:24.1644188Z 2023-01-11T21:41:24.1644261Z if __name__ == "__main__": 2023-01-11T21:41:24.1644372Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1644492Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1644689Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.int32) 
2023-01-11T21:41:24.1644890Z arg1_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1644993Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1645009Z 2023-01-11T21:41:24.1645063Z ok (1.682s) 2023-01-11T21:41:24.1645541Z test_cpu_int_int (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1645666Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1645923Z [2023-01-11 21:40:56,275] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 568 2023-01-11T21:41:24.1646191Z [2023-01-11 21:40:57,921] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 568 2023-01-11T21:41:24.1646196Z 2023-01-11T21:41:24.1646286Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1646355Z import torch 2023-01-11T21:41:24.1646423Z import random 2023-01-11T21:41:24.1646537Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1646645Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1646650Z 2023-01-11T21:41:24.1646726Z aten = torch.ops.aten 2023-01-11T21:41:24.1646856Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1646946Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1646950Z 2023-01-11T21:41:24.1646955Z 2023-01-11T21:41:24.1647086Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1647289Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1647405Z extern "C" void kernel(const int* __restrict__ in_ptr0, 2023-01-11T21:41:24.1647537Z const int* __restrict__ in_ptr1, 2023-01-11T21:41:24.1647619Z int* __restrict__ out_ptr0) 2023-01-11T21:41:24.1647676Z { 2023-01-11T21:41:24.1647772Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1647831Z { 2023-01-11T21:41:24.1647905Z #pragma omp for 2023-01-11T21:41:24.1647985Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1648046Z { 2023-01-11T21:41:24.1648095Z { 2023-01-11T21:41:24.1648159Z { 2023-01-11T21:41:24.1648249Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1648338Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1648424Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1648508Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1648561Z } 2023-01-11T21:41:24.1648621Z } 2023-01-11T21:41:24.1648681Z } 2023-01-11T21:41:24.1648774Z } 2023-01-11T21:41:24.1648833Z } 2023-01-11T21:41:24.1648911Z ''') 2023-01-11T21:41:24.1648916Z 2023-01-11T21:41:24.1648920Z 2023-01-11T21:41:24.1649007Z async_compile.wait(globals()) 2023-01-11T21:41:24.1649066Z del async_compile 2023-01-11T21:41:24.1649081Z 2023-01-11T21:41:24.1649138Z def call(args): 2023-01-11T21:41:24.1649210Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1649279Z args.clear() 2023-01-11T21:41:24.1649471Z buf0 = empty_strided((10, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:24.1649629Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1649694Z del arg0_1 2023-01-11T21:41:24.1649758Z del arg1_1 2023-01-11T21:41:24.1649816Z return (buf0, ) 2023-01-11T21:41:24.1649821Z 
2023-01-11T21:41:24.1649825Z 2023-01-11T21:41:24.1649901Z if __name__ == "__main__": 2023-01-11T21:41:24.1650012Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1650139Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1650331Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:24.1650519Z arg1_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:24.1650631Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1650636Z 2023-01-11T21:41:24.1650701Z ok (1.661s) 2023-01-11T21:41:24.1651173Z test_cpu_int_strided (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1651298Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1651558Z [2023-01-11 21:40:57,938] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 569 2023-01-11T21:41:24.1651823Z [2023-01-11 21:40:59,536] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 569 2023-01-11T21:41:24.1651828Z 2023-01-11T21:41:24.1651920Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1651990Z import torch 2023-01-11T21:41:24.1652059Z import random 2023-01-11T21:41:24.1652171Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1652289Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1652294Z 2023-01-11T21:41:24.1652358Z aten = torch.ops.aten 2023-01-11T21:41:24.1652488Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1652577Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1652582Z 2023-01-11T21:41:24.1652586Z 2023-01-11T21:41:24.1652718Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1652923Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1653070Z extern "C" void kernel(const int* __restrict__ in_ptr0, 2023-01-11T21:41:24.1653174Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1653271Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1653319Z { 2023-01-11T21:41:24.1653413Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1653473Z { 2023-01-11T21:41:24.1653548Z #pragma omp for 2023-01-11T21:41:24.1653628Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1653688Z { 2023-01-11T21:41:24.1653766Z #pragma GCC ivdep 2023-01-11T21:41:24.1653838Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1653900Z { 2023-01-11T21:41:24.1653961Z { 2023-01-11T21:41:24.1654028Z { 2023-01-11T21:41:24.1654120Z auto tmp0 = in_ptr0[i1]; 2023-01-11T21:41:24.1654222Z auto tmp2 = in_ptr1[(2*i1) + (30*i0)]; 2023-01-11T21:41:24.1654357Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.1654452Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.1654544Z out_ptr0[i1 + (10*i0)] = tmp3; 2023-01-11T21:41:24.1654609Z } 2023-01-11T21:41:24.1654672Z } 2023-01-11T21:41:24.1654735Z } 2023-01-11T21:41:24.1654796Z } 2023-01-11T21:41:24.1654844Z } 2023-01-11T21:41:24.1654902Z } 2023-01-11T21:41:24.1654979Z ''') 2023-01-11T21:41:24.1654984Z 2023-01-11T21:41:24.1654988Z 2023-01-11T21:41:24.1655076Z async_compile.wait(globals()) 
2023-01-11T21:41:24.1655145Z del async_compile 2023-01-11T21:41:24.1655150Z 2023-01-11T21:41:24.1655220Z def call(args): 2023-01-11T21:41:24.1655296Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1655366Z args.clear() 2023-01-11T21:41:24.1655554Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1655725Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1655796Z del arg0_1 2023-01-11T21:41:24.1655864Z del arg1_1 2023-01-11T21:41:24.1655935Z return (buf0, ) 2023-01-11T21:41:24.1655939Z 2023-01-11T21:41:24.1655943Z 2023-01-11T21:41:24.1656020Z if __name__ == "__main__": 2023-01-11T21:41:24.1656135Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1656244Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1656435Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:24.1656636Z arg1_1 = rand_strided((10, 10), (30, 2), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1656750Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1656754Z 2023-01-11T21:41:24.1656825Z ok (1.615s) 2023-01-11T21:41:24.1657316Z test_cpu_int_transposed (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1657446Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1657709Z [2023-01-11 21:40:59,553] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 570 2023-01-11T21:41:24.1657977Z [2023-01-11 21:41:01,124] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 570 2023-01-11T21:41:24.1657982Z 2023-01-11T21:41:24.1658079Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1658137Z import torch 2023-01-11T21:41:24.1658209Z import random 2023-01-11T21:41:24.1658324Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1658444Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1658483Z 2023-01-11T21:41:24.1658565Z aten = torch.ops.aten 2023-01-11T21:41:24.1658700Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1658791Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1658796Z 2023-01-11T21:41:24.1658800Z 2023-01-11T21:41:24.1658935Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1659125Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1659242Z extern "C" void kernel(const int* __restrict__ in_ptr0, 2023-01-11T21:41:24.1659347Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1659447Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1659509Z { 2023-01-11T21:41:24.1659607Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1659669Z { 2023-01-11T21:41:24.1659733Z #pragma omp for 2023-01-11T21:41:24.1659815Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1659880Z { 2023-01-11T21:41:24.1659993Z #pragma GCC ivdep 2023-01-11T21:41:24.1660080Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1660143Z { 2023-01-11T21:41:24.1660194Z { 2023-01-11T21:41:24.1660259Z { 2023-01-11T21:41:24.1660355Z auto tmp0 = 
in_ptr0[i0]; 2023-01-11T21:41:24.1660458Z auto tmp2 = in_ptr1[i1 + (10*i0)]; 2023-01-11T21:41:24.1660567Z auto tmp1 = static_cast<float>(tmp0); 2023-01-11T21:41:24.1660664Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.1660763Z out_ptr0[i1 + (10*i0)] = tmp3; 2023-01-11T21:41:24.1660827Z } 2023-01-11T21:41:24.1660878Z } 2023-01-11T21:41:24.1660942Z } 2023-01-11T21:41:24.1661001Z } 2023-01-11T21:41:24.1661063Z } 2023-01-11T21:41:24.1661123Z } 2023-01-11T21:41:24.1661205Z ''') 2023-01-11T21:41:24.1661210Z 2023-01-11T21:41:24.1661217Z 2023-01-11T21:41:24.1661305Z async_compile.wait(globals()) 2023-01-11T21:41:24.1661364Z del async_compile 2023-01-11T21:41:24.1661368Z 2023-01-11T21:41:24.1661436Z def call(args): 2023-01-11T21:41:24.1661507Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1661582Z args.clear() 2023-01-11T21:41:24.1661781Z buf0 = empty_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1661942Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1662007Z del arg0_1 2023-01-11T21:41:24.1662060Z del arg1_1 2023-01-11T21:41:24.1662128Z return (buf0, ) 2023-01-11T21:41:24.1662133Z 2023-01-11T21:41:24.1662137Z 2023-01-11T21:41:24.1662211Z if __name__ == "__main__": 2023-01-11T21:41:24.1662436Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1662622Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1662823Z arg0_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:24.1663026Z arg1_1 = rand_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1663139Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1663144Z 2023-01-11T21:41:24.1663197Z ok (1.588s) 2023-01-11T21:41:24.1663692Z test_cpu_strided_broadcast1 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1663817Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1664078Z [2023-01-11 21:41:01,140] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 571 2023-01-11T21:41:24.1664342Z [2023-01-11 21:41:02,739] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 571 2023-01-11T21:41:24.1664410Z 2023-01-11T21:41:24.1664504Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1664572Z import torch 2023-01-11T21:41:24.1664641Z import random 2023-01-11T21:41:24.1664753Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1664857Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1664862Z 2023-01-11T21:41:24.1664937Z aten = torch.ops.aten 2023-01-11T21:41:24.1665067Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1665157Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1665162Z 2023-01-11T21:41:24.1665166Z 2023-01-11T21:41:24.1665299Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1665501Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1665620Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1665760Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1665847Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1665908Z { 2023-01-11T21:41:24.1666004Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1666066Z { 2023-01-11T21:41:24.1666139Z #pragma omp for 2023-01-11T21:41:24.1666221Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1666282Z { 2023-01-11T21:41:24.1666348Z #pragma GCC ivdep 2023-01-11T21:41:24.1666430Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1666491Z { 2023-01-11T21:41:24.1666554Z { 2023-01-11T21:41:24.1666617Z { 2023-01-11T21:41:24.1666723Z auto tmp0 = in_ptr0[(2*i1) + (30*i0)]; 2023-01-11T21:41:24.1666815Z auto tmp1 = in_ptr1[i1]; 2023-01-11T21:41:24.1666895Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1666987Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1667057Z } 2023-01-11T21:41:24.1667120Z } 2023-01-11T21:41:24.1667181Z } 2023-01-11T21:41:24.1667241Z } 2023-01-11T21:41:24.1667289Z } 2023-01-11T21:41:24.1667347Z } 2023-01-11T21:41:24.1667424Z ''') 2023-01-11T21:41:24.1667429Z 2023-01-11T21:41:24.1667433Z 2023-01-11T21:41:24.1667520Z async_compile.wait(globals()) 2023-01-11T21:41:24.1667590Z del async_compile 2023-01-11T21:41:24.1667594Z 2023-01-11T21:41:24.1667662Z def call(args): 2023-01-11T21:41:24.1667734Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1667801Z args.clear() 2023-01-11T21:41:24.1667986Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1668148Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1668213Z del arg0_1 2023-01-11T21:41:24.1668277Z del arg1_1 2023-01-11T21:41:24.1668345Z return (buf0, ) 2023-01-11T21:41:24.1668354Z 2023-01-11T21:41:24.1668358Z 2023-01-11T21:41:24.1668432Z if __name__ == "__main__": 2023-01-11T21:41:24.1668543Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1668654Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1668853Z arg0_1 = rand_strided((10, 10), (30, 2), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1669045Z arg1_1 = rand_strided((10, ), (1, ), 
device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1669157Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1669162Z 2023-01-11T21:41:24.1669226Z ok (1.614s) 2023-01-11T21:41:24.1669721Z test_cpu_strided_broadcast2 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1669916Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1670174Z [2023-01-11 21:41:02,754] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 572 2023-01-11T21:41:24.1670436Z [2023-01-11 21:41:04,314] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 572 2023-01-11T21:41:24.1670441Z 2023-01-11T21:41:24.1670534Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1670590Z import torch 2023-01-11T21:41:24.1670660Z import random 2023-01-11T21:41:24.1670773Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1670892Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1670896Z 2023-01-11T21:41:24.1670974Z aten = torch.ops.aten 2023-01-11T21:41:24.1671106Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1671198Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1671234Z 2023-01-11T21:41:24.1671238Z 2023-01-11T21:41:24.1671371Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1671559Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1671676Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1671780Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1671878Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1671937Z { 2023-01-11T21:41:24.1672031Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1672091Z { 2023-01-11T21:41:24.1672154Z #pragma omp for 2023-01-11T21:41:24.1672234Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1672296Z { 2023-01-11T21:41:24.1672372Z #pragma GCC ivdep 2023-01-11T21:41:24.1672457Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1672518Z { 2023-01-11T21:41:24.1672581Z { 2023-01-11T21:41:24.1672638Z { 2023-01-11T21:41:24.1672742Z auto tmp0 = in_ptr0[(2*i1) + (30*i0)]; 2023-01-11T21:41:24.1672833Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1672926Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1673020Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1673084Z } 2023-01-11T21:41:24.1673146Z } 2023-01-11T21:41:24.1673196Z } 2023-01-11T21:41:24.1673256Z } 2023-01-11T21:41:24.1673315Z } 2023-01-11T21:41:24.1673374Z } 2023-01-11T21:41:24.1673452Z ''') 2023-01-11T21:41:24.1673457Z 2023-01-11T21:41:24.1673461Z 2023-01-11T21:41:24.1673550Z async_compile.wait(globals()) 2023-01-11T21:41:24.1673619Z del async_compile 2023-01-11T21:41:24.1673624Z 2023-01-11T21:41:24.1673681Z def call(args): 2023-01-11T21:41:24.1673810Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1673883Z args.clear() 2023-01-11T21:41:24.1674097Z buf0 = empty_strided((1, 10, 10), (100, 10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1674259Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 
2023-01-11T21:41:24.1674326Z del arg0_1 2023-01-11T21:41:24.1674392Z del arg1_1 2023-01-11T21:41:24.1674449Z return (buf0, ) 2023-01-11T21:41:24.1674454Z 2023-01-11T21:41:24.1674458Z 2023-01-11T21:41:24.1674533Z if __name__ == "__main__": 2023-01-11T21:41:24.1674644Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1674770Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1674970Z arg0_1 = rand_strided((10, 10), (30, 2), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1675177Z arg1_1 = rand_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1675292Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1675297Z 2023-01-11T21:41:24.1675365Z ok (1.576s) 2023-01-11T21:41:24.1675891Z test_cpu_strided_broadcast3 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1676007Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1676267Z [2023-01-11 21:41:04,330] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 573 2023-01-11T21:41:24.1676530Z [2023-01-11 21:41:05,874] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 573 2023-01-11T21:41:24.1676535Z 2023-01-11T21:41:24.1676629Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1676696Z import torch 2023-01-11T21:41:24.1676765Z import random 2023-01-11T21:41:24.1676908Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1677033Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1677038Z 2023-01-11T21:41:24.1677104Z aten = torch.ops.aten 2023-01-11T21:41:24.1677235Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1677326Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1677331Z 2023-01-11T21:41:24.1677335Z 2023-01-11T21:41:24.1677469Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1677671Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1677790Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1677895Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1677992Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1678041Z { 2023-01-11T21:41:24.1678136Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1678198Z { 2023-01-11T21:41:24.1678273Z #pragma omp for 2023-01-11T21:41:24.1678355Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1678417Z { 2023-01-11T21:41:24.1678495Z #pragma GCC ivdep 2023-01-11T21:41:24.1678567Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1678629Z { 2023-01-11T21:41:24.1678692Z { 2023-01-11T21:41:24.1678756Z { 2023-01-11T21:41:24.1678860Z auto tmp0 = in_ptr0[(2*i1) + (30*i0)]; 2023-01-11T21:41:24.1678950Z auto tmp1 = in_ptr1[0]; 2023-01-11T21:41:24.1679042Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1679125Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1679188Z } 2023-01-11T21:41:24.1679250Z } 2023-01-11T21:41:24.1679311Z } 2023-01-11T21:41:24.1679370Z } 2023-01-11T21:41:24.1679430Z } 2023-01-11T21:41:24.1679477Z } 2023-01-11T21:41:24.1679555Z ''') 
2023-01-11T21:41:24.1679562Z 2023-01-11T21:41:24.1679566Z 2023-01-11T21:41:24.1679653Z async_compile.wait(globals()) 2023-01-11T21:41:24.1679725Z del async_compile 2023-01-11T21:41:24.1679730Z 2023-01-11T21:41:24.1679800Z def call(args): 2023-01-11T21:41:24.1679873Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1679942Z args.clear() 2023-01-11T21:41:24.1680138Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1680286Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1680351Z del arg0_1 2023-01-11T21:41:24.1680414Z del arg1_1 2023-01-11T21:41:24.1680483Z return (buf0, ) 2023-01-11T21:41:24.1680488Z 2023-01-11T21:41:24.1680492Z 2023-01-11T21:41:24.1680565Z if __name__ == "__main__": 2023-01-11T21:41:24.1680675Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1680794Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1681040Z arg0_1 = rand_strided((10, 10), (30, 2), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1681217Z arg1_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1681329Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1681334Z 2023-01-11T21:41:24.1681401Z ok (1.560s) 2023-01-11T21:41:24.1681889Z test_cpu_strided_dense (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1682015Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1682275Z [2023-01-11 21:41:05,894] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 574 2023-01-11T21:41:24.1682572Z [2023-01-11 21:41:07,492] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 574 2023-01-11T21:41:24.1682578Z 2023-01-11T21:41:24.1682671Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1682742Z import torch 2023-01-11T21:41:24.1682800Z import random 2023-01-11T21:41:24.1682910Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1683028Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1683033Z 2023-01-11T21:41:24.1683107Z aten = torch.ops.aten 2023-01-11T21:41:24.1683238Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1683326Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1683331Z 2023-01-11T21:41:24.1683335Z 2023-01-11T21:41:24.1683467Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1683670Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1683781Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1683883Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1683979Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1684038Z { 2023-01-11T21:41:24.1684133Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1684194Z { 2023-01-11T21:41:24.1684269Z #pragma omp for 2023-01-11T21:41:24.1684336Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1684396Z { 2023-01-11T21:41:24.1684474Z #pragma GCC ivdep 2023-01-11T21:41:24.1684556Z for(long i1=0; i1<10; i1+=1) 
2023-01-11T21:41:24.1684618Z { 2023-01-11T21:41:24.1684680Z { 2023-01-11T21:41:24.1684744Z { 2023-01-11T21:41:24.1684836Z auto tmp0 = in_ptr0[(2*i1) + (30*i0)]; 2023-01-11T21:41:24.1684937Z auto tmp1 = in_ptr1[i1 + (10*i0)]; 2023-01-11T21:41:24.1685030Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1685128Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1685192Z } 2023-01-11T21:41:24.1685252Z } 2023-01-11T21:41:24.1685314Z } 2023-01-11T21:41:24.1685363Z } 2023-01-11T21:41:24.1685422Z } 2023-01-11T21:41:24.1685480Z } 2023-01-11T21:41:24.1685556Z ''') 2023-01-11T21:41:24.1685561Z 2023-01-11T21:41:24.1685565Z 2023-01-11T21:41:24.1685652Z async_compile.wait(globals()) 2023-01-11T21:41:24.1685721Z del async_compile 2023-01-11T21:41:24.1685725Z 2023-01-11T21:41:24.1685792Z def call(args): 2023-01-11T21:41:24.1685854Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1685922Z args.clear() 2023-01-11T21:41:24.1686118Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1686279Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1686344Z del arg0_1 2023-01-11T21:41:24.1686439Z del arg1_1 2023-01-11T21:41:24.1686509Z return (buf0, ) 2023-01-11T21:41:24.1686513Z 2023-01-11T21:41:24.1686517Z 2023-01-11T21:41:24.1686589Z if __name__ == "__main__": 2023-01-11T21:41:24.1686691Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1686812Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1687009Z arg0_1 = rand_strided((10, 10), (30, 2), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1687204Z arg1_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1687316Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1687321Z 2023-01-11T21:41:24.1687388Z ok (1.618s) 2023-01-11T21:41:24.1687907Z test_cpu_strided_double (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1688037Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1688296Z [2023-01-11 21:41:07,509] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 575 2023-01-11T21:41:24.1688549Z [2023-01-11 21:41:09,077] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 575 2023-01-11T21:41:24.1688553Z 2023-01-11T21:41:24.1688649Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1688719Z import torch 2023-01-11T21:41:24.1688790Z import random 2023-01-11T21:41:24.1688905Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1689025Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1689030Z 2023-01-11T21:41:24.1689108Z aten = torch.ops.aten 2023-01-11T21:41:24.1689230Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1689323Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1689328Z 2023-01-11T21:41:24.1689332Z 2023-01-11T21:41:24.1689466Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1689668Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1689787Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1689895Z const double* __restrict__ in_ptr1, 2023-01-11T21:41:24.1689997Z double* __restrict__ out_ptr0) 2023-01-11T21:41:24.1690058Z { 2023-01-11T21:41:24.1690143Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1690203Z { 2023-01-11T21:41:24.1690279Z #pragma omp for 2023-01-11T21:41:24.1690361Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1690423Z { 2023-01-11T21:41:24.1690504Z #pragma GCC ivdep 2023-01-11T21:41:24.1690589Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1690645Z { 2023-01-11T21:41:24.1690710Z { 2023-01-11T21:41:24.1690776Z { 2023-01-11T21:41:24.1690881Z auto tmp0 = in_ptr0[(2*i1) + (30*i0)]; 2023-01-11T21:41:24.1690984Z auto tmp2 = in_ptr1[i1 + (10*i0)]; 2023-01-11T21:41:24.1691095Z auto tmp1 = static_cast(tmp0); 2023-01-11T21:41:24.1691190Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.1691272Z out_ptr0[i1 + (10*i0)] = tmp3; 2023-01-11T21:41:24.1691339Z } 2023-01-11T21:41:24.1691407Z } 2023-01-11T21:41:24.1691471Z } 2023-01-11T21:41:24.1691535Z } 2023-01-11T21:41:24.1691597Z } 2023-01-11T21:41:24.1691658Z } 2023-01-11T21:41:24.1691723Z ''') 2023-01-11T21:41:24.1691728Z 2023-01-11T21:41:24.1691732Z 2023-01-11T21:41:24.1691822Z async_compile.wait(globals()) 2023-01-11T21:41:24.1691894Z del async_compile 2023-01-11T21:41:24.1691927Z 2023-01-11T21:41:24.1692000Z def call(args): 2023-01-11T21:41:24.1692076Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1692146Z args.clear() 2023-01-11T21:41:24.1692348Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1692498Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1692566Z del arg0_1 2023-01-11T21:41:24.1692633Z del arg1_1 2023-01-11T21:41:24.1692701Z return (buf0, ) 2023-01-11T21:41:24.1692706Z 2023-01-11T21:41:24.1692710Z 2023-01-11T21:41:24.1692786Z if __name__ == "__main__": 2023-01-11T21:41:24.1692899Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1693022Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1693220Z arg0_1 = rand_strided((10, 10), (30, 2), device='cpu', 
dtype=torch.float32) 2023-01-11T21:41:24.1693403Z arg1_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1693562Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1693567Z 2023-01-11T21:41:24.1693634Z ok (1.585s) 2023-01-11T21:41:24.1694118Z test_cpu_strided_int (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1694241Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1694502Z [2023-01-11 21:41:09,092] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 576 2023-01-11T21:41:24.1694771Z [2023-01-11 21:41:10,658] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 576 2023-01-11T21:41:24.1694779Z 2023-01-11T21:41:24.1694878Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1694951Z import torch 2023-01-11T21:41:24.1695007Z import random 2023-01-11T21:41:24.1695118Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1695236Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1695240Z 2023-01-11T21:41:24.1695315Z aten = torch.ops.aten 2023-01-11T21:41:24.1695447Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1695536Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1695541Z 2023-01-11T21:41:24.1695545Z 2023-01-11T21:41:24.1695678Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1695881Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1695987Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1696088Z const int* __restrict__ in_ptr1, 2023-01-11T21:41:24.1696189Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1696251Z { 2023-01-11T21:41:24.1696348Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1696409Z { 2023-01-11T21:41:24.1696485Z #pragma omp for 2023-01-11T21:41:24.1696554Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1696614Z { 2023-01-11T21:41:24.1696693Z #pragma GCC ivdep 2023-01-11T21:41:24.1696775Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1696836Z { 2023-01-11T21:41:24.1696902Z { 2023-01-11T21:41:24.1696966Z { 2023-01-11T21:41:24.1697058Z auto tmp0 = in_ptr0[(2*i1) + (30*i0)]; 2023-01-11T21:41:24.1697149Z auto tmp1 = in_ptr1[i1]; 2023-01-11T21:41:24.1697258Z auto tmp2 = static_cast(tmp1); 2023-01-11T21:41:24.1697350Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:24.1697444Z out_ptr0[i1 + (10*i0)] = tmp3; 2023-01-11T21:41:24.1697540Z } 2023-01-11T21:41:24.1697604Z } 2023-01-11T21:41:24.1697654Z } 2023-01-11T21:41:24.1697713Z } 2023-01-11T21:41:24.1697773Z } 2023-01-11T21:41:24.1697833Z } 2023-01-11T21:41:24.1697915Z ''') 2023-01-11T21:41:24.1697920Z 2023-01-11T21:41:24.1697924Z 2023-01-11T21:41:24.1698012Z async_compile.wait(globals()) 2023-01-11T21:41:24.1698081Z del async_compile 2023-01-11T21:41:24.1698085Z 2023-01-11T21:41:24.1698142Z def call(args): 2023-01-11T21:41:24.1698216Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1698287Z args.clear() 2023-01-11T21:41:24.1698484Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 
2023-01-11T21:41:24.1698643Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1698709Z del arg0_1 2023-01-11T21:41:24.1698774Z del arg1_1 2023-01-11T21:41:24.1698831Z return (buf0, ) 2023-01-11T21:41:24.1698850Z 2023-01-11T21:41:24.1698885Z 2023-01-11T21:41:24.1698947Z if __name__ == "__main__": 2023-01-11T21:41:24.1699058Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1699178Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1699374Z arg0_1 = rand_strided((10, 10), (30, 2), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1699567Z arg1_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:24.1699677Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1699682Z 2023-01-11T21:41:24.1699748Z ok (1.581s) 2023-01-11T21:41:24.1700233Z test_cpu_strided_strided (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1700359Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1700608Z [2023-01-11 21:41:10,674] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 577 2023-01-11T21:41:24.1700866Z [2023-01-11 21:41:12,725] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 577 2023-01-11T21:41:24.1700871Z 2023-01-11T21:41:24.1700962Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1701030Z import torch 2023-01-11T21:41:24.1701098Z import random 2023-01-11T21:41:24.1701207Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1701323Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1701328Z 2023-01-11T21:41:24.1701402Z aten = torch.ops.aten 2023-01-11T21:41:24.1701520Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1701609Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1701616Z 2023-01-11T21:41:24.1701622Z 2023-01-11T21:41:24.1701754Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1701956Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1702074Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1702172Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1702274Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1702466Z { 2023-01-11T21:41:24.1702553Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1702613Z { 2023-01-11T21:41:24.1702689Z #pragma omp for 2023-01-11T21:41:24.1702770Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1702834Z { 2023-01-11T21:41:24.1702912Z #pragma GCC ivdep 2023-01-11T21:41:24.1702985Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1703049Z { 2023-01-11T21:41:24.1703112Z { 2023-01-11T21:41:24.1703231Z { 2023-01-11T21:41:24.1703340Z auto tmp0 = in_ptr0[(2*i1) + (30*i0)]; 2023-01-11T21:41:24.1703442Z auto tmp1 = in_ptr1[(2*i1) + (30*i0)]; 2023-01-11T21:41:24.1703536Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1703619Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1703686Z } 2023-01-11T21:41:24.1703748Z } 2023-01-11T21:41:24.1703812Z } 
2023-01-11T21:41:24.1703874Z } 2023-01-11T21:41:24.1703938Z } 2023-01-11T21:41:24.1703998Z } 2023-01-11T21:41:24.1704066Z ''') 2023-01-11T21:41:24.1704070Z 2023-01-11T21:41:24.1704075Z 2023-01-11T21:41:24.1704164Z async_compile.wait(globals()) 2023-01-11T21:41:24.1704234Z del async_compile 2023-01-11T21:41:24.1704239Z 2023-01-11T21:41:24.1704307Z def call(args): 2023-01-11T21:41:24.1704381Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1704450Z args.clear() 2023-01-11T21:41:24.1704684Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1704837Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1704906Z del arg0_1 2023-01-11T21:41:24.1704969Z del arg1_1 2023-01-11T21:41:24.1705038Z return (buf0, ) 2023-01-11T21:41:24.1705044Z 2023-01-11T21:41:24.1705048Z 2023-01-11T21:41:24.1705124Z if __name__ == "__main__": 2023-01-11T21:41:24.1705235Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1705356Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1705554Z arg0_1 = rand_strided((10, 10), (30, 2), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1705738Z arg1_1 = rand_strided((10, 10), (30, 2), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1705851Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1705855Z 2023-01-11T21:41:24.1705920Z ok (2.067s) 2023-01-11T21:41:24.1706418Z test_cpu_strided_transposed (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1706544Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1706801Z [2023-01-11 21:41:12,742] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 578 2023-01-11T21:41:24.1707066Z [2023-01-11 21:41:14,658] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 578 2023-01-11T21:41:24.1707071Z 2023-01-11T21:41:24.1707162Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1707229Z import torch 2023-01-11T21:41:24.1707297Z import random 2023-01-11T21:41:24.1707401Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1707522Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1707527Z 2023-01-11T21:41:24.1707601Z aten = torch.ops.aten 2023-01-11T21:41:24.1707733Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1707821Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1707826Z 2023-01-11T21:41:24.1707830Z 2023-01-11T21:41:24.1707962Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1708165Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1708281Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1708372Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1708469Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1708528Z { 2023-01-11T21:41:24.1708626Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1708688Z { 2023-01-11T21:41:24.1708794Z #pragma omp for 2023-01-11T21:41:24.1708875Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1708925Z { 2023-01-11T21:41:24.1709004Z #pragma GCC ivdep 2023-01-11T21:41:24.1709086Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1709147Z { 2023-01-11T21:41:24.1709208Z { 2023-01-11T21:41:24.1709271Z { 2023-01-11T21:41:24.1709361Z auto tmp0 = in_ptr0[(2*i1) + (30*i0)]; 2023-01-11T21:41:24.1709461Z auto tmp1 = in_ptr1[i0 + (10*i1)]; 2023-01-11T21:41:24.1709553Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1709648Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1709712Z } 2023-01-11T21:41:24.1709774Z } 2023-01-11T21:41:24.1709835Z } 2023-01-11T21:41:24.1709884Z } 2023-01-11T21:41:24.1709945Z } 2023-01-11T21:41:24.1710004Z } 2023-01-11T21:41:24.1710082Z ''') 2023-01-11T21:41:24.1710115Z 2023-01-11T21:41:24.1710120Z 2023-01-11T21:41:24.1710208Z async_compile.wait(globals()) 2023-01-11T21:41:24.1710278Z del async_compile 2023-01-11T21:41:24.1710283Z 2023-01-11T21:41:24.1710351Z def call(args): 2023-01-11T21:41:24.1710413Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1710482Z args.clear() 2023-01-11T21:41:24.1710680Z buf0 = empty_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1710839Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1710905Z del arg0_1 2023-01-11T21:41:24.1710969Z del arg1_1 2023-01-11T21:41:24.1711039Z return (buf0, ) 2023-01-11T21:41:24.1711044Z 2023-01-11T21:41:24.1711048Z 2023-01-11T21:41:24.1711120Z if __name__ == "__main__": 2023-01-11T21:41:24.1711221Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1711341Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1711543Z arg0_1 = rand_strided((10, 10), (30, 2), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1711742Z arg1_1 = rand_strided((10, 
10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1711853Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1711858Z 2023-01-11T21:41:24.1711922Z ok (1.933s) 2023-01-11T21:41:24.1712416Z test_cpu_transposed_broadcast1 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1712541Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1712799Z [2023-01-11 21:41:14,681] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 579 2023-01-11T21:41:24.1713055Z [2023-01-11 21:41:14,695] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 579 2023-01-11T21:41:24.1713072Z 2023-01-11T21:41:24.1713153Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1713220Z import torch 2023-01-11T21:41:24.1713288Z import random 2023-01-11T21:41:24.1713400Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1713519Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1713524Z 2023-01-11T21:41:24.1713602Z aten = torch.ops.aten 2023-01-11T21:41:24.1713793Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1713877Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1713882Z 2023-01-11T21:41:24.1713901Z 2023-01-11T21:41:24.1714025Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1714228Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1714347Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1714491Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1714590Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1714651Z { 2023-01-11T21:41:24.1714747Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1714795Z { 2023-01-11T21:41:24.1714872Z #pragma omp for 2023-01-11T21:41:24.1714957Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1715020Z { 2023-01-11T21:41:24.1715103Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.1715164Z { 2023-01-11T21:41:24.1715318Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (10*i0)); 2023-01-11T21:41:24.1715431Z auto tmp1 = at::vec::Vectorized(in_ptr1[i0]); 2023-01-11T21:41:24.1715516Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1715618Z tmp2.store(out_ptr0 + (8*i1) + (10*i0)); 2023-01-11T21:41:24.1715683Z } 2023-01-11T21:41:24.1715806Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.1715889Z for(long i1=8; i1<10; i1+=1) 2023-01-11T21:41:24.1715951Z { 2023-01-11T21:41:24.1716034Z auto tmp0 = in_ptr0[i1 + (10*i0)]; 2023-01-11T21:41:24.1716116Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1716200Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1716289Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1716352Z } 2023-01-11T21:41:24.1716411Z } 2023-01-11T21:41:24.1716475Z } 2023-01-11T21:41:24.1716521Z } 2023-01-11T21:41:24.1716598Z ''') 2023-01-11T21:41:24.1716603Z 2023-01-11T21:41:24.1716607Z 2023-01-11T21:41:24.1716695Z async_compile.wait(globals()) 2023-01-11T21:41:24.1716765Z del async_compile 2023-01-11T21:41:24.1716770Z 2023-01-11T21:41:24.1716837Z def call(args): 2023-01-11T21:41:24.1716911Z arg0_1, arg1_1 
= args 2023-01-11T21:41:24.1716980Z args.clear() 2023-01-11T21:41:24.1717173Z buf0 = empty_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1717333Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1717401Z del arg0_1 2023-01-11T21:41:24.1717465Z del arg1_1 2023-01-11T21:41:24.1717536Z return (buf0, ) 2023-01-11T21:41:24.1717541Z 2023-01-11T21:41:24.1717545Z 2023-01-11T21:41:24.1717620Z if __name__ == "__main__": 2023-01-11T21:41:24.1717731Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1717851Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1718037Z arg0_1 = rand_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1718229Z arg1_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1718340Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1718345Z 2023-01-11T21:41:24.1718409Z ok (0.037s) 2023-01-11T21:41:24.1718909Z test_cpu_transposed_broadcast2 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1719033Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1719293Z [2023-01-11 21:41:14,713] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 580 2023-01-11T21:41:24.1719556Z [2023-01-11 21:41:14,722] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 580 2023-01-11T21:41:24.1719561Z 2023-01-11T21:41:24.1719654Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1719709Z import torch 2023-01-11T21:41:24.1719776Z import random 2023-01-11T21:41:24.1719934Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1720053Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1720058Z 2023-01-11T21:41:24.1720136Z aten = torch.ops.aten 2023-01-11T21:41:24.1720269Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1720362Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1720367Z 2023-01-11T21:41:24.1720371Z 2023-01-11T21:41:24.1720506Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1720696Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1720816Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1720920Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1721020Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1721085Z { 2023-01-11T21:41:24.1721185Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1721246Z { 2023-01-11T21:41:24.1721339Z #pragma omp for 2023-01-11T21:41:24.1721423Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1721487Z { 2023-01-11T21:41:24.1721571Z for(long i1=0; i1<1; i1+=1) 2023-01-11T21:41:24.1721635Z { 2023-01-11T21:41:24.1721775Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + (8*i1) + (10*i0)); 2023-01-11T21:41:24.1721910Z auto tmp1 = at::vec::Vectorized::loadu(in_ptr1 + 8*i1); 2023-01-11T21:41:24.1721998Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1722089Z tmp2.store(out_ptr0 + (8*i1) + (10*i0)); 
2023-01-11T21:41:24.1722156Z } 2023-01-11T21:41:24.1722248Z #pragma omp simd simdlen(4) 2023-01-11T21:41:24.1722331Z for(long i1=8; i1<10; i1+=1) 2023-01-11T21:41:24.1722394Z { 2023-01-11T21:41:24.1722493Z auto tmp0 = in_ptr0[i1 + (10*i0)]; 2023-01-11T21:41:24.1722578Z auto tmp1 = in_ptr1[i1]; 2023-01-11T21:41:24.1722655Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1722747Z out_ptr0[i1 + (10*i0)] = tmp2; 2023-01-11T21:41:24.1722815Z } 2023-01-11T21:41:24.1722878Z } 2023-01-11T21:41:24.1722941Z } 2023-01-11T21:41:24.1723003Z } 2023-01-11T21:41:24.1723068Z ''') 2023-01-11T21:41:24.1723074Z 2023-01-11T21:41:24.1723091Z 2023-01-11T21:41:24.1723167Z async_compile.wait(globals()) 2023-01-11T21:41:24.1723238Z del async_compile 2023-01-11T21:41:24.1723243Z 2023-01-11T21:41:24.1723314Z def call(args): 2023-01-11T21:41:24.1723389Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1723460Z args.clear() 2023-01-11T21:41:24.1723673Z buf0 = empty_strided((1, 10, 10), (100, 1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1723836Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1723891Z del arg0_1 2023-01-11T21:41:24.1723957Z del arg1_1 2023-01-11T21:41:24.1724033Z return (buf0, ) 2023-01-11T21:41:24.1724039Z 2023-01-11T21:41:24.1724042Z 2023-01-11T21:41:24.1724117Z if __name__ == "__main__": 2023-01-11T21:41:24.1724232Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1724356Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1724556Z arg0_1 = rand_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1724762Z arg1_1 = rand_strided((1, 10, 1), (10, 1, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1724865Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1724870Z 2023-01-11T21:41:24.1724937Z ok (0.026s) 2023-01-11T21:41:24.1725441Z test_cpu_transposed_broadcast3 (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1725598Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1725861Z [2023-01-11 21:41:14,736] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 581 2023-01-11T21:41:24.1726126Z [2023-01-11 21:41:14,746] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 581 2023-01-11T21:41:24.1726132Z 2023-01-11T21:41:24.1726226Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1726293Z import torch 2023-01-11T21:41:24.1726360Z import random 2023-01-11T21:41:24.1726460Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1726576Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1726581Z 2023-01-11T21:41:24.1726659Z aten = torch.ops.aten 2023-01-11T21:41:24.1726792Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1726911Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1726917Z 2023-01-11T21:41:24.1726921Z 2023-01-11T21:41:24.1727054Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1727253Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1727371Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1727461Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1727558Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1727620Z { 2023-01-11T21:41:24.1727718Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1727778Z { 2023-01-11T21:41:24.1727851Z #pragma omp for 2023-01-11T21:41:24.1727931Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:24.1727981Z { 2023-01-11T21:41:24.1728113Z auto tmp0 = at::vec::Vectorized::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1728235Z auto tmp1 = at::vec::Vectorized(in_ptr1[0]); 2023-01-11T21:41:24.1728320Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1728409Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1728468Z } 2023-01-11T21:41:24.1728562Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1728633Z for(long i0=96; i0<100; i0+=1) 2023-01-11T21:41:24.1728694Z { 2023-01-11T21:41:24.1728778Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1728858Z auto tmp1 = in_ptr1[0]; 2023-01-11T21:41:24.1728940Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1729021Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1729082Z } 2023-01-11T21:41:24.1729129Z } 2023-01-11T21:41:24.1729186Z } 2023-01-11T21:41:24.1729265Z ''') 2023-01-11T21:41:24.1729270Z 2023-01-11T21:41:24.1729274Z 2023-01-11T21:41:24.1729361Z async_compile.wait(globals()) 2023-01-11T21:41:24.1729431Z del async_compile 2023-01-11T21:41:24.1729436Z 2023-01-11T21:41:24.1729504Z def call(args): 2023-01-11T21:41:24.1729581Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1729638Z args.clear() 2023-01-11T21:41:24.1729838Z buf0 = empty_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1729998Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1730065Z del arg0_1 2023-01-11T21:41:24.1730129Z del arg1_1 2023-01-11T21:41:24.1730199Z return (buf0, ) 2023-01-11T21:41:24.1730204Z 2023-01-11T21:41:24.1730208Z 2023-01-11T21:41:24.1730280Z if __name__ == "__main__": 2023-01-11T21:41:24.1730390Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1730499Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1730703Z 
arg0_1 = rand_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1730893Z arg1_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1731007Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1731041Z 2023-01-11T21:41:24.1731107Z ok (0.026s) 2023-01-11T21:41:24.1731604Z test_cpu_transposed_dense (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1731731Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1731988Z [2023-01-11 21:41:14,772] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 582 2023-01-11T21:41:24.1732257Z [2023-01-11 21:41:16,690] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 582 2023-01-11T21:41:24.1732262Z 2023-01-11T21:41:24.1732353Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1732439Z import torch 2023-01-11T21:41:24.1732509Z import random 2023-01-11T21:41:24.1732622Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1732739Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1732744Z 2023-01-11T21:41:24.1732819Z aten = torch.ops.aten 2023-01-11T21:41:24.1732950Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1733041Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1733046Z 2023-01-11T21:41:24.1733050Z 2023-01-11T21:41:24.1733170Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1733373Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1733491Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1733594Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1733695Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1733754Z { 2023-01-11T21:41:24.1733854Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1733912Z { 2023-01-11T21:41:24.1733975Z #pragma omp for 2023-01-11T21:41:24.1734055Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1734117Z { 2023-01-11T21:41:24.1734195Z #pragma GCC ivdep 2023-01-11T21:41:24.1734278Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1734341Z { 2023-01-11T21:41:24.1734391Z { 2023-01-11T21:41:24.1734455Z { 2023-01-11T21:41:24.1734560Z auto tmp0 = in_ptr0[i0 + (10*i1)]; 2023-01-11T21:41:24.1734657Z auto tmp1 = in_ptr1[i1 + (10*i0)]; 2023-01-11T21:41:24.1734748Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1734842Z out_ptr0[i0 + (10*i1)] = tmp2; 2023-01-11T21:41:24.1734906Z } 2023-01-11T21:41:24.1734958Z } 2023-01-11T21:41:24.1735018Z } 2023-01-11T21:41:24.1735081Z } 2023-01-11T21:41:24.1735144Z } 2023-01-11T21:41:24.1735204Z } 2023-01-11T21:41:24.1735282Z ''') 2023-01-11T21:41:24.1735287Z 2023-01-11T21:41:24.1735291Z 2023-01-11T21:41:24.1735382Z async_compile.wait(globals()) 2023-01-11T21:41:24.1735440Z del async_compile 2023-01-11T21:41:24.1735457Z 2023-01-11T21:41:24.1735514Z def call(args): 2023-01-11T21:41:24.1735586Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1735655Z args.clear() 2023-01-11T21:41:24.1735855Z buf0 = empty_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 
2023-01-11T21:41:24.1736013Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1736078Z del arg0_1 2023-01-11T21:41:24.1736131Z del arg1_1 2023-01-11T21:41:24.1736199Z return (buf0, ) 2023-01-11T21:41:24.1736204Z 2023-01-11T21:41:24.1736208Z 2023-01-11T21:41:24.1736282Z if __name__ == "__main__": 2023-01-11T21:41:24.1736393Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1736548Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1736746Z arg0_1 = rand_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1736942Z arg1_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1737054Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1737059Z 2023-01-11T21:41:24.1737122Z ok (1.944s) 2023-01-11T21:41:24.1737604Z test_cpu_transposed_double (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1737729Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1738019Z [2023-01-11 21:41:16,710] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 583 2023-01-11T21:41:24.1738284Z [2023-01-11 21:41:18,662] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 583 2023-01-11T21:41:24.1738289Z 2023-01-11T21:41:24.1738381Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1738447Z import torch 2023-01-11T21:41:24.1738516Z import random 2023-01-11T21:41:24.1738629Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1738751Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1738755Z 2023-01-11T21:41:24.1738820Z aten = torch.ops.aten 2023-01-11T21:41:24.1738951Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1739040Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1739045Z 2023-01-11T21:41:24.1739049Z 2023-01-11T21:41:24.1739181Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1739387Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1739506Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1739610Z const double* __restrict__ in_ptr1, 2023-01-11T21:41:24.1739713Z double* __restrict__ out_ptr0) 2023-01-11T21:41:24.1739760Z { 2023-01-11T21:41:24.1739857Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1739917Z { 2023-01-11T21:41:24.1739994Z #pragma omp for 2023-01-11T21:41:24.1740076Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1740136Z { 2023-01-11T21:41:24.1740202Z #pragma GCC ivdep 2023-01-11T21:41:24.1740284Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1740347Z { 2023-01-11T21:41:24.1740408Z { 2023-01-11T21:41:24.1740473Z { 2023-01-11T21:41:24.1740576Z auto tmp0 = in_ptr0[i0 + (10*i1)]; 2023-01-11T21:41:24.1740677Z auto tmp2 = in_ptr1[i1 + (10*i0)]; 2023-01-11T21:41:24.1740777Z auto tmp1 = static_cast<double>(tmp0); 2023-01-11T21:41:24.1740869Z auto tmp3 = tmp1 + tmp2; 2023-01-11T21:41:24.1740961Z out_ptr0[i0 + (10*i1)] = tmp3; 2023-01-11T21:41:24.1741025Z } 
2023-01-11T21:41:24.1741089Z } 2023-01-11T21:41:24.1741150Z } 2023-01-11T21:41:24.1741210Z } 2023-01-11T21:41:24.1741258Z } 2023-01-11T21:41:24.1741316Z } 2023-01-11T21:41:24.1741392Z ''') 2023-01-11T21:41:24.1741398Z 2023-01-11T21:41:24.1741401Z 2023-01-11T21:41:24.1741490Z async_compile.wait(globals()) 2023-01-11T21:41:24.1741560Z del async_compile 2023-01-11T21:41:24.1741565Z 2023-01-11T21:41:24.1741633Z def call(args): 2023-01-11T21:41:24.1741705Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1741762Z args.clear() 2023-01-11T21:41:24.1741959Z buf0 = empty_strided((10, 10), (1, 10), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1742163Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1742232Z del arg0_1 2023-01-11T21:41:24.1742298Z del arg1_1 2023-01-11T21:41:24.1742508Z return (buf0, ) 2023-01-11T21:41:24.1742516Z 2023-01-11T21:41:24.1742521Z 2023-01-11T21:41:24.1742616Z if __name__ == "__main__": 2023-01-11T21:41:24.1742733Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1742843Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1743048Z arg0_1 = rand_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1743243Z arg1_1 = rand_strided((10, 10), (10, 1), device='cpu', dtype=torch.float64) 2023-01-11T21:41:24.1743358Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1743363Z 2023-01-11T21:41:24.1743430Z ok (1.970s) 2023-01-11T21:41:24.1743980Z test_cpu_transposed_int (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1744112Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1744377Z [2023-01-11 21:41:18,677] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 584 2023-01-11T21:41:24.1744644Z [2023-01-11 21:41:20,452] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 584 2023-01-11T21:41:24.1744650Z 2023-01-11T21:41:24.1744743Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1744803Z import torch 2023-01-11T21:41:24.1744873Z import random 2023-01-11T21:41:24.1744989Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1745116Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1745121Z 2023-01-11T21:41:24.1745203Z aten = torch.ops.aten 2023-01-11T21:41:24.1745338Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1745428Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1745434Z 2023-01-11T21:41:24.1745438Z 2023-01-11T21:41:24.1745575Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1745769Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1745886Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1745992Z const int* __restrict__ in_ptr1, 2023-01-11T21:41:24.1746090Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1746149Z { 2023-01-11T21:41:24.1746245Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1746303Z { 2023-01-11T21:41:24.1746366Z #pragma omp for 2023-01-11T21:41:24.1746447Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1746511Z { 2023-01-11T21:41:24.1746589Z #pragma GCC ivdep 2023-01-11T21:41:24.1746674Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1746735Z { 2023-01-11T21:41:24.1746786Z { 2023-01-11T21:41:24.1746851Z { 2023-01-11T21:41:24.1746952Z auto tmp0 = in_ptr0[i1 + (10*i0)]; 2023-01-11T21:41:24.1747046Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1747155Z auto tmp2 = static_cast<float>(tmp1); 2023-01-11T21:41:24.1747246Z auto tmp3 = tmp0 + tmp2; 2023-01-11T21:41:24.1747341Z out_ptr0[i1 + (10*i0)] = tmp3; 2023-01-11T21:41:24.1747405Z } 2023-01-11T21:41:24.1747457Z } 2023-01-11T21:41:24.1747518Z } 2023-01-11T21:41:24.1747579Z } 2023-01-11T21:41:24.1747641Z } 2023-01-11T21:41:24.1747701Z } 2023-01-11T21:41:24.1747823Z ''') 2023-01-11T21:41:24.1747831Z 2023-01-11T21:41:24.1747835Z 2023-01-11T21:41:24.1747927Z async_compile.wait(globals()) 2023-01-11T21:41:24.1747988Z del async_compile 2023-01-11T21:41:24.1747993Z 2023-01-11T21:41:24.1748062Z def call(args): 2023-01-11T21:41:24.1748136Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1748205Z args.clear() 2023-01-11T21:41:24.1748403Z buf0 = empty_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1748566Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1748633Z del arg0_1 2023-01-11T21:41:24.1748687Z del arg1_1 2023-01-11T21:41:24.1748761Z return (buf0, ) 2023-01-11T21:41:24.1748765Z 2023-01-11T21:41:24.1748769Z 2023-01-11T21:41:24.1748843Z if __name__ == "__main__": 2023-01-11T21:41:24.1748959Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1749082Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1749311Z arg0_1 = rand_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 
2023-01-11T21:41:24.1749505Z arg1_1 = rand_strided((10, ), (1, ), device='cpu', dtype=torch.int32) 2023-01-11T21:41:24.1749619Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1749624Z 2023-01-11T21:41:24.1749676Z ok (1.791s) 2023-01-11T21:41:24.1750170Z test_cpu_transposed_strided (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1750295Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1750556Z [2023-01-11 21:41:20,468] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 585 2023-01-11T21:41:24.1750823Z [2023-01-11 21:41:22,295] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 585 2023-01-11T21:41:24.1750828Z 2023-01-11T21:41:24.1750923Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1750992Z import torch 2023-01-11T21:41:24.1751059Z import random 2023-01-11T21:41:24.1751171Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1751279Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1751284Z 2023-01-11T21:41:24.1751360Z aten = torch.ops.aten 2023-01-11T21:41:24.1751491Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1751581Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1751586Z 2023-01-11T21:41:24.1751590Z 2023-01-11T21:41:24.1751721Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1751925Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1752045Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1752150Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1752237Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1752296Z { 2023-01-11T21:41:24.1752391Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1752453Z { 2023-01-11T21:41:24.1752533Z #pragma omp for 2023-01-11T21:41:24.1752617Z for(long i0=0; i0<10; i0+=1) 2023-01-11T21:41:24.1752681Z { 2023-01-11T21:41:24.1752746Z #pragma GCC ivdep 2023-01-11T21:41:24.1752832Z for(long i1=0; i1<10; i1+=1) 2023-01-11T21:41:24.1752896Z { 2023-01-11T21:41:24.1752960Z { 2023-01-11T21:41:24.1753028Z { 2023-01-11T21:41:24.1753132Z auto tmp0 = in_ptr0[i0 + (10*i1)]; 2023-01-11T21:41:24.1753237Z auto tmp1 = in_ptr1[(2*i1) + (30*i0)]; 2023-01-11T21:41:24.1753321Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1753450Z out_ptr0[i0 + (10*i1)] = tmp2; 2023-01-11T21:41:24.1753517Z } 2023-01-11T21:41:24.1753582Z } 2023-01-11T21:41:24.1753647Z } 2023-01-11T21:41:24.1753709Z } 2023-01-11T21:41:24.1753817Z } 2023-01-11T21:41:24.1753878Z } 2023-01-11T21:41:24.1753959Z ''') 2023-01-11T21:41:24.1753965Z 2023-01-11T21:41:24.1753969Z 2023-01-11T21:41:24.1754061Z async_compile.wait(globals()) 2023-01-11T21:41:24.1754133Z del async_compile 2023-01-11T21:41:24.1754137Z 2023-01-11T21:41:24.1754207Z def call(args): 2023-01-11T21:41:24.1754282Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1754353Z args.clear() 2023-01-11T21:41:24.1754541Z buf0 = empty_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1754704Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), 
c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1754773Z del arg0_1 2023-01-11T21:41:24.1754841Z del arg1_1 2023-01-11T21:41:24.1754980Z return (buf0, ) 2023-01-11T21:41:24.1754987Z 2023-01-11T21:41:24.1754991Z 2023-01-11T21:41:24.1755065Z if __name__ == "__main__": 2023-01-11T21:41:24.1755179Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1755288Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1755492Z arg0_1 = rand_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1755689Z arg1_1 = rand_strided((10, 10), (30, 2), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1755802Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1755806Z 2023-01-11T21:41:24.1755874Z ok (1.842s) 2023-01-11T21:41:24.1756373Z test_cpu_transposed_transposed (__main__.SweepInputsCpuTest) ... /var/lib/jenkins/workspace/test/inductor/test_torchinductor.py:246: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:24.1756504Z buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone() 2023-01-11T21:41:24.1756766Z [2023-01-11 21:41:22,311] torch._inductor.compile_fx: [INFO] Step 1: torchinductor compiling FORWARDS graph 586 2023-01-11T21:41:24.1757035Z [2023-01-11 21:41:22,320] torch._inductor.compile_fx: [INFO] Step 1: torchinductor done compiling FORWARDS graph 586 2023-01-11T21:41:24.1757040Z 2023-01-11T21:41:24.1757135Z from ctypes import c_void_p, c_long 2023-01-11T21:41:24.1757190Z import torch 2023-01-11T21:41:24.1757261Z import random 2023-01-11T21:41:24.1757376Z from torch import empty_strided, as_strided, device 2023-01-11T21:41:24.1757496Z from torch._inductor.codecache import AsyncCompile 2023-01-11T21:41:24.1757501Z 2023-01-11T21:41:24.1757579Z aten = torch.ops.aten 2023-01-11T21:41:24.1757714Z assert_size_stride = torch._C._dynamo.guards.assert_size_stride 2023-01-11T21:41:24.1757808Z async_compile = AsyncCompile() 2023-01-11T21:41:24.1757814Z 2023-01-11T21:41:24.1757818Z 2023-01-11T21:41:24.1757950Z kernel_cpp_0 = async_compile.cpp(''' 2023-01-11T21:41:24.1758140Z #include "/tmp/torchinductor_jenkins/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h" 2023-01-11T21:41:24.1758260Z extern "C" void kernel(const float* __restrict__ in_ptr0, 2023-01-11T21:41:24.1758364Z const float* __restrict__ in_ptr1, 2023-01-11T21:41:24.1758463Z float* __restrict__ out_ptr0) 2023-01-11T21:41:24.1758522Z { 2023-01-11T21:41:24.1758617Z #pragma omp parallel num_threads(4) 2023-01-11T21:41:24.1758676Z { 2023-01-11T21:41:24.1758739Z #pragma omp for 2023-01-11T21:41:24.1758817Z for(long i0=0; i0<12; i0+=1) 2023-01-11T21:41:24.1758881Z { 2023-01-11T21:41:24.1759021Z auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + 8*i0); 2023-01-11T21:41:24.1759196Z auto tmp1 = at::vec::Vectorized<float>::loadu(in_ptr1 + 8*i0); 2023-01-11T21:41:24.1759279Z auto tmp2 = tmp0 + tmp1; 2023-01-11T21:41:24.1759369Z tmp2.store(out_ptr0 + 8*i0); 2023-01-11T21:41:24.1759420Z } 2023-01-11T21:41:24.1759511Z #pragma omp for simd simdlen(4) 2023-01-11T21:41:24.1759591Z for(long i0=96; i0<100; i0+=1) 2023-01-11T21:41:24.1759651Z { 2023-01-11T21:41:24.1759732Z auto tmp0 = in_ptr0[i0]; 2023-01-11T21:41:24.1759814Z auto tmp1 = in_ptr1[i0]; 2023-01-11T21:41:24.1759894Z auto tmp2 = tmp0 
+ tmp1; 2023-01-11T21:41:24.1759960Z out_ptr0[i0] = tmp2; 2023-01-11T21:41:24.1760020Z } 2023-01-11T21:41:24.1760079Z } 2023-01-11T21:41:24.1760139Z } 2023-01-11T21:41:24.1760218Z ''') 2023-01-11T21:41:24.1760224Z 2023-01-11T21:41:24.1760228Z 2023-01-11T21:41:24.1760315Z async_compile.wait(globals()) 2023-01-11T21:41:24.1760386Z del async_compile 2023-01-11T21:41:24.1760393Z 2023-01-11T21:41:24.1760475Z def call(args): 2023-01-11T21:41:24.1760550Z arg0_1, arg1_1 = args 2023-01-11T21:41:24.1760618Z args.clear() 2023-01-11T21:41:24.1760819Z buf0 = empty_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1760976Z kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(arg1_1.data_ptr()), c_void_p(buf0.data_ptr())) 2023-01-11T21:41:24.1761045Z del arg0_1 2023-01-11T21:41:24.1761109Z del arg1_1 2023-01-11T21:41:24.1761167Z return (buf0, ) 2023-01-11T21:41:24.1761182Z 2023-01-11T21:41:24.1761186Z 2023-01-11T21:41:24.1761248Z if __name__ == "__main__": 2023-01-11T21:41:24.1761359Z from torch._dynamo.testing import rand_strided 2023-01-11T21:41:24.1761479Z from torch._inductor.utils import print_performance 2023-01-11T21:41:24.1761677Z arg0_1 = rand_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1761872Z arg1_1 = rand_strided((10, 10), (1, 10), device='cpu', dtype=torch.float32) 2023-01-11T21:41:24.1761989Z print_performance(lambda: call([arg0_1, arg1_1])) 2023-01-11T21:41:24.1761994Z 2023-01-11T21:41:24.1762059Z ok (0.025s) 2023-01-11T21:41:24.1762206Z test_indexing_join (__main__.TestIndexingSimplification) ... ok (0.064s) 2023-01-11T21:41:24.1762357Z test_indexing_simplification (__main__.TestIndexingSimplification) ... ok (0.070s) 2023-01-11T21:41:24.1762362Z 2023-01-11T21:41:24.1762560Z ---------------------------------------------------------------------- 2023-01-11T21:41:24.1762639Z Ran 362 tests in 1063.565s 2023-01-11T21:41:24.1762644Z 2023-01-11T21:41:24.1762712Z OK (skipped=17) 2023-01-11T21:41:24.1762717Z 2023-01-11T21:41:24.1762798Z Generating XML reports... 2023-01-11T21:41:24.1763116Z Generated XML report: test-reports/python-unittest/inductor.test_torchinductor/TEST-CPUReproTests-20230111212338.xml 2023-01-11T21:41:24.1763405Z Generated XML report: test-reports/python-unittest/inductor.test_torchinductor/TEST-CpuTests-20230111212338.xml 2023-01-11T21:41:24.1763722Z Generated XML report: test-reports/python-unittest/inductor.test_torchinductor/TEST-ExprPrinterTests-20230111212338.xml 2023-01-11T21:41:24.1764029Z Generated XML report: test-reports/python-unittest/inductor.test_torchinductor/TEST-SweepInputsCpuTest-20230111212338.xml 2023-01-11T21:41:24.1764373Z Generated XML report: test-reports/python-unittest/inductor.test_torchinductor/TEST-TestIndexingSimplification-20230111212338.xml 2023-01-11T21:41:24.1764379Z 2023-01-11T21:41:24.1764734Z ##[endgroup] 2023-01-11T21:41:24.1765063Z FINISHED PRINTING LOG FILE of inductor/test_torchinductor (/var/lib/jenkins/workspace/test/test-reports/inductor-test_torchinductor_yykynwpa) 2023-01-11T21:41:24.1765070Z 2023-01-11T21:41:26.1278136Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:41:26.2319261Z Ignoring disabled issues: [] 2023-01-11T21:41:26.2549416Z Running test_mkldnn_fusion ... [2023-01-11 21:41:26.254472] 2023-01-11T21:41:26.2551139Z Executing ['/opt/conda/bin/python', '-bb', 'test_mkldnn_fusion.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... 
[2023-01-11 21:41:26.254757] 2023-01-11T21:41:58.6381578Z 2023-01-11T21:41:58.6382236Z Expand the folded group to see the log file of test_mkldnn 2023-01-11T21:41:58.6383457Z ##[group]PRINTING LOG FILE of test_mkldnn (/var/lib/jenkins/workspace/test/test-reports/test_mkldnn_1zhyjxun) 2023-01-11T21:41:58.6383817Z 2023-01-11T21:41:58.6424225Z Running tests... 2023-01-11T21:41:58.6424913Z ---------------------------------------------------------------------- 2023-01-11T21:41:58.6425593Z Test results will be stored in test-reports/python-unittest/test_mkldnn 2023-01-11T21:41:58.6426163Z test_0_dimension_tensor (__main__.TestMkldnn) ... ok (0.289s) 2023-01-11T21:41:58.6426635Z test_adaptive_avg_pool2d (__main__.TestMkldnn) ... ok (0.144s) 2023-01-11T21:41:58.6427099Z test_adaptive_avg_pool2d_bf16 (__main__.TestMkldnn) ... ok (0.061s) 2023-01-11T21:41:58.6427536Z test_add (__main__.TestMkldnn) ... ok (0.078s) 2023-01-11T21:41:58.6428216Z test_autograd_from_mkldnn (__main__.TestMkldnn) ... ok (0.006s) 2023-01-11T21:41:58.6428688Z test_autograd_to_mkldnn (__main__.TestMkldnn) ... ok (0.009s) 2023-01-11T21:41:58.6429112Z test_avg_pool2d (__main__.TestMkldnn) ... ok (0.022s) 2023-01-11T21:41:58.6429546Z test_avg_pool2d_bf16 (__main__.TestMkldnn) ... ok (0.009s) 2023-01-11T21:41:58.6430000Z test_avg_pool2d_stride_none (__main__.TestMkldnn) ... ok (0.008s) 2023-01-11T21:41:58.6430431Z test_avg_pool3d (__main__.TestMkldnn) ... ok (1.471s) 2023-01-11T21:41:58.6430860Z test_avg_pool3d_bf16 (__main__.TestMkldnn) ... ok (0.438s) 2023-01-11T21:41:58.6431939Z test_batch_norm_2d (__main__.TestMkldnn) ... /opt/conda/lib/python3.10/site-packages/torch/jit/_trace.py:780: UserWarning: The input to trace is already a ScriptModule, tracing it is a no-op. Returning the object as is. 2023-01-11T21:41:58.6432580Z warnings.warn( 2023-01-11T21:41:58.6432854Z ok (0.080s) 2023-01-11T21:41:58.6433219Z test_batch_norm_2d_bf16 (__main__.TestMkldnn) ... ok (0.022s) 2023-01-11T21:41:58.6433670Z test_batch_norm_3d (__main__.TestMkldnn) ... ok (0.949s) 2023-01-11T21:41:58.6434088Z test_batch_norm_3d_bf16 (__main__.TestMkldnn) ... ok (0.439s) 2023-01-11T21:41:58.6434398Z test_clone (__main__.TestMkldnn) ... ok (0.002s) 2023-01-11T21:41:58.6434750Z test_conv1d (__main__.TestMkldnn) ... ok (0.118s) 2023-01-11T21:41:58.6435119Z test_conv1d_bf16 (__main__.TestMkldnn) ... ok (0.049s) 2023-01-11T21:41:58.6435554Z test_conv1d_functional (__main__.TestMkldnn) ... ok (0.001s) 2023-01-11T21:41:58.6435975Z test_conv2d (__main__.TestMkldnn) ... ok (0.846s) 2023-01-11T21:41:58.6436352Z test_conv2d_bf16 (__main__.TestMkldnn) ... ok (0.296s) 2023-01-11T21:41:58.6436766Z test_conv2d_legacy_jit_model (__main__.TestMkldnn) 2023-01-11T21:41:58.6437263Z MKLDNN integration used to serialize models with 5d weight for grouped ... ok (0.011s) 2023-01-11T21:41:58.6437740Z test_conv2d_nhwc (__main__.TestMkldnn) ... ok (0.953s) 2023-01-11T21:41:58.6438144Z test_conv2d_nhwc_bf16 (__main__.TestMkldnn) ... ok (1.584s) 2023-01-11T21:41:58.6438561Z test_conv3d (__main__.TestMkldnn) ... ok (2.332s) 2023-01-11T21:41:58.6438967Z test_conv3d_bf16 (__main__.TestMkldnn) ... ok (0.493s) 2023-01-11T21:41:58.6439352Z test_conversion (__main__.TestMkldnn) ... ok (0.020s) 2023-01-11T21:41:58.6439761Z test_copy (__main__.TestMkldnn) ... ok (0.013s) 2023-01-11T21:41:58.6440146Z test_detach (__main__.TestMkldnn) ... ok (0.001s) 2023-01-11T21:41:58.6440526Z test_empty (__main__.TestMkldnn) ... ok (0.001s) 2023-01-11T21:41:58.6440915Z test_gelu (__main__.TestMkldnn) ... 
ok (0.002s) 2023-01-11T21:41:58.6441304Z test_gelu_bf16 (__main__.TestMkldnn) ... ok (0.002s) 2023-01-11T21:41:58.6441698Z test_is_mkldnn (__main__.TestMkldnn) ... ok (0.001s) 2023-01-11T21:41:58.6442118Z test_is_mkldnn_jit (__main__.TestMkldnn) ... ok (0.005s) 2023-01-11T21:41:58.6443131Z test_legacy_new_failure (__main__.TestMkldnn) ... /var/lib/jenkins/workspace/test/test_mkldnn.py:1222: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:41:58.6444444Z self.assertRaises(RuntimeError, lambda: x_mkldnn.new(x.storage())) 2023-01-11T21:41:58.6444826Z ok (0.007s) 2023-01-11T21:41:58.6445149Z test_linear (__main__.TestMkldnn) ... ok (0.022s) 2023-01-11T21:41:58.6445580Z test_linear_backward (__main__.TestMkldnn) ... ok (0.006s) 2023-01-11T21:41:58.6445999Z test_linear_bf16 (__main__.TestMkldnn) ... ok (0.012s) 2023-01-11T21:41:58.6446444Z test_linear_non_contiguous_weight (__main__.TestMkldnn) ... ok (0.004s) 2023-01-11T21:41:58.6446879Z test_max_pool2d (__main__.TestMkldnn) ... ok (0.114s) 2023-01-11T21:41:58.6447297Z test_max_pool2d_bf16 (__main__.TestMkldnn) ... ok (0.052s) 2023-01-11T21:41:58.6447848Z test_max_pool2d_stride_none (__main__.TestMkldnn) ... ok (0.014s) 2023-01-11T21:41:58.6448290Z test_max_pool3d (__main__.TestMkldnn) ... ok (14.397s) 2023-01-11T21:41:58.6448721Z test_max_pool3d_bf16 (__main__.TestMkldnn) ... ok (10.001s) 2023-01-11T21:41:58.6449172Z test_max_pool_unsupported (__main__.TestMkldnn) ... ok (0.023s) 2023-01-11T21:41:58.6449620Z test_mkldnn_conv_shapecheck (__main__.TestMkldnn) ... ok (0.039s) 2023-01-11T21:41:58.6450034Z test_mul (__main__.TestMkldnn) ... ok (0.091s) 2023-01-11T21:41:58.6450427Z test_prelu (__main__.TestMkldnn) ... ok (8.069s) 2023-01-11T21:41:58.6450814Z test_prelu_bf16 (__main__.TestMkldnn) ... ok (3.983s) 2023-01-11T21:41:58.6451218Z test_relu (__main__.TestMkldnn) ... ok (0.003s) 2023-01-11T21:41:58.6451609Z test_relu_ (__main__.TestMkldnn) ... ok (0.002s) 2023-01-11T21:41:58.6452184Z test_relu_bf16 (__main__.TestMkldnn) ... ok (0.001s) 2023-01-11T21:41:58.6452582Z test_relu_inplace_bf16 (__main__.TestMkldnn) ... ok (0.001s) 2023-01-11T21:41:58.6453016Z test_repr (__main__.TestMkldnn) ... ok (0.001s) 2023-01-11T21:41:58.6453428Z test_reshape (__main__.TestMkldnn) ... ok (0.002s) 2023-01-11T21:41:58.6453831Z test_reshape_backward (__main__.TestMkldnn) ... ok (0.002s) 2023-01-11T21:41:58.6454303Z test_reshape_blocked_format (__main__.TestMkldnn) ... ok (0.007s) 2023-01-11T21:41:58.6455447Z test_resnet18 (__main__.TestMkldnn) ... /var/lib/jenkins/.local/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead. 2023-01-11T21:41:58.6456139Z warnings.warn( 2023-01-11T21:41:58.6457190Z /var/lib/jenkins/.local/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=None`. 2023-01-11T21:41:58.6457899Z warnings.warn(msg) 2023-01-11T21:41:58.6458208Z ok (0.412s) 2023-01-11T21:41:58.6458535Z test_resnext50_32x4d (__main__.TestMkldnn) ... 
ok (0.922s) 2023-01-11T21:41:58.6458981Z test_set_data_tensorimpl_type (__main__.TestMkldnn) ... ok (0.003s) 2023-01-11T21:41:58.6459407Z test_sigmoid (__main__.TestMkldnn) ... ok (0.002s) 2023-01-11T21:41:58.6459813Z test_softmax (__main__.TestMkldnn) ... ok (0.002s) 2023-01-11T21:41:58.6460192Z test_tanh (__main__.TestMkldnn) ... ok (0.002s) 2023-01-11T21:41:58.6460594Z test_transpose (__main__.TestMkldnn) ... ok (0.003s) 2023-01-11T21:41:58.6461040Z test_transpose_invalid_dime (__main__.TestMkldnn) ... ok (0.005s) 2023-01-11T21:41:58.6461448Z test_unsupported (__main__.TestMkldnn) ... ok (0.055s) 2023-01-11T21:41:58.6461858Z test_view (__main__.TestMkldnn) ... ok (0.007s) 2023-01-11T21:41:58.6462241Z test_zero_ (__main__.TestMkldnn) ... ok (0.001s) 2023-01-11T21:41:58.6462608Z 2023-01-11T21:41:58.6462978Z ---------------------------------------------------------------------- 2023-01-11T21:41:58.6463527Z Ran 68 tests in 49.028s 2023-01-11T21:41:58.6463722Z 2023-01-11T21:41:58.6463829Z OK 2023-01-11T21:41:58.6463982Z 2023-01-11T21:41:58.6464122Z Generating XML reports... 2023-01-11T21:41:58.6464754Z Generated XML report: test-reports/python-unittest/test_mkldnn/TEST-TestMkldnn-20230111214109.xml 2023-01-11T21:41:58.6465117Z 2023-01-11T21:41:58.6465618Z ##[endgroup] 2023-01-11T21:41:58.6466248Z FINISHED PRINTING LOG FILE of test_mkldnn (/var/lib/jenkins/workspace/test/test-reports/test_mkldnn_1zhyjxun) 2023-01-11T21:41:58.6466567Z 2023-01-11T21:42:01.0995484Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:42:01.2339694Z Ignoring disabled issues: [] 2023-01-11T21:42:01.2567700Z Running test_mkldnn_verbose ... [2023-01-11 21:42:01.256370] 2023-01-11T21:42:01.2569450Z Executing ['/opt/conda/bin/python', '-bb', 'test_mkldnn_verbose.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:42:01.256684] 2023-01-11T21:42:07.0464295Z 2023-01-11T21:42:07.0465137Z Expand the folded group to see the log file of test_mkldnn_verbose 2023-01-11T21:42:07.0466185Z ##[group]PRINTING LOG FILE of test_mkldnn_verbose (/var/lib/jenkins/workspace/test/test-reports/test_mkldnn_verbose_759gktct) 2023-01-11T21:42:07.0466556Z 2023-01-11T21:42:07.0466663Z Running tests... 2023-01-11T21:42:07.0467816Z ---------------------------------------------------------------------- 2023-01-11T21:42:07.0515357Z Test results will be stored in test-reports/python-unittest/test_mkldnn_verbose 2023-01-11T21:42:07.0515811Z test_verbose_off (__main__.TestMKLDNNVerbose) ... ok (1.739s) 2023-01-11T21:42:07.0516180Z test_verbose_on (__main__.TestMKLDNNVerbose) ... ok (1.818s) 2023-01-11T21:42:07.0516400Z 2023-01-11T21:42:07.0516675Z ---------------------------------------------------------------------- 2023-01-11T21:42:07.0516987Z Ran 2 tests in 3.557s 2023-01-11T21:42:07.0517140Z 2023-01-11T21:42:07.0517220Z OK 2023-01-11T21:42:07.0517342Z 2023-01-11T21:42:07.0517456Z Generating XML reports... 2023-01-11T21:42:07.0518082Z Generated XML report: test-reports/python-unittest/test_mkldnn_verbose/TEST-TestMKLDNNVerbose-20230111214202.xml 2023-01-11T21:42:07.0518495Z 2023-01-11T21:42:07.0519049Z ##[endgroup] 2023-01-11T21:42:07.0519767Z FINISHED PRINTING LOG FILE of test_mkldnn_verbose (/var/lib/jenkins/workspace/test/test-reports/test_mkldnn_verbose_759gktct) 2023-01-11T21:42:07.0520166Z 2023-01-11T21:42:10.0070631Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:42:10.1339328Z Ignoring disabled issues: [] 2023-01-11T21:42:10.1572714Z Running test_model_dump ... 
[2023-01-11 21:42:10.156900] 2023-01-11T21:42:10.1573343Z Executing ['/opt/conda/bin/python', '-bb', 'test_model_dump.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:42:10.157149] 2023-01-11T21:42:13.1909614Z 2023-01-11T21:42:13.1910201Z Expand the folded group to see the log file of test_model_dump 2023-01-11T21:42:13.1911661Z ##[group]PRINTING LOG FILE of test_model_dump (/var/lib/jenkins/workspace/test/test-reports/test_model_dump_0gdwguc_) 2023-01-11T21:42:13.1912189Z 2023-01-11T21:42:13.1912350Z Running tests... 2023-01-11T21:42:13.1913069Z ---------------------------------------------------------------------- 2023-01-11T21:42:13.1913753Z Test results will be stored in test-reports/python-unittest/test_model_dump 2023-01-11T21:42:13.1914345Z test_inline_skeleton (__main__.TestModelDump) ... ok (0.240s) 2023-01-11T21:42:13.1914848Z test_invalid_json (__main__.TestModelDump) ... ok (0.019s) 2023-01-11T21:42:13.1915187Z test_main (__main__.TestModelDump) ... ok (0.015s) 2023-01-11T21:42:13.1915580Z test_memory_computation (__main__.TestModelDump) ... skip: Webdriver not requested (0.001s) 2023-01-11T21:42:13.1915960Z test_model_with_lists (__main__.TestModelDump) ... ok (0.004s) 2023-01-11T21:42:13.1916379Z test_optimized_quantized_model (__main__.TestModelDump) ... ok (0.322s) 2023-01-11T21:42:13.1916831Z test_quantized_model (__main__.TestModelDump) ... ok (0.214s) 2023-01-11T21:42:13.1917484Z test_scripted_model (__main__.TestModelDump) ... ok (0.010s) 2023-01-11T21:42:13.1918140Z test_traced_model (__main__.TestModelDump) ... ok (0.017s) 2023-01-11T21:42:13.1918586Z 2023-01-11T21:42:13.1919022Z ---------------------------------------------------------------------- 2023-01-11T21:42:13.1919447Z Ran 9 tests in 0.843s 2023-01-11T21:42:13.1919628Z 2023-01-11T21:42:13.1919748Z OK (skipped=1) 2023-01-11T21:42:13.1919932Z 2023-01-11T21:42:13.1920074Z Generating XML reports... 2023-01-11T21:42:13.1962622Z Generated XML report: test-reports/python-unittest/test_model_dump/TEST-TestModelDump-20230111214211.xml 2023-01-11T21:42:13.1963012Z 2023-01-11T21:42:13.1963420Z ##[endgroup] 2023-01-11T21:42:13.1964269Z FINISHED PRINTING LOG FILE of test_model_dump (/var/lib/jenkins/workspace/test/test-reports/test_model_dump_0gdwguc_) 2023-01-11T21:42:13.1964949Z 2023-01-11T21:42:15.8138035Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:42:15.9008260Z Ignoring disabled issues: [] 2023-01-11T21:42:15.9172240Z Running test_module_init ... [2023-01-11 21:42:15.916773] 2023-01-11T21:42:15.9173265Z Executing ['/opt/conda/bin/python', '-bb', 'test_module_init.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:42:15.917083] 2023-01-11T21:42:18.1776884Z 2023-01-11T21:42:18.1784593Z Expand the folded group to see the log file of test_module_init 2023-01-11T21:42:18.1785585Z ##[group]PRINTING LOG FILE of test_module_init (/var/lib/jenkins/workspace/test/test-reports/test_module_init_rxooqqb7) 2023-01-11T21:42:18.1785966Z 2023-01-11T21:42:18.1786077Z Running tests... 2023-01-11T21:42:18.1786709Z ---------------------------------------------------------------------- 2023-01-11T21:42:18.1787251Z 2023-01-11T21:42:18.1787834Z ---------------------------------------------------------------------- 2023-01-11T21:42:18.1788261Z Ran 0 tests in 0.000s 2023-01-11T21:42:18.1788505Z 2023-01-11T21:42:18.1788637Z OK 2023-01-11T21:42:18.1788834Z 2023-01-11T21:42:18.1789052Z Generating XML reports... 
2023-01-11T21:42:18.1789773Z Test results will be stored in test-reports/python-unittest/test_module_init 2023-01-11T21:42:18.1790175Z 2023-01-11T21:42:18.1790662Z ##[endgroup] 2023-01-11T21:42:18.1791532Z FINISHED PRINTING LOG FILE of test_module_init (/var/lib/jenkins/workspace/test/test-reports/test_module_init_rxooqqb7) 2023-01-11T21:42:18.1792030Z 2023-01-11T21:42:20.1393573Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:42:20.2226773Z Ignoring disabled issues: [] 2023-01-11T21:42:20.2386060Z Running test_monitor ... [2023-01-11 21:42:20.238076] 2023-01-11T21:42:20.2387179Z Executing ['/opt/conda/bin/python', '-bb', 'test_monitor.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:42:20.238370] 2023-01-11T21:42:22.6406453Z 2023-01-11T21:42:22.6407069Z Expand the folded group to see the log file of test_monitor 2023-01-11T21:42:22.6408531Z ##[group]PRINTING LOG FILE of test_monitor (/var/lib/jenkins/workspace/test/test-reports/test_monitor_hf6cj7zk) 2023-01-11T21:42:22.6409116Z 2023-01-11T21:42:22.6409291Z Running tests... 2023-01-11T21:42:22.6410125Z ---------------------------------------------------------------------- 2023-01-11T21:42:22.6411016Z Test results will be stored in test-reports/python-unittest/test_monitor 2023-01-11T21:42:22.6411704Z test_event_handler (__main__.TestMonitor) ... ok (0.250s) 2023-01-11T21:42:22.6412334Z test_fixed_count_stat (__main__.TestMonitor) ... ok (0.001s) 2023-01-11T21:42:22.6412947Z test_interval_stat (__main__.TestMonitor) ... ok (0.002s) 2023-01-11T21:42:22.6413539Z test_log_event (__main__.TestMonitor) ... ok (0.001s) 2023-01-11T21:42:22.6414193Z test_event_handler (__main__.TestMonitorTensorboard) ... ok (0.168s) 2023-01-11T21:42:22.6414618Z 2023-01-11T21:42:22.6415098Z ---------------------------------------------------------------------- 2023-01-11T21:42:22.6415642Z Ran 5 tests in 0.423s 2023-01-11T21:42:22.6415899Z 2023-01-11T21:42:22.6416312Z OK 2023-01-11T21:42:22.6416533Z 2023-01-11T21:42:22.6416731Z Generating XML reports... 2023-01-11T21:42:22.6417674Z Generated XML report: test-reports/python-unittest/test_monitor/TEST-TestMonitor-20230111214221.xml 2023-01-11T21:42:22.6418925Z Generated XML report: test-reports/python-unittest/test_monitor/TEST-TestMonitorTensorboard-20230111214221.xml 2023-01-11T21:42:22.6419525Z 2023-01-11T21:42:22.6420045Z ##[endgroup] 2023-01-11T21:42:22.6420907Z FINISHED PRINTING LOG FILE of test_monitor (/var/lib/jenkins/workspace/test/test-reports/test_monitor_hf6cj7zk) 2023-01-11T21:42:22.6421416Z 2023-01-11T21:42:25.1911570Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:42:25.2761965Z Ignoring disabled issues: [] 2023-01-11T21:42:25.2924180Z Running test_namedtensor ... [2023-01-11 21:42:25.292049] 2023-01-11T21:42:25.2925680Z Executing ['/opt/conda/bin/python', '-bb', 'test_namedtensor.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:42:25.292346] 2023-01-11T21:42:28.2385492Z 2023-01-11T21:42:28.2386003Z Expand the folded group to see the log file of test_namedtensor 2023-01-11T21:42:28.2387009Z ##[group]PRINTING LOG FILE of test_namedtensor (/var/lib/jenkins/workspace/test/test-reports/test_namedtensor_giocblkf) 2023-01-11T21:42:28.2387385Z 2023-01-11T21:42:28.2387585Z Running tests... 
2023-01-11T21:42:28.2388187Z ---------------------------------------------------------------------- 2023-01-11T21:42:28.2388830Z Test results will be stored in test-reports/python-unittest/test_namedtensor 2023-01-11T21:42:28.2389401Z test_aaa_must_run_first_check_experimental_warning (__main__.TestNamedTensor) ... ok (0.002s) 2023-01-11T21:42:28.2389927Z test_addcmul_addcdiv (__main__.TestNamedTensor) ... ok (0.002s) 2023-01-11T21:42:28.2390362Z test_addmm (__main__.TestNamedTensor) ... ok (0.007s) 2023-01-11T21:42:28.2390785Z test_addmv (__main__.TestNamedTensor) ... ok (0.002s) 2023-01-11T21:42:28.2391211Z test_align_as (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2391690Z test_align_tensors (__main__.TestNamedTensor) ... skip: Not implemented yet (0.001s) 2023-01-11T21:42:28.2392256Z test_align_tensors_two_inputs (__main__.TestNamedTensor) ... skip: Not implemented yet (0.002s) 2023-01-11T21:42:28.2392765Z test_align_to (__main__.TestNamedTensor) ... ok (0.009s) 2023-01-11T21:42:28.2393209Z test_align_to_ellipsis (__main__.TestNamedTensor) ... ok (0.010s) 2023-01-11T21:42:28.2393639Z test_any_all (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2394124Z test_as_strided (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2394593Z test_as_strided_cuda (__main__.TestNamedTensor) ... skip: no CUDA (0.000s) 2023-01-11T21:42:28.2395077Z test_autograd_ignores_names (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2395549Z test_autograd_smoke (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2396035Z test_autograd_warns_named_grad (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2396506Z test_bernoulli (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2396967Z test_big_tensor_repr_has_names (__main__.TestNamedTensor) ... ok (0.032s) 2023-01-11T21:42:28.2397427Z test_binary_ops (__main__.TestNamedTensor) ... ok (0.341s) 2023-01-11T21:42:28.2397867Z test_bitwise_not (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2398271Z test_bmm (__main__.TestNamedTensor) ... ok (0.019s) 2023-01-11T21:42:28.2398677Z test_cat (__main__.TestNamedTensor) ... ok (0.015s) 2023-01-11T21:42:28.2399085Z test_cdist (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2399509Z test_comparison_ops (__main__.TestNamedTensor) ... ok (0.005s) 2023-01-11T21:42:28.2399966Z test_copy_transpose (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2400418Z test_cummax_cummin (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2400841Z test_detach (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2401269Z test_diagonal (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2401831Z test_dot (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2402249Z test_equal (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2402650Z test_expand (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2403090Z test_factory_coverage (__main__.TestNamedTensor) ... ok (0.004s) 2023-01-11T21:42:28.2403565Z test_factory_edge_cases (__main__.TestNamedTensor) ... ok (0.020s) 2023-01-11T21:42:28.2403997Z test_flatten (__main__.TestNamedTensor) ... ok (0.010s) 2023-01-11T21:42:28.2404429Z test_flatten_nodims (__main__.TestNamedTensor) ... ok (0.003s) 2023-01-11T21:42:28.2404879Z test_has_names (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2405296Z test_index_fill (__main__.TestNamedTensor) ... 
ok (0.001s) 2023-01-11T21:42:28.2406390Z test_info_smoke (__main__.TestNamedTensor) ... /var/lib/jenkins/workspace/test/test_namedtensor.py:617: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:42:28.2407327Z tensor.storage() 2023-01-11T21:42:28.2408238Z /var/lib/jenkins/workspace/test/test_namedtensor.py:619: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:42:28.2409096Z tensor.storage_type() 2023-01-11T21:42:28.2410578Z /opt/conda/lib/python3.10/site-packages/torch/storage.py:959: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:42:28.2411577Z if self.device.type not in ['cpu', 'cuda']: 2023-01-11T21:42:28.2412923Z /opt/conda/lib/python3.10/site-packages/torch/storage.py:962: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:42:28.2413957Z module = torch if self.device.type == 'cpu' else torch.cuda 2023-01-11T21:42:28.2414294Z ok (0.001s) 2023-01-11T21:42:28.2414658Z test_logcumsumexp (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2415107Z test_logical_not (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2415548Z test_logical_ops (__main__.TestNamedTensor) ... ok (0.002s) 2023-01-11T21:42:28.2415976Z test_masked_fill (__main__.TestNamedTensor) ... ok (0.014s) 2023-01-11T21:42:28.2416430Z test_masked_select (__main__.TestNamedTensor) ... ok (0.006s) 2023-01-11T21:42:28.2416860Z test_matmul (__main__.TestNamedTensor) ... ok (0.021s) 2023-01-11T21:42:28.2417282Z test_max_pooling (__main__.TestNamedTensor) ... ok (0.003s) 2023-01-11T21:42:28.2417776Z test_max_pooling_without_names_does_not_warn (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2418249Z test_mm (__main__.TestNamedTensor) ... ok (0.007s) 2023-01-11T21:42:28.2418639Z test_mv (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2419083Z test_no_jit_script_support (__main__.TestNamedTensor) ... ok (0.047s) 2023-01-11T21:42:28.2419567Z test_no_jit_tracer_support (__main__.TestNamedTensor) ... ok (0.009s) 2023-01-11T21:42:28.2420075Z test_no_multiprocessing_support (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2420548Z test_no_pickle_support (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2421021Z test_no_save_support (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2421572Z test_noncontig_contiguous (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2422036Z test_none_names_refcount (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2422718Z test_nyi_dimname_overload_msg (__main__.TestNamedTensor) ... 
ok (0.003s) 2023-01-11T21:42:28.2423202Z test_out_fn_semantics (__main__.TestNamedTensor) ... ok (0.021s) 2023-01-11T21:42:28.2423651Z test_pow_special (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2424083Z test_py3_ellipsis (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2424536Z test_reduction_fns (__main__.TestNamedTensor) ... ok (0.106s) 2023-01-11T21:42:28.2424978Z test_refine_names (__main__.TestNamedTensor) ... ok (0.009s) 2023-01-11T21:42:28.2425399Z test_rename (__main__.TestNamedTensor) ... ok (0.008s) 2023-01-11T21:42:28.2425820Z test_rename_ (__main__.TestNamedTensor) ... ok (0.007s) 2023-01-11T21:42:28.2426345Z test_rename_globber (__main__.TestNamedTensor) ... ok (0.003s) 2023-01-11T21:42:28.2426807Z test_rename_rename_map (__main__.TestNamedTensor) ... ok (0.002s) 2023-01-11T21:42:28.2427251Z test_repr (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2427673Z test_resize (__main__.TestNamedTensor) ... ok (0.007s) 2023-01-11T21:42:28.2428090Z test_select (__main__.TestNamedTensor) ... ok (0.003s) 2023-01-11T21:42:28.2428530Z test_select_cuda (__main__.TestNamedTensor) ... skip: no CUDA (0.000s) 2023-01-11T21:42:28.2429011Z test_set_names_property (__main__.TestNamedTensor) ... ok (0.005s) 2023-01-11T21:42:28.2429445Z test_size (__main__.TestNamedTensor) ... ok (0.007s) 2023-01-11T21:42:28.2429897Z test_split_fns_propagates_names (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2430365Z test_squeeze (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2430790Z test_stride (__main__.TestNamedTensor) ... ok (0.007s) 2023-01-11T21:42:28.2431220Z test_tensor_from_lists (__main__.TestNamedTensor) ... ok (0.003s) 2023-01-11T21:42:28.2432155Z test_tensor_from_named_tensor (__main__.TestNamedTensor) ... /var/lib/jenkins/workspace/test/test_namedtensor.py:516: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:42:28.2432973Z tensor = torch.tensor(x) 2023-01-11T21:42:28.2433729Z /var/lib/jenkins/workspace/test/test_namedtensor.py:522: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:42:28.2434552Z tensor = torch.tensor(x, names=None) 2023-01-11T21:42:28.2435315Z /var/lib/jenkins/workspace/test/test_namedtensor.py:527: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:42:28.2436183Z tensor = torch.tensor(x, names=('N', 'C')) 2023-01-11T21:42:28.2436513Z ok (0.003s) 2023-01-11T21:42:28.2436869Z test_tensor_from_numpy (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2437780Z test_tensor_from_tensor (__main__.TestNamedTensor) ... /var/lib/jenkins/workspace/test/test_namedtensor.py:511: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:42:28.2438600Z tensor = torch.tensor(x, names=names) 2023-01-11T21:42:28.2438921Z ok (0.001s) 2023-01-11T21:42:28.2439282Z test_tensor_grad_is_unnamed (__main__.TestNamedTensor) ... 
ok (0.001s) 2023-01-11T21:42:28.2439765Z test_transpose_variants (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2440219Z test_trivial (__main__.TestNamedTensor) ... ok (0.000s) 2023-01-11T21:42:28.2440668Z test_unary_propagate_names_fns (__main__.TestNamedTensor) ... ok (0.018s) 2023-01-11T21:42:28.2441251Z test_unflatten (__main__.TestNamedTensor) ... ok (0.016s) 2023-01-11T21:42:28.2441724Z test_unsupported_op_error_msg (__main__.TestNamedTensor) ... ok (0.010s) 2023-01-11T21:42:28.2442270Z test_using_seen_interned_string_doesnt_bump_refcount (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2442861Z test_using_unseen_interned_string_bumps_refcount_permanently (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2443450Z test_using_unseen_uninterned_string_refcounts (__main__.TestNamedTensor) ... ok (0.001s) 2023-01-11T21:42:28.2443762Z 2023-01-11T21:42:28.2444108Z ---------------------------------------------------------------------- 2023-01-11T21:42:28.2444492Z Ran 86 tests in 0.869s 2023-01-11T21:42:28.2444677Z 2023-01-11T21:42:28.2444790Z OK (skipped=4) 2023-01-11T21:42:28.2444964Z 2023-01-11T21:42:28.2445106Z Generating XML reports... 2023-01-11T21:42:28.2445862Z Generated XML report: test-reports/python-unittest/test_namedtensor/TEST-TestNamedTensor-20230111214226.xml 2023-01-11T21:42:28.2446267Z 2023-01-11T21:42:28.2446674Z ##[endgroup] 2023-01-11T21:42:28.2447337Z FINISHED PRINTING LOG FILE of test_namedtensor (/var/lib/jenkins/workspace/test/test-reports/test_namedtensor_giocblkf) 2023-01-11T21:42:28.2447716Z 2023-01-11T21:42:30.5099894Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:42:30.6042204Z Ignoring disabled issues: [] 2023-01-11T21:42:30.6208265Z Running test_native_functions ... [2023-01-11 21:42:30.620301] 2023-01-11T21:42:30.6209232Z Executing ['/opt/conda/bin/python', '-bb', 'test_native_functions.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:42:30.620581] 2023-01-11T21:42:32.9319107Z 2023-01-11T21:42:32.9319953Z Expand the folded group to see the log file of test_native_functions 2023-01-11T21:42:32.9321118Z ##[group]PRINTING LOG FILE of test_native_functions (/var/lib/jenkins/workspace/test/test-reports/test_native_functions_qz4w1sz5) 2023-01-11T21:42:32.9321512Z 2023-01-11T21:42:32.9321643Z Running tests... 2023-01-11T21:42:32.9322270Z ---------------------------------------------------------------------- 2023-01-11T21:42:32.9322920Z Test results will be stored in test-reports/python-unittest/test_native_functions 2023-01-11T21:42:32.9323453Z test_intlist_error_with_overload (__main__.TestNativeFunctions) ... ok (0.265s) 2023-01-11T21:42:32.9324332Z test_optional_filled_intlist (__main__.TestNativeFunctions) ... 
pad(): argument 'pad' (position 2) must be tuple of ints, but found element of type str at pos 0 2023-01-11T21:42:32.9325085Z pad(): argument 'pad' must be tuple of ints, but found element of type str at pos 2 2023-01-11T21:42:32.9325765Z pad(): argument 'pad' (position 2) must be tuple of ints, but found element of type str at pos 0 2023-01-11T21:42:32.9326458Z pad(): argument 'pad' (position 2) must be tuple of ints, but found element of type str at pos 0 2023-01-11T21:42:32.9327152Z pad(): argument 'pad' must be tuple of ints, but found element of type str at pos 2 2023-01-11T21:42:32.9327814Z pad(): argument 'pad' (position 2) must be tuple of ints, but found element of type str at pos 0 2023-01-11T21:42:32.9328427Z pad(): argument 'pad' (position 2) must be tuple of ints, not str 2023-01-11T21:42:32.9328801Z ok (0.050s) 2023-01-11T21:42:32.9329186Z test_optional_floatlist (__main__.TestNativeFunctions) ... ok (0.010s) 2023-01-11T21:42:32.9329700Z test_optional_floatlist_invalid (__main__.TestNativeFunctions) ... ok (0.010s) 2023-01-11T21:42:32.9330204Z test_optional_intlist (__main__.TestNativeFunctions) ... ok (0.008s) 2023-01-11T21:42:32.9330692Z test_optional_intlist_invalid (__main__.TestNativeFunctions) ... ok (0.009s) 2023-01-11T21:42:32.9331186Z test_string_defaults (__main__.TestNativeFunctions) ... ok (0.006s) 2023-01-11T21:42:32.9332006Z test_symintlist_error (__main__.TestNativeFunctions) ... pad(): argument 'pad' (position 2) must be tuple of ints, but found element of type str at pos 0 2023-01-11T21:42:32.9333076Z pad(): argument 'pad' must be tuple of ints, but found element of type str at pos 2 2023-01-11T21:42:32.9333735Z pad(): argument 'pad' (position 2) must be tuple of ints, but found element of type str at pos 0 2023-01-11T21:42:32.9334579Z pad(): argument 'pad' (position 2) must be tuple of ints, but found element of type str at pos 0 2023-01-11T21:42:32.9335328Z pad(): argument 'pad' must be tuple of ints, but found element of type str at pos 2 2023-01-11T21:42:32.9336116Z pad(): argument 'pad' (position 2) must be tuple of ints, but found element of type str at pos 0 2023-01-11T21:42:32.9336795Z pad(): argument 'pad' (position 2) must be tuple of ints, not str 2023-01-11T21:42:32.9337222Z ok (0.016s) 2023-01-11T21:42:32.9338083Z test_symintlist_error_with_overload (__main__.TestNativeFunctions) ... view() received an invalid combination of arguments - got (tuple), but expected one of: 2023-01-11T21:42:32.9338694Z * (torch.dtype dtype) 2023-01-11T21:42:32.9339536Z didn't match because some of the arguments have invalid types: (!tuple of (str,)!) 2023-01-11T21:42:32.9340025Z * (tuple of ints size) 2023-01-11T21:42:32.9340721Z didn't match because some of the arguments have invalid types: (!tuple of (str,)!) 2023-01-11T21:42:32.9341032Z 2023-01-11T21:42:32.9341395Z view(): argument 'size' must be tuple of ints, but found element of type str at pos 2 2023-01-11T21:42:32.9342066Z view() received an invalid combination of arguments - got (tuple), but expected one of: 2023-01-11T21:42:32.9342709Z * (torch.dtype dtype) 2023-01-11T21:42:32.9343257Z didn't match because some of the arguments have invalid types: (!tuple of (str, int)!) 2023-01-11T21:42:32.9343665Z * (tuple of ints size) 2023-01-11T21:42:32.9344220Z didn't match because some of the arguments have invalid types: (!tuple of (str, int)!) 
2023-01-11T21:42:32.9344519Z 2023-01-11T21:42:32.9344879Z view() received an invalid combination of arguments - got (list), but expected one of: 2023-01-11T21:42:32.9345324Z * (torch.dtype dtype) 2023-01-11T21:42:32.9345869Z didn't match because some of the arguments have invalid types: (!list of [str]!) 2023-01-11T21:42:32.9346285Z * (tuple of ints size) 2023-01-11T21:42:32.9346804Z didn't match because some of the arguments have invalid types: (!list of [str]!) 2023-01-11T21:42:32.9347099Z 2023-01-11T21:42:32.9347459Z view(): argument 'size' must be tuple of ints, but found element of type str at pos 2 2023-01-11T21:42:32.9348109Z view() received an invalid combination of arguments - got (list), but expected one of: 2023-01-11T21:42:32.9348509Z * (torch.dtype dtype) 2023-01-11T21:42:32.9349061Z didn't match because some of the arguments have invalid types: (!list of [str, int]!) 2023-01-11T21:42:32.9349476Z * (tuple of ints size) 2023-01-11T21:42:32.9350027Z didn't match because some of the arguments have invalid types: (!list of [str, int]!) 2023-01-11T21:42:32.9350313Z 2023-01-11T21:42:32.9350693Z view() received an invalid combination of arguments - got (str), but expected one of: 2023-01-11T21:42:32.9351116Z * (torch.dtype dtype) 2023-01-11T21:42:32.9351764Z didn't match because some of the arguments have invalid types: (!str!) 2023-01-11T21:42:32.9352206Z * (tuple of ints size) 2023-01-11T21:42:32.9352796Z didn't match because some of the arguments have invalid types: (!str!) 2023-01-11T21:42:32.9353119Z 2023-01-11T21:42:32.9353247Z ok (0.016s) 2023-01-11T21:42:32.9354289Z test_symintlist_error_with_overload_but_is_unique (__main__.TestNativeFunctions) ... set_() received an invalid combination of arguments - got (Tensor, int, tuple), but expected one of: 2023-01-11T21:42:32.9354914Z * () 2023-01-11T21:42:32.9355244Z * (torch.Storage source) 2023-01-11T21:42:32.9355754Z * (torch.Storage source, int storage_offset, tuple of ints size, tuple of ints stride) 2023-01-11T21:42:32.9356214Z * (Tensor source) 2023-01-11T21:42:32.9356683Z * (Tensor source, int storage_offset, tuple of ints size, tuple of ints stride) 2023-01-11T21:42:32.9357270Z 2023-01-11T21:42:32.9357709Z set_(): argument 'size' must be tuple of ints, but found element of type str at pos 2 2023-01-11T21:42:32.9358596Z set_() received an invalid combination of arguments - got (Tensor, int, tuple), but expected one of: 2023-01-11T21:42:32.9359010Z * () 2023-01-11T21:42:32.9359308Z * (torch.Storage source) 2023-01-11T21:42:32.9359755Z * (torch.Storage source, int storage_offset, tuple of ints size, tuple of ints stride) 2023-01-11T21:42:32.9360261Z * (Tensor source) 2023-01-11T21:42:32.9360645Z * (Tensor source, int storage_offset, tuple of ints size, tuple of ints stride) 2023-01-11T21:42:32.9360933Z 2023-01-11T21:42:32.9361347Z set_() received an invalid combination of arguments - got (Tensor, int, list), but expected one of: 2023-01-11T21:42:32.9361761Z * () 2023-01-11T21:42:32.9362021Z * (torch.Storage source) 2023-01-11T21:42:32.9362451Z * (torch.Storage source, int storage_offset, tuple of ints size, tuple of ints stride) 2023-01-11T21:42:32.9363040Z * (Tensor source) 2023-01-11T21:42:32.9363427Z * (Tensor source, int storage_offset, tuple of ints size, tuple of ints stride) 2023-01-11T21:42:32.9363716Z 2023-01-11T21:42:32.9364092Z set_(): argument 'size' must be tuple of ints, but found element of type str at pos 2 2023-01-11T21:42:32.9364780Z set_() received an invalid combination of arguments - got (Tensor, int, 
list), but expected one of: 2023-01-11T21:42:32.9365190Z * () 2023-01-11T21:42:32.9365463Z * (torch.Storage source) 2023-01-11T21:42:32.9365901Z * (torch.Storage source, int storage_offset, tuple of ints size, tuple of ints stride) 2023-01-11T21:42:32.9366310Z * (Tensor source) 2023-01-11T21:42:32.9366706Z * (Tensor source, int storage_offset, tuple of ints size, tuple of ints stride) 2023-01-11T21:42:32.9366992Z 2023-01-11T21:42:32.9367401Z set_() received an invalid combination of arguments - got (Tensor, int, str), but expected one of: 2023-01-11T21:42:32.9367816Z * () 2023-01-11T21:42:32.9368079Z * (torch.Storage source) 2023-01-11T21:42:32.9368549Z * (torch.Storage source, int storage_offset, tuple of ints size, tuple of ints stride) 2023-01-11T21:42:32.9369092Z * (Tensor source) 2023-01-11T21:42:32.9369569Z * (Tensor source, int storage_offset, tuple of ints size, tuple of ints stride) 2023-01-11T21:42:32.9369873Z 2023-01-11T21:42:32.9370003Z ok (0.016s) 2023-01-11T21:42:32.9370867Z test_vararg_symintlist_error (__main__.TestNativeFunctions) ... rand(): argument 'size' (position 1) must be tuple of ints, but found element of type str at pos 0 2023-01-11T21:42:32.9371745Z rand(): argument 'size' must be tuple of ints, but found element of type str at pos 2 2023-01-11T21:42:32.9372509Z rand(): argument 'size' (position 1) must be tuple of ints, but found element of type str at pos 0 2023-01-11T21:42:32.9373304Z rand(): argument 'size' (position 1) must be tuple of ints, but found element of type str at pos 0 2023-01-11T21:42:32.9374074Z rand(): argument 'size' must be tuple of ints, but found element of type str at pos 2 2023-01-11T21:42:32.9374872Z rand(): argument 'size' (position 1) must be tuple of ints, but found element of type str at pos 0 2023-01-11T21:42:32.9375659Z rand(): argument 'size' (position 1) must be tuple of ints, but found element of type str at pos 0 2023-01-11T21:42:32.9376467Z rand(): argument 'size' (position 1) must be tuple of ints, but found element of type str at pos 0 2023-01-11T21:42:32.9377227Z rand(): argument 'size' must be tuple of ints, but found element of type str at pos 2 2023-01-11T21:42:32.9377993Z rand() received an invalid combination of arguments - got (str, int), but expected one of: 2023-01-11T21:42:32.9378805Z * (tuple of ints size, *, torch.Generator generator, tuple of names names, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) 2023-01-11T21:42:32.9379764Z * (tuple of ints size, *, torch.Generator generator, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) 2023-01-11T21:42:32.9380769Z * (tuple of ints size, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) 2023-01-11T21:42:32.9381500Z * (tuple of ints size, *, tuple of names names, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) 2023-01-11T21:42:32.9381901Z 2023-01-11T21:42:32.9382303Z rand(): argument 'size' (position 1) must be tuple of ints, but found element of type str at pos 0 2023-01-11T21:42:32.9383121Z rand(): argument 'size' must be tuple of ints, but found element of type str at pos 2 2023-01-11T21:42:32.9383797Z rand() received an invalid combination of arguments - got (str, int), but expected one of: 2023-01-11T21:42:32.9384503Z * (tuple of ints size, *, torch.Generator generator, tuple of names names, torch.dtype dtype, torch.layout layout, torch.device 
device, bool pin_memory, bool requires_grad) 2023-01-11T21:42:32.9385456Z * (tuple of ints size, *, torch.Generator generator, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) 2023-01-11T21:42:32.9386220Z * (tuple of ints size, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) 2023-01-11T21:42:32.9386940Z * (tuple of ints size, *, tuple of names names, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) 2023-01-11T21:42:32.9387327Z 2023-01-11T21:42:32.9387749Z rand() received an invalid combination of arguments - got (str, str, str), but expected one of: 2023-01-11T21:42:32.9388441Z * (tuple of ints size, *, torch.Generator generator, tuple of names names, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) 2023-01-11T21:42:32.9389262Z * (tuple of ints size, *, torch.Generator generator, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) 2023-01-11T21:42:32.9390021Z * (tuple of ints size, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) 2023-01-11T21:42:32.9390727Z * (tuple of ints size, *, tuple of names names, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) 2023-01-11T21:42:32.9391111Z 2023-01-11T21:42:32.9391208Z ok (0.032s) 2023-01-11T21:42:32.9391367Z 2023-01-11T21:42:32.9391715Z ---------------------------------------------------------------------- 2023-01-11T21:42:32.9392125Z Ran 11 tests in 0.438s 2023-01-11T21:42:32.9392307Z 2023-01-11T21:42:32.9392417Z OK 2023-01-11T21:42:32.9392562Z 2023-01-11T21:42:32.9392702Z Generating XML reports... 2023-01-11T21:42:32.9393431Z Generated XML report: test-reports/python-unittest/test_native_functions/TEST-TestNativeFunctions-20230111214232.xml 2023-01-11T21:42:32.9393837Z 2023-01-11T21:42:32.9394384Z ##[endgroup] 2023-01-11T21:42:32.9395051Z FINISHED PRINTING LOG FILE of test_native_functions (/var/lib/jenkins/workspace/test/test-reports/test_native_functions_qz4w1sz5) 2023-01-11T21:42:32.9395440Z 2023-01-11T21:42:35.3325124Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:42:35.4198505Z Ignoring disabled issues: [] 2023-01-11T21:42:35.4362858Z Running test_native_mha ... [2023-01-11 21:42:35.435887] 2023-01-11T21:42:35.4364443Z Executing ['/opt/conda/bin/python', '-bb', 'test_native_mha.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:42:35.436215] 2023-01-11T21:42:37.4097322Z 2023-01-11T21:42:37.4097820Z Expand the folded group to see the log file of test_native_mha 2023-01-11T21:42:37.4098762Z ##[group]PRINTING LOG FILE of test_native_mha (/var/lib/jenkins/workspace/test/test-reports/test_native_mha_57c20dz5) 2023-01-11T21:42:37.4099166Z 2023-01-11T21:42:37.4099300Z Running tests... 2023-01-11T21:42:37.4099959Z ---------------------------------------------------------------------- 2023-01-11T21:42:37.4100270Z 2023-01-11T21:42:37.4100918Z ---------------------------------------------------------------------- 2023-01-11T21:42:37.4102004Z Ran 0 tests in 0.000s 2023-01-11T21:42:37.4102199Z 2023-01-11T21:42:37.4102309Z OK 2023-01-11T21:42:37.4102626Z 2023-01-11T21:42:37.4102764Z Generating XML reports... 
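The TypeError text captured in the test_native_functions log above comes from PyTorch's argument parser rejecting str elements where an int sequence (a pad or size list) is expected, or failing overload resolution for view()/set_()/rand(). A minimal sketch of the kind of calls that produce these messages; the tensors and arguments below are illustrative assumptions, not the exact inputs used by test_native_functions.py:

    import torch
    import torch.nn.functional as F

    x = torch.ones(2, 2)

    # A str element where an int list is expected is rejected by the parser.
    try:
        F.pad(x, ("a", 1))
    except TypeError as err:
        print(err)  # pad(): argument 'pad' (position 2) must be tuple of ints, ...

    # view() has (dtype) and (size) overloads; a tuple containing a str matches
    # neither, yielding the "received an invalid combination of arguments" message.
    try:
        x.view(("a", 1))
    except TypeError as err:
        print(err)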
2023-01-11T21:42:37.4103140Z Test results will be stored in test-reports/python-unittest/test_native_mha 2023-01-11T21:42:37.4103416Z 2023-01-11T21:42:37.4103697Z ##[endgroup] 2023-01-11T21:42:37.4104092Z FINISHED PRINTING LOG FILE of test_native_mha (/var/lib/jenkins/workspace/test/test-reports/test_native_mha_57c20dz5) 2023-01-11T21:42:37.4104312Z 2023-01-11T21:42:39.5231449Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:42:39.6110103Z Ignoring disabled issues: [] 2023-01-11T21:42:39.6271592Z Running test_nestedtensor ... [2023-01-11 21:42:39.626788] 2023-01-11T21:42:39.6273405Z Executing ['/opt/conda/bin/python', '-bb', 'test_nestedtensor.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:42:39.627068] 2023-01-11T21:42:41.7582868Z 2023-01-11T21:42:41.7583436Z Expand the folded group to see the log file of test_nestedtensor 2023-01-11T21:42:41.7584418Z ##[group]PRINTING LOG FILE of test_nestedtensor (/var/lib/jenkins/workspace/test/test-reports/test_nestedtensor_g9fx0ytb) 2023-01-11T21:42:41.7584662Z 2023-01-11T21:42:41.7584738Z Running tests... 2023-01-11T21:42:41.7585130Z ---------------------------------------------------------------------- 2023-01-11T21:42:41.7585664Z Test results will be stored in test-reports/python-unittest/test_nestedtensor 2023-01-11T21:42:41.7586725Z test_2d_nested_tensor_batch_size_2_max_seq_len_3_vocab_size_10 (__main__.TestNestedTensor) ... /var/lib/jenkins/workspace/test/test_nestedtensor.py:113: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/NestedTensorImpl.cpp:179.) 2023-01-11T21:42:41.7587572Z nested_tensor = torch.nested.nested_tensor(data, dtype=torch.int64) 2023-01-11T21:42:41.7587909Z ok (0.003s) 2023-01-11T21:42:41.7588387Z test_2d_nested_tensor_batch_size_2_max_seq_len_3_vocab_size_20 (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7588944Z test_2d_nested_tensor_batch_size_2_max_seq_len_5_vocab_size_10 (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7589300Z test_2d_nested_tensor_batch_size_2_max_seq_len_5_vocab_size_20 (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7589657Z test_2d_nested_tensor_batch_size_4_max_seq_len_3_vocab_size_10 (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7590276Z test_2d_nested_tensor_batch_size_4_max_seq_len_3_vocab_size_20 (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7590856Z test_2d_nested_tensor_batch_size_4_max_seq_len_5_vocab_size_10 (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7591204Z test_2d_nested_tensor_batch_size_4_max_seq_len_5_vocab_size_20 (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7591662Z test_3d_nested_tensor_batch_size_2_max_seq_len_3_vocab_size_10 (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7592246Z test_3d_nested_tensor_batch_size_2_max_seq_len_3_vocab_size_20 (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7592821Z test_3d_nested_tensor_batch_size_2_max_seq_len_5_vocab_size_10 (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7593200Z test_3d_nested_tensor_batch_size_2_max_seq_len_5_vocab_size_20 (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7593534Z test_3d_nested_tensor_batch_size_4_max_seq_len_3_vocab_size_10 (__main__.TestNestedTensor) ... 
ok (0.001s) 2023-01-11T21:42:41.7593853Z test_3d_nested_tensor_batch_size_4_max_seq_len_3_vocab_size_20 (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7594286Z test_3d_nested_tensor_batch_size_4_max_seq_len_5_vocab_size_10 (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7594860Z test_3d_nested_tensor_batch_size_4_max_seq_len_5_vocab_size_20 (__main__.TestNestedTensor) ... ok (0.002s) 2023-01-11T21:42:41.7595201Z test_3d_nested_tensor_float_batch_size_2_max_seq_len_3_vocab_size_10 (__main__.TestNestedTensor) ... ok (0.002s) 2023-01-11T21:42:41.7595564Z test_3d_nested_tensor_float_batch_size_2_max_seq_len_3_vocab_size_20 (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7595930Z test_3d_nested_tensor_float_batch_size_2_max_seq_len_5_vocab_size_10 (__main__.TestNestedTensor) ... ok (0.002s) 2023-01-11T21:42:41.7596289Z test_3d_nested_tensor_float_batch_size_2_max_seq_len_5_vocab_size_20 (__main__.TestNestedTensor) ... ok (0.002s) 2023-01-11T21:42:41.7596631Z test_3d_nested_tensor_float_batch_size_4_max_seq_len_3_vocab_size_10 (__main__.TestNestedTensor) ... ok (0.002s) 2023-01-11T21:42:41.7596992Z test_3d_nested_tensor_float_batch_size_4_max_seq_len_3_vocab_size_20 (__main__.TestNestedTensor) ... ok (0.002s) 2023-01-11T21:42:41.7597412Z test_3d_nested_tensor_float_batch_size_4_max_seq_len_5_vocab_size_10 (__main__.TestNestedTensor) ... ok (0.002s) 2023-01-11T21:42:41.7597775Z test_3d_nested_tensor_float_batch_size_4_max_seq_len_5_vocab_size_20 (__main__.TestNestedTensor) ... ok (0.002s) 2023-01-11T21:42:41.7598066Z test_copy_ (__main__.TestNestedTensor) ... ok (0.006s) 2023-01-11T21:42:41.7598343Z test_default_nested_tensor (__main__.TestNestedTensor) ... ok (0.003s) 2023-01-11T21:42:41.7598612Z test_dim (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7598852Z test_fill_ (__main__.TestNestedTensor) ... ok (0.006s) 2023-01-11T21:42:41.7599117Z test_is_contiguous (__main__.TestNestedTensor) ... ok (0.002s) 2023-01-11T21:42:41.7599395Z test_nested_namespace (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7599671Z test_nested_tensor (__main__.TestNestedTensor) ... ok (0.003s) 2023-01-11T21:42:41.7599944Z test_nested_tensor_matching_dim (__main__.TestNestedTensor) ... ok (0.006s) 2023-01-11T21:42:41.7600219Z test_numel (__main__.TestNestedTensor) ... ok (0.002s) 2023-01-11T21:42:41.7600476Z test_ones_like (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7600727Z test_repr_string (__main__.TestNestedTensor) ... ok (0.002s) 2023-01-11T21:42:41.7600982Z test_size (__main__.TestNestedTensor) ... ok (0.002s) 2023-01-11T21:42:41.7601235Z test_size_dim (__main__.TestNestedTensor) ... ok (0.005s) 2023-01-11T21:42:41.7601477Z test_stride (__main__.TestNestedTensor) ... ok (0.002s) 2023-01-11T21:42:41.7601723Z test_to (__main__.TestNestedTensor) ... ok (0.003s) 2023-01-11T21:42:41.7602001Z test_to_padded_tensor_on_empty_tensor (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7602298Z test_unbind_0 (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7602540Z test_unbind_1 (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7602793Z test_unbind_3 (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7603040Z test_unbind_4 (__main__.TestNestedTensor) ... ok (0.001s) 2023-01-11T21:42:41.7603286Z test_unbind_dim (__main__.TestNestedTensor) ... 
ok (0.003s) 2023-01-11T21:42:41.7603436Z 2023-01-11T21:42:41.7603690Z ---------------------------------------------------------------------- 2023-01-11T21:42:41.7603930Z Ran 45 tests in 0.089s 2023-01-11T21:42:41.7604043Z 2023-01-11T21:42:41.7604103Z OK 2023-01-11T21:42:41.7604181Z 2023-01-11T21:42:41.7604265Z Generating XML reports... 2023-01-11T21:42:41.7604697Z Generated XML report: test-reports/python-unittest/test_nestedtensor/TEST-TestNestedTensor-20230111214241.xml 2023-01-11T21:42:41.7604937Z 2023-01-11T21:42:41.7605221Z ##[endgroup] 2023-01-11T21:42:41.7605606Z FINISHED PRINTING LOG FILE of test_nestedtensor (/var/lib/jenkins/workspace/test/test-reports/test_nestedtensor_g9fx0ytb) 2023-01-11T21:42:41.7605833Z 2023-01-11T21:42:43.8807538Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:42:43.9683522Z Ignoring disabled issues: [] 2023-01-11T21:42:43.9842401Z Running test_numba_integration ... [2023-01-11 21:42:43.983953] 2023-01-11T21:42:43.9844364Z Executing ['/opt/conda/bin/python', '-bb', 'test_numba_integration.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:42:43.984209] 2023-01-11T21:42:46.4311981Z 2023-01-11T21:42:46.4312983Z Expand the folded group to see the log file of test_numba_integration 2023-01-11T21:42:46.4314684Z ##[group]PRINTING LOG FILE of test_numba_integration (/var/lib/jenkins/workspace/test/test-reports/test_numba_integration_4aqtnrb2) 2023-01-11T21:42:46.4315366Z 2023-01-11T21:42:46.4315542Z Running tests... 2023-01-11T21:42:46.4317335Z ---------------------------------------------------------------------- 2023-01-11T21:42:46.4318556Z Test results will be stored in test-reports/python-unittest/test_numba_integration 2023-01-11T21:42:46.4319244Z test_active_device (__main__.TestNumbaIntegration) 2023-01-11T21:42:46.4320348Z 'as_cuda_array' tensor device must match active numba context. ... skip: No cuda (0.002s) 2023-01-11T21:42:46.4326966Z test_array_adaptor (__main__.TestNumbaIntegration) 2023-01-11T21:42:46.4327732Z Torch __cuda_array_adaptor__ exposes tensor data to numba.cuda. ... skip: No cuda (0.003s) 2023-01-11T21:42:46.4328364Z test_conversion_errors (__main__.TestNumbaIntegration) 2023-01-11T21:42:46.4329088Z Numba properly detects array interface for tensor.Tensor variants. ... skip: No cuda (0.002s) 2023-01-11T21:42:46.4329559Z test_cuda_array_interface (__main__.TestNumbaIntegration) 2023-01-11T21:42:46.4329990Z torch.Tensor exposes __cuda_array_interface__ for cuda tensors. ... skip: No cuda (0.003s) 2023-01-11T21:42:46.4330701Z test_from_cuda_array_interface (__main__.TestNumbaIntegration) 2023-01-11T21:42:46.4331775Z torch.as_tensor() and torch.tensor() supports the __cuda_array_interface__ protocol. ... skip: Test is temporary disabled, see https://github.com/pytorch/pytorch/issues/54418 (0.003s) 2023-01-11T21:42:46.4332586Z test_from_cuda_array_interface_active_device (__main__.TestNumbaIntegration) 2023-01-11T21:42:46.4333343Z torch.as_tensor() tensor device must match active numba context. ... skip: Test is temporary disabled, see https://github.com/pytorch/pytorch/issues/54418 (0.002s) 2023-01-11T21:42:46.4334196Z test_from_cuda_array_interface_inferred_strides (__main__.TestNumbaIntegration) 2023-01-11T21:42:46.4334932Z torch.as_tensor(numba_ary) should have correct inferred (contiguous) strides ... 
skip: No cuda (0.001s) 2023-01-11T21:42:46.4335620Z test_from_cuda_array_interface_lifetime (__main__.TestNumbaIntegration) 2023-01-11T21:42:46.4336534Z torch.as_tensor(obj) tensor grabs a reference to obj so that the lifetime of obj exceeds the tensor ... skip: Test is temporary disabled, see https://github.com/pytorch/pytorch/issues/54418 (0.001s) 2023-01-11T21:42:46.4337139Z 2023-01-11T21:42:46.4337628Z ---------------------------------------------------------------------- 2023-01-11T21:42:46.4338123Z Ran 8 tests in 0.016s 2023-01-11T21:42:46.4338332Z 2023-01-11T21:42:46.4338470Z OK (skipped=8) 2023-01-11T21:42:46.4338698Z 2023-01-11T21:42:46.4338865Z Generating XML reports... 2023-01-11T21:42:46.4339767Z Generated XML report: test-reports/python-unittest/test_numba_integration/TEST-TestNumbaIntegration-20230111214245.xml 2023-01-11T21:42:46.4340167Z 2023-01-11T21:42:46.4340846Z ##[endgroup] 2023-01-11T21:42:46.4341941Z FINISHED PRINTING LOG FILE of test_numba_integration (/var/lib/jenkins/workspace/test/test-reports/test_numba_integration_4aqtnrb2) 2023-01-11T21:42:46.4342785Z 2023-01-11T21:42:48.7659810Z 2023-01-11T21:42:48.7660571Z Expand the folded group to see the log file of test_mkldnn_fusion 2023-01-11T21:42:48.7661826Z ##[group]PRINTING LOG FILE of test_mkldnn_fusion (/var/lib/jenkins/workspace/test/test-reports/test_mkldnn_fusion_8ljahok8) 2023-01-11T21:42:48.7662188Z 2023-01-11T21:42:48.7662297Z Running tests... 2023-01-11T21:42:48.7663101Z ---------------------------------------------------------------------- 2023-01-11T21:42:48.7663791Z Test results will be stored in test-reports/python-unittest/test_mkldnn_fusion 2023-01-11T21:42:48.7664705Z test_conv_binary_fusion_ops (__main__.TestMkldnnFusion) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.002s) 2023-01-11T21:42:48.7665336Z test_conv_unary_fusion_nnc (__main__.TestMkldnnFusion) ... ok (1.036s) 2023-01-11T21:42:48.7665860Z test_conv_unary_fusion_ops (__main__.TestMkldnnFusion) ... ok (65.061s) 2023-01-11T21:42:48.7666378Z test_linear_binary_fusion_ops (__main__.TestMkldnnFusion) ... ok (0.016s) 2023-01-11T21:42:48.7666910Z test_linear_unary_fusion_ops (__main__.TestMkldnnFusion) ... ok (0.022s) 2023-01-11T21:42:48.7667421Z test_single_conv (__main__.TestMkldnnFusion) ... ok (1.465s) 2023-01-11T21:42:48.7667899Z test_unsupported_conv (__main__.TestMkldnnFusion) ... ok (12.641s) 2023-01-11T21:42:48.7668194Z 2023-01-11T21:42:48.7668563Z ---------------------------------------------------------------------- 2023-01-11T21:42:48.7669000Z Ran 7 tests in 80.243s 2023-01-11T21:42:48.7669206Z 2023-01-11T21:42:48.7669429Z OK (skipped=1) 2023-01-11T21:42:48.7669618Z 2023-01-11T21:42:48.7669747Z Generating XML reports... 2023-01-11T21:42:48.7670528Z Generated XML report: test-reports/python-unittest/test_mkldnn_fusion/TEST-TestMkldnnFusion-20230111214128.xml 2023-01-11T21:42:48.7670959Z 2023-01-11T21:42:48.7671391Z ##[endgroup] 2023-01-11T21:42:48.7672078Z FINISHED PRINTING LOG FILE of test_mkldnn_fusion (/var/lib/jenkins/workspace/test/test-reports/test_mkldnn_fusion_8ljahok8) 2023-01-11T21:42:48.7672478Z 2023-01-11T21:42:48.8545110Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:42:48.9402271Z Ignoring disabled issues: [] 2023-01-11T21:42:48.9565479Z Running test_numpy_interop ... [2023-01-11 21:42:48.956122] 2023-01-11T21:42:48.9566114Z Executing ['/opt/conda/bin/python', '-bb', 'test_numpy_interop.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... 
[2023-01-11 21:42:48.956387] 2023-01-11T21:42:50.6887099Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:42:50.7555387Z Ignoring disabled issues: [] 2023-01-11T21:42:50.7718370Z Running test_nvfuser_dynamo ... [2023-01-11 21:42:50.771536] 2023-01-11T21:42:50.7720689Z Executing ['/opt/conda/bin/python', '-bb', 'test_nvfuser_dynamo.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:42:50.771790] 2023-01-11T21:42:50.9576440Z 2023-01-11T21:42:50.9577104Z Expand the folded group to see the log file of test_numpy_interop 2023-01-11T21:42:50.9578141Z ##[group]PRINTING LOG FILE of test_numpy_interop (/var/lib/jenkins/workspace/test/test-reports/test_numpy_interop_dh069_5_) 2023-01-11T21:42:50.9578381Z 2023-01-11T21:42:50.9578458Z Running tests... 2023-01-11T21:42:50.9578849Z ---------------------------------------------------------------------- 2023-01-11T21:42:50.9579024Z 2023-01-11T21:42:50.9579222Z ---------------------------------------------------------------------- 2023-01-11T21:42:50.9579582Z Ran 0 tests in 0.000s 2023-01-11T21:42:50.9579757Z 2023-01-11T21:42:50.9579878Z OK 2023-01-11T21:42:50.9580057Z 2023-01-11T21:42:50.9580197Z Generating XML reports... 2023-01-11T21:42:50.9580816Z Test results will be stored in test-reports/python-unittest/test_numpy_interop 2023-01-11T21:42:50.9581151Z 2023-01-11T21:42:50.9581535Z ##[endgroup] 2023-01-11T21:42:50.9582083Z FINISHED PRINTING LOG FILE of test_numpy_interop (/var/lib/jenkins/workspace/test/test-reports/test_numpy_interop_dh069_5_) 2023-01-11T21:42:50.9582308Z 2023-01-11T21:42:52.6597935Z 2023-01-11T21:42:52.6598547Z Expand the folded group to see the log file of test_nvfuser_dynamo 2023-01-11T21:42:52.6601085Z ##[group]PRINTING LOG FILE of test_nvfuser_dynamo (/var/lib/jenkins/workspace/test/test-reports/test_nvfuser_dynamo_ult0d520) 2023-01-11T21:42:52.6601660Z 2023-01-11T21:42:52.6601825Z Running tests... 2023-01-11T21:42:52.6602640Z ---------------------------------------------------------------------- 2023-01-11T21:42:52.6603777Z test_basic (__main__.TestNvFuserDynamo) ... Test results will be stored in test-reports/python-unittest/test_nvfuser_dynamo 2023-01-11T21:42:52.6604769Z skip: requires CUDA (0.001s) 2023-01-11T21:42:52.6605463Z test_batch_norm_implicit_dtype_promotion (__main__.TestNvFuserDynamo) ... skip: requires CUDA (0.001s) 2023-01-11T21:42:52.6606312Z test_dtype_correctness (__main__.TestNvFuserDynamo) ... skip: requires CUDA (0.000s) 2023-01-11T21:42:52.6607053Z test_min_cut (__main__.TestNvFuserDynamo) ... skip: requires CUDA (0.001s) 2023-01-11T21:42:52.6607459Z 2023-01-11T21:42:52.6607935Z ---------------------------------------------------------------------- 2023-01-11T21:42:52.6608472Z Ran 4 tests in 0.003s 2023-01-11T21:42:52.6608731Z 2023-01-11T21:42:52.6608894Z OK (skipped=4) 2023-01-11T21:42:52.6609148Z 2023-01-11T21:42:52.6609350Z Generating XML reports... 2023-01-11T21:42:52.6610403Z Generated XML report: test-reports/python-unittest/test_nvfuser_dynamo/TEST-TestNvFuserDynamo-20230111214252.xml 2023-01-11T21:42:52.6610958Z 2023-01-11T21:42:52.6611510Z ##[endgroup] 2023-01-11T21:42:52.6612651Z FINISHED PRINTING LOG FILE of test_nvfuser_dynamo (/var/lib/jenkins/workspace/test/test-reports/test_nvfuser_dynamo_ult0d520) 2023-01-11T21:42:52.6613199Z 2023-01-11T21:42:52.8705240Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:42:52.9374382Z Ignoring disabled issues: [] 2023-01-11T21:42:52.9536506Z Running test_nvfuser_frontend ... 
[2023-01-11 21:42:52.953373] 2023-01-11T21:42:52.9538915Z Executing ['/opt/conda/bin/python', '-bb', 'test_nvfuser_frontend.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:42:52.953618] 2023-01-11T21:42:54.5930632Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:42:54.6421342Z 2023-01-11T21:42:54.6421891Z Expand the folded group to see the log file of test_nvfuser_frontend 2023-01-11T21:42:54.6423170Z ##[group]PRINTING LOG FILE of test_nvfuser_frontend (/var/lib/jenkins/workspace/test/test-reports/test_nvfuser_frontend_wsi6t0nt) 2023-01-11T21:42:54.6423623Z 2023-01-11T21:42:54.6423758Z Running tests... 2023-01-11T21:42:54.6424200Z ---------------------------------------------------------------------- 2023-01-11T21:42:54.6424680Z test_basic (__main__.TestNvFuserFrontend) ... Test results will be stored in test-reports/python-unittest/test_nvfuser_frontend 2023-01-11T21:42:54.6424979Z skip: requires CUDA (0.001s) 2023-01-11T21:42:54.6425244Z test_basic_fp16 (__main__.TestNvFuserFrontend) ... skip: requires CUDA (0.001s) 2023-01-11T21:42:54.6425573Z test_broadcast_mixing (__main__.TestNvFuserFrontend) ... skip: requires CUDA (0.001s) 2023-01-11T21:42:54.6425900Z test_cast_double_to_half (__main__.TestNvFuserFrontend) ... skip: requires CUDA (0.001s) 2023-01-11T21:42:54.6426242Z test_explicit_broadcast_input (__main__.TestNvFuserFrontend) ... skip: requires CUDA (0.001s) 2023-01-11T21:42:54.6426591Z test_implicit_broadcast_input (__main__.TestNvFuserFrontend) ... skip: requires CUDA (0.001s) 2023-01-11T21:42:54.6426931Z test_ops_broadcast (__main__.TestNvFuserFrontend) ... skip: requires CUDA (0.000s) 2023-01-11T21:42:54.6427249Z test_prim_layer_norm_fwd (__main__.TestNvFuserFrontend) ... skip: requires CUDA (0.003s) 2023-01-11T21:42:54.6427581Z test_prim_rms_norm_fwd (__main__.TestNvFuserFrontend) ... skip: requires CUDA (0.001s) 2023-01-11T21:42:54.6427910Z test_promote_to_double (__main__.TestNvFuserFrontend) ... skip: requires CUDA (0.000s) 2023-01-11T21:42:54.6428091Z 2023-01-11T21:42:54.6428280Z ---------------------------------------------------------------------- 2023-01-11T21:42:54.6428522Z Ran 10 tests in 0.009s 2023-01-11T21:42:54.6428635Z 2023-01-11T21:42:54.6428708Z OK (skipped=10) 2023-01-11T21:42:54.6428816Z 2023-01-11T21:42:54.6428899Z Generating XML reports... 2023-01-11T21:42:54.6429332Z Generated XML report: test-reports/python-unittest/test_nvfuser_frontend/TEST-TestNvFuserFrontend-20230111214254.xml 2023-01-11T21:42:54.6429584Z 2023-01-11T21:42:54.6429808Z ##[endgroup] 2023-01-11T21:42:54.6430211Z FINISHED PRINTING LOG FILE of test_nvfuser_frontend (/var/lib/jenkins/workspace/test/test-reports/test_nvfuser_frontend_wsi6t0nt) 2023-01-11T21:42:54.6430635Z 2023-01-11T21:42:54.6584145Z Ignoring disabled issues: [] 2023-01-11T21:42:54.6744557Z Running test_openmp ... [2023-01-11 21:42:54.674175] 2023-01-11T21:42:54.6746520Z Executing ['/opt/conda/bin/python', '-bb', 'test_openmp.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:42:54.674434] 2023-01-11T21:42:56.7108689Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:42:56.8033717Z Ignoring disabled issues: [] 2023-01-11T21:42:56.8285796Z Running test_optim ... [2023-01-11 21:42:56.828193] 2023-01-11T21:42:56.8287492Z Executing ['/opt/conda/bin/python', '-bb', 'test_optim.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... 
[2023-01-11 21:42:56.828506] 2023-01-11T21:42:59.5996548Z 2023-01-11T21:42:59.5997043Z Expand the folded group to see the log file of test_openmp 2023-01-11T21:42:59.5998217Z ##[group]PRINTING LOG FILE of test_openmp (/var/lib/jenkins/workspace/test/test-reports/test_openmp_m_mtlrq1) 2023-01-11T21:42:59.5998445Z 2023-01-11T21:42:59.5998520Z Running tests... 2023-01-11T21:42:59.5998937Z ---------------------------------------------------------------------- 2023-01-11T21:42:59.5999314Z Test results will be stored in test-reports/python-unittest/test_openmp 2023-01-11T21:42:59.5999591Z test_n_threads (__main__.TestOpenMP_ParallelFor) 2023-01-11T21:42:59.5999862Z Make sure there is no memory leak with many threads ... ok (1.165s) 2023-01-11T21:42:59.6000127Z test_one_thread (__main__.TestOpenMP_ParallelFor) 2023-01-11T21:42:59.6000487Z Make sure there is no memory leak with one thread: issue gh-32284 ... ok (2.007s) 2023-01-11T21:42:59.6000656Z 2023-01-11T21:42:59.6000856Z ---------------------------------------------------------------------- 2023-01-11T21:42:59.6001097Z Ran 2 tests in 3.172s 2023-01-11T21:42:59.6001211Z 2023-01-11T21:42:59.6001259Z OK 2023-01-11T21:42:59.6001350Z 2023-01-11T21:42:59.6001439Z Generating XML reports... 2023-01-11T21:42:59.6001851Z Generated XML report: test-reports/python-unittest/test_openmp/TEST-TestOpenMP_ParallelFor-20230111214256.xml 2023-01-11T21:42:59.6002082Z 2023-01-11T21:42:59.6002301Z ##[endgroup] 2023-01-11T21:42:59.6002667Z FINISHED PRINTING LOG FILE of test_openmp (/var/lib/jenkins/workspace/test/test-reports/test_openmp_m_mtlrq1) 2023-01-11T21:42:59.6002878Z 2023-01-11T21:43:01.5127146Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:43:01.5784861Z Ignoring disabled issues: [] 2023-01-11T21:43:01.5945313Z Running test_package ... [2023-01-11 21:43:01.594201] 2023-01-11T21:43:01.5947037Z Executing ['/opt/conda/bin/python', '-bb', 'test_package.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:43:01.594465] 2023-01-11T21:43:05.7991357Z 2023-01-11T21:43:05.7992242Z Expand the folded group to see the log file of test_package 2023-01-11T21:43:05.7993244Z ##[group]PRINTING LOG FILE of test_package (/var/lib/jenkins/workspace/test/test-reports/test_package_8fwzjqhn) 2023-01-11T21:43:05.7993567Z 2023-01-11T21:43:05.7993670Z Running tests... 2023-01-11T21:43:05.7994306Z ---------------------------------------------------------------------- 2023-01-11T21:43:05.7994844Z Test results will be stored in test-reports/python-unittest/test_package 2023-01-11T21:43:05.7996160Z test_trace_dependencies (test_analyze.TestAnalyze) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/81213 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.213s) 2023-01-11T21:43:05.7997104Z test_allow_empty_with_error (test_dependency_api.TestDependencyAPI) 2023-01-11T21:43:05.7997619Z If an error occurs during packaging, it should not be shadowed by the allow_empty error. ... ok (0.002s) 2023-01-11T21:43:05.7998119Z test_broken_dependency (test_dependency_api.TestDependencyAPI) 2023-01-11T21:43:05.7998823Z A unpackageable dependency should raise a PackagingError. ... ok (0.002s) 2023-01-11T21:43:05.7999258Z test_deny (test_dependency_api.TestDependencyAPI) 2023-01-11T21:43:05.7999686Z Test marking packages as "deny" during export. ... 
ok (0.002s) 2023-01-11T21:43:05.8000114Z test_deny_glob (test_dependency_api.TestDependencyAPI) 2023-01-11T21:43:05.8000572Z Test marking packages as "deny" using globs instead of package names. ... ok (0.003s) 2023-01-11T21:43:05.8001099Z test_extern (test_dependency_api.TestDependencyAPI) ... ok (0.001s) 2023-01-11T21:43:05.8001592Z test_extern_glob (test_dependency_api.TestDependencyAPI) ... ok (0.002s) 2023-01-11T21:43:05.8002129Z test_extern_glob_allow_empty (test_dependency_api.TestDependencyAPI) 2023-01-11T21:43:05.8002668Z Test that an error is thrown when a extern glob is specified with allow_empty=True ... ok (0.001s) 2023-01-11T21:43:05.8003196Z test_externing_c_extension (test_dependency_api.TestDependencyAPI) 2023-01-11T21:43:05.8004892Z Externing c extensions modules should allow us to still access them especially those found in torch._C. ... /opt/conda/lib/python3.10/site-packages/torch/package/package_exporter.py:900: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:43:05.8041150Z storage_type_str = obj.pickle_storage_type() 2023-01-11T21:43:05.8042762Z /opt/conda/lib/python3.10/site-packages/torch/package/package_exporter.py:903: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:43:05.8043763Z storage_numel = obj.size() 2023-01-11T21:43:05.8045042Z /opt/conda/lib/python3.10/site-packages/torch/_utils.py:768: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:43:05.8045901Z return self.fget.__get__(instance, owner)() 2023-01-11T21:43:05.8046198Z ok (0.007s) 2023-01-11T21:43:05.8046592Z test_implicit_intern (test_dependency_api.TestDependencyAPI) 2023-01-11T21:43:05.8047133Z The save_module APIs should implicitly intern the module being saved. ... ok (0.001s) 2023-01-11T21:43:05.8047670Z test_intern_error (test_dependency_api.TestDependencyAPI) 2023-01-11T21:43:05.8048180Z Failure to handle all dependencies should lead to an error. ... ok (0.002s) 2023-01-11T21:43:05.8048694Z test_invalid_import (test_dependency_api.TestDependencyAPI) 2023-01-11T21:43:05.8049353Z An incorrectly-formed import should raise a PackagingError. ... ok (0.001s) 2023-01-11T21:43:05.8049900Z test_mock (test_dependency_api.TestDependencyAPI) ... ok (0.002s) 2023-01-11T21:43:05.8050404Z test_mock_glob (test_dependency_api.TestDependencyAPI) ... ok (0.002s) 2023-01-11T21:43:05.8050941Z test_mock_glob_allow_empty (test_dependency_api.TestDependencyAPI) 2023-01-11T21:43:05.8051512Z Test that an error is thrown when a mock glob is specified with allow_empty=True ... ok (0.001s) 2023-01-11T21:43:05.8052074Z test_pickle_mocked (test_dependency_api.TestDependencyAPI) ... ok (0.001s) 2023-01-11T21:43:05.8052630Z test_pickle_mocked_all (test_dependency_api.TestDependencyAPI) ... 
ok (0.001s) 2023-01-11T21:43:05.8053203Z test_repackage_mocked_module (test_dependency_api.TestDependencyAPI) 2023-01-11T21:43:05.8053925Z Re-packaging a package that contains a mocked module should work correctly. ... ok (0.003s) 2023-01-11T21:43:05.8054513Z test_extern_and_mock_hook (test_dependency_hooks.TestDependencyHooks) ... ok (0.001s) 2023-01-11T21:43:05.8055283Z test_multiple_extern_hooks (test_dependency_hooks.TestDependencyHooks) ... ok (0.001s) 2023-01-11T21:43:05.8055881Z test_multiple_mock_hooks (test_dependency_hooks.TestDependencyHooks) ... ok (0.001s) 2023-01-11T21:43:05.8056466Z test_remove_hooks (test_dependency_hooks.TestDependencyHooks) ... ok (0.001s) 2023-01-11T21:43:05.8057007Z test_single_hook (test_dependency_hooks.TestDependencyHooks) ... ok (0.001s) 2023-01-11T21:43:05.8057521Z test_all_paths (test_digraph.TestDiGraph) ... ok (0.001s) 2023-01-11T21:43:05.8058002Z test_contains (test_digraph.TestDiGraph) ... ok (0.001s) 2023-01-11T21:43:05.8058474Z test_contains_non_hashable (test_digraph.TestDiGraph) ... ok (0.000s) 2023-01-11T21:43:05.8058961Z test_edges (test_digraph.TestDiGraph) ... ok (0.001s) 2023-01-11T21:43:05.8059440Z test_forward_closure (test_digraph.TestDiGraph) ... ok (0.001s) 2023-01-11T21:43:05.8059887Z test_iter (test_digraph.TestDiGraph) ... ok (0.001s) 2023-01-11T21:43:05.8060436Z test_node_attr_update (test_digraph.TestDiGraph) ... ok (0.001s) 2023-01-11T21:43:05.8060925Z test_node_attrs (test_digraph.TestDiGraph) ... ok (0.001s) 2023-01-11T21:43:05.8061424Z test_predecessor_not_in_graph (test_digraph.TestDiGraph) ... ok (0.001s) 2023-01-11T21:43:05.8061921Z test_predecessors (test_digraph.TestDiGraph) ... ok (0.001s) 2023-01-11T21:43:05.8062620Z test_successor_not_in_graph (test_digraph.TestDiGraph) ... ok (0.001s) 2023-01-11T21:43:05.8063061Z test_successors (test_digraph.TestDiGraph) ... ok (0.001s) 2023-01-11T21:43:05.8063396Z test_importer_access (test_directory_reader.DirectoryReaderTest) ... ok (0.003s) 2023-01-11T21:43:05.8063727Z test_loading_has_record (test_directory_reader.DirectoryReaderTest) 2023-01-11T21:43:05.8064182Z Test DirectoryReader's has_record(). ... ok (0.003s) 2023-01-11T21:43:05.8064714Z test_loading_module (test_directory_reader.DirectoryReaderTest) 2023-01-11T21:43:05.8065191Z Test basic saving and loading of a packages from a DirectoryReader. ... ok (0.003s) 2023-01-11T21:43:05.8065767Z test_loading_pickle (test_directory_reader.DirectoryReaderTest) 2023-01-11T21:43:05.8066462Z Test basic saving and loading of modules and pickles from a DirectoryReader. ... skip: Does not work with latest TorchVision, see https://github.com/pytorch/pytorch/issues/81115 (0.001s) 2023-01-11T21:43:05.8067040Z test_package_resource_access (test_directory_reader.DirectoryReaderTest) 2023-01-11T21:43:05.8067530Z Packaged modules should be able to use the importlib.resources API to access ... ok (0.003s) 2023-01-11T21:43:05.8068110Z test_resource_access_by_path (test_directory_reader.DirectoryReaderTest) 2023-01-11T21:43:05.8068606Z Tests that packaged code can used importlib.resources.path. ... ok (0.004s) 2023-01-11T21:43:05.8069141Z test_resource_reader (test_directory_reader.DirectoryReaderTest) 2023-01-11T21:43:05.8069682Z Tests DirectoryReader as the base for get_resource_reader. ... ok (0.007s) 2023-01-11T21:43:05.8070293Z test_scriptobject_failure_message (test_directory_reader.DirectoryReaderTest) 2023-01-11T21:43:05.8070885Z Test basic saving and loading of a ScriptModule in a directory. ... 
ok (0.011s) 2023-01-11T21:43:05.8071262Z test_exclude (test_glob_group.TestGlobGroup) ... ok (0.001s) 2023-01-11T21:43:05.8071548Z test_exclude_from_all (test_glob_group.TestGlobGroup) ... ok (0.001s) 2023-01-11T21:43:05.8071970Z test_invalid_raw (test_glob_group.TestGlobGroup) ... ok (0.000s) 2023-01-11T21:43:05.8072373Z test_list_include_exclude (test_glob_group.TestGlobGroup) ... ok (0.001s) 2023-01-11T21:43:05.8072879Z test_one_star (test_glob_group.TestGlobGroup) ... ok (0.001s) 2023-01-11T21:43:05.8073346Z test_one_star_middle (test_glob_group.TestGlobGroup) ... ok (0.001s) 2023-01-11T21:43:05.8073903Z test_one_star_multiple_in_component (test_glob_group.TestGlobGroup) ... ok (0.001s) 2023-01-11T21:43:05.8074552Z test_one_star_partial (test_glob_group.TestGlobGroup) ... ok (0.001s) 2023-01-11T21:43:05.8075283Z test_one_star_partial_extension (test_glob_group.TestGlobGroup) ... ok (0.001s) 2023-01-11T21:43:05.8075831Z test_raw_two_star (test_glob_group.TestGlobGroup) ... ok (0.001s) 2023-01-11T21:43:05.8076304Z test_two_star (test_glob_group.TestGlobGroup) ... ok (0.001s) 2023-01-11T21:43:05.8076812Z test_two_star_end (test_glob_group.TestGlobGroup) ... ok (0.001s) 2023-01-11T21:43:05.8077200Z test_two_star_middle (test_glob_group.TestGlobGroup) ... ok (0.001s) 2023-01-11T21:43:05.8077497Z test_two_star_multiple (test_glob_group.TestGlobGroup) ... ok (0.001s) 2023-01-11T21:43:05.8077798Z test_ordered_importer_basic (test_importer.TestImporter) ... ok (0.001s) 2023-01-11T21:43:05.8078157Z test_ordered_importer_whichmodule (test_importer.TestImporter) 2023-01-11T21:43:05.8078574Z OrderedImporter's implementation of whichmodule should try each ... ok (0.001s) 2023-01-11T21:43:05.8078896Z test_package_importer_whichmodule_no_dunder_module (test_importer.TestImporter) 2023-01-11T21:43:05.8079489Z Exercise corner case where we try to pickle an object whose ... ok (0.001s) 2023-01-11T21:43:05.8080013Z test_single_ordered_importer (test_importer.TestImporter) ... ok (0.001s) 2023-01-11T21:43:05.8080310Z test_sys_importer (test_importer.TestImporter) ... ok (0.001s) 2023-01-11T21:43:05.8080594Z test_sys_importer_roundtrip (test_importer.TestImporter) ... ok (0.001s) 2023-01-11T21:43:05.8080916Z test_load_bc_packages_fx_module (test_load_bc_packages.TestLoadBCPackages) 2023-01-11T21:43:05.8081217Z Tests for backwards compatible fx module ... ok (0.011s) 2023-01-11T21:43:05.8081504Z test_load_bc_packages_nn_module (test_load_bc_packages.TestLoadBCPackages) 2023-01-11T21:43:05.8081795Z Tests for backwards compatible nn module ... ok (0.007s) 2023-01-11T21:43:05.8082108Z test_load_bc_packages_torchscript_module (test_load_bc_packages.TestLoadBCPackages) 2023-01-11T21:43:05.8082429Z Tests for backwards compatible torchscript module ... ok (0.036s) 2023-01-11T21:43:05.8082692Z test_demangle_base (test_mangling.TestMangling) 2023-01-11T21:43:05.8083000Z Demangling a mangle parent directly should currently return an empty string. ... ok (0.001s) 2023-01-11T21:43:05.8083320Z test_demangler_multiple_manglers (test_mangling.TestMangling) 2023-01-11T21:43:05.8083645Z PackageDemangler should be able to demangle name generated by any PackageMangler. ... ok (0.001s) 2023-01-11T21:43:05.8083967Z test_is_mangled (test_mangling.TestMangling) ... ok (0.001s) 2023-01-11T21:43:05.8084256Z test_mangle_empty_errors (test_mangling.TestMangling) ... ok (0.000s) 2023-01-11T21:43:05.8084535Z test_mangle_prefix (test_mangling.TestMangling) ... 
ok (0.001s) 2023-01-11T21:43:05.8084813Z test_mangler_is_consistent (test_mangling.TestMangling) 2023-01-11T21:43:05.8085104Z Mangling the same name twice should produce the same result. ... ok (0.001s) 2023-01-11T21:43:05.8085401Z test_package_mangler (test_mangling.TestMangling) ... ok (0.001s) 2023-01-11T21:43:05.8085685Z test_roundtrip_mangling (test_mangling.TestMangling) ... ok (0.001s) 2023-01-11T21:43:05.8085965Z test_unique_manglers (test_mangling.TestMangling) 2023-01-11T21:43:05.8086270Z Each mangler instance should generate a unique mangled name for a given input. ... ok (0.001s) 2023-01-11T21:43:05.8086577Z test_unique_module_names (test_mangling.TestMangling) ... ok (0.002s) 2023-01-11T21:43:05.8086851Z test_dunder_package_present (test_misc.TestMisc) 2023-01-11T21:43:05.8087235Z The attribute '__torch_package__' should be populated on imported modules. ... ok (0.001s) 2023-01-11T21:43:05.8087542Z test_dunder_package_works_from_package (test_misc.TestMisc) 2023-01-11T21:43:05.8087892Z The attribute '__torch_package__' should be accessible from within ... ok (0.002s) 2023-01-11T21:43:05.8088171Z test_exporter_content_lists (test_misc.TestMisc) 2023-01-11T21:43:05.8088520Z Test content list API for PackageExporter's contained modules. ... ok (0.003s) 2023-01-11T21:43:05.8088778Z test_file_structure (test_misc.TestMisc) 2023-01-11T21:43:05.8089187Z Tests package's Directory structure representation of a zip file. Ensures ... ok (0.002s) 2023-01-11T21:43:05.8089480Z test_file_structure_has_file (test_misc.TestMisc) 2023-01-11T21:43:05.8089761Z Test Directory's has_file() method. ... ok (0.001s) 2023-01-11T21:43:05.8089995Z test_inspect_class (test_misc.TestMisc) 2023-01-11T21:43:05.8090256Z Should be able to retrieve source for a packaged class. ... ok (0.002s) 2023-01-11T21:43:05.8090515Z test_is_from_package (test_misc.TestMisc) 2023-01-11T21:43:05.8090760Z is_from_package should work for objects and modules ... ok (0.001s) 2023-01-11T21:43:05.8091036Z test_load_python_version_from_package (test_misc.TestMisc) 2023-01-11T21:43:05.8091315Z Tests loading a package with a python version embdded ... ok (0.002s) 2023-01-11T21:43:05.8091598Z test_loaders_that_remap_files_work_ok (test_misc.TestMisc) ... ok (0.002s) 2023-01-11T21:43:05.8091860Z test_python_version (test_misc.TestMisc) 2023-01-11T21:43:05.8092181Z Tests that the current python version is stored in the package and is available ... ok (0.002s) 2023-01-11T21:43:05.8092471Z test_std_lib_sys_hackery_checks (test_misc.TestMisc) 2023-01-11T21:43:05.8092759Z The standard library performs sys.module assignment hackery which ... ok (0.003s) 2023-01-11T21:43:05.8093166Z test_model_save (test_model.ModelTest) ... skip: Does not work with recent torchvision, see https://github.com/pytorch/pytorch/issues/81115 (0.001s) 2023-01-11T21:43:05.8093622Z test_resnet (test_model.ModelTest) ... skip: Does not work with recent torchvision, see https://github.com/pytorch/pytorch/issues/81115 (0.001s) 2023-01-11T21:43:05.8094069Z test_script_resnet (test_model.ModelTest) ... skip: Does not work with recent torchvision, see https://github.com/pytorch/pytorch/issues/81115 (0.000s) 2023-01-11T21:43:05.8094456Z test_package_fx_custom_tracer (test_package_fx.TestPackageFX) ... ok (0.009s) 2023-01-11T21:43:05.8094762Z test_package_fx_package (test_package_fx.TestPackageFX) ... ok (0.017s) 2023-01-11T21:43:05.8095066Z test_package_fx_simple (test_package_fx.TestPackageFX) ... 
ok (0.005s) 2023-01-11T21:43:05.8095359Z test_package_fx_with_imports (test_package_fx.TestPackageFX) ... ok (0.005s) 2023-01-11T21:43:05.8095658Z test_package_then_fx (test_package_fx.TestPackageFX) ... ok (0.004s) 2023-01-11T21:43:05.8095972Z test_different_package_interface (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8096341Z Test a case where the interface defined in the package is ... ok (0.012s) 2023-01-11T21:43:05.8096646Z test_different_package_script_class (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8096964Z Test a case where the script class defined in the package is ... ok (0.006s) 2023-01-11T21:43:05.8097282Z test_load_shared_scriptmodules (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8097602Z Test loading of single ScriptModule shared by multiple eager ... ok (0.008s) 2023-01-11T21:43:05.8097897Z test_load_shared_tensors (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8098563Z Test tensors shared across eager and ScriptModules on load ... /var/lib/jenkins/workspace/test/package/test_package_script.py:547: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:43:05.8099157Z shared_tensor.storage()._cdata, 2023-01-11T21:43:05.8099703Z /var/lib/jenkins/workspace/test/package/test_package_script.py:548: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:43:05.8100235Z scripted_mod_0.tensor.storage()._cdata, 2023-01-11T21:43:05.8100775Z /var/lib/jenkins/workspace/test/package/test_package_script.py:551: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:43:05.8101323Z shared_tensor.storage()._cdata, 2023-01-11T21:43:05.8101858Z /var/lib/jenkins/workspace/test/package/test_package_script.py:552: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:43:05.8102502Z scripted_mod_1.tensor.storage()._cdata, 2023-01-11T21:43:05.8103165Z /var/lib/jenkins/workspace/test/package/test_package_script.py:565: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:43:05.8103686Z loaded_mod_1.tensor.storage()._cdata, 2023-01-11T21:43:05.8104229Z /var/lib/jenkins/workspace/test/package/test_package_script.py:566: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:43:05.8104766Z loaded_mod_1.sub_mod_0.tensor.storage()._cdata, 2023-01-11T21:43:05.8105317Z /var/lib/jenkins/workspace/test/package/test_package_script.py:569: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:43:05.8105839Z loaded_mod_1.tensor.storage()._cdata, 2023-01-11T21:43:05.8106371Z /var/lib/jenkins/workspace/test/package/test_package_script.py:570: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:43:05.8106899Z loaded_mod_1.sub_mod_1.tensor.storage()._cdata, 2023-01-11T21:43:05.8107102Z ok (0.008s) 2023-01-11T21:43:05.8107344Z test_load_shared_tensors_repackaged (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8108015Z Test tensors shared across eager and ScriptModules on load ... /var/lib/jenkins/workspace/test/package/test_package_script.py:619: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:43:05.8108605Z loaded_mod_1.tensor.storage()._cdata, 2023-01-11T21:43:05.8109160Z /var/lib/jenkins/workspace/test/package/test_package_script.py:620: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:43:05.8109700Z loaded_mod_1.sub_mod_0.tensor.storage()._cdata, 2023-01-11T21:43:05.8110258Z /var/lib/jenkins/workspace/test/package/test_package_script.py:623: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:43:05.8110804Z loaded_mod_1.tensor.storage()._cdata, 2023-01-11T21:43:05.8111349Z /var/lib/jenkins/workspace/test/package/test_package_script.py:624: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:43:05.8111882Z loaded_mod_1.sub_mod_1.tensor.storage()._cdata, 2023-01-11T21:43:05.8112083Z ok (0.012s) 2023-01-11T21:43:05.8112329Z test_mixing_packaged_and_inline_modules (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8112662Z Test saving inline and imported modules in same package with ... 
ok (0.047s) 2023-01-11T21:43:05.8113006Z test_mixing_packaged_and_inline_modules_shared_code (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8113360Z Test saving inline and imported modules in same package that ... ok (1.047s) 2023-01-11T21:43:05.8113669Z test_package_interface (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8113970Z Packaging an interface class should work correctly. ... ok (0.040s) 2023-01-11T21:43:05.8114351Z test_package_script_class (test_package_script.TestPackageScript) ... ok (0.005s) 2023-01-11T21:43:05.8114702Z test_package_script_class_referencing_self (test_package_script.TestPackageScript) ... ok (0.013s) 2023-01-11T21:43:05.8115067Z test_save_eager_mods_sharing_scriptmodule (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8115389Z Test saving of single ScriptModule shared by multiple ... ok (0.005s) 2023-01-11T21:43:05.8115695Z test_save_independent_scriptmodules (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8116022Z Test to verify saving multiple ScriptModules with completely ... ok (0.008s) 2023-01-11T21:43:05.8116339Z test_save_repeat_scriptmodules (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8116654Z Test to verify saving multiple different modules and ... ok (0.042s) 2023-01-11T21:43:05.8116941Z test_save_scriptmodule (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8117216Z Test basic saving of ScriptModule. ... ok (0.004s) 2023-01-11T21:43:05.8117498Z test_save_scriptmodule_file (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8117774Z Test basic saving of ScriptModule in file. ... ok (0.004s) 2023-01-11T21:43:05.8118079Z test_save_scriptmodule_only_necessary_code (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8118402Z Test to verify when saving multiple packages with same CU ... ok (0.431s) 2023-01-11T21:43:05.8118704Z test_save_scriptmodule_with_submods (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8119014Z Test basic saving of ScriptModule with submodule. ... ok (0.007s) 2023-01-11T21:43:05.8119322Z test_save_scriptmodules_in_container (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8119660Z Test saving of ScriptModules inside of container. Checks that relations ... ok (0.035s) 2023-01-11T21:43:05.8119995Z test_save_scriptmodules_submod_redefinition (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8120330Z Test to verify saving multiple ScriptModules with same top module ... ok (0.012s) 2023-01-11T21:43:05.8120638Z test_save_shared_tensors (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8120946Z Test tensors shared across eager and ScriptModules are serialized once. ... ok (0.007s) 2023-01-11T21:43:05.8121280Z test_saving_and_scripting_packaged_mod (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8121589Z Test scripting a module loaded from a package ... ok (0.008s) 2023-01-11T21:43:05.8121886Z test_scriptmodules_repeat_save (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8122191Z Test to verify saving and loading same ScriptModule object works ... ok (0.010s) 2023-01-11T21:43:05.8122500Z test_tensor_sharing_pickle (test_package_script.TestPackageScript) 2023-01-11T21:43:05.8122877Z Test that saving a ScriptModule and a separately saving a tensor ... ok (0.004s) 2023-01-11T21:43:05.8123216Z test_repackage_import_indirectly_via_parent_module (test_repackage.TestRepackage) ... ok (0.005s) 2023-01-11T21:43:05.8123552Z test_importer_access (test_resources.TestResources) ... 
ok (0.002s) 2023-01-11T21:43:05.8123852Z test_package_resource_access (test_resources.TestResources) 2023-01-11T21:43:05.8124176Z Packaged modules should be able to use the importlib.resources API to access ... ok (0.001s) 2023-01-11T21:43:05.8124484Z test_resource_access_by_path (test_resources.TestResources) 2023-01-11T21:43:05.8124792Z Tests that packaged code can used importlib.resources.path. ... ok (0.002s) 2023-01-11T21:43:05.8125091Z test_resource_reader (test_resources.TestResources) 2023-01-11T21:43:05.8125365Z Test compliance with the get_resource_reader importlib API. ... ok (0.003s) 2023-01-11T21:43:05.8125654Z test_bad_dunder_imports (test_save_load.TestSaveLoad) 2023-01-11T21:43:05.8126097Z Test to ensure bad __imports__ don't cause PackageExporter to fail. ... ok (0.001s) 2023-01-11T21:43:05.8126403Z test_dunder_imports (test_save_load.TestSaveLoad) ... ok (0.003s) 2023-01-11T21:43:05.8126678Z test_exporting_mismatched_code (test_save_load.TestSaveLoad) 2023-01-11T21:43:05.8126979Z If an object with the same qualified name is loaded from different ... ok (0.004s) 2023-01-11T21:43:05.8127270Z test_pickle (test_save_load.TestSaveLoad) ... ok (0.002s) 2023-01-11T21:43:05.8127527Z test_save_imported_module (test_save_load.TestSaveLoad) 2023-01-11T21:43:05.8127826Z Saving a module that came from another PackageImporter should work. ... ok (0.002s) 2023-01-11T21:43:05.8128152Z test_save_imported_module_using_package_importer (test_save_load.TestSaveLoad) 2023-01-11T21:43:05.8128563Z Exercise a corner case: re-packaging a module that uses `torch_package_importer` ... ok (0.002s) 2023-01-11T21:43:05.8128879Z test_save_module (test_save_load.TestSaveLoad) ... ok (0.002s) 2023-01-11T21:43:05.8129173Z test_save_module_binary (test_save_load.TestSaveLoad) ... ok (0.001s) 2023-01-11T21:43:05.8129458Z test_saving_source (test_save_load.TestSaveLoad) ... ok (0.004s) 2023-01-11T21:43:05.8129722Z test_saving_string (test_save_load.TestSaveLoad) ... ok (0.001s) 2023-01-11T21:43:05.8129883Z 2023-01-11T21:43:05.8130086Z ---------------------------------------------------------------------- 2023-01-11T21:43:05.8130331Z Ran 133 tests in 2.249s 2023-01-11T21:43:05.8130447Z 2023-01-11T21:43:05.8130505Z OK (skipped=5) 2023-01-11T21:43:05.8130612Z 2023-01-11T21:43:05.8130696Z Generating XML reports... 
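The repeated TypedStorage UserWarnings in the test_package run above all point at the same migration: code that reaches into a tensor's storage via tensor.storage() (which returns a TypedStorage) should move to tensor.untyped_storage(). A minimal sketch of the change the warning text suggests, assuming a PyTorch build that already exposes Tensor.untyped_storage() (as the build under test does); comparing data pointers is only one illustrative way to check sharing, not what the tests above literally do:

```python
import torch

t = torch.arange(4, dtype=torch.float32)

# Deprecated path: returns a TypedStorage and triggers the UserWarning seen above.
legacy_storage = t.storage()

# Replacement named by the warning: work with the untyped storage directly.
storage = t.untyped_storage()

# A view shares the same underlying storage; data_ptr() equality is one way to see it.
view = t[1:3]
assert view.untyped_storage().data_ptr() == t.untyped_storage().data_ptr()
```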
2023-01-11T21:43:05.8131145Z Generated XML report: test-reports/python-unittest/test_package/TEST-test_dependency_api.TestDependencyAPI-20230111214303.xml 2023-01-11T21:43:05.8131732Z Generated XML report: test-reports/python-unittest/test_package/TEST-test_dependency_hooks.TestDependencyHooks-20230111214303.xml 2023-01-11T21:43:05.8132276Z Generated XML report: test-reports/python-unittest/test_package/TEST-test_digraph.TestDiGraph-20230111214303.xml 2023-01-11T21:43:05.8132839Z Generated XML report: test-reports/python-unittest/test_package/TEST-test_directory_reader.DirectoryReaderTest-20230111214303.xml 2023-01-11T21:43:05.8133406Z Generated XML report: test-reports/python-unittest/test_package/TEST-test_glob_group.TestGlobGroup-20230111214303.xml 2023-01-11T21:43:05.8133937Z Generated XML report: test-reports/python-unittest/test_package/TEST-test_importer.TestImporter-20230111214303.xml 2023-01-11T21:43:05.8134497Z Generated XML report: test-reports/python-unittest/test_package/TEST-test_load_bc_packages.TestLoadBCPackages-20230111214303.xml 2023-01-11T21:43:05.8135055Z Generated XML report: test-reports/python-unittest/test_package/TEST-test_mangling.TestMangling-20230111214303.xml 2023-01-11T21:43:05.8135557Z Generated XML report: test-reports/python-unittest/test_package/TEST-test_misc.TestMisc-20230111214303.xml 2023-01-11T21:43:05.8136070Z Generated XML report: test-reports/python-unittest/test_package/TEST-test_package_fx.TestPackageFX-20230111214303.xml 2023-01-11T21:43:05.8136654Z Generated XML report: test-reports/python-unittest/test_package/TEST-test_package_script.TestPackageScript-20230111214303.xml 2023-01-11T21:43:05.8137212Z Generated XML report: test-reports/python-unittest/test_package/TEST-test_repackage.TestRepackage-20230111214303.xml 2023-01-11T21:43:05.8137752Z Generated XML report: test-reports/python-unittest/test_package/TEST-test_resources.TestResources-20230111214303.xml 2023-01-11T21:43:05.8138283Z Generated XML report: test-reports/python-unittest/test_package/TEST-test_save_load.TestSaveLoad-20230111214303.xml 2023-01-11T21:43:05.8138787Z Generated XML report: test-reports/python-unittest/test_package/TEST-test_analyze.TestAnalyze-20230111214303.xml 2023-01-11T21:43:05.8139291Z Generated XML report: test-reports/python-unittest/test_package/TEST-test_model.ModelTest-20230111214303.xml 2023-01-11T21:43:05.8139513Z 2023-01-11T21:43:05.8139870Z ##[endgroup] 2023-01-11T21:43:05.8140281Z FINISHED PRINTING LOG FILE of test_package (/var/lib/jenkins/workspace/test/test-reports/test_package_8fwzjqhn) 2023-01-11T21:43:05.8140493Z 2023-01-11T21:43:08.0118223Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:43:08.1063455Z Ignoring disabled issues: [] 2023-01-11T21:43:08.1262753Z Running test_per_overload_api ... [2023-01-11 21:43:08.125890] 2023-01-11T21:43:08.1265278Z Executing ['/opt/conda/bin/python', '-bb', 'test_per_overload_api.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:43:08.126166] 2023-01-11T21:43:10.3788557Z 2023-01-11T21:43:10.3789125Z Expand the folded group to see the log file of test_per_overload_api 2023-01-11T21:43:10.3790302Z ##[group]PRINTING LOG FILE of test_per_overload_api (/var/lib/jenkins/workspace/test/test-reports/test_per_overload_api_e6flvlj6) 2023-01-11T21:43:10.3790803Z 2023-01-11T21:43:10.3791323Z Running tests... 
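Each test module in this job is launched the way the "Executing [...]" lines record: the file is run directly with python -bb in verbose mode, with the slow-test and disabled-test lists imported. A minimal sketch of reproducing one such invocation locally, assuming it is run from the repository's test/ directory with a Python that has PyTorch installed (the CI uses /opt/conda/bin/python):

```python
import subprocess

# Mirrors the command recorded in the log for test_per_overload_api.py.
subprocess.run(
    [
        "python",
        "-bb",                       # turn bytes/str comparison warnings into errors
        "test_per_overload_api.py",
        "-v",
        "--import-slow-tests",       # pull in the slow-test list the CI run uses
        "--import-disabled-tests",   # pull in the disabled-test list as well
    ],
    check=True,
)
```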
2023-01-11T21:43:10.3791883Z ---------------------------------------------------------------------- 2023-01-11T21:43:10.3792475Z Test results will be stored in test-reports/python-unittest/test_per_overload_api 2023-01-11T21:43:10.3792894Z test_basics_opoverload (__main__.TestPerOverloadAPI) ... ok (0.270s) 2023-01-11T21:43:10.3793303Z test_basics_opoverloadpacket (__main__.TestPerOverloadAPI) ... ok (0.002s) 2023-01-11T21:43:10.3793685Z test_decompose (__main__.TestPerOverloadAPI) ... ok (0.001s) 2023-01-11T21:43:10.3793887Z 2023-01-11T21:43:10.3794200Z ---------------------------------------------------------------------- 2023-01-11T21:43:10.3794515Z Ran 3 tests in 0.274s 2023-01-11T21:43:10.3794660Z 2023-01-11T21:43:10.3794739Z OK 2023-01-11T21:43:10.3798865Z 2023-01-11T21:43:10.3799251Z Generating XML reports... 2023-01-11T21:43:10.3799923Z Generated XML report: test-reports/python-unittest/test_per_overload_api/TEST-TestPerOverloadAPI-20230111214309.xml 2023-01-11T21:43:10.3800248Z 2023-01-11T21:43:10.3800601Z ##[endgroup] 2023-01-11T21:43:10.3801796Z FINISHED PRINTING LOG FILE of test_per_overload_api (/var/lib/jenkins/workspace/test/test-reports/test_per_overload_api_e6flvlj6) 2023-01-11T21:43:10.3802090Z 2023-01-11T21:43:12.4821536Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:43:12.5659726Z Ignoring disabled issues: [] 2023-01-11T21:43:12.5825243Z Running test_pruning_op ... [2023-01-11 21:43:12.582174] 2023-01-11T21:43:12.5827848Z Executing ['/opt/conda/bin/python', '-bb', 'test_pruning_op.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:43:12.582488] 2023-01-11T21:43:14.2196904Z 2023-01-11T21:43:14.2197542Z Expand the folded group to see the log file of test_pruning_op 2023-01-11T21:43:14.2199745Z ##[group]PRINTING LOG FILE of test_pruning_op (/var/lib/jenkins/workspace/test/test-reports/test_pruning_op_uran59ue) 2023-01-11T21:43:14.2200283Z 2023-01-11T21:43:14.2200928Z ##[endgroup] 2023-01-11T21:43:14.2201978Z FINISHED PRINTING LOG FILE of test_pruning_op (/var/lib/jenkins/workspace/test/test-reports/test_pruning_op_uran59ue) 2023-01-11T21:43:14.2202791Z 2023-01-11T21:43:16.2016409Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:43:16.2861420Z Ignoring disabled issues: [] 2023-01-11T21:43:16.3027888Z Running test_pytree ... [2023-01-11 21:43:16.302495] 2023-01-11T21:43:16.3030570Z Executing ['/opt/conda/bin/python', '-bb', 'test_pytree.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:43:16.302781] 2023-01-11T21:43:18.9189651Z 2023-01-11T21:43:18.9190123Z Expand the folded group to see the log file of test_pytree 2023-01-11T21:43:18.9191156Z ##[group]PRINTING LOG FILE of test_pytree (/var/lib/jenkins/workspace/test/test-reports/test_pytree_7sx5t_qy) 2023-01-11T21:43:18.9191562Z 2023-01-11T21:43:18.9191693Z Running tests... 2023-01-11T21:43:18.9192365Z ---------------------------------------------------------------------- 2023-01-11T21:43:18.9193037Z Test results will be stored in test-reports/python-unittest/test_pytree 2023-01-11T21:43:18.9193805Z test_broadcast_to_and_flatten (__main__.TestPytree) ... ok (0.230s) 2023-01-11T21:43:18.9194292Z test_flatten_unflatten_dict (__main__.TestPytree) ... ok (0.003s) 2023-01-11T21:43:18.9194692Z test_flatten_unflatten_leaf (__main__.TestPytree) ... ok (0.002s) 2023-01-11T21:43:18.9195113Z test_flatten_unflatten_list (__main__.TestPytree) ... 
ok (0.002s) 2023-01-11T21:43:18.9195436Z test_flatten_unflatten_namedtuple (__main__.TestPytree) ... ok (0.001s) 2023-01-11T21:43:18.9195924Z test_flatten_unflatten_nested (__main__.TestPytree) ... ok (0.001s) 2023-01-11T21:43:18.9196417Z test_flatten_unflatten_odict (__main__.TestPytree) ... ok (0.001s) 2023-01-11T21:43:18.9196939Z test_flatten_unflatten_return_type_max (__main__.TestPytree) ... ok (0.001s) 2023-01-11T21:43:18.9197482Z test_flatten_unflatten_return_type_min (__main__.TestPytree) ... ok (0.001s) 2023-01-11T21:43:18.9197896Z test_flatten_unflatten_tuple (__main__.TestPytree) ... ok (0.002s) 2023-01-11T21:43:18.9198172Z test_tree_all_any (__main__.TestPytree) ... ok (0.001s) 2023-01-11T21:43:18.9198417Z test_tree_only (__main__.TestPytree) ... ok (0.001s) 2023-01-11T21:43:18.9198649Z test_treemap (__main__.TestPytree) ... ok (0.001s) 2023-01-11T21:43:18.9198903Z test_treespec_equality (__main__.TestPytree) ... ok (0.001s) 2023-01-11T21:43:18.9199161Z test_treespec_repr (__main__.TestPytree) ... ok (0.001s) 2023-01-11T21:43:18.9199304Z 2023-01-11T21:43:18.9199525Z ---------------------------------------------------------------------- 2023-01-11T21:43:18.9199755Z Ran 15 tests in 0.249s 2023-01-11T21:43:18.9199867Z 2023-01-11T21:43:18.9199928Z OK 2023-01-11T21:43:18.9200019Z 2023-01-11T21:43:18.9200101Z Generating XML reports... 2023-01-11T21:43:18.9200494Z Generated XML report: test-reports/python-unittest/test_pytree/TEST-TestPytree-20230111214318.xml 2023-01-11T21:43:18.9200712Z 2023-01-11T21:43:18.9200975Z ##[endgroup] 2023-01-11T21:43:18.9201347Z FINISHED PRINTING LOG FILE of test_pytree (/var/lib/jenkins/workspace/test/test-reports/test_pytree_7sx5t_qy) 2023-01-11T21:43:18.9205690Z 2023-01-11T21:43:20.7820121Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:43:20.8484882Z Ignoring disabled issues: [] 2023-01-11T21:43:20.8651163Z Running test_quantization ... [2023-01-11 21:43:20.864812] 2023-01-11T21:43:20.8653663Z Executing ['/opt/conda/bin/python', '-bb', 'test_quantization.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:43:20.865089] 2023-01-11T21:44:23.5730183Z 2023-01-11T21:44:23.5730627Z Expand the folded group to see the log file of test_optim 2023-01-11T21:44:23.5731371Z ##[group]PRINTING LOG FILE of test_optim (/var/lib/jenkins/workspace/test/test-reports/test_optim_a93x3xeo) 2023-01-11T21:44:23.5800963Z 2023-01-11T21:44:23.5943512Z Running tests... 2023-01-11T21:44:23.5944153Z ---------------------------------------------------------------------- 2023-01-11T21:44:23.5944772Z Test results will be stored in test-reports/python-unittest/test_optim 2023-01-11T21:44:23.5945772Z test_adadelta (__main__.TestDifferentiableOptimizer) ... ok (0.239s) 2023-01-11T21:44:23.5946336Z test_adagrad (__main__.TestDifferentiableOptimizer) ... ok (0.006s) 2023-01-11T21:44:23.5946849Z test_adam (__main__.TestDifferentiableOptimizer) ... ok (0.014s) 2023-01-11T21:44:23.5947192Z test_adamax (__main__.TestDifferentiableOptimizer) ... ok (0.010s) 2023-01-11T21:44:23.5947613Z test_adamw (__main__.TestDifferentiableOptimizer) ... ok (0.014s) 2023-01-11T21:44:23.5947991Z test_asgd (__main__.TestDifferentiableOptimizer) ... ok (0.005s) 2023-01-11T21:44:23.5948360Z test_nadam (__main__.TestDifferentiableOptimizer) ... ok (0.014s) 2023-01-11T21:44:23.5948739Z test_radam (__main__.TestDifferentiableOptimizer) ... ok (0.009s) 2023-01-11T21:44:23.5949123Z test_rmsprop (__main__.TestDifferentiableOptimizer) ... 
ok (0.013s) 2023-01-11T21:44:23.5949508Z test_rprop (__main__.TestDifferentiableOptimizer) ... ok (0.010s) 2023-01-11T21:44:23.5949953Z test_sgd (__main__.TestDifferentiableOptimizer) ... ok (0.004s) 2023-01-11T21:44:23.5950367Z test_CosineAnnealingWarmRestarts_lr1_T_mult_1 (__main__.TestLRScheduler) ... ok (0.007s) 2023-01-11T21:44:23.5950813Z test_CosineAnnealingWarmRestarts_lr1_T_mult_2 (__main__.TestLRScheduler) ... ok (0.007s) 2023-01-11T21:44:23.5951243Z test_CosineAnnealingWarmRestarts_lr1_T_mult_4 (__main__.TestLRScheduler) ... ok (0.007s) 2023-01-11T21:44:23.5951676Z test_CosineAnnealingWarmRestarts_lr2 (__main__.TestLRScheduler) ... ok (0.057s) 2023-01-11T21:44:23.5952096Z test_CosineAnnealingWarmRestarts_lr3 (__main__.TestLRScheduler) ... ok (0.004s) 2023-01-11T21:44:23.5952539Z test_CosineAnnealingWarmRestarts_lr_state_dict (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5954035Z test_chained_lr1 (__main__.TestLRScheduler) ... /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:139: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate 2023-01-11T21:44:23.5955119Z warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. " 2023-01-11T21:44:23.5955441Z ok (0.002s) 2023-01-11T21:44:23.5955722Z test_chained_lr2 (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5956087Z test_chained_lr2_get_last_lr_before_step (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5956462Z test_chained_lr3 (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5956801Z test_chained_lr4 (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5957136Z test_chained_lr5 (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5957477Z test_closed_form_constantlr (__main__.TestLRScheduler) ... ok (0.006s) 2023-01-11T21:44:23.5957862Z test_closed_form_cos_anneal_lr (__main__.TestLRScheduler) ... ok (0.006s) 2023-01-11T21:44:23.5958234Z test_closed_form_exp_lr (__main__.TestLRScheduler) ... ok (0.006s) 2023-01-11T21:44:23.5958586Z test_closed_form_linearlr (__main__.TestLRScheduler) ... ok (0.006s) 2023-01-11T21:44:23.5958960Z test_closed_form_multi_step_lr (__main__.TestLRScheduler) ... ok (0.006s) 2023-01-11T21:44:23.5959329Z test_closed_form_poly_lr (__main__.TestLRScheduler) ... ok (0.006s) 2023-01-11T21:44:23.5959676Z test_closed_form_step_lr (__main__.TestLRScheduler) ... ok (0.006s) 2023-01-11T21:44:23.5961132Z test_compound_cosanneal_and_exp_lr (__main__.TestLRScheduler) ... /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:139: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate 2023-01-11T21:44:23.5962163Z warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. " 2023-01-11T21:44:23.5962479Z ok (0.002s) 2023-01-11T21:44:23.5962793Z test_compound_cosanneal_and_linearlr (__main__.TestLRScheduler) ... 
ok (0.002s) 2023-01-11T21:44:23.5963194Z test_compound_cosanneal_and_multistep_lr (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5963604Z test_compound_cosanneal_and_step_lr (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5963995Z test_compound_exp_and_linearlr (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5964373Z test_compound_exp_and_multistep_lr (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5964777Z test_compound_linearlr_and_multistep_lr (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5965221Z test_compound_reduce_lr_on_plateau1 (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5965619Z test_compound_reduce_lr_on_plateau2 (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5965997Z test_compound_reduce_lr_on_plateau3 (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5966382Z test_compound_reduce_lr_on_plateau4 (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5966771Z test_compound_reduce_lr_on_plateau5 (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5967154Z test_compound_step_and_constantlr (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5967540Z test_compound_step_and_exp_lr (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5967931Z test_compound_step_and_multistep_lr (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5968296Z test_constantlr (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5968670Z test_constantlr_is_constant_for_constant_epoch (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5969072Z test_constantlr_with_epoch (__main__.TestLRScheduler) ... ok (0.003s) 2023-01-11T21:44:23.5970503Z test_cos_anneal_lr (__main__.TestLRScheduler) ... /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:139: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate 2023-01-11T21:44:23.5971460Z warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. " 2023-01-11T21:44:23.5971765Z ok (0.002s) 2023-01-11T21:44:23.5972055Z test_cos_anneal_lr_continue (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5972424Z test_cosine_lr_state_dict (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5972790Z test_cosine_then_cyclic (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5973198Z test_cycle_lr_cycle_momentum_fail_with_momentumless_optimizer (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5973617Z test_cycle_lr_exp_range_mode (__main__.TestLRScheduler) ... ok (0.003s) 2023-01-11T21:44:23.5974000Z test_cycle_lr_exp_range_mode_one_lr (__main__.TestLRScheduler) ... ok (0.003s) 2023-01-11T21:44:23.5974397Z test_cycle_lr_exp_range_mode_step_size_up_down (__main__.TestLRScheduler) ... ok (0.003s) 2023-01-11T21:44:23.5974789Z test_cycle_lr_invalid_mode (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5975177Z test_cycle_lr_removed_after_out_of_scope (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5975592Z test_cycle_lr_scale_fn_restored_from_state_dict (__main__.TestLRScheduler) ... 
ok (0.001s) 2023-01-11T21:44:23.5975986Z test_cycle_lr_state_dict_picklable (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5976373Z test_cycle_lr_triangular2_mode (__main__.TestLRScheduler) ... ok (0.004s) 2023-01-11T21:44:23.5976802Z test_cycle_lr_triangular2_mode_one_lr (__main__.TestLRScheduler) ... ok (0.004s) 2023-01-11T21:44:23.5977204Z test_cycle_lr_triangular2_mode_step_size_up_down (__main__.TestLRScheduler) ... ok (0.005s) 2023-01-11T21:44:23.5977601Z test_cycle_lr_triangular_mode (__main__.TestLRScheduler) ... ok (0.003s) 2023-01-11T21:44:23.5977994Z test_cycle_lr_triangular_mode_one_lr (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5978409Z test_cycle_lr_triangular_mode_one_lr_no_momentum (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5978824Z test_cycle_lr_triangular_mode_step_size_up_down (__main__.TestLRScheduler) ... ok (0.003s) 2023-01-11T21:44:23.5979212Z test_cycle_lr_with_adam (__main__.TestLRScheduler) ... ok (0.003s) 2023-01-11T21:44:23.5979601Z test_cycle_lr_with_momentumless_optimizer (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5980021Z test_error_when_getlr_has_epoch (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5980377Z test_exp_lr (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5980725Z test_exp_step_lr_state_dict (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5981134Z test_exponential_lr_is_constant_for_constant_epoch (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5981521Z test_get_last_lr_constantlr (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5981887Z test_get_last_lr_linearlr (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5982261Z test_get_last_lr_multi_step_lr (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5983954Z test_get_last_lr_sequentiallr (__main__.TestLRScheduler) ... /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:152: UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose. 2023-01-11T21:44:23.5984909Z warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning) 2023-01-11T21:44:23.5985191Z ok (0.002s) 2023-01-11T21:44:23.5985482Z test_get_last_lr_step_lr (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5986890Z test_lambda_lr (__main__.TestLRScheduler) ... /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:139: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate 2023-01-11T21:44:23.5987848Z warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. " 2023-01-11T21:44:23.5988154Z ok (0.002s) 2023-01-11T21:44:23.5988449Z test_lambda_lr_state_dict_fn (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5988951Z test_lambda_lr_state_dict_obj (__main__.TestLRScheduler) ... 
ok (0.001s) 2023-01-11T21:44:23.5989415Z test_linear_linearlr_is_constant_for_constant_epoch (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.5991100Z test_linearlr (__main__.TestLRScheduler) ... /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:139: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate 2023-01-11T21:44:23.5992297Z warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. " 2023-01-11T21:44:23.5992660Z ok (0.002s) 2023-01-11T21:44:23.5992997Z test_linearlr_start_factor_limits1 (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5993452Z test_linearlr_start_factor_limits2 (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5993890Z test_linearlr_with_epoch (__main__.TestLRScheduler) ... ok (0.003s) 2023-01-11T21:44:23.5995594Z test_multi_step_lr (__main__.TestLRScheduler) ... /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:139: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate 2023-01-11T21:44:23.5996751Z warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. " 2023-01-11T21:44:23.5997201Z ok (0.001s) 2023-01-11T21:44:23.5997496Z test_multi_step_lr_state_dict (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.5997877Z test_multi_step_lr_with_epoch (__main__.TestLRScheduler) ... ok (0.003s) 2023-01-11T21:44:23.5999303Z test_multiplicative_lr (__main__.TestLRScheduler) ... /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:139: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate 2023-01-11T21:44:23.6000244Z warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. " 2023-01-11T21:44:23.6000561Z ok (0.002s) 2023-01-11T21:44:23.6000859Z test_new_pattern_no_warning (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.6001250Z test_new_pattern_no_warning_with_arg (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.6001671Z test_new_pattern_no_warning_with_overridden_optim_step (__main__.TestLRScheduler) ... ok (0.005s) 2023-01-11T21:44:23.6002079Z test_no_cyclic_references (__main__.TestLRScheduler) ... ok (0.134s) 2023-01-11T21:44:23.6002460Z test_no_cyclic_references_in_step (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.6002824Z test_old_pattern_warning (__main__.TestLRScheduler) ... ok (0.004s) 2023-01-11T21:44:23.6003203Z test_old_pattern_warning_resuming (__main__.TestLRScheduler) ... 
ok (0.004s) 2023-01-11T21:44:23.6003609Z test_old_pattern_warning_resuming_with_arg (__main__.TestLRScheduler) ... ok (0.003s) 2023-01-11T21:44:23.6004013Z test_old_pattern_warning_with_arg (__main__.TestLRScheduler) ... ok (0.003s) 2023-01-11T21:44:23.6004420Z test_old_pattern_warning_with_overridden_optim_step (__main__.TestLRScheduler) ... ok (0.003s) 2023-01-11T21:44:23.6004863Z test_onecycle_lr_cannot_calculate_total_steps (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.6005271Z test_onecycle_lr_cosine_annealing (__main__.TestLRScheduler) ... ok (0.003s) 2023-01-11T21:44:23.6005657Z test_onecycle_lr_invalid_anneal_strategy (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.6006064Z test_onecycle_lr_invalid_pct_start (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.6006461Z test_onecycle_lr_linear_annealing (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.6006875Z test_onecycle_lr_linear_annealing_three_phases (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.6008296Z test_poly_lr (__main__.TestLRScheduler) ... /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:139: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate 2023-01-11T21:44:23.6009287Z warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. " 2023-01-11T21:44:23.6009601Z ok (0.001s) 2023-01-11T21:44:23.6009931Z test_polynomial_lr_is_constant_for_constant_epoch (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.6010320Z test_reduce_lr_on_plateau1 (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.6010723Z test_reduce_lr_on_plateau2 (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.6011091Z test_reduce_lr_on_plateau3 (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.6011460Z test_reduce_lr_on_plateau4 (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.6011848Z test_reduce_lr_on_plateau5 (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.6012216Z test_reduce_lr_on_plateau6 (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.6012607Z test_reduce_lr_on_plateau7 (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.6012958Z test_reduce_lr_on_plateau8 (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.6013342Z test_reduce_lr_on_plateau_state_dict (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.6014799Z test_sequentiallr1 (__main__.TestLRScheduler) ... /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:139: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate 2023-01-11T21:44:23.6015764Z warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. 
" 2023-01-11T21:44:23.6017192Z /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:152: UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose. 2023-01-11T21:44:23.6018072Z warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning) 2023-01-11T21:44:23.6018337Z ok (0.002s) 2023-01-11T21:44:23.6018621Z test_sequentiallr2 (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.6018970Z test_sequentiallr3 (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.6019310Z test_sequentiallr4 (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.6019649Z test_step_lr (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.6020025Z test_step_lr_is_constant_for_constant_epoch (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.6020402Z test_step_lr_state_dict (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.6020761Z test_swa_lr_state_dict (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.6021157Z test_swalr_cosine_anneal_after_multiplicative (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.6021549Z test_swalr_hypers (__main__.TestLRScheduler) ... ok (0.001s) 2023-01-11T21:44:23.6021922Z test_swalr_linear_anneal_after_multiplicative (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.6022308Z test_swalr_no_anneal (__main__.TestLRScheduler) ... ok (0.002s) 2023-01-11T21:44:23.6023787Z test_adadelta (__main__.TestOptim) ... /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:139: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate 2023-01-11T21:44:23.6024800Z warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. " 2023-01-11T21:44:23.6025103Z ok (2.912s) 2023-01-11T21:44:23.6025380Z test_adadelta_complex (__main__.TestOptim) ... ok (0.008s) 2023-01-11T21:44:23.6025702Z test_adagrad (__main__.TestOptim) ... ok (3.915s) 2023-01-11T21:44:23.6026030Z test_adagrad_complex (__main__.TestOptim) ... ok (0.007s) 2023-01-11T21:44:23.6026352Z test_adagrad_sparse (__main__.TestOptim) ... ok (11.989s) 2023-01-11T21:44:23.6026669Z test_adam (__main__.TestOptim) ... ok (10.080s) 2023-01-11T21:44:23.6026972Z test_adamax (__main__.TestOptim) ... ok (3.287s) 2023-01-11T21:44:23.6027329Z test_adamw (__main__.TestOptim) ... ok (3.020s) 2023-01-11T21:44:23.6027631Z test_asgd (__main__.TestOptim) ... ok (4.117s) 2023-01-11T21:44:23.6027976Z test_duplicate_params_in_param_group (__main__.TestOptim) ... ok (0.001s) 2023-01-11T21:44:23.6028309Z test_empty_grad (__main__.TestOptim) ... ok (0.010s) 2023-01-11T21:44:23.6028696Z test_functional_fused_adam_with_foundinf (__main__.TestOptim) ... skip: CUDA is required. (0.002s) 2023-01-11T21:44:23.6029088Z test_fused_optimizers (__main__.TestOptim) ... ok (0.001s) 2023-01-11T21:44:23.6029416Z test_invalid_param_type (__main__.TestOptim) ... 
ok (0.001s) 2023-01-11T21:44:23.6029736Z test_lbfgs (__main__.TestOptim) ... ok (0.407s) 2023-01-11T21:44:23.6030057Z test_lbfgs_return_type (__main__.TestOptim) ... ok (0.002s) 2023-01-11T21:44:23.6030402Z test_multi_tensor_optimizers (__main__.TestOptim) ... ok (0.002s) 2023-01-11T21:44:23.6031805Z test_nadam (__main__.TestOptim) ... /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:139: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate 2023-01-11T21:44:23.6032743Z warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. " 2023-01-11T21:44:23.6033061Z ok (1.545s) 2023-01-11T21:44:23.6033349Z test_no_grad_for_all_params (__main__.TestOptim) ... ok (0.002s) 2023-01-11T21:44:23.6033666Z test_post_hook (__main__.TestOptim) ... ok (0.001s) 2023-01-11T21:44:23.6033994Z test_pre_and_post_hook (__main__.TestOptim) ... ok (0.002s) 2023-01-11T21:44:23.6034309Z test_pre_hook (__main__.TestOptim) ... ok (0.001s) 2023-01-11T21:44:23.6034764Z test_radam (__main__.TestOptim) ... ok (1.734s) 2023-01-11T21:44:23.6035115Z test_rmsprop (__main__.TestOptim) ... ok (7.732s) 2023-01-11T21:44:23.6035468Z test_rprop (__main__.TestOptim) ... ok (3.392s) 2023-01-11T21:44:23.6035809Z test_sgd (__main__.TestOptim) ... ok (7.871s) 2023-01-11T21:44:23.6036145Z test_sgd_complex (__main__.TestOptim) ... ok (0.010s) 2023-01-11T21:44:23.6036507Z test_sgd_sparse (__main__.TestOptim) ... ok (17.984s) 2023-01-11T21:44:23.6036871Z test_sparse_adam (__main__.TestOptim) ... ok (2.975s) 2023-01-11T21:44:23.6037259Z test_averaged_model_all_devices (__main__.TestSWAUtils) ... ok (0.012s) 2023-01-11T21:44:23.6037693Z test_averaged_model_exponential (__main__.TestSWAUtils) ... ok (0.009s) 2023-01-11T21:44:23.6038143Z test_averaged_model_exponential_buffers (__main__.TestSWAUtils) ... ok (0.006s) 2023-01-11T21:44:23.6038577Z test_averaged_model_mixed_device (__main__.TestSWAUtils) ... ok (0.001s) 2023-01-11T21:44:23.6039008Z test_averaged_model_state_dict (__main__.TestSWAUtils) ... ok (0.005s) 2023-01-11T21:44:23.6039432Z test_bn_update_eval_momentum (__main__.TestSWAUtils) ... ok (0.112s) 2023-01-11T21:44:23.6039886Z test_update_bn_cnn (__main__.TestSWAUtils) ... ok (0.728s) 2023-01-11T21:44:23.6040251Z test_update_bn_dnn (__main__.TestSWAUtils) ... ok (0.060s) 2023-01-11T21:44:23.6040468Z 2023-01-11T21:44:23.6040786Z ---------------------------------------------------------------------- 2023-01-11T21:44:23.6041149Z Ran 166 tests in 84.751s 2023-01-11T21:44:23.6041317Z 2023-01-11T21:44:23.6041403Z OK (skipped=1) 2023-01-11T21:44:23.6041652Z 2023-01-11T21:44:23.6041761Z Generating XML reports... 
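Two recurring warnings in the test_optim log above describe the intended scheduler API: optimizer.step() should run before lr_scheduler.step() (otherwise the first value of the learning-rate schedule is skipped), and the explicit epoch argument to scheduler.step() is deprecated. A minimal sketch of the recommended pattern, using hypothetical model and data names:

```python
import torch
from torch import nn, optim

model = nn.Linear(4, 2)                       # hypothetical tiny model
opt = optim.SGD(model.parameters(), lr=0.1)
sched = optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.5)

for epoch in range(3):
    for _ in range(5):                        # hypothetical mini-batches
        opt.zero_grad()
        loss = model(torch.randn(8, 4)).sum()
        loss.backward()
        opt.step()                            # optimizer step first...
    sched.step()                              # ...then the scheduler, once per epoch,
                                              # with no explicit epoch argument
```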
2023-01-11T21:44:23.6042345Z Generated XML report: test-reports/python-unittest/test_optim/TEST-TestDifferentiableOptimizer-20230111214258.xml 2023-01-11T21:44:23.6043038Z Generated XML report: test-reports/python-unittest/test_optim/TEST-TestLRScheduler-20230111214258.xml 2023-01-11T21:44:23.6043652Z Generated XML report: test-reports/python-unittest/test_optim/TEST-TestOptim-20230111214258.xml 2023-01-11T21:44:23.6044309Z Generated XML report: test-reports/python-unittest/test_optim/TEST-TestSWAUtils-20230111214258.xml 2023-01-11T21:44:23.6044590Z 2023-01-11T21:44:23.6044994Z ##[endgroup] 2023-01-11T21:44:23.6045464Z FINISHED PRINTING LOG FILE of test_optim (/var/lib/jenkins/workspace/test/test-reports/test_optim_a93x3xeo) 2023-01-11T21:44:23.6045729Z 2023-01-11T21:44:25.6855803Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:44:25.7706801Z Ignoring disabled issues: [] 2023-01-11T21:44:25.7874151Z Running test_schema_check ... [2023-01-11 21:44:25.787129] 2023-01-11T21:44:25.7875873Z Executing ['/opt/conda/bin/python', '-bb', 'test_schema_check.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:44:25.787369] 2023-01-11T21:44:28.6984340Z 2023-01-11T21:44:28.6985026Z Expand the folded group to see the log file of test_schema_check 2023-01-11T21:44:28.6985848Z ##[group]PRINTING LOG FILE of test_schema_check (/var/lib/jenkins/workspace/test/test-reports/test_schema_check_xb88u8nc) 2023-01-11T21:44:28.6986109Z 2023-01-11T21:44:28.6986511Z Running tests... 2023-01-11T21:44:28.6987235Z ---------------------------------------------------------------------- 2023-01-11T21:44:28.6987954Z Test results will be stored in test-reports/python-unittest/test_schema_check 2023-01-11T21:44:28.6988541Z test_alias_check_fail_multiple_operators (__main__.TestSchemaCheck) ... ok (0.004s) 2023-01-11T21:44:28.6989163Z test_alias_check_fail_multiple_operators_centered (__main__.TestSchemaCheck) ... ok (0.002s) 2023-01-11T21:44:28.6989801Z test_alias_check_fail_outputs_unexpectedly_aliasing (__main__.TestSchemaCheck) ... ok (0.004s) 2023-01-11T21:44:28.6990386Z test_alias_check_fail_simple (__main__.TestSchemaCheck) ... ok (0.001s) 2023-01-11T21:44:28.6990849Z test_is_alias_of_basic (__main__.TestSchemaCheck) ... ok (0.001s) 2023-01-11T21:44:28.6991369Z test_is_alias_of_empty_container (__main__.TestSchemaCheck) ... ok (0.004s) 2023-01-11T21:44:28.6991896Z test_mutation_check_fail (__main__.TestSchemaCheck) ... ok (0.002s) 2023-01-11T21:44:28.6992481Z test_mutation_check_fail_multiple_operators (__main__.TestSchemaCheck) ... ok (0.002s) 2023-01-11T21:44:28.6992994Z test_overlaps_basic (__main__.TestSchemaCheck) ... ok (0.001s) 2023-01-11T21:44:28.6993476Z test_overlaps_empty_container (__main__.TestSchemaCheck) ... ok (0.002s) 2023-01-11T21:44:28.6994031Z test_schema_check_mode_empty_list_input (__main__.TestSchemaCheck) ... ok (0.010s) 2023-01-11T21:44:28.6994656Z test_schema_check_mode_functionality (__main__.TestSchemaCheck) ... ok (0.002s) 2023-01-11T21:44:28.6995256Z test_schema_check_mode_functionality_aliasing_inputs (__main__.TestSchemaCheck) ... ok (0.001s) 2023-01-11T21:44:28.6995894Z test_schema_check_mode_functionality_default_replaced (__main__.TestSchemaCheck) ... ok (0.001s) 2023-01-11T21:44:28.6996500Z test_schema_check_mode_functionality_device_input (__main__.TestSchemaCheck) ... ok (0.005s) 2023-01-11T21:44:28.6997066Z test_schema_check_mode_functionality_kwarg_tensor (__main__.TestSchemaCheck) ... 
ok (0.005s) 2023-01-11T21:44:28.6997907Z test_schema_check_mode_functionality_list_input (__main__.TestSchemaCheck) ... ok (0.002s) 2023-01-11T21:44:28.6998492Z test_schema_check_mode_functionality_mutable_inputs (__main__.TestSchemaCheck) ... ok (0.001s) 2023-01-11T21:44:28.6999022Z test_schema_check_mode_functionality_nested_training_op (__main__.TestSchemaCheck) ... ok (0.012s) 2023-01-11T21:44:28.6999499Z test_schema_check_mode_functionality_training_op (__main__.TestSchemaCheck) ... ok (0.011s) 2023-01-11T21:44:28.6999965Z test_schema_check_mode_functionality_wildcard_after (__main__.TestSchemaCheck) ... ok (0.002s) 2023-01-11T21:44:28.7000480Z test_schema_check_mode_functionality_with_multiple_outputs (__main__.TestSchemaCheck) ... ok (0.002s) 2023-01-11T21:44:28.7002215Z test_schema_check_mode_functionality_with_multiple_outputs_aliasing (__main__.TestSchemaCheck) ... /var/lib/jenkins/workspace/test/test_schema_check.py:278: UserWarning: An output with one or more elements was resized since it had shape [1, 1, 3], which does not match the required output shape [1, 3]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:44:28.7003598Z torch.aminmax(x, dim=0, out=[actual, actual]) 2023-01-11T21:44:28.7003934Z ok (0.002s) 2023-01-11T21:44:28.7053167Z test_schema_check_mode_mutated_aliasing_aliasing_inputs (__main__.TestSchemaCheck) ... ok (0.002s) 2023-01-11T21:44:28.7054277Z test_schema_check_mode_mutated_aliasing_aliasing_outputs (__main__.TestSchemaCheck) ... /var/lib/jenkins/workspace/test/test_schema_check.py:180: UserWarning: An output with one or more elements was resized since it had shape [1, 1, 3], which does not match the required output shape [1, 3]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:44:28.7055349Z torch.aminmax(x, dim=0, out=[actual, actual]) 2023-01-11T21:44:28.7055593Z ok (0.002s) 2023-01-11T21:44:28.7055933Z test_schema_check_mode_mutated_aliasing_as_strided (__main__.TestSchemaCheck) ... ok (0.002s) 2023-01-11T21:44:28.7056383Z test_schema_check_mode_mutated_aliasing_multiple_outputs (__main__.TestSchemaCheck) ... ok (0.002s) 2023-01-11T21:44:28.7056833Z test_schema_check_mode_mutated_aliasing_mutation (__main__.TestSchemaCheck) ... ok (0.001s) 2023-01-11T21:44:28.7057247Z test_schema_check_mode_mutated_aliasing_none (__main__.TestSchemaCheck) ... ok (0.001s) 2023-01-11T21:44:28.7057676Z test_schema_check_mode_mutated_aliasing_resize_ (__main__.TestSchemaCheck) ... ok (0.001s) 2023-01-11T21:44:28.7058099Z test_schema_check_mode_operator_order (__main__.TestSchemaCheck) ... ok (0.005s) 2023-01-11T21:44:28.7058513Z test_schema_check_mode_operator_order_without_grad (__main__.TestSchemaCheck) ... ok (0.005s) 2023-01-11T21:44:28.7058911Z test_schema_info_bind_basic (__main__.TestSchemaCheck) ... 
ok (0.002s) 2023-01-11T21:44:28.7059124Z 2023-01-11T21:44:28.7059493Z ---------------------------------------------------------------------- 2023-01-11T21:44:28.7059804Z Ran 33 tests in 0.104s 2023-01-11T21:44:28.7059948Z 2023-01-11T21:44:28.7060014Z OK 2023-01-11T21:44:28.7060127Z 2023-01-11T21:44:28.7060235Z Generating XML reports... 2023-01-11T21:44:28.7060788Z Generated XML report: test-reports/python-unittest/test_schema_check/TEST-TestSchemaCheck-20230111214428.xml 2023-01-11T21:44:28.7061090Z 2023-01-11T21:44:28.7061455Z ##[endgroup] 2023-01-11T21:44:28.7061969Z FINISHED PRINTING LOG FILE of test_schema_check (/var/lib/jenkins/workspace/test/test-reports/test_schema_check_xb88u8nc) 2023-01-11T21:44:28.7062528Z 2023-01-11T21:44:30.5964973Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:44:30.6615602Z Ignoring disabled issues: [] 2023-01-11T21:44:30.6785961Z Running test_serialization ... [2023-01-11 21:44:30.678243] 2023-01-11T21:44:30.6787018Z Executing ['/opt/conda/bin/python', '-bb', 'test_serialization.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:44:30.678520] 2023-01-11T21:45:18.2121444Z 2023-01-11T21:45:18.2122357Z Expand the folded group to see the log file of test_serialization 2023-01-11T21:45:18.2123919Z ##[group]PRINTING LOG FILE of test_serialization (/var/lib/jenkins/workspace/test/test-reports/test_serialization_vschjqal) 2023-01-11T21:45:18.2124359Z 2023-01-11T21:45:18.2124455Z Running tests... 2023-01-11T21:45:18.2124873Z ---------------------------------------------------------------------- 2023-01-11T21:45:18.2125479Z Test results will be stored in test-reports/python-unittest/test_serialization 2023-01-11T21:45:18.2126188Z test_load_error_msg (__main__.TestOldSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2128729Z test_load_nonexistent_device (__main__.TestOldSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2129375Z test_load_python2_unicode_module (__main__.TestOldSerialization) ... ok (0.036s) 2023-01-11T21:45:18.2129772Z test_load_unicode_error_msg (__main__.TestOldSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2130648Z test_save_different_dtype_error (__main__.TestOldSerialization) ... /var/lib/jenkins/workspace/test/test_serialization.py:719: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2131590Z torch.save([a.storage(), a.imag], f) 2023-01-11T21:45:18.2132623Z /var/lib/jenkins/workspace/test/test_serialization.py:722: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2133602Z torch.save([a, a.imag.storage()], f) 2023-01-11T21:45:18.2134633Z /var/lib/jenkins/workspace/test/test_serialization.py:725: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2135642Z torch.save([a.storage(), a.imag.storage()], f) 2023-01-11T21:45:18.2136696Z /var/lib/jenkins/workspace/test/test_serialization.py:729: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2137690Z wrap_storage=a.storage().untyped(), 2023-01-11T21:45:18.2138720Z /var/lib/jenkins/workspace/test/test_serialization.py:728: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2139679Z s_bytes = torch.TypedStorage( 2023-01-11T21:45:18.2140773Z /var/lib/jenkins/workspace/test/test_serialization.py:736: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2141995Z torch.save([a.storage(), s_bytes], f) 2023-01-11T21:45:18.2142602Z ok (0.002s) 2023-01-11T21:45:18.2143744Z test_save_different_dtype_unallocated (__main__.TestOldSerialization) ... /var/lib/jenkins/workspace/test/test_serialization.py:697: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2144879Z wrap_storage=a.storage().untyped(), 2023-01-11T21:45:18.2145905Z /var/lib/jenkins/workspace/test/test_serialization.py:696: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2146867Z s = torch.TypedStorage( 2023-01-11T21:45:18.2148620Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:1904: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2149658Z device=typed_storage.device, 2023-01-11T21:45:18.2150669Z /var/lib/jenkins/workspace/test/test_serialization.py:700: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2151634Z save_load_check(a.storage(), s) 2023-01-11T21:45:18.2151968Z ok (0.178s) 2023-01-11T21:45:18.2153056Z test_serialization (__main__.TestOldSerialization) ... 
/var/lib/jenkins/workspace/test/test_serialization.py:85: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2154092Z b += [a[0].storage()] # 4 2023-01-11T21:45:18.2155142Z /var/lib/jenkins/workspace/test/test_serialization.py:86: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2156240Z b += [a[0].reshape(-1)[1:4].storage()] # 5 2023-01-11T21:45:18.2157281Z /var/lib/jenkins/workspace/test/test_serialization.py:88: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2158485Z t1 = torch.FloatTensor().set_(a[0].reshape(-1)[1:4].clone().storage(), 0, (3,), (1,)) 2023-01-11T21:45:18.2159578Z /var/lib/jenkins/workspace/test/test_serialization.py:89: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2160767Z t2 = torch.FloatTensor().set_(a[0].reshape(-1)[1:4].clone().storage(), 0, (3,), (1,)) 2023-01-11T21:45:18.2161859Z /var/lib/jenkins/workspace/test/test_serialization.py:90: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2162979Z b += [(t1.storage(), t1.storage(), t2.storage())] # 7 2023-01-11T21:45:18.2164014Z /var/lib/jenkins/workspace/test/test_serialization.py:91: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2165079Z b += [a[0].reshape(-1)[0:2].storage()] # 8 2023-01-11T21:45:18.2166152Z /var/lib/jenkins/workspace/test/test_serialization.py:104: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2167215Z self.assertEqual(c[4], torch.FloatStorage(25).fill_(10), atol=0, rtol=0) 2023-01-11T21:45:18.2168320Z /var/lib/jenkins/workspace/test/test_serialization.py:110: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2169297Z self.assertEqual(c[4][i + 1], c[5][i]) 2023-01-11T21:45:18.2170309Z /var/lib/jenkins/workspace/test/test_serialization.py:120: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2171312Z self.assertEqual(rootview.data_ptr(), c[0].data_ptr()) 2023-01-11T21:45:18.2171704Z ok (0.007s) 2023-01-11T21:45:18.2172829Z test_serialization_backwards_compat (__main__.TestOldSerialization) ... /var/lib/jenkins/workspace/test/test_serialization.py:403: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2173898Z b += [a[0].storage()] 2023-01-11T21:45:18.2174871Z /var/lib/jenkins/workspace/test/test_serialization.py:404: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2175946Z b += [a[0].reshape(-1)[1:4].clone().storage()] 2023-01-11T21:45:18.2176976Z /var/lib/jenkins/workspace/test/test_serialization.py:416: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2178027Z self.assertEqual(c[4], torch.FloatStorage(25).fill_(10), atol=0, rtol=0) 2023-01-11T21:45:18.2179115Z /var/lib/jenkins/workspace/test/test_serialization.py:426: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2180097Z return (self.new_tensor.storage(), 2023-01-11T21:45:18.2181116Z /var/lib/jenkins/workspace/test/test_serialization.py:446: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2182210Z self.assertEqual(x.storage(), load_x.storage()) 2023-01-11T21:45:18.2182708Z ok (0.018s) 2023-01-11T21:45:18.2184397Z test_serialization_backwards_compat_safe (__main__.TestOldSerialization) ... /opt/conda/lib/python3.10/site-packages/torch/_utils.py:768: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2185522Z return self.fget.__get__(instance, owner)() 2023-01-11T21:45:18.2185877Z ok (0.005s) 2023-01-11T21:45:18.2186437Z test_serialization_container (__main__.TestOldSerialization) ... ok (0.009s) 2023-01-11T21:45:18.2187056Z test_serialization_container_filelike (__main__.TestOldSerialization) ... ok (0.008s) 2023-01-11T21:45:18.2187722Z test_serialization_dill (__main__.TestOldSerialization) ... skip: "dill" not found or not correct version (0.000s) 2023-01-11T21:45:18.2188486Z test_serialization_dill_version_not_supported (__main__.TestOldSerialization) ... skip: "dill" not found or is correct version (0.000s) 2023-01-11T21:45:18.2189175Z test_serialization_fake_zip (__main__.TestOldSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2190418Z test_serialization_filelike (__main__.TestOldSerialization) ... /var/lib/jenkins/workspace/test/test_serialization.py:85: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2191466Z b += [a[0].storage()] # 4 2023-01-11T21:45:18.2192465Z /var/lib/jenkins/workspace/test/test_serialization.py:86: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2193554Z b += [a[0].reshape(-1)[1:4].storage()] # 5 2023-01-11T21:45:18.2194576Z /var/lib/jenkins/workspace/test/test_serialization.py:88: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2195852Z t1 = torch.FloatTensor().set_(a[0].reshape(-1)[1:4].clone().storage(), 0, (3,), (1,)) 2023-01-11T21:45:18.2196935Z /var/lib/jenkins/workspace/test/test_serialization.py:89: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2198141Z t2 = torch.FloatTensor().set_(a[0].reshape(-1)[1:4].clone().storage(), 0, (3,), (1,)) 2023-01-11T21:45:18.2199240Z /var/lib/jenkins/workspace/test/test_serialization.py:90: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2200222Z b += [(t1.storage(), t1.storage(), t2.storage())] # 7 2023-01-11T21:45:18.2201254Z /var/lib/jenkins/workspace/test/test_serialization.py:91: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2202444Z b += [a[0].reshape(-1)[0:2].storage()] # 8 2023-01-11T21:45:18.2203958Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:1904: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2204953Z device=typed_storage.device, 2023-01-11T21:45:18.2206034Z /var/lib/jenkins/workspace/test/test_serialization.py:104: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2207101Z self.assertEqual(c[4], torch.FloatStorage(25).fill_(10), atol=0, rtol=0) 2023-01-11T21:45:18.2208174Z /var/lib/jenkins/workspace/test/test_serialization.py:110: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2209143Z self.assertEqual(c[4][i + 1], c[5][i]) 2023-01-11T21:45:18.2210162Z /var/lib/jenkins/workspace/test/test_serialization.py:120: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2211189Z self.assertEqual(rootview.data_ptr(), c[0].data_ptr()) 2023-01-11T21:45:18.2211571Z ok (0.003s) 2023-01-11T21:45:18.2212037Z test_serialization_filelike_api_requirements (__main__.TestOldSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2213343Z test_serialization_filelike_exceptions (__main__.TestOldSerialization) ... /var/lib/jenkins/workspace/test/test_serialization.py:616: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2214324Z s = torch.CharStorage(s_data) 2023-01-11T21:45:18.2214629Z ok (0.001s) 2023-01-11T21:45:18.2215018Z test_serialization_filelike_missing_attrs (__main__.TestOldSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2215595Z test_serialization_filelike_stress (__main__.TestOldSerialization) ... ok (0.467s) 2023-01-11T21:45:18.2216224Z test_serialization_filelike_uses_readinto (__main__.TestOldSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2216624Z test_serialization_gzip (__main__.TestOldSerialization) ... ok (0.004s) 2023-01-11T21:45:18.2216941Z test_serialization_map_location (__main__.TestOldSerialization) ... ok (0.078s) 2023-01-11T21:45:18.2217257Z test_serialization_offset (__main__.TestOldSerialization) ... ok (12.293s) 2023-01-11T21:45:18.2217606Z test_serialization_offset_filelike_weights_only_False (__main__.TestOldSerialization) ... 
ok (11.714s) 2023-01-11T21:45:18.2218607Z test_serialization_offset_filelike_weights_only_True (__main__.TestOldSerialization) ... /opt/conda/lib/python3.10/site-packages/torch/_utils.py:768: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2219307Z return self.fget.__get__(instance, owner)() 2023-01-11T21:45:18.2219510Z ok (11.793s) 2023-01-11T21:45:18.2219758Z test_serialization_offset_gzip (__main__.TestOldSerialization) ... ok (0.002s) 2023-01-11T21:45:18.2220057Z test_serialization_safe (__main__.TestOldSerialization) ... ok (0.008s) 2023-01-11T21:45:18.2220373Z test_serialization_save_warnings (__main__.TestOldSerialization) ... ok (0.003s) 2023-01-11T21:45:18.2221026Z test_serialization_sparse (__main__.TestOldSerialization) ... /var/lib/jenkins/workspace/test/test_serialization.py:85: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2221578Z b += [a[0].storage()] # 4 2023-01-11T21:45:18.2222131Z /var/lib/jenkins/workspace/test/test_serialization.py:86: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2223082Z b += [a[0].reshape(-1)[1:4].storage()] # 5 2023-01-11T21:45:18.2224030Z /var/lib/jenkins/workspace/test/test_serialization.py:88: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2224886Z t1 = torch.FloatTensor().set_(a[0].reshape(-1)[1:4].clone().storage(), 0, (3,), (1,)) 2023-01-11T21:45:18.2225601Z /var/lib/jenkins/workspace/test/test_serialization.py:89: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2226244Z t2 = torch.FloatTensor().set_(a[0].reshape(-1)[1:4].clone().storage(), 0, (3,), (1,)) 2023-01-11T21:45:18.2226805Z /var/lib/jenkins/workspace/test/test_serialization.py:90: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2227340Z b += [(t1.storage(), t1.storage(), t2.storage())] # 7 2023-01-11T21:45:18.2227885Z /var/lib/jenkins/workspace/test/test_serialization.py:91: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2228450Z b += [a[0].reshape(-1)[0:2].storage()] # 8 2023-01-11T21:45:18.2229240Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:1904: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2229761Z device=typed_storage.device, 2023-01-11T21:45:18.2230298Z /var/lib/jenkins/workspace/test/test_serialization.py:104: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2231074Z self.assertEqual(c[4], torch.FloatStorage(25).fill_(10), atol=0, rtol=0) 2023-01-11T21:45:18.2231634Z /var/lib/jenkins/workspace/test/test_serialization.py:110: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2232150Z self.assertEqual(c[4][i + 1], c[5][i]) 2023-01-11T21:45:18.2232682Z /var/lib/jenkins/workspace/test/test_serialization.py:120: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2233273Z self.assertEqual(rootview.data_ptr(), c[0].data_ptr()) 2023-01-11T21:45:18.2233476Z ok (0.007s) 2023-01-11T21:45:18.2234102Z test_serialization_sparse_bsc_invalid (__main__.TestOldSerialization) ... /var/lib/jenkins/workspace/test/test_serialization.py:391: UserWarning: Sparse BSC tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/SparseCsrTensorImpl.cpp:56.) 2023-01-11T21:45:18.2234837Z lambda x: x.to_sparse_bsc(1, 1), torch.Tensor.ccol_indices, torch.Tensor.row_indices) 2023-01-11T21:45:18.2235187Z /opt/conda/lib/python3.10/unittest/case.py:172: FutureWarning: Possible nested set at position 14 2023-01-11T21:45:18.2235486Z expected_regex = re.compile(expected_regex) 2023-01-11T21:45:18.2235674Z ok (0.007s) 2023-01-11T21:45:18.2235926Z test_serialization_sparse_bsr_invalid (__main__.TestOldSerialization) ... ok (0.006s) 2023-01-11T21:45:18.2236273Z test_serialization_sparse_csc_invalid (__main__.TestOldSerialization) ... ok (0.006s) 2023-01-11T21:45:18.2236594Z test_serialization_sparse_csr_invalid (__main__.TestOldSerialization) ... ok (0.006s) 2023-01-11T21:45:18.2236925Z test_serialization_sparse_invalid (__main__.TestOldSerialization) ... ok (0.004s) 2023-01-11T21:45:18.2237848Z test_serialization_sparse_safe (__main__.TestOldSerialization) ... /opt/conda/lib/python3.10/site-packages/torch/_utils.py:768: UserWarning: TypedStorage is deprecated. 
It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2238441Z return self.fget.__get__(instance, owner)() 2023-01-11T21:45:18.2238624Z ok (0.008s) 2023-01-11T21:45:18.2239207Z test_serialization_storage_slice (__main__.TestOldSerialization) ... /var/lib/jenkins/workspace/test/test_serialization.py:651: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2239790Z self.assertEqual(s1[0], 0) 2023-01-11T21:45:18.2240323Z /var/lib/jenkins/workspace/test/test_serialization.py:652: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2240829Z self.assertEqual(s2[0], 0) 2023-01-11T21:45:18.2241346Z /var/lib/jenkins/workspace/test/test_serialization.py:653: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2241946Z self.assertEqual(s1.data_ptr() + 4, s2.data_ptr()) 2023-01-11T21:45:18.2242150Z ok (0.001s) 2023-01-11T21:45:18.2242399Z test_serialization_zipfile_utils (__main__.TestOldSerialization) ... ok (0.004s) 2023-01-11T21:45:18.2242697Z test_serialize_device (__main__.TestOldSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2242986Z test_load_error_msg (__main__.TestSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2243274Z test_load_nonexistent_device (__main__.TestSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2243568Z test_load_python2_unicode_module (__main__.TestSerialization) ... ok (0.008s) 2023-01-11T21:45:18.2243868Z test_load_unicode_error_msg (__main__.TestSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2244186Z test_meta_serialization_weights_only_False (__main__.TestSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2244558Z test_meta_serialization_weights_only_True (__main__.TestSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2244886Z test_pathlike_serialization_weights_only_False (__main__.TestSerialization) ... ok (0.008s) 2023-01-11T21:45:18.2245831Z test_pathlike_serialization_weights_only_True (__main__.TestSerialization) ... /opt/conda/lib/python3.10/site-packages/torch/_utils.py:768: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2246434Z return self.fget.__get__(instance, owner)() 2023-01-11T21:45:18.2246630Z ok (0.007s) 2023-01-11T21:45:18.2247200Z test_save_different_dtype_error (__main__.TestSerialization) ... /var/lib/jenkins/workspace/test/test_serialization.py:719: UserWarning: TypedStorage is deprecated. 
It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2247773Z torch.save([a.storage(), a.imag], f) 2023-01-11T21:45:18.2248313Z /var/lib/jenkins/workspace/test/test_serialization.py:722: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2248825Z torch.save([a, a.imag.storage()], f) 2023-01-11T21:45:18.2249364Z /var/lib/jenkins/workspace/test/test_serialization.py:725: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2249883Z torch.save([a.storage(), a.imag.storage()], f) 2023-01-11T21:45:18.2250432Z /var/lib/jenkins/workspace/test/test_serialization.py:729: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2250945Z wrap_storage=a.storage().untyped(), 2023-01-11T21:45:18.2251485Z /var/lib/jenkins/workspace/test/test_serialization.py:728: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2251994Z s_bytes = torch.TypedStorage( 2023-01-11T21:45:18.2252571Z /var/lib/jenkins/workspace/test/test_serialization.py:736: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2253070Z torch.save([a.storage(), s_bytes], f) 2023-01-11T21:45:18.2253262Z ok (0.002s) 2023-01-11T21:45:18.2253844Z test_save_different_dtype_unallocated (__main__.TestSerialization) ... /var/lib/jenkins/workspace/test/test_serialization.py:697: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2254414Z wrap_storage=a.storage().untyped(), 2023-01-11T21:45:18.2255001Z /var/lib/jenkins/workspace/test/test_serialization.py:696: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2255501Z s = torch.TypedStorage( 2023-01-11T21:45:18.2256298Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:1904: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2256832Z device=typed_storage.device, 2023-01-11T21:45:18.2257357Z /var/lib/jenkins/workspace/test/test_serialization.py:700: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2257867Z save_load_check(a.storage(), s) 2023-01-11T21:45:18.2258051Z ok (0.194s) 2023-01-11T21:45:18.2258612Z test_serialization (__main__.TestSerialization) ... /var/lib/jenkins/workspace/test/test_serialization.py:85: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2259164Z b += [a[0].storage()] # 4 2023-01-11T21:45:18.2259679Z /var/lib/jenkins/workspace/test/test_serialization.py:86: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2260245Z b += [a[0].reshape(-1)[1:4].storage()] # 5 2023-01-11T21:45:18.2260790Z /var/lib/jenkins/workspace/test/test_serialization.py:88: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2261422Z t1 = torch.FloatTensor().set_(a[0].reshape(-1)[1:4].clone().storage(), 0, (3,), (1,)) 2023-01-11T21:45:18.2261999Z /var/lib/jenkins/workspace/test/test_serialization.py:89: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2263011Z t2 = torch.FloatTensor().set_(a[0].reshape(-1)[1:4].clone().storage(), 0, (3,), (1,)) 2023-01-11T21:45:18.2263736Z /var/lib/jenkins/workspace/test/test_serialization.py:90: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2264681Z b += [(t1.storage(), t1.storage(), t2.storage())] # 7 2023-01-11T21:45:18.2265462Z /var/lib/jenkins/workspace/test/test_serialization.py:91: UserWarning: TypedStorage is deprecated. 
It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2266593Z b += [a[0].reshape(-1)[0:2].storage()] # 8 2023-01-11T21:45:18.2267176Z /var/lib/jenkins/workspace/test/test_serialization.py:104: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2267740Z self.assertEqual(c[4], torch.FloatStorage(25).fill_(10), atol=0, rtol=0) 2023-01-11T21:45:18.2268317Z /var/lib/jenkins/workspace/test/test_serialization.py:110: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2268829Z self.assertEqual(c[4][i + 1], c[5][i]) 2023-01-11T21:45:18.2269367Z /var/lib/jenkins/workspace/test/test_serialization.py:120: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2269894Z self.assertEqual(rootview.data_ptr(), c[0].data_ptr()) 2023-01-11T21:45:18.2270105Z ok (0.007s) 2023-01-11T21:45:18.2270339Z test_serialization_2gb_file (__main__.TestSerialization) ... ok (7.838s) 2023-01-11T21:45:18.2270986Z test_serialization_backwards_compat (__main__.TestSerialization) ... /var/lib/jenkins/workspace/test/test_serialization.py:403: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2271530Z b += [a[0].storage()] 2023-01-11T21:45:18.2272042Z /var/lib/jenkins/workspace/test/test_serialization.py:404: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2272613Z b += [a[0].reshape(-1)[1:4].clone().storage()] 2023-01-11T21:45:18.2273153Z /var/lib/jenkins/workspace/test/test_serialization.py:416: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2273702Z self.assertEqual(c[4], torch.FloatStorage(25).fill_(10), atol=0, rtol=0) 2023-01-11T21:45:18.2274265Z /var/lib/jenkins/workspace/test/test_serialization.py:426: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2274915Z return (self.new_tensor.storage(), 2023-01-11T21:45:18.2275450Z /var/lib/jenkins/workspace/test/test_serialization.py:446: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2275978Z self.assertEqual(x.storage(), load_x.storage()) 2023-01-11T21:45:18.2276168Z ok (0.006s) 2023-01-11T21:45:18.2276419Z test_serialization_backwards_compat_safe (__main__.TestSerialization) ... ok (0.005s) 2023-01-11T21:45:18.2276834Z test_serialization_dill (__main__.TestSerialization) ... skip: "dill" not found or not correct version (0.001s) 2023-01-11T21:45:18.2277232Z test_serialization_dill_version_not_supported (__main__.TestSerialization) ... skip: "dill" not found or is correct version (0.000s) 2023-01-11T21:45:18.2277624Z test_serialization_efficient_zerotensor_weights_only_False (__main__.TestSerialization) ... ok (0.002s) 2023-01-11T21:45:18.2277993Z test_serialization_efficient_zerotensor_weights_only_True (__main__.TestSerialization) ... ok (0.002s) 2023-01-11T21:45:18.2278322Z test_serialization_fake_zip (__main__.TestSerialization) ... ok (0.002s) 2023-01-11T21:45:18.2278621Z test_serialization_filelike (__main__.TestSerialization) ... ok (0.003s) 2023-01-11T21:45:18.2278924Z test_serialization_filelike_api_requirements (__main__.TestSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2279607Z test_serialization_filelike_exceptions (__main__.TestSerialization) ... /var/lib/jenkins/workspace/test/test_serialization.py:616: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2280189Z s = torch.CharStorage(s_data) 2023-01-11T21:45:18.2280376Z ok (0.001s) 2023-01-11T21:45:18.2280614Z test_serialization_filelike_missing_attrs (__main__.TestSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2280936Z test_serialization_filelike_stress (__main__.TestSerialization) ... ok (0.595s) 2023-01-11T21:45:18.2281258Z test_serialization_filelike_uses_readinto (__main__.TestSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2281555Z test_serialization_gzip (__main__.TestSerialization) ... ok (0.004s) 2023-01-11T21:45:18.2281859Z test_serialization_map_location (__main__.TestSerialization) ... ok (0.004s) 2023-01-11T21:45:18.2282188Z test_serialization_math_bits_weights_only_False (__main__.TestSerialization) ... ok (0.002s) 2023-01-11T21:45:18.2282530Z test_serialization_math_bits_weights_only_True (__main__.TestSerialization) ... ok (0.002s) 2023-01-11T21:45:18.2282838Z test_serialization_offset_gzip (__main__.TestSerialization) ... ok (0.002s) 2023-01-11T21:45:18.2283146Z test_serialization_python_attr (__main__.TestSerialization) ... ok (0.002s) 2023-01-11T21:45:18.2283440Z test_serialization_safe (__main__.TestSerialization) ... ok (0.008s) 2023-01-11T21:45:18.2283728Z test_serialization_save_warnings (__main__.TestSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2284376Z test_serialization_sparse (__main__.TestSerialization) ... 
/var/lib/jenkins/workspace/test/test_serialization.py:85: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2284982Z b += [a[0].storage()] # 4 2023-01-11T21:45:18.2285508Z /var/lib/jenkins/workspace/test/test_serialization.py:86: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2286080Z b += [a[0].reshape(-1)[1:4].storage()] # 5 2023-01-11T21:45:18.2286608Z /var/lib/jenkins/workspace/test/test_serialization.py:88: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2287243Z t1 = torch.FloatTensor().set_(a[0].reshape(-1)[1:4].clone().storage(), 0, (3,), (1,)) 2023-01-11T21:45:18.2287860Z /var/lib/jenkins/workspace/test/test_serialization.py:89: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2288482Z t2 = torch.FloatTensor().set_(a[0].reshape(-1)[1:4].clone().storage(), 0, (3,), (1,)) 2023-01-11T21:45:18.2289050Z /var/lib/jenkins/workspace/test/test_serialization.py:90: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2289576Z b += [(t1.storage(), t1.storage(), t2.storage())] # 7 2023-01-11T21:45:18.2290108Z /var/lib/jenkins/workspace/test/test_serialization.py:91: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2290670Z b += [a[0].reshape(-1)[0:2].storage()] # 8 2023-01-11T21:45:18.2291463Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:1904: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2291994Z device=typed_storage.device, 2023-01-11T21:45:18.2292517Z /var/lib/jenkins/workspace/test/test_serialization.py:104: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2293232Z self.assertEqual(c[4], torch.FloatStorage(25).fill_(10), atol=0, rtol=0) 2023-01-11T21:45:18.2293810Z /var/lib/jenkins/workspace/test/test_serialization.py:110: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2294327Z self.assertEqual(c[4][i + 1], c[5][i]) 2023-01-11T21:45:18.2294864Z /var/lib/jenkins/workspace/test/test_serialization.py:120: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2295451Z self.assertEqual(rootview.data_ptr(), c[0].data_ptr()) 2023-01-11T21:45:18.2295649Z ok (0.007s) 2023-01-11T21:45:18.2295896Z test_serialization_sparse_bsc_invalid (__main__.TestSerialization) ... ok (0.007s) 2023-01-11T21:45:18.2296220Z test_serialization_sparse_bsr_invalid (__main__.TestSerialization) ... ok (0.007s) 2023-01-11T21:45:18.2296532Z test_serialization_sparse_csc_invalid (__main__.TestSerialization) ... ok (0.007s) 2023-01-11T21:45:18.2296857Z test_serialization_sparse_csr_invalid (__main__.TestSerialization) ... ok (0.007s) 2023-01-11T21:45:18.2297173Z test_serialization_sparse_invalid (__main__.TestSerialization) ... ok (0.004s) 2023-01-11T21:45:18.2298126Z test_serialization_sparse_safe (__main__.TestSerialization) ... /opt/conda/lib/python3.10/site-packages/torch/_utils.py:768: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2298706Z return self.fget.__get__(instance, owner)() 2023-01-11T21:45:18.2298903Z ok (0.008s) 2023-01-11T21:45:18.2299478Z test_serialization_storage_slice (__main__.TestSerialization) ... /var/lib/jenkins/workspace/test/test_serialization.py:651: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2300042Z self.assertEqual(s1[0], 0) 2023-01-11T21:45:18.2300561Z /var/lib/jenkins/workspace/test/test_serialization.py:652: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2301060Z self.assertEqual(s2[0], 0) 2023-01-11T21:45:18.2301592Z /var/lib/jenkins/workspace/test/test_serialization.py:653: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:45:18.2302128Z self.assertEqual(s1.data_ptr() + 4, s2.data_ptr()) 2023-01-11T21:45:18.2302460Z ok (0.001s) 2023-01-11T21:45:18.2303147Z test_serialization_zipfile_actually_jit (__main__.TestSerialization) ... /opt/conda/lib/python3.10/site-packages/torch/serialization.py:801: UserWarning: 'torch.load' received a zip file that looks like a TorchScript archive dispatching to 'torch.jit.load' (call 'torch.jit.load' directly to silence this warning) 2023-01-11T21:45:18.2303779Z warnings.warn("'torch.load' received a zip file that looks like a TorchScript archive" 2023-01-11T21:45:18.2304026Z ok (0.012s) 2023-01-11T21:45:18.2304266Z test_serialization_zipfile_utils (__main__.TestSerialization) ... ok (0.004s) 2023-01-11T21:45:18.2304581Z test_serialization_zipfile_weights_only_False (__main__.TestSerialization) ... ok (0.006s) 2023-01-11T21:45:18.2304921Z test_serialization_zipfile_weights_only_True (__main__.TestSerialization) ... ok (0.010s) 2023-01-11T21:45:18.2305233Z test_serialize_device (__main__.TestSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2305513Z test_weights_only_assert (__main__.TestSerialization) ... Hello World! 2023-01-11T21:45:18.2305740Z ok (0.001s) 2023-01-11T21:45:18.2306010Z test_cloned_deepcopy_requires_grad_False (__main__.TestSubclassSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2306356Z test_cloned_deepcopy_requires_grad_True (__main__.TestSubclassSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2306793Z test_empty_class_serialization (__main__.TestSubclassSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2307130Z test_tensor_subclass_deepcopy (__main__.TestSubclassSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2307476Z test_tensor_subclass_getstate_overwrite (__main__.TestSubclassSerialization) ... ok (0.002s) 2023-01-11T21:45:18.2307823Z test_tensor_subclass_wrapper_serialization (__main__.TestSubclassSerialization) ... ok (0.001s) 2023-01-11T21:45:18.2308024Z 2023-01-11T21:45:18.2308227Z ---------------------------------------------------------------------- 2023-01-11T21:45:18.2308469Z Ran 91 tests in 45.498s 2023-01-11T21:45:18.2308583Z 2023-01-11T21:45:18.2308654Z OK (skipped=4) 2023-01-11T21:45:18.2308748Z 2023-01-11T21:45:18.2308833Z Generating XML reports... 2023-01-11T21:45:18.2309263Z Generated XML report: test-reports/python-unittest/test_serialization/TEST-TestOldSerialization-20230111214432.xml 2023-01-11T21:45:18.2309853Z Generated XML report: test-reports/python-unittest/test_serialization/TEST-TestSerialization-20230111214432.xml 2023-01-11T21:45:18.2310397Z Generated XML report: test-reports/python-unittest/test_serialization/TEST-TestSubclassSerialization-20230111214432.xml 2023-01-11T21:45:18.2310657Z 2023-01-11T21:45:18.2311054Z ##[endgroup] 2023-01-11T21:45:18.2311452Z FINISHED PRINTING LOG FILE of test_serialization (/var/lib/jenkins/workspace/test/test-reports/test_serialization_vschjqal) 2023-01-11T21:45:18.2311677Z 2023-01-11T21:45:20.2080474Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:45:20.3056028Z Ignoring disabled issues: [] 2023-01-11T21:45:20.4684953Z Running test_set_default_mobile_cpu_allocator ... [2023-01-11 21:45:20.468133] 2023-01-11T21:45:20.4686041Z Executing ['/opt/conda/bin/python', '-bb', 'test_set_default_mobile_cpu_allocator.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... 
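The TypedStorage UserWarnings repeated throughout the test_serialization log above all point at the same migration spelled out in the warning text: call Tensor.untyped_storage() instead of Tensor.storage(). The *_weights_only_True/False test variants in the same group exercise the weights_only flag of torch.load. A minimal sketch of both, assuming a PyTorch build recent enough to match the warning text (the names t and buf are illustrative and do not come from the tests):

import io
import torch

t = torch.arange(4, dtype=torch.float32)

# Deprecated accessor: returns a TypedStorage and emits the UserWarning seen above.
typed = t.storage()
# Replacement suggested by the warning text: the underlying UntypedStorage.
untyped = t.untyped_storage()

# weights_only=True restricts torch.load to a safe set of tensor and primitive
# types, which is the behavior the *_weights_only_True test variants exercise.
buf = io.BytesIO()
torch.save(t, buf)
buf.seek(0)
loaded = torch.load(buf, weights_only=True)
print(type(typed).__name__, type(untyped).__name__, loaded)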
[2023-01-11 21:45:20.468399] 2023-01-11T21:45:22.4960114Z 2023-01-11T21:45:22.4960755Z Expand the folded group to see the log file of test_set_default_mobile_cpu_allocator 2023-01-11T21:45:22.4961882Z ##[group]PRINTING LOG FILE of test_set_default_mobile_cpu_allocator (/var/lib/jenkins/workspace/test/test-reports/test_set_default_mobile_cpu_allocator_1n51_p9l) 2023-01-11T21:45:22.4962300Z 2023-01-11T21:45:22.4962409Z Running tests... 2023-01-11T21:45:22.4963014Z ---------------------------------------------------------------------- 2023-01-11T21:45:22.4963737Z Test results will be stored in test-reports/python-unittest/test_set_default_mobile_cpu_allocator 2023-01-11T21:45:22.4964364Z test_exception (__main__.TestSetDefaultMobileCPUAllocator) ... ok (0.240s) 2023-01-11T21:45:22.4964993Z test_no_exception (__main__.TestSetDefaultMobileCPUAllocator) ... ok (0.001s) 2023-01-11T21:45:22.4965342Z 2023-01-11T21:45:22.4965624Z ---------------------------------------------------------------------- 2023-01-11T21:45:22.4965876Z Ran 2 tests in 0.241s 2023-01-11T21:45:22.4965977Z 2023-01-11T21:45:22.4966039Z OK 2023-01-11T21:45:22.4966136Z 2023-01-11T21:45:22.4966224Z Generating XML reports... 2023-01-11T21:45:22.4966738Z Generated XML report: test-reports/python-unittest/test_set_default_mobile_cpu_allocator/TEST-TestSetDefaultMobileCPUAllocator-20230111214521.xml 2023-01-11T21:45:22.4967039Z 2023-01-11T21:45:22.4967257Z ##[endgroup] 2023-01-11T21:45:22.4967715Z FINISHED PRINTING LOG FILE of test_set_default_mobile_cpu_allocator (/var/lib/jenkins/workspace/test/test-reports/test_set_default_mobile_cpu_allocator_1n51_p9l) 2023-01-11T21:45:22.4967971Z 2023-01-11T21:45:24.3382012Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:45:24.4034748Z Ignoring disabled issues: [] 2023-01-11T21:45:24.4202543Z Running test_shape_ops ... [2023-01-11 21:45:24.419956] 2023-01-11T21:45:24.4203859Z Executing ['/opt/conda/bin/python', '-bb', 'test_shape_ops.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:45:24.420214] 2023-01-11T21:45:26.2855443Z 2023-01-11T21:45:26.2855966Z Expand the folded group to see the log file of test_shape_ops 2023-01-11T21:45:26.2857308Z ##[group]PRINTING LOG FILE of test_shape_ops (/var/lib/jenkins/workspace/test/test-reports/test_shape_ops_2z6w1ps7) 2023-01-11T21:45:26.2857726Z 2023-01-11T21:45:26.2857837Z Running tests... 2023-01-11T21:45:26.2858491Z ---------------------------------------------------------------------- 2023-01-11T21:45:26.2858801Z 2023-01-11T21:45:26.2859150Z ---------------------------------------------------------------------- 2023-01-11T21:45:26.2859547Z Ran 0 tests in 0.000s 2023-01-11T21:45:26.2859754Z 2023-01-11T21:45:26.2859876Z OK 2023-01-11T21:45:26.2860043Z 2023-01-11T21:45:26.2860189Z Generating XML reports... 2023-01-11T21:45:26.2860536Z Test results will be stored in test-reports/python-unittest/test_shape_ops 2023-01-11T21:45:26.2860717Z 2023-01-11T21:45:26.2860930Z ##[endgroup] 2023-01-11T21:45:26.2861326Z FINISHED PRINTING LOG FILE of test_shape_ops (/var/lib/jenkins/workspace/test/test-reports/test_shape_ops_2z6w1ps7) 2023-01-11T21:45:26.2861628Z 2023-01-11T21:45:28.1667521Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:45:28.2504696Z Ignoring disabled issues: [] 2023-01-11T21:45:28.2670981Z Running test_subclass ... 
[2023-01-11 21:45:28.266773] 2023-01-11T21:45:28.2672519Z Executing ['/opt/conda/bin/python', '-bb', 'test_subclass.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:45:28.267025] 2023-01-11T21:45:30.4314334Z 2023-01-11T21:45:30.4315409Z Expand the folded group to see the log file of test_subclass 2023-01-11T21:45:30.4316614Z ##[group]PRINTING LOG FILE of test_subclass (/var/lib/jenkins/workspace/test/test-reports/test_subclass_k6dkwalo) 2023-01-11T21:45:30.4316979Z 2023-01-11T21:45:30.4317135Z Running tests... 2023-01-11T21:45:30.4317781Z ---------------------------------------------------------------------- 2023-01-11T21:45:30.4318472Z Test results will be stored in test-reports/python-unittest/test_subclass 2023-01-11T21:45:30.4319015Z test_deepcopy_base_tensor_as_param_False (__main__.TestSubclass) ... ok (0.236s) 2023-01-11T21:45:30.4319525Z test_deepcopy_base_tensor_as_param_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4320108Z test_deepcopy_diag_tensor_below_as_param_False (__main__.TestSubclass) ... ok (0.003s) 2023-01-11T21:45:30.4320698Z test_deepcopy_diag_tensor_below_as_param_True (__main__.TestSubclass) ... ok (0.002s) 2023-01-11T21:45:30.4321284Z test_deepcopy_logging_tensor_as_param_False (__main__.TestSubclass) ... ok (0.002s) 2023-01-11T21:45:30.4321846Z test_deepcopy_logging_tensor_as_param_True (__main__.TestSubclass) ... ok (0.002s) 2023-01-11T21:45:30.4322273Z test_deepcopy_non_wrapper_tensor_as_param_False (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4322600Z test_deepcopy_non_wrapper_tensor_as_param_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4322905Z test_deepcopy_sparse_tensor_as_param_False (__main__.TestSubclass) ... ok (0.003s) 2023-01-11T21:45:30.4323222Z test_deepcopy_sparse_tensor_as_param_True (__main__.TestSubclass) ... ok (0.003s) 2023-01-11T21:45:30.4323537Z test_lazy_module_base_tensor (__main__.TestSubclass) ... expected failure (0.001s) 2023-01-11T21:45:30.4324236Z test_lazy_module_diag_tensor_below (__main__.TestSubclass) ... /opt/conda/lib/python3.10/site-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment. 2023-01-11T21:45:30.4324784Z warnings.warn('Lazy modules are a new feature under heavy development ' 2023-01-11T21:45:30.4325021Z expected failure (0.002s) 2023-01-11T21:45:30.4325282Z test_lazy_module_logging_tensor (__main__.TestSubclass) ... expected failure (0.001s) 2023-01-11T21:45:30.4325609Z test_lazy_module_non_wrapper_tensor (__main__.TestSubclass) ... expected failure (0.001s) 2023-01-11T21:45:30.4325919Z test_lazy_module_sparse_tensor (__main__.TestSubclass) ... expected failure (0.002s) 2023-01-11T21:45:30.4326437Z test_module_optimization_base_tensor (__main__.TestSubclass) ... ok (0.003s) 2023-01-11T21:45:30.4326747Z test_module_optimization_diag_tensor_below (__main__.TestSubclass) ... ok (0.006s) 2023-01-11T21:45:30.4327059Z test_module_optimization_logging_tensor (__main__.TestSubclass) ... ok (0.004s) 2023-01-11T21:45:30.4327359Z test_module_optimization_non_wrapper_tensor (__main__.TestSubclass) ... ok (0.002s) 2023-01-11T21:45:30.4327669Z test_module_optimization_sparse_tensor (__main__.TestSubclass) ... ok (0.007s) 2023-01-11T21:45:30.4328019Z test_non_rewrapping_torch_dispatch_subclass_as_parameter_throws_for_detach (__main__.TestSubclass) ... 
ok (0.001s) 2023-01-11T21:45:30.4328374Z test_param_invariants_base_tensor_tensor_requires_grad_False (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4328721Z test_param_invariants_base_tensor_tensor_requires_grad_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4329136Z test_param_invariants_diag_tensor_below_tensor_requires_grad_False (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4329530Z test_param_invariants_diag_tensor_below_tensor_requires_grad_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4329882Z test_param_invariants_logging_tensor_tensor_requires_grad_False (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4330220Z test_param_invariants_logging_tensor_tensor_requires_grad_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4330578Z test_param_invariants_non_wrapper_tensor_tensor_requires_grad_False (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4330944Z test_param_invariants_non_wrapper_tensor_tensor_requires_grad_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4331285Z test_param_invariants_sparse_tensor_tensor_requires_grad_False (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4331640Z test_param_invariants_sparse_tensor_tensor_requires_grad_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4332000Z test_parametrization_base_tensor_leave_parametrized_False (__main__.TestSubclass) ... ok (0.002s) 2023-01-11T21:45:30.4332354Z test_parametrization_base_tensor_leave_parametrized_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4332699Z test_parametrization_diag_tensor_below_leave_parametrized_False (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4333062Z test_parametrization_diag_tensor_below_leave_parametrized_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4333423Z test_parametrization_logging_tensor_leave_parametrized_False (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4333783Z test_parametrization_logging_tensor_leave_parametrized_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4334137Z test_parametrization_non_wrapper_tensor_leave_parametrized_False (__main__.TestSubclass) ... ok (0.002s) 2023-01-11T21:45:30.4334506Z test_parametrization_non_wrapper_tensor_leave_parametrized_True (__main__.TestSubclass) ... ok (0.002s) 2023-01-11T21:45:30.4334870Z test_parametrization_sparse_tensor_leave_parametrized_False (__main__.TestSubclass) ... ok (0.002s) 2023-01-11T21:45:30.4335225Z test_parametrization_sparse_tensor_leave_parametrized_True (__main__.TestSubclass) ... ok (0.003s) 2023-01-11T21:45:30.4335536Z test_repr_base_tensor_as_param_False (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4335829Z test_repr_base_tensor_as_param_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4336132Z test_repr_diag_tensor_below_as_param_False (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4336426Z test_repr_diag_tensor_below_as_param_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4336731Z test_repr_logging_tensor_as_param_False (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4337038Z test_repr_logging_tensor_as_param_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4337408Z test_repr_non_wrapper_tensor_as_param_False (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4337853Z test_repr_non_wrapper_tensor_as_param_True (__main__.TestSubclass) ... 
ok (0.001s) 2023-01-11T21:45:30.4338250Z test_repr_sparse_tensor_as_param_False (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4338638Z test_repr_sparse_tensor_as_param_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4339024Z test_serialization_base_tensor_as_param_False (__main__.TestSubclass) ... ok (0.002s) 2023-01-11T21:45:30.4339433Z test_serialization_base_tensor_as_param_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4339855Z test_serialization_diag_tensor_below_as_param_False (__main__.TestSubclass) ... ok (0.003s) 2023-01-11T21:45:30.4340284Z test_serialization_diag_tensor_below_as_param_True (__main__.TestSubclass) ... ok (0.003s) 2023-01-11T21:45:30.4340692Z test_serialization_logging_tensor_as_param_False (__main__.TestSubclass) ... ok (0.002s) 2023-01-11T21:45:30.4341149Z test_serialization_logging_tensor_as_param_True (__main__.TestSubclass) ... ok (0.002s) 2023-01-11T21:45:30.4341578Z test_serialization_non_wrapper_tensor_as_param_False (__main__.TestSubclass) ... ok (0.002s) 2023-01-11T21:45:30.4341994Z test_serialization_non_wrapper_tensor_as_param_True (__main__.TestSubclass) ... ok (0.002s) 2023-01-11T21:45:30.4342564Z test_serialization_sparse_tensor_as_param_False (__main__.TestSubclass) ... ok (0.003s) 2023-01-11T21:45:30.4342985Z test_serialization_sparse_tensor_as_param_True (__main__.TestSubclass) ... ok (0.003s) 2023-01-11T21:45:30.4343402Z test_type_propagation_base_tensor_as_param_False (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4343811Z test_type_propagation_base_tensor_as_param_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4344239Z test_type_propagation_diag_tensor_below_as_param_False (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4344675Z test_type_propagation_diag_tensor_below_as_param_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4345118Z test_type_propagation_logging_tensor_as_param_False (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4345535Z test_type_propagation_logging_tensor_as_param_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4345967Z test_type_propagation_non_wrapper_tensor_as_param_False (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4346405Z test_type_propagation_non_wrapper_tensor_as_param_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4346828Z test_type_propagation_sparse_tensor_as_param_False (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4347247Z test_type_propagation_sparse_tensor_as_param_True (__main__.TestSubclass) ... ok (0.001s) 2023-01-11T21:45:30.4347480Z 2023-01-11T21:45:30.4347776Z ---------------------------------------------------------------------- 2023-01-11T21:45:30.4348083Z Ran 71 tests in 0.357s 2023-01-11T21:45:30.4348215Z 2023-01-11T21:45:30.4348318Z OK (expected failures=5) 2023-01-11T21:45:30.4348474Z 2023-01-11T21:45:30.4348578Z Generating XML reports... 2023-01-11T21:45:30.4349099Z Generated XML report: test-reports/python-unittest/test_subclass/TEST-TestSubclass-20230111214529.xml 2023-01-11T21:45:30.4349385Z 2023-01-11T21:45:30.4349690Z ##[endgroup] 2023-01-11T21:45:30.4350189Z FINISHED PRINTING LOG FILE of test_subclass (/var/lib/jenkins/workspace/test/test-reports/test_subclass_k6dkwalo) 2023-01-11T21:45:30.4350463Z 2023-01-11T21:45:32.3921135Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:45:32.4564751Z Ignoring disabled issues: [] 2023-01-11T21:45:32.4730507Z Running test_tensorboard ... 
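The test_subclass group above checks that deepcopy, repr, and serialization round-trips preserve tensor-subclass behavior. A minimal sketch of the deepcopy case, assuming a plain (non-wrapper) subclass; MyTensor is a hypothetical stand-in, not one of the subclasses used by the tests:

import copy
import torch

class MyTensor(torch.Tensor):
    # Hypothetical plain subclass; the tests above use richer wrapper subclasses.
    pass

t = torch.randn(2, 2).as_subclass(MyTensor)
u = copy.deepcopy(t)
# deepcopy is expected to preserve the subclass type, mirroring the
# test_deepcopy_* variants in the group above.
print(type(t) is MyTensor, type(u) is MyTensor)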
[2023-01-11 21:45:32.472754] 2023-01-11T21:45:32.4732325Z Executing ['/opt/conda/bin/python', '-bb', 'test_tensorboard.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:45:32.473008] 2023-01-11T21:46:03.5770678Z 2023-01-11T21:46:03.5771204Z Expand the folded group to see the log file of test_tensorboard 2023-01-11T21:46:03.5775675Z ##[group]PRINTING LOG FILE of test_tensorboard (/var/lib/jenkins/workspace/test/test-reports/test_tensorboard_42urr5_2) 2023-01-11T21:46:03.5776192Z 2023-01-11T21:46:03.5776331Z Running tests... 2023-01-11T21:46:03.5776800Z ---------------------------------------------------------------------- 2023-01-11T21:46:03.5777194Z Test results will be stored in test-reports/python-unittest/test_tensorboard 2023-01-11T21:46:03.5777592Z test_embedding (__main__.TestTensorBoardEmbedding) ... warning: Embedding dir exists, did you set global_step for add_embedding()? 2023-01-11T21:46:03.5777872Z ok (0.038s) 2023-01-11T21:46:03.5778184Z test_embedding_64 (__main__.TestTensorBoardEmbedding) ... warning: Embedding dir exists, did you set global_step for add_embedding()? 2023-01-11T21:46:03.5778477Z ok (0.023s) 2023-01-11T21:46:03.5778723Z test_figure (__main__.TestTensorBoardFigure) ... skip: no matplotlib (0.001s) 2023-01-11T21:46:03.5779039Z test_figure_list (__main__.TestTensorBoardFigure) ... skip: no matplotlib (0.001s) 2023-01-11T21:46:03.5779492Z test_caffe2_np (__main__.TestTensorBoardNumpy) ... skip: no caffe2 (0.000s) 2023-01-11T21:46:03.5779817Z test_caffe2_np_expect_fail (__main__.TestTensorBoardNumpy) ... skip: no caffe2 (0.000s) 2023-01-11T21:46:03.5780138Z test_caffe2_simple_cnnmodel (__main__.TestTensorBoardNumpy) ... skip: no caffe2 (0.001s) 2023-01-11T21:46:03.5780471Z test_caffe2_simple_model (__main__.TestTensorBoardNumpy) ... skip: no caffe2 (0.001s) 2023-01-11T21:46:03.5780793Z test_pytorch_np_expect_fail (__main__.TestTensorBoardNumpy) ... ok (0.000s) 2023-01-11T21:46:03.5781088Z test_scalar (__main__.TestTensorBoardNumpy) ... ok (0.001s) 2023-01-11T21:46:03.5781387Z test_pytorch_autograd_np (__main__.TestTensorBoardPyTorchNumpy) ... ok (0.000s) 2023-01-11T21:46:03.5781724Z test_pytorch_histogram (__main__.TestTensorBoardPyTorchNumpy) ... ok (0.004s) 2023-01-11T21:46:03.5782064Z test_pytorch_histogram_raw (__main__.TestTensorBoardPyTorchNumpy) ... ok (0.006s) 2023-01-11T21:46:03.5782595Z test_pytorch_np (__main__.TestTensorBoardPyTorchNumpy) ... ok (0.001s) 2023-01-11T21:46:03.5782922Z test_pytorch_write (__main__.TestTensorBoardPyTorchNumpy) ... ok (0.003s) 2023-01-11T21:46:03.5783242Z test_mlp_graph (__main__.TestTensorBoardPytorchGraph) ... ok (0.135s) 2023-01-11T21:46:03.5783568Z test_nested_nn_squential (__main__.TestTensorBoardPytorchGraph) ... ok (0.068s) 2023-01-11T21:46:03.5783884Z test_pytorch_graph (__main__.TestTensorBoardPytorchGraph) ... ok (0.020s) 2023-01-11T21:46:03.5784924Z test_pytorch_graph_dict_input (__main__.TestTensorBoardPytorchGraph) ... Encountering a dict at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for `list`, use a `tuple` instead. for `dict`, use a `NamedTuple` instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior. 
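The tracer warning printed by test_pytorch_graph_dict_input spells out its own workaround: return a fixed container (a tuple or NamedTuple), or pass strict=False to torch.jit.trace when a dict output is intentional. A minimal sketch of the strict=False path, using a hypothetical toy module (DictOut is not taken from the test):

import torch
import torch.nn as nn

class DictOut(nn.Module):
    def forward(self, x):
        # A dict output triggers the tracer warning above unless strict=False is passed.
        return {"out": x * 2}

m = DictOut()
x = torch.randn(3)
traced = torch.jit.trace(m, (x,), strict=False)
print(traced(x))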
2023-01-11T21:46:03.5785565Z Error occurs, No graph saved 2023-01-11T21:46:03.5785755Z ok (0.030s) 2023-01-11T21:46:03.5786010Z test_torchvision_smoke (__main__.TestTensorBoardPytorchGraph) ... ok (28.234s) 2023-01-11T21:46:03.5786380Z test_wrong_input_size (__main__.TestTensorBoardPytorchGraph) ... mat1 and mat2 shapes cannot be multiplied (1x9 and 3x5) 2023-01-11T21:46:03.5786685Z Error occurs, No graph saved 2023-01-11T21:46:03.5786872Z ok (0.012s) 2023-01-11T21:46:03.5787134Z test_audio (__main__.TestTensorBoardSummary) ... warning: audio amplitude out of range, auto clipped. 2023-01-11T21:46:03.5787398Z ok (0.013s) 2023-01-11T21:46:03.5787635Z test_custom_scalars (__main__.TestTensorBoardSummary) ... ok (0.001s) 2023-01-11T21:46:03.5787921Z test_empty_input (__main__.TestTensorBoardSummary) ... ok (0.000s) 2023-01-11T21:46:03.5788206Z test_float32_image (__main__.TestTensorBoardSummary) 2023-01-11T21:46:03.5788508Z Tests that float32 image (pixel values in [0, 1]) are scaled correctly ... ok (0.001s) 2023-01-11T21:46:03.5788907Z test_histogram_auto (__main__.TestTensorBoardSummary) ... ok (0.002s) 2023-01-11T21:46:03.5789195Z test_histogram_doane (__main__.TestTensorBoardSummary) ... ok (0.002s) 2023-01-11T21:46:03.5789499Z test_histogram_fd (__main__.TestTensorBoardSummary) ... ok (0.001s) 2023-01-11T21:46:03.5789797Z test_hparams_bool (__main__.TestTensorBoardSummary) ... ok (0.004s) 2023-01-11T21:46:03.5790090Z test_hparams_domain_discrete (__main__.TestTensorBoardSummary) ... ok (0.001s) 2023-01-11T21:46:03.5790404Z test_hparams_number (__main__.TestTensorBoardSummary) ... ok (0.001s) 2023-01-11T21:46:03.5790704Z test_hparams_smoke (__main__.TestTensorBoardSummary) ... ok (0.001s) 2023-01-11T21:46:03.5790999Z test_hparams_string (__main__.TestTensorBoardSummary) ... ok (0.001s) 2023-01-11T21:46:03.5791359Z test_hparams_wrong_parameter (__main__.TestTensorBoardSummary) ... parameter: hparam_dict should be a dictionary, nothing logged. 2023-01-11T21:46:03.5791769Z parameter: metric_dict should be a dictionary, nothing logged. 2023-01-11T21:46:03.5791987Z ok (0.017s) 2023-01-11T21:46:03.5792221Z test_image_with_3_channel_batched (__main__.TestTensorBoardSummary) ... ok (0.002s) 2023-01-11T21:46:03.5792531Z test_image_with_boxes (__main__.TestTensorBoardSummary) ... ok (0.002s) 2023-01-11T21:46:03.5792840Z test_image_with_one_channel (__main__.TestTensorBoardSummary) ... ok (0.002s) 2023-01-11T21:46:03.5793163Z test_image_with_one_channel_batched (__main__.TestTensorBoardSummary) ... ok (0.001s) 2023-01-11T21:46:03.5793477Z test_image_without_channel (__main__.TestTensorBoardSummary) ... ok (0.001s) 2023-01-11T21:46:03.5793778Z test_list_input (__main__.TestTensorBoardSummary) ... ok (0.000s) 2023-01-11T21:46:03.5794064Z test_mesh (__main__.TestTensorBoardSummary) ... ok (0.003s) 2023-01-11T21:46:03.5794343Z test_scalar_new_style (__main__.TestTensorBoardSummary) ... ok (0.001s) 2023-01-11T21:46:03.5794630Z test_text (__main__.TestTensorBoardSummary) ... ok (0.001s) 2023-01-11T21:46:03.5794963Z test_uint8_image (__main__.TestTensorBoardSummary) 2023-01-11T21:46:03.5795237Z Tests that uint8 image (pixel values in [0, 255]) is not changed ... ok (0.000s) 2023-01-11T21:46:03.5795525Z test_video (__main__.TestTensorBoardSummary) ... ok (0.001s) 2023-01-11T21:46:03.5795823Z test_pathlib (__main__.TestTensorBoardSummaryWriter) ... ok (0.003s) 2023-01-11T21:46:03.5796149Z test_summary_writer_close (__main__.TestTensorBoardSummaryWriter) ... 
ok (0.003s) 2023-01-11T21:46:03.5796470Z test_summary_writer_ctx (__main__.TestTensorBoardSummaryWriter) ... ok (0.004s) 2023-01-11T21:46:03.5796800Z test_convert_to_HWC_dtype_remains_same (__main__.TestTensorBoardUtils) ... ok (0.001s) 2023-01-11T21:46:03.5797111Z test_numpy_vid_uint8 (__main__.TestTensorBoardUtils) ... ok (0.015s) 2023-01-11T21:46:03.5797391Z test_prepare_video (__main__.TestTensorBoardUtils) ... ok (0.086s) 2023-01-11T21:46:03.5797672Z test_to_HWC (__main__.TestTensorBoardUtils) ... ok (0.001s) 2023-01-11T21:46:03.5797972Z test_writer (__main__.TestTensorBoardWriter) ... add_video needs package moviepy 2023-01-11T21:46:03.5798210Z ok (0.057s) 2023-01-11T21:46:03.5798299Z 2023-01-11T21:46:03.5798516Z ---------------------------------------------------------------------- 2023-01-11T21:46:03.5798754Z Ran 53 tests in 28.810s 2023-01-11T21:46:03.5798869Z 2023-01-11T21:46:03.5798939Z OK (skipped=6) 2023-01-11T21:46:03.5799044Z 2023-01-11T21:46:03.5799115Z Generating XML reports... 2023-01-11T21:46:03.5799569Z Generated XML report: test-reports/python-unittest/test_tensorboard/TEST-TestTensorBoardEmbedding-20230111214534.xml 2023-01-11T21:46:03.5800119Z Generated XML report: test-reports/python-unittest/test_tensorboard/TEST-TestTensorBoardNumpy-20230111214534.xml 2023-01-11T21:46:03.5800677Z Generated XML report: test-reports/python-unittest/test_tensorboard/TEST-TestTensorBoardPyTorchNumpy-20230111214534.xml 2023-01-11T21:46:03.5801251Z Generated XML report: test-reports/python-unittest/test_tensorboard/TEST-TestTensorBoardPytorchGraph-20230111214534.xml 2023-01-11T21:46:03.5801861Z Generated XML report: test-reports/python-unittest/test_tensorboard/TEST-TestTensorBoardSummary-20230111214534.xml 2023-01-11T21:46:03.5802433Z Generated XML report: test-reports/python-unittest/test_tensorboard/TEST-TestTensorBoardSummaryWriter-20230111214534.xml 2023-01-11T21:46:03.5802988Z Generated XML report: test-reports/python-unittest/test_tensorboard/TEST-TestTensorBoardUtils-20230111214534.xml 2023-01-11T21:46:03.5803509Z Generated XML report: test-reports/python-unittest/test_tensorboard/TEST-TestTensorBoardWriter-20230111214534.xml 2023-01-11T21:46:03.5804038Z Generated XML report: test-reports/python-unittest/test_tensorboard/TEST-TestTensorBoardFigure-20230111214534.xml 2023-01-11T21:46:03.5804284Z 2023-01-11T21:46:03.5804562Z ##[endgroup] 2023-01-11T21:46:03.5804934Z FINISHED PRINTING LOG FILE of test_tensorboard (/var/lib/jenkins/workspace/test/test-reports/test_tensorboard_42urr5_2) 2023-01-11T21:46:03.5805159Z 2023-01-11T21:46:05.5337732Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:46:05.6184925Z Ignoring disabled issues: [] 2023-01-11T21:46:05.6352504Z Running test_tensorexpr_pybind ... [2023-01-11 21:46:05.634948] 2023-01-11T21:46:05.6354692Z Executing ['/opt/conda/bin/python', '-bb', 'test_tensorexpr_pybind.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:46:05.635235] 2023-01-11T21:46:08.0947804Z 2023-01-11T21:46:08.0948892Z Expand the folded group to see the log file of test_tensorexpr_pybind 2023-01-11T21:46:08.0950060Z ##[group]PRINTING LOG FILE of test_tensorexpr_pybind (/var/lib/jenkins/workspace/test/test-reports/test_tensorexpr_pybind_fknumtnl) 2023-01-11T21:46:08.0984310Z 2023-01-11T21:46:08.0986786Z Running tests... 
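Two messages in the test_tensorboard log above point at optional arguments and dependencies of torch.utils.tensorboard.SummaryWriter: the "Embedding dir exists" warning asks whether global_step was passed to add_embedding(), and the test_writer line notes that add_video needs the moviepy package. A small sketch of the global_step usage; the log directory and data below are invented for illustration:

    import torch
    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter(log_dir="runs/demo")  # hypothetical directory
    feats = torch.randn(100, 16)
    labels = [str(i % 10) for i in range(100)]

    # A distinct global_step per call writes each projector dump under its own
    # step-named subdirectory, which is what the "Embedding dir exists" warning asks about.
    writer.add_embedding(feats, metadata=labels, global_step=0)
    writer.add_embedding(feats + 0.1, metadata=labels, global_step=1)

    # add_video(tag, vid_tensor) additionally requires the optional moviepy package.
    writer.close()
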
2023-01-11T21:46:08.0987505Z ---------------------------------------------------------------------- 2023-01-11T21:46:08.0988423Z Test results will be stored in test-reports/python-unittest/test_tensorexpr_pybind 2023-01-11T21:46:08.0989184Z test_unary_ops (__main__.TestExprHandlePyBind) ... ok (0.239s) 2023-01-11T21:46:08.0989795Z test_alloc_in_loop (__main__.TestTensorExprPyBind) ... ok (0.026s) 2023-01-11T21:46:08.0990378Z test_call_raw (__main__.TestTensorExprPyBind) ... ok (0.002s) 2023-01-11T21:46:08.0990985Z test_dtype_error (__main__.TestTensorExprPyBind) ... ok (0.001s) 2023-01-11T21:46:08.0991615Z test_dynamic_shape (__main__.TestTensorExprPyBind) ... ok (0.002s) 2023-01-11T21:46:08.0992238Z test_dynamic_shape_2d (__main__.TestTensorExprPyBind) ... ok (0.002s) 2023-01-11T21:46:08.0992883Z test_external_calls (__main__.TestTensorExprPyBind) ... ok (0.001s) 2023-01-11T21:46:08.0993528Z test_kernel_shape_prop (__main__.TestTensorExprPyBind) ... ok (0.194s) 2023-01-11T21:46:08.0994187Z test_kernel_shape_prop_module (__main__.TestTensorExprPyBind) ... ok (0.040s) 2023-01-11T21:46:08.0994931Z test_kernel_with_custom_lowering (__main__.TestTensorExprPyBind) ... ok (0.026s) 2023-01-11T21:46:08.0995627Z test_kernel_with_expand (__main__.TestTensorExprPyBind) ... ok (0.026s) 2023-01-11T21:46:08.1184177Z test_kernel_with_permute (__main__.TestTensorExprPyBind) ... ok (0.062s) 2023-01-11T21:46:08.1184745Z test_kernel_with_scalar_inputs (__main__.TestTensorExprPyBind) ... ok (0.022s) 2023-01-11T21:46:08.1185326Z test_kernel_with_t (__main__.TestTensorExprPyBind) ... ok (0.029s) 2023-01-11T21:46:08.1185896Z test_kernel_with_tensor_inputs (__main__.TestTensorExprPyBind) ... ok (0.026s) 2023-01-11T21:46:08.1186452Z test_kernel_with_transpose (__main__.TestTensorExprPyBind) ... ok (0.029s) 2023-01-11T21:46:08.1187006Z test_simple_sum (__main__.TestTensorExprPyBind) ... ok (0.001s) 2023-01-11T21:46:08.1187350Z 2023-01-11T21:46:08.1282212Z ---------------------------------------------------------------------- 2023-01-11T21:46:08.1282577Z Ran 17 tests in 0.729s 2023-01-11T21:46:08.1282763Z 2023-01-11T21:46:08.1282852Z OK 2023-01-11T21:46:08.1282987Z 2023-01-11T21:46:08.1283115Z Generating XML reports... 2023-01-11T21:46:08.1284011Z Generated XML report: test-reports/python-unittest/test_tensorexpr_pybind/TEST-TestExprHandlePyBind-20230111214607.xml 2023-01-11T21:46:08.1284863Z Generated XML report: test-reports/python-unittest/test_tensorexpr_pybind/TEST-TestTensorExprPyBind-20230111214607.xml 2023-01-11T21:46:08.1285235Z 2023-01-11T21:46:08.1285805Z ##[endgroup] 2023-01-11T21:46:08.1286349Z FINISHED PRINTING LOG FILE of test_tensorexpr_pybind (/var/lib/jenkins/workspace/test/test-reports/test_tensorexpr_pybind_fknumtnl) 2023-01-11T21:46:08.1286640Z 2023-01-11T21:46:09.9788396Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:46:10.0466727Z Ignoring disabled issues: [] 2023-01-11T21:46:10.0635375Z Running test_type_hints ... [2023-01-11 21:46:10.063210] 2023-01-11T21:46:10.0636568Z Executing ['/opt/conda/bin/python', '-bb', 'test_type_hints.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:46:10.063450] 2023-01-11T21:46:11.9521132Z 2023-01-11T21:46:11.9521886Z Expand the folded group to see the log file of test_type_hints 2023-01-11T21:46:11.9522694Z ##[group]PRINTING LOG FILE of test_type_hints (/var/lib/jenkins/workspace/test/test-reports/test_type_hints_n0jk_6i6) 2023-01-11T21:46:11.9522994Z 2023-01-11T21:46:11.9524486Z Running tests... 
2023-01-11T21:46:11.9525531Z ---------------------------------------------------------------------- 2023-01-11T21:46:11.9526233Z Test results will be stored in test-reports/python-unittest/test_type_hints 2023-01-11T21:46:11.9526729Z test_doc_examples (__main__.TestTypeHints) 2023-01-11T21:46:11.9527310Z Run documentation examples through mypy. ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s) 2023-01-11T21:46:11.9527685Z 2023-01-11T21:46:11.9528042Z ---------------------------------------------------------------------- 2023-01-11T21:46:11.9528450Z Ran 1 test in 0.001s 2023-01-11T21:46:11.9528655Z 2023-01-11T21:46:11.9528779Z OK (skipped=1) 2023-01-11T21:46:11.9528967Z 2023-01-11T21:46:11.9529141Z Generating XML reports... 2023-01-11T21:46:11.9529840Z Generated XML report: test-reports/python-unittest/test_type_hints/TEST-TestTypeHints-20230111214611.xml 2023-01-11T21:46:11.9530228Z 2023-01-11T21:46:11.9530699Z ##[endgroup] 2023-01-11T21:46:11.9531374Z FINISHED PRINTING LOG FILE of test_type_hints (/var/lib/jenkins/workspace/test/test-reports/test_type_hints_n0jk_6i6) 2023-01-11T21:46:11.9531741Z 2023-01-11T21:46:14.4767086Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:46:14.5584879Z Ignoring disabled issues: [] 2023-01-11T21:46:14.5751790Z Running test_type_info ... [2023-01-11 21:46:14.574855] 2023-01-11T21:46:14.5753121Z Executing ['/opt/conda/bin/python', '-bb', 'test_type_info.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:46:14.575108] 2023-01-11T21:46:16.8700787Z 2023-01-11T21:46:16.8701530Z Expand the folded group to see the log file of test_type_info 2023-01-11T21:46:16.8702713Z ##[group]PRINTING LOG FILE of test_type_info (/var/lib/jenkins/workspace/test/test-reports/test_type_info_w18u_1wg) 2023-01-11T21:46:16.8703128Z 2023-01-11T21:46:16.8703261Z Running tests... 2023-01-11T21:46:16.8703916Z ---------------------------------------------------------------------- 2023-01-11T21:46:16.8704590Z Test results will be stored in test-reports/python-unittest/test_type_info 2023-01-11T21:46:16.8705122Z test_finfo (__main__.TestDTypeInfo) ... ok (0.240s) 2023-01-11T21:46:16.8705556Z test_iinfo (__main__.TestDTypeInfo) ... ok (0.002s) 2023-01-11T21:46:16.8709566Z test_invalid_input (__main__.TestDTypeInfo) ... ok (0.001s) 2023-01-11T21:46:16.8709815Z 2023-01-11T21:46:16.8710188Z ---------------------------------------------------------------------- 2023-01-11T21:46:16.8710631Z Ran 3 tests in 0.243s 2023-01-11T21:46:16.8713092Z 2023-01-11T21:46:16.8713460Z OK 2023-01-11T21:46:16.8713808Z 2023-01-11T21:46:16.8714371Z Generating XML reports... 2023-01-11T21:46:16.8715342Z Generated XML report: test-reports/python-unittest/test_type_info/TEST-TestDTypeInfo-20230111214616.xml 2023-01-11T21:46:16.8716166Z 2023-01-11T21:46:16.8716894Z ##[endgroup] 2023-01-11T21:46:16.8717716Z FINISHED PRINTING LOG FILE of test_type_info (/var/lib/jenkins/workspace/test/test-reports/test_type_info_w18u_1wg) 2023-01-11T21:46:16.8718157Z 2023-01-11T21:46:19.2649538Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:46:19.3518294Z Ignoring disabled issues: [] 2023-01-11T21:46:19.3689098Z Running test_type_promotion ... [2023-01-11 21:46:19.368533] 2023-01-11T21:46:19.3690389Z Executing ['/opt/conda/bin/python', '-bb', 'test_type_promotion.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... 
[2023-01-11 21:46:19.368796] 2023-01-11T21:46:21.7242547Z 2023-01-11T21:46:21.7243139Z Expand the folded group to see the log file of test_type_promotion 2023-01-11T21:46:21.7244326Z ##[group]PRINTING LOG FILE of test_type_promotion (/var/lib/jenkins/workspace/test/test-reports/test_type_promotion_c5ufw1fu) 2023-01-11T21:46:21.7244754Z 2023-01-11T21:46:21.7245137Z Running tests... 2023-01-11T21:46:21.7246135Z ---------------------------------------------------------------------- 2023-01-11T21:46:21.7246438Z 2023-01-11T21:46:21.7246790Z ---------------------------------------------------------------------- 2023-01-11T21:46:21.7247207Z Ran 0 tests in 0.000s 2023-01-11T21:46:21.7247407Z 2023-01-11T21:46:21.7247504Z OK 2023-01-11T21:46:21.7250107Z 2023-01-11T21:46:21.7250599Z Generating XML reports... 2023-01-11T21:46:21.7251453Z Test results will be stored in test-reports/python-unittest/test_type_promotion 2023-01-11T21:46:21.7251818Z 2023-01-11T21:46:21.7252511Z ##[endgroup] 2023-01-11T21:46:21.7253298Z FINISHED PRINTING LOG FILE of test_type_promotion (/var/lib/jenkins/workspace/test/test-reports/test_type_promotion_c5ufw1fu) 2023-01-11T21:46:21.7253710Z 2023-01-11T21:46:23.9360416Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:46:24.0180986Z Ignoring disabled issues: [] 2023-01-11T21:46:24.0351066Z Running test_unary_ufuncs ... [2023-01-11 21:46:24.034616] 2023-01-11T21:46:24.0351929Z Executing ['/opt/conda/bin/python', '-bb', 'test_unary_ufuncs.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:46:24.034890] 2023-01-11T21:46:26.7213802Z 2023-01-11T21:46:26.7214334Z Expand the folded group to see the log file of test_unary_ufuncs 2023-01-11T21:46:26.7215369Z ##[group]PRINTING LOG FILE of test_unary_ufuncs (/var/lib/jenkins/workspace/test/test-reports/test_unary_ufuncs_xlrccb_s) 2023-01-11T21:46:26.7215792Z 2023-01-11T21:46:26.7215923Z Running tests... 2023-01-11T21:46:26.7216602Z ---------------------------------------------------------------------- 2023-01-11T21:46:26.7216898Z 2023-01-11T21:46:26.7217261Z ---------------------------------------------------------------------- 2023-01-11T21:46:26.7217701Z Ran 0 tests in 0.000s 2023-01-11T21:46:26.7217916Z 2023-01-11T21:46:26.7218029Z OK 2023-01-11T21:46:26.7218179Z 2023-01-11T21:46:26.7295835Z Generating XML reports... 2023-01-11T21:46:26.7296542Z Test results will be stored in test-reports/python-unittest/test_unary_ufuncs 2023-01-11T21:46:26.7296810Z 2023-01-11T21:46:26.7297147Z ##[endgroup] 2023-01-11T21:46:26.7297538Z FINISHED PRINTING LOG FILE of test_unary_ufuncs (/var/lib/jenkins/workspace/test/test-reports/test_unary_ufuncs_xlrccb_s) 2023-01-11T21:46:26.7297757Z 2023-01-11T21:46:28.7455986Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:46:28.8318909Z Ignoring disabled issues: [] 2023-01-11T21:46:28.8492651Z Running test_view_ops ... [2023-01-11 21:46:28.848756] 2023-01-11T21:46:28.8493894Z Executing ['/opt/conda/bin/python', '-bb', 'test_view_ops.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:46:28.849036] 2023-01-11T21:46:31.0117047Z 2023-01-11T21:46:31.0117665Z Expand the folded group to see the log file of test_view_ops 2023-01-11T21:46:31.0118716Z ##[group]PRINTING LOG FILE of test_view_ops (/var/lib/jenkins/workspace/test/test-reports/test_view_ops_yvbkbf9w) 2023-01-11T21:46:31.0119403Z 2023-01-11T21:46:31.0119535Z Running tests... 
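For reference, the promotion rules that the test_type_promotion suite above targets are visible through the public torch.promote_types and torch.result_type helpers. A quick illustration, with values chosen here rather than taken from the log, assuming the default dtype is float32:

    import torch

    # Mixed integer/floating inputs promote to the floating dtype.
    assert torch.promote_types(torch.int64, torch.float32) == torch.float32
    # A Python float scalar combined with an integer tensor yields the default float dtype.
    assert torch.result_type(torch.tensor([1], dtype=torch.int32), 1.0) == torch.float32
    out = torch.ones(3, dtype=torch.int64) + torch.ones(3, dtype=torch.float32)
    assert out.dtype == torch.float32
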
2023-01-11T21:46:31.0120163Z ---------------------------------------------------------------------- 2023-01-11T21:46:31.0120450Z 2023-01-11T21:46:31.0120776Z ---------------------------------------------------------------------- 2023-01-11T21:46:31.0121186Z Ran 0 tests in 0.000s 2023-01-11T21:46:31.0121376Z 2023-01-11T21:46:31.0121466Z OK 2023-01-11T21:46:31.0121623Z 2023-01-11T21:46:31.0121760Z Generating XML reports... 2023-01-11T21:46:31.0122300Z Test results will be stored in test-reports/python-unittest/test_view_ops 2023-01-11T21:46:31.0122593Z 2023-01-11T21:46:31.0122972Z ##[endgroup] 2023-01-11T21:46:31.0123616Z FINISHED PRINTING LOG FILE of test_view_ops (/var/lib/jenkins/workspace/test/test-reports/test_view_ops_yvbkbf9w) 2023-01-11T21:46:31.0123962Z 2023-01-11T21:46:33.0006235Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:46:33.0833780Z Ignoring disabled issues: [] 2023-01-11T21:46:33.1005963Z Running test_vulkan ... [2023-01-11 21:46:33.100122] 2023-01-11T21:46:33.1007295Z Executing ['/opt/conda/bin/python', '-bb', 'test_vulkan.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:46:33.100407] 2023-01-11T21:46:34.9922887Z 2023-01-11T21:46:34.9923418Z Expand the folded group to see the log file of test_vulkan 2023-01-11T21:46:34.9924436Z ##[group]PRINTING LOG FILE of test_vulkan (/var/lib/jenkins/workspace/test/test-reports/test_vulkan_enndtzqm) 2023-01-11T21:46:34.9924773Z 2023-01-11T21:46:34.9924904Z Running tests... 2023-01-11T21:46:34.9925523Z ---------------------------------------------------------------------- 2023-01-11T21:46:34.9926296Z test_conv (__main__.TestVulkanRewritePass) ... Test results will be stored in test-reports/python-unittest/test_vulkan 2023-01-11T21:46:34.9926852Z skip: Vulkan backend must be available for these tests. (0.002s) 2023-01-11T21:46:34.9927115Z 2023-01-11T21:46:34.9927473Z ---------------------------------------------------------------------- 2023-01-11T21:46:34.9927878Z Ran 1 test in 0.002s 2023-01-11T21:46:34.9928072Z 2023-01-11T21:46:34.9928195Z OK (skipped=1) 2023-01-11T21:46:34.9928471Z 2023-01-11T21:46:34.9928634Z Generating XML reports... 2023-01-11T21:46:34.9929468Z Generated XML report: test-reports/python-unittest/test_vulkan/TEST-TestVulkanRewritePass-20230111214634.xml 2023-01-11T21:46:34.9929929Z 2023-01-11T21:46:34.9930385Z ##[endgroup] 2023-01-11T21:46:34.9931090Z FINISHED PRINTING LOG FILE of test_vulkan (/var/lib/jenkins/workspace/test/test-reports/test_vulkan_enndtzqm) 2023-01-11T21:46:34.9931494Z 2023-01-11T21:46:37.0603855Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:46:37.1482729Z Ignoring disabled issues: [] 2023-01-11T21:46:37.1657165Z Running test_weak ... [2023-01-11 21:46:37.165329] 2023-01-11T21:46:37.1659231Z Executing ['/opt/conda/bin/python', '-bb', 'test_weak.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:46:37.165604] 2023-01-11T21:46:42.9549324Z 2023-01-11T21:46:42.9551177Z Expand the folded group to see the log file of test_weak 2023-01-11T21:46:42.9552219Z ##[group]PRINTING LOG FILE of test_weak (/var/lib/jenkins/workspace/test/test-reports/test_weak_n_pno514) 2023-01-11T21:46:42.9552594Z 2023-01-11T21:46:42.9552721Z Running tests... 
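The test_weak results that follow come from WeakKeyDictionaryTestCase, which exercises a weak-key mapping modeled on the standard library's weakref.WeakKeyDictionary: entries disappear once the key object has no remaining strong references. A standard-library-only sketch of that behavior, independent of PyTorch's own implementation:

    import gc
    import weakref

    class Key:
        pass

    d = weakref.WeakKeyDictionary()
    k = Key()
    d[k] = "payload"
    assert len(d) == 1

    del k          # drop the only strong reference to the key
    gc.collect()   # on CPython refcounting alone usually suffices; collect() for safety
    assert len(d) == 0
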
2023-01-11T21:46:42.9553382Z ---------------------------------------------------------------------- 2023-01-11T21:46:42.9554062Z Test results will be stored in test-reports/python-unittest/test_weak 2023-01-11T21:46:42.9554592Z test_bool (__main__.WeakKeyDictionaryTestCase) ... ok (0.260s) 2023-01-11T21:46:42.9555161Z test_constructor (__main__.WeakKeyDictionaryTestCase) ... ok (0.001s) 2023-01-11T21:46:42.9555642Z test_get (__main__.WeakKeyDictionaryTestCase) ... ok (0.001s) 2023-01-11T21:46:42.9556100Z test_getitem (__main__.WeakKeyDictionaryTestCase) ... ok (0.001s) 2023-01-11T21:46:42.9556580Z test_items (__main__.WeakKeyDictionaryTestCase) ... ok (0.001s) 2023-01-11T21:46:42.9557344Z test_keys (__main__.WeakKeyDictionaryTestCase) ... ok (0.001s) 2023-01-11T21:46:42.9557839Z test_len (__main__.WeakKeyDictionaryTestCase) ... ok (0.000s) 2023-01-11T21:46:42.9558327Z test_pop (__main__.WeakKeyDictionaryTestCase) ... ok (0.001s) 2023-01-11T21:46:42.9558807Z test_popitem (__main__.WeakKeyDictionaryTestCase) ... ok (0.001s) 2023-01-11T21:46:42.9559328Z test_read (__main__.WeakKeyDictionaryTestCase) ... ok (0.002s) 2023-01-11T21:46:42.9559832Z test_setdefault (__main__.WeakKeyDictionaryTestCase) ... ok (0.000s) 2023-01-11T21:46:42.9560367Z test_update (__main__.WeakKeyDictionaryTestCase) ... ok (0.004s) 2023-01-11T21:46:42.9560893Z test_values (__main__.WeakKeyDictionaryTestCase) ... ok (0.001s) 2023-01-11T21:46:42.9561383Z test_write (__main__.WeakKeyDictionaryTestCase) ... ok (0.006s) 2023-01-11T21:46:42.9561855Z test_make_weak_keyed_dict_from_dict (__main__.WeakTest) ... ok (0.001s) 2023-01-11T21:46:42.9562550Z test_make_weak_keyed_dict_from_weak_keyed_dict (__main__.WeakTest) ... ok (0.001s) 2023-01-11T21:46:42.9563068Z test_make_weak_keyed_dict_repr (__main__.WeakTest) ... ok (0.001s) 2023-01-11T21:46:42.9563544Z test_threaded_weak_key_dict_copy (__main__.WeakTest) ... ok (1.588s) 2023-01-11T21:46:42.9563989Z test_threaded_weak_key_dict_deepcopy (__main__.WeakTest) ... ok (1.953s) 2023-01-11T21:46:42.9564434Z test_weak_keyed_bad_delitem (__main__.WeakTest) ... ok (0.001s) 2023-01-11T21:46:42.9564825Z test_weak_keyed_delitem (__main__.WeakTest) ... ok (0.001s) 2023-01-11T21:46:42.9565233Z test_weak_keyed_dict_popitem (__main__.WeakTest) ... ok (0.001s) 2023-01-11T21:46:42.9565657Z test_weak_keyed_dict_setdefault (__main__.WeakTest) ... ok (0.001s) 2023-01-11T21:46:42.9566063Z test_weak_keyed_dict_update (__main__.WeakTest) ... ok (0.001s) 2023-01-11T21:46:42.9568578Z test_weak_keyed_union_operators (__main__.WeakTest) ... ok (0.003s) 2023-01-11T21:46:42.9568855Z 2023-01-11T21:46:42.9569266Z ---------------------------------------------------------------------- 2023-01-11T21:46:42.9569607Z Ran 25 tests in 3.828s 2023-01-11T21:46:42.9569761Z 2023-01-11T21:46:42.9569841Z OK 2023-01-11T21:46:42.9569978Z 2023-01-11T21:46:42.9570093Z Generating XML reports... 
2023-01-11T21:46:42.9570795Z Generated XML report: test-reports/python-unittest/test_weak/TEST-WeakKeyDictionaryTestCase-20230111214638.xml 2023-01-11T21:46:42.9571623Z Generated XML report: test-reports/python-unittest/test_weak/TEST-WeakTest-20230111214638.xml 2023-01-11T21:46:42.9571972Z 2023-01-11T21:46:42.9572494Z ##[endgroup] 2023-01-11T21:46:42.9573067Z FINISHED PRINTING LOG FILE of test_weak (/var/lib/jenkins/workspace/test/test-reports/test_weak_n_pno514) 2023-01-11T21:46:42.9573375Z 2023-01-11T21:46:44.8891551Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:46:44.9767258Z Ignoring disabled issues: [] 2023-01-11T21:46:44.9947649Z Running test_xnnpack_integration ... [2023-01-11 21:46:44.994469] 2023-01-11T21:46:44.9949385Z Executing ['/opt/conda/bin/python', '-bb', 'test_xnnpack_integration.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:46:44.994725] 2023-01-11T21:46:55.3833116Z 2023-01-11T21:46:55.3833591Z Expand the folded group to see the log file of test_xnnpack_integration 2023-01-11T21:46:55.3834438Z ##[group]PRINTING LOG FILE of test_xnnpack_integration (/var/lib/jenkins/workspace/test/test-reports/test_xnnpack_integration_zff63w3_) 2023-01-11T21:46:55.3834777Z 2023-01-11T21:46:55.3834872Z Running tests... 2023-01-11T21:46:55.3835420Z ---------------------------------------------------------------------- 2023-01-11T21:46:55.3835941Z Test results will be stored in test-reports/python-unittest/test_xnnpack_integration 2023-01-11T21:46:55.3836487Z test_conv1d_basic (__main__.TestXNNPACKConv1dTransformPass) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s) 2023-01-11T21:46:55.3837106Z test_conv1d_with_relu_fc (__main__.TestXNNPACKConv1dTransformPass) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.002s) 2023-01-11T21:46:55.3837767Z test_conv2d (__main__.TestXNNPACKOps) ... ok (0.997s) 2023-01-11T21:46:55.3838102Z test_conv2d_transpose (__main__.TestXNNPACKOps) ... ok (1.586s) 2023-01-11T21:46:55.3838594Z test_linear (__main__.TestXNNPACKOps) ... skip: Fails on some platforms, see https://github.com/pytorch/pytorch/issues/73488 (0.001s) 2023-01-11T21:46:55.3839052Z test_linear_1d_input (__main__.TestXNNPACKOps) ... ok (0.249s) 2023-01-11T21:46:55.3839434Z test_decomposed_linear (__main__.TestXNNPACKRewritePass) ... ok (0.092s) 2023-01-11T21:46:55.3839805Z test_linear (__main__.TestXNNPACKRewritePass) ... ok (1.052s) 2023-01-11T21:46:55.3840295Z test_combined_model (__main__.TestXNNPACKSerDes) ... skip: Fails on some platforms, see https://github.com/pytorch/pytorch/issues/73488 (0.003s) 2023-01-11T21:46:55.3840758Z test_conv2d (__main__.TestXNNPACKSerDes) ... ok (2.120s) 2023-01-11T21:46:55.3841102Z test_conv2d_transpose (__main__.TestXNNPACKSerDes) ... ok (2.406s) 2023-01-11T21:46:55.3841658Z test_linear (__main__.TestXNNPACKSerDes) ... skip: Fails on some platforms, see https://github.com/pytorch/pytorch/issues/73488 (0.001s) 2023-01-11T21:46:55.3841963Z 2023-01-11T21:46:55.3842231Z ---------------------------------------------------------------------- 2023-01-11T21:46:55.3842542Z Ran 12 tests in 8.512s 2023-01-11T21:46:55.3842688Z 2023-01-11T21:46:55.3842766Z OK (skipped=5) 2023-01-11T21:46:55.3842906Z 2023-01-11T21:46:55.3843013Z Generating XML reports... 
2023-01-11T21:46:55.3843577Z Generated XML report: test-reports/python-unittest/test_xnnpack_integration/TEST-TestXNNPACKOps-20230111214646.xml 2023-01-11T21:46:55.3844305Z Generated XML report: test-reports/python-unittest/test_xnnpack_integration/TEST-TestXNNPACKRewritePass-20230111214646.xml 2023-01-11T21:46:55.3845035Z Generated XML report: test-reports/python-unittest/test_xnnpack_integration/TEST-TestXNNPACKSerDes-20230111214646.xml 2023-01-11T21:46:55.3845807Z Generated XML report: test-reports/python-unittest/test_xnnpack_integration/TEST-TestXNNPACKConv1dTransformPass-20230111214646.xml 2023-01-11T21:46:55.3846159Z 2023-01-11T21:46:55.3846444Z ##[endgroup] 2023-01-11T21:46:55.3846985Z FINISHED PRINTING LOG FILE of test_xnnpack_integration (/var/lib/jenkins/workspace/test/test-reports/test_xnnpack_integration_zff63w3_) 2023-01-11T21:46:55.3847288Z 2023-01-11T21:46:57.3261773Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:46:57.3908115Z Ignoring disabled issues: [] 2023-01-11T21:56:30.4894754Z 2023-01-11T21:56:30.4895309Z Expand the folded group to see the log file of test_quantization 2023-01-11T21:56:30.4896197Z ##[group]PRINTING LOG FILE of test_quantization (/var/lib/jenkins/workspace/test/test-reports/test_quantization_9fc5vzw9) 2023-01-11T21:56:30.4901163Z /opt/conda/lib/python3.10/site-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment. 2023-01-11T21:56:30.4901958Z warnings.warn('Lazy modules are a new feature under heavy development ' 2023-01-11T21:56:30.4902255Z 2023-01-11T21:56:30.4902532Z Running tests... 2023-01-11T21:56:30.4903093Z ---------------------------------------------------------------------- 2023-01-11T21:56:30.4903756Z Test results will be stored in test-reports/python-unittest/test_quantization 2023-01-11T21:56:30.4904917Z test_modules_import_nn_intrinsic (quantization.ao_migration.test_ao_migration.TestAOMigrationNNIntrinsic) ... ok (0.009s) 2023-01-11T21:56:30.4905892Z test_modules_import_nn_intrinsic_qat (quantization.ao_migration.test_ao_migration.TestAOMigrationNNIntrinsic) ... ok (0.001s) 2023-01-11T21:56:30.4906926Z test_modules_import_nn_intrinsic_quantized (quantization.ao_migration.test_ao_migration.TestAOMigrationNNIntrinsic) ... ok (0.001s) 2023-01-11T21:56:30.4907957Z test_modules_intrinsic_qat_conv_fused (quantization.ao_migration.test_ao_migration.TestAOMigrationNNIntrinsic) ... ok (0.001s) 2023-01-11T21:56:30.4909301Z test_modules_intrinsic_qat_linear_fused (quantization.ao_migration.test_ao_migration.TestAOMigrationNNIntrinsic) ... ok (0.000s) 2023-01-11T21:56:30.4910317Z test_modules_intrinsic_qat_linear_relu (quantization.ao_migration.test_ao_migration.TestAOMigrationNNIntrinsic) ... ok (0.000s) 2023-01-11T21:56:30.4911347Z test_modules_intrinsic_quantized_bn_relu (quantization.ao_migration.test_ao_migration.TestAOMigrationNNIntrinsic) ... ok (0.000s) 2023-01-11T21:56:30.4912395Z test_modules_intrinsic_quantized_conv_relu (quantization.ao_migration.test_ao_migration.TestAOMigrationNNIntrinsic) ... ok (0.000s) 2023-01-11T21:56:30.4929597Z test_modules_intrinsic_quantized_linear_relu (quantization.ao_migration.test_ao_migration.TestAOMigrationNNIntrinsic) ... ok (0.000s) 2023-01-11T21:56:30.4930207Z test_modules_nn_intrinsic_fused (quantization.ao_migration.test_ao_migration.TestAOMigrationNNIntrinsic) ... 
ok (0.001s) 2023-01-11T21:56:30.4930813Z test_package_import_nn_intrinsic (quantization.ao_migration.test_ao_migration.TestAOMigrationNNIntrinsic) ... ok (0.000s) 2023-01-11T21:56:30.4931274Z test_package_import_nn_intrinsic_modules (quantization.ao_migration.test_ao_migration.TestAOMigrationNNIntrinsic) 2023-01-11T21:56:30.4931648Z Tests the migration of the torch.nn.intrinsic.modules ... ok (0.001s) 2023-01-11T21:56:30.4933639Z test_package_import_nn_intrinsic_qat (quantization.ao_migration.test_ao_migration.TestAOMigrationNNIntrinsic) 2023-01-11T21:56:30.4934326Z Tests the migration of the torch.nn.intrinsic.modules ... ok (0.001s) 2023-01-11T21:56:30.4935044Z test_package_import_nn_intrinsic_quantized (quantization.ao_migration.test_ao_migration.TestAOMigrationNNIntrinsic) 2023-01-11T21:56:30.4935483Z Tests the migration of the torch.nn.intrinsic.quantized ... ok (0.001s) 2023-01-11T21:56:30.4935849Z test_functional_import (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) 2023-01-11T21:56:30.4936208Z Tests the migration of the torch.nn.quantized.functional ... ok (0.001s) 2023-01-11T21:56:30.4936634Z test_import_nn_qat_conv (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.001s) 2023-01-11T21:56:30.4937075Z test_import_nn_qat_dynamic_linear (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.000s) 2023-01-11T21:56:30.4937512Z test_import_nn_qat_embedding_ops (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.000s) 2023-01-11T21:56:30.4938108Z test_import_nn_qat_linear (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.000s) 2023-01-11T21:56:30.4938878Z test_import_nn_quantizable_activation (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.001s) 2023-01-11T21:56:30.4939690Z test_import_nn_quantizable_rnn (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.001s) 2023-01-11T21:56:30.4940438Z test_import_nn_quantized_dynamic_import (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.001s) 2023-01-11T21:56:30.4941200Z test_modules_activation (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.001s) 2023-01-11T21:56:30.4941856Z test_modules_batchnorm (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.001s) 2023-01-11T21:56:30.4942304Z test_modules_conv (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.001s) 2023-01-11T21:56:30.4942908Z test_modules_dropout (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.001s) 2023-01-11T21:56:30.4943388Z test_modules_embedding_ops (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.001s) 2023-01-11T21:56:30.4943829Z test_modules_functional_modules (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.001s) 2023-01-11T21:56:30.4944314Z test_modules_import (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.001s) 2023-01-11T21:56:30.4944894Z test_modules_linear (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.001s) 2023-01-11T21:56:30.4945322Z test_modules_normalization (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... 
ok (0.001s) 2023-01-11T21:56:30.4945890Z test_modules_utils (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.000s) 2023-01-11T21:56:30.4946560Z test_package_import_nn_qat (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.000s) 2023-01-11T21:56:30.4947323Z test_package_import_nn_qat_dynamic (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) 2023-01-11T21:56:30.4947687Z Tests the migration of the torch.nn.qat.modules ... ok (0.001s) 2023-01-11T21:56:30.4948049Z test_package_import_nn_qat_modules (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) 2023-01-11T21:56:30.4948466Z Tests the migration of the torch.nn.qat.modules ... ok (0.001s) 2023-01-11T21:56:30.4948834Z test_package_import_nn_quantizable (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.000s) 2023-01-11T21:56:30.4949357Z test_package_import_nn_quantizable_modules (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) 2023-01-11T21:56:30.4949722Z Tests the migration of the torch.nn.quantizable.modules ... ok (0.001s) 2023-01-11T21:56:30.4950108Z test_package_import_nn_quantized (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.001s) 2023-01-11T21:56:30.4950571Z test_package_import_nn_quantized_dynamic (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) ... ok (0.000s) 2023-01-11T21:56:30.4951147Z test_package_import_nn_quantized_dynamic_modules (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) 2023-01-11T21:56:30.4951584Z Tests the migration of the torch.nn.quantized.modules ... ok (0.001s) 2023-01-11T21:56:30.4952186Z test_package_import_nn_quantized_modules (quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized) 2023-01-11T21:56:30.4952591Z Tests the migration of the torch.nn.quantized.modules ... ok (0.001s) 2023-01-11T21:56:30.4953209Z test_function_import_fake_quantize (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.001s) 2023-01-11T21:56:30.4953968Z test_function_import_fuse_modules (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.001s) 2023-01-11T21:56:30.4954698Z test_function_import_fuser_method_mappings (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.001s) 2023-01-11T21:56:30.4955455Z test_function_import_observer (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.001s) 2023-01-11T21:56:30.4956141Z test_function_import_qconfig (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.001s) 2023-01-11T21:56:30.4956853Z test_function_import_quant_type (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.000s) 2023-01-11T21:56:30.4957786Z test_function_import_quantization_mappings (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.001s) 2023-01-11T21:56:30.4958534Z test_function_import_quantize (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.001s) 2023-01-11T21:56:30.4959265Z test_function_import_quantize_jit (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.001s) 2023-01-11T21:56:30.4960017Z test_function_import_stubs (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... 
ok (0.000s) 2023-01-11T21:56:30.4960731Z test_function_import_utils (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.001s) 2023-01-11T21:56:30.4961205Z test_package_import_fake_quantize (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.000s) 2023-01-11T21:56:30.4961748Z test_package_import_fuse_modules (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.000s) 2023-01-11T21:56:30.4962209Z test_package_import_fuser_method_mappings (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.000s) 2023-01-11T21:56:30.4962678Z test_package_import_observer (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.000s) 2023-01-11T21:56:30.4963137Z test_package_import_qconfig (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.000s) 2023-01-11T21:56:30.4963596Z test_package_import_quant_type (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.000s) 2023-01-11T21:56:30.4964062Z test_package_import_quantization_mappings (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.000s) 2023-01-11T21:56:30.4964564Z test_package_import_quantize (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.000s) 2023-01-11T21:56:30.4965026Z test_package_import_quantize_jit (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.000s) 2023-01-11T21:56:30.4965469Z test_package_import_stubs (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.000s) 2023-01-11T21:56:30.4965913Z test_package_import_utils (quantization.ao_migration.test_quantization.TestAOMigrationQuantization) ... ok (0.000s) 2023-01-11T21:56:30.4966354Z test_function_import_fx (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.001s) 2023-01-11T21:56:30.4966797Z test_function_import_fx_convert (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.001s) 2023-01-11T21:56:30.4967234Z test_function_import_fx_equalize (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.001s) 2023-01-11T21:56:30.4967679Z test_function_import_fx_fuse (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.001s) 2023-01-11T21:56:30.4968131Z test_function_import_fx_fusion_patterns (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.001s) 2023-01-11T21:56:30.4968643Z test_function_import_fx_graph_module (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.001s) 2023-01-11T21:56:30.4969083Z test_function_import_fx_match_utils (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.001s) 2023-01-11T21:56:30.4969536Z test_function_import_fx_pattern_utils (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.001s) 2023-01-11T21:56:30.4969989Z test_function_import_fx_prepare (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.001s) 2023-01-11T21:56:30.4970452Z test_function_import_fx_quantization_patterns (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.001s) 2023-01-11T21:56:30.4970903Z test_function_import_fx_utils (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... 
ok (0.001s) 2023-01-11T21:56:30.4971346Z test_function_import_quantize_fx (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.001s) 2023-01-11T21:56:30.4971788Z test_package_import_fx (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.000s) 2023-01-11T21:56:30.4972227Z test_package_import_fx_convert (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.000s) 2023-01-11T21:56:30.4972663Z test_package_import_fx_equalize (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.000s) 2023-01-11T21:56:30.4973110Z test_package_import_fx_fuse (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.000s) 2023-01-11T21:56:30.4973594Z test_package_import_fx_fusion_patterns (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.000s) 2023-01-11T21:56:30.4974055Z test_package_import_fx_graph_module (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.000s) 2023-01-11T21:56:30.4974499Z test_package_import_fx_match_utils (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.000s) 2023-01-11T21:56:30.4974956Z test_package_import_fx_pattern_utils (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.000s) 2023-01-11T21:56:30.4975482Z test_package_import_fx_prepare (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.000s) 2023-01-11T21:56:30.4975946Z test_package_import_fx_quantization_patterns (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.000s) 2023-01-11T21:56:30.4976427Z test_package_import_fx_utils (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.000s) 2023-01-11T21:56:30.4976875Z test_package_import_quantize_fx (quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx) ... ok (0.000s) 2023-01-11T21:56:30.4977296Z test_backend_config_from_dict (quantization.core.test_backend_config.TestBackendConfig) ... ok (0.004s) 2023-01-11T21:56:30.4977708Z test_backend_config_set_backend_pattern_config (quantization.core.test_backend_config.TestBackendConfig) ... ok (0.001s) 2023-01-11T21:56:30.4978103Z test_backend_config_set_name (quantization.core.test_backend_config.TestBackendConfig) ... ok (0.001s) 2023-01-11T21:56:30.4978490Z test_backend_config_to_dict (quantization.core.test_backend_config.TestBackendConfig) ... ok (0.002s) 2023-01-11T21:56:30.4978888Z test_backend_op_config_add_dtype_config (quantization.core.test_backend_config.TestBackendConfig) ... ok (0.001s) 2023-01-11T21:56:30.4979299Z test_backend_op_config_from_dict (quantization.core.test_backend_config.TestBackendConfig) ... ok (0.002s) 2023-01-11T21:56:30.4979698Z test_backend_op_config_set_extra_inputs_getter (quantization.core.test_backend_config.TestBackendConfig) ... ok (0.001s) 2023-01-11T21:56:30.4980112Z test_backend_op_config_set_fused_module (quantization.core.test_backend_config.TestBackendConfig) ... ok (0.001s) 2023-01-11T21:56:30.4980519Z test_backend_op_config_set_fuser_method (quantization.core.test_backend_config.TestBackendConfig) ... ok (0.001s) 2023-01-11T21:56:30.4980937Z test_backend_op_config_set_input_output_observed (quantization.core.test_backend_config.TestBackendConfig) ... 
ok (0.001s) 2023-01-11T21:56:30.4981350Z test_backend_op_config_set_input_type_to_index (quantization.core.test_backend_config.TestBackendConfig) ... ok (0.001s) 2023-01-11T21:56:30.4981837Z test_backend_op_config_set_num_tensor_args_to_observation_type (quantization.core.test_backend_config.TestBackendConfig) ... ok (0.001s) 2023-01-11T21:56:30.4982279Z test_backend_op_config_set_observation_type (quantization.core.test_backend_config.TestBackendConfig) ... ok (0.001s) 2023-01-11T21:56:30.4982899Z test_backend_op_config_set_qat_module (quantization.core.test_backend_config.TestBackendConfig) ... ok (0.001s) 2023-01-11T21:56:30.4983400Z test_backend_op_config_set_reference_quantized_module (quantization.core.test_backend_config.TestBackendConfig) ... ok (0.001s) 2023-01-11T21:56:30.4983882Z test_backend_op_config_set_root_module (quantization.core.test_backend_config.TestBackendConfig) ... ok (0.001s) 2023-01-11T21:56:30.4984423Z test_backend_op_config_set_root_node_getter (quantization.core.test_backend_config.TestBackendConfig) ... ok (0.001s) 2023-01-11T21:56:30.4985017Z test_backend_op_config_to_dict (quantization.core.test_backend_config.TestBackendConfig) ... ok (0.002s) 2023-01-11T21:56:30.5024369Z test_dtype_config_from_dict (quantization.core.test_backend_config.TestBackendConfig) ... ok (0.001s) 2023-01-11T21:56:30.5025584Z test_dtype_config_to_dict (quantization.core.test_backend_config.TestBackendConfig) ... ok (0.001s) 2023-01-11T21:56:30.5026543Z test_conv_chain (quantization.eager.test_bias_correction_eager.TestBiasCorrectionEager) ... ok (1.321s) 2023-01-11T21:56:30.5027531Z test_linear_chain (quantization.eager.test_bias_correction_eager.TestBiasCorrectionEager) ... ok (0.735s) 2023-01-11T21:56:30.5028542Z test_compare_tensor_scalar (quantization.core.test_quantized_op.TestComparatorOps) ... ok (0.508s) 2023-01-11T21:56:30.5029504Z test_compare_tensor_tensor (quantization.core.test_quantized_op.TestComparatorOps) ... ok (0.217s) 2023-01-11T21:56:30.5030529Z test_erase_class_tensor_shapes (quantization.jit.test_deprecated_jit_quant.TestDeprecatedJitQuantized) ... ok (0.017s) 2023-01-11T21:56:30.5031606Z test_quantization_modules (quantization.jit.test_deprecated_jit_quant.TestDeprecatedJitQuantized) ... ok (0.092s) 2023-01-11T21:56:30.5034013Z test_rnn_cell_quantized (quantization.jit.test_deprecated_jit_quant.TestDeprecatedJitQuantized) ... /opt/conda/lib/python3.10/site-packages/torch/jit/quantized.py:510: UserWarning: quantize_rnn_cell_modules function has been deprecated. Please use torch.ao.quantization.quantize_dynamic API instead. 2023-01-11T21:56:30.5035525Z warnings.warn("quantize_rnn_cell_modules function has been deprecated. " 2023-01-11T21:56:30.5037283Z /opt/conda/lib/python3.10/site-packages/torch/jit/quantized.py:100: UserWarning: torch.jit.QuantizedRNNCellBase is deprecated and will be removed in an upcoming PyTorch release. Please use the torch.ao.nn.quantized.dynamic.RNNCell instead. 2023-01-11T21:56:30.5038384Z warnings.warn( 2023-01-11T21:56:30.5040037Z /opt/conda/lib/python3.10/site-packages/torch/jit/quantized.py:213: UserWarning: torch.jit.QuantizedLSTMCell is deprecated and will be removed in an upcoming PyTorch release. Please use the torch.ao.nn.quantized.dynamic.LSTMCell instead. 2023-01-11T21:56:30.5041123Z warnings.warn( 2023-01-11T21:56:30.5042667Z /opt/conda/lib/python3.10/site-packages/torch/jit/quantized.py:236: UserWarning: torch.jit.QuantizedGRUCell is deprecated and will be removed in an upcoming PyTorch release. 
Please use the torch.ao.nn.quantized.dynamic.GRUCell instead. 2023-01-11T21:56:30.5043750Z warnings.warn( 2023-01-11T21:56:30.5045238Z /opt/conda/lib/python3.10/site-packages/torch/jit/quantized.py:178: UserWarning: torch.jit.QuantizedRNNCell is deprecated and will be removed in an upcoming PyTorch release. Please use the torch.ao.nn.quantized.dynamic.RNNCell instead. 2023-01-11T21:56:30.5046306Z warnings.warn( 2023-01-11T21:56:30.5046696Z ok (0.107s) 2023-01-11T21:56:30.5048429Z test_rnn_quantized (quantization.jit.test_deprecated_jit_quant.TestDeprecatedJitQuantized) ... /opt/conda/lib/python3.10/site-packages/torch/jit/quantized.py:556: UserWarning: quantize_rnn_modules function has been deprecated. Please use torch.ao.quantization.quantize_dynamic API instead. 2023-01-11T21:56:30.5049809Z warnings.warn("quantize_rnn_modules function has been deprecated. " 2023-01-11T21:56:30.5051422Z /opt/conda/lib/python3.10/site-packages/torch/jit/quantized.py:264: UserWarning: torch.jit.QuantizedRNNBase is deprecated and will be removed in an upcoming PyTorch release. Please use the torch.ao.nn.quantized.dynamic instead. 2023-01-11T21:56:30.5052498Z warnings.warn( 2023-01-11T21:56:30.5053958Z /opt/conda/lib/python3.10/site-packages/torch/jit/quantized.py:369: UserWarning: torch.jit.QuantizedLSTM is deprecated and will be removed in an upcoming PyTorch release. Please use the torch.ao.nn.quantized.dynamic.LSTM instead. 2023-01-11T21:56:30.5055025Z warnings.warn( 2023-01-11T21:56:30.5056483Z /opt/conda/lib/python3.10/site-packages/torch/jit/quantized.py:449: UserWarning: torch.jit.QuantizedGRU is deprecated and will be removed in an upcoming PyTorch release. Please use the torch.ao.nn.quantized.dynamic.GRU instead. 2023-01-11T21:56:30.5057532Z warnings.warn( 2023-01-11T21:56:30.5057921Z ok (0.089s) 2023-01-11T21:56:30.5058860Z test_device_affinity (quantization.core.test_workflow_module.TestDistributed) ... skip: multi-GPU not supported (0.000s) 2023-01-11T21:56:30.5059972Z test_fake_quant_preserves_buffers (quantization.core.test_workflow_module.TestDistributed) 2023-01-11T21:56:30.5060858Z Tests that fake quant only modifies buffers in place. Note: this is important ... ok (0.003s) 2023-01-11T21:56:30.5061693Z test_observers_preserve_buffers (quantization.core.test_workflow_module.TestDistributed) 2023-01-11T21:56:30.5062650Z Tests that observers only modify buffers in place. Note: this is important ... ok (0.009s) 2023-01-11T21:56:30.5063560Z test_qat_convbn_fused_syncbn_replacement (quantization.core.test_workflow_module.TestDistributed) 2023-01-11T21:56:30.5065661Z Tests that SyncBatchNorm replacement works for fused ConvBN. ... /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/observer.py:214: UserWarning: Please use quant_min and quant_max to specify the range for observers. reduce_range will be deprecated in a future release of PyTorch. 2023-01-11T21:56:30.5066811Z warnings.warn( 2023-01-11T21:56:30.5067165Z ok (0.005s) 2023-01-11T21:56:30.5067733Z test_qat_data_parallel (quantization.core.test_workflow_module.TestDistributed) 2023-01-11T21:56:30.5068714Z Tests that doing QAT in nn.DataParallel does not crash. ... skip: multi-GPU not supported (0.001s) 2023-01-11T21:56:30.5069523Z test_syncbn_preserves_qconfig (quantization.core.test_workflow_module.TestDistributed) 2023-01-11T21:56:30.5070300Z Makes sure that if a BatchNorm is not fused and a qconfig exists, ... 
ok (0.001s) 2023-01-11T21:56:30.5071595Z test_cell_api (quantization.core.test_quantized_module.TestDynamicQuantizedModule) ... [W qlinear_dynamic.cpp:247] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (function operator())
2023-01-11T21:56:30.5074221Z /opt/conda/lib/python3.10/site-packages/torch/_utils.py:309: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
2023-01-11T21:56:30.5075521Z device=storage.device,
2023-01-11T21:56:30.5075934Z ok (0.977s)
2023-01-11T21:56:30.5077896Z test_dynamic_conv1d (quantization.core.test_quantized_module.TestDynamicQuantizedModule) ... /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py:59: UserWarning: The current implementation of the DynamicQuantizedConv1d module has poor numerical accuracy and its use is not recommended
2023-01-11T21:56:30.5078717Z warnings.warn(
2023-01-11T21:56:30.5079203Z [W qconv_dynamic.cpp:82] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (function apply_dynamic)
2023-01-11T21:56:30.5080471Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:471: UserWarning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/quantized/cpu/qconv_dynamic.cpp:82.)
2023-01-11T21:56:30.5081215Z return callable(*args, **kwargs)
2023-01-11T21:56:30.5093272Z ok (1.348s)
2023-01-11T21:56:30.5094388Z test_dynamic_conv2d (quantization.core.test_quantized_module.TestDynamicQuantizedModule) ... /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py:124: UserWarning: The current implementation of the DynamicQuantizedConv2d module has poor numerical accuracy and its use is not recommended
2023-01-11T21:56:30.5095126Z warnings.warn(
2023-01-11T21:56:30.5095637Z [W qconv_dynamic.cpp:82] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (function apply_dynamic)
2023-01-11T21:56:30.5106825Z ok (1.383s)
2023-01-11T21:56:30.5107946Z test_dynamic_conv3d (quantization.core.test_quantized_module.TestDynamicQuantizedModule) ... /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py:189: UserWarning: The current implementation of the DynamicQuantizedConv3d module has poor numerical accuracy and its use is not recommended
2023-01-11T21:56:30.5108707Z warnings.warn(
2023-01-11T21:56:30.5108935Z ok (0.449s)
2023-01-11T21:56:30.5109978Z test_dynamic_convtranspose1d (quantization.core.test_quantized_module.TestDynamicQuantizedModule) ... /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py:259: UserWarning: The current implementation of the DynamicQuantizedConvTranpose1d module has poor numerical accuracy and its use is not recommended
2023-01-11T21:56:30.5110750Z warnings.warn(
2023-01-11T21:56:30.5111261Z [W qconv_dynamic.cpp:82] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (function apply_dynamic)
2023-01-11T21:56:30.5122574Z ok (1.456s)
2023-01-11T21:56:30.5123729Z test_dynamic_convtranspose2d (quantization.core.test_quantized_module.TestDynamicQuantizedModule) ... /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py:320: UserWarning: The current implementation of the DynamicQuantizedConvTranpose2d module has poor numerical accuracy and its use is not recommended
2023-01-11T21:56:30.5124556Z warnings.warn(
2023-01-11T21:56:30.5125037Z [W qconv_dynamic.cpp:82] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (function apply_dynamic)
2023-01-11T21:56:30.5127115Z [W qconv_dynamic.cpp:82] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release.
(function apply_dynamic) 2023-01-11T21:56:30.5127812Z [W qconv_dynamic.cpp:82] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (function apply_dynamic) 2023-01-11T21:56:30.5128508Z [W qconv_dynamic.cpp:82] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (function apply_dynamic) 2023-01-11T21:56:30.5129203Z [W qconv_dynamic.cpp:82] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (function apply_dynamic) 2023-01-11T21:56:30.5129910Z [W qconv_dynamic.cpp:82] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (function apply_dynamic) 2023-01-11T21:56:30.5130599Z [W qconv_dynamic.cpp:82] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (function apply_dynamic) 2023-01-11T21:56:30.5131435Z [W qconv_dynamic.cpp:82] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (function apply_dynamic) 2023-01-11T21:56:30.5132259Z [W qconv_dynamic.cpp:82] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (function apply_dynamic) 2023-01-11T21:56:30.5132960Z [W qconv_dynamic.cpp:82] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (function apply_dynamic) 2023-01-11T21:56:30.5133608Z [W qconv_dynamic.cpp:82] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (function apply_dynamic) 2023-01-11T21:56:30.5134332Z [W qconv_dynamic.cpp:82] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (function apply_dynamic) 2023-01-11T21:56:30.5135160Z [W qconv_dynamic.cpp:82] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (function apply_dynamic) 2023-01-11T21:56:30.5135907Z [W qconv_dynamic.cpp:82] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (function apply_dynamic) 2023-01-11T21:56:30.5136370Z ok (1.238s) 2023-01-11T21:56:30.5137552Z test_dynamic_convtranspose3d (quantization.core.test_quantized_module.TestDynamicQuantizedModule) ... /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py:381: UserWarning: The current implementation of the DynamicQuantizedConvTranpose3d module has poor numerical accuracy and its use is not recommended 2023-01-11T21:56:30.5138336Z warnings.warn( 2023-01-11T21:56:30.5138597Z ok (0.496s) 2023-01-11T21:56:30.5139006Z test_gru_api (quantization.core.test_quantized_module.TestDynamicQuantizedModule) ... ok (0.331s) 2023-01-11T21:56:30.5139624Z test_linear_api (quantization.core.test_quantized_module.TestDynamicQuantizedModule) ... ok (23.480s) 2023-01-11T21:56:30.5140211Z test_lstm_api (quantization.core.test_quantized_module.TestDynamicQuantizedModule) ... ok (3.901s) 2023-01-11T21:56:30.5140793Z test_dynamic_conv1d (quantization.core.test_quantized_op.TestDynamicQuantizedOps) ... 
ok (0.049s) 2023-01-11T21:56:30.5141360Z test_dynamic_conv2d (quantization.core.test_quantized_op.TestDynamicQuantizedOps) ... ok (0.069s) 2023-01-11T21:56:30.5141904Z test_dynamic_conv3d (quantization.core.test_quantized_op.TestDynamicQuantizedOps) ... ok (0.050s) 2023-01-11T21:56:30.5142649Z test_dynamic_convtranspose1d (quantization.core.test_quantized_op.TestDynamicQuantizedOps) ... ok (0.068s) 2023-01-11T21:56:30.5143281Z test_dynamic_convtranspose2d (quantization.core.test_quantized_op.TestDynamicQuantizedOps) ... ok (0.048s) 2023-01-11T21:56:30.5143808Z test_dynamic_convtranspose3d (quantization.core.test_quantized_op.TestDynamicQuantizedOps) ... ok (0.012s) 2023-01-11T21:56:30.5144517Z test_linear_prepack_fp16_numerics (quantization.core.test_quantized_op.TestDynamicQuantizedOps) ... [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5145215Z [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5145738Z [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5146223Z [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5146705Z [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5147180Z [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5147692Z [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5148318Z [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5148805Z [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5149322Z [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5149814Z [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5150282Z [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5150771Z [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5151275Z [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5151775Z [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5152410Z [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5152943Z [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5153445Z [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5153916Z [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5154280Z ok (0.106s) 2023-01-11T21:56:30.5154653Z test_qlinear (quantization.core.test_quantized_op.TestDynamicQuantizedOps) ... ok (2.240s) 2023-01-11T21:56:30.5155207Z test_qlinear_dynamic_fp16 (quantization.core.test_quantized_op.TestDynamicQuantizedOps) ... ok (0.166s) 2023-01-11T21:56:30.5155777Z test_qlinear_legacy (quantization.core.test_quantized_op.TestDynamicQuantizedOps) ... 
ok (0.241s) 2023-01-11T21:56:30.5156340Z test_qlstmGRU (quantization.core.test_quantized_op.TestDynamicQuantizedOps) ... ok (9.458s) 2023-01-11T21:56:30.5156969Z test_qrnncell (quantization.core.test_quantized_op.TestDynamicQuantizedOps) ... ok (11.447s) 2023-01-11T21:56:30.5157494Z test_converged (quantization.eager.test_equalize_eager.TestEqualizeEager) 2023-01-11T21:56:30.5157950Z Sanity checks on _equalize.converged working ... ok (0.002s) 2023-01-11T21:56:30.5158413Z test_cross_layer_equalization (quantization.eager.test_equalize_eager.TestEqualizeEager) 2023-01-11T21:56:30.5158934Z applies _equalize.cross_layer_equalization on two modules and checks ... ok (0.002s) 2023-01-11T21:56:30.5159403Z test_equalize (quantization.eager.test_equalize_eager.TestEqualizeEager) 2023-01-11T21:56:30.5159887Z First checks to see if _equalize.equalize can handle multiple ... ok (0.014s) 2023-01-11T21:56:30.5160370Z test_equalize_fused_convrelu (quantization.eager.test_equalize_eager.TestEqualizeEager) 2023-01-11T21:56:30.5160837Z Checks to see if eager mode equalization supports fused ... ok (0.028s) 2023-01-11T21:56:30.5161342Z test_equalize_fused_linearrelu (quantization.eager.test_equalize_eager.TestEqualizeEager) 2023-01-11T21:56:30.5161842Z Checks to see if eager mode equalization supports fused ... ok (0.022s) 2023-01-11T21:56:30.5162321Z test_input_weight_eq_observer (quantization.fx.test_equalize_fx.TestEqualizeFx) ... ok (0.232s) 2023-01-11T21:56:30.5162855Z test_input_weight_equalization_activation_values (quantization.fx.test_equalize_fx.TestEqualizeFx) 2023-01-11T21:56:30.5164114Z After applying the equalization functions check if the input ... /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/prepare.py:1435: UserWarning: Passing a QConfig dictionary to prepare is deprecated and will not be supported in a future version. Please pass in a QConfigMapping instead. 2023-01-11T21:56:30.5164817Z warnings.warn( 2023-01-11T21:56:30.5165775Z /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/prepare.py:1441: UserWarning: Passing a QConfig dictionary to prepare for equalization is deprecated and will not be supported in a future version. Please pass in a QConfigMapping instead. 2023-01-11T21:56:30.5166494Z warnings.warn( 2023-01-11T21:56:30.5166741Z ok (0.074s) 2023-01-11T21:56:30.5167145Z test_input_weight_equalization_branching (quantization.fx.test_equalize_fx.TestEqualizeFx) 2023-01-11T21:56:30.5168074Z Tests that graphs containing branches are prepared correctly. ... /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/prepare.py:712: UserWarning: Cannot equalize linear1 because it is part of a branch. 2023-01-11T21:56:30.5168625Z warnings.warn( 2023-01-11T21:56:30.5169286Z /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/prepare.py:712: UserWarning: Cannot equalize linear2 because it is part of a branch. 2023-01-11T21:56:30.5169746Z warnings.warn( 2023-01-11T21:56:30.5169986Z ok (0.023s) 2023-01-11T21:56:30.5170384Z test_input_weight_equalization_convert (quantization.fx.test_equalize_fx.TestEqualizeFx) 2023-01-11T21:56:30.5171721Z Tests that the modified model for equalization (before quantization) ... 
/opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:1346: UserWarning: Node linear1_packed_weight_0 target linear1_packed_weight_0 linear1_packed_weight_0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target 2023-01-11T21:56:30.5172688Z warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does ' 2023-01-11T21:56:30.5173703Z /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:1346: UserWarning: Node linear2_packed_weight_0 target linear2_packed_weight_0 linear2_packed_weight_0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target 2023-01-11T21:56:30.5174543Z warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does ' 2023-01-11T21:56:30.5175559Z /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:1346: UserWarning: Node linear_packed_weight_0 target linear_packed_weight_0 linear_packed_weight_0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target 2023-01-11T21:56:30.5176393Z warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does ' 2023-01-11T21:56:30.5177398Z /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:1346: UserWarning: Node conv1_packed_weight_0 target conv1_packed_weight_0 conv1_packed_weight_0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target 2023-01-11T21:56:30.5178216Z warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does ' 2023-01-11T21:56:30.5179223Z /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:1346: UserWarning: Node conv2_packed_weight_0 target conv2_packed_weight_0 conv2_packed_weight_0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target 2023-01-11T21:56:30.5180022Z warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does ' 2023-01-11T21:56:30.5181007Z /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:1346: UserWarning: Node conv_packed_weight_0 target conv_packed_weight_0 conv_packed_weight_0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target 2023-01-11T21:56:30.5181801Z warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does ' 2023-01-11T21:56:30.5182160Z ok (2.550s) 2023-01-11T21:56:30.5182708Z test_input_weight_equalization_equalization_scales (quantization.fx.test_equalize_fx.TestEqualizeFx) 2023-01-11T21:56:30.5183243Z After applying the equalization functions, check if the equalization ... ok (0.071s) 2023-01-11T21:56:30.5183724Z test_input_weight_equalization_graphs (quantization.fx.test_equalize_fx.TestEqualizeFx) 2023-01-11T21:56:30.5184686Z Tests that the modified model for equalization has the same graph ... /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/utils.py:302: UserWarning: must run observer before calling calculate_qparams. Returning default values. 2023-01-11T21:56:30.5185373Z warnings.warn( 2023-01-11T21:56:30.5186178Z /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/_equalize.py:190: UserWarning: Must run observer before calling calculate_equalization_scale. Returning default equalization scale torch.tensor(1). 
2023-01-11T21:56:30.5186703Z warnings.warn( 2023-01-11T21:56:30.5187514Z /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/_equalize.py:102: UserWarning: Must call calculate_equalization_scale before calling calculate_scaled_minmax. Will not scale the next quantization observer. 2023-01-11T21:56:30.5188065Z warnings.warn( 2023-01-11T21:56:30.5188717Z /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/utils.py:310: UserWarning: must run observer before calling calculate_qparams. Returning default values. 2023-01-11T21:56:30.5189192Z warnings.warn( 2023-01-11T21:56:30.5189438Z ok (0.506s) 2023-01-11T21:56:30.5189844Z test_input_weight_equalization_prepare (quantization.fx.test_equalize_fx.TestEqualizeFx) 2023-01-11T21:56:30.5190409Z Tests that graphs created after prepare_fx is as expected ... ok (0.176s) 2023-01-11T21:56:30.5190893Z test_input_weight_equalization_results (quantization.fx.test_equalize_fx.TestEqualizeFx) 2023-01-11T21:56:30.5191392Z Tests that for small models, the results of quantized models that ... ok (0.233s) 2023-01-11T21:56:30.5191847Z test_input_weight_equalization_weights_bias (quantization.fx.test_equalize_fx.TestEqualizeFx) 2023-01-11T21:56:30.5192306Z After applying the equalization functions check if the weights and ... ok (0.065s) 2023-01-11T21:56:30.5192762Z test_selective_equalization (quantization.fx.test_equalize_fx.TestEqualizeFx) 2023-01-11T21:56:30.5193216Z Tests that we are able to run numeric suite on the equalized model ... ok (0.231s) 2023-01-11T21:56:30.5193675Z test_dict_return_type (quantization.fx.test_numeric_suite_fx.TestFXGraphMatcher) ... ok (0.078s) 2023-01-11T21:56:30.5194216Z test_matching_failure_node_count (quantization.fx.test_numeric_suite_fx.TestFXGraphMatcher) ... ok (0.044s) 2023-01-11T21:56:30.5194785Z test_matching_failure_node_type (quantization.fx.test_numeric_suite_fx.TestFXGraphMatcher) ... ok (0.033s) 2023-01-11T21:56:30.5195285Z test_methods (quantization.fx.test_numeric_suite_fx.TestFXGraphMatcher) 2023-01-11T21:56:30.5195709Z Verify that graph matching works on methods ... ok (0.048s) 2023-01-11T21:56:30.5196141Z test_nodes_before_cat (quantization.fx.test_numeric_suite_fx.TestFXGraphMatcher) ... ok (0.121s) 2023-01-11T21:56:30.5197365Z test_nodes_with_equal_types_get_matched (quantization.fx.test_numeric_suite_fx.TestFXGraphMatcher) ... /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/observer.py:1204: UserWarning: must run observer before calling calculate_qparams. Returning default scale and zero point 2023-01-11T21:56:30.5198056Z warnings.warn( 2023-01-11T21:56:30.5198287Z ok (0.113s) 2023-01-11T21:56:30.5198681Z test_op_relationship_mapping (quantization.fx.test_numeric_suite_fx.TestFXGraphMatcher) 2023-01-11T21:56:30.5199168Z Tests that the mapping of op relationships is complete. ... ok (0.007s) 2023-01-11T21:56:30.5199630Z test_results_order (quantization.fx.test_numeric_suite_fx.TestFXGraphMatcher) ... ok (0.177s) 2023-01-11T21:56:30.5200831Z test_simple_fun (quantization.fx.test_numeric_suite_fx.TestFXGraphMatcher) ... 
/opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:1346: UserWarning: Node _packed_weight_0 target _packed_weight_0 _packed_weight_0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target 2023-01-11T21:56:30.5201744Z warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does ' 2023-01-11T21:56:30.5202098Z ok (0.081s) 2023-01-11T21:56:30.5202460Z test_simple_fusion (quantization.fx.test_numeric_suite_fx.TestFXGraphMatcher) ... ok (0.080s) 2023-01-11T21:56:30.5202974Z test_simple_mod (quantization.fx.test_numeric_suite_fx.TestFXGraphMatcher) ... ok (0.060s) 2023-01-11T21:56:30.5203556Z test_simple_mod_multi (quantization.fx.test_numeric_suite_fx.TestFXGraphMatcher) ... ok (0.069s) 2023-01-11T21:56:30.5204063Z test_simple_tensor_ops (quantization.fx.test_numeric_suite_fx.TestFXGraphMatcher) ... ok (0.190s) 2023-01-11T21:56:30.5204576Z test_user_defined_function (quantization.fx.test_numeric_suite_fx.TestFXGraphMatcher) 2023-01-11T21:56:30.5205033Z Verify that graph matching works on user defined functions ... ok (0.044s) 2023-01-11T21:56:30.5206149Z test_mobilenet_v2 (quantization.fx.test_numeric_suite_fx.TestFXGraphMatcherModels) ... /var/lib/jenkins/.local/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead. 2023-01-11T21:56:30.5206798Z warnings.warn( 2023-01-11T21:56:30.5207812Z /var/lib/jenkins/.local/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=None`. 2023-01-11T21:56:30.5208440Z warnings.warn(msg) 2023-01-11T21:56:30.5208683Z ok (3.141s) 2023-01-11T21:56:30.5209084Z test_mobilenet_v2_qat (quantization.fx.test_numeric_suite_fx.TestFXGraphMatcherModels) ... ok (3.803s) 2023-01-11T21:56:30.5209692Z test_add_loggers_cuda (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... skip: CUDA unavailable (0.001s) 2023-01-11T21:56:30.5211135Z test_add_mul_inputs_activations (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... /opt/conda/lib/python3.10/site-packages/torch/jit/_check.py:181: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in `__init__`. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in `torch.jit.Attribute`. 2023-01-11T21:56:30.5212137Z warnings.warn("The TorchScript type system doesn't support " 2023-01-11T21:56:30.5212436Z ok (2.689s) 2023-01-11T21:56:30.5212896Z test_add_shadow_loggers_cuda (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... skip: CUDA unavailable (0.001s) 2023-01-11T21:56:30.5214228Z test_add_shadow_loggers_fun_ptq (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... 
/opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:1346: UserWarning: Node _packed_weight_1 target _packed_weight_1 _packed_weight_1 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target 2023-01-11T21:56:30.5215178Z warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does ' 2023-01-11T21:56:30.5215515Z ok (0.774s) 2023-01-11T21:56:30.5215931Z test_add_shadow_loggers_fun_qat (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.325s) 2023-01-11T21:56:30.5216546Z test_add_shadow_loggers_meth_ptq (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) 2023-01-11T21:56:30.5217152Z Verify that add_loggers works on methods ... skipping shadow loggers for node_b: str, start_node_a: str, unknown dtype cast 2023-01-11T21:56:30.5217771Z skipping shadow loggers for node_b: torch.ao.quantization.observer.FixedQParamsObserver, start_node_a: str, unknown dtype cast 2023-01-11T21:56:30.5218368Z skipping shadow loggers for node_b: str, start_node_a: str, unknown dtype cast 2023-01-11T21:56:30.5218706Z ok (0.121s) 2023-01-11T21:56:30.5219142Z test_add_shadow_loggers_mod_ptq (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.771s) 2023-01-11T21:56:30.5219763Z test_add_shadow_loggers_mod_qat (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.458s) 2023-01-11T21:56:30.5220404Z test_extend_logger_results_with_comparison (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.072s) 2023-01-11T21:56:30.5221762Z test_extract_weights_conv_fun_ptq (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:1346: UserWarning: Node _packed_weight_2 target _packed_weight_2 _packed_weight_2 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target 2023-01-11T21:56:30.5223022Z warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does ' 2023-01-11T21:56:30.5224027Z /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:1346: UserWarning: Node _packed_weight_3 target _packed_weight_3 _packed_weight_3 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target 2023-01-11T21:56:30.5224838Z warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does ' 2023-01-11T21:56:30.5225941Z /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:1346: UserWarning: Node _packed_weight_4 target _packed_weight_4 _packed_weight_4 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target 2023-01-11T21:56:30.5226774Z warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does ' 2023-01-11T21:56:30.5227775Z /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:1346: UserWarning: Node _packed_weight_5 target _packed_weight_5 _packed_weight_5 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target 2023-01-11T21:56:30.5228593Z warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does ' 2023-01-11T21:56:30.5228959Z ok (0.700s) 2023-01-11T21:56:30.5229410Z test_extract_weights_conv_fun_qat (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.854s) 2023-01-11T21:56:30.5230067Z test_extract_weights_cuda (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... 
skip: CUDA unavailable (0.001s) 2023-01-11T21:56:30.5230743Z test_extract_weights_dynamic (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.171s) 2023-01-11T21:56:30.5231342Z test_extract_weights_fqn (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.076s) 2023-01-11T21:56:30.5231970Z test_extract_weights_linear_fun_ptq (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.380s) 2023-01-11T21:56:30.5232593Z test_extract_weights_linear_fun_qat (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.412s) 2023-01-11T21:56:30.5233234Z test_extract_weights_mod_ptq (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.816s) 2023-01-11T21:56:30.5233814Z test_extract_weights_mod_qat (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.987s) 2023-01-11T21:56:30.5234415Z test_fp16_shadows_fp32 (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.054s) 2023-01-11T21:56:30.5235363Z test_int8_shadows_fp32_coverage (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... skipping shadow loggers for node_b: torch.nn.modules.pooling.AdaptiveAvgPool2d, start_node_a: torch.nn.modules.pooling.AdaptiveAvgPool2d, unknown dtype cast 2023-01-11T21:56:30.5236391Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch.nn.modules.pooling.AdaptiveAvgPool2d, unknown dtype cast 2023-01-11T21:56:30.5237227Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.mul, start_node_a: torch._ops.quantized.PyCapsule.mul, unsupported 2023-01-11T21:56:30.5237978Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._ops.quantized.PyCapsule.mul, unknown dtype cast 2023-01-11T21:56:30.5238711Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._ops.quantized.PyCapsule.add_relu, unsupported 2023-01-11T21:56:30.5239440Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._ops.quantized.PyCapsule.add_relu, unknown dtype cast 2023-01-11T21:56:30.5240088Z ok (0.157s) 2023-01-11T21:56:30.5240506Z test_int8_shadows_fp32_simple (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.197s) 2023-01-11T21:56:30.5241087Z test_int8_shadows_int8_fun (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.103s) 2023-01-11T21:56:30.5241662Z test_int8_shadows_int8_mod (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.080s) 2023-01-11T21:56:30.5242219Z test_layer_names (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.352s) 2023-01-11T21:56:30.5242796Z test_linear_fp16_activations (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.306s) 2023-01-11T21:56:30.5243389Z test_linear_fp16_shadow_activations (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.406s) 2023-01-11T21:56:30.5244109Z test_linear_fp16_vs_linear_fp16_shadow_activations (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.051s) 2023-01-11T21:56:30.5244749Z test_linear_fp16_weights (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.162s) 2023-01-11T21:56:30.5245323Z test_linear_kwargs_shadow (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... 
ok (0.047s) 2023-01-11T21:56:30.5245909Z test_loggers_preserve_qat_numerics (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.193s) 2023-01-11T21:56:30.5246467Z test_logging_inputs (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) 2023-01-11T21:56:30.5247147Z Verifies that logging inputs works correctly ... skipping shadow loggers for node_b: torch._VariableFunctionsClass.cat, start_node_a: torch._VariableFunctionsClass.cat, unsupported 2023-01-11T21:56:30.5247886Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.cat, start_node_a: torch._VariableFunctionsClass.cat, unsupported 2023-01-11T21:56:30.5248318Z ok (0.587s) 2023-01-11T21:56:30.5248760Z test_match_activations_fqn (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.078s) 2023-01-11T21:56:30.5249350Z test_match_activations_fun_ptq (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.608s) 2023-01-11T21:56:30.5249951Z test_match_activations_fun_qat (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.385s) 2023-01-11T21:56:30.5250534Z test_match_activations_meth_ptq (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) 2023-01-11T21:56:30.5251024Z Verify that add_loggers works on methods ... ok (0.144s) 2023-01-11T21:56:30.5251514Z test_match_activations_mod_ptq (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (1.018s) 2023-01-11T21:56:30.5252107Z test_match_activations_mod_qat (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.283s) 2023-01-11T21:56:30.5252865Z test_mul_add_cat_stack_skips_shadowing (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... skipping shadow loggers for node_b: _operator.mul, start_node_a: _operator.mul, unsupported 2023-01-11T21:56:30.5253660Z skipping shadow loggers for node_b: torch.ao.quantization.observer.HistogramObserver, start_node_a: _operator.mul, unsupported 2023-01-11T21:56:30.5254349Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.mul, start_node_a: torch._VariableFunctionsClass.mul, unsupported 2023-01-11T21:56:30.5255029Z skipping shadow loggers for node_b: torch.ao.quantization.observer.HistogramObserver, start_node_a: torch._VariableFunctionsClass.mul, unsupported 2023-01-11T21:56:30.5255671Z skipping shadow loggers for node_b: _operator.add, start_node_a: _operator.add, unsupported 2023-01-11T21:56:30.5256265Z skipping shadow loggers for node_b: torch.ao.quantization.observer.HistogramObserver, start_node_a: _operator.add, unsupported 2023-01-11T21:56:30.5256981Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5257794Z skipping shadow loggers for node_b: torch.ao.quantization.observer.HistogramObserver, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5258553Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.cat, start_node_a: torch._VariableFunctionsClass.cat, unsupported 2023-01-11T21:56:30.5259240Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.stack, start_node_a: torch._VariableFunctionsClass.stack, unsupported 2023-01-11T21:56:30.5259914Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.mul, start_node_a: _operator.mul, unsupported 2023-01-11T21:56:30.5260573Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.mul, start_node_a: 
torch._VariableFunctionsClass.mul, unsupported 2023-01-11T21:56:30.5261238Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.add, start_node_a: _operator.add, unsupported 2023-01-11T21:56:30.5261990Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5262848Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.cat, start_node_a: torch._VariableFunctionsClass.cat, unsupported 2023-01-11T21:56:30.5263511Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.stack, start_node_a: torch._VariableFunctionsClass.stack, unsupported 2023-01-11T21:56:30.5263971Z ok (1.183s) 2023-01-11T21:56:30.5264381Z test_op_io_dtype_coverage (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) 2023-01-11T21:56:30.5264898Z Tests that all the ops quantization cares about have input and output ... ok (0.009s) 2023-01-11T21:56:30.5265420Z test_op_with_either_fp32_or_int8_input (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) 2023-01-11T21:56:30.5266092Z Verify that shadowing works with ops which accept either fp32 or ... skipping shadow loggers for node_b: torch.nn.modules.activation.ReLU, start_node_a: torch.nn.modules.activation.ReLU, unknown dtype cast 2023-01-11T21:56:30.5266956Z skipping shadow loggers for node_b: torch.ao.quantization.observer.HistogramObserver, start_node_a: torch.nn.modules.activation.ReLU, unknown dtype cast 2023-01-11T21:56:30.5267654Z skipping shadow loggers for node_b: torch.nn.functional.relu, start_node_a: torch.nn.functional.relu, unknown dtype cast 2023-01-11T21:56:30.5268333Z skipping shadow loggers for node_b: torch.ao.quantization.observer.HistogramObserver, start_node_a: torch.nn.functional.relu, unknown dtype cast 2023-01-11T21:56:30.5269099Z skipping shadow loggers for node_b: torch.nn.modules.activation.ReLU, start_node_a: torch.nn.modules.activation.ReLU, unknown dtype cast 2023-01-11T21:56:30.5269763Z skipping shadow loggers for node_b: torch.nn.functional.relu, start_node_a: torch.nn.functional.relu, unknown dtype cast 2023-01-11T21:56:30.5270180Z ok (0.509s) 2023-01-11T21:56:30.5270816Z test_op_with_only_kwargs_skips_shadowing (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... skipping shadow loggers for node_b: torch._VariableFunctionsClass.cat, start_node_a: torch._VariableFunctionsClass.cat, unsupported 2023-01-11T21:56:30.5271714Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.stack, start_node_a: torch._VariableFunctionsClass.stack, unsupported 2023-01-11T21:56:30.5272385Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.cat, start_node_a: torch._VariableFunctionsClass.cat, unsupported 2023-01-11T21:56:30.5273066Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.stack, start_node_a: torch._VariableFunctionsClass.stack, unsupported 2023-01-11T21:56:30.5273491Z ok (0.259s) 2023-01-11T21:56:30.5273927Z test_ops_with_same_fp32_and_int8_signature (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) 2023-01-11T21:56:30.5274437Z Verifies that we can match pairs of ops which have the same aten ... ok (2.809s) 2023-01-11T21:56:30.5274957Z test_shadow_activations_fqn (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.194s) 2023-01-11T21:56:30.5275697Z test_shadow_loggers_preserve_qat_numerics (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... 
ok (0.096s) 2023-01-11T21:56:30.5276306Z test_unsupported_op_copy_skips_shadowing (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) 2023-01-11T21:56:30.5277120Z Copying a `call_function` node is not implemented, test that this ... skipping shadow loggers for node_b: torch.nn.functional.layer_norm, start_node_a: torch.nn.functional.layer_norm, unhandled logic in subgraph copy 2023-01-11T21:56:30.5277958Z skipping shadow loggers for node_b: torch.ao.quantization.observer.HistogramObserver, start_node_a: torch.nn.functional.layer_norm, unhandled logic in subgraph copy 2023-01-11T21:56:30.5278695Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.layer_norm, start_node_a: torch.nn.functional.layer_norm, unhandled logic in subgraph copy 2023-01-11T21:56:30.5279293Z ok (0.413s) 2023-01-11T21:56:30.5279701Z test_user_defined_function (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) 2023-01-11T21:56:30.5280367Z Verify that NS APIs work on user defined functions ... skipping shadow loggers for node_b: torch._C._nn.linear, start_node_a: quantization.fx.test_numeric_suite_fx._wrapped_linear, unknown dtype cast 2023-01-11T21:56:30.5281173Z skipping shadow loggers for node_b: torch.ao.quantization.observer.HistogramObserver, start_node_a: quantization.fx.test_numeric_suite_fx._wrapped_linear, unknown dtype cast 2023-01-11T21:56:30.5281688Z ok (0.330s) 2023-01-11T21:56:30.5282061Z test_user_module (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) 2023-01-11T21:56:30.5283142Z For user defined modules, ... /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/quantize_fx.py:148: UserWarning: Passing a prepare_custom_config_dict to prepare is deprecated and will not be supported in a future version. Please pass in a PrepareCustomConfig instead. 2023-01-11T21:56:30.5283785Z warnings.warn( 2023-01-11T21:56:30.5284028Z ok (0.175s) 2023-01-11T21:56:30.5284444Z test_user_module_scriptable (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs) ... ok (0.214s) 2023-01-11T21:56:30.5285032Z test_compare_activations_conv (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIsModels) ... ok (1.142s) 2023-01-11T21:56:30.5285665Z test_compare_activations_linear (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIsModels) ... ok (0.501s) 2023-01-11T21:56:30.5287126Z test_compare_activations_lstm_dynamic (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIsModels) ... /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/reference/modules/rnn.py:320: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:56:30.5288065Z torch.tensor(weight_qparams["scale"], dtype=torch.float, device=device)) 2023-01-11T21:56:30.5289121Z /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/reference/modules/rnn.py:323: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:56:30.5289844Z torch.tensor(weight_qparams["zero_point"], dtype=torch.int, device=device)) 2023-01-11T21:56:30.5290172Z ok (0.251s) 2023-01-11T21:56:30.5290630Z test_compare_shadow_activations_conv (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIsModels) ... 
ok (1.204s) 2023-01-11T21:56:30.5291286Z test_compare_shadow_activations_linear (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIsModels) ... ok (0.640s) 2023-01-11T21:56:30.5291940Z test_compare_shadow_activations_lstm_dynamic (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIsModels) ... ok (0.134s) 2023-01-11T21:56:30.5292677Z test_compare_weights_conv (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIsModels) ... ok (0.713s) 2023-01-11T21:56:30.5293270Z test_compare_weights_linear (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIsModels) ... ok (0.420s) 2023-01-11T21:56:30.5293951Z test_compare_weights_lstm_dynamic (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIsModels) ... ok (0.227s) 2023-01-11T21:56:30.5294837Z test_mobilenet_v2 (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIsModels) ... skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5295745Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5296477Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5297254Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5297930Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5298646Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5299351Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5300057Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5300767Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5301490Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5302218Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5303062Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5303717Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5304421Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5305168Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5305932Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._VariableFunctionsClass.add, unsupported 
2023-01-11T21:56:30.5306666Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5307414Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5308132Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5308869Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5309609Z skipping shadow loggers for node_b: torch.nn.functional.adaptive_avg_pool2d, start_node_a: torch.nn.functional.adaptive_avg_pool2d, unhandled logic in subgraph copy 2023-01-11T21:56:30.5310519Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch.nn.functional.adaptive_avg_pool2d, unhandled logic in subgraph copy 2023-01-11T21:56:30.5311297Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5311971Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5312640Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5313311Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5314055Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5314881Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5315558Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5316206Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5316874Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5317612Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5318300Z skipping shadow loggers for node_b: torch.nn.functional.adaptive_avg_pool2d, start_node_a: torch.nn.functional.adaptive_avg_pool2d, unhandled logic in subgraph copy 2023-01-11T21:56:30.5318762Z ok (10.281s) 2023-01-11T21:56:30.5319392Z test_resnet18 (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIsModels) ... 
skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5320249Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5320924Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5321668Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5322386Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5323072Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5323765Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5324464Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5325172Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5325887Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5326617Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5327307Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5328136Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5328851Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5329567Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.add, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5330381Z skipping shadow loggers for node_b: torch.ao.quantization.observer.MinMaxObserver, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5331275Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.add_relu, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5331963Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.add_relu, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5332732Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.add_relu, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5333425Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.add_relu, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5334196Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.add_relu, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5334995Z skipping shadow loggers for node_b: 
torch._ops.quantized.PyCapsule.add_relu, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5335724Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.add_relu, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5336404Z skipping shadow loggers for node_b: torch._ops.quantized.PyCapsule.add_relu, start_node_a: torch._VariableFunctionsClass.add, unsupported 2023-01-11T21:56:30.5337201Z skipping shadow loggers for node_b: torch.nn.modules.pooling.AdaptiveAvgPool2d, start_node_a: torch.nn.modules.pooling.AdaptiveAvgPool2d, unknown dtype cast 2023-01-11T21:56:30.5337994Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.flatten, start_node_a: torch._VariableFunctionsClass.flatten, unknown dtype cast 2023-01-11T21:56:30.5338457Z ok (4.487s) 2023-01-11T21:56:30.5338905Z test_sparsenn_compare_activations (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIsModels) ... ok (4.209s) 2023-01-11T21:56:30.5339815Z test_sparsenn_shadow (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIsModels) ... skipping shadow loggers for node_b: torch.nn.modules.sparse.EmbeddingBag, start_node_a: torch.nn.modules.sparse.EmbeddingBag, unknown dtype cast 2023-01-11T21:56:30.5340794Z skipping shadow loggers for node_b: torch.ao.quantization.observer.HistogramObserver, start_node_a: torch.nn.modules.sparse.EmbeddingBag, unknown dtype cast 2023-01-11T21:56:30.5341577Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.cat, start_node_a: torch._VariableFunctionsClass.cat, unsupported 2023-01-11T21:56:30.5342287Z skipping shadow loggers for node_b: torch.nn.modules.sparse.EmbeddingBag, start_node_a: torch.nn.modules.sparse.EmbeddingBag, unknown dtype cast 2023-01-11T21:56:30.5343133Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.cat, start_node_a: torch._VariableFunctionsClass.cat, unsupported 2023-01-11T21:56:30.5343856Z skipping shadow loggers for node_b: torch.nn.modules.sparse.EmbeddingBag, start_node_a: torch.nn.modules.sparse.EmbeddingBag, unknown dtype cast 2023-01-11T21:56:30.5344648Z skipping shadow loggers for node_b: torch.ao.quantization.observer.HistogramObserver, start_node_a: torch.nn.modules.sparse.EmbeddingBag, unknown dtype cast 2023-01-11T21:56:30.5345349Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.cat, start_node_a: torch._VariableFunctionsClass.cat, unsupported 2023-01-11T21:56:30.5346174Z skipping shadow loggers for node_b: torch.nn.modules.sparse.EmbeddingBag, start_node_a: torch.nn.modules.sparse.EmbeddingBag, unknown dtype cast 2023-01-11T21:56:30.5346895Z skipping shadow loggers for node_b: torch._VariableFunctionsClass.cat, start_node_a: torch._VariableFunctionsClass.cat, unsupported 2023-01-11T21:56:30.5347342Z ok (4.419s) 2023-01-11T21:56:30.5347907Z test_conv_bn_relu_mod (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteNShadows) ... `print_tabular` relies on the library `tabulate`, which could not be found on this machine. Run `pip install tabulate` to install the library. 2023-01-11T21:56:30.5348479Z ok (0.095s) 2023-01-11T21:56:30.5348895Z test_custom_functions_and_tracer (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteNShadows) ... 
2023-01-11T21:56:30.5348895Z test_custom_functions_and_tracer (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteNShadows) ... working
2023-01-11T21:56:30.5349258Z working
2023-01-11T21:56:30.5349472Z GraphModule(
  (fc1): Linear(in_features=2, out_features=2, bias=True)
  (fc2): Linear(in_features=2, out_features=2, bias=True)
  (shadow_0_0): OutputLogger(ref_name=model, model_name=subgraph_0_0,
      prev_node_name=fc1, ref_node_name=fc1,
      ref_node_target_type=torch.nn.modules.linear.Linear
      results_type=node_output, index_within_arg=0,
      index_of_arg=0, fqn=fc1)
  (shadow_wrapper_0_1): GraphModule(
    (mod_0): QuantizedLinear(in_features=2, out_features=2, scale=0.019699346274137497, zero_point=58, qscheme=torch.per_tensor_affine)
    (shadow_0_1): OutputComparisonLogger
  )
  (shadow_1_0): OutputLogger(ref_name=model, model_name=subgraph_1_0,
      prev_node_name=fc2, ref_node_name=fc2,
      ref_node_target_type=torch.nn.modules.linear.Linear
      results_type=node_output, index_within_arg=0,
      index_of_arg=0, fqn=fc2)
  (shadow_wrapper_1_1): GraphModule(
    (mod_0): QuantizedLinear(in_features=2, out_features=2, scale=0.01274071168154478, zero_point=96, qscheme=torch.per_tensor_affine)
    (shadow_1_1): OutputComparisonLogger
  )
)

def forward(self, x):
    fc1 = self.fc1(x)
    shadow_wrapper_0_1 = self.shadow_wrapper_0_1(fc1, x); x = None
    shadow_0_0 = self.shadow_0_0(fc1)
    fc2 = self.fc2(fc1)
    shadow_wrapper_1_1 = self.shadow_wrapper_1_1(fc2, fc1); fc1 = None
    shadow_1_0 = self.shadow_1_0(fc2)
    return fc2

2023-01-11T21:56:30.5359556Z # To see more debug info, please use `graph_module.print_readable()`
2023-01-11T21:56:30.5360047Z `print_tabular` relies on the library `tabulate`, which could not be found on this machine. Run `pip install tabulate` to install the library.
2023-01-11T21:56:30.5360477Z ok (0.078s)
2023-01-11T21:56:30.5362329Z test_functions (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteNShadows) ... /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/utils.py:813: UserWarning: QConfig must specify a FixedQParamsObserver or a FixedQParamsFakeQuantize for fixed qparams ops, ignoring QConfig(activation=functools.partial(, quant_min=0, quant_max=127){}, weight=functools.partial(, dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){}).
2023-01-11T21:56:30.5363812Z Please use torch.ao.quantization.get_default_qconfig_mapping or torch.ao.quantization.get_default_qat_qconfig_mapping. Example:
2023-01-11T21:56:30.5364338Z qconfig_mapping = get_default_qconfig_mapping("fbgemm")
2023-01-11T21:56:30.5364735Z model = prepare_fx(model, qconfig_mapping, example_inputs)
2023-01-11T21:56:30.5365186Z warnings.warn(("QConfig must specify a FixedQParamsObserver or a FixedQParamsFakeQuantize "
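The warning above recommends the default QConfigMapping flow. Expanded into a runnable form, building on the two example lines quoted in the warning; the model, example inputs, and calibration loop are illustrative assumptions:

```python
import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

model = nn.Sequential(nn.Linear(8, 8), nn.Sigmoid()).eval()   # sigmoid is a fixed-qparams op
example_inputs = (torch.randn(1, 8),)

# The default mapping already assigns fixed-qparams observers to ops such as
# sigmoid/tanh, which is exactly what the warning asks for.
qconfig_mapping = get_default_qconfig_mapping("fbgemm")
model = prepare_fx(model, qconfig_mapping, example_inputs)

for _ in range(4):                 # calibration (illustrative)
    model(torch.randn(1, 8))

quantized = convert_fx(model)
```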
2023-01-11T21:56:30.5365774Z `print_tabular` relies on the library `tabulate`, which could not be found on this machine. Run `pip install tabulate` to install the library.
2023-01-11T21:56:30.5366182Z ok (0.227s)
2023-01-11T21:56:30.5366751Z test_linear_mod (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteNShadows) ... `print_tabular` relies on the library `tabulate`, which could not be found on this machine. Run `pip install tabulate` to install the library.
2023-01-11T21:56:30.5367291Z ok (0.030s)
2023-01-11T21:56:30.5367933Z test_linear_relu_mod (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteNShadows) ... `print_tabular` relies on the library `tabulate`, which could not be found on this machine. Run `pip install tabulate` to install the library.
2023-01-11T21:56:30.5368489Z ok (0.086s)
2023-01-11T21:56:30.5368934Z test_logger_enabled_and_save_activations_flags (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteNShadows) ... ok (0.040s)
2023-01-11T21:56:30.5369739Z test_mobilenet_v2 (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteNShadows) ... `print_tabular` relies on the library `tabulate`, which could not be found on this machine. Run `pip install tabulate` to install the library.
2023-01-11T21:56:30.5370294Z ok (3.527s)
2023-01-11T21:56:30.5370864Z test_partial_qconfig_mapping (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteNShadows) ... unable to find at least one qconfig for node %fc : [#users=1] = call_module[target=fc](args = (%x,), kwargs = {}), skipping
2023-01-11T21:56:30.5371605Z unable to find at least one qconfig for node %add : [#users=1] = call_function[target=operator.add](args = (%relu, %relu), kwargs = {}), skipping
2023-01-11T21:56:30.5372239Z `print_tabular` relies on the library `tabulate`, which could not be found on this machine. Run `pip install tabulate` to install the library.
2023-01-11T21:56:30.5372651Z ok (0.170s)
2023-01-11T21:56:30.5373098Z test_qconfig_multi_mapping_deduplication (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteNShadows) ... ok (0.001s)
2023-01-11T21:56:30.5373902Z test_qconfig_multi_mapping_end_to_end (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteNShadows) ... `print_tabular` relies on the library `tabulate`, which could not be found on this machine. Run `pip install tabulate` to install the library.
2023-01-11T21:56:30.5374481Z ok (0.091s)
2023-01-11T21:56:30.5375091Z test_qconfig_multi_mapping_from_list (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteNShadows) ... `print_tabular` relies on the library `tabulate`, which could not be found on this machine. Run `pip install tabulate` to install the library.
2023-01-11T21:56:30.5375675Z ok (0.085s)
2023-01-11T21:56:30.5376097Z test_qconfig_multi_mapping_insert_padding (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteNShadows) ... ok (0.001s)
2023-01-11T21:56:30.5376928Z test_qconfig_multi_mapping_ordering (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteNShadows) ... `print_tabular` relies on the library `tabulate`, which could not be found on this machine. Run `pip install tabulate` to install the library.
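The recurring `tabulate` message appears to come from torch.fx's graph printer, which these tests call on the instrumented graph modules. A minimal sketch of resolving it; the package name is the log's own suggestion, and the toy module here is an assumption:

```python
# pip install tabulate     # the optional dependency the warning asks for
import torch.nn as nn
from torch.fx import symbolic_trace

gm = symbolic_trace(nn.Sequential(nn.Linear(2, 2), nn.ReLU()))  # illustrative module
gm.graph.print_tabular()   # tabular dump of the FX graph once tabulate is installed
gm.print_readable()        # the alternative the log itself suggests; needs no extra package
```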
2023-01-11T21:56:30.5377528Z ok (0.086s) 2023-01-11T21:56:30.5377950Z test_qconfig_multi_mapping_repr (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteNShadows) ... ok (0.001s) 2023-01-11T21:56:30.5378571Z test_qconfig_multi_mapping_retroactive_padding (quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteNShadows) ... ok (0.001s) 2023-01-11T21:56:30.5379185Z test_fq_module_per_channel (quantization.core.test_workflow_module.TestFakeQuantize) ... ok (0.167s) 2023-01-11T21:56:30.5379835Z test_fq_serializable_per_channel (quantization.core.test_workflow_module.TestFakeQuantize) ... ok (0.003s) 2023-01-11T21:56:30.5380365Z test_quant_min_max_override (quantization.core.test_workflow_module.TestFakeQuantize) ... ok (0.001s) 2023-01-11T21:56:30.5380871Z test_backward_per_channel (quantization.core.test_workflow_ops.TestFakeQuantizeOps) 2023-01-11T21:56:30.5381431Z Tests the backward method. ... skip: this is broken without changes to any relevant code, we need to remove hypothesis testing in CI (0.100s) 2023-01-11T21:56:30.5382030Z test_backward_per_channel_cachemask_cpu (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... ok (0.014s) 2023-01-11T21:56:30.5383057Z test_backward_per_channel_cachemask_cuda (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... skip: No gpu is not available. (0.000s) 2023-01-11T21:56:30.5383522Z test_backward_per_tensor (quantization.core.test_workflow_ops.TestFakeQuantizeOps) 2023-01-11T21:56:30.5383976Z Tests the backward method. ... skip: temporarily disable the test (0.004s) 2023-01-11T21:56:30.5384347Z test_backward_per_tensor_cachemask_cpu (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... ok (0.010s) 2023-01-11T21:56:30.5384783Z test_backward_per_tensor_cachemask_cuda (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... skip: No gpu is not available. (0.000s) 2023-01-11T21:56:30.5385205Z test_fake_quant_control (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... ok (0.008s) 2023-01-11T21:56:30.5385606Z test_fake_quant_per_channel_qparam_range (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... ok (0.360s) 2023-01-11T21:56:30.5386035Z test_fake_quant_preserves_qparam_shapes_for_activations (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... ok (0.004s) 2023-01-11T21:56:30.5386439Z test_fixed_qparams_fq_module (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... ok (0.102s) 2023-01-11T21:56:30.5387215Z test_forward_backward_per_tensor_with_amp (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... /opt/conda/lib/python3.10/site-packages/torch/amp/autocast_mode.py:204: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling 2023-01-11T21:56:30.5387828Z warnings.warn('User provided device_type of \'cuda\', but CUDA is not available. Disabling') 2023-01-11T21:56:30.5388087Z ok (0.006s) 2023-01-11T21:56:30.5388342Z test_forward_per_channel (quantization.core.test_workflow_ops.TestFakeQuantizeOps) 2023-01-11T21:56:30.5388698Z Tests the forward path of the FakeQuantizePerTensorAffine op. ... ok (0.161s) 2023-01-11T21:56:30.5389082Z test_forward_per_channel_cachemask_cpu (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... ok (0.011s) 2023-01-11T21:56:30.5389530Z test_forward_per_channel_cachemask_cuda (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... skip: No gpu is not available. 
(0.000s) 2023-01-11T21:56:30.5389969Z test_forward_per_channel_half_precision_numerics (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... ok (0.011s) 2023-01-11T21:56:30.5390361Z test_forward_per_tensor (quantization.core.test_workflow_ops.TestFakeQuantizeOps) 2023-01-11T21:56:30.5390706Z Tests the forward path of the FakeQuantizePerTensorAffine op. ... ok (0.093s) 2023-01-11T21:56:30.5391068Z test_forward_per_tensor_cachemask_cpu (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... ok (0.011s) 2023-01-11T21:56:30.5391509Z test_forward_per_tensor_cachemask_cuda (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... skip: No gpu is not available. (0.000s) 2023-01-11T21:56:30.5391952Z test_forward_per_tensor_half_precision_numerics (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... ok (0.005s) 2023-01-11T21:56:30.5392353Z test_fq_module_per_tensor (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... ok (0.134s) 2023-01-11T21:56:30.5392731Z test_fq_serializable_per_tensor (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... ok (0.007s) 2023-01-11T21:56:30.5393298Z test_learnable_backward_per_channel_cpu (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... skip: this is broken without changes to any relevant code, we need to remove hypothesis testing in CI (0.003s) 2023-01-11T21:56:30.5393837Z test_learnable_backward_per_channel_cuda (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... skip: No gpu is not available. (0.003s) 2023-01-11T21:56:30.5394281Z test_learnable_backward_per_tensor_cpu (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... ok (0.277s) 2023-01-11T21:56:30.5394726Z test_learnable_backward_per_tensor_cuda (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... skip: No gpu is not available. (0.003s) 2023-01-11T21:56:30.5395152Z test_learnable_forward_per_channel_cpu (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... ok (0.135s) 2023-01-11T21:56:30.5395591Z test_learnable_forward_per_channel_cuda (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... skip: No gpu is not available. (0.003s) 2023-01-11T21:56:30.5396158Z test_learnable_forward_per_tensor_cpu (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... skip: this is broken without changes to any relevant code, we need to remove hypothesis testing in CI (0.003s) 2023-01-11T21:56:30.5396687Z test_learnable_forward_per_tensor_cuda (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... skip: No gpu is not available. (0.003s) 2023-01-11T21:56:30.5397162Z test_numerical_consistency_per_channel (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... ok (0.078s) 2023-01-11T21:56:30.5397575Z test_numerical_consistency_per_tensor (quantization.core.test_workflow_ops.TestFakeQuantizeOps) ... ok (0.029s) 2023-01-11T21:56:30.5397951Z test_forward_hooks_preserved (quantization.eager.test_fuse_eager.TestFuseEager) 2023-01-11T21:56:30.5398290Z Test case that checks whether forward pre hooks of the first module and ... ok (0.015s) 2023-01-11T21:56:30.5398637Z test_fuse_function_customization (quantization.eager.test_fuse_eager.TestFuseEager) ... ok (0.001s) 2023-01-11T21:56:30.5399008Z test_fuse_module_eval (quantization.eager.test_fuse_eager.TestFuseEager) ... ok (0.065s) 2023-01-11T21:56:30.5399356Z test_fuse_module_train (quantization.eager.test_fuse_eager.TestFuseEager) ... ok (0.387s) 2023-01-11T21:56:30.5399694Z test_fusion_conv_with_bias (quantization.eager.test_fuse_eager.TestFuseEager) ... 
ok (0.057s) 2023-01-11T21:56:30.5400061Z test_fusion_convtranspose_bn_eval (quantization.eager.test_fuse_eager.TestFuseEager) ... ok (0.005s) 2023-01-11T21:56:30.5400429Z test_fusion_linear_bn_eval (quantization.eager.test_fuse_eager.TestFuseEager) ... ok (0.002s) 2023-01-11T21:56:30.5400803Z test_fusion_sequential_model_eval (quantization.eager.test_fuse_eager.TestFuseEager) ... ok (1.115s) 2023-01-11T21:56:30.5401165Z test_fusion_sequential_model_train (quantization.eager.test_fuse_eager.TestFuseEager) ... ok (0.252s) 2023-01-11T21:56:30.5401638Z test_fuse_addtional_fuser_method (quantization.fx.test_quantize_fx.TestFuseFx) ... skip: Temporarily skipping the test case, will enable after the simplepattern format is supported (0.002s) 2023-01-11T21:56:30.5402073Z test_fuse_conv_bn_relu (quantization.fx.test_quantize_fx.TestFuseFx) ... ok (0.025s) 2023-01-11T21:56:30.5402414Z test_fuse_convtranspose_bn_eval (quantization.fx.test_quantize_fx.TestFuseFx) ... ok (0.006s) 2023-01-11T21:56:30.5402743Z test_fuse_custom_pattern (quantization.fx.test_quantize_fx.TestFuseFx) ... ok (0.008s) 2023-01-11T21:56:30.5403074Z test_fuse_linear_bn_eval (quantization.fx.test_quantize_fx.TestFuseFx) ... ok (0.005s) 2023-01-11T21:56:30.5403418Z test_fuse_linear_bn_leaky_relu_onednn (quantization.fx.test_quantize_fx.TestFuseFx) ... ok (0.009s) 2023-01-11T21:56:30.5403771Z test_fuse_linear_tanh_for_onednn_backend (quantization.fx.test_quantize_fx.TestFuseFx) ... ok (0.004s) 2023-01-11T21:56:30.5404113Z test_fuse_module_relu (quantization.fx.test_quantize_fx.TestFuseFx) ... ok (0.007s) 2023-01-11T21:56:30.5404450Z test_fusion_pattern_with_matchallnode (quantization.fx.test_quantize_fx.TestFuseFx) 2023-01-11T21:56:30.5404851Z This test tests that the node matched by MatchAllNode will be regared as an input ... ok (0.005s) 2023-01-11T21:56:30.5405187Z test_fusion_pattern_with_multiple_inputs (quantization.fx.test_quantize_fx.TestFuseFx) 2023-01-11T21:56:30.5405511Z This test tests two keys in backend_config: root_node_getter and ... ok (0.005s) 2023-01-11T21:56:30.5405853Z test_linear_bn_leaky_relu_not_fused_by_default (quantization.fx.test_quantize_fx.TestFuseFx) ... ok (0.009s) 2023-01-11T21:56:30.5406223Z test_linear_tanh_not_fused_by_default (quantization.fx.test_quantize_fx.TestFuseFx) ... ok (0.004s) 2023-01-11T21:56:30.5406567Z test_problematic_fuse_example (quantization.fx.test_quantize_fx.TestFuseFx) ... ok (0.013s) 2023-01-11T21:56:30.5406893Z test_qconfig_fused_module (quantization.fx.test_quantize_fx.TestFuseFx) 2023-01-11T21:56:30.5407172Z TODO: add test for all fused modules ... ok (0.035s) 2023-01-11T21:56:30.5407538Z test_fused_backward_op_fake_quant_off (quantization.core.test_workflow_ops.TestFusedObsFakeQuant) ... ok (0.005s) 2023-01-11T21:56:30.5407954Z test_fused_obs_fake_quant_backward_op (quantization.core.test_workflow_ops.TestFusedObsFakeQuant) ... ok (0.005s) 2023-01-11T21:56:30.5408352Z test_fused_obs_fake_quant_moving_avg (quantization.core.test_workflow_ops.TestFusedObsFakeQuant) 2023-01-11T21:56:30.5408709Z Tests the case where we call the fused_obs_fake_quant op multiple times ... ok (0.011s) 2023-01-11T21:56:30.5409064Z test_fused_obs_fake_quant_moving_avg_per_channel (quantization.core.test_workflow_ops.TestFusedObsFakeQuant) 2023-01-11T21:56:30.5409428Z Tests the case where we call the fused_obs_fake_quant op multiple times ... ok (0.036s) 2023-01-11T21:56:30.5409816Z test_compare_fused_obs_fq_oss_module (quantization.core.test_workflow_module.TestFusedObsFakeQuantModule) ... 
ok (0.009s) 2023-01-11T21:56:30.5410265Z test_default_fused_qat_config (quantization.core.test_workflow_module.TestFusedObsFakeQuantModule) ... ok (0.015s) 2023-01-11T21:56:30.5410694Z test_embedding_bag_qat_config (quantization.core.test_workflow_module.TestFusedObsFakeQuantModule) ... ok (0.017s) 2023-01-11T21:56:30.5411127Z test_embedding_qat_config (quantization.core.test_workflow_module.TestFusedObsFakeQuantModule) ... ok (0.047s) 2023-01-11T21:56:30.5411553Z test_fused_mod_per_channel (quantization.core.test_workflow_module.TestFusedObsFakeQuantModule) ... ok (0.025s) 2023-01-11T21:56:30.5411971Z test_fused_mod_reduce_range (quantization.core.test_workflow_module.TestFusedObsFakeQuantModule) ... ok (0.001s) 2023-01-11T21:56:30.5412398Z test_fused_obs_fq_module (quantization.core.test_workflow_module.TestFusedObsFakeQuantModule) ... ok (0.005s) 2023-01-11T21:56:30.5412834Z test_fused_obs_fq_moving_avg_module (quantization.core.test_workflow_module.TestFusedObsFakeQuantModule) ... ok (0.007s) 2023-01-11T21:56:30.5413250Z test_quantized_add_relu_fusion (quantization.jit.test_fusion_passes.TestFusionPasses) ... ok (0.018s) 2023-01-11T21:56:30.5413685Z test_input_weight_equalization_determine_points (quantization.fx.test_model_report_fx.TestFxDetectInputWeightEqualization) ... ok (0.052s) 2023-01-11T21:56:30.5414179Z test_input_weight_equalization_report_gen (quantization.fx.test_model_report_fx.TestFxDetectInputWeightEqualization) ... ok (0.064s) 2023-01-11T21:56:30.5414663Z test_input_weight_equalization_report_gen_empty (quantization.fx.test_model_report_fx.TestFxDetectInputWeightEqualization) ... ok (0.021s) 2023-01-11T21:56:30.5415099Z test_all_outlier_report_gen (quantization.fx.test_model_report_fx.TestFxDetectOutliers) ... ok (0.034s) 2023-01-11T21:56:30.5415500Z test_multiple_run_consistent_spike_outlier_report_gen (quantization.fx.test_model_report_fx.TestFxDetectOutliers) ... ok (0.097s) 2023-01-11T21:56:30.5415903Z test_no_outlier_report_gen (quantization.fx.test_model_report_fx.TestFxDetectOutliers) ... ok (0.023s) 2023-01-11T21:56:30.5416303Z test_outlier_detection_determine_points (quantization.fx.test_model_report_fx.TestFxDetectOutliers) ... ok (0.196s) 2023-01-11T21:56:30.5416720Z test_constructor (quantization.fx.test_model_report_fx.TestFxModelReportClass) 2023-01-11T21:56:30.5417025Z Tests the constructor of the ModelReport class. ... ok (0.025s) 2023-01-11T21:56:30.5417364Z test_equalization_mapping_generation (quantization.fx.test_model_report_fx.TestFxModelReportClass) 2023-01-11T21:56:30.5417709Z Tests for generation of qconfigs by ModelReport API ... ok (0.101s) 2023-01-11T21:56:30.5418022Z test_generate_report (quantization.fx.test_model_report_fx.TestFxModelReportClass) 2023-01-11T21:56:30.5418636Z Tests model_report.generate_model_report to ensure report generation ... /var/lib/jenkins/workspace/test/quantization/fx/test_model_report_fx.py:1061: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:56:30.5419238Z example_input = torch.tensor(torch.randint(100, (1, 3, 3, 3)), dtype=torch.float) 2023-01-11T21:56:30.5419472Z ok (0.156s) 2023-01-11T21:56:30.5419732Z test_generate_visualizer (quantization.fx.test_model_report_fx.TestFxModelReportClass) 2023-01-11T21:56:30.5420111Z Tests that the ModelReport class can properly create the ModelReportVisualizer instance ... 
ok (0.031s) 2023-01-11T21:56:30.5420502Z test_prepare_model_callibration (quantization.fx.test_model_report_fx.TestFxModelReportClass) 2023-01-11T21:56:30.5420883Z Tests model_report.prepare_detailed_calibration that prepares the model for callibration ... ok (0.023s) 2023-01-11T21:56:30.5421255Z test_qconfig_mapping_generation (quantization.fx.test_model_report_fx.TestFxModelReportClass) 2023-01-11T21:56:30.5421587Z Tests for generation of qconfigs by ModelReport API ... ok (0.053s) 2023-01-11T21:56:30.5421967Z test_nested_detection_case (quantization.fx.test_model_report_fx.TestFxModelReportDetectDynamicStatic) ... ok (0.042s) 2023-01-11T21:56:30.5422589Z test_conv_sub_class_considered (quantization.fx.test_model_report_fx.TestFxModelReportDetector) ... ok (0.021s) 2023-01-11T21:56:30.5422999Z test_fusion_layer_in_sequential (quantization.fx.test_model_report_fx.TestFxModelReportDetector) ... ok (0.018s) 2023-01-11T21:56:30.5423423Z test_multi_linear_model_without_per_channel (quantization.fx.test_model_report_fx.TestFxModelReportDetector) ... ok (0.014s) 2023-01-11T21:56:30.5423842Z test_multiple_q_config_options (quantization.fx.test_model_report_fx.TestFxModelReportDetector) ... ok (0.018s) 2023-01-11T21:56:30.5424254Z test_qat_aware_model_example (quantization.fx.test_model_report_fx.TestFxModelReportDetector) ... ok (0.010s) 2023-01-11T21:56:30.5424645Z test_sequential_model_format (quantization.fx.test_model_report_fx.TestFxModelReportDetector) ... ok (0.020s) 2023-01-11T21:56:30.5425038Z test_simple_conv (quantization.fx.test_model_report_fx.TestFxModelReportDetector) ... ok (0.011s) 2023-01-11T21:56:30.5425437Z test_observer_after_relu (quantization.fx.test_model_report_fx.TestFxModelReportObserver) ... ok (0.018s) 2023-01-11T21:56:30.5425837Z test_random_epochs_and_batches (quantization.fx.test_model_report_fx.TestFxModelReportObserver) ... ok (0.103s) 2023-01-11T21:56:30.5426244Z test_single_batch_of_ones (quantization.fx.test_model_report_fx.TestFxModelReportObserver) ... ok (0.003s) 2023-01-11T21:56:30.5426644Z test_zero_tensor_errors (quantization.fx.test_model_report_fx.TestFxModelReportObserver) ... ok (0.013s) 2023-01-11T21:56:30.5427051Z test_generate_tables_match_with_report (quantization.fx.test_model_report_fx.TestFxModelReportVisualizer) 2023-01-11T21:56:30.5427367Z Tests the generate_table_view() ... ok (0.031s) 2023-01-11T21:56:30.5427690Z test_generate_tables_no_match (quantization.fx.test_model_report_fx.TestFxModelReportVisualizer) 2023-01-11T21:56:30.5428000Z Tests the generate_table_view() ... ok (0.029s) 2023-01-11T21:56:30.5428331Z test_generate_tables_single_feat_match (quantization.fx.test_model_report_fx.TestFxModelReportVisualizer) 2023-01-11T21:56:30.5428693Z Tests the generate_table_view() ... ok (0.028s) 2023-01-11T21:56:30.5429008Z test_get_modules_and_features (quantization.fx.test_model_report_fx.TestFxModelReportVisualizer) 2023-01-11T21:56:30.5429373Z Tests the get_all_unique_module_fqns and get_all_unique_feature_names methods of ... ok (0.029s) 2023-01-11T21:56:30.5429735Z test_histogram_observer (quantization.core.test_workflow_module.TestHistogramObserver) ... ok (21.203s) 2023-01-11T21:56:30.5430155Z test_histogram_observer_against_reference (quantization.core.test_workflow_module.TestHistogramObserver) ... ok (4.883s) 2023-01-11T21:56:30.5430594Z test_histogram_observer_correct_numel (quantization.core.test_workflow_module.TestHistogramObserver) ... 
ok (0.003s) 2023-01-11T21:56:30.5431014Z test_histogram_observer_extreme_inputs (quantization.core.test_workflow_module.TestHistogramObserver) 2023-01-11T21:56:30.5431402Z Ensures that the HistogramObserver is able to work correctly in ... ok (0.001s) 2023-01-11T21:56:30.5431775Z test_histogram_observer_one_sided (quantization.core.test_workflow_module.TestHistogramObserver) ... ok (0.210s) 2023-01-11T21:56:30.5432194Z test_histogram_observer_same_inputs (quantization.core.test_workflow_module.TestHistogramObserver) ... ok (0.641s) 2023-01-11T21:56:30.5432607Z test_observer_scriptable (quantization.core.test_workflow_module.TestHistogramObserver) ... ok (0.454s) 2023-01-11T21:56:30.5433010Z test_fake_quant_true_quant_compare (quantization.eager.test_model_numerics.TestModelNumericsEager) ... ok (0.193s) 2023-01-11T21:56:30.5433445Z test_float_quant_compare_per_channel (quantization.eager.test_model_numerics.TestModelNumericsEager) ... ok (0.051s) 2023-01-11T21:56:30.5433874Z test_float_quant_compare_per_tensor (quantization.eager.test_model_numerics.TestModelNumericsEager) ... ok (0.148s) 2023-01-11T21:56:30.5434303Z test_weight_only_activation_only_fakequant (quantization.eager.test_model_numerics.TestModelNumericsEager) ... ok (0.232s) 2023-01-11T21:56:30.5434748Z test_compare_model_outputs_conv_static (quantization.eager.test_numeric_suite_eager.TestNumericSuiteEager) ... ok (0.179s) 2023-01-11T21:56:30.5435193Z test_compare_model_outputs_functional_static (quantization.eager.test_numeric_suite_eager.TestNumericSuiteEager) ... ok (0.178s) 2023-01-11T21:56:30.5435638Z test_compare_model_outputs_linear_dynamic (quantization.eager.test_numeric_suite_eager.TestNumericSuiteEager) ... ok (0.029s) 2023-01-11T21:56:30.5436065Z test_compare_model_outputs_linear_static (quantization.eager.test_numeric_suite_eager.TestNumericSuiteEager) ... ok (0.727s) 2023-01-11T21:56:30.5436505Z test_compare_model_outputs_lstm_dynamic (quantization.eager.test_numeric_suite_eager.TestNumericSuiteEager) ... ok (0.128s) 2023-01-11T21:56:30.5436987Z test_compare_model_stub_conv_static (quantization.eager.test_numeric_suite_eager.TestNumericSuiteEager) ... ok (0.223s) 2023-01-11T21:56:30.5437425Z test_compare_model_stub_functional_static (quantization.eager.test_numeric_suite_eager.TestNumericSuiteEager) ... ok (0.169s) 2023-01-11T21:56:30.5437849Z test_compare_model_stub_linear_dynamic (quantization.eager.test_numeric_suite_eager.TestNumericSuiteEager) ... ok (0.029s) 2023-01-11T21:56:30.5438280Z test_compare_model_stub_linear_static (quantization.eager.test_numeric_suite_eager.TestNumericSuiteEager) ... ok (0.718s) 2023-01-11T21:56:30.5438708Z test_compare_model_stub_lstm_dynamic (quantization.eager.test_numeric_suite_eager.TestNumericSuiteEager) ... ok (0.079s) 2023-01-11T21:56:30.5439134Z test_compare_model_stub_partial (quantization.eager.test_numeric_suite_eager.TestNumericSuiteEager) ... ok (0.669s) 2023-01-11T21:56:30.5439552Z test_compare_model_stub_submodule_static (quantization.eager.test_numeric_suite_eager.TestNumericSuiteEager) ... ok (0.046s) 2023-01-11T21:56:30.5439991Z test_compare_weights_conv_static (quantization.eager.test_numeric_suite_eager.TestNumericSuiteEager) ... ok (0.119s) 2023-01-11T21:56:30.5440461Z test_compare_weights_linear_dynamic (quantization.eager.test_numeric_suite_eager.TestNumericSuiteEager) ... ok (0.012s) 2023-01-11T21:56:30.5440894Z test_compare_weights_linear_static (quantization.eager.test_numeric_suite_eager.TestNumericSuiteEager) ... 
ok (0.710s)
2023-01-11T21:56:30.5441308Z test_compare_weights_lstm_dynamic (quantization.eager.test_numeric_suite_eager.TestNumericSuiteEager) ... ok (0.031s)
2023-01-11T21:56:30.5442458Z test_mobilenet_v2 (quantization.eager.test_numeric_suite_eager.TestNumericSuiteEager) ... /var/lib/jenkins/.local/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=MobileNet_V2_Weights.IMAGENET1K_V1`. You can also use `weights=MobileNet_V2_Weights.DEFAULT` to get the most up-to-date weights.
2023-01-11T21:56:30.5443104Z warnings.warn(msg)
2023-01-11T21:56:30.5443617Z Downloading: "https://download.pytorch.org/models/mobilenet_v2-b0353104.pth" to /var/lib/jenkins/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth
2023-01-11T21:56:30.5443962Z 0%| | 0.00/13.6M [00:00
[download progress output and a stretch of subsequent log lines are missing from this capture; the log resumes in the middle of a later "ignoring QConfig(...)" warning]
, quant_min=0, quant_max=63, dtype=torch.quint8){'factory_kwargs': .get_factory_kwargs_based_on_module_device at 0x7f6135b8b370>}, weight=functools.partial(, dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){'factory_kwargs': .get_factory_kwargs_based_on_module_device at 0x7f6135b8b370>})
2023-01-11T21:56:30.5517774Z warnings.warn(("QConfig %s quantization range must fall within the backend's:\n"
2023-01-11T21:56:30.5518290Z /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/utils.py:783: UserWarning: QConfig activation quantization range must fall within the backend's:
2023-01-11T21:56:30.5519618Z QConfig range = (0, 255), BackendConfig range = (0, 31), ignoring QConfig(activation=functools.partial(, dtype=torch.quint8){'factory_kwargs': .get_factory_kwargs_based_on_module_device at 0x7f6135b88700>}, weight=functools.partial(, quant_min=-128, quant_max=127, dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){'factory_kwargs': .get_factory_kwargs_based_on_module_device at 0x7f6135b88700>})
2023-01-11T21:56:30.5520531Z warnings.warn(("QConfig %s quantization range must fall within the backend's:\n"
2023-01-11T21:56:30.5521367Z /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/utils.py:779: UserWarning: QConfig activation must specify 'quant_min' and 'quant_max', ignoring QConfig(activation=, weight=)
2023-01-11T21:56:30.5522053Z warnings.warn("QConfig %s must specify 'quant_min' and 'quant_max', ignoring %s" %
2023-01-11T21:56:30.5522281Z ok (0.034s)
2023-01-11T21:56:30.5522536Z test_backend_config_scale_min (quantization.fx.test_quantize_fx.TestQuantizeFx)
2023-01-11T21:56:30.5524136Z Test QConfig eps validation against the BackendConfig's min scale value. ...
/opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/utils.py:794: UserWarning: QConfig activation eps (tensor([6.1035e-05])) must be greater than or equal to the backend's min scale value (0.000244140625), ignoring QConfig(activation=functools.partial(, dtype=torch.quint8, eps=6.103515625e-05){'factory_kwargs': .get_factory_kwargs_based_on_module_device at 0x7f6135499870>}, weight=functools.partial(, dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){'factory_kwargs': .get_factory_kwargs_based_on_module_device at 0x7f6135499870>}) 2023-01-11T21:56:30.5525212Z warnings.warn(("QConfig %s eps (%s) must be greater than or equal to " 2023-01-11T21:56:30.5526703Z /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/utils.py:794: UserWarning: QConfig activation eps (tensor([1.1921e-07])) must be greater than or equal to the backend's min scale value (0.000244140625), ignoring QConfig(activation=functools.partial(, dtype=torch.quint8){'factory_kwargs': .get_factory_kwargs_based_on_module_device at 0x7f613549a7a0>}, weight=functools.partial(, dtype=torch.qint8, qscheme=torch.per_tensor_symmetric, eps=6.103515625e-05){'factory_kwargs': .get_factory_kwargs_based_on_module_device at 0x7f613549a7a0>}) 2023-01-11T21:56:30.5527671Z warnings.warn(("QConfig %s eps (%s) must be greater than or equal to " 2023-01-11T21:56:30.5528611Z /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/utils.py:791: UserWarning: QConfig activation must specify 'eps', ignoring QConfig(activation=functools.partial(, scale=1.0, zero_point=0){}, weight=functools.partial(, scale=1.0, zero_point=0){}) 2023-01-11T21:56:30.5529377Z warnings.warn("QConfig %s must specify 'eps', ignoring %s" % (debug_string, qconfig)) 2023-01-11T21:56:30.5529607Z ok (0.046s) 2023-01-11T21:56:30.5529883Z test_change_backend_config_for_fixed_qparam_ops (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5530238Z Making sure we can skip validation of qconfigs for fixedqparam ops based ... ok (0.008s) 2023-01-11T21:56:30.5530578Z test_channel_shuffle_lowering (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.076s) 2023-01-11T21:56:30.5530917Z test_conv_bn_relu (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5531298Z Tests fusion and quantization for "Conv - Bn" and "Conv - Bn - ReLU" ... ok (1.289s) 2023-01-11T21:56:30.5531625Z test_conv_linear_not_reference (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5531898Z Test quantizing conv and linear ... ok (1.865s) 2023-01-11T21:56:30.5532185Z test_conv_linear_reference (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5532513Z Test quantizing functional conv and linear with reference option ... ok (1.667s) 2023-01-11T21:56:30.5532831Z test_conv_lowering (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.073s) 2023-01-11T21:56:30.5533188Z test_convert_custom_config_from_dict (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.002s) 2023-01-11T21:56:30.5533590Z test_convert_custom_config_set_observed_to_quantized_mapping (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.001s) 2023-01-11T21:56:30.5534008Z test_convert_custom_config_set_preserved_attributes (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.001s) 2023-01-11T21:56:30.5534383Z test_convert_custom_config_to_dict (quantization.fx.test_quantize_fx.TestQuantizeFx) ... 
ok (0.001s) 2023-01-11T21:56:30.5535186Z test_convert_qconfig_mapping (quantization.fx.test_quantize_fx.TestQuantizeFx) ... /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/convert.py:886: UserWarning: Passing a QConfig dictionary to convert is deprecated and will not be supported in a future version. Please pass in a QConfigMapping instead. 2023-01-11T21:56:30.5535670Z warnings.warn( 2023-01-11T21:56:30.5536290Z /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:1346: UserWarning: Node mods1_1_packed_weight_0 target mods1_1_packed_weight_0 mods1_1_packed_weight_0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target 2023-01-11T21:56:30.5536889Z warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does ' 2023-01-11T21:56:30.5537561Z /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:1346: UserWarning: Node mods1_0_packed_weight_0 target mods1_0_packed_weight_0 mods1_0_packed_weight_0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target 2023-01-11T21:56:30.5538110Z warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does ' 2023-01-11T21:56:30.5538349Z ok (0.102s) 2023-01-11T21:56:30.5538611Z test_convtranspose_per_channel_fails_early (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5539048Z Verifies that attempting to quantize a ConvTranspose module with per-Channel ... ok (0.013s) 2023-01-11T21:56:30.5539409Z test_copy_node_has_shared_actpp_instance (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5539718Z Test the output of CopyNode to have the same ... ok (0.042s) 2023-01-11T21:56:30.5540047Z test_custom_module_class (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.093s) 2023-01-11T21:56:30.5540411Z test_custom_module_class_input_has_multiple_users (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5540752Z Tests that the flow still works when the input of custom module ... ok (0.026s) 2023-01-11T21:56:30.5541097Z test_deepcopy_preserve_attributes (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.014s) 2023-01-11T21:56:30.5541474Z test_default_qconfig_mapping_override_global (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.012s) 2023-01-11T21:56:30.5541846Z test_default_quant_after_none_qconfig (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5542154Z Make sure default quant is inserted properly ... ok (0.020s) 2023-01-11T21:56:30.5542652Z test_dequantize (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5542958Z Test to make sure dequantize node are placed before ... ok (0.230s) 2023-01-11T21:56:30.5543259Z test_dict_output (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5543569Z Make sure quantization runs for models with dictionary output ... ok (0.020s) 2023-01-11T21:56:30.5543886Z test_dynamic_linear_input_multiple_use (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5544214Z Tests input for dynamic linear being used by multiple ops ... ok (0.043s) 2023-01-11T21:56:30.5544535Z test_dynamic_quant_fp16 (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.057s) 2023-01-11T21:56:30.5544867Z test_dynamic_quant_weight_observer (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5545177Z Test that weight observer is run in convert step ... 
ok (0.015s) 2023-01-11T21:56:30.5545476Z test_dynamic_with_fusion (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5546279Z Tests that dynamic quantization APIs work with Linear + Relu fusion ... /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:1346: UserWarning: Node mods2_packed_weight_0 target mods2_packed_weight_0 mods2_packed_weight_0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target 2023-01-11T21:56:30.5546892Z warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does ' 2023-01-11T21:56:30.5547129Z ok (0.057s) 2023-01-11T21:56:30.5547392Z test_dynamic_with_fusion_multiple_uses (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5547727Z Tests that dynamic quantization APIs work with Linear + Relu fusion ... ok (0.045s) 2023-01-11T21:56:30.5547864Z test_fold_quant_dequant (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5548057Z Test that the sequence of quant-dequant nodes in the ... ok (0.024s) 2023-01-11T21:56:30.5548227Z test_fp32_input_fp32_output (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.022s) 2023-01-11T21:56:30.5548402Z test_fp32_input_quantized_output (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.021s) 2023-01-11T21:56:30.5548602Z test_fp32_sum (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5548814Z Verifies that fp32 sum works correctly if it's before or after ... ok (0.051s) 2023-01-11T21:56:30.5548989Z test_fuse_custom_config_from_dict (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.001s) 2023-01-11T21:56:30.5549180Z test_fuse_custom_config_set_preserved_attributes (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.001s) 2023-01-11T21:56:30.5549340Z test_fuse_custom_config_to_dict (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.001s) 2023-01-11T21:56:30.5549508Z test_fused_module_qat_swap (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.023s) 2023-01-11T21:56:30.5549666Z test_fusion_pattern_unquantized (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5549814Z Ensure that leaving a possible fusion pattern of multiple nodes ... ok (0.160s) 2023-01-11T21:56:30.5550022Z test_get_default_qconfig_valid_backend (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5550194Z Checks that AssertionError is raised when non expected backend input is specified ... ok (0.001s) 2023-01-11T21:56:30.5550376Z test_get_executorch_backend_config (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.001s) 2023-01-11T21:56:30.5550536Z test_getattr_with_nontensor_result (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5550661Z Verifies that binary ops get quantized correctly if some ... ok (0.054s) 2023-01-11T21:56:30.5550811Z test_linear_bn (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.756s) 2023-01-11T21:56:30.5550968Z test_linear_leaky_relu_lowering (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5551160Z Test fusion and lowering of Linear - (bn -) LeakyReLU ... ok (0.049s) 2023-01-11T21:56:30.5551314Z test_linear_qint8_activation (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5551454Z Test support for qint8 activation in reference pattern ... 
ok (0.017s) 2023-01-11T21:56:30.5551608Z test_linear_tanh_lowering (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5551777Z Test fusion and lowering of Linear - Tanh ... ok (0.025s) 2023-01-11T21:56:30.5551963Z test_masked_fill_nontensor_args_not_observed (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.024s) 2023-01-11T21:56:30.5552118Z test_match_pattern_with_multiple_args (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5552256Z Test that we can match a pattern that has multiple arguments ... ok (0.010s) 2023-01-11T21:56:30.5552421Z test_mul_add_fp16_config (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.029s) 2023-01-11T21:56:30.5552598Z test_no_obs_between_unmatched_node_and_copy_node (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5552740Z Verifies that an observer is not inserted between an unmatched ... ok (0.019s) 2023-01-11T21:56:30.5552912Z test_non_traceable_module (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.014s) 2023-01-11T21:56:30.5553047Z test_not_used (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5553152Z Test quantizing a not used value ... ok (0.015s) 2023-01-11T21:56:30.5553281Z test_observer_fqn (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5553454Z Test to make sure the observer FQN is based on the quantizable op/module that it is observing ... ok (0.024s) 2023-01-11T21:56:30.5553607Z test_output_lists_and_dicts (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5553751Z Verify that specifying complicated output types does not crash. ... ok (0.018s) 2023-01-11T21:56:30.5553921Z test_packed_weight_fused_op (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.033s) 2023-01-11T21:56:30.5554065Z test_pattern_match (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5554158Z test MatchAllNode with ... ok (0.007s) 2023-01-11T21:56:30.5554365Z test_pattern_match_constant (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.005s) 2023-01-11T21:56:30.5554536Z test_permute_nontensor_args_not_observed (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.022s) 2023-01-11T21:56:30.5554717Z test_prepare_custom_config_from_dict (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.002s) 2023-01-11T21:56:30.5554916Z test_prepare_custom_config_set_float_to_observed_mapping (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.001s) 2023-01-11T21:56:30.5555115Z test_prepare_custom_config_set_input_quantized_indexes (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.001s) 2023-01-11T21:56:30.5555323Z test_prepare_custom_config_set_non_traceable_module_classes (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.001s) 2023-01-11T21:56:30.5555521Z test_prepare_custom_config_set_non_traceable_module_names (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.001s) 2023-01-11T21:56:30.5555752Z test_prepare_custom_config_set_output_quantized_indexes (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.001s) 2023-01-11T21:56:30.5555950Z test_prepare_custom_config_set_preserved_attributes (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.001s) 2023-01-11T21:56:30.5556150Z test_prepare_custom_config_set_standalone_module_class (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.001s) 2023-01-11T21:56:30.5556332Z test_prepare_custom_config_set_standalone_module_name (quantization.fx.test_quantize_fx.TestQuantizeFx) ... 
ok (0.001s) 2023-01-11T21:56:30.5556506Z test_prepare_custom_config_to_dict (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.002s) 2023-01-11T21:56:30.5556664Z test_prepare_mode (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.054s) 2023-01-11T21:56:30.5556825Z test_prepared_model_deepcopy (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5557050Z Ensures that copy.deepcopy works correctly on a prepared model. ... ok (0.021s) 2023-01-11T21:56:30.5557222Z test_preserve_attributes (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.017s) 2023-01-11T21:56:30.5557370Z test_preserve_qconfig (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5557528Z Test to make sure the temporary config option to preserve qconfig attributes ... ok (0.021s) 2023-01-11T21:56:30.5557674Z test_preserve_tuple (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5557772Z Test tuple input type is preserved ... ok (0.013s) 2023-01-11T21:56:30.5557966Z test_propagate_dtypes_for_known_nodes_dict_args (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.023s) 2023-01-11T21:56:30.5558168Z test_propagate_dtypes_for_known_nodes_dict_split_tuple_args (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.022s) 2023-01-11T21:56:30.5558364Z test_propagate_dtypes_for_known_nodes_dict_tuple_args (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.023s) 2023-01-11T21:56:30.5558562Z test_propagate_dtypes_for_known_nodes_list_args (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.022s) 2023-01-11T21:56:30.5558756Z test_propagate_dtypes_for_known_nodes_split_list_args (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.021s) 2023-01-11T21:56:30.5558952Z test_propagate_dtypes_for_known_nodes_split_tuple_args (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.023s) 2023-01-11T21:56:30.5559143Z test_propagate_dtypes_for_known_nodes_tuple_args (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.021s) 2023-01-11T21:56:30.5559300Z test_qat_and_script (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.207s) 2023-01-11T21:56:30.5559593Z test_qat_prepare_device_affinity (quantization.fx.test_quantize_fx.TestQuantizeFx) ... skip: multi-GPU not supported (0.000s) 2023-01-11T21:56:30.5559755Z test_qat_skip_untraced (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.148s) 2023-01-11T21:56:30.5559981Z test_qconfig_dict_setup (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.158s) 2023-01-11T21:56:30.5560159Z test_qconfig_dict_with_fused_modules (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.116s) 2023-01-11T21:56:30.5560323Z test_qconfig_for_call_func (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.031s) 2023-01-11T21:56:30.5560492Z test_qconfig_for_call_method (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.057s) 2023-01-11T21:56:30.5560649Z test_qconfig_function (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.019s) 2023-01-11T21:56:30.5560821Z test_qconfig_mapping_from_dict (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.002s) 2023-01-11T21:56:30.5560988Z test_qconfig_mapping_repr (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.001s) 2023-01-11T21:56:30.5561148Z test_qconfig_mapping_set_global (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.001s) 2023-01-11T21:56:30.5561352Z test_qconfig_mapping_set_module_name (quantization.fx.test_quantize_fx.TestQuantizeFx) ... 
ok (0.002s) 2023-01-11T21:56:30.5561551Z test_qconfig_mapping_set_module_name_object_type_order (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.003s) 2023-01-11T21:56:30.5561736Z test_qconfig_mapping_set_module_name_regex (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.002s) 2023-01-11T21:56:30.5561914Z test_qconfig_mapping_set_object_type (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.002s) 2023-01-11T21:56:30.5562083Z test_qconfig_mapping_to_dict (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.002s) 2023-01-11T21:56:30.5562264Z test_qconfig_module_name_object_type_order (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.075s) 2023-01-11T21:56:30.5562433Z test_qconfig_module_name_regex (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.023s) 2023-01-11T21:56:30.5562597Z test_qconfig_module_type (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.025s) 2023-01-11T21:56:30.5562744Z test_qconfig_none (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.021s) 2023-01-11T21:56:30.5562908Z test_qconfig_precedence (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.019s) 2023-01-11T21:56:30.5563075Z test_qconfig_qat_module_type (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.032s) 2023-01-11T21:56:30.5563231Z test_qnnpack_backend_config (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5563412Z Test whether default QNNPACK QConfigs are compatible with the QNNPACK BackendConfig. ... ok (1.045s) 2023-01-11T21:56:30.5563572Z test_qparams_buffers (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.051s) 2023-01-11T21:56:30.5563715Z test_qparams_fqn (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5563841Z Test that the FQN of input_scale/zero_point is set ... ok (0.032s) 2023-01-11T21:56:30.5563990Z test_quant_output_always_observed (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5564129Z If the output is hardcoded to be quantized, ensure that ... ok (0.097s) 2023-01-11T21:56:30.5564304Z test_quantized_input_fp32_output (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.021s) 2023-01-11T21:56:30.5564483Z test_quantized_input_quantized_output (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.019s) 2023-01-11T21:56:30.5564637Z test_quantized_model_type (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5564787Z Test state_dict and deepcopy works properly in the quantized model ... ok (0.045s) 2023-01-11T21:56:30.5564931Z test_ref_conv_module (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5565059Z Make sure the numerics for models with ref conv module ... ok (0.141s) 2023-01-11T21:56:30.5565194Z test_ref_linear_module (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5565325Z Make sure the numerics for models with ref linear module ... ok (0.046s) 2023-01-11T21:56:30.5565519Z test_register_patterns (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.002s) 2023-01-11T21:56:30.5565677Z test_relu_lowering (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.020s) 2023-01-11T21:56:30.5565833Z test_remove_qconfig (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.016s) 2023-01-11T21:56:30.5566015Z test_repeat_nontensor_args_not_observed (quantization.fx.test_quantize_fx.TestQuantizeFx) ... 
ok (0.022s) 2023-01-11T21:56:30.5566180Z test_reroute_tuple_getitem_patterns (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5566344Z The following graph should redirect the output to `b`. After the transformation, ... ok (0.002s) 2023-01-11T21:56:30.5566528Z test_reshape_nontensor_args_not_observed (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.021s) 2023-01-11T21:56:30.5566669Z test_return_none (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.013s) 2023-01-11T21:56:30.5566858Z test_reuse_input_qconfig (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.246s) 2023-01-11T21:56:30.5567031Z test_save_observer_state_dict (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.588s) 2023-01-11T21:56:30.5567184Z test_sequential (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.331s) 2023-01-11T21:56:30.5567348Z test_shape_followed_by_quantized_op (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5567465Z Make sure that shape does not dequantize ... ok (0.028s) 2023-01-11T21:56:30.5567643Z test_size_nontensor_args_not_observed (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.018s) 2023-01-11T21:56:30.5567821Z test_stack_trace_preserved_linear (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.023s) 2023-01-11T21:56:30.5567989Z test_standalone_module_float_interface (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.118s) 2023-01-11T21:56:30.5568175Z test_standalone_module_quantized_interface (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.115s) 2023-01-11T21:56:30.5568319Z test_state_dict (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5568444Z Make sure packed params appear in state_dict ... ok (0.085s) 2023-01-11T21:56:30.5568584Z test_static_lstm (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5568754Z Test statically quantized custom module LSTM followed by ops that consume individual ... ok (0.266s) 2023-01-11T21:56:30.5568911Z test_static_lstm_consume_tuple (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5569078Z Test statically quantized custom module LSTM followed by a module that consumes the ... ok (0.241s) 2023-01-11T21:56:30.5569248Z test_static_lstm_with_custom_fixed_qparams (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5569398Z Test statically quantized LSTM with custom fixed qparams assigned to each of the ... ok (0.071s) 2023-01-11T21:56:30.5569550Z test_sub_scalar (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.020s) 2023-01-11T21:56:30.5569724Z test_symmetric_qnnpack_qconfig_mapping (quantization.fx.test_quantize_fx.TestQuantizeFx) 2023-01-11T21:56:30.5569917Z Test whether `torch.ao.quantization.qconfig_mapping._get_symmetric_qnnpack_qconfig_mapping` ... ok (0.501s) 2023-01-11T21:56:30.5570109Z test_torch_transpose_nontensor_args_not_observed (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.024s) 2023-01-11T21:56:30.5570298Z test_torch_unsqueeze_nontensor_args_not_observed (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.022s) 2023-01-11T21:56:30.5570471Z test_trace_quantize_per_tensor (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.017s) 2023-01-11T21:56:30.5570652Z test_transpose_nontensor_args_not_observed (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.022s) 2023-01-11T21:56:30.5570837Z test_unsqueeze__nontensor_args_not_observed (quantization.fx.test_quantize_fx.TestQuantizeFx) ... 
ok (0.022s) 2023-01-11T21:56:30.5571007Z test_unsqueeze_nontensor_args_not_observed (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.150s) 2023-01-11T21:56:30.5571216Z test_view_nontensor_args_not_observed (quantization.fx.test_quantize_fx.TestQuantizeFx) ... ok (0.024s) 2023-01-11T21:56:30.5571385Z test_model_dropout (quantization.fx.test_quantize_fx.TestQuantizeFxModels) ... ok (1.621s) 2023-01-11T21:56:30.5571617Z test_prepare_serialize_switch_device_convert (quantization.fx.test_quantize_fx.TestQuantizeFxModels) ... skip: gpu is not available. (0.001s) 2023-01-11T21:56:30.5571793Z test_qat_embedding_linear (quantization.fx.test_quantize_fx.TestQuantizeFxModels) ... ok (0.612s) 2023-01-11T21:56:30.5571975Z test_qat_embeddingbag_linear (quantization.fx.test_quantize_fx.TestQuantizeFxModels) ... ok (0.662s) 2023-01-11T21:56:30.5572152Z test_qat_functional_linear (quantization.fx.test_quantize_fx.TestQuantizeFxModels) ... ok (0.292s) 2023-01-11T21:56:30.5572595Z test_resnet18_ddp (quantization.fx.test_quantize_fx.TestQuantizeFxModels) ... skip: TODO: Test is always failing - https://github.com/pytorch/pytorch/issues/54979 (0.001s) 2023-01-11T21:56:30.5572765Z test_resnet_base (quantization.fx.test_quantize_fx.TestQuantizeFxModels) ... ok (1.152s) 2023-01-11T21:56:30.5572962Z test_static_gpu_convert_basic (quantization.fx.test_quantize_fx.TestQuantizeFxModels) ... skip: gpu is not available. (0.001s) 2023-01-11T21:56:30.5573179Z test_switch_device_prepare_convert (quantization.fx.test_quantize_fx.TestQuantizeFxModels) ... skip: gpu is not available. (0.001s) 2023-01-11T21:56:30.5573382Z test_torchvision (quantization.fx.test_quantize_fx.TestQuantizeFxModels) ... skip: skip for now since tbb failed (0.001s) 2023-01-11T21:56:30.5573536Z test_add (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (5.957s) 2023-01-11T21:56:30.5573697Z test_add_relu (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (6.143s) 2023-01-11T21:56:30.5573878Z test_add_relu_multiple_uses_of_relu (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.036s) 2023-01-11T21:56:30.5574045Z test_ave_pool_with_custom_cfg (quantization.fx.test_quantize_fx.TestQuantizeFxOps) 2023-01-11T21:56:30.5574180Z A test that checks correct patterns are produced for ... ok (0.016s) 2023-01-11T21:56:30.5574406Z test_bmm (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... skip: This is no longer needed right now, can enable later with new api (0.001s) 2023-01-11T21:56:30.5574552Z test_bmm_int_reference (quantization.fx.test_quantize_fx.TestQuantizeFxOps) 2023-01-11T21:56:30.5574756Z int8 is not supported for bmm so we won't produce reference ... ok (0.013s) 2023-01-11T21:56:30.5574910Z test_boolean_tensor (quantization.fx.test_quantize_fx.TestQuantizeFxOps) 2023-01-11T21:56:30.5575102Z Make sure we don't insert observer for boolean Tensors ... ok (0.016s) 2023-01-11T21:56:30.5575240Z test_cat (quantization.fx.test_quantize_fx.TestQuantizeFxOps) 2023-01-11T21:56:30.5575371Z quantization of the output of cat will depend on the ... ok (0.880s) 2023-01-11T21:56:30.5575530Z test_chunk (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.020s) 2023-01-11T21:56:30.5575683Z test_clamp (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (1.246s) 2023-01-11T21:56:30.5575831Z test_conv_module (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.451s) 2023-01-11T21:56:30.5575996Z test_conv_transpose_1d (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... 
ok (0.712s) 2023-01-11T21:56:30.5576162Z test_conv_transpose_2d (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.243s) 2023-01-11T21:56:30.5576322Z test_copy_node_fp32_input (quantization.fx.test_quantize_fx.TestQuantizeFxOps) 2023-01-11T21:56:30.5576466Z CopyNode works for both fp32 and int8 inputs, this is a test to make ... ok (0.017s) 2023-01-11T21:56:30.5576690Z test_div (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... skip: This is no longer needed right now, can enable later with new api (0.000s) 2023-01-11T21:56:30.5576843Z test_elu (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.776s) 2023-01-11T21:56:30.5577041Z test_embedding (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.085s) 2023-01-11T21:56:30.5577203Z test_embedding_bag (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.076s) 2023-01-11T21:56:30.5577361Z test_fixed_qparams_ops (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.344s) 2023-01-11T21:56:30.5577532Z test_fixed_qparams_ops_fp16 (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.035s) 2023-01-11T21:56:30.5577709Z test_fixed_qparams_ops_qint8 (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.044s) 2023-01-11T21:56:30.5577884Z test_fixed_qparams_ops_wrong_qconfig (quantization.fx.test_quantize_fx.TestQuantizeFxOps) 2023-01-11T21:56:30.5579328Z Test that wrong qconfigs for fixed qparams ops results in the ops not being quantized. ... /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/utils.py:813: UserWarning: QConfig must specify a FixedQParamsObserver or a FixedQParamsFakeQuantize for fixed qparams ops, ignoring QConfig(activation=functools.partial(, quant_min=0, quant_max=127){'factory_kwargs': .get_factory_kwargs_based_on_module_device at 0x7f613543d2d0>}, weight=functools.partial(, dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){'factory_kwargs': .get_factory_kwargs_based_on_module_device at 0x7f613543d2d0>}). 2023-01-11T21:56:30.5579566Z Please use torch.ao.quantization.get_default_qconfig_mapping or torch.ao.quantization.get_default_qat_qconfig_mapping. Example: 2023-01-11T21:56:30.5579695Z qconfig_mapping = get_default_qconfig_mapping("fbgemm") 2023-01-11T21:56:30.5579824Z model = prepare_fx(model, qconfig_mapping, example_inputs) 2023-01-11T21:56:30.5580011Z warnings.warn(("QConfig must specify a FixedQParamsObserver or a FixedQParamsFakeQuantize " 2023-01-11T21:56:30.5581290Z /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/utils.py:813: UserWarning: QConfig must specify a FixedQParamsObserver or a FixedQParamsFakeQuantize for fixed qparams ops, ignoring QConfig(activation=functools.partial(, quant_min=0, quant_max=127){'factory_kwargs': .get_factory_kwargs_based_on_module_device at 0x7f613543e7a0>}, weight=functools.partial(, dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){'factory_kwargs': .get_factory_kwargs_based_on_module_device at 0x7f613543e7a0>}). 2023-01-11T21:56:30.5581516Z Please use torch.ao.quantization.get_default_qconfig_mapping or torch.ao.quantization.get_default_qat_qconfig_mapping. 
Example: 2023-01-11T21:56:30.5581644Z qconfig_mapping = get_default_qconfig_mapping("fbgemm") 2023-01-11T21:56:30.5581779Z model = prepare_fx(model, qconfig_mapping, example_inputs) 2023-01-11T21:56:30.5581962Z warnings.warn(("QConfig must specify a FixedQParamsObserver or a FixedQParamsFakeQuantize " 2023-01-11T21:56:30.5583361Z /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/utils.py:813: UserWarning: QConfig must specify a FixedQParamsObserver or a FixedQParamsFakeQuantize for fixed qparams ops, ignoring QConfig(activation=functools.partial(, quant_min=0, quant_max=127){'factory_kwargs': .get_factory_kwargs_based_on_module_device at 0x7f613543dbd0>}, weight=functools.partial(, dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){'factory_kwargs': .get_factory_kwargs_based_on_module_device at 0x7f613543dbd0>}). 2023-01-11T21:56:30.5583589Z Please use torch.ao.quantization.get_default_qconfig_mapping or torch.ao.quantization.get_default_qat_qconfig_mapping. Example: 2023-01-11T21:56:30.5583754Z qconfig_mapping = get_default_qconfig_mapping("fbgemm") 2023-01-11T21:56:30.5583881Z model = prepare_fx(model, qconfig_mapping, example_inputs) 2023-01-11T21:56:30.5584063Z warnings.warn(("QConfig must specify a FixedQParamsObserver or a FixedQParamsFakeQuantize " 2023-01-11T21:56:30.5585380Z /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/utils.py:813: UserWarning: QConfig must specify a FixedQParamsObserver or a FixedQParamsFakeQuantize for fixed qparams ops, ignoring QConfig(activation=functools.partial(, quant_min=0, quant_max=127){'factory_kwargs': .get_factory_kwargs_based_on_module_device at 0x7f613543cc10>}, weight=functools.partial(, dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){'factory_kwargs': .get_factory_kwargs_based_on_module_device at 0x7f613543cc10>}). 2023-01-11T21:56:30.5585610Z Please use torch.ao.quantization.get_default_qconfig_mapping or torch.ao.quantization.get_default_qat_qconfig_mapping. Example: 2023-01-11T21:56:30.5585734Z qconfig_mapping = get_default_qconfig_mapping("fbgemm") 2023-01-11T21:56:30.5585862Z model = prepare_fx(model, qconfig_mapping, example_inputs) 2023-01-11T21:56:30.5586047Z warnings.warn(("QConfig must specify a FixedQParamsObserver or a FixedQParamsFakeQuantize " 2023-01-11T21:56:30.5586113Z ok (0.027s) 2023-01-11T21:56:30.5586284Z test_float_functional (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.086s) 2023-01-11T21:56:30.5586452Z test_functional_conv (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (10.054s) 2023-01-11T21:56:30.5586610Z test_functional_linear (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (12.238s) 2023-01-11T21:56:30.5586824Z test_gelu_normal (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... skip: TODO: reenable with backend_config api (0.000s) 2023-01-11T21:56:30.5587062Z test_gelu_reference (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... skip: This is no longer needed right now, can enable later with new api (0.001s) 2023-01-11T21:56:30.5587219Z test_general_shape_ops (quantization.fx.test_quantize_fx.TestQuantizeFxOps) 2023-01-11T21:56:30.5587347Z A test that checks dequantize will be swapped for ... ok (0.203s) 2023-01-11T21:56:30.5587501Z test_general_value_ops (quantization.fx.test_quantize_fx.TestQuantizeFxOps) 2023-01-11T21:56:30.5587634Z A test that checks correct patterns are produced for ... 
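The UserWarning above prints its own suggested fix. A minimal, self-contained sketch of that recipe, assuming a hypothetical toy module (the model, shapes, and calibration data below are illustrative, not taken from the test):

import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

class Toy(nn.Module):  # hypothetical module, not the one under test
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)
    def forward(self, x):
        return torch.sigmoid(self.linear(x))  # sigmoid is a fixed-qparams op

example_inputs = (torch.randn(1, 4),)
qconfig_mapping = get_default_qconfig_mapping("fbgemm")      # as the warning recommends
prepared = prepare_fx(Toy().eval(), qconfig_mapping, example_inputs)
prepared(*example_inputs)                                    # calibrate
quantized = convert_fx(prepared)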
ok (0.062s) 2023-01-11T21:56:30.5587776Z test_getitem (quantization.fx.test_quantize_fx.TestQuantizeFxOps) 2023-01-11T21:56:30.5587920Z Make sure we only insert observer for getitem if the following node is matched ... ok (0.047s) 2023-01-11T21:56:30.5588084Z test_hardswish (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.732s) 2023-01-11T21:56:30.5588247Z test_instance_norm (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (2.418s) 2023-01-11T21:56:30.5588413Z test_int8_input_no_unnecessary_fq (quantization.fx.test_quantize_fx.TestQuantizeFxOps) 2023-01-11T21:56:30.5588546Z If the inputs to the graph are quantized and the only node ... ok (0.014s) 2023-01-11T21:56:30.5588706Z test_layer_norm (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.539s) 2023-01-11T21:56:30.5588862Z test_leaky_relu (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.655s) 2023-01-11T21:56:30.5589035Z test_linear_dynamic_fp16 (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.406s) 2023-01-11T21:56:30.5589196Z test_linear_module (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (3.713s) 2023-01-11T21:56:30.5589350Z test_linear_static_fp16 (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.554s) 2023-01-11T21:56:30.5589621Z test_mish_reference (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... skip: This is no longer needed right now, can enable later with new api (0.001s) 2023-01-11T21:56:30.5589773Z test_mul (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (6.016s) 2023-01-11T21:56:30.5589929Z test_mul_relu (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (6.177s) 2023-01-11T21:56:30.5590102Z test_multiple_qconfigs_for_single_value (quantization.fx.test_quantize_fx.TestQuantizeFxOps) 2023-01-11T21:56:30.5590219Z Test multiple qconfigs for a single value ... ok (0.021s) 2023-01-11T21:56:30.5590867Z test_norm_weight_bias (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:1346: UserWarning: Node mods1_packed_weight_0 target mods1_packed_weight_0 mods1_packed_weight_0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target 2023-01-11T21:56:30.5591112Z warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does ' 2023-01-11T21:56:30.5591184Z ok (1.017s) 2023-01-11T21:56:30.5591327Z test_prelu (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.374s) 2023-01-11T21:56:30.5591486Z test_qbatch_norm (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.371s) 2023-01-11T21:56:30.5591651Z test_qbatch_norm_relu (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (1.698s) 2023-01-11T21:56:30.5591809Z test_qmatmul (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (1.306s) 2023-01-11T21:56:30.5591976Z test_quantized_add_qat (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.025s) 2023-01-11T21:56:30.5592132Z test_quantized_conv_relu (quantization.fx.test_quantize_fx.TestQuantizeFxOps) 2023-01-11T21:56:30.5592250Z tests for conv1d_relu/conv2d_relu/conv3d_relu ... ok (1.826s) 2023-01-11T21:56:30.5592416Z test_quantized_mul_qat (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.025s) 2023-01-11T21:56:30.5592578Z test_ref_pattern_multi_use (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (0.024s) 2023-01-11T21:56:30.5592737Z test_reshape_fp16 (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... 
ok (0.027s) 2023-01-11T21:56:30.5592887Z test_rnn (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (5.093s) 2023-01-11T21:56:30.5593041Z test_rnn_cell (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... ok (3.086s) 2023-01-11T21:56:30.5593276Z test_silu_reference (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... skip: This is no longer needed right now, can enable later with new api (0.001s) 2023-01-11T21:56:30.5593488Z test_softmax_normal (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... skip: TODO: reenable with backend_config api (0.000s) 2023-01-11T21:56:30.5593728Z test_softmax_reference (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... skip: This is no longer needed right now, can enable later with new api (0.001s) 2023-01-11T21:56:30.5593955Z test_sub (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... skip: This is no longer needed right now, can enable later with new api (0.000s) 2023-01-11T21:56:30.5594175Z test_sum (quantization.fx.test_quantize_fx.TestQuantizeFxOps) ... skip: This is no longer needed right now, can enable later with new api (0.000s) 2023-01-11T21:56:30.5594329Z test_conv (quantization.jit.test_quantize_jit.TestQuantizeJit) ... ok (2.220s) 2023-01-11T21:56:30.5594473Z test_conv_bn (quantization.jit.test_quantize_jit.TestQuantizeJit) ... ok (0.389s) 2023-01-11T21:56:30.5594825Z test_conv_transpose (quantization.jit.test_quantize_jit.TestQuantizeJit) ... [W insert_observers.cpp:1580] Warning: prim::Loop is not yet supported in quantization, please make sure nothing needs to be quantized in the loop (function operator()) 2023-01-11T21:56:30.5594891Z ok (0.593s) 2023-01-11T21:56:30.5595159Z test_linear_dynamic_fp16 (quantization.jit.test_quantize_jit.TestQuantizeJit) ... [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5595362Z [W QuantUtils.h:215] Warning: FOUND weight out of range (function HandleWeightsSaturation) 2023-01-11T21:56:30.5595428Z ok (0.035s) 2023-01-11T21:56:30.5596554Z test_nested (quantization.jit.test_quantize_jit.TestQuantizeJit) ... /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:818: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at /var/lib/jenkins/workspace/build/aten/src/ATen/core/TensorBody.h:485.) 2023-01-11T21:56:30.5596640Z if param.grad is not None: 2023-01-11T21:56:30.5596704Z ok (1.187s) 2023-01-11T21:56:30.5596967Z test_observer_with_ignored_function (quantization.jit.test_quantize_jit.TestQuantizeJit) 2023-01-11T21:56:30.5597115Z Test observers with ignored function and make sure it works in ... ok (1.368s) 2023-01-11T21:56:30.5597714Z test_single_linear (quantization.jit.test_quantize_jit.TestQuantizeJit) ... /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/observer.py:214: UserWarning: Please use quant_min and quant_max to specify the range for observers. reduce_range will be deprecated in a future release of PyTorch. 
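The autograd warning emitted during test_nested explains its own remedy. A minimal illustration of retain_grad() on a non-leaf tensor (the values are arbitrary and unrelated to the test):

import torch
x = torch.randn(3, requires_grad=True)   # leaf tensor
y = x * 2                                # non-leaf tensor
y.retain_grad()                          # ask autograd to keep y.grad
y.sum().backward()
print(x.grad)   # populated for the leaf
print(y.grad)   # populated only because retain_grad() was called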
2023-01-11T21:56:30.5597776Z warnings.warn( 2023-01-11T21:56:30.5597841Z ok (3.961s) 2023-01-11T21:56:30.5598014Z test_single_linear_dynamic (quantization.jit.test_quantize_jit.TestQuantizeJit) ... ok (0.132s) 2023-01-11T21:56:30.5598173Z test_skip_quant (quantization.jit.test_quantize_jit.TestQuantizeJit) ... ok (5.457s) 2023-01-11T21:56:30.5598338Z test_cat_linear (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (0.081s) 2023-01-11T21:56:30.5598504Z test_clamp (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (3.960s) 2023-01-11T21:56:30.5598693Z test_conv_with_benchmark_flag (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (0.295s) 2023-01-11T21:56:30.5598855Z test_dequantize_tuple (quantization.jit.test_quantize_jit.TestQuantizeJitOps) 2023-01-11T21:56:30.5598971Z Make sure dequantize can support Tuple of tensor ... ok (1.310s) 2023-01-11T21:56:30.5599129Z test_elu (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (15.467s) 2023-01-11T21:56:30.5599293Z test_general_shape_ops (quantization.jit.test_quantize_jit.TestQuantizeJitOps) 2023-01-11T21:56:30.5599417Z A test that checks dequantize will be swapped for ... ok (0.116s) 2023-01-11T21:56:30.5599576Z test_general_value_ops (quantization.jit.test_quantize_jit.TestQuantizeJitOps) 2023-01-11T21:56:30.5599708Z A test that checks correct patterns are produced for ... ok (0.319s) 2023-01-11T21:56:30.5599877Z test_group_norm (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (11.925s) 2023-01-11T21:56:30.5600045Z test_hardswish (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (11.774s) 2023-01-11T21:56:30.5600337Z test_instance_norm (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5600553Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5600768Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5600981Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5601240Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5601448Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5601652Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5601857Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5602061Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5602298Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5602502Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. 
Returning default scale and zero point (function ) 2023-01-11T21:56:30.5602708Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5602913Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5603105Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5603307Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5603513Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5603716Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5603919Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5604120Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5604323Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5604528Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5604732Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5604933Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5605139Z [W observer.py:1204] Warning: must run observer before calling calculate_qparams. Returning default scale and zero point (function ) 2023-01-11T21:56:30.5605194Z ok (11.437s) 2023-01-11T21:56:30.5605362Z test_layer_norm (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (12.064s) 2023-01-11T21:56:30.5605564Z test_linear (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (14.986s) 2023-01-11T21:56:30.5605730Z test_qbatch_norm (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (8.536s) 2023-01-11T21:56:30.5605928Z test_qbatch_norm_relu_BNFuncInplaceRelu (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (5.638s) 2023-01-11T21:56:30.5606115Z test_qbatch_norm_relu_BNFuncRelu (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (5.681s) 2023-01-11T21:56:30.5606302Z test_qbatch_norm_relu_BNRelu (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (11.356s) 2023-01-11T21:56:30.5606472Z test_quantized_add (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (4.294s) 2023-01-11T21:56:30.5606642Z test_quantized_add_alpha (quantization.jit.test_quantize_jit.TestQuantizeJitOps) 2023-01-11T21:56:30.5606767Z Test quant fusion for multiple aten::add using same ... ok (2.344s) 2023-01-11T21:56:30.5606983Z test_quantized_add_relu (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... 
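The observer warnings above (reduce_range deprecation, and calculate_qparams called before any data has been seen) both point at observer usage. A small sketch of the recommended pattern, with an explicit quant range and a calibration pass before querying qparams (the tensor is illustrative):

import torch
from torch.ao.quantization.observer import MinMaxObserver

obs = MinMaxObserver(quant_min=0, quant_max=127)  # explicit range instead of reduce_range
obs(torch.randn(16))                              # run the observer on data first
scale, zero_point = obs.calculate_qparams()       # now based on observed min/max, not defaults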
ok (8.516s) 2023-01-11T21:56:30.5607158Z test_quantized_add_relu_alpha (quantization.jit.test_quantize_jit.TestQuantizeJitOps) 2023-01-11T21:56:30.5607292Z Test quant fusion for multiple aten::add using same ... ok (17.343s) 2023-01-11T21:56:30.5607471Z test_quantized_add_scalar (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (2.309s) 2023-01-11T21:56:30.5607656Z test_quantized_add_scalar_relu (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (5.665s) 2023-01-11T21:56:30.5607813Z test_quantized_cat (quantization.jit.test_quantize_jit.TestQuantizeJitOps) 2023-01-11T21:56:30.5607946Z quantization of the output of cat will be depend on the ... ok (1.949s) 2023-01-11T21:56:30.5608102Z test_quantized_conv (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (3.389s) 2023-01-11T21:56:30.5608265Z test_quantized_conv_relu (quantization.jit.test_quantize_jit.TestQuantizeJitOps) 2023-01-11T21:56:30.5608387Z tests for conv1d_relu/conv2d_relu/conv3d_relu ... ok (13.610s) 2023-01-11T21:56:30.5608557Z test_quantized_mul (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (1.767s) 2023-01-11T21:56:30.5608731Z test_quantized_mul_relu (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (8.500s) 2023-01-11T21:56:30.5608908Z test_quantized_mul_scalar (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (1.649s) 2023-01-11T21:56:30.5609093Z test_quantized_mul_scalar_relu (quantization.jit.test_quantize_jit.TestQuantizeJitOps) ... ok (5.681s) 2023-01-11T21:56:30.5609266Z test_conv_trace (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.049s) 2023-01-11T21:56:30.5609451Z test_convtranspose_trace (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.049s) 2023-01-11T21:56:30.5609622Z test_dedup_module_uses (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.006s) 2023-01-11T21:56:30.5609803Z test_finalize_debug (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.074s) 2023-01-11T21:56:30.5609989Z test_finalize_for_linear (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.072s) 2023-01-11T21:56:30.5610173Z test_foldbn_complex_cases (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.782s) 2023-01-11T21:56:30.5610353Z test_foldbn_in_submodule (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.063s) 2023-01-11T21:56:30.5610520Z test_foldbn_no_fusion (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) 2023-01-11T21:56:30.5610736Z Test that we don't fuse the cases when module type does not match ... ok (0.007s) 2023-01-11T21:56:30.5610924Z test_foldbn_shared_classtype (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.512s) 2023-01-11T21:56:30.5611087Z test_foldbn_trivial (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.051s) 2023-01-11T21:56:30.5611273Z test_foldbn_trivial_nobias (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.051s) 2023-01-11T21:56:30.5611483Z test_fuse_linear (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.036s) 2023-01-11T21:56:30.5611662Z test_inplace_option (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.164s) 2023-01-11T21:56:30.5611842Z test_insert_observers (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.028s) 2023-01-11T21:56:30.5612037Z test_insert_observers_child_qconfig (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... 
ok (0.034s) 2023-01-11T21:56:30.5612223Z test_insert_observers_for_general_ops (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) 2023-01-11T21:56:30.5612420Z Make sure we skip observers for ops that doesn't require ... ok (0.028s) 2023-01-11T21:56:30.5612606Z test_insert_observers_for_if (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.098s) 2023-01-11T21:56:30.5612830Z test_insert_observers_for_if_consistent_observation (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) 2023-01-11T21:56:30.5612952Z check quantization for if works as long as ... ok (0.252s) 2023-01-11T21:56:30.5613143Z test_insert_observers_for_nested_if (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.097s) 2023-01-11T21:56:30.5613341Z test_insert_observers_for_reused_weight (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.023s) 2023-01-11T21:56:30.5613529Z test_insert_observers_interface (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.069s) 2023-01-11T21:56:30.5613735Z test_insert_observers_interface_unshare_type (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.257s) 2023-01-11T21:56:30.5613928Z test_insert_observers_propagate_observed (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) 2023-01-11T21:56:30.5614073Z Make sure we propagate observed property through general ops ... ok (0.033s) 2023-01-11T21:56:30.5614293Z test_insert_observers_propagate_observed_for_function (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.041s) 2023-01-11T21:56:30.5614489Z test_insert_observers_propagate_observed_in_submodule (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) 2023-01-11T21:56:30.5614638Z Make sure we propagate observed property through general ops ... ok (0.038s) 2023-01-11T21:56:30.5614834Z test_insert_observers_shared_class_type (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.032s) 2023-01-11T21:56:30.5615024Z test_insert_observers_skip_values (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.115s) 2023-01-11T21:56:30.5615215Z test_insert_observers_weight_dtype (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.028s) 2023-01-11T21:56:30.5615400Z test_insert_quant_dequant (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.148s) 2023-01-11T21:56:30.5615604Z test_insert_quant_dequant_shared_class_type (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.167s) 2023-01-11T21:56:30.5615792Z test_interface_with_fork (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.080s) 2023-01-11T21:56:30.5615965Z test_module_list (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.088s) 2023-01-11T21:56:30.5616123Z test_quantize_fork_wait (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) 2023-01-11T21:56:30.5616378Z Tests the case where fork and wait calls are in different subgraphs ... [W utils.py:310] Warning: must run observer before calling calculate_qparams. Returning default values. (function ) 2023-01-11T21:56:30.5616446Z ok (0.073s) 2023-01-11T21:56:30.5616641Z test_replicate_dequant_same_value (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.076s) 2023-01-11T21:56:30.5616826Z test_replicate_dequantize (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.007s) 2023-01-11T21:56:30.5617020Z test_replicate_dequantize_in_block (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... 
ok (0.008s) 2023-01-11T21:56:30.5617241Z test_replicate_quantize_for_if (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) 2023-01-11T21:56:30.5617478Z We want to move quantize nodes for output of prim::If ... [W utils.py:310] Warning: must run observer before calling calculate_qparams. Returning default values. (function ) 2023-01-11T21:56:30.5617661Z [W utils.py:310] Warning: must run observer before calling calculate_qparams. Returning default values. (function ) 2023-01-11T21:56:30.5617829Z [W utils.py:310] Warning: must run observer before calling calculate_qparams. Returning default values. (function ) 2023-01-11T21:56:30.5618006Z [W utils.py:310] Warning: must run observer before calling calculate_qparams. Returning default values. (function ) 2023-01-11T21:56:30.5618071Z ok (0.085s) 2023-01-11T21:56:30.5618266Z test_skip_dequant_constant_prop (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.083s) 2023-01-11T21:56:30.5618493Z test_swap_functional_linear (quantization.jit.test_quantize_jit.TestQuantizeJitPasses) ... ok (0.007s) 2023-01-11T21:56:30.5619324Z test_resnet18 (quantization.fx.test_quantize_pt2e.TestQuantizePT2EModels) ... /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/_pt2e/utils.py:101: UserWarning: Attempted to insert a get_attr Node with no underlying reference in the owning GraphModule! Call GraphModule.add_submodule to add the necessary submodule, GraphModule.add_parameter to add the necessary Parameter, or nn.Module.register_buffer to add the necessary buffer 2023-01-11T21:56:30.5619436Z get_bias_node = m.graph.get_attr(bias_attr_name) 2023-01-11T21:56:30.5619502Z ok (3.834s) 2023-01-11T21:56:30.5619734Z test_benchmark (quantization.core.test_quantized_op.TestQuantizedConv) ... skip: used for local benchmarking, comment when we want to run it (0.002s) 2023-01-11T21:56:30.5619904Z test_conv_reorder_issue_onednn (quantization.core.test_quantized_op.TestQuantizedConv) 2023-01-11T21:56:30.5620041Z Ensure reorder failure issue in conv is fixed for onednn backend. ... ok (0.008s) 2023-01-11T21:56:30.5620205Z test_qconv1d (quantization.core.test_quantized_op.TestQuantizedConv) ... ok (1.777s) 2023-01-11T21:56:30.5620401Z test_qconv1d_cudnn (quantization.core.test_quantized_op.TestQuantizedConv) ... skip: cudnn is not enabled. (0.005s) 2023-01-11T21:56:30.5620572Z test_qconv1d_unpack (quantization.core.test_quantized_op.TestQuantizedConv) ... ok (0.937s) 2023-01-11T21:56:30.5620733Z test_qconv2d (quantization.core.test_quantized_op.TestQuantizedConv) ... ok (5.493s) 2023-01-11T21:56:30.5620930Z test_qconv2d_cudnn (quantization.core.test_quantized_op.TestQuantizedConv) ... skip: cudnn is not enabled. (0.005s) 2023-01-11T21:56:30.5621101Z test_qconv2d_unpack (quantization.core.test_quantized_op.TestQuantizedConv) ... ok (1.035s) 2023-01-11T21:56:30.5621263Z test_qconv3d (quantization.core.test_quantized_op.TestQuantizedConv) ... ok (1.210s) 2023-01-11T21:56:30.5621434Z test_qconv3d_unpack (quantization.core.test_quantized_op.TestQuantizedConv) ... ok (0.648s) 2023-01-11T21:56:30.5621602Z test_qconv_transpose1d (quantization.core.test_quantized_op.TestQuantizedConv) ... ok (1.516s) 2023-01-11T21:56:30.5621876Z test_qconv_transpose2d (quantization.core.test_quantized_op.TestQuantizedConv) ... skip: this is broken without changes to any relevant code, we need to remove hypothesis testing in CI (0.004s) 2023-01-11T21:56:30.5622149Z test_qconv_transpose3d (quantization.core.test_quantized_op.TestQuantizedConv) ... 
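The _pt2e/utils.py warning in test_resnet18 says a get_attr node was created before the attribute existed on the owning GraphModule, and lists the fixes (add_submodule, add_parameter, register_buffer). A rough sketch, under the assumption that registering the attribute first is what avoids the warning (module and names are hypothetical):

import torch
import torch.fx as fx

class Toy(torch.nn.Module):
    def forward(self, x):
        return x + 1

gm = fx.symbolic_trace(Toy())
gm.register_buffer("bias_const", torch.zeros(1))     # create the attribute first...
with gm.graph.inserting_before(next(iter(gm.graph.nodes))):
    bias_node = gm.graph.get_attr("bias_const")      # ...so get_attr has a real target
gm.recompile()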
skip: this is broken without changes to any relevant code, we need to remove hypothesis testing in CI (0.003s) 2023-01-11T21:56:30.5622486Z test_embedding (quantization.core.test_quantized_op.TestQuantizedEmbeddingOps) ... ok (0.258s) 2023-01-11T21:56:30.5622676Z test_embedding_2d_indices (quantization.core.test_quantized_op.TestQuantizedEmbeddingOps) 2023-01-11T21:56:30.5622820Z Tests the case where 2D indices are passed into the operator ... ok (0.003s) 2023-01-11T21:56:30.5623017Z test_embedding_bag_2bit (quantization.core.test_quantized_op.TestQuantizedEmbeddingOps) ... ok (0.117s) 2023-01-11T21:56:30.5623301Z test_embedding_bag_2bit_unpack (quantization.core.test_quantized_op.TestQuantizedEmbeddingOps) ... skip: Test needs Caffe2 (0.000s) 2023-01-11T21:56:30.5623490Z test_embedding_bag_2d_indices (quantization.core.test_quantized_op.TestQuantizedEmbeddingOps) 2023-01-11T21:56:30.5623618Z Tests the case where 2D indices are passed into the operator ... ok (0.004s) 2023-01-11T21:56:30.5623810Z test_embedding_bag_4bit (quantization.core.test_quantized_op.TestQuantizedEmbeddingOps) ... ok (0.186s) 2023-01-11T21:56:30.5624033Z test_embedding_bag_4bit_unpack (quantization.core.test_quantized_op.TestQuantizedEmbeddingOps) ... skip: Test needs Caffe2 (0.000s) 2023-01-11T21:56:30.5624225Z test_embedding_bag_byte (quantization.core.test_quantized_op.TestQuantizedEmbeddingOps) ... ok (0.188s) 2023-01-11T21:56:30.5624485Z test_embedding_bag_byte_unpack (quantization.core.test_quantized_op.TestQuantizedEmbeddingOps) ... skip: Test needs Caffe2 (0.000s) 2023-01-11T21:56:30.5624697Z test_conv1d_api (quantization.core.test_quantized_functional.TestQuantizedFunctionalOps) ... ok (0.304s) 2023-01-11T21:56:30.5624906Z test_conv2d_api (quantization.core.test_quantized_functional.TestQuantizedFunctionalOps) ... ok (0.254s) 2023-01-11T21:56:30.5625112Z test_conv3d_api (quantization.core.test_quantized_functional.TestQuantizedFunctionalOps) ... ok (0.200s) 2023-01-11T21:56:30.5625856Z test_grid_sample (quantization.core.test_quantized_functional.TestQuantizedFunctionalOps) ... /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:4235: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details. 2023-01-11T21:56:30.5625932Z warnings.warn( 2023-01-11T21:56:30.5625986Z ok (0.056s) 2023-01-11T21:56:30.5626194Z test_relu_api (quantization.core.test_quantized_functional.TestQuantizedFunctionalOps) ... ok (0.001s) 2023-01-11T21:56:30.5626364Z test_qlinear (quantization.core.test_quantized_op.TestQuantizedLinear) ... ok (1.723s) 2023-01-11T21:56:30.5626569Z test_qlinear_cudnn (quantization.core.test_quantized_op.TestQuantizedLinear) ... skip: cudnn is not enabled. (0.006s) 2023-01-11T21:56:30.5626748Z test_qlinear_leaky_relu (quantization.core.test_quantized_op.TestQuantizedLinear) ... ok (1.030s) 2023-01-11T21:56:30.5626920Z test_qlinear_relu (quantization.core.test_quantized_op.TestQuantizedLinear) ... ok (1.710s) 2023-01-11T21:56:30.5627091Z test_qlinear_tanh (quantization.core.test_quantized_op.TestQuantizedLinear) ... ok (0.540s) 2023-01-11T21:56:30.5627265Z test_qlinear_unpack (quantization.core.test_quantized_op.TestQuantizedLinear) ... ok (0.406s) 2023-01-11T21:56:30.5627470Z test_qlinear_with_input_q_dq_qweight_dq_output_fp32 (quantization.core.test_quantized_op.TestQuantizedLinear) ... 
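The grid_sample warning in test_grid_sample asks callers to pass align_corners explicitly. A tiny example of doing so (identity grid, arbitrary shapes):

import torch
import torch.nn.functional as F

inp = torch.rand(1, 1, 4, 4)
theta = torch.eye(2, 3).unsqueeze(0)                                  # identity affine transform
grid = F.affine_grid(theta, size=(1, 1, 4, 4), align_corners=False)   # be explicit to avoid the warning
out = F.grid_sample(inp, grid, align_corners=False)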
ok (0.281s) 2023-01-11T21:56:30.5627632Z test_adaptive_avg_pool (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.623s) 2023-01-11T21:56:30.5627817Z test_adaptive_avg_pool2d_nhwc (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.074s) 2023-01-11T21:56:30.5628001Z test_adaptive_avg_pool3d_ndhwc (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.070s) 2023-01-11T21:56:30.5628169Z test_add_scalar_relu (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.195s) 2023-01-11T21:56:30.5628335Z test_advanced_indexing (quantization.core.test_quantized_op.TestQuantizedOps) 2023-01-11T21:56:30.5628476Z Verifies that the x[:, [0], :, :] syntax works for quantized tensors. ... ok (0.006s) 2023-01-11T21:56:30.5628628Z test_avg_pool2d (quantization.core.test_quantized_op.TestQuantizedOps) 2023-01-11T21:56:30.5628809Z Note: we currently cannot test the divisor_override, because quantized op will clamp the result ... ok (1.152s) 2023-01-11T21:56:30.5628965Z test_avg_pool2d_nhwc (quantization.core.test_quantized_op.TestQuantizedOps) 2023-01-11T21:56:30.5629177Z Note: 1) we currently cannot test the divisor_override, because quantized op will clamp the result ... ok (1.124s) 2023-01-11T21:56:30.5629327Z test_avg_pool3d (quantization.core.test_quantized_op.TestQuantizedOps) 2023-01-11T21:56:30.5629504Z Note: we currently cannot test the divisor_override, because quantized op will clamp the result ... ok (4.660s) 2023-01-11T21:56:30.5629659Z test_avg_pool3d_nhwc (quantization.core.test_quantized_op.TestQuantizedOps) 2023-01-11T21:56:30.5629845Z Note: 1) we currently cannot test the divisor_override, because quantized op will clamp the result ... ok (7.382s) 2023-01-11T21:56:30.5630011Z test_batch_norm (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.580s) 2023-01-11T21:56:30.5630181Z test_batch_norm_relu (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.641s) 2023-01-11T21:56:30.5630339Z test_cat (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.358s) 2023-01-11T21:56:30.5630524Z test_cat_nhwc (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.265s) 2023-01-11T21:56:30.5630697Z test_channel_shuffle (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (1.661s) 2023-01-11T21:56:30.5631307Z test_custom_module_lstm (quantization.core.test_quantized_op.TestQuantizedOps) ... /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/observer.py:214: UserWarning: Please use quant_min and quant_max to specify the range for observers. reduce_range will be deprecated in a future release of PyTorch. 2023-01-11T21:56:30.5631385Z warnings.warn( 2023-01-11T21:56:30.5631450Z ok (38.856s) 2023-01-11T21:56:30.5632034Z test_custom_module_multi_head_attention (quantization.core.test_quantized_op.TestQuantizedOps) ... /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/observer.py:1204: UserWarning: must run observer before calling calculate_qparams. Returning default scale and zero point 2023-01-11T21:56:30.5632114Z warnings.warn( 2023-01-11T21:56:30.5632178Z ok (11.381s) 2023-01-11T21:56:30.5632681Z test_empty_batch (quantization.core.test_quantized_op.TestQuantizedOps) ... /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:4022: UserWarning: nn.functional.upsample_nearest is deprecated. Use nn.functional.interpolate instead. 2023-01-11T21:56:30.5632879Z warnings.warn("nn.functional.upsample_nearest is deprecated. 
Use nn.functional.interpolate instead.") 2023-01-11T21:56:30.5632932Z ok (0.025s) 2023-01-11T21:56:30.5633092Z test_equal (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.182s) 2023-01-11T21:56:30.5633257Z test_group_norm (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (6.246s) 2023-01-11T21:56:30.5633421Z test_hardswish (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.398s) 2023-01-11T21:56:30.5633580Z test_hardtanh (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.485s) 2023-01-11T21:56:30.5633753Z test_instance_norm (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (1.499s) 2023-01-11T21:56:30.5633907Z test_interpolate (quantization.core.test_quantized_op.TestQuantizedOps) 2023-01-11T21:56:30.5634048Z This test cover upsample_nearest2d and upsample_bilinear2d ... ok (3.004s) 2023-01-11T21:56:30.5634194Z test_interpolate3d (quantization.core.test_quantized_op.TestQuantizedOps) 2023-01-11T21:56:30.5634304Z This test cover upsample_nearest3d ... ok (17.508s) 2023-01-11T21:56:30.5634467Z test_leaky_relu (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.019s) 2023-01-11T21:56:30.5634649Z test_leaky_relu_observed_output (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.542s) 2023-01-11T21:56:30.5634821Z test_linear_bias_unpack (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.003s) 2023-01-11T21:56:30.5634982Z test_max_pool1d (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.233s) 2023-01-11T21:56:30.5635145Z test_max_pool2d (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.545s) 2023-01-11T21:56:30.5635380Z test_max_pool2d_cudnn (quantization.core.test_quantized_op.TestQuantizedOps) ... skip: cudnn is not enabled. (0.005s) 2023-01-11T21:56:30.5635536Z test_max_pool2d_nhwc (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.510s) 2023-01-11T21:56:30.5635698Z test_mean (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.345s) 2023-01-11T21:56:30.5635863Z test_mul_scalar_relu (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.384s) 2023-01-11T21:56:30.5636451Z test_qadd_broadcast (quantization.core.test_quantized_op.TestQuantizedOps) ... [W Resize.cpp:33] Warning: An output with one or more elements was resized since it had shape [1, 1, 4, 4], which does not match the required output shape [2, 1, 4, 4]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (function _resize_output_check) 2023-01-11T21:56:30.5636522Z ok (0.002s) 2023-01-11T21:56:30.5636721Z test_qadd_relu_cudnn (quantization.core.test_quantized_op.TestQuantizedOps) ... skip: cudnn is not enabled. (0.001s) 2023-01-11T21:56:30.5636981Z test_qadd_relu_cudnn_nhwc (quantization.core.test_quantized_op.TestQuantizedOps) ... skip: cudnn is not enabled. (0.001s) 2023-01-11T21:56:30.5637177Z test_qadd_relu_different_qparams (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.005s) 2023-01-11T21:56:30.5637357Z test_qadd_relu_same_qparams (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.004s) 2023-01-11T21:56:30.5637515Z test_qcelu (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.129s) 2023-01-11T21:56:30.5637675Z test_qclamp (quantization.core.test_quantized_op.TestQuantizedOps) ... 
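The deprecation warning from test_empty_batch names its replacement directly. A one-line sketch of the swap (shapes are arbitrary):

import torch
import torch.nn.functional as F

x = torch.rand(1, 3, 8, 8)
# deprecated: F.upsample_nearest(x, scale_factor=2)
y = F.interpolate(x, scale_factor=2, mode="nearest")   # recommended replacement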
ok (0.633s) 2023-01-11T21:56:30.5637820Z test_qelu (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.159s) 2023-01-11T21:56:30.5637983Z test_qgelu (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.043s) 2023-01-11T21:56:30.5638152Z test_qhardsigmoid (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.357s) 2023-01-11T21:56:30.5638313Z test_qlayer_norm (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.090s) 2023-01-11T21:56:30.5638476Z test_qmatmul (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.181s) 2023-01-11T21:56:30.5638644Z test_qmul_broadcast (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.002s) 2023-01-11T21:56:30.5638827Z test_qmul_relu_different_qparams (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.005s) 2023-01-11T21:56:30.5639004Z test_qmul_relu_same_qparams (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.391s) 2023-01-11T21:56:30.5639149Z test_qprelu (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.042s) 2023-01-11T21:56:30.5654714Z test_qrelu (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (1.710s) 2023-01-11T21:56:30.5654915Z test_qrelu6 (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.220s) 2023-01-11T21:56:30.5655081Z test_qsoftmax (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.141s) 2023-01-11T21:56:30.5655256Z test_qsoftmax_qnnpack (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.174s) 2023-01-11T21:56:30.5655516Z test_qtanh (quantization.core.test_quantized_op.TestQuantizedOps) ... skip: this is broken without changes to any relevant code, we need to remove hypothesis testing in CI (0.005s) 2023-01-11T21:56:30.5655680Z test_qthreshold (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.145s) 2023-01-11T21:56:30.5656118Z test_qtopk (quantization.core.test_quantized_op.TestQuantizedOps) ... /var/lib/jenkins/workspace/test/quantization/core/test_quantized_op.py:1999: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:56:30.5656328Z indices = torch.tensor(X).long() 2023-01-11T21:56:30.5656393Z ok (0.147s) 2023-01-11T21:56:30.5656566Z test_quantized_mean_qnnpack (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.013s) 2023-01-11T21:56:30.5656721Z test_sigmoid (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.167s) 2023-01-11T21:56:30.5656896Z test_sigmoid_non_observed (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.700s) 2023-01-11T21:56:30.5657055Z test_std (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (1.439s) 2023-01-11T21:56:30.5657240Z test_bfp16_quantize (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.001s) 2023-01-11T21:56:30.5657568Z test_choose_qparams (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... skip: this is broken without changes to any relevant code, we need to remove hypothesis testing in CI (0.004s) 2023-01-11T21:56:30.5657772Z test_choose_qparams_optimized (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.011s) 2023-01-11T21:56:30.5657945Z test_clone (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.002s) 2023-01-11T21:56:30.5658178Z test_compare_per_channel_device_numerics (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... 
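The test_qtopk warning shows the discouraged torch.tensor(sourceTensor) copy construction. The recommended form, per the warning text (values are illustrative):

import torch
X = torch.rand(5)
# warned-about pattern: indices = torch.tensor(X).long()
indices = X.clone().detach().long()   # copy-construct without re-wrapping in torch.tensor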
skip: CUDA is not available (0.001s) 2023-01-11T21:56:30.5658399Z test_compare_per_tensor_device_numerics (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... skip: CUDA is not available (0.001s) 2023-01-11T21:56:30.5658633Z test_cuda_quantization_does_not_pin_memory (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... skip: CUDA is not available (0.000s) 2023-01-11T21:56:30.5658840Z test_decomposed_dequantize_per_channel (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.002s) 2023-01-11T21:56:30.5659049Z test_decomposed_dequantize_per_tensor (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.002s) 2023-01-11T21:56:30.5659255Z test_decomposed_dynamic_quant_pattern (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.002s) 2023-01-11T21:56:30.5659458Z test_decomposed_quantize_per_channel (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.001s) 2023-01-11T21:56:30.5659660Z test_decomposed_quantize_per_tensor (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.002s) 2023-01-11T21:56:30.5659850Z test_dequantize_fp16_cpu (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.001s) 2023-01-11T21:56:30.5660067Z test_dequantize_fp16_cuda (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... skip: No gpu is available. (0.000s) 2023-01-11T21:56:30.5660474Z test_fp16_saturate_op (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... /var/lib/jenkins/workspace/test/quantization/core/test_quantized_tensor.py:1390: UserWarning: FOUND weight out of range (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/quantized/cpu/QuantUtils.h:215.) 2023-01-11T21:56:30.5660560Z y = torch._saturate_weight_to_fp16(x) 2023-01-11T21:56:30.5660625Z ok (0.001s) 2023-01-11T21:56:30.5660812Z test_jit_serialization (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.008s) 2023-01-11T21:56:30.5661013Z test_per_channel_qtensor_creation_cpu (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.002s) 2023-01-11T21:56:30.5661243Z test_per_channel_qtensor_creation_cuda (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... skip: No gpu is available. (0.000s) 2023-01-11T21:56:30.5661447Z test_per_channel_qtensor_to_memory_format (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.002s) 2023-01-11T21:56:30.5661663Z test_per_channel_to_device (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... skip: No gpu is available. (0.001s) 2023-01-11T21:56:30.5661898Z test_per_tensor_qtensor_to_memory_format (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.002s) 2023-01-11T21:56:30.5662110Z test_per_tensor_to_device (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... skip: No gpu is available. (0.001s) 2023-01-11T21:56:30.5663093Z test_pickle_checkpoint_qtensor (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... /opt/conda/lib/python3.10/site-packages/torch/_utils.py:309: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:56:30.5663173Z device=storage.device, 2023-01-11T21:56:30.5663848Z /opt/conda/lib/python3.10/site-packages/torch/_utils.py:330: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:56:30.5663936Z device=storage.device, 2023-01-11T21:56:30.5664000Z ok (0.006s) 2023-01-11T21:56:30.5664182Z test_qscheme_pickle (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.003s) 2023-01-11T21:56:30.5664383Z test_qtensor_channel_float_assignment (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.078s) 2023-01-11T21:56:30.5664554Z test_qtensor_copy (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.016s) 2023-01-11T21:56:30.5664734Z test_qtensor_cpu (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.007s) 2023-01-11T21:56:30.5664914Z test_qtensor_creation (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.005s) 2023-01-11T21:56:30.5665120Z test_qtensor_cuda (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... skip: No gpu is available. (0.000s) 2023-01-11T21:56:30.5665293Z test_qtensor_dtypes (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.001s) 2023-01-11T21:56:30.5665488Z test_qtensor_fill_per_channel (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.004s) 2023-01-11T21:56:30.5665684Z test_qtensor_fill_per_channel_nhwc (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.008s) 2023-01-11T21:56:30.5665869Z test_qtensor_fill_per_tensor (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.004s) 2023-01-11T21:56:30.5666057Z test_qtensor_fill_per_tensor_nhwc (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.009s) 2023-01-11T21:56:30.5666248Z test_qtensor_float_assignment (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.003s) 2023-01-11T21:56:30.5666433Z test_qtensor_index_put_cpu (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.011s) 2023-01-11T21:56:30.5666656Z test_qtensor_index_put_cuda (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... skip: No gpu is available. (0.000s) 2023-01-11T21:56:30.5666842Z test_qtensor_index_select_cpu (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.001s) 2023-01-11T21:56:30.5667047Z test_qtensor_index_select_cuda (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... skip: No gpu is available. (0.000s) 2023-01-11T21:56:30.5667224Z test_qtensor_int_repr (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.001s) 2023-01-11T21:56:30.5667776Z test_qtensor_legacy_new_failure (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... /var/lib/jenkins/workspace/test/quantization/core/test_quantized_tensor.py:410: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
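Several of the TypedStorage deprecation warnings here recommend tensor.untyped_storage(). A minimal sketch, assuming a PyTorch build recent enough to have that accessor (the warning itself suggests it):

import torch
t = torch.arange(4)
# deprecated: t.storage()   (returns a TypedStorage and triggers the warning)
s = t.untyped_storage()     # recommended accessor
print(s.nbytes())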
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:56:30.5667940Z self.assertRaises(RuntimeError, lambda: qr.new(r.storage())) 2023-01-11T21:56:30.5668001Z ok (0.006s) 2023-01-11T21:56:30.5668545Z test_qtensor_load_save (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... /var/lib/jenkins/workspace/test/quantization/core/test_quantized_tensor.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:56:30.5668677Z self.assertEqual(qr2.storage().data_ptr(), qrv2.storage().data_ptr()) 2023-01-11T21:56:30.5668737Z ok (0.009s) 2023-01-11T21:56:30.5668930Z test_qtensor_masked_fill_cpu (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.012s) 2023-01-11T21:56:30.5669177Z test_qtensor_masked_fill_cuda (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... skip: No gpu is available. (0.000s) 2023-01-11T21:56:30.5669381Z test_qtensor_per_channel_load_save (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.007s) 2023-01-11T21:56:30.5669567Z test_qtensor_per_channel_permute (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.010s) 2023-01-11T21:56:30.5669744Z test_qtensor_permute (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.007s) 2023-01-11T21:56:30.5669925Z test_qtensor_quant_dequant (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.002s) 2023-01-11T21:56:30.5670121Z test_qtensor_quantize_per_channel (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (9.031s) 2023-01-11T21:56:30.5670299Z test_qtensor_reshape (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.003s) 2023-01-11T21:56:30.5670723Z test_qtensor_resize (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... /opt/conda/lib/python3.10/site-packages/torch/_tensor.py:767: UserWarning: non-inplace resize is deprecated 2023-01-11T21:56:30.5670889Z warnings.warn("non-inplace resize is deprecated") 2023-01-11T21:56:30.5670948Z ok (0.003s) 2023-01-11T21:56:30.5671501Z test_qtensor_sub_byte_aligned_cols (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... /var/lib/jenkins/workspace/test/quantization/core/test_quantized_tensor.py:290: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:56:30.5671690Z self.assertEqual(qr.storage().size(), rows * math.ceil(cols / elements_per_byte), f"with {dtype}, {elements_per_byte}") 2023-01-11T21:56:30.5672129Z /var/lib/jenkins/workspace/test/quantization/core/test_quantized_tensor.py:299: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:56:30.5672318Z self.assertEqual(q.storage().size(), math.ceil(num_elements / elements_per_byte), f"with {dtype}, {elements_per_byte}") 2023-01-11T21:56:30.5672372Z ok (0.006s) 2023-01-11T21:56:30.5672568Z test_qtensor_sub_byte_not_aligned_cols (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.004s) 2023-01-11T21:56:30.5672747Z test_qtensor_unsqueeze (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.013s) 2023-01-11T21:56:30.5672924Z test_qtensor_view (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.008s) 2023-01-11T21:56:30.5673130Z test_quant_pin_memory (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... skip: CUDA is not available (0.000s) 2023-01-11T21:56:30.5673360Z test_quantize_per_channel_float_qparams (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (9.028s) 2023-01-11T21:56:30.5673549Z test_quantize_per_channel_sub_byte (quantization.core.test_quantized_tensor.TestQuantizedTensor) 2023-01-11T21:56:30.5673752Z Tests the per channel quantization scheme for 4-bit qtensors. ... ok (0.304s) 2023-01-11T21:56:30.5673920Z test_repeat (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.001s) 2023-01-11T21:56:30.5674100Z test_torch_qtensor_deepcopy (quantization.core.test_quantized_tensor.TestQuantizedTensor) ... ok (0.001s) 2023-01-11T21:56:30.5674309Z test_observer_scriptable (quantization.core.test_workflow_module.TestRecordHistogramObserver) ... ok (0.015s) 2023-01-11T21:56:30.5674505Z test_record_observer (quantization.core.test_workflow_module.TestRecordHistogramObserver) ... ok (0.008s) 2023-01-11T21:56:30.5674678Z test_rnn (quantization.core.test_quantized_module.TestReferenceQuantizedModule) 2023-01-11T21:56:30.5674854Z Checks the rnn reference quantized modules has correct numerics ... ok (0.011s) 2023-01-11T21:56:30.5675039Z test_rnn_cell (quantization.core.test_quantized_module.TestReferenceQuantizedModule) 2023-01-11T21:56:30.5675189Z Checks the rnn cell reference quantized modules has correct numerics ... ok (0.007s) 2023-01-11T21:56:30.5675369Z test_sparse (quantization.core.test_quantized_module.TestReferenceQuantizedModule) 2023-01-11T21:56:30.5675459Z Embedding and EmbeddingBag ... ok (0.004s) 2023-01-11T21:56:30.5675640Z test_conv2d (quantization.bc.test_backward_compatibility.TestSerialization) ... ok (0.061s) 2023-01-11T21:56:30.5675829Z test_conv2d_graph (quantization.bc.test_backward_compatibility.TestSerialization) ... ok (0.021s) 2023-01-11T21:56:30.5676016Z test_conv2d_graph_v2 (quantization.bc.test_backward_compatibility.TestSerialization) ... ok (0.019s) 2023-01-11T21:56:30.5676208Z test_conv2d_graph_v3 (quantization.bc.test_backward_compatibility.TestSerialization) ... ok (0.021s) 2023-01-11T21:56:30.5676397Z test_conv2d_nobias (quantization.bc.test_backward_compatibility.TestSerialization) ... ok (0.048s) 2023-01-11T21:56:30.5676590Z test_conv2d_nobias_graph (quantization.bc.test_backward_compatibility.TestSerialization) ... ok (0.020s) 2023-01-11T21:56:30.5676783Z test_conv2d_nobias_graph_v2 (quantization.bc.test_backward_compatibility.TestSerialization) ... ok (0.018s) 2023-01-11T21:56:30.5677044Z test_conv2d_nobias_graph_v3 (quantization.bc.test_backward_compatibility.TestSerialization) ... ok (0.019s) 2023-01-11T21:56:30.5677217Z test_conv2d_relu (quantization.bc.test_backward_compatibility.TestSerialization) ... 
ok (0.051s) 2023-01-11T21:56:30.5677399Z test_conv3d (quantization.bc.test_backward_compatibility.TestSerialization) ... ok (0.015s) 2023-01-11T21:56:30.5677583Z test_conv3d_relu (quantization.bc.test_backward_compatibility.TestSerialization) ... ok (0.015s) 2023-01-11T21:56:30.5677781Z test_default_qat_qconfig (quantization.bc.test_backward_compatibility.TestSerialization) ... ok (0.010s) 2023-01-11T21:56:30.5677963Z test_linear (quantization.bc.test_backward_compatibility.TestSerialization) ... ok (0.042s) 2023-01-11T21:56:30.5678151Z test_linear_dynamic (quantization.bc.test_backward_compatibility.TestSerialization) ... ok (0.063s) 2023-01-11T21:56:30.5678336Z test_linear_relu (quantization.bc.test_backward_compatibility.TestSerialization) ... ok (0.043s) 2023-01-11T21:56:30.5679183Z test_linear_relu_package_quantization_transforms (quantization.bc.test_backward_compatibility.TestSerialization) ... /opt/conda/lib/python3.10/site-packages/torch/_utils.py:768: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:56:30.5679289Z return self.fget.__get__(instance, owner)() 2023-01-11T21:56:30.5679797Z /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/prepare.py:1435: UserWarning: Passing a QConfig dictionary to prepare is deprecated and will not be supported in a future version. Please pass in a QConfigMapping instead. 2023-01-11T21:56:30.5679871Z warnings.warn( 2023-01-11T21:56:30.5680372Z /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:1346: UserWarning: Node child_packed_weight_0 target child_packed_weight_0 child_packed_weight_0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target 2023-01-11T21:56:30.5680585Z warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does ' 2023-01-11T21:56:30.5681068Z /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:1346: UserWarning: Node _packed_weight_0 target _packed_weight_0 _packed_weight_0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target 2023-01-11T21:56:30.5681308Z warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does ' 2023-01-11T21:56:30.5681378Z ok (0.379s) 2023-01-11T21:56:30.5681558Z test_lstm (quantization.bc.test_backward_compatibility.TestSerialization) ... ok (0.037s) 2023-01-11T21:56:30.5681760Z test_per_channel_observer (quantization.bc.test_backward_compatibility.TestSerialization) ... ok (0.004s) 2023-01-11T21:56:30.5681955Z test_per_tensor_observer (quantization.bc.test_backward_compatibility.TestSerialization) ... ok (0.003s) 2023-01-11T21:56:30.5682145Z test_batch_norm2d (quantization.core.test_quantized_module.TestStaticQuantizedModule) 2023-01-11T21:56:30.5682262Z Tests the correctness of the batchnorm2d module. ... ok (0.002s) 2023-01-11T21:56:30.5682808Z test_batch_norm2d_serialization (quantization.core.test_quantized_module.TestStaticQuantizedModule) ... /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/utils.py:310: UserWarning: must run observer before calling calculate_qparams. Returning default values. 
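The "must run observer before calling calculate_qparams. Returning default values." warning above fires when calculate_qparams() is invoked on an observer that has not yet seen any data. A minimal sketch of the intended call order, using MinMaxObserver purely as an illustrative observer (the serialization test itself may construct its module differently):

    import torch
    from torch.ao.quantization.observer import MinMaxObserver

    obs = MinMaxObserver(dtype=torch.quint8)
    obs(torch.randn(16, 8))                      # observe some data first
    scale, zero_point = obs.calculate_qparams()  # now returns real qparams

    # Calling calculate_qparams() on a freshly constructed observer skips the
    # observation step and triggers the "Returning default values" warning.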
2023-01-11T21:56:30.5682886Z warnings.warn( 2023-01-11T21:56:30.5682953Z ok (0.008s) 2023-01-11T21:56:30.5683137Z test_batch_norm3d (quantization.core.test_quantized_module.TestStaticQuantizedModule) 2023-01-11T21:56:30.5683263Z Tests the correctness of the batchnorm3d module. ... ok (0.002s) 2023-01-11T21:56:30.5683477Z test_batch_norm3d_serialization (quantization.core.test_quantized_module.TestStaticQuantizedModule) ... ok (0.007s) 2023-01-11T21:56:30.5683664Z test_channel_shuffle (quantization.core.test_quantized_module.TestStaticQuantizedModule) 2023-01-11T21:56:30.5683785Z Tests the correctness of the ChannelShuffle module. ... ok (0.007s) 2023-01-11T21:56:30.5683978Z test_conv1d_api (quantization.core.test_quantized_module.TestStaticQuantizedModule) ... ok (5.744s) 2023-01-11T21:56:30.5684163Z test_conv2d_api (quantization.core.test_quantized_module.TestStaticQuantizedModule) ... ok (4.305s) 2023-01-11T21:56:30.5684351Z test_conv3d_api (quantization.core.test_quantized_module.TestStaticQuantizedModule) ... ok (0.545s) 2023-01-11T21:56:30.5684532Z test_dropout (quantization.core.test_quantized_module.TestStaticQuantizedModule) 2023-01-11T21:56:30.5684655Z Tests the correctness of the dropout module. ... ok (0.001s) 2023-01-11T21:56:30.5684862Z test_dropout_serialization (quantization.core.test_quantized_module.TestStaticQuantizedModule) ... ok (0.005s) 2023-01-11T21:56:30.5685036Z test_elu (quantization.core.test_quantized_module.TestStaticQuantizedModule) 2023-01-11T21:56:30.5685151Z Tests the correctness of the ELU module. ... ok (0.002s) 2023-01-11T21:56:30.5685336Z test_embedding_api (quantization.core.test_quantized_module.TestStaticQuantizedModule) ... ok (6.119s) 2023-01-11T21:56:30.5685523Z test_embedding_bag_api (quantization.core.test_quantized_module.TestStaticQuantizedModule) 2023-01-11T21:56:30.5685697Z Test execution and serialization for dynamic quantized embedding_bag modules on int8 ... ok (6.288s) 2023-01-11T21:56:30.5685876Z test_group_norm (quantization.core.test_quantized_module.TestStaticQuantizedModule) 2023-01-11T21:56:30.5686029Z Tests the correctness of the groupnorm module. ... ok (0.003s) 2023-01-11T21:56:30.5686210Z test_instance_norm (quantization.core.test_quantized_module.TestStaticQuantizedModule) 2023-01-11T21:56:30.5686343Z Tests the correctness of the instancenorm{n}d modules. ... ok (0.007s) 2023-01-11T21:56:30.5686519Z test_layer_norm (quantization.core.test_quantized_module.TestStaticQuantizedModule) 2023-01-11T21:56:30.5686630Z Tests the correctness of the layernorm module. ... ok (0.003s) 2023-01-11T21:56:30.5686820Z test_leaky_relu (quantization.core.test_quantized_module.TestStaticQuantizedModule) ... ok (0.002s) 2023-01-11T21:56:30.5687679Z test_linear (quantization.core.test_quantized_module.TestStaticQuantizedModule) ... /opt/conda/lib/python3.10/site-packages/torch/package/package_exporter.py:900: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:56:30.5687794Z storage_type_str = obj.pickle_storage_type() 2023-01-11T21:56:30.5688461Z /opt/conda/lib/python3.10/site-packages/torch/package/package_exporter.py:903: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. 
This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:56:30.5688546Z storage_numel = obj.size() 2023-01-11T21:56:30.5688612Z ok (22.240s) 2023-01-11T21:56:30.5688798Z test_linear_leaky_relu (quantization.core.test_quantized_module.TestStaticQuantizedModule) 2023-01-11T21:56:30.5688961Z test API functionality for nn.intrinsic.quantized.linear_leaky_relu ... ok (11.049s) 2023-01-11T21:56:30.5689159Z test_linear_relu (quantization.core.test_quantized_module.TestStaticQuantizedModule) ... ok (21.755s) 2023-01-11T21:56:30.5689331Z test_linear_tanh (quantization.core.test_quantized_module.TestStaticQuantizedModule) 2023-01-11T21:56:30.5689487Z test API functionality for nn.intrinsic.quantized.linear_tanh ... ok (5.530s) 2023-01-11T21:56:30.5689665Z test_pool_api (quantization.core.test_quantized_module.TestStaticQuantizedModule) 2023-01-11T21:56:30.5689785Z Tests the correctness of the pool module. ... ok (0.013s) 2023-01-11T21:56:30.5689972Z test_prelu (quantization.core.test_quantized_module.TestStaticQuantizedModule) ... ok (0.002s) 2023-01-11T21:56:30.5690173Z test_quant_dequant_api (quantization.core.test_quantized_module.TestStaticQuantizedModule) ... ok (0.002s) 2023-01-11T21:56:30.5690355Z test_relu (quantization.core.test_quantized_module.TestStaticQuantizedModule) ... ok (0.001s) 2023-01-11T21:56:30.5690543Z test_sigmoid (quantization.core.test_quantized_module.TestStaticQuantizedModule) ... ok (0.002s) 2023-01-11T21:56:30.5690756Z test_subgraph_rewriter_annotations_int (quantization.fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (5.136s) 2023-01-11T21:56:30.5690967Z test_subgraph_rewriter_correct_output_replacement (quantization.fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.437s) 2023-01-11T21:56:30.5691183Z test_subgraph_rewriter_graph_argument_order (quantization.fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.058s) 2023-01-11T21:56:30.5691436Z test_subgraph_rewriter_internal_pattern_nodes_cannot_have_users_that_are_not_matched (quantization.fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.041s) 2023-01-11T21:56:30.5691651Z test_subgraph_rewriter_multiple_pattern_match (quantization.fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.054s) 2023-01-11T21:56:30.5691862Z test_subgraph_rewriter_pattern_is_entire_graph (quantization.fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.050s) 2023-01-11T21:56:30.5692118Z test_subgraph_rewriter_pattern_output_pattern_node_can_have_users_that_are_not_matched (quantization.fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.050s) 2023-01-11T21:56:30.5692363Z test_subgraph_rewriter_placeholder_matching (quantization.fx.test_subgraph_rewriter.TestSubgraphRewriter) 2023-01-11T21:56:30.5692510Z This tests that a placeholder Node can be matched to a Node with ... ok (0.051s) 2023-01-11T21:56:30.5692715Z test_subgraph_rewriter_preserves_logic (quantization.fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.052s) 2023-01-11T21:56:30.5692940Z test_subgraph_rewriter_replaces_referenced_submodules (quantization.fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.040s) 2023-01-11T21:56:30.5693137Z test_subgraph_rewriter_single_pattern_match (quantization.fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.049s) 2023-01-11T21:56:30.5693347Z test_subgraph_rewriter_traced_as_callable (quantization.fx.test_subgraph_rewriter.TestSubgraphRewriter) ... 
ok (0.049s) 2023-01-11T21:56:30.5693584Z test_subgraph_rewriter_with_oneliner_pattern (quantization.fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.047s) 2023-01-11T21:56:30.5693810Z test_subgraph_writer_replace_consecutive_submodules (quantization.fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.048s) 2023-01-11T21:56:30.5693975Z test_get_fqn_to_example_inputs_complex_args (quantization.core.test_utils.TestUtils) 2023-01-11T21:56:30.5694125Z Test that we can record complex example inputs such as lists and dicts ... ok (0.007s) 2023-01-11T21:56:30.5694292Z test_get_fqn_to_example_inputs_default_kwargs (quantization.core.test_utils.TestUtils) 2023-01-11T21:56:30.5694455Z Test that we can get example inputs for functions with default keyword arguments ... ok (0.007s) 2023-01-11T21:56:30.5694626Z test_get_fqn_to_example_inputs_simple (quantization.core.test_utils.TestUtils) ... ok (0.007s) 2023-01-11T21:56:30.5694779Z test_quantize_weight_clamping_per_channel (quantization.core.test_utils.TestUtils) 2023-01-11T21:56:30.5694946Z Test quant_{min, max} from per channel observer is honored by `_quantize_weight` method ... ok (0.002s) 2023-01-11T21:56:30.5695112Z test_quantize_weight_clamping_per_tensor (quantization.core.test_utils.TestUtils) 2023-01-11T21:56:30.5695279Z Test quant_{min, max} from per tensor observer is honored by `_quantize_weight` method ... ok (0.002s) 2023-01-11T21:56:30.5695290Z 2023-01-11T21:56:30.5695495Z ---------------------------------------------------------------------- 2023-01-11T21:56:30.5695574Z Ran 1001 tests in 786.308s 2023-01-11T21:56:30.5695580Z 2023-01-11T21:56:30.5695652Z OK (skipped=62) 2023-01-11T21:56:30.5695657Z 2023-01-11T21:56:30.5695739Z Generating XML reports... 2023-01-11T21:56:30.5696153Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.ao_migration.test_ao_migration.TestAOMigrationNNIntrinsic-20230111214322.xml 2023-01-11T21:56:30.5696546Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized-20230111214322.xml 2023-01-11T21:56:30.5696962Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.ao_migration.test_quantization.TestAOMigrationQuantization-20230111214322.xml 2023-01-11T21:56:30.5697376Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx-20230111214322.xml 2023-01-11T21:56:30.5697747Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_backend_config.TestBackendConfig-20230111214322.xml 2023-01-11T21:56:30.5698143Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_bias_correction_eager.TestBiasCorrectionEager-20230111214322.xml 2023-01-11T21:56:30.5698495Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_op.TestComparatorOps-20230111214322.xml 2023-01-11T21:56:30.5698891Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.jit.test_deprecated_jit_quant.TestDeprecatedJitQuantized-20230111214322.xml 2023-01-11T21:56:30.5699278Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_workflow_module.TestDistributed-20230111214322.xml 2023-01-11T21:56:30.5699678Z Generated XML report: 
test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_module.TestDynamicQuantizedModule-20230111214322.xml 2023-01-11T21:56:30.5700051Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_op.TestDynamicQuantizedOps-20230111214322.xml 2023-01-11T21:56:30.5700411Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_equalize_eager.TestEqualizeEager-20230111214322.xml 2023-01-11T21:56:30.5700775Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_equalize_fx.TestEqualizeFx-20230111214322.xml 2023-01-11T21:56:30.5701124Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_numeric_suite_fx.TestFXGraphMatcher-20230111214322.xml 2023-01-11T21:56:30.5701495Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_numeric_suite_fx.TestFXGraphMatcherModels-20230111214322.xml 2023-01-11T21:56:30.5701879Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs-20230111214322.xml 2023-01-11T21:56:30.5702281Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIsModels-20230111214322.xml 2023-01-11T21:56:30.5702778Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteNShadows-20230111214322.xml 2023-01-11T21:56:30.5703139Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_workflow_module.TestFakeQuantize-20230111214322.xml 2023-01-11T21:56:30.5703505Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_workflow_ops.TestFakeQuantizeOps-20230111214322.xml 2023-01-11T21:56:30.5703849Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_fuse_eager.TestFuseEager-20230111214322.xml 2023-01-11T21:56:30.5704177Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_quantize_fx.TestFuseFx-20230111214322.xml 2023-01-11T21:56:30.5704538Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_workflow_ops.TestFusedObsFakeQuant-20230111214322.xml 2023-01-11T21:56:30.5704934Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_workflow_module.TestFusedObsFakeQuantModule-20230111214322.xml 2023-01-11T21:56:30.5705284Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.jit.test_fusion_passes.TestFusionPasses-20230111214322.xml 2023-01-11T21:56:30.5705689Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_model_report_fx.TestFxDetectInputWeightEqualization-20230111214322.xml 2023-01-11T21:56:30.5706046Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_model_report_fx.TestFxDetectOutliers-20230111214322.xml 2023-01-11T21:56:30.5706408Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_model_report_fx.TestFxModelReportClass-20230111214322.xml 2023-01-11T21:56:30.5706821Z Generated XML report: 
test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_model_report_fx.TestFxModelReportDetectDynamicStatic-20230111214322.xml 2023-01-11T21:56:30.5707197Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_model_report_fx.TestFxModelReportDetector-20230111214322.xml 2023-01-11T21:56:30.5707619Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_model_report_fx.TestFxModelReportObserver-20230111214322.xml 2023-01-11T21:56:30.5707999Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_model_report_fx.TestFxModelReportVisualizer-20230111214322.xml 2023-01-11T21:56:30.5708369Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_workflow_module.TestHistogramObserver-20230111214322.xml 2023-01-11T21:56:30.5708754Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_model_numerics.TestModelNumericsEager-20230111214322.xml 2023-01-11T21:56:30.5709137Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_numeric_suite_eager.TestNumericSuiteEager-20230111214322.xml 2023-01-11T21:56:30.5709519Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_workflow_module.TestObserver-20230111214322.xml 2023-01-11T21:56:30.5709852Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_op.TestPadding-20230111214322.xml 2023-01-11T21:56:30.5710197Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_op.TestQNNPackOps-20230111214322.xml 2023-01-11T21:56:30.5710544Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_docs.TestQuantizationDocs-20230111214322.xml 2023-01-11T21:56:30.5710915Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.jit.test_quantize_jit.TestQuantizeDynamicJitOps-20230111214322.xml 2023-01-11T21:56:30.5711298Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.jit.test_quantize_jit.TestQuantizeDynamicJitPasses-20230111214322.xml 2023-01-11T21:56:30.5711681Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_quantize_eager_ptq.TestQuantizeEagerOps-20230111214322.xml 2023-01-11T21:56:30.5712079Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_quantize_eager_ptq.TestQuantizeEagerPTQDynamic-20230111214322.xml 2023-01-11T21:56:30.5712469Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_quantize_eager_ptq.TestQuantizeEagerPTQStatic-20230111214322.xml 2023-01-11T21:56:30.5712837Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_quantize_eager_qat.TestQuantizeEagerQAT-20230111214322.xml 2023-01-11T21:56:30.5713242Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_quantize_eager_qat.TestQuantizeEagerQATNumerics-20230111214322.xml 2023-01-11T21:56:30.5713582Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_quantize_fx.TestQuantizeFx-20230111214322.xml 2023-01-11T21:56:30.5713931Z Generated XML report: 
test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_quantize_fx.TestQuantizeFxModels-20230111214322.xml 2023-01-11T21:56:30.5714285Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_quantize_fx.TestQuantizeFxOps-20230111214322.xml 2023-01-11T21:56:30.5714634Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.jit.test_quantize_jit.TestQuantizeJit-20230111214322.xml 2023-01-11T21:56:30.5714989Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.jit.test_quantize_jit.TestQuantizeJitOps-20230111214322.xml 2023-01-11T21:56:30.5715350Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.jit.test_quantize_jit.TestQuantizeJitPasses-20230111214322.xml 2023-01-11T21:56:30.5715722Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_quantize_pt2e.TestQuantizePT2EModels-20230111214322.xml 2023-01-11T21:56:30.5716108Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_op.TestQuantizedConv-20230111214322.xml 2023-01-11T21:56:30.5716490Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_op.TestQuantizedEmbeddingOps-20230111214322.xml 2023-01-11T21:56:30.5716959Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_functional.TestQuantizedFunctionalOps-20230111214322.xml 2023-01-11T21:56:30.5717326Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_op.TestQuantizedLinear-20230111214322.xml 2023-01-11T21:56:30.5717674Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_op.TestQuantizedOps-20230111214322.xml 2023-01-11T21:56:30.5718068Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_tensor.TestQuantizedTensor-20230111214322.xml 2023-01-11T21:56:30.5718468Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_workflow_module.TestRecordHistogramObserver-20230111214322.xml 2023-01-11T21:56:30.5718869Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_module.TestReferenceQuantizedModule-20230111214322.xml 2023-01-11T21:56:30.5719254Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.bc.test_backward_compatibility.TestSerialization-20230111214322.xml 2023-01-11T21:56:30.5719643Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_module.TestStaticQuantizedModule-20230111214322.xml 2023-01-11T21:56:30.5720031Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_subgraph_rewriter.TestSubgraphRewriter-20230111214322.xml 2023-01-11T21:56:30.5720359Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_utils.TestUtils-20230111214322.xml 2023-01-11T21:56:30.5720366Z 2023-01-11T21:56:30.5720752Z ##[endgroup] 2023-01-11T21:56:30.5721047Z FINISHED PRINTING LOG FILE of test_quantization (/var/lib/jenkins/workspace/test/test-reports/test_quantization_9fc5vzw9) 2023-01-11T21:56:30.5721053Z 2023-01-11T21:56:30.8641442Z Running inductor/test_torchinductor_opinfo ... 
[2023-01-11 21:56:30.863728] 2023-01-11T21:56:32.5274089Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:56:32.5282531Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:56:32.6160078Z Ignoring disabled issues: [] 2023-01-11T21:56:32.6160372Z Ignoring disabled issues: [] 2023-01-11T21:56:32.6626705Z Executing ['/opt/conda/bin/python', '-bb', 'inductor/test_torchinductor_opinfo.py', '-v', '--use-pytest', '-vv', '-rfEX', '-x', '--reruns=2', '--shard-id=0', '--num-shards=2', '-k=not _linalg_cholesky_', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:56:32.662182] 2023-01-11T21:56:32.6627753Z Executing ['/opt/conda/bin/python', '-bb', 'inductor/test_torchinductor_opinfo.py', '-v', '--use-pytest', '-vv', '-rfEX', '-x', '--reruns=2', '--shard-id=1', '--num-shards=2', '-k=not _linalg_cholesky_', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:56:32.662212] 2023-01-11T21:56:42.0267341Z 2023-01-11T21:56:42.0267953Z Expand the folded group to see the log file of inductor/test_torchinductor_opinfo 2023-01-11T21:56:42.0269062Z ##[group]PRINTING LOG FILE of inductor/test_torchinductor_opinfo (/var/lib/jenkins/workspace/test/test-reports/inductor-test_torchinductor_opinfo_oyr8io5w) 2023-01-11T21:56:42.0269988Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:56:42.0270712Z Test results will be stored in test-reports/python-pytest/inductor.test_torchinductor_opinfo/inductor.test_torchinductor_opinfo-484174d0a16e76ad.xml 2023-01-11T21:56:42.0271372Z ============================= test session starts ============================== 2023-01-11T21:56:42.0271747Z platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0 -- /opt/conda/bin/python 2023-01-11T21:56:42.0272005Z cachedir: .pytest_cache 2023-01-11T21:56:42.0272405Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2023-01-11T21:56:42.0272759Z rootdir: /var/lib/jenkins/workspace, configfile: pytest.ini 2023-01-11T21:56:42.0273188Z plugins: hypothesis-5.35.1, flakefinder-1.1.0, rerunfailures-10.3, shard-0.1.2, xdist-3.1.0, xdoctest-1.1.0 2023-01-11T21:56:42.0273876Z collecting ... 
/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py:0: PytestCollectionWarning: cannot collect test class 'TestExpect' because it has a __new__ constructor (from: test/inductor/test_torchinductor_opinfo.py) 2023-01-11T21:56:42.0274346Z collected 0 items 2023-01-11T21:56:42.0274538Z Running 0 items in this shard: 2023-01-11T21:56:42.0274663Z 2023-01-11T21:56:42.0274776Z =============================== warnings summary =============================== 2023-01-11T21:56:42.0275120Z ../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171 2023-01-11T21:56:42.0275646Z /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171: PytestAssertRewriteWarning: Module already imported so cannot be rewritten: hypothesis 2023-01-11T21:56:42.0275999Z self._mark_plugins_for_rewrite(hook) 2023-01-11T21:56:42.0276131Z 2023-01-11T21:56:42.0276365Z -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html 2023-01-11T21:56:42.0276917Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/inductor.test_torchinductor_opinfo/inductor.test_torchinductor_opinfo-484174d0a16e76ad.xml - 2023-01-11T21:56:42.0277371Z ============================== 1 warning in 0.25s ============================== 2023-01-11T21:56:42.0277678Z If in CI, skip info is located in the xml test reports, please either go to s3 or the hud to download them 2023-01-11T21:56:42.0277873Z 2023-01-11T21:56:42.0278111Z ##[endgroup] 2023-01-11T21:56:42.0278559Z FINISHED PRINTING LOG FILE of inductor/test_torchinductor_opinfo (/var/lib/jenkins/workspace/test/test-reports/inductor-test_torchinductor_opinfo_oyr8io5w) 2023-01-11T21:56:42.0278823Z 2023-01-11T21:56:47.2097450Z 2023-01-11T21:56:47.2097892Z Expand the folded group to see the log file of inductor/test_torchinductor_opinfo 2023-01-11T21:56:47.2098848Z ##[group]PRINTING LOG FILE of inductor/test_torchinductor_opinfo (/var/lib/jenkins/workspace/test/test-reports/inductor-test_torchinductor_opinfo_bhl69r3u) 2023-01-11T21:56:47.2099537Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:56:47.2100116Z Test results will be stored in test-reports/python-pytest/inductor.test_torchinductor_opinfo/inductor.test_torchinductor_opinfo-e42ebd2ecf6eaa23.xml 2023-01-11T21:56:47.2100554Z ============================= test session starts ============================== 2023-01-11T21:56:47.2100933Z platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0 -- /opt/conda/bin/python 2023-01-11T21:56:47.2101196Z cachedir: .pytest_cache 2023-01-11T21:56:47.2101708Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2023-01-11T21:56:47.2102150Z rootdir: /var/lib/jenkins/workspace, configfile: pytest.ini 2023-01-11T21:56:47.2102783Z plugins: hypothesis-5.35.1, flakefinder-1.1.0, rerunfailures-10.3, shard-0.1.2, xdist-3.1.0, xdoctest-1.1.0 2023-01-11T21:56:47.2103539Z collecting ... 
/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py:0: PytestCollectionWarning: cannot collect test class 'TestExpect' because it has a __new__ constructor (from: test/inductor/test_torchinductor_opinfo.py) 2023-01-11T21:56:47.2103948Z collected 0 items 2023-01-11T21:56:47.2104355Z Running 0 items in this shard: 2023-01-11T21:56:47.2104478Z 2023-01-11T21:56:47.2104593Z =============================== warnings summary =============================== 2023-01-11T21:56:47.2104991Z ../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171 2023-01-11T21:56:47.2105559Z /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171: PytestAssertRewriteWarning: Module already imported so cannot be rewritten: hypothesis 2023-01-11T21:56:47.2105925Z self._mark_plugins_for_rewrite(hook) 2023-01-11T21:56:47.2106058Z 2023-01-11T21:56:47.2106336Z -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html 2023-01-11T21:56:47.2106915Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/inductor.test_torchinductor_opinfo/inductor.test_torchinductor_opinfo-e42ebd2ecf6eaa23.xml - 2023-01-11T21:56:47.2107370Z ============================== 1 warning in 0.03s ============================== 2023-01-11T21:56:47.2107757Z If in CI, skip info is located in the xml test reports, please either go to s3 or the hud to download them 2023-01-11T21:56:47.2107993Z 2023-01-11T21:56:47.2108238Z ##[endgroup] 2023-01-11T21:56:47.2108748Z FINISHED PRINTING LOG FILE of inductor/test_torchinductor_opinfo (/var/lib/jenkins/workspace/test/test-reports/inductor-test_torchinductor_opinfo_bhl69r3u) 2023-01-11T21:56:47.2109008Z 2023-01-11T21:56:47.5661153Z Executing ['/opt/conda/bin/python', '-bb', 'inductor/test_torchinductor_opinfo.py', '-v', '--use-pytest', '-vv', '-rfEX', '-x', '--reruns=2', '-k=_linalg_cholesky_', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:56:47.565654] 2023-01-11T21:56:56.5191322Z 2023-01-11T21:56:56.5191903Z Expand the folded group to see the log file of inductor/test_torchinductor_opinfo 2023-01-11T21:56:56.5192958Z ##[group]PRINTING LOG FILE of inductor/test_torchinductor_opinfo (/var/lib/jenkins/workspace/test/test-reports/inductor-test_torchinductor_opinfo_xb63g_n9) 2023-01-11T21:56:56.5193858Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:56:56.5194669Z Test results will be stored in test-reports/python-pytest/inductor.test_torchinductor_opinfo/inductor.test_torchinductor_opinfo-b7995a19757f5615.xml 2023-01-11T21:56:56.5195307Z ============================= test session starts ============================== 2023-01-11T21:56:56.5195930Z platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0 -- /opt/conda/bin/python 2023-01-11T21:56:56.5196349Z cachedir: .pytest_cache 2023-01-11T21:56:56.5197155Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2023-01-11T21:56:56.5197773Z rootdir: /var/lib/jenkins/workspace, configfile: pytest.ini 2023-01-11T21:56:56.5198373Z plugins: hypothesis-5.35.1, flakefinder-1.1.0, rerunfailures-10.3, shard-0.1.2, xdist-3.1.0, xdoctest-1.1.0 2023-01-11T21:56:56.5199442Z collecting ... 
/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py:0: PytestCollectionWarning: cannot collect test class 'TestExpect' because it has a __new__ constructor (from: test/inductor/test_torchinductor_opinfo.py) 2023-01-11T21:56:56.5200090Z collected 0 items 2023-01-11T21:56:56.5200377Z Running 0 items in this shard: 2023-01-11T21:56:56.5200568Z 2023-01-11T21:56:56.5200727Z =============================== warnings summary =============================== 2023-01-11T21:56:56.5201299Z ../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171 2023-01-11T21:56:56.5202165Z /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171: PytestAssertRewriteWarning: Module already imported so cannot be rewritten: hypothesis 2023-01-11T21:56:56.5202799Z self._mark_plugins_for_rewrite(hook) 2023-01-11T21:56:56.5203008Z 2023-01-11T21:56:56.5203329Z -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html 2023-01-11T21:56:56.5204257Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/inductor.test_torchinductor_opinfo/inductor.test_torchinductor_opinfo-b7995a19757f5615.xml - 2023-01-11T21:56:56.5205151Z ============================== 1 warning in 0.03s ============================== 2023-01-11T21:56:56.5205718Z If in CI, skip info is located in the xml test reports, please either go to s3 or the hud to download them 2023-01-11T21:56:56.5206074Z 2023-01-11T21:56:56.5206547Z ##[endgroup] 2023-01-11T21:56:56.5207407Z FINISHED PRINTING LOG FILE of inductor/test_torchinductor_opinfo (/var/lib/jenkins/workspace/test/test-reports/inductor-test_torchinductor_opinfo_xb63g_n9) 2023-01-11T21:56:56.5207892Z 2023-01-11T21:56:56.5208205Z Running test_autograd ... [2023-01-11 21:56:56.519783] 2023-01-11T21:56:56.5209039Z Executing ['/opt/conda/bin/python', '-bb', 'test_autograd.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:56:56.520037] 2023-01-11T21:57:16.0395655Z 2023-01-11T21:57:16.0396415Z Expand the folded group to see the log file of test_autograd 2023-01-11T21:57:16.0397664Z ##[group]PRINTING LOG FILE of test_autograd (/var/lib/jenkins/workspace/test/test-reports/test_autograd_gulrsvtr) 2023-01-11T21:57:16.0398025Z 2023-01-11T21:57:16.0398801Z Running tests... 2023-01-11T21:57:16.0399467Z ---------------------------------------------------------------------- 2023-01-11T21:57:16.0400059Z Test results will be stored in test-reports/python-unittest/test_autograd 2023-01-11T21:57:16.0400547Z test_backward_out_of_context (__main__.TestAllowMutationOnSaved) ... ok (0.003s) 2023-01-11T21:57:16.0400980Z test_basic (__main__.TestAllowMutationOnSaved) ... ok (0.007s) 2023-01-11T21:57:16.0401393Z test_disallow_nesting (__main__.TestAllowMutationOnSaved) ... ok (0.001s) 2023-01-11T21:57:16.0401871Z test_double_backward (__main__.TestAllowMutationOnSaved) ... ok (0.002s) 2023-01-11T21:57:16.0402346Z test_save_base_and_modify_view (__main__.TestAllowMutationOnSaved) ... ok (0.001s) 2023-01-11T21:57:16.0406658Z test_save_view_modify_base (__main__.TestAllowMutationOnSaved) ... ok (0.001s) 2023-01-11T21:57:16.0407249Z test_saved_but_not_anymore (__main__.TestAllowMutationOnSaved) ... ok (0.002s) 2023-01-11T21:57:16.0407776Z test_saved_same_tensor_different_versions (__main__.TestAllowMutationOnSaved) ... ok (0.001s) 2023-01-11T21:57:16.0408202Z test_saved_same_tensor_many_times (__main__.TestAllowMutationOnSaved) ... 
ok (0.001s) 2023-01-11T21:57:16.0408700Z test_views (__main__.TestAllowMutationOnSaved) ... ok (0.007s) 2023-01-11T21:57:16.0409008Z test_with_math_views (__main__.TestAllowMutationOnSaved) ... ok (0.002s) 2023-01-11T21:57:16.0409323Z test_with_out_variant (__main__.TestAllowMutationOnSaved) ... ok (0.001s) 2023-01-11T21:57:16.0409840Z test_access_saved_tensor_twice_without_recomputation_works (__main__.TestAutograd) ... ok (0.003s) 2023-01-11T21:57:16.0412009Z test_accumulate_grad (__main__.TestAutograd) ... /opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py:197: UserWarning: Using backward() with create_graph=True will create a reference cycle between the parameter and its gradient which can cause a memory leak. We recommend using autograd.grad when creating the graph to avoid this. If you have to use this function, make sure to reset the .grad fields of your parameters to None after use to break the cycle and avoid the leak. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/engine.cpp:1134.) 2023-01-11T21:57:16.0413186Z Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-01-11T21:57:16.0413715Z ok (0.002s) 2023-01-11T21:57:16.0414239Z test_accumulate_grad_tensor_reference (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0414921Z test_accumulate_grad_with_zero_numel_grad (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0415991Z test_anomaly_assign_parent_cleanup (__main__.TestAutograd) ... /var/lib/jenkins/workspace/test/test_autograd.py:4090: UserWarning: Anomaly Detection has been enabled. This mode will increase the runtime and should only be enabled for debugging. 2023-01-11T21:57:16.0417072Z with detect_anomaly(): 2023-01-11T21:57:16.0417461Z ok (0.001s) 2023-01-11T21:57:16.0417928Z test_anomaly_detect_nan (__main__.TestAutograd) ... ok (0.003s) 2023-01-11T21:57:16.0418524Z test_anomaly_grad_warnings (__main__.TestAutograd) ... ok (0.013s) 2023-01-11T21:57:16.0419112Z test_anomaly_mode_no_check_nan (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0419712Z test_attribute_deletion (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0420394Z test_autograd_function_extension_enabled_by_default (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0421089Z test_autograd_function_extension_feature_flag (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0421765Z test_autograd_inplace_view_of_view (__main__.TestAutograd) ... ok (0.007s) 2023-01-11T21:57:16.0422605Z test_autograd_inplace_views_creation_meta (__main__.TestAutograd) ... ok (0.220s) 2023-01-11T21:57:16.0423222Z test_autograd_inplace_views_cross_dtype (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0423709Z test_autograd_multiple_views_python (__main__.TestAutograd) ... ok (0.005s) 2023-01-11T21:57:16.0424214Z test_autograd_python_custom_function_inplace (__main__.TestAutograd) ... ok (0.008s) 2023-01-11T21:57:16.0424751Z test_autograd_simple_views_python (__main__.TestAutograd) ... ok (0.088s) 2023-01-11T21:57:16.0425272Z test_autograd_views_codegen (__main__.TestAutograd) ... ok (0.017s) 2023-01-11T21:57:16.0425529Z test_backward (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0425790Z test_backward_badcalls (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0426057Z test_backward_copy (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0426319Z test_backward_create_graph_warns (__main__.TestAutograd) ... 
ok (0.001s) 2023-01-11T21:57:16.0426597Z test_backward_no_grad (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0426902Z test_backward_twice_retained_graph_with_saved_values (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0427245Z test_backward_twice_retained_graph_without_saved_values (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0427553Z test_backward_twice_with_saved_values (__main__.TestAutograd) ... ok (0.005s) 2023-01-11T21:57:16.0428013Z test_backward_twice_without_saved_values (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0428487Z test_backward_with_inputs (__main__.TestAutograd) ... ok (0.003s) 2023-01-11T21:57:16.0428810Z test_backward_with_nonleaf_inputs (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0430040Z test_calculate_shape_util (__main__.TestAutograd) ... /opt/conda/lib/python3.10/site-packages/torch/nested/__init__.py:58: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/NestedTensorImpl.cpp:179.) 2023-01-11T21:57:16.0430914Z return torch._nested_tensor_from_tensor_list(tensor_list, dtype, None, device, None) 2023-01-11T21:57:16.0431295Z ok (0.002s) 2023-01-11T21:57:16.0431637Z test_callback_adds_callback (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0432072Z test_cant_create_saved_tensors (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0432425Z test_checkpoint_valid_reset_on_error (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0432788Z test_checkpointing (__main__.TestAutograd) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s) 2023-01-11T21:57:16.0433173Z test_checkpointing_non_reentrant_autocast_cpu (__main__.TestAutograd) 2023-01-11T21:57:16.0433596Z Test that autocast args such as the dtype are preserved during non-reentrant ... ok (0.002s) 2023-01-11T21:57:16.0434088Z test_checkpointing_non_reentrant_autocast_gpu (__main__.TestAutograd) 2023-01-11T21:57:16.0434730Z Test that autocast args/kwargs such as the dtype are preserved during ... skip: Test requires CUDA bf16 support (0.000s) 2023-01-11T21:57:16.0435305Z test_checkpointing_without_reentrant_arbitrary_input_output (__main__.TestAutograd) 2023-01-11T21:57:16.0435643Z Ensures checkpointing without reentrant autograd works with functions ... ok (0.002s) 2023-01-11T21:57:16.0435969Z test_checkpointing_without_reentrant_correct_grad (__main__.TestAutograd) 2023-01-11T21:57:16.0436268Z Verifies that correct gradients are calculated for checkpoint ... ok (0.002s) 2023-01-11T21:57:16.0436658Z test_checkpointing_without_reentrant_custom_function_works (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0436993Z test_checkpointing_without_reentrant_dataparallel (__main__.TestAutograd) 2023-01-11T21:57:16.0437598Z Verifies gradient correctness when checkpoint without reentrant autograd ... ok (0.002s) 2023-01-11T21:57:16.0438332Z test_checkpointing_without_reentrant_detached_tensor_use_reentrant_False (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0439251Z test_checkpointing_without_reentrant_detached_tensor_use_reentrant_True (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0440050Z test_checkpointing_without_reentrant_input_requires_grad_False (__main__.TestAutograd) 2023-01-11T21:57:16.0440873Z Basic test for checkpoint without reentrant autograd. ... 
skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s) 2023-01-11T21:57:16.0441703Z test_checkpointing_without_reentrant_input_requires_grad_True (__main__.TestAutograd) 2023-01-11T21:57:16.0442506Z Basic test for checkpoint without reentrant autograd. ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s) 2023-01-11T21:57:16.0443386Z test_checkpointing_without_reentrant_memory_savings (__main__.TestAutograd) ... skip: Test requires CUDA (0.001s) 2023-01-11T21:57:16.0444145Z test_checkpointing_without_reentrant_parameter_used_in_an_out (__main__.TestAutograd) 2023-01-11T21:57:16.0444849Z Ensures that gradient hooks are only called once per tensor. ... ok (0.001s) 2023-01-11T21:57:16.0445522Z test_copy_slices_graph_task_updates (__main__.TestAutograd) ... ok (0.003s) 2023-01-11T21:57:16.0446179Z test_create_graph_and_full_backward_hook_cycle (__main__.TestAutograd) ... ok (0.446s) 2023-01-11T21:57:16.0446855Z test_current_graph_task_execution_order (__main__.TestAutograd) ... ok (0.011s) 2023-01-11T21:57:16.0447485Z test_current_graph_task_id (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0448067Z test_current_node (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0448646Z test_custom_autograd_no_early_free (__main__.TestAutograd) ... ok (0.008s) 2023-01-11T21:57:16.0449300Z test_custom_autograd_repeated_grad_grad (__main__.TestAutograd) ... ok (0.003s) 2023-01-11T21:57:16.0449930Z test_custom_function_cycle (__main__.TestAutograd) ... ok (0.143s) 2023-01-11T21:57:16.0450612Z test_custom_function_error (__main__.TestAutograd) ... ok (0.068s) 2023-01-11T21:57:16.0451207Z test_custom_function_exception (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0451887Z test_custom_function_forward_mode_forward_is_no_op (__main__.TestAutograd) ... ok (0.021s) 2023-01-11T21:57:16.0452605Z test_custom_function_forward_mode_inplace_checks (__main__.TestAutograd) ... ok (0.007s) 2023-01-11T21:57:16.0453335Z test_custom_function_forward_mode_non_differentiable (__main__.TestAutograd) ... ok (0.004s) 2023-01-11T21:57:16.0454073Z test_custom_function_forward_mode_non_tensor_before_tensor_args (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0454810Z test_custom_function_forward_mode_view_checks (__main__.TestAutograd) ... ok (0.013s) 2023-01-11T21:57:16.0455507Z test_custom_function_forward_mode_wrong_formula (__main__.TestAutograd) ... ok (0.007s) 2023-01-11T21:57:16.0456177Z test_custom_function_local_inplace (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0456848Z test_custom_function_mark_dirty_not_differentiable (__main__.TestAutograd) ... ok (0.007s) 2023-01-11T21:57:16.0457513Z test_custom_function_no_tensors (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0458249Z test_custom_function_non_tensor_inputs_outputs (__main__.TestAutograd) ... ok (0.009s) 2023-01-11T21:57:16.0458913Z test_custom_function_return_view_in_nograd (__main__.TestAutograd) ... ok (0.004s) 2023-01-11T21:57:16.0459569Z test_custom_function_save_for_forward (__main__.TestAutograd) ... ok (0.007s) 2023-01-11T21:57:16.0460217Z test_custom_function_saved_tensors (__main__.TestAutograd) ... ok (0.005s) 2023-01-11T21:57:16.0460884Z test_custom_function_setup_context_multi_input (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0461559Z test_custom_function_setup_context_multi_output (__main__.TestAutograd) ... 
ok (0.001s) 2023-01-11T21:57:16.0462241Z test_custom_function_setup_context_simple (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0462849Z test_deep_reentrant (__main__.TestAutograd) ... ok (0.197s) 2023-01-11T21:57:16.0463309Z test_default_saved_variable_hooks_double_backward (__main__.TestAutograd) ... ok (0.003s) 2023-01-11T21:57:16.0463853Z test_dep_nograd (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0464283Z test_dependent_backward (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0464692Z test_detach (__main__.TestAutograd) ... ok (0.004s) 2023-01-11T21:57:16.0465045Z test_detach_base (__main__.TestAutograd) 2023-01-11T21:57:16.0465430Z detaching base does not detach view ... ok (0.001s) 2023-01-11T21:57:16.0465869Z test_detach_then_inplace_raises_in_autograd (__main__.TestAutograd) ... ok (0.006s) 2023-01-11T21:57:16.0466733Z test_diagonal_expanded_v (__main__.TestAutograd) ... /var/lib/jenkins/workspace/test/test_autograd.py:2510: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). 2023-01-11T21:57:16.0467521Z v_expanded = torch.tensor(value).expand(10) 2023-01-11T21:57:16.0467830Z ok (0.001s) 2023-01-11T21:57:16.0468144Z test_dir (__main__.TestAutograd) ... ok (0.003s) 2023-01-11T21:57:16.0468529Z test_disabling_saved_tensor_hooks (__main__.TestAutograd) ... ok (0.004s) 2023-01-11T21:57:16.0469058Z test_disabling_saved_tensor_hooks_nested (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0469589Z test_dont_materialize_grads (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0469865Z test_duplicate_backward_root (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0470161Z test_first_grad_fn_access_in_no_grad_mode (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0470447Z test_free_deep_graph (__main__.TestAutograd) ... ok (1.759s) 2023-01-11T21:57:16.0470727Z test_free_deep_graph_complicated (__main__.TestAutograd) ... ok (1.200s) 2023-01-11T21:57:16.0471008Z test_free_deep_graph_pyfunction (__main__.TestAutograd) ... ok (1.707s) 2023-01-11T21:57:16.0471449Z test_full_backward_hook_double_backward (__main__.TestAutograd) ... ok (0.067s) 2023-01-11T21:57:16.0471914Z test_function (__main__.TestAutograd) ... ok (0.004s) 2023-01-11T21:57:16.0472380Z test_function_returns_input (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0472921Z test_function_returns_undefined_tensor (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0473420Z test_gc_in_destructor (__main__.TestAutograd) 2023-01-11T21:57:16.0473926Z Previously, if a Function destructor triggered a garbage collection, ... ok (0.740s) 2023-01-11T21:57:16.0474454Z test_grad (__main__.TestAutograd) ... ok (0.003s) 2023-01-11T21:57:16.0474816Z test_grad_badcalls (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0475078Z test_grad_batched_grad (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0475335Z test_grad_empty_inputs (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0475606Z test_grad_fn_attr_bindings (__main__.TestAutograd) ... ok (0.017s) 2023-01-11T21:57:16.0475879Z test_grad_fn_badcalls (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0476229Z test_grad_fn_prehooks (__main__.TestAutograd) ... ok (0.003s) 2023-01-11T21:57:16.0476512Z test_grad_fn_prehooks_multiple_outputs (__main__.TestAutograd) ... 
ok (0.002s) 2023-01-11T21:57:16.0476811Z test_grad_fn_prehooks_remove_hooks (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0477103Z test_grad_mode_class_decoration (__main__.TestAutograd) ... ok (0.005s) 2023-01-11T21:57:16.0477459Z test_grad_mode_restored_reentrant (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0478779Z test_grad_nonleaf (__main__.TestAutograd) ... /var/lib/jenkins/workspace/test/test_autograd.py:776: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at /var/lib/jenkins/workspace/build/aten/src/ATen/core/TensorBody.h:485.) 2023-01-11T21:57:16.0479546Z self.assertIsNone(x.grad) 2023-01-11T21:57:16.0479734Z ok (0.003s) 2023-01-11T21:57:16.0479950Z test_grad_nonleaf_many_outputs (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0481185Z test_grad_nonleaf_register_hook (__main__.TestAutograd) ... /var/lib/jenkins/workspace/test/test_autograd.py:828: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at /var/lib/jenkins/workspace/build/aten/src/ATen/core/TensorBody.h:485.) 2023-01-11T21:57:16.0481949Z self.assertIsNone(x_list[0].grad) 2023-01-11T21:57:16.0483050Z /var/lib/jenkins/workspace/test/test_autograd.py:835: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at /var/lib/jenkins/workspace/build/aten/src/ATen/core/TensorBody.h:485.) 2023-01-11T21:57:16.0483771Z self.assertIsNone(x_list[i].grad) 2023-01-11T21:57:16.0483960Z ok (0.002s) 2023-01-11T21:57:16.0484169Z test_grad_unreachable (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0484454Z test_grad_unreachable_discovery (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0484766Z test_gradcheck_backward_mul_by_grad_output (__main__.TestAutograd) ... ok (0.020s) 2023-01-11T21:57:16.0485062Z test_gradcheck_check_batched_grad (__main__.TestAutograd) ... ok (0.048s) 2023-01-11T21:57:16.0485359Z test_gradcheck_check_forward_or_backward_only (__main__.TestAutograd) 2023-01-11T21:57:16.0485671Z Depending on settings for check_forward_ad and check_backward_ad, the ... ok (0.020s) 2023-01-11T21:57:16.0486483Z test_gradcheck_check_no_differentiable_outputs (__main__.TestAutograd) ... /opt/conda/lib/python3.10/site-packages/torch/autograd/gradcheck.py:688: UserWarning: Input #0 requires gradient and is not a double precision floating point or complex. 
This check will likely fail if all the inputs are not of double precision floating point or complex. 2023-01-11T21:57:16.0486972Z warnings.warn( 2023-01-11T21:57:16.0487143Z ok (0.003s) 2023-01-11T21:57:16.0487386Z test_gradcheck_complex_non_complex_outputs (__main__.TestAutograd) ... ok (0.008s) 2023-01-11T21:57:16.0488158Z test_gradcheck_custom_error (__main__.TestAutograd) ... /opt/conda/lib/python3.10/site-packages/torch/autograd/gradcheck.py:688: UserWarning: Input #0 requires gradient and is not a double precision floating point or complex. This check will likely fail if all the inputs are not of double precision floating point or complex. 2023-01-11T21:57:16.0488659Z warnings.warn( 2023-01-11T21:57:16.0488828Z ok (0.014s) 2023-01-11T21:57:16.0489069Z test_gradcheck_dense_and_sparse_inputs (__main__.TestAutograd) ... ok (0.005s) 2023-01-11T21:57:16.0489352Z test_gradcheck_forward_ad (__main__.TestAutograd) ... ok (0.108s) 2023-01-11T21:57:16.0489646Z test_gradcheck_forward_ad_batched_grad (__main__.TestAutograd) ... ok (0.009s) 2023-01-11T21:57:16.0489968Z test_gradcheck_forward_ad_respects_requires_grad (__main__.TestAutograd) ... ok (0.007s) 2023-01-11T21:57:16.0490300Z test_gradcheck_forward_ad_runs_with_no_requires_grad (__main__.TestAutograd) ... ok (0.006s) 2023-01-11T21:57:16.0491346Z test_gradcheck_get_analytical_jacobian (__main__.TestAutograd) ... /opt/conda/lib/python3.10/site-packages/torch/autograd/gradcheck.py:616: UserWarning: get_analytical_jacobian was part of PyTorch's private API and not meant to be exposed. We are deprecating it and it will be removed in a future version of PyTorch. If you have a specific use for this or feature request for this to be a stable API, please file us an issue at https://github.com/pytorch/pytorch/issues/new 2023-01-11T21:57:16.0492126Z warnings.warn("get_analytical_jacobian was part of PyTorch's private API and not " 2023-01-11T21:57:16.0492372Z ok (0.011s) 2023-01-11T21:57:16.0493292Z test_gradcheck_get_numerical_jacobian (__main__.TestAutograd) ... /opt/conda/lib/python3.10/site-packages/torch/autograd/gradcheck.py:209: UserWarning: get_numerical_jacobian was part of PyTorch's private API and not meant to be exposed. We are deprecating it and it will be removed in a future version of PyTorch. If you have a specific use for this or feature request for this to be a stable API, please file us an issue at https://github.com/pytorch/pytorch/issues/new 2023-01-11T21:57:16.0494276Z warnings.warn("get_numerical_jacobian was part of PyTorch's private API and not " 2023-01-11T21:57:16.0494509Z ok (0.008s) 2023-01-11T21:57:16.0495213Z test_gradcheck_jacobian_mismatch (__main__.TestAutograd) ... /opt/conda/lib/python3.10/site-packages/torch/autograd/gradcheck.py:688: UserWarning: Input #0 requires gradient and is not a double precision floating point or complex. This check will likely fail if all the inputs are not of double precision floating point or complex. 2023-01-11T21:57:16.0495686Z warnings.warn( 2023-01-11T21:57:16.0495855Z ok (0.038s) 2023-01-11T21:57:16.0496556Z test_gradcheck_multiple_mkldnn_inputs (__main__.TestAutograd) ... /opt/conda/lib/python3.10/site-packages/torch/autograd/gradcheck.py:688: UserWarning: Input #1 requires gradient and is not a double precision floating point or complex. This check will likely fail if all the inputs are not of double precision floating point or complex. 
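The repeated "Input #N requires gradient and is not a double precision floating point or complex" warnings come from gradcheck's numerical Jacobian, which uses finite differences and is only reliable in float64. A minimal sketch of a gradcheck call that avoids the warning, assuming a simple differentiable op as the function under test:

    import torch
    from torch.autograd import gradcheck

    # Double-precision inputs keep the finite-difference comparison stable.
    x = torch.randn(3, 3, dtype=torch.double, requires_grad=True)
    assert gradcheck(torch.sin, (x,), eps=1e-6, atol=1e-4)

    # float32 inputs would still run, but emit the UserWarning seen above and
    # make the numerical/analytical gradient comparison far less reliable.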
2023-01-11T21:57:16.0497039Z warnings.warn( 2023-01-11T21:57:16.0497208Z ok (0.015s) 2023-01-11T21:57:16.0497444Z test_gradcheck_nondeterministic (__main__.TestAutograd) ... ok (0.074s) 2023-01-11T21:57:16.0497829Z test_gradcheck_output_shape_or_dtype_depend_on_values (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0498142Z test_gradcheck_single_input (__main__.TestAutograd) ... ok (0.011s) 2023-01-11T21:57:16.0498779Z test_gradcheck_sparse_bsc_input (__main__.TestAutograd) ... /var/lib/jenkins/workspace/test/test_autograd.py:4389: UserWarning: Sparse BSC tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/SparseCsrTensorImpl.cpp:56.) 2023-01-11T21:57:16.0499428Z gradcheck(fn, torch.rand(4, 8, dtype=torch.double).to_sparse_bsc((2, 2)).requires_grad_(True), 2023-01-11T21:57:16.0499661Z ok (0.025s) 2023-01-11T21:57:16.0499938Z test_gradcheck_sparse_bsr_input (__main__.TestAutograd) ... ok (0.019s) 2023-01-11T21:57:16.0500231Z test_gradcheck_sparse_csc_input (__main__.TestAutograd) ... ok (0.004s) 2023-01-11T21:57:16.0500520Z test_gradcheck_sparse_csr_input (__main__.TestAutograd) ... ok (0.003s) 2023-01-11T21:57:16.0500797Z test_gradcheck_sparse_input (__main__.TestAutograd) ... ok (0.004s) 2023-01-11T21:57:16.0501570Z test_gradcheck_test_outputs (__main__.TestAutograd) ... /opt/conda/lib/python3.10/site-packages/torch/autograd/gradcheck.py:688: UserWarning: Input #0 requires gradient and is not a double precision floating point or complex. This check will likely fail if all the inputs are not of double precision floating point or complex. 2023-01-11T21:57:16.0502045Z warnings.warn( 2023-01-11T21:57:16.0502203Z ok (0.002s) 2023-01-11T21:57:16.0502581Z test_gradcheck_undefined_grad (__main__.TestAutograd) ... ok (0.012s) 2023-01-11T21:57:16.0502996Z test_gradcheck_validates_input_mkldnn (__main__.TestAutograd) ... ok (0.008s) 2023-01-11T21:57:16.0503303Z test_gradcheck_validates_inputs (__main__.TestAutograd) ... ok (0.008s) 2023-01-11T21:57:16.0503573Z test_graph_save_on_cpu (__main__.TestAutograd) ... ok (0.007s) 2023-01-11T21:57:16.0503873Z test_graph_save_on_cpu_cuda (__main__.TestAutograd) ... skip: test requires CUDA (0.001s) 2023-01-11T21:57:16.0504169Z test_hessian_vector (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0504409Z test_hook_none (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0504664Z test_hook_with_no_name (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0504913Z test_hooks (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0505149Z test_hooks_cpp (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0505428Z test_index_backward_does_not_save_tensor (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0505704Z test_indexing (__main__.TestAutograd) ... ok (0.018s) 2023-01-11T21:57:16.0505968Z test_indexing_duplicates (__main__.TestAutograd) ... ok (0.003s) 2023-01-11T21:57:16.0506221Z test_inplace (__main__.TestAutograd) ... ok (0.022s) 2023-01-11T21:57:16.0506488Z test_inplace_not_requires_grad (__main__.TestAutograd) ... ok (0.010s) 2023-01-11T21:57:16.0506775Z test_inplace_on_view_backward (__main__.TestAutograd) ... ok (0.005s) 2023-01-11T21:57:16.0507049Z test_inplace_on_view_leaf_errors (__main__.TestAutograd) ... ok (0.003s) 2023-01-11T21:57:16.0507340Z test_inplace_on_view_saved_output (__main__.TestAutograd) ... 
ok (0.001s) 2023-01-11T21:57:16.0507631Z test_inplace_on_view_weak_grad_fn (__main__.TestAutograd) ... ok (0.092s) 2023-01-11T21:57:16.0507905Z test_input_buffer_accum (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0508160Z test_integer_outputs (__main__.TestAutograd) ... ok (0.003s) 2023-01-11T21:57:16.0508425Z test_invalid_gradients (__main__.TestAutograd) ... ok (0.005s) 2023-01-11T21:57:16.0508692Z test_isolated_node (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0508944Z test_leaf_assignment (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0509240Z test_legacy_function_deprecation_exception (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0510069Z test_lobpcg (__main__.TestAutograd) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/80338 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.002s) 2023-01-11T21:57:16.0510613Z test_mark_non_differentiable (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0510892Z test_mark_non_differentiable_mixed (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0511188Z test_mark_non_differentiable_none (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0511470Z test_materialize_grads (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0511792Z test_multi_backward (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0512050Z test_multi_backward_no_grad (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0512322Z test_multi_grad_hooks (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0513195Z test_named_tensor_for_complex_views (__main__.TestAutograd) ... /opt/conda/lib/python3.10/site-packages/torch/_tensor.py:1114: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /var/lib/jenkins/workspace/c10/core/TensorImpl.h:1816.) 2023-01-11T21:57:16.0513774Z return super(Tensor, self).refine_names(names) 2023-01-11T21:57:16.0513963Z ok (0.001s) 2023-01-11T21:57:16.0514207Z test_naughty_anomaly_access (__main__.TestAutograd) ... expected failure (0.002s) 2023-01-11T21:57:16.0514533Z test_naughty_autograd_function_attribute_access (__main__.TestAutograd) ... ok (0.010s) 2023-01-11T21:57:16.0514896Z test_naughty_autograd_function_stashing_ctx (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0515199Z test_nested_anomaly_detect_nan (__main__.TestAutograd) ... ok (0.004s) 2023-01-11T21:57:16.0515494Z test_nested_anomaly_printstack_cleanup (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0515767Z test_next_functions (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0516021Z test_no_grad (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0516275Z test_no_grad_assignment (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0516539Z test_no_grad_copy (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0517319Z test_no_grad_copy_sparse (__main__.TestAutograd) ... /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:2325: UserWarning: Argument order of nn.functional.embedding_bag was changed. Usage `embedding_bag(weight, input, ...)` is deprecated, and should now be `embedding_bag(input, weight, ...)`. 
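The embedding_bag warning above describes the changed argument order: the indices tensor now comes first and the weight matrix second. A small illustrative sketch of the newer calling convention (tensor sizes are invented for the example):

    import torch
    import torch.nn.functional as F

    weight = torch.randn(10, 3, requires_grad=True)   # embedding table: 10 rows of dim 3
    indices = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9])  # flat indices into the table
    offsets = torch.tensor([0, 4])                     # two bags: indices[0:4] and indices[4:8]

    # new order: embedding_bag(input, weight, offsets) rather than embedding_bag(weight, input, ...)
    out = F.embedding_bag(indices, weight, offsets)
    out.sum().backward()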
2023-01-11T21:57:16.0517792Z warnings.warn( 2023-01-11T21:57:16.0517964Z ok (0.005s) 2023-01-11T21:57:16.0518177Z test_no_grad_input (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0518437Z test_no_grad_modifies_version (__main__.TestAutograd) ... ok (0.006s) 2023-01-11T21:57:16.0518705Z test_no_grad_python_function (__main__.TestAutograd) 2023-01-11T21:57:16.0518966Z Python Functions should respect grad mode. ... ok (0.001s) 2023-01-11T21:57:16.0519228Z test_no_requires_grad_inplace (__main__.TestAutograd) ... ok (0.006s) 2023-01-11T21:57:16.0519508Z test_no_unnecessary_save (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0519790Z test_no_unnecessary_unwrapping (__main__.TestAutograd) ... ok (0.003s) 2023-01-11T21:57:16.0520061Z test_not_implemented_fwad (__main__.TestAutograd) ... ok (0.004s) 2023-01-11T21:57:16.0520334Z test_not_implemented_grad (__main__.TestAutograd) ... ok (0.006s) 2023-01-11T21:57:16.0520606Z test_numpy_requires_grad (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0520880Z test_once_differentiable (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0521169Z test_out_variant_raises_when_inputs_require_grad (__main__.TestAutograd) ... ok (0.006s) 2023-01-11T21:57:16.0521498Z test_pack_hook_with_inplace_modification_should_fail (__main__.TestAutograd) ... ok (0.007s) 2023-01-11T21:57:16.0521788Z test_pickle (__main__.TestAutograd) ... ok (0.007s) 2023-01-11T21:57:16.0522039Z test_pow_zero_tensor_gradient (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0522310Z test_power_function (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0522772Z test_profiler (__main__.TestAutograd) ... STAGE:2023-01-11 21:57:06 16151:16151 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:57:16.0523252Z STAGE:2023-01-11 21:57:06 16151:16151 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:57:16.0523694Z STAGE:2023-01-11 21:57:06 16151:16151 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:57:16.0524001Z ok (0.002s) 2023-01-11T21:57:16.0524231Z test_profiler_aggregation_fake (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0524722Z test_profiler_aggregation_lstm (__main__.TestAutograd) ... 
STAGE:2023-01-11 21:57:06 16151:16151 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:57:16.0525216Z STAGE:2023-01-11 21:57:06 16151:16151 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:57:16.0525665Z STAGE:2023-01-11 21:57:06 16151:16151 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:57:16.0525865Z 2023-01-11T21:57:16.0525990Z =================================================================================================================================================================== 2023-01-11T21:57:16.0526175Z TEST 2023-01-11T21:57:16.0526670Z ----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------------------------------------------ 2023-01-11T21:57:16.0527133Z Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls Input Shapes 2023-01-11T21:57:16.0527693Z ----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------------------------------------------ 2023-01-11T21:57:16.0528103Z aten::lstm 0.91% 198.000us 4.79% 1.042ms 1.042ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0528395Z aten::lstm 0.91% 198.000us 4.89% 1.064ms 1.064ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0528688Z aten::lstm 0.90% 197.000us 4.80% 1.045ms 1.045ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0528975Z aten::lstm 0.88% 191.000us 4.88% 1.063ms 1.063ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0529255Z aten::lstm 0.87% 189.000us 4.89% 1.065ms 1.065ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0529532Z aten::lstm 0.86% 188.000us 4.82% 1.049ms 1.049ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0529792Z aten::lstm 0.84% 183.000us 4.84% 1.053ms 1.053ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0530079Z aten::lstm 0.83% 181.000us 4.93% 1.074ms 1.074ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0530362Z aten::lstm 0.82% 178.000us 4.80% 1.045ms 1.045ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0530639Z aten::lstm 0.80% 175.000us 4.90% 1.067ms 1.067ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0531144Z ----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------------------------------------------ 2023-01-11T21:57:16.0531482Z Self CPU time total: 21.770ms 2023-01-11T21:57:16.0531603Z 2023-01-11T21:57:16.0531964Z ----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------------------------------------------ 2023-01-11T21:57:16.0532408Z Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls Input Shapes 2023-01-11T21:57:16.0532996Z ----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------------------------------------------ 2023-01-11T21:57:16.0533386Z aten::lstm 16.11% 3.508ms 97.67% 21.262ms 1.063ms 20 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0533673Z aten::addmm 10.04% 2.186ms 12.62% 2.747ms 13.735us 200 [[80], [3, 20], [20, 80], [], []] 2023-01-11T21:57:16.0533961Z aten::sigmoid_ 9.26% 2.016ms 9.26% 2.016ms 3.360us 600 [[3, 20]] 2023-01-11T21:57:16.0534246Z aten::mul 9.10% 1.981ms 9.10% 1.981ms 3.302us 600 [[3, 20], [3, 20]] 
2023-01-11T21:57:16.0534567Z aten::unsafe_split 6.29% 1.369ms 16.13% 3.512ms 17.560us 200 [[3, 80], [], []] 2023-01-11T21:57:16.0534866Z aten::slice 4.97% 1.083ms 5.06% 1.102ms 1.377us 800 [[3, 80], [], [], [], []] 2023-01-11T21:57:16.0535141Z aten::narrow 4.78% 1.040ms 9.39% 2.044ms 2.555us 800 [[3, 80], [], [], []] 2023-01-11T21:57:16.0535425Z aten::tanh_ 3.45% 752.000us 3.45% 752.000us 3.760us 200 [[3, 20]] 2023-01-11T21:57:16.0535704Z aten::tanh 3.16% 689.000us 3.16% 689.000us 3.445us 200 [[3, 20]] 2023-01-11T21:57:16.0535987Z aten::linear 3.03% 659.000us 18.97% 4.129ms 20.645us 200 [[3, 20], [80, 20], [80]] 2023-01-11T21:57:16.0536498Z ----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------------------------------------------ 2023-01-11T21:57:16.0536832Z Self CPU time total: 21.770ms 2023-01-11T21:57:16.0536952Z 2023-01-11T21:57:16.0537074Z =================================================================================================================================================================== 2023-01-11T21:57:16.0537271Z TEST 2023-01-11T21:57:16.0537481Z =================================================================================================================================================================== 2023-01-11T21:57:16.0537763Z This report only display top-level ops statistics 2023-01-11T21:57:16.0538267Z ----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------------------------------------------ 2023-01-11T21:57:16.0538715Z Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls Input Shapes 2023-01-11T21:57:16.0539260Z ----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------------------------------------------ 2023-01-11T21:57:16.0539648Z aten::lstm 0.91% 198.000us 4.79% 1.042ms 1.042ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0539932Z aten::lstm 0.91% 198.000us 4.89% 1.064ms 1.064ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0540215Z aten::lstm 0.90% 197.000us 4.80% 1.045ms 1.045ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0540528Z aten::lstm 0.88% 191.000us 4.88% 1.063ms 1.063ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0540802Z aten::lstm 0.87% 189.000us 4.89% 1.065ms 1.065ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0541066Z aten::lstm 0.86% 188.000us 4.82% 1.049ms 1.049ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0541339Z aten::lstm 0.84% 183.000us 4.84% 1.053ms 1.053ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0541621Z aten::lstm 0.83% 181.000us 4.93% 1.074ms 1.074ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0541932Z aten::lstm 0.82% 178.000us 4.80% 1.045ms 1.045ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0542214Z aten::lstm 0.80% 175.000us 4.90% 1.067ms 1.067ms 1 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0542848Z ----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------------------------------------------ 2023-01-11T21:57:16.0543201Z Self CPU time total: 21.770ms 2023-01-11T21:57:16.0543322Z 2023-01-11T21:57:16.0543588Z ERROR:2023-01-11 21:57:07 16151:16151 CudaDeviceProperties.cpp:27] cudaGetDeviceCount failed with code 35 
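Tables like the ones printed above are what the autograd profiler emits when results are grouped by operator and input shape. A hedged sketch of how such a report can be produced with the torch.autograd.profiler API (the small Linear model here is a stand-in, not the LSTM used by the test):

    import torch
    from torch.autograd import profiler

    x = torch.randn(128, 20)
    linear = torch.nn.Linear(20, 30)

    # record_shapes=True makes the "Input Shapes" column available
    with profiler.profile(record_shapes=True) as prof:
        y = linear(x)

    # group by (op name, input shapes) and print a table similar to the log output above
    print(prof.key_averages(group_by_input_shape=True).table(sort_by="self_cpu_time_total"))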
2023-01-11T21:57:16.0543903Z =================================================================================================================================================================== 2023-01-11T21:57:16.0544190Z This report only display top-level ops statistics 2023-01-11T21:57:16.0544694Z ----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------------------------------------------ 2023-01-11T21:57:16.0545140Z Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls Input Shapes 2023-01-11T21:57:16.0545692Z ----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------------------------------------------ 2023-01-11T21:57:16.0546091Z aten::lstm 16.11% 3.508ms 97.67% 21.262ms 1.063ms 20 [[5, 3, 10], [], [], [], [], [], [], [], []] 2023-01-11T21:57:16.0546362Z aten::randn 1.03% 224.000us 2.33% 508.000us 8.467us 60 [[], [], [], [], []] 2023-01-11T21:57:16.0546877Z ----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------------------------------------------ 2023-01-11T21:57:16.0547224Z Self CPU time total: 21.770ms 2023-01-11T21:57:16.0547345Z 2023-01-11T21:57:16.0547462Z Total time based on python measurements: 23.738ms 2023-01-11T21:57:16.0547704Z CPU time measurement python side overhead: 9.04% 2023-01-11T21:57:16.0547911Z ok (1.168s) 2023-01-11T21:57:16.0548128Z test_profiler_aggregation_table (__main__.TestAutograd) 2023-01-11T21:57:16.0548614Z Test if the profiling result is aggregated for `str(prof)` ... STAGE:2023-01-11 21:57:07 16151:16151 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:57:16.0549107Z STAGE:2023-01-11 21:57:07 16151:16151 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:57:16.0549562Z STAGE:2023-01-11 21:57:07 16151:16151 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:57:16.0549892Z ok (0.002s) 2023-01-11T21:57:16.0550110Z test_profiler_function_event_avg (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0550608Z test_profiler_propagation (__main__.TestAutograd) ... STAGE:2023-01-11 21:57:07 16151:16151 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:57:16.0551095Z STAGE:2023-01-11 21:57:07 16151:16151 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:57:16.0551532Z STAGE:2023-01-11 21:57:07 16151:16151 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:57:16.0551792Z ok (0.046s) 2023-01-11T21:57:16.0552213Z test_profiler_seq_nr (__main__.TestAutograd) ... 
STAGE:2023-01-11 21:57:07 16151:16151 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:57:16.0552691Z STAGE:2023-01-11 21:57:07 16151:16151 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:57:16.0553162Z STAGE:2023-01-11 21:57:07 16151:16151 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:57:16.0553692Z ------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ 2023-01-11T21:57:16.0554100Z Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls 2023-01-11T21:57:16.0554597Z ------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ 2023-01-11T21:57:16.0554959Z aten::add 15.73% 14.000us 15.73% 14.000us 14.000us 1 2023-01-11T21:57:16.0555241Z aten::sum 15.73% 14.000us 16.85% 15.000us 15.000us 1 2023-01-11T21:57:16.0555526Z aten::randn 8.99% 8.000us 20.22% 18.000us 9.000us 2 2023-01-11T21:57:16.0555810Z aten::normal_ 7.87% 7.000us 7.87% 7.000us 3.500us 2 2023-01-11T21:57:16.0556112Z torch::autograd::AccumulateGrad 7.87% 7.000us 19.10% 17.000us 8.500us 2 2023-01-11T21:57:16.0556428Z autograd::engine::evaluate_function: torch::autograd... 6.74% 6.000us 20.22% 18.000us 9.000us 2 2023-01-11T21:57:16.0556742Z aten::copy_ 6.74% 6.000us 6.74% 6.000us 3.000us 2 2023-01-11T21:57:16.0557019Z aten::expand 4.49% 4.000us 5.62% 5.000us 5.000us 1 2023-01-11T21:57:16.0557373Z aten::empty 3.37% 3.000us 3.37% 3.000us 1.500us 2 2023-01-11T21:57:16.0557648Z aten::empty_like 3.37% 3.000us 4.49% 4.000us 4.000us 1 2023-01-11T21:57:16.0557959Z autograd::engine::evaluate_function: SumBackward1 3.37% 3.000us 12.36% 11.000us 11.000us 1 2023-01-11T21:57:16.0558266Z SumBackward1 3.37% 3.000us 8.99% 8.000us 8.000us 1 2023-01-11T21:57:16.0558552Z aten::new_empty_strided 3.37% 3.000us 4.49% 4.000us 2.000us 2 2023-01-11T21:57:16.0558840Z aten::as_strided 2.25% 2.000us 2.25% 2.000us 1.000us 2 2023-01-11T21:57:16.0559114Z aten::ones_like 2.25% 2.000us 6.74% 6.000us 6.000us 1 2023-01-11T21:57:16.0559433Z aten::empty_strided 2.25% 2.000us 2.25% 2.000us 0.667us 3 2023-01-11T21:57:16.0559758Z autograd::engine::evaluate_function: AddBackward0 2.25% 2.000us 2.25% 2.000us 2.000us 1 2023-01-11T21:57:16.0560064Z aten::fill_ 0.00% 0.000us 0.00% 0.000us 0.000us 2 2023-01-11T21:57:16.0560326Z AddBackward0 0.00% 0.000us 0.00% 0.000us 0.000us 1 2023-01-11T21:57:16.0560807Z ------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ 2023-01-11T21:57:16.0561143Z Self CPU time total: 89.000us 2023-01-11T21:57:16.0561263Z 2023-01-11T21:57:16.0561335Z ok (0.004s) 2023-01-11T21:57:16.0561780Z test_profiler_shapes (__main__.TestAutograd) ... 
STAGE:2023-01-11 21:57:07 16151:16151 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:57:16.0562267Z STAGE:2023-01-11 21:57:07 16151:16151 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:57:16.0562719Z STAGE:2023-01-11 21:57:07 16151:16151 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:57:16.0562923Z 2023-01-11T21:57:16.0563260Z ---------------------- ------------ ------------ ------------ ------------ ------------ ------------ --------------------------------------- 2023-01-11T21:57:16.0563676Z Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls Input Shapes 2023-01-11T21:57:16.0564201Z ---------------------- ------------ ------------ ------------ ------------ ------------ ------------ --------------------------------------- 2023-01-11T21:57:16.0564594Z aten::linear 3.27% 5.000us 50.98% 78.000us 78.000us 1 [[128, 20], [30, 20], [30]] 2023-01-11T21:57:16.0564877Z aten::t 6.54% 10.000us 11.76% 18.000us 18.000us 1 [[30, 20]] 2023-01-11T21:57:16.0565161Z aten::transpose 3.92% 6.000us 3.92% 6.000us 6.000us 1 [[30, 20], [], []] 2023-01-11T21:57:16.0565441Z aten::as_strided 1.31% 2.000us 1.31% 2.000us 2.000us 1 [[30, 20], [], [], []] 2023-01-11T21:57:16.0565729Z aten::addmm 30.72% 47.000us 35.95% 55.000us 55.000us 1 [[30], [128, 20], [20, 30], [], []] 2023-01-11T21:57:16.0566015Z aten::expand 1.31% 2.000us 1.31% 2.000us 2.000us 1 [[30], [], []] 2023-01-11T21:57:16.0566303Z aten::as_strided 0.00% 0.000us 0.00% 0.000us 0.000us 1 [[30], [], [], []] 2023-01-11T21:57:16.0566589Z aten::copy_ 3.92% 6.000us 3.92% 6.000us 6.000us 1 [[128, 30], [128, 30], []] 2023-01-11T21:57:16.0566860Z aten::resolve_conj 0.00% 0.000us 0.00% 0.000us 0.000us 1 [[128, 30]] 2023-01-11T21:57:16.0567143Z aten::resolve_conj 0.00% 0.000us 0.00% 0.000us 0.000us 1 [[128, 20]] 2023-01-11T21:57:16.0567422Z aten::linear 1.96% 3.000us 49.02% 75.000us 75.000us 1 [[128, 30], [40, 30], [40]] 2023-01-11T21:57:16.0567701Z aten::t 1.96% 3.000us 3.27% 5.000us 5.000us 1 [[40, 30]] 2023-01-11T21:57:16.0568017Z aten::transpose 1.31% 2.000us 1.31% 2.000us 2.000us 1 [[40, 30], [], []] 2023-01-11T21:57:16.0568286Z aten::as_strided 0.00% 0.000us 0.00% 0.000us 0.000us 1 [[40, 30], [], [], []] 2023-01-11T21:57:16.0568569Z aten::addmm 40.52% 62.000us 43.79% 67.000us 67.000us 1 [[40], [128, 30], [30, 40], [], []] 2023-01-11T21:57:16.0568846Z aten::expand 0.65% 1.000us 0.65% 1.000us 1.000us 1 [[40], [], []] 2023-01-11T21:57:16.0569128Z aten::as_strided 0.00% 0.000us 0.00% 0.000us 0.000us 1 [[40], [], [], []] 2023-01-11T21:57:16.0569440Z aten::copy_ 2.61% 4.000us 2.61% 4.000us 4.000us 1 [[128, 40], [128, 40], []] 2023-01-11T21:57:16.0569717Z aten::resolve_conj 0.00% 0.000us 0.00% 0.000us 0.000us 1 [[128, 40]] 2023-01-11T21:57:16.0570003Z aten::resolve_conj 0.00% 0.000us 0.00% 0.000us 0.000us 1 [[128, 30]] 2023-01-11T21:57:16.0570485Z ---------------------- ------------ ------------ ------------ ------------ ------------ ------------ --------------------------------------- 2023-01-11T21:57:16.0570817Z Self CPU time total: 153.000us 2023-01-11T21:57:16.0570939Z 2023-01-11T21:57:16.0570995Z ok (0.003s) 2023-01-11T21:57:16.0571426Z test_profiler_unboxed_only (__main__.TestAutograd) ... 
STAGE:2023-01-11 21:57:07 16151:16151 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:57:16.0571924Z STAGE:2023-01-11 21:57:07 16151:16151 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:57:16.0572373Z STAGE:2023-01-11 21:57:07 16151:16151 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:57:16.0572622Z ok (0.001s) 2023-01-11T21:57:16.0572855Z test_pynode_destruction_deadlock (__main__.TestAutograd) ... ok (1.374s) 2023-01-11T21:57:16.0573340Z test_record_function (__main__.TestAutograd) ... STAGE:2023-01-11 21:57:09 16151:16151 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:57:16.0573817Z STAGE:2023-01-11 21:57:09 16151:16151 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:57:16.0574249Z STAGE:2023-01-11 21:57:09 16151:16151 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:57:16.0574686Z STAGE:2023-01-11 21:57:09 16151:16151 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:57:16.0575119Z STAGE:2023-01-11 21:57:09 16151:16151 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:57:16.0575549Z STAGE:2023-01-11 21:57:09 16151:16151 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:57:16.0575807Z ok (0.003s) 2023-01-11T21:57:16.0576240Z test_record_function_callbacks (__main__.TestAutograd) ... STAGE:2023-01-11 21:57:09 16151:16151 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:57:16.0576723Z STAGE:2023-01-11 21:57:09 16151:16151 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:57:16.0577153Z STAGE:2023-01-11 21:57:09 16151:16151 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:57:16.0577413Z ok (0.002s) 2023-01-11T21:57:16.0577845Z test_record_function_legacy (__main__.TestAutograd) ... STAGE:2023-01-11 21:57:09 16151:16151 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T21:57:16.0578328Z STAGE:2023-01-11 21:57:09 16151:16151 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T21:57:16.0578803Z STAGE:2023-01-11 21:57:09 16151:16151 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T21:57:16.0579063Z ok (0.002s) 2023-01-11T21:57:16.0579300Z test_record_function_multithreaded (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0579581Z test_reentrant_child_error (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0579856Z test_reentrant_priority (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0580151Z test_reentrant_with_callbacks_both_depths (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0580445Z test_reentrant_with_callbacks_depth_0 (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0580743Z test_reentrant_with_callbacks_depth_1 (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0581038Z test_reentrant_with_leaf_variable_hook (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0581388Z test_reentrant_with_non_leaf_variable_hook (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0581660Z test_requires_grad (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0581917Z test_requires_grad_ (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0582187Z test_requires_grad_inplace (__main__.TestAutograd) ... 
ok (0.001s) 2023-01-11T21:57:16.0582728Z test_retain_grad (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0582988Z test_retain_grad_cycle (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0583257Z test_retain_grad_inplace (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0583540Z test_retain_grad_inplace_over_view (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0583807Z test_return_duplicate (__main__.TestAutograd) ... ok (0.004s) 2023-01-11T21:57:16.0584082Z test_return_duplicate_inplace (__main__.TestAutograd) ... ok (0.005s) 2023-01-11T21:57:16.0584351Z test_return_leaf (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0584606Z test_return_leaf_inplace (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0584879Z test_save_none_for_backward (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0585159Z test_save_on_cpu_and_checkpoint (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0585418Z test_save_output_nr (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0585748Z test_saved_variable_packing_unpacking_did_not_save_original_with_default_hooks (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0586132Z test_saved_variable_packing_unpacking_did_not_save_original_with_hooks (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0586508Z test_saved_variable_packing_unpacking_saved_original_with_default_hooks (__main__.TestAutograd) ... ok (0.004s) 2023-01-11T21:57:16.0586864Z test_saved_variable_packing_unpacking_saved_original_with_hooks (__main__.TestAutograd) ... ok (0.012s) 2023-01-11T21:57:16.0587209Z test_saved_variable_saved_original_inplace_detach (__main__.TestAutograd) ... ok (0.006s) 2023-01-11T21:57:16.0587527Z test_saved_variable_version_counter (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0587821Z test_saved_variables_deprecated (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0588094Z test_saving_variable_to_disk (__main__.TestAutograd) ... ok (0.006s) 2023-01-11T21:57:16.0588368Z test_select_expanded_v (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0588624Z test_select_sum (__main__.TestAutograd) ... ok (0.003s) 2023-01-11T21:57:16.0588878Z test_set_data_preserve_pyobj (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0589161Z test_set_data_self_requires_grad (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0589448Z test_set_data_tensorimpl_type (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0589716Z test_set_grad_coroutines (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0590009Z test_set_grad_coroutines_benign_exceptions (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0590391Z test_set_grad_coroutines_critical_exceptions (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0590692Z test_set_grad_coroutines_exit (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0590955Z test_set_grad_enabled (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0591232Z test_set_grad_generator_functions (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0591536Z test_set_grad_generator_functions_recursive (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0591801Z test_setitem (__main__.TestAutograd) ... ok (0.013s) 2023-01-11T21:57:16.0592050Z test_setitem_mask (__main__.TestAutograd) ... ok (0.003s) 2023-01-11T21:57:16.0592356Z test_setting_default_saved_variable_hooks_twice_should_not_fail (__main__.TestAutograd) ... 
ok (0.001s) 2023-01-11T21:57:16.0592709Z test_setting_default_saved_variable_hooks_twice_should_use_inner (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0593032Z test_shape (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0593286Z test_sharded_grad (__main__.TestAutograd) ... ok (0.003s) 2023-01-11T21:57:16.0593545Z test_simple_reentrant (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0593796Z test_slice_expanded_v (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0594071Z test_sparse_gather_both_scalar (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0594347Z test_sparse_gather_dim0 (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0594612Z test_sparse_gather_dim1 (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0594873Z test_sparse_gather_dim_neg (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0595152Z test_sparse_gather_ind_scalar (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0595432Z test_sparse_gather_x_scalar (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0595692Z test_sparse_mm_backward (__main__.TestAutograd) ... ok (0.007s) 2023-01-11T21:57:16.0596151Z test_symeig_no_eigenvectors (__main__.TestAutograd) ... /var/lib/jenkins/workspace/test/test_autograd.py:4200: UserWarning: torch.symeig is deprecated in favor of torch.linalg.eigh and will be removed in a future PyTorch release. 2023-01-11T21:57:16.0596686Z The default behavior has changed from using the upper triangular portion of the matrix by default to using the lower triangular portion. 2023-01-11T21:57:16.0597005Z L, _ = torch.symeig(A, upper=upper) 2023-01-11T21:57:16.0597266Z should be replaced with 2023-01-11T21:57:16.0597576Z L = torch.linalg.eigvalsh(A, UPLO='U' if upper else 'L') 2023-01-11T21:57:16.0597788Z and 2023-01-11T21:57:16.0597961Z L, V = torch.symeig(A, eigenvectors=True) 2023-01-11T21:57:16.0598169Z should be replaced with 2023-01-11T21:57:16.0598633Z L, V = torch.linalg.eigh(A, UPLO='U' if upper else 'L') (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/BatchLinearAlgebra.cpp:2910.) 2023-01-11T21:57:16.0598985Z w, v = torch.symeig(A, eigenvectors=False) 2023-01-11T21:57:16.0599190Z ok (0.006s) 2023-01-11T21:57:16.0599413Z test_tensor_grad_warnings (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0599684Z test_thread_shutdown (__main__.TestAutograd) ... ok (1.357s) 2023-01-11T21:57:16.0599940Z test_to_sparse_backward (__main__.TestAutograd) ... ok (0.046s) 2023-01-11T21:57:16.0600209Z test_too_many_grads (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0600474Z test_type_conversions (__main__.TestAutograd) ... ok (0.004s) 2023-01-11T21:57:16.0600727Z test_unrelated_inputs (__main__.TestAutograd) ... ok (0.005s) 2023-01-11T21:57:16.0600988Z test_unused_output (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0601261Z test_var_mean_differentiable (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0601525Z test_variable_traverse (__main__.TestAutograd) ... ok (0.104s) 2023-01-11T21:57:16.0601789Z test_version_counter (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0602054Z test_view_func_replay (__main__.TestAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0602365Z test_volatile_deprecated (__main__.TestAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0602634Z test_will_engine_execute_node (__main__.TestAutograd) ... ok (0.006s) 2023-01-11T21:57:16.0602936Z test_wrapped_number_saved_variable_hooks (__main__.TestAutograd) ... 
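The deprecation warning above spells out the replacement for torch.symeig. For reference, a minimal sketch of the suggested migration to torch.linalg.eigvalsh / torch.linalg.eigh (the matrix below is just a random symmetric example):

    import torch

    A = torch.randn(4, 4, dtype=torch.double)
    A = A + A.T  # symmetrize so the eigendecomposition is well defined

    # eigenvalues only: replacement for torch.symeig(A, eigenvectors=False)
    L = torch.linalg.eigvalsh(A, UPLO='L')

    # eigenvalues and eigenvectors: replacement for torch.symeig(A, eigenvectors=True)
    L, V = torch.linalg.eigh(A, UPLO='L')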
ok (0.001s) 2023-01-11T21:57:16.0603279Z test_view_func_for_complex_views (autograd.test_complex.TestAutogradComplex) ... ok (0.003s) 2023-01-11T21:57:16.0603628Z test_view_with_multi_output (autograd.test_complex.TestAutogradComplex) ... ok (0.007s) 2023-01-11T21:57:16.0604314Z test_advanced_packing_unpacking (__main__.TestAutogradForwardMode) ... /var/lib/jenkins/workspace/test/test_autograd.py:8092: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:57:16.0604993Z self.assertEqual(dual.storage().data_ptr(), foo.storage().data_ptr()) 2023-01-11T21:57:16.0605569Z /var/lib/jenkins/workspace/test/test_autograd.py:8101: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:57:16.0606134Z self.assertEqual(dual_primal.storage().data_ptr(), foo.storage().data_ptr()) 2023-01-11T21:57:16.0606700Z /var/lib/jenkins/workspace/test/test_autograd.py:8102: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:57:16.0607260Z self.assertEqual(dual_tangent.storage().data_ptr(), bar.storage().data_ptr()) 2023-01-11T21:57:16.0607504Z ok (0.004s) 2023-01-11T21:57:16.0607758Z test_backward_graph_destruction (__main__.TestAutogradForwardMode) ... ok (0.001s) 2023-01-11T21:57:16.0608080Z test_basic_packing_unpacking (__main__.TestAutogradForwardMode) ... ok (0.001s) 2023-01-11T21:57:16.0608424Z test_codegen_ignores_undefined_outputs (__main__.TestAutogradForwardMode) ... ok (0.007s) 2023-01-11T21:57:16.0609107Z test_create_new_zeros_with_same_meta (__main__.TestAutogradForwardMode) ... /var/lib/jenkins/workspace/test/test_autograd.py:8351: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:57:16.0609747Z self.assertEqual(len(result.storage()), len(target.storage()) * prod_of_t_bdims) 2023-01-11T21:57:16.0609978Z ok (0.007s) 2023-01-11T21:57:16.0610214Z test_default_level (__main__.TestAutogradForwardMode) ... ok (0.001s) 2023-01-11T21:57:16.0610530Z test_detach_view_tracking (__main__.TestAutogradForwardMode) ... ok (0.001s) 2023-01-11T21:57:16.0610853Z test_forward_level_cleanup (__main__.TestAutogradForwardMode) ... ok (0.001s) 2023-01-11T21:57:16.0611155Z test_fwd_grad_enabled (__main__.TestAutogradForwardMode) ... ok (0.001s) 2023-01-11T21:57:16.0611457Z test_grad_cleanup (__main__.TestAutogradForwardMode) ... ok (0.001s) 2023-01-11T21:57:16.0611777Z test_make_dual_forbid_integral_dtype (__main__.TestAutogradForwardMode) ... ok (0.001s) 2023-01-11T21:57:16.0612125Z test_make_dual_inference_tensor_in_inference_mode (__main__.TestAutogradForwardMode) ... 
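The TypedStorage deprecation warnings above point to tensor.untyped_storage() as the forward-compatible way to reach the underlying storage. A tiny sketch of that substitution, assuming a build where untyped_storage() is available (as the warning text implies):

    import torch

    t = torch.randn(4)

    # old spelling: emits the TypedStorage deprecation warning seen in the log
    old_ptr = t.storage().data_ptr()

    # replacement suggested by the warning text
    new_ptr = t.untyped_storage().data_ptr()

    assert old_ptr == new_ptr  # both refer to the same underlying allocation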
ok (0.001s) 2023-01-11T21:57:16.0612470Z test_make_dual_torch_dispatch (__main__.TestAutogradForwardMode) ... ok (0.002s) 2023-01-11T21:57:16.0612796Z test_metadata_check_check_conj (__main__.TestAutogradForwardMode) ... ok (0.002s) 2023-01-11T21:57:16.0613181Z test_metadata_check_checks_ignores_size_zero (__main__.TestAutogradForwardMode) ... ok (0.003s) 2023-01-11T21:57:16.0613853Z test_metadata_check_checks_storage_numel (__main__.TestAutogradForwardMode) ... /var/lib/jenkins/workspace/test/test_autograd.py:7790: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:57:16.0614460Z self.assertEqual(len(primal.storage()), 5) 2023-01-11T21:57:16.0614663Z ok (0.001s) 2023-01-11T21:57:16.0614948Z test_metadata_check_ignore_storage_offset_for_zero_numel_tensor (__main__.TestAutogradForwardMode) ... ok (0.001s) 2023-01-11T21:57:16.0615667Z test_metadata_check_when_primal_has_conj_bit (__main__.TestAutogradForwardMode) ... /var/lib/jenkins/workspace/test/test_autograd.py:7826: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:57:16.0616274Z self.assertEqual(len(a.storage()), len(b.storage())) 2023-01-11T21:57:16.0616480Z ok (0.001s) 2023-01-11T21:57:16.0617074Z test_metadata_check_when_primal_has_neg_bit (__main__.TestAutogradForwardMode) ... /var/lib/jenkins/workspace/test/test_autograd.py:7841: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:57:16.0617675Z self.assertEqual(len(a.storage()), len(b.storage())) 2023-01-11T21:57:16.0617866Z ok (0.001s) 2023-01-11T21:57:16.0618099Z test_nested_level (__main__.TestAutogradForwardMode) ... ok (0.002s) 2023-01-11T21:57:16.0618408Z test_non_differentiable (__main__.TestAutogradForwardMode) ... ok (0.001s) 2023-01-11T21:57:16.0618702Z test_out_variant (__main__.TestAutogradForwardMode) ... ok (0.004s) 2023-01-11T21:57:16.0618988Z test_print (__main__.TestAutogradForwardMode) ... ok (0.002s) 2023-01-11T21:57:16.0619312Z test_set_fw_grad_having_own_fw_grad_at_same_level (__main__.TestAutogradForwardMode) ... ok (0.004s) 2023-01-11T21:57:16.0619647Z test_set_fwd_grad_enabled (__main__.TestAutogradForwardMode) ... ok (0.001s) 2023-01-11T21:57:16.0619937Z test_size_check (__main__.TestAutogradForwardMode) ... ok (0.004s) 2023-01-11T21:57:16.0620256Z test_view_inplace_always_creates_a_view (__main__.TestAutogradForwardMode) ... ok (0.003s) 2023-01-11T21:57:16.0620599Z test_view_inplace_differentiable_views (__main__.TestAutogradForwardMode) ... ok (0.002s) 2023-01-11T21:57:16.0620937Z test_view_inplace_non_differentiable_views (__main__.TestAutogradForwardMode) ... ok (0.002s) 2023-01-11T21:57:16.0621313Z test_inplace_on_view_not_same_layout (__main__.TestAutogradForwardModeBatchedGrad) ... ok (0.001s) 2023-01-11T21:57:16.0621698Z test_inplace_on_view_same_layout (__main__.TestAutogradForwardModeBatchedGrad) ... 
ok (0.001s) 2023-01-11T21:57:16.0622602Z test_metadata_check_for_storage_numel_skipped (__main__.TestAutogradForwardModeBatchedGrad) ... /var/lib/jenkins/workspace/test/test_autograd.py:7716: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:57:16.0623248Z self.assertEqual(len(primal.storage()), 5) 2023-01-11T21:57:16.0623439Z ok (0.004s) 2023-01-11T21:57:16.0623704Z test_out_of_place_basic (__main__.TestAutogradForwardModeBatchedGrad) ... ok (0.007s) 2023-01-11T21:57:16.0624079Z test_out_of_place_not_same_layout (__main__.TestAutogradForwardModeBatchedGrad) ... ok (0.001s) 2023-01-11T21:57:16.0624525Z test_construct_standard_basis_for_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.004s) 2023-01-11T21:57:16.0624973Z test_construct_standard_basis_for_cuda_base_tensor (autograd.test_functional.TestAutogradFunctional) ... skip: test requires CUDA (0.000s) 2023-01-11T21:57:16.0625446Z test_construct_standard_basis_for_cuda_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... skip: test requires CUDA (0.000s) 2023-01-11T21:57:16.0625893Z test_construct_standard_basis_for_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.011s) 2023-01-11T21:57:16.0626311Z test_hessian_create_graph_vectorize_False_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.135s) 2023-01-11T21:57:16.0626754Z test_hessian_create_graph_vectorize_False_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.813s) 2023-01-11T21:57:16.0627234Z test_hessian_create_graph_vectorize_True_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.071s) 2023-01-11T21:57:16.0627672Z test_hessian_create_graph_vectorize_True_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.363s) 2023-01-11T21:57:16.0628079Z test_hessian_err_check_strict_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.003s) 2023-01-11T21:57:16.0628490Z test_hessian_err_check_strict_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.008s) 2023-01-11T21:57:16.0628908Z test_hessian_err_check_strict_vectorize_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.001s) 2023-01-11T21:57:16.0629340Z test_hessian_err_check_strict_vectorize_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0629759Z test_hessian_err_check_vectorize_False_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.004s) 2023-01-11T21:57:16.0630195Z test_hessian_err_check_vectorize_False_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.014s) 2023-01-11T21:57:16.0630625Z test_hessian_err_check_vectorize_True_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.003s) 2023-01-11T21:57:16.0631049Z test_hessian_err_check_vectorize_True_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.007s) 2023-01-11T21:57:16.0631444Z test_hessian_match_vhp_hvp_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0631844Z test_hessian_match_vhp_hvp_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... 
ok (0.009s) 2023-01-11T21:57:16.0632239Z test_hessian_no_grad_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.003s) 2023-01-11T21:57:16.0632617Z test_hessian_no_grad_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.009s) 2023-01-11T21:57:16.0633008Z test_hessian_output_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.003s) 2023-01-11T21:57:16.0633396Z test_hessian_output_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.012s) 2023-01-11T21:57:16.0633795Z test_hessian_output_vectorized_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.003s) 2023-01-11T21:57:16.0634198Z test_hessian_output_vectorized_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.006s) 2023-01-11T21:57:16.0634615Z test_hessian_scalar_vectorize_False_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0635034Z test_hessian_scalar_vectorize_False_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.008s) 2023-01-11T21:57:16.0635455Z test_hessian_scalar_vectorize_True_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0635868Z test_hessian_scalar_vectorize_True_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.005s) 2023-01-11T21:57:16.0636331Z test_hessian_vectorize_correctness_multi_input_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.014s) 2023-01-11T21:57:16.0636772Z test_hessian_vectorize_correctness_multi_input_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.106s) 2023-01-11T21:57:16.0637278Z test_hessian_vectorize_correctness_simple_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.006s) 2023-01-11T21:57:16.0637705Z test_hessian_vectorize_correctness_simple_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.031s) 2023-01-11T21:57:16.0638155Z test_hessian_vectorize_correctness_unrelated_outputs_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.005s) 2023-01-11T21:57:16.0638616Z test_hessian_vectorize_correctness_unrelated_outputs_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.028s) 2023-01-11T21:57:16.0639102Z test_hessian_vectorize_raises_no_warnings_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.001s) 2023-01-11T21:57:16.0639526Z test_hessian_vectorize_raises_no_warnings_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.003s) 2023-01-11T21:57:16.0639941Z test_hvp_create_graph_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.086s) 2023-01-11T21:57:16.0640337Z test_hvp_create_graph_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.451s) 2023-01-11T21:57:16.0640732Z test_hvp_err_check_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.004s) 2023-01-11T21:57:16.0641169Z test_hvp_err_check_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.008s) 2023-01-11T21:57:16.0641591Z test_hvp_err_check_strict_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0641997Z test_hvp_err_check_strict_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.005s) 2023-01-11T21:57:16.0642379Z test_hvp_no_grad_base_tensor (autograd.test_functional.TestAutogradFunctional) ... 
ok (0.002s) 2023-01-11T21:57:16.0642760Z test_hvp_no_grad_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.006s) 2023-01-11T21:57:16.0643137Z test_hvp_output_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0643515Z test_hvp_output_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.005s) 2023-01-11T21:57:16.0643881Z test_hvp_scalar_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0644258Z test_hvp_scalar_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.006s) 2023-01-11T21:57:16.0644671Z test_jacobian_create_graph_vectorize_False_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.058s) 2023-01-11T21:57:16.0645115Z test_jacobian_create_graph_vectorize_False_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.326s) 2023-01-11T21:57:16.0645545Z test_jacobian_create_graph_vectorize_True_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.046s) 2023-01-11T21:57:16.0645980Z test_jacobian_create_graph_vectorize_True_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.233s) 2023-01-11T21:57:16.0646408Z test_jacobian_err_check_strict_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.003s) 2023-01-11T21:57:16.0646817Z test_jacobian_err_check_strict_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.007s) 2023-01-11T21:57:16.0647222Z test_jacobian_err_check_strict_vectorize_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.001s) 2023-01-11T21:57:16.0647651Z test_jacobian_err_check_strict_vectorize_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.001s) 2023-01-11T21:57:16.0648125Z test_jacobian_err_check_vectorize_False_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.003s) 2023-01-11T21:57:16.0648555Z test_jacobian_err_check_vectorize_False_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.007s) 2023-01-11T21:57:16.0648966Z test_jacobian_err_check_vectorize_True_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0649391Z test_jacobian_err_check_vectorize_True_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.004s) 2023-01-11T21:57:16.0649806Z test_jacobian_match_vjp_jvp_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0650199Z test_jacobian_match_vjp_jvp_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.007s) 2023-01-11T21:57:16.0650595Z test_jacobian_no_grad_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0651019Z test_jacobian_no_grad_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.006s) 2023-01-11T21:57:16.0651417Z test_jacobian_output_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.003s) 2023-01-11T21:57:16.0651799Z test_jacobian_output_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.009s) 2023-01-11T21:57:16.0652204Z test_jacobian_output_vectorized_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.003s) 2023-01-11T21:57:16.0652620Z test_jacobian_output_vectorized_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... 
ok (0.006s) 2023-01-11T21:57:16.0653023Z test_jacobian_scalar_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.001s) 2023-01-11T21:57:16.0653403Z test_jacobian_scalar_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.004s) 2023-01-11T21:57:16.0653813Z test_jacobian_scalar_vectorized_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.001s) 2023-01-11T21:57:16.0654232Z test_jacobian_scalar_vectorized_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.003s) 2023-01-11T21:57:16.0654698Z test_jacobian_vectorize_correctness_different_devices_base_tensor (autograd.test_functional.TestAutogradFunctional) ... skip: test requires CUDA (0.000s) 2023-01-11T21:57:16.0655194Z test_jacobian_vectorize_correctness_different_devices_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... skip: test requires CUDA (0.000s) 2023-01-11T21:57:16.0655680Z test_jacobian_vectorize_correctness_different_dtype_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0656137Z test_jacobian_vectorize_correctness_different_dtype_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.009s) 2023-01-11T21:57:16.0656589Z test_jacobian_vectorize_correctness_multi_input_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.003s) 2023-01-11T21:57:16.0657031Z test_jacobian_vectorize_correctness_multi_input_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.017s) 2023-01-11T21:57:16.0657490Z test_jacobian_vectorize_correctness_multi_input_multi_output_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.008s) 2023-01-11T21:57:16.0657959Z test_jacobian_vectorize_correctness_multi_input_multi_output_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.057s) 2023-01-11T21:57:16.0658413Z test_jacobian_vectorize_correctness_simple_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.004s) 2023-01-11T21:57:16.0658839Z test_jacobian_vectorize_correctness_simple_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.019s) 2023-01-11T21:57:16.0659290Z test_jacobian_vectorize_correctness_unrelated_outputs_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.003s) 2023-01-11T21:57:16.0659756Z test_jacobian_vectorize_correctness_unrelated_outputs_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.021s) 2023-01-11T21:57:16.0660239Z test_jacobian_vectorize_correctness_zero_dim_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.006s) 2023-01-11T21:57:16.0660670Z test_jacobian_vectorize_correctness_zero_dim_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.033s) 2023-01-11T21:57:16.0661107Z test_jacobian_vectorize_raises_no_warnings_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.001s) 2023-01-11T21:57:16.0661544Z test_jacobian_vectorize_raises_no_warnings_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0661961Z test_jvp_create_graph_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.060s) 2023-01-11T21:57:16.0665136Z test_jvp_create_graph_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.317s) 2023-01-11T21:57:16.0665632Z test_jvp_err_check_base_tensor (autograd.test_functional.TestAutogradFunctional) ... 
ok (0.002s) 2023-01-11T21:57:16.0666024Z test_jvp_err_check_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.003s) 2023-01-11T21:57:16.0666418Z test_jvp_err_check_strict_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0666806Z test_jvp_err_check_strict_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.005s) 2023-01-11T21:57:16.0667195Z test_jvp_no_grad_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.001s) 2023-01-11T21:57:16.0667578Z test_jvp_no_grad_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0667950Z test_jvp_output_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0668337Z test_jvp_output_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.005s) 2023-01-11T21:57:16.0668727Z test_jvp_scalar_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0669106Z test_jvp_scalar_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.004s) 2023-01-11T21:57:16.0669481Z test_vhp_create_graph_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.077s) 2023-01-11T21:57:16.0669877Z test_vhp_create_graph_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.415s) 2023-01-11T21:57:16.0670267Z test_vhp_err_check_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.003s) 2023-01-11T21:57:16.0670651Z test_vhp_err_check_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.005s) 2023-01-11T21:57:16.0671033Z test_vhp_err_check_strict_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0671435Z test_vhp_err_check_strict_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.005s) 2023-01-11T21:57:16.0671831Z test_vhp_no_grad_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0672200Z test_vhp_no_grad_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.006s) 2023-01-11T21:57:16.0672585Z test_vhp_output_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0672971Z test_vhp_output_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.004s) 2023-01-11T21:57:16.0673355Z test_vhp_scalar_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.001s) 2023-01-11T21:57:16.0673723Z test_vhp_scalar_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.003s) 2023-01-11T21:57:16.0674118Z test_vjp_create_graph_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.054s) 2023-01-11T21:57:16.0674511Z test_vjp_create_graph_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.282s) 2023-01-11T21:57:16.0674943Z test_vjp_err_check_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0675316Z test_vjp_err_check_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.003s) 2023-01-11T21:57:16.0675713Z test_vjp_err_check_strict_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0676116Z test_vjp_err_check_strict_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... 
ok (0.004s) 2023-01-11T21:57:16.0676492Z test_vjp_no_grad_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0676878Z test_vjp_no_grad_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.005s) 2023-01-11T21:57:16.0677329Z test_vjp_output_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.002s) 2023-01-11T21:57:16.0677747Z test_vjp_output_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.004s) 2023-01-11T21:57:16.0678118Z test_vjp_scalar_base_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.001s) 2023-01-11T21:57:16.0678499Z test_vjp_scalar_logging_tensor (autograd.test_functional.TestAutogradFunctional) ... ok (0.003s) 2023-01-11T21:57:16.0678867Z test_inference_mode_context_manager (__main__.TestAutogradInferenceMode) ... ok (0.001s) 2023-01-11T21:57:16.0679209Z test_inference_mode_decorator (__main__.TestAutogradInferenceMode) ... ok (0.001s) 2023-01-11T21:57:16.0679552Z test_inference_mode_existing_autograd_session (__main__.TestAutogradInferenceMode) ... ok (0.006s) 2023-01-11T21:57:16.0679923Z test_inference_mode_handle_direct_view_on_rebase (__main__.TestAutogradInferenceMode) ... ok (0.007s) 2023-01-11T21:57:16.0680295Z test_inference_mode_handle_indirect_view_on_rebase (__main__.TestAutogradInferenceMode) ... ok (0.005s) 2023-01-11T21:57:16.0680667Z test_inference_mode_inf_tensor_in_inf_mode_functional_op (__main__.TestAutogradInferenceMode) ... ok (0.001s) 2023-01-11T21:57:16.0681051Z test_inference_mode_inf_tensor_in_inf_mode_inplace_op (__main__.TestAutogradInferenceMode) ... ok (0.001s) 2023-01-11T21:57:16.0681423Z test_inference_mode_inf_tensor_in_inf_mode_view_op (__main__.TestAutogradInferenceMode) ... ok (0.001s) 2023-01-11T21:57:16.0681808Z test_inference_mode_inf_tensor_in_normal_mode_functional_op (__main__.TestAutogradInferenceMode) ... ok (0.001s) 2023-01-11T21:57:16.0682188Z test_inference_mode_inf_tensor_in_normal_mode_inplace_op (__main__.TestAutogradInferenceMode) ... ok (0.007s) 2023-01-11T21:57:16.0682564Z test_inference_mode_inf_tensor_in_normal_mode_view_op (__main__.TestAutogradInferenceMode) ... ok (0.001s) 2023-01-11T21:57:16.0682924Z test_inference_mode_tensor_creation (__main__.TestAutogradInferenceMode) ... ok (0.001s) 2023-01-11T21:57:16.0683284Z test_mix_inference_and_normal_tensor_functional_op (__main__.TestAutogradInferenceMode) ... ok (0.004s) 2023-01-11T21:57:16.0683645Z test_mix_inference_and_normal_tensor_inplace_op (__main__.TestAutogradInferenceMode) ... ok (0.009s) 2023-01-11T21:57:16.0684013Z test_mix_inference_and_normal_tensor_view_op (__main__.TestAutogradInferenceMode) ... ok (0.001s) 2023-01-11T21:57:16.0684383Z test_normal_tensor_inplace_output_in_inference_mode (__main__.TestAutogradInferenceMode) ... ok (0.001s) 2023-01-11T21:57:16.0684758Z test_normal_tensor_inplace_output_in_normal_mode (__main__.TestAutogradInferenceMode) ... ok (0.001s) 2023-01-11T21:57:16.0685113Z test_normal_tensor_view_output_in_inference_mode (__main__.TestAutogradInferenceMode) ... ok (0.001s) 2023-01-11T21:57:16.0685476Z test_normal_tensor_view_output_in_normal_mode (__main__.TestAutogradInferenceMode) ... ok (0.004s) 2023-01-11T21:57:16.0685804Z test_cat_stack_r_to_c (__main__.TestMultithreadAutograd) ... ok (0.106s) 2023-01-11T21:57:16.0686116Z test_dataparallel_saved_tensors_hooks (__main__.TestMultithreadAutograd) ... ok (0.002s) 2023-01-11T21:57:16.0686441Z test_fork_join_in_middle (__main__.TestMultithreadAutograd) ... 
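The TestAutogradInferenceMode block above targets torch.inference_mode. A minimal sketch of the context-manager and decorator forms those tests exercise:

    import torch

    # Tensors created here carry no autograd metadata or version counters,
    # so they are cheaper to create but cannot be saved for a later backward.
    with torch.inference_mode():
        x = torch.ones(2, 2)
        y = x * 2                # functional ops on inference tensors are fine

    # The same API also works as a decorator.
    @torch.inference_mode()
    def predict(model, inp):
        return model(inp)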
ok (0.018s) 2023-01-11T21:57:16.0686778Z test_multi_grad_hooks (__main__.TestMultithreadAutograd) ... ok (0.005s) 2023-01-11T21:57:16.0687114Z test_multithreaded_exception_propagation (__main__.TestMultithreadAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0687438Z test_preserve_backtrace (__main__.TestMultithreadAutograd) ... ok (0.001s) 2023-01-11T21:57:16.0687755Z test_python_thread_in_middle (__main__.TestMultithreadAutograd) ... ok (0.021s) 2023-01-11T21:57:16.0688069Z test_simple_backward (__main__.TestMultithreadAutograd) ... ok (0.006s) 2023-01-11T21:57:16.0688374Z test_simple_backward_same_input (__main__.TestMultithreadAutograd) ... ok (0.009s) 2023-01-11T21:57:16.0688558Z 2023-01-11T21:57:16.0688798Z ---------------------------------------------------------------------- 2023-01-11T21:57:16.0689044Z Ran 478 tests in 16.842s 2023-01-11T21:57:16.0689158Z 2023-01-11T21:57:16.0689252Z OK (skipped=11, expected failures=1) 2023-01-11T21:57:16.0689373Z 2023-01-11T21:57:16.0689488Z Generating XML reports... 2023-01-11T21:57:16.0689929Z Generated XML report: test-reports/python-unittest/test_autograd/TEST-TestAllowMutationOnSaved-20230111215658.xml 2023-01-11T21:57:16.0690451Z Generated XML report: test-reports/python-unittest/test_autograd/TEST-TestAutograd-20230111215658.xml 2023-01-11T21:57:16.0690997Z Generated XML report: test-reports/python-unittest/test_autograd/TEST-autograd.test_complex.TestAutogradComplex-20230111215658.xml 2023-01-11T21:57:16.0691576Z Generated XML report: test-reports/python-unittest/test_autograd/TEST-TestAutogradForwardMode-20230111215658.xml 2023-01-11T21:57:16.0692169Z Generated XML report: test-reports/python-unittest/test_autograd/TEST-TestAutogradForwardModeBatchedGrad-20230111215658.xml 2023-01-11T21:57:16.0692803Z Generated XML report: test-reports/python-unittest/test_autograd/TEST-autograd.test_functional.TestAutogradFunctional-20230111215658.xml 2023-01-11T21:57:16.0693385Z Generated XML report: test-reports/python-unittest/test_autograd/TEST-TestAutogradInferenceMode-20230111215658.xml 2023-01-11T21:57:16.0693938Z Generated XML report: test-reports/python-unittest/test_autograd/TEST-TestMultithreadAutograd-20230111215658.xml 2023-01-11T21:57:16.0694183Z 2023-01-11T21:57:16.0694529Z ##[endgroup] 2023-01-11T21:57:16.0694904Z FINISHED PRINTING LOG FILE of test_autograd (/var/lib/jenkins/workspace/test/test-reports/test_autograd_gulrsvtr) 2023-01-11T21:57:16.0695105Z 2023-01-11T21:57:16.0695283Z Running test_cpp_extensions_jit ... [2023-01-11 21:57:16.040753] 2023-01-11T21:57:16.0695769Z Executing ['/opt/conda/bin/python', '-bb', 'test_cpp_extensions_jit.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:57:16.041004] 2023-01-11T21:57:52.5003526Z 2023-01-11T21:57:52.5004186Z Expand the folded group to see the log file of test_cpp_extensions_jit 2023-01-11T21:57:52.5005003Z ##[group]PRINTING LOG FILE of test_cpp_extensions_jit (/var/lib/jenkins/workspace/test/test-reports/test_cpp_extensions_jit_jy4nl2_6) 2023-01-11T21:57:52.5005566Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:57:52.5005722Z 2023-01-11T21:57:52.5005799Z Running tests... 2023-01-11T21:57:52.5006114Z ---------------------------------------------------------------------- 2023-01-11T21:57:52.5007649Z Test results will be stored in test-reports/python-unittest/test_cpp_extensions_jit 2023-01-11T21:57:52.5007975Z test_autograd_from_cpp (__main__.TestCppExtensionJIT) ... 
ok (1.546s) 2023-01-11T21:57:52.5008298Z test_compilation_error_formatting (__main__.TestCppExtensionJIT) ... ok (13.744s) 2023-01-11T21:57:52.5008732Z test_cpp_frontend_module_has_same_output_as_python (__main__.TestCppExtensionJIT) ... Using /var/lib/jenkins/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... 2023-01-11T21:57:52.5009179Z Creating extension directory /var/lib/jenkins/.cache/torch_extensions/py310_cu117/cpp_frontend_extension... 2023-01-11T21:57:52.5009605Z Emitting ninja build file /var/lib/jenkins/.cache/torch_extensions/py310_cu117/cpp_frontend_extension/build.ninja... 2023-01-11T21:57:52.5010109Z Building extension module cpp_frontend_extension... 2023-01-11T21:57:52.5010372Z Using envvar MAX_JOBS (6) as the number of workers... 2023-01-11T21:57:52.5012383Z [1/2] c++ -MMD -MF cpp_frontend_extension.o.d -DTORCH_EXTENSION_NAME=cpp_frontend_extension -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /opt/conda/lib/python3.10/site-packages/torch/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -c /var/lib/jenkins/workspace/test/cpp_extensions/cpp_frontend_extension.cpp -o cpp_frontend_extension.o 2023-01-11T21:57:52.5015130Z [2/2] c++ cpp_frontend_extension.o -shared -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o cpp_frontend_extension.so 2023-01-11T21:57:52.5015825Z Loading extension module cpp_frontend_extension... 2023-01-11T21:57:52.5016205Z ok (1.699s) 2023-01-11T21:57:52.5016825Z test_cpp_frontend_module_has_up_to_date_attributes (__main__.TestCppExtensionJIT) ... Using /var/lib/jenkins/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... 2023-01-11T21:57:52.5017767Z No modifications detected for re-loaded extension module cpp_frontend_extension, skipping build step... 2023-01-11T21:57:52.5018295Z Loading extension module cpp_frontend_extension... 2023-01-11T21:57:52.5034545Z ok (0.002s) 2023-01-11T21:57:52.5035142Z test_cpp_frontend_module_python_inter_op (__main__.TestCppExtensionJIT) ... Using /var/lib/jenkins/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... 2023-01-11T21:57:52.5036110Z No modifications detected for re-loaded extension module cpp_frontend_extension, skipping build step... 2023-01-11T21:57:52.5036704Z Loading extension module cpp_frontend_extension... 2023-01-11T21:57:52.5037054Z ok (0.008s) 2023-01-11T21:57:52.5037628Z test_cpp_frontend_module_python_inter_op_with_cuda (__main__.TestCppExtensionJIT) ... skip: CUDA not found (0.001s) 2023-01-11T21:57:52.5038390Z test_custom_compound_op_autograd (__main__.TestCppExtensionJIT) ... Using /var/lib/jenkins/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... 2023-01-11T21:57:52.5039136Z Creating extension directory /var/lib/jenkins/.cache/torch_extensions/py310_cu117/is_python_module... 2023-01-11T21:57:52.5039788Z Emitting ninja build file /var/lib/jenkins/.cache/torch_extensions/py310_cu117/is_python_module/build.ninja... 2023-01-11T21:57:52.5040307Z Building extension module is_python_module... 2023-01-11T21:57:52.5040746Z Using envvar MAX_JOBS (6) as the number of workers... 
2023-01-11T21:57:52.5043150Z [1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=is_python_module -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /opt/conda/lib/python3.10/site-packages/torch/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -c /var/lib/jenkins/.cache/torch_extensions/py310_cu117/is_python_module/main.cpp -o main.o 2023-01-11T21:57:52.5045116Z [2/2] c++ main.o -shared -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o is_python_module.so 2023-01-11T21:57:52.5052155Z Loading extension module is_python_module... 2023-01-11T21:57:52.5054226Z /opt/conda/lib/python3.10/site-packages/torch/autograd/gradcheck.py:688: UserWarning: Input #0 requires gradient and is not a double precision floating point or complex. This check will likely fail if all the inputs are not of double precision floating point or complex. 2023-01-11T21:57:52.5056904Z warnings.warn( 2023-01-11T21:57:52.5058037Z /opt/conda/lib/python3.10/site-packages/torch/autograd/gradcheck.py:688: UserWarning: Input #1 requires gradient and is not a double precision floating point or complex. This check will likely fail if all the inputs are not of double precision floating point or complex. 2023-01-11T21:57:52.5058788Z warnings.warn( 2023-01-11T21:57:52.5059421Z /opt/conda/lib/python3.10/site-packages/torch/autograd/gradcheck.py:688: UserWarning: Input #0 requires gradient and is not a double precision floating point or complex. This check will likely fail if all the inputs are not of double precision floating point or complex. 2023-01-11T21:57:52.5059835Z warnings.warn( 2023-01-11T21:57:52.5060535Z /opt/conda/lib/python3.10/site-packages/torch/autograd/gradcheck.py:688: UserWarning: Input #1 requires gradient and is not a double precision floating point or complex. This check will likely fail if all the inputs are not of double precision floating point or complex. 2023-01-11T21:57:52.5060973Z warnings.warn( 2023-01-11T21:57:52.5061252Z ok (1.279s) 2023-01-11T21:57:52.5061523Z test_half_support (__main__.TestCppExtensionJIT) 2023-01-11T21:57:52.5062086Z Checks for an issue with operator< ambiguity for half when certain ... skip: Temporarily disabled (0.001s) 2023-01-11T21:57:52.5062889Z test_inline_jit_compile_custom_op_cuda (__main__.TestCppExtensionJIT) ... skip: Temporarily disabled (0.001s) 2023-01-11T21:57:52.5063526Z test_inline_jit_compile_extension_cuda (__main__.TestCppExtensionJIT) ... skip: Temporarily disabled (0.001s) 2023-01-11T21:57:52.5064276Z test_inline_jit_compile_extension_multiple_sources_and_no_functions (__main__.TestCppExtensionJIT) ... Using /var/lib/jenkins/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... 2023-01-11T21:57:52.5064738Z Creating extension directory /var/lib/jenkins/.cache/torch_extensions/py310_cu117/inline_jit_extension... 2023-01-11T21:57:52.5065117Z Emitting ninja build file /var/lib/jenkins/.cache/torch_extensions/py310_cu117/inline_jit_extension/build.ninja... 2023-01-11T21:57:52.5065421Z Building extension module inline_jit_extension... 2023-01-11T21:57:52.5065677Z Using envvar MAX_JOBS (6) as the number of workers... 
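The repeated gradcheck UserWarning above ("Input #N requires gradient and is not a double precision floating point or complex") is emitted when torch.autograd.gradcheck runs on float32 inputs, since its finite-difference reference is only reliable in float64. A minimal sketch of a gradcheck call that avoids the warning; fn, a and b are illustrative names:

    import torch
    from torch.autograd import gradcheck

    def fn(a, b):
        return (a * b).sum()

    # Double-precision inputs keep the numerical gradient accurate enough,
    # so the warning printed above is not triggered.
    a = torch.randn(4, dtype=torch.double, requires_grad=True)
    b = torch.randn(4, dtype=torch.double, requires_grad=True)
    assert gradcheck(fn, (a, b))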
2023-01-11T21:57:52.5066996Z [1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=inline_jit_extension -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /opt/conda/lib/python3.10/site-packages/torch/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -c /var/lib/jenkins/.cache/torch_extensions/py310_cu117/inline_jit_extension/main.cpp -o main.o 2023-01-11T21:57:52.5068060Z [2/2] c++ main.o -shared -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o inline_jit_extension.so 2023-01-11T21:57:52.5068457Z Loading extension module inline_jit_extension... 2023-01-11T21:57:52.5068669Z ok (1.253s) 2023-01-11T21:57:52.5068952Z test_inline_jit_compile_extension_throws_when_functions_is_bad (__main__.TestCppExtensionJIT) ... ok (0.004s) 2023-01-11T21:57:52.5069393Z test_inline_jit_compile_extension_with_functions_as_dict (__main__.TestCppExtensionJIT) ... Using /var/lib/jenkins/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... 2023-01-11T21:57:52.5069852Z Creating extension directory /var/lib/jenkins/.cache/torch_extensions/py310_cu117/inline_jit_extension_with_functions_dict... 2023-01-11T21:57:52.5070274Z Emitting ninja build file /var/lib/jenkins/.cache/torch_extensions/py310_cu117/inline_jit_extension_with_functions_dict/build.ninja... 2023-01-11T21:57:52.5070632Z Building extension module inline_jit_extension_with_functions_dict... 2023-01-11T21:57:52.5070995Z Using envvar MAX_JOBS (6) as the number of workers... 2023-01-11T21:57:52.5072367Z [1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=inline_jit_extension_with_functions_dict -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /opt/conda/lib/python3.10/site-packages/torch/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -c /var/lib/jenkins/.cache/torch_extensions/py310_cu117/inline_jit_extension_with_functions_dict/main.cpp -o main.o 2023-01-11T21:57:52.5073544Z [2/2] c++ main.o -shared -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o inline_jit_extension_with_functions_dict.so 2023-01-11T21:57:52.5073946Z Loading extension module inline_jit_extension_with_functions_dict... 2023-01-11T21:57:52.5074175Z ok (1.224s) 2023-01-11T21:57:52.5074514Z test_inline_jit_compile_extension_with_functions_as_list (__main__.TestCppExtensionJIT) ... Using /var/lib/jenkins/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... 2023-01-11T21:57:52.5074977Z Creating extension directory /var/lib/jenkins/.cache/torch_extensions/py310_cu117/inline_jit_extension_with_functions_list... 2023-01-11T21:57:52.5075393Z Emitting ninja build file /var/lib/jenkins/.cache/torch_extensions/py310_cu117/inline_jit_extension_with_functions_list/build.ninja... 2023-01-11T21:57:52.5075745Z Building extension module inline_jit_extension_with_functions_list... 
2023-01-11T21:57:52.5076010Z Using envvar MAX_JOBS (6) as the number of workers... 2023-01-11T21:57:52.5077460Z [1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=inline_jit_extension_with_functions_list -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /opt/conda/lib/python3.10/site-packages/torch/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -c /var/lib/jenkins/.cache/torch_extensions/py310_cu117/inline_jit_extension_with_functions_list/main.cpp -o main.o 2023-01-11T21:57:52.5078581Z [2/2] c++ main.o -shared -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o inline_jit_extension_with_functions_list.so 2023-01-11T21:57:52.5078978Z Loading extension module inline_jit_extension_with_functions_list... 2023-01-11T21:57:52.5079210Z ok (1.253s) 2023-01-11T21:57:52.5079524Z test_jit_compile_extension (__main__.TestCppExtensionJIT) ... Using /var/lib/jenkins/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... 2023-01-11T21:57:52.5079934Z Creating extension directory /var/lib/jenkins/.cache/torch_extensions/py310_cu117/jit_extension... 2023-01-11T21:57:52.5080305Z Emitting ninja build file /var/lib/jenkins/.cache/torch_extensions/py310_cu117/jit_extension/build.ninja... 2023-01-11T21:57:52.5080607Z Building extension module jit_extension... 2023-01-11T21:57:52.5080845Z Using envvar MAX_JOBS (6) as the number of workers... 2023-01-11T21:57:52.5082228Z [1/3] c++ -MMD -MF jit_extension.o.d -DTORCH_EXTENSION_NAME=jit_extension -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/var/lib/jenkins/workspace/test/cpp_extensions -isystem /opt/conda/lib/python3.10/site-packages/torch/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -g -c /var/lib/jenkins/workspace/test/cpp_extensions/jit_extension.cpp -o jit_extension.o 2023-01-11T21:57:52.5084320Z [2/3] c++ -MMD -MF jit_extension2.o.d -DTORCH_EXTENSION_NAME=jit_extension -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/var/lib/jenkins/workspace/test/cpp_extensions -isystem /opt/conda/lib/python3.10/site-packages/torch/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -g -c /var/lib/jenkins/workspace/test/cpp_extensions/jit_extension2.cpp -o jit_extension2.o 2023-01-11T21:57:52.5085501Z [3/3] c++ jit_extension.o jit_extension2.o -shared -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o jit_extension.so 2023-01-11T21:57:52.5085861Z Loading extension module jit_extension... 
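The "Creating extension directory / Emitting ninja build file / Building extension module / Loading extension module" sequences above come from torch.utils.cpp_extension JIT builds. A minimal sketch of a call that triggers the same pipeline; the extension name and the add_one function are hypothetical, used only for illustration:

    import torch
    from torch.utils.cpp_extension import load_inline

    cpp_source = """
    #include <torch/extension.h>

    torch::Tensor add_one(torch::Tensor x) {
      return x + 1;
    }
    """

    # Compiles a shared object with ninja under the torch_extensions cache
    # directory and imports it; verbose=True prints messages like those above.
    module = load_inline(
        name="add_one_extension",
        cpp_sources=cpp_source,
        functions=["add_one"],      # auto-generates the pybind11 bindings
        verbose=True,
    )
    print(module.add_one(torch.zeros(3)))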
2023-01-11T21:57:52.5086068Z ok (1.813s) 2023-01-11T21:57:52.5086325Z test_jit_cuda_archflags (__main__.TestCppExtensionJIT) ... skip: CUDA not found (0.001s) 2023-01-11T21:57:52.5086650Z test_jit_cuda_extension (__main__.TestCppExtensionJIT) ... skip: CUDA not found (0.000s) 2023-01-11T21:57:52.5086988Z test_jit_cudnn_extension (__main__.TestCppExtensionJIT) ... skip: CuDNN not found (0.001s) 2023-01-11T21:57:52.5087409Z test_lenient_flag_handling_in_jit_extensions (__main__.TestCppExtensionJIT) ... Using /var/lib/jenkins/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... 2023-01-11T21:57:52.5087833Z Creating extension directory /var/lib/jenkins/.cache/torch_extensions/py310_cu117/lenient_flag_handling_extension... 2023-01-11T21:57:52.5088233Z Emitting ninja build file /var/lib/jenkins/.cache/torch_extensions/py310_cu117/lenient_flag_handling_extension/build.ninja... 2023-01-11T21:57:52.5088575Z Building extension module lenient_flag_handling_extension... 2023-01-11T21:57:52.5088843Z Using envvar MAX_JOBS (6) as the number of workers... 2023-01-11T21:57:52.5090280Z [1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=lenient_flag_handling_extension -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/var/lib/jenkins/workspace/test/cpp_extensions -isystem /opt/conda/lib/python3.10/site-packages/torch/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -g -O0 -Wall -c /var/lib/jenkins/.cache/torch_extensions/py310_cu117/lenient_flag_handling_extension/main.cpp -o main.o 2023-01-11T21:57:52.5091421Z [2/2] c++ main.o -shared -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o lenient_flag_handling_extension.so 2023-01-11T21:57:52.5091784Z Loading extension module lenient_flag_handling_extension... 2023-01-11T21:57:52.5092003Z ok (1.480s) 2023-01-11T21:57:52.5092324Z test_reload_jit_extension (__main__.TestCppExtensionJIT) ... Using /var/lib/jenkins/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... 2023-01-11T21:57:52.5092731Z Creating extension directory /var/lib/jenkins/.cache/torch_extensions/py310_cu117/reloaded_jit_extension... 2023-01-11T21:57:52.5093111Z Emitting ninja build file /var/lib/jenkins/.cache/torch_extensions/py310_cu117/reloaded_jit_extension/build.ninja... 2023-01-11T21:57:52.5093433Z Building extension module reloaded_jit_extension... 2023-01-11T21:57:52.5093691Z Using envvar MAX_JOBS (6) as the number of workers... 
2023-01-11T21:57:52.5094997Z [1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=reloaded_jit_extension -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /opt/conda/lib/python3.10/site-packages/torch/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -c /var/lib/jenkins/.cache/torch_extensions/py310_cu117/reloaded_jit_extension/main.cpp -o main.o 2023-01-11T21:57:52.5096119Z [2/2] c++ main.o -shared -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o reloaded_jit_extension.so 2023-01-11T21:57:52.5096468Z Loading extension module reloaded_jit_extension... 2023-01-11T21:57:52.5096768Z Using /var/lib/jenkins/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... 2023-01-11T21:57:52.5097332Z The input conditions for extension module reloaded_jit_extension have changed. Bumping to version 1 and re-building as reloaded_jit_extension_v1... 2023-01-11T21:57:52.5097746Z Emitting ninja build file /var/lib/jenkins/.cache/torch_extensions/py310_cu117/reloaded_jit_extension/build.ninja... 2023-01-11T21:57:52.5098069Z Building extension module reloaded_jit_extension_v1... 2023-01-11T21:57:52.5098329Z Using envvar MAX_JOBS (6) as the number of workers... 2023-01-11T21:57:52.5099640Z [1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=reloaded_jit_extension_v1 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /opt/conda/lib/python3.10/site-packages/torch/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -c /var/lib/jenkins/.cache/torch_extensions/py310_cu117/reloaded_jit_extension/main.cpp -o main.o 2023-01-11T21:57:52.5100721Z [2/2] c++ main.o -shared -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o reloaded_jit_extension_v1.so 2023-01-11T21:57:52.5101086Z Loading extension module reloaded_jit_extension_v1... 2023-01-11T21:57:52.5101373Z Using /var/lib/jenkins/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... 2023-01-11T21:57:52.5101816Z No modifications detected for re-loaded extension module reloaded_jit_extension_v1, skipping build step... 2023-01-11T21:57:52.5102147Z Loading extension module reloaded_jit_extension_v1... 2023-01-11T21:57:52.5102613Z Using /var/lib/jenkins/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... 2023-01-11T21:57:52.5103138Z The input conditions for extension module reloaded_jit_extension have changed. Bumping to version 2 and re-building as reloaded_jit_extension_v2... 2023-01-11T21:57:52.5103569Z Emitting ninja build file /var/lib/jenkins/.cache/torch_extensions/py310_cu117/reloaded_jit_extension/build.ninja... 2023-01-11T21:57:52.5103897Z Building extension module reloaded_jit_extension_v2... 2023-01-11T21:57:52.5104147Z Using envvar MAX_JOBS (6) as the number of workers... 
2023-01-11T21:57:52.5105473Z [1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=reloaded_jit_extension_v2 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /opt/conda/lib/python3.10/site-packages/torch/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -c /var/lib/jenkins/.cache/torch_extensions/py310_cu117/reloaded_jit_extension/main.cpp -o main.o 2023-01-11T21:57:52.5106605Z [2/2] c++ main.o -shared -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o reloaded_jit_extension_v2.so 2023-01-11T21:57:52.5106961Z Loading extension module reloaded_jit_extension_v2... 2023-01-11T21:57:52.5107182Z ok (3.817s) 2023-01-11T21:57:52.5107526Z test_returns_shared_library_path_when_is_python_module_is_true (__main__.TestCppExtensionJIT) ... Using /var/lib/jenkins/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... 2023-01-11T21:57:52.5108114Z The input conditions for extension module is_python_module have changed. Bumping to version 1 and re-building as is_python_module_v1... 2023-01-11T21:57:52.5108526Z Emitting ninja build file /var/lib/jenkins/.cache/torch_extensions/py310_cu117/is_python_module/build.ninja... 2023-01-11T21:57:52.5108840Z Building extension module is_python_module_v1... 2023-01-11T21:57:52.5109086Z Using envvar MAX_JOBS (6) as the number of workers... 2023-01-11T21:57:52.5110424Z [1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=is_python_module_v1 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /opt/conda/lib/python3.10/site-packages/torch/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -c /var/lib/jenkins/.cache/torch_extensions/py310_cu117/is_python_module/main.cpp -o main.o 2023-01-11T21:57:52.5111477Z [2/2] c++ main.o -shared -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o is_python_module_v1.so 2023-01-11T21:57:52.5111826Z Loading extension module is_python_module_v1... 2023-01-11T21:57:52.5112022Z ok (1.417s) 2023-01-11T21:57:52.5112374Z test_set_default_type_also_changes_aten_default_type (__main__.TestCppExtensionJIT) ... Using /var/lib/jenkins/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... 2023-01-11T21:57:52.5112811Z Creating extension directory /var/lib/jenkins/.cache/torch_extensions/py310_cu117/test_set_default_type... 2023-01-11T21:57:52.5113187Z Emitting ninja build file /var/lib/jenkins/.cache/torch_extensions/py310_cu117/test_set_default_type/build.ninja... 2023-01-11T21:57:52.5113489Z Building extension module test_set_default_type... 2023-01-11T21:57:52.5113747Z Using envvar MAX_JOBS (6) as the number of workers... 
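The "Bumping to version N and re-building as <name>_vN" messages above show the extension cache's versioning: re-loading an extension under the same name with changed sources or flags rebuilds it under a version suffix instead of reusing the stale binary. A minimal sketch, with a hypothetical extension name:

    from torch.utils.cpp_extension import load_inline

    src_v0 = "#include <torch/extension.h>\nint answer() { return 41; }"
    src_v1 = "#include <torch/extension.h>\nint answer() { return 42; }"

    # The first call builds and caches "demo_ext"; the second call detects that
    # the input conditions changed and rebuilds as "demo_ext_v1", matching the
    # version-bump messages in the log above.
    m0 = load_inline(name="demo_ext", cpp_sources=src_v0, functions=["answer"])
    m1 = load_inline(name="demo_ext", cpp_sources=src_v1, functions=["answer"])
    assert m0.answer() == 41 and m1.answer() == 42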
2023-01-11T21:57:52.5115136Z [1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=test_set_default_type -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /opt/conda/lib/python3.10/site-packages/torch/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -c /var/lib/jenkins/.cache/torch_extensions/py310_cu117/test_set_default_type/main.cpp -o main.o 2023-01-11T21:57:52.5116202Z [2/2] c++ main.o -shared -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o test_set_default_type.so 2023-01-11T21:57:52.5116545Z Loading extension module test_set_default_type... 2023-01-11T21:57:52.5116741Z ok (1.337s) 2023-01-11T21:57:52.5117029Z test_warning (__main__.TestCppExtensionJIT) ... [W main.cpp:12] Warning: Error with CPUDoubleType (function foo) 2023-01-11T21:57:52.5117426Z [W main.cpp:12] Warning: Error with CPUDoubleType (function foo) 2023-01-11T21:57:52.5117717Z [W main.cpp:12] Warning: Error with CPUDoubleType (function foo) 2023-01-11T21:57:52.5117986Z [W main.cpp:12] Warning: Error with CPUDoubleType (function foo) 2023-01-11T21:57:52.5118406Z UserWarning: Error with torch.DoubleTensor (Triggered internally at /var/lib/jenkins/.cache/torch_extensions/py310_cu117/warn_mod/main.cpp:12.) 2023-01-11T21:57:52.5118709Z ok (2.560s) 2023-01-11T21:57:52.5118816Z 2023-01-11T21:57:52.5119009Z ---------------------------------------------------------------------- 2023-01-11T21:57:52.5119255Z Ran 23 tests in 34.505s 2023-01-11T21:57:52.5119372Z 2023-01-11T21:57:52.5119445Z OK (skipped=7) 2023-01-11T21:57:52.5119555Z 2023-01-11T21:57:52.5119638Z Generating XML reports... 2023-01-11T21:57:52.5120063Z Generated XML report: test-reports/python-unittest/test_cpp_extensions_jit/TEST-TestCppExtensionJIT-20230111215717.xml 2023-01-11T21:57:52.5120317Z 2023-01-11T21:57:52.5120632Z ##[endgroup] 2023-01-11T21:57:52.5121043Z FINISHED PRINTING LOG FILE of test_cpp_extensions_jit (/var/lib/jenkins/workspace/test/test-reports/test_cpp_extensions_jit_jy4nl2_6) 2023-01-11T21:57:52.5121262Z 2023-01-11T21:57:52.5121453Z Running test_cuda ... [2023-01-11 21:57:52.500689] 2023-01-11T21:57:52.5121914Z Executing ['/opt/conda/bin/python', '-bb', 'test_cuda.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:57:52.500917] 2023-01-11T21:57:54.2760931Z 2023-01-11T21:57:54.2761445Z Expand the folded group to see the log file of test_cuda 2023-01-11T21:57:54.2762406Z ##[group]PRINTING LOG FILE of test_cuda (/var/lib/jenkins/workspace/test/test-reports/test_cuda_4umde16w) 2023-01-11T21:57:54.2762932Z CUDA not available, skipping tests 2023-01-11T21:57:54.2763113Z 2023-01-11T21:57:54.2763185Z Running tests... 2023-01-11T21:57:54.2763570Z ---------------------------------------------------------------------- 2023-01-11T21:57:54.2763744Z 2023-01-11T21:57:54.2763945Z ---------------------------------------------------------------------- 2023-01-11T21:57:54.2764184Z Ran 0 tests in 0.000s 2023-01-11T21:57:54.2764306Z 2023-01-11T21:57:54.2764356Z OK 2023-01-11T21:57:54.2764447Z 2023-01-11T21:57:54.2764532Z Generating XML reports... 
2023-01-11T21:57:54.2764863Z Test results will be stored in test-reports/python-unittest/test_cuda 2023-01-11T21:57:54.2765034Z 2023-01-11T21:57:54.2765248Z ##[endgroup] 2023-01-11T21:57:54.2765612Z FINISHED PRINTING LOG FILE of test_cuda (/var/lib/jenkins/workspace/test/test-reports/test_cuda_4umde16w) 2023-01-11T21:57:54.2765814Z 2023-01-11T21:57:54.2765980Z Running test_fake_tensor ... [2023-01-11 21:57:54.276111] 2023-01-11T21:57:54.2766450Z Executing ['/opt/conda/bin/python', '-bb', 'test_fake_tensor.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:57:54.276346] 2023-01-11T21:57:56.9222496Z 2023-01-11T21:57:56.9223006Z Expand the folded group to see the log file of test_fake_tensor 2023-01-11T21:57:56.9223955Z ##[group]PRINTING LOG FILE of test_fake_tensor (/var/lib/jenkins/workspace/test/test-reports/test_fake_tensor_22j53fn3) 2023-01-11T21:57:56.9224363Z 2023-01-11T21:57:56.9224486Z Running tests... 2023-01-11T21:57:56.9225113Z ---------------------------------------------------------------------- 2023-01-11T21:57:56.9225696Z Test results will be stored in test-reports/python-unittest/test_fake_tensor 2023-01-11T21:57:56.9226271Z test_aliased_const_write (__main__.FakeTensorConstHandling) ... ok (0.246s) 2023-01-11T21:57:56.9226854Z test_constant_invalidation (__main__.FakeTensorConstHandling) ... ok (0.005s) 2023-01-11T21:57:56.9227448Z test_fake_tensor_batch_norm_cpu (__main__.FakeTensorConstHandling) ... ok (0.081s) 2023-01-11T21:57:56.9228034Z test_fake_tensor_in_intlist_repro (__main__.FakeTensorConstHandling) ... ok (0.008s) 2023-01-11T21:57:56.9228607Z test_inplace_add (__main__.FakeTensorConstHandling) ... ok (0.002s) 2023-01-11T21:57:56.9229187Z test_inplace_view_invalidation (__main__.FakeTensorConstHandling) ... ok (0.001s) 2023-01-11T21:57:56.9229759Z test_shared_storage_invalidation (__main__.FakeTensorConstHandling) ... ok (0.005s) 2023-01-11T21:57:56.9230985Z test_shared_storages (__main__.FakeTensorConstHandling) ... /var/lib/jenkins/workspace/test/test_fake_tensor.py:513: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:57:56.9232325Z self.assertEqual(x.storage()._cdata, y.storage()._cdata) 2023-01-11T21:57:56.9233369Z /var/lib/jenkins/workspace/test/test_fake_tensor.py:514: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:57:56.9233988Z self.assertEqual(x.constant.storage()._cdata, y.constant.storage()._cdata) 2023-01-11T21:57:56.9234216Z ok (0.001s) 2023-01-11T21:57:56.9234444Z test_simple (__main__.FakeTensorConstHandling) ... ok (0.001s) 2023-01-11T21:57:56.9234799Z test_dead_key (__main__.FakeTensorConverterTest) ... ok (0.001s) 2023-01-11T21:57:56.9235097Z test_dead_weak_ref (__main__.FakeTensorConverterTest) ... ok (0.001s) 2023-01-11T21:57:56.9235407Z test_memoized_conversion_from_meta (__main__.FakeTensorConverterTest) ... ok (0.001s) 2023-01-11T21:57:56.9235741Z test_memoized_conversion_to_meta (__main__.FakeTensorConverterTest) ... ok (0.001s) 2023-01-11T21:57:56.9236061Z test_no_active_mode (__main__.FakeTensorConverterTest) ... 
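The TypedStorage deprecation warnings above are raised by test code calling tensor.storage(); the warning itself points to tensor.untyped_storage() as the replacement. A minimal sketch of checking storage sharing without the deprecated accessor:

    import torch

    x = torch.randn(4)
    y = x.view(2, 2)

    # x and y alias the same memory, so their untyped storages share a data pointer.
    assert x.untyped_storage().data_ptr() == y.untyped_storage().data_ptr()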
ok (0.008s) 2023-01-11T21:57:56.9236348Z test_no_ref_cycle (__main__.FakeTensorConverterTest) ... ok (0.001s) 2023-01-11T21:57:56.9236658Z test_separate_mode_error (__main__.FakeTensorConverterTest) ... ok (0.007s) 2023-01-11T21:57:56.9237398Z test_separate_tensor_storages_non_view (__main__.FakeTensorConverterTest) ... /var/lib/jenkins/workspace/test/test_fake_tensor.py:602: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:57:56.9237964Z y.set_(x.storage()) 2023-01-11T21:57:56.9238125Z ok (0.001s) 2023-01-11T21:57:56.9238383Z test_separate_tensor_storages_view (__main__.FakeTensorConverterTest) ... ok (0.001s) 2023-01-11T21:57:56.9238714Z test_like_ops (__main__.FakeTensorOperatorInvariants) ... ok (0.015s) 2023-01-11T21:57:56.9239047Z test_non_kwarg_only_device (__main__.FakeTensorOperatorInvariants) ... ok (0.055s) 2023-01-11T21:57:56.9239381Z test_sparse_new (__main__.FakeTensorOperatorInvariants) ... expected failure (0.004s) 2023-01-11T21:57:56.9239755Z test_tensor_constructors_all_have_kwarg_device (__main__.FakeTensorOperatorInvariants) ... ok (0.107s) 2023-01-11T21:57:56.9240620Z test_fake_tensor_prop_on_nn_module (__main__.FakeTensorPropTest) ... /opt/conda/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py:564: UserWarning: Was not able to add assertion to guarantee correct input value to specialized function. It is up to the user to make sure that your inputs match the inputs you specialized the function with. 2023-01-11T21:57:56.9241135Z warnings.warn( 2023-01-11T21:57:56.9241294Z ok (0.023s) 2023-01-11T21:57:56.9241507Z test_basic (__main__.FakeTensorTest) ... ok (0.002s) 2023-01-11T21:57:56.9241779Z test_binary_op_type_promotion (__main__.FakeTensorTest) ... ok (0.008s) 2023-01-11T21:57:56.9242060Z test_constructor (__main__.FakeTensorTest) ... ok (0.004s) 2023-01-11T21:57:56.9242332Z test_cpu_fallback (__main__.FakeTensorTest) ... skip: requires cuda (0.001s) 2023-01-11T21:57:56.9242634Z test_cuda_lstm (__main__.FakeTensorTest) ... skip: requires cuda (0.001s) 2023-01-11T21:57:56.9242947Z test_cudnn_rnn_with_fallback (__main__.FakeTensorTest) ... skip: requires cuda (0.002s) 2023-01-11T21:57:56.9243262Z test_cudnn_rnn_without_fallback (__main__.FakeTensorTest) ... skip: requires cuda (0.002s) 2023-01-11T21:57:56.9243621Z test_data_dependent_operator (__main__.FakeTensorTest) ... ok (0.004s) 2023-01-11T21:57:56.9244238Z test_deepcopy (__main__.FakeTensorTest) ... /var/lib/jenkins/workspace/test/test_fake_tensor.py:466: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:57:56.9244851Z self.assertEqual(mod_copied.b.storage()._cdata, mod_copied.a.storage()._cdata) 2023-01-11T21:57:56.9245080Z ok (0.005s) 2023-01-11T21:57:56.9245306Z test_fake_dispatch_keys (__main__.FakeTensorTest) ... ok (0.008s) 2023-01-11T21:57:56.9245583Z test_fake_grad_copy (__main__.FakeTensorTest) ... ok (0.001s) 2023-01-11T21:57:56.9245857Z test_fake_mode_error (__main__.FakeTensorTest) ... ok (0.001s) 2023-01-11T21:57:56.9246177Z test_fallback_memory_prop (__main__.FakeTensorTest) ... 
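The FakeTensor* test classes above exercise fake tensors, which propagate shapes, dtypes and devices without allocating real storage. A minimal sketch using the internal torch._subclasses API that these tests import (internal and subject to change):

    import torch
    from torch._subclasses.fake_tensor import FakeTensorMode  # internal API

    with FakeTensorMode():
        x = torch.empty(8, 8)
        y = torch.ones(8, 8)
        z = x @ y                  # shape/dtype propagation only, no real compute
        print(z.shape, z.device)   # torch.Size([8, 8]) cpu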
skip: requires cuda (0.001s) 2023-01-11T21:57:56.9246480Z test_from_numpy (__main__.FakeTensorTest) ... ok (0.010s) 2023-01-11T21:57:56.9246772Z test_index_cuda_with_cpu (__main__.FakeTensorTest) ... skip: requires cuda (0.000s) 2023-01-11T21:57:56.9247076Z test_like_constructor (__main__.FakeTensorTest) ... skip: requires cuda (0.000s) 2023-01-11T21:57:56.9247354Z test_mode (__main__.FakeTensorTest) ... ok (0.004s) 2023-01-11T21:57:56.9247608Z test_nan_to_num (__main__.FakeTensorTest) ... ok (0.013s) 2023-01-11T21:57:56.9247865Z test_new (__main__.FakeTensorTest) ... skip: requires cuda (0.000s) 2023-01-11T21:57:56.9248166Z test_non_kwarg_device (__main__.FakeTensorTest) ... skip: requires cuda (0.000s) 2023-01-11T21:57:56.9248475Z test_non_overlapping_stride_zero (__main__.FakeTensorTest) ... ok (0.014s) 2023-01-11T21:57:56.9248768Z test_non_parameter_grad (__main__.FakeTensorTest) ... ok (0.001s) 2023-01-11T21:57:56.9249052Z test_normalize_device (__main__.FakeTensorTest) ... skip: requires cuda (0.000s) 2023-01-11T21:57:56.9249364Z test_parameter_instantiation (__main__.FakeTensorTest) ... ok (0.004s) 2023-01-11T21:57:56.9249652Z test_print_in_fake_mode (__main__.FakeTensorTest) ... ok (0.001s) 2023-01-11T21:57:56.9249906Z test_randperm (__main__.FakeTensorTest) ... ok (0.007s) 2023-01-11T21:57:56.9250178Z test_recursive_invocation (__main__.FakeTensorTest) ... ok (0.001s) 2023-01-11T21:57:56.9250453Z test_scalar_inputs (__main__.FakeTensorTest) ... ok (0.005s) 2023-01-11T21:57:56.9250736Z test_setitem (__main__.FakeTensorTest) ... skip: requires cuda (0.000s) 2023-01-11T21:57:56.9251030Z test_shape_take_not_device (__main__.FakeTensorTest) ... skip: requires cuda (0.000s) 2023-01-11T21:57:56.9251334Z test_throw (__main__.FakeTensorTest) ... skip: requires cuda (0.000s) 2023-01-11T21:57:56.9251624Z test_type_as (__main__.FakeTensorTest) ... skip: requires cuda (0.000s) 2023-01-11T21:57:56.9251934Z test_upsample_bilinear_small_channels (__main__.FakeTensorTest) ... skip: requires cuda (0.000s) 2023-01-11T21:57:56.9252259Z test_zero_dim (__main__.FakeTensorTest) ... skip: requires cuda (0.000s) 2023-01-11T21:57:56.9252421Z 2023-01-11T21:57:56.9252632Z ---------------------------------------------------------------------- 2023-01-11T21:57:56.9252878Z Ran 57 tests in 0.681s 2023-01-11T21:57:56.9252978Z 2023-01-11T21:57:56.9253078Z OK (skipped=16, expected failures=1) 2023-01-11T21:57:56.9253210Z 2023-01-11T21:57:56.9253296Z Generating XML reports... 2023-01-11T21:57:56.9253745Z Generated XML report: test-reports/python-unittest/test_fake_tensor/TEST-FakeTensorConstHandling-20230111215755.xml 2023-01-11T21:57:56.9254297Z Generated XML report: test-reports/python-unittest/test_fake_tensor/TEST-FakeTensorConverterTest-20230111215755.xml 2023-01-11T21:57:56.9254873Z Generated XML report: test-reports/python-unittest/test_fake_tensor/TEST-FakeTensorOperatorInvariants-20230111215755.xml 2023-01-11T21:57:56.9255425Z Generated XML report: test-reports/python-unittest/test_fake_tensor/TEST-FakeTensorPropTest-20230111215755.xml 2023-01-11T21:57:56.9255980Z Generated XML report: test-reports/python-unittest/test_fake_tensor/TEST-FakeTensorTest-20230111215755.xml 2023-01-11T21:57:56.9256206Z 2023-01-11T21:57:56.9256463Z ##[endgroup] 2023-01-11T21:57:56.9256850Z FINISHED PRINTING LOG FILE of test_fake_tensor (/var/lib/jenkins/workspace/test/test-reports/test_fake_tensor_22j53fn3) 2023-01-11T21:57:56.9257066Z 2023-01-11T21:57:56.9257218Z Running test_nn ... 
[2023-01-11 21:57:56.922523] 2023-01-11T21:57:56.9257651Z Executing ['/opt/conda/bin/python', '-bb', 'test_nn.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:57:56.922804] 2023-01-11T21:58:37.5245138Z 2023-01-11T21:58:37.5245632Z Expand the folded group to see the log file of test_nn 2023-01-11T21:58:37.5249255Z ##[group]PRINTING LOG FILE of test_nn (/var/lib/jenkins/workspace/test/test-reports/test_nn_to8mjml2) 2023-01-11T21:58:37.5249586Z 2023-01-11T21:58:37.5249697Z Running tests... 2023-01-11T21:58:37.5250501Z ---------------------------------------------------------------------- 2023-01-11T21:58:37.5251020Z Test results will be stored in test-reports/python-unittest/test_nn 2023-01-11T21:58:37.5279287Z test_add_relu (__main__.TestAddRelu) ... ok (0.002s) 2023-01-11T21:58:37.5279710Z test_add_relu_broadcasting (__main__.TestAddRelu) ... ok (0.001s) 2023-01-11T21:58:37.5280016Z test_constant_pad_nd (__main__.TestConstantPadNd) ... ok (0.001s) 2023-01-11T21:58:37.5280318Z test_preserves_memory_format (__main__.TestConstantPadNd) ... ok (0.001s) 2023-01-11T21:58:37.5280618Z test_pickle_softsign (__main__.TestFunctionalPickle) ... ok (0.000s) 2023-01-11T21:58:37.5283780Z test_fuse_module_eval_numerics (__main__.TestFusionEval) ... ok (0.255s) 2023-01-11T21:58:37.5284379Z test_fuse_conv_bn_requires_grad (__main__.TestFusionUtils) ... ok (0.002s) 2023-01-11T21:58:37.5284772Z test_fuse_linear_bn_requires_grad (__main__.TestFusionUtils) ... ok (0.001s) 2023-01-11T21:58:37.5285341Z test_AdaptiveAvgPool1d (__main__.TestNN) ... ok (0.013s) 2023-01-11T21:58:37.5285994Z test_AdaptiveAvgPool1d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5286677Z test_AdaptiveAvgPool1d_no_batch_dim (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5287390Z test_AdaptiveAvgPool1d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5288101Z test_AdaptiveAvgPool1d_one_output (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5288795Z test_AdaptiveAvgPool1d_one_output_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5289510Z test_AdaptiveAvgPool2d_no_batch_dim (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5290225Z test_AdaptiveAvgPool2d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5303420Z test_AdaptiveAvgPool2d_single (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5303886Z test_AdaptiveAvgPool2d_single_1x1output (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5312471Z test_AdaptiveAvgPool2d_single_1x1output_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5313010Z test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5313327Z test_AdaptiveAvgPool2d_tuple (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5313632Z test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5313952Z test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5314276Z test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5314593Z test_AdaptiveAvgPool3d_last_dim (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5314925Z test_AdaptiveAvgPool3d_last_dim_cuda (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5315337Z test_AdaptiveAvgPool3d_no_batch_dim (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5316076Z test_AdaptiveAvgPool3d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5317419Z test_AdaptiveAvgPool3d_single (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5317743Z test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5318062Z test_AdaptiveAvgPool3d_tuple (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5318380Z test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5318685Z test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5319009Z test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5319359Z test_AdaptiveLogSoftmax (__main__.TestNN) ... ok (0.075s) 2023-01-11T21:58:37.5319947Z test_AdaptiveLogSoftmax_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5320505Z test_AdaptiveMaxPool1d (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5321054Z test_AdaptiveMaxPool1d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5321640Z test_AdaptiveMaxPool1d_no_batch_dim (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5322108Z test_AdaptiveMaxPool1d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5322430Z test_AdaptiveMaxPool2d_no_batch_dim (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5322754Z test_AdaptiveMaxPool2d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5323061Z test_AdaptiveMaxPool2d_single (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5323377Z test_AdaptiveMaxPool2d_single_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5323687Z test_AdaptiveMaxPool2d_tuple (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5324048Z test_AdaptiveMaxPool2d_tuple_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5324351Z test_AdaptiveMaxPool2d_tuple_none (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5324672Z test_AdaptiveMaxPool2d_tuple_none_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5324993Z test_AdaptiveMaxPool3d_no_batch_dim (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5325397Z test_AdaptiveMaxPool3d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5325922Z test_AdaptiveMaxPool3d_single (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5326498Z test_AdaptiveMaxPool3d_single_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5326930Z test_AdaptiveMaxPool3d_single_nonatomic (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5327366Z test_AdaptiveMaxPool3d_single_nonatomic_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5327694Z test_AdaptiveMaxPool3d_tuple (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5328121Z test_AdaptiveMaxPool3d_tuple_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5328612Z test_AdaptiveMaxPool3d_tuple_nonatomic (__main__.TestNN) ... 
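Many of the *_no_batch_dim cases above check that pooling modules accept unbatched inputs as well as batched ones. A minimal sketch of what that means for nn.AdaptiveAvgPool1d:

    import torch
    import torch.nn as nn

    pool = nn.AdaptiveAvgPool1d(output_size=4)

    batched = torch.randn(2, 3, 16)    # (N, C, L)
    unbatched = torch.randn(3, 16)     # (C, L), the no_batch_dim case

    print(pool(batched).shape)         # torch.Size([2, 3, 4])
    print(pool(unbatched).shape)       # torch.Size([3, 4])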
ok (0.008s) 2023-01-11T21:58:37.5329016Z test_AdaptiveMaxPool3d_tuple_nonatomic_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5329635Z test_AdaptiveMaxPool3d_tuple_none (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5330183Z test_AdaptiveMaxPool3d_tuple_none_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5330465Z test_AvgPool1d (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5330742Z test_AvgPool1d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5331031Z test_AvgPool1d_no_batch_dim (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5331390Z test_AvgPool1d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5331683Z test_AvgPool1d_stride (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5331975Z test_AvgPool1d_stride_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5332264Z test_AvgPool1d_stride_pad (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5332551Z test_AvgPool1d_stride_pad_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5332832Z test_AvgPool2d (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5333113Z test_AvgPool2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5333386Z test_AvgPool2d_divisor (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5333678Z test_AvgPool2d_divisor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5334017Z test_AvgPool2d_divisor_stride (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5334332Z test_AvgPool2d_divisor_stride_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5334630Z test_AvgPool2d_divisor_stride_pad (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5334953Z test_AvgPool2d_divisor_stride_pad_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5335285Z test_AvgPool2d_divisor_stride_pad_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5335626Z test_AvgPool2d_divisor_stride_pad_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5335969Z test_AvgPool2d_divisor_stride_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5336315Z test_AvgPool2d_divisor_stride_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5336646Z test_AvgPool2d_divisor_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5336968Z test_AvgPool2d_divisor_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5337279Z test_AvgPool2d_no_batch_dim (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5337585Z test_AvgPool2d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5337879Z test_AvgPool2d_stride (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5338156Z test_AvgPool2d_stride_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5338451Z test_AvgPool2d_stride_pad (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5338751Z test_AvgPool2d_stride_pad_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5339026Z test_AvgPool3d (__main__.TestNN) ... 
ok (0.007s) 2023-01-11T21:58:37.5339306Z test_AvgPool3d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5339592Z test_AvgPool3d_divisor (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5339872Z test_AvgPool3d_divisor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5340174Z test_AvgPool3d_divisor_stride (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5340465Z test_AvgPool3d_divisor_stride1_pad0_gpu_input (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5340812Z test_AvgPool3d_divisor_stride1_pad0_gpu_input_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5341157Z test_AvgPool3d_divisor_stride1_pad0_gpu_input_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5341532Z test_AvgPool3d_divisor_stride1_pad0_gpu_input_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5341903Z test_AvgPool3d_divisor_stride_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5342215Z test_AvgPool3d_divisor_stride_pad (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5342887Z test_AvgPool3d_divisor_stride_pad_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5343459Z test_AvgPool3d_divisor_stride_pad_gpu_fixedkw_output (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5344045Z test_AvgPool3d_divisor_stride_pad_gpu_fixedkw_output_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5344637Z test_AvgPool3d_divisor_stride_pad_gpu_fixedkw_output_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5345310Z test_AvgPool3d_divisor_stride_pad_gpu_fixedkw_output_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5345995Z test_AvgPool3d_divisor_stride_pad_gpu_general_output (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5346575Z test_AvgPool3d_divisor_stride_pad_gpu_general_output_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5347256Z test_AvgPool3d_divisor_stride_pad_gpu_general_output_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5347900Z test_AvgPool3d_divisor_stride_pad_gpu_general_output_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5348515Z test_AvgPool3d_divisor_stride_pad_gpu_input_nooverlap (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5349110Z test_AvgPool3d_divisor_stride_pad_gpu_input_nooverlap_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5349702Z test_AvgPool3d_divisor_stride_pad_gpu_input_nooverlap_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5350342Z test_AvgPool3d_divisor_stride_pad_gpu_input_nooverlap_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5350951Z test_AvgPool3d_divisor_stride_pad_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5351521Z test_AvgPool3d_divisor_stride_pad_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5352061Z test_AvgPool3d_divisor_stride_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5352614Z test_AvgPool3d_divisor_stride_with_long_tensor_cuda (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5353161Z test_AvgPool3d_divisor_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5353688Z test_AvgPool3d_divisor_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5354179Z test_AvgPool3d_no_batch_dim (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5354672Z test_AvgPool3d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5355161Z test_AvgPool3d_stride (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5355605Z test_AvgPool3d_stride1_pad0_gpu_input (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5357135Z test_AvgPool3d_stride1_pad0_gpu_input_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5357774Z test_AvgPool3d_stride_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5358255Z test_AvgPool3d_stride_pad (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5358750Z test_AvgPool3d_stride_pad_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5359261Z test_AvgPool3d_stride_pad_gpu_fixedkw_output (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5359826Z test_AvgPool3d_stride_pad_gpu_fixedkw_output_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5360380Z test_AvgPool3d_stride_pad_gpu_general_output (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5360918Z test_AvgPool3d_stride_pad_gpu_general_output_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5361535Z test_AvgPool3d_stride_pad_gpu_input_nooverlap (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5362275Z test_AvgPool3d_stride_pad_gpu_input_nooverlap_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5362815Z test_BCELoss (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5363245Z test_BCELoss_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5363837Z test_BCELoss_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5364210Z test_BCELoss_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5364518Z test_BCELoss_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5364792Z test_BCELoss_no_batch_dim_mean (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5365102Z test_BCELoss_no_batch_dim_mean_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5365494Z test_BCELoss_no_batch_dim_mean_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5365835Z test_BCELoss_no_batch_dim_mean_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5366133Z test_BCELoss_no_batch_dim_none (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5366445Z test_BCELoss_no_batch_dim_none_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5366793Z test_BCELoss_no_batch_dim_none_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5367118Z test_BCELoss_no_batch_dim_none_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5367415Z test_BCELoss_no_batch_dim_sum (__main__.TestNN) ... 
ok (0.004s) 2023-01-11T21:58:37.5367722Z test_BCELoss_no_batch_dim_sum_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5368050Z test_BCELoss_no_batch_dim_sum_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5368393Z test_BCELoss_no_batch_dim_sum_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5368688Z test_BCELoss_no_reduce (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5368976Z test_BCELoss_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5369257Z test_BCELoss_no_reduce_scalar (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5369565Z test_BCELoss_no_reduce_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5369865Z test_BCELoss_scalar_weights (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5370175Z test_BCELoss_scalar_weights_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5370517Z test_BCELoss_scalar_weights_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5370867Z test_BCELoss_scalar_weights_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5371209Z test_BCELoss_scalar_weights_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5371491Z test_BCELoss_weights (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5371786Z test_BCELoss_weights_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5372121Z test_BCELoss_weights_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5372454Z test_BCELoss_weights_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5372776Z test_BCELoss_weights_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5373073Z test_BCELoss_weights_no_reduce (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5373378Z test_BCELoss_weights_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5373720Z test_BCELoss_weights_no_reduce_scalar (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5374047Z test_BCELoss_weights_no_reduce_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5374357Z test_BCEWithLogitsLoss (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5374667Z test_BCEWithLogitsLoss_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5374998Z test_BCEWithLogitsLoss_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5375340Z test_BCEWithLogitsLoss_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5376086Z test_BCEWithLogitsLoss_legacy_enum (__main__.TestNN) ... /opt/conda/lib/python3.10/site-packages/torch/nn/_reduction.py:42: UserWarning: size_average and reduce args will be deprecated, please use reduction='none' instead. 2023-01-11T21:58:37.5376495Z warnings.warn(warning.format(ret)) 2023-01-11T21:58:37.5377033Z /opt/conda/lib/python3.10/site-packages/torch/nn/_reduction.py:42: UserWarning: size_average and reduce args will be deprecated, please use reduction='none' instead. 
2023-01-11T21:58:37.5377390Z warnings.warn(warning.format(ret)) 2023-01-11T21:58:37.5377590Z ok (0.008s) 2023-01-11T21:58:37.5377851Z test_BCEWithLogitsLoss_legacy_enum_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5378186Z test_BCEWithLogitsLoss_no_batch_dim_mean (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5378536Z test_BCEWithLogitsLoss_no_batch_dim_mean_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5378921Z test_BCEWithLogitsLoss_no_batch_dim_mean_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5379284Z test_BCEWithLogitsLoss_no_batch_dim_mean_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5379625Z test_BCEWithLogitsLoss_no_batch_dim_none (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5379967Z test_BCEWithLogitsLoss_no_batch_dim_none_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5380350Z test_BCEWithLogitsLoss_no_batch_dim_none_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5380708Z test_BCEWithLogitsLoss_no_batch_dim_none_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5381046Z test_BCEWithLogitsLoss_no_batch_dim_sum (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5381394Z test_BCEWithLogitsLoss_no_batch_dim_sum_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5381759Z test_BCEWithLogitsLoss_no_batch_dim_sum_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5382131Z test_BCEWithLogitsLoss_no_batch_dim_sum_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5382618Z test_BCEWithLogitsLoss_no_reduce (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5382954Z test_BCEWithLogitsLoss_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5383269Z test_BCEWithLogitsLoss_no_reduce_scalar (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5383615Z test_BCEWithLogitsLoss_no_reduce_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5383946Z test_BCEWithLogitsLoss_scalar_weights (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5384287Z test_BCEWithLogitsLoss_scalar_weights_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5384650Z test_BCEWithLogitsLoss_scalar_weights_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5385027Z test_BCEWithLogitsLoss_scalar_weights_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5385426Z test_BCEWithLogitsLoss_weights (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5385737Z test_BCEWithLogitsLoss_weights_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5386099Z test_BCEWithLogitsLoss_weights_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5386462Z test_BCEWithLogitsLoss_weights_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5386773Z test_BatchNorm1d_3d_input (__main__.TestNN) ... ok (0.018s) 2023-01-11T21:58:37.5387060Z test_BatchNorm1d_3d_input_cuda (__main__.TestNN) ... 
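The UserWarning from torch/nn/_reduction.py logged during test_BCEWithLogitsLoss_legacy_enum above is triggered by the legacy size_average/reduce arguments. A minimal sketch of the legacy and current spellings, with illustrative shapes and values:

    import torch
    import torch.nn as nn

    logits = torch.randn(3, requires_grad=True)
    targets = torch.empty(3).random_(2)

    # Legacy spelling: emits the UserWarning seen in the log.
    legacy = nn.BCEWithLogitsLoss(size_average=False, reduce=False)(logits, targets)

    # Current spelling: reduction= replaces size_average/reduce.
    current = nn.BCEWithLogitsLoss(reduction='none')(logits, targets)

    # Both map to element-wise losses; only the keyword form differs.
    assert torch.allclose(legacy, current)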
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5387358Z test_BatchNorm1d_3d_input_eval (__main__.TestNN) ... ok (0.022s) 2023-01-11T21:58:37.5387668Z test_BatchNorm1d_3d_input_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5387969Z test_BatchNorm1d_3d_input_not_affine (__main__.TestNN) ... ok (0.012s) 2023-01-11T21:58:37.5388332Z test_BatchNorm1d_3d_input_not_affine_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5388655Z test_BatchNorm1d_3d_input_not_affine_eval (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5388981Z test_BatchNorm1d_3d_input_not_affine_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5389273Z test_BatchNorm1d_affine (__main__.TestNN) ... ok (0.017s) 2023-01-11T21:58:37.5389568Z test_BatchNorm1d_affine_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5389864Z test_BatchNorm1d_affine_eval (__main__.TestNN) ... ok (0.016s) 2023-01-11T21:58:37.5390155Z test_BatchNorm1d_affine_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5390466Z test_BatchNorm1d_affine_simple_average (__main__.TestNN) ... ok (0.017s) 2023-01-11T21:58:37.5390800Z test_BatchNorm1d_affine_simple_average_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5391134Z test_BatchNorm1d_affine_simple_average_eval (__main__.TestNN) ... ok (0.015s) 2023-01-11T21:58:37.5391463Z test_BatchNorm1d_affine_simple_average_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5391780Z test_BatchNorm1d_not_affine (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5392079Z test_BatchNorm1d_not_affine_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5392382Z test_BatchNorm1d_not_affine_eval (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5392687Z test_BatchNorm1d_not_affine_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5392994Z test_BatchNorm1d_not_tracking_stats (__main__.TestNN) ... ok (0.016s) 2023-01-11T21:58:37.5393313Z test_BatchNorm1d_not_tracking_stats_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5393621Z test_BatchNorm1d_not_tracking_stats_eval (__main__.TestNN) ... ok (0.017s) 2023-01-11T21:58:37.5393954Z test_BatchNorm1d_not_tracking_stats_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5394262Z test_BatchNorm1d_zero_batch (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5394566Z test_BatchNorm1d_zero_batch_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5394853Z test_BatchNorm1d_zero_batch_eval (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5395164Z test_BatchNorm1d_zero_batch_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5395451Z test_BatchNorm2d (__main__.TestNN) ... ok (0.018s) 2023-01-11T21:58:37.5395705Z test_BatchNorm2d_2d_simple_average (__main__.TestNN) ... ok (0.018s) 2023-01-11T21:58:37.5396022Z test_BatchNorm2d_2d_simple_average_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5396336Z test_BatchNorm2d_2d_simple_average_eval (__main__.TestNN) ... ok (0.016s) 2023-01-11T21:58:37.5396706Z test_BatchNorm2d_2d_simple_average_eval_cuda (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5397029Z test_BatchNorm2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5397311Z test_BatchNorm2d_eval (__main__.TestNN) ... ok (0.016s) 2023-01-11T21:58:37.5397665Z test_BatchNorm2d_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5397946Z test_BatchNorm2d_momentum (__main__.TestNN) ... ok (0.018s) 2023-01-11T21:58:37.5398240Z test_BatchNorm2d_momentum_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5398539Z test_BatchNorm2d_momentum_eval (__main__.TestNN) ... ok (0.016s) 2023-01-11T21:58:37.5398850Z test_BatchNorm2d_momentum_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5399138Z test_BatchNorm2d_not_affine (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5399479Z test_BatchNorm2d_not_affine_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5399781Z test_BatchNorm2d_not_affine_eval (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5400082Z test_BatchNorm2d_not_affine_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5400389Z test_BatchNorm2d_not_tracking_stats (__main__.TestNN) ... ok (0.017s) 2023-01-11T21:58:37.5400711Z test_BatchNorm2d_not_tracking_stats_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5401029Z test_BatchNorm2d_not_tracking_stats_eval (__main__.TestNN) ... ok (0.017s) 2023-01-11T21:58:37.5401345Z test_BatchNorm2d_not_tracking_stats_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5401654Z test_BatchNorm2d_zero_batch (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5401955Z test_BatchNorm2d_zero_batch_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5402248Z test_BatchNorm2d_zero_batch_eval (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5402557Z test_BatchNorm2d_zero_batch_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5402844Z test_BatchNorm3d (__main__.TestNN) ... ok (0.020s) 2023-01-11T21:58:37.5403115Z test_BatchNorm3d_3d_simple_average (__main__.TestNN) ... ok (0.020s) 2023-01-11T21:58:37.5403418Z test_BatchNorm3d_3d_simple_average_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5403732Z test_BatchNorm3d_3d_simple_average_eval (__main__.TestNN) ... ok (0.017s) 2023-01-11T21:58:37.5404059Z test_BatchNorm3d_3d_simple_average_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5404377Z test_BatchNorm3d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5404662Z test_BatchNorm3d_eval (__main__.TestNN) ... ok (0.017s) 2023-01-11T21:58:37.5404954Z test_BatchNorm3d_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5405247Z test_BatchNorm3d_momentum (__main__.TestNN) ... ok (0.020s) 2023-01-11T21:58:37.5405530Z test_BatchNorm3d_momentum_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5405828Z test_BatchNorm3d_momentum_eval (__main__.TestNN) ... ok (0.018s) 2023-01-11T21:58:37.5406135Z test_BatchNorm3d_momentum_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5406424Z test_BatchNorm3d_not_affine (__main__.TestNN) ... 
ok (0.011s) 2023-01-11T21:58:37.5406723Z test_BatchNorm3d_not_affine_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5407028Z test_BatchNorm3d_not_affine_eval (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5407338Z test_BatchNorm3d_not_affine_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5407673Z test_BatchNorm3d_not_tracking_stats (__main__.TestNN) ... ok (0.019s) 2023-01-11T21:58:37.5407992Z test_BatchNorm3d_not_tracking_stats_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5408314Z test_BatchNorm3d_not_tracking_stats_eval (__main__.TestNN) ... ok (0.019s) 2023-01-11T21:58:37.5408628Z test_BatchNorm3d_not_tracking_stats_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5408936Z test_BatchNorm3d_zero_batch (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5409235Z test_BatchNorm3d_zero_batch_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5409538Z test_BatchNorm3d_zero_batch_eval (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5409838Z test_BatchNorm3d_zero_batch_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5410114Z test_CELU (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5410407Z test_CELU_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5410671Z test_CELU_no_batch_dim (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5410955Z test_CELU_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5411226Z test_CELU_scalar (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5411502Z test_CELU_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5411797Z test_CTCLoss_2d_int_target_lengths_intlists (__main__.TestNN) ... ok (0.013s) 2023-01-11T21:58:37.5412139Z test_CTCLoss_2d_int_target_lengths_intlists_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5412514Z test_CTCLoss_2d_int_target_lengths_intlists_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5412848Z test_CTCLoss_2d_int_target_lengths_intlists_sum_reduction (__main__.TestNN) ... ok (0.012s) 2023-01-11T21:58:37.5413218Z test_CTCLoss_2d_int_target_lengths_intlists_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5413618Z test_CTCLoss_2d_int_target_lengths_intlists_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5413964Z test_CTCLoss_2d_int_target_lengths_tensors (__main__.TestNN) ... ok (0.012s) 2023-01-11T21:58:37.5414289Z test_CTCLoss_2d_int_target_lengths_tensors_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5414659Z test_CTCLoss_2d_int_target_lengths_tensors_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5415008Z test_CTCLoss_2d_int_target_lengths_tensors_sum_reduction (__main__.TestNN) ... ok (0.012s) 2023-01-11T21:58:37.5415368Z test_CTCLoss_2d_int_target_lengths_tensors_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5415758Z test_CTCLoss_2d_int_target_lengths_tensors_sum_reduction_cuda_float (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5416089Z test_CTCLoss_2d_lengths_tensors (__main__.TestNN) ... ok (0.012s) 2023-01-11T21:58:37.5416404Z test_CTCLoss_2d_lengths_tensors_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5416753Z test_CTCLoss_2d_lengths_tensors_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5417063Z test_CTCLoss_2d_lengths_tensors_sum_reduction (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5417411Z test_CTCLoss_2d_lengths_tensors_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5417789Z test_CTCLoss_2d_lengths_tensors_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5418140Z test_CTCLoss_critical_target_len (__main__.TestNN) ... skip: CUDA not available (0.001s) 2023-01-11T21:58:37.5418462Z test_CTCLoss_lengthchecks_cpu (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5418762Z test_CTCLoss_lengthchecks_cuda (__main__.TestNN) ... skip: CUDA not available (0.000s) 2023-01-11T21:58:37.5419052Z test_CTCLoss_lengths_intlists (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5419352Z test_CTCLoss_lengths_intlists_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5419701Z test_CTCLoss_lengths_intlists_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5420023Z test_CTCLoss_lengths_intlists_sum_reduction (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5420362Z test_CTCLoss_lengths_intlists_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5420727Z test_CTCLoss_lengths_intlists_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5421077Z test_CTCLoss_lengths_tensors (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5421390Z test_CTCLoss_lengths_tensors_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5421722Z test_CTCLoss_lengths_tensors_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5422037Z test_CTCLoss_lengths_tensors_sum_reduction (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5422514Z test_CTCLoss_lengths_tensors_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5422918Z test_CTCLoss_lengths_tensors_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5423247Z test_CTCLoss_long_targets (__main__.TestNN) ... skip: CUDA not available (0.001s) 2023-01-11T21:58:37.5423531Z test_CTCLoss_typechecks (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5423813Z test_CTCLoss_zero_infinity (__main__.TestNN) ... skip: CUDA not available (0.001s) 2023-01-11T21:58:37.5424084Z test_ConstantPad1d (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5424345Z test_ConstantPad1d_batch (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5424643Z test_ConstantPad1d_batch_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5424948Z test_ConstantPad1d_complex (__main__.TestNN) ... ok (0.012s) 2023-01-11T21:58:37.5425243Z test_ConstantPad1d_complex_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5425576Z test_ConstantPad1d_cuda (__main__.TestNN) ... 
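The CTCLoss cases above exercise lengths given both as plain Python int lists and as tensors (the *_lengths_intlists and *_lengths_tensors variants). A minimal sketch under illustrative sizes; the values are random and only the two length spellings are being compared:

    import torch
    import torch.nn as nn

    # Illustrative sizes: T=time steps, N=batch, C=classes (blank=0), S=target length.
    T, N, C, S = 10, 2, 5, 4
    log_probs = torch.randn(T, N, C).log_softmax(2)
    targets = torch.randint(1, C, (N, S), dtype=torch.long)

    ctc = nn.CTCLoss(blank=0)

    # Lengths as plain int lists ...
    loss_lists = ctc(log_probs, targets, [T] * N, [S] * N)

    # ... or as tensors, which is what the *_lengths_tensors variants cover.
    input_lengths = torch.full((N,), T, dtype=torch.long)
    target_lengths = torch.full((N,), S, dtype=torch.long)
    loss_tensors = ctc(log_probs, targets, input_lengths, target_lengths)

    assert torch.allclose(loss_lists, loss_tensors)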
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5425861Z test_ConstantPad2d (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5426108Z test_ConstantPad2d_complex (__main__.TestNN) ... ok (0.012s) 2023-01-11T21:58:37.5426415Z test_ConstantPad2d_complex_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5426746Z test_ConstantPad2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5427048Z test_ConstantPad2d_no_batch_dim (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5427348Z test_ConstantPad2d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5427649Z test_ConstantPad3d (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5427910Z test_ConstantPad3d_complex (__main__.TestNN) ... ok (0.016s) 2023-01-11T21:58:37.5428203Z test_ConstantPad3d_complex_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5428532Z test_ConstantPad3d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5428833Z test_ConstantPad3d_no_batch_dim (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5429150Z test_ConstantPad3d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5429421Z test_Conv1d (__main__.TestNN) ... ok (0.021s) 2023-01-11T21:58:37.5429686Z test_Conv1d_circular_stride2_pad2 (__main__.TestNN) ... ok (0.027s) 2023-01-11T21:58:37.5430075Z test_Conv1d_circular_stride2_pad2_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5430388Z test_Conv1d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5430662Z test_Conv1d_dilated (__main__.TestNN) ... ok (0.022s) 2023-01-11T21:58:37.5430943Z test_Conv1d_dilated_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5431222Z test_Conv1d_groups (__main__.TestNN) ... ok (0.059s) 2023-01-11T21:58:37.5431485Z test_Conv1d_groups_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5431758Z test_Conv1d_pad1 (__main__.TestNN) ... ok (0.021s) 2023-01-11T21:58:37.5432036Z test_Conv1d_pad1_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5432302Z test_Conv1d_pad1size1 (__main__.TestNN) ... ok (0.018s) 2023-01-11T21:58:37.5432628Z test_Conv1d_pad1size1_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5432906Z test_Conv1d_pad2 (__main__.TestNN) ... ok (0.021s) 2023-01-11T21:58:37.5433167Z test_Conv1d_pad2_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5433448Z test_Conv1d_pad2size1 (__main__.TestNN) ... ok (0.018s) 2023-01-11T21:58:37.5433730Z test_Conv1d_pad2size1_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5434011Z test_Conv1d_pad_same (__main__.TestNN) ... ok (0.021s) 2023-01-11T21:58:37.5434848Z test_Conv1d_pad_same2 (__main__.TestNN) ... /opt/conda/lib/python3.10/site-packages/torch/nn/modules/conv.py:309: UserWarning: Using padding='same' with even kernel lengths and odd dilation may require a zero-padded copy of the input be created (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Convolution.cpp:997.) 2023-01-11T21:58:37.5435459Z return F.conv1d(input, weight, bias, self.stride, 2023-01-11T21:58:37.5435674Z ok (0.031s) 2023-01-11T21:58:37.5435920Z test_Conv1d_pad_same2_cuda (__main__.TestNN) ... 
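The UserWarning from torch/nn/modules/conv.py:309 logged during test_Conv1d_pad_same2 above fires when padding='same' is combined with an even kernel length (and odd dilation), which may force an internal zero-padded copy of the input. A minimal configuration that would trigger it, with illustrative sizes:

    import torch
    import torch.nn as nn

    # Even kernel size with padding='same' is the case the warning points out.
    conv = nn.Conv1d(in_channels=2, out_channels=3, kernel_size=4, padding='same')
    x = torch.randn(1, 2, 16)
    y = conv(x)
    assert tuple(y.shape) == (1, 3, 16)  # 'same' padding preserves the length dimension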
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5436235Z test_Conv1d_pad_same_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5436511Z test_Conv1d_pad_same_dilated (__main__.TestNN) ... ok (0.021s) 2023-01-11T21:58:37.5436812Z test_Conv1d_pad_same_dilated_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5437103Z test_Conv1d_pad_valid (__main__.TestNN) ... ok (0.019s) 2023-01-11T21:58:37.5437438Z test_Conv1d_pad_valid_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5437737Z test_Conv1d_reflect_stride2_pad2 (__main__.TestNN) ... ok (0.019s) 2023-01-11T21:58:37.5438046Z test_Conv1d_reflect_stride2_pad2_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5438339Z test_Conv1d_replicate_stride2_pad2 (__main__.TestNN) ... ok (0.019s) 2023-01-11T21:58:37.5438659Z test_Conv1d_replicate_stride2_pad2_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5438952Z test_Conv1d_stride (__main__.TestNN) ... ok (0.019s) 2023-01-11T21:58:37.5439228Z test_Conv1d_stride_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5439495Z test_Conv1d_zero_batch (__main__.TestNN) ... ok (0.013s) 2023-01-11T21:58:37.5439776Z test_Conv1d_zero_batch_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5440070Z test_Conv1d_zeros_stride2_pad2 (__main__.TestNN) ... ok (0.018s) 2023-01-11T21:58:37.5440361Z test_Conv1d_zeros_stride2_pad2_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5440637Z test_Conv2d (__main__.TestNN) ... ok (0.020s) 2023-01-11T21:58:37.5440886Z test_Conv2d_circular_stride2_pad2 (__main__.TestNN) ... ok (0.032s) 2023-01-11T21:58:37.5441234Z test_Conv2d_circular_stride2_pad2_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5441584Z test_Conv2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5441856Z test_Conv2d_depthwise (__main__.TestNN) ... ok (0.025s) 2023-01-11T21:58:37.5442140Z test_Conv2d_depthwise_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5442417Z test_Conv2d_depthwise_dilated (__main__.TestNN) ... ok (0.024s) 2023-01-11T21:58:37.5442722Z test_Conv2d_depthwise_dilated_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5443021Z test_Conv2d_depthwise_padded (__main__.TestNN) ... ok (0.026s) 2023-01-11T21:58:37.5443332Z test_Conv2d_depthwise_padded_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5443621Z test_Conv2d_depthwise_strided (__main__.TestNN) ... ok (0.023s) 2023-01-11T21:58:37.5443959Z test_Conv2d_depthwise_strided_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5444272Z test_Conv2d_depthwise_with_multiplier (__main__.TestNN) ... ok (0.026s) 2023-01-11T21:58:37.5444585Z test_Conv2d_depthwise_with_multiplier_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5444879Z test_Conv2d_dilated (__main__.TestNN) ... ok (0.020s) 2023-01-11T21:58:37.5445158Z test_Conv2d_dilated_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5445456Z test_Conv2d_dilated_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5445760Z test_Conv2d_dilated_with_long_tensor_cuda (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5446051Z test_Conv2d_groups (__main__.TestNN) ... ok (0.023s) 2023-01-11T21:58:37.5446332Z test_Conv2d_groups_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5446604Z test_Conv2d_groups_thnn (__main__.TestNN) ... ok (0.023s) 2023-01-11T21:58:37.5446896Z test_Conv2d_groups_thnn_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5447199Z test_Conv2d_groups_thnn_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5447519Z test_Conv2d_groups_thnn_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5447819Z test_Conv2d_groups_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5448132Z test_Conv2d_groups_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5448420Z test_Conv2d_no_bias (__main__.TestNN) ... ok (0.015s) 2023-01-11T21:58:37.5448685Z test_Conv2d_no_bias_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5448978Z test_Conv2d_no_bias_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5449292Z test_Conv2d_no_bias_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5449586Z test_Conv2d_pad_same (__main__.TestNN) ... ok (0.022s) 2023-01-11T21:58:37.5449855Z test_Conv2d_pad_same_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5450139Z test_Conv2d_pad_same_dilated (__main__.TestNN) ... ok (0.021s) 2023-01-11T21:58:37.5450438Z test_Conv2d_pad_same_dilated_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5450716Z test_Conv2d_pad_valid (__main__.TestNN) ... ok (0.019s) 2023-01-11T21:58:37.5450996Z test_Conv2d_pad_valid_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5451274Z test_Conv2d_padding (__main__.TestNN) ... ok (0.020s) 2023-01-11T21:58:37.5451554Z test_Conv2d_padding_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5451837Z test_Conv2d_padding_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5452158Z test_Conv2d_padding_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5452494Z test_Conv2d_reflect_stride2_pad2 (__main__.TestNN) ... ok (0.020s) 2023-01-11T21:58:37.5452791Z test_Conv2d_reflect_stride2_pad2_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5453098Z test_Conv2d_replicate_stride2_pad2 (__main__.TestNN) ... ok (0.020s) 2023-01-11T21:58:37.5453410Z test_Conv2d_replicate_stride2_pad2_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5453700Z test_Conv2d_strided (__main__.TestNN) ... ok (0.020s) 2023-01-11T21:58:37.5453967Z test_Conv2d_strided_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5454264Z test_Conv2d_strided_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5454577Z test_Conv2d_strided_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5454863Z test_Conv2d_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5455209Z test_Conv2d_with_long_tensor_cuda (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5455496Z test_Conv2d_zero_batch (__main__.TestNN) ... ok (0.013s) 2023-01-11T21:58:37.5455787Z test_Conv2d_zero_batch_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5456072Z test_Conv2d_zero_batch_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5456391Z test_Conv2d_zero_batch_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5456697Z test_Conv2d_zeros_stride2_pad2 (__main__.TestNN) ... ok (0.019s) 2023-01-11T21:58:37.5456988Z test_Conv2d_zeros_stride2_pad2_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5457262Z test_Conv3d (__main__.TestNN) ... ok (0.020s) 2023-01-11T21:58:37.5457504Z test_Conv3d_1x1x1_no_bias (__main__.TestNN) ... ok (0.014s) 2023-01-11T21:58:37.5457800Z test_Conv3d_1x1x1_no_bias_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5458089Z test_Conv3d_1x1x1_no_bias_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5458409Z test_Conv3d_1x1x1_no_bias_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5458718Z test_Conv3d_circular_stride2_pad2 (__main__.TestNN) ... ok (0.042s) 2023-01-11T21:58:37.5459019Z test_Conv3d_circular_stride2_pad2_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5459339Z test_Conv3d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5459614Z test_Conv3d_dilated (__main__.TestNN) ... ok (0.022s) 2023-01-11T21:58:37.5459896Z test_Conv3d_dilated_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5460172Z test_Conv3d_dilated_strided (__main__.TestNN) ... ok (0.021s) 2023-01-11T21:58:37.5460471Z test_Conv3d_dilated_strided_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5460760Z test_Conv3d_groups (__main__.TestNN) ... ok (0.021s) 2023-01-11T21:58:37.5461029Z test_Conv3d_groups_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5461322Z test_Conv3d_groups_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5461634Z test_Conv3d_groups_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5461921Z test_Conv3d_no_bias (__main__.TestNN) ... ok (0.015s) 2023-01-11T21:58:37.5462187Z test_Conv3d_no_bias_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5462819Z test_Conv3d_no_bias_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5463133Z test_Conv3d_no_bias_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5463406Z test_Conv3d_pad_same (__main__.TestNN) ... ok (0.029s) 2023-01-11T21:58:37.5463764Z test_Conv3d_pad_same_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5464052Z test_Conv3d_pad_same_dilated (__main__.TestNN) ... ok (0.029s) 2023-01-11T21:58:37.5464352Z test_Conv3d_pad_same_dilated_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5464629Z test_Conv3d_pad_valid (__main__.TestNN) ... ok (0.025s) 2023-01-11T21:58:37.5464913Z test_Conv3d_pad_valid_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5465212Z test_Conv3d_replicate_stride2_pad2 (__main__.TestNN) ... 
ok (0.025s) 2023-01-11T21:58:37.5465518Z test_Conv3d_replicate_stride2_pad2_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5465813Z test_Conv3d_stride (__main__.TestNN) ... ok (0.022s) 2023-01-11T21:58:37.5466090Z test_Conv3d_stride_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5466420Z test_Conv3d_stride_padding (__main__.TestNN) ... ok (0.023s) 2023-01-11T21:58:37.5466707Z test_Conv3d_stride_padding_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5467019Z test_Conv3d_stride_padding_with_long_tensor (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5467350Z test_Conv3d_stride_padding_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5467655Z test_Conv3d_stride_with_long_tensor (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5467967Z test_Conv3d_stride_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5468268Z test_Conv3d_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5468566Z test_Conv3d_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5468840Z test_Conv3d_zero_batch (__main__.TestNN) ... ok (0.014s) 2023-01-11T21:58:37.5469127Z test_Conv3d_zero_batch_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5469435Z test_Conv3d_zero_batch_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5469747Z test_Conv3d_zero_batch_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5470053Z test_Conv3d_zeros_stride2_pad2 (__main__.TestNN) ... ok (0.023s) 2023-01-11T21:58:37.5470359Z test_Conv3d_zeros_stride2_pad2_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5470660Z test_ConvTranspose1d (__main__.TestNN) ... ok (0.020s) 2023-01-11T21:58:37.5470941Z test_ConvTranspose1d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5471241Z test_ConvTranspose1d_dilated (__main__.TestNN) ... ok (0.019s) 2023-01-11T21:58:37.5471549Z test_ConvTranspose1d_dilated_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5471842Z test_ConvTranspose1d_groups (__main__.TestNN) ... ok (0.024s) 2023-01-11T21:58:37.5472153Z test_ConvTranspose1d_groups_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5472458Z test_ConvTranspose1d_no_bias (__main__.TestNN) ... ok (0.015s) 2023-01-11T21:58:37.5472767Z test_ConvTranspose1d_no_bias_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5473046Z test_ConvTranspose2d (__main__.TestNN) ... ok (0.022s) 2023-01-11T21:58:37.5473343Z test_ConvTranspose2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5473641Z test_ConvTranspose2d_dilated (__main__.TestNN) ... ok (0.017s) 2023-01-11T21:58:37.5473936Z test_ConvTranspose2d_dilated_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5474255Z test_ConvTranspose2d_dilated_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5474604Z test_ConvTranspose2d_dilated_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5474964Z test_ConvTranspose2d_groups (__main__.TestNN) ... 
ok (0.021s) 2023-01-11T21:58:37.5475256Z test_ConvTranspose2d_groups_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5475581Z test_ConvTranspose2d_groups_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5475924Z test_ConvTranspose2d_groups_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5476230Z test_ConvTranspose2d_no_bias (__main__.TestNN) ... ok (0.017s) 2023-01-11T21:58:37.5476538Z test_ConvTranspose2d_no_bias_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5476854Z test_ConvTranspose2d_no_bias_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5477192Z test_ConvTranspose2d_no_bias_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5477620Z test_ConvTranspose2d_with_long_tensor (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5477952Z test_ConvTranspose2d_with_long_tensor_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5478257Z test_ConvTranspose3d (__main__.TestNN) ... ok (0.021s) 2023-01-11T21:58:37.5478534Z test_ConvTranspose3d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5478829Z test_ConvTranspose3d_dilated (__main__.TestNN) ... ok (0.021s) 2023-01-11T21:58:37.5479136Z test_ConvTranspose3d_dilated_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5479440Z test_CosineEmbeddingLoss (__main__.TestNN) ... ok (0.019s) 2023-01-11T21:58:37.5479741Z test_CosineEmbeddingLoss_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5480097Z test_CosineEmbeddingLoss_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5480452Z test_CosineEmbeddingLoss_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5480767Z test_CosineEmbeddingLoss_margin (__main__.TestNN) ... ok (0.019s) 2023-01-11T21:58:37.5481089Z test_CosineEmbeddingLoss_margin_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5481461Z test_CosineEmbeddingLoss_margin_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5481822Z test_CosineEmbeddingLoss_margin_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5482147Z test_CosineEmbeddingLoss_margin_sum_reduction (__main__.TestNN) ... ok (0.018s) 2023-01-11T21:58:37.5482505Z test_CosineEmbeddingLoss_margin_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5482898Z test_CosineEmbeddingLoss_margin_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5483300Z test_CosineEmbeddingLoss_margin_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5483633Z test_CosineEmbeddingLoss_no_batch_dim_mean (__main__.TestNN) ... ok (0.015s) 2023-01-11T21:58:37.5483982Z test_CosineEmbeddingLoss_no_batch_dim_mean_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5484364Z test_CosineEmbeddingLoss_no_batch_dim_mean_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5484744Z test_CosineEmbeddingLoss_no_batch_dim_mean_cuda_half (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5485070Z test_CosineEmbeddingLoss_no_batch_dim_none (__main__.TestNN) ... ok (0.015s) 2023-01-11T21:58:37.5485415Z test_CosineEmbeddingLoss_no_batch_dim_none_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5485798Z test_CosineEmbeddingLoss_no_batch_dim_none_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5486216Z test_CosineEmbeddingLoss_no_batch_dim_none_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5486543Z test_CosineEmbeddingLoss_no_batch_dim_sum (__main__.TestNN) ... ok (0.015s) 2023-01-11T21:58:37.5486889Z test_CosineEmbeddingLoss_no_batch_dim_sum_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5487271Z test_CosineEmbeddingLoss_no_batch_dim_sum_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5487636Z test_CosineEmbeddingLoss_no_batch_dim_sum_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5487971Z test_CosineEmbeddingLoss_sum_reduction (__main__.TestNN) ... ok (0.019s) 2023-01-11T21:58:37.5488315Z test_CosineEmbeddingLoss_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5488740Z test_CosineEmbeddingLoss_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5489106Z test_CosineEmbeddingLoss_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5489419Z test_CrossEntropyLoss (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5489677Z test_CrossEntropyLoss_2d (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5489986Z test_CrossEntropyLoss_2d_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5490323Z test_CrossEntropyLoss_2d_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5490667Z test_CrossEntropyLoss_2d_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5490983Z test_CrossEntropyLoss_2d_ignore_index (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5491309Z test_CrossEntropyLoss_2d_ignore_index_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5491676Z test_CrossEntropyLoss_2d_ignore_index_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5492040Z test_CrossEntropyLoss_2d_ignore_index_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5492378Z test_CrossEntropyLoss_2d_indices_target_smoothing (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5492728Z test_CrossEntropyLoss_2d_indices_target_smoothing_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5493123Z test_CrossEntropyLoss_2d_indices_target_smoothing_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5493515Z test_CrossEntropyLoss_2d_indices_target_smoothing_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5493880Z test_CrossEntropyLoss_2d_indices_target_smoothing_ignore_index (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5494258Z test_CrossEntropyLoss_2d_indices_target_smoothing_ignore_index_cuda_double (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5494677Z test_CrossEntropyLoss_2d_indices_target_smoothing_ignore_index_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5495091Z test_CrossEntropyLoss_2d_indices_target_smoothing_ignore_index_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5495469Z test_CrossEntropyLoss_2d_indices_target_smoothing_sum_reduction (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5495842Z test_CrossEntropyLoss_2d_indices_target_smoothing_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5496260Z test_CrossEntropyLoss_2d_indices_target_smoothing_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5496676Z test_CrossEntropyLoss_2d_indices_target_smoothing_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5497086Z test_CrossEntropyLoss_2d_indices_target_smoothing_weight (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5497449Z test_CrossEntropyLoss_2d_indices_target_smoothing_weight_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5497854Z test_CrossEntropyLoss_2d_indices_target_smoothing_weight_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5498261Z test_CrossEntropyLoss_2d_indices_target_smoothing_weight_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5498599Z test_CrossEntropyLoss_2d_prob_target (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5498919Z test_CrossEntropyLoss_2d_prob_target_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5499338Z test_CrossEntropyLoss_2d_prob_target_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5499705Z test_CrossEntropyLoss_2d_prob_target_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5500027Z test_CrossEntropyLoss_2d_prob_target_smoothing (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5500380Z test_CrossEntropyLoss_2d_prob_target_smoothing_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5500763Z test_CrossEntropyLoss_2d_prob_target_smoothing_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5501145Z test_CrossEntropyLoss_2d_prob_target_smoothing_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5501493Z test_CrossEntropyLoss_2d_prob_target_smoothing_sum_reduction (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5501874Z test_CrossEntropyLoss_2d_prob_target_smoothing_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5502294Z test_CrossEntropyLoss_2d_prob_target_smoothing_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5502870Z test_CrossEntropyLoss_2d_prob_target_smoothing_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5503225Z test_CrossEntropyLoss_2d_prob_target_smoothing_weight (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5503600Z test_CrossEntropyLoss_2d_prob_target_smoothing_weight_cuda_double (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5504002Z test_CrossEntropyLoss_2d_prob_target_smoothing_weight_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5504408Z test_CrossEntropyLoss_2d_prob_target_smoothing_weight_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5504761Z test_CrossEntropyLoss_2d_prob_target_sum_reduction (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5505126Z test_CrossEntropyLoss_2d_prob_target_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5505518Z test_CrossEntropyLoss_2d_prob_target_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5505909Z test_CrossEntropyLoss_2d_prob_target_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5506240Z test_CrossEntropyLoss_2d_prob_target_weights (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5506590Z test_CrossEntropyLoss_2d_prob_target_weights_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5506976Z test_CrossEntropyLoss_2d_prob_target_weights_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5507361Z test_CrossEntropyLoss_2d_prob_target_weights_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5507764Z test_CrossEntropyLoss_2d_prob_target_weights_sum_reduction (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5508140Z test_CrossEntropyLoss_2d_prob_target_weights_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5508557Z test_CrossEntropyLoss_2d_prob_target_weights_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5508955Z test_CrossEntropyLoss_2d_prob_target_weights_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5509300Z test_CrossEntropyLoss_2d_sum_reduction (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5509640Z test_CrossEntropyLoss_2d_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5510057Z test_CrossEntropyLoss_2d_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5510418Z test_CrossEntropyLoss_2d_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5510745Z test_CrossEntropyLoss_2d_weights (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5511072Z test_CrossEntropyLoss_2d_weights_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5511435Z test_CrossEntropyLoss_2d_weights_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5511780Z test_CrossEntropyLoss_2d_weights_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5512119Z test_CrossEntropyLoss_3d_indices_target_smoothing (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5512484Z test_CrossEntropyLoss_3d_indices_target_smoothing_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5512880Z test_CrossEntropyLoss_3d_indices_target_smoothing_cuda_float (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5513265Z test_CrossEntropyLoss_3d_indices_target_smoothing_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5513627Z test_CrossEntropyLoss_3d_indices_target_smoothing_ignore_index (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5514010Z test_CrossEntropyLoss_3d_indices_target_smoothing_ignore_index_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5514425Z test_CrossEntropyLoss_3d_indices_target_smoothing_ignore_index_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5514822Z test_CrossEntropyLoss_3d_indices_target_smoothing_ignore_index_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5515200Z test_CrossEntropyLoss_3d_indices_target_smoothing_sum_reduction (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5515589Z test_CrossEntropyLoss_3d_indices_target_smoothing_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5516009Z test_CrossEntropyLoss_3d_indices_target_smoothing_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5516413Z test_CrossEntropyLoss_3d_indices_target_smoothing_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5516805Z test_CrossEntropyLoss_3d_indices_target_smoothing_sum_reduction_ignore_index (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5517217Z test_CrossEntropyLoss_3d_indices_target_smoothing_sum_reduction_ignore_index_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5517719Z test_CrossEntropyLoss_3d_indices_target_smoothing_sum_reduction_ignore_index_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5518192Z test_CrossEntropyLoss_3d_indices_target_smoothing_sum_reduction_ignore_index_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5518556Z test_CrossEntropyLoss_3d_prob_target (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5518893Z test_CrossEntropyLoss_3d_prob_target_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5519246Z test_CrossEntropyLoss_3d_prob_target_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5519611Z test_CrossEntropyLoss_3d_prob_target_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5519947Z test_CrossEntropyLoss_3d_prob_target_smoothing (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5520300Z test_CrossEntropyLoss_3d_prob_target_smoothing_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5520708Z test_CrossEntropyLoss_3d_prob_target_smoothing_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5521096Z test_CrossEntropyLoss_3d_prob_target_smoothing_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5521457Z test_CrossEntropyLoss_3d_prob_target_smoothing_sum_reduction (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5521840Z test_CrossEntropyLoss_3d_prob_target_smoothing_sum_reduction_cuda_double (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5522241Z test_CrossEntropyLoss_3d_prob_target_smoothing_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5522653Z test_CrossEntropyLoss_3d_prob_target_smoothing_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5523019Z test_CrossEntropyLoss_3d_prob_target_sum_reduction (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5523382Z test_CrossEntropyLoss_3d_prob_target_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5523765Z test_CrossEntropyLoss_3d_prob_target_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5524159Z test_CrossEntropyLoss_3d_prob_target_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5524503Z test_CrossEntropyLoss_3d_prob_target_weights (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5524851Z test_CrossEntropyLoss_3d_prob_target_weights_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5525228Z test_CrossEntropyLoss_3d_prob_target_weights_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5525609Z test_CrossEntropyLoss_3d_prob_target_weights_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5525969Z test_CrossEntropyLoss_3d_prob_target_weights_sum_reduction (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5526351Z test_CrossEntropyLoss_3d_prob_target_weights_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5526743Z test_CrossEntropyLoss_3d_prob_target_weights_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5527155Z test_CrossEntropyLoss_3d_prob_target_weights_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5527507Z test_CrossEntropyLoss_4d_prob_target (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5527829Z test_CrossEntropyLoss_4d_prob_target_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5528197Z test_CrossEntropyLoss_4d_prob_target_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5528568Z test_CrossEntropyLoss_4d_prob_target_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5528944Z test_CrossEntropyLoss_4d_prob_target_sum_reduction (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5529298Z test_CrossEntropyLoss_4d_prob_target_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5529686Z test_CrossEntropyLoss_4d_prob_target_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5530075Z test_CrossEntropyLoss_4d_prob_target_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5530421Z test_CrossEntropyLoss_4d_prob_target_weights (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5530760Z test_CrossEntropyLoss_4d_prob_target_weights_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5531144Z test_CrossEntropyLoss_4d_prob_target_weights_cuda_float (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5531571Z test_CrossEntropyLoss_4d_prob_target_weights_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5531926Z test_CrossEntropyLoss_4d_prob_target_weights_sum_reduction (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5532293Z test_CrossEntropyLoss_4d_prob_target_weights_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5532705Z test_CrossEntropyLoss_4d_prob_target_weights_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5533118Z test_CrossEntropyLoss_4d_prob_target_weights_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5533492Z test_CrossEntropyLoss_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5533818Z test_CrossEntropyLoss_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5534160Z test_CrossEntropyLoss_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5534464Z test_CrossEntropyLoss_dim_is_3 (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5534768Z test_CrossEntropyLoss_dim_is_3_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5535119Z test_CrossEntropyLoss_dim_is_3_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5535472Z test_CrossEntropyLoss_dim_is_3_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5535798Z test_CrossEntropyLoss_dim_is_3_sum_reduction (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5536129Z test_CrossEntropyLoss_dim_is_3_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5536503Z test_CrossEntropyLoss_dim_is_3_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5536886Z test_CrossEntropyLoss_dim_is_3_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5537212Z test_CrossEntropyLoss_higher_dim (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5537526Z test_CrossEntropyLoss_higher_dim_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5537891Z test_CrossEntropyLoss_higher_dim_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5538246Z test_CrossEntropyLoss_higher_dim_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5538578Z test_CrossEntropyLoss_higher_dim_sum_reduction (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5538920Z test_CrossEntropyLoss_higher_dim_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5539314Z test_CrossEntropyLoss_higher_dim_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5539737Z test_CrossEntropyLoss_higher_dim_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5540049Z test_CrossEntropyLoss_weights (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5540366Z test_CrossEntropyLoss_weights_cuda_double (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5540721Z test_CrossEntropyLoss_weights_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5541073Z test_CrossEntropyLoss_weights_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5541357Z test_CrossMapLRN2d (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5541643Z test_CrossMapLRN2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5541912Z test_ELU (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5542192Z test_ELU_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5542609Z test_ELU_no_batch_dim (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5542892Z test_ELU_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5543168Z test_ELU_scalar (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5543429Z test_ELU_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5543702Z test_Embedding (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5543964Z test_EmbeddingBag_discontiguous (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5544275Z test_EmbeddingBag_discontiguous_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5544575Z test_EmbeddingBag_max (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5544863Z test_EmbeddingBag_max_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5545170Z test_EmbeddingBag_max_padding_idx (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5545484Z test_EmbeddingBag_max_padding_idx_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5545782Z test_EmbeddingBag_mean (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5546071Z test_EmbeddingBag_mean_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5546361Z test_EmbeddingBag_mean_padding_idx (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5546682Z test_EmbeddingBag_mean_padding_idx_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5546985Z test_EmbeddingBag_sparse (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5547280Z test_EmbeddingBag_sparse_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5547554Z test_EmbeddingBag_sum (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5547842Z test_EmbeddingBag_sum_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5548143Z test_EmbeddingBag_sum_padding_idx (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5548447Z test_EmbeddingBag_sum_padding_idx_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5548772Z test_Embedding_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5549056Z test_Embedding_discontiguous (__main__.TestNN) ... ok (0.013s) 2023-01-11T21:58:37.5549362Z test_Embedding_discontiguous_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5549640Z test_Embedding_sparse (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5549927Z test_Embedding_sparse_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5550198Z test_Flatten (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5550452Z test_Flatten_cuda (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5550728Z test_Flatten_no_batch_dim (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5551089Z test_Flatten_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5551346Z test_Fold (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5551605Z test_Fold_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5551873Z test_Fold_int_input (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5552152Z test_Fold_int_input_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5552427Z test_Fold_no_batch_dim_input (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5552726Z test_Fold_no_batch_dim_input_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5553027Z test_Fold_no_batch_dim_int_input (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5553323Z test_Fold_no_batch_dim_int_input_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5553680Z test_FractionalMaxPool2d_ratio (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5554004Z test_FractionalMaxPool2d_ratio_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5554331Z test_FractionalMaxPool2d_ratio_no_batch_dim (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5554662Z test_FractionalMaxPool2d_ratio_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5555020Z test_FractionalMaxPool2d_ratio_no_batch_dim_no_random_samples (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5555403Z test_FractionalMaxPool2d_ratio_no_batch_dim_no_random_samples_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5555760Z test_FractionalMaxPool2d_ratio_return_indices (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5556097Z test_FractionalMaxPool2d_ratio_return_indices_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5556433Z test_FractionalMaxPool2d_size (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5556747Z test_FractionalMaxPool2d_size_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5557060Z test_FractionalMaxPool2d_size_no_batch_dim (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5557457Z test_FractionalMaxPool2d_size_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5557819Z test_FractionalMaxPool2d_size_no_batch_dim_no_random_samples (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5558199Z test_FractionalMaxPool2d_size_no_batch_dim_no_random_samples_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5558530Z test_FractionalMaxPool3d_asymsize (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5558861Z test_FractionalMaxPool3d_asymsize_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5559178Z test_FractionalMaxPool3d_ratio (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5559487Z test_FractionalMaxPool3d_ratio_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5559815Z test_FractionalMaxPool3d_ratio_no_batch_dim (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5560161Z test_FractionalMaxPool3d_ratio_no_batch_dim_cuda (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5560521Z test_FractionalMaxPool3d_ratio_no_batch_dim_no_random_samples (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5560887Z test_FractionalMaxPool3d_ratio_no_batch_dim_no_random_samples_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5561243Z test_FractionalMaxPool3d_ratio_return_indices (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5561596Z test_FractionalMaxPool3d_ratio_return_indices_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5561921Z test_FractionalMaxPool3d_size (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5562300Z test_FractionalMaxPool3d_size_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5562625Z test_FractionalMaxPool3d_size_no_batch_dim (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5562967Z test_FractionalMaxPool3d_size_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5563315Z test_FractionalMaxPool3d_size_no_batch_dim_no_random_samples (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5563687Z test_FractionalMaxPool3d_size_no_batch_dim_no_random_samples_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5563994Z test_GELU (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5564256Z test_GELU_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5564514Z test_GELU_no_batch_dim (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5564829Z test_GELU_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5565105Z test_GELU_scalar (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5565372Z test_GELU_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5565635Z test_GLU (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5565889Z test_GLU_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5566143Z test_GLU_dim (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5566393Z test_GLU_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5566661Z test_GLU_no_batch_dim (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5566941Z test_GLU_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5567214Z test_GroupNorm_1d_affine (__main__.TestNN) ... ok (0.022s) 2023-01-11T21:58:37.5567472Z test_GroupNorm_1d_affine_GN (__main__.TestNN) ... ok (0.021s) 2023-01-11T21:58:37.5567776Z test_GroupNorm_1d_affine_GN_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5568077Z test_GroupNorm_1d_affine_GN_eval (__main__.TestNN) ... ok (0.021s) 2023-01-11T21:58:37.5568372Z test_GroupNorm_1d_affine_GN_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5568701Z test_GroupNorm_1d_affine_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5568990Z test_GroupNorm_1d_affine_eval (__main__.TestNN) ... ok (0.023s) 2023-01-11T21:58:37.5569282Z test_GroupNorm_1d_affine_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5569619Z test_GroupNorm_1d_affine_large_batch_cuda (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5569968Z test_GroupNorm_1d_affine_large_batch_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5570278Z test_GroupNorm_1d_no_affine_IN (__main__.TestNN) ... ok (0.012s) 2023-01-11T21:58:37.5570570Z test_GroupNorm_1d_no_affine_IN_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5570873Z test_GroupNorm_1d_no_affine_IN_eval (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5571195Z test_GroupNorm_1d_no_affine_IN_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5571486Z test_GroupNorm_1d_no_affine_LN (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5571795Z test_GroupNorm_1d_no_affine_LN_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5572097Z test_GroupNorm_1d_no_affine_LN_eval (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5572409Z test_GroupNorm_1d_no_affine_LN_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5572698Z test_GroupNorm_2d_affine (__main__.TestNN) ... ok (0.023s) 2023-01-11T21:58:37.5572996Z test_GroupNorm_2d_affine_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5573327Z test_GroupNorm_2d_affine_eval (__main__.TestNN) ... ok (0.024s) 2023-01-11T21:58:37.5573623Z test_GroupNorm_2d_affine_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5573974Z test_GroupNorm_2d_affine_large_feature_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5574336Z test_GroupNorm_2d_affine_large_feature_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5574651Z test_GroupNorm_2d_no_affine_IN (__main__.TestNN) ... ok (0.012s) 2023-01-11T21:58:37.5574943Z test_GroupNorm_2d_no_affine_IN_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5575248Z test_GroupNorm_2d_no_affine_IN_eval (__main__.TestNN) ... ok (0.012s) 2023-01-11T21:58:37.5575563Z test_GroupNorm_2d_no_affine_IN_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5575904Z test_GroupNorm_2d_no_affine_LN (__main__.TestNN) ... ok (0.012s) 2023-01-11T21:58:37.5576198Z test_GroupNorm_2d_no_affine_LN_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5576505Z test_GroupNorm_2d_no_affine_LN_eval (__main__.TestNN) ... ok (0.012s) 2023-01-11T21:58:37.5576823Z test_GroupNorm_2d_no_affine_LN_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5577165Z test_GroupNorm_2d_no_affine_large_feature_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5577533Z test_GroupNorm_2d_no_affine_large_feature_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5577832Z test_Hardshrink (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5578109Z test_Hardshrink_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5578383Z test_Hardshrink_no_batch_dim (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5578688Z test_Hardshrink_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5578976Z test_Hardshrink_scalar (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5579258Z test_Hardshrink_scalar_cuda (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5579549Z test_Hardsigmoid_no_batch_dim (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5579856Z test_Hardsigmoid_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5580158Z test_Hardswish_no_batch_dim (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5580442Z test_Hardswish_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5580722Z test_Hardtanh (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5580993Z test_Hardtanh_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5581265Z test_Hardtanh_no_batch_dim (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5581560Z test_Hardtanh_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5581843Z test_Hardtanh_scalar (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5582131Z test_Hardtanh_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5582564Z test_HingeEmbeddingLoss (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5582883Z test_HingeEmbeddingLoss_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5583234Z test_HingeEmbeddingLoss_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5583570Z test_HingeEmbeddingLoss_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5583882Z test_HingeEmbeddingLoss_margin (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5584210Z test_HingeEmbeddingLoss_margin_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5584643Z test_HingeEmbeddingLoss_margin_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5584990Z test_HingeEmbeddingLoss_margin_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5585319Z test_HingeEmbeddingLoss_margin_no_reduce (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5585665Z test_HingeEmbeddingLoss_margin_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5585993Z test_HingeEmbeddingLoss_margin_sum_reduction (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5586349Z test_HingeEmbeddingLoss_margin_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5586745Z test_HingeEmbeddingLoss_margin_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5587172Z test_HingeEmbeddingLoss_margin_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5587505Z test_HingeEmbeddingLoss_no_batch_dim_mean (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5587852Z test_HingeEmbeddingLoss_no_batch_dim_mean_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5588236Z test_HingeEmbeddingLoss_no_batch_dim_mean_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5588611Z test_HingeEmbeddingLoss_no_batch_dim_mean_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5588930Z test_HingeEmbeddingLoss_no_batch_dim_none (__main__.TestNN) ... 
ok (0.005s) 2023-01-11T21:58:37.5589273Z test_HingeEmbeddingLoss_no_batch_dim_none_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5589646Z test_HingeEmbeddingLoss_no_batch_dim_none_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5590031Z test_HingeEmbeddingLoss_no_batch_dim_none_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5590350Z test_HingeEmbeddingLoss_no_batch_dim_sum (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5590689Z test_HingeEmbeddingLoss_no_batch_dim_sum_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5591062Z test_HingeEmbeddingLoss_no_batch_dim_sum_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5591424Z test_HingeEmbeddingLoss_no_batch_dim_sum_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5591750Z test_HingeEmbeddingLoss_no_reduce (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5592078Z test_HingeEmbeddingLoss_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5592396Z test_HingeEmbeddingLoss_scalar_margin (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5592728Z test_HingeEmbeddingLoss_scalar_margin_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5593103Z test_HingeEmbeddingLoss_scalar_margin_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5593477Z test_HingeEmbeddingLoss_scalar_margin_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5593824Z test_HingeEmbeddingLoss_scalar_margin_sum_reduction (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5594179Z test_HingeEmbeddingLoss_scalar_margin_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5594585Z test_HingeEmbeddingLoss_scalar_margin_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5594985Z test_HingeEmbeddingLoss_scalar_margin_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5595366Z test_HingeEmbeddingLoss_sum_reduction (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5595690Z test_HingeEmbeddingLoss_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5596065Z test_HingeEmbeddingLoss_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5596438Z test_HingeEmbeddingLoss_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5596727Z test_HuberLoss (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5597013Z test_HuberLoss_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5597388Z test_HuberLoss_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5597714Z test_HuberLoss_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5598049Z test_HuberLoss_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5598331Z test_HuberLoss_delta (__main__.TestNN) ... ok (0.075s) 2023-01-11T21:58:37.5598619Z test_HuberLoss_delta_cuda (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5598899Z test_HuberLoss_no_batch_dim_mean (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5599217Z test_HuberLoss_no_batch_dim_mean_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5599572Z test_HuberLoss_no_batch_dim_mean_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5599923Z test_HuberLoss_no_batch_dim_mean_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5600217Z test_HuberLoss_no_batch_dim_none (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5600531Z test_HuberLoss_no_batch_dim_none_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5600887Z test_HuberLoss_no_batch_dim_none_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5601234Z test_HuberLoss_no_batch_dim_none_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5601522Z test_HuberLoss_no_batch_dim_sum (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5601836Z test_HuberLoss_no_batch_dim_sum_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5602181Z test_HuberLoss_no_batch_dim_sum_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5602513Z test_HuberLoss_no_batch_dim_sum_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5602812Z test_HuberLoss_sum_reduction (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5603123Z test_HuberLoss_sum_reduction_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5603479Z test_HuberLoss_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5603814Z test_HuberLoss_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5604159Z test_HuberLoss_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5604453Z test_InstanceNorm1d (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5604729Z test_InstanceNorm1d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5605019Z test_InstanceNorm1d_eval (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5605313Z test_InstanceNorm1d_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5605615Z test_InstanceNorm1d_no_batch_dim (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5605915Z test_InstanceNorm1d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5606261Z test_InstanceNorm1d_no_batch_dim_eval (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5606583Z test_InstanceNorm1d_no_batch_dim_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5606886Z test_InstanceNorm1d_tracking_stats (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5607206Z test_InstanceNorm1d_tracking_stats_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5607524Z test_InstanceNorm1d_tracking_stats_eval (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5607855Z test_InstanceNorm1d_tracking_stats_eval_cuda (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5608176Z test_InstanceNorm1d_tracking_stats_no_batch_dim (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5608523Z test_InstanceNorm1d_tracking_stats_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5608909Z test_InstanceNorm1d_tracking_stats_no_batch_dim_eval (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5609265Z test_InstanceNorm1d_tracking_stats_no_batch_dim_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5609564Z test_InstanceNorm2d (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5609850Z test_InstanceNorm2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5610138Z test_InstanceNorm2d_eval (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5610424Z test_InstanceNorm2d_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5610724Z test_InstanceNorm2d_no_batch_dim (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5611036Z test_InstanceNorm2d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5611348Z test_InstanceNorm2d_no_batch_dim_eval (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5611656Z test_InstanceNorm2d_no_batch_dim_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5611979Z test_InstanceNorm2d_tracking_stats (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5612300Z test_InstanceNorm2d_tracking_stats_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5612610Z test_InstanceNorm2d_tracking_stats_eval (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5612936Z test_InstanceNorm2d_tracking_stats_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5613268Z test_InstanceNorm2d_tracking_stats_no_batch_dim (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5613609Z test_InstanceNorm2d_tracking_stats_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5613940Z test_InstanceNorm2d_tracking_stats_no_batch_dim_eval (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5614290Z test_InstanceNorm2d_tracking_stats_no_batch_dim_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5614610Z test_InstanceNorm3d (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5614883Z test_InstanceNorm3d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5615170Z test_InstanceNorm3d_eval (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5615472Z test_InstanceNorm3d_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5615773Z test_InstanceNorm3d_no_batch_dim (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5616072Z test_InstanceNorm3d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5635253Z test_InstanceNorm3d_no_batch_dim_eval (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5635706Z test_InstanceNorm3d_no_batch_dim_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5636031Z test_InstanceNorm3d_tracking_stats (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5636495Z test_InstanceNorm3d_tracking_stats_cuda (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5636814Z test_InstanceNorm3d_tracking_stats_eval (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5637143Z test_InstanceNorm3d_tracking_stats_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5637552Z test_InstanceNorm3d_tracking_stats_no_batch_dim (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5637886Z test_InstanceNorm3d_tracking_stats_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5638233Z test_InstanceNorm3d_tracking_stats_no_batch_dim_eval (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5638586Z test_InstanceNorm3d_tracking_stats_no_batch_dim_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5639613Z test_KLDivLoss (__main__.TestNN) ... /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:2918: UserWarning: reduction: 'mean' divides the total loss by both the batch size and the support size.'batchmean' divides only by the batch size, and aligns with the KL div math definition.'mean' will be changed to behave the same as 'batchmean' in the next major release. 2023-01-11T21:58:37.5640129Z warnings.warn( 2023-01-11T21:58:37.5640850Z /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:2918: UserWarning: reduction: 'mean' divides the total loss by both the batch size and the support size.'batchmean' divides only by the batch size, and aligns with the KL div math definition.'mean' will be changed to behave the same as 'batchmean' in the next major release. 2023-01-11T21:58:37.5641327Z warnings.warn( 2023-01-11T21:58:37.5642039Z /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:2918: UserWarning: reduction: 'mean' divides the total loss by both the batch size and the support size.'batchmean' divides only by the batch size, and aligns with the KL div math definition.'mean' will be changed to behave the same as 'batchmean' in the next major release. 2023-01-11T21:58:37.5642492Z warnings.warn( 2023-01-11T21:58:37.5642662Z ok (0.005s) 2023-01-11T21:58:37.5642879Z test_KLDivLoss_batch_mean (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5643156Z test_KLDivLoss_batch_mean_log_target (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5643451Z test_KLDivLoss_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5643775Z test_KLDivLoss_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5644096Z test_KLDivLoss_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5644937Z test_KLDivLoss_log_target (__main__.TestNN) ... /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:2918: UserWarning: reduction: 'mean' divides the total loss by both the batch size and the support size.'batchmean' divides only by the batch size, and aligns with the KL div math definition.'mean' will be changed to behave the same as 'batchmean' in the next major release. 2023-01-11T21:58:37.5645453Z warnings.warn( 2023-01-11T21:58:37.5646163Z /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:2918: UserWarning: reduction: 'mean' divides the total loss by both the batch size and the support size.'batchmean' divides only by the batch size, and aligns with the KL div math definition.'mean' will be changed to behave the same as 'batchmean' in the next major release. 
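The UserWarning above is emitted by torch.nn.functional.kl_div and concerns its reduction argument. As a minimal sketch (not part of this log; it assumes a standard PyTorch install where reduction='batchmean' is available) of the behaviour the warning describes:

    import torch
    import torch.nn.functional as F

    # Model log-probabilities and a target distribution over 5 classes.
    log_probs = torch.log_softmax(torch.randn(4, 5), dim=1)
    target = torch.softmax(torch.randn(4, 5), dim=1)

    # 'mean' divides the summed loss by every element (batch * classes),
    # which is what the warning flags; 'batchmean' divides by the batch
    # size only and matches the mathematical definition of KL divergence.
    loss_mean = F.kl_div(log_probs, target, reduction='mean')
    loss_batchmean = F.kl_div(log_probs, target, reduction='batchmean')
    loss_sum = F.kl_div(log_probs, target, reduction='sum')

    # 'batchmean' equals the summed loss divided by the batch size.
    assert torch.allclose(loss_batchmean, loss_sum / log_probs.shape[0])
    print(loss_mean.item(), loss_batchmean.item())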
2023-01-11T21:58:37.5646631Z warnings.warn( 2023-01-11T21:58:37.5646797Z ok (0.005s) 2023-01-11T21:58:37.5647048Z test_KLDivLoss_log_target_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5647390Z test_KLDivLoss_log_target_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5647733Z test_KLDivLoss_log_target_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5648077Z test_KLDivLoss_log_target_sum_reduction (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5648414Z test_KLDivLoss_log_target_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5648790Z test_KLDivLoss_log_target_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5649161Z test_KLDivLoss_log_target_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5649468Z test_KLDivLoss_no_batch_dim_mean (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5649787Z test_KLDivLoss_no_batch_dim_mean_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5650142Z test_KLDivLoss_no_batch_dim_mean_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5650494Z test_KLDivLoss_no_batch_dim_mean_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5650824Z test_KLDivLoss_no_batch_dim_none (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5651145Z test_KLDivLoss_no_batch_dim_none_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5651498Z test_KLDivLoss_no_batch_dim_none_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5651835Z test_KLDivLoss_no_batch_dim_none_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5652139Z test_KLDivLoss_no_batch_dim_sum (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5652450Z test_KLDivLoss_no_batch_dim_sum_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5652803Z test_KLDivLoss_no_batch_dim_sum_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5653139Z test_KLDivLoss_no_batch_dim_sum_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5653441Z test_KLDivLoss_no_reduce (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5653733Z test_KLDivLoss_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5654023Z test_KLDivLoss_no_reduce_log_target (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5654341Z test_KLDivLoss_no_reduce_log_target_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5654650Z test_KLDivLoss_no_reduce_scalar (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5654959Z test_KLDivLoss_no_reduce_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5655267Z test_KLDivLoss_no_reduce_scalar_log_target (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5655599Z test_KLDivLoss_no_reduce_scalar_log_target_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5656475Z test_KLDivLoss_scalar (__main__.TestNN) ... 
/opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:2918: UserWarning: reduction: 'mean' divides the total loss by both the batch size and the support size.'batchmean' divides only by the batch size, and aligns with the KL div math definition.'mean' will be changed to behave the same as 'batchmean' in the next major release. 2023-01-11T21:58:37.5656993Z warnings.warn( 2023-01-11T21:58:37.5657695Z /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:2918: UserWarning: reduction: 'mean' divides the total loss by both the batch size and the support size.'batchmean' divides only by the batch size, and aligns with the KL div math definition.'mean' will be changed to behave the same as 'batchmean' in the next major release. 2023-01-11T21:58:37.5658169Z warnings.warn( 2023-01-11T21:58:37.5658878Z /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:2918: UserWarning: reduction: 'mean' divides the total loss by both the batch size and the support size.'batchmean' divides only by the batch size, and aligns with the KL div math definition.'mean' will be changed to behave the same as 'batchmean' in the next major release. 2023-01-11T21:58:37.5659378Z warnings.warn( 2023-01-11T21:58:37.5659544Z ok (0.004s) 2023-01-11T21:58:37.5659789Z test_KLDivLoss_scalar_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5660131Z test_KLDivLoss_scalar_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5660467Z test_KLDivLoss_scalar_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5661324Z test_KLDivLoss_scalar_log_target (__main__.TestNN) ... /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:2918: UserWarning: reduction: 'mean' divides the total loss by both the batch size and the support size.'batchmean' divides only by the batch size, and aligns with the KL div math definition.'mean' will be changed to behave the same as 'batchmean' in the next major release. 2023-01-11T21:58:37.5661875Z warnings.warn( 2023-01-11T21:58:37.5662713Z /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:2918: UserWarning: reduction: 'mean' divides the total loss by both the batch size and the support size.'batchmean' divides only by the batch size, and aligns with the KL div math definition.'mean' will be changed to behave the same as 'batchmean' in the next major release. 2023-01-11T21:58:37.5663174Z warnings.warn( 2023-01-11T21:58:37.5663342Z ok (0.005s) 2023-01-11T21:58:37.5663599Z test_KLDivLoss_scalar_log_target_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5663961Z test_KLDivLoss_scalar_log_target_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5664317Z test_KLDivLoss_scalar_log_target_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5664634Z test_KLDivLoss_scalar_log_target_sum_reduction (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5664988Z test_KLDivLoss_scalar_log_target_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5665372Z test_KLDivLoss_scalar_log_target_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5665751Z test_KLDivLoss_scalar_log_target_sum_reduction_cuda_half (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5666069Z test_KLDivLoss_scalar_sum_reduction (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5666397Z test_KLDivLoss_scalar_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5666762Z test_KLDivLoss_scalar_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5667125Z test_KLDivLoss_scalar_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5667427Z test_KLDivLoss_sum_reduction (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5667743Z test_KLDivLoss_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5668091Z test_KLDivLoss_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5668427Z test_KLDivLoss_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5668742Z test_KLDivLoss_with_log_target_no_reduce (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5669068Z test_KLDivLoss_with_log_target_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5669386Z test_KLDivLoss_with_target_no_reduce (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5669693Z test_KLDivLoss_with_target_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5669979Z test_L1Loss (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5670311Z test_L1Loss_cuda_cdouble (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5670611Z test_L1Loss_cuda_cfloat (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5670922Z test_L1Loss_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5671239Z test_L1Loss_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5671545Z test_L1Loss_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5671815Z test_L1Loss_no_batch_dim_mean (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5672119Z test_L1Loss_no_batch_dim_mean_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5672465Z test_L1Loss_no_batch_dim_mean_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5672847Z test_L1Loss_no_batch_dim_mean_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5673137Z test_L1Loss_no_batch_dim_none (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5673442Z test_L1Loss_no_batch_dim_none_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5673784Z test_L1Loss_no_batch_dim_none_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5674113Z test_L1Loss_no_batch_dim_none_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5674408Z test_L1Loss_no_batch_dim_sum (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5674711Z test_L1Loss_no_batch_dim_sum_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5675054Z test_L1Loss_no_batch_dim_sum_cuda_float (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5675380Z test_L1Loss_no_batch_dim_sum_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5675674Z test_L1Loss_no_reduce (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5675932Z test_L1Loss_no_reduce_complex (__main__.TestNN) ... ok (0.015s) 2023-01-11T21:58:37.5676223Z test_L1Loss_no_reduce_complex_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5676546Z test_L1Loss_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5676829Z test_L1Loss_no_reduce_scalar (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5677127Z test_L1Loss_no_reduce_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5677458Z test_L1Loss_scalar (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5677751Z test_L1Loss_scalar_cuda_cdouble (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5678078Z test_L1Loss_scalar_cuda_cfloat (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5678234Z test_L1Loss_scalar_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5678387Z test_L1Loss_scalar_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5678526Z test_L1Loss_scalar_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5678630Z test_LPPool1d (__main__.TestNN) ... ok (0.014s) 2023-01-11T21:58:37.5678774Z test_LPPool1d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5678893Z test_LPPool1d_no_batch_dim (__main__.TestNN) ... ok (0.015s) 2023-01-11T21:58:37.5679050Z test_LPPool1d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5679159Z test_LPPool1d_norm (__main__.TestNN) ... ok (0.014s) 2023-01-11T21:58:37.5679307Z test_LPPool1d_norm_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5679412Z test_LPPool2d (__main__.TestNN) ... ok (0.013s) 2023-01-11T21:58:37.5679582Z test_LPPool2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5679690Z test_LPPool2d_norm (__main__.TestNN) ... ok (0.015s) 2023-01-11T21:58:37.5679837Z test_LPPool2d_norm_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5679942Z test_LSTM_cell (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5680069Z test_LSTM_cell_forward_hidden_size (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5680196Z test_LSTM_cell_forward_input_size (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5680327Z test_LayerNorm_1d_elementwise_affine (__main__.TestNN) ... ok (0.016s) 2023-01-11T21:58:37.5680487Z test_LayerNorm_1d_elementwise_affine_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5680621Z test_LayerNorm_1d_elementwise_affine_eval (__main__.TestNN) ... ok (0.016s) 2023-01-11T21:58:37.5680828Z test_LayerNorm_1d_elementwise_affine_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5680972Z test_LayerNorm_1d_empty_elementwise_affine (__main__.TestNN) ... ok (0.014s) 2023-01-11T21:58:37.5681150Z test_LayerNorm_1d_empty_elementwise_affine_cuda (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5681294Z test_LayerNorm_1d_empty_elementwise_affine_eval (__main__.TestNN) ... ok (0.015s) 2023-01-11T21:58:37.5681475Z test_LayerNorm_1d_empty_elementwise_affine_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5681607Z test_LayerNorm_1d_no_elementwise_affine (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5681778Z test_LayerNorm_1d_no_elementwise_affine_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5681906Z test_LayerNorm_1d_no_elementwise_affine_eval (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5682080Z test_LayerNorm_1d_no_elementwise_affine_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5682216Z test_LayerNorm_3d_elementwise_affine (__main__.TestNN) ... ok (0.017s) 2023-01-11T21:58:37.5682384Z test_LayerNorm_3d_elementwise_affine_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5682517Z test_LayerNorm_3d_elementwise_affine_eval (__main__.TestNN) ... ok (0.018s) 2023-01-11T21:58:37.5682690Z test_LayerNorm_3d_elementwise_affine_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5682824Z test_LayerNorm_3d_no_affine_large_feature (__main__.TestNN) ... ok (0.855s) 2023-01-11T21:58:37.5682994Z test_LayerNorm_3d_no_affine_large_feature_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5683133Z test_LayerNorm_3d_no_affine_large_feature_eval (__main__.TestNN) ... ok (0.791s) 2023-01-11T21:58:37.5683296Z test_LayerNorm_3d_no_affine_large_feature_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5683433Z test_LayerNorm_3d_no_elementwise_affine (__main__.TestNN) ... ok (0.013s) 2023-01-11T21:58:37.5683602Z test_LayerNorm_3d_no_elementwise_affine_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5683743Z test_LayerNorm_3d_no_elementwise_affine_eval (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5683916Z test_LayerNorm_3d_no_elementwise_affine_eval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5684020Z test_LeakyReLU (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5684164Z test_LeakyReLU_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5684282Z test_LeakyReLU_no_batch_dim (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5684425Z test_LeakyReLU_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5684544Z test_LeakyReLU_with_negval (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5684703Z test_LeakyReLU_with_negval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5684861Z test_LeakyReLU_with_negval_scalar (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5685024Z test_LeakyReLU_with_negval_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5685148Z test_LeakyReLU_with_zero_negval (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5685311Z test_LeakyReLU_with_zero_negval_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5685412Z test_Linear (__main__.TestNN) ... ok (0.018s) 2023-01-11T21:58:37.5685537Z test_Linear_cuda (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5685651Z test_Linear_no_batch_dim (__main__.TestNN) ... ok (0.014s) 2023-01-11T21:58:37.5685800Z test_Linear_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5685909Z test_Linear_no_bias (__main__.TestNN) ... ok (0.015s) 2023-01-11T21:58:37.5686104Z test_Linear_no_bias_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5686227Z test_LocalResponseNorm_1d (__main__.TestNN) ... ok (0.012s) 2023-01-11T21:58:37.5686382Z test_LocalResponseNorm_1d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5686517Z test_LocalResponseNorm_2d_uneven_pad (__main__.TestNN) ... ok (0.014s) 2023-01-11T21:58:37.5686677Z test_LocalResponseNorm_2d_uneven_pad_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5686812Z test_LocalResponseNorm_3d_custom_params (__main__.TestNN) ... ok (0.020s) 2023-01-11T21:58:37.5686986Z test_LocalResponseNorm_3d_custom_params_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5687089Z test_LogSigmoid (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5687229Z test_LogSigmoid_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5687355Z test_LogSigmoid_no_batch_dim (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5687513Z test_LogSigmoid_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5687629Z test_LogSigmoid_scalar (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5687768Z test_LogSigmoid_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5687873Z test_LogSoftmax (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5688017Z test_LogSoftmax_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5688137Z test_LogSoftmax_multiparam (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5688296Z test_LogSoftmax_multiparam_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5688425Z test_LogSoftmax_multiparam_scalar (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5688592Z test_LogSoftmax_multiparam_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5688714Z test_LogSoftmax_no_batch_dim (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5688856Z test_LogSoftmax_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5688954Z test_MSELoss (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5689100Z test_MSELoss_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5689243Z test_MSELoss_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5689382Z test_MSELoss_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5689498Z test_MSELoss_no_batch_dim_mean (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5689660Z test_MSELoss_no_batch_dim_mean_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5689825Z test_MSELoss_no_batch_dim_mean_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5690018Z test_MSELoss_no_batch_dim_mean_cuda_half (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5690126Z test_MSELoss_no_batch_dim_none (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5690285Z test_MSELoss_no_batch_dim_none_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5690444Z test_MSELoss_no_batch_dim_none_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5690603Z test_MSELoss_no_batch_dim_none_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5690717Z test_MSELoss_no_batch_dim_sum (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5690880Z test_MSELoss_no_batch_dim_sum_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5691044Z test_MSELoss_no_batch_dim_sum_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5691235Z test_MSELoss_no_batch_dim_sum_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5691336Z test_MSELoss_no_reduce (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5691482Z test_MSELoss_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5691602Z test_MSELoss_no_reduce_scalar (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5691760Z test_MSELoss_no_reduce_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5691865Z test_MSELoss_prec (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5692014Z test_MSELoss_prec_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5692161Z test_MSELoss_prec_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5692306Z test_MSELoss_prec_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5692452Z test_MSELoss_prec_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5692551Z test_MSELoss_scalar (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5692707Z test_MSELoss_scalar_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5692863Z test_MSELoss_scalar_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5693015Z test_MSELoss_scalar_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5693169Z test_MSELoss_scalar_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5693296Z test_MSELoss_scalar_sum_reduction (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5693470Z test_MSELoss_scalar_sum_reduction_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5693642Z test_MSELoss_scalar_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5693802Z test_MSELoss_scalar_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5693969Z test_MSELoss_scalar_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5694087Z test_MSELoss_sum_reduction (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5694251Z test_MSELoss_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5694412Z test_MSELoss_sum_reduction_cuda_float (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5694571Z test_MSELoss_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5694691Z test_MarginRankingLoss (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5694856Z test_MarginRankingLoss_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5695050Z test_MarginRankingLoss_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5695197Z test_MarginRankingLoss_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5695322Z test_MarginRankingLoss_margin (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5695494Z test_MarginRankingLoss_margin_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5695661Z test_MarginRankingLoss_margin_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5695829Z test_MarginRankingLoss_margin_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5695972Z test_MarginRankingLoss_margin_sum_reduction (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5696156Z test_MarginRankingLoss_margin_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5696373Z test_MarginRankingLoss_margin_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5696557Z test_MarginRankingLoss_margin_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5696683Z test_MarginRankingLoss_no_batch_dim_mean (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5696866Z test_MarginRankingLoss_no_batch_dim_mean_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5697044Z test_MarginRankingLoss_no_batch_dim_mean_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5697222Z test_MarginRankingLoss_no_batch_dim_mean_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5697359Z test_MarginRankingLoss_no_batch_dim_none (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5697540Z test_MarginRankingLoss_no_batch_dim_none_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5697720Z test_MarginRankingLoss_no_batch_dim_none_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5697895Z test_MarginRankingLoss_no_batch_dim_none_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5698030Z test_MarginRankingLoss_no_batch_dim_sum (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5698193Z test_MarginRankingLoss_no_batch_dim_sum_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5698369Z test_MarginRankingLoss_no_batch_dim_sum_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5698546Z test_MarginRankingLoss_no_batch_dim_sum_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5698679Z test_MarginRankingLoss_sum_reduction (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5698858Z test_MarginRankingLoss_sum_reduction_cuda_double (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5699037Z test_MarginRankingLoss_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5699214Z test_MarginRankingLoss_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5699319Z test_MaxPool1d (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5699450Z test_MaxPool1d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5699574Z test_MaxPool1d_return_indices (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5699732Z test_MaxPool1d_return_indices_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5699845Z test_MaxPool1d_stride (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5699994Z test_MaxPool1d_stride_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5700146Z test_MaxPool2d_3d_input (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5700297Z test_MaxPool2d_3d_input_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5700410Z test_MaxPool2d_4d_input (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5700544Z test_MaxPool2d_4d_input_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5700665Z test_MaxPool2d_return_indices (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5700824Z test_MaxPool2d_return_indices_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5700927Z test_MaxPool3d (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5701068Z test_MaxPool3d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5701189Z test_MaxPool3d_return_indices (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5701348Z test_MaxPool3d_return_indices_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5701494Z test_MaxPool3d_stride (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5701632Z test_MaxPool3d_stride_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5701756Z test_MaxPool3d_stride_padding (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5701915Z test_MaxPool3d_stride_padding_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5702028Z test_MaxUnpool1d_net (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5702179Z test_MaxUnpool1d_net_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5702305Z test_MaxUnpool1d_net_no_batch_dim (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5702593Z test_MaxUnpool1d_net_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5702705Z test_MaxUnpool2d_net (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5702856Z test_MaxUnpool2d_net_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5702972Z test_MaxUnpool2d_net_no_batch_dim (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5703133Z test_MaxUnpool2d_net_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5703244Z test_MaxUnpool3d_net (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5703387Z test_MaxUnpool3d_net_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5703510Z test_MaxUnpool3d_net_no_batch_dim (__main__.TestNN) ... 
ok (0.007s) 2023-01-11T21:58:37.5703670Z test_MaxUnpool3d_net_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5703766Z test_Mish (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5703900Z test_Mish_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5703997Z test_Mish_no_batch_dim (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5704148Z test_Mish_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5704255Z test_Mish_scalar (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5704398Z test_Mish_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5704503Z test_ModuleDict (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5704606Z test_ModuleList (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5704731Z test_MultiLabelMarginLoss (__main__.TestNN) ... ok (0.021s) 2023-01-11T21:58:37.5704858Z test_MultiLabelMarginLoss_0d_no_reduce (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5705034Z test_MultiLabelMarginLoss_0d_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5705159Z test_MultiLabelMarginLoss_1d (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5705333Z test_MultiLabelMarginLoss_1d_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5705555Z test_MultiLabelMarginLoss_1d_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5705728Z test_MultiLabelMarginLoss_1d_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5705896Z test_MultiLabelMarginLoss_1d_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5706033Z test_MultiLabelMarginLoss_1d_no_reduce (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5706207Z test_MultiLabelMarginLoss_1d_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5706337Z test_MultiLabelMarginLoss_1d_sum_reduction (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5706523Z test_MultiLabelMarginLoss_1d_sum_reduction_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5706709Z test_MultiLabelMarginLoss_1d_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5706969Z test_MultiLabelMarginLoss_1d_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5707154Z test_MultiLabelMarginLoss_1d_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5707323Z test_MultiLabelMarginLoss_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5707492Z test_MultiLabelMarginLoss_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5707659Z test_MultiLabelMarginLoss_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5707826Z test_MultiLabelMarginLoss_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5707949Z test_MultiLabelMarginLoss_index_neg (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5708122Z test_MultiLabelMarginLoss_index_neg_cuda (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5708269Z test_MultiLabelMarginLoss_no_batch_dim_mean (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5708457Z test_MultiLabelMarginLoss_no_batch_dim_mean_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5708640Z test_MultiLabelMarginLoss_no_batch_dim_mean_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5708822Z test_MultiLabelMarginLoss_no_batch_dim_mean_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5708964Z test_MultiLabelMarginLoss_no_batch_dim_none (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5709146Z test_MultiLabelMarginLoss_no_batch_dim_none_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5709330Z test_MultiLabelMarginLoss_no_batch_dim_none_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5709500Z test_MultiLabelMarginLoss_no_batch_dim_none_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5709643Z test_MultiLabelMarginLoss_no_batch_dim_sum (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5709823Z test_MultiLabelMarginLoss_no_batch_dim_sum_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5710003Z test_MultiLabelMarginLoss_no_batch_dim_sum_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5710185Z test_MultiLabelMarginLoss_no_batch_dim_sum_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5710320Z test_MultiLabelMarginLoss_no_reduce (__main__.TestNN) ... ok (0.024s) 2023-01-11T21:58:37.5710492Z test_MultiLabelMarginLoss_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5710632Z test_MultiLabelMarginLoss_sum_reduction (__main__.TestNN) ... ok (0.023s) 2023-01-11T21:58:37.5710855Z test_MultiLabelMarginLoss_sum_reduction_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5711027Z test_MultiLabelMarginLoss_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5711208Z test_MultiLabelMarginLoss_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5711388Z test_MultiLabelMarginLoss_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5711522Z test_MultiLabelSoftMarginLoss (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5711698Z test_MultiLabelSoftMarginLoss_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5711875Z test_MultiLabelSoftMarginLoss_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5712078Z test_MultiLabelSoftMarginLoss_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5712234Z test_MultiLabelSoftMarginLoss_no_batch_dim_mean (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5712430Z test_MultiLabelSoftMarginLoss_no_batch_dim_mean_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5712611Z test_MultiLabelSoftMarginLoss_no_batch_dim_mean_cuda_float (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5712799Z test_MultiLabelSoftMarginLoss_no_batch_dim_mean_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5712948Z test_MultiLabelSoftMarginLoss_no_batch_dim_none (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5713140Z test_MultiLabelSoftMarginLoss_no_batch_dim_none_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5713327Z test_MultiLabelSoftMarginLoss_no_batch_dim_none_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5713519Z test_MultiLabelSoftMarginLoss_no_batch_dim_none_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5713670Z test_MultiLabelSoftMarginLoss_no_batch_dim_sum (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5713859Z test_MultiLabelSoftMarginLoss_no_batch_dim_sum_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5714047Z test_MultiLabelSoftMarginLoss_no_batch_dim_sum_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5714225Z test_MultiLabelSoftMarginLoss_no_batch_dim_sum_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5714368Z test_MultiLabelSoftMarginLoss_no_reduce (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5714548Z test_MultiLabelSoftMarginLoss_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5714693Z test_MultiLabelSoftMarginLoss_weights (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5714881Z test_MultiLabelSoftMarginLoss_weights_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5715066Z test_MultiLabelSoftMarginLoss_weights_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5715250Z test_MultiLabelSoftMarginLoss_weights_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5715402Z test_MultiLabelSoftMarginLoss_weights_no_reduce (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5715592Z test_MultiLabelSoftMarginLoss_weights_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5715737Z test_MultiLabelSoftMarginLoss_weights_sum_reduction (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5715937Z test_MultiLabelSoftMarginLoss_weights_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5716174Z test_MultiLabelSoftMarginLoss_weights_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5716373Z test_MultiLabelSoftMarginLoss_weights_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5716488Z test_MultiMarginLoss (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5716606Z test_MultiMarginLoss_1d (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5716770Z test_MultiMarginLoss_1d_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5716929Z test_MultiMarginLoss_1d_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5717088Z test_MultiMarginLoss_1d_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5717205Z test_MultiMarginLoss_1d_no_reduce (__main__.TestNN) ... 
ok (0.003s) 2023-01-11T21:58:37.5717462Z test_MultiMarginLoss_1d_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5717602Z test_MultiMarginLoss_1d_sum_reduction (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5717779Z test_MultiMarginLoss_1d_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5717957Z test_MultiMarginLoss_1d_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5718130Z test_MultiMarginLoss_1d_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5718289Z test_MultiMarginLoss_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5718450Z test_MultiMarginLoss_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5718603Z test_MultiMarginLoss_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5718716Z test_MultiMarginLoss_margin (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5718883Z test_MultiMarginLoss_margin_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5719049Z test_MultiMarginLoss_margin_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5719214Z test_MultiMarginLoss_margin_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5719348Z test_MultiMarginLoss_margin_no_reduce (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5719516Z test_MultiMarginLoss_margin_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5719654Z test_MultiMarginLoss_margin_sum_reduction (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5719835Z test_MultiMarginLoss_margin_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5720005Z test_MultiMarginLoss_margin_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5720187Z test_MultiMarginLoss_margin_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5720311Z test_MultiMarginLoss_no_reduce (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5720471Z test_MultiMarginLoss_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5720588Z test_MultiMarginLoss_p (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5720751Z test_MultiMarginLoss_p_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5720909Z test_MultiMarginLoss_p_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5721066Z test_MultiMarginLoss_p_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5721199Z test_MultiMarginLoss_p_no_reduce (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5721353Z test_MultiMarginLoss_p_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5721517Z test_MultiMarginLoss_p_sum_reduction (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5721698Z test_MultiMarginLoss_p_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5721872Z test_MultiMarginLoss_p_sum_reduction_cuda_float (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5722049Z test_MultiMarginLoss_p_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5722180Z test_MultiMarginLoss_sum_reduction (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5722352Z test_MultiMarginLoss_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5722525Z test_MultiMarginLoss_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5722739Z test_MultiMarginLoss_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5722851Z test_MultiMarginLoss_weights (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5723020Z test_MultiMarginLoss_weights_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5723189Z test_MultiMarginLoss_weights_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5723355Z test_MultiMarginLoss_weights_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5723489Z test_MultiMarginLoss_weights_no_reduce (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5723662Z test_MultiMarginLoss_weights_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5723803Z test_MultiMarginLoss_weights_sum_reduction (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5723987Z test_MultiMarginLoss_weights_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5724158Z test_MultiMarginLoss_weights_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5724337Z test_MultiMarginLoss_weights_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5724439Z test_NLLLoss (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5724556Z test_NLLLoss2d_no_reduce (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5724707Z test_NLLLoss2d_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5724838Z test_NLLLoss2d_no_reduce_ignore_index (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5725005Z test_NLLLoss2d_no_reduce_ignore_index_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5725131Z test_NLLLoss2d_no_reduce_weights (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5725298Z test_NLLLoss2d_no_reduce_weights_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5725401Z test_NLLLossNd_no_reduce (__main__.TestNN) ... ok (0.021s) 2023-01-11T21:58:37.5725556Z test_NLLLossNd_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5725688Z test_NLLLossNd_no_reduce_ignore_index (__main__.TestNN) ... ok (0.021s) 2023-01-11T21:58:37.5725856Z test_NLLLossNd_no_reduce_ignore_index_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5725982Z test_NLLLossNd_no_reduce_weights (__main__.TestNN) ... ok (0.021s) 2023-01-11T21:58:37.5726144Z test_NLLLossNd_no_reduce_weights_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5726248Z test_NLLLoss_2d (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5726399Z test_NLLLoss_2d_cuda_bfloat16 (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5726540Z test_NLLLoss_2d_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5726718Z test_NLLLoss_2d_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5726864Z test_NLLLoss_2d_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5726984Z test_NLLLoss_2d_ignore_index (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5727151Z test_NLLLoss_2d_ignore_index_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5727318Z test_NLLLoss_2d_ignore_index_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5727481Z test_NLLLoss_2d_ignore_index_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5727643Z test_NLLLoss_2d_ignore_index_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5727753Z test_NLLLoss_2d_sum_reduction (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5727954Z test_NLLLoss_2d_sum_reduction_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5728120Z test_NLLLoss_2d_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5728285Z test_NLLLoss_2d_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5728448Z test_NLLLoss_2d_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5728564Z test_NLLLoss_2d_weights (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5728726Z test_NLLLoss_2d_weights_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5728885Z test_NLLLoss_2d_weights_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5729040Z test_NLLLoss_2d_weights_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5729187Z test_NLLLoss_2d_weights_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5729335Z test_NLLLoss_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5729482Z test_NLLLoss_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5729629Z test_NLLLoss_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5729774Z test_NLLLoss_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5729886Z test_NLLLoss_dim_is_3 (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5730041Z test_NLLLoss_dim_is_3_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5730190Z test_NLLLoss_dim_is_3_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5730335Z test_NLLLoss_dim_is_3_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5730492Z test_NLLLoss_dim_is_3_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5730619Z test_NLLLoss_dim_is_3_sum_reduction (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5730792Z test_NLLLoss_dim_is_3_sum_reduction_cuda_bfloat16 (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5730962Z test_NLLLoss_dim_is_3_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5731130Z test_NLLLoss_dim_is_3_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5731296Z test_NLLLoss_dim_is_3_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5731412Z test_NLLLoss_higher_dim (__main__.TestNN) ... ok (0.017s) 2023-01-11T21:58:37.5731575Z test_NLLLoss_higher_dim_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5731755Z test_NLLLoss_higher_dim_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5731913Z test_NLLLoss_higher_dim_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5732065Z test_NLLLoss_higher_dim_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5732196Z test_NLLLoss_higher_dim_sum_reduction (__main__.TestNN) ... ok (0.017s) 2023-01-11T21:58:37.5732371Z test_NLLLoss_higher_dim_sum_reduction_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5732542Z test_NLLLoss_higher_dim_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5732714Z test_NLLLoss_higher_dim_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5732916Z test_NLLLoss_higher_dim_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5733035Z test_NLLLoss_ignore_index (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5733186Z test_NLLLoss_ignore_index_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5733346Z test_NLLLoss_ignore_index_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5733505Z test_NLLLoss_ignore_index_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5733663Z test_NLLLoss_ignore_index_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5733782Z test_NLLLoss_no_batch_dim_mean (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5733946Z test_NLLLoss_no_batch_dim_mean_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5734112Z test_NLLLoss_no_batch_dim_mean_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5734275Z test_NLLLoss_no_batch_dim_mean_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5734395Z test_NLLLoss_no_batch_dim_none (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5734550Z test_NLLLoss_no_batch_dim_none_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5734712Z test_NLLLoss_no_batch_dim_none_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5734874Z test_NLLLoss_no_batch_dim_none_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5734991Z test_NLLLoss_no_batch_dim_sum (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5735151Z test_NLLLoss_no_batch_dim_sum_cuda_double (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5735310Z test_NLLLoss_no_batch_dim_sum_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5735476Z test_NLLLoss_no_batch_dim_sum_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5735590Z test_NLLLoss_no_reduce (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5735727Z test_NLLLoss_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5735856Z test_NLLLoss_no_reduce_ignore_index (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5736020Z test_NLLLoss_no_reduce_ignore_index_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5736142Z test_NLLLoss_no_reduce_weights (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5736301Z test_NLLLoss_no_reduce_weights_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5736437Z test_NLLLoss_no_reduce_weights_ignore_index (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5736614Z test_NLLLoss_no_reduce_weights_ignore_index_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5736789Z test_NLLLoss_no_reduce_weights_ignore_index_neg (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5736964Z test_NLLLoss_no_reduce_weights_ignore_index_neg_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5737068Z test_NLLLoss_sum_reduction (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5737228Z test_NLLLoss_sum_reduction_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5737390Z test_NLLLoss_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5737551Z test_NLLLoss_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5737710Z test_NLLLoss_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5737818Z test_NLLLoss_weights (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5738009Z test_NLLLoss_weights_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5738166Z test_NLLLoss_weights_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5738310Z test_NLLLoss_weights_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5738463Z test_NLLLoss_weights_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5738588Z test_NLLLoss_weights_ignore_index (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5738758Z test_NLLLoss_weights_ignore_index_cuda_bfloat16 (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5738929Z test_NLLLoss_weights_ignore_index_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5739097Z test_NLLLoss_weights_ignore_index_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5739270Z test_NLLLoss_weights_ignore_index_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5739404Z test_NLLLoss_weights_ignore_index_neg (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5739579Z test_NLLLoss_weights_ignore_index_neg_cuda_bfloat16 (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5739738Z test_NLLLoss_weights_ignore_index_neg_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5739911Z test_NLLLoss_weights_ignore_index_neg_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5740082Z test_NLLLoss_weights_ignore_index_neg_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5740184Z test_PReLU_1d (__main__.TestNN) ... ok (0.012s) 2023-01-11T21:58:37.5740324Z test_PReLU_1d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5740444Z test_PReLU_1d_multiparam (__main__.TestNN) ... ok (0.012s) 2023-01-11T21:58:37.5740601Z test_PReLU_1d_multiparam_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5740700Z test_PReLU_2d (__main__.TestNN) ... ok (0.013s) 2023-01-11T21:58:37.5740828Z test_PReLU_2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5740947Z test_PReLU_2d_multiparam (__main__.TestNN) ... ok (0.013s) 2023-01-11T21:58:37.5741101Z test_PReLU_2d_multiparam_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5741202Z test_PReLU_3d (__main__.TestNN) ... ok (0.014s) 2023-01-11T21:58:37.5741339Z test_PReLU_3d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5741453Z test_PReLU_3d_multiparam (__main__.TestNN) ... ok (0.014s) 2023-01-11T21:58:37.5741606Z test_PReLU_3d_multiparam_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5741742Z test_PReLU_backward_requires_grad_false (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5741874Z test_PReLU_no_batch_dim (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5742025Z test_PReLU_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5742131Z test_PReLU_scalar (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5742277Z test_PReLU_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5742519Z test_Padding122112_3dcircular (__main__.TestNN) ... ok (0.015s) 2023-01-11T21:58:37.5742681Z test_Padding122112_3dcircular_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5742800Z test_Padding1221_2dcircular (__main__.TestNN) ... ok (0.012s) 2023-01-11T21:58:37.5742958Z test_Padding1221_2dcircular_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5743064Z test_Padding12_1dcircular (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5743266Z test_Padding12_1dcircular_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5743387Z test_Padding2322_2dcircular (__main__.TestNN) ... ok (0.012s) 2023-01-11T21:58:37.5743544Z test_Padding2322_2dcircular_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5743659Z test_Padding31_1dcircular (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5743814Z test_Padding31_1dcircular_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5743935Z test_Padding322112_3dcircular (__main__.TestNN) ... ok (0.015s) 2023-01-11T21:58:37.5744093Z test_Padding322112_3dcircular_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5744199Z test_Padding332122_3dcircular (__main__.TestNN) ... 
ok (0.015s) 2023-01-11T21:58:37.5744350Z test_Padding332122_3dcircular_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5744467Z test_Padding3331_2dcircular (__main__.TestNN) ... ok (0.013s) 2023-01-11T21:58:37.5744625Z test_Padding3331_2dcircular_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5744740Z test_Padding33_1dcircular (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5744890Z test_Padding33_1dcircular_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5745000Z test_PairwiseDistance (__main__.TestNN) ... ok (0.013s) 2023-01-11T21:58:37.5745130Z test_PairwiseDistance_broadcast_lhs (__main__.TestNN) ... ok (0.013s) 2023-01-11T21:58:37.5745299Z test_PairwiseDistance_broadcast_lhs_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5745420Z test_PairwiseDistance_broadcast_rhs (__main__.TestNN) ... ok (0.019s) 2023-01-11T21:58:37.5745594Z test_PairwiseDistance_broadcast_rhs_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5745749Z test_PairwiseDistance_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5745882Z test_PairwiseDistance_no_batch_dim (__main__.TestNN) ... ok (0.017s) 2023-01-11T21:58:37.5746054Z test_PairwiseDistance_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5746196Z test_PairwiseDistance_with_non_default_args (__main__.TestNN) ... ok (0.019s) 2023-01-11T21:58:37.5746372Z test_PairwiseDistance_with_non_default_args_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5746484Z test_ParameterDict (__main__.TestNN) ... ok (0.017s) 2023-01-11T21:58:37.5746597Z test_ParameterDict_replication (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5746706Z test_ParameterList (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5746823Z test_ParameterList_meta (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5746950Z test_ParameterList_replication (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5747063Z test_PixelShuffle (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5747250Z test_PixelShuffle_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5747361Z test_PixelUnshuffle (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5747511Z test_PixelUnshuffle_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5747621Z test_PoissonNLLLoss_full_loss (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5747794Z test_PoissonNLLLoss_full_loss_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5747961Z test_PoissonNLLLoss_full_loss_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5748124Z test_PoissonNLLLoss_full_loss_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5748260Z test_PoissonNLLLoss_full_loss_no_log_input (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5748470Z test_PoissonNLLLoss_full_loss_no_log_input_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5748653Z test_PoissonNLLLoss_full_loss_no_log_input_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5748832Z test_PoissonNLLLoss_full_loss_no_log_input_cuda_half (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5748952Z test_PoissonNLLLoss_no_batch_dim_mean (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5749130Z test_PoissonNLLLoss_no_batch_dim_mean_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5749304Z test_PoissonNLLLoss_no_batch_dim_mean_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5749476Z test_PoissonNLLLoss_no_batch_dim_mean_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5749609Z test_PoissonNLLLoss_no_batch_dim_none (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5749792Z test_PoissonNLLLoss_no_batch_dim_none_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5749965Z test_PoissonNLLLoss_no_batch_dim_none_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5750137Z test_PoissonNLLLoss_no_batch_dim_none_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5750269Z test_PoissonNLLLoss_no_batch_dim_sum (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5750428Z test_PoissonNLLLoss_no_batch_dim_sum_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5750598Z test_PoissonNLLLoss_no_batch_dim_sum_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5750771Z test_PoissonNLLLoss_no_batch_dim_sum_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5750899Z test_PoissonNLLLoss_no_full_loss (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5751074Z test_PoissonNLLLoss_no_full_loss_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5751241Z test_PoissonNLLLoss_no_full_loss_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5751406Z test_PoissonNLLLoss_no_full_loss_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5751549Z test_PoissonNLLLoss_no_full_loss_no_log_input (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5751728Z test_PoissonNLLLoss_no_full_loss_no_log_input_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5751897Z test_PoissonNLLLoss_no_full_loss_no_log_input_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5752073Z test_PoissonNLLLoss_no_full_loss_no_log_input_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5752199Z test_PoissonNLLLoss_no_reduce (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5752404Z test_PoissonNLLLoss_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5752505Z test_RNN_cell (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5752629Z test_RNN_cell_forward_hidden_size (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5752752Z test_RNN_cell_forward_input_size (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5752880Z test_RNN_cell_forward_zero_hidden_size (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5752987Z test_RNN_cell_no_broadcasting (__main__.TestNN) ... ok (0.034s) 2023-01-11T21:58:37.5753123Z test_RNN_change_dropout (__main__.TestNN) ... skip: needs cudnn >= 5.1 (0.001s) 2023-01-11T21:58:37.5753262Z test_RNN_cpu_vs_cudnn_no_dropout (__main__.TestNN) ... 
skip: needs cudnn (0.000s) 2023-01-11T21:58:37.5753410Z test_RNN_cpu_vs_cudnn_with_dropout (__main__.TestNN) ... skip: needs cudnn >= 5.1 (0.000s) 2023-01-11T21:58:37.5753583Z test_RNN_cudnn_weight_norm (__main__.TestNN) ... skip: needs cudnn (0.001s) 2023-01-11T21:58:37.5753710Z test_RNN_dropout (__main__.TestNN) ... skip: needs cudnn >= 5.1 (0.001s) 2023-01-11T21:58:37.5753843Z test_RNN_dropout_state (__main__.TestNN) ... skip: needs cudnn >= 5.1 (0.001s) 2023-01-11T21:58:37.5753954Z test_RNN_input_size_zero (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5754053Z test_RNN_nonlinearity (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5754147Z test_RReLU (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5754284Z test_RReLU_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5754396Z test_RReLU_no_batch_dim (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5754546Z test_RReLU_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5754660Z test_RReLU_with_up_down (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5754811Z test_RReLU_with_up_down_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5754933Z test_RReLU_with_up_down_scalar (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5755080Z test_RReLU_with_up_down_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5755178Z test_ReLU (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5755277Z test_ReLU6 (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5755414Z test_ReLU6_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5755521Z test_ReLU6_no_batch_dim (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5755668Z test_ReLU6_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5755773Z test_ReLU6_scalar (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5755917Z test_ReLU6_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5756041Z test_ReLU_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5756154Z test_ReLU_no_batch_dim (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5756303Z test_ReLU_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5756408Z test_ReLU_scalar (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5756551Z test_ReLU_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5756668Z test_ReflectionPad1d (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5756788Z test_ReflectionPad1d_batch (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5756949Z test_ReflectionPad1d_batch_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5757060Z test_ReflectionPad1d_complex (__main__.TestNN) ... ok (0.014s) 2023-01-11T21:58:37.5757223Z test_ReflectionPad1d_complex_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5757438Z test_ReflectionPad1d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5757597Z test_ReflectionPad2d (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5757719Z test_ReflectionPad2d_complex (__main__.TestNN) ... ok (0.015s) 2023-01-11T21:58:37.5757879Z test_ReflectionPad2d_complex_cuda (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5758029Z test_ReflectionPad2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5758159Z test_ReflectionPad2d_no_batch_dim (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5758314Z test_ReflectionPad2d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5758428Z test_ReflectionPad3d (__main__.TestNN) ... ok (0.014s) 2023-01-11T21:58:37.5758549Z test_ReflectionPad3d_complex (__main__.TestNN) ... ok (0.027s) 2023-01-11T21:58:37.5758709Z test_ReflectionPad3d_complex_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5758903Z test_ReflectionPad3d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5759033Z test_ReflectionPad3d_no_batch_dim (__main__.TestNN) ... ok (0.012s) 2023-01-11T21:58:37.5759197Z test_ReflectionPad3d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5759314Z test_ReplicationPad1d (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5759425Z test_ReplicationPad1d_batch (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5759586Z test_ReplicationPad1d_batch_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5759710Z test_ReplicationPad1d_complex (__main__.TestNN) ... ok (0.013s) 2023-01-11T21:58:37.5759876Z test_ReplicationPad1d_complex_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5760028Z test_ReplicationPad1d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5760146Z test_ReplicationPad2d (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5760269Z test_ReplicationPad2d_complex (__main__.TestNN) ... ok (0.014s) 2023-01-11T21:58:37.5760433Z test_ReplicationPad2d_complex_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5760574Z test_ReplicationPad2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5760702Z test_ReplicationPad2d_no_batch_dim (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5760871Z test_ReplicationPad2d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5760987Z test_ReplicationPad3d (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5761112Z test_ReplicationPad3d_complex (__main__.TestNN) ... ok (0.016s) 2023-01-11T21:58:37.5761274Z test_ReplicationPad3d_complex_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5761430Z test_ReplicationPad3d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5761563Z test_ReplicationPad3d_no_batch_dim (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5761732Z test_ReplicationPad3d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5761817Z test_SELU (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5761953Z test_SELU_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5762068Z test_SELU_no_batch_dim (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5762216Z test_SELU_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5762321Z test_SELU_scalar (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5762464Z test_SELU_scalar_cuda (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5762574Z test_Sequential_add (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5762677Z test_Sequential_append (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5762825Z test_Sequential_delitem (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5762934Z test_Sequential_extend (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5763048Z test_Sequential_getitem (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5763158Z test_Sequential_iadd (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5763268Z test_Sequential_imul (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5763378Z test_Sequential_insert (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5763503Z test_Sequential_insert_fail_case (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5763600Z test_Sequential_mul (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5763708Z test_Sequential_pop (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5763816Z test_Sequential_rmul (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5763928Z test_Sequential_setitem (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5764077Z test_Sequential_setitem_named (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5764177Z test_SiLU (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5764312Z test_SiLU_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5764411Z test_SiLU_no_batch_dim (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5764558Z test_SiLU_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5764663Z test_SiLU_scalar (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5764806Z test_SiLU_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5764906Z test_Sigmoid (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5765045Z test_Sigmoid_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5765161Z test_Sigmoid_no_batch_dim (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5765311Z test_Sigmoid_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5765413Z test_Sigmoid_scalar (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5765557Z test_Sigmoid_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5765665Z test_SmoothL1Loss (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5765782Z test_SmoothL1Loss_beta (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5765932Z test_SmoothL1Loss_beta_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5766084Z test_SmoothL1Loss_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5766234Z test_SmoothL1Loss_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5766382Z test_SmoothL1Loss_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5766498Z test_SmoothL1Loss_no_batch_dim_mean (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5766671Z test_SmoothL1Loss_no_batch_dim_mean_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5766845Z test_SmoothL1Loss_no_batch_dim_mean_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5767012Z test_SmoothL1Loss_no_batch_dim_mean_cuda_half (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5767139Z test_SmoothL1Loss_no_batch_dim_none (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5767307Z test_SmoothL1Loss_no_batch_dim_none_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5767474Z test_SmoothL1Loss_no_batch_dim_none_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5767642Z test_SmoothL1Loss_no_batch_dim_none_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5767769Z test_SmoothL1Loss_no_batch_dim_sum (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5767960Z test_SmoothL1Loss_no_batch_dim_sum_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5768130Z test_SmoothL1Loss_no_batch_dim_sum_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5768297Z test_SmoothL1Loss_no_batch_dim_sum_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5768420Z test_SmoothL1Loss_no_reduce (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5768576Z test_SmoothL1Loss_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5768702Z test_SmoothL1Loss_no_reduce_scalar (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5768868Z test_SmoothL1Loss_no_reduce_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5768985Z test_SmoothL1Loss_scalar (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5769174Z test_SmoothL1Loss_scalar_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5769326Z test_SmoothL1Loss_scalar_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5769482Z test_SmoothL1Loss_scalar_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5769611Z test_SmoothL1Loss_scalar_sum_reduction (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5769784Z test_SmoothL1Loss_scalar_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5769958Z test_SmoothL1Loss_scalar_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5770132Z test_SmoothL1Loss_scalar_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5770257Z test_SmoothL1Loss_sum_reduction (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5770422Z test_SmoothL1Loss_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5770590Z test_SmoothL1Loss_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5770743Z test_SmoothL1Loss_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5770861Z test_SmoothL1Loss_zero_beta (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5771014Z test_SmoothL1Loss_zero_beta_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5771125Z test_SoftMarginLoss (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5771282Z test_SoftMarginLoss_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5771437Z test_SoftMarginLoss_cuda_float (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5771589Z test_SoftMarginLoss_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5771723Z test_SoftMarginLoss_no_batch_dim_mean (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5771888Z test_SoftMarginLoss_no_batch_dim_mean_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5772057Z test_SoftMarginLoss_no_batch_dim_mean_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5772227Z test_SoftMarginLoss_no_batch_dim_mean_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5772352Z test_SoftMarginLoss_no_batch_dim_none (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5772517Z test_SoftMarginLoss_no_batch_dim_none_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5772688Z test_SoftMarginLoss_no_batch_dim_none_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5772860Z test_SoftMarginLoss_no_batch_dim_none_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5773020Z test_SoftMarginLoss_no_batch_dim_sum (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5773186Z test_SoftMarginLoss_no_batch_dim_sum_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5773345Z test_SoftMarginLoss_no_batch_dim_sum_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5773507Z test_SoftMarginLoss_no_batch_dim_sum_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5773624Z test_SoftMarginLoss_no_reduce (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5773782Z test_SoftMarginLoss_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5773904Z test_SoftMarginLoss_sum_reduction (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5774075Z test_SoftMarginLoss_sum_reduction_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5774276Z test_SoftMarginLoss_sum_reduction_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5774442Z test_SoftMarginLoss_sum_reduction_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5774531Z test_Softmax (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5774632Z test_Softmax2d (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5774768Z test_Softmax2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5774883Z test_Softmax2d_no_batch_dim (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5775033Z test_Softmax2d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5775168Z test_Softmax_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5775280Z test_Softmax_no_batch_dim (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5775430Z test_Softmax_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5775529Z test_Softmax_scalar (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5775675Z test_Softmax_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5776096Z test_Softmin (__main__.TestNN) ... 
ok (0.008s) 2023-01-11T21:58:37.5776224Z test_Softmin_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5776333Z test_Softmin_multidim (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5776478Z test_Softmin_multidim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5776587Z test_Softmin_no_batch_dim (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5776731Z test_Softmin_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5776836Z test_Softmin_scalar (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5776971Z test_Softmin_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5777074Z test_Softplus (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5777178Z test_Softplus_beta (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5777318Z test_Softplus_beta_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5777437Z test_Softplus_beta_threshold (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5777589Z test_Softplus_beta_threshold_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5777710Z test_Softplus_beta_threshold_scalar (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5777874Z test_Softplus_beta_threshold_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5778002Z test_Softplus_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5778115Z test_Softplus_no_batch_dim (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5778265Z test_Softplus_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5778401Z test_Softshrink (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5778541Z test_Softshrink_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5778653Z test_Softshrink_lambda (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5778800Z test_Softshrink_lambda_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5778914Z test_Softshrink_lambda_scalar (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5779059Z test_Softshrink_lambda_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5779174Z test_Softshrink_no_batch_dim (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5779325Z test_Softshrink_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5779422Z test_Softsign (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5779589Z test_Softsign_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5779703Z test_Softsign_no_batch_dim (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5779851Z test_Softsign_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5779958Z test_Softsign_scalar (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5780095Z test_Softsign_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5780189Z test_Tanh (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5780321Z test_Tanh_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5780430Z test_Tanh_no_batch_dim (__main__.TestNN) ... 
ok (0.006s) 2023-01-11T21:58:37.5780571Z test_Tanh_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5780669Z test_Tanh_scalar (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5780811Z test_Tanh_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5780915Z test_Tanhshrink (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5781046Z test_Tanhshrink_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5781165Z test_Tanhshrink_no_batch_dim (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5781317Z test_Tanhshrink_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5781426Z test_Tanhshrink_scalar (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5781571Z test_Tanhshrink_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5781684Z test_Threshold_large_value (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5781837Z test_Threshold_large_value_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5781952Z test_Threshold_no_batch_dim (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5782099Z test_Threshold_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5782220Z test_Threshold_threshold_value (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5782497Z test_Threshold_threshold_value_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5782627Z test_Threshold_threshold_value_scalar (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5782790Z test_Threshold_threshold_value_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5782942Z test_TransformerDecoderLayer_gelu_activation (__main__.TestNN) ... ok (0.084s) 2023-01-11T21:58:37.5783127Z test_TransformerDecoderLayer_gelu_activation_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5783274Z test_TransformerDecoderLayer_relu_activation (__main__.TestNN) ... ok (0.084s) 2023-01-11T21:58:37.5783455Z test_TransformerDecoderLayer_relu_activation_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5783654Z test_TransformerEncoderLayer_gelu_activation (__main__.TestNN) ... ok (0.046s) 2023-01-11T21:58:37.5783836Z test_TransformerEncoderLayer_gelu_activation_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5783983Z test_TransformerEncoderLayer_relu_activation (__main__.TestNN) ... ok (0.043s) 2023-01-11T21:58:37.5784161Z test_TransformerEncoderLayer_relu_activation_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5784271Z test_Transformer_cell (__main__.TestNN) ... ok (0.642s) 2023-01-11T21:58:37.5784393Z test_Transformer_multilayer_coder (__main__.TestNN) ... ok (0.484s) 2023-01-11T21:58:37.5784553Z test_Transformer_multilayer_coder_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5784681Z test_TripletMarginLoss_no_batch_dim_mean (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5784898Z test_TripletMarginLoss_no_batch_dim_mean_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5785080Z test_TripletMarginLoss_no_batch_dim_mean_cuda_float (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5785256Z test_TripletMarginLoss_no_batch_dim_mean_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5785389Z test_TripletMarginLoss_no_batch_dim_none (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5785569Z test_TripletMarginLoss_no_batch_dim_none_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5785747Z test_TripletMarginLoss_no_batch_dim_none_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5785916Z test_TripletMarginLoss_no_batch_dim_none_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5786051Z test_TripletMarginLoss_no_batch_dim_sum (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5786219Z test_TripletMarginLoss_no_batch_dim_sum_cuda_double (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5786391Z test_TripletMarginLoss_no_batch_dim_sum_cuda_float (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5786564Z test_TripletMarginLoss_no_batch_dim_sum_cuda_half (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5786682Z test_Unflatten_no_batch_dim (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5786836Z test_Unflatten_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5786932Z test_Unfold (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5787064Z test_Unfold_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5787175Z test_Unfold_int_input (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5787321Z test_Unfold_int_input_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5787416Z test_ZeroPad2d (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5787529Z test_ZeroPad2d_complex (__main__.TestNN) ... ok (0.013s) 2023-01-11T21:58:37.5787679Z test_ZeroPad2d_complex_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5787819Z test_ZeroPad2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5787941Z test_ZeroPad2d_negative_dims (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5788099Z test_ZeroPad2d_negative_dims_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5788215Z test_ZeroPad2d_no_batch_dim (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5788368Z test_ZeroPad2d_no_batch_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5788474Z test_adaptive_log_softmax (__main__.TestNN) ... ok (0.014s) 2023-01-11T21:58:37.5788572Z test_add_module (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5788774Z test_add_module_raises_error_if_attr_exists (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5788875Z test_affine_grid (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5788976Z test_affine_grid_3d (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5789098Z test_affine_grid_error_checking (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5789198Z test_assignment (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5789335Z test_batchnorm_buffer_update_when_stats_are_not_tracked (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5789476Z test_batchnorm_cudnn_half (__main__.TestNN) ... 
skip: CUDA unavailable (0.001s) 2023-01-11T21:58:37.5789613Z test_batchnorm_cudnn_nhwc (__main__.TestNN) ... skip: CUDA unavailable (0.001s) 2023-01-11T21:58:37.5789724Z test_batchnorm_nhwc_cpu (__main__.TestNN) ... ok (0.014s) 2023-01-11T21:58:37.5789861Z test_batchnorm_nhwc_cuda (__main__.TestNN) ... skip: CUDA not available (0.001s) 2023-01-11T21:58:37.5790246Z test_batchnorm_non_contig_cpu_bn_module_ (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5790526Z test_batchnorm_non_contig_cpu_bn_module_ (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5790686Z test_batchnorm_nonaffine_cuda_half_input (__main__.TestNN) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:58:37.5790840Z test_batchnorm_raises_error_if_bias_is_not_same_size_as_input (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5790983Z test_batchnorm_raises_error_if_less_than_one_value_per_channel (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5791145Z test_batchnorm_raises_error_if_running_mean_is_not_same_size_as_input (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5791306Z test_batchnorm_raises_error_if_running_var_is_not_same_size_as_input (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5791479Z test_batchnorm_raises_error_if_running_var_or_running_mean_have_forward_grad (__main__.TestNN) ... ok (0.013s) 2023-01-11T21:58:37.5791644Z test_batchnorm_raises_error_if_weight_is_not_same_size_as_input (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5791768Z test_bce_loss_always_nonnegative (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5791891Z test_bce_loss_broadcasts_weights (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5792006Z test_bce_loss_input_range (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5792111Z test_bce_loss_size_mismatch (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5792249Z test_bce_with_logits_broadcasts_pos_weights (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5792379Z test_bce_with_logits_broadcasts_weights (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5792536Z test_bce_with_logits_gives_same_result_as_sigmoid_and_bce_loss (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5792721Z test_bce_with_logits_gives_same_result_as_sigmoid_and_bce_loss_large_tensors_with_grad (__main__.TestNN) ... ok (0.038s) 2023-01-11T21:58:37.5792861Z test_bce_with_logits_has_correct_forward_grad (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5792998Z test_bce_with_logits_has_correct_grad_at_zero (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5793151Z test_bce_with_logits_ones_in_pos_weights_are_the_same_as_none (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5793315Z test_bce_with_logits_raises_if_target_and_input_are_different_size (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5793425Z test_bce_with_logits_stability (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5793580Z test_bce_with_logits_with_pos_weight_has_correct_grad_at_zero (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5793681Z test_bilinear (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5793801Z test_bilinear_broadcasting (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5793911Z test_bilinear_no_bias (__main__.TestNN) ... ok (0.032s) 2023-01-11T21:58:37.5794066Z test_bilinear_non_contiguous (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5794314Z test_broadcast_double_backwards_gpu (__main__.TestNN) ... skip: multi-GPU not supported (0.001s) 2023-01-11T21:58:37.5794529Z test_broadcast_no_grad (__main__.TestNN) ... 
skip: multi-GPU not supported (0.000s) 2023-01-11T21:58:37.5794752Z test_broadcast_not_requiring_grad (__main__.TestNN) ... skip: multi-GPU not supported (0.001s) 2023-01-11T21:58:37.5794871Z test_buffer_not_persistent (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5794999Z test_buffer_not_persistent_assign (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5795122Z test_buffer_not_persistent_del (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5795245Z test_buffer_not_persistent_load (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5795374Z test_buffer_not_persistent_overwrite (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5795495Z test_buffers_and_named_buffers (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5796281Z test_call_supports_python_dict_output (__main__.TestNN) ... /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1296: UserWarning: Using non-full backward hooks on a Module that does not return a single Tensor or a tuple of Tensors is deprecated and will be removed in future versions. This hook will be missing some of the grad_output. Please use register_full_backward_hook to get the documented behavior. 2023-01-11T21:58:37.5796501Z warnings.warn("Using non-full backward hooks on a Module that does not return a " 2023-01-11T21:58:37.5796556Z ok (0.002s) 2023-01-11T21:58:37.5796668Z test_channel_shuffle (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5796792Z test_channel_shuffle_return_self (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5796893Z test_children (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5797000Z test_clip_grad_norm (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5797112Z test_clip_grad_value (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5797223Z test_container_copy (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5797416Z test_convert_sync_batchnorm (__main__.TestNN) ... skip: CUDA not available (0.001s) 2023-01-11T21:58:37.5797555Z test_cosine_embedding_loss_invalid_shape (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5797692Z test_cosine_embedding_loss_margin_no_reduce (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5797822Z test_cosine_embedding_loss_no_reduce (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5797957Z test_cosine_embedding_loss_with_diff_type (__main__.TestNN) ... ok (0.036s) 2023-01-11T21:58:37.5798069Z test_cosine_similarity (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5798183Z test_cross_entropy_loss (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5798310Z test_cross_entropy_loss_precision (__main__.TestNN) ... ok (5.919s) 2023-01-11T21:58:37.5798420Z test_cross_entropy_loss_zero_div (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5798684Z test_cudnn_rnn_dropout_states_device (__main__.TestNN) ... skip: CUDNN or multi-gpu not available (0.000s) 2023-01-11T21:58:37.5798827Z test_cudnn_weight_format (__main__.TestNN) ... skip: CUDNN not available (0.001s) 2023-01-11T21:58:37.5798968Z test_cudnn_weight_tying (__main__.TestNN) ... skip: CUDNN not available (0.001s) 2023-01-11T21:58:37.5799062Z test_dir (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5799166Z test_dir_digit (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5799282Z test_elu_inplace_gradgrad (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5799395Z test_elu_inplace_on_view (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5799500Z test_error_RNN_seq_len_zero (__main__.TestNN) ... 
ok (0.020s) 2023-01-11T21:58:37.5799603Z test_extra_state (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5799736Z test_extra_state_missing_get_extra_state (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5799869Z test_extra_state_missing_set_extra_state (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5800022Z test_extra_state_non_dict (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5800387Z test_fb_fc_packed (__main__.TestNN) ... /var/lib/jenkins/workspace/test/test_nn.py:2184: UserWarning: fbgemm_pack_gemm_matrix_fp16 is deprecated and will be removed in a future PyTorch release. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/QuantizedLinear.cpp:382.) 2023-01-11T21:58:37.5800518Z packed_w_tensor = torch.fbgemm_pack_gemm_matrix_fp16(w_tensor) 2023-01-11T21:58:37.5800861Z /var/lib/jenkins/workspace/test/test_nn.py:2185: UserWarning: fbgemm_linear_fp16_weight_fp32_activation is deprecated and will be removed in a future PyTorch release. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/QuantizedLinear.cpp:417.) 2023-01-11T21:58:37.5801015Z actual_output = torch.fbgemm_linear_fp16_weight(x_tensor, packed_w_tensor, b_tensor) 2023-01-11T21:58:37.5801069Z ok (0.002s) 2023-01-11T21:58:37.5801205Z test_flatten (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5801318Z test_fold_invalid_arg (__main__.TestNN) ... ok (0.028s) 2023-01-11T21:58:37.5801436Z test_gaussian_nll_loss_args (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5801566Z test_gaussian_nll_loss_broadcasting (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5801669Z test_get_buffer (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5801791Z test_get_buffer_from_submodules (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5801927Z test_getattr_with_property (__main__.TestNN) ... expected failure (0.001s) 2023-01-11T21:58:37.5802020Z test_grid_sample (__main__.TestNN) ... ok (0.906s) 2023-01-11T21:58:37.5802128Z test_grid_sample_3d (__main__.TestNN) ... ok (0.080s) 2023-01-11T21:58:37.5802764Z test_grid_sample_error_checking (__main__.TestNN) ... /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:4235: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details. 2023-01-11T21:58:37.5802844Z warnings.warn( 2023-01-11T21:58:37.5802908Z ok (0.034s) 2023-01-11T21:58:37.5803023Z test_hardtanh_backward (__main__.TestNN) ... ok (0.111s) 2023-01-11T21:58:37.5803148Z test_hardtanh_inplace_gradgrad (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5803267Z test_huber_loss_invalid_delta (__main__.TestNN) ... ok (0.008s) 2023-01-11T21:58:37.5803362Z test_inplace_thnn (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5803469Z test_interpolate (__main__.TestNN) ... ok (0.138s) 2023-01-11T21:58:37.5803588Z test_interpolate_bicubic_2d (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5803748Z test_interpolate_bicubic_2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5803876Z test_interpolate_bicubic_2d_zero_dim (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5804050Z test_interpolate_bicubic_2d_zero_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5804177Z test_interpolate_bicubic_scale_2d (__main__.TestNN) ... 
ok (0.007s) 2023-01-11T21:58:37.5804342Z test_interpolate_bicubic_scale_2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5804473Z test_interpolate_bicubic_scale_tuple_shared_2d (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5804654Z test_interpolate_bicubic_scale_tuple_shared_2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5804796Z test_interpolate_bicubic_scale_tuple_skewed_2d (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5804955Z test_interpolate_bicubic_scale_tuple_skewed_2d_align_corners (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5805150Z test_interpolate_bicubic_scale_tuple_skewed_2d_align_corners_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5805362Z test_interpolate_bicubic_scale_tuple_skewed_2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5805491Z test_interpolate_bicubic_tuple_2d (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5805633Z test_interpolate_bicubic_tuple_2d_align_corners (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5805812Z test_interpolate_bicubic_tuple_2d_align_corners_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5805963Z test_interpolate_bicubic_tuple_2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5806083Z test_interpolate_bilinear_2d (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5806240Z test_interpolate_bilinear_2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5806371Z test_interpolate_bilinear_2d_zero_dim (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5806566Z test_interpolate_bilinear_2d_zero_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5806697Z test_interpolate_bilinear_scale_2d (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5806861Z test_interpolate_bilinear_scale_2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5807006Z test_interpolate_bilinear_scale_tuple_shared_2d (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5807175Z test_interpolate_bilinear_scale_tuple_shared_2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5807315Z test_interpolate_bilinear_scale_tuple_skewed_2d (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5807473Z test_interpolate_bilinear_scale_tuple_skewed_2d_align_corners (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5807669Z test_interpolate_bilinear_scale_tuple_skewed_2d_align_corners_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5807850Z test_interpolate_bilinear_scale_tuple_skewed_2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5807983Z test_interpolate_bilinear_tuple_2d (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5808126Z test_interpolate_bilinear_tuple_2d_align_corners (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5808305Z test_interpolate_bilinear_tuple_2d_align_corners_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5808469Z test_interpolate_bilinear_tuple_2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5808586Z test_interpolate_buffer_overflow (__main__.TestNN) ... 
ok (3.865s) 2023-01-11T21:58:37.5808741Z test_interpolate_illegal_memory_access (__main__.TestNN) ... skip: CUDA unavailable (0.001s) 2023-01-11T21:58:37.5808859Z test_interpolate_linear_1d (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5808994Z test_interpolate_linear_1d_align_corners (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5809167Z test_interpolate_linear_1d_align_corners_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5809324Z test_interpolate_linear_1d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5809452Z test_interpolate_linear_1d_zero_dim (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5809619Z test_interpolate_linear_1d_zero_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5809732Z test_interpolate_linear_scale_1d (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5809871Z test_interpolate_linear_scale_1d_align_corners (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5810050Z test_interpolate_linear_scale_1d_align_corners_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5810211Z test_interpolate_linear_scale_1d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5810337Z test_interpolate_linear_tuple_1d (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5810536Z test_interpolate_linear_tuple_1d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5810657Z test_interpolate_nearest_1d (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5810816Z test_interpolate_nearest_1d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5810946Z test_interpolate_nearest_1d_zero_dim (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5811101Z test_interpolate_nearest_1d_zero_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5811220Z test_interpolate_nearest_2d (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5811380Z test_interpolate_nearest_2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5811518Z test_interpolate_nearest_2d_launch_configs (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5811723Z test_interpolate_nearest_2d_launch_configs_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5811855Z test_interpolate_nearest_2d_zero_dim (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5812021Z test_interpolate_nearest_2d_zero_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5812138Z test_interpolate_nearest_3d (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5812282Z test_interpolate_nearest_3d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5812411Z test_interpolate_nearest_3d_zero_dim (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5812578Z test_interpolate_nearest_3d_zero_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5812704Z test_interpolate_nearest_scale_1d (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5812868Z test_interpolate_nearest_scale_1d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5812995Z test_interpolate_nearest_scale_2d (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5813159Z test_interpolate_nearest_scale_2d_cuda (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5813284Z test_interpolate_nearest_scale_3d (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5813445Z test_interpolate_nearest_scale_3d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5813557Z test_interpolate_nearest_tuple_1d (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5813714Z test_interpolate_nearest_tuple_1d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5813842Z test_interpolate_nearest_tuple_2d (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5814003Z test_interpolate_nearest_tuple_2d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5814131Z test_interpolate_nearest_tuple_3d (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5814293Z test_interpolate_nearest_tuple_3d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5814416Z test_interpolate_trilinear_3d (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5814576Z test_interpolate_trilinear_3d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5814695Z test_interpolate_trilinear_3d_zero_dim (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5814864Z test_interpolate_trilinear_3d_zero_dim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5814993Z test_interpolate_trilinear_scale_3d (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5815137Z test_interpolate_trilinear_scale_3d_align_corners (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5815321Z test_interpolate_trilinear_scale_3d_align_corners_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5815487Z test_interpolate_trilinear_scale_3d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5815650Z test_interpolate_trilinear_tuple_3d (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5815794Z test_interpolate_trilinear_tuple_3d_align_corners (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5815962Z test_interpolate_trilinear_tuple_3d_align_corners_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5816126Z test_interpolate_trilinear_tuple_3d_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:58:37.5816250Z test_kl_div_log_softmax_target (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5816937Z test_kl_div_with_diff_type (__main__.TestNN) ... /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:2918: UserWarning: reduction: 'mean' divides the total loss by both the batch size and the support size.'batchmean' divides only by the batch size, and aligns with the KL div math definition.'mean' will be changed to behave the same as 'batchmean' in the next major release. 2023-01-11T21:58:37.5817047Z warnings.warn( 2023-01-11T21:58:37.5817115Z ok (0.002s) 2023-01-11T21:58:37.5817244Z test_kl_div_with_diff_type_log_target (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5817354Z test_l1_loss_correct (__main__.TestNN) ... ok (13.576s) 2023-01-11T21:58:37.5817493Z test_layer_norm_grads_with_create_graph_flag (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5817624Z test_linear_autograd_device_cpu_bias_weightCOO (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5818131Z test_linear_autograd_device_cpu_bias_weightCSC (__main__.TestNN) ... 
/var/lib/jenkins/workspace/test/test_nn.py:6609: UserWarning: Sparse CSC tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/SparseCsrTensorImpl.cpp:56.) 2023-01-11T21:58:37.5818268Z module.weight = nn.Parameter(module.weight.to_sparse_csc()) 2023-01-11T21:58:37.5818340Z ok (0.013s) 2023-01-11T21:58:37.5818482Z test_linear_autograd_device_cpu_bias_weightCSR (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5818631Z test_linear_autograd_device_cpu_bias_weightStrided (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5818776Z test_linear_autograd_device_cpu_nobias_weightCOO (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5818915Z test_linear_autograd_device_cpu_nobias_weightCSC (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5819051Z test_linear_autograd_device_cpu_nobias_weightCSR (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5819201Z test_linear_autograd_device_cpu_nobias_weightStrided (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5819307Z test_linear_broadcasting (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5819418Z test_load_state_dict (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5819532Z test_load_state_dict_BC (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5819653Z test_load_state_dict_child (__main__.TestNN) ... ok (0.189s) 2023-01-11T21:58:37.5819770Z test_load_state_dict_custom (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5819889Z test_load_state_dict_invalid (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5820010Z test_load_state_dict_ref_cycle (__main__.TestNN) ... ok (0.169s) 2023-01-11T21:58:37.5820113Z test_load_state_dict_type (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5820222Z test_log_softmax_cpu (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5820332Z test_log_softmax_dim0 (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5820482Z test_log_softmax_dim0_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5820591Z test_log_softmax_dim3 (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5820739Z test_log_softmax_dim3_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5820854Z test_log_softmax_lastdim (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5821044Z test_log_softmax_lastdim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5821144Z test_log_softmax_scalar (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5821294Z test_log_softmax_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5821411Z test_log_softmax_spatial (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5821562Z test_log_softmax_spatial_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5821687Z test_log_softmax_spatial_special (__main__.TestNN) ... ok (0.009s) 2023-01-11T21:58:37.5821850Z test_log_softmax_spatial_special_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5822249Z test_loss_equal_input_target_shape (__main__.TestNN) ... /var/lib/jenkins/workspace/test/test_nn.py:2799: UserWarning: Using a target size (torch.Size([5, 3])) that is different to the input size (torch.Size([3, 5])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 
2023-01-11T21:58:37.5822544Z 'mse_loss': lambda x, y: F.mse_loss(x, y), 2023-01-11T21:58:37.5822868Z /var/lib/jenkins/workspace/test/test_nn.py:2800: UserWarning: Using a target size (torch.Size([5, 3])) that is different to the input size (torch.Size([3, 5])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5823010Z 'l1_loss': lambda x, y: F.l1_loss(x, y), 2023-01-11T21:58:37.5823311Z /var/lib/jenkins/workspace/test/test_nn.py:2801: UserWarning: Using a target size (torch.Size([5, 3])) that is different to the input size (torch.Size([3, 5])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5823475Z 'smooth_l1_loss': lambda x, y: F.smooth_l1_loss(x, y), 2023-01-11T21:58:37.5823784Z /var/lib/jenkins/workspace/test/test_nn.py:2802: UserWarning: Using a target size (torch.Size([5, 3])) that is different to the input size (torch.Size([3, 5])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5823937Z 'huber_loss': lambda x, y: F.huber_loss(x, y), 2023-01-11T21:58:37.5824543Z /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:2918: UserWarning: reduction: 'mean' divides the total loss by both the batch size and the support size.'batchmean' divides only by the batch size, and aligns with the KL div math definition.'mean' will be changed to behave the same as 'batchmean' in the next major release. 2023-01-11T21:58:37.5824619Z warnings.warn( 2023-01-11T21:58:37.5824685Z ok (0.023s) 2023-01-11T21:58:37.5824823Z test_margin_ranking_loss_margin_no_reduce (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5824951Z test_margin_ranking_loss_no_reduce (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5825063Z test_module_apply_inplace_op (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5825182Z test_module_backcompat (__main__.TestNN) ... ok (0.029s) 2023-01-11T21:58:37.5825298Z test_module_to_argparse (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5825398Z test_modules (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5825513Z test_mse_loss_size_warning (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5825660Z test_multimarginloss_1d_input_0d_target_no_reduce (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5825844Z test_multimarginloss_1d_input_0d_target_no_reduce_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5825954Z test_named_children (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5826049Z test_named_modules (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5826181Z test_named_parameters_remove_duplicate (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5826555Z test_nested_tensor_from_mask (__main__.TestNN) ... /var/lib/jenkins/workspace/test/test_nn.py:2204: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/NestedTensorImpl.cpp:179.) 2023-01-11T21:58:37.5826718Z nt = torch._nested_tensor_from_mask(input, mask) 2023-01-11T21:58:37.5826784Z ok (0.002s) 2023-01-11T21:58:37.5826911Z test_nested_tensor_from_mask_error (__main__.TestNN) ... ok (0.020s) 2023-01-11T21:58:37.5827011Z test_no_grad (__main__.TestNN) ... ok (0.011s) 2023-01-11T21:58:37.5827126Z test_non_leaf_parameters (__main__.TestNN) ... 
ok (0.001s) 2023-01-11T21:58:37.5827216Z test_normalize (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5827352Z test_overwrite_module_params_on_conversion (__main__.TestNN) ... ok (0.014s) 2023-01-11T21:58:37.5827507Z test_pack_sequence_batch_sizes_throw (__main__.TestNN) ... skip: CUDA not available (0.000s) 2023-01-11T21:58:37.5827619Z test_pad_scalar_error (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5827769Z test_padding_list (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5827886Z test_pairwise_distance (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5828003Z test_parameter_assignment (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5828130Z test_parameterlistdict_pickle (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5828261Z test_parameterlistdict_setting_attributes (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5828394Z test_parameters_and_named_parameters (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5828508Z test_parameters_to_vector (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5828609Z test_parse_to (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5828752Z test_partial_flat_weights (__main__.TestNN) ... skip: CUDA not available (0.001s) 2023-01-11T21:58:37.5828851Z test_pdist (__main__.TestNN) ... ok (0.015s) 2023-01-11T21:58:37.5829005Z test_pdist_cpu_gradgrad_unimplemented (__main__.TestNN) ... expected failure (0.007s) 2023-01-11T21:58:37.5829165Z test_pdist_cuda_gradgrad_unimplemented (__main__.TestNN) ... expected failure (0.009s) 2023-01-11T21:58:37.5829267Z test_pdist_empty_col (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5829376Z test_pdist_empty_row (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5829481Z test_pdist_large (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5829569Z test_pdist_zeros (__main__.TestNN) 2023-01-11T21:58:37.5829686Z Test that grad is still valid when dist is 0 ... ok (0.008s) 2023-01-11T21:58:37.5829803Z test_pixel_shuffle_nhwc_cpu (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5829924Z test_pixel_shuffle_unshuffle (__main__.TestNN) ... ok (0.462s) 2023-01-11T21:58:37.5830288Z test_pointwise_loss_broadcast (__main__.TestNN) ... /var/lib/jenkins/workspace/test/test_nn.py:5402: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5830453Z 'mse_loss': lambda x, y, r: F.mse_loss(x, y, reduction=r), 2023-01-11T21:58:37.5830768Z /var/lib/jenkins/workspace/test/test_nn.py:5402: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5830936Z 'mse_loss': lambda x, y, r: F.mse_loss(x, y, reduction=r), 2023-01-11T21:58:37.5831248Z /var/lib/jenkins/workspace/test/test_nn.py:5402: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5831418Z 'mse_loss': lambda x, y, r: F.mse_loss(x, y, reduction=r), 2023-01-11T21:58:37.5831734Z /var/lib/jenkins/workspace/test/test_nn.py:5402: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). 
This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5831935Z 'mse_loss': lambda x, y, r: F.mse_loss(x, y, reduction=r), 2023-01-11T21:58:37.5832245Z /var/lib/jenkins/workspace/test/test_nn.py:5402: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5832412Z 'mse_loss': lambda x, y, r: F.mse_loss(x, y, reduction=r), 2023-01-11T21:58:37.5832727Z /var/lib/jenkins/workspace/test/test_nn.py:5402: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5832892Z 'mse_loss': lambda x, y, r: F.mse_loss(x, y, reduction=r), 2023-01-11T21:58:37.5833248Z /var/lib/jenkins/workspace/test/test_nn.py:5402: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5833400Z 'mse_loss': lambda x, y, r: F.mse_loss(x, y, reduction=r), 2023-01-11T21:58:37.5833710Z /var/lib/jenkins/workspace/test/test_nn.py:5403: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5833875Z 'l1_loss': lambda x, y, r: F.l1_loss(x, y, reduction=r), 2023-01-11T21:58:37.5834186Z /var/lib/jenkins/workspace/test/test_nn.py:5403: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5834355Z 'l1_loss': lambda x, y, r: F.l1_loss(x, y, reduction=r), 2023-01-11T21:58:37.5834665Z /var/lib/jenkins/workspace/test/test_nn.py:5403: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5834828Z 'l1_loss': lambda x, y, r: F.l1_loss(x, y, reduction=r), 2023-01-11T21:58:37.5835134Z /var/lib/jenkins/workspace/test/test_nn.py:5403: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5835295Z 'l1_loss': lambda x, y, r: F.l1_loss(x, y, reduction=r), 2023-01-11T21:58:37.5835602Z /var/lib/jenkins/workspace/test/test_nn.py:5403: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5835766Z 'l1_loss': lambda x, y, r: F.l1_loss(x, y, reduction=r), 2023-01-11T21:58:37.5836073Z /var/lib/jenkins/workspace/test/test_nn.py:5403: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 
2023-01-11T21:58:37.5836222Z 'l1_loss': lambda x, y, r: F.l1_loss(x, y, reduction=r), 2023-01-11T21:58:37.5836529Z /var/lib/jenkins/workspace/test/test_nn.py:5403: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5836689Z 'l1_loss': lambda x, y, r: F.l1_loss(x, y, reduction=r), 2023-01-11T21:58:37.5837032Z /var/lib/jenkins/workspace/test/test_nn.py:5404: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5837219Z 'smooth_l1_loss': lambda x, y, r: F.smooth_l1_loss(x, y, reduction=r), 2023-01-11T21:58:37.5837588Z /var/lib/jenkins/workspace/test/test_nn.py:5404: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5837778Z 'smooth_l1_loss': lambda x, y, r: F.smooth_l1_loss(x, y, reduction=r), 2023-01-11T21:58:37.5838121Z /var/lib/jenkins/workspace/test/test_nn.py:5404: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5838308Z 'smooth_l1_loss': lambda x, y, r: F.smooth_l1_loss(x, y, reduction=r), 2023-01-11T21:58:37.5838612Z /var/lib/jenkins/workspace/test/test_nn.py:5404: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5838799Z 'smooth_l1_loss': lambda x, y, r: F.smooth_l1_loss(x, y, reduction=r), 2023-01-11T21:58:37.5839105Z /var/lib/jenkins/workspace/test/test_nn.py:5404: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5839276Z 'smooth_l1_loss': lambda x, y, r: F.smooth_l1_loss(x, y, reduction=r), 2023-01-11T21:58:37.5839582Z /var/lib/jenkins/workspace/test/test_nn.py:5404: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5839771Z 'smooth_l1_loss': lambda x, y, r: F.smooth_l1_loss(x, y, reduction=r), 2023-01-11T21:58:37.5840080Z /var/lib/jenkins/workspace/test/test_nn.py:5404: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5840263Z 'smooth_l1_loss': lambda x, y, r: F.smooth_l1_loss(x, y, reduction=r), 2023-01-11T21:58:37.5840570Z /var/lib/jenkins/workspace/test/test_nn.py:5405: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 
2023-01-11T21:58:37.5840747Z 'huber_loss': lambda x, y, r: F.huber_loss(x, y, reduction=r), 2023-01-11T21:58:37.5841053Z /var/lib/jenkins/workspace/test/test_nn.py:5405: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5841227Z 'huber_loss': lambda x, y, r: F.huber_loss(x, y, reduction=r), 2023-01-11T21:58:37.5841533Z /var/lib/jenkins/workspace/test/test_nn.py:5405: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5841704Z 'huber_loss': lambda x, y, r: F.huber_loss(x, y, reduction=r), 2023-01-11T21:58:37.5842008Z /var/lib/jenkins/workspace/test/test_nn.py:5405: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5842211Z 'huber_loss': lambda x, y, r: F.huber_loss(x, y, reduction=r), 2023-01-11T21:58:37.5842504Z /var/lib/jenkins/workspace/test/test_nn.py:5405: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5842675Z 'huber_loss': lambda x, y, r: F.huber_loss(x, y, reduction=r), 2023-01-11T21:58:37.5842976Z /var/lib/jenkins/workspace/test/test_nn.py:5405: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5843148Z 'huber_loss': lambda x, y, r: F.huber_loss(x, y, reduction=r), 2023-01-11T21:58:37.5843488Z /var/lib/jenkins/workspace/test/test_nn.py:5405: UserWarning: Using a target size (torch.Size([2, 10])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 2023-01-11T21:58:37.5843662Z 'huber_loss': lambda x, y, r: F.huber_loss(x, y, reduction=r), 2023-01-11T21:58:37.5843729Z ok (0.038s) 2023-01-11T21:58:37.5843874Z test_pointwise_loss_target_grad_none_reduction (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5844006Z test_projections_errors_on_gru_and_rnn (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5844121Z test_projections_lstm_args_check (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5844360Z test_projections_lstm_check_device (__main__.TestNN) ... skip: multi-GPU not supported (0.001s) 2023-01-11T21:58:37.5844499Z test_projections_lstm_initial_hidden_state (__main__.TestNN) ... ok (0.021s) 2023-01-11T21:58:37.5844651Z test_register_buffer_allows_overwriting_with_same_name (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5844797Z test_register_buffer_raises_error_if_attr_exists (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5844946Z test_register_buffer_raises_error_if_name_is_not_string (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5845086Z test_register_buffer_raises_error_if_not_tensor (__main__.TestNN) ... 
ok (0.001s) 2023-01-11T21:58:37.5845241Z test_register_parameter_allows_overwriting_with_same_name (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5845388Z test_register_parameter_raises_error_if_attr_exists (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5845529Z test_register_parameter_raises_error_if_name_is_not_string (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5845653Z test_register_state_dict_pre_hook (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5845796Z test_register_state_dict_pre_hook_backward_compat (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5845913Z test_relu_inplace_on_view (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5846014Z test_repr (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5846123Z test_requires_grad_ (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5846229Z test_rnn_args_check (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5846444Z test_rnn_check_device (__main__.TestNN) ... skip: multi-GPU not supported (0.001s) 2023-01-11T21:58:37.5846553Z test_rnn_initial_hidden_state (__main__.TestNN) ... ok (0.016s) 2023-01-11T21:58:37.5846662Z test_rnn_weight_norm (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5847105Z test_share_memory (__main__.TestNN) ... /var/lib/jenkins/workspace/test/test_nn.py:181: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:37.5847215Z self.assertFalse(p.storage().is_shared()) 2023-01-11T21:58:37.5847652Z /var/lib/jenkins/workspace/test/test_nn.py:186: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:37.5847757Z self.assertTrue(p.storage().is_shared()) 2023-01-11T21:58:37.5847823Z ok (0.002s) 2023-01-11T21:58:37.5847953Z test_smoothl1loss_intergral_target (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5848095Z test_smoothl1loss_negative_beta_not_supported (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5848190Z test_softmax_cpu (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5848309Z test_softmax_functional_dim0 (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5848467Z test_softmax_functional_dim0_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5848618Z test_softmax_functional_dim3 (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5848776Z test_softmax_functional_dim3_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5848897Z test_softmax_functional_scalar (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5849057Z test_softmax_functional_scalar_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5849169Z test_softmax_lastdim (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5849304Z test_softmax_lastdim_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5849422Z test_softmax_lastdim_dtype (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5849573Z test_softmax_lastdim_dtype_cuda (__main__.TestNN) ... 
skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5849684Z test_softmax_spatial (__main__.TestNN) ... ok (0.006s) 2023-01-11T21:58:37.5849830Z test_softmax_spatial_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5849953Z test_softmax_spatial_dtype (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5850106Z test_softmax_spatial_dtype_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5850229Z test_softmax_spatial_special (__main__.TestNN) ... ok (0.007s) 2023-01-11T21:58:37.5850375Z test_softmax_spatial_special_cuda (__main__.TestNN) ... skip: Excluded from CUDA tests (0.001s) 2023-01-11T21:58:37.5850475Z test_softmin (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5850925Z test_spectral_norm (__main__.TestNN) ... /var/lib/jenkins/workspace/test/test_nn.py:1899: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:37.5851058Z self.assertEqual(m.weight_orig.storage(), m.weight.storage()) 2023-01-11T21:58:37.5851734Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:1904: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:37.5851825Z device=typed_storage.device, 2023-01-11T21:58:37.5851890Z ok (0.024s) 2023-01-11T21:58:37.5852005Z test_spectral_norm_dim (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5852125Z test_spectral_norm_forward (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5852252Z test_spectral_norm_load_state_dict (__main__.TestNN) ... ok (0.017s) 2023-01-11T21:58:37.5852357Z test_spectral_norm_pickle (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5852461Z test_state_dict (__main__.TestNN) ... ok (0.003s) 2023-01-11T21:58:37.5852615Z test_sync_batchnorm_accuracy_cuda (__main__.TestNN) ... skip: CUDA not available (0.001s) 2023-01-11T21:58:37.5852804Z test_sync_batchnorm_backward_elemt (__main__.TestNN) ... skip: CUDA not available (0.001s) 2023-01-11T21:58:37.5852919Z test_threshold_bfloat16 (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5853026Z test_threshold_int (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5853120Z test_to (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5853245Z test_train_errors_for_invalid_mode (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5853353Z test_transformer_args_check (__main__.TestNN) ... ok (0.174s) 2023-01-11T21:58:37.5853478Z test_transformer_layer_args_check (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5853600Z test_transformerdecoder (__main__.TestNN) ... ok (0.103s) 2023-01-11T21:58:37.5853729Z test_transformerdecoderlayer (__main__.TestNN) ... ok (0.026s) 2023-01-11T21:58:37.5853862Z test_transformerdecoderlayer_gelu (__main__.TestNN) ... ok (0.032s) 2023-01-11T21:58:37.5854009Z test_triplet_margin_loss (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5854137Z test_triplet_margin_loss_no_reduce (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5854246Z test_triplet_margin_loss_swap (__main__.TestNN) ... 
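Note on the TypedStorage deprecation warnings seen in test_share_memory and test_spectral_norm: the warning itself names the replacement, tensor.untyped_storage() instead of tensor.storage(). A minimal sketch of the two spellings (the tensor below is illustrative, not the parameter the tests inspect):

```python
import torch

p = torch.randn(4)

legacy = p.storage()          # TypedStorage path: triggers the deprecation warning above
modern = p.untyped_storage()  # UntypedStorage path suggested by the warning

# test_share_memory asserts on sharing through the storage object; the same
# query works on the untyped storage.
print(legacy.is_shared(), modern.is_shared())
```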
ok (0.004s) 2023-01-11T21:58:37.5854377Z test_triplet_margin_loss_swap_no_reduce (__main__.TestNN) ... ok (0.004s) 2023-01-11T21:58:37.5854474Z test_type (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5855154Z test_unflatten (__main__.TestNN) ... /opt/conda/lib/python3.10/site-packages/torch/_tensor.py:1114: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /var/lib/jenkins/workspace/c10/core/TensorImpl.h:1816.) 2023-01-11T21:58:37.5855265Z return super(Tensor, self).refine_names(names) 2023-01-11T21:58:37.5855331Z ok (0.002s) 2023-01-11T21:58:37.5855454Z test_unflatten_invalid_arg (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5855571Z test_unfold_invalid_arg (__main__.TestNN) ... ok (0.013s) 2023-01-11T21:58:37.5855720Z test_upsamplingBilinear2d_spatial_invariance (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5855828Z test_upsamplingLinear1d (__main__.TestNN) ... ok (0.016s) 2023-01-11T21:58:37.5855972Z test_upsamplingLinear1d_spatial_invariance (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5856098Z test_upsamplingTrilinear3d (__main__.TestNN) ... ok (0.044s) 2023-01-11T21:58:37.5856247Z test_upsamplingTrilinear3d_spatial_invariance (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5856364Z test_upsampling_bfloat16 (__main__.TestNN) ... ok (0.018s) 2023-01-11T21:58:37.5856502Z test_upsampling_not_recompute_scale_factor (__main__.TestNN) ... ok (0.010s) 2023-01-11T21:58:37.5856621Z test_upsampling_small_scale (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5856738Z test_vector_to_parameters (__main__.TestNN) ... ok (0.001s) 2023-01-11T21:58:37.5856836Z test_weight_norm (__main__.TestNN) ... ok (0.005s) 2023-01-11T21:58:37.5856950Z test_weight_norm_pickle (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5857051Z test_zero_grad (__main__.TestNN) ... ok (0.002s) 2023-01-11T21:58:37.5857062Z 2023-01-11T21:58:37.5857263Z ---------------------------------------------------------------------- 2023-01-11T21:58:37.5857340Z Ran 2070 tests in 37.830s 2023-01-11T21:58:37.5857346Z 2023-01-11T21:58:37.5857440Z OK (skipped=1111, expected failures=3) 2023-01-11T21:58:37.5857445Z 2023-01-11T21:58:37.5857527Z Generating XML reports... 2023-01-11T21:58:37.5857790Z Generated XML report: test-reports/python-unittest/test_nn/TEST-TestAddRelu-20230111215758.xml 2023-01-11T21:58:37.5858060Z Generated XML report: test-reports/python-unittest/test_nn/TEST-TestConstantPadNd-20230111215758.xml 2023-01-11T21:58:37.5858348Z Generated XML report: test-reports/python-unittest/test_nn/TEST-TestFunctionalPickle-20230111215758.xml 2023-01-11T21:58:37.5858655Z Generated XML report: test-reports/python-unittest/test_nn/TEST-TestFusionEval-20230111215758.xml 2023-01-11T21:58:37.5858924Z Generated XML report: test-reports/python-unittest/test_nn/TEST-TestFusionUtils-20230111215758.xml 2023-01-11T21:58:37.5859163Z Generated XML report: test-reports/python-unittest/test_nn/TEST-TestNN-20230111215758.xml 2023-01-11T21:58:37.5859168Z 2023-01-11T21:58:37.5859553Z ##[endgroup] 2023-01-11T21:58:37.5859810Z FINISHED PRINTING LOG FILE of test_nn (/var/lib/jenkins/workspace/test/test-reports/test_nn_to8mjml2) 2023-01-11T21:58:37.5859815Z 2023-01-11T21:58:37.5859981Z Running test_overrides ... 
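Note on the named-tensor warning emitted by test_unflatten above: it is raised from Tensor.refine_names, which is still flagged as experimental. A minimal sketch of the call that triggers it (the dimension names here are illustrative):

```python
import torch

x = torch.randn(2, 6)

# Attaching dimension names goes through Tensor.refine_names, which emits the
# "Named tensors ... experimental feature" UserWarning seen in the log above.
named = x.refine_names("batch", "features")
print(named.names)  # ('batch', 'features')
```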
[2023-01-11 21:58:37.527449] 2023-01-11T21:58:37.5860303Z Executing ['/opt/conda/bin/python', '-bb', 'test_overrides.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:58:37.527649] 2023-01-11T21:58:40.5893256Z 2023-01-11T21:58:40.5893803Z Expand the folded group to see the log file of test_overrides 2023-01-11T21:58:40.5895204Z ##[group]PRINTING LOG FILE of test_overrides (/var/lib/jenkins/workspace/test/test-reports/test_overrides_gz6u4dmy) 2023-01-11T21:58:40.5895511Z 2023-01-11T21:58:40.5895628Z Running tests... 2023-01-11T21:58:40.5896244Z ---------------------------------------------------------------------- 2023-01-11T21:58:40.5896882Z Test results will be stored in test-reports/python-unittest/test_overrides 2023-01-11T21:58:40.5897408Z test_broadcast_all (__main__.TestBroadcastAllOverride) ... ok (0.229s) 2023-01-11T21:58:40.5897954Z test_parameter_does_not_prevent_dispatch (__main__.TestDisabledTorchFunction) ... ok (0.001s) 2023-01-11T21:58:40.5898459Z test_wrapper (__main__.TestEinsumOverride) ... ok (0.002s) 2023-01-11T21:58:40.5898927Z test_gradcheck (__main__.TestGradCheckOverride) ... ok (0.015s) 2023-01-11T21:58:40.5899399Z test_newones (__main__.TestGradNewOnesOverride) ... ok (0.001s) 2023-01-11T21:58:40.5899838Z test_getitem (__main__.TestIndexing) ... ok (0.001s) 2023-01-11T21:58:40.5900276Z test_getitem_subclass (__main__.TestIndexing) ... ok (0.001s) 2023-01-11T21:58:40.5900682Z test_setitem (__main__.TestIndexing) ... ok (0.001s) 2023-01-11T21:58:40.5901099Z test_setitem_subclass (__main__.TestIndexing) ... ok (0.001s) 2023-01-11T21:58:40.5901526Z test_setitem_val (__main__.TestIndexing) ... ok (0.001s) 2023-01-11T21:58:40.5901940Z test_iterator (__main__.TestIterator) ... ok (0.001s) 2023-01-11T21:58:40.5902508Z test_max (__main__.TestNamedTuple) ... ok (0.001s) 2023-01-11T21:58:40.5903430Z test_pickle (__main__.TestPickle) ... ok (0.001s) 2023-01-11T21:58:40.5903839Z test_rnn (__main__.TestRNN) ... ok (0.002s) 2023-01-11T21:58:40.5904252Z test_resolve_name (__main__.TestResolveName) ... ok (0.083s) 2023-01-11T21:58:40.5904735Z test_all_same_mode (__main__.TestTorchFunctionMode) ... ok (0.001s) 2023-01-11T21:58:40.5905247Z test_basic (__main__.TestTorchFunctionMode) ... ok (0.001s) 2023-01-11T21:58:40.5905722Z test_disable_enable_subclass (__main__.TestTorchFunctionMode) ... ok (0.001s) 2023-01-11T21:58:40.5906254Z test_disable_subclass_not_mode (__main__.TestTorchFunctionMode) ... ok (0.001s) 2023-01-11T21:58:40.5906771Z test_distributions_bernoulli (__main__.TestTorchFunctionMode) ... ok (0.001s) 2023-01-11T21:58:40.5907294Z test_error_using_class_method_on_mode (__main__.TestTorchFunctionMode) ... ok (0.002s) 2023-01-11T21:58:40.5907804Z test_factory_override (__main__.TestTorchFunctionMode) ... ok (0.001s) 2023-01-11T21:58:40.5908296Z test_get_cur_mode (__main__.TestTorchFunctionMode) ... ok (0.001s) 2023-01-11T21:58:40.5908773Z test_get_mode_stack (__main__.TestTorchFunctionMode) ... ok (0.001s) 2023-01-11T21:58:40.5909272Z test_mode_notimplemented_loop (__main__.TestTorchFunctionMode) ... ok (0.001s) 2023-01-11T21:58:40.5909769Z test_modes_handle_first (__main__.TestTorchFunctionMode) ... ok (0.001s) 2023-01-11T21:58:40.5910320Z test_modes_return_notimplemented (__main__.TestTorchFunctionMode) ... ok (0.001s) 2023-01-11T21:58:40.5910957Z test_nested_modes_with_python_has_torch_function (__main__.TestTorchFunctionMode) ... ok (0.001s) 2023-01-11T21:58:40.5911724Z test_nested_same_mode (__main__.TestTorchFunctionMode) ... 
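Note on the test_overrides run: the TestTorchFunctionMode and TestTorchFunctionOverride entries listed here exercise the __torch_function__ protocol, i.e. that each Tensor method dispatches through a subclass override and that the context-manager (mode) form also catches calls with no Tensor arguments, such as factory functions. A minimal sketch of both forms, assuming current torch.overrides behavior (LoggingTensor and LoggingMode are illustrative names, not helpers from test_overrides.py):

```python
import torch
from torch.overrides import TorchFunctionMode

class LoggingTensor(torch.Tensor):
    # Subclass form: every torch function or Tensor method applied to this
    # subclass is routed through __torch_function__ before it runs.
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        print("subclass intercepted:", func.__name__)
        return super().__torch_function__(func, types, args, kwargs)

class LoggingMode(TorchFunctionMode):
    # Mode form: while the context manager is active, all torch calls are
    # intercepted, including factory functions that see no Tensor arguments.
    def __torch_function__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        print("mode intercepted:", func.__name__)
        return func(*args, **kwargs)

t = torch.randn(3).as_subclass(LoggingTensor)
_ = t.abs()   # routed through LoggingTensor.__torch_function__
_ = t + t     # operators such as __add__ are covered the same way

with LoggingMode():
    _ = torch.ones(2)  # factory call: only the mode sees this one
```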
ok (0.001s) 2023-01-11T21:58:40.5912225Z test_nn_parse_to (__main__.TestTorchFunctionMode) ... ok (0.001s) 2023-01-11T21:58:40.5912766Z test_reentrant_mode_idiom (__main__.TestTorchFunctionMode) ... ok (0.001s) 2023-01-11T21:58:40.5913323Z test_restacking_with_ancestor (__main__.TestTorchFunctionMode) ... ok (0.001s) 2023-01-11T21:58:40.5913861Z test_subclass_hash (__main__.TestTorchFunctionMode) ... ok (0.001s) 2023-01-11T21:58:40.5914360Z test_with_mode (__main__.TestTorchFunctionMode) ... ok (0.001s) 2023-01-11T21:58:40.5914920Z test_with_mode_created_separately (__main__.TestTorchFunctionMode) ... ok (0.001s) 2023-01-11T21:58:40.5915476Z test_with_nested_modes (__main__.TestTorchFunctionMode) ... ok (0.001s) 2023-01-11T21:58:40.5916025Z test_Tensor___add__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5916653Z test_Tensor___and__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5917203Z test_Tensor___array__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5917816Z test_Tensor___array_wrap__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5918285Z test_Tensor___bool__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5918595Z test_Tensor___complex__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5918899Z test_Tensor___contains__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5919237Z test_Tensor___cuda_array_interface_____get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5919571Z test_Tensor___deepcopy__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5919867Z test_Tensor___div__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5920177Z test_Tensor___dlpack__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5920499Z test_Tensor___dlpack_device__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5920812Z test_Tensor___eq__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5921103Z test_Tensor___float__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5921420Z test_Tensor___floordiv__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5921733Z test_Tensor___format__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5922024Z test_Tensor___ge__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5922329Z test_Tensor___getitem__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5922634Z test_Tensor___gt__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5922935Z test_Tensor___iadd__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5923229Z test_Tensor___iand__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5923534Z test_Tensor___idiv__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5923846Z test_Tensor___ifloordiv__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5924151Z test_Tensor___ilshift__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5924458Z test_Tensor___imod__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5924759Z test_Tensor___imul__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5925062Z test_Tensor___index__ (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.5925353Z test_Tensor___int__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5925656Z test_Tensor___invert__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5925961Z test_Tensor___ior__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5926310Z test_Tensor___irshift__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5926616Z test_Tensor___isub__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5926917Z test_Tensor___ixor__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5927214Z test_Tensor___le__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5927501Z test_Tensor___len__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5927800Z test_Tensor___long__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5928103Z test_Tensor___lshift__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5928391Z test_Tensor___lt__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5928697Z test_Tensor___matmul__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5929035Z test_Tensor___mod__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5929339Z test_Tensor___mul__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5929623Z test_Tensor___ne__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5929928Z test_Tensor___nonzero__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5930230Z test_Tensor___or__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5930519Z test_Tensor___radd__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5930825Z test_Tensor___rand__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5931127Z test_Tensor___rdiv__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5931422Z test_Tensor___reduce_ex__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5931731Z test_Tensor___repr__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5932040Z test_Tensor___reversed__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5932360Z test_Tensor___rfloordiv__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5932661Z test_Tensor___rlshift__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5932967Z test_Tensor___rmatmul__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5933272Z test_Tensor___rmod__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5933558Z test_Tensor___rmul__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5933854Z test_Tensor___ror__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5934156Z test_Tensor___rpow__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5934463Z test_Tensor___rrshift__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5934758Z test_Tensor___rshift__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5935068Z test_Tensor___rsub__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5935366Z test_Tensor___rxor__ (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.5935655Z test_Tensor___setitem__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5935965Z test_Tensor___setstate__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5936270Z test_Tensor___sub__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5936575Z test_Tensor___truediv__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5936864Z test_Tensor___xor__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5937190Z test_Tensor__autocast_to_full_precision (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5937545Z test_Tensor__autocast_to_reduced_precision (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5937874Z test_Tensor__coalesced_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5938219Z test_Tensor__dimI (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5938520Z test_Tensor__dimV (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5938826Z test_Tensor__indices (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5939117Z test_Tensor__is_view (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5939437Z test_Tensor__nested_tensor_size (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5939752Z test_Tensor__nnz (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5940045Z test_Tensor__to_dense (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5940361Z test_Tensor__update_names (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5940675Z test_Tensor__values (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5941012Z test_Tensor_abs (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5941299Z test_Tensor_abs_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5941601Z test_Tensor_absolute (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5941910Z test_Tensor_absolute_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5942203Z test_Tensor_acos (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5942692Z test_Tensor_acos_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5942997Z test_Tensor_acosh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5943300Z test_Tensor_acosh_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5943588Z test_Tensor_add (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5943890Z test_Tensor_add_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5944200Z test_Tensor_addbmm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5944494Z test_Tensor_addbmm_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5944798Z test_Tensor_addcdiv (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5945103Z test_Tensor_addcdiv_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5945397Z test_Tensor_addcmul (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5945701Z test_Tensor_addcmul_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5946010Z test_Tensor_addmm (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.5946313Z test_Tensor_addmm_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5946602Z test_Tensor_addmv (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5946905Z test_Tensor_addmv_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5947215Z test_Tensor_addr (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5947503Z test_Tensor_addr_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5947809Z test_Tensor_adjoint (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5948118Z test_Tensor_align_as (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5948426Z test_Tensor_align_to (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5948711Z test_Tensor_all (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5949016Z test_Tensor_allclose (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5949324Z test_Tensor_amax (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5949612Z test_Tensor_amin (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5949916Z test_Tensor_aminmax (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5950295Z test_Tensor_angle (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5950596Z test_Tensor_any (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5950890Z test_Tensor_apply_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5951196Z test_Tensor_arccos (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5951502Z test_Tensor_arccos_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5951797Z test_Tensor_arccosh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5952103Z test_Tensor_arccosh_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5952411Z test_Tensor_arcsin (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5952713Z test_Tensor_arcsin_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5953005Z test_Tensor_arcsinh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5953357Z test_Tensor_arcsinh_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5953663Z test_Tensor_arctan (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5953953Z test_Tensor_arctan2 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5954257Z test_Tensor_arctan2_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5954563Z test_Tensor_arctan_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5954851Z test_Tensor_arctanh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5955157Z test_Tensor_arctanh_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5955464Z test_Tensor_argmax (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5955766Z test_Tensor_argmin (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5956059Z test_Tensor_argsort (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5956367Z test_Tensor_argwhere (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5956680Z test_Tensor_as_strided (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.5956983Z test_Tensor_as_strided_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5957308Z test_Tensor_as_strided_scatter (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5957688Z test_Tensor_asin (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5957992Z test_Tensor_asin_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5958282Z test_Tensor_asinh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5958585Z test_Tensor_asinh_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5958884Z test_Tensor_atan (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5959178Z test_Tensor_atan2 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5959480Z test_Tensor_atan2_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5959779Z test_Tensor_atan_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5960077Z test_Tensor_atanh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5960366Z test_Tensor_atanh_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5960672Z test_Tensor_backward (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5960985Z test_Tensor_baddbmm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5961282Z test_Tensor_baddbmm_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5961595Z test_Tensor_bernoulli (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5961908Z test_Tensor_bernoulli_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5962263Z test_Tensor_bfloat16 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5962557Z test_Tensor_bincount (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5962872Z test_Tensor_bitwise_and (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5963191Z test_Tensor_bitwise_and_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5963502Z test_Tensor_bitwise_left_shift (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5963833Z test_Tensor_bitwise_left_shift_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5964155Z test_Tensor_bitwise_not (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5964472Z test_Tensor_bitwise_not_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5964773Z test_Tensor_bitwise_or (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5965113Z test_Tensor_bitwise_or_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5965446Z test_Tensor_bitwise_right_shift (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5965769Z test_Tensor_bitwise_right_shift_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5966095Z test_Tensor_bitwise_xor (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5966407Z test_Tensor_bitwise_xor_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5966715Z test_Tensor_bmm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5967004Z test_Tensor_bool (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5967311Z test_Tensor_broadcast_to (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.5967620Z test_Tensor_byte (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5967907Z test_Tensor_cauchy_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5968221Z test_Tensor_ccol_indices (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5968535Z test_Tensor_cdouble (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5968951Z test_Tensor_ceil (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5969255Z test_Tensor_ceil_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5969562Z test_Tensor_cfloat (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5969865Z test_Tensor_chalf (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5970152Z test_Tensor_char (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5970456Z test_Tensor_cholesky (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5970775Z test_Tensor_cholesky_inverse (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5971088Z test_Tensor_cholesky_solve (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5971408Z test_Tensor_chunk (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5971713Z test_Tensor_clamp (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5972022Z test_Tensor_clamp_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5972320Z test_Tensor_clamp_max (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5972630Z test_Tensor_clamp_max_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5972942Z test_Tensor_clamp_min (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5973239Z test_Tensor_clamp_min_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5973546Z test_Tensor_clip (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5973844Z test_Tensor_clip_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5974145Z test_Tensor_clone (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5974480Z test_Tensor_coalesce (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5974794Z test_Tensor_col_indices (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5975104Z test_Tensor_conj (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5975402Z test_Tensor_conj_physical (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5975723Z test_Tensor_conj_physical_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5976042Z test_Tensor_contiguous (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5976347Z test_Tensor_copy_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5976644Z test_Tensor_copysign (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5976956Z test_Tensor_copysign_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5977296Z test_Tensor_corrcoef (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5977588Z test_Tensor_cos (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5977887Z test_Tensor_cos_ (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.5978186Z test_Tensor_cosh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5978475Z test_Tensor_cosh_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5978787Z test_Tensor_count_nonzero (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5979092Z test_Tensor_cov (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5979390Z test_Tensor_cpu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5979678Z test_Tensor_cross (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5979987Z test_Tensor_crow_indices (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5980301Z test_Tensor_cuda (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5980593Z test_Tensor_cummax (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5980899Z test_Tensor_cummin (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5981205Z test_Tensor_cumprod (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5981515Z test_Tensor_cumprod_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5981806Z test_Tensor_cumsum (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5982109Z test_Tensor_cumsum_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5982515Z test_Tensor_data_ptr (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5982816Z test_Tensor_deg2rad (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5983124Z test_Tensor_deg2rad_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5983440Z test_Tensor_dense_dim (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5983755Z test_Tensor_dequantize (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5984050Z test_Tensor_det (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5984354Z test_Tensor_detach (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5984661Z test_Tensor_detach_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5984950Z test_Tensor_diag (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5985261Z test_Tensor_diag_embed (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5985575Z test_Tensor_diagflat (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5985875Z test_Tensor_diagonal (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5986190Z test_Tensor_diagonal_scatter (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5986585Z test_Tensor_diff (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5986889Z test_Tensor_digamma (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5987183Z test_Tensor_digamma_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5987485Z test_Tensor_dim (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5987783Z test_Tensor_dist (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5988069Z test_Tensor_div (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5988366Z test_Tensor_div_ (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.5988669Z test_Tensor_divide (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5988976Z test_Tensor_divide_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5989266Z test_Tensor_dot (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5989607Z test_Tensor_double (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5989915Z test_Tensor_dsplit (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5990214Z test_Tensor_element_size (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5990520Z test_Tensor_eq (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5990819Z test_Tensor_eq_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5991119Z test_Tensor_equal (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5991406Z test_Tensor_erf (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5991704Z test_Tensor_erf_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5992002Z test_Tensor_erfc (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5992290Z test_Tensor_erfc_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5992595Z test_Tensor_erfinv (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5992898Z test_Tensor_erfinv_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5993198Z test_Tensor_exp (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5993482Z test_Tensor_exp2 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5993781Z test_Tensor_exp2_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5994079Z test_Tensor_exp_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5994369Z test_Tensor_expand (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5994680Z test_Tensor_expand_as (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5994987Z test_Tensor_expm1 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5995277Z test_Tensor_expm1_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5995592Z test_Tensor_exponential_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5995901Z test_Tensor_fill_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5996215Z test_Tensor_fill_diagonal_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5996510Z test_Tensor_fix (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5996808Z test_Tensor_fix_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5997114Z test_Tensor_flatten (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5997459Z test_Tensor_flip (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5997769Z test_Tensor_fliplr (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5998075Z test_Tensor_flipud (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5998381Z test_Tensor_float (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5998719Z test_Tensor_float_power (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5999037Z test_Tensor_float_power_ (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.5999346Z test_Tensor_floor (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5999633Z test_Tensor_floor_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.5999945Z test_Tensor_floor_divide (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6000269Z test_Tensor_floor_divide_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6000577Z test_Tensor_fmax (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6000864Z test_Tensor_fmin (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6001164Z test_Tensor_fmod (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6001467Z test_Tensor_fmod_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6001787Z test_Tensor_frac (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6002091Z test_Tensor_frac_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6002395Z test_Tensor_frexp (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6002686Z test_Tensor_gather (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6002991Z test_Tensor_gcd (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6003291Z test_Tensor_gcd_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6003590Z test_Tensor_ge (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6003876Z test_Tensor_ge_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6004183Z test_Tensor_geometric_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6004492Z test_Tensor_geqrf (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6004782Z test_Tensor_ger (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6005089Z test_Tensor_get_device (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6005401Z test_Tensor_greater (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6005712Z test_Tensor_greater_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6006018Z test_Tensor_greater_equal (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6006339Z test_Tensor_greater_equal_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6006652Z test_Tensor_gt (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6006936Z test_Tensor_gt_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6007239Z test_Tensor_half (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6007549Z test_Tensor_hardshrink (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6007866Z test_Tensor_has_names (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6008165Z test_Tensor_heaviside (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6008478Z test_Tensor_heaviside_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6008785Z test_Tensor_histc (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6009076Z test_Tensor_histogram (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6009383Z test_Tensor_hsplit (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6009687Z test_Tensor_hypot (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6009989Z test_Tensor_hypot_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6010273Z test_Tensor_i0 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6010569Z test_Tensor_i0_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6010903Z test_Tensor_igamma (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6011198Z test_Tensor_igamma_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6011504Z test_Tensor_igammac (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6011810Z test_Tensor_igammac_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6012110Z test_Tensor_index_add (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6012424Z test_Tensor_index_add_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6012739Z test_Tensor_index_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6013055Z test_Tensor_index_copy_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6013354Z test_Tensor_index_fill (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6013700Z test_Tensor_index_fill_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6014017Z test_Tensor_index_put (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6014317Z test_Tensor_index_put_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6014634Z test_Tensor_index_reduce (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6014950Z test_Tensor_index_reduce_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6015269Z test_Tensor_index_select (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6015567Z test_Tensor_indices (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6015872Z test_Tensor_inner (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6016172Z test_Tensor_int (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6016464Z test_Tensor_int_repr (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6016776Z test_Tensor_inverse (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6017079Z test_Tensor_ipu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6017389Z test_Tensor_is_coalesced (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6017693Z test_Tensor_is_complex (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6018000Z test_Tensor_is_conj (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6018313Z test_Tensor_is_contiguous (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6018626Z test_Tensor_is_distributed (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6018950Z test_Tensor_is_floating_point (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6019277Z test_Tensor_is_inference (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6019589Z test_Tensor_is_neg (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6019886Z test_Tensor_is_nonzero (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6020196Z test_Tensor_is_pinned (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6020509Z test_Tensor_is_same_size (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6020807Z test_Tensor_is_set_to (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6021113Z test_Tensor_is_shared (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6021421Z test_Tensor_is_signed (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6021727Z test_Tensor_isclose (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6022022Z test_Tensor_isfinite (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6022326Z test_Tensor_isinf (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6022837Z test_Tensor_isnan (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6023131Z test_Tensor_isneginf (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6023441Z test_Tensor_isposinf (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6023744Z test_Tensor_isreal (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6024038Z test_Tensor_istft (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6024338Z test_Tensor_item (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6024638Z test_Tensor_kron (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6024943Z test_Tensor_kthvalue (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6025234Z test_Tensor_lcm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6025528Z test_Tensor_lcm_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6025873Z test_Tensor_ldexp (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6026166Z test_Tensor_ldexp_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6026464Z test_Tensor_le (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6026760Z test_Tensor_le_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6027061Z test_Tensor_lerp (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6027348Z test_Tensor_lerp_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6027648Z test_Tensor_less (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6027947Z test_Tensor_less_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6028241Z test_Tensor_less_equal (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6028553Z test_Tensor_less_equal_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6028867Z test_Tensor_lgamma (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6029173Z test_Tensor_lgamma_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6029460Z test_Tensor_log (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6029757Z test_Tensor_log10 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6030060Z test_Tensor_log10_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6030349Z test_Tensor_log1p (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6030650Z test_Tensor_log1p_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6030950Z test_Tensor_log2 (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6031238Z test_Tensor_log2_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6031535Z test_Tensor_log_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6031842Z test_Tensor_log_normal_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6032156Z test_Tensor_log_softmax (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6032459Z test_Tensor_logaddexp (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6032778Z test_Tensor_logaddexp2 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6033099Z test_Tensor_logcumsumexp (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6033399Z test_Tensor_logdet (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6033710Z test_Tensor_logical_and (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6034027Z test_Tensor_logical_and_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6034343Z test_Tensor_logical_not (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6034650Z test_Tensor_logical_not_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6035000Z test_Tensor_logical_or (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6035310Z test_Tensor_logical_or_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6035606Z test_Tensor_logical_xor (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6035918Z test_Tensor_logical_xor_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6036225Z test_Tensor_logit (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6036528Z test_Tensor_logit_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6036825Z test_Tensor_logsumexp (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6037133Z test_Tensor_long (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6037486Z test_Tensor_lt (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6037805Z test_Tensor_lt_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6038106Z test_Tensor_lu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6038408Z test_Tensor_lu_solve (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6038713Z test_Tensor_map2_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6039000Z test_Tensor_map_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6039306Z test_Tensor_masked_fill (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6039622Z test_Tensor_masked_fill_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6039926Z test_Tensor_masked_scatter (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6040249Z test_Tensor_masked_scatter_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6040574Z test_Tensor_masked_select (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6040888Z test_Tensor_matmul (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6041185Z test_Tensor_matrix_exp (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6041502Z test_Tensor_matrix_power (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6041809Z test_Tensor_max (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6042100Z test_Tensor_maximum (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6042403Z test_Tensor_mean (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6042705Z test_Tensor_median (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6042994Z test_Tensor_min (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6043300Z test_Tensor_minimum (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6043601Z test_Tensor_mm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6043909Z test_Tensor_mode (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6044201Z test_Tensor_moveaxis (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6044510Z test_Tensor_movedim (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6044815Z test_Tensor_msort (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6045099Z test_Tensor_mul (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6045399Z test_Tensor_mul_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6045711Z test_Tensor_multinomial (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6046029Z test_Tensor_multiply (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6046328Z test_Tensor_multiply_ (__main__.TestTorchFunctionOverride) ... ok (0.003s) 2023-01-11T21:58:40.6046630Z test_Tensor_mv (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6046971Z test_Tensor_mvlgamma (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6047270Z test_Tensor_mvlgamma_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6047583Z test_Tensor_nan_to_num (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6047894Z test_Tensor_nan_to_num_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6048202Z test_Tensor_nanmean (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6048501Z test_Tensor_nanmedian (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6048818Z test_Tensor_nanquantile (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6049130Z test_Tensor_nansum (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6049422Z test_Tensor_narrow (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6049764Z test_Tensor_narrow_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6050085Z test_Tensor_ndimension (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6050394Z test_Tensor_ne (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6050678Z test_Tensor_ne_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6050976Z test_Tensor_neg (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6051276Z test_Tensor_neg_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6051570Z test_Tensor_negative (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6051881Z test_Tensor_negative_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6052184Z test_Tensor_nelement (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6067299Z test_Tensor_nextafter (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6067801Z test_Tensor_nextafter_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6068237Z test_Tensor_nonzero (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6068618Z test_Tensor_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6068927Z test_Tensor_normal_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6069239Z test_Tensor_not_equal (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6069537Z test_Tensor_not_equal_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6069846Z test_Tensor_numel (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6070156Z test_Tensor_numpy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6070445Z test_Tensor_orgqr (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6070744Z test_Tensor_ormqr (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6071049Z test_Tensor_outer (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6071354Z test_Tensor_permute (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6071655Z test_Tensor_pin_memory (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6071969Z test_Tensor_pinverse (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6072281Z test_Tensor_polygamma (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6072586Z test_Tensor_polygamma_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6072899Z test_Tensor_positive (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6073207Z test_Tensor_pow (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6073509Z test_Tensor_pow_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6073798Z test_Tensor_prelu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6074106Z test_Tensor_prod (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6074521Z test_Tensor_put (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6074808Z test_Tensor_put_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6075122Z test_Tensor_q_per_channel_axis (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6075460Z test_Tensor_q_per_channel_scales (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6075793Z test_Tensor_q_per_channel_zero_points (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6076121Z test_Tensor_q_scale (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6076432Z test_Tensor_q_zero_point (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6076738Z test_Tensor_qr (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6077028Z test_Tensor_qscheme (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6077496Z test_Tensor_quantile (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6077816Z test_Tensor_rad2deg (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6078112Z test_Tensor_rad2deg_ (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6078413Z test_Tensor_random_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6078717Z test_Tensor_ravel (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6079031Z test_Tensor_reciprocal (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6079338Z test_Tensor_reciprocal_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6079658Z test_Tensor_record_stream (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6079977Z test_Tensor_refine_names (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6080288Z test_Tensor_register_hook (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6080600Z test_Tensor_relu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6080899Z test_Tensor_relu_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6081208Z test_Tensor_remainder (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6081508Z test_Tensor_remainder_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6081817Z test_Tensor_rename (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6082122Z test_Tensor_rename_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6082412Z test_Tensor_renorm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6082714Z test_Tensor_renorm_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6083022Z test_Tensor_repeat (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6083344Z test_Tensor_repeat_interleave (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6083662Z test_Tensor_requires_grad_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6083979Z test_Tensor_reshape (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6084291Z test_Tensor_reshape_as (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6084619Z test_Tensor_resize (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6084926Z test_Tensor_resize_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6085232Z test_Tensor_resize_as (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6085541Z test_Tensor_resize_as_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6085845Z test_Tensor_resize_as_sparse_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6086166Z test_Tensor_resolve_conj (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6086524Z test_Tensor_resolve_neg (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6086828Z test_Tensor_retain_grad (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6087132Z test_Tensor_roll (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6087431Z test_Tensor_rot90 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6087718Z test_Tensor_round (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6088020Z test_Tensor_round_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6088331Z test_Tensor_row_indices (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6088637Z test_Tensor_rsqrt (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6088925Z test_Tensor_rsqrt_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6089271Z test_Tensor_scatter (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6089581Z test_Tensor_scatter_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6089881Z test_Tensor_scatter_add (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6090198Z test_Tensor_scatter_add_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6090519Z test_Tensor_scatter_reduce (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6090844Z test_Tensor_scatter_reduce_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6091146Z test_Tensor_select (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6091456Z test_Tensor_select_scatter (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6091763Z test_Tensor_set_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6092051Z test_Tensor_sgn (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6092354Z test_Tensor_sgn_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6092661Z test_Tensor_share_memory_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6092971Z test_Tensor_short (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6093264Z test_Tensor_sigmoid (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6093574Z test_Tensor_sigmoid_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6093874Z test_Tensor_sign (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6094164Z test_Tensor_sign_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6094464Z test_Tensor_signbit (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6094767Z test_Tensor_sin (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6095065Z test_Tensor_sin_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6095352Z test_Tensor_sinc (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6095651Z test_Tensor_sinc_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6095950Z test_Tensor_sinh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6096235Z test_Tensor_sinh_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6096530Z test_Tensor_size (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6096840Z test_Tensor_slice_scatter (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6097142Z test_Tensor_slogdet (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6097442Z test_Tensor_smm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6097747Z test_Tensor_softmax (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6098048Z test_Tensor_sort (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6098383Z test_Tensor_sparse_dim (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6098699Z test_Tensor_sparse_mask (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6099018Z test_Tensor_sparse_resize_ (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6099339Z test_Tensor_sparse_resize_and_clear_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6099660Z test_Tensor_split (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6099972Z test_Tensor_split_with_sizes (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6100282Z test_Tensor_sqrt (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6100567Z test_Tensor_sqrt_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6100868Z test_Tensor_square (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6101206Z test_Tensor_square_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6101501Z test_Tensor_squeeze (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6101806Z test_Tensor_squeeze_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6102115Z test_Tensor_sspaddmm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6102585Z test_Tensor_std (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6102879Z test_Tensor_stft (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6103182Z test_Tensor_storage (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6103496Z test_Tensor_storage_offset (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6103806Z test_Tensor_storage_type (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6104115Z test_Tensor_sub (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6104413Z test_Tensor_sub_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6104721Z test_Tensor_subtract (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6105020Z test_Tensor_subtract_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6105326Z test_Tensor_sum (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6105632Z test_Tensor_sum_to_size (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6105922Z test_Tensor_svd (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6106227Z test_Tensor_swapaxes (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6106543Z test_Tensor_swapaxes_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6106856Z test_Tensor_swapdims (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6107155Z test_Tensor_swapdims_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6107466Z test_Tensor_symeig (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6107769Z test_Tensor_t (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6108053Z test_Tensor_t_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6108349Z test_Tensor_take (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6108659Z test_Tensor_take_along_dim (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6108960Z test_Tensor_tan (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6109258Z test_Tensor_tan_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6109553Z test_Tensor_tanh (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6109855Z test_Tensor_tanh_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6110149Z test_Tensor_tensor_split (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6110459Z test_Tensor_tile (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6110830Z test_Tensor_to (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6111118Z test_Tensor_to_dense (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6111425Z test_Tensor_to_mkldnn (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6111733Z test_Tensor_to_sparse (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6112037Z test_Tensor_tolist (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6112330Z test_Tensor_topk (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6112631Z test_Tensor_trace (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6112939Z test_Tensor_transpose (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6113241Z test_Tensor_transpose_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6113605Z test_Tensor_triangular_solve (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6113924Z test_Tensor_tril (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6114223Z test_Tensor_tril_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6114507Z test_Tensor_triu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6114805Z test_Tensor_triu_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6115113Z test_Tensor_true_divide (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6115418Z test_Tensor_true_divide_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6115725Z test_Tensor_trunc (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6116025Z test_Tensor_trunc_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6116315Z test_Tensor_type (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6116622Z test_Tensor_type_as (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6116928Z test_Tensor_unbind (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6117230Z test_Tensor_unfold (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6117584Z test_Tensor_uniform_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6117887Z test_Tensor_unique (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6118202Z test_Tensor_unique_consecutive (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6118514Z test_Tensor_unsafe_chunk (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6118830Z test_Tensor_unsafe_split (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6119155Z test_Tensor_unsafe_split_with_sizes (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6119485Z test_Tensor_unsqueeze (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6119788Z test_Tensor_unsqueeze_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6120107Z test_Tensor_untyped_storage (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6120419Z test_Tensor_values (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6120708Z test_Tensor_var (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6121012Z test_Tensor_vdot (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6121313Z test_Tensor_view (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6121617Z test_Tensor_view_as (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6121906Z test_Tensor_vsplit (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6122208Z test_Tensor_where (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6122514Z test_Tensor_xlogy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6122845Z test_Tensor_xlogy_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6123146Z test_Tensor_xpu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6123446Z test_Tensor_zero_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6123755Z test__TensorBase_H___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6124063Z test__TensorBase_T___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6124397Z test__TensorBase__backward_hooks___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6124733Z test__TensorBase__base___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6125050Z test__TensorBase__cdata___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6125407Z test__TensorBase__grad___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6125744Z test__TensorBase__grad_fn___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6126076Z test__TensorBase__version___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6126394Z test__TensorBase_data___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6126721Z test__TensorBase_device___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6127049Z test__TensorBase_dtype___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6127364Z test__TensorBase_grad___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6127689Z test__TensorBase_grad_fn___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6128014Z test__TensorBase_imag___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6128339Z test__TensorBase_is_cpu___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6128659Z test__TensorBase_is_cuda___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6128987Z test__TensorBase_is_ipu___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6129313Z test__TensorBase_is_leaf___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6129628Z test__TensorBase_is_meta___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6129957Z test__TensorBase_is_mkldnn___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6130285Z test__TensorBase_is_mps___get__ (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6130614Z test__TensorBase_is_nested___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6130926Z test__TensorBase_is_ort___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6131259Z test__TensorBase_is_quantized___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6131596Z test__TensorBase_is_sparse___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6131932Z test__TensorBase_is_sparse_csr___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6132257Z test__TensorBase_is_vulkan___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6132585Z test__TensorBase_is_xpu___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6132910Z test__TensorBase_layout___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6133220Z test__TensorBase_mH___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6133535Z test__TensorBase_mT___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6133854Z test__TensorBase_name___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6134212Z test__TensorBase_names___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6134521Z test__TensorBase_ndim___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6134847Z test__TensorBase_output_nr___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6135173Z test__TensorBase_real___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6135496Z test__TensorBase_requires_grad___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6135834Z test__TensorBase_retains_grad___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6136165Z test__TensorBase_shape___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6136490Z test__TensorBase_volatile___get__ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6136785Z test_base (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6137109Z test_grad (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6137432Z test_has_torch_function_non_sequence (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6137730Z test_mean_semantics (__main__.TestTorchFunctionOverride) 2023-01-11T21:58:40.6138017Z Test that a function with one argument can be overrided ... ok (0.001s) 2023-01-11T21:58:40.6138301Z test_mm_semantics (__main__.TestTorchFunctionOverride) 2023-01-11T21:58:40.6138593Z Test that a function with multiple arguments can be overrided ... ok (0.002s) 2023-01-11T21:58:40.6138880Z test_pow_rpow (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6139178Z test_precedence_semantics (__main__.TestTorchFunctionOverride) 2023-01-11T21:58:40.6139478Z Test semantics for __torch_function__ for functions that take ... ok (0.003s) 2023-01-11T21:58:40.6139774Z test_tensor_subclass_propagation (__main__.TestTorchFunctionOverride) 2023-01-11T21:58:40.6140080Z this test exercises the functionality described in ... ok (0.001s) 2023-01-11T21:58:40.6140385Z test_torch__C__fft_fft_fft (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6140692Z test_torch__C__fft_fft_fft2 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6141006Z test_torch__C__fft_fft_fftn (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6141327Z test_torch__C__fft_fft_fftshift (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6141647Z test_torch__C__fft_fft_hfft (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6141953Z test_torch__C__fft_fft_hfft2 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6142269Z test_torch__C__fft_fft_hfftn (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6142719Z test_torch__C__fft_fft_ifft (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6143023Z test_torch__C__fft_fft_ifft2 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6143339Z test_torch__C__fft_fft_ifftn (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6143664Z test_torch__C__fft_fft_ifftshift (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6143986Z test_torch__C__fft_fft_ihfft (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6144293Z test_torch__C__fft_fft_ihfft2 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6144611Z test_torch__C__fft_fft_ihfftn (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6144929Z test_torch__C__fft_fft_irfft (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6145232Z test_torch__C__fft_fft_irfft2 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6145550Z test_torch__C__fft_fft_irfftn (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6145871Z test_torch__C__fft_fft_rfft (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6146249Z test_torch__C__fft_fft_rfft2 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6146552Z test_torch__C__fft_fft_rfftn (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6146884Z test_torch__C__linalg_linalg_cholesky (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6147231Z test_torch__C__linalg_linalg_cholesky_ex (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6147571Z test_torch__C__linalg_linalg_cond (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6147896Z test_torch__C__linalg_linalg_cross (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6148234Z test_torch__C__linalg_linalg_det (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6148572Z test_torch__C__linalg_linalg_diagonal (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6148935Z test_torch__C__linalg_linalg_eig (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6149273Z test_torch__C__linalg_linalg_eigh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6149613Z test_torch__C__linalg_linalg_eigvals (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6149956Z test_torch__C__linalg_linalg_eigvalsh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6150304Z test_torch__C__linalg_linalg_householder_product (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6150653Z test_torch__C__linalg_linalg_inv (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6150986Z test_torch__C__linalg_linalg_inv_ex (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6151314Z test_torch__C__linalg_linalg_ldl_factor (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6151667Z test_torch__C__linalg_linalg_ldl_factor_ex (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6152017Z test_torch__C__linalg_linalg_ldl_solve (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6152352Z test_torch__C__linalg_linalg_lstsq (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6152670Z test_torch__C__linalg_linalg_lu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6153009Z test_torch__C__linalg_linalg_lu_factor (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6153354Z test_torch__C__linalg_linalg_lu_factor_ex (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6153695Z test_torch__C__linalg_linalg_lu_solve (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6154019Z test_torch__C__linalg_linalg_matmul (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6154362Z test_torch__C__linalg_linalg_matrix_exp (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6154712Z test_torch__C__linalg_linalg_matrix_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6155046Z test_torch__C__linalg_linalg_matrix_power (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6155392Z test_torch__C__linalg_linalg_matrix_rank (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6155737Z test_torch__C__linalg_linalg_multi_dot (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6156076Z test_torch__C__linalg_linalg_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6156394Z test_torch__C__linalg_linalg_pinv (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6156721Z test_torch__C__linalg_linalg_qr (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6157058Z test_torch__C__linalg_linalg_slogdet (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6157436Z test_torch__C__linalg_linalg_solve (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6157812Z test_torch__C__linalg_linalg_solve_ex (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6158158Z test_torch__C__linalg_linalg_solve_triangular (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6158503Z test_torch__C__linalg_linalg_svd (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6158824Z test_torch__C__linalg_linalg_svdvals (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6159165Z test_torch__C__linalg_linalg_tensorinv (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6159510Z test_torch__C__linalg_linalg_tensorsolve (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6159853Z test_torch__C__linalg_linalg_vander (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6160223Z test_torch__C__linalg_linalg_vecdot (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6160567Z test_torch__C__linalg_linalg_vector_norm (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6160902Z test_torch__C__nn_avg_pool2d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6161213Z test_torch__C__nn_avg_pool3d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6161528Z test_torch__C__nn_gelu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6161840Z test_torch__C__nn_linear (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6162160Z test_torch__C__nn_log_sigmoid (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6162464Z test_torch__C__nn_one_hot (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6162778Z test_torch__C__nn_softplus (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6163099Z test_torch__C__nn_softshrink (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6163428Z test_torch__C__special_special_airy_ai (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6163773Z test_torch__C__special_special_bessel_j0 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6164119Z test_torch__C__special_special_bessel_j1 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6164462Z test_torch__C__special_special_bessel_y0 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6164793Z test_torch__C__special_special_bessel_y1 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6165154Z test_torch__C__special_special_chebyshev_polynomial_t (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6165538Z test_torch__C__special_special_chebyshev_polynomial_u (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6165911Z test_torch__C__special_special_chebyshev_polynomial_v (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6166276Z test_torch__C__special_special_chebyshev_polynomial_w (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6166639Z test_torch__C__special_special_digamma (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6166981Z test_torch__C__special_special_entr (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6167308Z test_torch__C__special_special_erf (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6167643Z test_torch__C__special_special_erfc (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6167982Z test_torch__C__special_special_erfcx (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6168326Z test_torch__C__special_special_erfinv (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6168654Z test_torch__C__special_special_exp2 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6169026Z test_torch__C__special_special_expit (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6169365Z test_torch__C__special_special_expm1 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6169693Z test_torch__C__special_special_gammainc (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6170046Z test_torch__C__special_special_gammaincc (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6170394Z test_torch__C__special_special_gammaln (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6170755Z test_torch__C__special_special_hermite_polynomial_h (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6171121Z test_torch__C__special_special_hermite_polynomial_he (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6171474Z test_torch__C__special_special_i0 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6171841Z test_torch__C__special_special_i0e (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6172175Z test_torch__C__special_special_i1 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6172492Z test_torch__C__special_special_i1e (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6172847Z test_torch__C__special_special_laguerre_polynomial_l (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6173229Z test_torch__C__special_special_legendre_polynomial_p (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6173575Z test_torch__C__special_special_log1p (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6173920Z test_torch__C__special_special_log_ndtr (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6174270Z test_torch__C__special_special_log_softmax (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6174617Z test_torch__C__special_special_logit (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6174956Z test_torch__C__special_special_logsumexp (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6175315Z test_torch__C__special_special_modified_bessel_i0 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6175685Z test_torch__C__special_special_modified_bessel_i1 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6176047Z test_torch__C__special_special_modified_bessel_k0 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6176401Z test_torch__C__special_special_modified_bessel_k1 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6176764Z test_torch__C__special_special_multigammaln (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6177116Z test_torch__C__special_special_ndtr (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6177446Z test_torch__C__special_special_ndtri (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6177796Z test_torch__C__special_special_polygamma (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6178141Z test_torch__C__special_special_psi (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6178480Z test_torch__C__special_special_round (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6178830Z test_torch__C__special_special_scaled_modified_bessel_k0 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6179213Z test_torch__C__special_special_scaled_modified_bessel_k1 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6179602Z test_torch__C__special_special_shifted_chebyshev_polynomial_t (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6179999Z test_torch__C__special_special_shifted_chebyshev_polynomial_u (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6180420Z test_torch__C__special_special_shifted_chebyshev_polynomial_v (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6180813Z test_torch__C__special_special_shifted_chebyshev_polynomial_w (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6181179Z test_torch__C__special_special_sinc (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6181511Z test_torch__C__special_special_softmax (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6181872Z test_torch__C__special_special_spherical_bessel_j0 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6182230Z test_torch__C__special_special_xlog1py (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6182711Z test_torch__C__special_special_xlogy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6183042Z test_torch__C__special_special_zeta (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6183438Z test_torch__assert_async (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6183755Z test_torch__conj_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6184076Z test_torch__fw_primal_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6184385Z test_torch__indices_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6184706Z test_torch__lobpcg_lobpcg (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6185038Z test_torch__lowrank_pca_lowrank (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6185357Z test_torch__lowrank_svd_lowrank (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6185682Z test_torch__make_dual_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6186012Z test_torch__native_batch_norm_legit (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6186347Z test_torch__neg_view_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6187002Z test_torch__reshape_alias_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6187326Z test_torch__rowwise_prune (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6187657Z test_torch__sparse_broadcast_to_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6187977Z test_torch__values_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6188285Z test_torch_abs (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6188586Z test_torch_absolute (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6188890Z test_torch_acos (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6189178Z test_torch_acosh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6189501Z test_torch_adaptive_avg_pool1d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6189838Z test_torch_adaptive_max_pool1d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6190136Z test_torch_add (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6190438Z test_torch_addbmm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6190744Z test_torch_addcdiv (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6191051Z test_torch_addcmul (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6191344Z test_torch_addmm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6191644Z test_torch_addmv (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6191943Z test_torch_addr (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6192234Z test_torch_adjoint (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6192561Z test_torch_affine_grid_generator (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6192940Z test_torch_alias_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6193246Z test_torch_all (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6193536Z test_torch_allclose (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6193853Z test_torch_alpha_dropout (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6194163Z test_torch_amax (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6194449Z test_torch_amin (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6194748Z test_torch_aminmax (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6195052Z test_torch_angle (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6195337Z test_torch_any (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6195666Z test_torch_arccos (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6195974Z test_torch_arccosh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6196280Z test_torch_arcsin (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6196572Z test_torch_arcsinh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6196875Z test_torch_arctan (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6197177Z test_torch_arctan2 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6197522Z test_torch_arctanh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6197825Z test_torch_argmax (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6198129Z test_torch_argmin (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6198432Z test_torch_argsort (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6198730Z test_torch_argwhere (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6199050Z test_torch_as_strided_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6199384Z test_torch_as_strided_scatter (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6199687Z test_torch_asin (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6199988Z test_torch_asinh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6200289Z test_torch_atan (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6200588Z test_torch_atan2 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6200877Z test_torch_atanh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6201184Z test_torch_avg_pool1d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6201495Z test_torch_baddbmm (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6201795Z test_torch_batch_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6202120Z test_torch_batch_norm_backward_elemt (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6202464Z test_torch_batch_norm_backward_reduce (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6202796Z test_torch_batch_norm_elemt (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6203114Z test_torch_batch_norm_gather_stats (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6203468Z test_torch_batch_norm_gather_stats_with_counts (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6203808Z test_torch_batch_norm_stats (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6204127Z test_torch_batch_norm_update_stats (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6204452Z test_torch_bernoulli (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6204805Z test_torch_bilinear (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6205140Z test_torch_binary_cross_entropy_with_logits (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6205457Z test_torch_bincount (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6205762Z test_torch_binomial (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6206071Z test_torch_bitwise_and (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6206379Z test_torch_bitwise_left_shift (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6206697Z test_torch_bitwise_not (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6207006Z test_torch_bitwise_or (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6207327Z test_torch_bitwise_right_shift (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6207668Z test_torch_bitwise_xor (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6207971Z test_torch_bmm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6208279Z test_torch_broadcast_to (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6208583Z test_torch_bucketize (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6208884Z test_torch_cat (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6209198Z test_torch_ccol_indices_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6209511Z test_torch_ceil (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6209799Z test_torch_celu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6210112Z test_torch_channel_shuffle (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6210425Z test_torch_cholesky (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6210734Z test_torch_cholesky_inverse (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6211056Z test_torch_cholesky_solve (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6211388Z test_torch_choose_qparams_optimized (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6211713Z test_torch_chunk (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6212001Z test_torch_clamp (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6212305Z test_torch_clamp_max (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6212613Z test_torch_clamp_min (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6212899Z test_torch_clip (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6213200Z test_torch_clone (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6213514Z test_torch_col_indices_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6213823Z test_torch_column_stack (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6214138Z test_torch_combinations (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6214447Z test_torch_complex (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6214750Z test_torch_concat (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6215050Z test_torch_concatenate (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6215355Z test_torch_conj (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6215663Z test_torch_conj_physical (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6215970Z test_torch_constant_pad_nd (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6216282Z test_torch_conv1d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6216583Z test_torch_conv2d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6216924Z test_torch_conv3d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6217215Z test_torch_conv_tbc (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6217531Z test_torch_conv_transpose1d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6217856Z test_torch_conv_transpose2d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6218166Z test_torch_conv_transpose3d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6218484Z test_torch_convolution (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6218796Z test_torch_copysign (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6219104Z test_torch_corrcoef (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6219392Z test_torch_cos (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6219749Z test_torch_cosh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6220070Z test_torch_cosine_embedding_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6220391Z test_torch_cosine_similarity (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6220714Z test_torch_count_nonzero (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6221016Z test_torch_cov (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6221317Z test_torch_cross (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6221618Z test_torch_crow_indices_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6221934Z test_torch_ctc_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6222237Z test_torch_cummax (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6222674Z test_torch_cummin (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6222983Z test_torch_cumprod (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6223291Z test_torch_cumsum (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6223610Z test_torch_cumulative_trapezoid (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6223920Z test_torch_deg2rad (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6224230Z test_torch_dequantize (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6224538Z test_torch_det (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6224825Z test_torch_detach (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6225138Z test_torch_detach_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6225443Z test_torch_diag (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6225741Z test_torch_diag_embed (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6226055Z test_torch_diagflat (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6226363Z test_torch_diagonal (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6226676Z test_torch_diagonal_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6226990Z test_torch_diagonal_scatter (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6227301Z test_torch_diff (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6227604Z test_torch_digamma (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6227892Z test_torch_dist (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6228187Z test_torch_div (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6228491Z test_torch_divide (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6228793Z test_torch_dot (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6229138Z test_torch_dropout (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6229442Z test_torch_dsmm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6229742Z test_torch_dsplit (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6230033Z test_torch_dstack (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6230340Z test_torch_embedding (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6230657Z test_torch_embedding_bag (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6230977Z test_torch_empty_like (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6231267Z test_torch_eq (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6231565Z test_torch_equal (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6231906Z test_torch_erf (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6232194Z test_torch_erfc (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6232491Z test_torch_erfinv (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6232789Z test_torch_exp (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6233075Z test_torch_exp2 (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6233382Z test_torch_expand_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6233689Z test_torch_expm1 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6234028Z test_torch_fake_quantize_per_channel_affine (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6234376Z test_torch_fake_quantize_per_tensor_affine (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6234728Z test_torch_fbgemm_linear_fp16_weight (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6235097Z test_torch_fbgemm_linear_fp16_weight_fp32_activation (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6235453Z test_torch_fbgemm_linear_int8_weight (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6235795Z test_torch_fbgemm_linear_int8_weight_fp32_activation (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6236161Z test_torch_fbgemm_linear_quantize_weight (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6236513Z test_torch_fbgemm_pack_gemm_matrix_fp16 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6236851Z test_torch_fbgemm_pack_quantized_matrix (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6237194Z test_torch_feature_alpha_dropout (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6237581Z test_torch_feature_dropout (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6237901Z test_torch_fix (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6238187Z test_torch_flatten (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6238488Z test_torch_flip (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6238789Z test_torch_fliplr (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6239082Z test_torch_flipud (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6239393Z test_torch_float_power (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6239697Z test_torch_floor (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6240002Z test_torch_floor_divide (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6240295Z test_torch_fmax (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6240594Z test_torch_fmin (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6240934Z test_torch_fmod (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6241217Z test_torch_frac (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6241517Z test_torch_frexp (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6241831Z test_torch_frobenius_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6241973Z test_torch_full_like (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6242130Z test_torch_functional_atleast_1d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6242275Z test_torch_functional_atleast_2d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6242431Z test_torch_functional_atleast_3d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6242589Z test_torch_functional_block_diag (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6242802Z test_torch_functional_broadcast_tensors (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6242969Z test_torch_functional_cartesian_prod (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6243122Z test_torch_functional_cdist (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6243281Z test_torch_functional_chain_matmul (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6243435Z test_torch_functional_einsum (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6243571Z test_torch_functional_lu (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6243730Z test_torch_functional_meshgrid (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6243880Z test_torch_functional_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6244032Z test_torch_functional_split (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6244185Z test_torch_functional_stft (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6244343Z test_torch_functional_tensordot (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6244495Z test_torch_functional_unique (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6244663Z test_torch_functional_unique_consecutive (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6244832Z test_torch_fused_moving_avg_obs_fake_quant (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6244961Z test_torch_gather (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6245096Z test_torch_gcd (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6245231Z test_torch_ge (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6245369Z test_torch_geqrf (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6245507Z test_torch_ger (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6245653Z test_torch_gradient (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6245795Z test_torch_greater (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6245942Z test_torch_greater_equal (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6246079Z test_torch_grid_sampler (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6246230Z test_torch_grid_sampler_2d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6246379Z test_torch_grid_sampler_3d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6246522Z test_torch_group_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6246657Z test_torch_gru (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6246798Z test_torch_gru_cell (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6246935Z test_torch_gt (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6247111Z test_torch_hardshrink (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6247241Z test_torch_heaviside (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6247399Z test_torch_hinge_embedding_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6247538Z test_torch_histc (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6247680Z test_torch_histogram (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6247829Z test_torch_histogramdd (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6247965Z test_torch_hsmm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6248105Z test_torch_hsplit (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6248245Z test_torch_hstack (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6248404Z test_torch_hypot (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6248542Z test_torch_i0 (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6248682Z test_torch_igamma (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6248825Z test_torch_igammac (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6248961Z test_torch_imag (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6249103Z test_torch_index_add (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6249249Z test_torch_index_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6249392Z test_torch_index_fill (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6249519Z test_torch_index_put (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6249666Z test_torch_index_reduce (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6249818Z test_torch_index_select (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6249963Z test_torch_indices_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6250102Z test_torch_inner (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6250251Z test_torch_instance_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6250395Z test_torch_int_repr (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6250534Z test_torch_inverse (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6250665Z test_torch_is_complex (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6250803Z test_torch_is_conj (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6250952Z test_torch_is_distributed (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6251108Z test_torch_is_floating_point (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6251255Z test_torch_is_inference (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6251395Z test_torch_is_neg (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6251537Z test_torch_is_nonzero (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6251680Z test_torch_is_same_size (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6251810Z test_torch_is_signed (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6251948Z test_torch_isclose (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6252090Z test_torch_isfinite (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6252226Z test_torch_isin (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6252363Z test_torch_isinf (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6252504Z test_torch_isnan (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6252677Z test_torch_isneginf (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6252818Z test_torch_isposinf (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6252946Z test_torch_isreal (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6253083Z test_torch_istft (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6253222Z test_torch_kl_div (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6253357Z test_torch_kron (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6253498Z test_torch_kthvalue (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6253642Z test_torch_layer_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6253776Z test_torch_lcm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6253932Z test_torch_ldexp (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6254074Z test_torch_le (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6254210Z test_torch_lerp (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6254346Z test_torch_less (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6254487Z test_torch_less_equal (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6254624Z test_torch_lgamma (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6254758Z test_torch_log (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6254894Z test_torch_log10 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6255018Z test_torch_log1p (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6255154Z test_torch_log2 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6255301Z test_torch_log_softmax (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6255448Z test_torch_logaddexp (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6255594Z test_torch_logaddexp2 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6255743Z test_torch_logcumsumexp (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6255881Z test_torch_logdet (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6256026Z test_torch_logical_and (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6256157Z test_torch_logical_not (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6256301Z test_torch_logical_or (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6256443Z test_torch_logical_xor (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6256582Z test_torch_logit (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6256727Z test_torch_logsumexp (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6256866Z test_torch_lstm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6257006Z test_torch_lstm_cell (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6257140Z test_torch_lt (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6257269Z test_torch_lu_solve (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6257409Z test_torch_lu_unpack (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6257564Z test_torch_margin_ranking_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6257709Z test_torch_masked_fill (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6257862Z test_torch_masked_scatter (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6258008Z test_torch_masked_select (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6258182Z test_torch_matmul (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6258325Z test_torch_matrix_exp (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6258459Z test_torch_matrix_power (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6258596Z test_torch_max (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6258736Z test_torch_max_pool1d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6258896Z test_torch_max_pool1d_with_indices (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6259040Z test_torch_max_pool2d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6259183Z test_torch_max_pool3d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6259323Z test_torch_maximum (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6259461Z test_torch_mean (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6259618Z test_torch_median (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6259754Z test_torch_min (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6259894Z test_torch_minimum (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6260046Z test_torch_miopen_batch_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6260202Z test_torch_miopen_convolution (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6260367Z test_torch_miopen_convolution_add_relu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6260529Z test_torch_miopen_convolution_relu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6260697Z test_torch_miopen_convolution_transpose (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6260865Z test_torch_miopen_depthwise_convolution (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6261004Z test_torch_miopen_rnn (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6261141Z test_torch_mode (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6261284Z test_torch_moveaxis (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6261427Z test_torch_movedim (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6261567Z test_torch_msort (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6261703Z test_torch_mul (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6261850Z test_torch_multinomial (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6261994Z test_torch_multiply (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6262118Z test_torch_mv (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6262258Z test_torch_mvlgamma (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6262569Z test_torch_nan_to_num (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6262716Z test_torch_nanmean (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6262861Z test_torch_nanmedian (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6263009Z test_torch_nanquantile (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6263149Z test_torch_nansum (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6263278Z test_torch_narrow (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6263423Z test_torch_narrow_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6263574Z test_torch_native_batch_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6263733Z test_torch_native_channel_shuffle (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6263886Z test_torch_native_dropout (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6264098Z test_torch_native_group_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6264248Z test_torch_native_layer_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6264391Z test_torch_native_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6264524Z test_torch_ne (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6264649Z test_torch_neg (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6264789Z test_torch_negative (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6264931Z test_torch_nextafter (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6265090Z test_torch_nn_functional__threshold (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6265301Z test_torch_nn_functional_adaptive_avg_pool2d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6265475Z test_torch_nn_functional_adaptive_avg_pool3d (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6265641Z test_torch_nn_functional_adaptive_max_pool1d (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6265830Z test_torch_nn_functional_adaptive_max_pool1d_with_indices (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6265989Z test_torch_nn_functional_adaptive_max_pool2d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6266176Z test_torch_nn_functional_adaptive_max_pool2d_with_indices (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6266348Z test_torch_nn_functional_adaptive_max_pool3d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6266535Z test_torch_nn_functional_adaptive_max_pool3d_with_indices (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6266699Z test_torch_nn_functional_affine_grid (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6266871Z test_torch_nn_functional_alpha_dropout (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6267031Z test_torch_nn_functional_batch_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6267205Z test_torch_nn_functional_binary_cross_entropy (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6267388Z test_torch_nn_functional_binary_cross_entropy_with_logits (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6267527Z test_torch_nn_functional_celu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6267702Z test_torch_nn_functional_cosine_embedding_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6267865Z test_torch_nn_functional_cross_entropy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6268025Z test_torch_nn_functional_ctc_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6268186Z test_torch_nn_functional_dropout (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6268344Z test_torch_nn_functional_dropout1d (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6268504Z test_torch_nn_functional_dropout2d (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6268662Z test_torch_nn_functional_dropout3d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6268801Z test_torch_nn_functional_elu (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6268960Z test_torch_nn_functional_embedding (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6269122Z test_torch_nn_functional_embedding_bag (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6269298Z test_torch_nn_functional_feature_alpha_dropout (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6269493Z test_torch_nn_functional_fold (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6269665Z test_torch_nn_functional_fractional_max_pool2d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6269855Z test_torch_nn_functional_fractional_max_pool2d_with_indices (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6270030Z test_torch_nn_functional_fractional_max_pool3d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6270217Z test_torch_nn_functional_fractional_max_pool3d_with_indices (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6270371Z test_torch_nn_functional_gaussian_nll_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6270523Z test_torch_nn_functional_glu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6270686Z test_torch_nn_functional_grid_sample (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6270879Z test_torch_nn_functional_group_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6271047Z test_torch_nn_functional_gumbel_softmax (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6271206Z test_torch_nn_functional_hardtanh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6271380Z test_torch_nn_functional_hinge_embedding_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6271537Z test_torch_nn_functional_huber_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6271698Z test_torch_nn_functional_instance_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6271847Z test_torch_nn_functional_interpolate (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6272001Z test_torch_nn_functional_kl_div (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6272160Z test_torch_nn_functional_l1_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6272322Z test_torch_nn_functional_layer_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6272480Z test_torch_nn_functional_leaky_relu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6272652Z test_torch_nn_functional_local_response_norm (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6272814Z test_torch_nn_functional_log_softmax (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6272974Z test_torch_nn_functional_lp_pool1d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6273122Z test_torch_nn_functional_lp_pool2d (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6273295Z test_torch_nn_functional_margin_ranking_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6273454Z test_torch_nn_functional_max_pool1d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6273634Z test_torch_nn_functional_max_pool1d_with_indices (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6273792Z test_torch_nn_functional_max_pool2d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6273966Z test_torch_nn_functional_max_pool2d_with_indices (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6274126Z test_torch_nn_functional_max_pool3d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6274301Z test_torch_nn_functional_max_pool3d_with_indices (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6274463Z test_torch_nn_functional_max_unpool1d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6274614Z test_torch_nn_functional_max_unpool2d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6274774Z test_torch_nn_functional_max_unpool3d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6274967Z test_torch_nn_functional_mish (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6275125Z test_torch_nn_functional_mse_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6275307Z test_torch_nn_functional_multi_head_attention_forward (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6275476Z test_torch_nn_functional_multi_margin_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6275652Z test_torch_nn_functional_multilabel_margin_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6275831Z test_torch_nn_functional_multilabel_soft_margin_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6275986Z test_torch_nn_functional_nll_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6276135Z test_torch_nn_functional_normalize (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6276317Z test_torch_nn_functional_pad (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6276487Z test_torch_nn_functional_poisson_nll_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6276639Z test_torch_nn_functional_relu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6276793Z test_torch_nn_functional_relu6 (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6276943Z test_torch_nn_functional_rrelu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6277092Z test_torch_nn_functional_selu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6277238Z test_torch_nn_functional_silu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6277443Z test_torch_nn_functional_smooth_l1_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6277611Z test_torch_nn_functional_soft_margin_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6277773Z test_torch_nn_functional_softmax (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6277928Z test_torch_nn_functional_softmin (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6278084Z test_torch_nn_functional_softsign (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6278243Z test_torch_nn_functional_tanhshrink (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6278414Z test_torch_nn_functional_triplet_margin_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6278599Z test_torch_nn_functional_triplet_margin_with_distance_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6278754Z test_torch_nn_functional_unfold (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6278893Z test_torch_nn_init_constant_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6279052Z test_torch_nn_init_kaiming_uniform_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6279203Z test_torch_nn_init_normal_ (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6279353Z test_torch_nn_init_uniform_ (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6279492Z test_torch_nonzero (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6279642Z test_torch_norm_except_dim (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6279786Z test_torch_not_equal (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6279932Z test_torch_nuclear_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6280057Z test_torch_numel (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6280197Z test_torch_ones_like (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6280334Z test_torch_orgqr (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6280507Z test_torch_ormqr (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6280644Z test_torch_outer (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6280799Z test_torch_pairwise_distance (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6280937Z test_torch_pdist (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6281077Z test_torch_permute (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6281212Z test_torch_permute_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6281353Z test_torch_pinverse (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6281501Z test_torch_pixel_shuffle (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6281649Z test_torch_pixel_unshuffle (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6281831Z test_torch_poisson (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6281986Z test_torch_poisson_nll_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6282124Z test_torch_polar (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6282272Z test_torch_polygamma (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6282402Z test_torch_positive (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6282539Z test_torch_pow (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6282675Z test_torch_prelu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6282812Z test_torch_prod (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6282946Z test_torch_put (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6283098Z test_torch_q_per_channel_axis (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6283256Z test_torch_q_per_channel_scales (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6283418Z test_torch_q_per_channel_zero_points (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6283546Z test_torch_q_scale (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6283691Z test_torch_q_zero_point (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6283826Z test_torch_qr (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6283967Z test_torch_quantile (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6284121Z test_torch_quantize_per_channel (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6284277Z test_torch_quantize_per_tensor (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6284444Z test_torch_quantize_per_tensor_dynamic (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6284603Z test_torch_quantized_batch_norm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6284749Z test_torch_quantized_gru_cell (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6284904Z test_torch_quantized_lstm_cell (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6285060Z test_torch_quantized_max_pool1d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6285213Z test_torch_quantized_max_pool2d (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6285372Z test_torch_quantized_rnn_relu_cell (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6285530Z test_torch_quantized_rnn_tanh_cell (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6285672Z test_torch_rad2deg (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6285814Z test_torch_rand_like (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6285950Z test_torch_randint_like (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6286129Z test_torch_randn_like (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6286269Z test_torch_ravel (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6286407Z test_torch_real (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6286552Z test_torch_reciprocal (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6286690Z test_torch_relu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6286835Z test_torch_remainder (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6286976Z test_torch_renorm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6287117Z test_torch_repeat_interleave (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6287258Z test_torch_reshape (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6287434Z test_torch_resolve_conj (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6287582Z test_torch_resolve_neg (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6287723Z test_torch_rnn_relu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6287868Z test_torch_rnn_relu_cell (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6288008Z test_torch_rnn_tanh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6288154Z test_torch_rnn_tanh_cell (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6288278Z test_torch_roll (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6288413Z test_torch_rot90 (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6288551Z test_torch_round (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6288703Z test_torch_row_indices_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6288850Z test_torch_row_stack (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6288990Z test_torch_rrelu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6289123Z test_torch_rsqrt (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6289262Z test_torch_rsub (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6289389Z test_torch_saddmm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6289532Z test_torch_scatter (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6289680Z test_torch_scatter_add (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6289829Z test_torch_scatter_reduce (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6289976Z test_torch_searchsorted (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6290124Z test_torch_segment_reduce (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6290269Z test_torch_select (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6290414Z test_torch_select_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6290550Z test_torch_select_scatter (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6290687Z test_torch_selu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6290822Z test_torch_sgn (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6290962Z test_torch_sigmoid (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6291098Z test_torch_sign (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6291237Z test_torch_signbit (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6291373Z test_torch_sin (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6291510Z test_torch_sinc (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6291667Z test_torch_sinh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6291813Z test_torch_slice_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6291961Z test_torch_slice_scatter (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6292104Z test_torch_slogdet (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6292238Z test_torch_smm (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6292377Z test_torch_softmax (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6292513Z test_torch_sort (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6292658Z test_torch_split_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6292796Z test_torch_split_with_sizes (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6292983Z test_torch_split_with_sizes_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6293123Z test_torch_sqrt (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6293262Z test_torch_square (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6293400Z test_torch_squeeze (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6293546Z test_torch_squeeze_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6293684Z test_torch_stack (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6293820Z test_torch_std (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6293948Z test_torch_std_mean (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6294080Z test_torch_sub (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6294222Z test_torch_subtract (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6294356Z test_torch_sum (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6294491Z test_torch_svd (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6294632Z test_torch_swapaxes (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6294772Z test_torch_swapdims (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6294912Z test_torch_symeig (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6295034Z test_torch_t (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6295171Z test_torch_t_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6295308Z test_torch_take (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6295456Z test_torch_take_along_dim (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6295591Z test_torch_tan (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6295727Z test_torch_tanh (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6295878Z test_torch_tensor_split (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6296024Z test_torch_threshold (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6296146Z test_torch_tile (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6296281Z test_torch_topk (__main__.TestTorchFunctionOverride) ... 
ok (0.001s) 2023-01-11T21:58:40.6296419Z test_torch_trace (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6296559Z test_torch_transpose (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6296711Z test_torch_transpose_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6296854Z test_torch_trapezoid (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6296991Z test_torch_trapz (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6297146Z test_torch_triangular_solve (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6297300Z test_torch_tril (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6297458Z test_torch_triplet_margin_loss (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6297593Z test_torch_triu (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6297739Z test_torch_true_divide (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6297874Z test_torch_trunc (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6298013Z test_torch_unbind (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6298159Z test_torch_unbind_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6298303Z test_torch_unflatten (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6298434Z test_torch_unfold_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6298611Z test_torch_unsafe_chunk (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6298763Z test_torch_unsafe_split (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6298922Z test_torch_unsafe_split_with_sizes (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6299065Z test_torch_unsqueeze (__main__.TestTorchFunctionOverride) ... ok (0.000s) 2023-01-11T21:58:40.6299214Z test_torch_unsqueeze_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6299360Z test_torch_values_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6299495Z test_torch_var (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6299624Z test_torch_var_mean (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6299761Z test_torch_vdot (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6299909Z test_torch_view_as_complex (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6300070Z test_torch_view_as_complex_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6300214Z test_torch_view_as_real (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6300365Z test_torch_view_as_real_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6300506Z test_torch_view_copy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6300648Z test_torch_vsplit (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6300776Z test_torch_vstack (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6300913Z test_torch_where (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6301052Z test_torch_xlogy (__main__.TestTorchFunctionOverride) ... ok (0.001s) 2023-01-11T21:58:40.6301195Z test_torch_zeros_like (__main__.TestTorchFunctionOverride) ... 
ok (0.000s) 2023-01-11T21:58:40.6301342Z test_user_implementation_raises (__main__.TestTorchFunctionOverride) 2023-01-11T21:58:40.6301499Z Test that errors raised in user implementations propagate correctly ... ok (0.001s) 2023-01-11T21:58:40.6301657Z test_warn_on_invalid_torch_function (__main__.TestTorchFunctionWarning) ... ok (0.009s) 2023-01-11T21:58:40.6301798Z test_wrap_torch_function (__main__.TestWrapTorchFunction) ... ok (0.001s) 2023-01-11T21:58:40.6301809Z 2023-01-11T21:58:40.6302091Z ---------------------------------------------------------------------- 2023-01-11T21:58:40.6302171Z Ran 1416 tests in 1.182s 2023-01-11T21:58:40.6302176Z 2023-01-11T21:58:40.6302237Z OK 2023-01-11T21:58:40.6302243Z 2023-01-11T21:58:40.6302325Z Generating XML reports... 2023-01-11T21:58:40.6302815Z Generated XML report: test-reports/python-unittest/test_overrides/TEST-TestBroadcastAllOverride-20230111215838.xml 2023-01-11T21:58:40.6303141Z Generated XML report: test-reports/python-unittest/test_overrides/TEST-TestDisabledTorchFunction-20230111215838.xml 2023-01-11T21:58:40.6303501Z Generated XML report: test-reports/python-unittest/test_overrides/TEST-TestEinsumOverride-20230111215838.xml 2023-01-11T21:58:40.6303805Z Generated XML report: test-reports/python-unittest/test_overrides/TEST-TestGradCheckOverride-20230111215838.xml 2023-01-11T21:58:40.6304112Z Generated XML report: test-reports/python-unittest/test_overrides/TEST-TestGradNewOnesOverride-20230111215838.xml 2023-01-11T21:58:40.6304375Z Generated XML report: test-reports/python-unittest/test_overrides/TEST-TestIndexing-20230111215838.xml 2023-01-11T21:58:40.6304647Z Generated XML report: test-reports/python-unittest/test_overrides/TEST-TestIterator-20230111215838.xml 2023-01-11T21:58:40.6304921Z Generated XML report: test-reports/python-unittest/test_overrides/TEST-TestNamedTuple-20230111215838.xml 2023-01-11T21:58:40.6305185Z Generated XML report: test-reports/python-unittest/test_overrides/TEST-TestPickle-20230111215838.xml 2023-01-11T21:58:40.6305438Z Generated XML report: test-reports/python-unittest/test_overrides/TEST-TestRNN-20230111215838.xml 2023-01-11T21:58:40.6305764Z Generated XML report: test-reports/python-unittest/test_overrides/TEST-TestResolveName-20230111215838.xml 2023-01-11T21:58:40.6306065Z Generated XML report: test-reports/python-unittest/test_overrides/TEST-TestTorchFunctionMode-20230111215838.xml 2023-01-11T21:58:40.6306378Z Generated XML report: test-reports/python-unittest/test_overrides/TEST-TestTorchFunctionOverride-20230111215838.xml 2023-01-11T21:58:40.6306686Z Generated XML report: test-reports/python-unittest/test_overrides/TEST-TestTorchFunctionWarning-20230111215838.xml 2023-01-11T21:58:40.6306972Z Generated XML report: test-reports/python-unittest/test_overrides/TEST-TestWrapTorchFunction-20230111215838.xml 2023-01-11T21:58:40.6306990Z 2023-01-11T21:58:40.6307322Z ##[endgroup] 2023-01-11T21:58:40.6307607Z FINISHED PRINTING LOG FILE of test_overrides (/var/lib/jenkins/workspace/test/test-reports/test_overrides_gz6u4dmy) 2023-01-11T21:58:40.6307612Z 2023-01-11T21:58:40.6307778Z Running test_sparse_csr ... [2023-01-11 21:58:40.590824] 2023-01-11T21:58:40.6308112Z Executing ['/opt/conda/bin/python', '-bb', 'test_sparse_csr.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... 
[2023-01-11 21:58:40.591265] 2023-01-11T21:58:43.6077628Z 2023-01-11T21:58:43.6078343Z Expand the folded group to see the log file of test_sparse_csr 2023-01-11T21:58:43.6079230Z ##[group]PRINTING LOG FILE of test_sparse_csr (/var/lib/jenkins/workspace/test/test-reports/test_sparse_csr_4s1dozzd) 2023-01-11T21:58:43.6079451Z 2023-01-11T21:58:43.6079545Z Running tests... 2023-01-11T21:58:43.6080033Z ---------------------------------------------------------------------- 2023-01-11T21:58:43.6080509Z Test results will be stored in test-reports/python-unittest/test_sparse_csr 2023-01-11T21:58:43.6080859Z test_make_crow_indices (__main__.TestSparseCSRSampler) ... ok (0.381s) 2023-01-11T21:58:43.6081039Z 2023-01-11T21:58:43.6081290Z ---------------------------------------------------------------------- 2023-01-11T21:58:43.6081548Z Ran 1 test in 0.381s 2023-01-11T21:58:43.6081660Z 2023-01-11T21:58:43.6081720Z OK 2023-01-11T21:58:43.6081810Z 2023-01-11T21:58:43.6081893Z Generating XML reports... 2023-01-11T21:58:43.6082395Z Generated XML report: test-reports/python-unittest/test_sparse_csr/TEST-TestSparseCSRSampler-20230111215842.xml 2023-01-11T21:58:43.6082627Z 2023-01-11T21:58:43.6082904Z ##[endgroup] 2023-01-11T21:58:43.6083336Z FINISHED PRINTING LOG FILE of test_sparse_csr (/var/lib/jenkins/workspace/test/test-reports/test_sparse_csr_4s1dozzd) 2023-01-11T21:58:43.6083603Z 2023-01-11T21:58:43.6083770Z Running test_torch ... [2023-01-11 21:58:43.607909] 2023-01-11T21:58:43.6084215Z Executing ['/opt/conda/bin/python', '-bb', 'test_torch.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:58:43.608155] 2023-01-11T21:58:47.3479059Z 2023-01-11T21:58:47.3479988Z Expand the folded group to see the log file of test_torch 2023-01-11T21:58:47.3481040Z ##[group]PRINTING LOG FILE of test_torch (/var/lib/jenkins/workspace/test/test-reports/test_torch_r7omzqvk) 2023-01-11T21:58:47.3481821Z 2023-01-11T21:58:47.3482470Z Running tests... 2023-01-11T21:58:47.3483258Z ---------------------------------------------------------------------- 2023-01-11T21:58:47.3484018Z Test results will be stored in test-reports/python-unittest/test_torch 2023-01-11T21:58:47.3484618Z test_basic_vitals (__main__.TestBasicVitalSigns) ... ok (0.001s) 2023-01-11T21:58:47.3485492Z test_basic_vitals_read_write (__main__.TestBasicVitalSigns) ... ok (0.001s) 2023-01-11T21:58:47.3485956Z test_dataloader_vitals (__main__.TestBasicVitalSigns) ... ok (0.001s) 2023-01-11T21:58:47.3486499Z test_RNGState (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3486963Z test_RNGStateAliasing (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3487333Z test_RNG_after_pickle (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3487735Z test_Size (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3488313Z test_Size_iter (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3488724Z test_Size_scalar (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3489143Z test_add_meta_scalar (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3489605Z test_allow_tensor_metadata_change (__main__.TestTorch) ... ok (0.000s) 2023-01-11T21:58:47.3490043Z test_apply (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3490426Z test_as_subclass (__main__.TestTorch) ... ok (0.004s) 2023-01-11T21:58:47.3490762Z test_assert_async (__main__.TestTorch) ... ok (0.018s) 2023-01-11T21:58:47.3491123Z test_backward_hooks_traverse (__main__.TestTorch) ... 
ok (0.071s) 2023-01-11T21:58:47.3491521Z test_batch_norm_cpu_inference (__main__.TestTorch) ... ok (0.005s) 2023-01-11T21:58:47.3492693Z test_bmm_multithreaded (__main__.TestTorch) ... /var/lib/jenkins/workspace/test/test_torch.py:8590: UserWarning: An output with one or more elements was resized since it had shape [1, 23, 12], which does not match the required output shape [1, 23, 0]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:58:47.3493757Z torch.bmm(b1, b2, out=res2) 2023-01-11T21:58:47.3494749Z /var/lib/jenkins/workspace/test/test_torch.py:8590: UserWarning: An output with one or more elements was resized since it had shape [1, 23, 12], which does not match the required output shape [1, 0, 12]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:58:47.3495398Z torch.bmm(b1, b2, out=res2) 2023-01-11T21:58:47.3496056Z /var/lib/jenkins/workspace/test/test_torch.py:8590: UserWarning: An output with one or more elements was resized since it had shape [1, 23, 12], which does not match the required output shape [1, 0, 0]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:58:47.3496848Z torch.bmm(b1, b2, out=res2) 2023-01-11T21:58:47.3497520Z /var/lib/jenkins/workspace/test/test_torch.py:8590: UserWarning: An output with one or more elements was resized since it had shape [1, 23, 12], which does not match the required output shape [0, 23, 12]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:58:47.3498240Z torch.bmm(b1, b2, out=res2) 2023-01-11T21:58:47.3498897Z /var/lib/jenkins/workspace/test/test_torch.py:8590: UserWarning: An output with one or more elements was resized since it had shape [1, 23, 12], which does not match the required output shape [0, 23, 0]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:58:47.3499513Z torch.bmm(b1, b2, out=res2) 2023-01-11T21:58:47.3500206Z /var/lib/jenkins/workspace/test/test_torch.py:8590: UserWarning: An output with one or more elements was resized since it had shape [1, 23, 12], which does not match the required output shape [0, 0, 12]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. 
You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:58:47.3500803Z torch.bmm(b1, b2, out=res2) 2023-01-11T21:58:47.3501457Z /var/lib/jenkins/workspace/test/test_torch.py:8590: UserWarning: An output with one or more elements was resized since it had shape [1, 23, 12], which does not match the required output shape [0, 0, 0]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:58:47.3502064Z torch.bmm(b1, b2, out=res2) 2023-01-11T21:58:47.3502952Z /var/lib/jenkins/workspace/test/test_torch.py:8590: UserWarning: An output with one or more elements was resized since it had shape [10, 23, 12], which does not match the required output shape [10, 23, 0]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:58:47.3503589Z torch.bmm(b1, b2, out=res2) 2023-01-11T21:58:47.3504244Z /var/lib/jenkins/workspace/test/test_torch.py:8590: UserWarning: An output with one or more elements was resized since it had shape [10, 23, 12], which does not match the required output shape [10, 0, 12]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:58:47.3504871Z torch.bmm(b1, b2, out=res2) 2023-01-11T21:58:47.3505515Z /var/lib/jenkins/workspace/test/test_torch.py:8590: UserWarning: An output with one or more elements was resized since it had shape [10, 23, 12], which does not match the required output shape [10, 0, 0]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:58:47.3506136Z torch.bmm(b1, b2, out=res2) 2023-01-11T21:58:47.3506793Z /var/lib/jenkins/workspace/test/test_torch.py:8590: UserWarning: An output with one or more elements was resized since it had shape [10, 23, 12], which does not match the required output shape [0, 23, 12]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:58:47.3507468Z torch.bmm(b1, b2, out=res2) 2023-01-11T21:58:47.3508133Z /var/lib/jenkins/workspace/test/test_torch.py:8590: UserWarning: An output with one or more elements was resized since it had shape [10, 23, 12], which does not match the required output shape [0, 23, 0]. 
This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:58:47.3508753Z torch.bmm(b1, b2, out=res2) 2023-01-11T21:58:47.3509438Z /var/lib/jenkins/workspace/test/test_torch.py:8590: UserWarning: An output with one or more elements was resized since it had shape [10, 23, 12], which does not match the required output shape [0, 0, 12]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:58:47.3510060Z torch.bmm(b1, b2, out=res2) 2023-01-11T21:58:47.3510715Z /var/lib/jenkins/workspace/test/test_torch.py:8590: UserWarning: An output with one or more elements was resized since it had shape [10, 23, 12], which does not match the required output shape [0, 0, 0]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:33.) 2023-01-11T21:58:47.3511350Z torch.bmm(b1, b2, out=res2) 2023-01-11T21:58:47.3511542Z ok (0.657s) 2023-01-11T21:58:47.3511761Z test_boxMullerState (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3512070Z test_c10_layer_norm (__main__.TestTorch) ... skip: Pytorch is compiled without Caffe2 (0.001s) 2023-01-11T21:58:47.3512366Z test_cat_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3512617Z test_chunk_neg_dim (__main__.TestTorch) ... ok (0.005s) 2023-01-11T21:58:47.3512857Z test_conj_neg_tolist (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3513103Z test_contains (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3513353Z test_copy_broadcast (__main__.TestTorch) ... ok (0.006s) 2023-01-11T21:58:47.3513591Z test_copy_dtypes (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3513840Z test_copy_float16 (__main__.TestTorch) ... ok (0.088s) 2023-01-11T21:58:47.3514090Z test_copy_many_to_one (__main__.TestTorch) ... ok (0.006s) 2023-01-11T21:58:47.3514830Z test_copy_transpose (__main__.TestTorch) ... /var/lib/jenkins/workspace/test/test_torch.py:7754: UserWarning: ComplexHalf support is experimental and many operators don't support it yet. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/EmptyTensor.cpp:32.) 2023-01-11T21:58:47.3515310Z x = torch.arange(100 * 100).reshape(100, 100).to(dtype=torch.complex32).t() 2023-01-11T21:58:47.3515545Z ok (0.003s) 2023-01-11T21:58:47.3515897Z test_cuda_not_built (__main__.TestTorch) ... skip: CUDA is built, can't test CUDA not built error (0.001s) 2023-01-11T21:58:47.3516186Z test_cummax_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3516436Z test_cummin_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3516689Z test_cumprod_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3516942Z test_cumsum_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3517168Z test_cxx_flags (__main__.TestTorch) ... 
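Note on the repeated UserWarning above: test_bmm_multithreaded passes an out= tensor whose non-empty shape does not match the result, so PyTorch resizes it and warns that this resizing is deprecated. The warning's own remediation is to resize the out tensor to zero elements before reuse. A minimal sketch of that pattern, with hypothetical shapes not taken from the test:

    import torch

    b1 = torch.randn(10, 23, 8)
    b2 = torch.randn(10, 8, 12)

    # A previously used output buffer with a stale, non-matching shape.
    res = torch.empty(1, 23, 12)

    # Resizing to zero elements first reuses the tensor without triggering the
    # deprecation warning; bmm then resizes it to the correct [10, 23, 12] shape.
    res.resize_(0)
    torch.bmm(b1, b2, out=res)
    print(res.shape)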
ok (0.000s) 2023-01-11T21:58:47.3518058Z test_dead_weak_ref (__main__.TestTorch) ... [W python_variable.cpp:319] Warning: Deallocating Tensor that still has live PyObject references. This probably happened because you took out a weak reference to Tensor and didn't call _fix_weakref() after dereferencing it. Subsequent accesses to this tensor via the PyObject will now fail. (function decref) 2023-01-11T21:58:47.3518552Z ok (0.004s) 2023-01-11T21:58:47.3518761Z test_deepcopy_gradient (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3519032Z test_deepcopy_parameter (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3519299Z test_deterministic_flag (__main__.TestTorch) ... ok (0.004s) 2023-01-11T21:58:47.3519552Z test_device (__main__.TestTorch) ... ok (0.018s) 2023-01-11T21:58:47.3519772Z test_dir (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3520001Z test_doc (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3520226Z test_doc_template (__main__.TestTorch) 2023-01-11T21:58:47.3520530Z Test that all public API doc strings use the same standard template for ... ok (0.020s) 2023-01-11T21:58:47.3520824Z test_dot_data_use (__main__.TestTorch) ... ok (0.012s) 2023-01-11T21:58:47.3521075Z test_dtype_is_signed (__main__.TestTorch) ... ok (0.004s) 2023-01-11T21:58:47.3521668Z test_element_size (__main__.TestTorch) ... /var/lib/jenkins/workspace/test/test_torch.py:5939: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3522210Z byte = torch.ByteStorage().element_size() 2023-01-11T21:58:47.3522753Z /var/lib/jenkins/workspace/test/test_torch.py:5940: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3523271Z char = torch.CharStorage().element_size() 2023-01-11T21:58:47.3523811Z /var/lib/jenkins/workspace/test/test_torch.py:5941: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3524322Z short = torch.ShortStorage().element_size() 2023-01-11T21:58:47.3524848Z /var/lib/jenkins/workspace/test/test_torch.py:5942: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3525353Z int = torch.IntStorage().element_size() 2023-01-11T21:58:47.3525881Z /var/lib/jenkins/workspace/test/test_torch.py:5943: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3526383Z long = torch.LongStorage().element_size() 2023-01-11T21:58:47.3526917Z /var/lib/jenkins/workspace/test/test_torch.py:5944: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3527415Z float = torch.FloatStorage().element_size() 2023-01-11T21:58:47.3527948Z /var/lib/jenkins/workspace/test/test_torch.py:5945: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3528495Z double = torch.DoubleStorage().element_size() 2023-01-11T21:58:47.3529031Z /var/lib/jenkins/workspace/test/test_torch.py:5946: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3529544Z bool = torch.BoolStorage().element_size() 2023-01-11T21:58:47.3530093Z /var/lib/jenkins/workspace/test/test_torch.py:5947: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3530618Z bfloat16 = torch.BFloat16Storage().element_size() 2023-01-11T21:58:47.3531157Z /var/lib/jenkins/workspace/test/test_torch.py:5948: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3531698Z complexfloat = torch.ComplexFloatStorage().element_size() 2023-01-11T21:58:47.3532242Z /var/lib/jenkins/workspace/test/test_torch.py:5949: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3532783Z complexdouble = torch.ComplexDoubleStorage().element_size() 2023-01-11T21:58:47.3533013Z ok (0.002s) 2023-01-11T21:58:47.3533222Z test_empty_meta (__main__.TestTorch) ... ok (0.006s) 2023-01-11T21:58:47.3533461Z test_empty_storage_view (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3533703Z test_equal (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3533959Z test_error_msg_type_translation (__main__.TestTorch) ... ok (0.011s) 2023-01-11T21:58:47.3534211Z test_fill_diagonal (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3534468Z test_fix_weakref_no_leak (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3534725Z test_format_scalar_meta (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3535310Z test_from_buffer (__main__.TestTorch) ... 
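Editor's note: the TypedStorage deprecation warning that recurs throughout the rest of this run always carries the same guidance. A minimal sketch of the suggested replacement, assuming a build (like this master build, per the warning text) where Tensor.untyped_storage() is available:

```python
import torch

t = torch.arange(4, dtype=torch.uint8)

legacy = t.storage()          # deprecated path: returns a TypedStorage and emits the warning
raw = t.untyped_storage()     # replacement named in the warning text
print(raw.nbytes())           # 4 bytes for four uint8 elements
```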
/var/lib/jenkins/workspace/test/test_torch.py:6322: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3535887Z self.assertEqual(torch.ByteStorage.from_buffer(a).tolist(), [1, 2, 3, 4]) 2023-01-11T21:58:47.3536702Z /opt/conda/lib/python3.10/site-packages/torch/storage.py:675: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3537202Z return list(self) 2023-01-11T21:58:47.3537938Z /opt/conda/lib/python3.10/site-packages/torch/storage.py:649: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3538529Z return iter(map(lambda i: self[i], range(self.size()))) 2023-01-11T21:58:47.3539059Z /var/lib/jenkins/workspace/test/test_torch.py:6323: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3539632Z shorts = torch.ShortStorage.from_buffer(a, 'big') 2023-01-11T21:58:47.3540184Z /var/lib/jenkins/workspace/test/test_torch.py:6324: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3540686Z self.assertEqual(shorts.size(), 2) 2023-01-11T21:58:47.3541253Z /var/lib/jenkins/workspace/test/test_torch.py:6325: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3541747Z self.assertEqual(shorts.tolist(), [258, 772]) 2023-01-11T21:58:47.3542277Z /var/lib/jenkins/workspace/test/test_torch.py:6326: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3543030Z ints = torch.IntStorage.from_buffer(a, 'little') 2023-01-11T21:58:47.3543581Z /var/lib/jenkins/workspace/test/test_torch.py:6327: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3544081Z self.assertEqual(ints.size(), 1) 2023-01-11T21:58:47.3544593Z /var/lib/jenkins/workspace/test/test_torch.py:6328: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3545083Z self.assertEqual(ints[0], 67305985) 2023-01-11T21:58:47.3545609Z /var/lib/jenkins/workspace/test/test_torch.py:6330: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3546176Z floats = torch.FloatStorage.from_buffer(f, 'big') 2023-01-11T21:58:47.3546720Z /var/lib/jenkins/workspace/test/test_torch.py:6331: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3547205Z self.assertEqual(floats.size(), 1) 2023-01-11T21:58:47.3547732Z /var/lib/jenkins/workspace/test/test_torch.py:6332: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3548222Z self.assertEqual(floats[0], 2.25) 2023-01-11T21:58:47.3548745Z /var/lib/jenkins/workspace/test/test_torch.py:6335: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3549372Z bools = torch.BoolStorage.from_buffer(f, 'big') 2023-01-11T21:58:47.3549895Z /var/lib/jenkins/workspace/test/test_torch.py:6336: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3550389Z self.assertEqual(bools.size(), 8) 2023-01-11T21:58:47.3550957Z /var/lib/jenkins/workspace/test/test_torch.py:6337: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3551513Z self.assertEqual(bools.tolist(), [False, True, True, True, True, True, True, True]) 2023-01-11T21:58:47.3552082Z /var/lib/jenkins/workspace/test/test_torch.py:6338: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3552641Z self.assertEqual(bools.type(), 'torch.BoolStorage') 2023-01-11T21:58:47.3553430Z /opt/conda/lib/python3.10/site-packages/torch/storage.py:959: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3554010Z if self.device.type not in ['cpu', 'cuda']: 2023-01-11T21:58:47.3554780Z /opt/conda/lib/python3.10/site-packages/torch/storage.py:962: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3555382Z module = torch if self.device.type == 'cpu' else torch.cuda 2023-01-11T21:58:47.3556148Z /opt/conda/lib/python3.10/site-packages/torch/storage.py:978: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3556722Z return (cls_device == instance.device.type) and (cls.dtype == instance.dtype) 2023-01-11T21:58:47.3557583Z /opt/conda/lib/python3.10/site-packages/torch/_utils.py:768: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3558113Z return self.fget.__get__(instance, owner)() 2023-01-11T21:58:47.3558652Z /var/lib/jenkins/workspace/test/test_torch.py:6342: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3559218Z bools = torch.BoolStorage.from_buffer(f, 'big') 2023-01-11T21:58:47.3559775Z /var/lib/jenkins/workspace/test/test_torch.py:6343: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3560336Z self.assertEqual(bools.size(), 19) 2023-01-11T21:58:47.3560870Z /var/lib/jenkins/workspace/test/test_torch.py:6346: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3561433Z bools = torch.BoolStorage.from_buffer(f, 'big') 2023-01-11T21:58:47.3561987Z /var/lib/jenkins/workspace/test/test_torch.py:6347: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. 
This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3562481Z self.assertEqual(bools.size(), 4) 2023-01-11T21:58:47.3563008Z /var/lib/jenkins/workspace/test/test_torch.py:6348: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3563538Z self.assertEqual(bools.tolist(), [False, True, True, True]) 2023-01-11T21:58:47.3564084Z /var/lib/jenkins/workspace/test/test_torch.py:6349: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3564591Z bytes = torch.ByteStorage.from_buffer(a) 2023-01-11T21:58:47.3565118Z /var/lib/jenkins/workspace/test/test_torch.py:6350: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3565612Z self.assertEqual(bytes.nbytes(), 4) 2023-01-11T21:58:47.3566134Z /var/lib/jenkins/workspace/test/test_torch.py:6351: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3566646Z self.assertEqual(bytes.tolist(), [1, 2, 3, 4]) 2023-01-11T21:58:47.3566834Z ok (0.004s) 2023-01-11T21:58:47.3567370Z test_from_file (__main__.TestTorch) ... /var/lib/jenkins/workspace/test/test_torch.py:6663: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3567931Z s1 = torch.FloatStorage.from_file(filename, True, size) 2023-01-11T21:58:47.3568713Z /opt/conda/lib/python3.10/site-packages/torch/storage.py:899: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3569230Z storage = cls(wrap_storage=untyped_storage) 2023-01-11T21:58:47.3569765Z /var/lib/jenkins/workspace/test/test_torch.py:6665: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3570346Z self.assertEqual(s1.data_ptr(), torch.FloatTensor(s1).data_ptr()) 2023-01-11T21:58:47.3570907Z /var/lib/jenkins/workspace/test/test_torch.py:6668: UserWarning: TypedStorage is deprecated. 
It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3571439Z s2 = torch.FloatStorage.from_file(filename, True, size) 2023-01-11T21:58:47.3571642Z ok (0.004s) 2023-01-11T21:58:47.3571853Z test_gather_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3572104Z test_generator_cpu (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3572383Z test_has_internal_overlap (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3573089Z test_has_storage (__main__.TestTorch) ... /var/lib/jenkins/workspace/test/test_torch.py:7615: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3573644Z self.assertIsNotNone(torch.tensor([]).storage()) 2023-01-11T21:58:47.3574193Z /var/lib/jenkins/workspace/test/test_torch.py:7616: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3574715Z self.assertIsNotNone(torch.empty(0).storage()) 2023-01-11T21:58:47.3575247Z /var/lib/jenkins/workspace/test/test_torch.py:7617: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3575777Z self.assertIsNotNone(torch.tensor([]).clone().storage()) 2023-01-11T21:58:47.3576328Z /var/lib/jenkins/workspace/test/test_torch.py:7618: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3576861Z self.assertIsNotNone(torch.tensor([0, 0, 0]).nonzero().storage()) 2023-01-11T21:58:47.3577419Z /var/lib/jenkins/workspace/test/test_torch.py:7619: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3577929Z self.assertIsNotNone(torch.tensor([]).new().storage()) 2023-01-11T21:58:47.3578143Z ok (0.001s) 2023-01-11T21:58:47.3578349Z test_index_add (__main__.TestTorch) ... ok (0.015s) 2023-01-11T21:58:47.3578595Z test_index_add_all_dtypes (__main__.TestTorch) ... ok (0.005s) 2023-01-11T21:58:47.3579457Z test_index_add_correctness (__main__.TestTorch) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/91184 for platform(s) linux, rocm, win, windows. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. 
(0.002s) 2023-01-11T21:58:47.3580026Z test_index_add_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3580327Z test_index_copy_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3580577Z test_index_fill_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3580836Z test_index_select_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3581106Z test_invalid_generator_raises (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3581369Z test_is_nonzero (__main__.TestTorch) ... ok (0.005s) 2023-01-11T21:58:47.3581857Z test_is_same_size (__main__.TestTorch) ... /var/lib/jenkins/workspace/test/test_torch.py:5821: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/NestedTensorImpl.cpp:179.) 2023-01-11T21:58:47.3582509Z nt1 = torch.nested.nested_tensor([torch.ones(2, 4), torch.ones(3, 4), torch.ones(5, 4)]) 2023-01-11T21:58:47.3582770Z ok (0.006s) 2023-01-11T21:58:47.3583025Z test_iter (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3583261Z test_kthvalue_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3583524Z test_logcumsumexp_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3583782Z test_manual_seed (__main__.TestTorch) ... ok (0.003s) 2023-01-11T21:58:47.3584007Z test_map (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3584240Z test_map2 (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3584481Z test_max_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3584712Z test_mean_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3584957Z test_median_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3585205Z test_memory_format (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3585496Z test_memory_format_contiguous_returns_same_tensor_if_already_satisfies (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3585809Z test_memory_format_empty (__main__.TestTorch) ... ok (0.011s) 2023-01-11T21:58:47.3586061Z test_min_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3586304Z test_mode_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3586625Z test_multinomial_invalid_probs (__main__.TestTorch) ... skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.001s) 2023-01-11T21:58:47.3586953Z test_nanmedian_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3587205Z test_narrow_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3587426Z test_ndim (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3587990Z test_new (__main__.TestTorch) ... /var/lib/jenkins/workspace/test/test_torch.py:7016: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3588551Z self.assertEqual(x.new(y.storage()).data_ptr(), y.data_ptr()) 2023-01-11T21:58:47.3589100Z /var/lib/jenkins/workspace/test/test_torch.py:7022: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3589644Z self.assertRaises(RuntimeError, lambda: x.new(z.storage())) 2023-01-11T21:58:47.3589850Z ok (0.005s) 2023-01-11T21:58:47.3590074Z test_newaxis_numpy_comparison (__main__.TestTorch) ... ok (0.004s) 2023-01-11T21:58:47.3590334Z test_newindex (__main__.TestTorch) ... ok (0.010s) 2023-01-11T21:58:47.3590715Z test_no_cuda_monkeypatch (__main__.TestTorch) ... skip: Skipped for cuda-enabled build (0.000s) 2023-01-11T21:58:47.3591009Z test_norm_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3591254Z test_normal_shape (__main__.TestTorch) ... ok (0.026s) 2023-01-11T21:58:47.3591539Z test_numel (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3591767Z test_parallel_info (__main__.TestTorch) ... ok (0.000s) 2023-01-11T21:58:47.3592018Z test_parsing_double (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3592264Z test_parsing_int64 (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3592795Z test_parsing_intlist (__main__.TestTorch) ... /var/lib/jenkins/workspace/test/test_torch.py:6302: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here. 2023-01-11T21:58:47.3593503Z Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations 2023-01-11T21:58:47.3593888Z self.assertRaises(TypeError, lambda: torch.ones((np.float(3.), torch.tensor(4)))) 2023-01-11T21:58:47.3594177Z ok (0.014s) 2023-01-11T21:58:47.3594368Z test_permute (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3594608Z test_pickle (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3594852Z test_pickle_dtype (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3595092Z test_pickle_function (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3595348Z test_pickle_parameter (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3595624Z test_pickle_parameter_no_requires_grad (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3595894Z test_pickle_size (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3596124Z test_pin_memory (__main__.TestTorch) ... ok (0.013s) 2023-01-11T21:58:47.3596359Z test_print (__main__.TestTorch) ... ok (0.040s) 2023-01-11T21:58:47.3596597Z test_prod_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3596835Z test_pyobj_preserved (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3597084Z test_qengine (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3597325Z test_renorm_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3597639Z test_resurrected_weak_ref (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3597898Z test_reversed (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3598145Z test_scatter_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3598395Z test_select_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3598677Z test_set_flush_denormal (__main__.TestTorch) ... skip: flush_denormal not supported (0.001s) 2023-01-11T21:58:47.3598982Z test_setting_real_imag_to_a_number (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3599249Z test_show_config (__main__.TestTorch) ... ok (0.000s) 2023-01-11T21:58:47.3599481Z test_size_neg_dim (__main__.TestTorch) ... 
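Editor's note: the DeprecationWarning raised by test_parsing_intlist concerns NumPy rather than torch: np.float was an alias for the builtin float (removed in NumPy 1.24). Per the guidance quoted in the warning itself, the drop-in replacements are:

```python
import numpy as np

x = float(3.0)       # the builtin replaces np.float(3.)
y = np.float64(3.0)  # use this only if the NumPy scalar type is specifically wanted
```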
ok (0.001s) 2023-01-11T21:58:47.3600065Z test_sizeof (__main__.TestTorch) ... /var/lib/jenkins/workspace/test/test_torch.py:6980: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3600629Z sizeof_empty = torch.randn(0).storage().__sizeof__() 2023-01-11T21:58:47.3601411Z /opt/conda/lib/python3.10/site-packages/torch/storage.py:665: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3601964Z return super(TypedStorage, self).__sizeof__() + self.nbytes() 2023-01-11T21:58:47.3602502Z /var/lib/jenkins/workspace/test/test_torch.py:6981: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3603062Z sizeof_10 = torch.randn(10).storage().__sizeof__() 2023-01-11T21:58:47.3603593Z /var/lib/jenkins/workspace/test/test_torch.py:6982: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3604108Z sizeof_100 = torch.randn(100).storage().__sizeof__() 2023-01-11T21:58:47.3604682Z /var/lib/jenkins/workspace/test/test_torch.py:6986: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3605204Z sizeof_empty = torch.randn(0).to(torch.uint8).storage().__sizeof__() 2023-01-11T21:58:47.3605754Z /var/lib/jenkins/workspace/test/test_torch.py:6987: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3606277Z sizeof_10 = torch.randn(10).to(torch.uint8).storage().__sizeof__() 2023-01-11T21:58:47.3606825Z /var/lib/jenkins/workspace/test/test_torch.py:6988: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3607348Z sizeof_100 = torch.randn(100).to(torch.uint8).storage().__sizeof__() 2023-01-11T21:58:47.3607553Z ok (0.001s) 2023-01-11T21:58:47.3607750Z test_slice (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3608058Z test_slow_test (__main__.TestTorch) ... 
skip: test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test (0.000s) 2023-01-11T21:58:47.3608357Z test_sobolengine_bounds (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3608631Z test_sobolengine_bounds_scrambled (__main__.TestTorch) ... ok (0.005s) 2023-01-11T21:58:47.3608909Z test_sobolengine_continuing (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3609192Z test_sobolengine_continuing_scrambled (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3609469Z test_sobolengine_distribution (__main__.TestTorch) ... ok (0.003s) 2023-01-11T21:58:47.3609757Z test_sobolengine_distribution_scrambled (__main__.TestTorch) ... ok (0.005s) 2023-01-11T21:58:47.3610035Z test_sobolengine_draw (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3610288Z test_sobolengine_draw_base2 (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3610565Z test_sobolengine_draw_base2_scrambled (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3610849Z test_sobolengine_draw_scrambled (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3611111Z test_sobolengine_fast_forward (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3611397Z test_sobolengine_fast_forward_scrambled (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3611679Z test_sobolengine_first_point (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3611954Z test_sobolengine_high_dim (__main__.TestTorch) ... ok (0.004s) 2023-01-11T21:58:47.3612206Z test_sobolengine_raise (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3612464Z test_sobolengine_reset (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3612733Z test_sobolengine_reset_scrambled (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3612986Z test_sort_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3613267Z test_split_neg_dim (__main__.TestTorch) ... ok (0.003s) 2023-01-11T21:58:47.3613517Z test_squeeze_neg_dim (__main__.TestTorch) ... ok (0.003s) 2023-01-11T21:58:47.3613750Z test_std_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3614333Z test_storage_casts (__main__.TestTorch) ... /var/lib/jenkins/workspace/test/test_torch.py:6470: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3614942Z storage = torch.IntStorage([-1, 0, 1, 2, 3, 4]) 2023-01-11T21:58:47.3615509Z /var/lib/jenkins/workspace/test/test_torch.py:6471: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3616015Z self.assertEqual(storage.size(), 6) 2023-01-11T21:58:47.3616532Z /var/lib/jenkins/workspace/test/test_torch.py:6472: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3617104Z self.assertEqual(storage.tolist(), [-1, 0, 1, 2, 3, 4]) 2023-01-11T21:58:47.3617877Z /opt/conda/lib/python3.10/site-packages/torch/storage.py:675: UserWarning: TypedStorage is deprecated. 
It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3618383Z return list(self) 2023-01-11T21:58:47.3619118Z /opt/conda/lib/python3.10/site-packages/torch/storage.py:649: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3619640Z return iter(map(lambda i: self[i], range(self.size()))) 2023-01-11T21:58:47.3620182Z /var/lib/jenkins/workspace/test/test_torch.py:6473: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3620764Z self.assertEqual(storage.type(), 'torch.IntStorage') 2023-01-11T21:58:47.3621543Z /opt/conda/lib/python3.10/site-packages/torch/storage.py:959: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3622112Z if self.device.type not in ['cpu', 'cuda']: 2023-01-11T21:58:47.3623014Z /opt/conda/lib/python3.10/site-packages/torch/storage.py:962: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3623622Z module = torch if self.device.type == 'cpu' else torch.cuda 2023-01-11T21:58:47.3624181Z /var/lib/jenkins/workspace/test/test_torch.py:6476: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3624744Z floatStorage = storage.float() 2023-01-11T21:58:47.3625506Z /opt/conda/lib/python3.10/site-packages/torch/storage.py:808: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3626087Z storage = torch.tensor([], dtype=self.dtype, device=self.device).set_(self).to(dtype)._typed_storage() 2023-01-11T21:58:47.3626948Z /opt/conda/lib/python3.10/site-packages/torch/storage.py:809: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3627485Z if storage.data_ptr() == self.data_ptr(): 2023-01-11T21:58:47.3628017Z /var/lib/jenkins/workspace/test/test_torch.py:6477: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3628529Z self.assertEqual(floatStorage.size(), 6) 2023-01-11T21:58:47.3629061Z /var/lib/jenkins/workspace/test/test_torch.py:6478: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3629656Z self.assertEqual(floatStorage.tolist(), [-1, 0, 1, 2, 3, 4]) 2023-01-11T21:58:47.3630210Z /var/lib/jenkins/workspace/test/test_torch.py:6479: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3630808Z self.assertEqual(floatStorage.type(), 'torch.FloatStorage') 2023-01-11T21:58:47.3631378Z /var/lib/jenkins/workspace/test/test_torch.py:6480: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3631957Z self.assertEqual(floatStorage.int().tolist(), [-1, 0, 1, 2, 3, 4]) 2023-01-11T21:58:47.3632513Z /var/lib/jenkins/workspace/test/test_torch.py:6483: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3633009Z halfStorage = storage.half() 2023-01-11T21:58:47.3633521Z /var/lib/jenkins/workspace/test/test_torch.py:6484: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3634033Z self.assertEqual(halfStorage.size(), 6) 2023-01-11T21:58:47.3634556Z /var/lib/jenkins/workspace/test/test_torch.py:6485: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3635192Z self.assertEqual(halfStorage.tolist(), [-1, 0, 1, 2, 3, 4]) 2023-01-11T21:58:47.3635733Z /var/lib/jenkins/workspace/test/test_torch.py:6486: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3636321Z self.assertEqual(halfStorage.type(), 'torch.HalfStorage') 2023-01-11T21:58:47.3636910Z /var/lib/jenkins/workspace/test/test_torch.py:6487: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3637545Z self.assertEqual(halfStorage.int().tolist(), [-1, 0, 1, 2, 3, 4]) 2023-01-11T21:58:47.3638112Z /var/lib/jenkins/workspace/test/test_torch.py:6490: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3638621Z bfloat16Storage = storage.bfloat16() 2023-01-11T21:58:47.3639151Z /var/lib/jenkins/workspace/test/test_torch.py:6491: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3639664Z self.assertEqual(bfloat16Storage.size(), 6) 2023-01-11T21:58:47.3640194Z /var/lib/jenkins/workspace/test/test_torch.py:6492: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3640788Z self.assertEqual(bfloat16Storage.tolist(), [-1, 0, 1, 2, 3, 4]) 2023-01-11T21:58:47.3641342Z /var/lib/jenkins/workspace/test/test_torch.py:6493: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3641951Z self.assertEqual(bfloat16Storage.type(), 'torch.BFloat16Storage') 2023-01-11T21:58:47.3642520Z /var/lib/jenkins/workspace/test/test_torch.py:6494: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3643111Z self.assertEqual(bfloat16Storage.int().tolist(), [-1, 0, 1, 2, 3, 4]) 2023-01-11T21:58:47.3643664Z /var/lib/jenkins/workspace/test/test_torch.py:6497: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3644153Z longStorage = storage.long() 2023-01-11T21:58:47.3644668Z /var/lib/jenkins/workspace/test/test_torch.py:6498: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. 
This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3645242Z self.assertEqual(longStorage.size(), 6) 2023-01-11T21:58:47.3645758Z /var/lib/jenkins/workspace/test/test_torch.py:6499: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3646341Z self.assertEqual(longStorage.tolist(), [-1, 0, 1, 2, 3, 4]) 2023-01-11T21:58:47.3646890Z /var/lib/jenkins/workspace/test/test_torch.py:6500: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3647510Z self.assertEqual(longStorage.type(), 'torch.LongStorage') 2023-01-11T21:58:47.3648071Z /var/lib/jenkins/workspace/test/test_torch.py:6501: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3648650Z self.assertEqual(longStorage.int().tolist(), [-1, 0, 1, 2, 3, 4]) 2023-01-11T21:58:47.3649198Z /var/lib/jenkins/workspace/test/test_torch.py:6504: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3649695Z shortStorage = storage.short() 2023-01-11T21:58:47.3650209Z /var/lib/jenkins/workspace/test/test_torch.py:6505: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3650723Z self.assertEqual(shortStorage.size(), 6) 2023-01-11T21:58:47.3651246Z /var/lib/jenkins/workspace/test/test_torch.py:6506: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3651825Z self.assertEqual(shortStorage.tolist(), [-1, 0, 1, 2, 3, 4]) 2023-01-11T21:58:47.3652375Z /var/lib/jenkins/workspace/test/test_torch.py:6507: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3652964Z self.assertEqual(shortStorage.type(), 'torch.ShortStorage') 2023-01-11T21:58:47.3653524Z /var/lib/jenkins/workspace/test/test_torch.py:6508: UserWarning: TypedStorage is deprecated. 
It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3654100Z self.assertEqual(shortStorage.int().tolist(), [-1, 0, 1, 2, 3, 4]) 2023-01-11T21:58:47.3654650Z /var/lib/jenkins/workspace/test/test_torch.py:6511: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3655181Z doubleStorage = storage.double() 2023-01-11T21:58:47.3655705Z /var/lib/jenkins/workspace/test/test_torch.py:6512: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3656220Z self.assertEqual(doubleStorage.size(), 6) 2023-01-11T21:58:47.3656737Z /var/lib/jenkins/workspace/test/test_torch.py:6513: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3657375Z self.assertEqual(doubleStorage.tolist(), [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]) 2023-01-11T21:58:47.3657934Z /var/lib/jenkins/workspace/test/test_torch.py:6514: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3658527Z self.assertEqual(doubleStorage.type(), 'torch.DoubleStorage') 2023-01-11T21:58:47.3659094Z /var/lib/jenkins/workspace/test/test_torch.py:6515: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3659677Z self.assertEqual(doubleStorage.int().tolist(), [-1, 0, 1, 2, 3, 4]) 2023-01-11T21:58:47.3660225Z /var/lib/jenkins/workspace/test/test_torch.py:6518: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3660705Z charStorage = storage.char() 2023-01-11T21:58:47.3661215Z /var/lib/jenkins/workspace/test/test_torch.py:6519: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3661714Z self.assertEqual(charStorage.size(), 6) 2023-01-11T21:58:47.3662231Z /var/lib/jenkins/workspace/test/test_torch.py:6520: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3662981Z self.assertEqual(charStorage.tolist(), [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]) 2023-01-11T21:58:47.3663531Z /var/lib/jenkins/workspace/test/test_torch.py:6521: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3664125Z self.assertEqual(charStorage.type(), 'torch.CharStorage') 2023-01-11T21:58:47.3664685Z /var/lib/jenkins/workspace/test/test_torch.py:6522: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3665324Z self.assertEqual(charStorage.int().tolist(), [-1, 0, 1, 2, 3, 4]) 2023-01-11T21:58:47.3666000Z /var/lib/jenkins/workspace/test/test_torch.py:6525: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3666483Z byteStorage = storage.byte() 2023-01-11T21:58:47.3666996Z /var/lib/jenkins/workspace/test/test_torch.py:6526: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3667504Z self.assertEqual(byteStorage.size(), 6) 2023-01-11T21:58:47.3668080Z /var/lib/jenkins/workspace/test/test_torch.py:6527: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3668611Z self.assertEqual(byteStorage.tolist(), [255, 0, 1, 2, 3, 4]) 2023-01-11T21:58:47.3669140Z /var/lib/jenkins/workspace/test/test_torch.py:6528: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3669729Z self.assertEqual(byteStorage.type(), 'torch.ByteStorage') 2023-01-11T21:58:47.3670282Z /var/lib/jenkins/workspace/test/test_torch.py:6529: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. 
This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3670800Z self.assertEqual(byteStorage.int().tolist(), [255, 0, 1, 2, 3, 4]) 2023-01-11T21:58:47.3671345Z /var/lib/jenkins/workspace/test/test_torch.py:6532: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3671817Z boolStorage = storage.bool() 2023-01-11T21:58:47.3672329Z /var/lib/jenkins/workspace/test/test_torch.py:6533: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3672827Z self.assertEqual(boolStorage.size(), 6) 2023-01-11T21:58:47.3673361Z /var/lib/jenkins/workspace/test/test_torch.py:6534: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3673904Z self.assertEqual(boolStorage.tolist(), [True, False, True, True, True, True]) 2023-01-11T21:58:47.3674466Z /var/lib/jenkins/workspace/test/test_torch.py:6535: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3675086Z self.assertEqual(boolStorage.type(), 'torch.BoolStorage') 2023-01-11T21:58:47.3675634Z /var/lib/jenkins/workspace/test/test_torch.py:6536: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3676150Z self.assertEqual(boolStorage.int().tolist(), [1, 0, 1, 1, 1, 1]) 2023-01-11T21:58:47.3676693Z /var/lib/jenkins/workspace/test/test_torch.py:6539: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3677304Z complexfloat_storage = torch.ComplexFloatStorage([-1, 0, 1 + 2j, 2.5j, 3.5, 4 - 2j]) 2023-01-11T21:58:47.3677973Z /var/lib/jenkins/workspace/test/test_torch.py:6540: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3678500Z self.assertEqual(complexfloat_storage.size(), 6) 2023-01-11T21:58:47.3679047Z /var/lib/jenkins/workspace/test/test_torch.py:6541: UserWarning: TypedStorage is deprecated. 
It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3679673Z self.assertEqual(complexfloat_storage.tolist(), [-1, 0, 1 + 2j, 2.5j, 3.5, 4 - 2j]) 2023-01-11T21:58:47.3680226Z /var/lib/jenkins/workspace/test/test_torch.py:6542: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3680854Z self.assertEqual(complexfloat_storage.type(), 'torch.ComplexFloatStorage') 2023-01-11T21:58:47.3681440Z /var/lib/jenkins/workspace/test/test_torch.py:6545: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3681971Z complexdouble_storage = complexfloat_storage.complex_double() 2023-01-11T21:58:47.3682527Z /var/lib/jenkins/workspace/test/test_torch.py:6546: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3683043Z self.assertEqual(complexdouble_storage.size(), 6) 2023-01-11T21:58:47.3683578Z /var/lib/jenkins/workspace/test/test_torch.py:6547: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3684186Z self.assertEqual(complexdouble_storage.tolist(), [-1, 0, 1 + 2j, 2.5j, 3.5, 4 - 2j]) 2023-01-11T21:58:47.3684750Z /var/lib/jenkins/workspace/test/test_torch.py:6548: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3685407Z self.assertEqual(complexdouble_storage.type(), 'torch.ComplexDoubleStorage') 2023-01-11T21:58:47.3685648Z ok (0.011s) 2023-01-11T21:58:47.3686187Z test_storage_error (__main__.TestTorch) ... /var/lib/jenkins/workspace/test/test_torch.py:6364: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3686721Z torch.storage._LegacyStorage() 2023-01-11T21:58:47.3687503Z /opt/conda/lib/python3.10/site-packages/torch/_utils.py:768: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
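Editor's note: the storage-level conversions exercised above (.int(), .double(), .byte(), .bool(), .complex_double(), ...) have tensor-level equivalents that avoid the deprecated TypedStorage path entirely. A small illustrative sketch, with expected values mirroring the assertions in the log; this is an example of the non-deprecated route, not code from the test suite:

import torch

# Tensor.to(dtype) performs the same element conversions as the old
# storage casting methods, without going through TypedStorage.
t = torch.tensor([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])

print(t.to(torch.int32).tolist())    # [-1, 0, 1, 2, 3, 4]
print(t.to(torch.bool).tolist())     # [True, False, True, True, True, True]
print(t.to(torch.uint8).tolist())    # [255, 0, 1, 2, 3, 4]  (-1.0 wraps to 255)
print(t.to(torch.complex128).dtype)  # torch.complex128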
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3688034Z return self.fget.__get__(instance, owner)() 2023-01-11T21:58:47.3688552Z /var/lib/jenkins/workspace/test/test_torch.py:6378: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3689068Z storage_class(device='cpu') 2023-01-11T21:58:47.3689582Z /var/lib/jenkins/workspace/test/test_torch.py:6381: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3690078Z storage_class(dtype=torch.float) 2023-01-11T21:58:47.3690598Z /var/lib/jenkins/workspace/test/test_torch.py:6387: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3691061Z storage_class(0, 0) 2023-01-11T21:58:47.3691557Z /var/lib/jenkins/workspace/test/test_torch.py:6390: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3692060Z storage_class('string') 2023-01-11T21:58:47.3692569Z /var/lib/jenkins/workspace/test/test_torch.py:6393: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3693061Z storage_class(torch.tensor([])) 2023-01-11T21:58:47.3693564Z /var/lib/jenkins/workspace/test/test_torch.py:6395: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3694027Z s = storage_class() 2023-01-11T21:58:47.3694528Z /var/lib/jenkins/workspace/test/test_torch.py:6398: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3695064Z storage_class(0, wrap_storage=s.untyped()) 2023-01-11T21:58:47.3695577Z /var/lib/jenkins/workspace/test/test_torch.py:6401: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3696058Z storage_class(wrap_storage=s) 2023-01-11T21:58:47.3696562Z /var/lib/jenkins/workspace/test/test_torch.py:6420: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3697092Z torch.TypedStorage(0, wrap_storage=s.untyped(), dtype=dtype) 2023-01-11T21:58:47.3697670Z /var/lib/jenkins/workspace/test/test_torch.py:6423: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3698177Z torch.TypedStorage(wrap_storage=s.untyped()) 2023-01-11T21:58:47.3698704Z /var/lib/jenkins/workspace/test/test_torch.py:6426: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3699218Z torch.TypedStorage(wrap_storage=s.untyped(), dtype=0) 2023-01-11T21:58:47.3699769Z /var/lib/jenkins/workspace/test/test_torch.py:6429: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3700310Z torch.TypedStorage(wrap_storage=s.untyped(), dtype=dtype, device=device) 2023-01-11T21:58:47.3700858Z /var/lib/jenkins/workspace/test/test_torch.py:6432: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3701376Z torch.TypedStorage(wrap_storage=s, dtype=dtype) 2023-01-11T21:58:47.3701914Z /var/lib/jenkins/workspace/test/test_torch.py:6435: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3702612Z torch.TypedStorage(dtype=dtype, device='xla') 2023-01-11T21:58:47.3703148Z /var/lib/jenkins/workspace/test/test_torch.py:6443: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3703673Z torch.TypedStorage(torch.tensor([]), dtype=dtype, device=device) 2023-01-11T21:58:47.3704233Z /var/lib/jenkins/workspace/test/test_torch.py:6446: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. 
This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3704751Z torch.TypedStorage(0, 0, dtype=dtype, device=device) 2023-01-11T21:58:47.3715943Z /var/lib/jenkins/workspace/test/test_torch.py:6449: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3716586Z s_other = torch.TypedStorage([1, 2, 3, 4], device=device, dtype=dtype) 2023-01-11T21:58:47.3717150Z /var/lib/jenkins/workspace/test/test_torch.py:6452: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3717700Z s.fill_(s_other) 2023-01-11T21:58:47.3717872Z ok (0.041s) 2023-01-11T21:58:47.3718544Z test_storage_error_no_attribute (__main__.TestTorch) ... /var/lib/jenkins/workspace/test/test_torch.py:6461: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3719084Z storage_class.from_buffer() 2023-01-11T21:58:47.3719257Z ok (0.001s) 2023-01-11T21:58:47.3719471Z test_structseq_repr (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3719734Z test_subclass_preserved (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3719989Z test_subclass_tensors (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3720243Z test_sum_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3720492Z test_t_not_2d_error (__main__.TestTorch) ... ok (0.009s) 2023-01-11T21:58:47.3720728Z test_tensor_base_init (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3720984Z test_tensor_base_new (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3721244Z test_tensor_ctor_scalar (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3721503Z test_tensor_cycle_via_dict (__main__.TestTorch) ... ok (0.108s) 2023-01-11T21:58:47.3721755Z test_tensor_cycle_via_slots (__main__.TestTorch) ... ok (0.049s) 2023-01-11T21:58:47.3722017Z test_tensor_dict_dealloc (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3722285Z test_tensor_finalizer_dealloc (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3722874Z test_tensor_set (__main__.TestTorch) ... /var/lib/jenkins/workspace/test/test_torch.py:5839: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3723439Z self.assertEqual(t1.storage()._cdata, t2.storage()._cdata) 2023-01-11T21:58:47.3723994Z /var/lib/jenkins/workspace/test/test_torch.py:5841: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3724481Z t1.set_(t2.storage(), 0, size) 2023-01-11T21:58:47.3724996Z /var/lib/jenkins/workspace/test/test_torch.py:5843: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3725477Z t1.set_(t2.storage(), 0, tuple(size)) 2023-01-11T21:58:47.3726001Z /var/lib/jenkins/workspace/test/test_torch.py:5847: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3726535Z t1.set_(t2.storage(), 0, size, stride) 2023-01-11T21:58:47.3727047Z /var/lib/jenkins/workspace/test/test_torch.py:5849: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3727548Z t1.set_(t2.storage(), 0, size=size, stride=stride) 2023-01-11T21:58:47.3728104Z /var/lib/jenkins/workspace/test/test_torch.py:5857: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3728633Z self.assertEqual(t1.storage()._cdata, t2.storage()._cdata) 2023-01-11T21:58:47.3729176Z /var/lib/jenkins/workspace/test/test_torch.py:5859: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3729651Z t1.set_(source=t2.storage()) 2023-01-11T21:58:47.3730160Z /var/lib/jenkins/workspace/test/test_torch.py:5860: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3730667Z self.assertEqual(t1.storage()._cdata, t2.storage()._cdata) 2023-01-11T21:58:47.3731212Z /var/lib/jenkins/workspace/test/test_torch.py:5862: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3731736Z t1.set_(source=t2.storage(), storage_offset=0, size=size, stride=stride) 2023-01-11T21:58:47.3732285Z /var/lib/jenkins/workspace/test/test_torch.py:5869: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
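Editor's note: test_tensor_set drives Tensor.set_(source, storage_offset, size, stride) through the deprecated storage() accessor, which is why every call above warns. A hedged sketch of the same pattern using the untyped accessor (my own example; it assumes set_ accepts an UntypedStorage on this build, consistent with its documented Tensor-or-Storage source argument):

import torch

size, stride = (2, 3), (3, 1)
t2 = torch.arange(6, dtype=torch.float32)
t1 = torch.empty(0, dtype=torch.float32)

# Re-point t1 at t2's memory with an explicit shape and stride.
t1.set_(t2.untyped_storage(), 0, size, stride)

t1[0, 0] = 100.0
print(t2[0].item())  # 100.0 -- both tensors now share one storage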
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3732813Z self.assertEqual(t1.storage()._cdata, t2.storage()._cdata) 2023-01-11T21:58:47.3733009Z ok (0.003s) 2023-01-11T21:58:47.3733555Z test_tensor_set_errors (__main__.TestTorch) ... /var/lib/jenkins/workspace/test/test_torch.py:5876: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3734151Z self.assertRaises(RuntimeError, lambda: f_cpu.set_(d_cpu.storage())) 2023-01-11T21:58:47.3734715Z /var/lib/jenkins/workspace/test/test_torch.py:5878: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3735237Z lambda: f_cpu.set_(d_cpu.storage(), 0, d_cpu.size(), d_cpu.stride())) 2023-01-11T21:58:47.3735459Z ok (0.004s) 2023-01-11T21:58:47.3735675Z test_tensor_slot_dealloc (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3735985Z test_tensor_weakref_dealloc (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3736252Z test_tensoriterator_output_setup (__main__.TestTorch) ... ok (0.114s) 2023-01-11T21:58:47.3736863Z test_to (__main__.TestTorch) ... /var/lib/jenkins/workspace/test/test_torch.py:7842: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/SparseCsrTensorImpl.cpp:56.) 2023-01-11T21:58:47.3737416Z a = torch.tensor([[0, 1, 2], [2, 0, 3]]).to_sparse_csr() 2023-01-11T21:58:47.3737613Z ok (0.004s) 2023-01-11T21:58:47.3737804Z test_to_with_tensor (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3738048Z test_topk_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3738296Z test_torch_from_file (__main__.TestTorch) ... ok (0.004s) 2023-01-11T21:58:47.3738573Z test_transpose_neg_dim (__main__.TestTorch) ... ok (0.004s) 2023-01-11T21:58:47.3738818Z test_type (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3739051Z test_type_alias (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3739305Z test_type_conversion_via_dtype_name (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3739932Z test_typed_storage_deprecation_warning (__main__.TestTorch) ... /var/lib/jenkins/workspace/test/test_torch.py:6624: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3740487Z s0 = torch.FloatStorage(10) 2023-01-11T21:58:47.3740669Z ok (0.002s) 2023-01-11T21:58:47.3741223Z test_typed_storage_internal_no_warning (__main__.TestTorch) ... /var/lib/jenkins/workspace/test/test_torch.py:6554: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
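Editor's note: the "Sparse CSR tensor support is in beta state" warning above comes from the to_sparse_csr() call in test_to. For readers unfamiliar with the layout, a short sketch of what that conversion produces for the same matrix (the printed values are worked out by hand, not copied from the log):

import torch

# CSR stores a matrix as row-pointer offsets, column indices and values.
a = torch.tensor([[0, 1, 2], [2, 0, 3]]).to_sparse_csr()

print(a.crow_indices())  # tensor([0, 2, 4])    -- row 0 holds entries 0..1, row 1 holds 2..3
print(a.col_indices())   # tensor([1, 2, 0, 2]) -- columns of the nonzeros
print(a.values())        # tensor([1, 2, 2, 3])
print(a.to_dense())      # round-trips back to the original dense matrix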
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3741747Z s0 = torch.FloatStorage(10) 2023-01-11T21:58:47.3742266Z /var/lib/jenkins/workspace/test/test_torch.py:6555: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T21:58:47.3742929Z s0_untyped = s0.untyped() 2023-01-11T21:58:47.3743108Z ok (0.002s) 2023-01-11T21:58:47.3743305Z test_unbind_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3743553Z test_unflatten (__main__.TestTorch) ... ok (0.021s) 2023-01-11T21:58:47.3743802Z test_unfold_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3744044Z test_unsqueeze_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3744311Z test_upsample_nearest1d_meta (__main__.TestTorch) ... ok (0.011s) 2023-01-11T21:58:47.3744583Z test_upsample_nearest2d_meta (__main__.TestTorch) ... ok (0.013s) 2023-01-11T21:58:47.3744839Z test_var_neg_dim (__main__.TestTorch) ... ok (0.002s) 2023-01-11T21:58:47.3745068Z test_warn_types (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3745321Z test_wildcard_import (__main__.TestTorch) ... ok (0.001s) 2023-01-11T21:58:47.3745467Z 2023-01-11T21:58:47.3745713Z ---------------------------------------------------------------------- 2023-01-11T21:58:47.3745943Z Ran 181 tests in 1.700s 2023-01-11T21:58:47.3746057Z 2023-01-11T21:58:47.3746129Z OK (skipped=7) 2023-01-11T21:58:47.3746235Z 2023-01-11T21:58:47.3746318Z Generating XML reports... 2023-01-11T21:58:47.3746724Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestBasicVitalSigns-20230111215845.xml 2023-01-11T21:58:47.3747287Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestTorch-20230111215845.xml 2023-01-11T21:58:47.3747591Z [TORCH_VITAL] Dataloader.enabled True 2023-01-11T21:58:47.3747854Z [TORCH_VITAL] Dataloader.basic_unit_test TEST_VALUE_STRING 2023-01-11T21:58:47.3748079Z [TORCH_VITAL] CUDA.used False 2023-01-11T21:58:47.3748201Z 2023-01-11T21:58:47.3748539Z ##[endgroup] 2023-01-11T21:58:47.3748903Z FINISHED PRINTING LOG FILE of test_torch (/var/lib/jenkins/workspace/test/test-reports/test_torch_r7omzqvk) 2023-01-11T21:58:47.3749099Z 2023-01-11T21:58:47.3749294Z Running distributions/test_distributions ... [2023-01-11 21:58:47.349048] 2023-01-11T21:58:47.3749816Z Executing ['/opt/conda/bin/python', '-bb', 'distributions/test_distributions.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:58:47.349306] 2023-01-11T21:59:30.0896920Z 2023-01-11T21:59:30.0898185Z Expand the folded group to see the log file of distributions/test_distributions 2023-01-11T21:59:30.0899076Z ##[group]PRINTING LOG FILE of distributions/test_distributions (/var/lib/jenkins/workspace/test/test-reports/distributions-test_distributions_q4k_g_kn) 2023-01-11T21:59:30.0899531Z 2023-01-11T21:59:30.0899779Z Running tests... 2023-01-11T21:59:30.0900366Z ---------------------------------------------------------------------- 2023-01-11T21:59:30.0901117Z Test results will be stored in test-reports/python-unittest/distributions.test_distributions 2023-01-11T21:59:30.0906590Z test_cdf (__main__.TestAgainstScipy) ... /opt/conda/lib/python3.10/site-packages/torch/distributions/wishart.py:107: UserWarning: Low df values detected. 
Singular samples are highly likely to occur for ndim - 1 < df < ndim. 2023-01-11T21:59:30.0907312Z warnings.warn("Low df values detected. Singular samples are highly likely to occur for ndim - 1 < df < ndim.") 2023-01-11T21:59:30.0907810Z /opt/conda/lib/python3.10/site-packages/torch/distributions/wishart.py:253: UserWarning: Singular sample detected. 2023-01-11T21:59:30.0908142Z warnings.warn("Singular sample detected.") 2023-01-11T21:59:30.0908337Z ok (0.041s) 2023-01-11T21:59:30.0908553Z test_icdf (__main__.TestAgainstScipy) ... ok (0.023s) 2023-01-11T21:59:30.0908807Z test_mean (__main__.TestAgainstScipy) ... ok (1.140s) 2023-01-11T21:59:30.0909900Z test_variance_stddev (__main__.TestAgainstScipy) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_comparison.py:679: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/utils/tensor_numpy.cpp:210.) 2023-01-11T21:59:30.0910630Z return torch.as_tensor(tensor_like) 2023-01-11T21:59:30.0910811Z ok (0.034s) 2023-01-11T21:59:30.0913165Z test_params_constraints (__main__.TestConstraints) ... ok (0.055s) 2023-01-11T21:59:30.0913683Z test_support_constraints (__main__.TestConstraints) ... ok (0.071s) 2023-01-11T21:59:30.0914230Z test_bernoulli_shape_scalar_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0914810Z test_bernoulli_shape_tensor_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0915328Z test_beta_shape_scalar_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0915874Z test_beta_shape_tensor_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0916396Z test_binomial_shape (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0916935Z test_binomial_shape_vectorized_n (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0917491Z test_categorical_shape (__main__.TestDistributionShapes) ... ok (0.003s) 2023-01-11T21:59:30.0918119Z test_cauchy_shape_scalar_params (__main__.TestDistributionShapes) ... ok (0.001s) 2023-01-11T21:59:30.0918880Z test_cauchy_shape_tensor_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0919432Z test_chi2_shape_scalar_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0919997Z test_chi2_shape_tensor_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0920647Z test_continuous_bernoulli_shape_scalar_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0921287Z test_continuous_bernoulli_shape_tensor_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0921873Z test_dirichlet_shape (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0922410Z test_entropy_shape (__main__.TestDistributionShapes) ... ok (0.021s) 2023-01-11T21:59:30.0922982Z test_exponential_shape_scalar_param (__main__.TestDistributionShapes) ... ok (0.001s) 2023-01-11T21:59:30.0923695Z test_exponential_shape_tensor_param (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0924313Z test_gamma_shape_scalar_params (__main__.TestDistributionShapes) ... 
ok (0.002s) 2023-01-11T21:59:30.0924901Z test_gamma_shape_tensor_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0925506Z test_geometric_shape_scalar_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0926090Z test_geometric_shape_tensor_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0926686Z test_gumbel_shape_scalar_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0927303Z test_halfcauchy_shape_scalar_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0927888Z test_halfcauchy_shape_tensor_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0928472Z test_kumaraswamy_shape_scalar_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0929059Z test_laplace_shape_scalar_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0929646Z test_laplace_shape_tensor_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0930215Z test_mixture_same_family_shape (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0930770Z test_multinomial_shape (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0931334Z test_normal_shape_scalar_params (__main__.TestDistributionShapes) ... ok (0.001s) 2023-01-11T21:59:30.0931905Z test_normal_shape_tensor_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0932467Z test_one_hot_categorical_shape (__main__.TestDistributionShapes) ... ok (0.004s) 2023-01-11T21:59:30.0933056Z test_pareto_shape_scalar_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0933661Z test_studentT_shape_scalar_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0934209Z test_studentT_shape_tensor_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0934808Z test_uniform_shape_scalar_params (__main__.TestDistributionShapes) ... ok (0.001s) 2023-01-11T21:59:30.0935410Z test_uniform_shape_tensor_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0935982Z test_vonmises_shape_scalar_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0936567Z test_vonmises_shape_tensor_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0937152Z test_weibull_scale_scalar_params (__main__.TestDistributionShapes) ... ok (0.002s) 2023-01-11T21:59:30.0937737Z test_wishart_shape_scalar_params (__main__.TestDistributionShapes) ... ok (0.003s) 2023-01-11T21:59:30.0938279Z test_wishart_shape_tensor_params (__main__.TestDistributionShapes) ... ok (0.007s) 2023-01-11T21:59:30.0938824Z test_argmax_relaxed_categorical (__main__.TestDistributions) ... ok (0.020s) 2023-01-11T21:59:30.0939334Z test_bernoulli (__main__.TestDistributions) ... ok (0.073s) 2023-01-11T21:59:30.0939916Z test_bernoulli_3d (__main__.TestDistributions) ... ok (0.002s) 2023-01-11T21:59:30.0940419Z test_bernoulli_enumerate_support (__main__.TestDistributions) ... ok (0.002s) 2023-01-11T21:59:30.0940937Z test_beta_log_prob (__main__.TestDistributions) ... ok (0.063s) 2023-01-11T21:59:30.0941428Z test_beta_sample (__main__.TestDistributions) ... ok (0.287s) 2023-01-11T21:59:30.0941893Z test_beta_shape (__main__.TestDistributions) ... ok (0.003s) 2023-01-11T21:59:30.0942585Z test_beta_underflow (__main__.TestDistributions) ... ok (0.034s) 2023-01-11T21:59:30.0943119Z test_beta_underflow_gpu (__main__.TestDistributions) ... 
skip: CUDA not found (0.001s) 2023-01-11T21:59:30.0943613Z test_binomial (__main__.TestDistributions) ... ok (0.024s) 2023-01-11T21:59:30.0944114Z test_binomial_enumerate_support (__main__.TestDistributions) ... ok (0.002s) 2023-01-11T21:59:30.0944650Z test_binomial_extreme_vals (__main__.TestDistributions) ... ok (0.003s) 2023-01-11T21:59:30.0945321Z test_binomial_log_prob_and_entropy (__main__.TestDistributions) ... ok (0.057s) 2023-01-11T21:59:30.0945890Z test_binomial_log_prob_vectorized_count (__main__.TestDistributions) ... ok (0.004s) 2023-01-11T21:59:30.0946426Z test_binomial_sample (__main__.TestDistributions) ... ok (0.054s) 2023-01-11T21:59:30.0946925Z test_binomial_stable (__main__.TestDistributions) ... ok (0.002s) 2023-01-11T21:59:30.0947574Z test_binomial_vectorized_count (__main__.TestDistributions) ... ok (0.076s) 2023-01-11T21:59:30.0948092Z test_categorical_1d (__main__.TestDistributions) ... ok (0.005s) 2023-01-11T21:59:30.0948572Z test_categorical_2d (__main__.TestDistributions) ... ok (0.008s) 2023-01-11T21:59:30.0949100Z test_categorical_enumerate_support (__main__.TestDistributions) ... ok (0.001s) 2023-01-11T21:59:30.0949608Z test_cauchy (__main__.TestDistributions) ... ok (0.012s) 2023-01-11T21:59:30.0950594Z test_cdf_icdf_inverse (__main__.TestDistributions) ... /opt/conda/lib/python3.10/site-packages/torch/distributions/wishart.py:253: UserWarning: Singular sample detected. 2023-01-11T21:59:30.0951250Z warnings.warn("Singular sample detected.") 2023-01-11T21:59:30.0951610Z ok (0.195s) 2023-01-11T21:59:30.0951998Z test_cdf_log_prob (__main__.TestDistributions) ... ok (0.086s) 2023-01-11T21:59:30.0952469Z test_chi2_sample (__main__.TestDistributions) ... ok (0.092s) 2023-01-11T21:59:30.0952919Z test_chi2_shape (__main__.TestDistributions) ... ok (0.005s) 2023-01-11T21:59:30.0953407Z test_continuous_bernoulli (__main__.TestDistributions) ... ok (0.014s) 2023-01-11T21:59:30.0953925Z test_continuous_bernoulli_3d (__main__.TestDistributions) ... ok (0.002s) 2023-01-11T21:59:30.0954445Z test_dirichlet_log_prob (__main__.TestDistributions) ... ok (0.002s) 2023-01-11T21:59:30.0954879Z test_dirichlet_mode (__main__.TestDistributions) ... ok (0.002s) 2023-01-11T21:59:30.0955360Z test_dirichlet_sample (__main__.TestDistributions) ... ok (0.038s) 2023-01-11T21:59:30.0955844Z test_dirichlet_shape (__main__.TestDistributions) ... ok (0.002s) 2023-01-11T21:59:30.0956798Z test_distribution_expand (__main__.TestDistributions) ... /opt/conda/lib/python3.10/site-packages/torch/distributions/wishart.py:253: UserWarning: Singular sample detected. 2023-01-11T21:59:30.0957427Z warnings.warn("Singular sample detected.") 2023-01-11T21:59:30.0957823Z ok (0.554s) 2023-01-11T21:59:30.0958186Z test_distribution_subclass_expand (__main__.TestDistributions) ... ok (0.142s) 2023-01-11T21:59:30.0958697Z test_enumerate_support_type (__main__.TestDistributions) ... ok (0.026s) 2023-01-11T21:59:30.0959222Z test_exponential (__main__.TestDistributions) ... ok (0.031s) 2023-01-11T21:59:30.0959711Z test_exponential_sample (__main__.TestDistributions) ... ok (0.089s) 2023-01-11T21:59:30.0960218Z test_fishersnedecor (__main__.TestDistributions) ... ok (0.006s) 2023-01-11T21:59:30.0960723Z test_fishersnedecor_sample (__main__.TestDistributions) ... ok (0.883s) 2023-01-11T21:59:30.0961390Z test_gamma_gpu_sample (__main__.TestDistributions) ... skip: CUDA not found (0.001s) 2023-01-11T21:59:30.0961945Z test_gamma_gpu_shape (__main__.TestDistributions) ... 
skip: CUDA not found (0.001s) 2023-01-11T21:59:30.0962480Z test_gamma_log_prob_at_boundary (__main__.TestDistributions) ... ok (0.004s) 2023-01-11T21:59:30.0963000Z test_gamma_sample (__main__.TestDistributions) ... ok (0.269s) 2023-01-11T21:59:30.0963453Z test_gamma_shape (__main__.TestDistributions) ... ok (0.005s) 2023-01-11T21:59:30.0963927Z test_geometric (__main__.TestDistributions) ... ok (0.006s) 2023-01-11T21:59:30.0964443Z test_geometric_log_prob_and_entropy (__main__.TestDistributions) ... ok (0.012s) 2023-01-11T21:59:30.0964970Z test_geometric_sample (__main__.TestDistributions) ... ok (0.006s) 2023-01-11T21:59:30.0965454Z test_gumbel (__main__.TestDistributions) ... ok (0.005s) 2023-01-11T21:59:30.0965913Z test_gumbel_sample (__main__.TestDistributions) ... ok (0.521s) 2023-01-11T21:59:30.0966476Z test_halfcauchy (__main__.TestDistributions) ... ok (0.009s) 2023-01-11T21:59:30.0966955Z test_halfnormal (__main__.TestDistributions) ... ok (0.009s) 2023-01-11T21:59:30.0967454Z test_halfnormal_logprob (__main__.TestDistributions) ... ok (0.006s) 2023-01-11T21:59:30.0967973Z test_halfnormal_sample (__main__.TestDistributions) ... ok (0.089s) 2023-01-11T21:59:30.0968465Z test_has_examples (__main__.TestDistributions) ... ok (0.001s) 2023-01-11T21:59:30.0969434Z test_independent_expand (__main__.TestDistributions) ... /opt/conda/lib/python3.10/site-packages/torch/distributions/wishart.py:253: UserWarning: Singular sample detected. 2023-01-11T21:59:30.0970058Z warnings.warn("Singular sample detected.") 2023-01-11T21:59:30.0970404Z ok (0.669s) 2023-01-11T21:59:30.0970804Z test_independent_shape (__main__.TestDistributions) ... ok (0.264s) 2023-01-11T21:59:30.0971348Z test_invalid_parameter_broadcasting (__main__.TestDistributions) ... ok (0.033s) 2023-01-11T21:59:30.0971910Z test_kumaraswamy_mean_variance (__main__.TestDistributions) ... ok (0.030s) 2023-01-11T21:59:30.0972423Z test_kumaraswamy_shape (__main__.TestDistributions) ... ok (0.003s) 2023-01-11T21:59:30.0972911Z test_laplace (__main__.TestDistributions) ... ok (0.018s) 2023-01-11T21:59:30.0973393Z test_laplace_sample (__main__.TestDistributions) ... ok (0.267s) 2023-01-11T21:59:30.0973885Z test_lazy_property_grad (__main__.TestDistributions) ... ok (0.002s) 2023-01-11T21:59:30.0974387Z test_lkj_cholesky_log_prob (__main__.TestDistributions) ... ok (0.012s) 2023-01-11T21:59:30.0974884Z test_logisticnormal (__main__.TestDistributions) ... ok (0.023s) 2023-01-11T21:59:30.0975384Z test_logisticnormal_logprob (__main__.TestDistributions) ... ok (0.001s) 2023-01-11T21:59:30.0975925Z test_logisticnormal_sample (__main__.TestDistributions) ... ok (0.307s) 2023-01-11T21:59:30.0976432Z test_lognormal (__main__.TestDistributions) ... ok (0.017s) 2023-01-11T21:59:30.0976923Z test_lognormal_logprob (__main__.TestDistributions) ... ok (0.006s) 2023-01-11T21:59:30.0977417Z test_lognormal_sample (__main__.TestDistributions) ... ok (0.264s) 2023-01-11T21:59:30.0977951Z test_lowrank_multivariate_normal_log_prob (__main__.TestDistributions) ... ok (0.007s) 2023-01-11T21:59:30.0978614Z test_lowrank_multivariate_normal_moments (__main__.TestDistributions) ... ok (0.036s) 2023-01-11T21:59:30.0979187Z test_lowrank_multivariate_normal_properties (__main__.TestDistributions) ... ok (0.003s) 2023-01-11T21:59:30.0979762Z test_lowrank_multivariate_normal_sample (__main__.TestDistributions) ... ok (0.038s) 2023-01-11T21:59:30.0980338Z test_lowrank_multivariate_normal_shape (__main__.TestDistributions) ... 
ok (0.047s) 2023-01-11T21:59:30.0980903Z test_mixture_same_family_log_prob (__main__.TestDistributions) ... ok (0.007s) 2023-01-11T21:59:30.0981438Z test_mixture_same_family_sample (__main__.TestDistributions) ... ok (0.045s) 2023-01-11T21:59:30.0981970Z test_mixture_same_family_shape (__main__.TestDistributions) ... ok (0.007s) 2023-01-11T21:59:30.0983184Z test_mode (__main__.TestDistributions) ... /opt/conda/lib/python3.10/site-packages/torch/distributions/wishart.py:253: UserWarning: Singular sample detected. 2023-01-11T21:59:30.0983817Z warnings.warn("Singular sample detected.") 2023-01-11T21:59:30.0984164Z ok (0.111s) 2023-01-11T21:59:30.0984541Z test_multinomial_1d (__main__.TestDistributions) ... ok (0.011s) 2023-01-11T21:59:30.0985053Z test_multinomial_1d_log_prob_and_entropy (__main__.TestDistributions) ... ok (0.004s) 2023-01-11T21:59:30.0985580Z test_multinomial_2d (__main__.TestDistributions) ... ok (0.012s) 2023-01-11T21:59:30.0986090Z test_multivariate_normal_log_prob (__main__.TestDistributions) ... ok (0.007s) 2023-01-11T21:59:30.0986641Z test_multivariate_normal_moments (__main__.TestDistributions) ... ok (0.026s) 2023-01-11T21:59:30.0987174Z test_multivariate_normal_properties (__main__.TestDistributions) ... ok (0.002s) 2023-01-11T21:59:30.0987727Z test_multivariate_normal_sample (__main__.TestDistributions) ... ok (0.101s) 2023-01-11T21:59:30.0988396Z test_multivariate_normal_shape (__main__.TestDistributions) ... ok (0.067s) 2023-01-11T21:59:30.0988980Z test_multivariate_normal_stable_with_precision_matrix (__main__.TestDistributions) ... ok (0.001s) 2023-01-11T21:59:30.0989536Z test_negative_binomial (__main__.TestDistributions) ... ok (0.018s) 2023-01-11T21:59:30.0990061Z test_negative_binomial_log_prob (__main__.TestDistributions) ... ok (0.052s) 2023-01-11T21:59:30.0990645Z test_negative_binomial_log_prob_vectorized_count (__main__.TestDistributions) ... ok (0.003s) 2023-01-11T21:59:30.0991170Z test_normal (__main__.TestDistributions) ... ok (0.017s) 2023-01-11T21:59:30.0991634Z test_normal_sample (__main__.TestDistributions) ... ok (0.264s) 2023-01-11T21:59:30.0992151Z test_one_hot_categorical_1d (__main__.TestDistributions) ... ok (0.007s) 2023-01-11T21:59:30.0992654Z test_one_hot_categorical_2d (__main__.TestDistributions) ... ok (0.007s) 2023-01-11T21:59:30.0993200Z test_one_hot_categorical_enumerate_support (__main__.TestDistributions) ... ok (0.002s) 2023-01-11T21:59:30.0993719Z test_pareto (__main__.TestDistributions) ... ok (0.005s) 2023-01-11T21:59:30.0994183Z test_pareto_sample (__main__.TestDistributions) ... ok (0.264s) 2023-01-11T21:59:30.0994677Z test_poisson_forward_ad (__main__.TestDistributions) ... ok (0.001s) 2023-01-11T21:59:30.0995199Z test_poisson_gpu_sample (__main__.TestDistributions) ... skip: CUDA not found (0.000s) 2023-01-11T21:59:30.0995727Z test_poisson_log_prob (__main__.TestDistributions) ... ok (0.007s) 2023-01-11T21:59:30.0996238Z test_poisson_sample (__main__.TestDistributions) ... ok (0.006s) 2023-01-11T21:59:30.0996704Z test_poisson_shape (__main__.TestDistributions) ... ok (0.001s) 2023-01-11T21:59:30.0997188Z test_relaxed_bernoulli (__main__.TestDistributions) ... ok (0.015s) 2023-01-11T21:59:30.0997800Z test_relaxed_one_hot_categorical_1d (__main__.TestDistributions) ... ok (0.009s) 2023-01-11T21:59:30.0998351Z test_relaxed_one_hot_categorical_2d (__main__.TestDistributions) ... ok (0.014s) 2023-01-11T21:59:30.0998856Z test_repr (__main__.TestDistributions) ... 
ok (0.019s) 2023-01-11T21:59:30.0999354Z test_rounded_relaxed_bernoulli (__main__.TestDistributions) ... ok (0.020s) 2023-01-11T21:59:30.1000369Z test_rsample_requires_grad (__main__.TestDistributions) ... /opt/conda/lib/python3.10/site-packages/torch/distributions/wishart.py:253: UserWarning: Singular sample detected. 2023-01-11T21:59:30.1001050Z warnings.warn("Singular sample detected.") 2023-01-11T21:59:30.1001400Z ok (0.017s) 2023-01-11T21:59:30.1001800Z test_sample_detached (__main__.TestDistributions) ... ok (0.018s) 2023-01-11T21:59:30.1002273Z test_studentT (__main__.TestDistributions) ... ok (0.006s) 2023-01-11T21:59:30.1002753Z test_studentT_log_prob (__main__.TestDistributions) ... ok (0.091s) 2023-01-11T21:59:30.1003222Z test_studentT_sample (__main__.TestDistributions) ... ok (1.110s) 2023-01-11T21:59:30.1003714Z test_support_attributes (__main__.TestDistributions) ... ok (0.023s) 2023-01-11T21:59:30.1004323Z test_uniform (__main__.TestDistributions) ... ok (0.012s) 2023-01-11T21:59:30.1004847Z test_valid_parameter_broadcasting (__main__.TestDistributions) ... ok (0.026s) 2023-01-11T21:59:30.1005385Z test_vonmises_logprob (__main__.TestDistributions) ... ok (0.017s) 2023-01-11T21:59:30.1005875Z test_vonmises_sample (__main__.TestDistributions) ... ok (6.787s) 2023-01-11T21:59:30.1007056Z test_wishart_log_prob (__main__.TestDistributions) ... /opt/conda/lib/python3.10/site-packages/torch/distributions/wishart.py:107: UserWarning: Low df values detected. Singular samples are highly likely to occur for ndim - 1 < df < ndim. 2023-01-11T21:59:30.1008114Z warnings.warn("Low df values detected. Singular samples are highly likely to occur for ndim - 1 < df < ndim.") 2023-01-11T21:59:30.1008985Z /opt/conda/lib/python3.10/site-packages/torch/distributions/wishart.py:253: UserWarning: Singular sample detected. 2023-01-11T21:59:30.1009638Z warnings.warn("Singular sample detected.") 2023-01-11T21:59:30.1010011Z ok (0.075s) 2023-01-11T21:59:30.1010403Z test_wishart_moments (__main__.TestDistributions) ... ok (1.518s) 2023-01-11T21:59:30.1010912Z test_wishart_properties (__main__.TestDistributions) ... ok (0.002s) 2023-01-11T21:59:30.1011428Z test_wishart_sample (__main__.TestDistributions) ... ok (0.295s) 2023-01-11T21:59:30.1012357Z test_wishart_shape (__main__.TestDistributions) ... /opt/conda/lib/python3.10/site-packages/torch/distributions/wishart.py:253: UserWarning: Singular sample detected. 2023-01-11T21:59:30.1012970Z warnings.warn("Singular sample detected.") 2023-01-11T21:59:30.1013718Z /opt/conda/lib/python3.10/site-packages/torch/distributions/wishart.py:253: UserWarning: Singular sample detected. 2023-01-11T21:59:30.1014274Z warnings.warn("Singular sample detected.") 2023-01-11T21:59:30.1015017Z /opt/conda/lib/python3.10/site-packages/torch/distributions/wishart.py:253: UserWarning: Singular sample detected. 2023-01-11T21:59:30.1015583Z warnings.warn("Singular sample detected.") 2023-01-11T21:59:30.1016322Z /opt/conda/lib/python3.10/site-packages/torch/distributions/wishart.py:253: UserWarning: Singular sample detected. 2023-01-11T21:59:30.1016894Z warnings.warn("Singular sample detected.") 2023-01-11T21:59:30.1017643Z /opt/conda/lib/python3.10/site-packages/torch/distributions/wishart.py:253: UserWarning: Singular sample detected. 2023-01-11T21:59:30.1018188Z warnings.warn("Singular sample detected.") 2023-01-11T21:59:30.1018941Z /opt/conda/lib/python3.10/site-packages/torch/distributions/wishart.py:253: UserWarning: Singular sample detected. 
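Editor's note: the recurring Wishart warnings above concern the degrees-of-freedom parameter: for a p x p scale matrix, df must exceed p - 1, and values below p make singular samples likely. A standalone sketch of the two regimes (my own example, not taken from the test file):

import torch
from torch.distributions import Wishart

p = 3
eye = torch.eye(p)

# df in (p - 1, p): legal, but triggers "Low df values detected" and
# frequently "Singular sample detected" when sampling.
low_df = Wishart(df=torch.tensor(p - 0.5), covariance_matrix=eye)

# df >= p: samples are almost surely full rank, so no warnings are expected.
safe = Wishart(df=torch.tensor(float(p + 2)), covariance_matrix=eye)
sample = safe.sample()
print(torch.linalg.matrix_rank(sample))  # expected: 3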
2023-01-11T21:59:30.1019492Z warnings.warn("Singular sample detected.") 2023-01-11T21:59:30.1019826Z ok (0.127s) 2023-01-11T21:59:30.1020256Z test_wishart_stable_with_precision_matrix (__main__.TestDistributions) ... ok (0.001s) 2023-01-11T21:59:30.1020850Z test_zero_excluded_binomial (__main__.TestDistributions) ... skip: CUDA not found (0.001s) 2023-01-11T21:59:30.1021356Z test_cat_event_dim (__main__.TestFunctors) ... ok (0.001s) 2023-01-11T21:59:30.1021803Z test_cat_transform (__main__.TestFunctors) ... ok (0.003s) 2023-01-11T21:59:30.1022289Z test_cat_transform_non_uniform (__main__.TestFunctors) ... ok (0.003s) 2023-01-11T21:59:30.1022939Z test_stack_transform (__main__.TestFunctors) ... ok (0.003s) 2023-01-11T21:59:30.1023763Z test_cdf (__main__.TestJit) ... /opt/conda/lib/python3.10/site-packages/torch/distributions/wishart.py:253: UserWarning: Singular sample detected. 2023-01-11T21:59:30.1024345Z warnings.warn("Singular sample detected.") 2023-01-11T21:59:30.1024677Z ok (2.505s) 2023-01-11T21:59:30.1025546Z test_entropy (__main__.TestJit) ... /opt/conda/lib/python3.10/site-packages/torch/distributions/wishart.py:107: UserWarning: Low df values detected. Singular samples are highly likely to occur for ndim - 1 < df < ndim. 2023-01-11T21:59:30.1026563Z warnings.warn("Low df values detected. Singular samples are highly likely to occur for ndim - 1 < df < ndim.") 2023-01-11T21:59:30.1027152Z ok (3.315s) 2023-01-11T21:59:30.1027486Z test_enumerate_support (__main__.TestJit) ... ok (0.228s) 2023-01-11T21:59:30.1027865Z test_log_prob (__main__.TestJit) ... ok (5.717s) 2023-01-11T21:59:30.1028226Z test_mean (__main__.TestJit) ... ok (1.532s) 2023-01-11T21:59:30.1028604Z test_rsample (__main__.TestJit) ... ok (0.382s) 2023-01-11T21:59:30.1028973Z test_sample (__main__.TestJit) ... ok (0.515s) 2023-01-11T21:59:30.1029346Z test_variance (__main__.TestJit) ... ok (2.607s) 2023-01-11T21:59:30.1029793Z test_entropy_exponential_family (__main__.TestKL) ... ok (0.038s) 2023-01-11T21:59:30.1030262Z test_entropy_monte_carlo (__main__.TestKL) ... ok (2.669s) 2023-01-11T21:59:30.1030696Z test_kl_edgecases (__main__.TestKL) ... ok (0.012s) 2023-01-11T21:59:30.1031114Z test_kl_exponential_family (__main__.TestKL) ... ok (0.021s) 2023-01-11T21:59:30.1031477Z test_kl_infinite (__main__.TestKL) ... ok (0.015s) 2023-01-11T21:59:30.1032004Z test_kl_lowrank_multivariate_normal (__main__.TestKL) ... ok (0.024s) 2023-01-11T21:59:30.1032537Z test_kl_lowrank_multivariate_normal_batched (__main__.TestKL) ... ok (0.017s) 2023-01-11T21:59:30.1032999Z test_kl_monte_carlo (__main__.TestKL) ... ok (0.705s) 2023-01-11T21:59:30.1033415Z test_kl_multivariate_normal (__main__.TestKL) ... ok (0.029s) 2023-01-11T21:59:30.1033883Z test_kl_multivariate_normal_batched (__main__.TestKL) ... ok (0.014s) 2023-01-11T21:59:30.1034407Z test_kl_multivariate_normal_batched_broadcasted (__main__.TestKL) ... ok (0.014s) 2023-01-11T21:59:30.1034857Z test_kl_shape (__main__.TestKL) ... ok (0.045s) 2023-01-11T21:59:30.1035259Z test_kl_transformed (__main__.TestKL) ... ok (0.011s) 2023-01-11T21:59:30.1035763Z test_lazy_logits_initialization (__main__.TestLazyLogitsInitialization) ... ok (0.004s) 2023-01-11T21:59:30.1036399Z test_lazy_probs_initialization (__main__.TestLazyLogitsInitialization) ... ok (0.002s) 2023-01-11T21:59:30.1036976Z test_bernoulli_gradient (__main__.TestNumericalStability) ... ok (0.014s) 2023-01-11T21:59:30.1037542Z test_bernoulli_with_logits_overflow (__main__.TestNumericalStability) ... 
ok (0.003s) 2023-01-11T21:59:30.1038196Z test_bernoulli_with_logits_underflow (__main__.TestNumericalStability) ... ok (0.003s) 2023-01-11T21:59:30.1038753Z test_categorical_log_prob (__main__.TestNumericalStability) ... ok (0.002s) 2023-01-11T21:59:30.1039314Z test_categorical_log_prob_with_logits (__main__.TestNumericalStability) ... ok (0.002s) 2023-01-11T21:59:30.1039906Z test_continuous_bernoulli_gradient (__main__.TestNumericalStability) ... ok (0.028s) 2023-01-11T21:59:30.1040502Z test_continuous_bernoulli_with_logits_overflow (__main__.TestNumericalStability) ... ok (0.004s) 2023-01-11T21:59:30.1041125Z test_continuous_bernoulli_with_logits_underflow (__main__.TestNumericalStability) ... ok (0.004s) 2023-01-11T21:59:30.1041708Z test_multinomial_log_prob (__main__.TestNumericalStability) ... ok (0.002s) 2023-01-11T21:59:30.1042297Z test_multinomial_log_prob_with_logits (__main__.TestNumericalStability) ... ok (0.002s) 2023-01-11T21:59:30.1042806Z test_beta_wrt_alpha (__main__.TestRsample) ... ok (0.043s) 2023-01-11T21:59:30.1043252Z test_beta_wrt_beta (__main__.TestRsample) ... ok (0.042s) 2023-01-11T21:59:30.1043663Z test_chi2 (__main__.TestRsample) ... ok (0.023s) 2023-01-11T21:59:30.1044117Z test_dirichlet_multivariate (__main__.TestRsample) ... ok (0.503s) 2023-01-11T21:59:30.1044601Z test_dirichlet_on_diagonal (__main__.TestRsample) ... ok (0.045s) 2023-01-11T21:59:30.1045097Z test_dirichlet_tangent_field (__main__.TestRsample) ... ok (0.083s) 2023-01-11T21:59:30.1045538Z test_gamma (__main__.TestRsample) ... ok (0.023s) 2023-01-11T21:59:30.1045962Z test_invalid (__main__.TestValidation) ... ok (0.019s) 2023-01-11T21:59:30.1046441Z test_invalid_log_probs_arg (__main__.TestValidation) ... ok (0.277s) 2023-01-11T21:59:30.1046920Z test_valid (__main__.TestValidation) ... ok (0.012s) 2023-01-11T21:59:30.1047413Z test_warning_unimplemented_constraints (__main__.TestValidation) ... ok (0.008s) 2023-01-11T21:59:30.1047838Z 2023-01-11T21:59:30.1048230Z ---------------------------------------------------------------------- 2023-01-11T21:59:30.1048658Z Ran 219 tests in 40.211s 2023-01-11T21:59:30.1048857Z 2023-01-11T21:59:30.1048980Z OK (skipped=5) 2023-01-11T21:59:30.1049156Z 2023-01-11T21:59:30.1049300Z Generating XML reports... 
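Editor's note: the TestKL block above exercises torch.distributions.kl_divergence, which dispatches to registered closed forms where they exist. A minimal illustration (not from the test suite) for two univariate normals, where KL(p || q) = log(scale_q/scale_p) + (scale_p^2 + (loc_p - loc_q)^2) / (2 * scale_q^2) - 1/2:

import torch
from torch.distributions import Normal, kl_divergence

p = Normal(loc=torch.tensor(0.0), scale=torch.tensor(1.0))
q = Normal(loc=torch.tensor(1.0), scale=torch.tensor(2.0))

analytic = (torch.log(q.scale / p.scale)
            + (p.scale**2 + (p.loc - q.loc)**2) / (2 * q.scale**2) - 0.5)
print(kl_divergence(p, q), analytic)  # both approximately 0.4431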
2023-01-11T21:59:30.1050109Z Generated XML report: test-reports/python-unittest/distributions.test_distributions/TEST-TestAgainstScipy-20230111215849.xml 2023-01-11T21:59:30.1051151Z Generated XML report: test-reports/python-unittest/distributions.test_distributions/TEST-TestConstraints-20230111215849.xml 2023-01-11T21:59:30.1052185Z Generated XML report: test-reports/python-unittest/distributions.test_distributions/TEST-TestDistributionShapes-20230111215849.xml 2023-01-11T21:59:30.1053258Z Generated XML report: test-reports/python-unittest/distributions.test_distributions/TEST-TestDistributions-20230111215849.xml 2023-01-11T21:59:30.1054377Z Generated XML report: test-reports/python-unittest/distributions.test_distributions/TEST-TestFunctors-20230111215849.xml 2023-01-11T21:59:30.1055353Z Generated XML report: test-reports/python-unittest/distributions.test_distributions/TEST-TestJit-20230111215849.xml 2023-01-11T21:59:30.1056239Z Generated XML report: test-reports/python-unittest/distributions.test_distributions/TEST-TestKL-20230111215849.xml 2023-01-11T21:59:30.1057305Z Generated XML report: test-reports/python-unittest/distributions.test_distributions/TEST-TestLazyLogitsInitialization-20230111215849.xml 2023-01-11T21:59:30.1058406Z Generated XML report: test-reports/python-unittest/distributions.test_distributions/TEST-TestNumericalStability-20230111215849.xml 2023-01-11T21:59:30.1059443Z Generated XML report: test-reports/python-unittest/distributions.test_distributions/TEST-TestRsample-20230111215849.xml 2023-01-11T21:59:30.1060444Z Generated XML report: test-reports/python-unittest/distributions.test_distributions/TEST-TestValidation-20230111215849.xml 2023-01-11T21:59:30.1060909Z 2023-01-11T21:59:30.1061403Z ##[endgroup] 2023-01-11T21:59:30.1062527Z FINISHED PRINTING LOG FILE of distributions/test_distributions (/var/lib/jenkins/workspace/test/test-reports/distributions-test_distributions_q4k_g_kn) 2023-01-11T21:59:30.1062979Z 2023-01-11T21:59:30.1063295Z Running nn/test_convolution ... [2023-01-11 21:59:30.090060] 2023-01-11T21:59:30.1064108Z Executing ['/opt/conda/bin/python', '-bb', 'nn/test_convolution.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:59:30.090336] 2023-01-11T21:59:38.2256044Z 2023-01-11T21:59:38.2256638Z Expand the folded group to see the log file of nn/test_convolution 2023-01-11T21:59:38.2257512Z ##[group]PRINTING LOG FILE of nn/test_convolution (/var/lib/jenkins/workspace/test/test-reports/nn-test_convolution_wt79iwbk) 2023-01-11T21:59:38.2257778Z 2023-01-11T21:59:38.2257885Z Running tests... 2023-01-11T21:59:38.2258487Z ---------------------------------------------------------------------- 2023-01-11T21:59:38.2259046Z Test results will be stored in test-reports/python-unittest/nn.test_convolution 2023-01-11T21:59:38.2260487Z test_Conv1d_module_same_padding (__main__.TestConvolutionNN) ... /var/lib/jenkins/workspace/test/nn/test_convolution.py:152: UserWarning: Using padding='same' with even kernel lengths and odd dilation may require a zero-padded copy of the input be created (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Convolution.cpp:997.) 2023-01-11T21:59:38.2261512Z expect = F.conv1d(x, module.weight, module.bias, padding='same') 2023-01-11T21:59:38.2261893Z ok (0.005s) 2023-01-11T21:59:38.2262287Z test_Conv2d_1x1 (__main__.TestConvolutionNN) ... ok (0.005s) 2023-01-11T21:59:38.2262954Z test_Conv2d_OneDNN (__main__.TestConvolutionNN) ... 
ok (0.018s) 2023-01-11T21:59:38.2263428Z test_Conv2d_backward_twice (__main__.TestConvolutionNN) ... ok (0.007s) 2023-01-11T21:59:38.2263739Z test_Conv2d_groups_nobias (__main__.TestConvolutionNN) ... ok (0.041s) 2023-01-11T21:59:38.2264290Z test_Conv2d_groups_nobias_v2 (__main__.TestConvolutionNN) ... ok (0.005s) 2023-01-11T21:59:38.2264595Z test_Conv2d_inconsistent_types (__main__.TestConvolutionNN) ... ok (0.008s) 2023-01-11T21:59:38.2264936Z test_Conv2d_inconsistent_types_on_GPU_with_cudnn (__main__.TestConvolutionNN) ... skip: CUDA not available (0.001s) 2023-01-11T21:59:38.2265330Z test_Conv2d_inconsistent_types_on_GPU_without_cudnn (__main__.TestConvolutionNN) ... skip: CUDA not available (0.001s) 2023-01-11T21:59:38.2265672Z test_Conv2d_missing_argument (__main__.TestConvolutionNN) ... ok (0.002s) 2023-01-11T21:59:38.2265977Z test_Conv2d_module_same_padding (__main__.TestConvolutionNN) ... ok (0.003s) 2023-01-11T21:59:38.2266263Z test_Conv3d_groups_nobias (__main__.TestConvolutionNN) ... ok (0.005s) 2023-01-11T21:59:38.2266554Z test_Conv3d_groups_wbias (__main__.TestConvolutionNN) ... ok (0.005s) 2023-01-11T21:59:38.2266916Z test_Conv3d_module_same_padding (__main__.TestConvolutionNN) ... ok (0.003s) 2023-01-11T21:59:38.2267250Z test_ConvTranspose2d_half_cublas_gemm (__main__.TestConvolutionNN) ... skip: CUDA not available (0.000s) 2023-01-11T21:59:38.2267587Z test_ConvTranspose2d_output_size (__main__.TestConvolutionNN) ... ok (0.003s) 2023-01-11T21:59:38.2267925Z test_ConvTranspose2d_output_size_downsample_upsample (__main__.TestConvolutionNN) ... ok (4.115s) 2023-01-11T21:59:38.2268301Z test_ConvTranspose3d_correct_output_size (__main__.TestConvolutionNN) ... ok (0.001s) 2023-01-11T21:59:38.2268611Z test_conv2d_discontiguous_weight (__main__.TestConvolutionNN) ... ok (0.039s) 2023-01-11T21:59:38.2268903Z test_conv_backcompat (__main__.TestConvolutionNN) ... ok (0.010s) 2023-01-11T21:59:38.2269228Z test_conv_cudnn_memory_layout_dominance (__main__.TestConvolutionNN) ... skip: CUDA unavailable (0.004s) 2023-01-11T21:59:38.2269552Z test_conv_invalid_groups (__main__.TestConvolutionNN) ... ok (0.001s) 2023-01-11T21:59:38.2269864Z test_conv_modules_raise_error_on_incorrect_input_size (__main__.TestConvolutionNN) ... ok (0.094s) 2023-01-11T21:59:38.2270191Z test_conv_padding_mode (__main__.TestConvolutionNN) ... ok (0.001s) 2023-01-11T21:59:38.2271216Z test_conv_shapecheck (__main__.TestConvolutionNN) ... /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1120: UserWarning: Complex modules are a new feature under active development whose design may change, and some modules might not work as expected when using complex tensors as parameters or buffers. Please file an issue at https://github.com/pytorch/pytorch/issues/new?template=bug-report.yml if a complex module does not work as expected. 2023-01-11T21:59:38.2271837Z warnings.warn( 2023-01-11T21:59:38.2271996Z ok (0.179s) 2023-01-11T21:59:38.2272214Z test_conv_tbc (__main__.TestConvolutionNN) ... ok (0.009s) 2023-01-11T21:59:38.2272522Z test_cudnn_non_contiguous (__main__.TestConvolutionNN) ... skip: CUDA not available (0.001s) 2023-01-11T21:59:38.2272867Z test_cudnn_noncontiguous_weight (__main__.TestConvolutionNN) ... skip: CUDA unavailable (0.001s) 2023-01-11T21:59:38.2273189Z test_functional_grad_conv (__main__.TestConvolutionNN) ... ok (0.004s) 2023-01-11T21:59:38.2273487Z test_functional_grad_conv2d (__main__.TestConvolutionNN) ... 
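test_conv_shapecheck above prints the "Complex modules are a new feature under active development" warning from torch/nn/modules/module.py, which fires when a module holds complex tensors as parameters. A minimal sketch that is expected to emit the same warning; the dtype and sizes here are assumptions for illustration, not values from the test:

```python
import torch
import torch.nn as nn

# Assumed minimal trigger: a module whose weight/bias parameters are complex.
lin = nn.Linear(4, 4, dtype=torch.cfloat)    # registering complex parameters warns
x = torch.randn(2, 4, dtype=torch.cfloat)
y = lin(x)
print(y.dtype)                               # torch.complex64
```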
ok (0.158s) 2023-01-11T21:59:38.2273782Z test_grad_conv1d_input (__main__.TestConvolutionNN) ... ok (0.058s) 2023-01-11T21:59:38.2274057Z test_grad_conv1d_weight (__main__.TestConvolutionNN) ... ok (0.042s) 2023-01-11T21:59:38.2274344Z test_grad_conv2d_input (__main__.TestConvolutionNN) ... ok (0.064s) 2023-01-11T21:59:38.2274628Z test_grad_conv2d_weight (__main__.TestConvolutionNN) ... ok (0.062s) 2023-01-11T21:59:38.2274902Z test_grad_conv3d_input (__main__.TestConvolutionNN) ... ok (0.082s) 2023-01-11T21:59:38.2275187Z test_grad_conv3d_weight (__main__.TestConvolutionNN) ... ok (0.060s) 2023-01-11T21:59:38.2275517Z test_grouped_conv_cudnn_nhwc_support (__main__.TestConvolutionNN) ... skip: CUDA unavailable (0.001s) 2023-01-11T21:59:38.2276587Z test_invalid_conv1d (__main__.TestConvolutionNN) ... /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1120: UserWarning: Complex modules are a new feature under active development whose design may change, and some modules might not work as expected when using complex tensors as parameters or buffers. Please file an issue at https://github.com/pytorch/pytorch/issues/new?template=bug-report.yml if a complex module does not work as expected. 2023-01-11T21:59:38.2277195Z warnings.warn( 2023-01-11T21:59:38.2277357Z ok (0.080s) 2023-01-11T21:59:38.2277580Z test_invalid_conv2d (__main__.TestConvolutionNN) ... ok (0.158s) 2023-01-11T21:59:38.2277989Z test_invalid_conv3d (__main__.TestConvolutionNN) ... ok (0.079s) 2023-01-11T21:59:38.2278263Z test_mismatch_shape_conv2d (__main__.TestConvolutionNN) ... ok (0.007s) 2023-01-11T21:59:38.2278545Z test_nnpack_conv (__main__.TestConvolutionNN) ... ok (0.363s) 2023-01-11T21:59:38.2278915Z test_thnn_conv_strided_padded_dilated (__main__.TestConvolutionNN) ... skip: CUDA not available (0.001s) 2023-01-11T21:59:38.2279123Z 2023-01-11T21:59:38.2279316Z ---------------------------------------------------------------------- 2023-01-11T21:59:38.2279558Z Ran 43 tests in 5.789s 2023-01-11T21:59:38.2279672Z 2023-01-11T21:59:38.2279742Z OK (skipped=8) 2023-01-11T21:59:38.2279847Z 2023-01-11T21:59:38.2279934Z Generating XML reports... 2023-01-11T21:59:38.2280349Z Generated XML report: test-reports/python-unittest/nn.test_convolution/TEST-TestConvolutionNN-20230111215931.xml 2023-01-11T21:59:38.2280591Z 2023-01-11T21:59:38.2280889Z ##[endgroup] 2023-01-11T21:59:38.2281288Z FINISHED PRINTING LOG FILE of nn/test_convolution (/var/lib/jenkins/workspace/test/test-reports/nn-test_convolution_wt79iwbk) 2023-01-11T21:59:38.2281513Z 2023-01-11T21:59:38.2281664Z Running nn/test_pooling ... [2023-01-11 21:59:38.225729] 2023-01-11T21:59:38.2282141Z Executing ['/opt/conda/bin/python', '-bb', 'nn/test_pooling.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:59:38.226004] 2023-01-11T21:59:40.9086760Z 2023-01-11T21:59:40.9087282Z Expand the folded group to see the log file of nn/test_pooling 2023-01-11T21:59:40.9088354Z ##[group]PRINTING LOG FILE of nn/test_pooling (/var/lib/jenkins/workspace/test/test-reports/nn-test_pooling_i3n92shm) 2023-01-11T21:59:40.9088912Z 2023-01-11T21:59:40.9089030Z Running tests... 2023-01-11T21:59:40.9089839Z ---------------------------------------------------------------------- 2023-01-11T21:59:40.9090500Z Test results will be stored in test-reports/python-unittest/nn.test_pooling 2023-01-11T21:59:40.9091049Z test_avg_pool1d_ceil_mode (__main__.TestAvgPool) ... ok (0.002s) 2023-01-11T21:59:40.9091497Z test_avg_pool2d_ceil_mode (__main__.TestAvgPool) ... 
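Each "Executing [...]" line above records the exact argv the harness launches per test file: python with -bb (error on bytes/str comparisons), the test module, verbose unittest output, and the flags that pull in the slow-test and disabled-test lists. A sketch of replaying one such invocation; running it from the repository's test/ directory and the choice of file are assumptions for illustration:

```python
import subprocess
import sys

# Mirrors the "Executing [...]" lines in the log (the CI uses /opt/conda/bin/python).
cmd = [
    sys.executable, "-bb",          # -bb: raise on bytes/str comparisons
    "nn/test_pooling.py", "-v",     # verbose unittest output
    "--import-slow-tests",
    "--import-disabled-tests",
]
subprocess.run(cmd, check=True)     # run from the repo's test/ directory
```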
ok (0.001s) 2023-01-11T21:59:40.9091833Z test_avg_pool3d_ceil_mode (__main__.TestAvgPool) ... ok (0.001s) 2023-01-11T21:59:40.9092353Z test_doubletensor_avg_pool2d (__main__.TestAvgPool) ... ok (0.008s) 2023-01-11T21:59:40.9092760Z test_doubletensor_avg_pool2d_with_divisor (__main__.TestAvgPool) ... ok (0.005s) 2023-01-11T21:59:40.9093066Z test_doubletensor_avg_pool3d (__main__.TestAvgPool) ... ok (0.047s) 2023-01-11T21:59:40.9093363Z test_doubletensor_avg_pool3d_with_divisor (__main__.TestAvgPool) ... ok (0.160s) 2023-01-11T21:59:40.9093669Z test_MaxUnpool2d_output_size (__main__.TestPoolingNN) ... ok (0.009s) 2023-01-11T21:59:40.9093951Z test_adaptive_pooling_avg_nhwc (__main__.TestPoolingNN) ... ok (0.003s) 2023-01-11T21:59:40.9094301Z test_adaptive_pooling_avg_nhwc_launch_config_backward (__main__.TestPoolingNN) ... skip: CUDA unavailable (0.001s) 2023-01-11T21:59:40.9094690Z test_adaptive_pooling_avg_nhwc_launch_config_forward (__main__.TestPoolingNN) ... skip: CUDA unavailable (0.001s) 2023-01-11T21:59:40.9095034Z test_adaptive_pooling_avg_nhwc_non_contiguous (__main__.TestPoolingNN) ... ok (0.002s) 2023-01-11T21:59:40.9095344Z test_adaptive_pooling_bfloat16 (__main__.TestPoolingNN) ... ok (0.006s) 2023-01-11T21:59:40.9095857Z test_adaptive_pooling_input_size (__main__.TestPoolingNN) ... ok (0.001s) 2023-01-11T21:59:40.9096157Z test_adaptive_pooling_size_none (__main__.TestPoolingNN) ... ok (0.002s) 2023-01-11T21:59:40.9096441Z test_adaptive_pooling_size_overflow (__main__.TestPoolingNN) ... ok (0.009s) 2023-01-11T21:59:40.9096721Z test_max_unpool (__main__.TestPoolingNN) ... ok (0.225s) 2023-01-11T21:59:40.9096991Z test_max_unpool2d_nhwc_cpu (__main__.TestPoolingNN) ... ok (0.002s) 2023-01-11T21:59:40.9097151Z 2023-01-11T21:59:40.9097350Z ---------------------------------------------------------------------- 2023-01-11T21:59:40.9097596Z Ran 18 tests in 0.486s 2023-01-11T21:59:40.9097709Z 2023-01-11T21:59:40.9097782Z OK (skipped=2) 2023-01-11T21:59:40.9097889Z 2023-01-11T21:59:40.9097975Z Generating XML reports... 2023-01-11T21:59:40.9098368Z Generated XML report: test-reports/python-unittest/nn.test_pooling/TEST-TestAvgPool-20230111215939.xml 2023-01-11T21:59:40.9098945Z Generated XML report: test-reports/python-unittest/nn.test_pooling/TEST-TestPoolingNN-20230111215939.xml 2023-01-11T21:59:40.9099171Z 2023-01-11T21:59:40.9099442Z ##[endgroup] 2023-01-11T21:59:40.9099819Z FINISHED PRINTING LOG FILE of nn/test_pooling (/var/lib/jenkins/workspace/test/test-reports/nn-test_pooling_i3n92shm) 2023-01-11T21:59:40.9100035Z 2023-01-11T21:59:40.9100209Z Running test_cpp_api_parity ... [2023-01-11 21:59:40.908760] 2023-01-11T21:59:40.9100685Z Executing ['/opt/conda/bin/python', '-bb', 'test_cpp_api_parity.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 21:59:40.909009] 2023-01-11T21:59:54.4080537Z 2023-01-11T21:59:54.4081083Z Expand the folded group to see the log file of test_cpp_api_parity 2023-01-11T21:59:54.4082147Z ##[group]PRINTING LOG FILE of test_cpp_api_parity (/var/lib/jenkins/workspace/test/test-reports/test_cpp_api_parity_ip7klvi_) 2023-01-11T21:59:54.4082932Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:59:54.4083195Z 2023-01-11T21:59:54.4083310Z Running tests... 
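test_cpp_api_parity starts by reporting "No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'", and every *_cuda variant below is reported as "skip: CUDA unavailable": this shard runs without a GPU, so the CUDA tests are gated on runtime availability. A generic sketch of that gating pattern; the plain unittest.skipIf decorator shown is an assumption for illustration, not necessarily the exact helper the suite uses:

```python
import unittest
import torch

# On a CPU-only runner torch.cuda.is_available() is False, so the test is skipped.
class Example(unittest.TestCase):
    @unittest.skipIf(not torch.cuda.is_available(), "CUDA unavailable")
    def test_something_cuda(self):
        x = torch.randn(8, device="cuda")
        self.assertEqual(x.device.type, "cuda")

if __name__ == "__main__":
    unittest.main(verbosity=2)
```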
2023-01-11T21:59:54.4083792Z ---------------------------------------------------------------------- 2023-01-11T21:59:54.4084462Z Test results will be stored in test-reports/python-unittest/test_cpp_api_parity 2023-01-11T21:59:54.4085047Z test_torch_nn_AdaptiveAvgPool1d (__main__.TestCppApiParity) ... ok (0.032s) 2023-01-11T21:59:54.4085546Z test_torch_nn_AdaptiveAvgPool1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4086125Z test_torch_nn_AdaptiveAvgPool1d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4086704Z test_torch_nn_AdaptiveAvgPool1d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4087199Z test_torch_nn_AdaptiveAvgPool1d_one_output (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4087786Z test_torch_nn_AdaptiveAvgPool1d_one_output_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4088421Z test_torch_nn_AdaptiveAvgPool2d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4089055Z test_torch_nn_AdaptiveAvgPool2d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4089686Z test_torch_nn_AdaptiveAvgPool2d_single (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4090292Z test_torch_nn_AdaptiveAvgPool2d_single_1x1output (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4090994Z test_torch_nn_AdaptiveAvgPool2d_single_1x1output_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4091567Z test_torch_nn_AdaptiveAvgPool2d_single_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4092115Z test_torch_nn_AdaptiveAvgPool2d_tuple (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4092678Z test_torch_nn_AdaptiveAvgPool2d_tuple_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4093515Z test_torch_nn_AdaptiveAvgPool2d_tuple_none (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4093950Z test_torch_nn_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4094444Z test_torch_nn_AdaptiveAvgPool3d_last_dim (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4095045Z test_torch_nn_AdaptiveAvgPool3d_last_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4095631Z test_torch_nn_AdaptiveAvgPool3d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4096256Z test_torch_nn_AdaptiveAvgPool3d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4096882Z test_torch_nn_AdaptiveAvgPool3d_single (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4097555Z test_torch_nn_AdaptiveAvgPool3d_single_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4098054Z test_torch_nn_AdaptiveAvgPool3d_tuple (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4098415Z test_torch_nn_AdaptiveAvgPool3d_tuple_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4098916Z test_torch_nn_AdaptiveAvgPool3d_tuple_none (__main__.TestCppApiParity) ... ok (0.014s) 2023-01-11T21:59:54.4099344Z test_torch_nn_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestCppApiParity) ... 
skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4099683Z test_torch_nn_AdaptiveMaxPool1d (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4100077Z test_torch_nn_AdaptiveMaxPool1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4100429Z test_torch_nn_AdaptiveMaxPool1d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4100796Z test_torch_nn_AdaptiveMaxPool1d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4101416Z test_torch_nn_AdaptiveMaxPool2d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4102053Z test_torch_nn_AdaptiveMaxPool2d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4102666Z test_torch_nn_AdaptiveMaxPool2d_single (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4103031Z test_torch_nn_AdaptiveMaxPool2d_single_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4103370Z test_torch_nn_AdaptiveMaxPool2d_tuple (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4103724Z test_torch_nn_AdaptiveMaxPool2d_tuple_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4104079Z test_torch_nn_AdaptiveMaxPool2d_tuple_none (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4104427Z test_torch_nn_AdaptiveMaxPool2d_tuple_none_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4104794Z test_torch_nn_AdaptiveMaxPool3d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4105160Z test_torch_nn_AdaptiveMaxPool3d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4105583Z test_torch_nn_AdaptiveMaxPool3d_single (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4105941Z test_torch_nn_AdaptiveMaxPool3d_single_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4106301Z test_torch_nn_AdaptiveMaxPool3d_single_nonatomic (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4106665Z test_torch_nn_AdaptiveMaxPool3d_single_nonatomic_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4107026Z test_torch_nn_AdaptiveMaxPool3d_tuple (__main__.TestCppApiParity) ... ok (0.015s) 2023-01-11T21:59:54.4107380Z test_torch_nn_AdaptiveMaxPool3d_tuple_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4107838Z test_torch_nn_AdaptiveMaxPool3d_tuple_nonatomic (__main__.TestCppApiParity) ... ok (0.015s) 2023-01-11T21:59:54.4108203Z test_torch_nn_AdaptiveMaxPool3d_tuple_nonatomic_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4108569Z test_torch_nn_AdaptiveMaxPool3d_tuple_none (__main__.TestCppApiParity) ... ok (0.020s) 2023-01-11T21:59:54.4108934Z test_torch_nn_AdaptiveMaxPool3d_tuple_none_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4109256Z test_torch_nn_AvgPool1d (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4109572Z test_torch_nn_AvgPool1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4109899Z test_torch_nn_AvgPool1d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4110246Z test_torch_nn_AvgPool1d_no_batch_dim_cuda (__main__.TestCppApiParity) ... 
skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4110616Z test_torch_nn_AvgPool1d_stride (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4110954Z test_torch_nn_AvgPool1d_stride_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4111290Z test_torch_nn_AvgPool1d_stride_pad (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4111621Z test_torch_nn_AvgPool1d_stride_pad_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4111942Z test_torch_nn_AvgPool2d (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4112256Z test_torch_nn_AvgPool2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4112578Z test_torch_nn_AvgPool2d_divisor (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4112902Z test_torch_nn_AvgPool2d_divisor_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4113239Z test_torch_nn_AvgPool2d_divisor_stride (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4113593Z test_torch_nn_AvgPool2d_divisor_stride_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4113939Z test_torch_nn_AvgPool2d_divisor_stride_pad (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4114281Z test_torch_nn_AvgPool2d_divisor_stride_pad_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4114630Z test_torch_nn_AvgPool2d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4114970Z test_torch_nn_AvgPool2d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4115289Z test_torch_nn_AvgPool2d_stride (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4115619Z test_torch_nn_AvgPool2d_stride_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4115951Z test_torch_nn_AvgPool2d_stride_pad (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4116295Z test_torch_nn_AvgPool2d_stride_pad_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4116604Z test_torch_nn_AvgPool3d (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4116923Z test_torch_nn_AvgPool3d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4117243Z test_torch_nn_AvgPool3d_divisor (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4117566Z test_torch_nn_AvgPool3d_divisor_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4117978Z test_torch_nn_AvgPool3d_divisor_stride (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4118318Z test_torch_nn_AvgPool3d_divisor_stride1_pad0_gpu_input (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4118697Z test_torch_nn_AvgPool3d_divisor_stride1_pad0_gpu_input_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4119074Z test_torch_nn_AvgPool3d_divisor_stride_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4119460Z test_torch_nn_AvgPool3d_divisor_stride_pad (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4119816Z test_torch_nn_AvgPool3d_divisor_stride_pad_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4120188Z test_torch_nn_AvgPool3d_divisor_stride_pad_gpu_fixedkw_output (__main__.TestCppApiParity) ... 
ok (0.010s) 2023-01-11T21:59:54.4120570Z test_torch_nn_AvgPool3d_divisor_stride_pad_gpu_fixedkw_output_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4120958Z test_torch_nn_AvgPool3d_divisor_stride_pad_gpu_general_output (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4121353Z test_torch_nn_AvgPool3d_divisor_stride_pad_gpu_general_output_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4121781Z test_torch_nn_AvgPool3d_divisor_stride_pad_gpu_input_nooverlap (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4122168Z test_torch_nn_AvgPool3d_divisor_stride_pad_gpu_input_nooverlap_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4122532Z test_torch_nn_AvgPool3d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4122980Z test_torch_nn_AvgPool3d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4123301Z test_torch_nn_AvgPool3d_stride (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4123624Z test_torch_nn_AvgPool3d_stride1_pad0_gpu_input (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4123987Z test_torch_nn_AvgPool3d_stride1_pad0_gpu_input_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4124358Z test_torch_nn_AvgPool3d_stride_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4124684Z test_torch_nn_AvgPool3d_stride_pad (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4125024Z test_torch_nn_AvgPool3d_stride_pad_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4125383Z test_torch_nn_AvgPool3d_stride_pad_gpu_fixedkw_output (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4125768Z test_torch_nn_AvgPool3d_stride_pad_gpu_fixedkw_output_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4126133Z test_torch_nn_AvgPool3d_stride_pad_gpu_general_output (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4126513Z test_torch_nn_AvgPool3d_stride_pad_gpu_general_output_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4126895Z test_torch_nn_AvgPool3d_stride_pad_gpu_input_nooverlap (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4127278Z test_torch_nn_AvgPool3d_stride_pad_gpu_input_nooverlap_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4127607Z test_torch_nn_BCELoss (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4127919Z test_torch_nn_BCELoss_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4128260Z test_torch_nn_BCELoss_no_batch_dim_mean (__main__.TestCppApiParity) ... expected failure (0.008s) 2023-01-11T21:59:54.4128608Z test_torch_nn_BCELoss_no_batch_dim_mean_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4128966Z test_torch_nn_BCELoss_no_batch_dim_none (__main__.TestCppApiParity) ... expected failure (0.008s) 2023-01-11T21:59:54.4129323Z test_torch_nn_BCELoss_no_batch_dim_none_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4129677Z test_torch_nn_BCELoss_no_batch_dim_sum (__main__.TestCppApiParity) ... 
expected failure (0.008s) 2023-01-11T21:59:54.4130025Z test_torch_nn_BCELoss_no_batch_dim_sum_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4130407Z test_torch_nn_BCELoss_scalar_weights (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4130751Z test_torch_nn_BCELoss_scalar_weights_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4131081Z test_torch_nn_BCELoss_weights (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4131396Z test_torch_nn_BCELoss_weights_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4131729Z test_torch_nn_BCEWithLogitsLoss (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4132073Z test_torch_nn_BCEWithLogitsLoss_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4132433Z test_torch_nn_BCEWithLogitsLoss_no_batch_dim_mean (__main__.TestCppApiParity) ... expected failure (0.008s) 2023-01-11T21:59:54.4132853Z test_torch_nn_BCEWithLogitsLoss_no_batch_dim_mean_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4133244Z test_torch_nn_BCEWithLogitsLoss_no_batch_dim_none (__main__.TestCppApiParity) ... expected failure (0.008s) 2023-01-11T21:59:54.4133631Z test_torch_nn_BCEWithLogitsLoss_no_batch_dim_none_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4134004Z test_torch_nn_BCEWithLogitsLoss_no_batch_dim_sum (__main__.TestCppApiParity) ... expected failure (0.008s) 2023-01-11T21:59:54.4134390Z test_torch_nn_BCEWithLogitsLoss_no_batch_dim_sum_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4134759Z test_torch_nn_BCEWithLogitsLoss_scalar_weights (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4135132Z test_torch_nn_BCEWithLogitsLoss_scalar_weights_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4135483Z test_torch_nn_BCEWithLogitsLoss_weights (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4135846Z test_torch_nn_BCEWithLogitsLoss_weights_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4136192Z test_torch_nn_BatchNorm1d_3d_input (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4136523Z test_torch_nn_BatchNorm1d_3d_input_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4136868Z test_torch_nn_BatchNorm1d_3d_input_not_affine (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4137226Z test_torch_nn_BatchNorm1d_3d_input_not_affine_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4137569Z test_torch_nn_BatchNorm1d_affine (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4137893Z test_torch_nn_BatchNorm1d_affine_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4138244Z test_torch_nn_BatchNorm1d_affine_simple_average (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4138613Z test_torch_nn_BatchNorm1d_affine_simple_average_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4138969Z test_torch_nn_BatchNorm1d_not_affine (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4139302Z test_torch_nn_BatchNorm1d_not_affine_cuda (__main__.TestCppApiParity) ... 
skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4139650Z test_torch_nn_BatchNorm1d_not_tracking_stats (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4140011Z test_torch_nn_BatchNorm1d_not_tracking_stats_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4140341Z test_torch_nn_BatchNorm1d_zero_batch (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4140683Z test_torch_nn_BatchNorm1d_zero_batch_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4141011Z test_torch_nn_BatchNorm2d (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4141323Z test_torch_nn_BatchNorm2d_2d_simple_average (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4141704Z test_torch_nn_BatchNorm2d_2d_simple_average_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4142066Z test_torch_nn_BatchNorm2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4142523Z test_torch_nn_BatchNorm2d_momentum (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4142889Z test_torch_nn_BatchNorm2d_momentum_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4143384Z test_torch_nn_BatchNorm2d_not_affine (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4143957Z test_torch_nn_BatchNorm2d_not_affine_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4144545Z test_torch_nn_BatchNorm2d_not_tracking_stats (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4145251Z test_torch_nn_BatchNorm2d_not_tracking_stats_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4145871Z test_torch_nn_BatchNorm2d_zero_batch (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4146453Z test_torch_nn_BatchNorm2d_zero_batch_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4146989Z test_torch_nn_BatchNorm3d (__main__.TestCppApiParity) ... ok (0.014s) 2023-01-11T21:59:54.4147486Z test_torch_nn_BatchNorm3d_3d_simple_average (__main__.TestCppApiParity) ... ok (0.015s) 2023-01-11T21:59:54.4148102Z test_torch_nn_BatchNorm3d_3d_simple_average_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4148705Z test_torch_nn_BatchNorm3d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4149261Z test_torch_nn_BatchNorm3d_momentum (__main__.TestCppApiParity) ... ok (0.015s) 2023-01-11T21:59:54.4149797Z test_torch_nn_BatchNorm3d_momentum_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4150385Z test_torch_nn_BatchNorm3d_not_affine (__main__.TestCppApiParity) ... ok (0.014s) 2023-01-11T21:59:54.4150967Z test_torch_nn_BatchNorm3d_not_affine_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4151574Z test_torch_nn_BatchNorm3d_not_tracking_stats (__main__.TestCppApiParity) ... ok (0.015s) 2023-01-11T21:59:54.4152204Z test_torch_nn_BatchNorm3d_not_tracking_stats_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4152782Z test_torch_nn_BatchNorm3d_zero_batch (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4153344Z test_torch_nn_BatchNorm3d_zero_batch_cuda (__main__.TestCppApiParity) ... 
skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4153843Z test_torch_nn_CELU (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4154342Z test_torch_nn_CELU_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4154864Z test_torch_nn_CELU_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4155394Z test_torch_nn_CELU_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4155938Z test_torch_nn_CTCLoss_2d_int_target_lengths_tensors (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4156549Z test_torch_nn_CTCLoss_2d_int_target_lengths_tensors_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4157123Z test_torch_nn_CTCLoss_2d_lengths_tensors (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4157756Z test_torch_nn_CTCLoss_2d_lengths_tensors_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4158373Z test_torch_nn_CTCLoss_lengths_tensors (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4159008Z test_torch_nn_CTCLoss_lengths_tensors_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4159715Z test_torch_nn_ConstantPad1d (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4160265Z test_torch_nn_ConstantPad1d_batch (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4160824Z test_torch_nn_ConstantPad1d_batch_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4161328Z test_torch_nn_ConstantPad1d_complex (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4161967Z test_torch_nn_ConstantPad1d_complex_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4162402Z test_torch_nn_ConstantPad1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4162729Z test_torch_nn_ConstantPad2d (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4163039Z test_torch_nn_ConstantPad2d_complex (__main__.TestCppApiParity) ... ok (0.015s) 2023-01-11T21:59:54.4163737Z test_torch_nn_ConstantPad2d_complex_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4164173Z test_torch_nn_ConstantPad2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4164515Z test_torch_nn_ConstantPad2d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4164870Z test_torch_nn_ConstantPad2d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4165193Z test_torch_nn_ConstantPad3d (__main__.TestCppApiParity) ... ok (0.033s) 2023-01-11T21:59:54.4165504Z test_torch_nn_ConstantPad3d_complex (__main__.TestCppApiParity) ... ok (0.041s) 2023-01-11T21:59:54.4165941Z test_torch_nn_ConstantPad3d_complex_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4166479Z test_torch_nn_ConstantPad3d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4167095Z test_torch_nn_ConstantPad3d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.022s) 2023-01-11T21:59:54.4167456Z test_torch_nn_ConstantPad3d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4167779Z test_torch_nn_Conv1d (__main__.TestCppApiParity) ... 
ok (0.032s) 2023-01-11T21:59:54.4168069Z test_torch_nn_Conv1d_circular_stride2_pad2 (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4168420Z test_torch_nn_Conv1d_circular_stride2_pad2_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4168772Z test_torch_nn_Conv1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4169087Z test_torch_nn_Conv1d_dilated (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4169402Z test_torch_nn_Conv1d_dilated_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4169725Z test_torch_nn_Conv1d_groups (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4170050Z test_torch_nn_Conv1d_groups_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4170366Z test_torch_nn_Conv1d_pad1 (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4170671Z test_torch_nn_Conv1d_pad1_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4170992Z test_torch_nn_Conv1d_pad1size1 (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4171320Z test_torch_nn_Conv1d_pad1size1_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4171626Z test_torch_nn_Conv1d_pad2 (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4171942Z test_torch_nn_Conv1d_pad2_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4172259Z test_torch_nn_Conv1d_pad2size1 (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4172583Z test_torch_nn_Conv1d_pad2size1_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4172896Z test_torch_nn_Conv1d_pad_same (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4173877Z test_torch_nn_Conv1d_pad_same2 (__main__.TestCppApiParity) ... /opt/conda/lib/python3.10/site-packages/torch/nn/modules/conv.py:309: UserWarning: Using padding='same' with even kernel lengths and odd dilation may require a zero-padded copy of the input be created (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/Convolution.cpp:997.) 2023-01-11T21:59:54.4174447Z return F.conv1d(input, weight, bias, self.stride, 2023-01-11T21:59:54.4174650Z ok (0.014s) 2023-01-11T21:59:54.4174903Z test_torch_nn_Conv1d_pad_same2_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4175255Z test_torch_nn_Conv1d_pad_same_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4175588Z test_torch_nn_Conv1d_pad_same_dilated (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4175992Z test_torch_nn_Conv1d_pad_same_dilated_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4176314Z test_torch_nn_Conv1d_pad_valid (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4176640Z test_torch_nn_Conv1d_pad_valid_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4176978Z test_torch_nn_Conv1d_reflect_stride2_pad2 (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4177315Z test_torch_nn_Conv1d_reflect_stride2_pad2_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4177665Z test_torch_nn_Conv1d_replicate_stride2_pad2 (__main__.TestCppApiParity) ... 
ok (0.012s) 2023-01-11T21:59:54.4178024Z test_torch_nn_Conv1d_replicate_stride2_pad2_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4178360Z test_torch_nn_Conv1d_stride (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4178668Z test_torch_nn_Conv1d_stride_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4178994Z test_torch_nn_Conv1d_zero_batch (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4179322Z test_torch_nn_Conv1d_zero_batch_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4179653Z test_torch_nn_Conv1d_zeros_stride2_pad2 (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4179987Z test_torch_nn_Conv1d_zeros_stride2_pad2_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4180305Z test_torch_nn_Conv2d (__main__.TestCppApiParity) ... ok (0.014s) 2023-01-11T21:59:54.4180607Z test_torch_nn_Conv2d_circular_stride2_pad2 (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4180949Z test_torch_nn_Conv2d_circular_stride2_pad2_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4181301Z test_torch_nn_Conv2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4181623Z test_torch_nn_Conv2d_depthwise (__main__.TestCppApiParity) ... ok (0.014s) 2023-01-11T21:59:54.4181956Z test_torch_nn_Conv2d_depthwise_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4182279Z test_torch_nn_Conv2d_depthwise_dilated (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4182822Z test_torch_nn_Conv2d_depthwise_dilated_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4183169Z test_torch_nn_Conv2d_depthwise_padded (__main__.TestCppApiParity) ... ok (0.015s) 2023-01-11T21:59:54.4183505Z test_torch_nn_Conv2d_depthwise_padded_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4183849Z test_torch_nn_Conv2d_depthwise_strided (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4184198Z test_torch_nn_Conv2d_depthwise_strided_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4184554Z test_torch_nn_Conv2d_depthwise_with_multiplier (__main__.TestCppApiParity) ... ok (0.023s) 2023-01-11T21:59:54.4184971Z test_torch_nn_Conv2d_depthwise_with_multiplier_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4185312Z test_torch_nn_Conv2d_dilated (__main__.TestCppApiParity) ... ok (0.016s) 2023-01-11T21:59:54.4185639Z test_torch_nn_Conv2d_dilated_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4185963Z test_torch_nn_Conv2d_groups (__main__.TestCppApiParity) ... ok (0.015s) 2023-01-11T21:59:54.4186274Z test_torch_nn_Conv2d_groups_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4186598Z test_torch_nn_Conv2d_groups_thnn (__main__.TestCppApiParity) ... ok (0.015s) 2023-01-11T21:59:54.4186931Z test_torch_nn_Conv2d_groups_thnn_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4187241Z test_torch_nn_Conv2d_no_bias (__main__.TestCppApiParity) ... ok (0.014s) 2023-01-11T21:59:54.4187606Z test_torch_nn_Conv2d_no_bias_cuda (__main__.TestCppApiParity) ... 
skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4187933Z test_torch_nn_Conv2d_pad_same (__main__.TestCppApiParity) ... ok (0.016s) 2023-01-11T21:59:54.4188260Z test_torch_nn_Conv2d_pad_same_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4188579Z test_torch_nn_Conv2d_pad_same_dilated (__main__.TestCppApiParity) ... ok (0.016s) 2023-01-11T21:59:54.4188925Z test_torch_nn_Conv2d_pad_same_dilated_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4189260Z test_torch_nn_Conv2d_pad_valid (__main__.TestCppApiParity) ... ok (0.014s) 2023-01-11T21:59:54.4189583Z test_torch_nn_Conv2d_pad_valid_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4189907Z test_torch_nn_Conv2d_padding (__main__.TestCppApiParity) ... ok (0.014s) 2023-01-11T21:59:54.4190238Z test_torch_nn_Conv2d_padding_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4190576Z test_torch_nn_Conv2d_reflect_stride2_pad2 (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4190913Z test_torch_nn_Conv2d_reflect_stride2_pad2_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4191262Z test_torch_nn_Conv2d_replicate_stride2_pad2 (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4191623Z test_torch_nn_Conv2d_replicate_stride2_pad2_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4191956Z test_torch_nn_Conv2d_strided (__main__.TestCppApiParity) ... ok (0.016s) 2023-01-11T21:59:54.4192267Z test_torch_nn_Conv2d_strided_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4192591Z test_torch_nn_Conv2d_zero_batch (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4192919Z test_torch_nn_Conv2d_zero_batch_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4193243Z test_torch_nn_Conv2d_zeros_stride2_pad2 (__main__.TestCppApiParity) ... ok (0.014s) 2023-01-11T21:59:54.4193588Z test_torch_nn_Conv2d_zeros_stride2_pad2_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4193905Z test_torch_nn_Conv3d (__main__.TestCppApiParity) ... ok (0.016s) 2023-01-11T21:59:54.4194196Z test_torch_nn_Conv3d_1x1x1_no_bias (__main__.TestCppApiParity) ... ok (0.015s) 2023-01-11T21:59:54.4194514Z test_torch_nn_Conv3d_1x1x1_no_bias_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4194853Z test_torch_nn_Conv3d_circular_stride2_pad2 (__main__.TestCppApiParity) ... ok (0.018s) 2023-01-11T21:59:54.4195205Z test_torch_nn_Conv3d_circular_stride2_pad2_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4195545Z test_torch_nn_Conv3d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4195858Z test_torch_nn_Conv3d_dilated (__main__.TestCppApiParity) ... ok (0.016s) 2023-01-11T21:59:54.4196219Z test_torch_nn_Conv3d_dilated_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4196547Z test_torch_nn_Conv3d_dilated_strided (__main__.TestCppApiParity) ... ok (0.014s) 2023-01-11T21:59:54.4196874Z test_torch_nn_Conv3d_dilated_strided_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4197202Z test_torch_nn_Conv3d_groups (__main__.TestCppApiParity) ... 
ok (0.015s) 2023-01-11T21:59:54.4197523Z test_torch_nn_Conv3d_groups_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4197929Z test_torch_nn_Conv3d_no_bias (__main__.TestCppApiParity) ... ok (0.014s) 2023-01-11T21:59:54.4198238Z test_torch_nn_Conv3d_no_bias_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4198556Z test_torch_nn_Conv3d_pad_same (__main__.TestCppApiParity) ... ok (0.028s) 2023-01-11T21:59:54.4198924Z test_torch_nn_Conv3d_pad_same_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4199248Z test_torch_nn_Conv3d_pad_same_dilated (__main__.TestCppApiParity) ... ok (0.026s) 2023-01-11T21:59:54.4199591Z test_torch_nn_Conv3d_pad_same_dilated_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4199921Z test_torch_nn_Conv3d_pad_valid (__main__.TestCppApiParity) ... ok (0.019s) 2023-01-11T21:59:54.4200246Z test_torch_nn_Conv3d_pad_valid_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4200573Z test_torch_nn_Conv3d_replicate_stride2_pad2 (__main__.TestCppApiParity) ... ok (0.017s) 2023-01-11T21:59:54.4200930Z test_torch_nn_Conv3d_replicate_stride2_pad2_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4201263Z test_torch_nn_Conv3d_stride (__main__.TestCppApiParity) ... ok (0.016s) 2023-01-11T21:59:54.4201574Z test_torch_nn_Conv3d_stride_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4201907Z test_torch_nn_Conv3d_stride_padding (__main__.TestCppApiParity) ... ok (0.018s) 2023-01-11T21:59:54.4202245Z test_torch_nn_Conv3d_stride_padding_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4202575Z test_torch_nn_Conv3d_zero_batch (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4202892Z test_torch_nn_Conv3d_zero_batch_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4203222Z test_torch_nn_Conv3d_zeros_stride2_pad2 (__main__.TestCppApiParity) ... ok (0.017s) 2023-01-11T21:59:54.4203567Z test_torch_nn_Conv3d_zeros_stride2_pad2_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4203896Z test_torch_nn_ConvTranspose1d (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4204219Z test_torch_nn_ConvTranspose1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4204563Z test_torch_nn_ConvTranspose1d_dilated (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4204912Z test_torch_nn_ConvTranspose1d_dilated_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4205246Z test_torch_nn_ConvTranspose1d_groups (__main__.TestCppApiParity) ... ok (0.015s) 2023-01-11T21:59:54.4205593Z test_torch_nn_ConvTranspose1d_groups_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4205934Z test_torch_nn_ConvTranspose1d_no_bias (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4206278Z test_torch_nn_ConvTranspose1d_no_bias_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4206599Z test_torch_nn_ConvTranspose2d (__main__.TestCppApiParity) ... ok (0.024s) 2023-01-11T21:59:54.4206932Z test_torch_nn_ConvTranspose2d_cuda (__main__.TestCppApiParity) ... 
skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4207309Z test_torch_nn_ConvTranspose2d_dilated (__main__.TestCppApiParity) ... ok (0.015s) 2023-01-11T21:59:54.4207645Z test_torch_nn_ConvTranspose2d_dilated_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4207991Z test_torch_nn_ConvTranspose2d_groups (__main__.TestCppApiParity) ... ok (0.014s) 2023-01-11T21:59:54.4208336Z test_torch_nn_ConvTranspose2d_groups_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4208675Z test_torch_nn_ConvTranspose2d_no_bias (__main__.TestCppApiParity) ... ok (0.023s) 2023-01-11T21:59:54.4209003Z test_torch_nn_ConvTranspose2d_no_bias_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4209335Z test_torch_nn_ConvTranspose3d (__main__.TestCppApiParity) ... ok (0.018s) 2023-01-11T21:59:54.4209669Z test_torch_nn_ConvTranspose3d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4210035Z test_torch_nn_ConvTranspose3d_dilated (__main__.TestCppApiParity) ... ok (0.023s) 2023-01-11T21:59:54.4210376Z test_torch_nn_ConvTranspose3d_dilated_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4210718Z test_torch_nn_CosineEmbeddingLoss (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4211067Z test_torch_nn_CosineEmbeddingLoss_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4211409Z test_torch_nn_CosineEmbeddingLoss_margin (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4211770Z test_torch_nn_CosineEmbeddingLoss_margin_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4212139Z test_torch_nn_CosineEmbeddingLoss_no_batch_dim_mean (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4212522Z test_torch_nn_CosineEmbeddingLoss_no_batch_dim_mean_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4212890Z test_torch_nn_CosineEmbeddingLoss_no_batch_dim_none (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4213274Z test_torch_nn_CosineEmbeddingLoss_no_batch_dim_none_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4213648Z test_torch_nn_CosineEmbeddingLoss_no_batch_dim_sum (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4214027Z test_torch_nn_CosineEmbeddingLoss_no_batch_dim_sum_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4214365Z test_torch_nn_CrossEntropyLoss (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4214677Z test_torch_nn_CrossEntropyLoss_2d (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4215019Z test_torch_nn_CrossEntropyLoss_2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4215358Z test_torch_nn_CrossEntropyLoss_2d_ignore_index (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4215732Z test_torch_nn_CrossEntropyLoss_2d_ignore_index_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4216109Z test_torch_nn_CrossEntropyLoss_2d_indices_target_smoothing (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4216498Z test_torch_nn_CrossEntropyLoss_2d_indices_target_smoothing_cuda (__main__.TestCppApiParity) ... 
skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4216891Z test_torch_nn_CrossEntropyLoss_2d_indices_target_smoothing_ignore_index (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4217306Z test_torch_nn_CrossEntropyLoss_2d_indices_target_smoothing_ignore_index_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4217721Z test_torch_nn_CrossEntropyLoss_2d_indices_target_smoothing_sum_reduction (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4218142Z test_torch_nn_CrossEntropyLoss_2d_indices_target_smoothing_sum_reduction_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4218583Z test_torch_nn_CrossEntropyLoss_2d_indices_target_smoothing_weight (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4218990Z test_torch_nn_CrossEntropyLoss_2d_indices_target_smoothing_weight_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4219369Z test_torch_nn_CrossEntropyLoss_2d_prob_target (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4219734Z test_torch_nn_CrossEntropyLoss_2d_prob_target_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4220095Z test_torch_nn_CrossEntropyLoss_2d_prob_target_smoothing (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4220478Z test_torch_nn_CrossEntropyLoss_2d_prob_target_smoothing_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4220872Z test_torch_nn_CrossEntropyLoss_2d_prob_target_smoothing_sum_reduction (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4221321Z test_torch_nn_CrossEntropyLoss_2d_prob_target_smoothing_sum_reduction_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4221712Z test_torch_nn_CrossEntropyLoss_2d_prob_target_smoothing_weight (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4222107Z test_torch_nn_CrossEntropyLoss_2d_prob_target_smoothing_weight_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4222695Z test_torch_nn_CrossEntropyLoss_2d_prob_target_weights (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4223077Z test_torch_nn_CrossEntropyLoss_2d_prob_target_weights_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4223429Z test_torch_nn_CrossEntropyLoss_2d_weights (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4223787Z test_torch_nn_CrossEntropyLoss_2d_weights_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4224166Z test_torch_nn_CrossEntropyLoss_3d_indices_target_smoothing (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4224546Z test_torch_nn_CrossEntropyLoss_3d_indices_target_smoothing_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4224948Z test_torch_nn_CrossEntropyLoss_3d_indices_target_smoothing_ignore_index (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4225360Z test_torch_nn_CrossEntropyLoss_3d_indices_target_smoothing_ignore_index_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4225773Z test_torch_nn_CrossEntropyLoss_3d_indices_target_smoothing_sum_reduction (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4226179Z test_torch_nn_CrossEntropyLoss_3d_indices_target_smoothing_sum_reduction_cuda (__main__.TestCppApiParity) ... 
skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4226604Z test_torch_nn_CrossEntropyLoss_3d_indices_target_smoothing_sum_reduction_ignore_index (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4227049Z test_torch_nn_CrossEntropyLoss_3d_indices_target_smoothing_sum_reduction_ignore_index_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4227446Z test_torch_nn_CrossEntropyLoss_3d_prob_target (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4227799Z test_torch_nn_CrossEntropyLoss_3d_prob_target_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4228168Z test_torch_nn_CrossEntropyLoss_3d_prob_target_smoothing (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4228550Z test_torch_nn_CrossEntropyLoss_3d_prob_target_smoothing_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4228945Z test_torch_nn_CrossEntropyLoss_3d_prob_target_smoothing_sum_reduction (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4229347Z test_torch_nn_CrossEntropyLoss_3d_prob_target_smoothing_sum_reduction_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4229806Z test_torch_nn_CrossEntropyLoss_3d_prob_target_weights (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4230187Z test_torch_nn_CrossEntropyLoss_3d_prob_target_weights_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4230554Z test_torch_nn_CrossEntropyLoss_4d_prob_target (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4230906Z test_torch_nn_CrossEntropyLoss_4d_prob_target_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4231273Z test_torch_nn_CrossEntropyLoss_4d_prob_target_weights (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4231652Z test_torch_nn_CrossEntropyLoss_4d_prob_target_weights_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4232071Z test_torch_nn_CrossEntropyLoss_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4232402Z test_torch_nn_CrossEntropyLoss_dim_is_3 (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4232753Z test_torch_nn_CrossEntropyLoss_dim_is_3_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4233107Z test_torch_nn_CrossEntropyLoss_higher_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4233466Z test_torch_nn_CrossEntropyLoss_higher_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4233804Z test_torch_nn_CrossEntropyLoss_weights (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4234160Z test_torch_nn_CrossEntropyLoss_weights_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4234498Z test_torch_nn_CrossMapLRN2d (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4234820Z test_torch_nn_CrossMapLRN2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4235141Z test_torch_nn_ELU (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4235445Z test_torch_nn_ELU_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4235754Z test_torch_nn_ELU_scalar (__main__.TestCppApiParity) ... 
ok (0.009s) 2023-01-11T21:59:54.4236062Z test_torch_nn_ELU_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4236377Z test_torch_nn_Embedding (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4236690Z test_torch_nn_EmbeddingBag_discontiguous (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4237038Z test_torch_nn_EmbeddingBag_discontiguous_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4237379Z test_torch_nn_EmbeddingBag_max (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4237772Z test_torch_nn_EmbeddingBag_max_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4238125Z test_torch_nn_EmbeddingBag_max_padding_idx (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4238471Z test_torch_nn_EmbeddingBag_max_padding_idx_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4238812Z test_torch_nn_EmbeddingBag_mean (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4239154Z test_torch_nn_EmbeddingBag_mean_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4239493Z test_torch_nn_EmbeddingBag_mean_padding_idx (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4239853Z test_torch_nn_EmbeddingBag_mean_padding_idx_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4240197Z test_torch_nn_EmbeddingBag_sparse (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4240540Z test_torch_nn_EmbeddingBag_sparse_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4240903Z test_torch_nn_EmbeddingBag_sum (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4241238Z test_torch_nn_EmbeddingBag_sum_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4241580Z test_torch_nn_EmbeddingBag_sum_padding_idx (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4241941Z test_torch_nn_EmbeddingBag_sum_padding_idx_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4242286Z test_torch_nn_Embedding_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4242617Z test_torch_nn_Embedding_discontiguous (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4242969Z test_torch_nn_Embedding_discontiguous_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4243295Z test_torch_nn_Embedding_sparse (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4243658Z test_torch_nn_Embedding_sparse_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4243982Z test_torch_nn_Flatten (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4244293Z test_torch_nn_Flatten_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4244603Z test_torch_nn_Flatten_no_batch_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4244940Z test_torch_nn_Flatten_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4245250Z test_torch_nn_Fold (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4245542Z test_torch_nn_Fold_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4245850Z test_torch_nn_Fold_int_input (__main__.TestCppApiParity) ... 
ok (0.010s) 2023-01-11T21:59:54.4246174Z test_torch_nn_Fold_int_input_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4246512Z test_torch_nn_Fold_no_batch_dim_input (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4246841Z test_torch_nn_Fold_no_batch_dim_input_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4247180Z test_torch_nn_Fold_no_batch_dim_int_input (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4247525Z test_torch_nn_Fold_no_batch_dim_int_input_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4247874Z test_torch_nn_FractionalMaxPool2d_ratio (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4248224Z test_torch_nn_FractionalMaxPool2d_ratio_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4248595Z test_torch_nn_FractionalMaxPool2d_ratio_no_batch_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4248979Z test_torch_nn_FractionalMaxPool2d_ratio_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4249367Z test_torch_nn_FractionalMaxPool2d_ratio_no_batch_dim_no_random_samples (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4249790Z test_torch_nn_FractionalMaxPool2d_ratio_no_batch_dim_no_random_samples_cuda (__main__.TestCppApiParity) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:59:54.4250178Z test_torch_nn_FractionalMaxPool2d_size (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4250537Z test_torch_nn_FractionalMaxPool2d_size_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4250890Z test_torch_nn_FractionalMaxPool2d_size_no_batch_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4251270Z test_torch_nn_FractionalMaxPool2d_size_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4251671Z test_torch_nn_FractionalMaxPool2d_size_no_batch_dim_no_random_samples (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4252133Z test_torch_nn_FractionalMaxPool2d_size_no_batch_dim_no_random_samples_cuda (__main__.TestCppApiParity) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:59:54.4252514Z test_torch_nn_FractionalMaxPool3d_asymsize (__main__.TestCppApiParity) ... ok (0.016s) 2023-01-11T21:59:54.4252879Z test_torch_nn_FractionalMaxPool3d_asymsize_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4253239Z test_torch_nn_FractionalMaxPool3d_ratio (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4253603Z test_torch_nn_FractionalMaxPool3d_ratio_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4253959Z test_torch_nn_FractionalMaxPool3d_ratio_no_batch_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4254341Z test_torch_nn_FractionalMaxPool3d_ratio_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4254787Z test_torch_nn_FractionalMaxPool3d_ratio_no_batch_dim_no_random_samples (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4255213Z test_torch_nn_FractionalMaxPool3d_ratio_no_batch_dim_no_random_samples_cuda (__main__.TestCppApiParity) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:59:54.4255588Z test_torch_nn_FractionalMaxPool3d_size (__main__.TestCppApiParity) ... 
ok (0.014s) 2023-01-11T21:59:54.4255947Z test_torch_nn_FractionalMaxPool3d_size_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4256311Z test_torch_nn_FractionalMaxPool3d_size_no_batch_dim (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4256679Z test_torch_nn_FractionalMaxPool3d_size_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4257080Z test_torch_nn_FractionalMaxPool3d_size_no_batch_dim_no_random_samples (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4257505Z test_torch_nn_FractionalMaxPool3d_size_no_batch_dim_no_random_samples_cuda (__main__.TestCppApiParity) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:59:54.4257869Z test_torch_nn_GELU (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4258160Z test_torch_nn_GELU_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4258472Z test_torch_nn_GELU_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4258793Z test_torch_nn_GELU_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4259100Z test_torch_nn_GLU (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4259385Z test_torch_nn_GLU_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4259687Z test_torch_nn_GLU_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4259997Z test_torch_nn_GLU_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4260304Z test_torch_nn_GroupNorm_1d_affine (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4260614Z test_torch_nn_GroupNorm_1d_affine_GN (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4260955Z test_torch_nn_GroupNorm_1d_affine_GN_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4261316Z test_torch_nn_GroupNorm_1d_affine_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4261646Z test_torch_nn_GroupNorm_1d_affine_large_batch (__main__.TestCppApiParity) ... ok (0.020s) 2023-01-11T21:59:54.4262006Z test_torch_nn_GroupNorm_1d_affine_large_batch_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4262500Z test_torch_nn_GroupNorm_1d_no_affine_IN (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4262845Z test_torch_nn_GroupNorm_1d_no_affine_IN_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4263181Z test_torch_nn_GroupNorm_1d_no_affine_LN (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4263591Z test_torch_nn_GroupNorm_1d_no_affine_LN_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4263932Z test_torch_nn_GroupNorm_2d_affine (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4264260Z test_torch_nn_GroupNorm_2d_affine_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4264607Z test_torch_nn_GroupNorm_2d_affine_large_feature (__main__.TestCppApiParity) ... ok (0.026s) 2023-01-11T21:59:54.4264969Z test_torch_nn_GroupNorm_2d_affine_large_feature_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4265321Z test_torch_nn_GroupNorm_2d_no_affine_IN (__main__.TestCppApiParity) ... 
ok (0.010s) 2023-01-11T21:59:54.4265652Z test_torch_nn_GroupNorm_2d_no_affine_IN_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4265988Z test_torch_nn_GroupNorm_2d_no_affine_LN (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4266371Z test_torch_nn_GroupNorm_2d_no_affine_LN_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4266711Z test_torch_nn_GroupNorm_2d_no_affine_large_feature (__main__.TestCppApiParity) ... ok (0.103s) 2023-01-11T21:59:54.4267078Z test_torch_nn_GroupNorm_2d_no_affine_large_feature_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4267414Z test_torch_nn_Hardshrink (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4267733Z test_torch_nn_Hardshrink_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4268047Z test_torch_nn_Hardshrink_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4268382Z test_torch_nn_Hardshrink_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4268701Z test_torch_nn_Hardtanh (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4269022Z test_torch_nn_Hardtanh_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4269328Z test_torch_nn_Hardtanh_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4269656Z test_torch_nn_Hardtanh_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4269991Z test_torch_nn_HingeEmbeddingLoss (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4270328Z test_torch_nn_HingeEmbeddingLoss_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4270678Z test_torch_nn_HingeEmbeddingLoss_margin (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4271037Z test_torch_nn_HingeEmbeddingLoss_margin_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4271417Z test_torch_nn_HingeEmbeddingLoss_no_batch_dim_mean (__main__.TestCppApiParity) ... expected failure (0.008s) 2023-01-11T21:59:54.4271797Z test_torch_nn_HingeEmbeddingLoss_no_batch_dim_mean_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4272191Z test_torch_nn_HingeEmbeddingLoss_no_batch_dim_none (__main__.TestCppApiParity) ... expected failure (0.008s) 2023-01-11T21:59:54.4272579Z test_torch_nn_HingeEmbeddingLoss_no_batch_dim_none_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4272969Z test_torch_nn_HingeEmbeddingLoss_no_batch_dim_sum (__main__.TestCppApiParity) ... expected failure (0.008s) 2023-01-11T21:59:54.4273350Z test_torch_nn_HingeEmbeddingLoss_no_batch_dim_sum_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4273719Z test_torch_nn_HingeEmbeddingLoss_scalar_margin (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4274094Z test_torch_nn_HingeEmbeddingLoss_scalar_margin_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4274417Z test_torch_nn_HuberLoss (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4274777Z test_torch_nn_HuberLoss_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4275096Z test_torch_nn_InstanceNorm1d (__main__.TestCppApiParity) ... 
ok (0.010s) 2023-01-11T21:59:54.4275432Z test_torch_nn_InstanceNorm1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4275757Z test_torch_nn_InstanceNorm1d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4276114Z test_torch_nn_InstanceNorm1d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4276468Z test_torch_nn_InstanceNorm1d_tracking_stats (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4276829Z test_torch_nn_InstanceNorm1d_tracking_stats_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4277185Z test_torch_nn_InstanceNorm1d_tracking_stats_no_batch_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4277614Z test_torch_nn_InstanceNorm1d_tracking_stats_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4278030Z test_torch_nn_InstanceNorm2d (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4278352Z test_torch_nn_InstanceNorm2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4278692Z test_torch_nn_InstanceNorm2d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4279045Z test_torch_nn_InstanceNorm2d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4279398Z test_torch_nn_InstanceNorm2d_tracking_stats (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4279747Z test_torch_nn_InstanceNorm2d_tracking_stats_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4280117Z test_torch_nn_InstanceNorm2d_tracking_stats_no_batch_dim (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4280501Z test_torch_nn_InstanceNorm2d_tracking_stats_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4280850Z test_torch_nn_InstanceNorm3d (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4281168Z test_torch_nn_InstanceNorm3d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4281509Z test_torch_nn_InstanceNorm3d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4281870Z test_torch_nn_InstanceNorm3d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4282211Z test_torch_nn_InstanceNorm3d_tracking_stats (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4282572Z test_torch_nn_InstanceNorm3d_tracking_stats_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4282944Z test_torch_nn_InstanceNorm3d_tracking_stats_no_batch_dim (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4283330Z test_torch_nn_InstanceNorm3d_tracking_stats_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4284324Z test_torch_nn_KLDivLoss (__main__.TestCppApiParity) ... /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:2918: UserWarning: reduction: 'mean' divides the total loss by both the batch size and the support size.'batchmean' divides only by the batch size, and aligns with the KL div math definition.'mean' will be changed to behave the same as 'batchmean' in the next major release. 
2023-01-11T21:59:54.4284859Z warnings.warn( 2023-01-11T21:59:54.4285744Z /var/lib/jenkins/workspace/test/cpp_api_parity/module_impl_check.py:149: UserWarning: reduction: 'mean' divides the total loss by both the batch size and the support size.'batchmean' divides only by the batch size, and aligns with the KL div math definition.'mean' will be changed to behave the same as 'batchmean' in the next major release. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/api/include/torch/nn/functional/loss.h:57.) 2023-01-11T21:59:54.4286474Z cpp_test_fn(arg_dict_file_path, module_file_path, forward_output_file_path, backward_grad_dict_file_path) 2023-01-11T21:59:54.4286736Z ok (0.008s) 2023-01-11T21:59:54.4286986Z test_torch_nn_KLDivLoss_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4287318Z test_torch_nn_KLDivLoss_log_target (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4287657Z test_torch_nn_KLDivLoss_log_target_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4287993Z test_torch_nn_KLDivLoss_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4288320Z test_torch_nn_KLDivLoss_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4288660Z test_torch_nn_KLDivLoss_scalar_log_target (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4289055Z test_torch_nn_KLDivLoss_scalar_log_target_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4289372Z test_torch_nn_L1Loss (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4289681Z test_torch_nn_L1Loss_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4289992Z test_torch_nn_L1Loss_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4290318Z test_torch_nn_L1Loss_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4290623Z test_torch_nn_LPPool1d (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4290943Z test_torch_nn_LPPool1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4291270Z test_torch_nn_LPPool1d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4291602Z test_torch_nn_LPPool1d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4291934Z test_torch_nn_LPPool1d_norm (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4292257Z test_torch_nn_LPPool1d_norm_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4292574Z test_torch_nn_LPPool2d (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4292878Z test_torch_nn_LPPool2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4293194Z test_torch_nn_LPPool2d_norm (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4293520Z test_torch_nn_LPPool2d_norm_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4293865Z test_torch_nn_LayerNorm_1d_elementwise_affine (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4294216Z test_torch_nn_LayerNorm_1d_elementwise_affine_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4294589Z test_torch_nn_LayerNorm_1d_empty_elementwise_affine (__main__.TestCppApiParity) ... 
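The KLDivLoss UserWarning above (emitted once from torch/nn/functional.py and once via the C++ parity harness) describes the two reduction modes: reduction='mean' divides the total loss by both the batch size and the support size, while reduction='batchmean' divides by the batch size only and matches the mathematical definition of KL divergence. A minimal sketch of that distinction, assuming a recent torch build and arbitrary example shapes:

    import torch
    import torch.nn.functional as F

    # input must be log-probabilities, target must be probabilities
    log_probs = F.log_softmax(torch.randn(4, 10), dim=1)
    target = F.softmax(torch.randn(4, 10), dim=1)

    loss_mean = F.kl_div(log_probs, target, reduction="mean")            # divides the summed loss by 4 * 10
    loss_batchmean = F.kl_div(log_probs, target, reduction="batchmean")  # divides the summed loss by 4
    # for this shape, loss_batchmean == loss_mean * 10

For a (batch, support) input of shape (4, 10), 'batchmean' is therefore 10x larger than 'mean'; the warning notes that 'mean' is slated to adopt the 'batchmean' behaviour in a future major release.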
ok (0.009s) 2023-01-11T21:59:54.4294964Z test_torch_nn_LayerNorm_1d_empty_elementwise_affine_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4295318Z test_torch_nn_LayerNorm_1d_no_elementwise_affine (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4295687Z test_torch_nn_LayerNorm_1d_no_elementwise_affine_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4296049Z test_torch_nn_LayerNorm_3d_elementwise_affine (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4296412Z test_torch_nn_LayerNorm_3d_elementwise_affine_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4296764Z test_torch_nn_LayerNorm_3d_no_affine_large_feature (__main__.TestCppApiParity) ... ok (0.055s) 2023-01-11T21:59:54.4297133Z test_torch_nn_LayerNorm_3d_no_affine_large_feature_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4297538Z test_torch_nn_LayerNorm_3d_no_elementwise_affine (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4297904Z test_torch_nn_LayerNorm_3d_no_elementwise_affine_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4298228Z test_torch_nn_LeakyReLU (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4298547Z test_torch_nn_LeakyReLU_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4298872Z test_torch_nn_LeakyReLU_with_negval (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4299205Z test_torch_nn_LeakyReLU_with_negval_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4299551Z test_torch_nn_LeakyReLU_with_negval_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4299906Z test_torch_nn_LeakyReLU_with_negval_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4300301Z test_torch_nn_LeakyReLU_with_zero_negval (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4300639Z test_torch_nn_LeakyReLU_with_zero_negval_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4300962Z test_torch_nn_Linear (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4301271Z test_torch_nn_Linear_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4301594Z test_torch_nn_Linear_no_batch_dim (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4301921Z test_torch_nn_Linear_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4302247Z test_torch_nn_Linear_no_bias (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4302746Z test_torch_nn_Linear_no_bias_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4303075Z test_torch_nn_LocalResponseNorm_1d (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4303429Z test_torch_nn_LocalResponseNorm_1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4303784Z test_torch_nn_LocalResponseNorm_2d_uneven_pad (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4304152Z test_torch_nn_LocalResponseNorm_2d_uneven_pad_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4304508Z test_torch_nn_LocalResponseNorm_3d_custom_params (__main__.TestCppApiParity) ... 
ok (0.028s) 2023-01-11T21:59:54.4304885Z test_torch_nn_LocalResponseNorm_3d_custom_params_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4305225Z test_torch_nn_LogSigmoid (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4305548Z test_torch_nn_LogSigmoid_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4305862Z test_torch_nn_LogSigmoid_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4306204Z test_torch_nn_LogSigmoid_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4306523Z test_torch_nn_LogSoftmax (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4306831Z test_torch_nn_LogSoftmax_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4307162Z test_torch_nn_LogSoftmax_multiparam (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4307508Z test_torch_nn_LogSoftmax_multiparam_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4307859Z test_torch_nn_LogSoftmax_multiparam_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4308203Z test_torch_nn_LogSoftmax_multiparam_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4308548Z test_torch_nn_LogSoftmax_no_batch_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4308968Z test_torch_nn_LogSoftmax_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4309276Z test_torch_nn_MSELoss (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4309586Z test_torch_nn_MSELoss_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4309900Z test_torch_nn_MSELoss_prec (__main__.TestCppApiParity) ... ok (0.038s) 2023-01-11T21:59:54.4310223Z test_torch_nn_MSELoss_prec_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4310530Z test_torch_nn_MSELoss_scalar (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4310854Z test_torch_nn_MSELoss_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4311184Z test_torch_nn_MarginRankingLoss (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4311555Z test_torch_nn_MarginRankingLoss_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4311906Z test_torch_nn_MarginRankingLoss_margin (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4312261Z test_torch_nn_MarginRankingLoss_margin_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4312621Z test_torch_nn_MarginRankingLoss_no_batch_dim_mean (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4312985Z test_torch_nn_MarginRankingLoss_no_batch_dim_mean_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4313357Z test_torch_nn_MarginRankingLoss_no_batch_dim_none (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4313733Z test_torch_nn_MarginRankingLoss_no_batch_dim_none_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4314099Z test_torch_nn_MarginRankingLoss_no_batch_dim_sum (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4314459Z test_torch_nn_MarginRankingLoss_no_batch_dim_sum_cuda (__main__.TestCppApiParity) ... 
skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4314798Z test_torch_nn_MaxPool1d (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4315115Z test_torch_nn_MaxPool1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4315425Z test_torch_nn_MaxPool1d_stride (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4315758Z test_torch_nn_MaxPool1d_stride_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4316085Z test_torch_nn_MaxPool2d_3d_input (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4316419Z test_torch_nn_MaxPool2d_3d_input_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4316736Z test_torch_nn_MaxPool2d_4d_input (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4317066Z test_torch_nn_MaxPool2d_4d_input_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4317386Z test_torch_nn_MaxPool3d (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4317761Z test_torch_nn_MaxPool3d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4318075Z test_torch_nn_MaxPool3d_stride (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4318408Z test_torch_nn_MaxPool3d_stride_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4318745Z test_torch_nn_MaxPool3d_stride_padding (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4319086Z test_torch_nn_MaxPool3d_stride_padding_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4319406Z test_torch_nn_Mish (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4319709Z test_torch_nn_Mish_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4320016Z test_torch_nn_Mish_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4320363Z test_torch_nn_Mish_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4320701Z test_torch_nn_MultiLabelMarginLoss (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4321029Z test_torch_nn_MultiLabelMarginLoss_1d (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4321372Z test_torch_nn_MultiLabelMarginLoss_1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4321749Z test_torch_nn_MultiLabelMarginLoss_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4322117Z test_torch_nn_MultiLabelMarginLoss_no_batch_dim_mean (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4322499Z test_torch_nn_MultiLabelMarginLoss_no_batch_dim_mean_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4322869Z test_torch_nn_MultiLabelMarginLoss_no_batch_dim_none (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4323288Z test_torch_nn_MultiLabelMarginLoss_no_batch_dim_none_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4323670Z test_torch_nn_MultiLabelMarginLoss_no_batch_dim_sum (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4324052Z test_torch_nn_MultiLabelMarginLoss_no_batch_dim_sum_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4324410Z test_torch_nn_MultiLabelSoftMarginLoss (__main__.TestCppApiParity) ... 
ok (0.009s) 2023-01-11T21:59:54.4324781Z test_torch_nn_MultiLabelSoftMarginLoss_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4325162Z test_torch_nn_MultiLabelSoftMarginLoss_no_batch_dim_mean (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4325550Z test_torch_nn_MultiLabelSoftMarginLoss_no_batch_dim_mean_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4325949Z test_torch_nn_MultiLabelSoftMarginLoss_no_batch_dim_none (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4326347Z test_torch_nn_MultiLabelSoftMarginLoss_no_batch_dim_none_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4326742Z test_torch_nn_MultiLabelSoftMarginLoss_no_batch_dim_sum (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4327124Z test_torch_nn_MultiLabelSoftMarginLoss_no_batch_dim_sum_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4327509Z test_torch_nn_MultiLabelSoftMarginLoss_weights (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4327894Z test_torch_nn_MultiLabelSoftMarginLoss_weights_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4328246Z test_torch_nn_MultiMarginLoss (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4328541Z test_torch_nn_MultiMarginLoss_1d (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4328884Z test_torch_nn_MultiMarginLoss_1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4329244Z test_torch_nn_MultiMarginLoss_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4329583Z test_torch_nn_MultiMarginLoss_margin (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4329918Z test_torch_nn_MultiMarginLoss_margin_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4330257Z test_torch_nn_MultiMarginLoss_p (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4330596Z test_torch_nn_MultiMarginLoss_p_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4330924Z test_torch_nn_MultiMarginLoss_weights (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4331277Z test_torch_nn_MultiMarginLoss_weights_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4331642Z test_torch_nn_NLLLoss (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4331925Z test_torch_nn_NLLLoss_2d (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4332225Z test_torch_nn_NLLLoss_2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4332552Z test_torch_nn_NLLLoss_2d_ignore_index (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4332896Z test_torch_nn_NLLLoss_2d_ignore_index_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4333218Z test_torch_nn_NLLLoss_2d_weights (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4333553Z test_torch_nn_NLLLoss_2d_weights_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4333899Z test_torch_nn_NLLLoss_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4334215Z test_torch_nn_NLLLoss_dim_is_3 (__main__.TestCppApiParity) ... 
ok (0.009s) 2023-01-11T21:59:54.4334582Z test_torch_nn_NLLLoss_dim_is_3_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4334913Z test_torch_nn_NLLLoss_higher_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4335250Z test_torch_nn_NLLLoss_higher_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4335586Z test_torch_nn_NLLLoss_ignore_index (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4350561Z test_torch_nn_NLLLoss_ignore_index_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4351072Z test_torch_nn_NLLLoss_no_batch_dim_mean (__main__.TestCppApiParity) ... expected failure (0.015s) 2023-01-11T21:59:54.4351439Z test_torch_nn_NLLLoss_no_batch_dim_mean_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4351808Z test_torch_nn_NLLLoss_no_batch_dim_none (__main__.TestCppApiParity) ... expected failure (0.011s) 2023-01-11T21:59:54.4352186Z test_torch_nn_NLLLoss_no_batch_dim_none_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4352546Z test_torch_nn_NLLLoss_no_batch_dim_sum (__main__.TestCppApiParity) ... expected failure (0.011s) 2023-01-11T21:59:54.4352894Z test_torch_nn_NLLLoss_no_batch_dim_sum_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4353226Z test_torch_nn_NLLLoss_weights (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4353561Z test_torch_nn_NLLLoss_weights_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4353888Z test_torch_nn_NLLLoss_weights_ignore_index (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4354241Z test_torch_nn_NLLLoss_weights_ignore_index_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4354593Z test_torch_nn_NLLLoss_weights_ignore_index_neg (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4354961Z test_torch_nn_NLLLoss_weights_ignore_index_neg_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4355283Z test_torch_nn_PReLU_1d (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4355598Z test_torch_nn_PReLU_1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4355923Z test_torch_nn_PReLU_1d_multiparam (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4356265Z test_torch_nn_PReLU_1d_multiparam_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4356574Z test_torch_nn_PReLU_2d (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4356887Z test_torch_nn_PReLU_2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4357207Z test_torch_nn_PReLU_2d_multiparam (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4357538Z test_torch_nn_PReLU_2d_multiparam_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4358076Z test_torch_nn_PReLU_3d (__main__.TestCppApiParity) ... ok (0.018s) 2023-01-11T21:59:54.4358389Z test_torch_nn_PReLU_3d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4358713Z test_torch_nn_PReLU_3d_multiparam (__main__.TestCppApiParity) ... ok (0.018s) 2023-01-11T21:59:54.4359038Z test_torch_nn_PReLU_3d_multiparam_cuda (__main__.TestCppApiParity) ... 
skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4359362Z test_torch_nn_PReLU_scalar (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4359686Z test_torch_nn_PReLU_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4359999Z test_torch_nn_PairwiseDistance (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4360323Z test_torch_nn_PairwiseDistance_broadcast_lhs (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4360746Z test_torch_nn_PairwiseDistance_broadcast_lhs_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4361116Z test_torch_nn_PairwiseDistance_broadcast_rhs (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4361474Z test_torch_nn_PairwiseDistance_broadcast_rhs_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4361853Z test_torch_nn_PairwiseDistance_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4362199Z test_torch_nn_PairwiseDistance_no_batch_dim (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4362564Z test_torch_nn_PairwiseDistance_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4362920Z test_torch_nn_PairwiseDistance_with_non_default_args (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4363304Z test_torch_nn_PairwiseDistance_with_non_default_args_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4363660Z test_torch_nn_PixelShuffle (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4363976Z test_torch_nn_PixelShuffle_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4364301Z test_torch_nn_PixelUnshuffle (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4364633Z test_torch_nn_PixelUnshuffle_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4364972Z test_torch_nn_PoissonNLLLoss_full_loss (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4365311Z test_torch_nn_PoissonNLLLoss_full_loss_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4365668Z test_torch_nn_PoissonNLLLoss_full_loss_no_log_input (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4366040Z test_torch_nn_PoissonNLLLoss_full_loss_no_log_input_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4366403Z test_torch_nn_PoissonNLLLoss_no_full_loss (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4366748Z test_torch_nn_PoissonNLLLoss_no_full_loss_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4367112Z test_torch_nn_PoissonNLLLoss_no_full_loss_no_log_input (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4367491Z test_torch_nn_PoissonNLLLoss_no_full_loss_no_log_input_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4367815Z test_torch_nn_RReLU (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4368130Z test_torch_nn_RReLU_cuda (__main__.TestCppApiParity) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:59:54.4368454Z test_torch_nn_RReLU_with_up_down (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4368792Z test_torch_nn_RReLU_with_up_down_cuda (__main__.TestCppApiParity) ... 
skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:59:54.4369122Z test_torch_nn_RReLU_with_up_down_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4369509Z test_torch_nn_RReLU_with_up_down_scalar_cuda (__main__.TestCppApiParity) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:59:54.4369830Z test_torch_nn_ReLU (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4370092Z test_torch_nn_ReLU6 (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4370395Z test_torch_nn_ReLU6_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4370705Z test_torch_nn_ReLU6_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4371025Z test_torch_nn_ReLU6_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4371348Z test_torch_nn_ReLU_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4371659Z test_torch_nn_ReLU_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4372011Z test_torch_nn_ReLU_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4372329Z test_torch_nn_ReflectionPad1d (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4372640Z test_torch_nn_ReflectionPad1d_batch (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4372990Z test_torch_nn_ReflectionPad1d_batch_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4373337Z test_torch_nn_ReflectionPad1d_complex (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4373676Z test_torch_nn_ReflectionPad1d_complex_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4374048Z test_torch_nn_ReflectionPad1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4374378Z test_torch_nn_ReflectionPad2d (__main__.TestCppApiParity) ... ok (0.018s) 2023-01-11T21:59:54.4374694Z test_torch_nn_ReflectionPad2d_complex (__main__.TestCppApiParity) ... ok (0.026s) 2023-01-11T21:59:54.4375041Z test_torch_nn_ReflectionPad2d_complex_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4375407Z test_torch_nn_ReflectionPad2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4375749Z test_torch_nn_ReflectionPad2d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4376094Z test_torch_nn_ReflectionPad2d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4376438Z test_torch_nn_ReflectionPad3d (__main__.TestCppApiParity) ... ok (0.031s) 2023-01-11T21:59:54.4376751Z test_torch_nn_ReflectionPad3d_complex (__main__.TestCppApiParity) ... ok (0.049s) 2023-01-11T21:59:54.4377103Z test_torch_nn_ReflectionPad3d_complex_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4377458Z test_torch_nn_ReflectionPad3d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4377806Z test_torch_nn_ReflectionPad3d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.020s) 2023-01-11T21:59:54.4378165Z test_torch_nn_ReflectionPad3d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4378509Z test_torch_nn_ReplicationPad1d (__main__.TestCppApiParity) ... 
ok (0.009s) 2023-01-11T21:59:54.4378814Z test_torch_nn_ReplicationPad1d_batch (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4379168Z test_torch_nn_ReplicationPad1d_batch_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4379521Z test_torch_nn_ReplicationPad1d_complex (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4379869Z test_torch_nn_ReplicationPad1d_complex_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4380241Z test_torch_nn_ReplicationPad1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4380619Z test_torch_nn_ReplicationPad2d (__main__.TestCppApiParity) ... ok (0.018s) 2023-01-11T21:59:54.4380937Z test_torch_nn_ReplicationPad2d_complex (__main__.TestCppApiParity) ... ok (0.019s) 2023-01-11T21:59:54.4381280Z test_torch_nn_ReplicationPad2d_complex_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4381655Z test_torch_nn_ReplicationPad2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4382006Z test_torch_nn_ReplicationPad2d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4382487Z test_torch_nn_ReplicationPad2d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4382859Z test_torch_nn_ReplicationPad3d (__main__.TestCppApiParity) ... ok (0.019s) 2023-01-11T21:59:54.4383177Z test_torch_nn_ReplicationPad3d_complex (__main__.TestCppApiParity) ... ok (0.029s) 2023-01-11T21:59:54.4383587Z test_torch_nn_ReplicationPad3d_complex_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4383951Z test_torch_nn_ReplicationPad3d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4384299Z test_torch_nn_ReplicationPad3d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.015s) 2023-01-11T21:59:54.4384661Z test_torch_nn_ReplicationPad3d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4384989Z test_torch_nn_SELU (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4385279Z test_torch_nn_SELU_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4385588Z test_torch_nn_SELU_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4385906Z test_torch_nn_SELU_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4386227Z test_torch_nn_SampleModule_has_parity (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4386582Z test_torch_nn_SampleModule_has_parity_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4386942Z test_torch_nn_SampleModule_no_parity (__main__.TestCppApiParity) ... expected failure (0.011s) 2023-01-11T21:59:54.4387303Z test_torch_nn_SampleModule_no_parity_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4387610Z test_torch_nn_SiLU (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4387913Z test_torch_nn_SiLU_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4388221Z test_torch_nn_SiLU_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4388527Z test_torch_nn_SiLU_scalar_cuda (__main__.TestCppApiParity) ... 
skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4388836Z test_torch_nn_Sigmoid (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4389144Z test_torch_nn_Sigmoid_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4389461Z test_torch_nn_Sigmoid_scalar (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4389773Z test_torch_nn_Sigmoid_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4390095Z test_torch_nn_SmoothL1Loss (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4390418Z test_torch_nn_SmoothL1Loss_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4390730Z test_torch_nn_SmoothL1Loss_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4391065Z test_torch_nn_SmoothL1Loss_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4391394Z test_torch_nn_SoftMarginLoss (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4391722Z test_torch_nn_SoftMarginLoss_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4392073Z test_torch_nn_SoftMarginLoss_no_batch_dim_mean (__main__.TestCppApiParity) ... expected failure (0.008s) 2023-01-11T21:59:54.4392504Z test_torch_nn_SoftMarginLoss_no_batch_dim_mean_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4392882Z test_torch_nn_SoftMarginLoss_no_batch_dim_none (__main__.TestCppApiParity) ... expected failure (0.008s) 2023-01-11T21:59:54.4393260Z test_torch_nn_SoftMarginLoss_no_batch_dim_none_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4393623Z test_torch_nn_SoftMarginLoss_no_batch_dim_sum (__main__.TestCppApiParity) ... expected failure (0.008s) 2023-01-11T21:59:54.4393995Z test_torch_nn_SoftMarginLoss_no_batch_dim_sum_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4394324Z test_torch_nn_Softmax (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4394597Z test_torch_nn_Softmax2d (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4394944Z test_torch_nn_Softmax2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4395270Z test_torch_nn_Softmax2d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4395608Z test_torch_nn_Softmax2d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4395947Z test_torch_nn_Softmax_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4396266Z test_torch_nn_Softmax_no_batch_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4396599Z test_torch_nn_Softmax_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4396920Z test_torch_nn_Softmax_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4397232Z test_torch_nn_Softmax_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4397545Z test_torch_nn_Softmin (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4397914Z test_torch_nn_Softmin_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4398218Z test_torch_nn_Softmin_multidim (__main__.TestCppApiParity) ... 
ok (0.011s) 2023-01-11T21:59:54.4398548Z test_torch_nn_Softmin_multidim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4398874Z test_torch_nn_Softmin_no_batch_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4399205Z test_torch_nn_Softmin_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4399513Z test_torch_nn_Softmin_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4399842Z test_torch_nn_Softmin_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4400154Z test_torch_nn_Softplus (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4400434Z test_torch_nn_Softplus_beta (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4400759Z test_torch_nn_Softplus_beta_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4401092Z test_torch_nn_Softplus_beta_threshold (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4401437Z test_torch_nn_Softplus_beta_threshold_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4401771Z test_torch_nn_Softplus_beta_threshold_scalar (__main__.TestCppApiParity) ... ok (0.007s) 2023-01-11T21:59:54.4402131Z test_torch_nn_Softplus_beta_threshold_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4402487Z test_torch_nn_Softplus_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4402781Z test_torch_nn_Softshrink (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4403099Z test_torch_nn_Softshrink_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4403463Z test_torch_nn_Softshrink_lambda (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4403798Z test_torch_nn_Softshrink_lambda_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4404125Z test_torch_nn_Softshrink_lambda_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4404474Z test_torch_nn_Softshrink_lambda_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4404796Z test_torch_nn_Softsign (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4405106Z test_torch_nn_Softsign_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4405406Z test_torch_nn_Softsign_scalar (__main__.TestCppApiParity) ... ok (0.007s) 2023-01-11T21:59:54.4405732Z test_torch_nn_Softsign_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4406050Z test_torch_nn_Tanh (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4406375Z test_torch_nn_Tanh_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4406684Z test_torch_nn_Tanh_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4407001Z test_torch_nn_Tanh_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4407312Z test_torch_nn_Tanhshrink (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4407618Z test_torch_nn_Tanhshrink_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4407937Z test_torch_nn_Tanhshrink_scalar (__main__.TestCppApiParity) ... 
ok (0.008s) 2023-01-11T21:59:54.4408265Z test_torch_nn_Tanhshrink_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4408749Z test_torch_nn_Threshold_large_value (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4409092Z test_torch_nn_Threshold_large_value_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4409436Z test_torch_nn_Threshold_threshold_value (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4409787Z test_torch_nn_Threshold_threshold_value_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4410126Z test_torch_nn_Threshold_threshold_value_scalar (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4410488Z test_torch_nn_Threshold_threshold_value_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4410867Z test_torch_nn_TransformerDecoderLayer_gelu_activation (__main__.TestCppApiParity) ... ok (0.052s) 2023-01-11T21:59:54.4411267Z test_torch_nn_TransformerDecoderLayer_gelu_activation_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4411650Z test_torch_nn_TransformerDecoderLayer_relu_activation (__main__.TestCppApiParity) ... ok (0.052s) 2023-01-11T21:59:54.4412051Z test_torch_nn_TransformerDecoderLayer_relu_activation_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4412447Z test_torch_nn_TransformerEncoderLayer_gelu_activation (__main__.TestCppApiParity) ... ok (0.037s) 2023-01-11T21:59:54.4412830Z test_torch_nn_TransformerEncoderLayer_gelu_activation_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4413223Z test_torch_nn_TransformerEncoderLayer_relu_activation (__main__.TestCppApiParity) ... ok (0.039s) 2023-01-11T21:59:54.4413618Z test_torch_nn_TransformerEncoderLayer_relu_activation_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4413992Z test_torch_nn_Transformer_multilayer_coder (__main__.TestCppApiParity) ... ok (0.173s) 2023-01-11T21:59:54.4414338Z test_torch_nn_Transformer_multilayer_coder_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4414698Z test_torch_nn_TripletMarginLoss_no_batch_dim_mean (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4415117Z test_torch_nn_TripletMarginLoss_no_batch_dim_mean_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4415486Z test_torch_nn_TripletMarginLoss_no_batch_dim_none (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4415848Z test_torch_nn_TripletMarginLoss_no_batch_dim_none_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4416216Z test_torch_nn_TripletMarginLoss_no_batch_dim_sum (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4416583Z test_torch_nn_TripletMarginLoss_no_batch_dim_sum_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4416932Z test_torch_nn_Unflatten_no_batch_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4417264Z test_torch_nn_Unflatten_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4417616Z test_torch_nn_Unfold (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4417930Z test_torch_nn_Unfold_cuda (__main__.TestCppApiParity) ... 
skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4418233Z test_torch_nn_Unfold_int_input (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4418562Z test_torch_nn_Unfold_int_input_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4418882Z test_torch_nn_ZeroPad2d (__main__.TestCppApiParity) ... ok (0.010s) 2023-01-11T21:59:54.4419180Z test_torch_nn_ZeroPad2d_complex (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4419504Z test_torch_nn_ZeroPad2d_complex_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4419854Z test_torch_nn_ZeroPad2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4420184Z test_torch_nn_ZeroPad2d_negative_dims (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4420528Z test_torch_nn_ZeroPad2d_negative_dims_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4420871Z test_torch_nn_ZeroPad2d_no_batch_dim (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4421215Z test_torch_nn_ZeroPad2d_no_batch_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4421564Z test_torch_nn_functional_BCELoss_no_reduce (__main__.TestCppApiParity) ... ok (0.006s) 2023-01-11T21:59:54.4421906Z test_torch_nn_functional_BCELoss_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4422266Z test_torch_nn_functional_BCELoss_no_reduce_scalar (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4422749Z test_torch_nn_functional_BCELoss_no_reduce_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4423118Z test_torch_nn_functional_BCELoss_weights_no_reduce (__main__.TestCppApiParity) ... ok (0.006s) 2023-01-11T21:59:54.4423481Z test_torch_nn_functional_BCELoss_weights_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4423855Z test_torch_nn_functional_BCELoss_weights_no_reduce_scalar (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4424240Z test_torch_nn_functional_BCELoss_weights_no_reduce_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4425058Z test_torch_nn_functional_BCEWithLogitsLoss_legacy_enum (__main__.TestCppApiParity) ... /opt/conda/lib/python3.10/site-packages/torch/nn/_reduction.py:42: UserWarning: size_average and reduce args will be deprecated, please use reduction='none' instead. 2023-01-11T21:59:54.4425499Z warnings.warn(warning.format(ret)) 2023-01-11T21:59:54.4425692Z ok (0.006s) 2023-01-11T21:59:54.4425986Z test_torch_nn_functional_BCEWithLogitsLoss_legacy_enum_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4426360Z test_torch_nn_functional_BCEWithLogitsLoss_no_reduce (__main__.TestCppApiParity) ... ok (0.006s) 2023-01-11T21:59:54.4426837Z test_torch_nn_functional_BCEWithLogitsLoss_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4427221Z test_torch_nn_functional_BCEWithLogitsLoss_no_reduce_scalar (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4427615Z test_torch_nn_functional_BCEWithLogitsLoss_no_reduce_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4428000Z test_torch_nn_functional_HingeEmbeddingLoss_margin_no_reduce (__main__.TestCppApiParity) ... 
ok (0.004s) 2023-01-11T21:59:54.4428402Z test_torch_nn_functional_HingeEmbeddingLoss_margin_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4428790Z test_torch_nn_functional_HingeEmbeddingLoss_no_reduce (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4429216Z test_torch_nn_functional_HingeEmbeddingLoss_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4429570Z test_torch_nn_functional_HuberLoss_delta (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4429926Z test_torch_nn_functional_HuberLoss_delta_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4430278Z test_torch_nn_functional_KLDivLoss_no_reduce (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4430642Z test_torch_nn_functional_KLDivLoss_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4431001Z test_torch_nn_functional_KLDivLoss_no_reduce_log_target (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4431382Z test_torch_nn_functional_KLDivLoss_no_reduce_log_target_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4431756Z test_torch_nn_functional_KLDivLoss_no_reduce_scalar (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4432125Z test_torch_nn_functional_KLDivLoss_no_reduce_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4432510Z test_torch_nn_functional_KLDivLoss_no_reduce_scalar_log_target (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4432908Z test_torch_nn_functional_KLDivLoss_no_reduce_scalar_log_target_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4433299Z test_torch_nn_functional_KLDivLoss_with_log_target_no_reduce (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4433677Z test_torch_nn_functional_KLDivLoss_with_log_target_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4434057Z test_torch_nn_functional_KLDivLoss_with_target_no_reduce (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4434441Z test_torch_nn_functional_KLDivLoss_with_target_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4434809Z test_torch_nn_functional_L1Loss_no_reduce (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4435131Z test_torch_nn_functional_L1Loss_no_reduce_complex (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4435498Z test_torch_nn_functional_L1Loss_no_reduce_complex_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4435883Z test_torch_nn_functional_L1Loss_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4436237Z test_torch_nn_functional_L1Loss_no_reduce_scalar (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4436590Z test_torch_nn_functional_L1Loss_no_reduce_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4436946Z test_torch_nn_functional_MSELoss_no_reduce (__main__.TestCppApiParity) ... ok (0.006s) 2023-01-11T21:59:54.4437303Z test_torch_nn_functional_MSELoss_no_reduce_cuda (__main__.TestCppApiParity) ... 
skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4437757Z test_torch_nn_functional_MSELoss_no_reduce_scalar (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4438116Z test_torch_nn_functional_MSELoss_no_reduce_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4438500Z test_torch_nn_functional_MultiLabelMarginLoss_0d_no_reduce (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4438902Z test_torch_nn_functional_MultiLabelMarginLoss_0d_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4439282Z test_torch_nn_functional_MultiLabelMarginLoss_1d_no_reduce (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4439678Z test_torch_nn_functional_MultiLabelMarginLoss_1d_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4440071Z test_torch_nn_functional_MultiLabelMarginLoss_index_neg (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4440506Z test_torch_nn_functional_MultiLabelMarginLoss_index_neg_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4440885Z test_torch_nn_functional_MultiLabelMarginLoss_no_reduce (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4441283Z test_torch_nn_functional_MultiLabelMarginLoss_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4441677Z test_torch_nn_functional_MultiLabelSoftMarginLoss_no_reduce (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4442089Z test_torch_nn_functional_MultiLabelSoftMarginLoss_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4442492Z test_torch_nn_functional_MultiLabelSoftMarginLoss_weights_no_reduce (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4442924Z test_torch_nn_functional_MultiLabelSoftMarginLoss_weights_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4443327Z test_torch_nn_functional_MultiMarginLoss_1d_no_reduce (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4443708Z test_torch_nn_functional_MultiMarginLoss_1d_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4444082Z test_torch_nn_functional_MultiMarginLoss_margin_no_reduce (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4444472Z test_torch_nn_functional_MultiMarginLoss_margin_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4444848Z test_torch_nn_functional_MultiMarginLoss_no_reduce (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4445224Z test_torch_nn_functional_MultiMarginLoss_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4445583Z test_torch_nn_functional_MultiMarginLoss_p_no_reduce (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4445967Z test_torch_nn_functional_MultiMarginLoss_p_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4446351Z test_torch_nn_functional_MultiMarginLoss_weights_no_reduce (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4446742Z test_torch_nn_functional_MultiMarginLoss_weights_no_reduce_cuda (__main__.TestCppApiParity) ... 
skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4447100Z test_torch_nn_functional_NLLLoss2d_no_reduce (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4447460Z test_torch_nn_functional_NLLLoss2d_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4447831Z test_torch_nn_functional_NLLLoss2d_no_reduce_ignore_index (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4448203Z test_torch_nn_functional_NLLLoss2d_no_reduce_ignore_index_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4448623Z test_torch_nn_functional_NLLLoss2d_no_reduce_weights (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4449000Z test_torch_nn_functional_NLLLoss2d_no_reduce_weights_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4449362Z test_torch_nn_functional_NLLLossNd_no_reduce (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4449713Z test_torch_nn_functional_NLLLossNd_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4450083Z test_torch_nn_functional_NLLLossNd_no_reduce_ignore_index (__main__.TestCppApiParity) ... ok (0.007s) 2023-01-11T21:59:54.4450469Z test_torch_nn_functional_NLLLossNd_no_reduce_ignore_index_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4450847Z test_torch_nn_functional_NLLLossNd_no_reduce_weights (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4451243Z test_torch_nn_functional_NLLLossNd_no_reduce_weights_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4451606Z test_torch_nn_functional_NLLLoss_no_reduce (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4451962Z test_torch_nn_functional_NLLLoss_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4452331Z test_torch_nn_functional_NLLLoss_no_reduce_ignore_index (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4452696Z test_torch_nn_functional_NLLLoss_no_reduce_ignore_index_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4453068Z test_torch_nn_functional_NLLLoss_no_reduce_weights (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4453438Z test_torch_nn_functional_NLLLoss_no_reduce_weights_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4453805Z test_torch_nn_functional_NLLLoss_no_reduce_weights_ignore_index (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4454204Z test_torch_nn_functional_NLLLoss_no_reduce_weights_ignore_index_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4454598Z test_torch_nn_functional_NLLLoss_no_reduce_weights_ignore_index_neg (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4454999Z test_torch_nn_functional_NLLLoss_no_reduce_weights_ignore_index_neg_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4455369Z test_torch_nn_functional_Padding122112_3dcircular (__main__.TestCppApiParity) ... ok (0.007s) 2023-01-11T21:59:54.4455738Z test_torch_nn_functional_Padding122112_3dcircular_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4456100Z test_torch_nn_functional_Padding1221_2dcircular (__main__.TestCppApiParity) ... 
ok (0.005s) 2023-01-11T21:59:54.4456464Z test_torch_nn_functional_Padding1221_2dcircular_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4456821Z test_torch_nn_functional_Padding12_1dcircular (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4457184Z test_torch_nn_functional_Padding12_1dcircular_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4457541Z test_torch_nn_functional_Padding2322_2dcircular (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4457899Z test_torch_nn_functional_Padding2322_2dcircular_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4458244Z test_torch_nn_functional_Padding31_1dcircular (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4458603Z test_torch_nn_functional_Padding31_1dcircular_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4458961Z test_torch_nn_functional_Padding322112_3dcircular (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4459327Z test_torch_nn_functional_Padding322112_3dcircular_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4459710Z test_torch_nn_functional_Padding332122_3dcircular (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4460073Z test_torch_nn_functional_Padding332122_3dcircular_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4460436Z test_torch_nn_functional_Padding3331_2dcircular (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4460786Z test_torch_nn_functional_Padding3331_2dcircular_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4461147Z test_torch_nn_functional_Padding33_1dcircular (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4461509Z test_torch_nn_functional_Padding33_1dcircular_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4461871Z test_torch_nn_functional_PoissonNLLLoss_no_reduce (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4462266Z test_torch_nn_functional_PoissonNLLLoss_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4462768Z test_torch_nn_functional_SmoothL1Loss_beta (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4463124Z test_torch_nn_functional_SmoothL1Loss_beta_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4463478Z test_torch_nn_functional_SmoothL1Loss_no_reduce (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4463828Z test_torch_nn_functional_SmoothL1Loss_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4464201Z test_torch_nn_functional_SmoothL1Loss_no_reduce_scalar (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4464580Z test_torch_nn_functional_SmoothL1Loss_no_reduce_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4464945Z test_torch_nn_functional_SmoothL1Loss_zero_beta (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4465296Z test_torch_nn_functional_SmoothL1Loss_zero_beta_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4465662Z test_torch_nn_functional_SoftMarginLoss_no_reduce (__main__.TestCppApiParity) ... 
ok (0.005s) 2023-01-11T21:59:54.4466031Z test_torch_nn_functional_SoftMarginLoss_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4466382Z test_torch_nn_functional_interpolate_bicubic_2d (__main__.TestCppApiParity) ... ok (0.007s) 2023-01-11T21:59:54.4466743Z test_torch_nn_functional_interpolate_bicubic_2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4467111Z test_torch_nn_functional_interpolate_bicubic_2d_zero_dim (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4467487Z test_torch_nn_functional_interpolate_bicubic_2d_zero_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4467855Z test_torch_nn_functional_interpolate_bicubic_scale_2d (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4468229Z test_torch_nn_functional_interpolate_bicubic_scale_2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4468617Z test_torch_nn_functional_interpolate_bicubic_scale_tuple_shared_2d (__main__.TestCppApiParity) ... ok (0.006s) 2023-01-11T21:59:54.4469020Z test_torch_nn_functional_interpolate_bicubic_scale_tuple_shared_2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4469406Z test_torch_nn_functional_interpolate_bicubic_scale_tuple_skewed_2d (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4469793Z test_torch_nn_functional_interpolate_bicubic_scale_tuple_skewed_2d_align_corners (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4470227Z test_torch_nn_functional_interpolate_bicubic_scale_tuple_skewed_2d_align_corners_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4470727Z test_torch_nn_functional_interpolate_bicubic_scale_tuple_skewed_2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4471101Z test_torch_nn_functional_interpolate_bicubic_tuple_2d (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4471465Z test_torch_nn_functional_interpolate_bicubic_tuple_2d_align_corners (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4471870Z test_torch_nn_functional_interpolate_bicubic_tuple_2d_align_corners_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4472287Z test_torch_nn_functional_interpolate_bicubic_tuple_2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4472640Z test_torch_nn_functional_interpolate_bilinear_2d (__main__.TestCppApiParity) ... ok (0.007s) 2023-01-11T21:59:54.4473057Z test_torch_nn_functional_interpolate_bilinear_2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4473433Z test_torch_nn_functional_interpolate_bilinear_2d_zero_dim (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4473815Z test_torch_nn_functional_interpolate_bilinear_2d_zero_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4474183Z test_torch_nn_functional_interpolate_bilinear_scale_2d (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4474557Z test_torch_nn_functional_interpolate_bilinear_scale_2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4474945Z test_torch_nn_functional_interpolate_bilinear_scale_tuple_shared_2d (__main__.TestCppApiParity) ... 
ok (0.006s) 2023-01-11T21:59:54.4475346Z test_torch_nn_functional_interpolate_bilinear_scale_tuple_shared_2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4475736Z test_torch_nn_functional_interpolate_bilinear_scale_tuple_skewed_2d (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4476129Z test_torch_nn_functional_interpolate_bilinear_scale_tuple_skewed_2d_align_corners (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4476559Z test_torch_nn_functional_interpolate_bilinear_scale_tuple_skewed_2d_align_corners_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4476766Z test_torch_nn_functional_interpolate_bilinear_scale_tuple_skewed_2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4476937Z test_torch_nn_functional_interpolate_bilinear_tuple_2d (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4477106Z test_torch_nn_functional_interpolate_bilinear_tuple_2d_align_corners (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4477315Z test_torch_nn_functional_interpolate_bilinear_tuple_2d_align_corners_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4477512Z test_torch_nn_functional_interpolate_bilinear_tuple_2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4477728Z test_torch_nn_functional_interpolate_linear_1d (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4477907Z test_torch_nn_functional_interpolate_linear_1d_align_corners (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4478107Z test_torch_nn_functional_interpolate_linear_1d_align_corners_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4478290Z test_torch_nn_functional_interpolate_linear_1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4478458Z test_torch_nn_functional_interpolate_linear_1d_zero_dim (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4478649Z test_torch_nn_functional_interpolate_linear_1d_zero_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4478815Z test_torch_nn_functional_interpolate_linear_scale_1d (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4479020Z test_torch_nn_functional_interpolate_linear_scale_1d_align_corners (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4479223Z test_torch_nn_functional_interpolate_linear_scale_1d_align_corners_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4479418Z test_torch_nn_functional_interpolate_linear_scale_1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4479582Z test_torch_nn_functional_interpolate_linear_tuple_1d (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4479775Z test_torch_nn_functional_interpolate_linear_tuple_1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4479932Z test_torch_nn_functional_interpolate_nearest_1d (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4480151Z test_torch_nn_functional_interpolate_nearest_1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4480322Z test_torch_nn_functional_interpolate_nearest_1d_zero_dim (__main__.TestCppApiParity) ... 
ok (0.004s) 2023-01-11T21:59:54.4480517Z test_torch_nn_functional_interpolate_nearest_1d_zero_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4480662Z test_torch_nn_functional_interpolate_nearest_2d (__main__.TestCppApiParity) ... ok (0.007s) 2023-01-11T21:59:54.4480847Z test_torch_nn_functional_interpolate_nearest_2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4481023Z test_torch_nn_functional_interpolate_nearest_2d_launch_configs (__main__.TestCppApiParity) ... ok (0.013s) 2023-01-11T21:59:54.4481226Z test_torch_nn_functional_interpolate_nearest_2d_launch_configs_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4481397Z test_torch_nn_functional_interpolate_nearest_2d_zero_dim (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4481594Z test_torch_nn_functional_interpolate_nearest_2d_zero_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4481752Z test_torch_nn_functional_interpolate_nearest_3d (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4481942Z test_torch_nn_functional_interpolate_nearest_3d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4482109Z test_torch_nn_functional_interpolate_nearest_3d_zero_dim (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4482289Z test_torch_nn_functional_interpolate_nearest_3d_zero_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4482452Z test_torch_nn_functional_interpolate_nearest_scale_1d (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4482646Z test_torch_nn_functional_interpolate_nearest_scale_1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4482813Z test_torch_nn_functional_interpolate_nearest_scale_2d (__main__.TestCppApiParity) ... ok (0.009s) 2023-01-11T21:59:54.4483002Z test_torch_nn_functional_interpolate_nearest_scale_2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4483165Z test_torch_nn_functional_interpolate_nearest_scale_3d (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4483356Z test_torch_nn_functional_interpolate_nearest_scale_3d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4483520Z test_torch_nn_functional_interpolate_nearest_tuple_1d (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4483709Z test_torch_nn_functional_interpolate_nearest_tuple_1d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4483876Z test_torch_nn_functional_interpolate_nearest_tuple_2d (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4484096Z test_torch_nn_functional_interpolate_nearest_tuple_2d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4484261Z test_torch_nn_functional_interpolate_nearest_tuple_3d (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4484450Z test_torch_nn_functional_interpolate_nearest_tuple_3d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4484609Z test_torch_nn_functional_interpolate_trilinear_3d (__main__.TestCppApiParity) ... ok (0.012s) 2023-01-11T21:59:54.4484797Z test_torch_nn_functional_interpolate_trilinear_3d_cuda (__main__.TestCppApiParity) ... 
skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4484966Z test_torch_nn_functional_interpolate_trilinear_3d_zero_dim (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4485162Z test_torch_nn_functional_interpolate_trilinear_3d_zero_dim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4485374Z test_torch_nn_functional_interpolate_trilinear_scale_3d (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4485560Z test_torch_nn_functional_interpolate_trilinear_scale_3d_align_corners (__main__.TestCppApiParity) ... ok (0.011s) 2023-01-11T21:59:54.4485758Z test_torch_nn_functional_interpolate_trilinear_scale_3d_align_corners_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4485954Z test_torch_nn_functional_interpolate_trilinear_scale_3d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4486120Z test_torch_nn_functional_interpolate_trilinear_tuple_3d (__main__.TestCppApiParity) ... ok (0.008s) 2023-01-11T21:59:54.4486299Z test_torch_nn_functional_interpolate_trilinear_tuple_3d_align_corners (__main__.TestCppApiParity) ... ok (0.007s) 2023-01-11T21:59:54.4486508Z test_torch_nn_functional_interpolate_trilinear_tuple_3d_align_corners_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4486703Z test_torch_nn_functional_interpolate_trilinear_tuple_3d_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4486856Z test_torch_nn_functional_log_softmax_dim0 (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4487034Z test_torch_nn_functional_log_softmax_dim0_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4487183Z test_torch_nn_functional_log_softmax_dim3 (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4487349Z test_torch_nn_functional_log_softmax_dim3_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4487502Z test_torch_nn_functional_log_softmax_lastdim (__main__.TestCppApiParity) ... ok (0.006s) 2023-01-11T21:59:54.4487684Z test_torch_nn_functional_log_softmax_lastdim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4487836Z test_torch_nn_functional_log_softmax_scalar (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4488021Z test_torch_nn_functional_log_softmax_scalar_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4488174Z test_torch_nn_functional_log_softmax_spatial (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4488357Z test_torch_nn_functional_log_softmax_spatial_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4488521Z test_torch_nn_functional_log_softmax_spatial_special (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4488711Z test_torch_nn_functional_log_softmax_spatial_special_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4488880Z test_torch_nn_functional_multimarginloss_1d_input_0d_target_no_reduce (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4489089Z test_torch_nn_functional_multimarginloss_1d_input_0d_target_no_reduce_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4489320Z test_torch_nn_functional_sample_functional_has_parity (__main__.TestCppApiParity) ... 
ok (0.004s) 2023-01-11T21:59:54.4489514Z test_torch_nn_functional_sample_functional_has_parity_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4489696Z test_torch_nn_functional_sample_functional_no_parity (__main__.TestCppApiParity) ... expected failure (0.005s) 2023-01-11T21:59:54.4489889Z test_torch_nn_functional_sample_functional_no_parity_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4490047Z test_torch_nn_functional_softmax_functional_dim0 (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4490241Z test_torch_nn_functional_softmax_functional_dim0_cuda (__main__.TestCppApiParity) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:59:54.4490398Z test_torch_nn_functional_softmax_functional_dim3 (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4490623Z test_torch_nn_functional_softmax_functional_dim3_cuda (__main__.TestCppApiParity) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:59:54.4490781Z test_torch_nn_functional_softmax_functional_scalar (__main__.TestCppApiParity) ... ok (0.004s) 2023-01-11T21:59:54.4490978Z test_torch_nn_functional_softmax_functional_scalar_cuda (__main__.TestCppApiParity) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:59:54.4491130Z test_torch_nn_functional_softmax_lastdim (__main__.TestCppApiParity) ... ok (0.006s) 2023-01-11T21:59:54.4491311Z test_torch_nn_functional_softmax_lastdim_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4491468Z test_torch_nn_functional_softmax_lastdim_dtype (__main__.TestCppApiParity) ... ok (0.006s) 2023-01-11T21:59:54.4491660Z test_torch_nn_functional_softmax_lastdim_dtype_cuda (__main__.TestCppApiParity) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:59:54.4491813Z test_torch_nn_functional_softmax_spatial (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4491995Z test_torch_nn_functional_softmax_spatial_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4492153Z test_torch_nn_functional_softmax_spatial_dtype (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4492331Z test_torch_nn_functional_softmax_spatial_dtype_cuda (__main__.TestCppApiParity) ... skip: Excluded from CUDA tests (0.000s) 2023-01-11T21:59:54.4492491Z test_torch_nn_functional_softmax_spatial_special (__main__.TestCppApiParity) ... ok (0.005s) 2023-01-11T21:59:54.4492679Z test_torch_nn_functional_softmax_spatial_special_cuda (__main__.TestCppApiParity) ... skip: CUDA unavailable (0.000s) 2023-01-11T21:59:54.4492692Z 2023-01-11T21:59:54.4492970Z ---------------------------------------------------------------------- 2023-01-11T21:59:54.4493050Z Ran 1100 tests in 6.108s 2023-01-11T21:59:54.4493056Z 2023-01-11T21:59:54.4493153Z OK (skipped=550, expected failures=17) 2023-01-11T21:59:54.4493160Z 2023-01-11T21:59:54.4493245Z Generating XML reports... 2023-01-11T21:59:54.4493566Z Generated XML report: test-reports/python-unittest/test_cpp_api_parity/TEST-TestCppApiParity-20230111215947.xml 2023-01-11T21:59:54.4493571Z 2023-01-11T21:59:54.4493928Z ##[endgroup] 2023-01-11T21:59:54.4494230Z FINISHED PRINTING LOG FILE of test_cpp_api_parity (/var/lib/jenkins/workspace/test/test-reports/test_cpp_api_parity_ip7klvi_) 2023-01-11T21:59:54.4494235Z 2023-01-11T21:59:54.4494422Z Running test_cpp_extensions_aot_ninja ... 
[2023-01-11 21:59:54.409813] 2023-01-11T21:59:56.0132634Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T21:59:56.1191575Z running install 2023-01-11T21:59:56.1192260Z /opt/conda/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. 2023-01-11T21:59:56.1192643Z warnings.warn( 2023-01-11T21:59:56.1279991Z running build 2023-01-11T21:59:56.1280416Z running build_py 2023-01-11T21:59:56.1328765Z creating build 2023-01-11T21:59:56.1329242Z creating build/lib.linux-x86_64-cpython-310 2023-01-11T21:59:56.1329760Z creating build/lib.linux-x86_64-cpython-310/torch_test_cpp_extension 2023-01-11T21:59:56.1330208Z copying torch_test_cpp_extension/__init__.py -> build/lib.linux-x86_64-cpython-310/torch_test_cpp_extension 2023-01-11T21:59:56.1331621Z running build_ext 2023-01-11T21:59:56.1638174Z building 'torch_test_cpp_extension.cpp' extension 2023-01-11T21:59:56.1638646Z creating /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-310 2023-01-11T21:59:56.1931556Z Emitting ninja build file /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-310/build.ninja... 2023-01-11T21:59:56.1931932Z Compiling objects... 2023-01-11T21:59:56.1932152Z Using envvar MAX_JOBS (6) as the number of workers... 2023-01-11T21:59:57.3472993Z [1/1] c++ -MMD -MF /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-310/extension.o.d -pthread -B /opt/conda/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /opt/conda/include -fPIC -O2 -isystem /opt/conda/include -fPIC -I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/var/lib/jenkins/workspace/test/cpp_extensions/self_compiler_include_dirs_test -I/opt/conda/include/python3.10 -c -c /var/lib/jenkins/workspace/test/cpp_extensions/extension.cpp -o /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-310/extension.o -g -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=cpp -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2023-01-11T21:59:57.3476164Z In file included from /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/Exceptions.h:14:0, 2023-01-11T21:59:57.3477101Z from /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/python.h:11, 2023-01-11T21:59:57.3478039Z from /opt/conda/lib/python3.10/site-packages/torch/include/torch/extension.h:6, 2023-01-11T21:59:57.3478679Z from /var/lib/jenkins/workspace/test/cpp_extensions/extension.cpp:1: 2023-01-11T21:59:57.3482409Z /opt/conda/lib/python3.10/site-packages/torch/include/pybind11/pybind11.h: In instantiation of ‘class pybind11::class_’: 2023-01-11T21:59:57.3483200Z /var/lib/jenkins/workspace/test/cpp_extensions/extension.cpp:40:53: required from here 2023-01-11T21:59:57.3484760Z /opt/conda/lib/python3.10/site-packages/torch/include/pybind11/pybind11.h:1479:7: warning: ‘pybind11::class_’ declared with greater visibility than the type of its field ‘pybind11::class_::’ [-Wattributes] 2023-01-11T21:59:57.3485709Z class class_ : public detail::generic_type { 2023-01-11T21:59:57.3486088Z ^~~~~~ 
2023-01-11T21:59:57.3487341Z /opt/conda/lib/python3.10/site-packages/torch/include/pybind11/pybind11.h:1479:7: warning: ‘pybind11::class_’ declared with greater visibility than its base ‘pybind11::detail::generic_type’ [-Wattributes] 2023-01-11T21:59:57.3567884Z g++ -pthread -B /opt/conda/compiler_compat -shared -Wl,-rpath,/opt/conda/lib -Wl,-rpath-link,/opt/conda/lib -L/opt/conda/lib -Wl,-rpath,/opt/conda/lib -Wl,-rpath-link,/opt/conda/lib -L/opt/conda/lib /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-310/extension.o -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-310/torch_test_cpp_extension/cpp.cpython-310-x86_64-linux-gnu.so 2023-01-11T21:59:57.7341233Z building 'torch_test_cpp_extension.ort' extension 2023-01-11T21:59:57.7624926Z Emitting ninja build file /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-310/build.ninja... 2023-01-11T21:59:57.7654423Z Compiling objects... 2023-01-11T21:59:57.7654748Z Using envvar MAX_JOBS (6) as the number of workers... 2023-01-11T21:59:58.8787223Z [1/1] c++ -MMD -MF /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-310/ort_extension.o.d -pthread -B /opt/conda/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /opt/conda/include -fPIC -O2 -isystem /opt/conda/include -fPIC -I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/var/lib/jenkins/workspace/test/cpp_extensions/self_compiler_include_dirs_test -I/opt/conda/include/python3.10 -c -c /var/lib/jenkins/workspace/test/cpp_extensions/ort_extension.cpp -o /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-310/ort_extension.o -g -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=ort -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2023-01-11T21:59:58.8831225Z g++ -pthread -B /opt/conda/compiler_compat -shared -Wl,-rpath,/opt/conda/lib -Wl,-rpath-link,/opt/conda/lib -L/opt/conda/lib -Wl,-rpath,/opt/conda/lib -Wl,-rpath-link,/opt/conda/lib -L/opt/conda/lib /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-310/ort_extension.o -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-310/torch_test_cpp_extension/ort.cpython-310-x86_64-linux-gnu.so 2023-01-11T21:59:59.2082302Z building 'torch_test_cpp_extension.rng' extension 2023-01-11T21:59:59.2365198Z Emitting ninja build file /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-310/build.ninja... 2023-01-11T21:59:59.2365571Z Compiling objects... 2023-01-11T21:59:59.2365801Z Using envvar MAX_JOBS (6) as the number of workers... 
2023-01-11T22:00:00.3900088Z [1/1] c++ -MMD -MF /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-310/rng_extension.o.d -pthread -B /opt/conda/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /opt/conda/include -fPIC -O2 -isystem /opt/conda/include -fPIC -I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/var/lib/jenkins/workspace/test/cpp_extensions/self_compiler_include_dirs_test -I/opt/conda/include/python3.10 -c -c /var/lib/jenkins/workspace/test/cpp_extensions/rng_extension.cpp -o /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-310/rng_extension.o -g -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=rng -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2023-01-11T22:00:00.3901777Z In file included from /opt/conda/lib/python3.10/site-packages/torch/include/ATen/cpu/vec/vec256/vec256.h:8:0, 2023-01-11T22:00:00.3902215Z from /opt/conda/lib/python3.10/site-packages/torch/include/ATen/cpu/vec/vec.h:6, 2023-01-11T22:00:00.3902848Z from /opt/conda/lib/python3.10/site-packages/torch/include/ATen/native/cpu/Loops.h:37, 2023-01-11T22:00:00.3903317Z from /opt/conda/lib/python3.10/site-packages/torch/include/ATen/native/cpu/DistributionTemplates.h:8, 2023-01-11T22:00:00.3903675Z from /var/lib/jenkins/workspace/test/cpp_extensions/rng_extension.cpp:6: 2023-01-11T22:00:00.3904156Z /opt/conda/lib/python3.10/site-packages/torch/include/ATen/cpu/vec/vec_base.h:1008:0: warning: ignoring #pragma unroll [-Wunknown-pragmas] 2023-01-11T22:00:00.3904638Z # pragma unroll 2023-01-11T22:00:00.3904804Z 2023-01-11T22:00:00.3946635Z g++ -pthread -B /opt/conda/compiler_compat -shared -Wl,-rpath,/opt/conda/lib -Wl,-rpath-link,/opt/conda/lib -L/opt/conda/lib -Wl,-rpath,/opt/conda/lib -Wl,-rpath-link,/opt/conda/lib -L/opt/conda/lib /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-310/rng_extension.o -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-310/torch_test_cpp_extension/rng.cpython-310-x86_64-linux-gnu.so 2023-01-11T22:00:00.7636574Z running install_lib 2023-01-11T22:00:00.7677049Z creating install 2023-01-11T22:00:00.7677387Z creating install/opt 2023-01-11T22:00:00.7677774Z creating install/opt/conda 2023-01-11T22:00:00.7678158Z creating install/opt/conda/lib 2023-01-11T22:00:00.7678442Z creating install/opt/conda/lib/python3.10 2023-01-11T22:00:00.7678895Z creating install/opt/conda/lib/python3.10/site-packages 2023-01-11T22:00:00.7679484Z creating install/opt/conda/lib/python3.10/site-packages/torch_test_cpp_extension 2023-01-11T22:00:00.7680080Z copying build/lib.linux-x86_64-cpython-310/torch_test_cpp_extension/__init__.py -> ./install/opt/conda/lib/python3.10/site-packages/torch_test_cpp_extension 2023-01-11T22:00:00.7680804Z copying build/lib.linux-x86_64-cpython-310/torch_test_cpp_extension/cpp.cpython-310-x86_64-linux-gnu.so -> ./install/opt/conda/lib/python3.10/site-packages/torch_test_cpp_extension 2023-01-11T22:00:00.7752897Z copying build/lib.linux-x86_64-cpython-310/torch_test_cpp_extension/ort.cpython-310-x86_64-linux-gnu.so -> 
./install/opt/conda/lib/python3.10/site-packages/torch_test_cpp_extension 2023-01-11T22:00:00.7825138Z copying build/lib.linux-x86_64-cpython-310/torch_test_cpp_extension/rng.cpython-310-x86_64-linux-gnu.so -> ./install/opt/conda/lib/python3.10/site-packages/torch_test_cpp_extension 2023-01-11T22:00:00.7902863Z byte-compiling ./install/opt/conda/lib/python3.10/site-packages/torch_test_cpp_extension/__init__.py to __init__.cpython-310.pyc 2023-01-11T22:00:00.7903950Z running install_egg_info 2023-01-11T22:00:00.7998747Z running egg_info 2023-01-11T22:00:00.7999231Z creating torch_test_cpp_extension.egg-info 2023-01-11T22:00:00.8032059Z writing torch_test_cpp_extension.egg-info/PKG-INFO 2023-01-11T22:00:00.8033785Z writing dependency_links to torch_test_cpp_extension.egg-info/dependency_links.txt 2023-01-11T22:00:00.8036244Z writing top-level names to torch_test_cpp_extension.egg-info/top_level.txt 2023-01-11T22:00:00.8037116Z writing manifest file 'torch_test_cpp_extension.egg-info/SOURCES.txt' 2023-01-11T22:00:00.8071791Z reading manifest file 'torch_test_cpp_extension.egg-info/SOURCES.txt' 2023-01-11T22:00:00.8077095Z writing manifest file 'torch_test_cpp_extension.egg-info/SOURCES.txt' 2023-01-11T22:00:00.8077787Z Copying torch_test_cpp_extension.egg-info to ./install/opt/conda/lib/python3.10/site-packages/torch_test_cpp_extension-0.0.0-py3.10.egg-info 2023-01-11T22:00:00.8081599Z running install_scripts 2023-01-11T22:00:02.6568406Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T22:00:02.6762114Z running install 2023-01-11T22:00:02.6763368Z /opt/conda/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. 2023-01-11T22:00:02.6764066Z warnings.warn( 2023-01-11T22:00:02.6849095Z running build 2023-01-11T22:00:02.6849312Z running build_ext 2023-01-11T22:00:02.7141696Z building 'no_python_abi_suffix_test' extension 2023-01-11T22:00:02.7142024Z creating /var/lib/jenkins/workspace/test/cpp_extensions/no_python_abi_suffix_test/build 2023-01-11T22:00:02.7142691Z creating /var/lib/jenkins/workspace/test/cpp_extensions/no_python_abi_suffix_test/build/temp.linux-x86_64-cpython-310 2023-01-11T22:00:02.7420610Z Emitting ninja build file /var/lib/jenkins/workspace/test/cpp_extensions/no_python_abi_suffix_test/build/temp.linux-x86_64-cpython-310/build.ninja... 2023-01-11T22:00:02.7421229Z Compiling objects... 2023-01-11T22:00:02.7421500Z Using envvar MAX_JOBS (6) as the number of workers... 
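The "Using envvar MAX_JOBS (6) as the number of workers..." lines above come from torch.utils.cpp_extension, which reads the MAX_JOBS environment variable to cap the number of parallel compile workers when it emits a ninja build file. A minimal sketch of pinning it from Python before a build; the value 6 simply mirrors this run and is not required:

import os
# Cap parallel extension-compile jobs; torch.utils.cpp_extension reads this
# environment variable when generating build.ninja (as seen in the log above).
os.environ["MAX_JOBS"] = "6"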
2023-01-11T22:00:02.8216663Z [1/1] c++ -MMD -MF /var/lib/jenkins/workspace/test/cpp_extensions/no_python_abi_suffix_test/build/temp.linux-x86_64-cpython-310/no_python_abi_suffix_test.o.d -pthread -B /opt/conda/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /opt/conda/include -fPIC -O2 -isystem /opt/conda/include -fPIC -I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include/python3.10 -c -c /var/lib/jenkins/workspace/test/cpp_extensions/no_python_abi_suffix_test/no_python_abi_suffix_test.cpp -o /var/lib/jenkins/workspace/test/cpp_extensions/no_python_abi_suffix_test/build/temp.linux-x86_64-cpython-310/no_python_abi_suffix_test.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=no_python_abi_suffix_test -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2023-01-11T22:00:02.8254274Z creating build/lib.linux-x86_64-cpython-310 2023-01-11T22:00:02.8255757Z g++ -pthread -B /opt/conda/compiler_compat -shared -Wl,-rpath,/opt/conda/lib -Wl,-rpath-link,/opt/conda/lib -L/opt/conda/lib -Wl,-rpath,/opt/conda/lib -Wl,-rpath-link,/opt/conda/lib -L/opt/conda/lib /var/lib/jenkins/workspace/test/cpp_extensions/no_python_abi_suffix_test/build/temp.linux-x86_64-cpython-310/no_python_abi_suffix_test.o -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-310/no_python_abi_suffix_test.so 2023-01-11T22:00:02.8833391Z running install_lib 2023-01-11T22:00:02.8870571Z creating install 2023-01-11T22:00:02.8870874Z creating install/opt 2023-01-11T22:00:02.8871228Z creating install/opt/conda 2023-01-11T22:00:02.8871655Z creating install/opt/conda/lib 2023-01-11T22:00:02.8871945Z creating install/opt/conda/lib/python3.10 2023-01-11T22:00:02.8872369Z creating install/opt/conda/lib/python3.10/site-packages 2023-01-11T22:00:02.8872831Z copying build/lib.linux-x86_64-cpython-310/no_python_abi_suffix_test.so -> ./install/opt/conda/lib/python3.10/site-packages 2023-01-11T22:00:02.8876957Z running install_egg_info 2023-01-11T22:00:02.8967663Z running egg_info 2023-01-11T22:00:02.8968187Z creating no_python_abi_suffix_test.egg-info 2023-01-11T22:00:02.9000728Z writing no_python_abi_suffix_test.egg-info/PKG-INFO 2023-01-11T22:00:02.9002627Z writing dependency_links to no_python_abi_suffix_test.egg-info/dependency_links.txt 2023-01-11T22:00:02.9005228Z writing top-level names to no_python_abi_suffix_test.egg-info/top_level.txt 2023-01-11T22:00:02.9005884Z writing manifest file 'no_python_abi_suffix_test.egg-info/SOURCES.txt' 2023-01-11T22:00:02.9040217Z reading manifest file 'no_python_abi_suffix_test.egg-info/SOURCES.txt' 2023-01-11T22:00:02.9045390Z writing manifest file 'no_python_abi_suffix_test.egg-info/SOURCES.txt' 2023-01-11T22:00:02.9046135Z Copying no_python_abi_suffix_test.egg-info to ./install/opt/conda/lib/python3.10/site-packages/no_python_abi_suffix_test-0.0.0-py3.10.egg-info 2023-01-11T22:00:02.9049503Z running install_scripts 2023-01-11T22:00:03.2054957Z Executing ['/opt/conda/bin/python', '-bb', 'test_cpp_extensions_aot_ninja.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... 
[2023-01-11 22:00:03.205091] 2023-01-11T22:00:05.4185614Z 2023-01-11T22:00:05.4186194Z Expand the folded group to see the log file of test_cpp_extensions_aot_ninja 2023-01-11T22:00:05.4187285Z ##[group]PRINTING LOG FILE of test_cpp_extensions_aot_ninja (/var/lib/jenkins/workspace/test/test-reports/test_cpp_extensions_aot_ninja_1uthn5zt) 2023-01-11T22:00:05.4188140Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T22:00:05.4188356Z 2023-01-11T22:00:05.4188453Z Running tests... 2023-01-11T22:00:05.4189160Z ---------------------------------------------------------------------- 2023-01-11T22:00:05.4189822Z Test results will be stored in test-reports/python-unittest/test_cpp_extensions_aot_ninja 2023-01-11T22:00:05.4190327Z test_backward (__main__.TestCppExtensionAOT) ... ok (0.011s) 2023-01-11T22:00:05.4190807Z test_cublas_extension (__main__.TestCppExtensionAOT) ... skip: CUDA not found (0.000s) 2023-01-11T22:00:05.4191347Z test_cuda_dlink_libs (__main__.TestCppExtensionAOT) ... skip: CUDA not found (0.000s) 2023-01-11T22:00:05.4191855Z test_cuda_extension (__main__.TestCppExtensionAOT) ... skip: CUDA not found (0.000s) 2023-01-11T22:00:05.4192412Z test_cusolver_extension (__main__.TestCppExtensionAOT) ... skip: CUDA not found (0.000s) 2023-01-11T22:00:05.4192938Z test_extension_function (__main__.TestCppExtensionAOT) ... ok (0.001s) 2023-01-11T22:00:05.4193429Z test_extension_module (__main__.TestCppExtensionAOT) ... ok (0.001s) 2023-01-11T22:00:05.4194095Z test_no_python_abi_suffix_sets_the_correct_library_name (__main__.TestCppExtensionAOT) ... ok (0.001s) 2023-01-11T22:00:05.4194646Z test_optional (__main__.TestCppExtensionAOT) ... ok (0.001s) 2023-01-11T22:00:05.4195063Z test_add (__main__.TestORTTensor) ... ok (0.003s) 2023-01-11T22:00:05.4195476Z test_conv_backend_override (__main__.TestORTTensor) ... ok (0.001s) 2023-01-11T22:00:05.4195934Z test_unregistered (__main__.TestORTTensor) ... ok (0.006s) 2023-01-11T22:00:05.4196350Z test_zeros (__main__.TestORTTensor) ... ok (0.001s) 2023-01-11T22:00:05.4196801Z test_pybind_return_types (__main__.TestPybindTypeCasters) ... ok (0.001s) 2023-01-11T22:00:05.4197248Z test_rng (__main__.TestRNGExtension) ... ok (0.002s) 2023-01-11T22:00:05.4197793Z test_torch_library (__main__.TestTorchLibrary) ... skip: CUDA not found (0.001s) 2023-01-11T22:00:05.4198085Z 2023-01-11T22:00:05.4198439Z ---------------------------------------------------------------------- 2023-01-11T22:00:05.4198811Z Ran 16 tests in 0.031s 2023-01-11T22:00:05.4199001Z 2023-01-11T22:00:05.4199117Z OK (skipped=5) 2023-01-11T22:00:05.4199283Z 2023-01-11T22:00:05.4199422Z Generating XML reports... 
2023-01-11T22:00:05.4200183Z Generated XML report: test-reports/python-unittest/test_cpp_extensions_aot_ninja/TEST-TestCppExtensionAOT-20230111220005.xml 2023-01-11T22:00:05.4201115Z Generated XML report: test-reports/python-unittest/test_cpp_extensions_aot_ninja/TEST-TestORTTensor-20230111220005.xml 2023-01-11T22:00:05.4202075Z Generated XML report: test-reports/python-unittest/test_cpp_extensions_aot_ninja/TEST-TestPybindTypeCasters-20230111220005.xml 2023-01-11T22:00:05.4203016Z Generated XML report: test-reports/python-unittest/test_cpp_extensions_aot_ninja/TEST-TestRNGExtension-20230111220005.xml 2023-01-11T22:00:05.4203932Z Generated XML report: test-reports/python-unittest/test_cpp_extensions_aot_ninja/TEST-TestTorchLibrary-20230111220005.xml 2023-01-11T22:00:05.4204345Z 2023-01-11T22:00:05.4204746Z ##[endgroup] 2023-01-11T22:00:05.4205478Z FINISHED PRINTING LOG FILE of test_cpp_extensions_aot_ninja (/var/lib/jenkins/workspace/test/test-reports/test_cpp_extensions_aot_ninja_1uthn5zt) 2023-01-11T22:00:05.4205889Z 2023-01-11T22:00:05.4206224Z Running test_cpp_extensions_aot_no_ninja ... [2023-01-11 22:00:05.418831] 2023-01-11T22:00:06.9114236Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T22:00:07.0122390Z running install 2023-01-11T22:00:07.0123214Z /opt/conda/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. 2023-01-11T22:00:07.0123601Z warnings.warn( 2023-01-11T22:00:07.0207508Z running build 2023-01-11T22:00:07.0207728Z running build_py 2023-01-11T22:00:07.0245388Z creating build 2023-01-11T22:00:07.0246018Z creating build/lib.linux-x86_64-cpython-310 2023-01-11T22:00:07.0246450Z creating build/lib.linux-x86_64-cpython-310/torch_test_cpp_extension 2023-01-11T22:00:07.0246909Z copying torch_test_cpp_extension/__init__.py -> build/lib.linux-x86_64-cpython-310/torch_test_cpp_extension 2023-01-11T22:00:07.0248467Z running build_ext 2023-01-11T22:00:07.0259579Z building 'torch_test_cpp_extension.cpp' extension 2023-01-11T22:00:07.0260199Z creating build/temp.linux-x86_64-cpython-310 2023-01-11T22:00:07.0263559Z gcc -pthread -B /opt/conda/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /opt/conda/include -fPIC -O2 -isystem /opt/conda/include -fPIC -I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -Iself_compiler_include_dirs_test -I/opt/conda/include/python3.10 -c extension.cpp -o build/temp.linux-x86_64-cpython-310/extension.o -g -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -DTORCH_EXTENSION_NAME=cpp -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2023-01-11T22:00:08.0347767Z In file included from /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/Exceptions.h:14:0, 2023-01-11T22:00:08.0348317Z from /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/python.h:11, 2023-01-11T22:00:08.0348798Z from /opt/conda/lib/python3.10/site-packages/torch/include/torch/extension.h:6, 2023-01-11T22:00:08.0349086Z from extension.cpp:1: 2023-01-11T22:00:08.0349725Z /opt/conda/lib/python3.10/site-packages/torch/include/pybind11/pybind11.h: In instantiation of ‘class pybind11::class_’: 
2023-01-11T22:00:08.0350117Z extension.cpp:40:53: required from here 2023-01-11T22:00:08.0350857Z /opt/conda/lib/python3.10/site-packages/torch/include/pybind11/pybind11.h:1479:7: warning: ‘pybind11::class_’ declared with greater visibility than the type of its field ‘pybind11::class_::’ [-Wattributes] 2023-01-11T22:00:08.0351342Z class class_ : public detail::generic_type { 2023-01-11T22:00:08.0351559Z ^~~~~~ 2023-01-11T22:00:08.0352195Z /opt/conda/lib/python3.10/site-packages/torch/include/pybind11/pybind11.h:1479:7: warning: ‘pybind11::class_’ declared with greater visibility than its base ‘pybind11::detail::generic_type’ [-Wattributes] 2023-01-11T22:00:08.0355100Z g++ -pthread -B /opt/conda/compiler_compat -shared -Wl,-rpath,/opt/conda/lib -Wl,-rpath-link,/opt/conda/lib -L/opt/conda/lib -Wl,-rpath,/opt/conda/lib -Wl,-rpath-link,/opt/conda/lib -L/opt/conda/lib build/temp.linux-x86_64-cpython-310/extension.o -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-310/torch_test_cpp_extension/cpp.cpython-310-x86_64-linux-gnu.so 2023-01-11T22:00:08.3825090Z building 'torch_test_cpp_extension.ort' extension 2023-01-11T22:00:08.3827056Z gcc -pthread -B /opt/conda/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /opt/conda/include -fPIC -O2 -isystem /opt/conda/include -fPIC -I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -Iself_compiler_include_dirs_test -I/opt/conda/include/python3.10 -c ort_extension.cpp -o build/temp.linux-x86_64-cpython-310/ort_extension.o -g -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -DTORCH_EXTENSION_NAME=ort -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2023-01-11T22:00:09.3388394Z g++ -pthread -B /opt/conda/compiler_compat -shared -Wl,-rpath,/opt/conda/lib -Wl,-rpath-link,/opt/conda/lib -L/opt/conda/lib -Wl,-rpath,/opt/conda/lib -Wl,-rpath-link,/opt/conda/lib -L/opt/conda/lib build/temp.linux-x86_64-cpython-310/ort_extension.o -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-310/torch_test_cpp_extension/ort.cpython-310-x86_64-linux-gnu.so 2023-01-11T22:00:09.6802444Z building 'torch_test_cpp_extension.rng' extension 2023-01-11T22:00:09.6804781Z gcc -pthread -B /opt/conda/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /opt/conda/include -fPIC -O2 -isystem /opt/conda/include -fPIC -I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -Iself_compiler_include_dirs_test -I/opt/conda/include/python3.10 -c rng_extension.cpp -o build/temp.linux-x86_64-cpython-310/rng_extension.o -g -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -DTORCH_EXTENSION_NAME=rng -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2023-01-11T22:00:10.7431234Z In file included from /opt/conda/lib/python3.10/site-packages/torch/include/ATen/cpu/vec/vec256/vec256.h:8:0, 2023-01-11T22:00:10.7431721Z from 
/opt/conda/lib/python3.10/site-packages/torch/include/ATen/cpu/vec/vec.h:6, 2023-01-11T22:00:10.7432154Z from /opt/conda/lib/python3.10/site-packages/torch/include/ATen/native/cpu/Loops.h:37, 2023-01-11T22:00:10.7432625Z from /opt/conda/lib/python3.10/site-packages/torch/include/ATen/native/cpu/DistributionTemplates.h:8, 2023-01-11T22:00:10.7432933Z from rng_extension.cpp:6: 2023-01-11T22:00:10.7433377Z /opt/conda/lib/python3.10/site-packages/torch/include/ATen/cpu/vec/vec_base.h:1008:0: warning: ignoring #pragma unroll [-Wunknown-pragmas] 2023-01-11T22:00:10.7433676Z # pragma unroll 2023-01-11T22:00:10.7433846Z 2023-01-11T22:00:10.7438566Z g++ -pthread -B /opt/conda/compiler_compat -shared -Wl,-rpath,/opt/conda/lib -Wl,-rpath-link,/opt/conda/lib -L/opt/conda/lib -Wl,-rpath,/opt/conda/lib -Wl,-rpath-link,/opt/conda/lib -L/opt/conda/lib build/temp.linux-x86_64-cpython-310/rng_extension.o -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-310/torch_test_cpp_extension/rng.cpython-310-x86_64-linux-gnu.so 2023-01-11T22:00:11.1171101Z running install_lib 2023-01-11T22:00:11.1210981Z copying build/lib.linux-x86_64-cpython-310/torch_test_cpp_extension/cpp.cpython-310-x86_64-linux-gnu.so -> ./install/opt/conda/lib/python3.10/site-packages/torch_test_cpp_extension 2023-01-11T22:00:11.1297286Z copying build/lib.linux-x86_64-cpython-310/torch_test_cpp_extension/ort.cpython-310-x86_64-linux-gnu.so -> ./install/opt/conda/lib/python3.10/site-packages/torch_test_cpp_extension 2023-01-11T22:00:11.1383473Z copying build/lib.linux-x86_64-cpython-310/torch_test_cpp_extension/rng.cpython-310-x86_64-linux-gnu.so -> ./install/opt/conda/lib/python3.10/site-packages/torch_test_cpp_extension 2023-01-11T22:00:11.1474242Z running install_egg_info 2023-01-11T22:00:11.1568026Z running egg_info 2023-01-11T22:00:11.1599713Z writing torch_test_cpp_extension.egg-info/PKG-INFO 2023-01-11T22:00:11.1610815Z writing dependency_links to torch_test_cpp_extension.egg-info/dependency_links.txt 2023-01-11T22:00:11.1621394Z writing top-level names to torch_test_cpp_extension.egg-info/top_level.txt 2023-01-11T22:00:11.1665343Z reading manifest file 'torch_test_cpp_extension.egg-info/SOURCES.txt' 2023-01-11T22:00:11.1670602Z writing manifest file 'torch_test_cpp_extension.egg-info/SOURCES.txt' 2023-01-11T22:00:11.1679650Z removing './install/opt/conda/lib/python3.10/site-packages/torch_test_cpp_extension-0.0.0-py3.10.egg-info' (and everything under it) 2023-01-11T22:00:11.1681378Z Copying torch_test_cpp_extension.egg-info to ./install/opt/conda/lib/python3.10/site-packages/torch_test_cpp_extension-0.0.0-py3.10.egg-info 2023-01-11T22:00:11.1684604Z running install_scripts 2023-01-11T22:00:12.9793153Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T22:00:12.9985686Z running install 2023-01-11T22:00:12.9987168Z /opt/conda/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. 2023-01-11T22:00:12.9987648Z warnings.warn( 2023-01-11T22:00:13.0072200Z running build 2023-01-11T22:00:13.0072412Z running build_ext 2023-01-11T22:00:13.0364026Z building 'no_python_abi_suffix_test' extension 2023-01-11T22:00:13.0635066Z Emitting ninja build file /var/lib/jenkins/workspace/test/cpp_extensions/no_python_abi_suffix_test/build/temp.linux-x86_64-cpython-310/build.ninja... 2023-01-11T22:00:13.0642707Z Compiling objects... 
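Editor's note: the `setup.py install` runs above build the ahead-of-time torch C++ extensions (`torch_test_cpp_extension.cpp`, `.ort`, `.rng`) with the gcc/g++ invocations shown. A minimal sketch of the kind of `setup.py` that produces flags like `-DTORCH_EXTENSION_NAME=...` and `-D_GLIBCXX_USE_CXX11_ABI=1` follows; the package and source names are copied from the log, everything else (layout, extra options) is assumed rather than taken from the actual test file.

```python
# Sketch of an AOT torch C++ extension setup.py of the kind being run above.
# torch.utils.cpp_extension adds the torch include paths and the
# -DTORCH_EXTENSION_NAME / -D_GLIBCXX_USE_CXX11_ABI flags seen in the gcc lines.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name="torch_test_cpp_extension",
    packages=["torch_test_cpp_extension"],
    ext_modules=[
        # "building 'torch_test_cpp_extension.cpp' extension" in the log
        CppExtension("torch_test_cpp_extension.cpp", ["extension.cpp"]),
        # "building 'torch_test_cpp_extension.ort' extension"
        CppExtension("torch_test_cpp_extension.ort", ["ort_extension.cpp"]),
        # "building 'torch_test_cpp_extension.rng' extension"
        CppExtension("torch_test_cpp_extension.rng", ["rng_extension.cpp"]),
    ],
    cmdclass={"build_ext": BuildExtension},
)
```

Invoking this with `python setup.py install` is what triggers the SetuptoolsDeprecationWarning printed above; the warning suggests a `pip install .`-style build instead.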
2023-01-11T22:00:13.0643024Z Using envvar MAX_JOBS (6) as the number of workers... 2023-01-11T22:00:13.0887809Z ninja: no work to do. 2023-01-11T22:00:13.0922976Z g++ -pthread -B /opt/conda/compiler_compat -shared -Wl,-rpath,/opt/conda/lib -Wl,-rpath-link,/opt/conda/lib -L/opt/conda/lib -Wl,-rpath,/opt/conda/lib -Wl,-rpath-link,/opt/conda/lib -L/opt/conda/lib /var/lib/jenkins/workspace/test/cpp_extensions/no_python_abi_suffix_test/build/temp.linux-x86_64-cpython-310/no_python_abi_suffix_test.o -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-310/no_python_abi_suffix_test.so 2023-01-11T22:00:13.1488566Z running install_lib 2023-01-11T22:00:13.1526121Z copying build/lib.linux-x86_64-cpython-310/no_python_abi_suffix_test.so -> ./install/opt/conda/lib/python3.10/site-packages 2023-01-11T22:00:13.1530511Z running install_egg_info 2023-01-11T22:00:13.1622302Z running egg_info 2023-01-11T22:00:13.1654647Z writing no_python_abi_suffix_test.egg-info/PKG-INFO 2023-01-11T22:00:13.1679598Z writing dependency_links to no_python_abi_suffix_test.egg-info/dependency_links.txt 2023-01-11T22:00:13.1689768Z writing top-level names to no_python_abi_suffix_test.egg-info/top_level.txt 2023-01-11T22:00:13.1734408Z reading manifest file 'no_python_abi_suffix_test.egg-info/SOURCES.txt' 2023-01-11T22:00:13.1739687Z writing manifest file 'no_python_abi_suffix_test.egg-info/SOURCES.txt' 2023-01-11T22:00:13.1751450Z removing './install/opt/conda/lib/python3.10/site-packages/no_python_abi_suffix_test-0.0.0-py3.10.egg-info' (and everything under it) 2023-01-11T22:00:13.1753156Z Copying no_python_abi_suffix_test.egg-info to ./install/opt/conda/lib/python3.10/site-packages/no_python_abi_suffix_test-0.0.0-py3.10.egg-info 2023-01-11T22:00:13.1756501Z running install_scripts 2023-01-11T22:00:13.4660472Z Executing ['/opt/conda/bin/python', '-bb', 'test_cpp_extensions_aot_no_ninja.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:00:13.465684] 2023-01-11T22:00:15.7426668Z 2023-01-11T22:00:15.7427155Z Expand the folded group to see the log file of test_cpp_extensions_aot_no_ninja 2023-01-11T22:00:15.7428260Z ##[group]PRINTING LOG FILE of test_cpp_extensions_aot_no_ninja (/var/lib/jenkins/workspace/test/test-reports/test_cpp_extensions_aot_no_ninja_mvc4iqfa) 2023-01-11T22:00:15.7428841Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T22:00:15.7429031Z 2023-01-11T22:00:15.7429105Z Running tests... 2023-01-11T22:00:15.7429417Z ---------------------------------------------------------------------- 2023-01-11T22:00:15.7429968Z Test results will be stored in test-reports/python-unittest/test_cpp_extensions_aot_no_ninja 2023-01-11T22:00:15.7430360Z test_backward (__main__.TestCppExtensionAOT) ... ok (0.011s) 2023-01-11T22:00:15.7430704Z test_cublas_extension (__main__.TestCppExtensionAOT) ... skip: CUDA not found (0.000s) 2023-01-11T22:00:15.7431074Z test_cuda_dlink_libs (__main__.TestCppExtensionAOT) ... skip: CUDA not found (0.000s) 2023-01-11T22:00:15.7431561Z test_cuda_extension (__main__.TestCppExtensionAOT) ... skip: CUDA not found (0.000s) 2023-01-11T22:00:15.7431999Z test_cusolver_extension (__main__.TestCppExtensionAOT) ... skip: CUDA not found (0.000s) 2023-01-11T22:00:15.7432566Z test_extension_function (__main__.TestCppExtensionAOT) ... ok (0.001s) 2023-01-11T22:00:15.7432866Z test_extension_module (__main__.TestCppExtensionAOT) ... 
ok (0.001s) 2023-01-11T22:00:15.7433249Z test_no_python_abi_suffix_sets_the_correct_library_name (__main__.TestCppExtensionAOT) ... ok (0.001s) 2023-01-11T22:00:15.7433551Z test_optional (__main__.TestCppExtensionAOT) ... ok (0.001s) 2023-01-11T22:00:15.7433856Z test_add (__main__.TestORTTensor) ... ok (0.001s) 2023-01-11T22:00:15.7434120Z test_conv_backend_override (__main__.TestORTTensor) ... ok (0.001s) 2023-01-11T22:00:15.7434386Z test_unregistered (__main__.TestORTTensor) ... ok (0.006s) 2023-01-11T22:00:15.7434693Z test_zeros (__main__.TestORTTensor) ... ok (0.001s) 2023-01-11T22:00:15.7434971Z test_pybind_return_types (__main__.TestPybindTypeCasters) ... ok (0.001s) 2023-01-11T22:00:15.7435290Z test_rng (__main__.TestRNGExtension) ... ok (0.002s) 2023-01-11T22:00:15.7435645Z test_torch_library (__main__.TestTorchLibrary) ... skip: CUDA not found (0.001s) 2023-01-11T22:00:15.7435825Z 2023-01-11T22:00:15.7436091Z ---------------------------------------------------------------------- 2023-01-11T22:00:15.7436335Z Ran 16 tests in 0.029s 2023-01-11T22:00:15.7436435Z 2023-01-11T22:00:15.7436506Z OK (skipped=5) 2023-01-11T22:00:15.7436613Z 2023-01-11T22:00:15.7436707Z Generating XML reports... 2023-01-11T22:00:15.7437213Z Generated XML report: test-reports/python-unittest/test_cpp_extensions_aot_no_ninja/TEST-TestCppExtensionAOT-20230111220015.xml 2023-01-11T22:00:15.7437877Z Generated XML report: test-reports/python-unittest/test_cpp_extensions_aot_no_ninja/TEST-TestORTTensor-20230111220015.xml 2023-01-11T22:00:15.7438496Z Generated XML report: test-reports/python-unittest/test_cpp_extensions_aot_no_ninja/TEST-TestPybindTypeCasters-20230111220015.xml 2023-01-11T22:00:15.7439107Z Generated XML report: test-reports/python-unittest/test_cpp_extensions_aot_no_ninja/TEST-TestRNGExtension-20230111220015.xml 2023-01-11T22:00:15.7439676Z Generated XML report: test-reports/python-unittest/test_cpp_extensions_aot_no_ninja/TEST-TestTorchLibrary-20230111220015.xml 2023-01-11T22:00:15.7439973Z 2023-01-11T22:00:15.7440219Z ##[endgroup] 2023-01-11T22:00:15.7440713Z FINISHED PRINTING LOG FILE of test_cpp_extensions_aot_no_ninja (/var/lib/jenkins/workspace/test/test-reports/test_cpp_extensions_aot_no_ninja_mvc4iqfa) 2023-01-11T22:00:15.7440962Z 2023-01-11T22:00:15.7441195Z Running test_cpp_extensions_open_device_registration ... [2023-01-11 22:00:15.742843] 2023-01-11T22:00:15.7441797Z Executing ['/opt/conda/bin/python', '-bb', 'test_cpp_extensions_open_device_registration.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:00:15.743064] 2023-01-11T22:00:39.3143532Z 2023-01-11T22:00:39.3144164Z Expand the folded group to see the log file of test_cpp_extensions_open_device_registration 2023-01-11T22:00:39.3145036Z ##[group]PRINTING LOG FILE of test_cpp_extensions_open_device_registration (/var/lib/jenkins/workspace/test/test-reports/test_cpp_extensions_open_device_registration_33_5dxtm) 2023-01-11T22:00:39.3145937Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T22:00:39.3146221Z 2023-01-11T22:00:39.3146337Z Running tests... 2023-01-11T22:00:39.3146678Z ---------------------------------------------------------------------- 2023-01-11T22:00:39.3147472Z Test results will be stored in test-reports/python-unittest/test_cpp_extensions_open_device_registration 2023-01-11T22:00:39.3147992Z test_open_device_registration (__main__.TestCppExtensionOpenRgistration) ... Using /var/lib/jenkins/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... 
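Editor's note: the "Using ... as PyTorch extensions root" line above, together with the "Creating extension directory / Emitting ninja build file / Building extension module / Loading extension module" sequence that follows in this group, is the output printed by PyTorch's JIT C++ extension loader. A minimal sketch of such a call is below; the module name and source file are taken from the log, the keyword values are assumptions.

```python
# Sketch of the JIT build/load that produces the ninja build and
# "Loading extension module custom_device_extension..." lines in this group.
import torch.utils.cpp_extension as cpp_ext

module = cpp_ext.load(
    name="custom_device_extension",                            # name from the log
    sources=["cpp_extensions/open_registration_extension.cpp"],
    extra_include_paths=["cpp_extensions"],  # mirrors the -I flag in the c++ line
    verbose=True,                            # prints the build messages seen here
)
```

The compiled .so is cached under the extensions root shown above (here /var/lib/jenkins/.cache/torch_extensions/py310_cu117), so a repeated load of the same sources can skip recompilation.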
2023-01-11T22:00:39.3148437Z Creating extension directory /var/lib/jenkins/.cache/torch_extensions/py310_cu117/custom_device_extension... 2023-01-11T22:00:39.3148810Z Emitting ninja build file /var/lib/jenkins/.cache/torch_extensions/py310_cu117/custom_device_extension/build.ninja... 2023-01-11T22:00:39.3149292Z Building extension module custom_device_extension... 2023-01-11T22:00:39.3149956Z Using envvar MAX_JOBS (6) as the number of workers... 2023-01-11T22:00:39.3151458Z [1/2] c++ -MMD -MF open_registration_extension.o.d -DTORCH_EXTENSION_NAME=custom_device_extension -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/var/lib/jenkins/workspace/test/cpp_extensions -isystem /opt/conda/lib/python3.10/site-packages/torch/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -g -c /var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension.cpp -o open_registration_extension.o 2023-01-11T22:00:39.3152754Z [2/2] c++ open_registration_extension.o -shared -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o custom_device_extension.so 2023-01-11T22:00:39.3153131Z Loading extension module custom_device_extension... 2023-01-11T22:00:39.3153330Z ok (21.630s) 2023-01-11T22:00:39.3153433Z 2023-01-11T22:00:39.3153633Z ---------------------------------------------------------------------- 2023-01-11T22:00:39.3153873Z Ran 1 test in 21.642s 2023-01-11T22:00:39.3153973Z 2023-01-11T22:00:39.3154033Z OK 2023-01-11T22:00:39.3154123Z 2023-01-11T22:00:39.3154209Z Generating XML reports... 2023-01-11T22:00:39.3154719Z Generated XML report: test-reports/python-unittest/test_cpp_extensions_open_device_registration/TEST-TestCppExtensionOpenRgistration-20230111220017.xml 2023-01-11T22:00:39.3155025Z 2023-01-11T22:00:39.3155255Z ##[endgroup] 2023-01-11T22:00:39.3155736Z FINISHED PRINTING LOG FILE of test_cpp_extensions_open_device_registration (/var/lib/jenkins/workspace/test/test-reports/test_cpp_extensions_open_device_registration_33_5dxtm) 2023-01-11T22:00:39.3156011Z 2023-01-11T22:00:39.3156208Z Running test_cuda_nvml_based_avail ... [2023-01-11 22:00:39.314495] 2023-01-11T22:00:39.3156728Z Executing ['/opt/conda/bin/python', '-bb', 'test_cuda_nvml_based_avail.py', '-v', '--subprocess', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:00:39.314745] 2023-01-11T22:00:40.8673451Z 2023-01-11T22:00:40.8673980Z Expand the folded group to see the log file of test_cuda_nvml_based_avail 2023-01-11T22:00:40.8675313Z ##[group]PRINTING LOG FILE of test_cuda_nvml_based_avail (/var/lib/jenkins/workspace/test/test-reports/test_cuda_nvml_based_avail_3xzfsqj4) 2023-01-11T22:00:40.8675884Z CUDA not available, skipping tests 2023-01-11T22:00:40.8676024Z 2023-01-11T22:00:40.8676232Z ##[endgroup] 2023-01-11T22:00:40.8677032Z FINISHED PRINTING LOG FILE of test_cuda_nvml_based_avail (/var/lib/jenkins/workspace/test/test-reports/test_cuda_nvml_based_avail_3xzfsqj4) 2023-01-11T22:00:40.8677405Z 2023-01-11T22:00:40.8677598Z Running test_cuda_primary_ctx ... 
[2023-01-11 22:00:40.867458] 2023-01-11T22:00:40.8679051Z Executing ['/opt/conda/bin/python', '-bb', 'test_cuda_primary_ctx.py', '-v', '--subprocess', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:00:40.867697] 2023-01-11T22:00:42.3949284Z 2023-01-11T22:00:42.3949926Z Expand the folded group to see the log file of test_cuda_primary_ctx 2023-01-11T22:00:42.3950908Z ##[group]PRINTING LOG FILE of test_cuda_primary_ctx (/var/lib/jenkins/workspace/test/test-reports/test_cuda_primary_ctx_365vsslp) 2023-01-11T22:00:42.3951238Z CUDA not available, skipping tests 2023-01-11T22:00:42.3951370Z 2023-01-11T22:00:42.3951631Z ##[endgroup] 2023-01-11T22:00:42.3952126Z FINISHED PRINTING LOG FILE of test_cuda_primary_ctx (/var/lib/jenkins/workspace/test/test-reports/test_cuda_primary_ctx_365vsslp) 2023-01-11T22:00:42.3952340Z 2023-01-11T22:00:42.3952543Z Running test_cuda_trace ... [2023-01-11 22:00:42.394991] 2023-01-11T22:00:42.3954046Z Executing ['/opt/conda/bin/python', '-bb', 'test_cuda_trace.py', '-v', '--subprocess', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:00:42.395222] 2023-01-11T22:00:43.9320158Z 2023-01-11T22:00:43.9320678Z Expand the folded group to see the log file of test_cuda_trace 2023-01-11T22:00:43.9321333Z ##[group]PRINTING LOG FILE of test_cuda_trace (/var/lib/jenkins/workspace/test/test-reports/test_cuda_trace_yr1b68oh) 2023-01-11T22:00:43.9321662Z CUDA not available, skipping tests 2023-01-11T22:00:43.9321794Z 2023-01-11T22:00:43.9322006Z ##[endgroup] 2023-01-11T22:00:43.9322541Z FINISHED PRINTING LOG FILE of test_cuda_trace (/var/lib/jenkins/workspace/test/test-reports/test_cuda_trace_yr1b68oh) 2023-01-11T22:00:43.9322894Z 2023-01-11T22:00:43.9323160Z Running test_dispatch ... [2023-01-11 22:00:43.932111] 2023-01-11T22:00:43.9325064Z Executing ['/opt/conda/bin/python', '-bb', 'test_dispatch.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:00:43.932325] 2023-01-11T22:01:08.9405323Z 2023-01-11T22:01:08.9405707Z Expand the folded group to see the log file of test_dispatch 2023-01-11T22:01:08.9406612Z ##[group]PRINTING LOG FILE of test_dispatch (/var/lib/jenkins/workspace/test/test-reports/test_dispatch_tl3w7n2s) 2023-01-11T22:01:08.9407766Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T22:01:08.9408052Z 2023-01-11T22:01:08.9408194Z Running tests... 2023-01-11T22:01:08.9409911Z ---------------------------------------------------------------------- 2023-01-11T22:01:08.9410566Z Test results will be stored in test-reports/python-unittest/test_dispatch 2023-01-11T22:01:08.9411083Z test_all_invariants (__main__.TestDispatch) ... ok (0.233s) 2023-01-11T22:01:08.9411448Z test_computed_table (__main__.TestDispatch) ... ok (7.012s) 2023-01-11T22:01:08.9411784Z test_computed_table_with_ambiguous_autogradother (__main__.TestDispatch) ... ok (0.011s) 2023-01-11T22:01:08.9412102Z test_computed_table_with_autograd (__main__.TestDispatch) ... ok (0.002s) 2023-01-11T22:01:08.9412433Z test_computed_table_with_cpu_autograd_defaultbackend (__main__.TestDispatch) ... ok (0.197s) 2023-01-11T22:01:08.9412770Z test_computed_table_with_cpu_autograd_math (__main__.TestDispatch) ... ok (0.211s) 2023-01-11T22:01:08.9413095Z test_computed_table_with_cpu_autograd_math_defaultbackend (__main__.TestDispatch) ... ok (6.608s) 2023-01-11T22:01:08.9413430Z test_computed_table_with_cpu_defaultbackend (__main__.TestDispatch) ... 
ok (0.010s) 2023-01-11T22:01:08.9413738Z test_computed_table_with_cpu_math (__main__.TestDispatch) ... ok (0.011s) 2023-01-11T22:01:08.9414059Z test_computed_table_with_cpu_math_autogradcpu_fallthrough (__main__.TestDispatch) ... ok (0.002s) 2023-01-11T22:01:08.9414379Z test_computed_table_with_math (__main__.TestDispatch) ... ok (0.002s) 2023-01-11T22:01:08.9414639Z test_def (__main__.TestDispatch) ... ok (6.688s) 2023-01-11T22:01:08.9414903Z test_def_impl_schema_mismatch (__main__.TestDispatch) ... ok (0.011s) 2023-01-11T22:01:08.9415155Z test_def_only (__main__.TestDispatch) ... ok (0.001s) 2023-01-11T22:01:08.9415425Z test_def_with_explicit_alias (__main__.TestDispatch) ... ok (0.001s) 2023-01-11T22:01:08.9415702Z test_def_with_inference (__main__.TestDispatch) ... ok (0.223s) 2023-01-11T22:01:08.9416007Z test_dispatch_print_registrations_for_dispatch_key_invalid (__main__.TestDispatch) ... ok (0.002s) 2023-01-11T22:01:08.9416320Z test_find_dangling_impls (__main__.TestDispatch) ... ok (0.001s) 2023-01-11T22:01:08.9416801Z test_find_dangling_impls_ext (__main__.TestDispatch) ... Using /var/lib/jenkins/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... 2023-01-11T22:01:08.9417207Z Creating extension directory /var/lib/jenkins/.cache/torch_extensions/py310_cu117/dangling_impl_extension... 2023-01-11T22:01:08.9417580Z Emitting ninja build file /var/lib/jenkins/.cache/torch_extensions/py310_cu117/dangling_impl_extension/build.ninja... 2023-01-11T22:01:08.9417901Z Building extension module dangling_impl_extension... 2023-01-11T22:01:08.9418158Z Using envvar MAX_JOBS (6) as the number of workers... 2023-01-11T22:01:08.9419702Z [1/2] c++ -MMD -MF dangling_impl_extension.o.d -DTORCH_EXTENSION_NAME=dangling_impl_extension -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /opt/conda/lib/python3.10/site-packages/torch/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -g -c /var/lib/jenkins/workspace/test/cpp_extensions/dangling_impl_extension.cpp -o dangling_impl_extension.o 2023-01-11T22:01:08.9420872Z [2/2] c++ dangling_impl_extension.o -shared -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o dangling_impl_extension.so 2023-01-11T22:01:08.9421246Z Loading extension module dangling_impl_extension... 2023-01-11T22:01:08.9421496Z ok (1.526s) 2023-01-11T22:01:08.9421715Z test_impl_only (__main__.TestDispatch) ... ok (0.213s) 2023-01-11T22:01:08.9421993Z test_multiple_def_alias_defaulting (__main__.TestDispatch) ... ok (0.008s) 2023-01-11T22:01:08.9422277Z test_multiple_def_alias_mismatch (__main__.TestDispatch) ... ok (0.008s) 2023-01-11T22:01:08.9422741Z test_multiple_def_error (__main__.TestDispatch) ... ok (0.008s) 2023-01-11T22:01:08.9423011Z test_multiple_fallback (__main__.TestDispatch) ... ok (0.009s) 2023-01-11T22:01:08.9423407Z test_overwrite_math (__main__.TestDispatch) ... 
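Editor's note: the TestDispatch cases above exercise dispatcher def/impl registration, and the warnings printed next for test_overwrite_math show what happens when a second kernel is registered for the same operator and dispatch key. As an illustrative sketch only (the test uses its own bindings, not this API), the same kind of registration can be expressed with the public torch.library API; the namespace and function names here are made up.

```python
# Illustrative operator/kernel registration via torch.library.
import torch
from torch.library import Library

lib = Library("my_ns", "DEF")            # define a fresh operator namespace
lib.define("foo(Tensor x) -> Tensor")    # schema, like the test's "...::foo"

def foo_cpu(x):
    return x + 1

lib.impl("foo", foo_cpu, "CPU")          # register a kernel for the CPU key

print(torch.ops.my_ns.foo(torch.zeros(2)))  # tensor([1., 1.])
```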
[W OperatorEntry.cpp:159] Warning: Overriding a previously registered kernel for the same operator and the same dispatch key 2023-01-11T22:01:08.9423752Z operator: __test45643__::foo 2023-01-11T22:01:08.9423937Z no debug info 2023-01-11T22:01:08.9424117Z dispatch key: (catch all) 2023-01-11T22:01:08.9424295Z previous kernel: fn1 2023-01-11T22:01:08.9424512Z new kernel: fn2 (function registerKernel) 2023-01-11T22:01:08.9424857Z [W OperatorEntry.cpp:159] Warning: Overriding a previously registered kernel for the same operator and the same dispatch key 2023-01-11T22:01:08.9425144Z operator: __test45644__::foo 2023-01-11T22:01:08.9425325Z no debug info 2023-01-11T22:01:08.9425505Z dispatch key: (catch all) 2023-01-11T22:01:08.9425682Z previous kernel: fn1 2023-01-11T22:01:08.9425896Z new kernel: fn2 (function registerKernel) 2023-01-11T22:01:08.9426096Z ok (0.001s) 2023-01-11T22:01:08.9426319Z test_autogradother (__main__.TestPythonDispatcher) ... ok (0.001s) 2023-01-11T22:01:08.9426606Z test_basic (__main__.TestPythonDispatcher) ... ok (0.001s) 2023-01-11T22:01:08.9426909Z test_defaultbackend_autogradcpu (__main__.TestPythonDispatcher) ... ok (0.001s) 2023-01-11T22:01:08.9427228Z test_defaultbackend_math (__main__.TestPythonDispatcher) ... ok (0.001s) 2023-01-11T22:01:08.9427530Z test_duplicate_registrations (__main__.TestPythonDispatcher) ... ok (0.001s) 2023-01-11T22:01:08.9427842Z test_math_autogradcpu (__main__.TestPythonDispatcher) ... ok (0.001s) 2023-01-11T22:01:08.9428169Z test_quantized_structured_not_implemented (__main__.TestPythonDispatcher) ... ok (0.024s) 2023-01-11T22:01:08.9428360Z 2023-01-11T22:01:08.9428571Z ---------------------------------------------------------------------- 2023-01-11T22:01:08.9428805Z Ran 32 tests in 23.029s 2023-01-11T22:01:08.9428920Z 2023-01-11T22:01:08.9428981Z OK 2023-01-11T22:01:08.9429069Z 2023-01-11T22:01:08.9429153Z Generating XML reports... 2023-01-11T22:01:08.9429538Z Generated XML report: test-reports/python-unittest/test_dispatch/TEST-TestDispatch-20230111220045.xml 2023-01-11T22:01:08.9430061Z Generated XML report: test-reports/python-unittest/test_dispatch/TEST-TestPythonDispatcher-20230111220045.xml 2023-01-11T22:01:08.9430299Z 2023-01-11T22:01:08.9430572Z ##[endgroup] 2023-01-11T22:01:08.9430939Z FINISHED PRINTING LOG FILE of test_dispatch (/var/lib/jenkins/workspace/test/test-reports/test_dispatch_tl3w7n2s) 2023-01-11T22:01:08.9431228Z 2023-01-11T22:01:08.9431383Z Running test_fx ... [2023-01-11 22:01:08.940718] 2023-01-11T22:01:08.9431839Z Executing ['/opt/conda/bin/python', '-bb', 'test_fx.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:01:08.940977] 2023-01-11T22:05:43.9983982Z 2023-01-11T22:05:43.9984903Z Expand the folded group to see the log file of test_fx 2023-01-11T22:05:43.9985810Z ##[group]PRINTING LOG FILE of test_fx (/var/lib/jenkins/workspace/test/test-reports/test_fx_jso06ja3) 2023-01-11T22:05:43.9986135Z 2023-01-11T22:05:43.9990619Z Running tests... 2023-01-11T22:05:43.9991406Z ---------------------------------------------------------------------- 2023-01-11T22:05:43.9991873Z Test results will be stored in test-reports/python-unittest/test_fx 2023-01-11T22:05:43.9992259Z test_annotate (fx.test_gradual_type.AnnotationsTest) ... ok (0.018s) 2023-01-11T22:05:43.9992689Z test_annotations (fx.test_gradual_type.AnnotationsTest) 2023-01-11T22:05:43.9993199Z Test type annotations in the forward function. ... 
ok (0.003s) 2023-01-11T22:05:43.9993623Z test_broadcasting1 (fx.test_gradual_type.AnnotationsTest) ... ok (0.001s) 2023-01-11T22:05:43.9994619Z test_broadcasting2 (fx.test_gradual_type.AnnotationsTest) ... ok (0.001s) 2023-01-11T22:05:43.9994920Z test_broadcasting3 (fx.test_gradual_type.AnnotationsTest) ... ok (0.001s) 2023-01-11T22:05:43.9995316Z test_consistency (fx.test_gradual_type.AnnotationsTest) 2023-01-11T22:05:43.9995749Z Test the consistency relation. ... ok (0.001s) 2023-01-11T22:05:43.9996171Z test_precision (fx.test_gradual_type.AnnotationsTest) 2023-01-11T22:05:43.9996488Z Test the consistency relation. ... ok (0.001s) 2023-01-11T22:05:43.9996798Z test_banned_list (fx.test_cse_pass.TestCSEPass) ... ok (0.061s) 2023-01-11T22:05:43.9997071Z test_empty (fx.test_cse_pass.TestCSEPass) ... ok (0.007s) 2023-01-11T22:05:43.9997439Z test_immutable_list_multiple_entries (fx.test_cse_pass.TestCSEPass) ... ok (0.059s) 2023-01-11T22:05:43.9997863Z test_immutable_list_type (fx.test_cse_pass.TestCSEPass) ... ok (0.059s) 2023-01-11T22:05:43.9998298Z test_kwarg (fx.test_cse_pass.TestCSEPass) ... ok (0.027s) 2023-01-11T22:05:43.9998812Z test_nested_immutable_list_type (fx.test_cse_pass.TestCSEPass) ... ok (0.030s) 2023-01-11T22:05:43.9999298Z test_nochange (fx.test_cse_pass.TestCSEPass) ... ok (0.038s) 2023-01-11T22:05:43.9999863Z test_rand_like (fx.test_cse_pass.TestCSEPass) ... ok (0.026s) 2023-01-11T22:05:44.0000319Z test_rand_n (fx.test_cse_pass.TestCSEPass) ... ok (0.026s) 2023-01-11T22:05:44.0000709Z test_random (fx.test_cse_pass.TestCSEPass) ... ok (0.081s) 2023-01-11T22:05:44.0000989Z test_simple (fx.test_cse_pass.TestCSEPass) ... ok (0.045s) 2023-01-11T22:05:44.0001252Z test_simple_2 (fx.test_cse_pass.TestCSEPass) ... ok (0.059s) 2023-01-11T22:05:44.0001637Z test_simple_multiple_same_ops (fx.test_cse_pass.TestCSEPass) ... ok (0.058s) 2023-01-11T22:05:44.0002103Z test_two_args (fx.test_cse_pass.TestCSEPass) ... ok (0.061s) 2023-01-11T22:05:44.0002552Z test_two_args_default (fx.test_cse_pass.TestCSEPass) ... ok (0.058s) 2023-01-11T22:05:44.0002942Z test_correctness_CSEPass_MutationInput_cpu (fx.test_common_passes.TestCommonPass) ... ok (0.030s) 2023-01-11T22:05:44.0003372Z test_correctness_CSEPass_MutationMetadata_cpu (fx.test_common_passes.TestCommonPass) ... ok (0.016s) 2023-01-11T22:05:44.0003775Z test_correctness_CSEPass_MutationTorchTensorCall_cpu (fx.test_common_passes.TestCommonPass) ... ok (0.028s) 2023-01-11T22:05:44.0004556Z test_correctness_CSEPass_Mutation_cpu (fx.test_common_passes.TestCommonPass) ... ok (0.029s) 2023-01-11T22:05:44.0005164Z test_correctness_CSEPass_ReturnList_cpu (fx.test_common_passes.TestCommonPass) ... ok (0.041s) 2023-01-11T22:05:44.0005734Z test_correctness_CSEPass_TakeList_cpu (fx.test_common_passes.TestCommonPass) ... ok (0.015s) 2023-01-11T22:05:44.0006350Z test_correctness_factory_CSEPass_FactoryFunctionCall_cpu (fx.test_common_passes.TestCommonPass) ... ok (0.020s) 2023-01-11T22:05:44.0007093Z test_correctness_factory_CSEPass_MutationFactory_cpu (fx.test_common_passes.TestCommonPass) ... ok (0.028s) 2023-01-11T22:05:44.0007891Z test_check_inline_non_const (fx.test_fx_const_fold.TestConstFold) 2023-01-11T22:05:44.0009284Z Perform constant folding conversion and check that the non-const module is inlined ... /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/const_fold.py:248: UserWarning: Attempted to insert a get_attr Node with no underlying reference in the owning GraphModule! 
Call GraphModule.add_submodule to add the necessary submodule, GraphModule.add_parameter to add the necessary Parameter, or nn.Module.register_buffer to add the necessary buffer 2023-01-11T22:05:44.0010002Z new_node = root_const_gm.graph.get_attr(in_node.target) 2023-01-11T22:05:44.0010219Z ok (0.008s) 2023-01-11T22:05:44.0010468Z test_check_inline_non_const_mult_return (fx.test_fx_const_fold.TestConstFold) 2023-01-11T22:05:44.0010876Z Perform constant folding conversion and check that the non-const module is inlined ... ok (0.006s) 2023-01-11T22:05:44.0011282Z test_check_skip_folding_quant_dequant_pattern (fx.test_fx_const_fold.TestConstFold) 2023-01-11T22:05:44.0011663Z Set up skip_folding_quant_dequant function to skip quant/dequant pattern. ... ok (0.022s) 2023-01-11T22:05:44.0011998Z test_const_fold_basic_one_attr_name_collision (fx.test_fx_const_fold.TestConstFold) 2023-01-11T22:05:44.0012328Z Perform constant folding conversion, from original mod to split constant folding ... ok (0.007s) 2023-01-11T22:05:44.0012740Z test_const_fold_basic_one_attr_no_name_collision (fx.test_fx_const_fold.TestConstFold) 2023-01-11T22:05:44.0013221Z Perform constant folding conversion, from original mod to split constant folding ... ok (0.006s) 2023-01-11T22:05:44.0013566Z test_const_fold_basic_placeholder_reordered (fx.test_fx_const_fold.TestConstFold) 2023-01-11T22:05:44.0013878Z Test code path where placeholder comes after normal op node in FX ... ok (0.003s) 2023-01-11T22:05:44.0014189Z test_const_fold_basic_two_attr (fx.test_fx_const_fold.TestConstFold) 2023-01-11T22:05:44.0014505Z Perform constant folding conversion, from original mod to split constant ... ok (0.005s) 2023-01-11T22:05:44.0014820Z test_const_fold_basic_two_attr_three_input (fx.test_fx_const_fold.TestConstFold) 2023-01-11T22:05:44.0015146Z Perform constant folding conversion, from original mod to split constant ... ok (0.005s) 2023-01-11T22:05:44.0015493Z test_const_fold_has_inlined_call_module_node (fx.test_fx_const_fold.TestConstFold) ... ok (0.006s) 2023-01-11T22:05:44.0016061Z test_const_fold_module_attr (fx.test_fx_const_fold.TestConstFold) ... /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:973: UserWarning: Failed to fetch module mod! 2023-01-11T22:05:44.0016428Z warnings.warn(f"Failed to fetch module {module_path}!") 2023-01-11T22:05:44.0016638Z ok (0.117s) 2023-01-11T22:05:44.0016880Z test_const_fold_multi_const_folded_attrs (fx.test_fx_const_fold.TestConstFold) 2023-01-11T22:05:44.0017197Z Perform constant folding conversion, from original mod to split constant ... ok (0.008s) 2023-01-11T22:05:44.0017499Z test_const_fold_noop (fx.test_fx_const_fold.TestConstFold) 2023-01-11T22:05:44.0017790Z Check that a graph with no constant folding is handled correctly. ... ok (0.003s) 2023-01-11T22:05:44.0018096Z test_const_fold_submod_hierarchy (fx.test_fx_const_fold.TestConstFold) 2023-01-11T22:05:44.0018646Z Perform constant folding conversion, from original mod to split constant folding ... /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py:973: UserWarning: Failed to fetch module my_mod! 2023-01-11T22:05:44.0019039Z warnings.warn(f"Failed to fetch module {module_path}!") 2023-01-11T22:05:44.0019362Z ok (0.005s) 2023-01-11T22:05:44.0019597Z test_const_fold_tensor_meta (fx.test_fx_const_fold.TestConstFold) ... ok (0.011s) 2023-01-11T22:05:44.0019925Z test_const_fold_unused_placeholder (fx.test_fx_const_fold.TestConstFold) ... 
ok (0.005s) 2023-01-11T22:05:44.0020237Z test_dict_output (fx.test_fx_const_fold.TestConstFold) ... ok (0.005s) 2023-01-11T22:05:44.0020575Z test_fold_module (fx.test_fx_const_fold.TestConstFold) 2023-01-11T22:05:44.0020835Z Perform constant folding with a call_module node. ... ok (0.005s) 2023-01-11T22:05:44.0021112Z test_retain_node_meta (fx.test_fx_const_fold.TestConstFold) 2023-01-11T22:05:44.0021421Z Perform constant folding conversion, and validate that node meta is retained. ... ok (0.006s) 2023-01-11T22:05:44.0021725Z test_three_outputs (fx.test_fx_const_fold.TestConstFold) ... ok (0.005s) 2023-01-11T22:05:44.0022022Z test_two_outputs (fx.test_fx_const_fold.TestConstFold) ... ok (0.005s) 2023-01-11T22:05:44.0022547Z test_param_dim_const (fx.test_fx_param_shape_control_flow.TestConstParamShapeInControlFlow) ... ok (0.006s) 2023-01-11T22:05:44.0022978Z test_param_ndim_const (fx.test_fx_param_shape_control_flow.TestConstParamShapeInControlFlow) ... ok (0.005s) 2023-01-11T22:05:44.0023533Z test_param_nelement_const (fx.test_fx_param_shape_control_flow.TestConstParamShapeInControlFlow) ... ok (0.005s) 2023-01-11T22:05:44.0024038Z test_param_numel_const (fx.test_fx_param_shape_control_flow.TestConstParamShapeInControlFlow) ... ok (0.005s) 2023-01-11T22:05:44.0024454Z test_param_shape_const (fx.test_fx_param_shape_control_flow.TestConstParamShapeInControlFlow) ... ok (0.005s) 2023-01-11T22:05:44.0024869Z test_param_size_const (fx.test_fx_param_shape_control_flow.TestConstParamShapeInControlFlow) ... ok (0.005s) 2023-01-11T22:05:44.0025178Z test_dead_chain (fx.test_dce_pass.TestDCE) 2023-01-11T22:05:44.0025533Z Tests that a chain of two nodes in the graph are DCE'd correctly. ... graph(): 2023-01-11T22:05:44.0025796Z %x : [#users=2] = placeholder[target=x] 2023-01-11T22:05:44.0026054Z %add : [#users=1] = call_function[target=operator.add](args = (%x, 1), kwargs = {}) 2023-01-11T22:05:44.0026362Z %mul : [#users=0] = call_function[target=operator.mul](args = (%add, 7), kwargs = {}) 2023-01-11T22:05:44.0026628Z %attr_1 : [#users=1] = get_attr[target=attr_1] 2023-01-11T22:05:44.0026961Z %add_1 : [#users=1] = call_function[target=operator.add](args = (%x, %attr_1), kwargs = {}) 2023-01-11T22:05:44.0027232Z return add_1 2023-01-11T22:05:44.0027396Z ok (0.003s) 2023-01-11T22:05:44.0027599Z test_dead_getattr (fx.test_dce_pass.TestDCE) 2023-01-11T22:05:44.0027908Z Tests that a getatrr in the graph is DCE'd correctly. ... graph(): 2023-01-11T22:05:44.0028159Z %x : [#users=2] = placeholder[target=x] 2023-01-11T22:05:44.0028425Z %add : [#users=1] = call_function[target=operator.add](args = (%x, 1), kwargs = {}) 2023-01-11T22:05:44.0028680Z %attr_1 : [#users=1] = get_attr[target=attr_1] 2023-01-11T22:05:44.0029070Z %mul : [#users=0] = call_function[target=operator.mul](args = (%add, %attr_1), kwargs = {}) 2023-01-11T22:05:44.0029415Z %add_1 : [#users=1] = call_function[target=operator.add](args = (%x, 11), kwargs = {}) 2023-01-11T22:05:44.0029637Z return add_1 2023-01-11T22:05:44.0029801Z ok (0.003s) 2023-01-11T22:05:44.0030011Z test_dead_placeholder (fx.test_dce_pass.TestDCE) 2023-01-11T22:05:44.0030385Z Tests that a placeholder in the graph is not DCE'd, as that would change ... 
graph(): 2023-01-11T22:05:44.0030784Z %x : [#users=1] = placeholder[target=x] 2023-01-11T22:05:44.0031082Z %y : [#users=0] = placeholder[target=y] 2023-01-11T22:05:44.0031495Z %add : [#users=1] = call_function[target=operator.add](args = (%x, 7), kwargs = {}) 2023-01-11T22:05:44.0031869Z return add 2023-01-11T22:05:44.0032121Z ok (0.002s) 2023-01-11T22:05:44.0032502Z test_dead_placeholder_with_user (fx.test_dce_pass.TestDCE) 2023-01-11T22:05:44.0033146Z Tests that a placeholder in the graph is not DCE'd, as that would change ... graph(): 2023-01-11T22:05:44.0033569Z %x : [#users=1] = placeholder[target=x] 2023-01-11T22:05:44.0033929Z %y : [#users=1] = placeholder[target=y] 2023-01-11T22:05:44.0034289Z %add : [#users=0] = call_function[target=operator.add](args = (%y, 2), kwargs = {}) 2023-01-11T22:05:44.0034791Z %add_1 : [#users=1] = call_function[target=operator.add](args = (%x, 7), kwargs = {}) 2023-01-11T22:05:44.0035305Z return add_1 2023-01-11T22:05:44.0035568Z ok (0.003s) 2023-01-11T22:05:44.0035919Z test_keep_module_with_side_effects (fx.test_dce_pass.TestDCE) 2023-01-11T22:05:44.0036550Z Test that DCE doesn't remove a module if it's specified as having side effects. ... graph(): 2023-01-11T22:05:44.0037037Z %a : torch.Tensor [#users=2] = placeholder[target=a] 2023-01-11T22:05:44.0037477Z %relu : [#users=0] = call_module[target=relu](args = (%a,), kwargs = {}) 2023-01-11T22:05:44.0037971Z %mul : [#users=1] = call_function[target=operator.mul](args = (%a, 2), kwargs = {}) 2023-01-11T22:05:44.0038400Z return mul 2023-01-11T22:05:44.0038675Z ok (0.003s) 2023-01-11T22:05:44.0039107Z test_keep_torch_assert (fx.test_dce_pass.TestDCE) 2023-01-11T22:05:44.0039763Z Test that DCE doesn't remove torch._assert since it has side effects. ... graph(): 2023-01-11T22:05:44.0040259Z %a : torch.Tensor [#users=2] = placeholder[target=a] 2023-01-11T22:05:44.0040896Z %equal : [#users=1] = call_function[target=torch.equal](args = (%a, %a), kwargs = {}) 2023-01-11T22:05:44.0041235Z %_assert : [#users=0] = call_function[target=torch._assert](args = (%equal, a must equal a), kwargs = {}) 2023-01-11T22:05:44.0041561Z %mul : [#users=1] = call_function[target=operator.mul](args = (%a, 2), kwargs = {}) 2023-01-11T22:05:44.0041780Z return mul 2023-01-11T22:05:44.0041946Z ok (0.003s) 2023-01-11T22:05:44.0042143Z test_simple (fx.test_dce_pass.TestDCE) 2023-01-11T22:05:44.0042457Z Tests that a single node in the graph is DCE'd correctly. ... graph(): 2023-01-11T22:05:44.0042715Z %x : [#users=2] = placeholder[target=x] 2023-01-11T22:05:44.0042980Z %add : [#users=0] = call_function[target=operator.add](args = (%x, 1), kwargs = {}) 2023-01-11T22:05:44.0043242Z %attr_1 : [#users=1] = get_attr[target=attr_1] 2023-01-11T22:05:44.0043510Z %add_1 : [#users=1] = call_function[target=operator.add](args = (%x, %attr_1), kwargs = {}) 2023-01-11T22:05:44.0043759Z return add_1 2023-01-11T22:05:44.0043924Z ok (0.003s) 2023-01-11T22:05:44.0044117Z test_all_input_nodes (__main__.TestFX) ... ok (0.013s) 2023-01-11T22:05:44.0044373Z test_annotation_with_future (__main__.TestFX) ... ok (0.009s) 2023-01-11T22:05:44.0045158Z test_annotations_empty_tuple (__main__.TestFX) ... /opt/conda/lib/python3.10/site-packages/torch/jit/_check.py:181: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in `__init__`. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in `torch.jit.Attribute`. 
2023-01-11T22:05:44.0045775Z warnings.warn("The TorchScript type system doesn't support " 2023-01-11T22:05:44.0045981Z ok (0.027s) 2023-01-11T22:05:44.0046215Z test_annotations_with_forward_references (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0046519Z test_annotations_with_no_forward_references (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0046847Z test_annotations_with_non_torch_reference_and_internal_forward_references (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0047211Z test_annotations_with_non_torch_reference_and_no_internal_forward_references (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0047506Z test_args_kwargs (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0047748Z test_args_kwargs_no_self (__main__.TestFX) ... ok (0.001s) 2023-01-11T22:05:44.0048081Z test_assert (__main__.TestFX) ... skip: Does not work on Python-3.10 (0.000s) 2023-01-11T22:05:44.0048584Z test_ast_rewriter_reassigns_submodules (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0048861Z test_ast_rewriter_rewrites_assert (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0049136Z test_ast_rewriter_rewrites_assert_with_message (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0049406Z test_ast_rewriter_wrap (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0049667Z test_ast_rewriter_wrap_fn_directly (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0049993Z test_ast_rewriter_wrap_with_submodule (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0050264Z test_ast_rewriter_wrapped_via_decorator (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0050572Z test_ast_rewriter_wrapped_via_decorator_and_transformed (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0050865Z test_autowrap_functions (__main__.TestFX) ... ok (0.061s) 2023-01-11T22:05:44.0051110Z test_concrete_arg_none_assert (__main__.TestFX) ... ok (0.005s) 2023-01-11T22:05:44.0051367Z test_construct_root_dict (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0051604Z test_copy_it (__main__.TestFX) ... ok (0.001s) 2023-01-11T22:05:44.0051836Z test_copy_no_remap (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0052056Z test_ctx_mgr (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0052291Z test_custom_codegen (__main__.TestFX) ... ok (0.021s) 2023-01-11T22:05:44.0052553Z test_custom_codegen_with_transformer (__main__.TestFX) ... ok (0.005s) 2023-01-11T22:05:44.0052835Z test_custom_import (__main__.TestFX) ... ok (0.002s) 2023-01-11T22:05:44.0053096Z test_custom_proxy_dynamic_value (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0053388Z test_custom_proxy_input_dependent_control_flow (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0053655Z test_custom_proxy_type (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0053914Z test_custom_proxy_type_literal (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0054229Z test_custom_traceback_not_raised_when_exception_source_is_submodule (__main__.TestFX) ... ok (0.009s) 2023-01-11T22:05:44.0054580Z test_custom_traceback_raised_when_exception_source_is_graphmodule (__main__.TestFX) ... ok (0.006s) 2023-01-11T22:05:44.0054887Z test_deepcopy_graph_with_tracer_cls (__main__.TestFX) ... ok (0.001s) 2023-01-11T22:05:44.0055186Z test_deepcopy_graphmodule_with_transform (__main__.TestFX) ... ok (0.005s) 2023-01-11T22:05:44.0055477Z test_deepcopy_recursion_depth (__main__.TestFX) ... ok (0.059s) 2023-01-11T22:05:44.0055724Z test_deepcopy_tracer (__main__.TestFX) ... 
ok (0.003s) 2023-01-11T22:05:44.0055989Z test_deepcopy_with_submods_params (__main__.TestFX) ... ok (0.005s) 2023-01-11T22:05:44.0056275Z test_delete_unused_submodules_leaf (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0056523Z test_dict (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0056746Z test_direct_param_use (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0057001Z test_disallow_override (__main__.TestFX) ... ok (0.005s) 2023-01-11T22:05:44.0057244Z test_ellipsis (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0057477Z test_empty_graph_codegen (__main__.TestFX) ... ok (0.001s) 2023-01-11T22:05:44.0057726Z test_erase_node_error (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0057973Z test_example_shape_prop (__main__.TestFX) ... ok (0.005s) 2023-01-11T22:05:44.0058199Z test_find_uses (__main__.TestFX) ... ok (0.002s) 2023-01-11T22:05:44.0058456Z test_fn_type_annotation_empty (__main__.TestFX) ... ok (0.019s) 2023-01-11T22:05:44.0058715Z test_fn_type_annotations (__main__.TestFX) ... ok (0.027s) 2023-01-11T22:05:44.0058946Z test_fx_and_or (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0059184Z test_fx_create_arg (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0059421Z test_fx_shifts (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0059650Z test_fx_stateless (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0059889Z test_get_torch_func_signature (__main__.TestFX) ... ok (0.101s) 2023-01-11T22:05:44.0060182Z test_getitem (__main__.TestFX) ... skip: Will be checked in test_getitem_subproc (0.000s) 2023-01-11T22:05:44.0060465Z test_getitem_subproc (__main__.TestFX) ... ok (0.031s) 2023-01-11T22:05:44.0060705Z test_graph_edit_with_proxy (__main__.TestFX) ... ok (0.006s) 2023-01-11T22:05:44.0073562Z test_graph_fns (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0073912Z test_graph_module (__main__.TestFX) ... ok (0.037s) 2023-01-11T22:05:44.0074316Z test_graph_module_init_buffer_param_copied_dict_init (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0074648Z test_graph_module_init_buffer_param_copied_mod_init (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0074959Z test_graph_module_replicate_for_dp (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0075220Z test_graph_unique_names (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0075479Z test_graph_unique_names_manual (__main__.TestFX) ... ok (0.001s) 2023-01-11T22:05:44.0075738Z test_immutable_dict_pytree_ops (__main__.TestFX) ... ok (0.001s) 2023-01-11T22:05:44.0075989Z test_immutable_list_pytree_ops (__main__.TestFX) ... ok (0.001s) 2023-01-11T22:05:44.0076234Z test_imul_code_print (__main__.TestFX) ... ok (0.001s) 2023-01-11T22:05:44.0076460Z test_inf_nan (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0076681Z test_inf_nan_kwds (__main__.TestFX) ... ok (0.002s) 2023-01-11T22:05:44.0076952Z test_inline_graph (__main__.TestFX) ... ok (0.006s) 2023-01-11T22:05:44.0077199Z test_insertion_point (__main__.TestFX) ... ok (0.002s) 2023-01-11T22:05:44.0077445Z test_interpreter (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0077693Z test_interpreter_default_args (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0077959Z test_interpreter_gc_values (__main__.TestFX) ... ok (0.353s) 2023-01-11T22:05:44.0078227Z test_interpreter_noop_resnet18 (__main__.TestFX) ... ok (0.418s) 2023-01-11T22:05:44.0078495Z test_interpreter_not_enough_args (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0078768Z test_interpreter_onthefly_swap (__main__.TestFX) ... 
ok (0.004s) 2023-01-11T22:05:44.0079138Z test_interpreter_partial_eval (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0079420Z test_interpreter_run_node_override (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0079680Z test_interpreter_star_args (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0079951Z test_interpreter_with_codegen (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0080201Z test_layout (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0080419Z test_leaf_module (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0080658Z test_matmul_tracing (__main__.TestFX) ... ok (0.005s) 2023-01-11T22:05:44.0080919Z test_module_deepcopy_edit_nodes (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0081162Z test_move_before (__main__.TestFX) ... ok (0.002s) 2023-01-11T22:05:44.0081410Z test_multi_insert_point (__main__.TestFX) ... ok (0.002s) 2023-01-11T22:05:44.0081669Z test_multiple_default_args (__main__.TestFX) ... ok (0.006s) 2023-01-11T22:05:44.0081924Z test_named_tuple_inlined (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0082175Z test_namedtuple_return_qualname (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0082444Z test_namedtuple_return_trace (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0082698Z test_native_callable (__main__.TestFX) ... ok (0.025s) 2023-01-11T22:05:44.0082927Z test_no_mutation (__main__.TestFX) ... ok (0.001s) 2023-01-11T22:05:44.0083164Z test_node_tagging (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0083410Z test_nonetype_annotation (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0084176Z test_partial_trace (__main__.TestFX) ... /opt/conda/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py:564: UserWarning: Was not able to add assertion to guarantee correct input f to specialized function. It is up to the user to make sure that your inputs match the inputs you specialized the function with. 2023-01-11T22:05:44.0084630Z warnings.warn( 2023-01-11T22:05:44.0084744Z 2023-01-11T22:05:44.0084748Z 2023-01-11T22:05:44.0084752Z 2023-01-11T22:05:44.0084833Z def forward(self, x, y_1): 2023-01-11T22:05:44.0085032Z eq = y_1 == True; y_1 = None 2023-01-11T22:05:44.0085387Z _assert = torch._assert(eq, 'y has been specialized to have value True but got another value'); eq = None 2023-01-11T22:05:44.0085649Z mul = 2 * x; x = None 2023-01-11T22:05:44.0085863Z return mul 2023-01-11T22:05:44.0086023Z 2023-01-11T22:05:44.0086162Z ok (0.007s) 2023-01-11T22:05:44.0086373Z test_pickle_custom_import (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0086633Z test_pickle_graphmodule (__main__.TestFX) ... ok (0.006s) 2023-01-11T22:05:44.0086884Z test_pickle_nonetype_annotation (__main__.TestFX) ... ok (0.006s) 2023-01-11T22:05:44.0087151Z test_pickle_torch_custom_ops (__main__.TestFX) ... ok (0.007s) 2023-01-11T22:05:44.0087692Z test_prepend_self (__main__.TestFX) ... /opt/conda/lib/python3.10/site-packages/torch/fx/node.py:244: UserWarning: Trying to prepend a node to itself. This behavior has no effect on the graph. 2023-01-11T22:05:44.0088115Z warnings.warn("Trying to prepend a node to itself. This behavior has no effect on the graph.") 2023-01-11T22:05:44.0088364Z ok (0.001s) 2023-01-11T22:05:44.0088567Z test_pretty_print (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0088845Z test_pretty_print_graph (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0089086Z test_pretty_print_node (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0089337Z test_pretty_print_targets (__main__.TestFX) ... 
ok (0.003s) 2023-01-11T22:05:44.0089602Z test_profiler_ranges_side_effect (__main__.TestFX) ... ok (0.002s) 2023-01-11T22:05:44.0089838Z test_pytree (__main__.TestFX) ... ok (0.101s) 2023-01-11T22:05:44.0090074Z test_pytree_concrete (__main__.TestFX) ... ok (0.005s) 2023-01-11T22:05:44.0090326Z test_reassign_args_kwargs_uses (__main__.TestFX) ... ok (0.002s) 2023-01-11T22:05:44.0090581Z test_regular_and_default_args (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0090827Z test_remove_uses (__main__.TestFX) ... ok (0.001s) 2023-01-11T22:05:44.0091082Z test_remove_uses_with_custom_filter (__main__.TestFX) ... ok (0.001s) 2023-01-11T22:05:44.0091339Z test_replace_input (__main__.TestFX) ... ok (0.002s) 2023-01-11T22:05:44.0091564Z test_replace_uses (__main__.TestFX) ... ok (0.236s) 2023-01-11T22:05:44.0091793Z test_reserved_getattr (__main__.TestFX) 2023-01-11T22:05:44.0092066Z Ensure that we do not name any nodes with a reserved builtin like `getattr` ... ok (0.003s) 2023-01-11T22:05:44.0092335Z test_return_tuple (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0092580Z test_return_type_exists (__main__.TestFX) ... ok (0.018s) 2023-01-11T22:05:44.0092827Z test_script_method_trace (__main__.TestFX) ... ok (0.009s) 2023-01-11T22:05:44.0093069Z test_script_tensor_constant (__main__.TestFX) ... ok (0.018s) 2023-01-11T22:05:44.0093312Z test_sequential (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0093561Z test_shape_prop_aggregate (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0093813Z test_shape_prop_layout (__main__.TestFX) ... ok (0.036s) 2023-01-11T22:05:44.0094049Z test_shape_prop_layout_3d (__main__.TestFX) ... ok (0.684s) 2023-01-11T22:05:44.0094299Z test_single_default_arg (__main__.TestFX) ... ok (0.005s) 2023-01-11T22:05:44.0094539Z test_snake_case (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0094753Z test_sqrt (__main__.TestFX) ... ok (0.008s) 2023-01-11T22:05:44.0094976Z test_stack_traces (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0095232Z test_stack_traces_with_transformer (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0095487Z test_string_literal_return (__main__.TestFX) ... ok (0.002s) 2023-01-11T22:05:44.0095754Z test_submodule_manipulation_API (__main__.TestFX) ... ok (0.019s) 2023-01-11T22:05:44.0096019Z test_symbolic_trace_assert (__main__.TestFX) ... ok (0.010s) 2023-01-11T22:05:44.0096279Z test_symbolic_trace_sequential (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0096521Z test_tensor_attribute (__main__.TestFX) ... ok (0.006s) 2023-01-11T22:05:44.0096779Z test_tensor_attribute_coalseced (__main__.TestFX) ... ok (0.005s) 2023-01-11T22:05:44.0097036Z test_tensor_constant (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0097268Z test_throw_out_variant (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0097546Z test_torch_custom_ops (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0097788Z test_torch_fx_getattr (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0098536Z test_torch_fx_len (__main__.TestFX) ... /opt/conda/lib/python3.10/site-packages/torch/jit/_check.py:181: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in `__init__`. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in `torch.jit.Attribute`. 2023-01-11T22:05:44.0099129Z warnings.warn("The TorchScript type system doesn't support " 2023-01-11T22:05:44.0099349Z ok (0.027s) 2023-01-11T22:05:44.0099559Z test_torch_op_overloads (__main__.TestFX) ... 
ok (0.004s) 2023-01-11T22:05:44.0099815Z test_torchbind_class_attribute_in_fx (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0100109Z test_torchbind_class_attribute_in_fx_tensor_arg (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0100454Z test_trace_buffer_slice (__main__.TestFX) ... skip: Hotfix for SEV remediation (0.001s) 2023-01-11T22:05:44.0100743Z test_trace_dict_int_keys (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0100982Z test_trace_dict_proxy_keys (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0101232Z test_trace_fn_constant (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0101472Z test_trace_function (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0101716Z test_trace_multiple_funcs (__main__.TestFX) ... 2.0.0a0+git8419ddd 2023-01-11T22:05:44.0101926Z ok (0.007s) 2023-01-11T22:05:44.0102166Z test_tracing_graphmodules_as_leaf_submodules (__main__.TestFX) ... ok (0.015s) 2023-01-11T22:05:44.0102600Z test_transformer_multi_outputs (__main__.TestFX) ... ok (0.005s) 2023-01-11T22:05:44.0102864Z test_transformer_noop (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0103116Z test_transformer_op_swap (__main__.TestFX) ... ok (0.005s) 2023-01-11T22:05:44.0103376Z test_tuple_no_subscript (__main__.TestFX) ... ok (0.005s) 2023-01-11T22:05:44.0103612Z test_typename_print (__main__.TestFX) ... ok (0.001s) 2023-01-11T22:05:44.0103849Z test_unpack (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0104095Z test_unpack_dict_better_error (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0104346Z test_unpack_list_better_error (__main__.TestFX) ... ok (0.002s) 2023-01-11T22:05:44.0104603Z test_update_args_api (__main__.TestFX) ... ok (0.002s) 2023-01-11T22:05:44.0104865Z test_update_args_kwargs_yells_at_you (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0105117Z test_update_kwargs_api (__main__.TestFX) ... ok (0.002s) 2023-01-11T22:05:44.0105397Z test_user_friendly_call_provenance_with_function (__main__.TestFX) ... ok (0.017s) 2023-01-11T22:05:44.0105703Z test_user_friendly_call_provenance_with_module (__main__.TestFX) ... ok (0.017s) 2023-01-11T22:05:44.0105962Z test_wrap (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0106196Z test_wrap_decorated_function (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0106457Z test_wrap_fn_directly (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0106705Z test_wrap_with_submodule (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0106939Z test_wrapped_method (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0107185Z test_wrapped_retrace (__main__.TestFX) ... ok (0.004s) 2023-01-11T22:05:44.0107442Z test_wrapped_via_decorator (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0107710Z test_wrapped_via_decorator_and_transformed (__main__.TestFX) ... ok (0.003s) 2023-01-11T22:05:44.0107986Z test_wrong_target_type (__main__.TestFX) ... ok (0.001s) 2023-01-11T22:05:44.0108224Z test_wrong_topo (__main__.TestFX) ... ok (0.001s) 2023-01-11T22:05:44.0108508Z test_class_member_back_compat (__main__.TestFXAPIBackwardCompatibility) 2023-01-11T22:05:44.0108813Z Test backward compatibility for members of classes with ... ok (0.002s) 2023-01-11T22:05:44.0109122Z test_function_back_compat (__main__.TestFXAPIBackwardCompatibility) 2023-01-11T22:05:44.0109499Z Test backward compatibility for function signatures with ... ok (0.008s) 2023-01-11T22:05:44.0109812Z test_public_api_surface (__main__.TestFXAPIBackwardCompatibility) ... 
ok (0.002s) 2023-01-11T22:05:44.0110156Z test_nn_functional_adaptive_avg_pool1d (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0110492Z test_nn_functional_adaptive_avg_pool2d (__main__.TestFunctionalTracing) ... ok (0.003s) 2023-01-11T22:05:44.0110825Z test_nn_functional_adaptive_avg_pool3d (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0111147Z test_nn_functional_adaptive_max_pool1d (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0111495Z test_nn_functional_adaptive_max_pool1d_with_indices (__main__.TestFunctionalTracing) ... ok (0.003s) 2023-01-11T22:05:44.0111842Z test_nn_functional_adaptive_max_pool2d (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0112213Z test_nn_functional_adaptive_max_pool2d_with_indices (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0112561Z test_nn_functional_adaptive_max_pool3d (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0112912Z test_nn_functional_adaptive_max_pool3d_with_indices (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0113316Z test_nn_functional_affine_grid (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0113629Z test_nn_functional_alpha_dropout (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0113954Z test_nn_functional_avg_pool1d (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0114275Z test_nn_functional_avg_pool2d (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0114576Z test_nn_functional_avg_pool3d (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0114888Z test_nn_functional_batch_norm (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0115205Z test_nn_functional_bilinear (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0115536Z test_nn_functional_binary_cross_entropy (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0115875Z test_nn_functional_binary_cross_entropy_with_logits (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0116208Z test_nn_functional_celu (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0116511Z test_nn_functional_celu_ (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0116816Z test_nn_functional_channel_shuffle (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0117135Z test_nn_functional_conv1d (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0117442Z test_nn_functional_conv2d (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0117746Z test_nn_functional_conv3d (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0118037Z test_nn_functional_conv_tbc (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0118363Z test_nn_functional_conv_transpose1d (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0118694Z test_nn_functional_conv_transpose2d (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0119012Z test_nn_functional_conv_transpose3d (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0119405Z test_nn_functional_cosine_embedding_loss (__main__.TestFunctionalTracing) ... ok (0.003s) 2023-01-11T22:05:44.0119744Z test_nn_functional_cosine_similarity (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0120075Z test_nn_functional_cross_entropy (__main__.TestFunctionalTracing) ... 
ok (0.002s) 2023-01-11T22:05:44.0120379Z test_nn_functional_ctc_loss (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0120689Z test_nn_functional_dropout (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0121006Z test_nn_functional_dropout1d (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0121349Z test_nn_functional_dropout2d (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0121663Z test_nn_functional_dropout3d (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0121964Z test_nn_functional_elu (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0122261Z test_nn_functional_elu_ (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0122552Z test_nn_functional_embedding (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0122870Z test_nn_functional_embedding_bag (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0123199Z test_nn_functional_feature_alpha_dropout (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0123504Z test_nn_functional_fold (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0123820Z test_nn_functional_fractional_max_pool2d (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0124204Z test_nn_functional_fractional_max_pool2d_with_indices (__main__.TestFunctionalTracing) ... ok (0.003s) 2023-01-11T22:05:44.0124554Z test_nn_functional_fractional_max_pool3d (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0124891Z test_nn_functional_fractional_max_pool3d_with_indices (__main__.TestFunctionalTracing) ... ok (0.003s) 2023-01-11T22:05:44.0125234Z test_nn_functional_gaussian_nll_loss (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0125543Z test_nn_functional_gelu (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0125831Z test_nn_functional_glu (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0126137Z test_nn_functional_grid_sample (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0126452Z test_nn_functional_group_norm (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0126769Z test_nn_functional_gumbel_softmax (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0127081Z test_nn_functional_hardshrink (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0127399Z test_nn_functional_hardsigmoid (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0127713Z test_nn_functional_hardswish (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0128010Z test_nn_functional_hardtanh (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0128322Z test_nn_functional_hardtanh_ (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0128647Z test_nn_functional_hinge_embedding_loss (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0128969Z test_nn_functional_huber_loss (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0129270Z test_nn_functional_instance_norm (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0129586Z test_nn_functional_interpolate (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0129901Z test_nn_functional_kl_div (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0130193Z test_nn_functional_l1_loss (__main__.TestFunctionalTracing) ... 
ok (0.002s) 2023-01-11T22:05:44.0130500Z test_nn_functional_layer_norm (__main__.TestFunctionalTracing) ... ok (0.003s) 2023-01-11T22:05:44.0130809Z test_nn_functional_leaky_relu (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0131118Z test_nn_functional_leaky_relu_ (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0131411Z test_nn_functional_linear (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0131725Z test_nn_functional_local_response_norm (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0132046Z test_nn_functional_log_softmax (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0132346Z test_nn_functional_logsigmoid (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0132691Z test_nn_functional_lp_pool1d (__main__.TestFunctionalTracing) ... ok (0.003s) 2023-01-11T22:05:44.0133000Z test_nn_functional_lp_pool2d (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0133321Z test_nn_functional_margin_ranking_loss (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0133628Z test_nn_functional_max_pool1d (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0133952Z test_nn_functional_max_pool1d_with_indices (__main__.TestFunctionalTracing) ... ok (0.003s) 2023-01-11T22:05:44.0134275Z test_nn_functional_max_pool2d (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0134597Z test_nn_functional_max_pool2d_with_indices (__main__.TestFunctionalTracing) ... ok (0.003s) 2023-01-11T22:05:44.0134908Z test_nn_functional_max_pool3d (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0135233Z test_nn_functional_max_pool3d_with_indices (__main__.TestFunctionalTracing) ... ok (0.003s) 2023-01-11T22:05:44.0135609Z test_nn_functional_max_unpool1d (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0135916Z test_nn_functional_max_unpool2d (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0136228Z test_nn_functional_max_unpool3d (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0136535Z test_nn_functional_mish (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0136837Z test_nn_functional_mse_loss (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0137153Z test_nn_functional_multi_head_attention_forward (__main__.TestFunctionalTracing) ... ok (0.003s) 2023-01-11T22:05:44.0137494Z test_nn_functional_multi_margin_loss (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0137826Z test_nn_functional_multilabel_margin_loss (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0138157Z test_nn_functional_multilabel_soft_margin_loss (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0138504Z test_nn_functional_native_channel_shuffle (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0138825Z test_nn_functional_nll_loss (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0139133Z test_nn_functional_normalize (__main__.TestFunctionalTracing) ... ok (0.003s) 2023-01-11T22:05:44.0139428Z test_nn_functional_one_hot (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0139730Z test_nn_functional_pad (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0140043Z test_nn_functional_pairwise_distance (__main__.TestFunctionalTracing) ... 
ok (0.001s) 2023-01-11T22:05:44.0140348Z test_nn_functional_pdist (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0140660Z test_nn_functional_pixel_shuffle (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0140982Z test_nn_functional_pixel_unshuffle (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0141309Z test_nn_functional_poisson_nll_loss (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0141607Z test_nn_functional_prelu (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0141906Z test_nn_functional_relu (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0142201Z test_nn_functional_relu6 (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0142614Z test_nn_functional_relu_ (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0142912Z test_nn_functional_rrelu (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0143211Z test_nn_functional_rrelu_ (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0143513Z test_nn_functional_selu (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0143798Z test_nn_functional_selu_ (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0144096Z test_nn_functional_silu (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0144468Z test_nn_functional_smooth_l1_loss (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0144779Z test_nn_functional_soft_margin_loss (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0145098Z test_nn_functional_softmax (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0145405Z test_nn_functional_softmin (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0145713Z test_nn_functional_softplus (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0146012Z test_nn_functional_softshrink (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0146327Z test_nn_functional_threshold (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0146642Z test_nn_functional_threshold_ (__main__.TestFunctionalTracing) ... ok (0.001s) 2023-01-11T22:05:44.0146950Z test_nn_functional_triplet_margin_loss (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0147338Z test_nn_functional_triplet_margin_with_distance_loss (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0147673Z test_nn_functional_unfold (__main__.TestFunctionalTracing) ... ok (0.002s) 2023-01-11T22:05:44.0148346Z test_nn_functional_upsample (__main__.TestFunctionalTracing) ... /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:3736: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead. 2023-01-11T22:05:44.0148854Z warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.") 2023-01-11T22:05:44.0149128Z ok (0.002s) 2023-01-11T22:05:44.0149724Z test_nn_functional_upsample_bilinear (__main__.TestFunctionalTracing) ... /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:4078: UserWarning: nn.functional.upsample_bilinear is deprecated. Use nn.functional.interpolate instead. 2023-01-11T22:05:44.0150269Z warnings.warn("nn.functional.upsample_bilinear is deprecated. 
Use nn.functional.interpolate instead.") 2023-01-11T22:05:44.0150537Z ok (0.002s) 2023-01-11T22:05:44.0151125Z test_nn_functional_upsample_nearest (__main__.TestFunctionalTracing) ... /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:4022: UserWarning: nn.functional.upsample_nearest is deprecated. Use nn.functional.interpolate instead. 2023-01-11T22:05:44.0151655Z warnings.warn("nn.functional.upsample_nearest is deprecated. Use nn.functional.interpolate instead.") 2023-01-11T22:05:44.0151933Z ok (0.002s) 2023-01-11T22:05:44.0152142Z test_pass_manager (fx.test_pass_infra.TestPassManager) 2023-01-11T22:05:44.0152431Z Tests that the pass manager runs the passes correctly. ... ok (0.007s) 2023-01-11T22:05:44.0152732Z test_pass_manager_bad_checks (fx.test_pass_infra.TestPassManager) 2023-01-11T22:05:44.0153039Z Checks that we error if we pass in a check function with the wrong parameters ... ok (0.001s) 2023-01-11T22:05:44.0153352Z test_pass_manager_checks (fx.test_pass_infra.TestPassManager) 2023-01-11T22:05:44.0153647Z Tests that users can add in check functions correctly ... ok (0.002s) 2023-01-11T22:05:44.0153925Z test_pass_manager_error (fx.test_pass_infra.TestPassManager) 2023-01-11T22:05:44.0154177Z Tests error catching + debug ... ok (0.004s) 2023-01-11T22:05:44.0154446Z test_this_before_that_pass_constraint (fx.test_pass_infra.TestPassManager) 2023-01-11T22:05:44.0154732Z Tests the construction of constraints ... ok (0.001s) 2023-01-11T22:05:44.0154989Z test_topological_sort (fx.test_pass_infra.TestPassManager) 2023-01-11T22:05:44.0155286Z Tests that passes are correctly ordered based on contraints. ... ok (0.002s) 2023-01-11T22:05:44.0155639Z test_matching_pattern_with_list_type_arg (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.007s) 2023-01-11T22:05:44.0156019Z test_replace_pattern_with_filters (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.015s) 2023-01-11T22:05:44.0156410Z test_subgraph_rewriter_annotations_int (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.004s) 2023-01-11T22:05:44.0156836Z test_subgraph_rewriter_call_method (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.007s) 2023-01-11T22:05:44.0157234Z test_subgraph_rewriter_correct_output_replacement (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.008s) 2023-01-11T22:05:44.0157628Z test_subgraph_rewriter_graph_argument_order (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.008s) 2023-01-11T22:05:44.0158066Z test_subgraph_rewriter_internal_pattern_nodes_cannot_have_users_that_are_not_matched (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.008s) 2023-01-11T22:05:44.0158492Z test_subgraph_rewriter_local_revert (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.010s) 2023-01-11T22:05:44.0158883Z test_subgraph_rewriter_multiple_pattern_match (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.010s) 2023-01-11T22:05:44.0159338Z test_subgraph_rewriter_nodes_with_kwargs (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.007s) 2023-01-11T22:05:44.0159772Z test_subgraph_rewriter_pattern_is_entire_graph (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.008s) 2023-01-11T22:05:44.0160220Z test_subgraph_rewriter_pattern_output_pattern_node_can_have_users_that_are_not_matched (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... 
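The three deprecation warnings above (upsample, upsample_bilinear, upsample_nearest) all point to the same replacement, nn.functional.interpolate, with the mode selected explicitly. A small sketch with an arbitrary example shape:

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 3, 8, 8)  # NCHW input; shape chosen only for illustration

    # Deprecated spellings that produced the warnings in this log:
    #   F.upsample(x, scale_factor=2)
    #   F.upsample_bilinear(x, scale_factor=2)
    #   F.upsample_nearest(x, scale_factor=2)

    # Current equivalents via interpolate:
    nearest = F.interpolate(x, scale_factor=2, mode="nearest")
    bilinear = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)

    print(nearest.shape, bilinear.shape)  # torch.Size([1, 3, 16, 16]) for both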
ok (0.008s) 2023-01-11T22:05:44.0160652Z test_subgraph_rewriter_placeholder_matching (fx.test_subgraph_rewriter.TestSubgraphRewriter) 2023-01-11T22:05:44.0160987Z This tests that a placeholder Node can be matched to a Node with ... ok (0.008s) 2023-01-11T22:05:44.0161341Z test_subgraph_rewriter_preserves_logic (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.008s) 2023-01-11T22:05:44.0161748Z test_subgraph_rewriter_replace_consecutive_submodules (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.008s) 2023-01-11T22:05:44.0162160Z test_subgraph_rewriter_replace_with_duplicated_outputs (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.008s) 2023-01-11T22:05:44.0162582Z test_subgraph_rewriter_replace_with_multiple_outputs (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.009s) 2023-01-11T22:05:44.0163007Z test_subgraph_rewriter_replaces_referenced_submodules (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.009s) 2023-01-11T22:05:44.0163424Z test_subgraph_rewriter_single_pattern_match (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.009s) 2023-01-11T22:05:44.0163810Z test_subgraph_rewriter_traced_as_callable (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.009s) 2023-01-11T22:05:44.0164211Z test_subgraph_rewriter_with_oneliner_pattern (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.008s) 2023-01-11T22:05:44.0164616Z test_subgraph_rewriter_with_overlapping_matches (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.008s) 2023-01-11T22:05:44.0165020Z test_subgraph_rewriter_with_unused_args (fx.test_subgraph_rewriter.TestSubgraphRewriter) ... ok (0.008s) 2023-01-11T22:05:44.0165358Z test_torchvision_models_alexnet (__main__.TestVisionTracing) ... ok (0.503s) 2023-01-11T22:05:44.0165678Z test_torchvision_models_convnext_base (__main__.TestVisionTracing) ... ok (2.298s) 2023-01-11T22:05:44.0166112Z test_torchvision_models_convnext_large (__main__.TestVisionTracing) ... ok (3.997s) 2023-01-11T22:05:44.0166419Z test_torchvision_models_convnext_small (__main__.TestVisionTracing) ... ok (1.447s) 2023-01-11T22:05:44.0166735Z test_torchvision_models_convnext_tiny (__main__.TestVisionTracing) ... ok (0.791s) 2023-01-11T22:05:44.0167052Z test_torchvision_models_densenet121 (__main__.TestVisionTracing) ... ok (1.650s) 2023-01-11T22:05:44.0167367Z test_torchvision_models_densenet161 (__main__.TestVisionTracing) ... ok (2.824s) 2023-01-11T22:05:44.0167660Z test_torchvision_models_densenet169 (__main__.TestVisionTracing) ... ok (2.291s) 2023-01-11T22:05:44.0167977Z test_torchvision_models_densenet201 (__main__.TestVisionTracing) ... ok (2.970s) 2023-01-11T22:05:44.0168793Z test_torchvision_models_detection_fasterrcnn_mobilenet_v3_large_320_fpn (__main__.TestVisionTracing) ... /var/lib/jenkins/.local/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained_backbone' is deprecated since 0.13 and may be removed in the future, please use 'weights_backbone' instead. 2023-01-11T22:05:44.0169315Z warnings.warn( 2023-01-11T22:05:44.0169963Z /var/lib/jenkins/.local/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights_backbone' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights_backbone=None`. 
2023-01-11T22:05:44.0170412Z warnings.warn(msg) 2023-01-11T22:05:44.0170592Z ok (0.232s) 2023-01-11T22:05:44.0170877Z test_torchvision_models_detection_fasterrcnn_mobilenet_v3_large_fpn (__main__.TestVisionTracing) ... ok (0.229s) 2023-01-11T22:05:44.0171269Z test_torchvision_models_detection_fasterrcnn_resnet50_fpn (__main__.TestVisionTracing) ... ok (0.610s) 2023-01-11T22:05:44.0171640Z test_torchvision_models_detection_fasterrcnn_resnet50_fpn_v2 (__main__.TestVisionTracing) ... ok (0.661s) 2023-01-11T22:05:44.0172001Z test_torchvision_models_detection_fcos_resnet50_fpn (__main__.TestVisionTracing) ... ok (0.626s) 2023-01-11T22:05:44.0172347Z test_torchvision_models_detection_keypointrcnn_resnet50_fpn (__main__.TestVisionTracing) ... ok (0.955s) 2023-01-11T22:05:44.0172711Z test_torchvision_models_detection_maskrcnn_resnet50_fpn (__main__.TestVisionTracing) ... ok (0.662s) 2023-01-11T22:05:44.0173066Z test_torchvision_models_detection_maskrcnn_resnet50_fpn_v2 (__main__.TestVisionTracing) ... ok (0.714s) 2023-01-11T22:05:44.0173425Z test_torchvision_models_detection_retinanet_resnet50_fpn (__main__.TestVisionTracing) ... ok (0.629s) 2023-01-11T22:05:44.0173769Z test_torchvision_models_detection_retinanet_resnet50_fpn_v2 (__main__.TestVisionTracing) ... ok (0.670s) 2023-01-11T22:05:44.0174110Z test_torchvision_models_detection_ssd300_vgg16 (__main__.TestVisionTracing) ... ok (3.108s) 2023-01-11T22:05:44.0174467Z test_torchvision_models_detection_ssdlite320_mobilenet_v3_large (__main__.TestVisionTracing) ... ok (0.160s) 2023-01-11T22:05:44.0174812Z test_torchvision_models_efficientnet_b0 (__main__.TestVisionTracing) ... ok (1.081s) 2023-01-11T22:05:44.0175122Z test_torchvision_models_efficientnet_b1 (__main__.TestVisionTracing) ... ok (1.518s) 2023-01-11T22:05:44.0175442Z test_torchvision_models_efficientnet_b2 (__main__.TestVisionTracing) ... ok (1.494s) 2023-01-11T22:05:44.0175757Z test_torchvision_models_efficientnet_b3 (__main__.TestVisionTracing) ... ok (1.796s) 2023-01-11T22:05:44.0176065Z test_torchvision_models_efficientnet_b4 (__main__.TestVisionTracing) ... ok (2.426s) 2023-01-11T22:05:44.0176381Z test_torchvision_models_efficientnet_b5 (__main__.TestVisionTracing) ... ok (3.047s) 2023-01-11T22:05:44.0176699Z test_torchvision_models_efficientnet_b6 (__main__.TestVisionTracing) ... ok (3.903s) 2023-01-11T22:05:44.0177017Z test_torchvision_models_efficientnet_b7 (__main__.TestVisionTracing) ... ok (5.062s) 2023-01-11T22:05:44.0177326Z test_torchvision_models_efficientnet_v2_l (__main__.TestVisionTracing) ... ok (7.592s) 2023-01-11T22:05:44.0177647Z test_torchvision_models_efficientnet_v2_m (__main__.TestVisionTracing) ... ok (4.915s) 2023-01-11T22:05:44.0177968Z test_torchvision_models_efficientnet_v2_s (__main__.TestVisionTracing) ... ok (3.074s) 2023-01-11T22:05:44.0178862Z test_torchvision_models_googlenet (__main__.TestVisionTracing) ... /var/lib/jenkins/.local/lib/python3.10/site-packages/torchvision/models/googlenet.py:47: FutureWarning: The default weight initialization of GoogleNet will be changed in future releases of torchvision. If you wish to keep the old behavior (which leads to long initialization times due to scipy/scipy#11299), please set init_weights=True. 2023-01-11T22:05:44.0179395Z warnings.warn( 2023-01-11T22:05:44.0179563Z ok (1.020s) 2023-01-11T22:05:44.0180416Z test_torchvision_models_inception_v3 (__main__.TestVisionTracing) ... 
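The torchvision warnings above come from the 0.13 weights-API migration (boolean pretrained/pretrained_backbone flags replaced by weight enums) and from the pending change of GoogleNet's default weight initialization. A rough sketch of the newer calling convention, assuming torchvision >= 0.13; the enum name follows torchvision's documented naming pattern and should be checked against the installed version:

    from torchvision.models import googlenet
    from torchvision.models.detection import (
        FasterRCNN_MobileNet_V3_Large_320_FPN_Weights,
        fasterrcnn_mobilenet_v3_large_320_fpn,
    )

    # Old style that triggers the deprecation warnings seen in this log:
    #   fasterrcnn_mobilenet_v3_large_320_fpn(pretrained=False, pretrained_backbone=False)

    # New style: pass a weights enum, or None for random initialization.
    random_init = fasterrcnn_mobilenet_v3_large_320_fpn(weights=None, weights_backbone=None)
    pretrained = fasterrcnn_mobilenet_v3_large_320_fpn(
        weights=FasterRCNN_MobileNet_V3_Large_320_FPN_Weights.DEFAULT  # downloads COCO weights on first use
    )

    # The GoogleNet FutureWarning asks callers to pin init_weights explicitly:
    g = googlenet(weights=None, init_weights=True)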
/var/lib/jenkins/.local/lib/python3.10/site-packages/torchvision/models/inception.py:43: FutureWarning: The default weight initialization of inception_v3 will be changed in future releases of torchvision. If you wish to keep the old behavior (which leads to long initialization times due to scipy/scipy#11299), please set init_weights=True. 2023-01-11T22:05:44.0180960Z warnings.warn( 2023-01-11T22:05:44.0181118Z ok (1.548s) 2023-01-11T22:05:44.0181822Z test_torchvision_models_maxvit_t (__main__.TestVisionTracing) ... /opt/conda/lib/python3.10/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/TensorShape.cpp:3452.) 2023-01-11T22:05:44.0182557Z return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined] 2023-01-11T22:05:44.0182785Z ok (3.429s) 2023-01-11T22:05:44.0183073Z test_torchvision_models_mnasnet0_5 (__main__.TestVisionTracing) ... ok (0.732s) 2023-01-11T22:05:44.0183384Z test_torchvision_models_mnasnet0_75 (__main__.TestVisionTracing) ... ok (0.743s) 2023-01-11T22:05:44.0183693Z test_torchvision_models_mnasnet1_0 (__main__.TestVisionTracing) ... ok (0.742s) 2023-01-11T22:05:44.0184004Z test_torchvision_models_mnasnet1_3 (__main__.TestVisionTracing) ... ok (0.852s) 2023-01-11T22:05:44.0184300Z test_torchvision_models_mobilenet_v2 (__main__.TestVisionTracing) ... ok (0.816s) 2023-01-11T22:05:44.0184620Z test_torchvision_models_mobilenet_v3_large (__main__.TestVisionTracing) ... ok (1.182s) 2023-01-11T22:05:44.0184945Z test_torchvision_models_mobilenet_v3_small (__main__.TestVisionTracing) ... ok (0.785s) 2023-01-11T22:05:44.0185253Z test_torchvision_models_regnet_x_16gf (__main__.TestVisionTracing) ... ok (2.516s) 2023-01-11T22:05:44.0185568Z test_torchvision_models_regnet_x_1_6gf (__main__.TestVisionTracing) ... ok (1.283s) 2023-01-11T22:05:44.0185886Z test_torchvision_models_regnet_x_32gf (__main__.TestVisionTracing) ... ok (4.136s) 2023-01-11T22:05:44.0186200Z test_torchvision_models_regnet_x_3_2gf (__main__.TestVisionTracing) ... ok (1.902s) 2023-01-11T22:05:44.0186496Z test_torchvision_models_regnet_x_400mf (__main__.TestVisionTracing) ... ok (1.416s) 2023-01-11T22:05:44.0186803Z test_torchvision_models_regnet_x_800mf (__main__.TestVisionTracing) ... ok (1.215s) 2023-01-11T22:05:44.0187111Z test_torchvision_models_regnet_x_8gf (__main__.TestVisionTracing) ... ok (2.266s) 2023-01-11T22:05:44.0187408Z test_torchvision_models_regnet_y_128gf (__main__.TestVisionTracing) ... ok (18.645s) 2023-01-11T22:05:44.0187721Z test_torchvision_models_regnet_y_16gf (__main__.TestVisionTracing) ... ok (3.483s) 2023-01-11T22:05:44.0188027Z test_torchvision_models_regnet_y_1_6gf (__main__.TestVisionTracing) ... ok (2.482s) 2023-01-11T22:05:44.0188335Z test_torchvision_models_regnet_y_32gf (__main__.TestVisionTracing) ... ok (5.491s) 2023-01-11T22:05:44.0188635Z test_torchvision_models_regnet_y_3_2gf (__main__.TestVisionTracing) ... ok (2.199s) 2023-01-11T22:05:44.0188943Z test_torchvision_models_regnet_y_400mf (__main__.TestVisionTracing) ... ok (1.510s) 2023-01-11T22:05:44.0189244Z test_torchvision_models_regnet_y_800mf (__main__.TestVisionTracing) ... ok (1.539s) 2023-01-11T22:05:44.0189535Z test_torchvision_models_regnet_y_8gf (__main__.TestVisionTracing) ... ok (2.392s) 2023-01-11T22:05:44.0189841Z test_torchvision_models_resnet101 (__main__.TestVisionTracing) ... 
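The torch.meshgrid warning emitted while tracing maxvit_t asks callers to pass the indexing argument explicitly instead of relying on the current default. A minimal illustration:

    import torch

    a = torch.arange(3)
    b = torch.arange(4)

    # Implicit call that triggers the UserWarning (defaults to "ij" today):
    #   gx, gy = torch.meshgrid(a, b)

    # Explicit, forward-compatible calls:
    gx, gy = torch.meshgrid(a, b, indexing="ij")  # matrix-style indexing
    xx, yy = torch.meshgrid(a, b, indexing="xy")  # Cartesian, matches NumPy's default

    print(gx.shape, xx.shape)  # torch.Size([3, 4]) torch.Size([4, 3])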
ok (2.286s) 2023-01-11T22:05:44.0190146Z test_torchvision_models_resnet152 (__main__.TestVisionTracing) ... ok (3.252s) 2023-01-11T22:05:44.0190451Z test_torchvision_models_resnet18 (__main__.TestVisionTracing) ... ok (0.598s) 2023-01-11T22:05:44.0190742Z test_torchvision_models_resnet34 (__main__.TestVisionTracing) ... ok (1.151s) 2023-01-11T22:05:44.0191042Z test_torchvision_models_resnet50 (__main__.TestVisionTracing) ... ok (1.176s) 2023-01-11T22:05:44.0191424Z test_torchvision_models_resnext101_32x8d (__main__.TestVisionTracing) ... ok (3.232s) 2023-01-11T22:05:44.0191730Z test_torchvision_models_resnext101_64x4d (__main__.TestVisionTracing) ... ok (3.104s) 2023-01-11T22:05:44.0192044Z test_torchvision_models_resnext50_32x4d (__main__.TestVisionTracing) ... ok (1.188s) 2023-01-11T22:05:44.0192400Z test_torchvision_models_segmentation_deeplabv3_mobilenet_v3_large (__main__.TestVisionTracing) ... ok (1.269s) 2023-01-11T22:05:44.0192773Z test_torchvision_models_segmentation_deeplabv3_resnet101 (__main__.TestVisionTracing) ... ok (2.471s) 2023-01-11T22:05:44.0193120Z test_torchvision_models_segmentation_deeplabv3_resnet50 (__main__.TestVisionTracing) ... ok (1.347s) 2023-01-11T22:05:44.0193468Z test_torchvision_models_segmentation_fcn_resnet101 (__main__.TestVisionTracing) ... ok (2.157s) 2023-01-11T22:05:44.0193819Z test_torchvision_models_segmentation_fcn_resnet50 (__main__.TestVisionTracing) ... ok (1.288s) 2023-01-11T22:05:44.0194207Z test_torchvision_models_segmentation_lraspp_mobilenet_v3_large (__main__.TestVisionTracing) ... ok (1.209s) 2023-01-11T22:05:44.0194541Z test_torchvision_models_shufflenet_v2_x0_5 (__main__.TestVisionTracing) ... ok (1.135s) 2023-01-11T22:05:44.0194861Z test_torchvision_models_shufflenet_v2_x1_0 (__main__.TestVisionTracing) ... ok (1.059s) 2023-01-11T22:05:44.0195176Z test_torchvision_models_shufflenet_v2_x1_5 (__main__.TestVisionTracing) ... ok (1.063s) 2023-01-11T22:05:44.0195483Z test_torchvision_models_shufflenet_v2_x2_0 (__main__.TestVisionTracing) ... ok (1.132s) 2023-01-11T22:05:44.0195798Z test_torchvision_models_squeezenet1_0 (__main__.TestVisionTracing) ... ok (0.354s) 2023-01-11T22:05:44.0196113Z test_torchvision_models_squeezenet1_1 (__main__.TestVisionTracing) ... ok (0.310s) 2023-01-11T22:05:44.0196420Z test_torchvision_models_swin_b (__main__.TestVisionTracing) ... ok (3.584s) 2023-01-11T22:05:44.0196707Z test_torchvision_models_swin_s (__main__.TestVisionTracing) ... ok (3.036s) 2023-01-11T22:05:44.0197007Z test_torchvision_models_swin_t (__main__.TestVisionTracing) ... ok (1.493s) 2023-01-11T22:05:44.0197308Z test_torchvision_models_swin_v2_b (__main__.TestVisionTracing) ... ok (4.276s) 2023-01-11T22:05:44.0197597Z test_torchvision_models_swin_v2_s (__main__.TestVisionTracing) ... ok (3.652s) 2023-01-11T22:05:44.0197896Z test_torchvision_models_swin_v2_t (__main__.TestVisionTracing) ... ok (1.839s) 2023-01-11T22:05:44.0198193Z test_torchvision_models_vgg11 (__main__.TestVisionTracing) ... ok (2.800s) 2023-01-11T22:05:44.0198492Z test_torchvision_models_vgg11_bn (__main__.TestVisionTracing) ... ok (2.991s) 2023-01-11T22:05:44.0198781Z test_torchvision_models_vgg13 (__main__.TestVisionTracing) ... ok (3.049s) 2023-01-11T22:05:44.0199138Z test_torchvision_models_vgg13_bn (__main__.TestVisionTracing) ... ok (2.891s) 2023-01-11T22:05:44.0199445Z test_torchvision_models_vgg16 (__main__.TestVisionTracing) ... ok (3.203s) 2023-01-11T22:05:44.0199729Z test_torchvision_models_vgg16_bn (__main__.TestVisionTracing) ... 
ok (3.084s) 2023-01-11T22:05:44.0200027Z test_torchvision_models_vgg19 (__main__.TestVisionTracing) ... ok (3.352s) 2023-01-11T22:05:44.0200325Z test_torchvision_models_vgg19_bn (__main__.TestVisionTracing) ... ok (3.466s) 2023-01-11T22:05:44.0200628Z test_torchvision_models_video_mc3_18 (__main__.TestVisionTracing) ... ok (1.139s) 2023-01-11T22:05:44.0200929Z test_torchvision_models_video_mvit_v1_b (__main__.TestVisionTracing) ... ok (5.543s) 2023-01-11T22:05:44.0201248Z test_torchvision_models_video_mvit_v2_s (__main__.TestVisionTracing) ... ok (5.992s) 2023-01-11T22:05:44.0201570Z test_torchvision_models_video_r2plus1d_18 (__main__.TestVisionTracing) ... ok (1.577s) 2023-01-11T22:05:44.0201874Z test_torchvision_models_video_r3d_18 (__main__.TestVisionTracing) ... ok (1.441s) 2023-01-11T22:05:44.0202179Z test_torchvision_models_video_s3d (__main__.TestVisionTracing) ... ok (2.587s) 2023-01-11T22:05:44.0202491Z test_torchvision_models_video_swin3d_b (__main__.TestVisionTracing) ... ok (3.519s) 2023-01-11T22:05:44.0202846Z test_torchvision_models_video_swin3d_s (__main__.TestVisionTracing) ... ok (2.786s) 2023-01-11T22:05:44.0203139Z test_torchvision_models_video_swin3d_t (__main__.TestVisionTracing) ... ok (1.467s) 2023-01-11T22:05:44.0203444Z test_torchvision_models_vit_b_16 (__main__.TestVisionTracing) ... ok (2.194s) 2023-01-11T22:05:44.0203742Z test_torchvision_models_vit_b_32 (__main__.TestVisionTracing) ... ok (1.838s) 2023-01-11T22:05:44.0204029Z test_torchvision_models_vit_h_14 (__main__.TestVisionTracing) ... ok (11.818s) 2023-01-11T22:05:44.0204328Z test_torchvision_models_vit_l_16 (__main__.TestVisionTracing) ... ok (6.593s) 2023-01-11T22:05:44.0204624Z test_torchvision_models_vit_l_32 (__main__.TestVisionTracing) ... ok (6.242s) 2023-01-11T22:05:44.0204933Z test_torchvision_models_wide_resnet101_2 (__main__.TestVisionTracing) ... ok (4.098s) 2023-01-11T22:05:44.0205263Z test_torchvision_models_wide_resnet50_2 (__main__.TestVisionTracing) ... ok (2.182s) 2023-01-11T22:05:44.0205585Z test_flatten_fully_static (fx.test_gradual_type.TypeCheckerTest) ... ok (0.017s) 2023-01-11T22:05:44.0205893Z test_resnet50 (fx.test_gradual_type.TypeCheckerTest) ... ok (1.852s) 2023-01-11T22:05:44.0206195Z test_symbolic_add_with_broadcast (fx.test_gradual_type.TypeCheckerTest) ... ok (0.004s) 2023-01-11T22:05:44.0206532Z test_symbolic_add_with_broadcast_2 (fx.test_gradual_type.TypeCheckerTest) ... ok (0.003s) 2023-01-11T22:05:44.0206858Z test_type_check_add_false (fx.test_gradual_type.TypeCheckerTest) ... ok (0.003s) 2023-01-11T22:05:44.0207173Z test_type_check_add_true (fx.test_gradual_type.TypeCheckerTest) ... ok (0.003s) 2023-01-11T22:05:44.0207488Z test_type_check_add_with_broadcast (fx.test_gradual_type.TypeCheckerTest) ... ok (0.003s) 2023-01-11T22:05:44.0207827Z test_type_check_add_with_scalar (fx.test_gradual_type.TypeCheckerTest) ... ok (0.003s) 2023-01-11T22:05:44.0208158Z test_type_check_batch_norm_2D (fx.test_gradual_type.TypeCheckerTest) ... ok (0.004s) 2023-01-11T22:05:44.0208489Z test_type_check_batch_norm_2D_broadcast (fx.test_gradual_type.TypeCheckerTest) ... ok (0.007s) 2023-01-11T22:05:44.0208830Z test_type_check_batch_norm_2D_false (fx.test_gradual_type.TypeCheckerTest) ... ok (0.003s) 2023-01-11T22:05:44.0209167Z test_type_check_batch_norm_symbolic (fx.test_gradual_type.TypeCheckerTest) ... ok (0.004s) 2023-01-11T22:05:44.0209490Z test_type_check_conv2D (fx.test_gradual_type.TypeCheckerTest) ... 
ok (0.004s) 2023-01-11T22:05:44.0209788Z test_type_check_conv2D_2 (fx.test_gradual_type.TypeCheckerTest) ... ok (0.007s) 2023-01-11T22:05:44.0210113Z test_type_check_conv2D_2_fully_static (fx.test_gradual_type.TypeCheckerTest) ... ok (0.028s) 2023-01-11T22:05:44.0210461Z test_type_check_conv2D_maxpool2d_flatten (fx.test_gradual_type.TypeCheckerTest) ... ok (0.005s) 2023-01-11T22:05:44.0210787Z test_type_check_conv2D_types (fx.test_gradual_type.TypeCheckerTest) ... ok (0.008s) 2023-01-11T22:05:44.0211109Z test_type_check_flatten (fx.test_gradual_type.TypeCheckerTest) ... ok (0.003s) 2023-01-11T22:05:44.0211426Z test_type_check_flatten3 (fx.test_gradual_type.TypeCheckerTest) ... ok (0.003s) 2023-01-11T22:05:44.0211740Z test_type_check_flatten_2 (fx.test_gradual_type.TypeCheckerTest) ... ok (0.002s) 2023-01-11T22:05:44.0212054Z test_type_check_reshape_dyn_false (fx.test_gradual_type.TypeCheckerTest) ... ok (0.002s) 2023-01-11T22:05:44.0212388Z test_type_check_reshape_dyn_true (fx.test_gradual_type.TypeCheckerTest) ... ok (0.002s) 2023-01-11T22:05:44.0212734Z test_type_check_reshape_dyn_true_param_false (fx.test_gradual_type.TypeCheckerTest) ... ok (0.002s) 2023-01-11T22:05:44.0213075Z test_type_check_reshape_false (fx.test_gradual_type.TypeCheckerTest) ... ok (0.003s) 2023-01-11T22:05:44.0213387Z test_type_check_reshape_true (fx.test_gradual_type.TypeCheckerTest) ... ok (0.003s) 2023-01-11T22:05:44.0213751Z test_type_check_symbolic_inferenceconv2D_maxpool2d_flatten (fx.test_gradual_type.TypeCheckerTest) ... ok (0.019s) 2023-01-11T22:05:44.0214150Z test_type_check_transpose_False (fx.test_gradual_type.TypeCheckerTest) ... ok (0.003s) 2023-01-11T22:05:44.0214473Z test_type_check_transpose_true (fx.test_gradual_type.TypeCheckerTest) ... ok (0.003s) 2023-01-11T22:05:44.0214808Z test_type_maxpool2d_fully_static (fx.test_gradual_type.TypeCheckerTest) ... ok (0.025s) 2023-01-11T22:05:44.0215158Z test_type_typechecl_maxpool2d_3dinput (fx.test_gradual_type.TypeCheckerTest) ... ok (0.003s) 2023-01-11T22:05:44.0215500Z test_typecheck_basicblock (fx.test_gradual_type.TypeCheckerTest) ... ok (0.006s) 2023-01-11T22:05:44.0215671Z 2023-01-11T22:05:44.0215899Z ---------------------------------------------------------------------- 2023-01-11T22:05:44.0216142Z Ran 526 tests in 269.381s 2023-01-11T22:05:44.0216258Z 2023-01-11T22:05:44.0216331Z OK (skipped=3) 2023-01-11T22:05:44.0216439Z 2023-01-11T22:05:44.0216509Z Generating XML reports... 
2023-01-11T22:05:44.0216982Z Generated XML report: test-reports/python-unittest/test_fx/TEST-fx.test_gradual_type.AnnotationsTest-20230111220111.xml 2023-01-11T22:05:44.0217513Z Generated XML report: test-reports/python-unittest/test_fx/TEST-fx.test_cse_pass.TestCSEPass-20230111220111.xml 2023-01-11T22:05:44.0218051Z Generated XML report: test-reports/python-unittest/test_fx/TEST-fx.test_common_passes.TestCommonPass-20230111220111.xml 2023-01-11T22:05:44.0218567Z Generated XML report: test-reports/python-unittest/test_fx/TEST-fx.test_fx_const_fold.TestConstFold-20230111220111.xml 2023-01-11T22:05:44.0219193Z Generated XML report: test-reports/python-unittest/test_fx/TEST-fx.test_fx_param_shape_control_flow.TestConstParamShapeInControlFlow-20230111220111.xml 2023-01-11T22:05:44.0219770Z Generated XML report: test-reports/python-unittest/test_fx/TEST-fx.test_dce_pass.TestDCE-20230111220111.xml 2023-01-11T22:05:44.0220229Z Generated XML report: test-reports/python-unittest/test_fx/TEST-TestFX-20230111220111.xml 2023-01-11T22:05:44.0220733Z Generated XML report: test-reports/python-unittest/test_fx/TEST-TestFXAPIBackwardCompatibility-20230111220111.xml 2023-01-11T22:05:44.0221280Z Generated XML report: test-reports/python-unittest/test_fx/TEST-TestFunctionalTracing-20230111220111.xml 2023-01-11T22:05:44.0221815Z Generated XML report: test-reports/python-unittest/test_fx/TEST-fx.test_pass_infra.TestPassManager-20230111220111.xml 2023-01-11T22:05:44.0222535Z Generated XML report: test-reports/python-unittest/test_fx/TEST-fx.test_subgraph_rewriter.TestSubgraphRewriter-20230111220111.xml 2023-01-11T22:05:44.0223065Z Generated XML report: test-reports/python-unittest/test_fx/TEST-TestVisionTracing-20230111220111.xml 2023-01-11T22:05:44.0223585Z Generated XML report: test-reports/python-unittest/test_fx/TEST-fx.test_gradual_type.TypeCheckerTest-20230111220111.xml 2023-01-11T22:05:44.0223821Z 2023-01-11T22:05:44.0224171Z ##[endgroup] 2023-01-11T22:05:44.0224520Z FINISHED PRINTING LOG FILE of test_fx (/var/lib/jenkins/workspace/test/test-reports/test_fx_jso06ja3) 2023-01-11T22:05:44.0224720Z 2023-01-11T22:05:44.0224883Z Running test_indexing ... [2023-01-11 22:05:44.002579] 2023-01-11T22:05:44.0225345Z Executing ['/opt/conda/bin/python', '-bb', 'test_indexing.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:05:44.002821] 2023-01-11T22:05:45.9732667Z 2023-01-11T22:05:45.9733982Z Expand the folded group to see the log file of test_indexing 2023-01-11T22:05:45.9735596Z ##[group]PRINTING LOG FILE of test_indexing (/var/lib/jenkins/workspace/test/test-reports/test_indexing_o2r311vl) 2023-01-11T22:05:45.9736008Z 2023-01-11T22:05:45.9736095Z Running tests... 2023-01-11T22:05:45.9736508Z ---------------------------------------------------------------------- 2023-01-11T22:05:45.9736681Z 2023-01-11T22:05:45.9736935Z ---------------------------------------------------------------------- 2023-01-11T22:05:45.9737248Z Ran 0 tests in 0.000s 2023-01-11T22:05:45.9737388Z 2023-01-11T22:05:45.9737451Z OK 2023-01-11T22:05:45.9737571Z 2023-01-11T22:05:45.9737668Z Generating XML reports... 2023-01-11T22:05:45.9738487Z Test results will be stored in test-reports/python-unittest/test_indexing 2023-01-11T22:05:45.9738793Z 2023-01-11T22:05:45.9739033Z ##[endgroup] 2023-01-11T22:05:45.9739427Z FINISHED PRINTING LOG FILE of test_indexing (/var/lib/jenkins/workspace/test/test-reports/test_indexing_o2r311vl) 2023-01-11T22:05:45.9739643Z 2023-01-11T22:05:45.9739819Z Running test_jit_cuda_fuser ... 
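Each "Executing [...]" line shows how the harness launches a test file: a fresh Python process started with -bb (promote bytes/str comparison warnings to errors) plus the flags that pull in the slow-test and disabled-test lists. A hedged sketch of reproducing one such invocation from the pytorch test/ directory; in CI this is orchestrated by the test runner, so the snippet is only for local experimentation:

    import subprocess
    import sys

    # Command mirrored from the log line for test_indexing; flags copied verbatim.
    cmd = [
        sys.executable, "-bb", "test_indexing.py", "-v",
        "--import-slow-tests", "--import-disabled-tests",
    ]
    result = subprocess.run(cmd, check=False)
    print("exit code:", result.returncode)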
[2023-01-11 22:05:45.973350] 2023-01-11T22:05:45.9740288Z Executing ['/opt/conda/bin/python', '-bb', 'test_jit_cuda_fuser.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:05:45.973641] 2023-01-11T22:05:48.7644758Z 2023-01-11T22:05:48.7645206Z Expand the folded group to see the log file of test_jit_cuda_fuser 2023-01-11T22:05:48.7645973Z ##[group]PRINTING LOG FILE of test_jit_cuda_fuser (/var/lib/jenkins/workspace/test/test-reports/test_jit_cuda_fuser_nfdw1hv2) 2023-01-11T22:05:48.7646342Z 2023-01-11T22:05:48.7646456Z Running tests... 2023-01-11T22:05:48.7647412Z ---------------------------------------------------------------------- 2023-01-11T22:05:48.7647801Z Test results will be stored in test-reports/python-unittest/test_jit_cuda_fuser 2023-01-11T22:05:48.7648129Z test__softmax_function (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7648464Z test__softmax_function_half_to_float (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7648781Z test_addcmul_ops (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7649335Z test_alias_pass_fix (__main__.TestCudaFuser) ... skip: skipping this test since unsqueeze is disabled now (0.001s) 2023-01-11T22:05:48.7649899Z test_autocast_1 (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7650392Z test_autocast_1_bfloat (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7650925Z test_autocast_2 (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7651379Z test_autocast_2_bfloat (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7651725Z test_backward_type (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7652055Z test_batch_norm_half (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7652394Z test_batch_norm_impl_index_correctness (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7652740Z test_batch_norm_impl_index_inner_bcast (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7653090Z test_bfloat (__main__.TestCudaFuser) ... skip: device does not support BFloat16 (0.001s) 2023-01-11T22:05:48.7653604Z test_binary_bitwise (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7654108Z test_binary_ops (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7654706Z test_binary_ops_channels_last_with_bcast (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7655408Z test_binary_ops_complex (__main__.TestCudaFuser) ... skip: see issue https://github.com/csarofeen/pytorch/issues/1730 (0.001s) 2023-01-11T22:05:48.7656058Z test_binary_ops_permutation (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7656609Z test_branches (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7657125Z test_broadcasting_0 (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7657411Z test_broadcasting_1 (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7657703Z test_broadcasting_2 (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7657999Z test_broadcasting_3 (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7658467Z test_broadcasting_multiple_output (__main__.TestCudaFuser) ... 
skip: broadcast on branches can't be resolved yet (0.001s) 2023-01-11T22:05:48.7658874Z test_broadcasting_multiple_output_shape (__main__.TestCudaFuser) ... skip: Broadcast with different output not supported yet (0.001s) 2023-01-11T22:05:48.7659366Z test_broadcasting_partition_logic_0 (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7659705Z test_broadcasting_partition_logic_1 (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7660042Z test_build_shape_expression_native_dropout (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7660372Z test_category_rule (__main__.TestCudaFuser) ... skip: requires CUDA (0.002s) 2023-01-11T22:05:48.7660696Z test_channels_last_with_broadcast (__main__.TestCudaFuser) ... skip: requires CUDA (0.002s) 2023-01-11T22:05:48.7661005Z test_chunk (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7661276Z test_clamp (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7661580Z test_clamp_reversed_bound (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7661938Z test_clean_profile_ivalue (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7662234Z test_const (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7662728Z test_contiguous_on_broadcasted (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7663046Z test_conv2d_bias (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7663359Z test_conv2d_symbolic_shapes (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7663655Z test_cpu_scalar (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7663959Z test_cuda_fusion_guard (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7664341Z test_cuda_fusion_guard_backward (__main__.TestCudaFuser) ... skip: requires NVFuser (0.001s) 2023-01-11T22:05:48.7664658Z test_device_constant (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7665012Z test_disable_const_chunk_propagation_for_normalization (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7665362Z test_disable_sibling_fuse (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7665667Z test_dropout_inference_fusion (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7666008Z test_dropout_train_nograd_fusion (__main__.TestCudaFuser) ... skip: not enough memory (0.001s) 2023-01-11T22:05:48.7666347Z test_dropout_train_nograd_prob_check (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7666669Z test_dropout_training_fusion (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7666995Z test_dropout_training_prob_check (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7667309Z test_dynamic_size (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7667603Z test_expand (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7667900Z test_fix_shape_expression_bn (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7668242Z test_flatten (__main__.TestCudaFuser) ... skip: skipping this test since flatten is disabled now (0.001s) 2023-01-11T22:05:48.7668561Z test_gelu (__main__.TestCudaFuser) ... 
skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7668839Z test_grad_sum_to_size (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7669168Z test_graph_for_with_missing_optimized_engine (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7669491Z test_graph_rng (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7669775Z test_half (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7670053Z test_high_rank_fusion (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7670361Z test_inf_quick_patch (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7670721Z test_inplace_removal (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7671029Z test_input_output_passthrough (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7671347Z test_int_tensor_input (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7671648Z test_issue1445_fusion (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7671944Z test_issue_1785 (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7672232Z test_layer_norm_autodiff (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7672548Z test_layer_norm_parser (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7672874Z test_layer_norm_trivial_reduce_dim (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7673169Z test_linear (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7673516Z test_linear_symbolic_shapes (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7673839Z test_multiple_device_pw (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7674157Z test_native_batch_norm_backward (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7674463Z test_native_layer_norm (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7674774Z test_native_layer_norm_bfloat (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7675092Z test_native_layer_norm_half (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7675424Z test_nested_view (__main__.TestCudaFuser) ... skip: skipping this test since view is disabled now (0.000s) 2023-01-11T22:05:48.7675755Z test_no_tensor_input (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7676047Z test_norm (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7676339Z test_norm_bfloat (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7676626Z test_norm_channels_last (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7676927Z test_norm_half (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7677222Z test_norm_half_layer (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7677506Z test_norm_large (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7677814Z test_normalization_partition (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7678163Z test_nvfuser_comparison_callbacks_with_fallback (__main__.TestCudaFuser) ... 
skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7678533Z test_nvfuser_comparison_callbacks_without_fallback (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7678863Z test_overlapped_input (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7679250Z test_permutation_preservation (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7679598Z test_permutation_preservation_edge_case_0 (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7679965Z test_permutation_preservation_edge_case_1_broken (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7680311Z test_permutation_preservation_edge_case_2 (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7680634Z test_permute (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7680950Z test_pointwise_reference_tensor (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7681256Z test_profile_ivalue (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7681585Z test_profile_ivalue_multiple_profiles (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7681978Z test_profiling_node (__main__.TestCudaFuser) ... skip: Skipped due to rand_like behavior change (0.000s) 2023-01-11T22:05:48.7682329Z test_pw_single_reduction_partition (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7682633Z test_random_topo (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7682927Z test_reduction (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7683232Z test_reduction_dtypes_axis (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7683534Z test_reduction_empty_axes (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7683852Z test_reduction_multiple_output (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7684178Z test_reduction_permutation (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7684536Z test_reduction_sizes_op (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7684853Z test_remove_output_used_only_in_dtype (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7685161Z test_rsub (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7685456Z test_scalar_cuda_tensor (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7685748Z test_scalar_input (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7686043Z test_scalar_tensor (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7686350Z test_scalar_tensor_permuted (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7686689Z test_scheduler_with_polymorphic_broadcast (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7687059Z test_shape_expression (__main__.TestCudaFuser) ... skip: skipping this test since squeeze/unsqueeze is disabled now (0.001s) 2023-01-11T22:05:48.7687419Z test_sibling_fusion (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7687741Z test_sibling_fusion_no_scalar_inputs (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7688074Z test_single_reduction_broadcast (__main__.TestCudaFuser) ... 
skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7688377Z test_singleton_fusion (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7688673Z test_skip_parser (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7688965Z test_softmax (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7689247Z test_softmax_bfloat (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7689544Z test_softmax_dtype (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7689840Z test_softmax_half (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7690142Z test_softplus_fuser (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7690473Z test_squeeze (__main__.TestCudaFuser) ... skip: skipping this test since squeeze/unsqueeze is disabled now (0.000s) 2023-01-11T22:05:48.7690869Z test_squeeze_negative_dim (__main__.TestCudaFuser) ... skip: skipping this test since squeeze/unsqueeze is disabled now (0.000s) 2023-01-11T22:05:48.7691264Z test_squeeze_zero (__main__.TestCudaFuser) ... skip: skipping this test since squeeze/unsqueeze is disabled now (0.001s) 2023-01-11T22:05:48.7691591Z test_strict_fusion (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7691885Z test_sum_to_one (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7692175Z test_sum_to_size (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7692465Z test_ternary_ops (__main__.TestCudaFuser) ... skip: requires CUDA (0.002s) 2023-01-11T22:05:48.7692775Z test_ternary_ops_integer_compatibility (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7693150Z test_ternary_ops_type_promotion (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7693458Z test_to_boolean (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7693731Z test_to_copy (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7694026Z test_to_dtype_bf16_to_bf16 (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7694327Z test_to_dtype_bf16_to_fp32 (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7694632Z test_to_dtype_fp16_to_fp16 (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7694920Z test_to_dtype_fp16_to_fp32 (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7695219Z test_to_dtype_fp32_to_bf16 (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7695552Z test_to_dtype_fp32_to_fp16 (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7695853Z test_transpose (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7696146Z test_transpose_default (__main__.TestCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7696454Z test_trivial_reduction (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7696752Z test_type_as_op (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7697035Z test_type_inference (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7697332Z test_unary_bitwise (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7697621Z test_unary_ops (__main__.TestCudaFuser) ... 
skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7697959Z test_unsqueeze (__main__.TestCudaFuser) ... skip: skipping this test since squeeze/unsqueeze is disabled now (0.000s) 2023-01-11T22:05:48.7698283Z test_variance (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7698586Z test_variance_profiling (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7698916Z test_view (__main__.TestCudaFuser) ... skip: skipping this test since view is disabled now (0.000s) 2023-01-11T22:05:48.7699227Z test_view_before_permute (__main__.TestCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7699582Z test_view_copy_graph_guard (__main__.TestCudaFuser) ... skip: skipping this test since reshape is disabled now (0.000s) 2023-01-11T22:05:48.7699980Z test_view_copy_graph_guard_double_fusion (__main__.TestCudaFuser) ... skip: skipping this test since view is disabled now (0.001s) 2023-01-11T22:05:48.7700343Z test_can_be_enabled_nvfuser (__main__.TestEnableDisableCudaFuser) ... ok (0.001s) 2023-01-11T22:05:48.7700680Z test_context_manager_test (__main__.TestEnableDisableCudaFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:05:48.7701038Z test_register_fuser (__main__.TestEnableDisableCudaFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:05:48.7701373Z test_register_fuser_cpu (__main__.TestEnableDisableCudaFuser) ... ok (0.005s) 2023-01-11T22:05:48.7701684Z test_autodiff_fallback (jit.test_fuser_common.TestFuserCommon) ... ok (0.054s) 2023-01-11T22:05:48.7701863Z 2023-01-11T22:05:48.7702101Z ---------------------------------------------------------------------- 2023-01-11T22:05:48.7702462Z Ran 158 tests in 0.168s 2023-01-11T22:05:48.7702638Z 2023-01-11T22:05:48.7702714Z OK (skipped=155) 2023-01-11T22:05:48.7702824Z 2023-01-11T22:05:48.7702896Z Generating XML reports... 2023-01-11T22:05:48.7703370Z Generated XML report: test-reports/python-unittest/test_jit_cuda_fuser/TEST-TestEnableDisableCudaFuser-20230111220548.xml 2023-01-11T22:05:48.7703959Z Generated XML report: test-reports/python-unittest/test_jit_cuda_fuser/TEST-jit.test_fuser_common.TestFuserCommon-20230111220548.xml 2023-01-11T22:05:48.7704498Z Generated XML report: test-reports/python-unittest/test_jit_cuda_fuser/TEST-TestCudaFuser-20230111220548.xml 2023-01-11T22:05:48.7704780Z 2023-01-11T22:05:48.7705078Z ##[endgroup] 2023-01-11T22:05:48.7705477Z FINISHED PRINTING LOG FILE of test_jit_cuda_fuser (/var/lib/jenkins/workspace/test/test-reports/test_jit_cuda_fuser_nfdw1hv2) 2023-01-11T22:05:48.7705700Z 2023-01-11T22:05:48.7705871Z Running test_jit_disabled ... [2023-01-11 22:05:48.764936] 2023-01-11T22:05:48.7706333Z Executing ['/opt/conda/bin/python', '-bb', 'test_jit_disabled.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:05:48.765443] 2023-01-11T22:05:50.7294047Z 2023-01-11T22:05:50.7294671Z Expand the folded group to see the log file of test_jit_disabled 2023-01-11T22:05:50.7295694Z ##[group]PRINTING LOG FILE of test_jit_disabled (/var/lib/jenkins/workspace/test/test-reports/test_jit_disabled_ttoqxyt1) 2023-01-11T22:05:50.7296012Z 2023-01-11T22:05:50.7296134Z Running tests... 2023-01-11T22:05:50.7296607Z ---------------------------------------------------------------------- 2023-01-11T22:05:50.7297251Z Test results will be stored in test-reports/python-unittest/test_jit_disabled 2023-01-11T22:05:50.7297662Z test_attribute (__main__.TestJitDisabled) ... ok (0.256s) 2023-01-11T22:05:50.7298001Z test_recursive_script (__main__.TestJitDisabled) ... 
ok (0.032s) 2023-01-11T22:05:50.7298300Z test_script_module_construction (__main__.TestJitDisabled) ... ok (0.032s) 2023-01-11T22:05:50.7298472Z 2023-01-11T22:05:50.7298666Z ---------------------------------------------------------------------- 2023-01-11T22:05:50.7298909Z Ran 3 tests in 0.321s 2023-01-11T22:05:50.7299024Z 2023-01-11T22:05:50.7299085Z OK 2023-01-11T22:05:50.7299176Z 2023-01-11T22:05:50.7299245Z Generating XML reports... 2023-01-11T22:05:50.7299668Z Generated XML report: test-reports/python-unittest/test_jit_disabled/TEST-TestJitDisabled-20230111220550.xml 2023-01-11T22:05:50.7299902Z 2023-01-11T22:05:50.7300144Z ##[endgroup] 2023-01-11T22:05:50.7300524Z FINISHED PRINTING LOG FILE of test_jit_disabled (/var/lib/jenkins/workspace/test/test-reports/test_jit_disabled_ttoqxyt1) 2023-01-11T22:05:50.7300748Z 2023-01-11T22:05:50.7300915Z Running test_linalg ... [2023-01-11 22:05:50.729439] 2023-01-11T22:05:50.7301372Z Executing ['/opt/conda/bin/python', '-bb', 'test_linalg.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:05:50.729670] 2023-01-11T22:05:52.6343737Z 2023-01-11T22:05:52.6344263Z Expand the folded group to see the log file of test_linalg 2023-01-11T22:05:52.6345193Z ##[group]PRINTING LOG FILE of test_linalg (/var/lib/jenkins/workspace/test/test-reports/test_linalg_l0999tk3) 2023-01-11T22:05:52.6345407Z 2023-01-11T22:05:52.6345482Z Running tests... 2023-01-11T22:05:52.6345903Z ---------------------------------------------------------------------- 2023-01-11T22:05:52.6346075Z 2023-01-11T22:05:52.6346301Z ---------------------------------------------------------------------- 2023-01-11T22:05:52.6346579Z Ran 0 tests in 0.000s 2023-01-11T22:05:52.6346691Z 2023-01-11T22:05:52.6346753Z OK 2023-01-11T22:05:52.6346859Z 2023-01-11T22:05:52.6346944Z Generating XML reports... 2023-01-11T22:05:52.6347255Z Test results will be stored in test-reports/python-unittest/test_linalg 2023-01-11T22:05:52.6347431Z 2023-01-11T22:05:52.6347647Z ##[endgroup] 2023-01-11T22:05:52.6348120Z FINISHED PRINTING LOG FILE of test_linalg (/var/lib/jenkins/workspace/test/test-reports/test_linalg_l0999tk3) 2023-01-11T22:05:52.6348352Z 2023-01-11T22:05:52.6348545Z Running test_mobile_optimizer ... [2023-01-11 22:05:52.634506] 2023-01-11T22:05:52.6349035Z Executing ['/opt/conda/bin/python', '-bb', 'test_mobile_optimizer.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:05:52.634725] 2023-01-11T22:05:56.8808894Z 2023-01-11T22:05:56.8809694Z Expand the folded group to see the log file of test_mobile_optimizer 2023-01-11T22:05:56.8810993Z ##[group]PRINTING LOG FILE of test_mobile_optimizer (/var/lib/jenkins/workspace/test/test-reports/test_mobile_optimizer_2kec7tf5) 2023-01-11T22:05:56.8811738Z 2023-01-11T22:05:56.8811872Z Running tests... 2023-01-11T22:05:56.8812288Z ---------------------------------------------------------------------- 2023-01-11T22:05:56.8812770Z Test results will be stored in test-reports/python-unittest/test_mobile_optimizer 2023-01-11T22:05:56.8813086Z test_clone_module_with_class (__main__.TestOptimizer) ... ok (0.241s) 2023-01-11T22:05:56.8814134Z test_generate_mobile_module_lints (__main__.TestOptimizer) ... /opt/conda/lib/python3.10/site-packages/torch/utils/bundled_inputs.py:394: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T22:05:56.8814865Z if arg._typed_storage().size() <= MAX_RAW_TENSOR_SIZE or skip_size_check: 2023-01-11T22:05:56.8815104Z ok (0.029s) 2023-01-11T22:05:56.8815424Z test_hoist_conv_packed_params (__main__.TestOptimizer) ... ok (0.464s) 2023-01-11T22:05:56.8815769Z test_mobilenet_optimize_for_mobile (__main__.TestOptimizer) ... ok (1.348s) 2023-01-11T22:05:56.8816059Z test_optimize_for_mobile (__main__.TestOptimizer) ... ok (0.172s) 2023-01-11T22:05:56.8816393Z test_preserve_bundled_inputs_methods (__main__.TestOptimizer) ... ok (0.037s) 2023-01-11T22:05:56.8816701Z test_quantized_conv_no_asan_failures (__main__.TestOptimizer) ... ok (0.130s) 2023-01-11T22:05:56.8816858Z 2023-01-11T22:05:56.8817080Z ---------------------------------------------------------------------- 2023-01-11T22:05:56.8817353Z Ran 7 tests in 2.423s 2023-01-11T22:05:56.8817468Z 2023-01-11T22:05:56.8817529Z OK 2023-01-11T22:05:56.8817619Z 2023-01-11T22:05:56.8817690Z Generating XML reports... 2023-01-11T22:05:56.8818162Z Generated XML report: test-reports/python-unittest/test_mobile_optimizer/TEST-TestOptimizer-20230111220554.xml 2023-01-11T22:05:56.8818395Z 2023-01-11T22:05:56.8818685Z ##[endgroup] 2023-01-11T22:05:56.8819088Z FINISHED PRINTING LOG FILE of test_mobile_optimizer (/var/lib/jenkins/workspace/test/test-reports/test_mobile_optimizer_2kec7tf5) 2023-01-11T22:05:56.8819318Z 2023-01-11T22:05:56.8819532Z Running test_modules ... [2023-01-11 22:05:56.881047] 2023-01-11T22:05:56.8819990Z Executing ['/opt/conda/bin/python', '-bb', 'test_modules.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:05:56.881305] 2023-01-11T22:05:59.3453926Z 2023-01-11T22:05:59.3454481Z Expand the folded group to see the log file of test_modules 2023-01-11T22:05:59.3455410Z ##[group]PRINTING LOG FILE of test_modules (/var/lib/jenkins/workspace/test/test-reports/test_modules_y75r8rd8) 2023-01-11T22:05:59.3455681Z 2023-01-11T22:05:59.3455745Z Running tests... 2023-01-11T22:05:59.3456246Z ---------------------------------------------------------------------- 2023-01-11T22:05:59.3456504Z 2023-01-11T22:05:59.3456709Z ---------------------------------------------------------------------- 2023-01-11T22:05:59.3456960Z Ran 0 tests in 0.000s 2023-01-11T22:05:59.3457120Z 2023-01-11T22:05:59.3457182Z OK 2023-01-11T22:05:59.3457274Z 2023-01-11T22:05:59.3457362Z Generating XML reports... 2023-01-11T22:05:59.3457709Z Test results will be stored in test-reports/python-unittest/test_modules 2023-01-11T22:05:59.3457930Z 2023-01-11T22:05:59.3458146Z ##[endgroup] 2023-01-11T22:05:59.3458609Z FINISHED PRINTING LOG FILE of test_modules (/var/lib/jenkins/workspace/test/test-reports/test_modules_y75r8rd8) 2023-01-11T22:05:59.3458822Z 2023-01-11T22:05:59.3459013Z Running test_multiprocessing ... [2023-01-11 22:05:59.345526] 2023-01-11T22:05:59.3459495Z Executing ['/opt/conda/bin/python', '-bb', 'test_multiprocessing.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:05:59.345742] 2023-01-11T22:06:11.3333412Z 2023-01-11T22:06:11.3333964Z Expand the folded group to see the log file of test_multiprocessing 2023-01-11T22:06:11.3335044Z ##[group]PRINTING LOG FILE of test_multiprocessing (/var/lib/jenkins/workspace/test/test-reports/test_multiprocessing_p7p9y9kt) 2023-01-11T22:06:11.3335669Z 2023-01-11T22:06:11.3335765Z Running tests... 
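A note on the TypedStorage deprecation warnings printed above: the warning itself already names the replacement, tensor.untyped_storage() instead of tensor.storage(). A minimal sketch of that migration (the tensor and variable names below are illustrative, not taken from the test suite):

import torch

t = torch.arange(4, dtype=torch.float32)

# Deprecated path: returns a TypedStorage and emits the UserWarning seen in the log.
s_typed = t.storage()

# Replacement named in the warning: work with the untyped storage directly.
s_untyped = t.untyped_storage()

# Both refer to the same allocation; for this freshly created tensor the
# storage offset is 0, so the tensor's data pointer matches the storage's.
assert s_untyped.data_ptr() == t.data_ptr()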
2023-01-11T22:06:11.3336313Z ---------------------------------------------------------------------- 2023-01-11T22:06:11.3337385Z Test results will be stored in test-reports/python-unittest/test_multiprocessing 2023-01-11T22:06:11.3337843Z test_autograd_errors (__main__.TestMultiprocessing) ... ok (0.246s) 2023-01-11T22:06:11.3338727Z test_autograd_fine_with_spawn (__main__.TestMultiprocessing) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:11.3339309Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:11.3340029Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:11.3340662Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:11.3341505Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:11.3342039Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:11.3342956Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:11.3343533Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:11.3344385Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:11.3345027Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:11.3345760Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:11.3346360Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:11.3346745Z ok (1.373s) 2023-01-11T22:06:11.3347203Z test_cuda_bad_call (__main__.TestMultiprocessing) ... skip: CUDA not available (0.001s) 2023-01-11T22:06:11.3347772Z test_cuda_ipc_deadlock (__main__.TestMultiprocessing) ... skip: CUDA IPC not available (0.001s) 2023-01-11T22:06:11.3348378Z test_cuda_memory_allocation (__main__.TestMultiprocessing) ... skip: CUDA IPC not available (0.001s) 2023-01-11T22:06:11.3349012Z test_cuda_parameter_sharing (__main__.TestMultiprocessing) ... skip: CUDA IPC not available (0.000s) 2023-01-11T22:06:11.3349629Z test_cuda_send_many (__main__.TestMultiprocessing) ... skip: CUDA IPC not available (0.001s) 2023-01-11T22:06:11.3350175Z test_cuda_simple (__main__.TestMultiprocessing) ... skip: CUDA IPC not available (0.000s) 2023-01-11T22:06:11.3350748Z test_cuda_small_tensors (__main__.TestMultiprocessing) ... skip: CUDA IPC not available (0.001s) 2023-01-11T22:06:11.3351354Z test_cuda_variable_sharing (__main__.TestMultiprocessing) ... skip: CUDA IPC not available (0.000s) 2023-01-11T22:06:11.3351877Z test_empty_shared (__main__.TestMultiprocessing) ... ok (0.001s) 2023-01-11T22:06:11.3352364Z test_empty_tensor_sharing (__main__.TestMultiprocessing) ... ok (0.003s) 2023-01-11T22:06:11.3352919Z test_empty_tensor_sharing_cuda (__main__.TestMultiprocessing) ... skip: CUDA not available (0.000s) 2023-01-11T22:06:11.3353495Z test_event (__main__.TestMultiprocessing) ... skip: CUDA IPC not available (0.001s) 2023-01-11T22:06:11.3354061Z test_event_handle_exporter (__main__.TestMultiprocessing) ... skip: CUDA IPC not available (0.000s) 2023-01-11T22:06:11.3354676Z test_event_handle_importer (__main__.TestMultiprocessing) ... 
skip: CUDA IPC not available (0.001s) 2023-01-11T22:06:11.3355310Z test_event_handle_multi_gpu (__main__.TestMultiprocessing) ... skip: CUDA IPC not available (0.001s) 2023-01-11T22:06:11.3355932Z test_event_multiprocess (__main__.TestMultiprocessing) ... skip: CUDA IPC not available (0.001s) 2023-01-11T22:06:11.3356475Z test_fd_pool (__main__.TestMultiprocessing) ... ok (0.614s) 2023-01-11T22:06:11.3357793Z test_fd_preserve_sharing (__main__.TestMultiprocessing) ... /var/lib/jenkins/workspace/test/test_multiprocessing.py:304: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T22:06:11.3358857Z data = [x.storage(), x, x[2], x[:, 1]] 2023-01-11T22:06:11.3360444Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:1904: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T22:06:11.3361424Z device=typed_storage.device, 2023-01-11T22:06:11.3362494Z /var/lib/jenkins/workspace/test/test_multiprocessing.py:312: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T22:06:11.3363515Z self.assertEqual(t.storage()._cdata, storage_cdata) 2023-01-11T22:06:11.3363887Z ok (0.145s) 2023-01-11T22:06:11.3364879Z test_fd_sharing (__main__.TestMultiprocessing) ... /var/lib/jenkins/workspace/test/test_multiprocessing.py:284: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T22:06:11.3365831Z s1 = t1.storage() 2023-01-11T22:06:11.3366786Z /var/lib/jenkins/workspace/test/test_multiprocessing.py:285: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T22:06:11.3367671Z s2 = t2.storage() 2023-01-11T22:06:11.3368611Z /var/lib/jenkins/workspace/test/test_multiprocessing.py:287: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T22:06:11.3369579Z self.assertEqual(s1.data_ptr(), s1.data_ptr()) 2023-01-11T22:06:11.3369918Z ok (0.638s) 2023-01-11T22:06:11.3370944Z test_fs (__main__.TestMultiprocessing) ... /var/lib/jenkins/workspace/test/test_multiprocessing.py:378: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() 2023-01-11T22:06:11.3371951Z x = torch.DoubleStorage(4) 2023-01-11T22:06:11.3372265Z ok (5.006s) 2023-01-11T22:06:11.3372664Z test_fs_is_shared (__main__.TestMultiprocessing) ... ok (0.001s) 2023-01-11T22:06:11.3373149Z test_fs_pool (__main__.TestMultiprocessing) ... ok (0.501s) 2023-01-11T22:06:11.3373658Z test_fs_preserve_sharing (__main__.TestMultiprocessing) ... ok (0.099s) 2023-01-11T22:06:11.3375247Z test_fs_sharing (__main__.TestMultiprocessing) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/67002 for platform(s) windows, mac, linux, dynamo, rocm. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests. (0.000s) 2023-01-11T22:06:11.3376425Z test_inherit_tensor (__main__.TestMultiprocessing) ... ok (0.011s) 2023-01-11T22:06:11.3377498Z test_integer_parameter_serialization_cpu (__main__.TestMultiprocessing) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:11.3378240Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:11.3378987Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:11.3379602Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:11.3380006Z ok (1.564s) 2023-01-11T22:06:11.3380523Z test_integer_parameter_serialization_cuda (__main__.TestMultiprocessing) ... skip: CUDA IPC not available (0.000s) 2023-01-11T22:06:11.3381103Z test_is_shared (__main__.TestMultiprocessing) ... ok (0.001s) 2023-01-11T22:06:11.3381644Z test_is_shared_cuda (__main__.TestMultiprocessing) ... skip: CUDA not available (0.000s) 2023-01-11T22:06:11.3382325Z test_leaf_variable_sharing (__main__.TestMultiprocessing) ... ok (0.015s) 2023-01-11T22:06:11.3383037Z test_mixed_types_cuda_sharing (__main__.TestMultiprocessing) ... skip: CUDA IPC not available (0.001s) 2023-01-11T22:06:11.3383638Z test_non_leaf_variable_sharing (__main__.TestMultiprocessing) ... ok (0.002s) 2023-01-11T22:06:11.3385297Z test_parameter_sharing (__main__.TestMultiprocessing) ... /opt/conda/lib/python3.10/site-packages/torch/utils/hooks.py:87: UserWarning: backward hook .hook at 0x7fad44f14c10> on tensor will not be serialized. If this is expected, you can decorate the function with @torch.utils.hooks.unserializable_hook to suppress this warning 2023-01-11T22:06:11.3386324Z warnings.warn("backward hook {} on tensor will not be " 2023-01-11T22:06:11.3386684Z ok (0.010s) 2023-01-11T22:06:11.3388222Z test_variable_sharing (__main__.TestMultiprocessing) ... /opt/conda/lib/python3.10/site-packages/torch/utils/hooks.py:87: UserWarning: backward hook .hook at 0x7fad44f157e0> on tensor will not be serialized. If this is expected, you can decorate the function with @torch.utils.hooks.unserializable_hook to suppress this warning 2023-01-11T22:06:11.3389306Z warnings.warn("backward hook {} on tensor will not be " 2023-01-11T22:06:11.3389676Z ok (0.018s) 2023-01-11T22:06:11.3390110Z test_wrong_cuda_fork (__main__.TestMultiprocessing) ... 
skip: CUDA not available (0.001s) 2023-01-11T22:06:11.3390397Z 2023-01-11T22:06:11.3390753Z ---------------------------------------------------------------------- 2023-01-11T22:06:11.3391165Z Ran 37 tests in 10.260s 2023-01-11T22:06:11.3391365Z 2023-01-11T22:06:11.3391485Z OK (skipped=19) 2023-01-11T22:06:11.3391652Z 2023-01-11T22:06:11.3391791Z Generating XML reports... 2023-01-11T22:06:11.3392551Z Generated XML report: test-reports/python-unittest/test_multiprocessing/TEST-TestMultiprocessing-20230111220600.xml 2023-01-11T22:06:11.3392968Z 2023-01-11T22:06:11.3393408Z ##[endgroup] 2023-01-11T22:06:11.3394036Z FINISHED PRINTING LOG FILE of test_multiprocessing (/var/lib/jenkins/workspace/test/test-reports/test_multiprocessing_p7p9y9kt) 2023-01-11T22:06:11.3394466Z 2023-01-11T22:06:11.3394830Z Running test_multiprocessing_spawn ... [2023-01-11 22:06:11.333528] 2023-01-11T22:06:11.3395708Z Executing ['/opt/conda/bin/python', '-bb', 'test_multiprocessing_spawn.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:06:11.333808] 2023-01-11T22:06:29.9797656Z 2023-01-11T22:06:29.9798188Z Expand the folded group to see the log file of test_multiprocessing_spawn 2023-01-11T22:06:29.9799354Z ##[group]PRINTING LOG FILE of test_multiprocessing_spawn (/var/lib/jenkins/workspace/test/test-reports/test_multiprocessing_spawn_jml9v8n9) 2023-01-11T22:06:29.9799806Z 2023-01-11T22:06:29.9799915Z Running tests... 2023-01-11T22:06:29.9800506Z ---------------------------------------------------------------------- 2023-01-11T22:06:29.9801594Z Test results will be stored in test-reports/python-unittest/test_multiprocessing_spawn 2023-01-11T22:06:29.9802062Z test_errors_pickleable (__main__.ErrorTest) ... ok (0.225s) 2023-01-11T22:06:29.9802451Z test_exception_all (__main__.ForkTest) ... ok (0.052s) 2023-01-11T22:06:29.9802831Z test_exception_single (__main__.ForkTest) ... ok (0.021s) 2023-01-11T22:06:29.9803204Z test_first_argument_index (__main__.ForkTest) ... ok (0.010s) 2023-01-11T22:06:29.9803686Z test_success (__main__.ForkTest) ... ok (0.009s) 2023-01-11T22:06:29.9804123Z test_success_first_then_exception (__main__.ForkTest) ... ok (0.112s) 2023-01-11T22:06:29.9804558Z test_success_non_blocking (__main__.ForkTest) ... ok (0.011s) 2023-01-11T22:06:29.9804949Z test_terminate_exit (__main__.ForkTest) ... ok (0.009s) 2023-01-11T22:06:29.9805336Z test_terminate_signal (__main__.ForkTest) ... ok (0.233s) 2023-01-11T22:06:29.9806395Z test_exception_all (__main__.SpawnTest) ... 
/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:29.9807003Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:29.9807747Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:29.9808351Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:29.9809175Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:29.9809817Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:29.9810657Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:29.9811327Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:29.9811733Z ok (1.632s) 2023-01-11T22:06:29.9812506Z test_exception_raises (__main__.SpawnTest) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:29.9813283Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:29.9814077Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:29.9814676Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:29.9815066Z ok (1.563s) 2023-01-11T22:06:29.9815875Z test_exception_single (__main__.SpawnTest) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:29.9816532Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:29.9817370Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:29.9818006Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:29.9818814Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:29.9819413Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:29.9820249Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:29.9820894Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:29.9821704Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:29.9822254Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:29.9823502Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:29.9824021Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:29.9824962Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:29.9825481Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:29.9826302Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:29.9826878Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 
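The repeated "loaded 76 slow tests" / "loaded 210 disabled tests" warnings above come from each spawned child process re-importing torch.testing._internal.common_utils: every worker is a fresh interpreter, so import-time warnings repeat once per process. A minimal sketch of the torch.multiprocessing.spawn entry point these SpawnTest cases exercise (worker and its arguments are illustrative, not from the test file):

import torch.multiprocessing as mp

def worker(rank, payload):
    # spawn() passes the process index as the first argument; any module-level
    # imports (and their warnings) run again in every child process.
    print(f"worker {rank} received {payload}")

if __name__ == "__main__":
    # Start two child processes and wait for them to finish.
    mp.spawn(worker, args=("hello",), nprocs=2, join=True)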
2023-01-11T22:06:29.9827267Z ok (3.281s) 2023-01-11T22:06:29.9828088Z test_first_argument_index (__main__.SpawnTest) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:29.9828771Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:29.9829404Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:29.9829825Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:29.9830268Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:29.9830603Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:29.9831036Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:29.9831362Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:29.9831589Z ok (1.622s) 2023-01-11T22:06:29.9831803Z test_signal_raises (__main__.SpawnTest) ... ok (0.002s) 2023-01-11T22:06:29.9832263Z test_success (__main__.SpawnTest) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:29.9832621Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:29.9833052Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:29.9833398Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:29.9833814Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:29.9834144Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:29.9834571Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:29.9834901Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:29.9835125Z ok (1.659s) 2023-01-11T22:06:29.9835588Z test_success_first_then_exception (__main__.SpawnTest) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:29.9835968Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:29.9836388Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:29.9836731Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:29.9837160Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:29.9837513Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:29.9837929Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:29.9838270Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:29.9838493Z ok (1.725s) 2023-01-11T22:06:29.9838943Z test_success_non_blocking (__main__.SpawnTest) ... 
/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:29.9839368Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:29.9839864Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:29.9840215Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:29.9840641Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:29.9840973Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:29.9841404Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:29.9841747Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:29.9841960Z ok (1.629s) 2023-01-11T22:06:29.9842404Z test_terminate_exit (__main__.SpawnTest) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:29.9842811Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:29.9843232Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:29.9843573Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:29.9844008Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:29.9844337Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:29.9844750Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:29.9845090Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:29.9845312Z ok (1.595s) 2023-01-11T22:06:29.9845748Z test_terminate_signal (__main__.SpawnTest) ... /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:29.9846114Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:29.9846545Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:29.9846884Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:29.9847304Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:122: UserWarning: loaded 76 slow tests 2023-01-11T22:06:29.9847628Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests") 2023-01-11T22:06:29.9848056Z /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py:126: UserWarning: loaded 210 disabled tests 2023-01-11T22:06:29.9848397Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests") 2023-01-11T22:06:29.9848607Z ok (1.550s) 2023-01-11T22:06:29.9848708Z 2023-01-11T22:06:29.9848911Z ---------------------------------------------------------------------- 2023-01-11T22:06:29.9849160Z Ran 19 tests in 16.941s 2023-01-11T22:06:29.9849274Z 2023-01-11T22:06:29.9849322Z OK 2023-01-11T22:06:29.9849412Z 2023-01-11T22:06:29.9849496Z Generating XML reports... 
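Regarding the earlier warning in the test_multiprocessing log that a backward hook "on tensor will not be serialized": the warning text itself points at torch.utils.hooks.unserializable_hook as the opt-out. A minimal sketch of that usage, assuming a throwaway hook function (the hook body and tensor are illustrative):

import torch
from torch.utils.hooks import unserializable_hook

@unserializable_hook  # marks the hook as intentionally not serialized
def hook(grad):
    return grad * 2

t = torch.ones(3, requires_grad=True)
t.register_hook(hook)
# Sharing or pickling t (e.g. sending it to another process, as
# test_parameter_sharing / test_variable_sharing do) would otherwise emit
# the "will not be serialized" UserWarning for this hook.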
2023-01-11T22:06:29.9849909Z Generated XML report: test-reports/python-unittest/test_multiprocessing_spawn/TEST-ErrorTest-20230111220612.xml 2023-01-11T22:06:29.9850428Z Generated XML report: test-reports/python-unittest/test_multiprocessing_spawn/TEST-ForkTest-20230111220612.xml 2023-01-11T22:06:29.9850921Z Generated XML report: test-reports/python-unittest/test_multiprocessing_spawn/TEST-SpawnTest-20230111220612.xml 2023-01-11T22:06:29.9851146Z 2023-01-11T22:06:29.9851457Z ##[endgroup] 2023-01-11T22:06:29.9851886Z FINISHED PRINTING LOG FILE of test_multiprocessing_spawn (/var/lib/jenkins/workspace/test/test-reports/test_multiprocessing_spawn_jml9v8n9) 2023-01-11T22:06:29.9852115Z 2023-01-11T22:06:29.9852300Z Running test_namedtuple_return_api ... [2023-01-11 22:06:29.979984] 2023-01-11T22:06:29.9852844Z Executing ['/opt/conda/bin/python', '-bb', 'test_namedtuple_return_api.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:06:29.980275] 2023-01-11T22:06:32.9954491Z 2023-01-11T22:06:32.9954973Z Expand the folded group to see the log file of test_namedtuple_return_api 2023-01-11T22:06:32.9955923Z ##[group]PRINTING LOG FILE of test_namedtuple_return_api (/var/lib/jenkins/workspace/test/test-reports/test_namedtuple_return_api_v4dh998s) 2023-01-11T22:06:32.9956300Z 2023-01-11T22:06:32.9956416Z Running tests... 2023-01-11T22:06:32.9957037Z ---------------------------------------------------------------------- 2023-01-11T22:06:32.9957762Z Test results will be stored in test-reports/python-unittest/test_namedtuple_return_api 2023-01-11T22:06:32.9958361Z test_import_return_types (__main__.TestNamedTupleAPI) ... ok (0.210s) 2023-01-11T22:06:32.9959196Z test_namedtuple_return (__main__.TestNamedTupleAPI) ... /var/lib/jenkins/workspace/test/test_namedtuple_return_api.py:149: UserWarning: torch.qr is deprecated in favor of torch.linalg.qr and will be removed in a future PyTorch release. 2023-01-11T22:06:32.9960083Z The boolean parameter 'some' has been replaced with a string parameter 'mode'. 2023-01-11T22:06:32.9960440Z Q, R = torch.qr(A, some) 2023-01-11T22:06:32.9960711Z should be replaced with 2023-01-11T22:06:32.9961516Z Q, R = torch.linalg.qr(A, 'reduced' if some else 'complete') (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/BatchLinearAlgebra.cpp:2459.) 2023-01-11T22:06:32.9962084Z ret1 = func(a, *op.input) 2023-01-11T22:06:32.9962668Z /var/lib/jenkins/workspace/test/test_namedtuple_return_api.py:155: UserWarning: torch.qr is deprecated in favor of torch.linalg.qr and will be removed in a future PyTorch release. 2023-01-11T22:06:32.9963163Z The boolean parameter 'some' has been replaced with a string parameter 'mode'. 2023-01-11T22:06:32.9963417Z Q, R = torch.qr(A, some) 2023-01-11T22:06:32.9963596Z should be replaced with 2023-01-11T22:06:32.9964081Z Q, R = torch.linalg.qr(A, 'reduced' if some else 'complete') (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/BatchLinearAlgebra.cpp:2471.) 2023-01-11T22:06:32.9964449Z ret2 = func(a, *op.input, out=tuple(ret1)) 2023-01-11T22:06:32.9964839Z /var/lib/jenkins/workspace/test/test_namedtuple_return_api.py:149: UserWarning: torch.symeig is deprecated in favor of torch.linalg.eigh and will be removed in a future PyTorch release. 2023-01-11T22:06:32.9965321Z The default behavior has changed from using the upper triangular portion of the matrix by default to using the lower triangular portion. 
2023-01-11T22:06:32.9965647Z L, _ = torch.symeig(A, upper=upper) 2023-01-11T22:06:32.9965852Z should be replaced with 2023-01-11T22:06:32.9966132Z L = torch.linalg.eigvalsh(A, UPLO='U' if upper else 'L') 2023-01-11T22:06:32.9966344Z and 2023-01-11T22:06:32.9966531Z L, V = torch.symeig(A, eigenvectors=True) 2023-01-11T22:06:32.9966728Z should be replaced with 2023-01-11T22:06:32.9967198Z L, V = torch.linalg.eigh(A, UPLO='U' if upper else 'L') (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/BatchLinearAlgebra.cpp:2910.) 2023-01-11T22:06:32.9967537Z ret1 = func(a, *op.input) 2023-01-11T22:06:32.9967894Z /var/lib/jenkins/workspace/test/test_namedtuple_return_api.py:155: UserWarning: torch.symeig is deprecated in favor of torch.linalg.eigh and will be removed in a future PyTorch release. 2023-01-11T22:06:32.9968377Z The default behavior has changed from using the upper triangular portion of the matrix by default to using the lower triangular portion. 2023-01-11T22:06:32.9968694Z L, _ = torch.symeig(A, upper=upper) 2023-01-11T22:06:32.9968898Z should be replaced with 2023-01-11T22:06:32.9969173Z L = torch.linalg.eigvalsh(A, UPLO='U' if upper else 'L') 2023-01-11T22:06:32.9969385Z and 2023-01-11T22:06:32.9969571Z L, V = torch.symeig(A, eigenvectors=True) 2023-01-11T22:06:32.9969767Z should be replaced with 2023-01-11T22:06:32.9970230Z L, V = torch.linalg.eigh(A, UPLO='U' if upper else 'L') (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/BatchLinearAlgebra.cpp:2928.) 2023-01-11T22:06:32.9970714Z ret2 = func(a, *op.input, out=tuple(ret1)) 2023-01-11T22:06:32.9971125Z /var/lib/jenkins/workspace/test/test_namedtuple_return_api.py:149: UserWarning: torch.triangular_solve is deprecated in favor of torch.linalg.solve_triangularand will be removed in a future PyTorch release. 2023-01-11T22:06:32.9971604Z torch.linalg.solve_triangular has its arguments reversed and does not return a copy of one of the inputs. 2023-01-11T22:06:32.9971934Z X = torch.triangular_solve(B, A).solution 2023-01-11T22:06:32.9972146Z should be replaced with 2023-01-11T22:06:32.9972474Z X = torch.linalg.solve_triangular(A, B). (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/BatchLinearAlgebra.cpp:2225.) 2023-01-11T22:06:32.9972798Z ret1 = func(a, *op.input) 2023-01-11T22:06:32.9973240Z /var/lib/jenkins/workspace/test/test_namedtuple_return_api.py:149: UserWarning: torch.lu is deprecated in favor of torch.linalg.lu_factor / torch.linalg.lu_factor_ex and will be removed in a future PyTorch release. 2023-01-11T22:06:32.9973642Z LU, pivots = torch.lu(A, compute_pivots) 2023-01-11T22:06:32.9973839Z should be replaced with 2023-01-11T22:06:32.9974070Z LU, pivots = torch.linalg.lu_factor(A, compute_pivots) 2023-01-11T22:06:32.9974280Z and 2023-01-11T22:06:32.9974481Z LU, pivots, info = torch.lu(A, compute_pivots, get_infos=True) 2023-01-11T22:06:32.9974712Z should be replaced with 2023-01-11T22:06:32.9975071Z LU, pivots, info = torch.linalg.lu_factor_ex(A, compute_pivots) (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/native/BatchLinearAlgebra.cpp:2029.) 2023-01-11T22:06:32.9975412Z ret1 = func(a, *op.input) 2023-01-11T22:06:32.9975581Z ok (0.039s) 2023-01-11T22:06:32.9975820Z test_native_functions_yaml (__main__.TestNamedTupleAPI) ... 
ok (1.117s) 2023-01-11T22:06:32.9975993Z 2023-01-11T22:06:32.9976199Z ---------------------------------------------------------------------- 2023-01-11T22:06:32.9976435Z Ran 3 tests in 1.366s 2023-01-11T22:06:32.9976551Z 2023-01-11T22:06:32.9976613Z OK 2023-01-11T22:06:32.9976704Z 2023-01-11T22:06:32.9976787Z Generating XML reports... 2023-01-11T22:06:32.9977212Z Generated XML report: test-reports/python-unittest/test_namedtuple_return_api/TEST-TestNamedTupleAPI-20230111220631.xml 2023-01-11T22:06:32.9977461Z 2023-01-11T22:06:32.9977742Z ##[endgroup] 2023-01-11T22:06:32.9978168Z FINISHED PRINTING LOG FILE of test_namedtuple_return_api (/var/lib/jenkins/workspace/test/test-reports/test_namedtuple_return_api_v4dh998s) 2023-01-11T22:06:32.9978409Z 2023-01-11T22:06:32.9978562Z Running test_ops ... [2023-01-11 22:06:32.995544] 2023-01-11T22:06:34.5481026Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T22:06:34.5661498Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T22:06:34.6173331Z Ignoring disabled issues: [] 2023-01-11T22:06:34.6305913Z Ignoring disabled issues: [] 2023-01-11T22:06:34.8046269Z Executing ['/opt/conda/bin/python', '-bb', 'test_ops.py', '-v', '--use-pytest', '-vv', '-rfEX', '-x', '--reruns=2', '--shard-id=0', '--num-shards=2', '-k=not _linalg_cholesky_', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:06:34.804174] 2023-01-11T22:06:34.8047861Z Executing ['/opt/conda/bin/python', '-bb', 'test_ops.py', '-v', '--use-pytest', '-vv', '-rfEX', '-x', '--reruns=2', '--shard-id=1', '--num-shards=2', '-k=not _linalg_cholesky_', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:06:34.804276] 2023-01-11T22:06:37.9685924Z 2023-01-11T22:06:37.9686426Z Expand the folded group to see the log file of test_ops 2023-01-11T22:06:37.9687170Z ##[group]PRINTING LOG FILE of test_ops (/var/lib/jenkins/workspace/test/test-reports/test_ops_p3qr_v_u) 2023-01-11T22:06:37.9687737Z Test results will be stored in test-reports/python-pytest/test_ops/test_ops-b5301370d4336bb2.xml 2023-01-11T22:06:37.9688058Z ============================= test session starts ============================== 2023-01-11T22:06:37.9688642Z platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0 -- /opt/conda/bin/python 2023-01-11T22:06:37.9688903Z cachedir: .pytest_cache 2023-01-11T22:06:37.9689319Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2023-01-11T22:06:37.9689674Z rootdir: /var/lib/jenkins/workspace, configfile: pytest.ini 2023-01-11T22:06:37.9690086Z plugins: hypothesis-5.35.1, flakefinder-1.1.0, rerunfailures-10.3, shard-0.1.2, xdist-3.1.0, xdoctest-1.1.0 2023-01-11T22:06:37.9690382Z collecting ... 
collected 0 items 2023-01-11T22:06:37.9690588Z Running 0 items in this shard: 2023-01-11T22:06:37.9690710Z 2023-01-11T22:06:37.9690810Z =============================== warnings summary =============================== 2023-01-11T22:06:37.9691158Z ../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171 2023-01-11T22:06:37.9691759Z /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171: PytestAssertRewriteWarning: Module already imported so cannot be rewritten: hypothesis 2023-01-11T22:06:37.9692121Z self._mark_plugins_for_rewrite(hook) 2023-01-11T22:06:37.9692240Z 2023-01-11T22:06:37.9692473Z -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html 2023-01-11T22:06:37.9692952Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/test_ops/test_ops-b5301370d4336bb2.xml - 2023-01-11T22:06:37.9693280Z ============================== 1 warning in 0.04s ============================== 2023-01-11T22:06:37.9693581Z If in CI, skip info is located in the xml test reports, please either go to s3 or the hud to download them 2023-01-11T22:06:37.9693759Z 2023-01-11T22:06:37.9693986Z ##[endgroup] 2023-01-11T22:06:37.9694341Z FINISHED PRINTING LOG FILE of test_ops (/var/lib/jenkins/workspace/test/test-reports/test_ops_p3qr_v_u) 2023-01-11T22:06:37.9694539Z 2023-01-11T22:06:38.0000064Z 2023-01-11T22:06:38.0000436Z Expand the folded group to see the log file of test_ops 2023-01-11T22:06:38.0001175Z ##[group]PRINTING LOG FILE of test_ops (/var/lib/jenkins/workspace/test/test-reports/test_ops_12evdp8v) 2023-01-11T22:06:38.0002089Z Test results will be stored in test-reports/python-pytest/test_ops/test_ops-b489e73817dffd05.xml 2023-01-11T22:06:38.0002477Z ============================= test session starts ============================== 2023-01-11T22:06:38.0002969Z platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0 -- /opt/conda/bin/python 2023-01-11T22:06:38.0003439Z cachedir: .pytest_cache 2023-01-11T22:06:38.0003974Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2023-01-11T22:06:38.0004332Z rootdir: /var/lib/jenkins/workspace, configfile: pytest.ini 2023-01-11T22:06:38.0004745Z plugins: hypothesis-5.35.1, flakefinder-1.1.0, rerunfailures-10.3, shard-0.1.2, xdist-3.1.0, xdoctest-1.1.0 2023-01-11T22:06:38.0005048Z collecting ... 
collected 0 items 2023-01-11T22:06:38.0005264Z Running 0 items in this shard: 2023-01-11T22:06:38.0005390Z 2023-01-11T22:06:38.0005523Z =============================== warnings summary =============================== 2023-01-11T22:06:38.0006010Z ../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171 2023-01-11T22:06:38.0006804Z /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171: PytestAssertRewriteWarning: Module already imported so cannot be rewritten: hypothesis 2023-01-11T22:06:38.0007169Z self._mark_plugins_for_rewrite(hook) 2023-01-11T22:06:38.0007301Z 2023-01-11T22:06:38.0007529Z -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html 2023-01-11T22:06:38.0008007Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/test_ops/test_ops-b489e73817dffd05.xml - 2023-01-11T22:06:38.0008342Z ============================== 1 warning in 0.04s ============================== 2023-01-11T22:06:38.0008645Z If in CI, skip info is located in the xml test reports, please either go to s3 or the hud to download them 2023-01-11T22:06:38.0008995Z 2023-01-11T22:06:38.0009259Z ##[endgroup] 2023-01-11T22:06:38.0009619Z FINISHED PRINTING LOG FILE of test_ops (/var/lib/jenkins/workspace/test/test-reports/test_ops_12evdp8v) 2023-01-11T22:06:38.0009820Z 2023-01-11T22:06:38.3380869Z Executing ['/opt/conda/bin/python', '-bb', 'test_ops.py', '-v', '--use-pytest', '-vv', '-rfEX', '-x', '--reruns=2', '-k=_linalg_cholesky_', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:06:38.337669] 2023-01-11T22:06:41.3317242Z 2023-01-11T22:06:41.3317717Z Expand the folded group to see the log file of test_ops 2023-01-11T22:06:41.3318622Z ##[group]PRINTING LOG FILE of test_ops (/var/lib/jenkins/workspace/test/test-reports/test_ops_h_1vzg1z) 2023-01-11T22:06:41.3319220Z Test results will be stored in test-reports/python-pytest/test_ops/test_ops-a5a4f1a02e3ee27c.xml 2023-01-11T22:06:41.3319816Z ============================= test session starts ============================== 2023-01-11T22:06:41.3320196Z platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0 -- /opt/conda/bin/python 2023-01-11T22:06:41.3320451Z cachedir: .pytest_cache 2023-01-11T22:06:41.3332249Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2023-01-11T22:06:41.3333098Z rootdir: /var/lib/jenkins/workspace, configfile: pytest.ini 2023-01-11T22:06:41.3333932Z plugins: hypothesis-5.35.1, flakefinder-1.1.0, rerunfailures-10.3, shard-0.1.2, xdist-3.1.0, xdoctest-1.1.0 2023-01-11T22:06:41.3334432Z collecting ... 
collected 0 items 2023-01-11T22:06:41.3334793Z Running 0 items in this shard: 2023-01-11T22:06:41.3335001Z 2023-01-11T22:06:41.3335184Z =============================== warnings summary =============================== 2023-01-11T22:06:41.3335791Z ../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171 2023-01-11T22:06:41.3336669Z /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171: PytestAssertRewriteWarning: Module already imported so cannot be rewritten: hypothesis 2023-01-11T22:06:41.3337288Z self._mark_plugins_for_rewrite(hook) 2023-01-11T22:06:41.3337505Z 2023-01-11T22:06:41.3337896Z -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html 2023-01-11T22:06:41.3338694Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/test_ops/test_ops-a5a4f1a02e3ee27c.xml - 2023-01-11T22:06:41.3339243Z ============================== 1 warning in 0.12s ============================== 2023-01-11T22:06:41.3339734Z If in CI, skip info is located in the xml test reports, please either go to s3 or the hud to download them 2023-01-11T22:06:41.3340059Z 2023-01-11T22:06:41.3340490Z ##[endgroup] 2023-01-11T22:06:41.3341075Z FINISHED PRINTING LOG FILE of test_ops (/var/lib/jenkins/workspace/test/test-reports/test_ops_h_1vzg1z) 2023-01-11T22:06:41.3341399Z 2023-01-11T22:06:41.3341737Z Running test_ops_fwd_gradients ... [2023-01-11 22:06:41.332143] 2023-01-11T22:06:42.8888939Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T22:06:42.8946588Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T22:06:42.9545820Z Ignoring disabled issues: [] 2023-01-11T22:06:42.9597054Z Ignoring disabled issues: [] 2023-01-11T22:06:42.9733948Z Executing ['/opt/conda/bin/python', '-bb', 'test_ops_fwd_gradients.py', '-v', '--use-pytest', '-vv', '-rfEX', '-x', '--reruns=2', '--shard-id=0', '--num-shards=2', '-k=not _linalg_cholesky_', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:06:42.973127] 2023-01-11T22:06:42.9783555Z Executing ['/opt/conda/bin/python', '-bb', 'test_ops_fwd_gradients.py', '-v', '--use-pytest', '-vv', '-rfEX', '-x', '--reruns=2', '--shard-id=1', '--num-shards=2', '-k=not _linalg_cholesky_', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:06:42.978069] 2023-01-11T22:06:45.7198762Z 2023-01-11T22:06:45.7199359Z Expand the folded group to see the log file of test_ops_fwd_gradients 2023-01-11T22:06:45.7200671Z ##[group]PRINTING LOG FILE of test_ops_fwd_gradients (/var/lib/jenkins/workspace/test/test-reports/test_ops_fwd_gradients_iw077jfx) 2023-01-11T22:06:45.7201795Z Test results will be stored in test-reports/python-pytest/test_ops_fwd_gradients/test_ops_fwd_gradients-b9403dcd25081305.xml 2023-01-11T22:06:45.7202398Z ============================= test session starts ============================== 2023-01-11T22:06:45.7203037Z platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0 -- /opt/conda/bin/python 2023-01-11T22:06:45.7203462Z cachedir: .pytest_cache 2023-01-11T22:06:45.7204205Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2023-01-11T22:06:45.7204834Z rootdir: /var/lib/jenkins/workspace, configfile: pytest.ini 2023-01-11T22:06:45.7205568Z plugins: hypothesis-5.35.1, flakefinder-1.1.0, rerunfailures-10.3, shard-0.1.2, xdist-3.1.0, xdoctest-1.1.0 2023-01-11T22:06:45.7206199Z collecting ... 
collected 0 items 2023-01-11T22:06:45.7206549Z Running 0 items in this shard: 2023-01-11T22:06:45.7206767Z 2023-01-11T22:06:45.7206959Z =============================== warnings summary =============================== 2023-01-11T22:06:45.7207569Z ../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171 2023-01-11T22:06:45.7208530Z /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171: PytestAssertRewriteWarning: Module already imported so cannot be rewritten: hypothesis 2023-01-11T22:06:45.7209163Z self._mark_plugins_for_rewrite(hook) 2023-01-11T22:06:45.7209400Z 2023-01-11T22:06:45.7209785Z -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html 2023-01-11T22:06:45.7210735Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/test_ops_fwd_gradients/test_ops_fwd_gradients-b9403dcd25081305.xml - 2023-01-11T22:06:45.7211385Z ============================== 1 warning in 0.01s ============================== 2023-01-11T22:06:45.7211927Z If in CI, skip info is located in the xml test reports, please either go to s3 or the hud to download them 2023-01-11T22:06:45.7212275Z 2023-01-11T22:06:45.7212668Z ##[endgroup] 2023-01-11T22:06:45.7213399Z FINISHED PRINTING LOG FILE of test_ops_fwd_gradients (/var/lib/jenkins/workspace/test/test-reports/test_ops_fwd_gradients_iw077jfx) 2023-01-11T22:06:45.7213817Z 2023-01-11T22:06:45.7662540Z 2023-01-11T22:06:45.7663073Z Expand the folded group to see the log file of test_ops_fwd_gradients 2023-01-11T22:06:45.7664109Z ##[group]PRINTING LOG FILE of test_ops_fwd_gradients (/var/lib/jenkins/workspace/test/test-reports/test_ops_fwd_gradients_qo6xd6ys) 2023-01-11T22:06:45.7665041Z Test results will be stored in test-reports/python-pytest/test_ops_fwd_gradients/test_ops_fwd_gradients-1e0b07ffbf426a50.xml 2023-01-11T22:06:45.7665524Z ============================= test session starts ============================== 2023-01-11T22:06:45.7665995Z platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0 -- /opt/conda/bin/python 2023-01-11T22:06:45.7666277Z cachedir: .pytest_cache 2023-01-11T22:06:45.7666694Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2023-01-11T22:06:45.7667128Z rootdir: /var/lib/jenkins/workspace, configfile: pytest.ini 2023-01-11T22:06:45.7667788Z plugins: hypothesis-5.35.1, flakefinder-1.1.0, rerunfailures-10.3, shard-0.1.2, xdist-3.1.0, xdoctest-1.1.0 2023-01-11T22:06:45.7668211Z collecting ... 
collected 0 items 2023-01-11T22:06:45.7668452Z Running 0 items in this shard: 2023-01-11T22:06:45.7668578Z 2023-01-11T22:06:45.7668691Z =============================== warnings summary =============================== 2023-01-11T22:06:45.7669040Z ../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171 2023-01-11T22:06:45.7669572Z /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171: PytestAssertRewriteWarning: Module already imported so cannot be rewritten: hypothesis 2023-01-11T22:06:45.7670118Z self._mark_plugins_for_rewrite(hook) 2023-01-11T22:06:45.7670251Z 2023-01-11T22:06:45.7670488Z -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html 2023-01-11T22:06:45.7671012Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/test_ops_fwd_gradients/test_ops_fwd_gradients-1e0b07ffbf426a50.xml - 2023-01-11T22:06:45.7671380Z ============================== 1 warning in 0.01s ============================== 2023-01-11T22:06:45.7671682Z If in CI, skip info is located in the xml test reports, please either go to s3 or the hud to download them 2023-01-11T22:06:45.7671878Z 2023-01-11T22:06:45.7672131Z ##[endgroup] 2023-01-11T22:06:45.7672538Z FINISHED PRINTING LOG FILE of test_ops_fwd_gradients (/var/lib/jenkins/workspace/test/test-reports/test_ops_fwd_gradients_qo6xd6ys) 2023-01-11T22:06:45.7672768Z 2023-01-11T22:06:46.1155147Z Executing ['/opt/conda/bin/python', '-bb', 'test_ops_fwd_gradients.py', '-v', '--use-pytest', '-vv', '-rfEX', '-x', '--reruns=2', '-k=_linalg_cholesky_', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:06:46.115058] 2023-01-11T22:06:48.7607551Z 2023-01-11T22:06:48.7608171Z Expand the folded group to see the log file of test_ops_fwd_gradients 2023-01-11T22:06:48.7609352Z ##[group]PRINTING LOG FILE of test_ops_fwd_gradients (/var/lib/jenkins/workspace/test/test-reports/test_ops_fwd_gradients_04gf6wqu) 2023-01-11T22:06:48.7610396Z Test results will be stored in test-reports/python-pytest/test_ops_fwd_gradients/test_ops_fwd_gradients-27a816f975c007fa.xml 2023-01-11T22:06:48.7610745Z ============================= test session starts ============================== 2023-01-11T22:06:48.7611103Z platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0 -- /opt/conda/bin/python 2023-01-11T22:06:48.7611364Z cachedir: .pytest_cache 2023-01-11T22:06:48.7611793Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2023-01-11T22:06:48.7612150Z rootdir: /var/lib/jenkins/workspace, configfile: pytest.ini 2023-01-11T22:06:48.7612562Z plugins: hypothesis-5.35.1, flakefinder-1.1.0, rerunfailures-10.3, shard-0.1.2, xdist-3.1.0, xdoctest-1.1.0 2023-01-11T22:06:48.7612993Z collecting ... 
collected 0 items 2023-01-11T22:06:48.7613203Z Running 0 items in this shard: 2023-01-11T22:06:48.7613327Z 2023-01-11T22:06:48.7613428Z =============================== warnings summary =============================== 2023-01-11T22:06:48.7613882Z ../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171 2023-01-11T22:06:48.7614540Z /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171: PytestAssertRewriteWarning: Module already imported so cannot be rewritten: hypothesis 2023-01-11T22:06:48.7614956Z self._mark_plugins_for_rewrite(hook) 2023-01-11T22:06:48.7615087Z 2023-01-11T22:06:48.7615351Z -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html 2023-01-11T22:06:48.7615926Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/test_ops_fwd_gradients/test_ops_fwd_gradients-27a816f975c007fa.xml - 2023-01-11T22:06:48.7616333Z ============================== 1 warning in 0.01s ============================== 2023-01-11T22:06:48.7616665Z If in CI, skip info is located in the xml test reports, please either go to s3 or the hud to download them 2023-01-11T22:06:48.7616846Z 2023-01-11T22:06:48.7617127Z ##[endgroup] 2023-01-11T22:06:48.7617538Z FINISHED PRINTING LOG FILE of test_ops_fwd_gradients (/var/lib/jenkins/workspace/test/test-reports/test_ops_fwd_gradients_04gf6wqu) 2023-01-11T22:06:48.7617766Z 2023-01-11T22:06:48.7617944Z Running test_ops_gradients ... [2023-01-11 22:06:48.761213] 2023-01-11T22:06:50.3324668Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T22:06:50.3375816Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T22:06:50.4023874Z Ignoring disabled issues: [] 2023-01-11T22:06:50.4181696Z Ignoring disabled issues: [] 2023-01-11T22:06:50.4211371Z Executing ['/opt/conda/bin/python', '-bb', 'test_ops_gradients.py', '-v', '--use-pytest', '-vv', '-rfEX', '-x', '--reruns=2', '--shard-id=0', '--num-shards=2', '-k=not _linalg_cholesky_', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:06:50.420845] 2023-01-11T22:06:50.4368803Z Executing ['/opt/conda/bin/python', '-bb', 'test_ops_gradients.py', '-v', '--use-pytest', '-vv', '-rfEX', '-x', '--reruns=2', '--shard-id=1', '--num-shards=2', '-k=not _linalg_cholesky_', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:06:50.436568] 2023-01-11T22:06:53.2078808Z 2023-01-11T22:06:53.2079693Z Expand the folded group to see the log file of test_ops_gradients 2023-01-11T22:06:53.2084077Z ##[group]PRINTING LOG FILE of test_ops_gradients (/var/lib/jenkins/workspace/test/test-reports/test_ops_gradients_wjsvgr44) 2023-01-11T22:06:53.2084930Z Test results will be stored in test-reports/python-pytest/test_ops_gradients/test_ops_gradients-57f53faeab72d232.xml 2023-01-11T22:06:53.2085285Z ============================= test session starts ============================== 2023-01-11T22:06:53.2085699Z platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0 -- /opt/conda/bin/python 2023-01-11T22:06:53.2085975Z cachedir: .pytest_cache 2023-01-11T22:06:53.2086395Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2023-01-11T22:06:53.2086789Z rootdir: /var/lib/jenkins/workspace, configfile: pytest.ini 2023-01-11T22:06:53.2087263Z plugins: hypothesis-5.35.1, flakefinder-1.1.0, rerunfailures-10.3, shard-0.1.2, xdist-3.1.0, xdoctest-1.1.0 2023-01-11T22:06:53.2087563Z collecting ... 
collected 0 items 2023-01-11T22:06:53.2087758Z Running 0 items in this shard: 2023-01-11T22:06:53.2087883Z 2023-01-11T22:06:53.2088040Z =============================== warnings summary =============================== 2023-01-11T22:06:53.2088406Z ../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171 2023-01-11T22:06:53.2088978Z /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171: PytestAssertRewriteWarning: Module already imported so cannot be rewritten: hypothesis 2023-01-11T22:06:53.2089320Z self._mark_plugins_for_rewrite(hook) 2023-01-11T22:06:53.2089467Z 2023-01-11T22:06:53.2089736Z -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html 2023-01-11T22:06:53.2090295Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/test_ops_gradients/test_ops_gradients-57f53faeab72d232.xml - 2023-01-11T22:06:53.2090663Z ============================== 1 warning in 0.01s ============================== 2023-01-11T22:06:53.2090958Z If in CI, skip info is located in the xml test reports, please either go to s3 or the hud to download them 2023-01-11T22:06:53.2091195Z 2023-01-11T22:06:53.2091435Z ##[endgroup] 2023-01-11T22:06:53.2091887Z FINISHED PRINTING LOG FILE of test_ops_gradients (/var/lib/jenkins/workspace/test/test-reports/test_ops_gradients_wjsvgr44) 2023-01-11T22:06:53.2092105Z 2023-01-11T22:06:53.2092109Z 2023-01-11T22:06:53.2092243Z Expand the folded group to see the log file of test_ops_gradients 2023-01-11T22:06:53.2092780Z ##[group]PRINTING LOG FILE of test_ops_gradients (/var/lib/jenkins/workspace/test/test-reports/test_ops_gradients__bi2bxls) 2023-01-11T22:06:53.2093330Z Test results will be stored in test-reports/python-pytest/test_ops_gradients/test_ops_gradients-fac064cd5b4e0e08.xml 2023-01-11T22:06:53.2093660Z ============================= test session starts ============================== 2023-01-11T22:06:53.2094037Z platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0 -- /opt/conda/bin/python 2023-01-11T22:06:53.2094320Z cachedir: .pytest_cache 2023-01-11T22:06:53.2094732Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2023-01-11T22:06:53.2095128Z rootdir: /var/lib/jenkins/workspace, configfile: pytest.ini 2023-01-11T22:06:53.2095702Z plugins: hypothesis-5.35.1, flakefinder-1.1.0, rerunfailures-10.3, shard-0.1.2, xdist-3.1.0, xdoctest-1.1.0 2023-01-11T22:06:53.2095998Z collecting ... 
collected 0 items 2023-01-11T22:06:53.2096193Z Running 0 items in this shard: 2023-01-11T22:06:53.2096348Z 2023-01-11T22:06:53.2096476Z =============================== warnings summary =============================== 2023-01-11T22:06:53.2096832Z ../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171 2023-01-11T22:06:53.2097418Z /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171: PytestAssertRewriteWarning: Module already imported so cannot be rewritten: hypothesis 2023-01-11T22:06:53.2097762Z self._mark_plugins_for_rewrite(hook) 2023-01-11T22:06:53.2097944Z 2023-01-11T22:06:53.2098171Z -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html 2023-01-11T22:06:53.2098784Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/test_ops_gradients/test_ops_gradients-fac064cd5b4e0e08.xml - 2023-01-11T22:06:53.2099142Z ============================== 1 warning in 0.01s ============================== 2023-01-11T22:06:53.2099499Z If in CI, skip info is located in the xml test reports, please either go to s3 or the hud to download them 2023-01-11T22:06:53.2099694Z 2023-01-11T22:06:53.2099913Z ##[endgroup] 2023-01-11T22:06:53.2100347Z FINISHED PRINTING LOG FILE of test_ops_gradients (/var/lib/jenkins/workspace/test/test-reports/test_ops_gradients__bi2bxls) 2023-01-11T22:06:53.2100572Z 2023-01-11T22:06:53.5558389Z Executing ['/opt/conda/bin/python', '-bb', 'test_ops_gradients.py', '-v', '--use-pytest', '-vv', '-rfEX', '-x', '--reruns=2', '-k=_linalg_cholesky_', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:06:53.555453] 2023-01-11T22:06:56.2452960Z 2023-01-11T22:06:56.2453537Z Expand the folded group to see the log file of test_ops_gradients 2023-01-11T22:06:56.2454860Z ##[group]PRINTING LOG FILE of test_ops_gradients (/var/lib/jenkins/workspace/test/test-reports/test_ops_gradients_h20d9p2f) 2023-01-11T22:06:56.2455549Z Test results will be stored in test-reports/python-pytest/test_ops_gradients/test_ops_gradients-94cc7b66651244d1.xml 2023-01-11T22:06:56.2455891Z ============================= test session starts ============================== 2023-01-11T22:06:56.2456245Z platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0 -- /opt/conda/bin/python 2023-01-11T22:06:56.2456505Z cachedir: .pytest_cache 2023-01-11T22:06:56.2456920Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2023-01-11T22:06:56.2457282Z rootdir: /var/lib/jenkins/workspace, configfile: pytest.ini 2023-01-11T22:06:56.2457755Z plugins: hypothesis-5.35.1, flakefinder-1.1.0, rerunfailures-10.3, shard-0.1.2, xdist-3.1.0, xdoctest-1.1.0 2023-01-11T22:06:56.2458054Z collecting ... 
collected 0 items 2023-01-11T22:06:56.2458314Z Running 0 items in this shard: 2023-01-11T22:06:56.2458446Z 2023-01-11T22:06:56.2458576Z =============================== warnings summary =============================== 2023-01-11T22:06:56.2458937Z ../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171 2023-01-11T22:06:56.2459460Z /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171: PytestAssertRewriteWarning: Module already imported so cannot be rewritten: hypothesis 2023-01-11T22:06:56.2459814Z self._mark_plugins_for_rewrite(hook) 2023-01-11T22:06:56.2459934Z 2023-01-11T22:06:56.2460169Z -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html 2023-01-11T22:06:56.2460686Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/test_ops_gradients/test_ops_gradients-94cc7b66651244d1.xml - 2023-01-11T22:06:56.2461042Z ============================== 1 warning in 0.01s ============================== 2023-01-11T22:06:56.2461352Z If in CI, skip info is located in the xml test reports, please either go to s3 or the hud to download them 2023-01-11T22:06:56.2461718Z 2023-01-11T22:06:56.2461961Z ##[endgroup] 2023-01-11T22:06:56.2462566Z FINISHED PRINTING LOG FILE of test_ops_gradients (/var/lib/jenkins/workspace/test/test-reports/test_ops_gradients_h20d9p2f) 2023-01-11T22:06:56.2462796Z 2023-01-11T22:06:56.2462971Z Running test_ops_jit ... [2023-01-11 22:06:56.245722] 2023-01-11T22:06:57.8450104Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T22:06:57.8526576Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' 2023-01-11T22:06:57.9186510Z Ignoring disabled issues: [] 2023-01-11T22:06:57.9305645Z Ignoring disabled issues: [] 2023-01-11T22:06:57.9377955Z Executing ['/opt/conda/bin/python', '-bb', 'test_ops_jit.py', '-v', '--use-pytest', '-vv', '-rfEX', '-x', '--reruns=2', '--shard-id=0', '--num-shards=2', '-k=not _linalg_cholesky_', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:06:57.937400] 2023-01-11T22:06:57.9493812Z Executing ['/opt/conda/bin/python', '-bb', 'test_ops_jit.py', '-v', '--use-pytest', '-vv', '-rfEX', '-x', '--reruns=2', '--shard-id=1', '--num-shards=2', '-k=not _linalg_cholesky_', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:06:57.949058] 2023-01-11T22:07:00.7614698Z 2023-01-11T22:07:00.7615218Z Expand the folded group to see the log file of test_ops_jit 2023-01-11T22:07:00.7616235Z ##[group]PRINTING LOG FILE of test_ops_jit (/var/lib/jenkins/workspace/test/test-reports/test_ops_jit_a0yd3h9f) 2023-01-11T22:07:00.7617270Z Test results will be stored in test-reports/python-pytest/test_ops_jit/test_ops_jit-fe16d0ecf9be8ab0.xml 2023-01-11T22:07:00.7617824Z ============================= test session starts ============================== 2023-01-11T22:07:00.7618335Z platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0 -- /opt/conda/bin/python 2023-01-11T22:07:00.7618692Z cachedir: .pytest_cache 2023-01-11T22:07:00.7619133Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2023-01-11T22:07:00.7619489Z rootdir: /var/lib/jenkins/workspace, configfile: pytest.ini 2023-01-11T22:07:00.7619918Z plugins: hypothesis-5.35.1, flakefinder-1.1.0, rerunfailures-10.3, shard-0.1.2, xdist-3.1.0, xdoctest-1.1.0 2023-01-11T22:07:00.7620211Z collecting ... 
collected 0 items 2023-01-11T22:07:00.7620402Z Running 0 items in this shard: 2023-01-11T22:07:00.7620523Z 2023-01-11T22:07:00.7620635Z =============================== warnings summary =============================== 2023-01-11T22:07:00.7620986Z ../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171 2023-01-11T22:07:00.7621512Z /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171: PytestAssertRewriteWarning: Module already imported so cannot be rewritten: hypothesis 2023-01-11T22:07:00.7621855Z self._mark_plugins_for_rewrite(hook) 2023-01-11T22:07:00.7621985Z 2023-01-11T22:07:00.7622222Z -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html 2023-01-11T22:07:00.7622910Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/test_ops_jit/test_ops_jit-fe16d0ecf9be8ab0.xml - 2023-01-11T22:07:00.7623261Z ============================== 1 warning in 0.01s ============================== 2023-01-11T22:07:00.7623555Z If in CI, skip info is located in the xml test reports, please either go to s3 or the hud to download them 2023-01-11T22:07:00.7623748Z 2023-01-11T22:07:00.7624003Z ##[endgroup] 2023-01-11T22:07:00.7624379Z FINISHED PRINTING LOG FILE of test_ops_jit (/var/lib/jenkins/workspace/test/test-reports/test_ops_jit_a0yd3h9f) 2023-01-11T22:07:00.7624575Z 2023-01-11T22:07:00.7843417Z 2023-01-11T22:07:00.7843811Z Expand the folded group to see the log file of test_ops_jit 2023-01-11T22:07:00.7844791Z ##[group]PRINTING LOG FILE of test_ops_jit (/var/lib/jenkins/workspace/test/test-reports/test_ops_jit_huxx9xsf) 2023-01-11T22:07:00.7845508Z Test results will be stored in test-reports/python-pytest/test_ops_jit/test_ops_jit-f889cd96ee3679f1.xml 2023-01-11T22:07:00.7846026Z ============================= test session starts ============================== 2023-01-11T22:07:00.7846383Z platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0 -- /opt/conda/bin/python 2023-01-11T22:07:00.7846639Z cachedir: .pytest_cache 2023-01-11T22:07:00.7847056Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2023-01-11T22:07:00.7847462Z rootdir: /var/lib/jenkins/workspace, configfile: pytest.ini 2023-01-11T22:07:00.7847938Z plugins: hypothesis-5.35.1, flakefinder-1.1.0, rerunfailures-10.3, shard-0.1.2, xdist-3.1.0, xdoctest-1.1.0 2023-01-11T22:07:00.7848355Z collecting ... 
collected 0 items 2023-01-11T22:07:00.7848651Z Running 0 items in this shard: 2023-01-11T22:07:00.7848805Z 2023-01-11T22:07:00.7848957Z =============================== warnings summary =============================== 2023-01-11T22:07:00.7849788Z ../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171 2023-01-11T22:07:00.7851124Z /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171: PytestAssertRewriteWarning: Module already imported so cannot be rewritten: hypothesis 2023-01-11T22:07:00.7851971Z self._mark_plugins_for_rewrite(hook) 2023-01-11T22:07:00.7852293Z 2023-01-11T22:07:00.7852825Z -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html 2023-01-11T22:07:00.7854037Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/test_ops_jit/test_ops_jit-f889cd96ee3679f1.xml - 2023-01-11T22:07:00.7854856Z ============================== 1 warning in 0.01s ============================== 2023-01-11T22:07:00.7855533Z If in CI, skip info is located in the xml test reports, please either go to s3 or the hud to download them 2023-01-11T22:07:00.7856001Z 2023-01-11T22:07:00.7856554Z ##[endgroup] 2023-01-11T22:07:00.7857467Z FINISHED PRINTING LOG FILE of test_ops_jit (/var/lib/jenkins/workspace/test/test-reports/test_ops_jit_huxx9xsf) 2023-01-11T22:07:00.7857978Z 2023-01-11T22:07:01.1353301Z Executing ['/opt/conda/bin/python', '-bb', 'test_ops_jit.py', '-v', '--use-pytest', '-vv', '-rfEX', '-x', '--reruns=2', '-k=_linalg_cholesky_', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:07:01.134932] 2023-01-11T22:07:03.9056376Z 2023-01-11T22:07:03.9057061Z Expand the folded group to see the log file of test_ops_jit 2023-01-11T22:07:03.9058091Z ##[group]PRINTING LOG FILE of test_ops_jit (/var/lib/jenkins/workspace/test/test-reports/test_ops_jit_82kkahjm) 2023-01-11T22:07:03.9058691Z Test results will be stored in test-reports/python-pytest/test_ops_jit/test_ops_jit-0627f0d222682e84.xml 2023-01-11T22:07:03.9059011Z ============================= test session starts ============================== 2023-01-11T22:07:03.9059381Z platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0 -- /opt/conda/bin/python 2023-01-11T22:07:03.9059627Z cachedir: .pytest_cache 2023-01-11T22:07:03.9060067Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2023-01-11T22:07:03.9060422Z rootdir: /var/lib/jenkins/workspace, configfile: pytest.ini 2023-01-11T22:07:03.9060847Z plugins: hypothesis-5.35.1, flakefinder-1.1.0, rerunfailures-10.3, shard-0.1.2, xdist-3.1.0, xdoctest-1.1.0 2023-01-11T22:07:03.9061133Z collecting ... 
collected 0 items 2023-01-11T22:07:03.9061338Z Running 0 items in this shard: 2023-01-11T22:07:03.9061505Z 2023-01-11T22:07:03.9061653Z =============================== warnings summary =============================== 2023-01-11T22:07:03.9061997Z ../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171 2023-01-11T22:07:03.9062679Z /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1171: PytestAssertRewriteWarning: Module already imported so cannot be rewritten: hypothesis 2023-01-11T22:07:03.9063043Z self._mark_plugins_for_rewrite(hook) 2023-01-11T22:07:03.9063380Z 2023-01-11T22:07:03.9063664Z -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html 2023-01-11T22:07:03.9064182Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/test_ops_jit/test_ops_jit-0627f0d222682e84.xml - 2023-01-11T22:07:03.9064523Z ============================== 1 warning in 0.01s ============================== 2023-01-11T22:07:03.9064829Z If in CI, skip info is located in the xml test reports, please either go to s3 or the hud to download them 2023-01-11T22:07:03.9065024Z 2023-01-11T22:07:03.9065250Z ##[endgroup] 2023-01-11T22:07:03.9065618Z FINISHED PRINTING LOG FILE of test_ops_jit (/var/lib/jenkins/workspace/test/test-reports/test_ops_jit_82kkahjm) 2023-01-11T22:07:03.9065850Z 2023-01-11T22:07:03.9066097Z Running test_prims ... [2023-01-11 22:07:03.906066] 2023-01-11T22:07:03.9066604Z Executing ['/opt/conda/bin/python', '-bb', 'test_prims.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:07:03.906301] 2023-01-11T22:07:06.4744881Z 2023-01-11T22:07:06.4745500Z Expand the folded group to see the log file of test_prims 2023-01-11T22:07:06.4746278Z ##[group]PRINTING LOG FILE of test_prims (/var/lib/jenkins/workspace/test/test-reports/test_prims_v_ku9jcy) 2023-01-11T22:07:06.4746562Z 2023-01-11T22:07:06.4746641Z Running tests... 2023-01-11T22:07:06.4747047Z ---------------------------------------------------------------------- 2023-01-11T22:07:06.4747460Z Test results will be stored in test-reports/python-unittest/test_prims 2023-01-11T22:07:06.4747755Z test_mul_complex (__main__.TestPrimsBasic) ... ok (0.002s) 2023-01-11T22:07:06.4748032Z test_torch_ops (__main__.TestPrimsBasic) ... ok (0.002s) 2023-01-11T22:07:06.4748225Z 2023-01-11T22:07:06.4748432Z ---------------------------------------------------------------------- 2023-01-11T22:07:06.4748666Z Ran 2 tests in 0.004s 2023-01-11T22:07:06.4748783Z 2023-01-11T22:07:06.4748874Z OK 2023-01-11T22:07:06.4749005Z 2023-01-11T22:07:06.4749123Z Generating XML reports... 2023-01-11T22:07:06.4749549Z Generated XML report: test-reports/python-unittest/test_prims/TEST-TestPrimsBasic-20230111220706.xml 2023-01-11T22:07:06.4749779Z 2023-01-11T22:07:06.4750013Z ##[endgroup] 2023-01-11T22:07:06.4750385Z FINISHED PRINTING LOG FILE of test_prims (/var/lib/jenkins/workspace/test/test-reports/test_prims_v_ku9jcy) 2023-01-11T22:07:06.4750591Z 2023-01-11T22:07:06.4750752Z Running test_reductions ... [2023-01-11 22:07:06.474567] 2023-01-11T22:07:06.4751228Z Executing ['/opt/conda/bin/python', '-bb', 'test_reductions.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... 
[2023-01-11 22:07:06.474776] 2023-01-11T22:07:09.0237879Z 2023-01-11T22:07:09.0238421Z Expand the folded group to see the log file of test_reductions 2023-01-11T22:07:09.0239429Z ##[group]PRINTING LOG FILE of test_reductions (/var/lib/jenkins/workspace/test/test-reports/test_reductions_s4gn_tll) 2023-01-11T22:07:09.0239804Z 2023-01-11T22:07:09.0239898Z Running tests... 2023-01-11T22:07:09.0240402Z ---------------------------------------------------------------------- 2023-01-11T22:07:09.0240635Z 2023-01-11T22:07:09.0240867Z ---------------------------------------------------------------------- 2023-01-11T22:07:09.0241114Z Ran 0 tests in 0.000s 2023-01-11T22:07:09.0241230Z 2023-01-11T22:07:09.0241293Z OK 2023-01-11T22:07:09.0241385Z 2023-01-11T22:07:09.0241517Z Generating XML reports... 2023-01-11T22:07:09.0241870Z Test results will be stored in test-reports/python-unittest/test_reductions 2023-01-11T22:07:09.0242148Z 2023-01-11T22:07:09.0242454Z ##[endgroup] 2023-01-11T22:07:09.0242900Z FINISHED PRINTING LOG FILE of test_reductions (/var/lib/jenkins/workspace/test/test-reports/test_reductions_s4gn_tll) 2023-01-11T22:07:09.0243174Z 2023-01-11T22:07:09.0243352Z Running test_show_pickle ... [2023-01-11 22:07:09.023845] 2023-01-11T22:07:09.0243873Z Executing ['/opt/conda/bin/python', '-bb', 'test_show_pickle.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:07:09.024140] 2023-01-11T22:07:10.8703101Z 2023-01-11T22:07:10.8703801Z Expand the folded group to see the log file of test_show_pickle 2023-01-11T22:07:10.8704785Z ##[group]PRINTING LOG FILE of test_show_pickle (/var/lib/jenkins/workspace/test/test-reports/test_show_pickle_11w5apkv) 2023-01-11T22:07:10.8705197Z 2023-01-11T22:07:10.8705275Z Running tests... 2023-01-11T22:07:10.8705741Z ---------------------------------------------------------------------- 2023-01-11T22:07:10.8706204Z Test results will be stored in test-reports/python-unittest/test_show_pickle 2023-01-11T22:07:10.8706501Z test_scripted_model (__main__.TestShowPickle) ... ok (0.227s) 2023-01-11T22:07:10.8706660Z 2023-01-11T22:07:10.8706874Z ---------------------------------------------------------------------- 2023-01-11T22:07:10.8707151Z Ran 1 test in 0.227s 2023-01-11T22:07:10.8707262Z 2023-01-11T22:07:10.8707325Z OK 2023-01-11T22:07:10.8707402Z 2023-01-11T22:07:10.8707522Z Generating XML reports... 2023-01-11T22:07:10.8708246Z Generated XML report: test-reports/python-unittest/test_show_pickle/TEST-TestShowPickle-20230111220710.xml 2023-01-11T22:07:10.8708529Z 2023-01-11T22:07:10.8708771Z ##[endgroup] 2023-01-11T22:07:10.8709146Z FINISHED PRINTING LOG FILE of test_show_pickle (/var/lib/jenkins/workspace/test/test-reports/test_show_pickle_11w5apkv) 2023-01-11T22:07:10.8709415Z 2023-01-11T22:07:10.8709596Z Running test_spectral_ops ... [2023-01-11 22:07:10.870405] 2023-01-11T22:07:10.8710131Z Executing ['/opt/conda/bin/python', '-bb', 'test_spectral_ops.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:07:10.870643] 2023-01-11T22:07:14.2568019Z 2023-01-11T22:07:14.2568616Z Expand the folded group to see the log file of test_spectral_ops 2023-01-11T22:07:14.2569719Z ##[group]PRINTING LOG FILE of test_spectral_ops (/var/lib/jenkins/workspace/test/test-reports/test_spectral_ops_glwj31_w) 2023-01-11T22:07:14.2570584Z /var/lib/jenkins/workspace/test/test_spectral_ops.py:44: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead. 
2023-01-11T22:07:14.2571312Z if LooseVersion(np.__version__) >= '1.20.0' and ( 2023-01-11T22:07:14.2571680Z /var/lib/jenkins/workspace/test/test_spectral_ops.py:45: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead. 2023-01-11T22:07:14.2572217Z not has_scipy_fft or LooseVersion(scipy.__version__) >= '1.6.0') 2023-01-11T22:07:14.2572364Z 2023-01-11T22:07:14.2572436Z Running tests... 2023-01-11T22:07:14.2572745Z ---------------------------------------------------------------------- 2023-01-11T22:07:14.2572918Z 2023-01-11T22:07:14.2573112Z ---------------------------------------------------------------------- 2023-01-11T22:07:14.2573336Z Ran 0 tests in 0.000s 2023-01-11T22:07:14.2573448Z 2023-01-11T22:07:14.2573508Z OK 2023-01-11T22:07:14.2573600Z 2023-01-11T22:07:14.2573684Z Generating XML reports... 2023-01-11T22:07:14.2574018Z Test results will be stored in test-reports/python-unittest/test_spectral_ops 2023-01-11T22:07:14.2574193Z 2023-01-11T22:07:14.2574419Z ##[endgroup] 2023-01-11T22:07:14.2574811Z FINISHED PRINTING LOG FILE of test_spectral_ops (/var/lib/jenkins/workspace/test/test-reports/test_spectral_ops_glwj31_w) 2023-01-11T22:07:14.2575034Z 2023-01-11T22:07:14.2575220Z Running test_tensor_creation_ops ... [2023-01-11 22:07:14.256836] 2023-01-11T22:07:14.2575699Z Executing ['/opt/conda/bin/python', '-bb', 'test_tensor_creation_ops.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:07:14.257128] 2023-01-11T22:07:16.1915977Z 2023-01-11T22:07:16.1916636Z Expand the folded group to see the log file of test_tensor_creation_ops 2023-01-11T22:07:16.1917531Z ##[group]PRINTING LOG FILE of test_tensor_creation_ops (/var/lib/jenkins/workspace/test/test-reports/test_tensor_creation_ops_3zjs2c7_) 2023-01-11T22:07:16.1917786Z 2023-01-11T22:07:16.1917860Z Running tests... 2023-01-11T22:07:16.1918333Z ---------------------------------------------------------------------- 2023-01-11T22:07:16.1918716Z 2023-01-11T22:07:16.1918919Z ---------------------------------------------------------------------- 2023-01-11T22:07:16.1919142Z Ran 0 tests in 0.000s 2023-01-11T22:07:16.1919257Z 2023-01-11T22:07:16.1919318Z OK 2023-01-11T22:07:16.1919468Z 2023-01-11T22:07:16.1919582Z Generating XML reports... 2023-01-11T22:07:16.1919966Z Test results will be stored in test-reports/python-unittest/test_tensor_creation_ops 2023-01-11T22:07:16.1920161Z 2023-01-11T22:07:16.1920444Z ##[endgroup] 2023-01-11T22:07:16.1920865Z FINISHED PRINTING LOG FILE of test_tensor_creation_ops (/var/lib/jenkins/workspace/test/test-reports/test_tensor_creation_ops_3zjs2c7_) 2023-01-11T22:07:16.1921097Z 2023-01-11T22:07:16.1921255Z Running test_tensorexpr ... [2023-01-11 22:07:16.191635] 2023-01-11T22:07:16.1921736Z Executing ['/opt/conda/bin/python', '-bb', 'test_tensorexpr.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2023-01-11 22:07:16.191865] 2023-01-11T22:08:02.2324020Z 2023-01-11T22:08:02.2324649Z Expand the folded group to see the log file of test_tensorexpr 2023-01-11T22:08:02.2325445Z ##[group]PRINTING LOG FILE of test_tensorexpr (/var/lib/jenkins/workspace/test/test-reports/test_tensorexpr_48q3w91m) 2023-01-11T22:08:02.2327274Z 2023-01-11T22:08:02.2327672Z Running tests... 
2023-01-11T22:08:02.2328320Z ---------------------------------------------------------------------- 2023-01-11T22:08:02.2328986Z Test results will be stored in test-reports/python-unittest/test_tensorexpr 2023-01-11T22:08:02.2329309Z test_add_const_rhs (__main__.TestTensorExprFuser) ... ok (0.263s) 2023-01-11T22:08:02.2329629Z test_add_sub (__main__.TestTensorExprFuser) ... ok (0.041s) 2023-01-11T22:08:02.2330082Z test_alias_analysis_input_and_module (__main__.TestTensorExprFuser) ... ok (0.046s) 2023-01-11T22:08:02.2330398Z test_alias_analysis_inputs (__main__.TestTensorExprFuser) ... ok (0.010s) 2023-01-11T22:08:02.2330710Z test_alias_analysis_module (__main__.TestTensorExprFuser) ... ok (0.128s) 2023-01-11T22:08:02.2331008Z test_all_combos (__main__.TestTensorExprFuser) ... ok (0.039s) 2023-01-11T22:08:02.2331356Z test_alpha (__main__.TestTensorExprFuser) ... ok (0.026s) 2023-01-11T22:08:02.2331718Z test_binary_ops (__main__.TestTensorExprFuser) ... ok (1.458s) 2023-01-11T22:08:02.2332145Z test_bitwise_ops (__main__.TestTensorExprFuser) ... ok (0.164s) 2023-01-11T22:08:02.2332661Z test_broadcast (__main__.TestTensorExprFuser) ... ok (0.039s) 2023-01-11T22:08:02.2333088Z test_broadcast3 (__main__.TestTensorExprFuser) ... ok (0.146s) 2023-01-11T22:08:02.2333425Z test_broadcast_2 (__main__.TestTensorExprFuser) ... ok (0.033s) 2023-01-11T22:08:02.2333812Z test_broadcast_big2 (__main__.TestTensorExprFuser) ... ok (0.047s) 2023-01-11T22:08:02.2334295Z test_cat (__main__.TestTensorExprFuser) ... ok (3.256s) 2023-01-11T22:08:02.2334800Z test_cat_empty_tensors (__main__.TestTensorExprFuser) ... ok (0.115s) 2023-01-11T22:08:02.2335251Z test_cat_negative_dim (__main__.TestTensorExprFuser) ... ok (0.069s) 2023-01-11T22:08:02.2335943Z test_cat_only (__main__.TestTensorExprFuser) ... skip: cat is broken with fusion group inlining disabled (0.001s) 2023-01-11T22:08:02.2336302Z test_cat_promote_inputs (__main__.TestTensorExprFuser) ... ok (0.131s) 2023-01-11T22:08:02.2336795Z test_cat_with_constant_dim (__main__.TestTensorExprFuser) ... ok (0.047s) 2023-01-11T22:08:02.2337279Z test_char (__main__.TestTensorExprFuser) ... ok (0.029s) 2023-01-11T22:08:02.2337766Z test_chunk (__main__.TestTensorExprFuser) ... ok (0.213s) 2023-01-11T22:08:02.2338050Z test_clamp (__main__.TestTensorExprFuser) ... ok (0.038s) 2023-01-11T22:08:02.2338304Z test_constant (__main__.TestTensorExprFuser) ... ok (0.027s) 2023-01-11T22:08:02.2338576Z test_double (__main__.TestTensorExprFuser) ... ok (0.026s) 2023-01-11T22:08:02.2338860Z test_double_intrinsics (__main__.TestTensorExprFuser) ... ok (0.033s) 2023-01-11T22:08:02.2339199Z test_dynamic_shape (__main__.TestTensorExprFuser) ... skip: dynamic shapes are not quite there yet (0.001s) 2023-01-11T22:08:02.2339661Z test_easy (__main__.TestTensorExprFuser) ... ok (0.038s) 2023-01-11T22:08:02.2339920Z test_eq (__main__.TestTensorExprFuser) ... ok (0.064s) 2023-01-11T22:08:02.2340185Z test_exp_pow (__main__.TestTensorExprFuser) ... ok (0.060s) 2023-01-11T22:08:02.2340443Z test_four_arg (__main__.TestTensorExprFuser) ... ok (0.039s) 2023-01-11T22:08:02.2340719Z test_ge (__main__.TestTensorExprFuser) ... ok (0.064s) 2023-01-11T22:08:02.2340974Z test_gt (__main__.TestTensorExprFuser) ... ok (0.063s) 2023-01-11T22:08:02.2341266Z test_guard_fails (__main__.TestTensorExprFuser) ... skip: requires CUDA (0.001s) 2023-01-11T22:08:02.2341552Z test_half_bn_relu (__main__.TestTensorExprFuser) ... ok (0.001s) 2023-01-11T22:08:02.2341834Z test_half_gelu (__main__.TestTensorExprFuser) ... 
ok (0.003s) 2023-01-11T22:08:02.2342115Z test_int64_promotion (__main__.TestTensorExprFuser) ... ok (0.027s) 2023-01-11T22:08:02.2342616Z test_int_output (__main__.TestTensorExprFuser) ... ok (0.027s) 2023-01-11T22:08:02.2342977Z test_le (__main__.TestTensorExprFuser) ... ok (0.063s) 2023-01-11T22:08:02.2343241Z test_loop (__main__.TestTensorExprFuser) ... ok (0.075s) 2023-01-11T22:08:02.2343489Z test_lt (__main__.TestTensorExprFuser) ... ok (0.063s) 2023-01-11T22:08:02.2343749Z test_mask (__main__.TestTensorExprFuser) ... ok (0.026s) 2023-01-11T22:08:02.2344012Z test_min_max (__main__.TestTensorExprFuser) ... ok (0.039s) 2023-01-11T22:08:02.2344292Z test_min_max_reduction (__main__.TestTensorExprFuser) ... ok (0.026s) 2023-01-11T22:08:02.2344577Z test_min_max_reduction2 (__main__.TestTensorExprFuser) ... ok (0.026s) 2023-01-11T22:08:02.2344876Z test_min_max_reduction_dim1 (__main__.TestTensorExprFuser) ... ok (0.029s) 2023-01-11T22:08:02.2345180Z test_min_max_reduction_dim1_2 (__main__.TestTensorExprFuser) ... ok (0.041s) 2023-01-11T22:08:02.2345459Z test_multi_rand (__main__.TestTensorExprFuser) ... ok (0.052s) 2023-01-11T22:08:02.2345739Z test_multioutput (__main__.TestTensorExprFuser) ... ok (0.036s) 2023-01-11T22:08:02.2346035Z test_multiple_outputs (__main__.TestTensorExprFuser) ... ok (0.179s) 2023-01-11T22:08:02.2346316Z test_nans (__main__.TestTensorExprFuser) ... ok (0.101s) 2023-01-11T22:08:02.2346562Z test_ne (__main__.TestTensorExprFuser) ... ok (0.064s) 2023-01-11T22:08:02.2346828Z test_promotion (__main__.TestTensorExprFuser) ... ok (0.033s) 2023-01-11T22:08:02.2383326Z test_propagated_mem_layout (__main__.TestTensorExprFuser) ... ok (30.894s) 2023-01-11T22:08:02.2383790Z test_rand_like (__main__.TestTensorExprFuser) ... ok (0.066s) 2023-01-11T22:08:02.2384277Z test_rank_two (__main__.TestTensorExprFuser) ... ok (0.044s) 2023-01-11T22:08:02.2384754Z test_relu (__main__.TestTensorExprFuser) ... ok (0.037s) 2023-01-11T22:08:02.2385232Z test_remainder (__main__.TestTensorExprFuser) ... ok (0.261s) 2023-01-11T22:08:02.2385722Z test_reps (__main__.TestTensorExprFuser) ... ok (0.043s) 2023-01-11T22:08:02.2386201Z test_scalar (__main__.TestTensorExprFuser) ... ok (0.119s) 2023-01-11T22:08:02.2386696Z test_short (__main__.TestTensorExprFuser) ... ok (0.028s) 2023-01-11T22:08:02.2387134Z test_simple_add (__main__.TestTensorExprFuser) ... ok (0.017s) 2023-01-11T22:08:02.2387408Z test_sin_pow (__main__.TestTensorExprFuser) ... ok (0.313s) 2023-01-11T22:08:02.2387677Z test_slice (__main__.TestTensorExprFuser) ... ok (0.058s) 2023-01-11T22:08:02.2387939Z test_sliced_stride (__main__.TestTensorExprFuser) ... ok (0.108s) 2023-01-11T22:08:02.2388220Z test_softmax_cpu (__main__.TestTensorExprFuser) ... ok (0.657s) 2023-01-11T22:08:02.2388522Z test_softmax_cuda (__main__.TestTensorExprFuser) ... skip: requires CUDA (0.000s) 2023-01-11T22:08:02.2388832Z test_strided_output_preserved (__main__.TestTensorExprFuser) ... ok (0.112s) 2023-01-11T22:08:02.2389127Z test_three_arg (__main__.TestTensorExprFuser) ... ok (0.037s) 2023-01-11T22:08:02.2389398Z test_three_arg2 (__main__.TestTensorExprFuser) ... ok (0.040s) 2023-01-11T22:08:02.2389674Z test_transpose (__main__.TestTensorExprFuser) ... ok (0.162s) 2023-01-11T22:08:02.2390059Z test_unary_ops (__main__.TestTensorExprFuser) ... ok (3.221s) 2023-01-11T22:08:02.2390427Z test_unsqueeze (__main__.TestTensorExprFuser) ... ok (0.061s) 2023-01-11T22:08:02.2390696Z test_where (__main__.TestTensorExprFuser) ... 
ok (0.162s) 2023-01-11T22:08:02.2390845Z 2023-01-11T22:08:02.2391096Z ---------------------------------------------------------------------- 2023-01-11T22:08:02.2391339Z Ran 73 tests in 44.111s 2023-01-11T22:08:02.2391453Z 2023-01-11T22:08:02.2391523Z OK (skipped=4) 2023-01-11T22:08:02.2391629Z 2023-01-11T22:08:02.2391714Z Generating XML reports... 2023-01-11T22:08:02.2392137Z Generated XML report: test-reports/python-unittest/test_tensorexpr/TEST-TestTensorExprFuser-20230111220717.xml 2023-01-11T22:08:02.2392380Z 2023-01-11T22:08:02.2392740Z ##[endgroup] 2023-01-11T22:08:02.2393133Z FINISHED PRINTING LOG FILE of test_tensorexpr (/var/lib/jenkins/workspace/test/test-reports/test_tensorexpr_48q3w91m) 2023-01-11T22:08:02.2393423Z 2023-01-11T22:08:02.2393568Z Running doctests ... [2023-01-11 22:08:02.232620] 2023-01-11T22:08:02.2652999Z Start doctest_module('/opt/conda/lib/python3.10/site-packages/torch') 2023-01-11T22:08:02.2653397Z Listing tests 2023-01-11T22:08:08.3517125Z gathering tests 2023-01-11T22:08:08.3531665Z running 663 test(s) 2023-01-11T22:08:08.3572897Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/__init__.py::is_tensor:0, line 429 <- wrt source file 2023-01-11T22:08:08.3579990Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/__init__.py::is_tensor:0 2023-01-11T22:08:08.3580540Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/__init__.py::set_default_tensor_type:0, line 458 <- wrt source file 2023-01-11T22:08:08.3582563Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/__init__.py::set_default_tensor_type:0 2023-01-11T22:08:08.3583114Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/__init__.py::set_default_dtype:0, line 496 <- wrt source file 2023-01-11T22:08:08.3585722Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/__init__.py::set_default_dtype:0 2023-01-11T22:08:08.3586251Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/__init__.py::use_deterministic_algorithms:0, line 629 <- wrt source file 2023-01-11T22:08:08.3588615Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/__init__.py::use_deterministic_algorithms:0 2023-01-11T22:08:08.3589589Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/__init__.py::compile:0, line 1221 <- wrt source file 2023-01-11T22:08:08.3590406Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/__init__.py::compile:0 2023-01-11T22:08:08.3591381Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_C.cpython-310-x86_64-linux-gnu.so::Generator:0, line 15 <- wrt source file 2023-01-11T22:08:08.3592401Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/_C.cpython-310-x86_64-linux-gnu.so::Generator:0 2023-01-11T22:08:08.3593466Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_C.cpython-310-x86_64-linux-gnu.so::_LinAlgError:0, line 5 <- wrt source file 2023-01-11T22:08:08.3594529Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/_C.cpython-310-x86_64-linux-gnu.so::_LinAlgError:0 2023-01-11T22:08:08.3595520Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_namedtensor_internals.py::update_names:0, line 125 <- wrt source file 2023-01-11T22:08:08.3596520Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/_namedtensor_internals.py::update_names:0 2023-01-11T22:08:08.3597514Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_tensor.py::Tensor.register_hook:0, line 508 <- wrt source file 2023-01-11T22:08:08.3606763Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/_tensor.py::Tensor.register_hook:0 
2023-01-11T22:08:08.3608159Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_tensor.py::Tensor.refine_names:0, line 1096 <- wrt source file 2023-01-11T22:08:08.3865661Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/_tensor.py::Tensor.refine_names:0 2023-01-11T22:08:08.3869997Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_tensor.py::Tensor.align_to:0, line 1141 <- wrt source file 2023-01-11T22:08:08.3876512Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/_tensor.py::Tensor.align_to:0 2023-01-11T22:08:08.3877456Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_tensor.py::Tensor.rename:0, line 1214 <- wrt source file 2023-01-11T22:08:08.3886101Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/_tensor.py::Tensor.rename:0 2023-01-11T22:08:08.3887272Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_tensor.py::Tensor.to_sparse_coo:0, line 1244 <- wrt source file 2023-01-11T22:08:08.3895480Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/_tensor.py::Tensor.to_sparse_coo:0 2023-01-11T22:08:08.3896415Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_tensor_str.py::set_printoptions:0, line 49 <- wrt source file 2023-01-11T22:08:08.3923085Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/_tensor_str.py::set_printoptions:0 2023-01-11T22:08:08.3924079Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/functional.py::broadcast_tensors:0, line 61 <- wrt source file 2023-01-11T22:08:08.3931580Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/functional.py::broadcast_tensors:0 2023-01-11T22:08:08.3932163Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/functional.py::broadcast_shapes:0, line 89 <- wrt source file 2023-01-11T22:08:08.3935331Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/functional.py::broadcast_shapes:0 2023-01-11T22:08:08.3935851Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/functional.py::split:0, line 161 <- wrt source file 2023-01-11T22:08:08.3948812Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/functional.py::split:0 2023-01-11T22:08:08.3949423Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/functional.py::einsum:0, line 269 <- wrt source file 2023-01-11T22:08:08.4158156Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/functional.py::einsum:0 2023-01-11T22:08:08.4158860Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/functional.py::meshgrid:0, line 450 <- wrt source file 2023-01-11T22:08:08.4194643Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/functional.py::meshgrid:0 2023-01-11T22:08:08.4195208Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/functional.py::_unique_impl:0, line 764 <- wrt source file 2023-01-11T22:08:08.4209422Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/functional.py::_unique_impl:0 2023-01-11T22:08:08.4210046Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/functional.py::_unique_consecutive_impl:0, line 842 <- wrt source file 2023-01-11T22:08:08.4222755Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/functional.py::_unique_consecutive_impl:0 2023-01-11T22:08:08.4223286Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/functional.py::tensordot:0, line 1040 <- wrt source file 2023-01-11T22:08:08.4236437Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/functional.py::tensordot:0 2023-01-11T22:08:08.4236979Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/functional.py::cartesian_prod:0, 
line 1118 <- wrt source file 2023-01-11T22:08:08.4244972Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/functional.py::cartesian_prod:0 2023-01-11T22:08:08.4245504Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/functional.py::block_diag:0, line 1152 <- wrt source file 2023-01-11T22:08:08.4256192Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/functional.py::block_diag:0 2023-01-11T22:08:08.4256719Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/functional.py::cdist:0, line 1203 <- wrt source file 2023-01-11T22:08:08.4272333Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/functional.py::cdist:0 2023-01-11T22:08:08.4272850Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/functional.py::atleast_1d:0, line 1243 <- wrt source file 2023-01-11T22:08:08.4290152Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/functional.py::atleast_1d:0 2023-01-11T22:08:08.4290794Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/functional.py::atleast_2d:0, line 1279 <- wrt source file 2023-01-11T22:08:08.4309292Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/functional.py::atleast_2d:0 2023-01-11T22:08:08.4309808Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/functional.py::atleast_3d:0, line 1317 <- wrt source file 2023-01-11T22:08:08.4333232Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/functional.py::atleast_3d:0 2023-01-11T22:08:08.4333725Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/functional.py::norm:0, line 1455 <- wrt source file 2023-01-11T22:08:08.4371660Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/functional.py::norm:0 2023-01-11T22:08:08.4372258Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/functional.py::chain_matmul:0, line 1606 <- wrt source file 2023-01-11T22:08:08.4373335Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/functional.py::chain_matmul:0 2023-01-11T22:08:08.4373922Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/functional.py::_lu_impl:0, line 1706 <- wrt source file 2023-01-11T22:08:08.4376188Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/functional.py::_lu_impl:0 2023-01-11T22:08:08.4377099Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/hub.py::list:0, line 391 <- wrt source file 2023-01-11T22:08:08.4377933Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/hub.py::list:0 2023-01-11T22:08:08.4378667Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/hub.py::help:0, line 444 <- wrt source file 2023-01-11T22:08:08.4379316Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/hub.py::help:0 2023-01-11T22:08:08.4379850Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/hub.py::load:0, line 524 <- wrt source file 2023-01-11T22:08:08.4380504Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/hub.py::load:0 2023-01-11T22:08:08.4381212Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/hub.py::_load_local:0, line 563 <- wrt source file 2023-01-11T22:08:08.4381914Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/hub.py::_load_local:0 2023-01-11T22:08:08.4382768Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/hub.py::download_url_to_file:0, line 592 <- wrt source file 2023-01-11T22:08:08.4383287Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/hub.py::download_url_to_file:0 2023-01-11T22:08:08.4383793Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/hub.py::load_state_dict_from_url:0, line 701 <- wrt source file 
2023-01-11T22:08:08.4384298Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/hub.py::load_state_dict_from_url:0 2023-01-11T22:08:08.4384950Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/library.py::Library.define:0, line 61 <- wrt source file 2023-01-11T22:08:08.4385448Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/library.py::Library.define:0 2023-01-11T22:08:08.4385950Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/library.py::Library.impl:0, line 81 <- wrt source file 2023-01-11T22:08:08.4386444Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/library.py::Library.impl:0 2023-01-11T22:08:08.4387010Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/overrides.py::get_ignored_functions:0, line 67 <- wrt source file 2023-01-11T22:08:08.4387779Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/overrides.py::get_ignored_functions:0 2023-01-11T22:08:08.4388301Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/overrides.py::get_testing_overrides:0, line 336 <- wrt source file 2023-01-11T22:08:08.4426687Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/overrides.py::get_testing_overrides:0 2023-01-11T22:08:08.4427237Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/overrides.py::wrap_torch_function:0, line 1391 <- wrt source file 2023-01-11T22:08:08.4430805Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/overrides.py::wrap_torch_function:0 2023-01-11T22:08:08.4431369Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/overrides.py::handle_torch_function:0, line 1508 <- wrt source file 2023-01-11T22:08:08.4434469Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/overrides.py::handle_torch_function:0 2023-01-11T22:08:08.4435093Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/overrides.py::is_tensor_method_or_property:0, line 1732 <- wrt source file 2023-01-11T22:08:08.4477084Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/overrides.py::is_tensor_method_or_property:0 2023-01-11T22:08:08.4477639Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/overrides.py::is_tensor_like:0, line 1750 <- wrt source file 2023-01-11T22:08:08.4485648Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/overrides.py::is_tensor_like:0 2023-01-11T22:08:08.4486346Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/quasirandom.py::SobolEngine:0, line 37 <- wrt source file 2023-01-11T22:08:08.4487326Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/quasirandom.py::SobolEngine:0 2023-01-11T22:08:08.4488115Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/serialization.py::save:0, line 429 <- wrt source file 2023-01-11T22:08:08.4488610Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/serialization.py::save:0 2023-01-11T22:08:08.4489120Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/serialization.py::load:0, line 754 <- wrt source file 2023-01-11T22:08:08.4492185Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/serialization.py::load:0 2023-01-11T22:08:08.4492843Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/torch_version.py::TorchVersion:0, line 49 <- wrt source file 2023-01-11T22:08:08.4493437Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/torch_version.py::TorchVersion:0 2023-01-11T22:08:08.4494013Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_prims_common/__init__.py::compute_required_storage_length:0, line 1495 <- wrt source file 2023-01-11T22:08:08.4499601Z * SUCCESS: 
/opt/conda/lib/python3.10/site-packages/torch/_prims_common/__init__.py::compute_required_storage_length:0 2023-01-11T22:08:08.4500307Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/futures/__init__.py::Future.then:0, line 147 <- wrt source file 2023-01-11T22:08:08.4501062Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/futures/__init__.py::Future.then:0 2023-01-11T22:08:08.4501605Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/futures/__init__.py::Future.add_done_callback:0, line 195 <- wrt source file 2023-01-11T22:08:08.4504651Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/futures/__init__.py::Future.add_done_callback:0 2023-01-11T22:08:08.4505384Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/futures/__init__.py::Future.set_result:0, line 228 <- wrt source file 2023-01-11T22:08:08.4506297Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/futures/__init__.py::Future.set_result:0 2023-01-11T22:08:08.4507061Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/futures/__init__.py::Future.set_exception:0, line 257 <- wrt source file 2023-01-11T22:08:08.4507948Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/futures/__init__.py::Future.set_exception:0 2023-01-11T22:08:08.4508828Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/futures/__init__.py::collect_all:0, line 288 <- wrt source file 2023-01-11T22:08:08.4509751Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/futures/__init__.py::collect_all:0 2023-01-11T22:08:08.4510639Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/jit/__init__.py::annotate:0, line 103 <- wrt source file 2023-01-11T22:08:08.4511505Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/jit/__init__.py::annotate:0 2023-01-11T22:08:08.4512018Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/jit/__init__.py::strict_fusion:0, line 202 <- wrt source file 2023-01-11T22:08:08.4512505Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/jit/__init__.py::strict_fusion:0 2023-01-11T22:08:08.4513043Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/monitor/__init__.py::TensorboardEventHandler:0, line 21 <- wrt source file 2023-01-11T22:08:08.4528115Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/monitor/__init__.py::TensorboardEventHandler:0 2023-01-11T22:08:08.4528788Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nested/__init__.py::as_nested_tensor:0, line 39 <- wrt source file 2023-01-11T22:08:08.4543842Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nested/__init__.py::as_nested_tensor:0 2023-01-11T22:08:08.4544546Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/sparse/__init__.py::sum:0, line 175 <- wrt source file 2023-01-11T22:08:08.4583179Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/sparse/__init__.py::sum:0 2023-01-11T22:08:08.4583766Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py::aot_function:0, line 2139 <- wrt source file 2023-01-11T22:08:08.5828087Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py::aot_function:0 2023-01-11T22:08:08.5828720Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_functorch/benchmark_utils.py::benchmark_utilization:0, line 162 <- wrt source file 2023-01-11T22:08:08.5829295Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/_functorch/benchmark_utils.py::benchmark_utilization:0 2023-01-11T22:08:08.5829858Z * DOCTEST : 
/opt/conda/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py::vjp:0, line 195 <- wrt source file 2023-01-11T22:08:08.5909123Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py::vjp:0 2023-01-11T22:08:08.5909679Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py::jacrev:0, line 382 <- wrt source file 2023-01-11T22:08:08.6065278Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py::jacrev:0 2023-01-11T22:08:08.6065846Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py::jvp:0, line 882 <- wrt source file 2023-01-11T22:08:08.6736881Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py::jvp:0 2023-01-11T22:08:08.6737474Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py::jacfwd:0, line 1024 <- wrt source file 2023-01-11T22:08:08.6889873Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py::jacfwd:0 2023-01-11T22:08:08.6890452Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py::hessian:0, line 1173 <- wrt source file 2023-01-11T22:08:08.6936275Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py::hessian:0 2023-01-11T22:08:08.6936926Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py::grad:0, line 1290 <- wrt source file 2023-01-11T22:08:08.6937585Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py::grad:0 2023-01-11T22:08:08.6938141Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py::functionalize:0, line 1441 <- wrt source file 2023-01-11T22:08:08.6943008Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py::functionalize:0 2023-01-11T22:08:08.6944023Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_functorch/fx_minifier.py::minifier:0, line 72 <- wrt source file 2023-01-11T22:08:08.6944620Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/_functorch/fx_minifier.py::minifier:0 2023-01-11T22:08:08.6945142Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_functorch/vmap.py::vmap:0, line 306 <- wrt source file 2023-01-11T22:08:08.6986611Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/_functorch/vmap.py::vmap:0 2023-01-11T22:08:08.6987339Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_prims/context.py::NvfuserPrimsMode:0, line 90 <- wrt source file 2023-01-11T22:08:08.6988033Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/_prims/context.py::NvfuserPrimsMode:0 2023-01-11T22:08:08.6988747Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/_prims/context.py::TorchRefsMode:0, line 141 <- wrt source file 2023-01-11T22:08:08.6989417Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/_prims/context.py::TorchRefsMode:0 2023-01-11T22:08:08.6990179Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/intrinsic/qat/modules/linear_relu.py::LinearReLU:0, line 21 <- wrt source file 2023-01-11T22:08:08.6990941Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/intrinsic/qat/modules/linear_relu.py::LinearReLU:0 2023-01-11T22:08:08.6991760Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/intrinsic/quantized/dynamic/modules/linear_relu.py::LinearReLU:0, line 21 <- wrt source file 
2023-01-11T22:08:08.6992560Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/intrinsic/quantized/dynamic/modules/linear_relu.py::LinearReLU:0 2023-01-11T22:08:08.6993370Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/intrinsic/quantized/modules/linear_relu.py::LinearReLU:0, line 22 <- wrt source file 2023-01-11T22:08:08.6994142Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/intrinsic/quantized/modules/linear_relu.py::LinearReLU:0 2023-01-11T22:08:08.6994958Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/intrinsic/quantized/modules/linear_relu.py::LinearLeakyReLU:0, line 59 <- wrt source file 2023-01-11T22:08:08.6995922Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/intrinsic/quantized/modules/linear_relu.py::LinearLeakyReLU:0 2023-01-11T22:08:08.6996716Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/intrinsic/quantized/modules/linear_relu.py::LinearTanh:0, line 126 <- wrt source file 2023-01-11T22:08:08.6997494Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/intrinsic/quantized/modules/linear_relu.py::LinearTanh:0 2023-01-11T22:08:08.6998257Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantizable/modules/rnn.py::LSTMCell:0, line 24 <- wrt source file 2023-01-11T22:08:08.7018687Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantizable/modules/rnn.py::LSTMCell:0 2023-01-11T22:08:08.7020067Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantizable/modules/rnn.py::LSTM:0, line 274 <- wrt source file 2023-01-11T22:08:08.7053667Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantizable/modules/rnn.py::LSTM:0 2023-01-11T22:08:08.7056143Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/functional.py::conv1d:0, line 166 <- wrt source file 2023-01-11T22:08:08.7057049Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/functional.py::conv1d:0 2023-01-11T22:08:08.7057607Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/functional.py::conv2d:0, line 226 <- wrt source file 2023-01-11T22:08:08.7058684Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/functional.py::conv2d:0 2023-01-11T22:08:08.7059439Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/functional.py::conv3d:0, line 287 <- wrt source file 2023-01-11T22:08:08.7059977Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/functional.py::conv3d:0 2023-01-11T22:08:08.7060531Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/__init__.py::Quantize:0, line 74 <- wrt source file 2023-01-11T22:08:08.7065910Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/__init__.py::Quantize:0 2023-01-11T22:08:08.7066480Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/__init__.py::DeQuantize:0, line 114 <- wrt source file 2023-01-11T22:08:08.7072453Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/__init__.py::DeQuantize:0 2023-01-11T22:08:08.7073426Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::Conv1d:0, line 34 <- wrt source file 2023-01-11T22:08:08.7074077Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::Conv1d:0 2023-01-11T22:08:08.7074835Z * DOCTEST : 
/opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::Conv2d:0, line 105 <- wrt source file 2023-01-11T22:08:08.7075518Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::Conv2d:0 2023-01-11T22:08:08.7076271Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::Conv3d:0, line 170 <- wrt source file 2023-01-11T22:08:08.7076961Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::Conv3d:0 2023-01-11T22:08:08.7077716Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::ConvTranspose1d:0, line 236 <- wrt source file 2023-01-11T22:08:08.7078699Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::ConvTranspose1d:0 2023-01-11T22:08:08.7079699Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::ConvTranspose2d:0, line 297 <- wrt source file 2023-01-11T22:08:08.7080545Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::ConvTranspose2d:0 2023-01-11T22:08:08.7081525Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::ConvTranspose3d:0, line 358 <- wrt source file 2023-01-11T22:08:08.7082372Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::ConvTranspose3d:0 2023-01-11T22:08:08.7083426Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/linear.py::Linear:0, line 28 <- wrt source file 2023-01-11T22:08:08.7084170Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/linear.py::Linear:0 2023-01-11T22:08:08.7084732Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::LSTM:0, line 391 <- wrt source file 2023-01-11T22:08:08.7085284Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::LSTM:0 2023-01-11T22:08:08.7085853Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::GRU:0, line 618 <- wrt source file 2023-01-11T22:08:08.7086688Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::GRU:0 2023-01-11T22:08:08.7087487Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::RNNCell:0, line 940 <- wrt source file 2023-01-11T22:08:08.7088329Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::RNNCell:0 2023-01-11T22:08:08.7089193Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::LSTMCell:0, line 993 <- wrt source file 2023-01-11T22:08:08.7089952Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::LSTMCell:0 2023-01-11T22:08:08.7090737Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::GRUCell:0, line 1036 <- wrt source file 2023-01-11T22:08:08.7091462Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::GRUCell:0 2023-01-11T22:08:08.7092263Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/activation.py::ReLU6:0, line 31 <- wrt source file 2023-01-11T22:08:08.7092892Z * SUCCESS: 
/opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/activation.py::ReLU6:0 2023-01-11T22:08:08.7093603Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/conv.py::Conv1d:0, line 295 <- wrt source file 2023-01-11T22:08:08.7094259Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/conv.py::Conv1d:0 2023-01-11T22:08:08.7094811Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/conv.py::Conv2d:0, line 403 <- wrt source file 2023-01-11T22:08:08.7095342Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/conv.py::Conv2d:0 2023-01-11T22:08:08.7095889Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/conv.py::Conv3d:0, line 502 <- wrt source file 2023-01-11T22:08:08.7096409Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/conv.py::Conv3d:0 2023-01-11T22:08:08.7097064Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/conv.py::ConvTranspose1d:0, line 685 <- wrt source file 2023-01-11T22:08:08.7097629Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/conv.py::ConvTranspose1d:0 2023-01-11T22:08:08.7098221Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/conv.py::ConvTranspose2d:0, line 775 <- wrt source file 2023-01-11T22:08:08.7098769Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/conv.py::ConvTranspose2d:0 2023-01-11T22:08:08.7099350Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/conv.py::ConvTranspose3d:0, line 869 <- wrt source file 2023-01-11T22:08:08.7099940Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/conv.py::ConvTranspose3d:0 2023-01-11T22:08:08.7100765Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/embedding_ops.py::Embedding:0, line 84 <- wrt source file 2023-01-11T22:08:08.7119850Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/embedding_ops.py::Embedding:0 2023-01-11T22:08:08.7120769Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/embedding_ops.py::EmbeddingBag:0, line 209 <- wrt source file 2023-01-11T22:08:08.7144423Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/embedding_ops.py::EmbeddingBag:0 2023-01-11T22:08:08.7145380Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/functional_modules.py::FloatFunctional:0, line 21 <- wrt source file 2023-01-11T22:08:08.7150383Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/functional_modules.py::FloatFunctional:0 2023-01-11T22:08:08.7151196Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/functional_modules.py::QFunctional:0, line 141 <- wrt source file 2023-01-11T22:08:08.7154552Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/functional_modules.py::QFunctional:0 2023-01-11T22:08:08.7155538Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/linear.py::Linear:0, line 117 <- wrt source file 2023-01-11T22:08:08.7156279Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/linear.py::Linear:0 2023-01-11T22:08:08.7156991Z * DOCTEST : 
/opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/rnn.py::LSTM:0, line 20 <- wrt source file 2023-01-11T22:08:08.7157776Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/rnn.py::LSTM:0 2023-01-11T22:08:08.7158669Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/pruning/_experimental/activation_sparsifier/activation_sparsifier.py::ActivationSparsifier:0, line 59 <- wrt source file 2023-01-11T22:08:08.7159666Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/pruning/_experimental/activation_sparsifier/activation_sparsifier.py::ActivationSparsifier:0 2023-01-11T22:08:08.7160639Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/pruning/_experimental/data_scheduler/base_data_scheduler.py::BaseDataScheduler.get_schedule_param:0, line 92 <- wrt source file 2023-01-11T22:08:08.7192616Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/ao/pruning/_experimental/data_scheduler/base_data_scheduler.py::BaseDataScheduler.get_schedule_param:0 2023-01-11T22:08:08.7193971Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/pruning/_experimental/data_sparsifier/base_data_sparsifier.py::BaseDataSparsifier:0, line 54 <- wrt source file 2023-01-11T22:08:08.7195373Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/pruning/_experimental/data_sparsifier/base_data_sparsifier.py::BaseDataSparsifier:0 2023-01-11T22:08:08.7196479Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/pruning/scheduler/lambda_scheduler.py::LambdaSL:0, line 19 <- wrt source file 2023-01-11T22:08:08.7197714Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/ao/pruning/scheduler/lambda_scheduler.py::LambdaSL:0 2023-01-11T22:08:08.7198801Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/pruning/sparsifier/base_sparsifier.py::BaseSparsifier:0, line 45 <- wrt source file 2023-01-11T22:08:08.7200013Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/pruning/sparsifier/base_sparsifier.py::BaseSparsifier:0 2023-01-11T22:08:08.7201358Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/pruning/sparsifier/base_sparsifier.py::BaseSparsifier.squash_mask:0, line 237 <- wrt source file 2023-01-11T22:08:08.7205792Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/pruning/sparsifier/base_sparsifier.py::BaseSparsifier.squash_mask:0 2023-01-11T22:08:08.7206894Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fuse_modules.py::fuse_modules:0, line 143 <- wrt source file 2023-01-11T22:08:08.7207880Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fuse_modules.py::fuse_modules:0 2023-01-11T22:08:08.7208957Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fuser_method_mappings.py::fuse_conv_bn:0, line 27 <- wrt source file 2023-01-11T22:08:08.7217019Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fuser_method_mappings.py::fuse_conv_bn:0 2023-01-11T22:08:08.7218133Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fuser_method_mappings.py::fuse_conv_bn_relu:0, line 64 <- wrt source file 2023-01-11T22:08:08.7225194Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fuser_method_mappings.py::fuse_conv_bn_relu:0 2023-01-11T22:08:08.7226328Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fuser_method_mappings.py::fuse_linear_bn:0, line 111 <- wrt source file 2023-01-11T22:08:08.7232246Z * SUCCESS: 
/opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fuser_method_mappings.py::fuse_linear_bn:0 2023-01-11T22:08:08.7233417Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fuser_method_mappings.py::fuse_convtranspose_bn:0, line 139 <- wrt source file 2023-01-11T22:08:08.7239696Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fuser_method_mappings.py::fuse_convtranspose_bn:0 2023-01-11T22:08:08.7240824Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/observer.py::_with_args:0, line 85 <- wrt source file 2023-01-11T22:08:08.7241721Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/observer.py::_with_args:0 2023-01-11T22:08:08.7242773Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/observer.py::_with_callable_args:0, line 106 <- wrt source file 2023-01-11T22:08:08.7243812Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/observer.py::_with_callable_args:0 2023-01-11T22:08:08.7244877Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/quantize_fx.py::fuse_fx:0, line 242 <- wrt source file 2023-01-11T22:08:08.7245862Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/quantize_fx.py::fuse_fx:0 2023-01-11T22:08:08.7248529Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/quantize_fx.py::prepare_fx:0, line 301 <- wrt source file 2023-01-11T22:08:08.7249718Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/quantize_fx.py::prepare_fx:0 2023-01-11T22:08:08.7252237Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/quantize_fx.py::prepare_qat_fx:0, line 439 <- wrt source file 2023-01-11T22:08:08.7253176Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/quantize_fx.py::prepare_qat_fx:0 2023-01-11T22:08:08.7253977Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/quantize_fx.py::convert_fx:0, line 604 <- wrt source file 2023-01-11T22:08:08.7254590Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/quantize_fx.py::convert_fx:0 2023-01-11T22:08:08.7255384Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/quantize_fx.py::convert_to_reference_fx:0, line 663 <- wrt source file 2023-01-11T22:08:08.7256128Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/quantize_fx.py::convert_to_reference_fx:0 2023-01-11T22:08:08.7256814Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/quantize_fx.py::_convert_to_reference_decomposed_fx:0, line 715 <- wrt source file 2023-01-11T22:08:08.7257649Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/quantize_fx.py::_convert_to_reference_decomposed_fx:0 2023-01-11T22:08:08.7258464Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/utils.py::_get_path_of_module:0, line 408 <- wrt source file 2023-01-11T22:08:08.7259126Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/utils.py::_get_path_of_module:0 2023-01-11T22:08:08.7259682Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/utils.py::_get_signature_locals:0, line 429 <- wrt source file 2023-01-11T22:08:08.7260392Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/utils.py::_get_signature_locals:0 2023-01-11T22:08:08.7261035Z * DOCTEST : 
/opt/conda/lib/python3.10/site-packages/torch/ao/quantization/utils.py::_get_default_kwargs:0, line 442 <- wrt source file 2023-01-11T22:08:08.7261575Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/utils.py::_get_default_kwargs:0 2023-01-11T22:08:08.7262136Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/utils.py::_normalize_kwargs:0, line 463 <- wrt source file 2023-01-11T22:08:08.7262838Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/utils.py::_normalize_kwargs:0 2023-01-11T22:08:08.7263496Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/utils.py::_get_num_pos_args:0, line 483 <- wrt source file 2023-01-11T22:08:08.7264188Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/utils.py::_get_num_pos_args:0 2023-01-11T22:08:08.7264947Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/backend_config/backend_config.py::DTypeConfig:0, line 131 <- wrt source file 2023-01-11T22:08:08.7265543Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/backend_config/backend_config.py::DTypeConfig:0 2023-01-11T22:08:08.7266161Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/backend_config/onednn.py::_fuse_linear_bn_leaky_relu:0, line 80 <- wrt source file 2023-01-11T22:08:08.7266767Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/backend_config/onednn.py::_fuse_linear_bn_leaky_relu:0 2023-01-11T22:08:08.7267388Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/_model_report/model_report.py::ModelReport:0, line 79 <- wrt source file 2023-01-11T22:08:08.7270229Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/_model_report/model_report.py::ModelReport:0 2023-01-11T22:08:08.7271307Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/_model_report/model_report_visualizer.py::ModelReportVisualizer.generate_filtered_tables:0, line 324 <- wrt source file 2023-01-11T22:08:08.7272427Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/_model_report/model_report_visualizer.py::ModelReportVisualizer.generate_filtered_tables:0 2023-01-11T22:08:08.7273256Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/_model_report/model_report_visualizer.py::ModelReportVisualizer.generate_table_visualization:0, line 407 <- wrt source file 2023-01-11T22:08:08.7274150Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/_model_report/model_report_visualizer.py::ModelReportVisualizer.generate_table_visualization:0 2023-01-11T22:08:08.7275185Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/_model_report/model_report_visualizer.py::ModelReportVisualizer.generate_plot_visualization:0, line 557 <- wrt source file 2023-01-11T22:08:08.7276259Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/_model_report/model_report_visualizer.py::ModelReportVisualizer.generate_plot_visualization:0 2023-01-11T22:08:08.7277128Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/_model_report/model_report_visualizer.py::ModelReportVisualizer.generate_histogram_visualization:0, line 619 <- wrt source file 2023-01-11T22:08:08.7278097Z * SKIPPED: 
/opt/conda/lib/python3.10/site-packages/torch/ao/quantization/fx/_model_report/model_report_visualizer.py::ModelReportVisualizer.generate_histogram_visualization:0 2023-01-11T22:08:08.7279158Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/anomaly_mode.py::detect_anomaly:0, line 25 <- wrt source file 2023-01-11T22:08:08.7279841Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/anomaly_mode.py::detect_anomaly:0 2023-01-11T22:08:08.7280424Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/forward_ad.py::make_dual:0, line 63 <- wrt source file 2023-01-11T22:08:08.7281053Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/forward_ad.py::make_dual:0 2023-01-11T22:08:08.7281594Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/forward_ad.py::unpack_dual:0, line 126 <- wrt source file 2023-01-11T22:08:08.7282113Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/forward_ad.py::unpack_dual:0 2023-01-11T22:08:08.7282643Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/forward_ad.py::dual_level:0, line 163 <- wrt source file 2023-01-11T22:08:08.7283156Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/forward_ad.py::dual_level:0 2023-01-11T22:08:08.7283718Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py::FunctionCtx.save_for_backward:0, line 51 <- wrt source file 2023-01-11T22:08:08.7284288Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py::FunctionCtx.save_for_backward:0 2023-01-11T22:08:08.7284851Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py::FunctionCtx.save_for_forward:0, line 93 <- wrt source file 2023-01-11T22:08:08.7285410Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py::FunctionCtx.save_for_forward:0 2023-01-11T22:08:08.7286176Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py::FunctionCtx.mark_dirty:0, line 143 <- wrt source file 2023-01-11T22:08:08.7287030Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py::FunctionCtx.mark_dirty:0 2023-01-11T22:08:08.7287808Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py::FunctionCtx.mark_non_differentiable:0, line 187 <- wrt source file 2023-01-11T22:08:08.7288553Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py::FunctionCtx.mark_non_differentiable:0 2023-01-11T22:08:08.7289274Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py::FunctionCtx.set_materialize_grads:0, line 215 <- wrt source file 2023-01-11T22:08:08.7289859Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py::FunctionCtx.set_materialize_grads:0 2023-01-11T22:08:08.7290457Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py::Function:0, line 387 <- wrt source file 2023-01-11T22:08:08.7290968Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py::Function:0 2023-01-11T22:08:08.7291582Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/functional.py::vjp:0, line 248 <- wrt source file 2023-01-11T22:08:08.7292590Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/functional.py::vjp:0 2023-01-11T22:08:08.7293169Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/functional.py::jvp:0, line 346 <- wrt source file 2023-01-11T22:08:08.7296303Z * SKIPPED: 
/opt/conda/lib/python3.10/site-packages/torch/autograd/functional.py::jvp:0 2023-01-11T22:08:08.7296914Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/functional.py::jacobian:0, line 548 <- wrt source file 2023-01-11T22:08:08.7299447Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/functional.py::jacobian:0 2023-01-11T22:08:08.7300024Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/functional.py::hessian:0, line 760 <- wrt source file 2023-01-11T22:08:08.7302879Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/functional.py::hessian:0 2023-01-11T22:08:08.7303517Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/functional.py::vhp:0, line 864 <- wrt source file 2023-01-11T22:08:08.7306345Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/functional.py::vhp:0 2023-01-11T22:08:08.7306896Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/functional.py::hvp:0, line 955 <- wrt source file 2023-01-11T22:08:08.7309844Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/functional.py::hvp:0 2023-01-11T22:08:08.7310389Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/grad_mode.py::no_grad:0, line 120 <- wrt source file 2023-01-11T22:08:08.7311702Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/grad_mode.py::no_grad:0 2023-01-11T22:08:08.7312247Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/grad_mode.py::enable_grad:0, line 166 <- wrt source file 2023-01-11T22:08:08.7313948Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/grad_mode.py::enable_grad:0 2023-01-11T22:08:08.7314502Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/grad_mode.py::set_grad_enabled:0, line 216 <- wrt source file 2023-01-11T22:08:08.7316014Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/grad_mode.py::set_grad_enabled:0 2023-01-11T22:08:08.7316574Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/grad_mode.py::inference_mode:0, line 273 <- wrt source file 2023-01-11T22:08:08.7318203Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/grad_mode.py::inference_mode:0 2023-01-11T22:08:08.7318832Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/graph.py::saved_tensors_hooks:0, line 50 <- wrt source file 2023-01-11T22:08:08.7320000Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/graph.py::saved_tensors_hooks:0 2023-01-11T22:08:08.7320761Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/graph.py::save_on_cpu:0, line 109 <- wrt source file 2023-01-11T22:08:08.7321438Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/graph.py::save_on_cpu:0 2023-01-11T22:08:08.7321978Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/graph.py::disable_saved_tensors_hooks:0, line 164 <- wrt source file 2023-01-11T22:08:08.7322617Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/graph.py::disable_saved_tensors_hooks:0 2023-01-11T22:08:08.7323179Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/graph.py::register_multi_grad_hook:0, line 204 <- wrt source file 2023-01-11T22:08:08.7336287Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/autograd/graph.py::register_multi_grad_hook:0 2023-01-11T22:08:08.7336842Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/graph.py::allow_mutation_on_saved_tensors:0, line 406 <- 
wrt source file 2023-01-11T22:08:08.7354512Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/autograd/graph.py::allow_mutation_on_saved_tensors:0 2023-01-11T22:08:08.7355518Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/profiler.py::profile:0, line 123 <- wrt source file 2023-01-11T22:08:08.7356493Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/profiler.py::profile:0 2023-01-11T22:08:08.7357487Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/profiler.py::record_function:0, line 457 <- wrt source file 2023-01-11T22:08:08.7358434Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/profiler.py::record_function:0 2023-01-11T22:08:08.7359442Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/profiler.py::emit_itt:0, line 582 <- wrt source file 2023-01-11T22:08:08.7360390Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/profiler.py::emit_itt:0 2023-01-11T22:08:08.7361372Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/autograd/profiler.py::emit_nvtx:0, line 651 <- wrt source file 2023-01-11T22:08:08.7362295Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/autograd/profiler.py::emit_nvtx:0 2023-01-11T22:08:08.7363294Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/cuda/jiterator.py::_create_jit_fn:0, line 95 <- wrt source file 2023-01-11T22:08:08.7364235Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/cuda/jiterator.py::_create_jit_fn:0 2023-01-11T22:08:08.7365211Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/cuda/jiterator.py::_create_jit_fn:1, line 106 <- wrt source file 2023-01-11T22:08:08.7366132Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/cuda/jiterator.py::_create_jit_fn:1 2023-01-11T22:08:08.7367090Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/cuda/jiterator.py::_create_jit_fn:2, line 119 <- wrt source file 2023-01-11T22:08:08.7368014Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/cuda/jiterator.py::_create_jit_fn:2 2023-01-11T22:08:08.7368995Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/cuda/jiterator.py::_create_multi_output_jit_fn:0, line 151 <- wrt source file 2023-01-11T22:08:08.7370172Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/cuda/jiterator.py::_create_multi_output_jit_fn:0 2023-01-11T22:08:08.7371210Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/argparse_util.py::env:0, line 23 <- wrt source file 2023-01-11T22:08:08.7372168Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/argparse_util.py::env:0 2023-01-11T22:08:08.7373160Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/argparse_util.py::check_env:0, line 73 <- wrt source file 2023-01-11T22:08:08.7374109Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/argparse_util.py::check_env:0 2023-01-11T22:08:08.7375148Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::batch_isend_irecv:0, line 1377 <- wrt source file 2023-01-11T22:08:08.7376294Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::batch_isend_irecv:0 2023-01-11T22:08:08.7377331Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::all_reduce:0, line 1633 <- wrt source file 2023-01-11T22:08:08.7378375Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::all_reduce:0 2023-01-11T22:08:08.7379432Z * 
DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::all_gather_object:0, line 1997 <- wrt source file 2023-01-11T22:08:08.7380484Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::all_gather_object:0 2023-01-11T22:08:08.7381573Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::gather_object:0, line 2085 <- wrt source file 2023-01-11T22:08:08.7382787Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::gather_object:0 2023-01-11T22:08:08.7383854Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::broadcast_object_list:0, line 2192 <- wrt source file 2023-01-11T22:08:08.7384944Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::broadcast_object_list:0 2023-01-11T22:08:08.7386056Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::scatter_object_list:0, line 2288 <- wrt source file 2023-01-11T22:08:08.7387066Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::scatter_object_list:0 2023-01-11T22:08:08.7388114Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::all_gather:0, line 2375 <- wrt source file 2023-01-11T22:08:08.7389101Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::all_gather:0 2023-01-11T22:08:08.7390159Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::all_gather_into_tensor:0, line 2455 <- wrt source file 2023-01-11T22:08:08.7391225Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::all_gather_into_tensor:0 2023-01-11T22:08:08.7392343Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::all_gather_coalesced:0, line 2564 <- wrt source file 2023-01-11T22:08:08.7393349Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::all_gather_coalesced:0 2023-01-11T22:08:08.7394423Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::scatter:0, line 2722 <- wrt source file 2023-01-11T22:08:08.7395574Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::scatter:0 2023-01-11T22:08:08.7396664Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::reduce_scatter_tensor:0, line 2929 <- wrt source file 2023-01-11T22:08:08.7397722Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::reduce_scatter_tensor:0 2023-01-11T22:08:08.7398824Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::all_to_all_single:0, line 3046 <- wrt source file 2023-01-11T22:08:08.7402786Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::all_to_all_single:0 2023-01-11T22:08:08.7403867Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::all_to_all:0, line 3164 <- wrt source file 2023-01-11T22:08:08.7409774Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::all_to_all:0 2023-01-11T22:08:08.7410860Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::monitored_barrier:0, line 3346 <- wrt source file 2023-01-11T22:08:08.7411893Z * 
SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::monitored_barrier:0 2023-01-11T22:08:08.7412973Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::new_subgroups:0, line 3598 <- wrt source file 2023-01-11T22:08:08.7414024Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::new_subgroups:0 2023-01-11T22:08:08.7415139Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::new_subgroups_by_enumeration:0, line 3713 <- wrt source file 2023-01-11T22:08:08.7416231Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py::new_subgroups_by_enumeration:0 2023-01-11T22:08:08.7417239Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/launch.py::__doc__:0, line 81 <- wrt source file 2023-01-11T22:08:08.7418158Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/launch.py::__doc__:0 2023-01-11T22:08:08.7419024Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/run.py::__doc__:0, line 297 <- wrt source file 2023-01-11T22:08:08.7419837Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/run.py::__doc__:0 2023-01-11T22:08:08.7420717Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/autograd/__init__.py::context:0, line 39 <- wrt source file 2023-01-11T22:08:08.7421679Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/autograd/__init__.py::context:0 2023-01-11T22:08:08.7422931Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/_composable/_ddp.py::DistributedDataParallel.no_sync:0, line 509 <- wrt source file 2023-01-11T22:08:08.7424028Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/_composable/_ddp.py::DistributedDataParallel.no_sync:0 2023-01-11T22:08:08.7425217Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/_composable/_ddp.py::DistributedDataParallel.register_comm_hook:0, line 826 <- wrt source file 2023-01-11T22:08:08.7461109Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/_composable/_ddp.py::DistributedDataParallel.register_comm_hook:0 2023-01-11T22:08:08.7462534Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/_composable/_ddp.py::DistributedDataParallel.register_comm_hook:1, line 838 <- wrt source file 2023-01-11T22:08:08.7463739Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/_composable/_ddp.py::DistributedDataParallel.register_comm_hook:1 2023-01-11T22:08:08.7465129Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/_composable/_ddp.py::DistributedDataParallel._register_builtin_comm_hook:0, line 874 <- wrt source file 2023-01-11T22:08:08.7466410Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/_composable/_ddp.py::DistributedDataParallel._register_builtin_comm_hook:0 2023-01-11T22:08:08.7467703Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/_composable/_ddp.py::DistributedDataParallel._register_fused_optim:0, line 930 <- wrt source file 2023-01-11T22:08:08.7468965Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/_composable/_ddp.py::DistributedDataParallel._register_fused_optim:0 2023-01-11T22:08:08.7470272Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/_composable/checkpoint_activation.py::checkpoint:0, line 198 <- wrt source file 
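(Most torch.distributed doctests in this stretch are skipped because they need a multi-rank job. The single-process sketch below, assuming a CPU gloo backend and placeholder local rendezvous settings, only illustrates the call shape of a collective such as all_reduce; it is not how the CI job runs them.)

    import os
    import torch
    import torch.distributed as dist

    # Single-rank process group so the collective can run without a launcher;
    # the address/port below are placeholders for a local rendezvous.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)

    t = torch.arange(4, dtype=torch.float32)
    dist.all_reduce(t, op=dist.ReduceOp.SUM)  # sums across ranks; a no-op with one rank
    print(t)

    dist.destroy_process_group()
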
2023-01-11T22:08:08.7471363Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/_composable/checkpoint_activation.py::checkpoint:0 2023-01-11T22:08:08.7472434Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/_composable/contract.py::contract:0, line 44 <- wrt source file 2023-01-11T22:08:08.7473442Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/_composable/contract.py::contract:0 2023-01-11T22:08:08.7474503Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/_composable/replicate.py::replicate:0, line 21 <- wrt source file 2023-01-11T22:08:08.7475564Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/_composable/replicate.py::replicate:0 2023-01-11T22:08:08.7476635Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/_shard/partial_tensor.py::_PartialTensor:0, line 61 <- wrt source file 2023-01-11T22:08:08.7477723Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/_shard/partial_tensor.py::_PartialTensor:0 2023-01-11T22:08:08.7478903Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/_shard/sharded_optim/__init__.py::named_params_with_sharded_tensor:0, line 32 <- wrt source file 2023-01-11T22:08:08.7480179Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/_shard/sharded_optim/__init__.py::named_params_with_sharded_tensor:0 2023-01-11T22:08:08.7481381Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/_shard/sharded_tensor/__init__.py::init_from_local_shards:0, line 366 <- wrt source file 2023-01-11T22:08:08.7482508Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/_shard/sharded_tensor/__init__.py::init_from_local_shards:0 2023-01-11T22:08:08.7483620Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/_shard/sharded_tensor/__init__.py::custom_sharded_op_impl:0, line 430 <- wrt source file 2023-01-11T22:08:08.7484688Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/_shard/sharded_tensor/__init__.py::custom_sharded_op_impl:0 2023-01-11T22:08:08.7485873Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/_shard/sharded_tensor/api.py::ShardedTensor._init_from_local_tensor:0, line 808 <- wrt source file 2023-01-11T22:08:08.7487092Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/_shard/sharded_tensor/api.py::ShardedTensor._init_from_local_tensor:0 2023-01-11T22:08:08.7488292Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/_shard/sharded_tensor/api.py::ShardedTensor.reshard:0, line 958 <- wrt source file 2023-01-11T22:08:08.7489387Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/_shard/sharded_tensor/api.py::ShardedTensor.reshard:0 2023-01-11T22:08:08.7490657Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/_shard/sharded_tensor/_ops/_common.py::_sharded_op_common:0, line 15 <- wrt source file 2023-01-11T22:08:08.7491761Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/_shard/sharded_tensor/_ops/_common.py::_sharded_op_common:0 2023-01-11T22:08:08.7492809Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/_shard/sharding_plan/api.py::ShardingPlan:0, line 36 <- wrt source file 2023-01-11T22:08:08.7493775Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/_shard/sharding_plan/api.py::ShardingPlan:0 2023-01-11T22:08:08.7494839Z * DOCTEST : 
/opt/conda/lib/python3.10/site-packages/torch/distributed/_tools/memory_tracker.py::MemoryTracker:0, line 57 <- wrt source file 2023-01-11T22:08:08.7496010Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/_tools/memory_tracker.py::MemoryTracker:0 2023-01-11T22:08:08.7497031Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/join.py::Join:0, line 148 <- wrt source file 2023-01-11T22:08:08.7497983Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/join.py::Join:0 2023-01-11T22:08:08.7499060Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/__init__.py::register_ddp_comm_hook:0, line 99 <- wrt source file 2023-01-11T22:08:08.7500170Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/__init__.py::register_ddp_comm_hook:0 2023-01-11T22:08:08.7501341Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/debugging_hooks.py::noop_hook:0, line 23 <- wrt source file 2023-01-11T22:08:08.7502584Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/debugging_hooks.py::noop_hook:0 2023-01-11T22:08:08.7503721Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::allreduce_hook:0, line 37 <- wrt source file 2023-01-11T22:08:08.7504826Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::allreduce_hook:0 2023-01-11T22:08:08.7506002Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::fp16_compress_hook:0, line 54 <- wrt source file 2023-01-11T22:08:08.7507096Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::fp16_compress_hook:0 2023-01-11T22:08:08.7508254Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::bf16_compress_hook:0, line 90 <- wrt source file 2023-01-11T22:08:08.7509401Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::bf16_compress_hook:0 2023-01-11T22:08:08.7510590Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::fp16_compress_wrapper:0, line 123 <- wrt source file 2023-01-11T22:08:08.7511787Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::fp16_compress_wrapper:0 2023-01-11T22:08:08.7512998Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::bf16_compress_wrapper:0, line 161 <- wrt source file 2023-01-11T22:08:08.7514200Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::bf16_compress_wrapper:0 2023-01-11T22:08:08.7515556Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/post_localSGD_hook.py::post_localSGD_hook:0, line 85 <- wrt source file 2023-01-11T22:08:08.7516790Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/post_localSGD_hook.py::post_localSGD_hook:0 2023-01-11T22:08:08.7517967Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/powerSGD_hook.py::powerSGD_hook:0, line 
382 <- wrt source file 2023-01-11T22:08:08.7519197Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/powerSGD_hook.py::powerSGD_hook:0 2023-01-11T22:08:08.7520489Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/powerSGD_hook.py::batched_powerSGD_hook:0, line 691 <- wrt source file 2023-01-11T22:08:08.7521668Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/powerSGD_hook.py::batched_powerSGD_hook:0 2023-01-11T22:08:08.7522894Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/quantization_hooks.py::quantization_pertensor_hook:0, line 62 <- wrt source file 2023-01-11T22:08:08.7524122Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/quantization_hooks.py::quantization_pertensor_hook:0 2023-01-11T22:08:08.7525374Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/quantization_hooks.py::quantization_perchannel_hook:0, line 142 <- wrt source file 2023-01-11T22:08:08.7526595Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/ddp_comm_hooks/quantization_hooks.py::quantization_perchannel_hook:0 2023-01-11T22:08:08.7527867Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/model_averaging/averagers.py::PeriodicModelAverager:0, line 51 <- wrt source file 2023-01-11T22:08:08.7529066Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/model_averaging/averagers.py::PeriodicModelAverager:0 2023-01-11T22:08:08.7530404Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/model_averaging/hierarchical_model_averager.py::HierarchicalModelAverager:0, line 50 <- wrt source file 2023-01-11T22:08:08.7531728Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/algorithms/model_averaging/hierarchical_model_averager.py::HierarchicalModelAverager:0 2023-01-11T22:08:08.7532995Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/checkpoint/optimizer.py::load_sharded_optimizer_state_dict:0, line 205 <- wrt source file 2023-01-11T22:08:08.7534145Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/checkpoint/optimizer.py::load_sharded_optimizer_state_dict:0 2023-01-11T22:08:08.7535286Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/checkpoint/planner.py::SavePlanner:0, line 126 <- wrt source file 2023-01-11T22:08:08.7536328Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/checkpoint/planner.py::SavePlanner:0 2023-01-11T22:08:08.7537398Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/checkpoint/planner.py::LoadPlanner:0, line 281 <- wrt source file 2023-01-11T22:08:08.7538414Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/checkpoint/planner.py::LoadPlanner:0 2023-01-11T22:08:08.7539529Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/checkpoint/state_dict_loader.py::load_state_dict:0, line 63 <- wrt source file 2023-01-11T22:08:08.7540695Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/checkpoint/state_dict_loader.py::load_state_dict:0 2023-01-11T22:08:08.7541782Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/checkpoint/state_dict_saver.py::save_state_dict:0, line 60 <- wrt source file 2023-01-11T22:08:08.7542980Z * 
SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/checkpoint/state_dict_saver.py::save_state_dict:0 2023-01-11T22:08:08.7544099Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/__init__.py::start_processes:0, line 132 <- wrt source file 2023-01-11T22:08:08.7545219Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/__init__.py::start_processes:0 2023-01-11T22:08:08.7546434Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/api.py::Std.from_str:0, line 110 <- wrt source file 2023-01-11T22:08:08.7547499Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/api.py::Std.from_str:0 2023-01-11T22:08:08.7548610Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/api.py::to_map:0, line 150 <- wrt source file 2023-01-11T22:08:08.7549644Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/api.py::to_map:0 2023-01-11T22:08:08.7550821Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py::ChildFailedError:0, line 203 <- wrt source file 2023-01-11T22:08:08.7552010Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py::ChildFailedError:0 2023-01-11T22:08:08.7553245Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/api.py::RendezvousHandler.shutdown:0, line 112 <- wrt source file 2023-01-11T22:08:08.7554451Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/api.py::RendezvousHandler.shutdown:0 2023-01-11T22:08:08.7555571Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/api.py::StateDictType:0, line 221 <- wrt source file 2023-01-11T22:08:08.7556532Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/api.py::StateDictType:0 2023-01-11T22:08:08.7557583Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/api.py::FullStateDictConfig:0, line 258 <- wrt source file 2023-01-11T22:08:08.7558635Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/api.py::FullStateDictConfig:0 2023-01-11T22:08:08.7559918Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel:0, line 125 <- wrt source file 2023-01-11T22:08:08.7561149Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel:0 2023-01-11T22:08:08.7562436Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel.set_state_dict_type:0, line 551 <- wrt source file 2023-01-11T22:08:08.7563761Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel.set_state_dict_type:0 2023-01-11T22:08:08.7565131Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel.state_dict_type:0, line 619 <- wrt source file 2023-01-11T22:08:08.7566608Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel.state_dict_type:0 2023-01-11T22:08:08.7568006Z * DOCTEST : 
/opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel.shard_full_optim_state_dict:0, line 1346 <- wrt source file 2023-01-11T22:08:08.7569343Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel.shard_full_optim_state_dict:0 2023-01-11T22:08:08.7570769Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel.scatter_full_optim_state_dict:0, line 1455 <- wrt source file 2023-01-11T22:08:08.7572269Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel.scatter_full_optim_state_dict:0 2023-01-11T22:08:08.7573613Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel.rekey_optim_state_dict:0, line 1586 <- wrt source file 2023-01-11T22:08:08.7574850Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel.rekey_optim_state_dict:0 2023-01-11T22:08:08.7576028Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/sharded_grad_scaler.py::ShardedGradScaler:0, line 45 <- wrt source file 2023-01-11T22:08:08.7577089Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/sharded_grad_scaler.py::ShardedGradScaler:0 2023-01-11T22:08:08.7578206Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/nn/functional.py::_all_gather_base:0, line 130 <- wrt source file 2023-01-11T22:08:08.7579258Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/nn/functional.py::_all_gather_base:0 2023-01-11T22:08:08.7580356Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/nn/api/remote_module.py::_RemoteModule.__init__:0, line 201 <- wrt source file 2023-01-11T22:08:08.7581433Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/nn/api/remote_module.py::_RemoteModule.__init__:0 2023-01-11T22:08:08.7582762Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/nn/api/remote_module.py::_RemoteModule.init_from_module_rref:0, line 524 <- wrt source file 2023-01-11T22:08:08.7583963Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/nn/api/remote_module.py::_RemoteModule.init_from_module_rref:0 2023-01-11T22:08:08.7585097Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/nn/api/remote_module.py::RemoteModule:0, line 646 <- wrt source file 2023-01-11T22:08:08.7586163Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/nn/api/remote_module.py::RemoteModule:0 2023-01-11T22:08:08.7587362Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/optim/apply_optimizer_in_backward.py::_apply_optimizer_in_backward:0, line 27 <- wrt source file 2023-01-11T22:08:08.7588563Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/optim/apply_optimizer_in_backward.py::_apply_optimizer_in_backward:0 2023-01-11T22:08:08.7589730Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/optim/named_optimizer.py::_NamedOptimizer:0, line 38 <- wrt source file 2023-01-11T22:08:08.7590823Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/optim/named_optimizer.py::_NamedOptimizer:0 2023-01-11T22:08:08.7591952Z * DOCTEST : 
/opt/conda/lib/python3.10/site-packages/torch/distributed/optim/optimizer.py::DistributedOptimizer:0, line 160 <- wrt source file 2023-01-11T22:08:08.7593211Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/optim/optimizer.py::DistributedOptimizer:0 2023-01-11T22:08:08.7594404Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/optim/post_localSGD_optimizer.py::PostLocalSGDOptimizer:0, line 18 <- wrt source file 2023-01-11T22:08:08.7595614Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/optim/post_localSGD_optimizer.py::PostLocalSGDOptimizer:0 2023-01-11T22:08:08.7596675Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/optim/utils.py::register_functional_optim:0, line 35 <- wrt source file 2023-01-11T22:08:08.7597684Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/optim/utils.py::register_functional_optim:0 2023-01-11T22:08:08.7598971Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/optim/zero_redundancy_optimizer.py::ZeroRedundancyOptimizer:0, line 325 <- wrt source file 2023-01-11T22:08:08.7600323Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/optim/zero_redundancy_optimizer.py::ZeroRedundancyOptimizer:0 2023-01-11T22:08:08.7601434Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/pipeline/sync/pipe.py::WithDevice:0, line 152 <- wrt source file 2023-01-11T22:08:08.7602430Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/pipeline/sync/pipe.py::WithDevice:0 2023-01-11T22:08:08.7603460Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/pipeline/sync/pipe.py::Pipe:0, line 274 <- wrt source file 2023-01-11T22:08:08.7604402Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/pipeline/sync/pipe.py::Pipe:0 2023-01-11T22:08:08.7605406Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/rpc/api.py::_wait_all:0, line 160 <- wrt source file 2023-01-11T22:08:08.7606289Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/rpc/api.py::_wait_all:0 2023-01-11T22:08:08.7607151Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/rpc/api.py::shutdown:0, line 333 <- wrt source file 2023-01-11T22:08:08.7607914Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/rpc/api.py::shutdown:0 2023-01-11T22:08:08.7608692Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/rpc/api.py::remote:0, line 582 <- wrt source file 2023-01-11T22:08:08.7609464Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/rpc/api.py::remote:0 2023-01-11T22:08:08.7610373Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/rpc/api.py::rpc_sync:0, line 766 <- wrt source file 2023-01-11T22:08:08.7611160Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/rpc/api.py::rpc_sync:0 2023-01-11T22:08:08.7612071Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/rpc/api.py::rpc_async:0, line 858 <- wrt source file 2023-01-11T22:08:08.7612975Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/rpc/api.py::rpc_async:0 2023-01-11T22:08:08.7613968Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/rpc/functions.py::async_execution:0, line 33 <- wrt source file 2023-01-11T22:08:08.7614910Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributed/rpc/functions.py::async_execution:0 2023-01-11T22:08:08.7615871Z * DOCTEST 
: /opt/conda/lib/python3.10/site-packages/torch/distributed/rpc/options.py::TensorPipeRpcBackendOptions.set_device_map:0, line 117 <- wrt source file 2023-01-11T22:08:08.7616986Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/rpc/options.py::TensorPipeRpcBackendOptions.set_device_map:0 2023-01-11T22:08:08.7618101Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/rpc/server_process_global_profiler.py::_server_process_global_profile:0, line 58 <- wrt source file 2023-01-11T22:08:08.7619211Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/rpc/server_process_global_profiler.py::_server_process_global_profile:0 2023-01-11T22:08:08.7620156Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/tensor/parallel/_utils.py::_prepare_input_validate:0, line 33 <- wrt source file 2023-01-11T22:08:08.7621169Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/tensor/parallel/_utils.py::_prepare_input_validate:0 2023-01-11T22:08:08.7622503Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/tensor/parallel/_utils.py::_prepare_output_validate:0, line 78 <- wrt source file 2023-01-11T22:08:08.7623595Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/tensor/parallel/_utils.py::_prepare_output_validate:0 2023-01-11T22:08:08.7624618Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributed/tensor/parallel/api.py::parallelize_module:0, line 63 <- wrt source file 2023-01-11T22:08:08.7625530Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributed/tensor/parallel/api.py::parallelize_module:0 2023-01-11T22:08:08.7626418Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/bernoulli.py::Bernoulli:0, line 21 <- wrt source file 2023-01-11T22:08:08.7627280Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/bernoulli.py::Bernoulli:0 2023-01-11T22:08:08.7628142Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/beta.py::Beta:0, line 16 <- wrt source file 2023-01-11T22:08:08.7629098Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/beta.py::Beta:0 2023-01-11T22:08:08.7630047Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/binomial.py::Binomial:0, line 20 <- wrt source file 2023-01-11T22:08:08.7634690Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/binomial.py::Binomial:0 2023-01-11T22:08:08.7635681Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/categorical.py::Categorical:0, line 37 <- wrt source file 2023-01-11T22:08:08.7643134Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/categorical.py::Categorical:0 2023-01-11T22:08:08.7644269Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/cauchy.py::Cauchy:0, line 19 <- wrt source file 2023-01-11T22:08:08.7649782Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/cauchy.py::Cauchy:0 2023-01-11T22:08:08.7650731Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/chi2.py::Chi2:0, line 12 <- wrt source file 2023-01-11T22:08:08.7656218Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/chi2.py::Chi2:0 2023-01-11T22:08:08.7657262Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/constraints.py::_DependentProperty:0, line 152 <- wrt source file 2023-01-11T22:08:08.7658313Z * SKIPPED: 
/opt/conda/lib/python3.10/site-packages/torch/distributions/constraints.py::_DependentProperty:0 2023-01-11T22:08:08.7659337Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/continuous_bernoulli.py::ContinuousBernoulli:0, line 24 <- wrt source file 2023-01-11T22:08:08.7666402Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/continuous_bernoulli.py::ContinuousBernoulli:0 2023-01-11T22:08:08.7667129Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/dirichlet.py::Dirichlet:0, line 35 <- wrt source file 2023-01-11T22:08:08.7672196Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/dirichlet.py::Dirichlet:0 2023-01-11T22:08:08.7672766Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/exponential.py::Exponential:0, line 15 <- wrt source file 2023-01-11T22:08:08.7677528Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/exponential.py::Exponential:0 2023-01-11T22:08:08.7678109Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/fishersnedecor.py::FisherSnedecor:0, line 16 <- wrt source file 2023-01-11T22:08:08.7684807Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/fishersnedecor.py::FisherSnedecor:0 2023-01-11T22:08:08.7685463Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/gamma.py::Gamma:0, line 19 <- wrt source file 2023-01-11T22:08:08.7689899Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/gamma.py::Gamma:0 2023-01-11T22:08:08.7690432Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/geometric.py::Geometric:0, line 21 <- wrt source file 2023-01-11T22:08:08.7695613Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/geometric.py::Geometric:0 2023-01-11T22:08:08.7696186Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/gumbel.py::Gumbel:0, line 17 <- wrt source file 2023-01-11T22:08:08.7703612Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/gumbel.py::Gumbel:0 2023-01-11T22:08:08.7704196Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/half_cauchy.py::HalfCauchy:0, line 20 <- wrt source file 2023-01-11T22:08:08.7709831Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/half_cauchy.py::HalfCauchy:0 2023-01-11T22:08:08.7710398Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/half_normal.py::HalfNormal:0, line 20 <- wrt source file 2023-01-11T22:08:08.7715549Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/half_normal.py::HalfNormal:0 2023-01-11T22:08:08.7716118Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/independent.py::Independent:0, line 18 <- wrt source file 2023-01-11T22:08:08.7728654Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/independent.py::Independent:0 2023-01-11T22:08:08.7729416Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/kumaraswamy.py::Kumaraswamy:0, line 25 <- wrt source file 2023-01-11T22:08:08.7735697Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/kumaraswamy.py::Kumaraswamy:0 2023-01-11T22:08:08.7736432Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/laplace.py::Laplace:0, line 14 <- wrt source file 2023-01-11T22:08:08.7741595Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/laplace.py::Laplace:0 
2023-01-11T22:08:08.7742296Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/lkj_cholesky.py::LKJCholesky:0, line 38 <- wrt source file 2023-01-11T22:08:08.7754022Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/lkj_cholesky.py::LKJCholesky:0 2023-01-11T22:08:08.7754660Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/log_normal.py::LogNormal:0, line 17 <- wrt source file 2023-01-11T22:08:08.7760861Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/log_normal.py::LogNormal:0 2023-01-11T22:08:08.7761558Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/logistic_normal.py::LogisticNormal:0, line 22 <- wrt source file 2023-01-11T22:08:08.7770139Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/logistic_normal.py::LogisticNormal:0 2023-01-11T22:08:08.7770929Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/lowrank_multivariate_normal.py::LowRankMultivariateNormal:0, line 56 <- wrt source file 2023-01-11T22:08:08.7772947Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributions/lowrank_multivariate_normal.py::LowRankMultivariateNormal:0 2023-01-11T22:08:08.7773575Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/mixture_same_family.py::MixtureSameFamily:0, line 19 <- wrt source file 2023-01-11T22:08:08.7776824Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributions/mixture_same_family.py::MixtureSameFamily:0 2023-01-11T22:08:08.7777803Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/multinomial.py::Multinomial:0, line 34 <- wrt source file 2023-01-11T22:08:08.7778670Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributions/multinomial.py::Multinomial:0 2023-01-11T22:08:08.7779278Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/multivariate_normal.py::MultivariateNormal:0, line 94 <- wrt source file 2023-01-11T22:08:08.7779870Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributions/multivariate_normal.py::MultivariateNormal:0 2023-01-11T22:08:08.7780432Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/normal.py::Normal:0, line 18 <- wrt source file 2023-01-11T22:08:08.7784369Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/normal.py::Normal:0 2023-01-11T22:08:08.7784972Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/one_hot_categorical.py::OneHotCategorical:0, line 27 <- wrt source file 2023-01-11T22:08:08.7793008Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/one_hot_categorical.py::OneHotCategorical:0 2023-01-11T22:08:08.7793583Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/pareto.py::Pareto:0, line 14 <- wrt source file 2023-01-11T22:08:08.7800224Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/pareto.py::Pareto:0 2023-01-11T22:08:08.7800910Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/poisson.py::Poisson:0, line 20 <- wrt source file 2023-01-11T22:08:08.7801726Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributions/poisson.py::Poisson:0 2023-01-11T22:08:08.7802322Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/relaxed_bernoulli.py::RelaxedBernoulli:0, line 102 <- wrt source file 2023-01-11T22:08:08.7809819Z * SUCCESS: 
/opt/conda/lib/python3.10/site-packages/torch/distributions/relaxed_bernoulli.py::RelaxedBernoulli:0 2023-01-11T22:08:08.7810453Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/relaxed_categorical.py::RelaxedOneHotCategorical:0, line 96 <- wrt source file 2023-01-11T22:08:08.7820105Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/relaxed_categorical.py::RelaxedOneHotCategorical:0 2023-01-11T22:08:08.7820683Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/studentT.py::StudentT:0, line 17 <- wrt source file 2023-01-11T22:08:08.7828532Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/studentT.py::StudentT:0 2023-01-11T22:08:08.7829840Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/transforms.py::CatTransform:0, line 1002 <- wrt source file 2023-01-11T22:08:08.7830947Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributions/transforms.py::CatTransform:0 2023-01-11T22:08:08.7831787Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/transforms.py::StackTransform:0, line 1105 <- wrt source file 2023-01-11T22:08:08.7832845Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributions/transforms.py::StackTransform:0 2023-01-11T22:08:08.7833564Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/transforms.py::CumulativeDistributionTransform:0, line 1178 <- wrt source file 2023-01-11T22:08:08.7834190Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributions/transforms.py::CumulativeDistributionTransform:0 2023-01-11T22:08:08.7834836Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/uniform.py::Uniform:0, line 17 <- wrt source file 2023-01-11T22:08:08.7837082Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/uniform.py::Uniform:0 2023-01-11T22:08:08.7837697Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/von_mises.py::VonMises:0, line 79 <- wrt source file 2023-01-11T22:08:08.7846045Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/von_mises.py::VonMises:0 2023-01-11T22:08:08.7846645Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/weibull.py::Weibull:0, line 16 <- wrt source file 2023-01-11T22:08:08.7853063Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/distributions/weibull.py::Weibull:0 2023-01-11T22:08:08.7853785Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/distributions/wishart.py::Wishart:0, line 36 <- wrt source file 2023-01-11T22:08:08.7854492Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/distributions/wishart.py::Wishart:0 2023-01-11T22:08:08.7855200Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py::_snake_case:0, line 79 <- wrt source file 2023-01-11T22:08:08.7855806Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py::_snake_case:0 2023-01-11T22:08:08.7856447Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py::Graph.eliminate_dead_code:0, line 1363 <- wrt source file 2023-01-11T22:08:08.7857021Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py::Graph.eliminate_dead_code:0 2023-01-11T22:08:08.7858645Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py::Graph.on_generate_code:0, line 1431 <- wrt source file 2023-01-11T22:08:08.7859214Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/graph.py::Graph.on_generate_code:0 
2023-01-11T22:08:08.7860191Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/interpreter.py::Interpreter:0, line 37 <- wrt source file 2023-01-11T22:08:08.7860798Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/interpreter.py::Interpreter:0 2023-01-11T22:08:08.7861641Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/interpreter.py::Transformer:0, line 380 <- wrt source file 2023-01-11T22:08:08.7862181Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/interpreter.py::Transformer:0 2023-01-11T22:08:08.7863466Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/subgraph_rewriter.py::replace_pattern:0, line 108 <- wrt source file 2023-01-11T22:08:08.7864210Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/subgraph_rewriter.py::replace_pattern:0 2023-01-11T22:08:08.7865401Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/tensor_type.py::TensorType:0, line 11 <- wrt source file 2023-01-11T22:08:08.7866508Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/tensor_type.py::TensorType:0 2023-01-11T22:08:08.7867464Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/tensor_type.py::is_consistent:0, line 62 <- wrt source file 2023-01-11T22:08:08.7867964Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/tensor_type.py::is_consistent:0 2023-01-11T22:08:08.7868494Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/tensor_type.py::is_more_precise:0, line 88 <- wrt source file 2023-01-11T22:08:08.7869178Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/tensor_type.py::is_more_precise:0 2023-01-11T22:08:08.7870095Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/rewriter.py::AST_Rewriter.visit_AnnAssign:0, line 87 <- wrt source file 2023-01-11T22:08:08.7871104Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/rewriter.py::AST_Rewriter.visit_AnnAssign:0 2023-01-11T22:08:08.7871797Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/core.py::reify:0, line 42 <- wrt source file 2023-01-11T22:08:08.7872407Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/core.py::reify:0 2023-01-11T22:08:08.7872984Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/match.py::VarDispatcher:0, line 42 <- wrt source file 2023-01-11T22:08:08.7873652Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/match.py::VarDispatcher:0 2023-01-11T22:08:08.7874289Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/more.py::unifiable:0, line 10 <- wrt source file 2023-01-11T22:08:08.7874881Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/more.py::unifiable:0 2023-01-11T22:08:08.7875546Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/more.py::reify_object:0, line 36 <- wrt source file 2023-01-11T22:08:08.7876456Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/more.py::reify_object:0 2023-01-11T22:08:08.7877036Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/more.py::unify_object:0, line 91 <- wrt source file 2023-01-11T22:08:08.7877591Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/more.py::unify_object:0 2023-01-11T22:08:08.7878175Z * DOCTEST : 
/opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::merge:0, line 22 <- wrt source file 2023-01-11T22:08:08.7902140Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::merge:0 2023-01-11T22:08:08.7902922Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::merge_with:0, line 49 <- wrt source file 2023-01-11T22:08:08.7906451Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::merge_with:0 2023-01-11T22:08:08.7907100Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::valmap:0, line 75 <- wrt source file 2023-01-11T22:08:08.7910229Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::valmap:0 2023-01-11T22:08:08.7910829Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::keymap:0, line 91 <- wrt source file 2023-01-11T22:08:08.7914516Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::keymap:0 2023-01-11T22:08:08.7915149Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::itemmap:0, line 107 <- wrt source file 2023-01-11T22:08:08.7917980Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::itemmap:0 2023-01-11T22:08:08.7918602Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::valfilter:0, line 123 <- wrt source file 2023-01-11T22:08:08.7922957Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::valfilter:0 2023-01-11T22:08:08.7923694Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::keyfilter:0, line 143 <- wrt source file 2023-01-11T22:08:08.7927887Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::keyfilter:0 2023-01-11T22:08:08.7928510Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::itemfilter:0, line 163 <- wrt source file 2023-01-11T22:08:08.7933339Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::itemfilter:0 2023-01-11T22:08:08.7933958Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::assoc:0, line 189 <- wrt source file 2023-01-11T22:08:08.7937144Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::assoc:0 2023-01-11T22:08:08.7937755Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::dissoc:0, line 206 <- wrt source file 2023-01-11T22:08:08.7942615Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::dissoc:0 2023-01-11T22:08:08.7943224Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::assoc_in:0, line 232 <- wrt source file 2023-01-11T22:08:08.7946962Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::assoc_in:0 2023-01-11T22:08:08.7947581Z * DOCTEST : 
/opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::update_in:0, line 259 <- wrt source file 2023-01-11T22:08:08.7954856Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::update_in:0 2023-01-11T22:08:08.7955458Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::get_in:0, line 311 <- wrt source file 2023-01-11T22:08:08.7964343Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::get_in:0 2023-01-11T22:08:08.7964948Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::groupby:0, line 357 <- wrt source file 2023-01-11T22:08:08.7968528Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::groupby:0 2023-01-11T22:08:08.7969139Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::first:0, line 393 <- wrt source file 2023-01-11T22:08:08.7971566Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/unification_tools.py::first:0 2023-01-11T22:08:08.7972334Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/utils.py::transitive_get:0, line 12 <- wrt source file 2023-01-11T22:08:08.7975887Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/utils.py::transitive_get:0 2023-01-11T22:08:08.7976575Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/utils.py::_toposort:0, line 39 <- wrt source file 2023-01-11T22:08:08.7977375Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/utils.py::_toposort:0 2023-01-11T22:08:08.7977949Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/utils.py::reverse_dict:0, line 67 <- wrt source file 2023-01-11T22:08:08.7981023Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/utils.py::reverse_dict:0 2023-01-11T22:08:08.7981686Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/utils.py::freeze:0, line 92 <- wrt source file 2023-01-11T22:08:08.7985024Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/utils.py::freeze:0 2023-01-11T22:08:08.7985594Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/variable.py::variables:0, line 62 <- wrt source file 2023-01-11T22:08:08.7987821Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/variable.py::variables:0 2023-01-11T22:08:08.7988442Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/core.py::dispatch:0, line 18 <- wrt source file 2023-01-11T22:08:08.7991415Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/core.py::dispatch:0 2023-01-11T22:08:08.7992123Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::Dispatcher:0, line 100 <- wrt source file 2023-01-11T22:08:08.7992829Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::Dispatcher:0 2023-01-11T22:08:08.7993542Z * DOCTEST : 
/opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::Dispatcher.register:0, line 124 <- wrt source file 2023-01-11T22:08:08.7994584Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::Dispatcher.register:0 2023-01-11T22:08:08.7995432Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::Dispatcher.add:0, line 176 <- wrt source file 2023-01-11T22:08:08.7996226Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::Dispatcher.add:0 2023-01-11T22:08:08.7997103Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::Dispatcher.dispatch:0, line 288 <- wrt source file 2023-01-11T22:08:08.7997968Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::Dispatcher.dispatch:0 2023-01-11T22:08:08.7998645Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::str_signature:0, line 418 <- wrt source file 2023-01-11T22:08:08.8001007Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::str_signature:0 2023-01-11T22:08:08.8001667Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::expand_tuples:0, line 15 <- wrt source file 2023-01-11T22:08:08.8005078Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::expand_tuples:0 2023-01-11T22:08:08.8005716Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::_toposort:0, line 38 <- wrt source file 2023-01-11T22:08:08.8009117Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::_toposort:0 2023-01-11T22:08:08.8009754Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::reverse_dict:0, line 66 <- wrt source file 2023-01-11T22:08:08.8012837Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::reverse_dict:0 2023-01-11T22:08:08.8013568Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::groupby:0, line 85 <- wrt source file 2023-01-11T22:08:08.8017148Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::groupby:0 2023-01-11T22:08:08.8017766Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::typename:0, line 115 <- wrt source file 2023-01-11T22:08:08.8021002Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::typename:0 2023-01-11T22:08:08.8021811Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/variadic.py::isvariadic:0, line 47 <- wrt source file 2023-01-11T22:08:08.8022765Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/variadic.py::isvariadic:0 2023-01-11T22:08:08.8023419Z * DOCTEST : 
/opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/variadic.py::Variadic:0, line 80 <- wrt source file 2023-01-11T22:08:08.8025156Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/experimental/unification/multipledispatch/variadic.py::Variadic:0 2023-01-11T22:08:08.8026344Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/passes/shape_prop.py::ShapeProp:0, line 76 <- wrt source file 2023-01-11T22:08:08.8026872Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/passes/shape_prop.py::ShapeProp:0 2023-01-11T22:08:08.8028960Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/fx/passes/split_module.py::split_module:0, line 68 <- wrt source file 2023-01-11T22:08:08.8029980Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/fx/passes/split_module.py::split_module:0 2023-01-11T22:08:08.8030899Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/jit/_check.py::AttributeTypeIsSupportedChecker:0, line 35 <- wrt source file 2023-01-11T22:08:08.8031667Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/jit/_check.py::AttributeTypeIsSupportedChecker:0 2023-01-11T22:08:08.8032599Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/jit/_serialization.py::save:0, line 53 <- wrt source file 2023-01-11T22:08:08.8033452Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/jit/_serialization.py::save:0 2023-01-11T22:08:08.8034160Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/jit/_serialization.py::load:0, line 111 <- wrt source file 2023-01-11T22:08:08.8034953Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/jit/_serialization.py::load:0 2023-01-11T22:08:08.8035946Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/jit/_serialization.py::save_jit_module_to_flatbuffer:0, line 235 <- wrt source file 2023-01-11T22:08:08.8036890Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/jit/_serialization.py::save_jit_module_to_flatbuffer:0 2023-01-11T22:08:08.8037576Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/jit/mobile/__init__.py::_load_for_lite_interpreter:0, line 23 <- wrt source file 2023-01-11T22:08:08.8038308Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/jit/mobile/__init__.py::_load_for_lite_interpreter:0 2023-01-11T22:08:08.8039078Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/jit/mobile/__init__.py::_get_model_bytecode_version:0, line 89 <- wrt source file 2023-01-11T22:08:08.8039737Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/jit/mobile/__init__.py::_get_model_bytecode_version:0 2023-01-11T22:08:08.8040378Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/jit/mobile/__init__.py::_get_mobile_model_contained_types:0, line 119 <- wrt source file 2023-01-11T22:08:08.8054719Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/jit/mobile/__init__.py::_get_mobile_model_contained_types:0 2023-01-11T22:08:08.8055396Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/jit/mobile/__init__.py::_get_model_ops_and_info:0, line 199 <- wrt source file 2023-01-11T22:08:08.8055945Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/jit/mobile/__init__.py::_get_model_ops_and_info:0 2023-01-11T22:08:08.8056511Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/masked/maskedtensor/core.py::is_masked_tensor:0, line 22 <- wrt source file 2023-01-11T22:08:08.8057049Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/masked/maskedtensor/core.py::is_masked_tensor:0 
2023-01-11T22:08:08.8057633Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::fractional_max_pool2d_with_indices:0, line 452 <- wrt source file 2023-01-11T22:08:08.8161809Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::fractional_max_pool2d_with_indices:0 2023-01-11T22:08:08.8162702Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::fractional_max_pool3d_with_indices:0, line 553 <- wrt source file 2023-01-11T22:08:09.0025600Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::fractional_max_pool3d_with_indices:0 2023-01-11T22:08:09.0041160Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::gumbel_softmax:0, line 1878 <- wrt source file 2023-01-11T22:08:09.0052146Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::gumbel_softmax:0 2023-01-11T22:08:09.0053400Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::embedding:0, line 2149 <- wrt source file 2023-01-11T22:08:09.0064105Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::embedding:0 2023-01-11T22:08:09.0065077Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::embedding_bag:0, line 2286 <- wrt source file 2023-01-11T22:08:09.0077507Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::embedding_bag:0 2023-01-11T22:08:09.0078535Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::ctc_loss:0, line 2615 <- wrt source file 2023-01-11T22:08:09.0102250Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::ctc_loss:0 2023-01-11T22:08:09.0103413Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::nll_loss:0, line 2681 <- wrt source file 2023-01-11T22:08:09.0110425Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::nll_loss:0 2023-01-11T22:08:09.0111735Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::cross_entropy:0, line 3000 <- wrt source file 2023-01-11T22:08:09.0120823Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::cross_entropy:0 2023-01-11T22:08:09.0121874Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::binary_cross_entropy:0, line 3066 <- wrt source file 2023-01-11T22:08:09.0128745Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::binary_cross_entropy:0 2023-01-11T22:08:09.0129799Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::binary_cross_entropy_with_logits:0, line 3138 <- wrt source file 2023-01-11T22:08:09.0136891Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py::binary_cross_entropy_with_logits:0 2023-01-11T22:08:09.0138264Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/grad.py::conv1d_input:0, line 23 <- wrt source file 2023-01-11T22:08:09.0147728Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/grad.py::conv1d_input:0 2023-01-11T22:08:09.0148875Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/grad.py::conv1d_weight:0, line 53 <- wrt source file 2023-01-11T22:08:09.0154950Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/grad.py::conv1d_weight:0 2023-01-11T22:08:09.0155634Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/grad.py::conv2d_input:0, line 86 <- wrt source file 2023-01-11T22:08:09.0164215Z * SUCCESS: 
/opt/conda/lib/python3.10/site-packages/torch/nn/grad.py::conv2d_input:0 2023-01-11T22:08:09.0164886Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/grad.py::conv2d_weight:0, line 116 <- wrt source file 2023-01-11T22:08:09.0170666Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/grad.py::conv2d_weight:0 2023-01-11T22:08:09.0171360Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/grad.py::conv3d_input:0, line 149 <- wrt source file 2023-01-11T22:08:09.0205088Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/grad.py::conv3d_input:0 2023-01-11T22:08:09.0205992Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/grad.py::conv3d_weight:0, line 179 <- wrt source file 2023-01-11T22:08:09.0230511Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/grad.py::conv3d_weight:0 2023-01-11T22:08:09.0231406Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::calculate_gain:0, line 96 <- wrt source file 2023-01-11T22:08:09.0235216Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::calculate_gain:0 2023-01-11T22:08:09.0235888Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::uniform_:0, line 132 <- wrt source file 2023-01-11T22:08:09.0239521Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::uniform_:0 2023-01-11T22:08:09.0240180Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::normal_:0, line 150 <- wrt source file 2023-01-11T22:08:09.0243954Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::normal_:0 2023-01-11T22:08:09.0244620Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::trunc_normal_:0, line 173 <- wrt source file 2023-01-11T22:08:09.0248966Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::trunc_normal_:0 2023-01-11T22:08:09.0249625Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::constant_:0, line 187 <- wrt source file 2023-01-11T22:08:09.0254914Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::constant_:0 2023-01-11T22:08:09.0256197Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::ones_:0, line 202 <- wrt source file 2023-01-11T22:08:09.0260333Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::ones_:0 2023-01-11T22:08:09.0261689Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::zeros_:0, line 215 <- wrt source file 2023-01-11T22:08:09.0265815Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::zeros_:0 2023-01-11T22:08:09.0266915Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::eye_:0, line 230 <- wrt source file 2023-01-11T22:08:09.0271148Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::eye_:0 2023-01-11T22:08:09.0271727Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::dirac_:0, line 251 <- wrt source file 2023-01-11T22:08:09.0276560Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::dirac_:0 2023-01-11T22:08:09.0277092Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::xavier_uniform_:0, line 320 <- wrt source file 2023-01-11T22:08:09.0280616Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::xavier_uniform_:0 2023-01-11T22:08:09.0281121Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::xavier_normal_:0, line 347 <- wrt source file 2023-01-11T22:08:09.0284459Z * SUCCESS: 
/opt/conda/lib/python3.10/site-packages/torch/nn/init.py::xavier_normal_:0 2023-01-11T22:08:09.0284978Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::kaiming_uniform_:0, line 392 <- wrt source file 2023-01-11T22:08:09.0288576Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::kaiming_uniform_:0 2023-01-11T22:08:09.0289090Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::kaiming_normal_:0, line 441 <- wrt source file 2023-01-11T22:08:09.0292845Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::kaiming_normal_:0 2023-01-11T22:08:09.0293504Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::orthogonal_:0, line 466 <- wrt source file 2023-01-11T22:08:09.0294302Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::orthogonal_:0 2023-01-11T22:08:09.0294818Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::sparse_:0, line 512 <- wrt source file 2023-01-11T22:08:09.0299619Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/init.py::sparse_:0 2023-01-11T22:08:09.0300156Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Threshold:0, line 40 <- wrt source file 2023-01-11T22:08:09.0304892Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Threshold:0 2023-01-11T22:08:09.0305441Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::ReLU:0, line 83 <- wrt source file 2023-01-11T22:08:09.0312288Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::ReLU:0 2023-01-11T22:08:09.0312834Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::RReLU:0, line 142 <- wrt source file 2023-01-11T22:08:09.0317080Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::RReLU:0 2023-01-11T22:08:09.0317701Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Hardtanh:0, line 202 <- wrt source file 2023-01-11T22:08:09.0322156Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Hardtanh:0 2023-01-11T22:08:09.0322924Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::ReLU6:0, line 260 <- wrt source file 2023-01-11T22:08:09.0326614Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::ReLU6:0 2023-01-11T22:08:09.0327261Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Sigmoid:0, line 288 <- wrt source file 2023-01-11T22:08:09.0330797Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Sigmoid:0 2023-01-11T22:08:09.0331532Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Hardsigmoid:0, line 320 <- wrt source file 2023-01-11T22:08:09.0335229Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Hardsigmoid:0 2023-01-11T22:08:09.0336002Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Tanh:0, line 352 <- wrt source file 2023-01-11T22:08:09.0339616Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Tanh:0 2023-01-11T22:08:09.0340293Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::SiLU:0, line 383 <- wrt source file 2023-01-11T22:08:09.0344943Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::SiLU:0 
2023-01-11T22:08:09.0345609Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Mish:0, line 419 <- wrt source file 2023-01-11T22:08:09.0349493Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Mish:0 2023-01-11T22:08:09.0350183Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Hardswish:0, line 461 <- wrt source file 2023-01-11T22:08:09.0353954Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Hardswish:0 2023-01-11T22:08:09.0354612Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::ELU:0, line 502 <- wrt source file 2023-01-11T22:08:09.0358532Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::ELU:0 2023-01-11T22:08:09.0359215Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::CELU:0, line 543 <- wrt source file 2023-01-11T22:08:09.0363001Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::CELU:0 2023-01-11T22:08:09.0363668Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::SELU:0, line 595 <- wrt source file 2023-01-11T22:08:09.0367383Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::SELU:0 2023-01-11T22:08:09.0368074Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::GLU:0, line 631 <- wrt source file 2023-01-11T22:08:09.0372165Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::GLU:0 2023-01-11T22:08:09.0372851Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::GELU:0, line 672 <- wrt source file 2023-01-11T22:08:09.0378517Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::GELU:0 2023-01-11T22:08:09.0379230Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Hardshrink:0, line 714 <- wrt source file 2023-01-11T22:08:09.0383925Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Hardshrink:0 2023-01-11T22:08:09.0384655Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::LeakyReLU:0, line 761 <- wrt source file 2023-01-11T22:08:09.0388825Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::LeakyReLU:0 2023-01-11T22:08:09.0389553Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::LogSigmoid:0, line 796 <- wrt source file 2023-01-11T22:08:09.0393672Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::LogSigmoid:0 2023-01-11T22:08:09.0394384Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Softplus:0, line 827 <- wrt source file 2023-01-11T22:08:09.0398558Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Softplus:0 2023-01-11T22:08:09.0399328Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Softshrink:0, line 869 <- wrt source file 2023-01-11T22:08:09.0403705Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Softshrink:0 2023-01-11T22:08:09.0404537Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::MultiheadAttention:0, line 938 <- wrt source file 2023-01-11T22:08:09.0405283Z * SKIPPED: 
/opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::MultiheadAttention:0 2023-01-11T22:08:09.0405997Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::PReLU:0, line 1274 <- wrt source file 2023-01-11T22:08:09.0410705Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::PReLU:0 2023-01-11T22:08:09.0411420Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Softsign:0, line 1309 <- wrt source file 2023-01-11T22:08:09.0415884Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Softsign:0 2023-01-11T22:08:09.0416597Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Tanhshrink:0, line 1332 <- wrt source file 2023-01-11T22:08:09.0421080Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Tanhshrink:0 2023-01-11T22:08:09.0421799Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Softmin:0, line 1366 <- wrt source file 2023-01-11T22:08:09.0427358Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Softmin:0 2023-01-11T22:08:09.0428089Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Softmax:0, line 1421 <- wrt source file 2023-01-11T22:08:09.0432510Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Softmax:0 2023-01-11T22:08:09.0433241Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Softmax2d:0, line 1461 <- wrt source file 2023-01-11T22:08:09.0437048Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::Softmax2d:0 2023-01-11T22:08:09.0437607Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::LogSoftmax:0, line 1493 <- wrt source file 2023-01-11T22:08:09.0441780Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py::LogSoftmax:0 2023-01-11T22:08:09.0442336Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/batchnorm.py::BatchNorm1d:0, line 290 <- wrt source file 2023-01-11T22:08:09.0450239Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/batchnorm.py::BatchNorm1d:0 2023-01-11T22:08:09.0450967Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/batchnorm.py::BatchNorm2d:0, line 399 <- wrt source file 2023-01-11T22:08:09.0972388Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/batchnorm.py::BatchNorm2d:0 2023-01-11T22:08:09.0973180Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/batchnorm.py::BatchNorm3d:0, line 505 <- wrt source file 2023-01-11T22:08:09.6331674Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/batchnorm.py::BatchNorm3d:0 2023-01-11T22:08:09.6447348Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/batchnorm.py::SyncBatchNorm:0, line 627 <- wrt source file 2023-01-11T22:08:09.6450013Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/batchnorm.py::SyncBatchNorm:0 2023-01-11T22:08:09.6451029Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/batchnorm.py::SyncBatchNorm.convert_sync_batchnorm:0, line 782 <- wrt source file 2023-01-11T22:08:09.6452371Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/batchnorm.py::SyncBatchNorm.convert_sync_batchnorm:0 2023-01-11T22:08:09.6453294Z * DOCTEST : 
/opt/conda/lib/python3.10/site-packages/torch/nn/modules/channelshuffle.py::ChannelShuffle:0, line 17 <- wrt source file 2023-01-11T22:08:09.6471189Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/channelshuffle.py::ChannelShuffle:0 2023-01-11T22:08:09.6473068Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/container.py::Sequential:0, line 63 <- wrt source file 2023-01-11T22:08:09.6473768Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/container.py::Sequential:0 2023-01-11T22:08:09.6474486Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/container.py::ModuleList:0, line 261 <- wrt source file 2023-01-11T22:08:09.6475160Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/container.py::ModuleList:0 2023-01-11T22:08:09.6475900Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/container.py::ModuleDict:0, line 433 <- wrt source file 2023-01-11T22:08:09.6476587Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/container.py::ModuleDict:0 2023-01-11T22:08:09.6477313Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/container.py::ParameterList:0, line 567 <- wrt source file 2023-01-11T22:08:09.6478038Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/container.py::ParameterList:0 2023-01-11T22:08:09.6478755Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/container.py::ParameterDict:0, line 707 <- wrt source file 2023-01-11T22:08:09.6479506Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/container.py::ParameterDict:0 2023-01-11T22:08:09.6480239Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/distance.py::PairwiseDistance:0, line 36 <- wrt source file 2023-01-11T22:08:09.6488838Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/distance.py::PairwiseDistance:0 2023-01-11T22:08:09.6489559Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/distance.py::CosineSimilarity:0, line 72 <- wrt source file 2023-01-11T22:08:09.6500581Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/distance.py::CosineSimilarity:0 2023-01-11T22:08:09.6501328Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/dropout.py::Dropout:0, line 49 <- wrt source file 2023-01-11T22:08:09.6507061Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/dropout.py::Dropout:0 2023-01-11T22:08:09.6507584Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/dropout.py::Dropout1d:0, line 91 <- wrt source file 2023-01-11T22:08:09.6515099Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/dropout.py::Dropout1d:0 2023-01-11T22:08:09.6515636Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/dropout.py::Dropout2d:0, line 140 <- wrt source file 2023-01-11T22:08:09.6569541Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/dropout.py::Dropout2d:0 2023-01-11T22:08:09.6570256Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/dropout.py::Dropout3d:0, line 182 <- wrt source file 2023-01-11T22:08:09.6764585Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/dropout.py::Dropout3d:0 2023-01-11T22:08:09.6765319Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/dropout.py::AlphaDropout:0, line 225 <- wrt source file 2023-01-11T22:08:09.6770767Z * SUCCESS: 
/opt/conda/lib/python3.10/site-packages/torch/nn/modules/dropout.py::AlphaDropout:0 2023-01-11T22:08:09.6771562Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/dropout.py::FeatureAlphaDropout:0, line 272 <- wrt source file 2023-01-11T22:08:09.6969651Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/dropout.py::FeatureAlphaDropout:0 2023-01-11T22:08:09.6970411Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/flatten.py::Flatten:0, line 24 <- wrt source file 2023-01-11T22:08:09.6977533Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/flatten.py::Flatten:0 2023-01-11T22:08:09.6978244Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/flatten.py::Unflatten:0, line 76 <- wrt source file 2023-01-11T22:08:09.6996899Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/flatten.py::Unflatten:0 2023-01-11T22:08:09.6997865Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/fold.py::Fold:0, line 111 <- wrt source file 2023-01-11T22:08:09.7004516Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/fold.py::Fold:0 2023-01-11T22:08:09.7005447Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/fold.py::Unfold:0, line 253 <- wrt source file 2023-01-11T22:08:09.7020518Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/fold.py::Unfold:0 2023-01-11T22:08:09.7021903Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/instancenorm.py::InstanceNorm1d:0, line 135 <- wrt source file 2023-01-11T22:08:09.7042577Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/instancenorm.py::InstanceNorm1d:0 2023-01-11T22:08:09.7043633Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/instancenorm.py::InstanceNorm2d:0, line 251 <- wrt source file 2023-01-11T22:08:09.7520494Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/instancenorm.py::InstanceNorm2d:0 2023-01-11T22:08:09.7521521Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/instancenorm.py::InstanceNorm3d:0, line 367 <- wrt source file 2023-01-11T22:08:10.2876339Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/instancenorm.py::InstanceNorm3d:0 2023-01-11T22:08:10.2992346Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/lazy.py::LazyModuleMixin:0, line 77 <- wrt source file 2023-01-11T22:08:10.2997595Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/lazy.py::LazyModuleMixin:0 2023-01-11T22:08:10.2998467Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/linear.py::Identity:0, line 33 <- wrt source file 2023-01-11T22:08:10.3006893Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/linear.py::Identity:0 2023-01-11T22:08:10.3008120Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/linear.py::Linear:0, line 78 <- wrt source file 2023-01-11T22:08:10.3017528Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/linear.py::Linear:0 2023-01-11T22:08:10.3018666Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/linear.py::Bilinear:0, line 164 <- wrt source file 2023-01-11T22:08:10.3042756Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/linear.py::Bilinear:0 2023-01-11T22:08:10.3044112Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::L1Loss:0, line 88 <- wrt source file 2023-01-11T22:08:10.3051785Z 
* SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::L1Loss:0 2023-01-11T22:08:10.3052646Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::NLLLoss:0, line 184 <- wrt source file 2023-01-11T22:08:10.3076975Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::NLLLoss:0 2023-01-11T22:08:10.3077926Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::PoissonNLLLoss:0, line 271 <- wrt source file 2023-01-11T22:08:10.3085870Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::PoissonNLLLoss:0 2023-01-11T22:08:10.3086623Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::GaussianNLLLoss:0, line 343 <- wrt source file 2023-01-11T22:08:10.3102658Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::GaussianNLLLoss:0 2023-01-11T22:08:10.3103362Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::KLDivLoss:0, line 451 <- wrt source file 2023-01-11T22:08:10.3113268Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::KLDivLoss:0 2023-01-11T22:08:10.3113957Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::MSELoss:0, line 523 <- wrt source file 2023-01-11T22:08:10.3120958Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::MSELoss:0 2023-01-11T22:08:10.3121459Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::BCELoss:0, line 605 <- wrt source file 2023-01-11T22:08:10.3128392Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::BCELoss:0 2023-01-11T22:08:10.3128927Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::BCEWithLogitsLoss:0, line 668 <- wrt source file 2023-01-11T22:08:10.3141462Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::BCEWithLogitsLoss:0 2023-01-11T22:08:10.3142048Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::MultiLabelMarginLoss:0, line 831 <- wrt source file 2023-01-11T22:08:10.3151549Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::MultiLabelMarginLoss:0 2023-01-11T22:08:10.3152113Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::CrossEntropyLoss:0, line 1149 <- wrt source file 2023-01-11T22:08:10.3161853Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::CrossEntropyLoss:0 2023-01-11T22:08:10.3162557Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::MarginRankingLoss:0, line 1317 <- wrt source file 2023-01-11T22:08:10.3169856Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::MarginRankingLoss:0 2023-01-11T22:08:10.3170597Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::MultiMarginLoss:0, line 1388 <- wrt source file 2023-01-11T22:08:10.3179176Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::MultiMarginLoss:0 2023-01-11T22:08:10.3179905Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::TripletMarginLoss:0, line 1468 <- wrt source file 2023-01-11T22:08:10.3195516Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::TripletMarginLoss:0 2023-01-11T22:08:10.3196282Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::TripletMarginWithDistanceLoss:0, line 1559 <- wrt source 
file 2023-01-11T22:08:10.3228821Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::TripletMarginWithDistanceLoss:0 2023-01-11T22:08:10.3229571Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::CTCLoss:0, line 1670 <- wrt source file 2023-01-11T22:08:10.3263144Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/loss.py::CTCLoss:0 2023-01-11T22:08:10.3263896Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.register_buffer:0, line 491 <- wrt source file 2023-01-11T22:08:10.3264636Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.register_buffer:0 2023-01-11T22:08:10.3265368Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.apply:0, line 847 <- wrt source file 2023-01-11T22:08:10.3280428Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.apply:0 2023-01-11T22:08:10.3281157Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.to:0, line 1072 <- wrt source file 2023-01-11T22:08:10.3289621Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.to:0 2023-01-11T22:08:10.3290350Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.state_dict:0, line 1763 <- wrt source file 2023-01-11T22:08:10.3291059Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.state_dict:0 2023-01-11T22:08:10.3291781Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.parameters:0, line 2050 <- wrt source file 2023-01-11T22:08:10.3292490Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.parameters:0 2023-01-11T22:08:10.3293246Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.named_parameters:0, line 2082 <- wrt source file 2023-01-11T22:08:10.3293962Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.named_parameters:0 2023-01-11T22:08:10.3294688Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.buffers:0, line 2107 <- wrt source file 2023-01-11T22:08:10.3295379Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.buffers:0 2023-01-11T22:08:10.3296098Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.named_buffers:0, line 2133 <- wrt source file 2023-01-11T22:08:10.3296804Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.named_buffers:0 2023-01-11T22:08:10.3297526Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.named_children:0, line 2163 <- wrt source file 2023-01-11T22:08:10.3298232Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.named_children:0 2023-01-11T22:08:10.3298965Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.modules:0, line 2187 <- wrt source file 2023-01-11T22:08:10.3310559Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.modules:0 2023-01-11T22:08:10.3311193Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.named_modules:0, line 2221 <- wrt source file 2023-01-11T22:08:10.3316843Z * SUCCESS: 
/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py::Module.named_modules:0 2023-01-11T22:08:10.3317466Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/normalization.py::LocalResponseNorm:0, line 34 <- wrt source file 2023-01-11T22:08:10.3384476Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/normalization.py::LocalResponseNorm:0 2023-01-11T22:08:10.3385375Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/normalization.py::LayerNorm:0, line 140 <- wrt source file 2023-01-11T22:08:10.3395087Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/normalization.py::LayerNorm:0 2023-01-11T22:08:10.3395820Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/normalization.py::GroupNorm:0, line 230 <- wrt source file 2023-01-11T22:08:10.3404890Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/normalization.py::GroupNorm:0 2023-01-11T22:08:10.3405612Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ConstantPad1d:0, line 48 <- wrt source file 2023-01-11T22:08:10.3414083Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ConstantPad1d:0 2023-01-11T22:08:10.3414815Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ConstantPad2d:0, line 101 <- wrt source file 2023-01-11T22:08:10.3420868Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ConstantPad2d:0 2023-01-11T22:08:10.3421589Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ConstantPad3d:0, line 157 <- wrt source file 2023-01-11T22:08:10.3470436Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ConstantPad3d:0 2023-01-11T22:08:10.3471174Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ReflectionPad1d:0, line 201 <- wrt source file 2023-01-11T22:08:10.3477381Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ReflectionPad1d:0 2023-01-11T22:08:10.3478113Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ReflectionPad2d:0, line 244 <- wrt source file 2023-01-11T22:08:10.3485131Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ReflectionPad2d:0 2023-01-11T22:08:10.3486519Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ReflectionPad3d:0, line 301 <- wrt source file 2023-01-11T22:08:10.3490709Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ReflectionPad3d:0 2023-01-11T22:08:10.3491930Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ReplicationPad1d:0, line 358 <- wrt source file 2023-01-11T22:08:10.3497978Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ReplicationPad1d:0 2023-01-11T22:08:10.3498752Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ReplicationPad2d:0, line 401 <- wrt source file 2023-01-11T22:08:10.3505621Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ReplicationPad2d:0 2023-01-11T22:08:10.3506380Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ReplicationPad3d:0, line 458 <- wrt source file 2023-01-11T22:08:11.5091406Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ReplicationPad3d:0 2023-01-11T22:08:11.5319715Z * DOCTEST : 
/opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ZeroPad2d:0, line 494 <- wrt source file 2023-01-11T22:08:11.5327605Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/padding.py::ZeroPad2d:0 2023-01-11T22:08:11.5328185Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pixelshuffle.py::PixelShuffle:0, line 36 <- wrt source file 2023-01-11T22:08:11.5335642Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pixelshuffle.py::PixelShuffle:0 2023-01-11T22:08:11.5336702Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pixelshuffle.py::PixelUnshuffle:0, line 86 <- wrt source file 2023-01-11T22:08:11.5342260Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pixelshuffle.py::PixelUnshuffle:0 2023-01-11T22:08:11.5343470Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::MaxPool1d:0, line 76 <- wrt source file 2023-01-11T22:08:11.5350624Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::MaxPool1d:0 2023-01-11T22:08:11.5351624Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::MaxPool2d:0, line 148 <- wrt source file 2023-01-11T22:08:11.5445266Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::MaxPool2d:0 2023-01-11T22:08:11.5446221Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::MaxPool3d:0, line 226 <- wrt source file 2023-01-11T22:08:11.9548520Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::MaxPool3d:0 2023-01-11T22:08:11.9591769Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::MaxUnpool1d:0, line 293 <- wrt source file 2023-01-11T22:08:11.9606302Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::MaxUnpool1d:0 2023-01-11T22:08:11.9607074Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::MaxUnpool2d:0, line 366 <- wrt source file 2023-01-11T22:08:11.9627869Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::MaxUnpool2d:0 2023-01-11T22:08:11.9628610Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::MaxUnpool3d:0, line 451 <- wrt source file 2023-01-11T22:08:12.1145992Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::MaxUnpool3d:0 2023-01-11T22:08:12.1146605Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::AvgPool1d:0, line 524 <- wrt source file 2023-01-11T22:08:12.1157067Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::AvgPool1d:0 2023-01-11T22:08:12.1157803Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::AvgPool2d:0, line 600 <- wrt source file 2023-01-11T22:08:12.1243867Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::AvgPool2d:0 2023-01-11T22:08:12.1244620Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::AvgPool3d:0, line 686 <- wrt source file 2023-01-11T22:08:12.4942304Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::AvgPool3d:0 2023-01-11T22:08:12.4984187Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::FractionalMaxPool2d:0, line 749 <- wrt source file 2023-01-11T22:08:12.5080051Z * SUCCESS: 
/opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::FractionalMaxPool2d:0 2023-01-11T22:08:12.5080834Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::FractionalMaxPool3d:0, line 819 <- wrt source file 2023-01-11T22:08:12.6483002Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::FractionalMaxPool3d:0 2023-01-11T22:08:12.6483786Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::LPPool1d:0, line 909 <- wrt source file 2023-01-11T22:08:12.6494607Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::LPPool1d:0 2023-01-11T22:08:12.6495321Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::LPPool2d:0, line 960 <- wrt source file 2023-01-11T22:08:12.6603091Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::LPPool2d:0 2023-01-11T22:08:12.6603844Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::AdaptiveMaxPool1d:0, line 1011 <- wrt source file 2023-01-11T22:08:12.6608606Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::AdaptiveMaxPool1d:0 2023-01-11T22:08:12.6609370Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::AdaptiveMaxPool2d:0, line 1045 <- wrt source file 2023-01-11T22:08:12.6619945Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::AdaptiveMaxPool2d:0 2023-01-11T22:08:12.6620690Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::AdaptiveMaxPool3d:0, line 1088 <- wrt source file 2023-01-11T22:08:12.6696770Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::AdaptiveMaxPool3d:0 2023-01-11T22:08:12.6697553Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::AdaptiveAvgPool1d:0, line 1135 <- wrt source file 2023-01-11T22:08:12.6702032Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::AdaptiveAvgPool1d:0 2023-01-11T22:08:12.6702915Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::AdaptiveAvgPool2d:0, line 1166 <- wrt source file 2023-01-11T22:08:12.6713487Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::AdaptiveAvgPool2d:0 2023-01-11T22:08:12.6714247Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::AdaptiveAvgPool3d:0, line 1205 <- wrt source file 2023-01-11T22:08:12.6749544Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/pooling.py::AdaptiveAvgPool3d:0 2023-01-11T22:08:12.6750300Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/rnn.py::RNN:0, line 436 <- wrt source file 2023-01-11T22:08:12.6762447Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/rnn.py::RNN:0 2023-01-11T22:08:12.6763234Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/rnn.py::LSTM:0, line 702 <- wrt source file 2023-01-11T22:08:12.6781491Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/rnn.py::LSTM:0 2023-01-11T22:08:12.6782742Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/rnn.py::GRU:0, line 933 <- wrt source file 2023-01-11T22:08:12.6800816Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/rnn.py::GRU:0 2023-01-11T22:08:12.6802419Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/rnn.py::RNNCell:0, line 1106 <- wrt 
source file 2023-01-11T22:08:12.6813798Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/rnn.py::RNNCell:0 2023-01-11T22:08:12.6814356Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/rnn.py::LSTMCell:0, line 1207 <- wrt source file 2023-01-11T22:08:12.6825988Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/rnn.py::LSTMCell:0 2023-01-11T22:08:12.6826522Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/rnn.py::GRUCell:0, line 1300 <- wrt source file 2023-01-11T22:08:12.6841698Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/rnn.py::GRUCell:0 2023-01-11T22:08:12.6842700Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/sparse.py::Embedding:0, line 67 <- wrt source file 2023-01-11T22:08:12.6857562Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/sparse.py::Embedding:0 2023-01-11T22:08:12.6858311Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/sparse.py::Embedding.from_pretrained:0, line 200 <- wrt source file 2023-01-11T22:08:12.6864064Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/sparse.py::Embedding.from_pretrained:0 2023-01-11T22:08:12.6864626Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/sparse.py::EmbeddingBag:0, line 278 <- wrt source file 2023-01-11T22:08:12.6881427Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/sparse.py::EmbeddingBag:0 2023-01-11T22:08:12.6882174Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/sparse.py::EmbeddingBag.from_pretrained:0, line 429 <- wrt source file 2023-01-11T22:08:12.6888937Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/sparse.py::EmbeddingBag.from_pretrained:0 2023-01-11T22:08:12.6889701Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/transformer.py::Transformer:0, line 42 <- wrt source file 2023-01-11T22:08:13.3906248Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/transformer.py::Transformer:0 2023-01-11T22:08:13.3920616Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/transformer.py::Transformer.forward:0, line 134 <- wrt source file 2023-01-11T22:08:13.3921607Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/transformer.py::Transformer.forward:0 2023-01-11T22:08:13.3922386Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/transformer.py::TransformerEncoder:0, line 181 <- wrt source file 2023-01-11T22:08:13.4599725Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/transformer.py::TransformerEncoder:0 2023-01-11T22:08:13.4604019Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/transformer.py::TransformerDecoder:0, line 325 <- wrt source file 2023-01-11T22:08:13.5972786Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/transformer.py::TransformerDecoder:0 2023-01-11T22:08:13.5979648Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/transformer.py::TransformerEncoderLayer:0, line 391 <- wrt source file 2023-01-11T22:08:13.6242792Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/transformer.py::TransformerEncoderLayer:0 2023-01-11T22:08:13.6243639Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/transformer.py::TransformerDecoderLayer:0, line 608 <- wrt source file 2023-01-11T22:08:13.6672649Z * SUCCESS: 
/opt/conda/lib/python3.10/site-packages/torch/nn/modules/transformer.py::TransformerDecoderLayer:0 2023-01-11T22:08:13.6673471Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/upsampling.py::Upsample:0, line 74 <- wrt source file 2023-01-11T22:08:13.6699917Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/upsampling.py::Upsample:0 2023-01-11T22:08:13.6700661Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/upsampling.py::UpsamplingNearest2d:0, line 196 <- wrt source file 2023-01-11T22:08:13.6714088Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/upsampling.py::UpsamplingNearest2d:0 2023-01-11T22:08:13.6714713Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/modules/upsampling.py::UpsamplingBilinear2d:0, line 242 <- wrt source file 2023-01-11T22:08:13.6723701Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/modules/upsampling.py::UpsamplingBilinear2d:0 2023-01-11T22:08:13.6724752Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py::DataParallel:0, line 116 <- wrt source file 2023-01-11T22:08:13.6725824Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py::DataParallel:0 2023-01-11T22:08:13.6726752Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel:0, line 534 <- wrt source file 2023-01-11T22:08:13.6727498Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel:0 2023-01-11T22:08:13.6728376Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel.no_sync:0, line 1051 <- wrt source file 2023-01-11T22:08:13.6729160Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel.no_sync:0 2023-01-11T22:08:13.6730047Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel.join:0, line 1377 <- wrt source file 2023-01-11T22:08:13.6730686Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel.join:0 2023-01-11T22:08:13.6731348Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel.register_comm_hook:0, line 1550 <- wrt source file 2023-01-11T22:08:13.6731990Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel.register_comm_hook:0 2023-01-11T22:08:13.6732660Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel.register_comm_hook:1, line 1560 <- wrt source file 2023-01-11T22:08:13.6733309Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel.register_comm_hook:1 2023-01-11T22:08:13.6733981Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel._register_builtin_comm_hook:0, line 1594 <- wrt source file 2023-01-11T22:08:13.6734631Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel._register_builtin_comm_hook:0 2023-01-11T22:08:13.6735348Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel._register_fused_optim:0, line 1652 <- wrt source file 2023-01-11T22:08:13.6736003Z * SKIPPED: 
/opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel._register_fused_optim:0 2023-01-11T22:08:13.6736626Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/_per_sample_grad.py::call_for_per_sample_grads:0, line 32 <- wrt source file 2023-01-11T22:08:13.6737194Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/_per_sample_grad.py::call_for_per_sample_grads:0 2023-01-11T22:08:13.6737836Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/init.py::skip_init:0, line 30 <- wrt source file 2023-01-11T22:08:13.6742660Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/init.py::skip_init:0 2023-01-11T22:08:13.6743350Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/memory_format.py::convert_conv2d_weight_memory_format:0, line 54 <- wrt source file 2023-01-11T22:08:13.6744018Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/memory_format.py::convert_conv2d_weight_memory_format:0 2023-01-11T22:08:13.6744621Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/parametrizations.py::orthogonal:0, line 245 <- wrt source file 2023-01-11T22:08:13.6745651Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/parametrizations.py::orthogonal:0 2023-01-11T22:08:13.6746439Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/parametrizations.py::spectral_norm:0, line 462 <- wrt source file 2023-01-11T22:08:13.6747383Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/parametrizations.py::spectral_norm:0 2023-01-11T22:08:13.6748002Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/parametrize.py::register_parametrization:0, line 463 <- wrt source file 2023-01-11T22:08:13.6753192Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/parametrize.py::register_parametrization:0 2023-01-11T22:08:13.6754031Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/prune.py::identity:0, line 846 <- wrt source file 2023-01-11T22:08:13.6754758Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/prune.py::identity:0 2023-01-11T22:08:13.6755400Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/prune.py::random_unstructured:0, line 880 <- wrt source file 2023-01-11T22:08:13.6756140Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/prune.py::random_unstructured:0 2023-01-11T22:08:13.6756758Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/prune.py::l1_unstructured:0, line 921 <- wrt source file 2023-01-11T22:08:13.6757414Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/prune.py::l1_unstructured:0 2023-01-11T22:08:13.6758146Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/prune.py::random_structured:0, line 959 <- wrt source file 2023-01-11T22:08:13.6758675Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/prune.py::random_structured:0 2023-01-11T22:08:13.6759267Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/prune.py::ln_structured:0, line 1005 <- wrt source file 2023-01-11T22:08:13.6771277Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/prune.py::ln_structured:0 2023-01-11T22:08:13.6771893Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/prune.py::global_unstructured:0, line 1058 <- wrt source file 2023-01-11T22:08:13.6788083Z * SUCCESS: 
/opt/conda/lib/python3.10/site-packages/torch/nn/utils/prune.py::global_unstructured:0 2023-01-11T22:08:13.6788692Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/prune.py::custom_from_mask:0, line 1160 <- wrt source file 2023-01-11T22:08:13.6798236Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/prune.py::custom_from_mask:0 2023-01-11T22:08:13.6798814Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/prune.py::remove:0, line 1188 <- wrt source file 2023-01-11T22:08:13.6805271Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/prune.py::remove:0 2023-01-11T22:08:13.6805959Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/prune.py::is_pruned:0, line 1215 <- wrt source file 2023-01-11T22:08:13.6814795Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/prune.py::is_pruned:0 2023-01-11T22:08:13.6815332Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/rnn.py::pad_packed_sequence:0, line 282 <- wrt source file 2023-01-11T22:08:13.6831004Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/rnn.py::pad_packed_sequence:0 2023-01-11T22:08:13.6831521Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/rnn.py::pad_sequence:0, line 359 <- wrt source file 2023-01-11T22:08:13.6837643Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/rnn.py::pad_sequence:0 2023-01-11T22:08:13.6838274Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/rnn.py::unpad_sequence:0, line 412 <- wrt source file 2023-01-11T22:08:13.6852872Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/rnn.py::unpad_sequence:0 2023-01-11T22:08:13.6853570Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/rnn.py::pack_sequence:0, line 467 <- wrt source file 2023-01-11T22:08:13.6861305Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/rnn.py::pack_sequence:0 2023-01-11T22:08:13.6862013Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/rnn.py::unpack_sequence:0, line 495 <- wrt source file 2023-01-11T22:08:13.6879997Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/rnn.py::unpack_sequence:0 2023-01-11T22:08:13.6881014Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/spectral_norm.py::spectral_norm:0, line 267 <- wrt source file 2023-01-11T22:08:13.6888519Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/spectral_norm.py::spectral_norm:0 2023-01-11T22:08:13.6889596Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/spectral_norm.py::remove_spectral_norm:0, line 294 <- wrt source file 2023-01-11T22:08:13.6896811Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/spectral_norm.py::remove_spectral_norm:0 2023-01-11T22:08:13.6898014Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/stateless.py::functional_call:0, line 123 <- wrt source file 2023-01-11T22:08:13.6901980Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/stateless.py::functional_call:0 2023-01-11T22:08:13.6903195Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py::weight_norm:0, line 99 <- wrt source file 2023-01-11T22:08:13.6910940Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py::weight_norm:0 2023-01-11T22:08:13.6911982Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py::remove_weight_norm:0, line 121 <- wrt 
source file 2023-01-11T22:08:13.6917884Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py::remove_weight_norm:0 2023-01-11T22:08:13.6919286Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/_expanded_weights/conv_utils.py::unfold3d:0, line 203 <- wrt source file 2023-01-11T22:08:13.6920593Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/_expanded_weights/conv_utils.py::unfold3d:0 2023-01-11T22:08:13.6921967Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/nn/utils/_expanded_weights/expanded_weights_utils.py::sum_over_all_but_batch_and_last_n:0, line 108 <- wrt source file 2023-01-11T22:08:13.6943573Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/nn/utils/_expanded_weights/expanded_weights_utils.py::sum_over_all_but_batch_and_last_n:0 2023-01-11T22:08:13.6944783Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/onnx/_type_utils.py::JitScalarType:0, line 66 <- wrt source file 2023-01-11T22:08:13.6945822Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/onnx/_type_utils.py::JitScalarType:0 2023-01-11T22:08:13.6946815Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/onnx/verification.py::find_mismatch:0, line 1746 <- wrt source file 2023-01-11T22:08:13.6947802Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/onnx/verification.py::find_mismatch:0 2023-01-11T22:08:13.6948863Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/engine.py::DiagnosticEngine:0, line 20 <- wrt source file 2023-01-11T22:08:13.6950141Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/engine.py::DiagnosticEngine:0 2023-01-11T22:08:13.6951164Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::LambdaLR:0, line 200 <- wrt source file 2023-01-11T22:08:13.6952486Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::LambdaLR:0 2023-01-11T22:08:13.6953595Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::MultiplicativeLR:0, line 286 <- wrt source file 2023-01-11T22:08:13.6954769Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::MultiplicativeLR:0 2023-01-11T22:08:13.6956119Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::StepLR:0, line 369 <- wrt source file 2023-01-11T22:08:13.6957034Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::StepLR:0 2023-01-11T22:08:13.6958007Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::MultiStepLR:0, line 418 <- wrt source file 2023-01-11T22:08:13.6958978Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::MultiStepLR:0 2023-01-11T22:08:13.6960021Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::ConstantLR:0, line 467 <- wrt source file 2023-01-11T22:08:13.6960928Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::ConstantLR:0 2023-01-11T22:08:13.6961900Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::LinearLR:0, line 529 <- wrt source file 2023-01-11T22:08:13.6962827Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::LinearLR:0 2023-01-11T22:08:13.6963752Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::SequentialLR:0, line 621 <- wrt source file 2023-01-11T22:08:13.6964711Z * SKIPPED: 
/opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::SequentialLR:0 2023-01-11T22:08:13.6965684Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::PolynomialLR:0, line 729 <- wrt source file 2023-01-11T22:08:13.6966612Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::PolynomialLR:0 2023-01-11T22:08:13.6967688Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::ChainedScheduler:0, line 849 <- wrt source file 2023-01-11T22:08:13.6968631Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::ChainedScheduler:0 2023-01-11T22:08:13.6969638Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::ReduceLROnPlateau:0, line 953 <- wrt source file 2023-01-11T22:08:13.6970607Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::ReduceLROnPlateau:0 2023-01-11T22:08:13.6971561Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::CyclicLR:0, line 1168 <- wrt source file 2023-01-11T22:08:13.6972540Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::CyclicLR:0 2023-01-11T22:08:13.6973627Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::CosineAnnealingWarmRestarts.step:0, line 1389 <- wrt source file 2023-01-11T22:08:13.6974742Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::CosineAnnealingWarmRestarts.step:0 2023-01-11T22:08:13.6975926Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::CosineAnnealingWarmRestarts.step:1, line 1405 <- wrt source file 2023-01-11T22:08:13.6977387Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::CosineAnnealingWarmRestarts.step:1 2023-01-11T22:08:13.6978652Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::OneCycleLR:0, line 1547 <- wrt source file 2023-01-11T22:08:13.6979473Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/optim/lr_scheduler.py::OneCycleLR:0 2023-01-11T22:08:13.6980319Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/optim/sgd.py::SGD:0, line 58 <- wrt source file 2023-01-11T22:08:13.6981176Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/optim/sgd.py::SGD:0 2023-01-11T22:08:13.6982129Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/optim/swa_utils.py::AveragedModel:0, line 38 <- wrt source file 2023-01-11T22:08:13.6983247Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/optim/swa_utils.py::AveragedModel:0 2023-01-11T22:08:13.6984176Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/optim/swa_utils.py::AveragedModel:1, line 64 <- wrt source file 2023-01-11T22:08:13.6985050Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/optim/swa_utils.py::AveragedModel:1 2023-01-11T22:08:13.6985894Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/optim/swa_utils.py::update_bn:0, line 161 <- wrt source file 2023-01-11T22:08:13.6986693Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/optim/swa_utils.py::update_bn:0 2023-01-11T22:08:13.6987535Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/optim/swa_utils.py::SWALR:0, line 222 <- wrt source file 2023-01-11T22:08:13.6988414Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/optim/swa_utils.py::SWALR:0 2023-01-11T22:08:13.6989394Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/package/glob_group.py::GlobGroup:0, 
line 19 <- wrt source file 2023-01-11T22:08:13.6990296Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/package/glob_group.py::GlobGroup:0 2023-01-11T22:08:13.6991170Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/profiler/profiler.py::profile:0, line 363 <- wrt source file 2023-01-11T22:08:13.6992062Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/profiler/profiler.py::profile:0 2023-01-11T22:08:13.6992941Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/testing/_comparison.py::assert_close:0, line 1395 <- wrt source file 2023-01-11T22:08:13.7032184Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/testing/_comparison.py::assert_close:0 2023-01-11T22:08:13.7033201Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/testing/_creation.py::make_tensor:0, line 93 <- wrt source file 2023-01-11T22:08:13.7034116Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/testing/_creation.py::make_tensor:0 2023-01-11T22:08:13.7035293Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py::parametrize:0, line 305 <- wrt source file 2023-01-11T22:08:13.7036309Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py::parametrize:0 2023-01-11T22:08:13.7037369Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py::random_symmetric_psd_matrix:0, line 3447 <- wrt source file 2023-01-11T22:08:13.7038460Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py::random_symmetric_psd_matrix:0 2023-01-11T22:08:13.7039652Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py::random_hermitian_psd_matrix:0, line 3461 <- wrt source file 2023-01-11T22:08:13.7040724Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py::random_hermitian_psd_matrix:0 2023-01-11T22:08:13.7041935Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py::random_hermitian_pd_matrix:0, line 3491 <- wrt source file 2023-01-11T22:08:13.7043008Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py::random_hermitian_pd_matrix:0 2023-01-11T22:08:13.7044142Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/distributed/_tensor/common_dtensor.py::skip_unless_torch_gpu:0, line 57 <- wrt source file 2023-01-11T22:08:13.7045319Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/distributed/_tensor/common_dtensor.py::skip_unless_torch_gpu:0 2023-01-11T22:08:13.7046446Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/backend_registration.py::rename_privateuse1_backend:0, line 24 <- wrt source file 2023-01-11T22:08:13.7047505Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/backend_registration.py::rename_privateuse1_backend:0 2023-01-11T22:08:13.7048523Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py::checkpoint_sequential:0, line 306 <- wrt source file 2023-01-11T22:08:13.7049419Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py::checkpoint_sequential:0 2023-01-11T22:08:13.7050401Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py::CppExtension:0, line 912 <- wrt source file 2023-01-11T22:08:13.7051196Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py::CppExtension:0 2023-01-11T22:08:13.7052131Z * 
DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py::CUDAExtension:0, line 960 <- wrt source file 2023-01-11T22:08:13.7053003Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py::CUDAExtension:0 2023-01-11T22:08:13.7053973Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py::CUDAExtension:1, line 1037 <- wrt source file 2023-01-11T22:08:13.7054882Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py::CUDAExtension:1 2023-01-11T22:08:13.7055848Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py::load:0, line 1273 <- wrt source file 2023-01-11T22:08:13.7056729Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py::load:0 2023-01-11T22:08:13.7057690Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py::load_inline:0, line 1364 <- wrt source file 2023-01-11T22:08:13.7058624Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py::load_inline:0 2023-01-11T22:08:13.7059544Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/dlpack.py::from_dlpack:0, line 71 <- wrt source file 2023-01-11T22:08:13.7067398Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/utils/dlpack.py::from_dlpack:0 2023-01-11T22:08:13.7068473Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/throughput_benchmark.py::ThroughputBenchmark:0, line 77 <- wrt source file 2023-01-11T22:08:13.7069573Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/throughput_benchmark.py::ThroughputBenchmark:0 2023-01-11T22:08:13.7070630Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataset.py::IterableDataset:0, line 84 <- wrt source file 2023-01-11T22:08:13.7077594Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataset.py::IterableDataset:0 2023-01-11T22:08:13.7078641Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataset.py::random_split:0, line 320 <- wrt source file 2023-01-11T22:08:13.7079792Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataset.py::random_split:0 2023-01-11T22:08:13.7080821Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/distributed.py::DistributedSampler:0, line 51 <- wrt source file 2023-01-11T22:08:13.7081820Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/distributed.py::DistributedSampler:0 2023-01-11T22:08:13.7082730Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/sampler.py::WeightedRandomSampler:0, line 172 <- wrt source file 2023-01-11T22:08:13.7088188Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/utils/data/sampler.py::WeightedRandomSampler:0 2023-01-11T22:08:13.7089416Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/sampler.py::BatchSampler:0, line 220 <- wrt source file 2023-01-11T22:08:13.7094771Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/utils/data/sampler.py::BatchSampler:0 2023-01-11T22:08:13.7095831Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py::default_convert:0, line 36 <- wrt source file 2023-01-11T22:08:13.7099358Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py::default_convert:0 2023-01-11T22:08:13.7100412Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py::collate:0, line 102 <- wrt source file 
2023-01-11T22:08:13.7105990Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py::collate:0 2023-01-11T22:08:13.7107061Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py::default_collate:0, line 231 <- wrt source file 2023-01-11T22:08:13.7112842Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py::default_collate:0 2023-01-11T22:08:13.7117236Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/datapipe.py::IterDataPipe:0, line 84 <- wrt source file 2023-01-11T22:08:13.7118262Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/datapipe.py::IterDataPipe:0 2023-01-11T22:08:13.7119420Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/datapipe.py::MapDataPipe:0, line 232 <- wrt source file 2023-01-11T22:08:13.7120466Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/datapipe.py::MapDataPipe:0 2023-01-11T22:08:13.7121546Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/callable.py::MapperIterDataPipe:0, line 46 <- wrt source file 2023-01-11T22:08:13.7122930Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/callable.py::MapperIterDataPipe:0 2023-01-11T22:08:13.7124248Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/callable.py::CollatorIterDataPipe:0, line 187 <- wrt source file 2023-01-11T22:08:13.7126575Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/callable.py::CollatorIterDataPipe:0 2023-01-11T22:08:13.7127606Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/combinatorics.py::ShufflerIterDataPipe:0, line 80 <- wrt source file 2023-01-11T22:08:13.7128633Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/combinatorics.py::ShufflerIterDataPipe:0 2023-01-11T22:08:13.7129760Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/combining.py::ConcaterIterDataPipe:0, line 33 <- wrt source file 2023-01-11T22:08:13.7154407Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/combining.py::ConcaterIterDataPipe:0 2023-01-11T22:08:13.7155583Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/combining.py::ForkerIterDataPipe:0, line 75 <- wrt source file 2023-01-11T22:08:13.7156710Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/combining.py::ForkerIterDataPipe:0 2023-01-11T22:08:13.7157860Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/combining.py::_ChildDataPipe:0, line 250 <- wrt source file 2023-01-11T22:08:13.7158964Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/combining.py::_ChildDataPipe:0 2023-01-11T22:08:13.7160174Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/combining.py::DemultiplexerIterDataPipe:0, line 329 <- wrt source file 2023-01-11T22:08:13.7161353Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/combining.py::DemultiplexerIterDataPipe:0 2023-01-11T22:08:13.7162508Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/combining.py::MultiplexerIterDataPipe:0, line 507 <- wrt source file 2023-01-11T22:08:13.7163912Z * SKIPPED: 
/opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/combining.py::MultiplexerIterDataPipe:0 2023-01-11T22:08:13.7164751Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/combining.py::ZipperIterDataPipe:0, line 572 <- wrt source file 2023-01-11T22:08:13.7165810Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/combining.py::ZipperIterDataPipe:0 2023-01-11T22:08:13.7166938Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/filelister.py::FileListerIterDataPipe:0, line 29 <- wrt source file 2023-01-11T22:08:13.7167868Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/filelister.py::FileListerIterDataPipe:0 2023-01-11T22:08:13.7168872Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/fileopener.py::FileOpenerIterDataPipe:0, line 33 <- wrt source file 2023-01-11T22:08:13.7169740Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/fileopener.py::FileOpenerIterDataPipe:0 2023-01-11T22:08:13.7170542Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/grouping.py::BatcherIterDataPipe:0, line 102 <- wrt source file 2023-01-11T22:08:13.7171141Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/grouping.py::BatcherIterDataPipe:0 2023-01-11T22:08:13.7171760Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/grouping.py::UnBatcherIterDataPipe:0, line 159 <- wrt source file 2023-01-11T22:08:13.7172480Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/grouping.py::UnBatcherIterDataPipe:0 2023-01-11T22:08:13.7173100Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/grouping.py::GrouperIterDataPipe:0, line 226 <- wrt source file 2023-01-11T22:08:13.7175628Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/grouping.py::GrouperIterDataPipe:0 2023-01-11T22:08:13.7176401Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/selecting.py::FilterIterDataPipe:0, line 34 <- wrt source file 2023-01-11T22:08:13.7177462Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/selecting.py::FilterIterDataPipe:0 2023-01-11T22:08:13.7178627Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/streamreader.py::StreamReaderIterDataPipe:0, line 20 <- wrt source file 2023-01-11T22:08:13.7179720Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/streamreader.py::StreamReaderIterDataPipe:0 2023-01-11T22:08:13.7180371Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/utils.py::IterableWrapperIterDataPipe:0, line 23 <- wrt source file 2023-01-11T22:08:13.7181018Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/iter/utils.py::IterableWrapperIterDataPipe:0 2023-01-11T22:08:13.7182064Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/map/callable.py::MapperMapDataPipe:0, line 30 <- wrt source file 2023-01-11T22:08:13.7182967Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/map/callable.py::MapperMapDataPipe:0 2023-01-11T22:08:13.7183974Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/map/combinatorics.py::ShufflerIterDataPipe:0, line 31 
<- wrt source file 2023-01-11T22:08:13.7185111Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/map/combinatorics.py::ShufflerIterDataPipe:0 2023-01-11T22:08:13.7185946Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/map/combining.py::ConcaterMapDataPipe:0, line 24 <- wrt source file 2023-01-11T22:08:13.7186907Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/map/combining.py::ConcaterMapDataPipe:0 2023-01-11T22:08:13.7187538Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/map/combining.py::ZipperMapDataPipe:0, line 66 <- wrt source file 2023-01-11T22:08:13.7188585Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/map/combining.py::ZipperMapDataPipe:0 2023-01-11T22:08:13.7189437Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/map/grouping.py::BatcherMapDataPipe:0, line 23 <- wrt source file 2023-01-11T22:08:13.7190015Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/map/grouping.py::BatcherMapDataPipe:0 2023-01-11T22:08:13.7190639Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/map/utils.py::SequenceWrapperMapDataPipe:0, line 23 <- wrt source file 2023-01-11T22:08:13.7191325Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/map/utils.py::SequenceWrapperMapDataPipe:0 2023-01-11T22:08:13.7192105Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/utils/common.py::validate_input_col:0, line 33 <- wrt source file 2023-01-11T22:08:13.7193051Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/data/datapipes/utils/common.py::validate_input_col:0 2023-01-11T22:08:13.7194159Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/hipify/hipify_python.py::find_closure_group:0, line 415 <- wrt source file 2023-01-11T22:08:13.7195176Z * SUCCESS: /opt/conda/lib/python3.10/site-packages/torch/utils/hipify/hipify_python.py::find_closure_group:0 2023-01-11T22:08:13.7196241Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/hipify/hipify_python.py::replace_extern_shared:0, line 511 <- wrt source file 2023-01-11T22:08:13.7197110Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/hipify/hipify_python.py::replace_extern_shared:0 2023-01-11T22:08:13.7198166Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.__init__:0, line 213 <- wrt source file 2023-01-11T22:08:13.7199360Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.__init__:0 2023-01-11T22:08:13.7200177Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_hparams:0, line 320 <- wrt source file 2023-01-11T22:08:13.7201051Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_hparams:0 2023-01-11T22:08:13.7202019Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_scalar:0, line 368 <- wrt source file 2023-01-11T22:08:13.7203037Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_scalar:0 2023-01-11T22:08:13.7203985Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_scalars:0, line 404 <- wrt source file 2023-01-11T22:08:13.7204820Z * SKIPPED: 
/opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_scalars:0 2023-01-11T22:08:13.7205717Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_histogram:0, line 462 <- wrt source file 2023-01-11T22:08:13.7206363Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_histogram:0 2023-01-11T22:08:13.7207101Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_histogram_raw:0, line 519 <- wrt source file 2023-01-11T22:08:13.7207877Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_histogram_raw:0 2023-01-11T22:08:13.7208529Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_image:0, line 585 <- wrt source file 2023-01-11T22:08:13.7209162Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_image:0 2023-01-11T22:08:13.7209777Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_images:0, line 638 <- wrt source file 2023-01-11T22:08:13.7210480Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_images:0 2023-01-11T22:08:13.7211116Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_text:0, line 810 <- wrt source file 2023-01-11T22:08:13.7211669Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_text:0 2023-01-11T22:08:13.7212263Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_embedding:0, line 896 <- wrt source file 2023-01-11T22:08:13.7212920Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_embedding:0 2023-01-11T22:08:13.7213587Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_pr_curve:0, line 1001 <- wrt source file 2023-01-11T22:08:13.7214274Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_pr_curve:0 2023-01-11T22:08:13.7215031Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_custom_scalars_multilinechart:0, line 1076 <- wrt source file 2023-01-11T22:08:13.7215672Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_custom_scalars_multilinechart:0 2023-01-11T22:08:13.7216363Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_custom_scalars_marginchart:0, line 1095 <- wrt source file 2023-01-11T22:08:13.7217262Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_custom_scalars_marginchart:0 2023-01-11T22:08:13.7217987Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_custom_scalars:0, line 1117 <- wrt source file 2023-01-11T22:08:13.7218956Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_custom_scalars:0 2023-01-11T22:08:13.7219603Z * DOCTEST : /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_mesh:0, line 1161 <- wrt source file 
2023-01-11T22:08:13.7220169Z * SKIPPED: /opt/conda/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_mesh:0 2023-01-11T22:08:13.7220476Z ============ 2023-01-11T22:08:13.7220663Z Finished doctests 2023-01-11T22:08:13.7220834Z 287 / 663 passed 2023-01-11T22:08:13.7220999Z 2023-01-11T22:08:13.7221232Z === Found 3 run-time warnings === 2023-01-11T22:08:13.7221505Z --- Runtime Warning: 1 / 3 --- 2023-01-11T22:08:13.7221765Z example = 2023-01-11T22:08:13.7222872Z /opt/conda/lib/python3.10/site-packages/torch/_tensor.py:1114: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /var/lib/jenkins/workspace/c10/core/TensorImpl.h:1816.) 2023-01-11T22:08:13.7223427Z return super(Tensor, self).refine_names(names) 2023-01-11T22:08:13.7223625Z 2023-01-11T22:08:13.7223853Z --- Runtime Warning: 2 / 3 --- 2023-01-11T22:08:13.7224124Z example = 2023-01-11T22:08:13.7224818Z /opt/conda/lib/python3.10/site-packages/torch/nested/__init__.py:58: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/NestedTensorImpl.cpp:179.) 2023-01-11T22:08:13.7225325Z return torch._nested_tensor_from_tensor_list(tensor_list, dtype, None, device, None) 2023-01-11T22:08:13.7225549Z 2023-01-11T22:08:13.7225782Z --- Runtime Warning: 3 / 3 --- 2023-01-11T22:08:13.7226062Z example = 2023-01-11T22:08:13.7226939Z /opt/conda/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py:921: UserWarning: Your compiler for AOTAutograd is returning a function that doesn't take boxed arguments. Please wrap it with functorch.compile.make_boxed_func or handle the boxed arguments yourself. See https://github.com/pytorch/pytorch/pull/83137#issuecomment-1211320670 for rationale. 
2023-01-11T22:08:13.7227543Z warnings.warn( 2023-01-11T22:08:13.7227712Z 2023-01-11T22:08:13.7227995Z === 287 passed, 376 skipped, 3 warnings in 11.46 seconds === 2023-01-11T22:08:14.0861670Z 2023-01-11T22:08:14.0862155Z real 45m47.839s 2023-01-11T22:08:14.0862696Z user 87m51.939s 2023-01-11T22:08:14.0863024Z sys 10m9.710s 2023-01-11T22:08:14.0863232Z + assert_git_not_dirty 2023-01-11T22:08:14.0863633Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *rocm* ]] 2023-01-11T22:08:14.0863957Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *xla* ]] 2023-01-11T22:08:14.0866241Z ++ git status --porcelain 2023-01-11T22:08:23.8491185Z + git_status= 2023-01-11T22:08:23.8491530Z + [[ -n '' ]] 2023-01-11T22:08:23.8504323Z + test_aten 2023-01-11T22:08:23.8505066Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *asan* ]] 2023-01-11T22:08:23.8505721Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *rocm* ]] 2023-01-11T22:08:23.8506493Z + echo 'Running ATen tests with pytorch lib' 2023-01-11T22:08:23.8506908Z Running ATen tests with pytorch lib 2023-01-11T22:08:23.8507284Z + [[ -n '' ]] 2023-01-11T22:08:23.8507685Z + echo 'Running test with the build folder' 2023-01-11T22:08:23.8508052Z Running test with the build folder 2023-01-11T22:08:23.8508370Z + TEST_BASE_DIR=build/bin 2023-01-11T22:08:23.8509277Z + ln -sf /opt/conda/lib/python3.10/site-packages/torch/lib/libc10.so /opt/conda/lib/python3.10/site-packages/torch/lib/libc10_cuda.so /opt/conda/lib/python3.10/site-packages/torch/lib/libc10d_cuda_test.so build/bin 2023-01-11T22:08:23.8607523Z + ln -sf /opt/conda/lib/python3.10/site-packages/torch/lib/libcaffe2_nvrtc.so build/bin 2023-01-11T22:08:23.8616498Z + ln -sf '/opt/conda/lib/python3.10/site-packages/torch/lib/libmkldnn*' build/bin 2023-01-11T22:08:23.8626478Z + ln -sf '/opt/conda/lib/python3.10/site-packages/torch/lib/libnccl*' build/bin 2023-01-11T22:08:23.8635735Z + ln -sf /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch.so /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_cuda_linalg.so /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_global_deps.so /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_python.so /opt/conda/lib/python3.10/site-packages/torch/lib/libtorchbind_test.so build/bin 2023-01-11T22:08:23.8643875Z + ln -sf '/opt/conda/lib/python3.10/site-packages/torch/lib/libtbb*' build/bin 2023-01-11T22:08:23.8651943Z + ls build/bin 2023-01-11T22:08:23.8737636Z CppSignature_test 2023-01-11T22:08:23.8738044Z Dict_test 2023-01-11T22:08:23.8738315Z Dimname_test 2023-01-11T22:08:23.8738617Z FileStoreTest 2023-01-11T22:08:23.8738899Z HashStoreTest 2023-01-11T22:08:23.8739177Z IListRef_test 2023-01-11T22:08:23.8739488Z KernelFunction_test 2023-01-11T22:08:23.8739779Z List_test 2023-01-11T22:08:23.8740060Z MaybeOwned_test 2023-01-11T22:08:23.8740379Z NamedTensor_test 2023-01-11T22:08:23.8740720Z ProcessGroupGlooAsyncTest 2023-01-11T22:08:23.8741107Z ProcessGroupGlooTest 2023-01-11T22:08:23.8741435Z ProcessGroupMPITest 2023-01-11T22:08:23.8741630Z ProcessGroupNCCLErrorsTest 2023-01-11T22:08:23.8741856Z ProcessGroupNCCLTest 2023-01-11T22:08:23.8742149Z ProcessGroupUCCTest 2023-01-11T22:08:23.8742511Z TCPStoreTest 2023-01-11T22:08:23.8742767Z aot_model_compiler_test 2023-01-11T22:08:23.8742953Z apply_utils_test 2023-01-11T22:08:23.8743106Z atest 2023-01-11T22:08:23.8743270Z backend_fallback_test 2023-01-11T22:08:23.8743439Z basic 
2023-01-11T22:08:23.8743582Z broadcast_test 2023-01-11T22:08:23.8743752Z c10_Array_test 2023-01-11T22:08:23.8743917Z c10_Bitset_test 2023-01-11T22:08:23.8744070Z c10_C++17_test 2023-01-11T22:08:23.8744270Z c10_CompileTimeFunctionPointer_test 2023-01-11T22:08:23.8744487Z c10_ConstexprCrc_test 2023-01-11T22:08:23.8744666Z c10_DeadlockDetection_test 2023-01-11T22:08:23.8744857Z c10_DeviceGuard_test 2023-01-11T22:08:23.8745031Z c10_Device_test 2023-01-11T22:08:23.8745376Z c10_DispatchKeySet_test 2023-01-11T22:08:23.8745558Z c10_Half_test 2023-01-11T22:08:23.8745738Z c10_InlineDeviceGuard_test 2023-01-11T22:08:23.8745927Z c10_InlineStreamGuard_test 2023-01-11T22:08:23.8746117Z c10_LeftRight_test 2023-01-11T22:08:23.8746307Z c10_Metaprogramming_test 2023-01-11T22:08:23.8746546Z c10_SizesAndStrides_test 2023-01-11T22:08:23.8746719Z c10_SmallVectorTest 2023-01-11T22:08:23.8746904Z c10_StreamGuard_test 2023-01-11T22:08:23.8747075Z c10_SymInt_test 2023-01-11T22:08:23.8747303Z c10_Synchronized_test 2023-01-11T22:08:23.8747580Z c10_ThreadLocal_test 2023-01-11T22:08:23.8747849Z c10_TypeIndex_test 2023-01-11T22:08:23.8748079Z c10_TypeList_test 2023-01-11T22:08:23.8748333Z c10_TypeTraits_test 2023-01-11T22:08:23.8748612Z c10_accumulate_test 2023-01-11T22:08:23.8748873Z c10_bfloat16_test 2023-01-11T22:08:23.8749163Z c10_complex_math_test 2023-01-11T22:08:23.8749462Z c10_complex_test 2023-01-11T22:08:23.8749764Z c10_cuda_CUDAAssertionsTest_1_var_test 2023-01-11T22:08:23.8750375Z c10_cuda_CUDAAssertionsTest_catches_stream 2023-01-11T22:08:23.8750903Z c10_cuda_CUDAAssertionsTest_catches_thread_and_block_and_device 2023-01-11T22:08:23.8751321Z c10_cuda_CUDAAssertionsTest_from_2_processes 2023-01-11T22:08:23.8751781Z c10_cuda_CUDAAssertionsTest_multiple_writes_from_blocks_and_threads 2023-01-11T22:08:23.8752318Z c10_cuda_CUDAAssertionsTest_multiple_writes_from_multiple_blocks 2023-01-11T22:08:23.8752810Z c10_cuda_CUDAAssertionsTest_multiple_writes_from_same_block 2023-01-11T22:08:23.8753161Z c10_cuda_CUDATest 2023-01-11T22:08:23.8753441Z c10_either_test 2023-01-11T22:08:23.8753732Z c10_exception_test 2023-01-11T22:08:23.8753994Z c10_flags_test 2023-01-11T22:08:23.8754308Z c10_intrusive_ptr_benchmark 2023-01-11T22:08:23.8754638Z c10_intrusive_ptr_test 2023-01-11T22:08:23.8754901Z c10_irange_test 2023-01-11T22:08:23.8755163Z c10_logging_test 2023-01-11T22:08:23.8755424Z c10_optional_test 2023-01-11T22:08:23.8755727Z c10_ordered_preserving_dict_test 2023-01-11T22:08:23.8756050Z c10_registry_test 2023-01-11T22:08:23.8756342Z c10_string_view_test 2023-01-11T22:08:23.8756641Z c10_tempfile_test 2023-01-11T22:08:23.8756936Z c10_typeid_test 2023-01-11T22:08:23.8757243Z cpu_generator_test 2023-01-11T22:08:23.8757568Z cpu_profiling_allocator_test 2023-01-11T22:08:23.8757883Z cpu_rng_test 2023-01-11T22:08:23.8758157Z cuda_apply_test 2023-01-11T22:08:23.8758461Z cuda_atomic_ops_test 2023-01-11T22:08:23.8758801Z cuda_caching_host_allocator_test 2023-01-11T22:08:23.8759229Z cuda_complex_math_test 2023-01-11T22:08:23.8759532Z cuda_complex_test 2023-01-11T22:08:23.8759809Z cuda_cub_test 2023-01-11T22:08:23.8760056Z cuda_cudnn_test 2023-01-11T22:08:23.8760322Z cuda_device_test 2023-01-11T22:08:23.8760622Z cuda_distributions_test 2023-01-11T22:08:23.8760931Z cuda_dlconvertor_test 2023-01-11T22:08:23.8761204Z cuda_generator_test 2023-01-11T22:08:23.8761507Z cuda_half_test 2023-01-11T22:08:23.8761819Z cuda_integer_divider_test 2023-01-11T22:08:23.8762149Z cuda_optional_test 2023-01-11T22:08:23.8762492Z 
cuda_packedtensoraccessor_test 2023-01-11T22:08:23.8762837Z cuda_reportMemoryUsage_test 2023-01-11T22:08:23.8763139Z cuda_stream_test 2023-01-11T22:08:23.8763424Z cuda_vectorized_test 2023-01-11T22:08:23.8763732Z dispatch_key_set_test 2023-01-11T22:08:23.8764023Z dlconvertor_test 2023-01-11T22:08:23.8764333Z example_allreduce 2023-01-11T22:08:23.8764651Z extension_backend_test 2023-01-11T22:08:23.8764900Z half_test 2023-01-11T22:08:23.8765165Z inline_container_test 2023-01-11T22:08:23.8765450Z ivalue_test 2023-01-11T22:08:23.8765745Z kernel_function_legacy_test 2023-01-11T22:08:23.8766061Z kernel_function_test 2023-01-11T22:08:23.8766358Z kernel_lambda_legacy_test 2023-01-11T22:08:23.8766632Z kernel_lambda_test 2023-01-11T22:08:23.8766913Z kernel_stackbased_test 2023-01-11T22:08:23.8767171Z lazy_tensor_test 2023-01-11T22:08:23.8767334Z legacy_vmap_test 2023-01-11T22:08:23.8767505Z libc10.so 2023-01-11T22:08:23.8767771Z libc10_cuda.so 2023-01-11T22:08:23.8767933Z libc10d_cuda_test.so 2023-01-11T22:08:23.8768110Z libcaffe2_nvrtc.so 2023-01-11T22:08:23.8768381Z 'libmkldnn*' 2023-01-11T22:08:23.8768554Z 'libnccl*' 2023-01-11T22:08:23.8768730Z 'libtbb*' 2023-01-11T22:08:23.8768887Z libtorch.so 2023-01-11T22:08:23.8769044Z libtorch_cpu.so 2023-01-11T22:08:23.8769220Z libtorch_cuda.so 2023-01-11T22:08:23.8769447Z libtorch_cuda_linalg.so 2023-01-11T22:08:23.8769661Z libtorch_global_deps.so 2023-01-11T22:08:23.8769843Z libtorch_python.so 2023-01-11T22:08:23.8770024Z libtorchbind_test.so 2023-01-11T22:08:23.8770212Z make_boxed_from_unboxed_functor_test 2023-01-11T22:08:23.8770406Z math_kernel_test 2023-01-11T22:08:23.8770578Z memory_format_test 2023-01-11T22:08:23.8770750Z memory_overlapping_test 2023-01-11T22:08:23.8770941Z mobile_memory_cleanup 2023-01-11T22:08:23.8771114Z native_test 2023-01-11T22:08:23.8771270Z op_allowlist_test 2023-01-11T22:08:23.8771449Z op_registration_test 2023-01-11T22:08:23.8771634Z operator_name_test 2023-01-11T22:08:23.8771841Z operators_test 2023-01-11T22:08:23.8772037Z packedtensoraccessor_test 2023-01-11T22:08:23.8772233Z parallel_benchmark 2023-01-11T22:08:23.8772387Z pow_test 2023-01-11T22:08:23.8772541Z protoc 2023-01-11T22:08:23.8772745Z protoc-3.13.0.0 2023-01-11T22:08:23.8772909Z quantized_test 2023-01-11T22:08:23.8773078Z reduce_ops_test 2023-01-11T22:08:23.8773261Z reportMemoryUsage_test 2023-01-11T22:08:23.8773435Z scalar_tensor_test 2023-01-11T22:08:23.8773605Z scalar_test 2023-01-11T22:08:23.8773783Z stride_properties_test 2023-01-11T22:08:23.8773956Z tensor_iterator_test 2023-01-11T22:08:23.8774125Z test_api 2023-01-11T22:08:23.8774282Z test_cpp_rpc 2023-01-11T22:08:23.8774437Z test_dist_autograd 2023-01-11T22:08:23.8774620Z test_edge_op_registration 2023-01-11T22:08:23.8774796Z test_jit 2023-01-11T22:08:23.8774935Z test_lazy 2023-01-11T22:08:23.8775093Z test_mobile_nnc 2023-01-11T22:08:23.8775258Z test_parallel 2023-01-11T22:08:23.8775412Z test_tensorexpr 2023-01-11T22:08:23.8775586Z thread_init_test 2023-01-11T22:08:23.8775757Z torch_shm_manager 2023-01-11T22:08:23.8775919Z tutorial_tensorexpr 2023-01-11T22:08:23.8776093Z type_ptr_test 2023-01-11T22:08:23.8776256Z type_test 2023-01-11T22:08:23.8776412Z undefined_tensor_test 2023-01-11T22:08:23.8776584Z variant_test 2023-01-11T22:08:23.8776758Z vec_test_all_types_AVX2 2023-01-11T22:08:23.8776935Z vec_test_all_types_DEFAULT 2023-01-11T22:08:23.8777123Z verify_api_visibility 2023-01-11T22:08:23.8777296Z weakref_test 2023-01-11T22:08:23.8777447Z wrapdim_test 2023-01-11T22:08:23.8777615Z xla_tensor_test 
2023-01-11T22:08:23.8777804Z + aten/tools/run_tests.sh build/bin 2023-01-11T22:08:23.8778003Z + set -e 2023-01-11T22:08:23.8778183Z ++ dirname aten/tools/run_tests.sh 2023-01-11T22:08:23.8797779Z + VALGRIND_SUP=/var/lib/jenkins/workspace/aten/tools/valgrind.sup 2023-01-11T22:08:23.8798168Z + pushd build/bin 2023-01-11T22:08:23.8798475Z ~/workspace/build/bin ~/workspace 2023-01-11T22:08:23.8798724Z + VALGRIND=ON 2023-01-11T22:08:23.8798890Z + ./basic 2023-01-11T22:08:25.2508169Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:25.2508792Z [==========] Running 5 tests from 1 test suite. 2023-01-11T22:08:25.2509107Z [----------] Global test environment set-up. 2023-01-11T22:08:25.2509406Z [----------] 5 tests from BasicTest 2023-01-11T22:08:25.2509677Z [ RUN ] BasicTest.BasicTestCPU 2023-01-11T22:08:25.3802702Z 5 ms 2023-01-11T22:08:25.4557109Z 72 ms 2023-01-11T22:08:25.5677098Z 109 ms 2023-01-11T22:08:25.6332647Z [ OK ] BasicTest.BasicTestCPU (382 ms) 2023-01-11T22:08:25.6333117Z [ RUN ] BasicTest.BasicTestHalfCPU 2023-01-11T22:08:25.6385820Z 2 ms 2023-01-11T22:08:25.7333272Z 93 ms 2023-01-11T22:08:25.8653249Z 131 ms 2023-01-11T22:08:25.8786895Z [ OK ] BasicTest.BasicTestHalfCPU (245 ms) 2023-01-11T22:08:25.8787256Z [ RUN ] BasicTest.BasicTestCUDA 2023-01-11T22:08:25.8787781Z [ OK ] BasicTest.BasicTestCUDA (0 ms) 2023-01-11T22:08:25.8788094Z [ RUN ] BasicTest.FactoryMethodsTest 2023-01-11T22:08:25.8855049Z [ OK ] BasicTest.FactoryMethodsTest (6 ms) 2023-01-11T22:08:25.8855506Z [ RUN ] BasicTest.BasicStdTestCPU 2023-01-11T22:08:25.8856190Z Simple example: called once 2023-01-11T22:08:25.8859753Z throw: call_once will retry 2023-01-11T22:08:25.8861402Z throw: call_once will retry 2023-01-11T22:08:25.8863542Z Didn't throw, call_once will not attempt again 2023-01-11T22:08:25.8864125Z [ OK ] BasicTest.BasicStdTestCPU (0 ms) 2023-01-11T22:08:25.8864697Z [----------] 5 tests from BasicTest (635 ms total) 2023-01-11T22:08:25.8864851Z 2023-01-11T22:08:25.8865030Z [----------] Global test environment tear-down 2023-01-11T22:08:25.8865352Z [==========] 5 tests from 1 test suite ran. (635 ms total) 2023-01-11T22:08:25.8865780Z [ PASSED ] 5 tests. 2023-01-11T22:08:26.1254871Z + ./atest 2023-01-11T22:08:26.4484116Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:26.4488616Z [==========] Running 16 tests from 1 test suite. 2023-01-11T22:08:26.4488981Z [----------] Global test environment set-up. 
2023-01-11T22:08:26.4489277Z [----------] 16 tests from atest 2023-01-11T22:08:26.4489585Z [ RUN ] atest.operators 2023-01-11T22:08:26.4492192Z [ OK ] atest.operators (0 ms) 2023-01-11T22:08:26.4492547Z [ RUN ] atest.logical_and_operators 2023-01-11T22:08:26.4493779Z [ OK ] atest.logical_and_operators (0 ms) 2023-01-11T22:08:26.4494181Z [ RUN ] atest.logical_or_operators 2023-01-11T22:08:26.4494523Z [ OK ] atest.logical_or_operators (0 ms) 2023-01-11T22:08:26.4494813Z [ RUN ] atest.logical_xor_operators 2023-01-11T22:08:26.4495106Z [ OK ] atest.logical_xor_operators (0 ms) 2023-01-11T22:08:26.4495377Z [ RUN ] atest.lt_operators 2023-01-11T22:08:26.4503026Z [ OK ] atest.lt_operators (0 ms) 2023-01-11T22:08:26.4503518Z [ RUN ] atest.le_operators 2023-01-11T22:08:26.4503996Z [ OK ] atest.le_operators (0 ms) 2023-01-11T22:08:26.4504496Z [ RUN ] atest.gt_operators 2023-01-11T22:08:26.4504954Z [ OK ] atest.gt_operators (0 ms) 2023-01-11T22:08:26.4505437Z [ RUN ] atest.ge_operators 2023-01-11T22:08:26.4505749Z [ OK ] atest.ge_operators (0 ms) 2023-01-11T22:08:26.4505997Z [ RUN ] atest.eq_operators 2023-01-11T22:08:26.4506263Z [ OK ] atest.eq_operators (0 ms) 2023-01-11T22:08:26.4506522Z [ RUN ] atest.ne_operators 2023-01-11T22:08:26.4506790Z [ OK ] atest.ne_operators (0 ms) 2023-01-11T22:08:26.4507038Z [ RUN ] atest.add_operators 2023-01-11T22:08:26.4507330Z [ OK ] atest.add_operators (0 ms) 2023-01-11T22:08:26.4507597Z [ RUN ] atest.max_operators 2023-01-11T22:08:26.4516665Z [ OK ] atest.max_operators (1 ms) 2023-01-11T22:08:26.4517162Z [ RUN ] atest.min_operators 2023-01-11T22:08:26.4517801Z [ OK ] atest.min_operators (0 ms) 2023-01-11T22:08:26.4518236Z [ RUN ] atest.sigmoid_backward_operator 2023-01-11T22:08:26.4518575Z [ OK ] atest.sigmoid_backward_operator (0 ms) 2023-01-11T22:08:26.4518874Z [ RUN ] atest.fmod_tensor_operators 2023-01-11T22:08:26.4519709Z [ OK ] atest.fmod_tensor_operators (0 ms) 2023-01-11T22:08:26.4519974Z [ RUN ] atest.atest 2023-01-11T22:08:26.4642999Z [ OK ] atest.atest (12 ms) 2023-01-11T22:08:26.4643408Z [----------] 16 tests from atest (15 ms total) 2023-01-11T22:08:26.4643561Z 2023-01-11T22:08:26.4643744Z [----------] Global test environment tear-down 2023-01-11T22:08:26.4644223Z [==========] 16 tests from 1 test suite ran. (15 ms total) 2023-01-11T22:08:26.4644472Z [ PASSED ] 16 tests. 2023-01-11T22:08:26.5397284Z + ./scalar_test 2023-01-11T22:08:26.8644234Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:26.8645040Z [==========] Running 4 tests from 1 test suite. 2023-01-11T22:08:26.8645364Z [----------] Global test environment set-up. 2023-01-11T22:08:26.8645654Z [----------] 4 tests from TestScalar 2023-01-11T22:08:26.8645916Z [ RUN ] TestScalar.TestScalar 2023-01-11T22:08:26.8646116Z H2: 3 257 3 1 2023-01-11T22:08:26.8782060Z [ OK ] TestScalar.TestScalar (13 ms) 2023-01-11T22:08:26.8782648Z [ RUN ] TestScalar.TestConj 2023-01-11T22:08:26.8783039Z [ OK ] TestScalar.TestConj (0 ms) 2023-01-11T22:08:26.8783568Z [ RUN ] TestScalar.TestEqual 2023-01-11T22:08:26.8783983Z [ OK ] TestScalar.TestEqual (0 ms) 2023-01-11T22:08:26.8784512Z [ RUN ] TestScalar.TestFormatting 2023-01-11T22:08:26.8785069Z [ OK ] TestScalar.TestFormatting (0 ms) 2023-01-11T22:08:26.8785403Z [----------] 4 tests from TestScalar (13 ms total) 2023-01-11T22:08:26.8785579Z 2023-01-11T22:08:26.8785783Z [----------] Global test environment tear-down 2023-01-11T22:08:26.8786102Z [==========] 4 tests from 1 test suite ran. 
(13 ms total) 2023-01-11T22:08:26.8786364Z [ PASSED ] 4 tests. 2023-01-11T22:08:26.9525429Z + ./broadcast_test 2023-01-11T22:08:27.2754713Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:27.2755207Z [==========] Running 1 test from 1 test suite. 2023-01-11T22:08:27.2756125Z [----------] Global test environment set-up. 2023-01-11T22:08:27.2756457Z [----------] 1 test from BroadcastTest 2023-01-11T22:08:27.2756751Z [ RUN ] BroadcastTest.Broadcast 2023-01-11T22:08:27.3421611Z [ OK ] BroadcastTest.Broadcast (66 ms) 2023-01-11T22:08:27.3421992Z [----------] 1 test from BroadcastTest (66 ms total) 2023-01-11T22:08:27.3422201Z 2023-01-11T22:08:27.3422549Z [----------] Global test environment tear-down 2023-01-11T22:08:27.3422939Z [==========] 1 test from 1 test suite ran. (66 ms total) 2023-01-11T22:08:27.3423204Z [ PASSED ] 1 test. 2023-01-11T22:08:27.4145609Z + ./wrapdim_test 2023-01-11T22:08:27.7404980Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:27.7405510Z [==========] Running 1 test from 1 test suite. 2023-01-11T22:08:27.7405817Z [----------] Global test environment set-up. 2023-01-11T22:08:27.7406101Z [----------] 1 test from TestWrapdim 2023-01-11T22:08:27.7406394Z [ RUN ] TestWrapdim.TestWrapdim 2023-01-11T22:08:27.7410716Z [ OK ] TestWrapdim.TestWrapdim (0 ms) 2023-01-11T22:08:27.7411249Z [----------] 1 test from TestWrapdim (0 ms total) 2023-01-11T22:08:27.7411538Z 2023-01-11T22:08:27.7411784Z [----------] Global test environment tear-down 2023-01-11T22:08:27.7412092Z [==========] 1 test from 1 test suite ran. (0 ms total) 2023-01-11T22:08:27.7412345Z [ PASSED ] 1 test. 2023-01-11T22:08:27.8114998Z + ./apply_utils_test 2023-01-11T22:08:28.1336709Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:28.1337434Z [==========] Running 6 tests from 1 test suite. 2023-01-11T22:08:28.1337894Z [----------] Global test environment set-up. 2023-01-11T22:08:28.1338385Z [----------] 6 tests from ApplyUtilsTest 2023-01-11T22:08:28.1338903Z [ RUN ] ApplyUtilsTest.Contiguous2D 2023-01-11T22:08:28.1369149Z [ OK ] ApplyUtilsTest.Contiguous2D (3 ms) 2023-01-11T22:08:28.1370084Z [ RUN ] ApplyUtilsTest.Small2D 2023-01-11T22:08:28.1370663Z [ OK ] ApplyUtilsTest.Small2D (0 ms) 2023-01-11T22:08:28.1371206Z [ RUN ] ApplyUtilsTest._2D 2023-01-11T22:08:28.1382779Z [ OK ] ApplyUtilsTest._2D (1 ms) 2023-01-11T22:08:28.1383336Z [ RUN ] ApplyUtilsTest._3D 2023-01-11T22:08:28.1385379Z [ OK ] ApplyUtilsTest._3D (0 ms) 2023-01-11T22:08:28.1385918Z [ RUN ] ApplyUtilsTest.Medium3D 2023-01-11T22:08:28.1399882Z [ OK ] ApplyUtilsTest.Medium3D (1 ms) 2023-01-11T22:08:28.1400201Z [ RUN ] ApplyUtilsTest._10D 2023-01-11T22:08:28.2240467Z [ OK ] ApplyUtilsTest._10D (83 ms) 2023-01-11T22:08:28.2240842Z [----------] 6 tests from ApplyUtilsTest (90 ms total) 2023-01-11T22:08:28.2241040Z 2023-01-11T22:08:28.2241245Z [----------] Global test environment tear-down 2023-01-11T22:08:28.2241834Z [==========] 6 tests from 1 test suite ran. (90 ms total) 2023-01-11T22:08:28.2242105Z [ PASSED ] 6 tests. 2023-01-11T22:08:28.2976888Z + ./dlconvertor_test 2023-01-11T22:08:28.6207606Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:28.6208091Z [==========] Running 2 tests from 1 test suite. 2023-01-11T22:08:28.6208389Z [----------] Global test environment set-up. 
2023-01-11T22:08:28.6208691Z [----------] 2 tests from TestDlconvertor 2023-01-11T22:08:28.6209005Z [ RUN ] TestDlconvertor.TestDlconvertor 2023-01-11T22:08:28.6211459Z [ OK ] TestDlconvertor.TestDlconvertor (0 ms) 2023-01-11T22:08:28.6212135Z [ RUN ] TestDlconvertor.TestDlconvertorNoStrides 2023-01-11T22:08:28.6212611Z [ OK ] TestDlconvertor.TestDlconvertorNoStrides (0 ms) 2023-01-11T22:08:28.6212980Z [----------] 2 tests from TestDlconvertor (0 ms total) 2023-01-11T22:08:28.6213161Z 2023-01-11T22:08:28.6213329Z [----------] Global test environment tear-down 2023-01-11T22:08:28.6213627Z [==========] 2 tests from 1 test suite ran. (0 ms total) 2023-01-11T22:08:28.6213886Z [ PASSED ] 2 tests. 2023-01-11T22:08:28.6920572Z + ./native_test 2023-01-11T22:08:29.0146877Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:29.0147646Z [==========] Running 2 tests from 1 test suite. 2023-01-11T22:08:29.0147960Z [----------] Global test environment set-up. 2023-01-11T22:08:29.0148257Z [----------] 2 tests from TestNative 2023-01-11T22:08:29.0148539Z [ RUN ] TestNative.NativeTestCPU 2023-01-11T22:08:29.0341388Z [W TensorCompare.cpp:493] Warning: where received a uint8 condition tensor. This behavior is deprecated and will be removed in a future version of PyTorch. Use a boolean condition instead. (function operator()) 2023-01-11T22:08:29.0350374Z [ OK ] TestNative.NativeTestCPU (20 ms) 2023-01-11T22:08:29.0350927Z [ RUN ] TestNative.NativeTestGPU 2023-01-11T22:08:29.0351361Z [ OK ] TestNative.NativeTestGPU (0 ms) 2023-01-11T22:08:29.0351674Z [----------] 2 tests from TestNative (20 ms total) 2023-01-11T22:08:29.0351828Z 2023-01-11T22:08:29.0351997Z [----------] Global test environment tear-down 2023-01-11T22:08:29.0352313Z [==========] 2 tests from 1 test suite ran. (20 ms total) 2023-01-11T22:08:29.0352557Z [ PASSED ] 2 tests. 2023-01-11T22:08:29.1090531Z + ./scalar_tensor_test 2023-01-11T22:08:29.4345014Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:29.4345726Z [==========] Running 3 tests from 1 test suite. 2023-01-11T22:08:29.4346032Z [----------] Global test environment set-up. 2023-01-11T22:08:29.4346341Z [----------] 3 tests from TestScalarTensor 2023-01-11T22:08:29.4346815Z [ RUN ] TestScalarTensor.TestScalarTensorCPU 2023-01-11T22:08:29.6593109Z [ OK ] TestScalarTensor.TestScalarTensorCPU (224 ms) 2023-01-11T22:08:29.6593840Z [ RUN ] TestScalarTensor.TestScalarTensorCUDA 2023-01-11T22:08:29.6594284Z [ OK ] TestScalarTensor.TestScalarTensorCUDA (0 ms) 2023-01-11T22:08:29.6594653Z [ RUN ] TestScalarTensor.TestScalarTensorMPS 2023-01-11T22:08:29.6595018Z [ OK ] TestScalarTensor.TestScalarTensorMPS (0 ms) 2023-01-11T22:08:29.6595372Z [----------] 3 tests from TestScalarTensor (224 ms total) 2023-01-11T22:08:29.6595538Z 2023-01-11T22:08:29.6595706Z [----------] Global test environment tear-down 2023-01-11T22:08:29.6596021Z [==========] 3 tests from 1 test suite ran. (224 ms total) 2023-01-11T22:08:29.6596278Z [ PASSED ] 3 tests. 2023-01-11T22:08:29.7339046Z + [[ -x ./tensor_interop_test ]] 2023-01-11T22:08:29.7339620Z + ./undefined_tensor_test 2023-01-11T22:08:30.0632047Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:30.0632871Z [==========] Running 1 test from 1 test suite. 2023-01-11T22:08:30.0633183Z [----------] Global test environment set-up. 
2023-01-11T22:08:30.0633461Z [----------] 1 test from TestUndefined 2023-01-11T22:08:30.0633754Z [ RUN ] TestUndefined.UndefinedTest 2023-01-11T22:08:30.0962216Z [ OK ] TestUndefined.UndefinedTest (33 ms) 2023-01-11T22:08:30.0962828Z [----------] 1 test from TestUndefined (33 ms total) 2023-01-11T22:08:30.0963098Z 2023-01-11T22:08:30.0963276Z [----------] Global test environment tear-down 2023-01-11T22:08:30.0963587Z [==========] 1 test from 1 test suite ran. (33 ms total) 2023-01-11T22:08:30.0963844Z [ PASSED ] 1 test. 2023-01-11T22:08:30.1718928Z + ./extension_backend_test 2023-01-11T22:08:30.5005169Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:30.5009962Z [==========] Running 1 test from 1 test suite. 2023-01-11T22:08:30.5010464Z [----------] Global test environment set-up. 2023-01-11T22:08:30.5010937Z [----------] 1 test from BackendExtensionTest 2023-01-11T22:08:30.5011423Z [ RUN ] BackendExtensionTest.TestRegisterOp 2023-01-11T22:08:30.5011894Z [ OK ] BackendExtensionTest.TestRegisterOp (0 ms) 2023-01-11T22:08:30.5012379Z [----------] 1 test from BackendExtensionTest (0 ms total) 2023-01-11T22:08:30.5012654Z 2023-01-11T22:08:30.5012917Z [----------] Global test environment tear-down 2023-01-11T22:08:30.5013405Z [==========] 1 test from 1 test suite ran. (0 ms total) 2023-01-11T22:08:30.5013783Z [ PASSED ] 1 test. 2023-01-11T22:08:30.5755490Z + ./lazy_tensor_test 2023-01-11T22:08:30.9026094Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:30.9026836Z [==========] Running 2 tests from 2 test suites. 2023-01-11T22:08:30.9027299Z [----------] Global test environment set-up. 2023-01-11T22:08:30.9027764Z [----------] 1 test from XlaTensorTest 2023-01-11T22:08:30.9028240Z [ RUN ] XlaTensorTest.TestNoStorage 2023-01-11T22:08:30.9028775Z [ OK ] XlaTensorTest.TestNoStorage (0 ms) 2023-01-11T22:08:30.9029319Z [----------] 1 test from XlaTensorTest (0 ms total) 2023-01-11T22:08:30.9029560Z 2023-01-11T22:08:30.9029798Z [----------] 1 test from LazyTensorTest 2023-01-11T22:08:30.9030275Z [ RUN ] LazyTensorTest.TestNoStorage 2023-01-11T22:08:30.9030795Z [ OK ] LazyTensorTest.TestNoStorage (0 ms) 2023-01-11T22:08:30.9031300Z [----------] 1 test from LazyTensorTest (0 ms total) 2023-01-11T22:08:30.9031545Z 2023-01-11T22:08:30.9031823Z [----------] Global test environment tear-down 2023-01-11T22:08:30.9032603Z [==========] 2 tests from 2 test suites ran. (0 ms total) 2023-01-11T22:08:30.9032997Z [ PASSED ] 2 tests. 2023-01-11T22:08:30.9773111Z + ./tensor_iterator_test 2023-01-11T22:08:31.3042377Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:31.3044473Z [==========] Running 65 tests from 1 test suite. 2023-01-11T22:08:31.3044774Z [----------] Global test environment set-up. 
2023-01-11T22:08:31.3045083Z [----------] 65 tests from TensorIteratorTest 2023-01-11T22:08:31.3045391Z [ RUN ] TensorIteratorTest.CPUScalar 2023-01-11T22:08:31.3045703Z [ OK ] TensorIteratorTest.CPUScalar (0 ms) 2023-01-11T22:08:31.3046038Z [ RUN ] TensorIteratorTest.CPUScalarInputs 2023-01-11T22:08:31.3046391Z [ OK ] TensorIteratorTest.CPUScalarInputs (0 ms) 2023-01-11T22:08:31.3046962Z [ RUN ] TensorIteratorTest.MixedDevices 2023-01-11T22:08:31.3047302Z [ OK ] TensorIteratorTest.MixedDevices (0 ms) 2023-01-11T22:08:31.3047656Z [ RUN ] TensorIteratorTest.SerialLoopUnary_Byte 2023-01-11T22:08:31.3067236Z [ OK ] TensorIteratorTest.SerialLoopUnary_Byte (2 ms) 2023-01-11T22:08:31.3067862Z [ RUN ] TensorIteratorTest.SerialLoopUnary_Char 2023-01-11T22:08:31.3085008Z [ OK ] TensorIteratorTest.SerialLoopUnary_Char (1 ms) 2023-01-11T22:08:31.3085620Z [ RUN ] TensorIteratorTest.SerialLoopUnary_Short 2023-01-11T22:08:31.3102261Z [ OK ] TensorIteratorTest.SerialLoopUnary_Short (1 ms) 2023-01-11T22:08:31.3103110Z [ RUN ] TensorIteratorTest.SerialLoopUnary_Int 2023-01-11T22:08:31.3119804Z [ OK ] TensorIteratorTest.SerialLoopUnary_Int (1 ms) 2023-01-11T22:08:31.3120408Z [ RUN ] TensorIteratorTest.SerialLoopUnary_Long 2023-01-11T22:08:31.3137018Z [ OK ] TensorIteratorTest.SerialLoopUnary_Long (1 ms) 2023-01-11T22:08:31.3137612Z [ RUN ] TensorIteratorTest.SerialLoopUnary_Float 2023-01-11T22:08:31.3154957Z [ OK ] TensorIteratorTest.SerialLoopUnary_Float (1 ms) 2023-01-11T22:08:31.3155558Z [ RUN ] TensorIteratorTest.SerialLoopUnary_Double 2023-01-11T22:08:31.3172437Z [ OK ] TensorIteratorTest.SerialLoopUnary_Double (1 ms) 2023-01-11T22:08:31.3173038Z [ RUN ] TensorIteratorTest.SerialLoopBinary_Byte 2023-01-11T22:08:31.3189731Z [ OK ] TensorIteratorTest.SerialLoopBinary_Byte (1 ms) 2023-01-11T22:08:31.3190304Z [ RUN ] TensorIteratorTest.SerialLoopBinary_Char 2023-01-11T22:08:31.3207020Z [ OK ] TensorIteratorTest.SerialLoopBinary_Char (1 ms) 2023-01-11T22:08:31.3207606Z [ RUN ] TensorIteratorTest.SerialLoopBinary_Short 2023-01-11T22:08:31.3224419Z [ OK ] TensorIteratorTest.SerialLoopBinary_Short (1 ms) 2023-01-11T22:08:31.3225001Z [ RUN ] TensorIteratorTest.SerialLoopBinary_Int 2023-01-11T22:08:31.3241640Z [ OK ] TensorIteratorTest.SerialLoopBinary_Int (1 ms) 2023-01-11T22:08:31.3242242Z [ RUN ] TensorIteratorTest.SerialLoopBinary_Long 2023-01-11T22:08:31.3258788Z [ OK ] TensorIteratorTest.SerialLoopBinary_Long (1 ms) 2023-01-11T22:08:31.3259414Z [ RUN ] TensorIteratorTest.SerialLoopBinary_Float 2023-01-11T22:08:31.3276177Z [ OK ] TensorIteratorTest.SerialLoopBinary_Float (1 ms) 2023-01-11T22:08:31.3276798Z [ RUN ] TensorIteratorTest.SerialLoopBinary_Double 2023-01-11T22:08:31.3293545Z [ OK ] TensorIteratorTest.SerialLoopBinary_Double (1 ms) 2023-01-11T22:08:31.3294169Z [ RUN ] TensorIteratorTest.SerialLoopPointwise_Byte 2023-01-11T22:08:31.3310925Z [ OK ] TensorIteratorTest.SerialLoopPointwise_Byte (1 ms) 2023-01-11T22:08:31.3311641Z [ RUN ] TensorIteratorTest.SerialLoopPointwise_Char 2023-01-11T22:08:31.3328153Z [ OK ] TensorIteratorTest.SerialLoopPointwise_Char (1 ms) 2023-01-11T22:08:31.3328759Z [ RUN ] TensorIteratorTest.SerialLoopPointwise_Short 2023-01-11T22:08:31.3345461Z [ OK ] TensorIteratorTest.SerialLoopPointwise_Short (1 ms) 2023-01-11T22:08:31.3346054Z [ RUN ] TensorIteratorTest.SerialLoopPointwise_Int 2023-01-11T22:08:31.3362683Z [ OK ] TensorIteratorTest.SerialLoopPointwise_Int (1 ms) 2023-01-11T22:08:31.3363291Z [ RUN ] TensorIteratorTest.SerialLoopPointwise_Long 2023-01-11T22:08:31.3379831Z 
[ OK ] TensorIteratorTest.SerialLoopPointwise_Long (1 ms) 2023-01-11T22:08:31.3380440Z [ RUN ] TensorIteratorTest.SerialLoopPointwise_Float 2023-01-11T22:08:31.3397133Z [ OK ] TensorIteratorTest.SerialLoopPointwise_Float (1 ms) 2023-01-11T22:08:31.3397959Z [ RUN ] TensorIteratorTest.SerialLoopPointwise_Double 2023-01-11T22:08:31.3414512Z [ OK ] TensorIteratorTest.SerialLoopPointwise_Double (1 ms) 2023-01-11T22:08:31.3415146Z [ RUN ] TensorIteratorTest.SerialLoopUnaryNoOutput_Byte 2023-01-11T22:08:31.3415842Z [ OK ] TensorIteratorTest.SerialLoopUnaryNoOutput_Byte (0 ms) 2023-01-11T22:08:31.3416519Z [ RUN ] TensorIteratorTest.SerialLoopUnaryNoOutput_Char 2023-01-11T22:08:31.3417190Z [ OK ] TensorIteratorTest.SerialLoopUnaryNoOutput_Char (0 ms) 2023-01-11T22:08:31.3417945Z [ RUN ] TensorIteratorTest.SerialLoopUnaryNoOutput_Short 2023-01-11T22:08:31.3418729Z [ OK ] TensorIteratorTest.SerialLoopUnaryNoOutput_Short (0 ms) 2023-01-11T22:08:31.3419423Z [ RUN ] TensorIteratorTest.SerialLoopUnaryNoOutput_Int 2023-01-11T22:08:31.3420063Z [ OK ] TensorIteratorTest.SerialLoopUnaryNoOutput_Int (0 ms) 2023-01-11T22:08:31.3420658Z [ RUN ] TensorIteratorTest.SerialLoopUnaryNoOutput_Long 2023-01-11T22:08:31.3421265Z [ OK ] TensorIteratorTest.SerialLoopUnaryNoOutput_Long (0 ms) 2023-01-11T22:08:31.3421796Z [ RUN ] TensorIteratorTest.SerialLoopUnaryNoOutput_Float 2023-01-11T22:08:31.3422603Z [ OK ] TensorIteratorTest.SerialLoopUnaryNoOutput_Float (0 ms) 2023-01-11T22:08:31.3423261Z [ RUN ] TensorIteratorTest.SerialLoopUnaryNoOutput_Double 2023-01-11T22:08:31.3424034Z [ OK ] TensorIteratorTest.SerialLoopUnaryNoOutput_Double (0 ms) 2023-01-11T22:08:31.3424784Z [ RUN ] TensorIteratorTest.SerialLoopBinaryNoOutput_Byte 2023-01-11T22:08:31.3425348Z [ OK ] TensorIteratorTest.SerialLoopBinaryNoOutput_Byte (0 ms) 2023-01-11T22:08:31.3425762Z [ RUN ] TensorIteratorTest.SerialLoopBinaryNoOutput_Char 2023-01-11T22:08:31.3426182Z [ OK ] TensorIteratorTest.SerialLoopBinaryNoOutput_Char (0 ms) 2023-01-11T22:08:31.3426640Z [ RUN ] TensorIteratorTest.SerialLoopBinaryNoOutput_Short 2023-01-11T22:08:31.3427187Z [ OK ] TensorIteratorTest.SerialLoopBinaryNoOutput_Short (0 ms) 2023-01-11T22:08:31.3427869Z [ RUN ] TensorIteratorTest.SerialLoopBinaryNoOutput_Int 2023-01-11T22:08:31.3428587Z [ OK ] TensorIteratorTest.SerialLoopBinaryNoOutput_Int (0 ms) 2023-01-11T22:08:31.3429004Z [ RUN ] TensorIteratorTest.SerialLoopBinaryNoOutput_Long 2023-01-11T22:08:31.3429420Z [ OK ] TensorIteratorTest.SerialLoopBinaryNoOutput_Long (0 ms) 2023-01-11T22:08:31.3429824Z [ RUN ] TensorIteratorTest.SerialLoopBinaryNoOutput_Float 2023-01-11T22:08:31.3430244Z [ OK ] TensorIteratorTest.SerialLoopBinaryNoOutput_Float (0 ms) 2023-01-11T22:08:31.3430662Z [ RUN ] TensorIteratorTest.SerialLoopBinaryNoOutput_Double 2023-01-11T22:08:31.3431089Z [ OK ] TensorIteratorTest.SerialLoopBinaryNoOutput_Double (0 ms) 2023-01-11T22:08:31.3431619Z [ RUN ] TensorIteratorTest.SerialLoopPoinwiseNoOutput_Byte 2023-01-11T22:08:31.3432053Z [ OK ] TensorIteratorTest.SerialLoopPoinwiseNoOutput_Byte (0 ms) 2023-01-11T22:08:31.3432479Z [ RUN ] TensorIteratorTest.SerialLoopPoinwiseNoOutput_Char 2023-01-11T22:08:31.3432898Z [ OK ] TensorIteratorTest.SerialLoopPoinwiseNoOutput_Char (0 ms) 2023-01-11T22:08:31.3433327Z [ RUN ] TensorIteratorTest.SerialLoopPoinwiseNoOutput_Short 2023-01-11T22:08:31.3433763Z [ OK ] TensorIteratorTest.SerialLoopPoinwiseNoOutput_Short (0 ms) 2023-01-11T22:08:31.3434191Z [ RUN ] TensorIteratorTest.SerialLoopPoinwiseNoOutput_Int 2023-01-11T22:08:31.3434608Z 
[ OK ] TensorIteratorTest.SerialLoopPoinwiseNoOutput_Int (0 ms) 2023-01-11T22:08:31.3435034Z [ RUN ] TensorIteratorTest.SerialLoopPoinwiseNoOutput_Long 2023-01-11T22:08:31.3435515Z [ OK ] TensorIteratorTest.SerialLoopPoinwiseNoOutput_Long (0 ms) 2023-01-11T22:08:31.3435940Z [ RUN ] TensorIteratorTest.SerialLoopPoinwiseNoOutput_Float 2023-01-11T22:08:31.3436373Z [ OK ] TensorIteratorTest.SerialLoopPoinwiseNoOutput_Float (0 ms) 2023-01-11T22:08:31.3436817Z [ RUN ] TensorIteratorTest.SerialLoopPoinwiseNoOutput_Double 2023-01-11T22:08:31.3437256Z [ OK ] TensorIteratorTest.SerialLoopPoinwiseNoOutput_Double (0 ms) 2023-01-11T22:08:31.3437655Z [ RUN ] TensorIteratorTest.ComparisonLoopBinary_Byte 2023-01-11T22:08:31.3438053Z [ OK ] TensorIteratorTest.ComparisonLoopBinary_Byte (0 ms) 2023-01-11T22:08:31.3438444Z [ RUN ] TensorIteratorTest.ComparisonLoopBinary_Char 2023-01-11T22:08:31.3438824Z [ OK ] TensorIteratorTest.ComparisonLoopBinary_Char (0 ms) 2023-01-11T22:08:31.3439284Z [ RUN ] TensorIteratorTest.ComparisonLoopBinary_Short 2023-01-11T22:08:31.3439687Z [ OK ] TensorIteratorTest.ComparisonLoopBinary_Short (0 ms) 2023-01-11T22:08:31.3440080Z [ RUN ] TensorIteratorTest.ComparisonLoopBinary_Int 2023-01-11T22:08:31.3440460Z [ OK ] TensorIteratorTest.ComparisonLoopBinary_Int (0 ms) 2023-01-11T22:08:31.3440851Z [ RUN ] TensorIteratorTest.ComparisonLoopBinary_Long 2023-01-11T22:08:31.3441247Z [ OK ] TensorIteratorTest.ComparisonLoopBinary_Long (0 ms) 2023-01-11T22:08:31.3441629Z [ RUN ] TensorIteratorTest.ComparisonLoopBinary_Float 2023-01-11T22:08:31.3442027Z [ OK ] TensorIteratorTest.ComparisonLoopBinary_Float (0 ms) 2023-01-11T22:08:31.3442423Z [ RUN ] TensorIteratorTest.ComparisonLoopBinary_Double 2023-01-11T22:08:31.3442828Z [ OK ] TensorIteratorTest.ComparisonLoopBinary_Double (0 ms) 2023-01-11T22:08:31.3443209Z [ RUN ] TensorIteratorTest.ComparisonLoopBinary_Bool 2023-01-11T22:08:31.3443610Z [ OK ] TensorIteratorTest.ComparisonLoopBinary_Bool (0 ms) 2023-01-11T22:08:31.3444000Z [ RUN ] TensorIteratorTest.SerialLoopSingleThread 2023-01-11T22:08:31.3503245Z [ OK ] TensorIteratorTest.SerialLoopSingleThread (4 ms) 2023-01-11T22:08:31.3503873Z [ RUN ] TensorIteratorTest.InputDType 2023-01-11T22:08:31.3504522Z [ OK ] TensorIteratorTest.InputDType (0 ms) 2023-01-11T22:08:31.3504953Z [ RUN ] TensorIteratorTest.ComputeCommonDTypeInputOnly 2023-01-11T22:08:31.3505373Z [ OK ] TensorIteratorTest.ComputeCommonDTypeInputOnly (0 ms) 2023-01-11T22:08:31.3505817Z [ RUN ] TensorIteratorTest.DoNotComputeCommonDTypeInputOnly 2023-01-11T22:08:31.3506276Z [ OK ] TensorIteratorTest.DoNotComputeCommonDTypeInputOnly (0 ms) 2023-01-11T22:08:31.3506691Z [ RUN ] TensorIteratorTest.FailNonPromotingBinaryOp 2023-01-11T22:08:31.3507101Z [ OK ] TensorIteratorTest.FailNonPromotingBinaryOp (0 ms) 2023-01-11T22:08:31.3507633Z [ RUN ] TensorIteratorTest.CpuKernelMultipleOutputs_Byte 2023-01-11T22:08:31.3508060Z [ OK ] TensorIteratorTest.CpuKernelMultipleOutputs_Byte (0 ms) 2023-01-11T22:08:31.3508467Z [ RUN ] TensorIteratorTest.CpuKernelMultipleOutputs_Char 2023-01-11T22:08:31.3519886Z [ OK ] TensorIteratorTest.CpuKernelMultipleOutputs_Char (0 ms) 2023-01-11T22:08:31.3520531Z [ RUN ] TensorIteratorTest.CpuKernelMultipleOutputs_Short 2023-01-11T22:08:31.3521069Z [ OK ] TensorIteratorTest.CpuKernelMultipleOutputs_Short (0 ms) 2023-01-11T22:08:31.3521649Z [ RUN ] TensorIteratorTest.CpuKernelMultipleOutputs_Int 2023-01-11T22:08:31.3522290Z [ OK ] TensorIteratorTest.CpuKernelMultipleOutputs_Int (0 ms) 2023-01-11T22:08:31.3522946Z [ 
RUN ] TensorIteratorTest.CpuKernelMultipleOutputs_Long 2023-01-11T22:08:31.3523648Z [ OK ] TensorIteratorTest.CpuKernelMultipleOutputs_Long (0 ms) 2023-01-11T22:08:31.3524514Z [ RUN ] TensorIteratorTest.CpuKernelMultipleOutputs_Float 2023-01-11T22:08:31.3525263Z [ OK ] TensorIteratorTest.CpuKernelMultipleOutputs_Float (0 ms) 2023-01-11T22:08:31.3526002Z [ RUN ] TensorIteratorTest.CpuKernelMultipleOutputs_Double 2023-01-11T22:08:31.3526724Z [ OK ] TensorIteratorTest.CpuKernelMultipleOutputs_Double (0 ms) 2023-01-11T22:08:31.3527390Z [----------] 65 tests from TensorIteratorTest (44 ms total) 2023-01-11T22:08:31.3527691Z 2023-01-11T22:08:31.3527976Z [----------] Global test environment tear-down 2023-01-11T22:08:31.3528481Z [==========] 65 tests from 1 test suite ran. (44 ms total) 2023-01-11T22:08:31.3528919Z [ PASSED ] 65 tests. 2023-01-11T22:08:31.4224936Z + ./Dimname_test 2023-01-11T22:08:31.7506716Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:31.7513014Z [==========] Running 4 tests from 1 test suite. 2023-01-11T22:08:31.7513598Z [----------] Global test environment set-up. 2023-01-11T22:08:31.7514108Z [----------] 4 tests from DimnameTest 2023-01-11T22:08:31.7514616Z [ RUN ] DimnameTest.isValidIdentifier 2023-01-11T22:08:31.7515195Z [ OK ] DimnameTest.isValidIdentifier (0 ms) 2023-01-11T22:08:31.7515729Z [ RUN ] DimnameTest.wildcardName 2023-01-11T22:08:31.7516242Z [ OK ] DimnameTest.wildcardName (0 ms) 2023-01-11T22:08:31.7516779Z [ RUN ] DimnameTest.createNormalName 2023-01-11T22:08:31.7533026Z [ OK ] DimnameTest.createNormalName (1 ms) 2023-01-11T22:08:31.7533524Z [ RUN ] DimnameTest.unifyAndMatch 2023-01-11T22:08:31.7533995Z [ OK ] DimnameTest.unifyAndMatch (0 ms) 2023-01-11T22:08:31.7534505Z [----------] 4 tests from DimnameTest (2 ms total) 2023-01-11T22:08:31.7534734Z 2023-01-11T22:08:31.7535010Z [----------] Global test environment tear-down 2023-01-11T22:08:31.7535467Z [==========] 4 tests from 1 test suite ran. (2 ms total) 2023-01-11T22:08:31.7535855Z [ PASSED ] 4 tests. 2023-01-11T22:08:31.8238731Z + ./Dict_test 2023-01-11T22:08:32.1488738Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:32.1489487Z [==========] Running 47 tests from 2 test suites. 2023-01-11T22:08:32.1490073Z [----------] Global test environment set-up. 
2023-01-11T22:08:32.1490594Z [----------] 46 tests from DictTest 2023-01-11T22:08:32.1491142Z [ RUN ] DictTest.givenEmptyDict_whenCallingEmpty_thenReturnsTrue 2023-01-11T22:08:32.1491878Z [ OK ] DictTest.givenEmptyDict_whenCallingEmpty_thenReturnsTrue (0 ms) 2023-01-11T22:08:32.1492660Z [ RUN ] DictTest.givenNonemptyDict_whenCallingEmpty_thenReturnsFalse 2023-01-11T22:08:32.1493246Z [ OK ] DictTest.givenNonemptyDict_whenCallingEmpty_thenReturnsFalse (0 ms) 2023-01-11T22:08:32.1494420Z [ RUN ] DictTest.givenEmptyDict_whenCallingSize_thenReturnsZero 2023-01-11T22:08:32.1495502Z [ OK ] DictTest.givenEmptyDict_whenCallingSize_thenReturnsZero (0 ms) 2023-01-11T22:08:32.1496561Z [ RUN ] DictTest.givenNonemptyDict_whenCallingSize_thenReturnsNumberOfElements 2023-01-11T22:08:32.1497583Z [ OK ] DictTest.givenNonemptyDict_whenCallingSize_thenReturnsNumberOfElements (0 ms) 2023-01-11T22:08:32.1498511Z [ RUN ] DictTest.givenNonemptyDict_whenCallingClear_thenIsEmpty 2023-01-11T22:08:32.1499374Z [ OK ] DictTest.givenNonemptyDict_whenCallingClear_thenIsEmpty (0 ms) 2023-01-11T22:08:32.1500337Z [ RUN ] DictTest.whenInsertingNewKey_thenReturnsTrueAndIteratorToNewElement 2023-01-11T22:08:32.1501397Z [ OK ] DictTest.whenInsertingNewKey_thenReturnsTrueAndIteratorToNewElement (0 ms) 2023-01-11T22:08:32.1502777Z [ RUN ] DictTest.whenInsertingExistingKey_thenReturnsFalseAndIteratorToExistingElement 2023-01-11T22:08:32.1504009Z [ OK ] DictTest.whenInsertingExistingKey_thenReturnsFalseAndIteratorToExistingElement (0 ms) 2023-01-11T22:08:32.1505054Z [ RUN ] DictTest.whenInsertingExistingKey_thenDoesNotModifyDict 2023-01-11T22:08:32.1505995Z [ OK ] DictTest.whenInsertingExistingKey_thenDoesNotModifyDict (0 ms) 2023-01-11T22:08:32.1507035Z [ RUN ] DictTest.whenInsertOrAssigningNewKey_thenReturnsTrueAndIteratorToNewElement 2023-01-11T22:08:32.1508196Z [ OK ] DictTest.whenInsertOrAssigningNewKey_thenReturnsTrueAndIteratorToNewElement (0 ms) 2023-01-11T22:08:32.1509430Z [ RUN ] DictTest.whenInsertOrAssigningExistingKey_thenReturnsFalseAndIteratorToChangedElement 2023-01-11T22:08:32.1510717Z [ OK ] DictTest.whenInsertOrAssigningExistingKey_thenReturnsFalseAndIteratorToChangedElement (0 ms) 2023-01-11T22:08:32.1511810Z [ RUN ] DictTest.whenInsertOrAssigningExistingKey_thenDoesModifyDict 2023-01-11T22:08:32.1512814Z [ OK ] DictTest.whenInsertOrAssigningExistingKey_thenDoesModifyDict (0 ms) 2023-01-11T22:08:32.1513720Z [ RUN ] DictTest.givenEmptyDict_whenIterating_thenBeginIsEnd 2023-01-11T22:08:32.1514635Z [ OK ] DictTest.givenEmptyDict_whenIterating_thenBeginIsEnd (0 ms) 2023-01-11T22:08:32.1515476Z [ RUN ] DictTest.givenMutableDict_whenIterating_thenFindsElements 2023-01-11T22:08:32.1516366Z [ OK ] DictTest.givenMutableDict_whenIterating_thenFindsElements (0 ms) 2023-01-11T22:08:32.1517317Z [ RUN ] DictTest.givenMutableDict_whenIteratingWithForeach_thenFindsElements 2023-01-11T22:08:32.1518313Z [ OK ] DictTest.givenMutableDict_whenIteratingWithForeach_thenFindsElements (0 ms) 2023-01-11T22:08:32.1519265Z [ RUN ] DictTest.givenConstDict_whenIterating_thenFindsElements 2023-01-11T22:08:32.1520150Z [ OK ] DictTest.givenConstDict_whenIterating_thenFindsElements (0 ms) 2023-01-11T22:08:32.1521066Z [ RUN ] DictTest.givenConstDict_whenIteratingWithForeach_thenFindsElements 2023-01-11T22:08:32.1522030Z [ OK ] DictTest.givenConstDict_whenIteratingWithForeach_thenFindsElements (0 ms) 2023-01-11T22:08:32.1522877Z [ RUN ] DictTest.givenIterator_thenCanModifyValue 2023-01-11T22:08:32.1523653Z [ OK ] 
DictTest.givenIterator_thenCanModifyValue (0 ms) 2023-01-11T22:08:32.1524547Z [ RUN ] DictTest.givenOneElementDict_whenErasingByIterator_thenDictIsEmpty 2023-01-11T22:08:32.1525513Z [ OK ] DictTest.givenOneElementDict_whenErasingByIterator_thenDictIsEmpty (0 ms) 2023-01-11T22:08:32.1526521Z [ RUN ] DictTest.givenOneElementDict_whenErasingByKey_thenReturnsOneAndDictIsEmpty 2023-01-11T22:08:32.1527594Z [ OK ] DictTest.givenOneElementDict_whenErasingByKey_thenReturnsOneAndDictIsEmpty (0 ms) 2023-01-11T22:08:32.1528838Z [ RUN ] DictTest.givenOneElementDict_whenErasingByNonexistingKey_thenReturnsZeroAndDictIsUnchanged 2023-01-11T22:08:32.1530027Z [ OK ] DictTest.givenOneElementDict_whenErasingByNonexistingKey_thenReturnsZeroAndDictIsUnchanged (0 ms) 2023-01-11T22:08:32.1531104Z [ RUN ] DictTest.whenCallingAtWithExistingKey_thenReturnsCorrectElement 2023-01-11T22:08:32.1532114Z [ OK ] DictTest.whenCallingAtWithExistingKey_thenReturnsCorrectElement (0 ms) 2023-01-11T22:08:32.1533108Z [ RUN ] DictTest.whenCallingAtWithNonExistingKey_thenReturnsCorrectElement 2023-01-11T22:08:32.1534169Z [ OK ] DictTest.whenCallingAtWithNonExistingKey_thenReturnsCorrectElement (0 ms) 2023-01-11T22:08:32.1535237Z [ RUN ] DictTest.givenMutableDict_whenCallingFindOnExistingKey_thenFindsCorrectElement 2023-01-11T22:08:32.1536395Z [ OK ] DictTest.givenMutableDict_whenCallingFindOnExistingKey_thenFindsCorrectElement (0 ms) 2023-01-11T22:08:32.1537426Z [ RUN ] DictTest.givenMutableDict_whenCallingFindOnNonExistingKey_thenReturnsEnd 2023-01-11T22:08:32.1538458Z [ OK ] DictTest.givenMutableDict_whenCallingFindOnNonExistingKey_thenReturnsEnd (0 ms) 2023-01-11T22:08:32.1539495Z [ RUN ] DictTest.givenConstDict_whenCallingFindOnExistingKey_thenFindsCorrectElement 2023-01-11T22:08:32.1540569Z [ OK ] DictTest.givenConstDict_whenCallingFindOnExistingKey_thenFindsCorrectElement (0 ms) 2023-01-11T22:08:32.1541571Z [ RUN ] DictTest.givenConstDict_whenCallingFindOnNonExistingKey_thenReturnsEnd 2023-01-11T22:08:32.1542699Z [ OK ] DictTest.givenConstDict_whenCallingFindOnNonExistingKey_thenReturnsEnd (0 ms) 2023-01-11T22:08:32.1543691Z [ RUN ] DictTest.whenCallingContainsWithExistingKey_thenReturnsTrue 2023-01-11T22:08:32.1544664Z [ OK ] DictTest.whenCallingContainsWithExistingKey_thenReturnsTrue (0 ms) 2023-01-11T22:08:32.1545678Z [ RUN ] DictTest.whenCallingContainsWithNonExistingKey_thenReturnsFalse 2023-01-11T22:08:32.1546723Z [ OK ] DictTest.whenCallingContainsWithNonExistingKey_thenReturnsFalse (0 ms) 2023-01-11T22:08:32.1547614Z [ RUN ] DictTest.whenCallingReserve_thenDoesntCrash 2023-01-11T22:08:32.1548391Z [ OK ] DictTest.whenCallingReserve_thenDoesntCrash (0 ms) 2023-01-11T22:08:32.1549206Z [ RUN ] DictTest.whenCopyConstructingDict_thenAreEqual 2023-01-11T22:08:32.1550048Z [ OK ] DictTest.whenCopyConstructingDict_thenAreEqual (0 ms) 2023-01-11T22:08:32.1550837Z [ RUN ] DictTest.whenCopyAssigningDict_thenAreEqual 2023-01-11T22:08:32.1551643Z [ OK ] DictTest.whenCopyAssigningDict_thenAreEqual (0 ms) 2023-01-11T22:08:32.1552401Z [ RUN ] DictTest.whenCopyingDict_thenAreEqual 2023-01-11T22:08:32.1553140Z [ OK ] DictTest.whenCopyingDict_thenAreEqual (0 ms) 2023-01-11T22:08:32.1553927Z [ RUN ] DictTest.whenMoveConstructingDict_thenNewIsCorrect 2023-01-11T22:08:32.1554802Z [ OK ] DictTest.whenMoveConstructingDict_thenNewIsCorrect (0 ms) 2023-01-11T22:08:32.1555638Z [ RUN ] DictTest.whenMoveAssigningDict_thenNewIsCorrect 2023-01-11T22:08:32.1556443Z [ OK ] DictTest.whenMoveAssigningDict_thenNewIsCorrect (0 ms) 
2023-01-11T22:08:32.1557293Z [ RUN ] DictTest.whenMoveConstructingDict_thenOldIsUnchanged 2023-01-11T22:08:32.1558179Z [ OK ] DictTest.whenMoveConstructingDict_thenOldIsUnchanged (0 ms) 2023-01-11T22:08:32.1559063Z [ RUN ] DictTest.whenMoveAssigningDict_thenOldIsUnchanged 2023-01-11T22:08:32.1559923Z [ OK ] DictTest.whenMoveAssigningDict_thenOldIsUnchanged (0 ms) 2023-01-11T22:08:32.1560949Z [ RUN ] DictTest.givenIterator_whenPostfixIncrementing_thenMovesToNextAndReturnsOldPosition 2023-01-11T22:08:32.1562214Z [ OK ] DictTest.givenIterator_whenPostfixIncrementing_thenMovesToNextAndReturnsOldPosition (0 ms) 2023-01-11T22:08:32.1563328Z [ RUN ] DictTest.givenIterator_whenPrefixIncrementing_thenMovesToNextAndReturnsNewPosition 2023-01-11T22:08:32.1564488Z [ OK ] DictTest.givenIterator_whenPrefixIncrementing_thenMovesToNextAndReturnsNewPosition (0 ms) 2023-01-11T22:08:32.1565412Z [ RUN ] DictTest.givenEqualIterators_thenAreEqual 2023-01-11T22:08:32.1566212Z [ OK ] DictTest.givenEqualIterators_thenAreEqual (0 ms) 2023-01-11T22:08:32.1567008Z [ RUN ] DictTest.givenDifferentIterators_thenAreNotEqual 2023-01-11T22:08:32.1567887Z [ OK ] DictTest.givenDifferentIterators_thenAreNotEqual (0 ms) 2023-01-11T22:08:32.1568817Z [ RUN ] DictTest.givenIterator_whenDereferencing_thenPointsToCorrectElement 2023-01-11T22:08:32.1569871Z [ OK ] DictTest.givenIterator_whenDereferencing_thenPointsToCorrectElement (0 ms) 2023-01-11T22:08:32.1570798Z [ RUN ] DictTest.givenIterator_whenWritingToValue_thenChangesValue 2023-01-11T22:08:32.1571703Z [ OK ] DictTest.givenIterator_whenWritingToValue_thenChangesValue (0 ms) 2023-01-11T22:08:32.1572391Z [ RUN ] DictTest.isReferenceType 2023-01-11T22:08:32.1572816Z [ OK ] DictTest.isReferenceType (0 ms) 2023-01-11T22:08:32.1573384Z [ RUN ] DictTest.copyHasSeparateStorage 2023-01-11T22:08:32.1573866Z [ OK ] DictTest.copyHasSeparateStorage (0 ms) 2023-01-11T22:08:32.1574176Z [ RUN ] DictTest.dictTensorAsKey 2023-01-11T22:08:32.1574478Z [ OK ] DictTest.dictTensorAsKey (0 ms) 2023-01-11T22:08:32.1574768Z [ RUN ] DictTest.dictEquality 2023-01-11T22:08:32.1575044Z [ OK ] DictTest.dictEquality (0 ms) 2023-01-11T22:08:32.1575358Z [----------] 46 tests from DictTest (0 ms total) 2023-01-11T22:08:32.1575519Z 2023-01-11T22:08:32.1575697Z [----------] 1 test from ListTest_IValueBasedList 2023-01-11T22:08:32.1576127Z [ RUN ] ListTest_IValueBasedList.givenIterator_whenWritingToValueFromIterator_thenChangesValue 2023-01-11T22:08:32.1576639Z [ OK ] ListTest_IValueBasedList.givenIterator_whenWritingToValueFromIterator_thenChangesValue (0 ms) 2023-01-11T22:08:32.1577067Z [----------] 1 test from ListTest_IValueBasedList (0 ms total) 2023-01-11T22:08:32.1577236Z 2023-01-11T22:08:32.1577404Z [----------] Global test environment tear-down 2023-01-11T22:08:32.1577705Z [==========] 47 tests from 2 test suites ran. (0 ms total) 2023-01-11T22:08:32.1577967Z [ PASSED ] 47 tests. 2023-01-11T22:08:32.2206953Z + ./NamedTensor_test 2023-01-11T22:08:32.5457564Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:32.5461749Z [==========] Running 10 tests from 1 test suite. 2023-01-11T22:08:32.5462273Z [----------] Global test environment set-up. 2023-01-11T22:08:32.5462957Z [----------] 10 tests from NamedTensorTest 2023-01-11T22:08:32.5463408Z [ RUN ] NamedTensorTest.isNamed 2023-01-11T22:08:32.5464168Z [W TensorImpl.h:1816] Warning: Named tensors and all their associated APIs are an experimental feature and subject to change. 
Please do not use them for anything important until they are released as stable. (function operator()) 2023-01-11T22:08:32.5465129Z [ OK ] NamedTensorTest.isNamed (0 ms) 2023-01-11T22:08:32.5465663Z [ RUN ] NamedTensorTest.attachMetadata 2023-01-11T22:08:32.5466244Z [ OK ] NamedTensorTest.attachMetadata (0 ms) 2023-01-11T22:08:32.5466871Z [ RUN ] NamedTensorTest.internalSetNamesInplace 2023-01-11T22:08:32.5467542Z [ OK ] NamedTensorTest.internalSetNamesInplace (0 ms) 2023-01-11T22:08:32.5468410Z [ RUN ] NamedTensorTest.empty 2023-01-11T22:08:32.5501572Z [ OK ] NamedTensorTest.empty (3 ms) 2023-01-11T22:08:32.5502181Z [ RUN ] NamedTensorTest.dimnameToPosition 2023-01-11T22:08:32.5507640Z [ OK ] NamedTensorTest.dimnameToPosition (0 ms) 2023-01-11T22:08:32.5508284Z [ RUN ] NamedTensorTest.unifyFromRight 2023-01-11T22:08:32.5576615Z [ OK ] NamedTensorTest.unifyFromRight (6 ms) 2023-01-11T22:08:32.5577165Z [ RUN ] NamedTensorTest.alias 2023-01-11T22:08:32.5577725Z [ OK ] NamedTensorTest.alias (0 ms) 2023-01-11T22:08:32.5578047Z [ RUN ] NamedTensorTest.NoNamesGuard 2023-01-11T22:08:32.5578374Z [ OK ] NamedTensorTest.NoNamesGuard (0 ms) 2023-01-11T22:08:32.5578687Z [ RUN ] NamedTensorTest.TensorNamePrint 2023-01-11T22:08:32.5579030Z [ OK ] NamedTensorTest.TensorNamePrint (0 ms) 2023-01-11T22:08:32.5579574Z [ RUN ] NamedTensorTest.TensorNamesCheckUnique 2023-01-11T22:08:32.5583002Z [ OK ] NamedTensorTest.TensorNamesCheckUnique (0 ms) 2023-01-11T22:08:32.5583653Z [----------] 10 tests from NamedTensorTest (12 ms total) 2023-01-11T22:08:32.5583969Z 2023-01-11T22:08:32.5584243Z [----------] Global test environment tear-down 2023-01-11T22:08:32.5584564Z [==========] 10 tests from 1 test suite ran. (12 ms total) 2023-01-11T22:08:32.5584811Z [ PASSED ] 10 tests. 2023-01-11T22:08:32.6304726Z + ./cpu_generator_test 2023-01-11T22:08:32.9538853Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:32.9544772Z [==========] Running 15 tests from 1 test suite. 2023-01-11T22:08:32.9545214Z [----------] Global test environment set-up. 
2023-01-11T22:08:32.9545537Z [----------] 15 tests from CPUGeneratorImpl 2023-01-11T22:08:32.9546024Z [ RUN ] CPUGeneratorImpl.TestGeneratorDynamicCast 2023-01-11T22:08:32.9546641Z [ OK ] CPUGeneratorImpl.TestGeneratorDynamicCast (0 ms) 2023-01-11T22:08:32.9547094Z [ RUN ] CPUGeneratorImpl.TestDefaultGenerator 2023-01-11T22:08:32.9547458Z [ OK ] CPUGeneratorImpl.TestDefaultGenerator (0 ms) 2023-01-11T22:08:32.9547850Z [ RUN ] CPUGeneratorImpl.TestCloning 2023-01-11T22:08:32.9548363Z [ OK ] CPUGeneratorImpl.TestCloning (0 ms) 2023-01-11T22:08:32.9548817Z [ RUN ] CPUGeneratorImpl.TestMultithreadingGetEngineOperator 2023-01-11T22:08:32.9549295Z [ OK ] CPUGeneratorImpl.TestMultithreadingGetEngineOperator (0 ms) 2023-01-11T22:08:32.9549710Z [ RUN ] CPUGeneratorImpl.TestGetSetCurrentSeed 2023-01-11T22:08:32.9550089Z [ OK ] CPUGeneratorImpl.TestGetSetCurrentSeed (0 ms) 2023-01-11T22:08:32.9550494Z [ RUN ] CPUGeneratorImpl.TestMultithreadingGetSetCurrentSeed 2023-01-11T22:08:32.9552514Z [ OK ] CPUGeneratorImpl.TestMultithreadingGetSetCurrentSeed (0 ms) 2023-01-11T22:08:32.9552915Z [ RUN ] CPUGeneratorImpl.TestRNGForking 2023-01-11T22:08:32.9573572Z [ OK ] CPUGeneratorImpl.TestRNGForking (2 ms) 2023-01-11T22:08:32.9574085Z [ RUN ] CPUGeneratorImpl.TestPhiloxEngineReproducibility 2023-01-11T22:08:32.9574554Z [ OK ] CPUGeneratorImpl.TestPhiloxEngineReproducibility (0 ms) 2023-01-11T22:08:32.9574970Z [ RUN ] CPUGeneratorImpl.TestPhiloxEngineOffset1 2023-01-11T22:08:32.9575352Z [ OK ] CPUGeneratorImpl.TestPhiloxEngineOffset1 (0 ms) 2023-01-11T22:08:32.9575735Z [ RUN ] CPUGeneratorImpl.TestPhiloxEngineOffset2 2023-01-11T22:08:32.9576123Z [ OK ] CPUGeneratorImpl.TestPhiloxEngineOffset2 (0 ms) 2023-01-11T22:08:32.9576493Z [ RUN ] CPUGeneratorImpl.TestPhiloxEngineOffset3 2023-01-11T22:08:32.9576969Z [ OK ] CPUGeneratorImpl.TestPhiloxEngineOffset3 (0 ms) 2023-01-11T22:08:32.9577505Z [ RUN ] CPUGeneratorImpl.TestPhiloxEngineIndex 2023-01-11T22:08:32.9578017Z [ OK ] CPUGeneratorImpl.TestPhiloxEngineIndex (0 ms) 2023-01-11T22:08:32.9578683Z [ RUN ] CPUGeneratorImpl.TestMT19937EngineReproducibility 2023-01-11T22:08:32.9579129Z [ OK ] CPUGeneratorImpl.TestMT19937EngineReproducibility (0 ms) 2023-01-11T22:08:32.9579578Z [ RUN ] CPUGeneratorImpl.TestPhiloxEngineReproducibilityRandN 2023-01-11T22:08:32.9580042Z [ OK ] CPUGeneratorImpl.TestPhiloxEngineReproducibilityRandN (0 ms) 2023-01-11T22:08:32.9580465Z [ RUN ] CPUGeneratorImpl.TestPhiloxDeterministic 2023-01-11T22:08:32.9580854Z [ OK ] CPUGeneratorImpl.TestPhiloxDeterministic (0 ms) 2023-01-11T22:08:32.9581226Z [----------] 15 tests from CPUGeneratorImpl (3 ms total) 2023-01-11T22:08:32.9581376Z 2023-01-11T22:08:32.9581541Z [----------] Global test environment tear-down 2023-01-11T22:08:32.9581925Z [==========] 15 tests from 1 test suite ran. (3 ms total) 2023-01-11T22:08:32.9582185Z [ PASSED ] 15 tests. 2023-01-11T22:08:33.0283236Z + ./legacy_vmap_test 2023-01-11T22:08:33.3625353Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:33.3630805Z [==========] Running 23 tests from 1 test suite. 2023-01-11T22:08:33.3631230Z [----------] Global test environment set-up. 
2023-01-11T22:08:33.3631506Z [----------] 23 tests from VmapTest 2023-01-11T22:08:33.3631798Z [ RUN ] VmapTest.TestBatchedTensor 2023-01-11T22:08:33.3654828Z [ OK ] VmapTest.TestBatchedTensor (2 ms) 2023-01-11T22:08:33.3655365Z [ RUN ] VmapTest.TestBatchedTensorMaxLevel 2023-01-11T22:08:33.3669723Z [ OK ] VmapTest.TestBatchedTensorMaxLevel (1 ms) 2023-01-11T22:08:33.3670128Z [ RUN ] VmapTest.TestBatchedTensorActualDim 2023-01-11T22:08:33.3694565Z [ OK ] VmapTest.TestBatchedTensorActualDim (2 ms) 2023-01-11T22:08:33.3694947Z [ RUN ] VmapTest.TestMultiBatchVmapTransform 2023-01-11T22:08:33.3704848Z [ OK ] VmapTest.TestMultiBatchVmapTransform (1 ms) 2023-01-11T22:08:33.3705281Z [ RUN ] VmapTest.TestVmapPhysicalViewGetPhysicalDim 2023-01-11T22:08:33.3714442Z [ OK ] VmapTest.TestVmapPhysicalViewGetPhysicalDim (0 ms) 2023-01-11T22:08:33.3714900Z [ RUN ] VmapTest.TestVmapPhysicalViewGetPhysicalDims 2023-01-11T22:08:33.3724225Z [ OK ] VmapTest.TestVmapPhysicalViewGetPhysicalDims (0 ms) 2023-01-11T22:08:33.3724690Z [ RUN ] VmapTest.TestVmapPhysicalViewNewLogicalFromPhysical 2023-01-11T22:08:33.3725141Z [ OK ] VmapTest.TestVmapPhysicalViewNewLogicalFromPhysical (0 ms) 2023-01-11T22:08:33.3725529Z [ RUN ] VmapTest.TestBatchedTensorSum 2023-01-11T22:08:33.3728419Z [ OK ] VmapTest.TestBatchedTensorSum (0 ms) 2023-01-11T22:08:33.3728874Z [ RUN ] VmapTest.TestBroadcastingVmapTransformBatchedBatched 2023-01-11T22:08:33.3734798Z [ OK ] VmapTest.TestBroadcastingVmapTransformBatchedBatched (0 ms) 2023-01-11T22:08:33.3735275Z [ RUN ] VmapTest.TestBroadcastingVmapTransformBatchedUnbatched 2023-01-11T22:08:33.3740269Z [ OK ] VmapTest.TestBroadcastingVmapTransformBatchedUnbatched (0 ms) 2023-01-11T22:08:33.3740714Z [ RUN ] VmapTest.TestBroadcastingVmapTransformMaxLevels 2023-01-11T22:08:33.3744271Z [ OK ] VmapTest.TestBroadcastingVmapTransformMaxLevels (0 ms) 2023-01-11T22:08:33.3744650Z [ RUN ] VmapTest.TestBatchedTensorMul 2023-01-11T22:08:33.3746494Z [ OK ] VmapTest.TestBatchedTensorMul (0 ms) 2023-01-11T22:08:33.3746813Z [ RUN ] VmapTest.TestBatchedTensorSize 2023-01-11T22:08:33.3751746Z [ OK ] VmapTest.TestBatchedTensorSize (0 ms) 2023-01-11T22:08:33.3752622Z [ RUN ] VmapTest.TestVmapPhysicalViewGetPhysicalShape 2023-01-11T22:08:33.3753052Z [ OK ] VmapTest.TestVmapPhysicalViewGetPhysicalShape (0 ms) 2023-01-11T22:08:33.3753416Z [ RUN ] VmapTest.TestBatchedTensorExpand 2023-01-11T22:08:33.3894330Z [ OK ] VmapTest.TestBatchedTensorExpand (14 ms) 2023-01-11T22:08:33.3894928Z [ RUN ] VmapTest.TestBatchedTensorUnsqueeze 2023-01-11T22:08:33.3895387Z [ OK ] VmapTest.TestBatchedTensorUnsqueeze (0 ms) 2023-01-11T22:08:33.3895804Z [ RUN ] VmapTest.TestBatchedTensorSqueeze 2023-01-11T22:08:33.3896672Z [ OK ] VmapTest.TestBatchedTensorSqueeze (0 ms) 2023-01-11T22:08:33.3897086Z [ RUN ] VmapTest.TestBatchedTensorTranspose 2023-01-11T22:08:33.3899291Z [ OK ] VmapTest.TestBatchedTensorTranspose (0 ms) 2023-01-11T22:08:33.3899648Z [ RUN ] VmapTest.TestBatchedTensorPermute 2023-01-11T22:08:33.3901719Z [ OK ] VmapTest.TestBatchedTensorPermute (0 ms) 2023-01-11T22:08:33.3902129Z [ RUN ] VmapTest.TestMultiBatchVmapTransformBatchedBatched 2023-01-11T22:08:33.3928064Z [ OK ] VmapTest.TestMultiBatchVmapTransformBatchedBatched (2 ms) 2023-01-11T22:08:33.3928895Z [ RUN ] VmapTest.TestMultiBatchVmapTransformBatchedUnbatched 2023-01-11T22:08:33.3934604Z [ OK ] VmapTest.TestMultiBatchVmapTransformBatchedUnbatched (0 ms) 2023-01-11T22:08:33.3935399Z [ RUN ] VmapTest.TestMultiBatchVmapTransformMaxLevels 2023-01-11T22:08:33.3939681Z [ OK 
] VmapTest.TestMultiBatchVmapTransformMaxLevels (0 ms) 2023-01-11T22:08:33.3940472Z [ RUN ] VmapTest.TestMultiBatchVmapTransformMultipleTensors 2023-01-11T22:08:33.3944942Z [ OK ] VmapTest.TestMultiBatchVmapTransformMultipleTensors (0 ms) 2023-01-11T22:08:33.3945523Z [----------] 23 tests from VmapTest (31 ms total) 2023-01-11T22:08:33.3945756Z 2023-01-11T22:08:33.3946011Z [----------] Global test environment tear-down 2023-01-11T22:08:33.3946474Z [==========] 23 tests from 1 test suite ran. (31 ms total) 2023-01-11T22:08:33.3946850Z [ PASSED ] 23 tests. 2023-01-11T22:08:33.4691807Z + ./operators_test 2023-01-11T22:08:33.7954234Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:33.7954752Z [==========] Running 4 tests from 1 test suite. 2023-01-11T22:08:33.7955066Z [----------] Global test environment set-up. 2023-01-11T22:08:33.7955375Z [----------] 4 tests from OperatorsTest 2023-01-11T22:08:33.7955684Z [ RUN ] OperatorsTest.TestFunctionDecltype 2023-01-11T22:08:33.7960153Z [ OK ] OperatorsTest.TestFunctionDecltype (0 ms) 2023-01-11T22:08:33.7960624Z [ RUN ] OperatorsTest.TestMethodOnlyDecltype 2023-01-11T22:08:33.7961368Z [ OK ] OperatorsTest.TestMethodOnlyDecltype (0 ms) 2023-01-11T22:08:33.7961718Z [ RUN ] OperatorsTest.Test_ATEN_FN 2023-01-11T22:08:33.7992047Z [ OK ] OperatorsTest.Test_ATEN_FN (2 ms) 2023-01-11T22:08:33.7992635Z [ RUN ] OperatorsTest.TestOutVariantIsFaithful 2023-01-11T22:08:33.7993015Z [ OK ] OperatorsTest.TestOutVariantIsFaithful (0 ms) 2023-01-11T22:08:33.7993385Z [----------] 4 tests from OperatorsTest (3 ms total) 2023-01-11T22:08:33.7993540Z 2023-01-11T22:08:33.7993709Z [----------] Global test environment tear-down 2023-01-11T22:08:33.7994025Z [==========] 4 tests from 1 test suite ran. (3 ms total) 2023-01-11T22:08:33.7994269Z [ PASSED ] 4 tests. 2023-01-11T22:08:33.8728254Z + [[ -x ./cudnn_test ]] 2023-01-11T22:08:33.8728528Z + [[ -x ./cuda_generator_test ]] 2023-01-11T22:08:33.8728716Z + ./cuda_generator_test 2023-01-11T22:08:34.1937126Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:34.1937904Z [==========] Running 11 tests from 1 test suite. 2023-01-11T22:08:34.1938231Z [----------] Global test environment set-up. 
2023-01-11T22:08:34.1938591Z [----------] 11 tests from CUDAGeneratorImpl 2023-01-11T22:08:34.1938965Z [ RUN ] CUDAGeneratorImpl.TestPhiloxEngineReproducibility 2023-01-11T22:08:34.1939478Z [ OK ] CUDAGeneratorImpl.TestPhiloxEngineReproducibility (0 ms) 2023-01-11T22:08:34.1939953Z [ RUN ] CUDAGeneratorImpl.TestPhiloxEngineOffset1 2023-01-11T22:08:34.1940339Z [ OK ] CUDAGeneratorImpl.TestPhiloxEngineOffset1 (0 ms) 2023-01-11T22:08:34.1940788Z [ RUN ] CUDAGeneratorImpl.TestPhiloxEngineOffset2 2023-01-11T22:08:34.1941182Z [ OK ] CUDAGeneratorImpl.TestPhiloxEngineOffset2 (0 ms) 2023-01-11T22:08:34.1941610Z [ RUN ] CUDAGeneratorImpl.TestPhiloxEngineOffset3 2023-01-11T22:08:34.1942077Z [ OK ] CUDAGeneratorImpl.TestPhiloxEngineOffset3 (0 ms) 2023-01-11T22:08:34.1942706Z [ RUN ] CUDAGeneratorImpl.TestPhiloxEngineIndex 2023-01-11T22:08:34.1943155Z [ OK ] CUDAGeneratorImpl.TestPhiloxEngineIndex (0 ms) 2023-01-11T22:08:34.1943529Z [ RUN ] CUDAGeneratorImpl.TestGeneratorDynamicCast 2023-01-11T22:08:34.1943993Z [ OK ] CUDAGeneratorImpl.TestGeneratorDynamicCast (0 ms) 2023-01-11T22:08:34.1944381Z [ RUN ] CUDAGeneratorImpl.TestDefaultGenerator 2023-01-11T22:08:34.1944805Z [ OK ] CUDAGeneratorImpl.TestDefaultGenerator (0 ms) 2023-01-11T22:08:34.1945149Z [ RUN ] CUDAGeneratorImpl.TestCloning 2023-01-11T22:08:34.1945534Z [ OK ] CUDAGeneratorImpl.TestCloning (0 ms) 2023-01-11T22:08:34.1945922Z [ RUN ] CUDAGeneratorImpl.TestMultithreadingGetSetCurrentSeed 2023-01-11T22:08:34.1946457Z [ OK ] CUDAGeneratorImpl.TestMultithreadingGetSetCurrentSeed (0 ms) 2023-01-11T22:08:34.1946914Z [ RUN ] CUDAGeneratorImpl.TestRNGForking 2023-01-11T22:08:34.1947260Z [ OK ] CUDAGeneratorImpl.TestRNGForking (0 ms) 2023-01-11T22:08:34.1947623Z [ RUN ] CUDAGeneratorImpl.TestMultithreadRNG 2023-01-11T22:08:34.1948013Z [ OK ] CUDAGeneratorImpl.TestMultithreadRNG (0 ms) 2023-01-11T22:08:34.1948393Z [----------] 11 tests from CUDAGeneratorImpl (0 ms total) 2023-01-11T22:08:34.1948593Z 2023-01-11T22:08:34.1948752Z [----------] Global test environment tear-down 2023-01-11T22:08:34.1949061Z [==========] 11 tests from 1 test suite ran. (0 ms total) 2023-01-11T22:08:34.1949375Z [ PASSED ] 11 tests. 2023-01-11T22:08:34.2658485Z + [[ -x ./apply_test ]] 2023-01-11T22:08:34.2658865Z + [[ -x ./stream_test ]] 2023-01-11T22:08:34.2659092Z + [[ -x ./cuda_half_test ]] 2023-01-11T22:08:34.2659282Z + ./cuda_half_test 2023-01-11T22:08:34.5857742Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:34.5858602Z [==========] Running 1 test from 1 test suite. 2023-01-11T22:08:34.5859153Z [----------] Global test environment set-up. 2023-01-11T22:08:34.5859669Z [----------] 1 test from HalfCuda 2023-01-11T22:08:34.5860121Z [ RUN ] HalfCuda.HalfCuda 2023-01-11T22:08:34.5860585Z [ OK ] HalfCuda.HalfCuda (0 ms) 2023-01-11T22:08:34.5860895Z [----------] 1 test from HalfCuda (0 ms total) 2023-01-11T22:08:34.5861029Z 2023-01-11T22:08:34.5861199Z [----------] Global test environment tear-down 2023-01-11T22:08:34.5861507Z [==========] 1 test from 1 test suite ran. (0 ms total) 2023-01-11T22:08:34.5861766Z [ PASSED ] 1 test. 2023-01-11T22:08:34.6551381Z + [[ -x ./cuda_vectorized_test ]] 2023-01-11T22:08:34.6551807Z + ./cuda_vectorized_test 2023-01-11T22:08:34.9726619Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:34.9727689Z [==========] Running 2 tests from 1 test suite. 
2023-01-11T22:08:34.9728005Z [----------] Global test environment set-up. 2023-01-11T22:08:34.9728344Z [----------] 2 tests from TestVectorizedMemoryAccess 2023-01-11T22:08:34.9728711Z [ RUN ] TestVectorizedMemoryAccess.CanVectorizeUpTo 2023-01-11T22:08:34.9729122Z [ OK ] TestVectorizedMemoryAccess.CanVectorizeUpTo (0 ms) 2023-01-11T22:08:34.9729494Z [ RUN ] TestVectorizedMemoryAccess.CopyKernel 2023-01-11T22:08:34.9729866Z [ OK ] TestVectorizedMemoryAccess.CopyKernel (0 ms) 2023-01-11T22:08:34.9730257Z [----------] 2 tests from TestVectorizedMemoryAccess (0 ms total) 2023-01-11T22:08:34.9730440Z 2023-01-11T22:08:34.9730607Z [----------] Global test environment tear-down 2023-01-11T22:08:34.9730907Z [==========] 2 tests from 1 test suite ran. (0 ms total) 2023-01-11T22:08:34.9731247Z [ PASSED ] 2 tests. 2023-01-11T22:08:35.0450647Z + [[ -x ./cuda_distributions_test ]] 2023-01-11T22:08:35.0450889Z + ./cuda_distributions_test 2023-01-11T22:08:35.3697670Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:35.3698430Z [==========] Running 4 tests from 2 test suites. 2023-01-11T22:08:35.3699123Z [----------] Global test environment set-up. 2023-01-11T22:08:35.3699560Z [----------] 3 tests from DistributionsTest 2023-01-11T22:08:35.3699959Z [ RUN ] DistributionsTest.TestPhiloxIncrementSmallUniformTensor 2023-01-11T22:08:35.3700447Z [ OK ] DistributionsTest.TestPhiloxIncrementSmallUniformTensor (0 ms) 2023-01-11T22:08:35.3700909Z [ RUN ] DistributionsTest.TestPhiloxIncrementBigUniformTensor 2023-01-11T22:08:35.3701382Z [ OK ] DistributionsTest.TestPhiloxIncrementBigUniformTensor (0 ms) 2023-01-11T22:08:35.3701882Z [ RUN ] DistributionsTest.TestPhiloxIncrementSmallMultinomialTensor 2023-01-11T22:08:35.3702660Z [ OK ] DistributionsTest.TestPhiloxIncrementSmallMultinomialTensor (0 ms) 2023-01-11T22:08:35.3703090Z [----------] 3 tests from DistributionsTest (0 ms total) 2023-01-11T22:08:35.3703253Z 2023-01-11T22:08:35.3703430Z [----------] 1 test from RandomPermutationTest 2023-01-11T22:08:35.3703790Z [ RUN ] RandomPermutationTest.TestIslandShuffle 2023-01-11T22:08:35.3704161Z [ OK ] RandomPermutationTest.TestIslandShuffle (0 ms) 2023-01-11T22:08:35.3704535Z [----------] 1 test from RandomPermutationTest (0 ms total) 2023-01-11T22:08:35.3704702Z 2023-01-11T22:08:35.3704869Z [----------] Global test environment tear-down 2023-01-11T22:08:35.3705180Z [==========] 4 tests from 2 test suites ran. (0 ms total) 2023-01-11T22:08:35.3705426Z [ PASSED ] 4 tests. 2023-01-11T22:08:35.4417030Z + [[ -x ./cuda_optional_test ]] 2023-01-11T22:08:35.4417274Z + ./cuda_optional_test 2023-01-11T22:08:35.7591200Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:35.7591763Z [==========] Running 1 test from 1 test suite. 2023-01-11T22:08:35.7592082Z [----------] Global test environment set-up. 2023-01-11T22:08:35.7592380Z [----------] 1 test from OptionalTest 2023-01-11T22:08:35.7592748Z [ RUN ] OptionalTest.OptionalTestCUDA 2023-01-11T22:08:35.7593084Z [ OK ] OptionalTest.OptionalTestCUDA (0 ms) 2023-01-11T22:08:35.7593462Z [----------] 1 test from OptionalTest (0 ms total) 2023-01-11T22:08:35.7593613Z 2023-01-11T22:08:35.7593780Z [----------] Global test environment tear-down 2023-01-11T22:08:35.7594118Z [==========] 1 test from 1 test suite ran. (0 ms total) 2023-01-11T22:08:35.7594393Z [ PASSED ] 1 test. 
2023-01-11T22:08:35.8288453Z + [[ -x ./cuda_tensor_interop_test ]] 2023-01-11T22:08:35.8288983Z + [[ -x ./cuda_complex_test ]] 2023-01-11T22:08:35.8289182Z + ./cuda_complex_test 2023-01-11T22:08:36.1515002Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:36.1520288Z [==========] Running 11 tests from 7 test suites. 2023-01-11T22:08:36.1520892Z [----------] Global test environment set-up. 2023-01-11T22:08:36.1521435Z [----------] 2 tests from TestMemory 2023-01-11T22:08:36.1521879Z [ RUN ] TestMemory.ReinterpretCast 2023-01-11T22:08:36.1522189Z [ OK ] TestMemory.ReinterpretCast (0 ms) 2023-01-11T22:08:36.1522518Z [ RUN ] TestMemory.ThrustReinterpretCast 2023-01-11T22:08:36.1522863Z [ OK ] TestMemory.ThrustReinterpretCast (0 ms) 2023-01-11T22:08:36.1523185Z [----------] 2 tests from TestMemory (0 ms total) 2023-01-11T22:08:36.1523333Z 2023-01-11T22:08:36.1523491Z [----------] 2 tests from TestConstructors 2023-01-11T22:08:36.1523985Z [ RUN ] TestConstructors.FromThrust 2023-01-11T22:08:36.1524299Z [ OK ] TestConstructors.FromThrust (0 ms) 2023-01-11T22:08:36.1524627Z [ RUN ] TestConstructors.UnorderedMap 2023-01-11T22:08:36.1524957Z [ OK ] TestConstructors.UnorderedMap (0 ms) 2023-01-11T22:08:36.1525299Z [----------] 2 tests from TestConstructors (0 ms total) 2023-01-11T22:08:36.1525446Z 2023-01-11T22:08:36.1525596Z [----------] 1 test from TestAssignment 2023-01-11T22:08:36.1525889Z [ RUN ] TestAssignment.FromThrust 2023-01-11T22:08:36.1526346Z [ OK ] TestAssignment.FromThrust (0 ms) 2023-01-11T22:08:36.1526658Z [----------] 1 test from TestAssignment (0 ms total) 2023-01-11T22:08:36.1526811Z 2023-01-11T22:08:36.1527007Z [----------] 1 test from TestArithmeticIntScalar 2023-01-11T22:08:36.1527528Z [ RUN ] TestArithmeticIntScalar.All 2023-01-11T22:08:36.1528118Z [ OK ] TestArithmeticIntScalar.All (0 ms) 2023-01-11T22:08:36.1528575Z [----------] 1 test from TestArithmeticIntScalar (0 ms total) 2023-01-11T22:08:36.1528757Z 2023-01-11T22:08:36.1528894Z [----------] 1 test from TestIO 2023-01-11T22:08:36.1529135Z [ RUN ] TestIO.All 2023-01-11T22:08:36.1529379Z [ OK ] TestIO.All (0 ms) 2023-01-11T22:08:36.1529658Z [----------] 1 test from TestIO (0 ms total) 2023-01-11T22:08:36.1529801Z 2023-01-11T22:08:36.1529988Z [----------] 1 test from TestStd 2023-01-11T22:08:36.1530259Z [ RUN ] TestStd.BasicFunctions 2023-01-11T22:08:36.1530538Z [ OK ] TestStd.BasicFunctions (0 ms) 2023-01-11T22:08:36.1530842Z [----------] 1 test from TestStd (0 ms total) 2023-01-11T22:08:36.1530982Z 2023-01-11T22:08:36.1531127Z [----------] 3 tests from DeviceTests 2023-01-11T22:08:36.1531405Z [ RUN ] DeviceTests.ThrustConversion 2023-01-11T22:08:36.1531734Z [ OK ] DeviceTests.ThrustConversion (0 ms) 2023-01-11T22:08:36.1532039Z [ RUN ] DeviceTests.StdFunctions 2023-01-11T22:08:36.1532333Z [ OK ] DeviceTests.StdFunctions (0 ms) 2023-01-11T22:08:36.1532644Z [ RUN ] DeviceTests.ReinterpretCast 2023-01-11T22:08:36.1532963Z [ OK ] DeviceTests.ReinterpretCast (0 ms) 2023-01-11T22:08:36.1533287Z [----------] 3 tests from DeviceTests (0 ms total) 2023-01-11T22:08:36.1533424Z 2023-01-11T22:08:36.1533590Z [----------] Global test environment tear-down 2023-01-11T22:08:36.1533907Z [==========] 11 tests from 7 test suites ran. (0 ms total) 2023-01-11T22:08:36.1534171Z [ PASSED ] 11 tests. 
2023-01-11T22:08:36.2253698Z + [[ -x ./cuda_complex_math_test ]] 2023-01-11T22:08:36.2253983Z + ./cuda_complex_math_test 2023-01-11T22:08:36.5510669Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:36.5511678Z [==========] Running 30 tests from 24 test suites. 2023-01-11T22:08:36.5512155Z [----------] Global test environment set-up. 2023-01-11T22:08:36.5512660Z [----------] 2 tests from TestExponentialDevice 2023-01-11T22:08:36.5513133Z [ RUN ] TestExponentialDevice.IPi 2023-01-11T22:08:36.5513865Z [ OK ] TestExponentialDevice.IPi (0 ms) 2023-01-11T22:08:36.5514427Z [ RUN ] TestExponentialDevice.EulerFormula 2023-01-11T22:08:36.5514964Z [ OK ] TestExponentialDevice.EulerFormula (0 ms) 2023-01-11T22:08:36.5515561Z [----------] 2 tests from TestExponentialDevice (0 ms total) 2023-01-11T22:08:36.5515799Z 2023-01-11T22:08:36.5516053Z [----------] 1 test from TestLogDevice 2023-01-11T22:08:36.5516505Z [ RUN ] TestLogDevice.Definition 2023-01-11T22:08:36.5517012Z [ OK ] TestLogDevice.Definition (0 ms) 2023-01-11T22:08:36.5517827Z [----------] 1 test from TestLogDevice (0 ms total) 2023-01-11T22:08:36.5518121Z 2023-01-11T22:08:36.5518375Z [----------] 1 test from TestLog10Device 2023-01-11T22:08:36.5518874Z [ RUN ] TestLog10Device.Rev 2023-01-11T22:08:36.5519288Z [ OK ] TestLog10Device.Rev (0 ms) 2023-01-11T22:08:36.5519590Z [----------] 1 test from TestLog10Device (0 ms total) 2023-01-11T22:08:36.5519760Z 2023-01-11T22:08:36.5519951Z [----------] 1 test from TestLog2Device 2023-01-11T22:08:36.5520269Z [ RUN ] TestLog2Device.Rev 2023-01-11T22:08:36.5520559Z [ OK ] TestLog2Device.Rev (0 ms) 2023-01-11T22:08:36.5520909Z [----------] 1 test from TestLog2Device (0 ms total) 2023-01-11T22:08:36.5521060Z 2023-01-11T22:08:36.5521219Z [----------] 3 tests from TestLog1pDevice 2023-01-11T22:08:36.5521560Z [ RUN ] TestLog1pDevice.Normal 2023-01-11T22:08:36.5521842Z [ OK ] TestLog1pDevice.Normal (0 ms) 2023-01-11T22:08:36.5522171Z [ RUN ] TestLog1pDevice.Small 2023-01-11T22:08:36.5522475Z [ OK ] TestLog1pDevice.Small (0 ms) 2023-01-11T22:08:36.5522749Z [ RUN ] TestLog1pDevice.Extreme 2023-01-11T22:08:36.5523108Z [ OK ] TestLog1pDevice.Extreme (0 ms) 2023-01-11T22:08:36.5523430Z [----------] 3 tests from TestLog1pDevice (0 ms total) 2023-01-11T22:08:36.5523587Z 2023-01-11T22:08:36.5523793Z [----------] 1 test from TestPowSqrtDevice 2023-01-11T22:08:36.5524088Z [ RUN ] TestPowSqrtDevice.Equal 2023-01-11T22:08:36.5524384Z [ OK ] TestPowSqrtDevice.Equal (0 ms) 2023-01-11T22:08:36.5524758Z [----------] 1 test from TestPowSqrtDevice (0 ms total) 2023-01-11T22:08:36.5524917Z 2023-01-11T22:08:36.5525065Z [----------] 1 test from TestPowDevice 2023-01-11T22:08:36.5525398Z [ RUN ] TestPowDevice.Square 2023-01-11T22:08:36.5525683Z [ OK ] TestPowDevice.Square (0 ms) 2023-01-11T22:08:36.5526008Z [----------] 1 test from TestPowDevice (0 ms total) 2023-01-11T22:08:36.5526194Z 2023-01-11T22:08:36.5526375Z [----------] 1 test from TestSinCosSinhCoshDevice 2023-01-11T22:08:36.5526706Z [ RUN ] TestSinCosSinhCoshDevice.Identity 2023-01-11T22:08:36.5527102Z [ OK ] TestSinCosSinhCoshDevice.Identity (0 ms) 2023-01-11T22:08:36.5527470Z [----------] 1 test from TestSinCosSinhCoshDevice (0 ms total) 2023-01-11T22:08:36.5527709Z 2023-01-11T22:08:36.5527858Z [----------] 1 test from TestTanDevice 2023-01-11T22:08:36.5528134Z [ RUN ] TestTanDevice.Identity 2023-01-11T22:08:36.5528468Z [ OK ] TestTanDevice.Identity (0 ms) 2023-01-11T22:08:36.5528784Z [----------] 1 
test from TestTanDevice (0 ms total) 2023-01-11T22:08:36.5528935Z 2023-01-11T22:08:36.5529092Z [----------] 1 test from TestTanhDevice 2023-01-11T22:08:36.5529407Z [ RUN ] TestTanhDevice.Identity 2023-01-11T22:08:36.5529787Z [ OK ] TestTanhDevice.Identity (0 ms) 2023-01-11T22:08:36.5530165Z [----------] 1 test from TestTanhDevice (0 ms total) 2023-01-11T22:08:36.5530318Z 2023-01-11T22:08:36.5530500Z [----------] 1 test from TestRevTrigonometricDevice 2023-01-11T22:08:36.5530870Z [ RUN ] TestRevTrigonometricDevice.Rev 2023-01-11T22:08:36.5531202Z [ OK ] TestRevTrigonometricDevice.Rev (0 ms) 2023-01-11T22:08:36.5531624Z [----------] 1 test from TestRevTrigonometricDevice (0 ms total) 2023-01-11T22:08:36.5531806Z 2023-01-11T22:08:36.5531971Z [----------] 1 test from TestRevHyperbolicDevice 2023-01-11T22:08:36.5532336Z [ RUN ] TestRevHyperbolicDevice.Rev 2023-01-11T22:08:36.5532659Z [ OK ] TestRevHyperbolicDevice.Rev (0 ms) 2023-01-11T22:08:36.5533026Z [----------] 1 test from TestRevHyperbolicDevice (0 ms total) 2023-01-11T22:08:36.5533223Z 2023-01-11T22:08:36.5533440Z [----------] 2 tests from TestExponentialHost 2023-01-11T22:08:36.5533742Z [ RUN ] TestExponentialHost.IPi 2023-01-11T22:08:36.5534100Z [ OK ] TestExponentialHost.IPi (0 ms) 2023-01-11T22:08:36.5534404Z [ RUN ] TestExponentialHost.EulerFormula 2023-01-11T22:08:36.5534809Z [ OK ] TestExponentialHost.EulerFormula (0 ms) 2023-01-11T22:08:36.5535169Z [----------] 2 tests from TestExponentialHost (0 ms total) 2023-01-11T22:08:36.5535354Z 2023-01-11T22:08:36.5535521Z [----------] 1 test from TestLogHost 2023-01-11T22:08:36.5535797Z [ RUN ] TestLogHost.Definition 2023-01-11T22:08:36.5536105Z [ OK ] TestLogHost.Definition (0 ms) 2023-01-11T22:08:36.5536460Z [----------] 1 test from TestLogHost (0 ms total) 2023-01-11T22:08:36.5536596Z 2023-01-11T22:08:36.5536745Z [----------] 1 test from TestLog10Host 2023-01-11T22:08:36.5537066Z [ RUN ] TestLog10Host.Rev 2023-01-11T22:08:36.5537340Z [ OK ] TestLog10Host.Rev (0 ms) 2023-01-11T22:08:36.5537634Z [----------] 1 test from TestLog10Host (0 ms total) 2023-01-11T22:08:36.5537836Z 2023-01-11T22:08:36.5537988Z [----------] 1 test from TestLog2Host 2023-01-11T22:08:36.5538246Z [ RUN ] TestLog2Host.Rev 2023-01-11T22:08:36.5538554Z [ OK ] TestLog2Host.Rev (0 ms) 2023-01-11T22:08:36.5538854Z [----------] 1 test from TestLog2Host (0 ms total) 2023-01-11T22:08:36.5539002Z 2023-01-11T22:08:36.5539152Z [----------] 3 tests from TestLog1pHost 2023-01-11T22:08:36.5539487Z [ RUN ] TestLog1pHost.Normal 2023-01-11T22:08:36.5539755Z [ OK ] TestLog1pHost.Normal (0 ms) 2023-01-11T22:08:36.5540072Z [ RUN ] TestLog1pHost.Small 2023-01-11T22:08:36.5540356Z [ OK ] TestLog1pHost.Small (0 ms) 2023-01-11T22:08:36.5540618Z [ RUN ] TestLog1pHost.Extreme 2023-01-11T22:08:36.5540970Z [ OK ] TestLog1pHost.Extreme (0 ms) 2023-01-11T22:08:36.5541287Z [----------] 3 tests from TestLog1pHost (0 ms total) 2023-01-11T22:08:36.5541438Z 2023-01-11T22:08:36.5541629Z [----------] 1 test from TestPowSqrtHost 2023-01-11T22:08:36.5541917Z [ RUN ] TestPowSqrtHost.Equal 2023-01-11T22:08:36.5542205Z [ OK ] TestPowSqrtHost.Equal (0 ms) 2023-01-11T22:08:36.5542724Z [----------] 1 test from TestPowSqrtHost (0 ms total) 2023-01-11T22:08:36.5542882Z 2023-01-11T22:08:36.5543025Z [----------] 1 test from TestPowHost 2023-01-11T22:08:36.5543342Z [ RUN ] TestPowHost.Square 2023-01-11T22:08:36.5543620Z [ OK ] TestPowHost.Square (0 ms) 2023-01-11T22:08:36.5543909Z [----------] 1 test from TestPowHost (0 ms total) 2023-01-11T22:08:36.5544109Z 
2023-01-11T22:08:36.5544282Z [----------] 1 test from TestSinCosSinhCoshHost 2023-01-11T22:08:36.5544600Z [ RUN ] TestSinCosSinhCoshHost.Identity 2023-01-11T22:08:36.5545081Z [ OK ] TestSinCosSinhCoshHost.Identity (0 ms) 2023-01-11T22:08:36.5545439Z [----------] 1 test from TestSinCosSinhCoshHost (0 ms total) 2023-01-11T22:08:36.5545663Z 2023-01-11T22:08:36.5545808Z [----------] 1 test from TestTanHost 2023-01-11T22:08:36.5546075Z [ RUN ] TestTanHost.Identity 2023-01-11T22:08:36.5546383Z [ OK ] TestTanHost.Identity (0 ms) 2023-01-11T22:08:36.5546702Z [----------] 1 test from TestTanHost (0 ms total) 2023-01-11T22:08:36.5546847Z 2023-01-11T22:08:36.5546993Z [----------] 1 test from TestTanhHost 2023-01-11T22:08:36.5547311Z [ RUN ] TestTanhHost.Identity 2023-01-11T22:08:36.5547604Z [ OK ] TestTanhHost.Identity (0 ms) 2023-01-11T22:08:36.5547951Z [----------] 1 test from TestTanhHost (0 ms total) 2023-01-11T22:08:36.5548111Z 2023-01-11T22:08:36.5548291Z [----------] 1 test from TestRevTrigonometricHost 2023-01-11T22:08:36.5548648Z [ RUN ] TestRevTrigonometricHost.Rev 2023-01-11T22:08:36.5549037Z [ OK ] TestRevTrigonometricHost.Rev (0 ms) 2023-01-11T22:08:36.5549396Z [----------] 1 test from TestRevTrigonometricHost (0 ms total) 2023-01-11T22:08:36.5549557Z 2023-01-11T22:08:36.5549728Z [----------] 1 test from TestRevHyperbolicHost 2023-01-11T22:08:36.5550031Z [ RUN ] TestRevHyperbolicHost.Rev 2023-01-11T22:08:36.5550402Z [ OK ] TestRevHyperbolicHost.Rev (0 ms) 2023-01-11T22:08:36.5550734Z [----------] 1 test from TestRevHyperbolicHost (0 ms total) 2023-01-11T22:08:36.5550901Z 2023-01-11T22:08:36.5551068Z [----------] Global test environment tear-down 2023-01-11T22:08:36.5551380Z [==========] 30 tests from 24 test suites ran. (0 ms total) 2023-01-11T22:08:36.5551639Z [ PASSED ] 30 tests. 2023-01-11T22:08:36.6214251Z + [[ -x ./cuda_cub_test ]] 2023-01-11T22:08:36.6214518Z + ./cuda_cub_test 2023-01-11T22:08:36.9406397Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:36.9407189Z [==========] Running 3 tests from 3 test suites. 2023-01-11T22:08:36.9407733Z [----------] Global test environment set-up. 2023-01-11T22:08:36.9408028Z [----------] 1 test from NumBits 2023-01-11T22:08:36.9408401Z [ RUN ] NumBits.CubTest 2023-01-11T22:08:36.9408737Z [ OK ] NumBits.CubTest (0 ms) 2023-01-11T22:08:36.9409031Z [----------] 1 test from NumBits (0 ms total) 2023-01-11T22:08:36.9409226Z 2023-01-11T22:08:36.9409399Z [----------] 1 test from InclusiveScanSplit 2023-01-11T22:08:36.9409687Z [ RUN ] InclusiveScanSplit.CubTest 2023-01-11T22:08:36.9410003Z [ OK ] InclusiveScanSplit.CubTest (0 ms) 2023-01-11T22:08:36.9410341Z [----------] 1 test from InclusiveScanSplit (0 ms total) 2023-01-11T22:08:36.9410505Z 2023-01-11T22:08:36.9410664Z [----------] 1 test from ExclusiveScanSplit 2023-01-11T22:08:36.9410963Z [ RUN ] ExclusiveScanSplit.CubTest 2023-01-11T22:08:36.9411276Z [ OK ] ExclusiveScanSplit.CubTest (0 ms) 2023-01-11T22:08:36.9411610Z [----------] 1 test from ExclusiveScanSplit (0 ms total) 2023-01-11T22:08:36.9411758Z 2023-01-11T22:08:36.9411925Z [----------] Global test environment tear-down 2023-01-11T22:08:36.9412242Z [==========] 3 tests from 3 test suites ran. (0 ms total) 2023-01-11T22:08:36.9412501Z [ PASSED ] 3 tests. 
2023-01-11T22:08:37.0124103Z + [[ -x ./cuda_atomic_ops_test ]] 2023-01-11T22:08:37.0124358Z + ./cuda_atomic_ops_test 2023-01-11T22:08:37.3370448Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:08:37.3371187Z [==========] Running 4 tests from 1 test suite. 2023-01-11T22:08:37.3371487Z [----------] Global test environment set-up. 2023-01-11T22:08:37.3372084Z [----------] 4 tests from TestAtomicOps 2023-01-11T22:08:37.3372562Z [ RUN ] TestAtomicOps.TestAtomicAdd 2023-01-11T22:08:37.3373102Z [ OK ] TestAtomicOps.TestAtomicAdd (0 ms) 2023-01-11T22:08:37.3373678Z [ RUN ] TestAtomicOps.TestAtomicMul 2023-01-11T22:08:37.3374265Z [ OK ] TestAtomicOps.TestAtomicMul (0 ms) 2023-01-11T22:08:37.3374617Z [ RUN ] TestAtomicOps.TestAtomicMax 2023-01-11T22:08:37.3374922Z [ OK ] TestAtomicOps.TestAtomicMax (0 ms) 2023-01-11T22:08:37.3375229Z [ RUN ] TestAtomicOps.TestAtomicMin 2023-01-11T22:08:37.3375543Z [ OK ] TestAtomicOps.TestAtomicMin (0 ms) 2023-01-11T22:08:37.3375856Z [----------] 4 tests from TestAtomicOps (0 ms total) 2023-01-11T22:08:37.3376010Z 2023-01-11T22:08:37.3376177Z [----------] Global test environment tear-down 2023-01-11T22:08:37.3376567Z [==========] 4 tests from 1 test suite ran. (0 ms total) 2023-01-11T22:08:37.3376818Z [ PASSED ] 4 tests. 2023-01-11T22:08:37.4095288Z + '[' ON == ON ']' 2023-01-11T22:08:37.4095779Z + valgrind --suppressions=/var/lib/jenkins/workspace/aten/tools/valgrind.sup --error-exitcode=1 ./basic '--gtest_filter=-*CUDA' 2023-01-11T22:08:37.4487428Z ==24431== Memcheck, a memory error detector 2023-01-11T22:08:37.4488077Z ==24431== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al. 2023-01-11T22:08:37.4488465Z ==24431== Using Valgrind-3.16.1 and LibVEX; rerun with -h for copyright info 2023-01-11T22:08:37.4488779Z ==24431== Command: ./basic --gtest_filter=-*CUDA 2023-01-11T22:08:37.4488976Z ==24431== 2023-01-11T22:08:40.8884812Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.8885473Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.8886032Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info
[the preceding three-line valgrind warning for /opt/conda/lib/libstdc++.so.6.0.29 repeats verbatim many more times; duplicate lines omitted] 2023-01-11T22:08:40.9052330Z --24431-- WARNING:
Serious error when reading debug info 2023-01-11T22:08:40.9052667Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9053026Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9053323Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9053661Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9053982Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9054274Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9054612Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9054932Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9055221Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9055553Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9055871Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9056173Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9056496Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9056821Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9057119Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9057445Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9057769Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9058072Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9058393Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9058716Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9059014Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9059354Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9059661Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9059962Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9060302Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9060611Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9060910Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9061248Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9061553Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9061855Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9062191Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9062709Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9063004Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9063345Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9063666Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9064030Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9064369Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9064692Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9064984Z --24431-- 
WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9065323Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9065644Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9065945Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9066274Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9066603Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9066904Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9067225Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9067608Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9067912Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9068235Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9068564Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9068866Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9069205Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9069517Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9069818Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9070157Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9070464Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9070764Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9071102Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9071414Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9071713Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9072050Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9072371Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9072660Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9073002Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9073325Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9073617Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9073957Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9074283Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:40.9074574Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:40.9074916Z --24431-- When reading debug info from /opt/conda/lib/libstdc++.so.6.0.29: 2023-01-11T22:08:40.9075269Z --24431-- parse_CU_Header: is neither DWARF2 nor DWARF3 nor DWARF4 2023-01-11T22:08:41.0053648Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0054356Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0054726Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0055037Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0055372Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0055680Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0055984Z 
--24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0056310Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0056618Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0057144Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0057494Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0057797Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0058103Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0058430Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0058752Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0059040Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0059368Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0059686Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0059973Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0060297Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0060681Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0066366Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0066980Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0067532Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0068086Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0068633Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0069185Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0069701Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0070232Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0070770Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0071289Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0071838Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0072385Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0072898Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0073450Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0073976Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0074482Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0075059Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0075563Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0076072Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0076629Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0077187Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0077701Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0078277Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0078834Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0079411Z --24431-- WARNING: Serious error when reading debug info 
2023-01-11T22:08:41.0079953Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0080435Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0080907Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0081490Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0082066Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0082550Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0083093Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0083666Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0084372Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0085137Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0085717Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0086240Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0086738Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0087282Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0087889Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0088482Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0088973Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0089492Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0090073Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0090624Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0091296Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0091915Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0092478Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0092979Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0093541Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0094068Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0094567Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0095122Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0095646Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0096140Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0096697Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0097230Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0097739Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0098292Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0098816Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0099324Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0099858Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0100381Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0100895Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0101432Z --24431-- When reading debug info from 
/opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0101956Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0102697Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0103240Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0103786Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0104297Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0104823Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0105351Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0105860Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0106409Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0106920Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0107430Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0107977Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0108494Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0109002Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0109548Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0110203Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0110713Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0111269Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0111811Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0112316Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0112876Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0113412Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0113909Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0114472Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0114992Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0127949Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0128688Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0129263Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0129787Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0130339Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0130886Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0131408Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0131953Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0132495Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0133011Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0133557Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0134098Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0134602Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0135173Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0135698Z --24431-- 
Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0136211Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0136771Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0137295Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0137806Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0138371Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0138894Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0139410Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0139968Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0140488Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0141016Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0141579Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0142107Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0142746Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0143297Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0143717Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0144010Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0144337Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0144652Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0144941Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0145265Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0145579Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0145991Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0146303Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0146617Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0146917Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0147231Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0147547Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0147847Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0148157Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0148656Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0149168Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0149729Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0150325Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0150846Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0151403Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0151914Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0152421Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0152984Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0153505Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 
2023-01-11T22:08:41.0154021Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0154577Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0155109Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0155600Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0155997Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0156313Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0156603Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0156925Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0157241Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0157531Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0157853Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0158163Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0158462Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0158773Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0159165Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0159466Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0159786Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0160097Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0160397Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0160709Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0161020Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0161320Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0161642Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0161993Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0162498Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0163056Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0163421Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0163936Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0164565Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0165078Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0165593Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0166152Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0166666Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0167183Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0167729Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0168047Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0168337Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0168658Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0168970Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0169310Z --24431-- WARNING: Serious error when 
reading debug info 2023-01-11T22:08:41.0169639Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0169954Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0170242Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0170566Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0170877Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0171175Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0171486Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0171939Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0172452Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0172987Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0173520Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0174029Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0174566Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0175095Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0175601Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0176149Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0176663Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0177168Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0177713Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0178214Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0178727Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0179283Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0179811Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0180330Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0180889Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0181421Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0181911Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0182593Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0183128Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0183630Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0184138Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0184589Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0185014Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0185502Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0186136Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0186564Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0187023Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0187478Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0187906Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0188381Z --24431-- When reading 
debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0188761Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0189069Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0189386Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0189701Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0190003Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0190423Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0190728Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0191032Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0191358Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0191660Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0191961Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0192287Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0192589Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0192891Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0193218Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0193519Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0193822Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0194153Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0194467Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0194755Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0195079Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0195395Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0195682Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0196160Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0196666Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0196967Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0197291Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0197603Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0197912Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0198227Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0198540Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0198836Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0199217Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0199532Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0199831Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0200142Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0200454Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0200754Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0201076Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 
2023-01-11T22:08:41.0201440Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0201740Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0202064Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0202365Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0202665Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0203098Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0203612Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0204119Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0204486Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0205013Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0205496Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0206037Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0206629Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0207030Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0207353Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0207668Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0207957Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0208282Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0208592Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0208891Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0209203Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0209517Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0209817Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0210136Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0210450Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0210748Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0211058Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0211376Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0211680Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0211994Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0212304Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0212606Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0212931Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0213231Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0213531Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0213859Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0214160Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0214458Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0214782Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0215082Z --24431-- Ignoring non-Dwarf2/3/4 block 
in .debug_info 2023-01-11T22:08:41.0215381Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0215704Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0216018Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0216310Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0216636Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0216947Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.0217283Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.0217610Z --24431-- When reading debug info from /opt/conda/lib/libgcc_s.so.1: 2023-01-11T22:08:41.0217990Z --24431-- parse_CU_Header: is neither DWARF2 nor DWARF3 nor DWARF4 2023-01-11T22:08:41.3392881Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3393370Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3393712Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3394040Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3394372Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3394703Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3395010Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3395332Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3395867Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3396175Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3396509Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3396818Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3402896Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3403595Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3404065Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3404360Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3404698Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3405021Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3405313Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3405657Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3405982Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3406271Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3406611Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3406928Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3407228Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3407575Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3408051Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3408352Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3408670Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3409009Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 
2023-01-11T22:08:41.3409326Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3409651Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3409977Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3410274Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3410606Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3410910Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3411209Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3411542Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3411847Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3412148Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3412478Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3412782Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3413087Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3413549Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3413868Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3414157Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3414489Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3414806Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3415182Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3415748Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3416163Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3416542Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3416912Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3417290Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3417678Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3418062Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3418385Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3418743Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3419065Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3419426Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3419734Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3420064Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3420433Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3425255Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3425932Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3426550Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3427128Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3427755Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3428354Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3428718Z 
--24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3429060Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3429383Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3429675Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3430010Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3430329Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3430618Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3430958Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3431279Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3431581Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3431902Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3432219Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3432518Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3432838Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3433157Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3433456Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3433773Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3434099Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3434399Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3434885Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3435202Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3438197Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3438783Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3439442Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3439948Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3440425Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3441024Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3441494Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3441832Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3442158Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3442583Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3442913Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3443239Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3443539Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3443861Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3444182Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3444480Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3444805Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3445131Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3445433Z --24431-- WARNING: Serious 
error when reading debug info 2023-01-11T22:08:41.3445753Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3446090Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3446391Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3446722Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3447029Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3447327Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3447664Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3447972Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3448271Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3448602Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3448908Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3449209Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3449542Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3449864Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3450159Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3450490Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3450808Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3451101Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3451430Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3451745Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3452032Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3452368Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3452681Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3452980Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3453354Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3453673Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3453975Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3454295Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3454608Z --24431-- Ignoring non-Dwarf2/3/4 block in .debug_info 2023-01-11T22:08:41.3454914Z --24431-- WARNING: Serious error when reading debug info 2023-01-11T22:08:41.3455232Z --24431-- When reading debug info from /opt/conda/lib/libgomp.so.1.0.0: 2023-01-11T22:08:41.3455591Z --24431-- parse_CU_Header: is neither DWARF2 nor DWARF3 nor DWARF4 2023-01-11T22:08:42.3280781Z ==24431== Warning: set address range perms: large range [0x59eb4000, 0x6ed02000) (defined) 2023-01-11T22:09:25.0180230Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2023-01-11T22:09:25.0391922Z Note: Google Test filter = -*CUDA 2023-01-11T22:09:25.0445435Z [==========] Running 4 tests from 1 test suite. 2023-01-11T22:09:25.0461712Z [----------] Global test environment set-up. 
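The recurring Valgrind messages above ("Ignoring non-Dwarf2/3/4 block in .debug_info", "parse_CU_Header: is neither DWARF2 nor DWARF3 nor DWARF4") usually mean the conda-provided libraries carry debug info in a DWARF version this Valgrind build cannot parse (most likely DWARF 5); Valgrind keeps running, so they are noise rather than test failures. A minimal sketch for confirming the DWARF version of one of the offending libraries, assuming binutils readelf is available on the runner:

    # Print the first compilation-unit header of .debug_info; the "Version:"
    # field is the DWARF version Valgrind is refusing to parse.
    readelf --debug-dump=info /opt/conda/lib/libstdc++.so.6.0.29 | head -n 20
    # The installed Valgrind version bounds which DWARF versions it understands.
    valgrind --version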
2023-01-11T22:09:25.0490999Z [----------] 4 tests from BasicTest 2023-01-11T22:09:25.0505230Z [ RUN ] BasicTest.BasicTestCPU 2023-01-11T22:09:26.1852605Z 336 ms 2023-01-11T22:09:30.9627710Z 4746 ms 2023-01-11T22:09:37.5974530Z 6626 ms 2023-01-11T22:09:38.2523880Z [ OK ] BasicTest.BasicTestCPU (13199 ms) 2023-01-11T22:09:38.2528793Z [ RUN ] BasicTest.BasicTestHalfCPU 2023-01-11T22:09:38.4477481Z 140 ms 2023-01-11T22:09:43.5096544Z 5053 ms 2023-01-11T22:09:50.5311429Z 7017 ms 2023-01-11T22:09:50.6564259Z [ OK ] BasicTest.BasicTestHalfCPU (12403 ms) 2023-01-11T22:09:50.6565962Z [ RUN ] BasicTest.FactoryMethodsTest 2023-01-11T22:09:50.6927606Z [ OK ] BasicTest.FactoryMethodsTest (36 ms) 2023-01-11T22:09:50.6928030Z [ RUN ] BasicTest.BasicStdTestCPU 2023-01-11T22:09:50.7328016Z Simple example: called once 2023-01-11T22:09:50.8609104Z throw: call_once will retry 2023-01-11T22:09:50.9039129Z throw: call_once will retry 2023-01-11T22:09:50.9053492Z Didn't throw, call_once will not attempt again 2023-01-11T22:09:50.9079638Z [ OK ] BasicTest.BasicStdTestCPU (215 ms) 2023-01-11T22:09:50.9098167Z [----------] 4 tests from BasicTest (25858 ms total) 2023-01-11T22:09:50.9098358Z 2023-01-11T22:09:50.9109303Z [----------] Global test environment tear-down 2023-01-11T22:09:50.9136240Z [==========] 4 tests from 1 test suite ran. (25875 ms total) 2023-01-11T22:09:50.9151085Z [ PASSED ] 4 tests. 2023-01-11T22:09:52.9868282Z ==24431== 2023-01-11T22:09:52.9872148Z ==24431== HEAP SUMMARY: 2023-01-11T22:09:52.9872559Z ==24431== in use at exit: 687,403 bytes in 4,089 blocks 2023-01-11T22:09:52.9873002Z ==24431== total heap usage: 1,929,172 allocs, 1,925,083 frees, 311,449,931 bytes allocated 2023-01-11T22:09:52.9873369Z ==24431== 2023-01-11T22:09:53.5283068Z ==24431== LEAK SUMMARY: 2023-01-11T22:09:53.5283512Z ==24431== definitely lost: 0 bytes in 0 blocks 2023-01-11T22:09:53.5283894Z ==24431== indirectly lost: 0 bytes in 0 blocks 2023-01-11T22:09:53.5284292Z ==24431== possibly lost: 1,584 bytes in 3 blocks 2023-01-11T22:09:53.5284690Z ==24431== still reachable: 685,819 bytes in 4,086 blocks 2023-01-11T22:09:53.5285056Z ==24431== suppressed: 0 bytes in 0 blocks 2023-01-11T22:09:53.5285742Z ==24431== Rerun with --leak-check=full to see details of leaked memory 2023-01-11T22:09:53.5286122Z ==24431== 2023-01-11T22:09:53.5286561Z ==24431== For lists of detected and suppressed errors, rerun with: -s 2023-01-11T22:09:53.5287070Z ==24431== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 4 from 4) 2023-01-11T22:09:53.5768809Z + [[ -x ./tensor_interop_test ]] 2023-01-11T22:09:53.5769051Z + popd 2023-01-11T22:09:53.5769224Z ~/workspace 2023-01-11T22:09:53.5770861Z + [[ -n '' ]] 2023-01-11T22:09:53.5771084Z + assert_git_not_dirty 2023-01-11T22:09:53.5771375Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *rocm* ]] 2023-01-11T22:09:53.5771700Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *xla* ]] 2023-01-11T22:09:53.5774375Z ++ git status --porcelain 2023-01-11T22:09:53.6589039Z + git_status= 2023-01-11T22:09:53.6589458Z + [[ -n '' ]] 2023-01-11T22:09:53.6589635Z + test_vec256 2023-01-11T22:09:53.6589943Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *asan* ]] 2023-01-11T22:09:53.6590272Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *rocm* ]] 2023-01-11T22:09:53.6590559Z + echo 'Testing vec256 instructions' 2023-01-11T22:09:53.6590760Z Testing vec256 instructions 2023-01-11T22:09:53.6591013Z + mkdir -p test/test-reports/vec256 2023-01-11T22:09:53.6639527Z + pushd build/bin 2023-01-11T22:09:53.6639757Z ~/workspace/build/bin ~/workspace 
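The Valgrind LEAK SUMMARY above flags 1,584 bytes possibly lost and itself suggests rerunning with --leak-check=full (and -s for the full error list). A minimal sketch of such a rerun under the same -*CUDA filter; the binary name ./basic is an assumption for illustration, since this log does not show the original invocation:

    cd build/bin
    # Hypothetical binary name; substitute the gtest binary that produced the
    # summary above. Flags mirror Valgrind's own suggestions in the log.
    valgrind --leak-check=full --show-leak-kinds=all -s ./basic --gtest_filter=-*CUDA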
2023-01-11T22:09:53.6642320Z ++ find . -maxdepth 1 -executable -name 'vec256_test*' 2023-01-11T22:09:53.6694740Z + vec256_tests= 2023-01-11T22:09:53.6695302Z + popd 2023-01-11T22:09:53.6695515Z ~/workspace 2023-01-11T22:09:53.6702935Z + assert_git_not_dirty 2023-01-11T22:09:53.6703495Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *rocm* ]] 2023-01-11T22:09:53.6703991Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *xla* ]] 2023-01-11T22:09:53.6704254Z ++ git status --porcelain 2023-01-11T22:09:53.7502093Z + git_status= 2023-01-11T22:09:53.7502677Z + [[ -n '' ]] 2023-01-11T22:09:53.7503003Z + test_libtorch 2023-01-11T22:09:53.7503395Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *rocm* ]] 2023-01-11T22:09:53.7503650Z + echo 'Testing libtorch' 2023-01-11T22:09:53.7503835Z Testing libtorch 2023-01-11T22:09:53.7504264Z + ln -sf /opt/conda/lib/python3.10/site-packages/torch/lib/libbackend_with_compiler.so /opt/conda/lib/python3.10/site-packages/torch/bin 2023-01-11T22:09:53.7512327Z + ln -sf /opt/conda/lib/python3.10/site-packages/torch/lib/libjitbackend_test.so /opt/conda/lib/python3.10/site-packages/torch/bin 2023-01-11T22:09:53.7521449Z + ln -sf /opt/conda/lib/python3.10/site-packages/torch/lib/libc10.so /opt/conda/lib/python3.10/site-packages/torch/lib/libc10_cuda.so /opt/conda/lib/python3.10/site-packages/torch/lib/libc10d_cuda_test.so /opt/conda/lib/python3.10/site-packages/torch/bin 2023-01-11T22:09:53.7530925Z + ln -sf /opt/conda/lib/python3.10/site-packages/torch/lib/libshm.so /opt/conda/lib/python3.10/site-packages/torch/bin 2023-01-11T22:09:53.7540220Z + ln -sf /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch.so /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_cuda_linalg.so /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_global_deps.so /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_python.so /opt/conda/lib/python3.10/site-packages/torch/lib/libtorchbind_test.so /opt/conda/lib/python3.10/site-packages/torch/bin 2023-01-11T22:09:53.7548639Z + ln -sf '/opt/conda/lib/python3.10/site-packages/torch/lib/libtbb*' /opt/conda/lib/python3.10/site-packages/torch/bin 2023-01-11T22:09:53.7557801Z + TEST_REPORTS_DIR=test/test-reports/cpp-unittest/test_libtorch 2023-01-11T22:09:53.7558157Z + mkdir -p test/test-reports/cpp-unittest/test_libtorch 2023-01-11T22:09:53.7558504Z + python tools/download_mnist.py --quiet -d test/cpp/api/mnist 2023-01-11T22:09:53.7592193Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *-tsan* ]] 2023-01-11T22:09:53.7592490Z + python test/cpp/jit/tests_setup.py setup 2023-01-11T22:09:53.7979233Z Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz ... 2023-01-11T22:09:54.1604398Z Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz ... 2023-01-11T22:09:54.2052957Z Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz ... 2023-01-11T22:09:54.3089973Z Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz ... 2023-01-11T22:09:55.1375813Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 == *cuda* ]] 2023-01-11T22:09:55.1376562Z + /opt/conda/lib/python3.10/site-packages/torch/bin/test_jit --gtest_output=xml:test/test-reports/cpp-unittest/test_libtorch/test_jit.xml 2023-01-11T22:09:55.5452410Z CUDA not available. 
Disabling CUDA and MultiCUDA tests 2023-01-11T22:09:55.5460529Z Note: Google Test filter = *-*_CUDA:*_MultiCUDA 2023-01-11T22:09:55.5460947Z [==========] Running 569 tests from 119 test suites. 2023-01-11T22:09:55.5461266Z [----------] Global test environment set-up. 2023-01-11T22:09:55.5461580Z [----------] 2 tests from AddIfThenElseOpTest 2023-01-11T22:09:55.5461921Z [ RUN ] AddIfThenElseOpTest.AddIfThenElseOpSimple 2023-01-11T22:09:55.5528704Z [ OK ] AddIfThenElseOpTest.AddIfThenElseOpSimple (6 ms) 2023-01-11T22:09:55.5529457Z [ RUN ] AddIfThenElseOpTest.NoIfThenElseOpMultipleOutputs 2023-01-11T22:09:55.5530277Z [ OK ] AddIfThenElseOpTest.NoIfThenElseOpMultipleOutputs (0 ms) 2023-01-11T22:09:55.5531268Z [----------] 2 tests from AddIfThenElseOpTest (6 ms total) 2023-01-11T22:09:55.5531585Z 2023-01-11T22:09:55.5531890Z [----------] 15 tests from TopologicalMoveTest 2023-01-11T22:09:55.5532450Z [ RUN ] TopologicalMoveTest.SplitsDeps 2023-01-11T22:09:55.5533067Z [ OK ] TopologicalMoveTest.SplitsDeps (0 ms) 2023-01-11T22:09:55.5533707Z [ RUN ] TopologicalMoveTest.MoveAfterBackwardSimple 2023-01-11T22:09:55.5534451Z [ OK ] TopologicalMoveTest.MoveAfterBackwardSimple (0 ms) 2023-01-11T22:09:55.5535204Z [ RUN ] TopologicalMoveTest.MoveAfterBackwardInvalid 2023-01-11T22:09:55.5535955Z [ OK ] TopologicalMoveTest.MoveAfterBackwardInvalid (0 ms) 2023-01-11T22:09:55.5536605Z [ RUN ] TopologicalMoveTest.MoveAfterNoOp 2023-01-11T22:09:55.5537240Z [ OK ] TopologicalMoveTest.MoveAfterNoOp (0 ms) 2023-01-11T22:09:55.5537960Z [ RUN ] TopologicalMoveTest.MoveAfterBackwardMultipleDeps 2023-01-11T22:09:55.5538782Z [ OK ] TopologicalMoveTest.MoveAfterBackwardMultipleDeps (0 ms) 2023-01-11T22:09:55.5539633Z [ RUN ] TopologicalMoveTest.MoveAfterBackwardNonZeroWorkingSet 2023-01-11T22:09:55.5540515Z [ OK ] TopologicalMoveTest.MoveAfterBackwardNonZeroWorkingSet (0 ms) 2023-01-11T22:09:55.5541300Z [ RUN ] TopologicalMoveTest.MoveAfterForwardSimple 2023-01-11T22:09:55.5542042Z [ OK ] TopologicalMoveTest.MoveAfterForwardSimple (0 ms) 2023-01-11T22:09:55.5543017Z [ RUN ] TopologicalMoveTest.MoveAfterForwardNonZeroWorkingSet 2023-01-11T22:09:55.5543884Z [ OK ] TopologicalMoveTest.MoveAfterForwardNonZeroWorkingSet (0 ms) 2023-01-11T22:09:55.5544658Z [ RUN ] TopologicalMoveTest.MoveBeforeForwardSimple 2023-01-11T22:09:55.5545407Z [ OK ] TopologicalMoveTest.MoveBeforeForwardSimple (0 ms) 2023-01-11T22:09:55.5546169Z [ RUN ] TopologicalMoveTest.MoveBeforeBackwardSimple 2023-01-11T22:09:55.5546938Z [ OK ] TopologicalMoveTest.MoveBeforeBackwardSimple (0 ms) 2023-01-11T22:09:55.5547630Z [ RUN ] TopologicalMoveTest.MoveBeforeNoOp 2023-01-11T22:09:55.5548239Z [ OK ] TopologicalMoveTest.MoveBeforeNoOp (0 ms) 2023-01-11T22:09:55.5548859Z [ RUN ] TopologicalMoveTest.MoveBeforeForwardWithDeps 2023-01-11T22:09:55.5549590Z [ OK ] TopologicalMoveTest.MoveBeforeForwardWithDeps (0 ms) 2023-01-11T22:09:55.5550297Z [ RUN ] TopologicalMoveTest.MoveBeforeBackwardWithDeps 2023-01-11T22:09:55.5550948Z [ OK ] TopologicalMoveTest.MoveBeforeBackwardWithDeps (0 ms) 2023-01-11T22:09:55.5551548Z [ RUN ] TopologicalMoveTest.DepsDisallowMove 2023-01-11T22:09:55.5552129Z [ OK ] TopologicalMoveTest.DepsDisallowMove (0 ms) 2023-01-11T22:09:55.5552735Z [ RUN ] TopologicalMoveTest.MoveAfterBeforeWithDeps 2023-01-11T22:09:55.5553591Z [ OK ] TopologicalMoveTest.MoveAfterBeforeWithDeps (0 ms) 2023-01-11T22:09:55.5554241Z [----------] 15 tests from TopologicalMoveTest (2 ms total) 2023-01-11T22:09:55.5554519Z 2023-01-11T22:09:55.5554804Z [----------] 6 
tests from AliasAnalysisTest 2023-01-11T22:09:55.5555422Z [ RUN ] AliasAnalysisTest.AliasingMutationBlocksMoves 2023-01-11T22:09:55.5588424Z [ OK ] AliasAnalysisTest.AliasingMutationBlocksMoves (3 ms) 2023-01-11T22:09:55.5589214Z [ RUN ] AliasAnalysisTest.AliasingMutationBlocksMoves2 2023-01-11T22:09:55.5589925Z [ OK ] AliasAnalysisTest.AliasingMutationBlocksMoves2 (0 ms) 2023-01-11T22:09:55.5590629Z [ RUN ] AliasAnalysisTest.SideEffectsBlockMoves 2023-01-11T22:09:55.5591268Z [ OK ] AliasAnalysisTest.SideEffectsBlockMoves (0 ms) 2023-01-11T22:09:55.5591897Z [ RUN ] AliasAnalysisTest.MovingAcrossInnerBlocks 2023-01-11T22:09:55.5592728Z [ OK ] AliasAnalysisTest.MovingAcrossInnerBlocks (0 ms) 2023-01-11T22:09:55.5593277Z [ RUN ] AliasAnalysisTest.NoneHasNoWriters 2023-01-11T22:09:55.5593638Z [ OK ] AliasAnalysisTest.NoneHasNoWriters (0 ms) 2023-01-11T22:09:55.5594047Z [ RUN ] AliasAnalysisTest.SafeToChangeAliasingRelationship 2023-01-11T22:09:55.5594503Z [ OK ] AliasAnalysisTest.SafeToChangeAliasingRelationship (0 ms) 2023-01-11T22:09:55.5594904Z [----------] 6 tests from AliasAnalysisTest (4 ms total) 2023-01-11T22:09:55.5595067Z 2023-01-11T22:09:55.5595230Z [----------] 4 tests from WriteTrackingTest 2023-01-11T22:09:55.5595523Z [ RUN ] WriteTrackingTest.Basic 2023-01-11T22:09:55.5595810Z [ OK ] WriteTrackingTest.Basic (0 ms) 2023-01-11T22:09:55.5596117Z [ RUN ] WriteTrackingTest.IsMutable 2023-01-11T22:09:55.5600833Z [ OK ] WriteTrackingTest.IsMutable (0 ms) 2023-01-11T22:09:55.5601416Z [ RUN ] WriteTrackingTest.IsImmutable 2023-01-11T22:09:55.5602031Z [ OK ] WriteTrackingTest.IsImmutable (0 ms) 2023-01-11T22:09:55.5602575Z [ RUN ] WriteTrackingTest.HasWriters 2023-01-11T22:09:55.5603174Z [ OK ] WriteTrackingTest.HasWriters (0 ms) 2023-01-11T22:09:55.5603748Z [----------] 4 tests from WriteTrackingTest (0 ms total) 2023-01-11T22:09:55.5604011Z 2023-01-11T22:09:55.5604281Z [----------] 13 tests from ContainerAliasingTest 2023-01-11T22:09:55.5604652Z [ RUN ] ContainerAliasingTest.MayContainAlias 2023-01-11T22:09:55.5605114Z [ OK ] ContainerAliasingTest.MayContainAlias (0 ms) 2023-01-11T22:09:55.5605552Z [ RUN ] ContainerAliasingTest.MayContainAlias_cast 2023-01-11T22:09:55.5606057Z [ OK ] ContainerAliasingTest.MayContainAlias_cast (0 ms) 2023-01-11T22:09:55.5606820Z [ RUN ] ContainerAliasingTest.PrimitveValuesDontAliasContainers 2023-01-11T22:09:55.5607518Z [ OK ] ContainerAliasingTest.PrimitveValuesDontAliasContainers (0 ms) 2023-01-11T22:09:55.5608016Z [ RUN ] ContainerAliasingTest.UnionAliasing 2023-01-11T22:09:55.5608358Z [ OK ] ContainerAliasingTest.UnionAliasing (0 ms) 2023-01-11T22:09:55.5608734Z [ RUN ] ContainerAliasingTest.InputsCanAliasOutputs 2023-01-11T22:09:55.5609154Z [ OK ] ContainerAliasingTest.InputsCanAliasOutputs (0 ms) 2023-01-11T22:09:55.5609538Z [ RUN ] ContainerAliasingTest.NestedTupleConstruct 2023-01-11T22:09:55.5609934Z [ OK ] ContainerAliasingTest.NestedTupleConstruct (0 ms) 2023-01-11T22:09:55.5610417Z [ RUN ] ContainerAliasingTest.NestedTypes 2023-01-11T22:09:55.5610770Z [ OK ] ContainerAliasingTest.NestedTypes (0 ms) 2023-01-11T22:09:55.5611084Z [ RUN ] ContainerAliasingTest.Simple 2023-01-11T22:09:55.5611406Z [ OK ] ContainerAliasingTest.Simple (0 ms) 2023-01-11T22:09:55.5611894Z [ RUN ] ContainerAliasingTest.Lists 2023-01-11T22:09:55.5612197Z [ OK ] ContainerAliasingTest.Lists (0 ms) 2023-01-11T22:09:55.5612564Z [ RUN ] ContainerAliasingTest.Lists2 2023-01-11T22:09:55.5612982Z [ OK ] ContainerAliasingTest.Lists2 (0 ms) 2023-01-11T22:09:55.5613299Z [ RUN ] 
ContainerAliasingTest.Conservative 2023-01-11T22:09:55.5613672Z [ OK ] ContainerAliasingTest.Conservative (0 ms) 2023-01-11T22:09:55.5614159Z [ RUN ] ContainerAliasingTest.MovesAcrossContainedWrites 2023-01-11T22:09:55.5614601Z [ OK ] ContainerAliasingTest.MovesAcrossContainedWrites (0 ms) 2023-01-11T22:09:55.5615073Z [ RUN ] ContainerAliasingTest.MovesAcrossContainedWritesNested 2023-01-11T22:09:55.5616386Z [ OK ] ContainerAliasingTest.MovesAcrossContainedWritesNested (0 ms) 2023-01-11T22:09:55.5616935Z [----------] 13 tests from ContainerAliasingTest (1 ms total) 2023-01-11T22:09:55.5617130Z 2023-01-11T22:09:55.5617323Z [----------] 3 tests from WildcardsTest 2023-01-11T22:09:55.5617617Z [ RUN ] WildcardsTest.Basic 2023-01-11T22:09:55.5618151Z [ OK ] WildcardsTest.Basic (0 ms) 2023-01-11T22:09:55.5618570Z [ RUN ] WildcardsTest.TypeIsolation 2023-01-11T22:09:55.5619885Z [ OK ] WildcardsTest.TypeIsolation (0 ms) 2023-01-11T22:09:55.5620362Z [ RUN ] WildcardsTest.InvariantContainerAliasing 2023-01-11T22:09:55.5620980Z [ OK ] WildcardsTest.InvariantContainerAliasing (0 ms) 2023-01-11T22:09:55.5621594Z [----------] 3 tests from WildcardsTest (0 ms total) 2023-01-11T22:09:55.5621807Z 2023-01-11T22:09:55.5622011Z [----------] 18 tests from AliasRegistrationTest 2023-01-11T22:09:55.5622572Z [ RUN ] AliasRegistrationTest.ConservativeWithInferredSchema 2023-01-11T22:09:55.5623207Z [ OK ] AliasRegistrationTest.ConservativeWithInferredSchema (0 ms) 2023-01-11T22:09:55.5623804Z [ RUN ] AliasRegistrationTest.ConservativeWithSpecifiedSchema 2023-01-11T22:09:55.5624478Z [ OK ] AliasRegistrationTest.ConservativeWithSpecifiedSchema (0 ms) 2023-01-11T22:09:55.5625013Z [ RUN ] AliasRegistrationTest.ConservativeWithAliasingAnnotationsShouldError 2023-01-11T22:09:55.5728948Z [ OK ] AliasRegistrationTest.ConservativeWithAliasingAnnotationsShouldError (10 ms) 2023-01-11T22:09:55.5729535Z [ RUN ] AliasRegistrationTest.ConservativeWithAliasingAnnotationsShouldError2 2023-01-11T22:09:55.5773812Z [ OK ] AliasRegistrationTest.ConservativeWithAliasingAnnotationsShouldError2 (4 ms) 2023-01-11T22:09:55.5774375Z [ RUN ] AliasRegistrationTest.FromSchemaWithInferredSchemaShouldError 2023-01-11T22:09:55.5788272Z [ OK ] AliasRegistrationTest.FromSchemaWithInferredSchemaShouldError (1 ms) 2023-01-11T22:09:55.5788912Z [ RUN ] AliasRegistrationTest.FromSchemaInferredPure 2023-01-11T22:09:55.5789457Z [ OK ] AliasRegistrationTest.FromSchemaInferredPure (0 ms) 2023-01-11T22:09:55.5789986Z [ RUN ] AliasRegistrationTest.FromSchemaAliased 2023-01-11T22:09:55.5790698Z [ OK ] AliasRegistrationTest.FromSchemaAliased (0 ms) 2023-01-11T22:09:55.5791162Z [ RUN ] AliasRegistrationTest.FromSchemaPure 2023-01-11T22:09:55.5792067Z [ OK ] AliasRegistrationTest.FromSchemaPure (0 ms) 2023-01-11T22:09:55.5792469Z [ RUN ] AliasRegistrationTest.PureNoSchema 2023-01-11T22:09:55.5793220Z [ OK ] AliasRegistrationTest.PureNoSchema (0 ms) 2023-01-11T22:09:55.5793598Z [ RUN ] AliasRegistrationTest.PureWithSchema 2023-01-11T22:09:55.5794677Z [ OK ] AliasRegistrationTest.PureWithSchema (0 ms) 2023-01-11T22:09:55.5795142Z [ RUN ] AliasRegistrationTest.PureWithAnnotationsShouldError 2023-01-11T22:09:55.5828938Z [ OK ] AliasRegistrationTest.PureWithAnnotationsShouldError (3 ms) 2023-01-11T22:09:55.5829660Z [ RUN ] AliasRegistrationTest.AliasMoveAtenListOp 2023-01-11T22:09:55.5830367Z [ OK ] AliasRegistrationTest.AliasMoveAtenListOp (0 ms) 2023-01-11T22:09:55.5831060Z [ RUN ] AliasRegistrationTest.AliasMoveForTupleConstructWithSingleUseAsGraphOutput 
2023-01-11T22:09:55.5832146Z [ OK ] AliasRegistrationTest.AliasMoveForTupleConstructWithSingleUseAsGraphOutput (0 ms) 2023-01-11T22:09:55.5833079Z [ RUN ] AliasRegistrationTest.RecursiveSubgraphTupleContainment 2023-01-11T22:09:55.5833982Z [ OK ] AliasRegistrationTest.RecursiveSubgraphTupleContainment (0 ms) 2023-01-11T22:09:55.5834732Z [ RUN ] AliasRegistrationTest.WildcardAliasForTupleConstructWithUses 2023-01-11T22:09:55.5835256Z [ OK ] AliasRegistrationTest.WildcardAliasForTupleConstructWithUses (0 ms) 2023-01-11T22:09:55.5835830Z [ RUN ] AliasRegistrationTest.ATenSplitIntListAliasCheck 2023-01-11T22:09:55.5836278Z [ OK ] AliasRegistrationTest.ATenSplitIntListAliasCheck (0 ms) 2023-01-11T22:09:55.5836683Z [ RUN ] AliasRegistrationTest.ATenSplitIntAliasCheck 2023-01-11T22:09:55.5837128Z [ OK ] AliasRegistrationTest.ATenSplitIntAliasCheck (0 ms) 2023-01-11T22:09:55.5837885Z [ RUN ] AliasRegistrationTest.PureWithAnnotationsShouldError2 2023-01-11T22:09:55.5868305Z [ OK ] AliasRegistrationTest.PureWithAnnotationsShouldError2 (3 ms) 2023-01-11T22:09:55.5869090Z [----------] 18 tests from AliasRegistrationTest (24 ms total) 2023-01-11T22:09:55.5869376Z 2023-01-11T22:09:55.5869611Z [----------] 2 tests from IRNonDeterminismTest 2023-01-11T22:09:55.5870139Z [ RUN ] IRNonDeterminismTest.Basic 2023-01-11T22:09:55.5870662Z [ OK ] IRNonDeterminismTest.Basic (0 ms) 2023-01-11T22:09:55.5871025Z [ RUN ] IRNonDeterminismTest.DropoutSpecialCase 2023-01-11T22:09:55.5871420Z [ OK ] IRNonDeterminismTest.DropoutSpecialCase (0 ms) 2023-01-11T22:09:55.5871798Z [----------] 2 tests from IRNonDeterminismTest (0 ms total) 2023-01-11T22:09:55.5872004Z 2023-01-11T22:09:55.5872411Z [----------] 1 test from NonDeterminismBackwardsCompatibility 2023-01-11T22:09:55.5873042Z [ RUN ] NonDeterminismBackwardsCompatibility.BackwardsCompatibility 2023-01-11T22:09:55.5873561Z [ OK ] NonDeterminismBackwardsCompatibility.BackwardsCompatibility (0 ms) 2023-01-11T22:09:55.5874054Z [----------] 1 test from NonDeterminismBackwardsCompatibility (0 ms total) 2023-01-11T22:09:55.5874242Z 2023-01-11T22:09:55.5874392Z [----------] 3 tests from AutodiffTest 2023-01-11T22:09:55.5874675Z [ RUN ] AutodiffTest.ADFormulas 2023-01-11T22:09:55.6344218Z [ OK ] AutodiffTest.ADFormulas (47 ms) 2023-01-11T22:09:55.6344589Z [ RUN ] AutodiffTest.Differentiate 2023-01-11T22:09:55.6350140Z [ OK ] AutodiffTest.Differentiate (0 ms) 2023-01-11T22:09:55.6350599Z [ RUN ] AutodiffTest.DifferentiateWithRequiresGrad 2023-01-11T22:09:55.6368903Z [ OK ] AutodiffTest.DifferentiateWithRequiresGrad (1 ms) 2023-01-11T22:09:55.6369548Z [----------] 3 tests from AutodiffTest (49 ms total) 2023-01-11T22:09:55.6369712Z 2023-01-11T22:09:55.6369927Z [----------] 1 test from AutodiffRemoveUnusedGradientsTest 2023-01-11T22:09:55.6370312Z [ RUN ] AutodiffRemoveUnusedGradientsTest.Linear 2023-01-11T22:09:55.6381886Z [ OK ] AutodiffRemoveUnusedGradientsTest.Linear (1 ms) 2023-01-11T22:09:55.6382750Z [----------] 1 test from AutodiffRemoveUnusedGradientsTest (1 ms total) 2023-01-11T22:09:55.6382955Z 2023-01-11T22:09:55.6383112Z [----------] 1 test from UpgraderLoad 2023-01-11T22:09:55.6383639Z [ RUN ] UpgraderLoad.CanPopulateUpgradersGraph 2023-01-11T22:09:55.6425394Z [ OK ] UpgraderLoad.CanPopulateUpgradersGraph (4 ms) 2023-01-11T22:09:55.6426075Z [----------] 1 test from UpgraderLoad (4 ms total) 2023-01-11T22:09:55.6426290Z 2023-01-11T22:09:55.6426454Z [----------] 4 tests from OpReplacementTest 2023-01-11T22:09:55.6426871Z [ RUN ] 
OpReplacementTest.ReplaceDivInSimpleFunction 2023-01-11T22:09:55.6427714Z [ OK ] OpReplacementTest.ReplaceDivInSimpleFunction (0 ms) 2023-01-11T22:09:55.6428172Z [ RUN ] OpReplacementTest.ReplaceTwoOpsInSimpleFunction 2023-01-11T22:09:55.6430314Z [ OK ] OpReplacementTest.ReplaceTwoOpsInSimpleFunction (0 ms) 2023-01-11T22:09:55.6430833Z [ RUN ] OpReplacementTest.ReplaceDivInNestedFunction 2023-01-11T22:09:55.6432103Z [ OK ] OpReplacementTest.ReplaceDivInNestedFunction (0 ms) 2023-01-11T22:09:55.6432933Z [ RUN ] OpReplacementTest.ReplaceTestSubcmulInSimpleFunction 2023-01-11T22:09:55.6434389Z [ OK ] OpReplacementTest.ReplaceTestSubcmulInSimpleFunction (0 ms) 2023-01-11T22:09:55.6434851Z [----------] 4 tests from OpReplacementTest (0 ms total) 2023-01-11T22:09:55.6435031Z 2023-01-11T22:09:55.6435186Z [----------] 4 tests from UpgraderUtils 2023-01-11T22:09:55.6435739Z [ RUN ] UpgraderUtils.FindCorrectUpgrader 2023-01-11T22:09:55.6436317Z [ OK ] UpgraderUtils.FindCorrectUpgrader (0 ms) 2023-01-11T22:09:55.6436819Z [ RUN ] UpgraderUtils.IsVersionMapSorted 2023-01-11T22:09:55.6437172Z [ OK ] UpgraderUtils.IsVersionMapSorted (0 ms) 2023-01-11T22:09:55.6437511Z [ RUN ] UpgraderUtils.FindIfOpIsCurrent 2023-01-11T22:09:55.6437892Z [ OK ] UpgraderUtils.FindIfOpIsCurrent (0 ms) 2023-01-11T22:09:55.6438528Z [ RUN ] UpgraderUtils.CanLoadHistoricOp 2023-01-11T22:09:55.6439111Z [ OK ] UpgraderUtils.CanLoadHistoricOp (0 ms) 2023-01-11T22:09:55.6439462Z [----------] 4 tests from UpgraderUtils (0 ms total) 2023-01-11T22:09:55.6439603Z 2023-01-11T22:09:55.6439751Z [----------] 9 tests from BackendTest 2023-01-11T22:09:55.6440022Z [ RUN ] BackendTest.ToBackend 2023-01-11T22:09:55.6483798Z [ OK ] BackendTest.ToBackend (4 ms) 2023-01-11T22:09:55.6484178Z [ RUN ] BackendTest.ToBackendNotAvailable 2023-01-11T22:09:55.6505455Z [W backend_detail.cpp:393] Warning: Backend [test_backend_unavailable] is not available. Execution of this Module is still possible by saving and loading on a device where the backend is available. 
(function codegen_backend_module) 2023-01-11T22:09:55.6523119Z [ OK ] BackendTest.ToBackendNotAvailable (3 ms) 2023-01-11T22:09:55.6523509Z [ RUN ] BackendTest.TestCompiler 2023-01-11T22:09:55.6580633Z [ OK ] BackendTest.TestCompiler (5 ms) 2023-01-11T22:09:55.6581043Z [ RUN ] BackendTest.TestCompilerWithStringTable 2023-01-11T22:09:55.6634928Z [ OK ] BackendTest.TestCompilerWithStringTable (5 ms) 2023-01-11T22:09:55.6635357Z [ RUN ] BackendTest.TestComposite 2023-01-11T22:09:55.6739056Z [ OK ] BackendTest.TestComposite (10 ms) 2023-01-11T22:09:55.6739443Z [ RUN ] BackendTest.TestPrimDtype 2023-01-11T22:09:55.6745655Z [ OK ] BackendTest.TestPrimDtype (0 ms) 2023-01-11T22:09:55.6746079Z [ RUN ] BackendTest.TestCompositeWithSetStates 2023-01-11T22:09:55.6848510Z [ OK ] BackendTest.TestCompositeWithSetStates (10 ms) 2023-01-11T22:09:55.6848984Z [ RUN ] BackendTest.TestConsistencyOfCompositeWithSetStates 2023-01-11T22:09:55.7038059Z [ OK ] BackendTest.TestConsistencyOfCompositeWithSetStates (18 ms) 2023-01-11T22:09:55.7038508Z [ RUN ] BackendTest.TestCompilerNotSupport 2023-01-11T22:09:55.7058059Z [ OK ] BackendTest.TestCompilerNotSupport (1 ms) 2023-01-11T22:09:55.7058450Z [----------] 9 tests from BackendTest (62 ms total) 2023-01-11T22:09:55.7058607Z 2023-01-11T22:09:55.7058781Z [----------] 6 tests from BackendTestDebugInfo 2023-01-11T22:09:55.7059094Z [ RUN ] BackendTestDebugInfo.TestCompiler 2023-01-11T22:09:55.7183693Z [ OK ] BackendTestDebugInfo.TestCompiler (12 ms) 2023-01-11T22:09:55.7184166Z [ RUN ] BackendTestDebugInfo.TestCompilerWithStringTable 2023-01-11T22:09:55.7321386Z [ OK ] BackendTestDebugInfo.TestCompilerWithStringTable (13 ms) 2023-01-11T22:09:55.7322323Z [ RUN ] BackendTestDebugInfo.TestExceptionStackForCompilerWithModuleHierarchy 2023-01-11T22:09:55.7459279Z [ OK ] BackendTestDebugInfo.TestExceptionStackForCompilerWithModuleHierarchy (13 ms) 2023-01-11T22:09:55.7460657Z [ RUN ] BackendTestDebugInfo.TestExceptionStackForCompilerWithTwoLevelModuleHierarchy 2023-01-11T22:09:55.7588823Z [ OK ] BackendTestDebugInfo.TestExceptionStackForCompilerWithTwoLevelModuleHierarchy (12 ms) 2023-01-11T22:09:55.7589966Z [ RUN ] BackendTestDebugInfo.TestExceptionStackForCompilerWithLoweredSubModule 2023-01-11T22:09:55.7730593Z [ OK ] BackendTestDebugInfo.TestExceptionStackForCompilerWithLoweredSubModule (14 ms) 2023-01-11T22:09:55.7731748Z [ RUN ] BackendTestDebugInfo.TestExceptionStackForCompilerWithSelectiveLoweredSubModule 2023-01-11T22:09:55.7864557Z [ OK ] BackendTestDebugInfo.TestExceptionStackForCompilerWithSelectiveLoweredSubModule (13 ms) 2023-01-11T22:09:55.7865476Z [----------] 6 tests from BackendTestDebugInfo (80 ms total) 2023-01-11T22:09:55.7865767Z 2023-01-11T22:09:55.7866192Z [----------] 4 tests from ClassImportTest 2023-01-11T22:09:55.7866681Z [ RUN ] ClassImportTest.Basic 2023-01-11T22:09:55.7871839Z [ OK ] ClassImportTest.Basic (0 ms) 2023-01-11T22:09:55.7872373Z [ RUN ] ClassImportTest.ScriptObject 2023-01-11T22:09:55.7898127Z [ OK ] ClassImportTest.ScriptObject (2 ms) 2023-01-11T22:09:55.7898661Z [ RUN ] ClassImportTest.ClassDerive 2023-01-11T22:09:55.7899206Z [ OK ] ClassImportTest.ClassDerive (0 ms) 2023-01-11T22:09:55.7899714Z [ RUN ] ClassImportTest.CustomClass 2023-01-11T22:09:55.7900674Z [ OK ] ClassImportTest.CustomClass (0 ms) 2023-01-11T22:09:55.7901281Z [----------] 4 tests from ClassImportTest (3 ms total) 2023-01-11T22:09:55.7901587Z 2023-01-11T22:09:55.7901842Z [----------] 1 test from ClassParserTest 2023-01-11T22:09:55.7902523Z [ RUN ] 
ClassParserTest.Basic 2023-01-11T22:09:55.7903046Z [ OK ] ClassParserTest.Basic (0 ms) 2023-01-11T22:09:55.7903622Z [----------] 1 test from ClassParserTest (0 ms total) 2023-01-11T22:09:55.7903895Z 2023-01-11T22:09:55.7904161Z [----------] 3 tests from ClassTypeTest 2023-01-11T22:09:55.7904680Z [ RUN ] ClassTypeTest.AddRemoveAttr 2023-01-11T22:09:55.7905167Z [ OK ] ClassTypeTest.AddRemoveAttr (0 ms) 2023-01-11T22:09:55.7905493Z [ RUN ] ClassTypeTest.AddRemoveConstant 2023-01-11T22:09:55.7905832Z [ OK ] ClassTypeTest.AddRemoveConstant (0 ms) 2023-01-11T22:09:55.7906193Z [ RUN ] ClassTypeTest.IdenticalTypesDifferentCus 2023-01-11T22:09:55.7926574Z [ OK ] ClassTypeTest.IdenticalTypesDifferentCus (2 ms) 2023-01-11T22:09:55.7927197Z [----------] 3 tests from ClassTypeTest (2 ms total) 2023-01-11T22:09:55.7927486Z 2023-01-11T22:09:55.7927765Z [----------] 2 tests from TestCodeTemplate 2023-01-11T22:09:55.7928288Z [ RUN ] TestCodeTemplate.Copying 2023-01-11T22:09:55.7928816Z [ OK ] TestCodeTemplate.Copying (0 ms) 2023-01-11T22:09:55.7929361Z [ RUN ] TestCodeTemplate.Formatting 2023-01-11T22:09:55.7930123Z [ OK ] TestCodeTemplate.Formatting (0 ms) 2023-01-11T22:09:55.7930719Z [----------] 2 tests from TestCodeTemplate (0 ms total) 2023-01-11T22:09:55.7931007Z 2023-01-11T22:09:55.7931288Z [----------] 13 tests from ConcatOptTest 2023-01-11T22:09:55.7931958Z [ RUN ] ConcatOptTest.SimpleCommonInputsEliminationPrefix 2023-01-11T22:09:55.7987515Z [ OK ] ConcatOptTest.SimpleCommonInputsEliminationPrefix (5 ms) 2023-01-11T22:09:55.7988350Z [ RUN ] ConcatOptTest.SimpleCommonInputsEliminationSuffix 2023-01-11T22:09:55.8031835Z [ OK ] ConcatOptTest.SimpleCommonInputsEliminationSuffix (4 ms) 2023-01-11T22:09:55.8032768Z [ RUN ] ConcatOptTest.CommonInputsEliminationWithDifferentOrderInputs 2023-01-11T22:09:55.8066938Z [ OK ] ConcatOptTest.CommonInputsEliminationWithDifferentOrderInputs (3 ms) 2023-01-11T22:09:55.8067716Z [ RUN ] ConcatOptTest.MoreCommonInputsElimination 2023-01-11T22:09:55.8154953Z [ OK ] ConcatOptTest.MoreCommonInputsElimination (8 ms) 2023-01-11T22:09:55.8155553Z [ RUN ] ConcatOptTest.ExpandConcat 2023-01-11T22:09:55.8179231Z [ OK ] ConcatOptTest.ExpandConcat (2 ms) 2023-01-11T22:09:55.8179697Z [ RUN ] ConcatOptTest.ConcatWithoutResultShape 2023-01-11T22:09:55.8202124Z [ OK ] ConcatOptTest.ConcatWithoutResultShape (2 ms) 2023-01-11T22:09:55.8202618Z [ RUN ] ConcatOptTest.ConcatWithoutInputShape 2023-01-11T22:09:55.8230565Z [ OK ] ConcatOptTest.ConcatWithoutInputShape (2 ms) 2023-01-11T22:09:55.8231003Z [ RUN ] ConcatOptTest.UseVariadicCat 2023-01-11T22:09:55.8290976Z [ OK ] ConcatOptTest.UseVariadicCat (6 ms) 2023-01-11T22:09:55.8291486Z [ RUN ] ConcatOptTest.UseVariadicCatWithMultipleListUses 2023-01-11T22:09:55.8311280Z [ OK ] ConcatOptTest.UseVariadicCatWithMultipleListUses (2 ms) 2023-01-11T22:09:55.8311890Z [ RUN ] ConcatOptTest.UseVariadicCatWithListMutationAfterCat 2023-01-11T22:09:55.8337543Z [ OK ] ConcatOptTest.UseVariadicCatWithListMutationAfterCat (2 ms) 2023-01-11T22:09:55.8338143Z [ RUN ] ConcatOptTest.UseVariadicCatWithListMutationBeforeCat 2023-01-11T22:09:55.8372590Z [ OK ] ConcatOptTest.UseVariadicCatWithListMutationBeforeCat (3 ms) 2023-01-11T22:09:55.8373212Z [ RUN ] ConcatOptTest.UseVariadicCatWithMultipleListMutations 2023-01-11T22:09:55.8417140Z [ OK ] ConcatOptTest.UseVariadicCatWithMultipleListMutations (4 ms) 2023-01-11T22:09:55.8417848Z [ RUN ] ConcatOptTest.RemoveListMutationUseVariadicCatAndCommonInputsElimination 2023-01-11T22:09:55.8460733Z [ OK ] 
ConcatOptTest.RemoveListMutationUseVariadicCatAndCommonInputsElimination (4 ms) 2023-01-11T22:09:55.8461382Z [----------] 13 tests from ConcatOptTest (53 ms total) 2023-01-11T22:09:55.8461600Z 2023-01-11T22:09:55.8461816Z [----------] 1 test from OptimizeConcatTest 2023-01-11T22:09:55.8462292Z [ RUN ] OptimizeConcatTest.UseVariadicCatReplaceMultiple 2023-01-11T22:09:55.8492713Z [ OK ] OptimizeConcatTest.UseVariadicCatReplaceMultiple (3 ms) 2023-01-11T22:09:55.8493247Z [----------] 1 test from OptimizeConcatTest (3 ms total) 2023-01-11T22:09:55.8493464Z 2023-01-11T22:09:55.8493652Z [----------] 3 tests from ConcatOpt 2023-01-11T22:09:55.8494051Z [ RUN ] ConcatOpt.CombineConcatsSimpleCase 2023-01-11T22:09:55.8494861Z [ OK ] ConcatOpt.CombineConcatsSimpleCase (0 ms) 2023-01-11T22:09:55.8495305Z [ RUN ] ConcatOpt.CombineConcatsLongChain 2023-01-11T22:09:55.8497544Z [ OK ] ConcatOpt.CombineConcatsLongChain (0 ms) 2023-01-11T22:09:55.8498025Z [ RUN ] ConcatOpt.CombineConcatsMutation 2023-01-11T22:09:55.8503082Z [ OK ] ConcatOpt.CombineConcatsMutation (0 ms) 2023-01-11T22:09:55.8504236Z [----------] 3 tests from ConcatOpt (0 ms total) 2023-01-11T22:09:55.8504427Z 2023-01-11T22:09:55.8505439Z [----------] 4 tests from ConstantPoolingTest 2023-01-11T22:09:55.8505821Z [ RUN ] ConstantPoolingTest.Int 2023-01-11T22:09:55.8506214Z [ OK ] ConstantPoolingTest.Int (0 ms) 2023-01-11T22:09:55.8507519Z [ RUN ] ConstantPoolingTest.PoolingAcrossBlocks 2023-01-11T22:09:55.8508021Z [ OK ] ConstantPoolingTest.PoolingAcrossBlocks (0 ms) 2023-01-11T22:09:55.8508525Z [ RUN ] ConstantPoolingTest.PoolingDifferentDevices 2023-01-11T22:09:55.8509042Z [ OK ] ConstantPoolingTest.PoolingDifferentDevices (0 ms) 2023-01-11T22:09:55.8509540Z [ RUN ] ConstantPoolingTest.DictConstantPooling 2023-01-11T22:09:55.8510027Z [ OK ] ConstantPoolingTest.DictConstantPooling (0 ms) 2023-01-11T22:09:55.8510607Z [----------] 4 tests from ConstantPoolingTest (0 ms total) 2023-01-11T22:09:55.8510812Z 2023-01-11T22:09:55.8511012Z [----------] 1 test from CleanupPassTest 2023-01-11T22:09:55.8511379Z [ RUN ] CleanupPassTest.Basic 2023-01-11T22:09:55.8511753Z [ OK ] CleanupPassTest.Basic (0 ms) 2023-01-11T22:09:55.8512156Z [----------] 1 test from CleanupPassTest (0 ms total) 2023-01-11T22:09:55.8512358Z 2023-01-11T22:09:55.8512602Z [----------] 1 test from CreateAutodiffSubgraphsTest 2023-01-11T22:09:55.8513040Z [ RUN ] CreateAutodiffSubgraphsTest.Basic 2023-01-11T22:09:55.8514446Z [ OK ] CreateAutodiffSubgraphsTest.Basic (0 ms) 2023-01-11T22:09:55.8514932Z [----------] 1 test from CreateAutodiffSubgraphsTest (0 ms total) 2023-01-11T22:09:55.8515165Z 2023-01-11T22:09:55.8515367Z [----------] 4 tests from CustomClassTest 2023-01-11T22:09:55.8515781Z [ RUN ] CustomClassTest.TorchbindIValueAPI 2023-01-11T22:09:55.8516844Z [ OK ] CustomClassTest.TorchbindIValueAPI (0 ms) 2023-01-11T22:09:55.8517279Z [ RUN ] CustomClassTest.ScalarTypeClass 2023-01-11T22:09:55.8520483Z [ OK ] CustomClassTest.ScalarTypeClass (0 ms) 2023-01-11T22:09:55.8520917Z [ RUN ] CustomClassTest.TestDocString 2023-01-11T22:09:55.8521349Z [ OK ] CustomClassTest.TestDocString (0 ms) 2023-01-11T22:09:55.8521757Z [ RUN ] CustomClassTest.Serialization 2023-01-11T22:09:55.8541694Z [ OK ] CustomClassTest.Serialization (2 ms) 2023-01-11T22:09:55.8542530Z [----------] 4 tests from CustomClassTest (3 ms total) 2023-01-11T22:09:55.8542808Z 2023-01-11T22:09:55.8542985Z [----------] 5 tests from CustomOperatorTest 2023-01-11T22:09:55.8543309Z [ RUN ] CustomOperatorTest.InferredSchema 
2023-01-11T22:09:55.8545206Z [ OK ] CustomOperatorTest.InferredSchema (0 ms) 2023-01-11T22:09:55.8545851Z [ RUN ] CustomOperatorTest.ExplicitSchema 2023-01-11T22:09:55.8546959Z [ OK ] CustomOperatorTest.ExplicitSchema (0 ms) 2023-01-11T22:09:55.8547567Z [ RUN ] CustomOperatorTest.ListParameters 2023-01-11T22:09:55.8548549Z [ OK ] CustomOperatorTest.ListParameters (0 ms) 2023-01-11T22:09:55.8549156Z [ RUN ] CustomOperatorTest.ListParameters2 2023-01-11T22:09:55.8550322Z [ OK ] CustomOperatorTest.ListParameters2 (0 ms) 2023-01-11T22:09:55.8550901Z [ RUN ] CustomOperatorTest.Aliasing 2023-01-11T22:09:55.8552340Z [ OK ] CustomOperatorTest.Aliasing (0 ms) 2023-01-11T22:09:55.8552948Z [----------] 5 tests from CustomOperatorTest (1 ms total) 2023-01-11T22:09:55.8553262Z 2023-01-11T22:09:55.8553533Z [----------] 2 tests from TestCustomOperator 2023-01-11T22:09:55.8554196Z [ RUN ] TestCustomOperator.OperatorGeneratorUndeclared 2023-01-11T22:09:55.8554814Z [ OK ] TestCustomOperator.OperatorGeneratorUndeclared (0 ms) 2023-01-11T22:09:55.8555266Z [ RUN ] TestCustomOperator.OperatorGeneratorBasic 2023-01-11T22:09:55.8555665Z [ OK ] TestCustomOperator.OperatorGeneratorBasic (0 ms) 2023-01-11T22:09:55.8556189Z [----------] 2 tests from TestCustomOperator (0 ms total) 2023-01-11T22:09:55.8556415Z 2023-01-11T22:09:55.8556597Z [----------] 1 test from EliminateDeadCodeTest 2023-01-11T22:09:55.8556909Z [ RUN ] EliminateDeadCodeTest.Basic 2023-01-11T22:09:55.8557214Z [ OK ] EliminateDeadCodeTest.Basic (0 ms) 2023-01-11T22:09:55.8557557Z [----------] 1 test from EliminateDeadCodeTest (0 ms total) 2023-01-11T22:09:55.8557727Z 2023-01-11T22:09:55.8557866Z [----------] 2 tests from FuserTest 2023-01-11T22:09:55.8558144Z [ RUN ] FuserTest.FusionAliasing 2023-01-11T22:09:55.8560410Z [ OK ] FuserTest.FusionAliasing (0 ms) 2023-01-11T22:09:55.8560829Z [ RUN ] FuserTest.KernelCaching 2023-01-11T22:09:55.8563561Z [ OK ] FuserTest.KernelCaching (0 ms) 2023-01-11T22:09:55.8563877Z [----------] 2 tests from FuserTest (0 ms total) 2023-01-11T22:09:55.8564029Z 2023-01-11T22:09:55.8564190Z [----------] 1 test from GraphExecutorTest 2023-01-11T22:09:55.8564506Z [ RUN ] GraphExecutorTest.runAsync_executor 2023-01-11T22:09:55.8628392Z [ OK ] GraphExecutorTest.runAsync_executor (6 ms) 2023-01-11T22:09:55.8629294Z [----------] 1 test from GraphExecutorTest (6 ms total) 2023-01-11T22:09:55.8629741Z 2023-01-11T22:09:55.8630006Z [----------] 5 tests from GraphIteratorTest 2023-01-11T22:09:55.8630682Z [ RUN ] GraphIteratorTest.ConstantReturnGraph 2023-01-11T22:09:55.8631129Z [ OK ] GraphIteratorTest.ConstantReturnGraph (0 ms) 2023-01-11T22:09:55.8631629Z [ RUN ] GraphIteratorTest.GraphWithParameters 2023-01-11T22:09:55.8632116Z [ OK ] GraphIteratorTest.GraphWithParameters (0 ms) 2023-01-11T22:09:55.8632710Z [ RUN ] GraphIteratorTest.GraphWithIf 2023-01-11T22:09:55.8633297Z [ OK ] GraphIteratorTest.GraphWithIf (0 ms) 2023-01-11T22:09:55.8633924Z [ RUN ] GraphIteratorTest.GraphWithNestedIf 2023-01-11T22:09:55.8634501Z [ OK ] GraphIteratorTest.GraphWithNestedIf (0 ms) 2023-01-11T22:09:55.8634834Z [ RUN ] GraphIteratorTest.GraphWithLoop 2023-01-11T22:09:55.8635176Z [ OK ] GraphIteratorTest.GraphWithLoop (0 ms) 2023-01-11T22:09:55.8635525Z [----------] 5 tests from GraphIteratorTest (0 ms total) 2023-01-11T22:09:55.8635687Z 2023-01-11T22:09:55.8635883Z [----------] 1 test from CSDebugInfoSerializaitionTest 2023-01-11T22:09:55.8636242Z [ RUN ] CSDebugInfoSerializaitionTest.TwoSubmodules 2023-01-11T22:09:55.8637884Z [ OK ] 
CSDebugInfoSerializaitionTest.TwoSubmodules (0 ms) 2023-01-11T22:09:55.8638597Z [----------] 1 test from CSDebugInfoSerializaitionTest (0 ms total) 2023-01-11T22:09:55.8638793Z 2023-01-11T22:09:55.8638944Z [----------] 1 test from InlinerTest 2023-01-11T22:09:55.8639256Z [ RUN ] InlinerTest.Basic 2023-01-11T22:09:55.8640890Z [ OK ] InlinerTest.Basic (0 ms) 2023-01-11T22:09:55.8641461Z [----------] 1 test from InlinerTest (0 ms total) 2023-01-11T22:09:55.8641721Z 2023-01-11T22:09:55.8641861Z [----------] 1 test from InterfaceTest 2023-01-11T22:09:55.8642200Z [ RUN ] InterfaceTest.ModuleInterfaceSerialization 2023-01-11T22:09:55.8654700Z [ OK ] InterfaceTest.ModuleInterfaceSerialization (1 ms) 2023-01-11T22:09:55.8655359Z [----------] 1 test from InterfaceTest (1 ms total) 2023-01-11T22:09:55.8655538Z 2023-01-11T22:09:55.8655776Z [----------] 4 tests from TypeCheckTest 2023-01-11T22:09:55.8656195Z [ RUN ] TypeCheckTest.MatchingType 2023-01-11T22:09:55.8656871Z [ OK ] TypeCheckTest.MatchingType (0 ms) 2023-01-11T22:09:55.8657285Z [ RUN ] TypeCheckTest.SizeMismatch 2023-01-11T22:09:55.8657725Z [ OK ] TypeCheckTest.SizeMismatch (0 ms) 2023-01-11T22:09:55.8658181Z [ RUN ] TypeCheckTest.GradientMismatch 2023-01-11T22:09:55.8658642Z [ OK ] TypeCheckTest.GradientMismatch (0 ms) 2023-01-11T22:09:55.8658978Z [ RUN ] TypeCheckTest.ScalarTypeMismatch 2023-01-11T22:09:55.8659380Z [ OK ] TypeCheckTest.ScalarTypeMismatch (0 ms) 2023-01-11T22:09:55.8659727Z [----------] 4 tests from TypeCheckTest (0 ms total) 2023-01-11T22:09:55.8659882Z 2023-01-11T22:09:55.8660028Z [----------] 3 tests from InterpreterTest 2023-01-11T22:09:55.8660357Z [ RUN ] InterpreterTest.IgnorableArgsInSchema 2023-01-11T22:09:55.8663936Z [ OK ] InterpreterTest.IgnorableArgsInSchema (0 ms) 2023-01-11T22:09:55.8664513Z [ RUN ] InterpreterTest.IgnorableArgsInSchemaWithOut 2023-01-11T22:09:55.8664928Z [ OK ] InterpreterTest.IgnorableArgsInSchemaWithOut (0 ms) 2023-01-11T22:09:55.8665301Z [ RUN ] InterpreterTest.runAsyncBasicTest 2023-01-11T22:09:55.8681458Z [ OK ] InterpreterTest.runAsyncBasicTest (1 ms) 2023-01-11T22:09:55.8682156Z [----------] 3 tests from InterpreterTest (2 ms total) 2023-01-11T22:09:55.8682343Z 2023-01-11T22:09:55.8682596Z [----------] 1 test from EnableRethrowCaughtExceptionTest 2023-01-11T22:09:55.8683265Z [ RUN ] EnableRethrowCaughtExceptionTest.EnableRethrowCaughtExceptionTestRethrowsCaughtException 2023-01-11T22:09:55.8894403Z [ OK ] EnableRethrowCaughtExceptionTest.EnableRethrowCaughtExceptionTestRethrowsCaughtException (21 ms) 2023-01-11T22:09:55.8895446Z [----------] 1 test from EnableRethrowCaughtExceptionTest (21 ms total) 2023-01-11T22:09:55.8895716Z 2023-01-11T22:09:55.8895869Z [----------] 4 tests from IRTest 2023-01-11T22:09:55.8896180Z [ RUN ] IRTest.Attributes 2023-01-11T22:09:55.8896559Z [ OK ] IRTest.Attributes (0 ms) 2023-01-11T22:09:55.8896848Z [ RUN ] IRTest.Blocks 2023-01-11T22:09:55.8897197Z [ OK ] IRTest.Blocks (0 ms) 2023-01-11T22:09:55.8897613Z [ RUN ] IRTest.CommonAncestor 2023-01-11T22:09:55.8897946Z [ OK ] IRTest.CommonAncestor (0 ms) 2023-01-11T22:09:55.8898354Z [ RUN ] IRTest.OperatorMap 2023-01-11T22:09:55.8898763Z [ OK ] IRTest.OperatorMap (0 ms) 2023-01-11T22:09:55.8899178Z [----------] 4 tests from IRTest (0 ms total) 2023-01-11T22:09:55.8899366Z 2023-01-11T22:09:55.8899578Z [----------] 21 tests from IRParserTest 2023-01-11T22:09:55.8899969Z [ RUN ] IRParserTest.Basic 2023-01-11T22:09:55.8900370Z [ OK ] IRParserTest.Basic (0 ms) 2023-01-11T22:09:55.8900878Z [ RUN ] 
IRParserTest.NestedBlock 2023-01-11T22:09:55.8901421Z [ OK ] IRParserTest.NestedBlock (0 ms) 2023-01-11T22:09:55.8901915Z [ RUN ] IRParserTest.If 2023-01-11T22:09:55.8902563Z [ OK ] IRParserTest.If (0 ms) 2023-01-11T22:09:55.8902933Z [ RUN ] IRParserTest.If2 2023-01-11T22:09:55.8903263Z [ OK ] IRParserTest.If2 (0 ms) 2023-01-11T22:09:55.8903625Z [ RUN ] IRParserTest.InferredTypeIsTensor 2023-01-11T22:09:55.8904148Z [ OK ] IRParserTest.InferredTypeIsTensor (0 ms) 2023-01-11T22:09:55.8904631Z [ RUN ] IRParserTest.ValueReuse 2023-01-11T22:09:55.8905093Z [ OK ] IRParserTest.ValueReuse (0 ms) 2023-01-11T22:09:55.8905452Z [ RUN ] IRParserTest.Attributes 2023-01-11T22:09:55.8905875Z [ OK ] IRParserTest.Attributes (0 ms) 2023-01-11T22:09:55.8906212Z [ RUN ] IRParserTest.OptionalTypes 2023-01-11T22:09:55.8906930Z [ OK ] IRParserTest.OptionalTypes (0 ms) 2023-01-11T22:09:55.8907321Z [ RUN ] IRParserTest.StarTensor 2023-01-11T22:09:55.8907732Z [ OK ] IRParserTest.StarTensor (0 ms) 2023-01-11T22:09:55.8908139Z [ RUN ] IRParserTest.UnshapedTensor 2023-01-11T22:09:55.8908654Z [ OK ] IRParserTest.UnshapedTensor (0 ms) 2023-01-11T22:09:55.8909179Z [ RUN ] IRParserTest.ShapedTensor 2023-01-11T22:09:55.8909634Z [ OK ] IRParserTest.ShapedTensor (0 ms) 2023-01-11T22:09:55.8910001Z [ RUN ] IRParserTest.NestedContrainer 2023-01-11T22:09:55.8910450Z [ OK ] IRParserTest.NestedContrainer (0 ms) 2023-01-11T22:09:55.8910809Z [ RUN ] IRParserTest.MalformedShapeAnnotation 2023-01-11T22:09:55.8911333Z [ OK ] IRParserTest.MalformedShapeAnnotation (0 ms) 2023-01-11T22:09:55.8911685Z [ RUN ] IRParserTest.FileCheck 2023-01-11T22:09:55.8912158Z [ OK ] IRParserTest.FileCheck (0 ms) 2023-01-11T22:09:55.8912455Z [ RUN ] IRParserTest.Strides 2023-01-11T22:09:55.8912756Z [ OK ] IRParserTest.Strides (0 ms) 2023-01-11T22:09:55.8913091Z [ RUN ] IRParserTest.MalformedStrides 2023-01-11T22:09:55.8913420Z [ OK ] IRParserTest.MalformedStrides (0 ms) 2023-01-11T22:09:55.8913780Z [ RUN ] IRParserTest.TensorShapes 2023-01-11T22:09:55.8914092Z [ OK ] IRParserTest.TensorShapes (0 ms) 2023-01-11T22:09:55.8914508Z [ RUN ] IRParserTest.DeviceAndRequiresGradTensors 2023-01-11T22:09:55.8914894Z [ OK ] IRParserTest.DeviceAndRequiresGradTensors (0 ms) 2023-01-11T22:09:55.8915300Z [ RUN ] IRParserTest.ListConstant 2023-01-11T22:09:55.8915613Z [ OK ] IRParserTest.ListConstant (0 ms) 2023-01-11T22:09:55.8915996Z [ RUN ] IRParserTest.PartialStarTensor 2023-01-11T22:09:55.8916333Z [ OK ] IRParserTest.PartialStarTensor (0 ms) 2023-01-11T22:09:55.8916735Z [ RUN ] IRParserTest.ComplexTensorAttributes 2023-01-11T22:09:55.8917107Z [ OK ] IRParserTest.ComplexTensorAttributes (0 ms) 2023-01-11T22:09:55.8917504Z [----------] 21 tests from IRParserTest (1 ms total) 2023-01-11T22:09:55.8917661Z 2023-01-11T22:09:55.8917814Z [----------] 2 tests from JitTypeTest 2023-01-11T22:09:55.8918094Z [ RUN ] JitTypeTest.IsComplete 2023-01-11T22:09:55.8918439Z [ OK ] JitTypeTest.IsComplete (0 ms) 2023-01-11T22:09:55.8918770Z [ RUN ] JitTypeTest.UnifyTypes 2023-01-11T22:09:55.8919186Z [ OK ] JitTypeTest.UnifyTypes (0 ms) 2023-01-11T22:09:55.8919545Z [----------] 2 tests from JitTypeTest (0 ms total) 2023-01-11T22:09:55.8919689Z 2023-01-11T22:09:55.8919923Z [----------] 42 tests from LiteInterpreterTest 2023-01-11T22:09:55.8920269Z [ RUN ] LiteInterpreterTest.UpsampleNearest2d 2023-01-11T22:09:55.8926780Z [ OK ] LiteInterpreterTest.UpsampleNearest2d (1 ms) 2023-01-11T22:09:55.8927239Z [ RUN ] LiteInterpreterTest.CheckAttrAccess 2023-01-11T22:09:55.8928382Z [ OK ] 
LiteInterpreterTest.CheckAttrAccess (0 ms) 2023-01-11T22:09:55.8928782Z [ RUN ] LiteInterpreterTest.MethodInvocation 2023-01-11T22:09:55.8955079Z [ OK ] LiteInterpreterTest.MethodInvocation (2 ms) 2023-01-11T22:09:55.8955475Z [ RUN ] LiteInterpreterTest.Conv 2023-01-11T22:09:55.8981228Z [ OK ] LiteInterpreterTest.Conv (2 ms) 2023-01-11T22:09:55.8981618Z [ RUN ] LiteInterpreterTest.Inline 2023-01-11T22:09:55.8991559Z [ OK ] LiteInterpreterTest.Inline (1 ms) 2023-01-11T22:09:55.8991925Z [ RUN ] LiteInterpreterTest.Tuple 2023-01-11T22:09:55.8998620Z [ OK ] LiteInterpreterTest.Tuple (0 ms) 2023-01-11T22:09:55.8998994Z [ RUN ] LiteInterpreterTest.AtenFormat 2023-01-11T22:09:55.9005188Z [ OK ] LiteInterpreterTest.AtenFormat (0 ms) 2023-01-11T22:09:55.9005584Z [ RUN ] LiteInterpreterTest.PrimDevice 2023-01-11T22:09:55.9009519Z [ OK ] LiteInterpreterTest.PrimDevice (0 ms) 2023-01-11T22:09:55.9009905Z [ RUN ] LiteInterpreterTest.Dict 2023-01-11T22:09:55.9017089Z [ OK ] LiteInterpreterTest.Dict (0 ms) 2023-01-11T22:09:55.9017649Z [ RUN ] LiteInterpreterTest.List 2023-01-11T22:09:55.9025882Z [ OK ] LiteInterpreterTest.List (0 ms) 2023-01-11T22:09:55.9026486Z [ RUN ] LiteInterpreterTest.PrimOverload 2023-01-11T22:09:55.9026893Z [ OK ] LiteInterpreterTest.PrimOverload (0 ms) 2023-01-11T22:09:55.9027341Z [ RUN ] LiteInterpreterTest.Prim 2023-01-11T22:09:55.9030683Z [ OK ] LiteInterpreterTest.Prim (0 ms) 2023-01-11T22:09:55.9031124Z [ RUN ] LiteInterpreterTest.PrimScalar 2023-01-11T22:09:55.9035713Z [ OK ] LiteInterpreterTest.PrimScalar (0 ms) 2023-01-11T22:09:55.9036051Z [ RUN ] LiteInterpreterTest.LoadOrigJit 2023-01-11T22:09:55.9092853Z [ OK ] LiteInterpreterTest.LoadOrigJit (5 ms) 2023-01-11T22:09:55.9093213Z [ RUN ] LiteInterpreterTest.WrongMethodName 2023-01-11T22:09:55.9113532Z [ OK ] LiteInterpreterTest.WrongMethodName (2 ms) 2023-01-11T22:09:55.9113876Z [ RUN ] LiteInterpreterTest.SetState 2023-01-11T22:09:55.9139807Z [ OK ] LiteInterpreterTest.SetState (2 ms) 2023-01-11T22:09:55.9140406Z [ RUN ] LiteInterpreterTest.BuiltinClass 2023-01-11T22:09:55.9148819Z [ OK ] LiteInterpreterTest.BuiltinClass (0 ms) 2023-01-11T22:09:55.9149440Z [ RUN ] LiteInterpreterTest.BuiltinFunction 2023-01-11T22:09:55.9153151Z [ OK ] LiteInterpreterTest.BuiltinFunction (0 ms) 2023-01-11T22:09:55.9153646Z [ RUN ] LiteInterpreterTest.GetRuntimeByteCodeVersion 2023-01-11T22:09:55.9154085Z [ OK ] LiteInterpreterTest.GetRuntimeByteCodeVersion (0 ms) 2023-01-11T22:09:55.9154583Z [ RUN ] LiteInterpreterTest.GetRuntimeOperatorsVersion 2023-01-11T22:09:55.9155031Z [ OK ] LiteInterpreterTest.GetRuntimeOperatorsVersion (0 ms) 2023-01-11T22:09:55.9155458Z [ RUN ] LiteInterpreterTest.GetByteCodeVersion 2023-01-11T22:09:55.9160281Z [ OK ] LiteInterpreterTest.GetByteCodeVersion (0 ms) 2023-01-11T22:09:55.9160907Z [ RUN ] LiteInterpreterTest.GetContainTypes 2023-01-11T22:09:55.9163196Z [ OK ] LiteInterpreterTest.GetContainTypes (0 ms) 2023-01-11T22:09:55.9163925Z [ RUN ] LiteInterpreterTest.BackPortByteCodeModelAllVersions 2023-01-11T22:09:56.0062785Z [ OK ] LiteInterpreterTest.BackPortByteCodeModelAllVersions (89 ms) 2023-01-11T22:09:56.0063495Z [ RUN ] LiteInterpreterTest.GetRuntimeOpsAndInfo 2023-01-11T22:09:56.0131482Z [ OK ] LiteInterpreterTest.GetRuntimeOpsAndInfo (7 ms) 2023-01-11T22:09:56.0132034Z [ RUN ] LiteInterpreterTest.isCompatibleSuccess 2023-01-11T22:09:56.0190952Z [ OK ] LiteInterpreterTest.isCompatibleSuccess (5 ms) 2023-01-11T22:09:56.0191685Z [ RUN ] LiteInterpreterTest.isCompatibleFail 2023-01-11T22:09:56.0300923Z 
[ OK ] LiteInterpreterTest.isCompatibleFail (10 ms) 2023-01-11T22:09:56.0301298Z [ RUN ] LiteInterpreterTest.Eval 2023-01-11T22:09:56.0312294Z [ OK ] LiteInterpreterTest.Eval (1 ms) 2023-01-11T22:09:56.0312657Z [ RUN ] LiteInterpreterTest.FindWrongMethodName 2023-01-11T22:09:56.0316931Z [ OK ] LiteInterpreterTest.FindWrongMethodName (0 ms) 2023-01-11T22:09:56.0317310Z [ RUN ] LiteInterpreterTest.FindAndRunMethod 2023-01-11T22:09:56.0325060Z [ OK ] LiteInterpreterTest.FindAndRunMethod (0 ms) 2023-01-11T22:09:56.0325698Z [ RUN ] LiteInterpreterTest.RunMethodVariadic 2023-01-11T22:09:56.0332534Z [ OK ] LiteInterpreterTest.RunMethodVariadic (0 ms) 2023-01-11T22:09:56.0332922Z [ RUN ] LiteInterpreterTest.DuplicateSetState 2023-01-11T22:09:56.0342954Z [ OK ] LiteInterpreterTest.DuplicateSetState (1 ms) 2023-01-11T22:09:56.0343400Z [ RUN ] LiteInterpreterTest.ExtraFiles 2023-01-11T22:09:56.0348138Z [ OK ] LiteInterpreterTest.ExtraFiles (0 ms) 2023-01-11T22:09:56.0348529Z [ RUN ] LiteInterpreterTest.OpNameExportFetchRootOperators 2023-01-11T22:09:56.0357370Z [ OK ] LiteInterpreterTest.OpNameExportFetchRootOperators (0 ms) 2023-01-11T22:09:56.0357801Z [ RUN ] LiteInterpreterTest.DefaultArgsConv 2023-01-11T22:09:56.0372209Z [ OK ] LiteInterpreterTest.DefaultArgsConv (1 ms) 2023-01-11T22:09:56.0372884Z [ RUN ] LiteInterpreterTest.DefaultArgsPinv 2023-01-11T22:09:56.0433599Z [ OK ] LiteInterpreterTest.DefaultArgsPinv (6 ms) 2023-01-11T22:09:56.0434049Z [ RUN ] LiteInterpreterTest.DefaultArgsTensorinvSpecifyDefault 2023-01-11T22:09:56.0470151Z [ OK ] LiteInterpreterTest.DefaultArgsTensorinvSpecifyDefault (3 ms) 2023-01-11T22:09:56.0470592Z [ RUN ] LiteInterpreterTest.DefaultArgsPinvWithOutArg 2023-01-11T22:09:56.0490362Z [ OK ] LiteInterpreterTest.DefaultArgsPinvWithOutArg (1 ms) 2023-01-11T22:09:56.0490773Z [ RUN ] LiteInterpreterTest.DefaultArgsWithOutArg 2023-01-11T22:09:56.0498648Z [ OK ] LiteInterpreterTest.DefaultArgsWithOutArg (0 ms) 2023-01-11T22:09:56.0499162Z [ RUN ] LiteInterpreterTest.TestExceptionStackWithTwoLevelModuleHierarchy 2023-01-11T22:09:56.0598877Z [ OK ] LiteInterpreterTest.TestExceptionStackWithTwoLevelModuleHierarchy (9 ms) 2023-01-11T22:09:56.0599486Z [ RUN ] LiteInterpreterTest.OperatorCacheDifferentiatesDefaultArgs 2023-01-11T22:09:56.0634762Z [ OK ] LiteInterpreterTest.OperatorCacheDifferentiatesDefaultArgs (3 ms) 2023-01-11T22:09:56.0635238Z [ RUN ] LiteInterpreterTest.OperatorSize1 2023-01-11T22:09:56.0639151Z [ OK ] LiteInterpreterTest.OperatorSize1 (0 ms) 2023-01-11T22:09:56.0639737Z [ RUN ] LiteInterpreterTest.OperatorTest2 2023-01-11T22:09:56.0655708Z [ OK ] LiteInterpreterTest.OperatorTest2 (1 ms) 2023-01-11T22:09:56.0656346Z [----------] 42 tests from LiteInterpreterTest (174 ms total) 2023-01-11T22:09:56.0656523Z 2023-01-11T22:09:56.0656678Z [----------] 3 tests from RunTimeTest 2023-01-11T22:09:56.0656958Z [ RUN ] RunTimeTest.ParseBytecode 2023-01-11T22:09:56.0657274Z [ OK ] RunTimeTest.ParseBytecode (0 ms) 2023-01-11T22:09:56.0657581Z [ RUN ] RunTimeTest.ParseOperator 2023-01-11T22:09:56.0657894Z [ OK ] RunTimeTest.ParseOperator (0 ms) 2023-01-11T22:09:56.0658188Z [ RUN ] RunTimeTest.RuntimeCall 2023-01-11T22:09:56.0658485Z [ OK ] RunTimeTest.RuntimeCall (0 ms) 2023-01-11T22:09:56.0658787Z [----------] 3 tests from RunTimeTest (0 ms total) 2023-01-11T22:09:56.0658939Z 2023-01-11T22:09:56.0659132Z [----------] 11 tests from LiteInterpreterUpgraderTest 2023-01-11T22:09:56.0659491Z [ RUN ] LiteInterpreterUpgraderTest.DivTensorV2 2023-01-11T22:09:56.0664628Z [ OK ] 
LiteInterpreterUpgraderTest.DivTensorV2 (0 ms) 2023-01-11T22:09:56.0665012Z [ RUN ] LiteInterpreterUpgraderTest.DivTensorOutV2 2023-01-11T22:09:56.0672176Z [ OK ] LiteInterpreterUpgraderTest.DivTensorOutV2 (0 ms) 2023-01-11T22:09:56.0672610Z [ RUN ] LiteInterpreterUpgraderTest.DivTensorInplaceV2 2023-01-11T22:09:56.0678327Z [ OK ] LiteInterpreterUpgraderTest.DivTensorInplaceV2 (0 ms) 2023-01-11T22:09:56.0678947Z [ RUN ] LiteInterpreterUpgraderTest.DivScalarFloatV2 2023-01-11T22:09:56.0689283Z [ OK ] LiteInterpreterUpgraderTest.DivScalarFloatV2 (1 ms) 2023-01-11T22:09:56.0689820Z [ RUN ] LiteInterpreterUpgraderTest.DivScalarReciprocalFloatV2 2023-01-11T22:09:56.0694547Z expect output: 0.5000 2023-01-11T22:09:56.0694883Z [ CPUFloatType{1} ]actual output: 0.5000 2023-01-11T22:09:56.0695679Z [ CPUFloatType{1} ][ OK ] LiteInterpreterUpgraderTest.DivScalarReciprocalFloatV2 (0 ms) 2023-01-11T22:09:56.0696174Z [ RUN ] LiteInterpreterUpgraderTest.DivScalarReciprocalIntV2 2023-01-11T22:09:56.0700579Z [ OK ] LiteInterpreterUpgraderTest.DivScalarReciprocalIntV2 (0 ms) 2023-01-11T22:09:56.0701212Z [ RUN ] LiteInterpreterUpgraderTest.DivScalarScalarV2 2023-01-11T22:09:56.0707280Z [ OK ] LiteInterpreterUpgraderTest.DivScalarScalarV2 (0 ms) 2023-01-11T22:09:56.0708011Z [ RUN ] LiteInterpreterUpgraderTest.DivScalarIntV2 2023-01-11T22:09:56.0712480Z [ OK ] LiteInterpreterUpgraderTest.DivScalarIntV2 (0 ms) 2023-01-11T22:09:56.0713144Z [ RUN ] LiteInterpreterUpgraderTest.DivScalarInplaceFloatV2 2023-01-11T22:09:56.0720231Z [ OK ] LiteInterpreterUpgraderTest.DivScalarInplaceFloatV2 (0 ms) 2023-01-11T22:09:56.0720887Z [ RUN ] LiteInterpreterUpgraderTest.DivScalarInplaceIntV2 2023-01-11T22:09:56.0728328Z [ OK ] LiteInterpreterUpgraderTest.DivScalarInplaceIntV2 (0 ms) 2023-01-11T22:09:56.0728964Z [ RUN ] LiteInterpreterUpgraderTest.Upgrader 2023-01-11T22:09:56.0729350Z [ OK ] LiteInterpreterUpgraderTest.Upgrader (0 ms) 2023-01-11T22:09:56.0729743Z [----------] 11 tests from LiteInterpreterUpgraderTest (7 ms total) 2023-01-11T22:09:56.0729930Z 2023-01-11T22:09:56.0730120Z [----------] 29 tests from LiteInterpreterDirectTest 2023-01-11T22:09:56.0730504Z [ RUN ] LiteInterpreterDirectTest.UpsampleNearest2d 2023-01-11T22:09:56.0737806Z [ OK ] LiteInterpreterDirectTest.UpsampleNearest2d (0 ms) 2023-01-11T22:09:56.0738252Z [ RUN ] LiteInterpreterDirectTest.CheckAttrAccess 2023-01-11T22:09:56.0738658Z [ OK ] LiteInterpreterDirectTest.CheckAttrAccess (0 ms) 2023-01-11T22:09:56.0739057Z [ RUN ] LiteInterpreterDirectTest.MethodInvocation 2023-01-11T22:09:56.0743334Z hello 2023-01-11T22:09:56.0743588Z hello 3 2023-01-11T22:09:56.0750666Z hello 2023-01-11T22:09:56.0751471Z hello 3 2023-01-11T22:09:56.0756279Z hello 2023-01-11T22:09:56.0756874Z hello 3 2023-01-11T22:09:56.0757516Z [ OK ] LiteInterpreterDirectTest.MethodInvocation (1 ms) 2023-01-11T22:09:56.0758137Z [ RUN ] LiteInterpreterDirectTest.Conv 2023-01-11T22:09:56.0773148Z [ OK ] LiteInterpreterDirectTest.Conv (1 ms) 2023-01-11T22:09:56.0773930Z [ RUN ] LiteInterpreterDirectTest.Inline 2023-01-11T22:09:56.0780472Z [ OK ] LiteInterpreterDirectTest.Inline (0 ms) 2023-01-11T22:09:56.0781083Z [ RUN ] LiteInterpreterDirectTest.Tuple 2023-01-11T22:09:56.0785484Z [ OK ] LiteInterpreterDirectTest.Tuple (0 ms) 2023-01-11T22:09:56.0786076Z [ RUN ] LiteInterpreterDirectTest.Dict 2023-01-11T22:09:56.0789978Z [ OK ] LiteInterpreterDirectTest.Dict (0 ms) 2023-01-11T22:09:56.0790572Z [ RUN ] LiteInterpreterDirectTest.Prim 2023-01-11T22:09:56.0793602Z [ OK ] 
LiteInterpreterDirectTest.Prim (0 ms) 2023-01-11T22:09:56.0794237Z [ RUN ] LiteInterpreterDirectTest.PrimScalar 2023-01-11T22:09:56.0797675Z [ OK ] LiteInterpreterDirectTest.PrimScalar (0 ms) 2023-01-11T22:09:56.0798341Z [ RUN ] LiteInterpreterDirectTest.WrongMethodName 2023-01-11T22:09:56.0817104Z [ OK ] LiteInterpreterDirectTest.WrongMethodName (1 ms) 2023-01-11T22:09:56.0817995Z [ RUN ] LiteInterpreterDirectTest.SetState 2023-01-11T22:09:56.0835435Z [ OK ] LiteInterpreterDirectTest.SetState (1 ms) 2023-01-11T22:09:56.0836075Z [ RUN ] LiteInterpreterDirectTest.BuiltinFunction 2023-01-11T22:09:56.0837554Z [ OK ] LiteInterpreterDirectTest.BuiltinFunction (0 ms) 2023-01-11T22:09:56.0838139Z [ RUN ] LiteInterpreterDirectTest.GetRuntimeByteCodeVersion 2023-01-11T22:09:56.0838738Z [ OK ] LiteInterpreterDirectTest.GetRuntimeByteCodeVersion (0 ms) 2023-01-11T22:09:56.0839413Z [ RUN ] LiteInterpreterDirectTest.GetRuntimeOperatorsVersion 2023-01-11T22:09:56.0839987Z [ OK ] LiteInterpreterDirectTest.GetRuntimeOperatorsVersion (0 ms) 2023-01-11T22:09:56.0840552Z [ RUN ] LiteInterpreterDirectTest.GetByteCodeVersion 2023-01-11T22:09:56.0841074Z [ OK ] LiteInterpreterDirectTest.GetByteCodeVersion (0 ms) 2023-01-11T22:09:56.0841757Z [ RUN ] LiteInterpreterDirectTest.GetRuntimeOpsAndInfo 2023-01-11T22:09:56.0903795Z [ OK ] LiteInterpreterDirectTest.GetRuntimeOpsAndInfo (6 ms) 2023-01-11T22:09:56.0904442Z [ RUN ] LiteInterpreterDirectTest.Eval 2023-01-11T22:09:56.0910556Z [ OK ] LiteInterpreterDirectTest.Eval (0 ms) 2023-01-11T22:09:56.0911216Z [ RUN ] LiteInterpreterDirectTest.FindWrongMethodName 2023-01-11T22:09:56.0913088Z [ OK ] LiteInterpreterDirectTest.FindWrongMethodName (0 ms) 2023-01-11T22:09:56.0913770Z [ RUN ] LiteInterpreterDirectTest.FindAndRunMethod 2023-01-11T22:09:56.0919427Z [ OK ] LiteInterpreterDirectTest.FindAndRunMethod (0 ms) 2023-01-11T22:09:56.0920148Z [ RUN ] LiteInterpreterDirectTest.RunMethodVariadic 2023-01-11T22:09:56.0924978Z [ OK ] LiteInterpreterDirectTest.RunMethodVariadic (0 ms) 2023-01-11T22:09:56.0925712Z [ RUN ] LiteInterpreterDirectTest.DuplicateSetState 2023-01-11T22:09:56.0931023Z [ OK ] LiteInterpreterDirectTest.DuplicateSetState (0 ms) 2023-01-11T22:09:56.0931872Z [ RUN ] LiteInterpreterDirectTest.OpNameExportFetchRootOperators 2023-01-11T22:09:56.0937040Z [ OK ] LiteInterpreterDirectTest.OpNameExportFetchRootOperators (0 ms) 2023-01-11T22:09:56.0937815Z [ RUN ] LiteInterpreterDirectTest.DefaultArgsConv 2023-01-11T22:09:56.0948241Z [ OK ] LiteInterpreterDirectTest.DefaultArgsConv (1 ms) 2023-01-11T22:09:56.0949184Z [ RUN ] LiteInterpreterDirectTest.DefaultArgsPinv 2023-01-11T22:09:56.0985850Z [ OK ] LiteInterpreterDirectTest.DefaultArgsPinv (3 ms) 2023-01-11T22:09:56.0986948Z [ RUN ] LiteInterpreterDirectTest.DefaultArgsTensorinvSpecifyDefault 2023-01-11T22:09:56.0994762Z [ OK ] LiteInterpreterDirectTest.DefaultArgsTensorinvSpecifyDefault (0 ms) 2023-01-11T22:09:56.0995771Z [ RUN ] LiteInterpreterDirectTest.DefaultArgsPinvWithOutArg 2023-01-11T22:09:56.1015639Z [ OK ] LiteInterpreterDirectTest.DefaultArgsPinvWithOutArg (2 ms) 2023-01-11T22:09:56.1016687Z [ RUN ] LiteInterpreterDirectTest.DefaultArgsWithOutArg 2023-01-11T22:09:56.1022283Z [ OK ] LiteInterpreterDirectTest.DefaultArgsWithOutArg (0 ms) 2023-01-11T22:09:56.1023395Z [ RUN ] LiteInterpreterDirectTest.TestExceptionStackWithTwoLevelModuleHierarchy 2023-01-11T22:09:56.1124170Z [ OK ] LiteInterpreterDirectTest.TestExceptionStackWithTwoLevelModuleHierarchy (10 ms) 2023-01-11T22:09:56.1125198Z [ RUN ] 
LiteInterpreterDirectTest.OperatorCacheDifferentiatesDefaultArgs 2023-01-11T22:09:56.1144787Z [ OK ] LiteInterpreterDirectTest.OperatorCacheDifferentiatesDefaultArgs (2 ms) 2023-01-11T22:09:56.1145556Z [----------] 29 tests from LiteInterpreterDirectTest (41 ms total) 2023-01-11T22:09:56.1145843Z 2023-01-11T22:09:56.1146338Z [----------] 7 tests from LiteTrainerTest 2023-01-11T22:09:56.1146784Z [ RUN ] LiteTrainerTest.Params 2023-01-11T22:09:56.1228516Z [ OK ] LiteTrainerTest.Params (8 ms) 2023-01-11T22:09:56.1229009Z [ RUN ] LiteTrainerTest.SGD 2023-01-11T22:09:56.1300465Z [ OK ] LiteTrainerTest.SGD (7 ms) 2023-01-11T22:09:56.1301009Z [ RUN ] LiteTrainerTest.SequentialSampler 2023-01-11T22:09:56.1301592Z [ OK ] LiteTrainerTest.SequentialSampler (0 ms) 2023-01-11T22:09:56.1302291Z [ RUN ] LiteTrainerTest.RandomSamplerReturnsIndicesInCorrectRange 2023-01-11T22:09:56.1303471Z [ OK ] LiteTrainerTest.RandomSamplerReturnsIndicesInCorrectRange (0 ms) 2023-01-11T22:09:56.1304350Z [ RUN ] LiteTrainerTest.RandomSamplerReturnsLessValuesForLastBatch 2023-01-11T22:09:56.1305181Z [ OK ] LiteTrainerTest.RandomSamplerReturnsLessValuesForLastBatch (0 ms) 2023-01-11T22:09:56.1306180Z [ RUN ] LiteTrainerTest.RandomSamplerResetsWell 2023-01-11T22:09:56.1306840Z [ OK ] LiteTrainerTest.RandomSamplerResetsWell (0 ms) 2023-01-11T22:09:56.1307495Z [ RUN ] LiteTrainerTest.RandomSamplerResetsWithNewSizeWell 2023-01-11T22:09:56.1308252Z [ OK ] LiteTrainerTest.RandomSamplerResetsWithNewSizeWell (0 ms) 2023-01-11T22:09:56.1308914Z [----------] 7 tests from LiteTrainerTest (15 ms total) 2023-01-11T22:09:56.1309150Z 2023-01-11T22:09:56.1309378Z [----------] 6 tests from MobileTest 2023-01-11T22:09:56.1309910Z [ RUN ] MobileTest.SaveLoadParametersEmpty 2023-01-11T22:09:56.1310492Z [ OK ] MobileTest.SaveLoadParametersEmpty (0 ms) 2023-01-11T22:09:56.1311110Z [ RUN ] MobileTest.SaveParametersDefaultsToZip 2023-01-11T22:09:56.1311771Z [ OK ] MobileTest.SaveParametersDefaultsToZip (0 ms) 2023-01-11T22:09:56.1312454Z [ RUN ] MobileTest.SaveParametersCanUseFlatbuffer 2023-01-11T22:09:56.1313157Z [ OK ] MobileTest.SaveParametersCanUseFlatbuffer (0 ms) 2023-01-11T22:09:56.1313861Z [ RUN ] MobileTest.SaveLoadParametersUsingFlatbuffers 2023-01-11T22:09:56.1314621Z [ OK ] MobileTest.SaveLoadParametersUsingFlatbuffers (0 ms) 2023-01-11T22:09:56.1315407Z [ RUN ] MobileTest.LoadParametersUnexpectedFormatShouldThrow 2023-01-11T22:09:56.1335710Z [ OK ] MobileTest.LoadParametersUnexpectedFormatShouldThrow (2 ms) 2023-01-11T22:09:56.1336524Z [ RUN ] MobileTest.LoadParametersEmptyDataShouldThrow 2023-01-11T22:09:56.1357782Z [ OK ] MobileTest.LoadParametersEmptyDataShouldThrow (2 ms) 2023-01-11T22:09:56.1358188Z [----------] 6 tests from MobileTest (5 ms total) 2023-01-11T22:09:56.1358343Z 2023-01-11T22:09:56.1358498Z [----------] 1 test from MemoryDAGTest 2023-01-11T22:09:56.1358757Z [ RUN ] MemoryDAGTest.Basic 2023-01-11T22:09:56.1359051Z [ OK ] MemoryDAGTest.Basic (0 ms) 2023-01-11T22:09:56.1359424Z [----------] 1 test from MemoryDAGTest (0 ms total) 2023-01-11T22:09:56.1359581Z 2023-01-11T22:09:56.1359735Z [----------] 1 test from InternedStringsTest 2023-01-11T22:09:56.1360037Z [ RUN ] InternedStringsTest.Basic 2023-01-11T22:09:56.1370867Z [ OK ] InternedStringsTest.Basic (1 ms) 2023-01-11T22:09:56.1371316Z [----------] 1 test from InternedStringsTest (1 ms total) 2023-01-11T22:09:56.1371473Z 2023-01-11T22:09:56.1371636Z [----------] 1 test from FromQualStringTest 2023-01-11T22:09:56.1371929Z [ RUN ] FromQualStringTest.Basic 
2023-01-11T22:09:56.1380333Z [ OK ] FromQualStringTest.Basic (0 ms) 2023-01-11T22:09:56.1380857Z [----------] 1 test from FromQualStringTest (0 ms total) 2023-01-11T22:09:56.1381101Z 2023-01-11T22:09:56.1381327Z [----------] 1 test from THNNConvTest 2023-01-11T22:09:56.1382117Z [ RUN ] THNNConvTest.Basic 2023-01-11T22:09:56.1395371Z [ OK ] THNNConvTest.Basic (1 ms) 2023-01-11T22:09:56.1395981Z [----------] 1 test from THNNConvTest (1 ms total) 2023-01-11T22:09:56.1396306Z 2023-01-11T22:09:56.1396712Z [----------] 1 test from ATenNativeBatchNormTest 2023-01-11T22:09:56.1397374Z [ RUN ] ATenNativeBatchNormTest.Basic 2023-01-11T22:09:56.1407460Z [ OK ] ATenNativeBatchNormTest.Basic (1 ms) 2023-01-11T22:09:56.1408112Z [----------] 1 test from ATenNativeBatchNormTest (1 ms total) 2023-01-11T22:09:56.1408408Z 2023-01-11T22:09:56.1408686Z [----------] 2 tests from CustomFusionTest 2023-01-11T22:09:56.1409196Z [ RUN ] CustomFusionTest.Basic 2023-01-11T22:09:56.1409677Z [ OK ] CustomFusionTest.Basic (0 ms) 2023-01-11T22:09:56.1410215Z [ RUN ] CustomFusionTest.NestedBlocks 2023-01-11T22:09:56.1411161Z [ OK ] CustomFusionTest.NestedBlocks (0 ms) 2023-01-11T22:09:56.1411558Z [----------] 2 tests from CustomFusionTest (0 ms total) 2023-01-11T22:09:56.1411728Z 2023-01-11T22:09:56.1411887Z [----------] 1 test from ControlFlowTest 2023-01-11T22:09:56.1412161Z [ RUN ] ControlFlowTest.Basic 2023-01-11T22:09:56.1418841Z [ OK ] ControlFlowTest.Basic (0 ms) 2023-01-11T22:09:56.1419701Z [----------] 1 test from ControlFlowTest (0 ms total) 2023-01-11T22:09:56.1419997Z 2023-01-11T22:09:56.1420186Z [----------] 1 test from ProtoTest 2023-01-11T22:09:56.1420434Z [ RUN ] ProtoTest.Basic 2023-01-11T22:09:56.1420759Z [ OK ] ProtoTest.Basic (0 ms) 2023-01-11T22:09:56.1421230Z [----------] 1 test from ProtoTest (0 ms total) 2023-01-11T22:09:56.1421496Z 2023-01-11T22:09:56.1421773Z [----------] 9 tests from SchemaParserTest 2023-01-11T22:09:56.1422491Z [ RUN ] SchemaParserTest.NestedArrays 2023-01-11T22:09:56.1422917Z [ OK ] SchemaParserTest.NestedArrays (0 ms) 2023-01-11T22:09:56.1423237Z [ RUN ] SchemaParserTest.OutVariant 2023-01-11T22:09:56.1423545Z [ OK ] SchemaParserTest.OutVariant (0 ms) 2023-01-11T22:09:56.1423866Z [ RUN ] SchemaParserTest.NamedReturns 2023-01-11T22:09:56.1424269Z [ OK ] SchemaParserTest.NamedReturns (0 ms) 2023-01-11T22:09:56.1424569Z [ RUN ] SchemaParserTest.Futures 2023-01-11T22:09:56.1424873Z [ OK ] SchemaParserTest.Futures (0 ms) 2023-01-11T22:09:56.1425336Z [ RUN ] SchemaParserTest.AnnotatedAliasSets 2023-01-11T22:09:56.1425687Z [ OK ] SchemaParserTest.AnnotatedAliasSets (0 ms) 2023-01-11T22:09:56.1437623Z [ RUN ] SchemaParserTest.TensorListAnnotatedAliasSets 2023-01-11T22:09:56.1438241Z [ OK ] SchemaParserTest.TensorListAnnotatedAliasSets (0 ms) 2023-01-11T22:09:56.1438686Z [ RUN ] SchemaParserTest.AnnotatedAliasWithoutBeforeSet 2023-01-11T22:09:56.1439215Z [ OK ] SchemaParserTest.AnnotatedAliasWithoutBeforeSet (0 ms) 2023-01-11T22:09:56.1439609Z [ RUN ] SchemaParserTest.BeforeAfterSets 2023-01-11T22:09:56.1439944Z [ OK ] SchemaParserTest.BeforeAfterSets (0 ms) 2023-01-11T22:09:56.1440288Z [ RUN ] SchemaParserTest.BeforeAfterSets2 2023-01-11T22:09:56.1440638Z [ OK ] SchemaParserTest.BeforeAfterSets2 (0 ms) 2023-01-11T22:09:56.1440983Z [----------] 9 tests from SchemaParserTest (0 ms total) 2023-01-11T22:09:56.1441145Z 2023-01-11T22:09:56.1441318Z [----------] 2 tests from TopologicalIndexTest 2023-01-11T22:09:56.1441622Z [ RUN ] TopologicalIndexTest.Basic 2023-01-11T22:09:56.1441936Z [ OK 
] TopologicalIndexTest.Basic (0 ms) 2023-01-11T22:09:56.1442235Z [ RUN ] TopologicalIndexTest.Reindex 2023-01-11T22:09:56.1442557Z [ OK ] TopologicalIndexTest.Reindex (0 ms) 2023-01-11T22:09:56.1443031Z [----------] 2 tests from TopologicalIndexTest (0 ms total) 2023-01-11T22:09:56.1443203Z 2023-01-11T22:09:56.1443355Z [----------] 7 tests from RecordFunctionTest 2023-01-11T22:09:56.1443705Z [ RUN ] RecordFunctionTest.TracedTestInputsOutputs 2023-01-11T22:09:56.1444108Z [ OK ] RecordFunctionTest.TracedTestInputsOutputs (0 ms) 2023-01-11T22:09:56.1444470Z [ RUN ] RecordFunctionTest.SampledCallbacks 2023-01-11T22:09:56.1489635Z [ OK ] RecordFunctionTest.SampledCallbacks (5 ms) 2023-01-11T22:09:56.1490221Z [ RUN ] RecordFunctionTest.RecordFunctionGuard 2023-01-11T22:09:56.1490617Z [ OK ] RecordFunctionTest.RecordFunctionGuard (0 ms) 2023-01-11T22:09:56.1490947Z [ RUN ] RecordFunctionTest.Callbacks 2023-01-11T22:09:56.1492362Z [ OK ] RecordFunctionTest.Callbacks (0 ms) 2023-01-11T22:09:56.1492989Z [ RUN ] RecordFunctionTest.ShouldRun 2023-01-11T22:09:56.1493413Z [ OK ] RecordFunctionTest.ShouldRun (0 ms) 2023-01-11T22:09:56.1493743Z [ RUN ] RecordFunctionTest.Basic 2023-01-11T22:09:56.1494723Z [ OK ] RecordFunctionTest.Basic (0 ms) 2023-01-11T22:09:56.1495235Z [ RUN ] RecordFunctionTest.OperatorNameOverload 2023-01-11T22:09:56.1495906Z [ OK ] RecordFunctionTest.OperatorNameOverload (0 ms) 2023-01-11T22:09:56.1496356Z [----------] 7 tests from RecordFunctionTest (6 ms total) 2023-01-11T22:09:56.1496524Z 2023-01-11T22:09:56.1496711Z [----------] 1 test from ThreadLocalDebugInfoTest 2023-01-11T22:09:56.1497210Z [ RUN ] ThreadLocalDebugInfoTest.Basic 2023-01-11T22:09:56.1497789Z [ OK ] ThreadLocalDebugInfoTest.Basic (0 ms) 2023-01-11T22:09:56.1498283Z [----------] 1 test from ThreadLocalDebugInfoTest (0 ms total) 2023-01-11T22:09:56.1498488Z 2023-01-11T22:09:56.1498786Z [----------] 1 test from TestSymIntArrayRef 2023-01-11T22:09:56.1499352Z [ RUN ] TestSymIntArrayRef.BasicConversion 2023-01-11T22:09:56.1499718Z [ OK ] TestSymIntArrayRef.BasicConversion (0 ms) 2023-01-11T22:09:56.1500251Z [----------] 1 test from TestSymIntArrayRef (0 ms total) 2023-01-11T22:09:56.1500551Z 2023-01-11T22:09:56.1500778Z [----------] 4 tests from TestSymInt 2023-01-11T22:09:56.1501348Z [ RUN ] TestSymInt.NarrowCopyWithSymbolicInt 2023-01-11T22:09:56.1501764Z [ OK ] TestSymInt.NarrowCopyWithSymbolicInt (0 ms) 2023-01-11T22:09:56.1502079Z [ RUN ] TestSymInt.NarrowCopy 2023-01-11T22:09:56.1502525Z [ OK ] TestSymInt.NarrowCopy (0 ms) 2023-01-11T22:09:56.1502828Z [ RUN ] TestSymInt.AddSymbolicInt 2023-01-11T22:09:56.1503135Z [ OK ] TestSymInt.AddSymbolicInt (0 ms) 2023-01-11T22:09:56.1503473Z [ RUN ] TestSymInt.TestSymIntToSymNodeDispatch 2023-01-11T22:09:56.1503844Z [ OK ] TestSymInt.TestSymIntToSymNodeDispatch (0 ms) 2023-01-11T22:09:56.1504195Z [----------] 4 tests from TestSymInt (0 ms total) 2023-01-11T22:09:56.1504345Z 2023-01-11T22:09:56.1504506Z [----------] 1 test from FallbackGraphsTest 2023-01-11T22:09:56.1504789Z [ RUN ] FallbackGraphsTest.Basic 2023-01-11T22:09:56.1507696Z [ OK ] FallbackGraphsTest.Basic (0 ms) 2023-01-11T22:09:56.1508233Z [----------] 1 test from FallbackGraphsTest (0 ms total) 2023-01-11T22:09:56.1508480Z 2023-01-11T22:09:56.1508739Z [----------] 1 test from NoneSchemaMatchTest 2023-01-11T22:09:56.1509208Z [ RUN ] NoneSchemaMatchTest.Basic 2023-01-11T22:09:56.1509769Z [ OK ] NoneSchemaMatchTest.Basic (0 ms) 2023-01-11T22:09:56.1510304Z [----------] 1 test from NoneSchemaMatchTest (0 ms 
total) 2023-01-11T22:09:56.1510471Z 2023-01-11T22:09:56.1510628Z [----------] 1 test from PassManagementTest 2023-01-11T22:09:56.1511040Z [ RUN ] PassManagementTest.Basic 2023-01-11T22:09:56.1511344Z [ OK ] PassManagementTest.Basic (0 ms) 2023-01-11T22:09:56.1511678Z [----------] 1 test from PassManagementTest (0 ms total) 2023-01-11T22:09:56.1511826Z 2023-01-11T22:09:56.1511981Z [----------] 5 tests from LoopPeelerTest 2023-01-11T22:09:56.1512311Z [ RUN ] LoopPeelerTest.NoInductionVariableUse 2023-01-11T22:09:56.1513552Z [ OK ] LoopPeelerTest.NoInductionVariableUse (0 ms) 2023-01-11T22:09:56.1514146Z [ RUN ] LoopPeelerTest.YesInductionVariableUse 2023-01-11T22:09:56.1516718Z [ OK ] LoopPeelerTest.YesInductionVariableUse (0 ms) 2023-01-11T22:09:56.1517378Z [ RUN ] LoopPeelerTest.LoopWithTerminationCondition 2023-01-11T22:09:56.1521280Z [ OK ] LoopPeelerTest.LoopWithTerminationCondition (0 ms) 2023-01-11T22:09:56.1521938Z [ RUN ] LoopPeelerTest.SimpleNestedLoops 2023-01-11T22:09:56.1527165Z [ OK ] LoopPeelerTest.SimpleNestedLoops (0 ms) 2023-01-11T22:09:56.1527729Z [ RUN ] LoopPeelerTest.SimpleNestedLoops2 2023-01-11T22:09:56.1535076Z [ OK ] LoopPeelerTest.SimpleNestedLoops2 (0 ms) 2023-01-11T22:09:56.1535631Z [----------] 5 tests from LoopPeelerTest (2 ms total) 2023-01-11T22:09:56.1535791Z 2023-01-11T22:09:56.1535938Z [----------] 1 test from JitTracing 2023-01-11T22:09:56.1536196Z [ RUN ] JitTracing.Basic 2023-01-11T22:09:56.1805801Z [ OK ] JitTracing.Basic (26 ms) 2023-01-11T22:09:56.1806386Z [----------] 1 test from JitTracing (26 ms total) 2023-01-11T22:09:56.1806545Z 2023-01-11T22:09:56.1806771Z [----------] 1 test from InsertAndEliminateRedundantGuardsTest 2023-01-11T22:09:56.1807176Z [ RUN ] InsertAndEliminateRedundantGuardsTest.Basic 2023-01-11T22:09:56.1811372Z [ OK ] InsertAndEliminateRedundantGuardsTest.Basic (0 ms) 2023-01-11T22:09:56.1812101Z [----------] 1 test from InsertAndEliminateRedundantGuardsTest (0 ms total) 2023-01-11T22:09:56.1812315Z 2023-01-11T22:09:56.1812479Z [----------] 1 test from InsertBailOutsTest 2023-01-11T22:09:56.1812774Z [ RUN ] InsertBailOutsTest.Basic 2023-01-11T22:09:56.1821054Z [ OK ] InsertBailOutsTest.Basic (0 ms) 2023-01-11T22:09:56.1821414Z [----------] 1 test from InsertBailOutsTest (0 ms total) 2023-01-11T22:09:56.1821583Z 2023-01-11T22:09:56.1821740Z [----------] 2 tests from ProfilerTest 2023-01-11T22:09:56.1822004Z [ RUN ] ProfilerTest.Basic 2023-01-11T22:09:56.2065205Z [ OK ] ProfilerTest.Basic (24 ms) 2023-01-11T22:09:56.2065705Z [ RUN ] ProfilerTest.OptionalProfiling 2023-01-11T22:09:56.2066165Z [ OK ] ProfilerTest.OptionalProfiling (0 ms) 2023-01-11T22:09:56.2066508Z [----------] 2 tests from ProfilerTest (24 ms total) 2023-01-11T22:09:56.2066679Z 2023-01-11T22:09:56.2066886Z [----------] 2 tests from CallStackTest 2023-01-11T22:09:56.2067158Z [ RUN ] CallStackTest.Basic 2023-01-11T22:09:56.2071302Z [ OK ] CallStackTest.Basic (0 ms) 2023-01-11T22:09:56.2071665Z [ RUN ] CallStackTest.Caching 2023-01-11T22:09:56.2075077Z [ OK ] CallStackTest.Caching (0 ms) 2023-01-11T22:09:56.2075510Z [----------] 2 tests from CallStackTest (0 ms total) 2023-01-11T22:09:56.2075669Z 2023-01-11T22:09:56.2075829Z [----------] 2 tests from InlinedCallStackTest 2023-01-11T22:09:56.2076160Z [ RUN ] InlinedCallStackTest.BlockAnnotation 2023-01-11T22:09:56.2082221Z [ OK ] InlinedCallStackTest.BlockAnnotation (0 ms) 2023-01-11T22:09:56.2082727Z [ RUN ] InlinedCallStackTest.SelfCallMethods 2023-01-11T22:09:56.2090424Z [ OK ] InlinedCallStackTest.SelfCallMethods 
(0 ms) 2023-01-11T22:09:56.2091310Z [----------] 2 tests from InlinedCallStackTest (1 ms total) 2023-01-11T22:09:56.2091640Z 2023-01-11T22:09:56.2091943Z [----------] 1 test from AutogradSymbolsTest 2023-01-11T22:09:56.2092458Z [ RUN ] AutogradSymbolsTest.Basic 2023-01-11T22:09:56.2092790Z [ OK ] AutogradSymbolsTest.Basic (0 ms) 2023-01-11T22:09:56.2093368Z [----------] 1 test from AutogradSymbolsTest (0 ms total) 2023-01-11T22:09:56.2093672Z 2023-01-11T22:09:56.2093947Z [----------] 1 test from DefaultArgTypeHintingTest 2023-01-11T22:09:56.2094267Z [ RUN ] DefaultArgTypeHintingTest.Basic 2023-01-11T22:09:56.2094612Z [ OK ] DefaultArgTypeHintingTest.Basic (0 ms) 2023-01-11T22:09:56.2094978Z [----------] 1 test from DefaultArgTypeHintingTest (0 ms total) 2023-01-11T22:09:56.2095157Z 2023-01-11T22:09:56.2095290Z [----------] 5 tests from FuturesTest 2023-01-11T22:09:56.2095623Z [ RUN ] FuturesTest.Basic 2023-01-11T22:09:56.2095900Z [ OK ] FuturesTest.Basic (0 ms) 2023-01-11T22:09:56.2096165Z [ RUN ] FuturesTest.Error 2023-01-11T22:09:56.2106131Z [ OK ] FuturesTest.Error (1 ms) 2023-01-11T22:09:56.2106614Z [ RUN ] FuturesTest.Then 2023-01-11T22:09:56.2107112Z [ OK ] FuturesTest.Then (0 ms) 2023-01-11T22:09:56.2107433Z [ RUN ] FuturesTest.CollectAll 2023-01-11T22:09:56.2107726Z [ OK ] FuturesTest.CollectAll (0 ms) 2023-01-11T22:09:56.2108082Z [ RUN ] FuturesTest.CollectAny 2023-01-11T22:09:56.2108493Z [ OK ] FuturesTest.CollectAny (0 ms) 2023-01-11T22:09:56.2108973Z [----------] 5 tests from FuturesTest (1 ms total) 2023-01-11T22:09:56.2109218Z 2023-01-11T22:09:56.2109499Z [----------] 1 test from TLSFutureCallbacksTest 2023-01-11T22:09:56.2109822Z [ RUN ] TLSFutureCallbacksTest.Basic 2023-01-11T22:09:56.2110145Z [ OK ] TLSFutureCallbacksTest.Basic (0 ms) 2023-01-11T22:09:56.2110542Z [----------] 1 test from TLSFutureCallbacksTest (0 ms total) 2023-01-11T22:09:56.2110776Z 2023-01-11T22:09:56.2110978Z [----------] 1 test from ProfilerDisableInCallbackTest 2023-01-11T22:09:56.2111391Z [ RUN ] ProfilerDisableInCallbackTest.Basic 2023-01-11T22:09:56.2112014Z [ OK ] ProfilerDisableInCallbackTest.Basic (0 ms) 2023-01-11T22:09:56.2112525Z [----------] 1 test from ProfilerDisableInCallbackTest (0 ms total) 2023-01-11T22:09:56.2112749Z 2023-01-11T22:09:56.2112979Z [----------] 2 tests from RecordDebugHandles 2023-01-11T22:09:56.2113262Z [ RUN ] RecordDebugHandles.Basic 2023-01-11T22:09:56.2113785Z STAGE:2023-01-11 22:09:56 24563:24563 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T22:09:56.2115637Z STAGE:2023-01-11 22:09:56 24563:24563 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T22:09:56.2116646Z STAGE:2023-01-11 22:09:56 24563:24563 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T22:09:56.2117308Z [ OK ] RecordDebugHandles.Basic (0 ms) 2023-01-11T22:09:56.2117882Z [ RUN ] RecordDebugHandles.ScopedCallbacks 2023-01-11T22:09:56.2118602Z STAGE:2023-01-11 22:09:56 24563:24563 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T22:09:56.2123348Z STAGE:2023-01-11 22:09:56 24563:24563 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T22:09:56.2123935Z STAGE:2023-01-11 22:09:56 24563:24563 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T22:09:56.2124653Z STAGE:2023-01-11 22:09:56 24563:24563 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T22:09:56.2127986Z STAGE:2023-01-11 22:09:56 24563:24563 
ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T22:09:56.2128581Z STAGE:2023-01-11 22:09:56 24563:24563 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T22:09:56.2129158Z STAGE:2023-01-11 22:09:56 24563:24563 ActivityProfilerController.cpp:300] Completed Stage: Warm Up 2023-01-11T22:09:56.2134622Z STAGE:2023-01-11 22:09:56 24563:24563 ActivityProfilerController.cpp:306] Completed Stage: Collection 2023-01-11T22:09:56.2135085Z STAGE:2023-01-11 22:09:56 24563:24563 ActivityProfilerController.cpp:310] Completed Stage: Post Processing 2023-01-11T22:09:56.2135490Z [ OK ] RecordDebugHandles.ScopedCallbacks (1 ms) 2023-01-11T22:09:56.2135853Z [----------] 2 tests from RecordDebugHandles (2 ms total) 2023-01-11T22:09:56.2136006Z 2023-01-11T22:09:56.2136163Z [----------] 1 test from IValueKWargsTest 2023-01-11T22:09:56.2136446Z [ RUN ] IValueKWargsTest.Basic 2023-01-11T22:09:56.2140048Z [ OK ] IValueKWargsTest.Basic (0 ms) 2023-01-11T22:09:56.2140577Z [----------] 1 test from IValueKWargsTest (0 ms total) 2023-01-11T22:09:56.2140972Z 2023-01-11T22:09:56.2141259Z [----------] 1 test from ComputeFlopsTest 2023-01-11T22:09:56.2141787Z [ RUN ] ComputeFlopsTest.Basic 2023-01-11T22:09:56.2142305Z [W util.cpp:501] Warning: Failed to compute flops for op aten::conv2d because both input and weight must be size 4. (function computeFlops) 2023-01-11T22:09:56.2143094Z [W util.cpp:516] Warning: Failed to compute flops for op aten::conv2d because stride must be size 2 and cannot be 0. (function computeFlops) 2023-01-11T22:09:56.2143817Z [W util.cpp:472] Warning: Calculating flops for aten::conv2d requires groups, padding, stride, dilation, input_size, and weight_size in saved arguments. (function computeFlops) 2023-01-11T22:09:56.2144672Z [W util.cpp:545] Warning: Calculating flops for aten::mm requires mat1_size and mat2_size in saved arguments. 
(function computeFlops) 2023-01-11T22:09:56.2145235Z [ OK ] ComputeFlopsTest.Basic (0 ms) 2023-01-11T22:09:56.2145559Z [----------] 1 test from ComputeFlopsTest (0 ms total) 2023-01-11T22:09:56.2145720Z 2023-01-11T22:09:56.2145867Z [----------] 1 test from TestConstant 2023-01-11T22:09:56.2146147Z [ RUN ] TestConstant.TensorGrad 2023-01-11T22:09:56.2146458Z [ OK ] TestConstant.TensorGrad (0 ms) 2023-01-11T22:09:56.2146760Z [----------] 1 test from TestConstant (0 ms total) 2023-01-11T22:09:56.2146907Z 2023-01-11T22:09:56.2147051Z [----------] 1 test from TestMutation 2023-01-11T22:09:56.2147312Z [ RUN ] TestMutation.Basic 2023-01-11T22:09:56.2147573Z [ OK ] TestMutation.Basic (0 ms) 2023-01-11T22:09:56.2147874Z [----------] 1 test from TestMutation (0 ms total) 2023-01-11T22:09:56.2148022Z 2023-01-11T22:09:56.2148225Z [----------] 1 test from TestInplaceToFunctionalActivation 2023-01-11T22:09:56.2148599Z [ RUN ] TestInplaceToFunctionalActivation.Basic 2023-01-11T22:09:56.2148971Z [ OK ] TestInplaceToFunctionalActivation.Basic (0 ms) 2023-01-11T22:09:56.2149381Z [----------] 1 test from TestInplaceToFunctionalActivation (0 ms total) 2023-01-11T22:09:56.2149575Z 2023-01-11T22:09:56.2149740Z [----------] 1 test from TestRegisterShapeOp 2023-01-11T22:09:56.2150029Z [ RUN ] TestRegisterShapeOp.Basic 2023-01-11T22:09:56.3342773Z [ OK ] TestRegisterShapeOp.Basic (119 ms) 2023-01-11T22:09:56.3343267Z [----------] 1 test from TestRegisterShapeOp (119 ms total) 2023-01-11T22:09:56.3343442Z 2023-01-11T22:09:56.3343663Z [----------] 1 test from TestFunctionalToInplaceActivation 2023-01-11T22:09:56.3344026Z [ RUN ] TestFunctionalToInplaceActivation.Basic 2023-01-11T22:09:56.3344410Z [ OK ] TestFunctionalToInplaceActivation.Basic (0 ms) 2023-01-11T22:09:56.3344820Z [----------] 1 test from TestFunctionalToInplaceActivation (0 ms total) 2023-01-11T22:09:56.3345012Z 2023-01-11T22:09:56.3345429Z [----------] 2 tests from TestFunctionExecutor 2023-01-11T22:09:56.3345778Z [ RUN ] TestFunctionExecutor.SimpleExecutorTest 2023-01-11T22:09:56.3346600Z [ OK ] TestFunctionExecutor.SimpleExecutorTest (0 ms) 2023-01-11T22:09:56.3347013Z [ RUN ] TestFunctionExecutor.RunDecompositionTest 2023-01-11T22:09:56.3363288Z [ OK ] TestFunctionExecutor.RunDecompositionTest (1 ms) 2023-01-11T22:09:56.3363708Z [----------] 2 tests from TestFunctionExecutor (2 ms total) 2023-01-11T22:09:56.3363879Z 2023-01-11T22:09:56.3364079Z [----------] 1 test from TestShapeGraphLinting 2023-01-11T22:09:56.3364416Z [ RUN ] TestShapeGraphLinting.Basic 2023-01-11T22:09:56.3369090Z [ OK ] TestShapeGraphLinting.Basic (0 ms) 2023-01-11T22:09:56.3369507Z [----------] 1 test from TestShapeGraphLinting (0 ms total) 2023-01-11T22:09:56.3369687Z 2023-01-11T22:09:56.3369829Z [----------] 1 test from Composed 2023-01-11T22:09:56.3370210Z [ RUN ] Composed.ComposedOp 2023-01-11T22:09:56.6363360Z [ OK ] Composed.ComposedOp (299 ms) 2023-01-11T22:09:56.6363876Z [----------] 1 test from Composed (299 ms total) 2023-01-11T22:09:56.6364040Z 2023-01-11T22:09:56.6364210Z [----------] 1 test from ConstantPropagation 2023-01-11T22:09:56.6364573Z [ RUN ] ConstantPropagation.CustomClassesCanBePropagated 2023-01-11T22:09:56.6550370Z [ OK ] ConstantPropagation.CustomClassesCanBePropagated (18 ms) 2023-01-11T22:09:56.6551144Z [----------] 1 test from ConstantPropagation (18 ms total) 2023-01-11T22:09:56.6551443Z 2023-01-11T22:09:56.6551734Z [----------] 19 tests from MobileTypeParserTest 2023-01-11T22:09:56.6552268Z [ RUN ] MobileTypeParserTest.Int 
2023-01-11T22:09:56.6552803Z [ OK ] MobileTypeParserTest.Int (0 ms) 2023-01-11T22:09:56.6553476Z [ RUN ] MobileTypeParserTest.NestedContainersAnnotationStr 2023-01-11T22:09:56.6554337Z [ OK ] MobileTypeParserTest.NestedContainersAnnotationStr (0 ms) 2023-01-11T22:09:56.6555046Z [ RUN ] MobileTypeParserTest.TorchBindClass 2023-01-11T22:09:56.6555684Z [ OK ] MobileTypeParserTest.TorchBindClass (0 ms) 2023-01-11T22:09:56.6556338Z [ RUN ] MobileTypeParserTest.ListOfTorchBindClass 2023-01-11T22:09:56.6557042Z [ OK ] MobileTypeParserTest.ListOfTorchBindClass (0 ms) 2023-01-11T22:09:56.6557870Z [ RUN ] MobileTypeParserTest.NestedContainersAnnotationStrWithSpaces 2023-01-11T22:09:56.6558821Z [ OK ] MobileTypeParserTest.NestedContainersAnnotationStrWithSpaces (0 ms) 2023-01-11T22:09:56.6559640Z [ RUN ] MobileTypeParserTest.NamedTuple 2023-01-11T22:09:56.6560244Z [ OK ] MobileTypeParserTest.NamedTuple (0 ms) 2023-01-11T22:09:56.6560938Z [ RUN ] MobileTypeParserTest.DictNestedNamedTupleTypeList 2023-01-11T22:09:56.6561748Z [ OK ] MobileTypeParserTest.DictNestedNamedTupleTypeList (0 ms) 2023-01-11T22:09:56.6562597Z [ RUN ] MobileTypeParserTest.NamedTupleNestedNamedTupleTypeList 2023-01-11T22:09:56.6563489Z [ OK ] MobileTypeParserTest.NamedTupleNestedNamedTupleTypeList (0 ms) 2023-01-11T22:09:56.6564315Z [ RUN ] MobileTypeParserTest.NamedTupleNestedNamedTuple 2023-01-11T22:09:56.6565089Z [ OK ] MobileTypeParserTest.NamedTupleNestedNamedTuple (0 ms) 2023-01-11T22:09:56.6565737Z [ RUN ] MobileTypeParserTest.Empty 2023-01-11T22:09:56.6588283Z [ OK ] MobileTypeParserTest.Empty (3 ms) 2023-01-11T22:09:56.6588874Z [ RUN ] MobileTypeParserTest.TypoRaises 2023-01-11T22:09:56.6626437Z [ OK ] MobileTypeParserTest.TypoRaises (3 ms) 2023-01-11T22:09:56.6627128Z [ RUN ] MobileTypeParserTest.MismatchBracketRaises 2023-01-11T22:09:56.6673160Z [ OK ] MobileTypeParserTest.MismatchBracketRaises (4 ms) 2023-01-11T22:09:56.6673753Z [ RUN ] MobileTypeParserTest.MismatchBracketRaises2 2023-01-11T22:09:56.6710832Z [ OK ] MobileTypeParserTest.MismatchBracketRaises2 (3 ms) 2023-01-11T22:09:56.6711255Z [ RUN ] MobileTypeParserTest.DictWithoutValueRaises 2023-01-11T22:09:56.6743028Z [ OK ] MobileTypeParserTest.DictWithoutValueRaises (3 ms) 2023-01-11T22:09:56.6743438Z [ RUN ] MobileTypeParserTest.ListArgCountMismatchRaises 2023-01-11T22:09:56.6780444Z [ OK ] MobileTypeParserTest.ListArgCountMismatchRaises (3 ms) 2023-01-11T22:09:56.6780896Z [ RUN ] MobileTypeParserTest.DictArgCountMismatchRaises 2023-01-11T22:09:56.6812797Z [ OK ] MobileTypeParserTest.DictArgCountMismatchRaises (3 ms) 2023-01-11T22:09:56.6813252Z [ RUN ] MobileTypeParserTest.ValidTypeWithExtraStuffRaises 2023-01-11T22:09:56.6834179Z [ OK ] MobileTypeParserTest.ValidTypeWithExtraStuffRaises (2 ms) 2023-01-11T22:09:56.6834662Z [ RUN ] MobileTypeParserTest.NonIdentifierRaises 2023-01-11T22:09:56.6855202Z [ OK ] MobileTypeParserTest.NonIdentifierRaises (2 ms) 2023-01-11T22:09:56.6855674Z [ RUN ] MobileTypeParserTest.DictNestedNamedTupleTypeListRaises 2023-01-11T22:09:56.6898442Z [ OK ] MobileTypeParserTest.DictNestedNamedTupleTypeListRaises (4 ms) 2023-01-11T22:09:56.6898897Z [----------] 19 tests from MobileTypeParserTest (34 ms total) 2023-01-11T22:09:56.6899075Z 2023-01-11T22:09:56.6899217Z [----------] 13 tests from ModuleAPITest 2023-01-11T22:09:56.6899575Z [ RUN ] ModuleAPITest.MethodRunAsync 2023-01-11T22:09:56.6925380Z [ OK ] ModuleAPITest.MethodRunAsync (2 ms) 2023-01-11T22:09:56.6926019Z [ RUN ] ModuleAPITest.Clone 2023-01-11T22:09:56.6926622Z [ OK ] 
ModuleAPITest.Clone (0 ms) 2023-01-11T22:09:56.6927460Z [ RUN ] ModuleAPITest.CloneWithModuleInterface 2023-01-11T22:09:56.6935272Z [ OK ] ModuleAPITest.CloneWithModuleInterface (0 ms) 2023-01-11T22:09:56.6935795Z [ RUN ] ModuleAPITest.Copy 2023-01-11T22:09:56.6936298Z [ OK ] ModuleAPITest.Copy (0 ms) 2023-01-11T22:09:56.6936741Z [ RUN ] ModuleAPITest.DeepCopy 2023-01-11T22:09:56.6937223Z [ OK ] ModuleAPITest.DeepCopy (0 ms) 2023-01-11T22:09:56.6937682Z [ RUN ] ModuleAPITest.DeepCopyString 2023-01-11T22:09:56.6938187Z [ OK ] ModuleAPITest.DeepCopyString (0 ms) 2023-01-11T22:09:56.6938678Z [ RUN ] ModuleAPITest.DeepCopyEnum 2023-01-11T22:09:56.6939189Z [ OK ] ModuleAPITest.DeepCopyEnum (0 ms) 2023-01-11T22:09:56.6939755Z [ RUN ] ModuleAPITest.DeepCopyPreservesAliasing 2023-01-11T22:09:56.6940420Z [ OK ] ModuleAPITest.DeepCopyPreservesAliasing (0 ms) 2023-01-11T22:09:56.6940983Z [ RUN ] ModuleAPITest.Constants 2023-01-11T22:09:56.6941499Z [ OK ] ModuleAPITest.Constants (0 ms) 2023-01-11T22:09:56.6942034Z [ RUN ] ModuleAPITest.Parameters 2023-01-11T22:09:56.6942778Z [ OK ] ModuleAPITest.Parameters (0 ms) 2023-01-11T22:09:56.6943225Z [ RUN ] ModuleAPITest.Define 2023-01-11T22:09:56.6945499Z [ OK ] ModuleAPITest.Define (0 ms) 2023-01-11T22:09:56.6945819Z [ RUN ] ModuleAPITest.Freezing 2023-01-11T22:09:56.6974435Z [ OK ] ModuleAPITest.Freezing (2 ms) 2023-01-11T22:09:56.6975033Z [ RUN ] ModuleAPITest.OfiFreezesTraining 2023-01-11T22:09:56.7002627Z [ OK ] ModuleAPITest.OfiFreezesTraining (2 ms) 2023-01-11T22:09:56.7003233Z [----------] 13 tests from ModuleAPITest (10 ms total) 2023-01-11T22:09:56.7003503Z 2023-01-11T22:09:56.7003727Z [----------] 6 tests from PeepholeOptimizeTest 2023-01-11T22:09:56.7004259Z [ RUN ] PeepholeOptimizeTest.IsAndIsNot 2023-01-11T22:09:56.7004970Z [ OK ] PeepholeOptimizeTest.IsAndIsNot (0 ms) 2023-01-11T22:09:56.7005491Z [ RUN ] PeepholeOptimizeTest.IsAndIsNot2 2023-01-11T22:09:56.7005985Z [ OK ] PeepholeOptimizeTest.IsAndIsNot2 (0 ms) 2023-01-11T22:09:56.7006615Z [ RUN ] PeepholeOptimizeTest.IsAndIsNot3 2023-01-11T22:09:56.7007172Z [ OK ] PeepholeOptimizeTest.IsAndIsNot3 (0 ms) 2023-01-11T22:09:56.7007810Z [ RUN ] PeepholeOptimizeTest.UnwrapOptional 2023-01-11T22:09:56.7008274Z [ OK ] PeepholeOptimizeTest.UnwrapOptional (0 ms) 2023-01-11T22:09:56.7008638Z [ RUN ] PeepholeOptimizeTest.UnwrapOptional2 2023-01-11T22:09:56.7008994Z [ OK ] PeepholeOptimizeTest.UnwrapOptional2 (0 ms) 2023-01-11T22:09:56.7009341Z [ RUN ] PeepholeOptimizeTest.AddMMFusion 2023-01-11T22:09:56.7009682Z [ OK ] PeepholeOptimizeTest.AddMMFusion (0 ms) 2023-01-11T22:09:56.7010138Z [----------] 6 tests from PeepholeOptimizeTest (0 ms total) 2023-01-11T22:09:56.7010310Z 2023-01-11T22:09:56.7010473Z [----------] 5 tests from QualifiedNameTest 2023-01-11T22:09:56.7010802Z [ RUN ] QualifiedNameTest.PrefixConstruction 2023-01-11T22:09:56.7011166Z [ OK ] QualifiedNameTest.PrefixConstruction (0 ms) 2023-01-11T22:09:56.7011508Z [ RUN ] QualifiedNameTest.DottedConstruction 2023-01-11T22:09:56.7011866Z [ OK ] QualifiedNameTest.DottedConstruction (0 ms) 2023-01-11T22:09:56.7012208Z [ RUN ] QualifiedNameTest.BadInputRaises 2023-01-11T22:09:56.7054669Z [ OK ] QualifiedNameTest.BadInputRaises (4 ms) 2023-01-11T22:09:56.7055123Z [ RUN ] QualifiedNameTest.Equality 2023-01-11T22:09:56.7055443Z [ OK ] QualifiedNameTest.Equality (0 ms) 2023-01-11T22:09:56.7055753Z [ RUN ] QualifiedNameTest.IsPrefixOf 2023-01-11T22:09:56.7056070Z [ OK ] QualifiedNameTest.IsPrefixOf (0 ms) 2023-01-11T22:09:56.7056544Z [----------] 5 
tests from QualifiedNameTest (4 ms total) 2023-01-11T22:09:56.7056708Z 2023-01-11T22:09:56.7056900Z [----------] 6 tests from SerializationTest 2023-01-11T22:09:56.7057336Z [ RUN ] SerializationTest.ExtraFilesHookPreference 2023-01-11T22:09:56.7057816Z [W export_module.cpp:587] Warning: An extra files hook attempted to write metadata.json but this is already written in extra files and so will be skipped. This warning will only appear once per process. (function operator()) 2023-01-11T22:09:56.7062566Z [ OK ] SerializationTest.ExtraFilesHookPreference (0 ms) 2023-01-11T22:09:56.7062994Z [ RUN ] SerializationTest.ExtraFileHooksNoSecret 2023-01-11T22:09:56.7063936Z [ OK ] SerializationTest.ExtraFileHooksNoSecret (0 ms) 2023-01-11T22:09:56.7064436Z [ RUN ] SerializationTest.ExtraFileHooksWithSecret 2023-01-11T22:09:56.7065039Z [ OK ] SerializationTest.ExtraFileHooksWithSecret (0 ms) 2023-01-11T22:09:56.7065387Z [ RUN ] SerializationTest.TypeTags 2023-01-11T22:09:56.7067536Z [ OK ] SerializationTest.TypeTags (0 ms) 2023-01-11T22:09:56.7067876Z [ RUN ] SerializationTest.ParentDirNotExist 2023-01-11T22:09:56.7113480Z [ OK ] SerializationTest.ParentDirNotExist (4 ms) 2023-01-11T22:09:56.7114028Z [ RUN ] SerializationTest.CalculateNecessaryArgsTest 2023-01-11T22:09:56.7114462Z [ OK ] SerializationTest.CalculateNecessaryArgsTest (0 ms) 2023-01-11T22:09:56.7114849Z [----------] 6 tests from SerializationTest (5 ms total) 2023-01-11T22:09:56.7115012Z 2023-01-11T22:09:56.7115181Z [----------] 3 tests from TestSourceRoundTrip 2023-01-11T22:09:56.7115508Z [ RUN ] TestSourceRoundTrip.UpsampleNearest2d 2023-01-11T22:09:56.7128579Z [ OK ] TestSourceRoundTrip.UpsampleNearest2d (1 ms) 2023-01-11T22:09:56.7129139Z [ RUN ] TestSourceRoundTrip.CheckAttrAccess 2023-01-11T22:09:56.7129662Z [ OK ] TestSourceRoundTrip.CheckAttrAccess (0 ms) 2023-01-11T22:09:56.7130239Z [ RUN ] TestSourceRoundTrip.MethodInvocation 2023-01-11T22:09:56.7178376Z [ OK ] TestSourceRoundTrip.MethodInvocation (4 ms) 2023-01-11T22:09:56.7179064Z [----------] 3 tests from TestSourceRoundTrip (6 ms total) 2023-01-11T22:09:56.7179353Z 2023-01-11T22:09:56.7179492Z [----------] 1 test from TestSaveLoad 2023-01-11T22:09:56.7179801Z [ RUN ] TestSaveLoad.LoadWithoutDebugInfo 2023-01-11T22:09:56.7199588Z [ OK ] TestSaveLoad.LoadWithoutDebugInfo (2 ms) 2023-01-11T22:09:56.7200220Z [----------] 1 test from TestSaveLoad (2 ms total) 2023-01-11T22:09:56.7200443Z 2023-01-11T22:09:56.7200625Z [----------] 2 tests from FunctionSchemaIsAliasingTest 2023-01-11T22:09:56.7201072Z [ RUN ] FunctionSchemaIsAliasingTest.Basic 2023-01-11T22:09:56.7201427Z [ OK ] FunctionSchemaIsAliasingTest.Basic (0 ms) 2023-01-11T22:09:56.7201795Z [ RUN ] FunctionSchemaIsAliasingTest.InvalidArgument 2023-01-11T22:09:56.7222660Z [ OK ] FunctionSchemaIsAliasingTest.InvalidArgument (2 ms) 2023-01-11T22:09:56.7223314Z [----------] 2 tests from FunctionSchemaIsAliasingTest (2 ms total) 2023-01-11T22:09:56.7223511Z 2023-01-11T22:09:56.7223704Z [----------] 2 tests from FunctionSchemaIsMutableTest 2023-01-11T22:09:56.7224033Z [ RUN ] FunctionSchemaIsMutableTest.Basic 2023-01-11T22:09:56.7224380Z [ OK ] FunctionSchemaIsMutableTest.Basic (0 ms) 2023-01-11T22:09:56.7224753Z [ RUN ] FunctionSchemaIsMutableTest.InvalidArgument 2023-01-11T22:09:56.7254970Z [ OK ] FunctionSchemaIsMutableTest.InvalidArgument (3 ms) 2023-01-11T22:09:56.7255679Z [----------] 2 tests from FunctionSchemaIsMutableTest (3 ms total) 2023-01-11T22:09:56.7255870Z 2023-01-11T22:09:56.7256053Z [----------] 5 tests from 
SchemaInfoIsMutableTest 2023-01-11T22:09:56.7256477Z [ RUN ] SchemaInfoIsMutableTest.Basic 2023-01-11T22:09:56.7257409Z [ OK ] SchemaInfoIsMutableTest.Basic (0 ms) 2023-01-11T22:09:56.7258058Z [ RUN ] SchemaInfoIsMutableTest.InvalidArgument 2023-01-11T22:09:56.7289673Z [ OK ] SchemaInfoIsMutableTest.InvalidArgument (3 ms) 2023-01-11T22:09:56.7290533Z [ RUN ] SchemaInfoIsMutableTest.AliasingInputs 2023-01-11T22:09:56.7291168Z [ OK ] SchemaInfoIsMutableTest.AliasingInputs (0 ms) 2023-01-11T22:09:56.7291840Z [ RUN ] SchemaInfoIsMutableTest.InstanceNorm 2023-01-11T22:09:56.7292464Z [ OK ] SchemaInfoIsMutableTest.InstanceNorm (0 ms) 2023-01-11T22:09:56.7293099Z [ RUN ] SchemaInfoIsMutableTest.BatchNorm 2023-01-11T22:09:56.7293723Z [ OK ] SchemaInfoIsMutableTest.BatchNorm (0 ms) 2023-01-11T22:09:56.7294344Z [----------] 5 tests from SchemaInfoIsMutableTest (3 ms total) 2023-01-11T22:09:56.7294529Z 2023-01-11T22:09:56.7294738Z [----------] 2 tests from SchemaInfoIsNonDeterministicTest 2023-01-11T22:09:56.7295092Z [ RUN ] SchemaInfoIsNonDeterministicTest.Basic 2023-01-11T22:09:56.7295467Z [ OK ] SchemaInfoIsNonDeterministicTest.Basic (0 ms) 2023-01-11T22:09:56.7295845Z [ RUN ] SchemaInfoIsNonDeterministicTest.Dropout 2023-01-11T22:09:56.7296234Z [ OK ] SchemaInfoIsNonDeterministicTest.Dropout (0 ms) 2023-01-11T22:09:56.7296631Z [----------] 2 tests from SchemaInfoIsNonDeterministicTest (0 ms total) 2023-01-11T22:09:56.7296822Z 2023-01-11T22:09:56.7297011Z [----------] 3 tests from FunctionSchemaMayAliasTest 2023-01-11T22:09:56.7297342Z [ RUN ] FunctionSchemaMayAliasTest.Basic 2023-01-11T22:09:56.7297675Z [ OK ] FunctionSchemaMayAliasTest.Basic (0 ms) 2023-01-11T22:09:56.7298153Z [ RUN ] FunctionSchemaMayAliasTest.InvalidArgument 2023-01-11T22:09:56.7320961Z [ OK ] FunctionSchemaMayAliasTest.InvalidArgument (2 ms) 2023-01-11T22:09:56.7321623Z [ RUN ] FunctionSchemaMayAliasTest.Wildcard 2023-01-11T22:09:56.7322155Z [ OK ] FunctionSchemaMayAliasTest.Wildcard (0 ms) 2023-01-11T22:09:56.7322616Z [----------] 3 tests from FunctionSchemaMayAliasTest (2 ms total) 2023-01-11T22:09:56.7322870Z 2023-01-11T22:09:56.7323049Z [----------] 7 tests from SchemaInfoMayAliasTest 2023-01-11T22:09:56.7323555Z [ RUN ] SchemaInfoMayAliasTest.AliasingInputs 2023-01-11T22:09:56.7324162Z [ OK ] SchemaInfoMayAliasTest.AliasingInputs (0 ms) 2023-01-11T22:09:56.7324774Z [ RUN ] SchemaInfoMayAliasTest.AliasingOutputs 2023-01-11T22:09:56.7325403Z [ OK ] SchemaInfoMayAliasTest.AliasingOutputs (0 ms) 2023-01-11T22:09:56.7326118Z [ RUN ] SchemaInfoMayAliasTest.AliasingInputOutput 2023-01-11T22:09:56.7326742Z [ OK ] SchemaInfoMayAliasTest.AliasingInputOutput (0 ms) 2023-01-11T22:09:56.7327376Z [ RUN ] SchemaInfoMayAliasTest.MultipleWildcardInputs 2023-01-11T22:09:56.7328066Z [ OK ] SchemaInfoMayAliasTest.MultipleWildcardInputs (0 ms) 2023-01-11T22:09:56.7328686Z [ RUN ] SchemaInfoMayAliasTest.MultipleNonWildcardInputs 2023-01-11T22:09:56.7329360Z [W schema_info.cpp:333] Warning: alias::a appears twice in same argument list which will make aliasing checks more conservative. (function operator()) 2023-01-11T22:09:56.7330107Z [ OK ] SchemaInfoMayAliasTest.MultipleNonWildcardInputs (0 ms) 2023-01-11T22:09:56.7330675Z [ RUN ] SchemaInfoMayAliasTest.MultipleNonWildcardOutputs 2023-01-11T22:09:56.7331171Z [W schema_info.cpp:333] Warning: alias::a appears twice in same argument list which will make aliasing checks more conservative. 
(function operator()) 2023-01-11T22:09:56.7331665Z [ OK ] SchemaInfoMayAliasTest.MultipleNonWildcardOutputs (0 ms) 2023-01-11T22:09:56.7332073Z [ RUN ] SchemaInfoMayAliasTest.MismatchingTypes 2023-01-11T22:09:56.7332444Z [ OK ] SchemaInfoMayAliasTest.MismatchingTypes (0 ms) 2023-01-11T22:09:56.7332832Z [----------] 7 tests from SchemaInfoMayAliasTest (0 ms total) 2023-01-11T22:09:56.7333006Z 2023-01-11T22:09:56.7333213Z [----------] 3 tests from FunctionSchemaMayContainAliasTest 2023-01-11T22:09:56.7333569Z [ RUN ] FunctionSchemaMayContainAliasTest.Basic 2023-01-11T22:09:56.7333972Z [ OK ] FunctionSchemaMayContainAliasTest.Basic (0 ms) 2023-01-11T22:09:56.7334507Z [ RUN ] FunctionSchemaMayContainAliasTest.Wildcard 2023-01-11T22:09:56.7335041Z [ OK ] FunctionSchemaMayContainAliasTest.Wildcard (0 ms) 2023-01-11T22:09:56.7335608Z [ RUN ] FunctionSchemaMayContainAliasTest.InputAndOutputContainers 2023-01-11T22:09:56.7336143Z [ OK ] FunctionSchemaMayContainAliasTest.InputAndOutputContainers (0 ms) 2023-01-11T22:09:56.7336616Z [----------] 3 tests from FunctionSchemaMayContainAliasTest (0 ms total) 2023-01-11T22:09:56.7336814Z 2023-01-11T22:09:56.7337010Z [----------] 6 tests from SchemaInfoMayContainAliasTest 2023-01-11T22:09:56.7337407Z [ RUN ] SchemaInfoMayContainAliasTest.ContainAliasInputsEqual 2023-01-11T22:09:56.7337885Z [ OK ] SchemaInfoMayContainAliasTest.ContainAliasInputsEqual (0 ms) 2023-01-11T22:09:56.7338485Z [ RUN ] SchemaInfoMayContainAliasTest.ContainAliasInputsContained 2023-01-11T22:09:56.7339080Z [ OK ] SchemaInfoMayContainAliasTest.ContainAliasInputsContained (0 ms) 2023-01-11T22:09:56.7339788Z [ RUN ] SchemaInfoMayContainAliasTest.ContainAliasOutputs 2023-01-11T22:09:56.7340425Z [ OK ] SchemaInfoMayContainAliasTest.ContainAliasOutputs (0 ms) 2023-01-11T22:09:56.7340984Z [ RUN ] SchemaInfoMayContainAliasTest.ContainAliasInputOutput 2023-01-11T22:09:56.7341445Z [ OK ] SchemaInfoMayContainAliasTest.ContainAliasInputOutput (0 ms) 2023-01-11T22:09:56.7341916Z [ RUN ] SchemaInfoMayContainAliasTest.InputAndOutputContainers 2023-01-11T22:09:56.7342566Z [ OK ] SchemaInfoMayContainAliasTest.InputAndOutputContainers (0 ms) 2023-01-11T22:09:56.7343006Z [ RUN ] SchemaInfoMayContainAliasTest.Wildcard 2023-01-11T22:09:56.7343381Z [ OK ] SchemaInfoMayContainAliasTest.Wildcard (0 ms) 2023-01-11T22:09:56.7343783Z [----------] 6 tests from SchemaInfoMayContainAliasTest (0 ms total) 2023-01-11T22:09:56.7343971Z 2023-01-11T22:09:56.7344133Z [----------] 2 tests from SchemaMatchingTest 2023-01-11T22:09:56.7344421Z [ RUN ] SchemaMatchingTest.VarType 2023-01-11T22:09:56.7344808Z [ OK ] SchemaMatchingTest.VarType (0 ms) 2023-01-11T22:09:56.7345126Z [ RUN ] SchemaMatchingTest.VarType2 2023-01-11T22:09:56.7345430Z [ OK ] SchemaMatchingTest.VarType2 (0 ms) 2023-01-11T22:09:56.7345770Z [----------] 2 tests from SchemaMatchingTest (0 ms total) 2023-01-11T22:09:56.7345936Z 2023-01-11T22:09:56.7346086Z [----------] 6 tests from StackOptTest 2023-01-11T22:09:56.7346370Z [ RUN ] StackOptTest.UseVariadicStack 2023-01-11T22:09:56.7449957Z [ OK ] StackOptTest.UseVariadicStack (10 ms) 2023-01-11T22:09:56.7450343Z [ RUN ] StackOptTest.UseVariadicStackReplaceMultiple 2023-01-11T22:09:56.7504843Z [ OK ] StackOptTest.UseVariadicStackReplaceMultiple (5 ms) 2023-01-11T22:09:56.7505273Z [ RUN ] StackOptTest.UseVariadicStackWithMultipleListUses 2023-01-11T22:09:56.7531038Z [ OK ] StackOptTest.UseVariadicStackWithMultipleListUses (2 ms) 2023-01-11T22:09:56.7531516Z [ RUN ] 
StackOptTest.UseVariadicStackWithListMutationAfterCat 2023-01-11T22:09:56.7566834Z [ OK ] StackOptTest.UseVariadicStackWithListMutationAfterCat (3 ms) 2023-01-11T22:09:56.7567307Z [ RUN ] StackOptTest.UseVariadicStackWithListMutationBeforeCat 2023-01-11T22:09:56.7619344Z [ OK ] StackOptTest.UseVariadicStackWithListMutationBeforeCat (5 ms) 2023-01-11T22:09:56.7619831Z [ RUN ] StackOptTest.UseVariadicStackWithMultipleListMutations 2023-01-11T22:09:56.7695469Z [ OK ] StackOptTest.UseVariadicStackWithMultipleListMutations (7 ms) 2023-01-11T22:09:56.7696128Z [----------] 6 tests from StackOptTest (35 ms total) 2023-01-11T22:09:56.7696299Z 2023-01-11T22:09:56.7696514Z [----------] 16 tests from SubgraphMatcherTest 2023-01-11T22:09:56.7696845Z [ RUN ] SubgraphMatcherTest.Trivial1 2023-01-11T22:09:56.7697344Z [ OK ] SubgraphMatcherTest.Trivial1 (0 ms) 2023-01-11T22:09:56.7697822Z [ RUN ] SubgraphMatcherTest.Trivial2 2023-01-11T22:09:56.7698216Z [ OK ] SubgraphMatcherTest.Trivial2 (0 ms) 2023-01-11T22:09:56.7698722Z [ RUN ] SubgraphMatcherTest.Trivial3 2023-01-11T22:09:56.7699181Z [ OK ] SubgraphMatcherTest.Trivial3 (0 ms) 2023-01-11T22:09:56.7699575Z [ RUN ] SubgraphMatcherTest.Trivial4 2023-01-11T22:09:56.7700026Z [ OK ] SubgraphMatcherTest.Trivial4 (0 ms) 2023-01-11T22:09:56.7700404Z [ RUN ] SubgraphMatcherTest.Linear1 2023-01-11T22:09:56.7700883Z [ OK ] SubgraphMatcherTest.Linear1 (0 ms) 2023-01-11T22:09:56.7701317Z [ RUN ] SubgraphMatcherTest.Linear2 2023-01-11T22:09:56.7701740Z [ OK ] SubgraphMatcherTest.Linear2 (0 ms) 2023-01-11T22:09:56.7702056Z [ RUN ] SubgraphMatcherTest.Diamond1 2023-01-11T22:09:56.7702535Z [ OK ] SubgraphMatcherTest.Diamond1 (0 ms) 2023-01-11T22:09:56.7703187Z [ RUN ] SubgraphMatcherTest.Diamond2 2023-01-11T22:09:56.7703561Z [ OK ] SubgraphMatcherTest.Diamond2 (0 ms) 2023-01-11T22:09:56.7703874Z [ RUN ] SubgraphMatcherTest.XPattern 2023-01-11T22:09:56.7704251Z [ OK ] SubgraphMatcherTest.XPattern (0 ms) 2023-01-11T22:09:56.7704576Z [ RUN ] SubgraphMatcherTest.MultipleMatches 2023-01-11T22:09:56.7704933Z [ OK ] SubgraphMatcherTest.MultipleMatches (0 ms) 2023-01-11T22:09:56.7705301Z [ RUN ] SubgraphMatcherTest.OverlappingMatches 2023-01-11T22:09:56.7705672Z [ OK ] SubgraphMatcherTest.OverlappingMatches (0 ms) 2023-01-11T22:09:56.7706050Z [ RUN ] SubgraphMatcherTest.MatchInBasicBlocks1 2023-01-11T22:09:56.7706442Z [ OK ] SubgraphMatcherTest.MatchInBasicBlocks1 (0 ms) 2023-01-11T22:09:56.7706823Z [ RUN ] SubgraphMatcherTest.MatchInBasicBlocks2 2023-01-11T22:09:56.7707280Z [ OK ] SubgraphMatcherTest.MatchInBasicBlocks2 (0 ms) 2023-01-11T22:09:56.7707659Z [ RUN ] SubgraphMatcherTest.MatchesAttributes 2023-01-11T22:09:56.7708029Z [ OK ] SubgraphMatcherTest.MatchesAttributes (0 ms) 2023-01-11T22:09:56.7708365Z [ RUN ] SubgraphMatcherTest.BadPattern 2023-01-11T22:09:56.7745542Z [ OK ] SubgraphMatcherTest.BadPattern (4 ms) 2023-01-11T22:09:56.7745979Z [ RUN ] SubgraphMatcherTest.MultiOutput 2023-01-11T22:09:56.7746617Z [ OK ] SubgraphMatcherTest.MultiOutput (0 ms) 2023-01-11T22:09:56.7747053Z [----------] 16 tests from SubgraphMatcherTest (5 ms total) 2023-01-11T22:09:56.7747230Z 2023-01-11T22:09:56.7747404Z [----------] 4 tests from SubgraphRewriterTest 2023-01-11T22:09:56.7747724Z [ RUN ] SubgraphRewriterTest.FilterMatch 2023-01-11T22:09:56.7749164Z [ OK ] SubgraphRewriterTest.FilterMatch (0 ms) 2023-01-11T22:09:56.7749809Z [ RUN ] SubgraphRewriterTest.FilterNoMatch 2023-01-11T22:09:56.7750223Z [ OK ] SubgraphRewriterTest.FilterNoMatch (0 ms) 2023-01-11T22:09:56.7750569Z [ RUN ] 
SubgraphRewriterTest.MultiOutput 2023-01-11T22:09:56.7753564Z [ OK ] SubgraphRewriterTest.MultiOutput (0 ms) 2023-01-11T22:09:56.7754167Z [ RUN ] SubgraphRewriterTest.OutputType 2023-01-11T22:09:56.7754897Z [ OK ] SubgraphRewriterTest.OutputType (0 ms) 2023-01-11T22:09:56.7755568Z [----------] 4 tests from SubgraphRewriterTest (0 ms total) 2023-01-11T22:09:56.7755741Z 2023-01-11T22:09:56.7755904Z [----------] 3 tests from SubgraphUtilsTest 2023-01-11T22:09:56.7756194Z [ RUN ] SubgraphUtilsTest.Basic 2023-01-11T22:09:56.7758623Z [ OK ] SubgraphUtilsTest.Basic (0 ms) 2023-01-11T22:09:56.7759256Z [ RUN ] SubgraphUtilsTest.MergeSubgraphs 2023-01-11T22:09:56.7761340Z [ OK ] SubgraphUtilsTest.MergeSubgraphs (0 ms) 2023-01-11T22:09:56.7761905Z [ RUN ] SubgraphUtilsTest.GraphName 2023-01-11T22:09:56.7762494Z [ OK ] SubgraphUtilsTest.GraphName (0 ms) 2023-01-11T22:09:56.7763078Z [----------] 3 tests from SubgraphUtilsTest (0 ms total) 2023-01-11T22:09:56.7763241Z 2023-01-11T22:09:56.7763436Z [----------] 8 tests from UnionTypeTest 2023-01-11T22:09:56.7764017Z [ RUN ] UnionTypeTest.UnionOperatorEquals 2023-01-11T22:09:56.7764491Z [ OK ] UnionTypeTest.UnionOperatorEquals (0 ms) 2023-01-11T22:09:56.7764942Z [ RUN ] UnionTypeTest.UnionCreate_OptionalT1AndOptionalT2 2023-01-11T22:09:56.7765340Z [ OK ] UnionTypeTest.UnionCreate_OptionalT1AndOptionalT2 (0 ms) 2023-01-11T22:09:56.7765707Z [ RUN ] UnionTypeTest.UnionCreate_OptionalTAndT 2023-01-11T22:09:56.7766057Z [ OK ] UnionTypeTest.UnionCreate_OptionalTAndT (0 ms) 2023-01-11T22:09:56.7766450Z [ RUN ] UnionTypeTest.UnionCreate_TupleWithSubtypingRelationship 2023-01-11T22:09:56.7766999Z [ OK ] UnionTypeTest.UnionCreate_TupleWithSubtypingRelationship (0 ms) 2023-01-11T22:09:56.7767371Z [ RUN ] UnionTypeTest.UnionCreate_ContainerTAndT 2023-01-11T22:09:56.7767725Z [ OK ] UnionTypeTest.UnionCreate_ContainerTAndT (0 ms) 2023-01-11T22:09:56.7768129Z [ RUN ] UnionTypeTest.UnionCreate_OptionalContainerTAndContainerTAndT 2023-01-11T22:09:56.7768582Z [ OK ] UnionTypeTest.UnionCreate_OptionalContainerTAndContainerTAndT (0 ms) 2023-01-11T22:09:56.7768951Z [ RUN ] UnionTypeTest.Subtyping_NumberType 2023-01-11T22:09:56.7769285Z [ OK ] UnionTypeTest.Subtyping_NumberType (0 ms) 2023-01-11T22:09:56.7769621Z [ RUN ] UnionTypeTest.Subtyping_OptionalType 2023-01-11T22:09:56.7769945Z [ OK ] UnionTypeTest.Subtyping_OptionalType (0 ms) 2023-01-11T22:09:56.7770280Z [----------] 8 tests from UnionTypeTest (0 ms total) 2023-01-11T22:09:56.7770488Z 2023-01-11T22:09:56.7770651Z [----------] 2 tests from ScriptProfileTest 2023-01-11T22:09:56.7770947Z [ RUN ] ScriptProfileTest.Basic 2023-01-11T22:09:56.7771236Z [ OK ] ScriptProfileTest.Basic (0 ms) 2023-01-11T22:09:56.7771546Z [ RUN ] ScriptProfileTest.CallingOrder 2023-01-11T22:09:56.7797075Z [ OK ] ScriptProfileTest.CallingOrder (3 ms) 2023-01-11T22:09:56.7797420Z [----------] 2 tests from ScriptProfileTest (3 ms total) 2023-01-11T22:09:56.7797584Z 2023-01-11T22:09:56.7797753Z [----------] 7 tests from ShapeAnalysisTest 2023-01-11T22:09:56.7798085Z [ RUN ] ShapeAnalysisTest.DynamicShapesFusion 2023-01-11T22:09:56.7870719Z [ OK ] ShapeAnalysisTest.DynamicShapesFusion (7 ms) 2023-01-11T22:09:56.7871131Z [ RUN ] ShapeAnalysisTest.MovingConstantOutOfFusionGroups 2023-01-11T22:09:56.7887823Z [ OK ] ShapeAnalysisTest.MovingConstantOutOfFusionGroups (1 ms) 2023-01-11T22:09:56.7888249Z [ RUN ] ShapeAnalysisTest.SymbolicShapeAPI 2023-01-11T22:09:56.7960796Z [ OK ] ShapeAnalysisTest.SymbolicShapeAPI (7 ms) 2023-01-11T22:09:56.7961175Z [ RUN ] 
ShapeAnalysisTest.BoundedSymbolicShapes 2023-01-11T22:09:56.7967117Z [ OK ] ShapeAnalysisTest.BoundedSymbolicShapes (0 ms) 2023-01-11T22:09:56.7967568Z [ RUN ] ShapeAnalysisTest.SymbolicShapeCaching 2023-01-11T22:09:56.7973654Z [ OK ] ShapeAnalysisTest.SymbolicShapeCaching (0 ms) 2023-01-11T22:09:56.7974106Z [ RUN ] ShapeAnalysisTest.ShapeCacheMultipleFns 2023-01-11T22:09:56.8002498Z [ OK ] ShapeAnalysisTest.ShapeCacheMultipleFns (2 ms) 2023-01-11T22:09:56.8002966Z [ RUN ] ShapeAnalysisTest.TestShapeMultipleReturns 2023-01-11T22:09:56.8015271Z [ OK ] ShapeAnalysisTest.TestShapeMultipleReturns (1 ms) 2023-01-11T22:09:56.8015900Z [----------] 7 tests from ShapeAnalysisTest (21 ms total) 2023-01-11T22:09:56.8016160Z 2023-01-11T22:09:56.8016399Z [----------] 5 tests from JitLoggingTest 2023-01-11T22:09:56.8016948Z [ RUN ] JitLoggingTest.CheckSetLoggingLevel 2023-01-11T22:09:56.8017564Z [ OK ] JitLoggingTest.CheckSetLoggingLevel (0 ms) 2023-01-11T22:09:56.8017944Z [ RUN ] JitLoggingTest.CheckSetMultipleLogLevels 2023-01-11T22:09:56.8018334Z [ OK ] JitLoggingTest.CheckSetMultipleLogLevels (0 ms) 2023-01-11T22:09:56.8018707Z [ RUN ] JitLoggingTest.CheckLoggingLevelAfterUnset 2023-01-11T22:09:56.8019103Z [ OK ] JitLoggingTest.CheckLoggingLevelAfterUnset (0 ms) 2023-01-11T22:09:56.8019486Z [ RUN ] JitLoggingTest.CheckAfterChangingLevel 2023-01-11T22:09:56.8019850Z [ OK ] JitLoggingTest.CheckAfterChangingLevel (0 ms) 2023-01-11T22:09:56.8020222Z [ RUN ] JitLoggingTest.CheckOutputStreamSetting 2023-01-11T22:09:56.8020770Z [ OK ] JitLoggingTest.CheckOutputStreamSetting (0 ms) 2023-01-11T22:09:56.8021130Z [----------] 5 tests from JitLoggingTest (0 ms total) 2023-01-11T22:09:56.8021285Z 2023-01-11T22:09:56.8021426Z [----------] 9 tests from FileFormatTest 2023-01-11T22:09:56.8021764Z [ RUN ] FileFormatTest.IdentifiesFlatbufferStream 2023-01-11T22:09:56.8022156Z [ OK ] FileFormatTest.IdentifiesFlatbufferStream (0 ms) 2023-01-11T22:09:56.8022812Z [ RUN ] FileFormatTest.IdentifiesZipStream 2023-01-11T22:09:56.8023171Z [ OK ] FileFormatTest.IdentifiesZipStream (0 ms) 2023-01-11T22:09:56.8023540Z [ RUN ] FileFormatTest.FlatbufferTakesPrecedence 2023-01-11T22:09:56.8023987Z [ OK ] FileFormatTest.FlatbufferTakesPrecedence (0 ms) 2023-01-11T22:09:56.8024389Z [ RUN ] FileFormatTest.HandlesUnknownStream 2023-01-11T22:09:56.8024838Z [ OK ] FileFormatTest.HandlesUnknownStream (0 ms) 2023-01-11T22:09:56.8025201Z [ RUN ] FileFormatTest.ShortStreamIsUnknown 2023-01-11T22:09:56.8025545Z [ OK ] FileFormatTest.ShortStreamIsUnknown (0 ms) 2023-01-11T22:09:56.8025894Z [ RUN ] FileFormatTest.EmptyStreamIsUnknown 2023-01-11T22:09:56.8026250Z [ OK ] FileFormatTest.EmptyStreamIsUnknown (0 ms) 2023-01-11T22:09:56.8026596Z [ RUN ] FileFormatTest.BadStreamIsUnknown 2023-01-11T22:09:56.8026931Z [ OK ] FileFormatTest.BadStreamIsUnknown (0 ms) 2023-01-11T22:09:56.8027320Z [ RUN ] FileFormatTest.StreamOffsetIsObservedAndRestored 2023-01-11T22:09:56.8027757Z [ OK ] FileFormatTest.StreamOffsetIsObservedAndRestored (0 ms) 2023-01-11T22:09:56.8028125Z [ RUN ] FileFormatTest.HandlesMissingFile 2023-01-11T22:09:56.8028471Z [ OK ] FileFormatTest.HandlesMissingFile (0 ms) 2023-01-11T22:09:56.8028816Z [----------] 9 tests from FileFormatTest (0 ms total) 2023-01-11T22:09:56.8028973Z 2023-01-11T22:09:56.8029114Z [----------] 35 tests from FlatbufferTest 2023-01-11T22:09:56.8029478Z [ RUN ] FlatbufferTest.UpsampleNearest2d 2023-01-11T22:09:56.8033054Z [ OK ] FlatbufferTest.UpsampleNearest2d (1 ms) 2023-01-11T22:09:56.8033463Z [ RUN ] 
FlatbufferTest.UpsampleNearest2dWithCopyTensorMemory 2023-01-11T22:09:56.8044154Z [ OK ] FlatbufferTest.UpsampleNearest2dWithCopyTensorMemory (1 ms) 2023-01-11T22:09:56.8044593Z [ RUN ] FlatbufferTest.CheckAttrAccess 2023-01-11T22:09:56.8044941Z [ OK ] FlatbufferTest.CheckAttrAccess (0 ms) 2023-01-11T22:09:56.8045262Z [ RUN ] FlatbufferTest.MethodInvocation 2023-01-11T22:09:56.8064957Z [ OK ] FlatbufferTest.MethodInvocation (2 ms) 2023-01-11T22:09:56.8065319Z [ RUN ] FlatbufferTest.FlatbufferBackPortTest 2023-01-11T22:09:56.8094077Z [ OK ] FlatbufferTest.FlatbufferBackPortTest (2 ms) 2023-01-11T22:09:56.8094432Z [ RUN ] FlatbufferTest.ExtraFiles 2023-01-11T22:09:56.8097457Z [ OK ] FlatbufferTest.ExtraFiles (0 ms) 2023-01-11T22:09:56.8097750Z [ RUN ] FlatbufferTest.Conv 2023-01-11T22:09:56.8116693Z [ OK ] FlatbufferTest.Conv (1 ms) 2023-01-11T22:09:56.8117149Z [ RUN ] FlatbufferTest.ConvWithCopyTensorMemory 2023-01-11T22:09:56.8136819Z [ OK ] FlatbufferTest.ConvWithCopyTensorMemory (1 ms) 2023-01-11T22:09:56.8137261Z [ RUN ] FlatbufferTest.Inline 2023-01-11T22:09:56.8144450Z [ OK ] FlatbufferTest.Inline (0 ms) 2023-01-11T22:09:56.8144820Z [ RUN ] FlatbufferTest.InlineWithCopyTensorMemory 2023-01-11T22:09:56.8150734Z [ OK ] FlatbufferTest.InlineWithCopyTensorMemory (0 ms) 2023-01-11T22:09:56.8151070Z [ RUN ] FlatbufferTest.Tuple 2023-01-11T22:09:56.8155585Z [ OK ] FlatbufferTest.Tuple (0 ms) 2023-01-11T22:09:56.8156016Z [ RUN ] FlatbufferTest.Dict 2023-01-11T22:09:56.8160637Z [ OK ] FlatbufferTest.Dict (0 ms) 2023-01-11T22:09:56.8160909Z [ RUN ] FlatbufferTest.Prim 2023-01-11T22:09:56.8164090Z [ OK ] FlatbufferTest.Prim (0 ms) 2023-01-11T22:09:56.8164384Z [ RUN ] FlatbufferTest.PrimScalar 2023-01-11T22:09:56.8168261Z [ OK ] FlatbufferTest.PrimScalar (0 ms) 2023-01-11T22:09:56.8168591Z [ RUN ] FlatbufferTest.WrongMethodName 2023-01-11T22:09:56.8203537Z [ OK ] FlatbufferTest.WrongMethodName (3 ms) 2023-01-11T22:09:56.8203874Z [ RUN ] FlatbufferTest.SetState 2023-01-11T22:09:56.8223830Z [ OK ] FlatbufferTest.SetState (1 ms) 2023-01-11T22:09:56.8224238Z [ RUN ] FlatbufferTest.BuiltinClass 2023-01-11T22:09:56.8229571Z [ OK ] FlatbufferTest.BuiltinClass (0 ms) 2023-01-11T22:09:56.8230067Z [ RUN ] FlatbufferTest.BuiltinFunction 2023-01-11T22:09:56.8232223Z [ OK ] FlatbufferTest.BuiltinFunction (0 ms) 2023-01-11T22:09:56.8232547Z [ RUN ] FlatbufferTest.Eval 2023-01-11T22:09:56.8238239Z [ OK ] FlatbufferTest.Eval (0 ms) 2023-01-11T22:09:56.8238552Z [ RUN ] FlatbufferTest.FindWrongMethodName 2023-01-11T22:09:56.8241118Z [ OK ] FlatbufferTest.FindWrongMethodName (0 ms) 2023-01-11T22:09:56.8241517Z [ RUN ] FlatbufferTest.FindAndRunMethod 2023-01-11T22:09:56.8247197Z [ OK ] FlatbufferTest.FindAndRunMethod (0 ms) 2023-01-11T22:09:56.8247609Z [ RUN ] FlatbufferTest.RunMethodVariadic 2023-01-11T22:09:56.8252809Z [ OK ] FlatbufferTest.RunMethodVariadic (0 ms) 2023-01-11T22:09:56.8253216Z [ RUN ] FlatbufferTest.DuplicateSetState 2023-01-11T22:09:56.8261466Z [ OK ] FlatbufferTest.DuplicateSetState (0 ms) 2023-01-11T22:09:56.8261936Z [ RUN ] FlatbufferTest.OpNameExportFetchRootOperators 2023-01-11T22:09:56.8268268Z [ OK ] FlatbufferTest.OpNameExportFetchRootOperators (0 ms) 2023-01-11T22:09:56.8268712Z [ RUN ] FlatbufferTest.DefaultArgsConv 2023-01-11T22:09:56.8281389Z [ OK ] FlatbufferTest.DefaultArgsConv (1 ms) 2023-01-11T22:09:56.8281891Z [ RUN ] FlatbufferTest.DefaultArgsPinv 2023-01-11T22:09:56.8336829Z [ OK ] FlatbufferTest.DefaultArgsPinv (5 ms) 2023-01-11T22:09:56.8337351Z [ RUN ] 
FlatbufferTest.DefaultArgsTensorinvSpecifyDefault 2023-01-11T22:09:56.8347321Z [ OK ] FlatbufferTest.DefaultArgsTensorinvSpecifyDefault (1 ms) 2023-01-11T22:09:56.8347804Z [ RUN ] FlatbufferTest.DefaultArgsPinvWithOutArg 2023-01-11T22:09:56.8367791Z [ OK ] FlatbufferTest.DefaultArgsPinvWithOutArg (2 ms) 2023-01-11T22:09:56.8368290Z [ RUN ] FlatbufferTest.DefaultArgsWithOutArg 2023-01-11T22:09:56.8377539Z [ OK ] FlatbufferTest.DefaultArgsWithOutArg (0 ms) 2023-01-11T22:09:56.8378032Z [ RUN ] FlatbufferTest.OperatorCacheDifferentiatesDefaultArgs 2023-01-11T22:09:56.8401157Z [ OK ] FlatbufferTest.OperatorCacheDifferentiatesDefaultArgs (2 ms) 2023-01-11T22:09:56.8401606Z [ RUN ] FlatbufferTest.OperatorSize1 2023-01-11T22:09:56.8404374Z [ OK ] FlatbufferTest.OperatorSize1 (0 ms) 2023-01-11T22:09:56.8404884Z [ RUN ] FlatbufferTest.BoolAndDoubleList 2023-01-11T22:09:56.8405283Z [ OK ] FlatbufferTest.BoolAndDoubleList (0 ms) 2023-01-11T22:09:56.8405609Z [ RUN ] FlatbufferTest.OperatorTest2 2023-01-11T22:09:56.8416087Z [ OK ] FlatbufferTest.OperatorTest2 (1 ms) 2023-01-11T22:09:56.8416532Z [ RUN ] FlatbufferTest.DetachedBufferSmoke 2023-01-11T22:09:56.8416886Z [ OK ] FlatbufferTest.DetachedBufferSmoke (0 ms) 2023-01-11T22:09:56.8417382Z [ RUN ] FlatbufferTest.DetachedBufferNullOwner 2023-01-11T22:09:56.8417757Z [ OK ] FlatbufferTest.DetachedBufferNullOwner (0 ms) 2023-01-11T22:09:56.8418111Z [----------] 35 tests from FlatbufferTest (39 ms total) 2023-01-11T22:09:56.8418269Z 2023-01-11T22:09:56.8418440Z [----------] 3 tests from TestSourceFlatbuffer 2023-01-11T22:09:56.8418782Z [ RUN ] TestSourceFlatbuffer.UpsampleNearest2d 2023-01-11T22:09:56.8432585Z [ OK ] TestSourceFlatbuffer.UpsampleNearest2d (1 ms) 2023-01-11T22:09:56.8433016Z [ RUN ] TestSourceFlatbuffer.CheckAttrAccess 2023-01-11T22:09:56.8433831Z [ OK ] TestSourceFlatbuffer.CheckAttrAccess (0 ms) 2023-01-11T22:09:56.8434244Z [ RUN ] TestSourceFlatbuffer.MethodInvocation 2023-01-11T22:09:56.8486734Z [ OK ] TestSourceFlatbuffer.MethodInvocation (5 ms) 2023-01-11T22:09:56.8487312Z [----------] 3 tests from TestSourceFlatbuffer (7 ms total) 2023-01-11T22:09:56.8487499Z 2023-01-11T22:09:56.8487682Z [----------] 10 tests from FlatbufferUpgraderTest 2023-01-11T22:09:56.8488015Z [ RUN ] FlatbufferUpgraderTest.DivTensorV2 2023-01-11T22:09:56.8494031Z [ OK ] FlatbufferUpgraderTest.DivTensorV2 (0 ms) 2023-01-11T22:09:56.8494460Z [ RUN ] FlatbufferUpgraderTest.DivTensorOutV2 2023-01-11T22:09:56.8499458Z [ OK ] FlatbufferUpgraderTest.DivTensorOutV2 (0 ms) 2023-01-11T22:09:56.8499903Z [ RUN ] FlatbufferUpgraderTest.DivTensorInplaceV2 2023-01-11T22:09:56.8504554Z [ OK ] FlatbufferUpgraderTest.DivTensorInplaceV2 (0 ms) 2023-01-11T22:09:56.8505004Z [ RUN ] FlatbufferUpgraderTest.DivScalarFloatV2 2023-01-11T22:09:56.8509637Z [ OK ] FlatbufferUpgraderTest.DivScalarFloatV2 (0 ms) 2023-01-11T22:09:56.8510105Z [ RUN ] FlatbufferUpgraderTest.DivScalarReciprocalFloatV2 2023-01-11T22:09:56.8515807Z [ OK ] FlatbufferUpgraderTest.DivScalarReciprocalFloatV2 (0 ms) 2023-01-11T22:09:56.8516307Z [ RUN ] FlatbufferUpgraderTest.DivScalarReciprocalIntV2 2023-01-11T22:09:56.8521372Z [ OK ] FlatbufferUpgraderTest.DivScalarReciprocalIntV2 (0 ms) 2023-01-11T22:09:56.8521832Z [ RUN ] FlatbufferUpgraderTest.DivScalarScalarV2 2023-01-11T22:09:56.8527296Z [ OK ] FlatbufferUpgraderTest.DivScalarScalarV2 (0 ms) 2023-01-11T22:09:56.8527735Z [ RUN ] FlatbufferUpgraderTest.DivScalarIntV2 2023-01-11T22:09:56.8533019Z [ OK ] FlatbufferUpgraderTest.DivScalarIntV2 (0 ms) 
2023-01-11T22:09:56.8533467Z [ RUN ] FlatbufferUpgraderTest.DivScalarInplaceFloatV2 2023-01-11T22:09:56.8537809Z [ OK ] FlatbufferUpgraderTest.DivScalarInplaceFloatV2 (0 ms) 2023-01-11T22:09:56.8538288Z [ RUN ] FlatbufferUpgraderTest.DivScalarInplaceIntV2 2023-01-11T22:09:56.8543439Z [ OK ] FlatbufferUpgraderTest.DivScalarInplaceIntV2 (0 ms) 2023-01-11T22:09:56.8544070Z [----------] 10 tests from FlatbufferUpgraderTest (5 ms total) 2023-01-11T22:09:56.8544352Z 2023-01-11T22:09:56.8544803Z [----------] 12 tests from AliasAnalysisTest/BatchAndInstanceNormFixture 2023-01-11T22:09:56.8545611Z [ RUN ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchAndInstanceNorm/0 2023-01-11T22:09:56.8546513Z [ OK ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchAndInstanceNorm/0 (0 ms) 2023-01-11T22:09:56.8547413Z [ RUN ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchAndInstanceNorm/1 2023-01-11T22:09:56.8548228Z [ OK ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchAndInstanceNorm/1 (0 ms) 2023-01-11T22:09:56.8548854Z [ RUN ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchAndInstanceNorm/2 2023-01-11T22:09:56.8549583Z [ OK ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchAndInstanceNorm/2 (0 ms) 2023-01-11T22:09:56.8550558Z [ RUN ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchAndInstanceNorm/3 2023-01-11T22:09:56.8551294Z [ OK ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchAndInstanceNorm/3 (0 ms) 2023-01-11T22:09:56.8552034Z [ RUN ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchAndInstanceNormTrainingUnknown/0 2023-01-11T22:09:56.8552958Z [ OK ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchAndInstanceNormTrainingUnknown/0 (0 ms) 2023-01-11T22:09:56.8553935Z [ RUN ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchAndInstanceNormTrainingUnknown/1 2023-01-11T22:09:56.8554718Z [ OK ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchAndInstanceNormTrainingUnknown/1 (0 ms) 2023-01-11T22:09:56.8555303Z [ RUN ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchAndInstanceNormTrainingUnknown/2 2023-01-11T22:09:56.8556000Z [ OK ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchAndInstanceNormTrainingUnknown/2 (0 ms) 2023-01-11T22:09:56.8556604Z [ RUN ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchAndInstanceNormTrainingUnknown/3 2023-01-11T22:09:56.8557208Z [ OK ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchAndInstanceNormTrainingUnknown/3 (0 ms) 2023-01-11T22:09:56.8557778Z [ RUN ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchNormTrainingWithNoMeanOrVar/0 2023-01-11T22:09:56.8558363Z [ OK ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchNormTrainingWithNoMeanOrVar/0 (0 ms) 2023-01-11T22:09:56.8558938Z [ RUN ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchNormTrainingWithNoMeanOrVar/1 2023-01-11T22:09:56.8559601Z [ OK ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchNormTrainingWithNoMeanOrVar/1 (0 ms) 2023-01-11T22:09:56.8560159Z [ RUN ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchNormTrainingWithNoMeanOrVar/2 2023-01-11T22:09:56.8560751Z [ OK ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchNormTrainingWithNoMeanOrVar/2 (0 ms) 2023-01-11T22:09:56.8561320Z [ RUN ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchNormTrainingWithNoMeanOrVar/3 2023-01-11T22:09:56.8561897Z [ OK ] AliasAnalysisTest/BatchAndInstanceNormFixture.BatchNormTrainingWithNoMeanOrVar/3 (0 ms) 2023-01-11T22:09:56.8562392Z [----------] 12 tests from AliasAnalysisTest/BatchAndInstanceNormFixture (1 ms total) 2023-01-11T22:09:56.8562596Z 
2023-01-11T22:09:56.8562842Z [----------] 10 tests from PyTorch/LiteInterpreterDynamicTypeTestFixture 2023-01-11T22:09:56.8563294Z [ RUN ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/0 2023-01-11T22:09:57.6716901Z [ OK ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/0 (816 ms) 2023-01-11T22:09:57.6717434Z [ RUN ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/1 2023-01-11T22:09:58.8269077Z [ OK ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/1 (1155 ms) 2023-01-11T22:09:58.8269597Z [ RUN ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/2 2023-01-11T22:10:00.2151340Z [ OK ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/2 (1388 ms) 2023-01-11T22:10:00.2151916Z [ RUN ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/3 2023-01-11T22:10:01.5619267Z [ OK ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/3 (1346 ms) 2023-01-11T22:10:01.5619804Z [ RUN ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/4 2023-01-11T22:10:02.9045986Z [ OK ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/4 (1342 ms) 2023-01-11T22:10:02.9046516Z [ RUN ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/5 2023-01-11T22:10:04.2537296Z [ OK ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/5 (1349 ms) 2023-01-11T22:10:04.2537993Z [ RUN ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/6 2023-01-11T22:10:05.5960305Z [ OK ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/6 (1342 ms) 2023-01-11T22:10:05.5960832Z [ RUN ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/7 2023-01-11T22:10:06.9100898Z [ OK ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/7 (1314 ms) 2023-01-11T22:10:06.9101444Z [ RUN ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/8 2023-01-11T22:10:08.0995867Z [ OK ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/8 (1189 ms) 2023-01-11T22:10:08.0996378Z [ RUN ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/9 2023-01-11T22:10:09.2817263Z [ OK ] PyTorch/LiteInterpreterDynamicTypeTestFixture.Conformance/9 (1182 ms) 2023-01-11T22:10:09.2818204Z [----------] 10 tests from PyTorch/LiteInterpreterDynamicTypeTestFixture (12426 ms total) 2023-01-11T22:10:09.2818430Z 2023-01-11T22:10:09.2818613Z [----------] Global test environment tear-down 2023-01-11T22:10:09.2887793Z [==========] 569 tests from 119 test suites ran. (13735 ms total) 2023-01-11T22:10:09.2888279Z [ PASSED ] 569 tests. 2023-01-11T22:10:09.3738390Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 == *cuda* ]] 2023-01-11T22:10:09.3738705Z + [[ nogpu_NO_AVX2 != *nogpu* ]] 2023-01-11T22:10:09.3739157Z + /opt/conda/lib/python3.10/site-packages/torch/bin/test_lazy --gtest_output=xml:test/test-reports/cpp-unittest/test_libtorch/test_lazy.xml 2023-01-11T22:10:09.7354724Z CUDA not available. Disabling CUDA and MultiCUDA tests 2023-01-11T22:10:09.7357834Z Note: Google Test filter = *-*_CUDA:*_MultiCUDA 2023-01-11T22:10:09.7358579Z [==========] Running 611 tests from 10 test suites. 2023-01-11T22:10:09.7359161Z [----------] Global test environment set-up. 
2023-01-11T22:10:09.7359570Z [----------] 11 tests from BackendDeviceTest 2023-01-11T22:10:09.7359926Z [ RUN ] BackendDeviceTest.BackendDeviceType 2023-01-11T22:10:09.7360321Z [ OK ] BackendDeviceTest.BackendDeviceType (0 ms) 2023-01-11T22:10:09.7360632Z [ RUN ] BackendDeviceTest.Basic1 2023-01-11T22:10:09.7360993Z [ OK ] BackendDeviceTest.Basic1 (0 ms) 2023-01-11T22:10:09.7361294Z [ RUN ] BackendDeviceTest.Basic2 2023-01-11T22:10:09.7361634Z [ OK ] BackendDeviceTest.Basic2 (0 ms) 2023-01-11T22:10:09.7361926Z [ RUN ] BackendDeviceTest.Basic3 2023-01-11T22:10:09.7362235Z [ OK ] BackendDeviceTest.Basic3 (0 ms) 2023-01-11T22:10:09.7362561Z [ RUN ] BackendDeviceTest.Basic4 2023-01-11T22:10:09.7362855Z [ OK ] BackendDeviceTest.Basic4 (0 ms) 2023-01-11T22:10:09.7363208Z [ RUN ] BackendDeviceTest.Compare 2023-01-11T22:10:09.7363525Z [ OK ] BackendDeviceTest.Compare (0 ms) 2023-01-11T22:10:09.7363843Z [ RUN ] BackendDeviceTest.Ostream 2023-01-11T22:10:09.7364178Z [ OK ] BackendDeviceTest.Ostream (0 ms) 2023-01-11T22:10:09.7364478Z [ RUN ] BackendDeviceTest.FromAten 2023-01-11T22:10:09.7422751Z [ OK ] BackendDeviceTest.FromAten (6 ms) 2023-01-11T22:10:09.7423325Z [ RUN ] BackendDeviceTest.ToAten 2023-01-11T22:10:09.7423671Z [ OK ] BackendDeviceTest.ToAten (0 ms) 2023-01-11T22:10:09.7424105Z [ RUN ] BackendDeviceTest.GetBackendDevice1 2023-01-11T22:10:09.7425038Z [ OK ] BackendDeviceTest.GetBackendDevice1 (0 ms) 2023-01-11T22:10:09.7425589Z [ RUN ] BackendDeviceTest.GetBackendDevice2 2023-01-11T22:10:09.7426247Z [ OK ] BackendDeviceTest.GetBackendDevice2 (0 ms) 2023-01-11T22:10:09.7426841Z [----------] 11 tests from BackendDeviceTest (6 ms total) 2023-01-11T22:10:09.7427408Z 2023-01-11T22:10:09.7427677Z [----------] 2 tests from CacheTest 2023-01-11T22:10:09.7428150Z [ RUN ] CacheTest.BasicTest 2023-01-11T22:10:09.7428595Z [ OK ] CacheTest.BasicTest (0 ms) 2023-01-11T22:10:09.7428922Z [ RUN ] CacheTest.ShapeCacheTestForDynamicShape 2023-01-11T22:10:09.7429307Z [ OK ] CacheTest.ShapeCacheTestForDynamicShape (0 ms) 2023-01-11T22:10:09.7429657Z [----------] 2 tests from CacheTest (0 ms total) 2023-01-11T22:10:09.7429802Z 2023-01-11T22:10:09.7429925Z [----------] 5 tests from IrTest 2023-01-11T22:10:09.7430180Z [ RUN ] IrTest.BasicTest 2023-01-11T22:10:09.7430448Z [ OK ] IrTest.BasicTest (0 ms) 2023-01-11T22:10:09.7430702Z [ RUN ] IrTest.MetaDataTest 2023-01-11T22:10:09.7430978Z [ OK ] IrTest.MetaDataTest (0 ms) 2023-01-11T22:10:09.7431245Z [ RUN ] IrTest.TsNodeTest 2023-01-11T22:10:09.7431582Z [ OK ] IrTest.TsNodeTest (0 ms) 2023-01-11T22:10:09.7431862Z [ RUN ] IrTest.DimensionNodeTest 2023-01-11T22:10:09.7432161Z [ OK ] IrTest.DimensionNodeTest (0 ms) 2023-01-11T22:10:09.7432478Z [ RUN ] IrTest.DimensionIsDynamicTest 2023-01-11T22:10:09.7432792Z [ OK ] IrTest.DimensionIsDynamicTest (0 ms) 2023-01-11T22:10:09.7433103Z [----------] 5 tests from IrTest (0 ms total) 2023-01-11T22:10:09.7433246Z 2023-01-11T22:10:09.7433391Z [----------] 2 tests from IrUtilTest 2023-01-11T22:10:09.7433647Z [ RUN ] IrUtilTest.BasicTest 2023-01-11T22:10:09.7433925Z [ OK ] IrUtilTest.BasicTest (0 ms) 2023-01-11T22:10:09.7434203Z [ RUN ] IrUtilTest.TestCircle 2023-01-11T22:10:09.7449982Z [ OK ] IrUtilTest.TestCircle (2 ms) 2023-01-11T22:10:09.7450574Z [----------] 2 tests from IrUtilTest (2 ms total) 2023-01-11T22:10:09.7450883Z 2023-01-11T22:10:09.7451155Z [----------] 2 tests from HashTest 2023-01-11T22:10:09.7451426Z [ RUN ] HashTest.Scalar 2023-01-11T22:10:09.7451678Z [ OK ] HashTest.Scalar (0 ms) 
2023-01-11T22:10:09.7451936Z [ RUN ] HashTest.Sanity 2023-01-11T22:10:09.7452198Z [ OK ] HashTest.Sanity (0 ms) 2023-01-11T22:10:09.7452477Z [----------] 2 tests from HashTest (0 ms total) 2023-01-11T22:10:09.7452622Z 2023-01-11T22:10:09.7452794Z [----------] 3 tests from PermutationUtilTest 2023-01-11T22:10:09.7453145Z [ RUN ] PermutationUtilTest.TestInversePermutation 2023-01-11T22:10:09.7483871Z [ OK ] PermutationUtilTest.TestInversePermutation (3 ms) 2023-01-11T22:10:09.7484460Z [ RUN ] PermutationUtilTest.TestIsPermutation 2023-01-11T22:10:09.7484839Z [ OK ] PermutationUtilTest.TestIsPermutation (0 ms) 2023-01-11T22:10:09.7485188Z [ RUN ] PermutationUtilTest.TestPermute 2023-01-11T22:10:09.7504253Z [ OK ] PermutationUtilTest.TestPermute (2 ms) 2023-01-11T22:10:09.7504823Z [----------] 3 tests from PermutationUtilTest (5 ms total) 2023-01-11T22:10:09.7505092Z 2023-01-11T22:10:09.7505360Z [----------] 7 tests from ShapeTest 2023-01-11T22:10:09.7505802Z [ RUN ] ShapeTest.Basic1 2023-01-11T22:10:09.7506282Z [ OK ] ShapeTest.Basic1 (0 ms) 2023-01-11T22:10:09.7506844Z [ RUN ] ShapeTest.Basic2 2023-01-11T22:10:09.7507282Z [ OK ] ShapeTest.Basic2 (0 ms) 2023-01-11T22:10:09.7507695Z [ RUN ] ShapeTest.Basic3 2023-01-11T22:10:09.7508200Z [ OK ] ShapeTest.Basic3 (0 ms) 2023-01-11T22:10:09.7508695Z [ RUN ] ShapeTest.SetScalarType 2023-01-11T22:10:09.7509235Z [ OK ] ShapeTest.SetScalarType (0 ms) 2023-01-11T22:10:09.7509701Z [ RUN ] ShapeTest.SetSize 2023-01-11T22:10:09.7510112Z [ OK ] ShapeTest.SetSize (0 ms) 2023-01-11T22:10:09.7510765Z [ RUN ] ShapeTest.Equal 2023-01-11T22:10:09.7511135Z [ OK ] ShapeTest.Equal (0 ms) 2023-01-11T22:10:09.7511397Z [ RUN ] ShapeTest.Ostream 2023-01-11T22:10:09.7511666Z [ OK ] ShapeTest.Ostream (0 ms) 2023-01-11T22:10:09.7511952Z [----------] 7 tests from ShapeTest (0 ms total) 2023-01-11T22:10:09.7512101Z 2023-01-11T22:10:09.7512252Z [----------] 2 tests from TrieCacheTest 2023-01-11T22:10:09.7512543Z [ RUN ] TrieCacheTest.TestSinglePath 2023-01-11T22:10:09.7512865Z [ OK ] TrieCacheTest.TestSinglePath (0 ms) 2023-01-11T22:10:09.7513160Z [ RUN ] TrieCacheTest.TestTwoPaths 2023-01-11T22:10:09.7513471Z [ OK ] TrieCacheTest.TestTwoPaths (0 ms) 2023-01-11T22:10:09.7513796Z [----------] 2 tests from TrieCacheTest (0 ms total) 2023-01-11T22:10:09.7513949Z 2023-01-11T22:10:09.7514075Z [----------] 3 tests from UtilTest 2023-01-11T22:10:09.7514425Z [ RUN ] UtilTest.ExceptionCleanup 2023-01-11T22:10:09.7514739Z [ OK ] UtilTest.ExceptionCleanup (0 ms) 2023-01-11T22:10:09.7515005Z [ RUN ] UtilTest.MaybeRef 2023-01-11T22:10:09.7515279Z [ OK ] UtilTest.MaybeRef (0 ms) 2023-01-11T22:10:09.7515536Z [ RUN ] UtilTest.Iota 2023-01-11T22:10:09.7515795Z [ OK ] UtilTest.Iota (0 ms) 2023-01-11T22:10:09.7516071Z [----------] 3 tests from UtilTest (0 ms total) 2023-01-11T22:10:09.7516216Z 2023-01-11T22:10:09.7516366Z [----------] 574 tests from LazyOpsTest 2023-01-11T22:10:09.7519215Z [ RUN ] LazyOpsTest.TestScalarTensor 2023-01-11T22:10:09.7652833Z [ OK ] LazyOpsTest.TestScalarTensor (13 ms) 2023-01-11T22:10:09.7653369Z [ RUN ] LazyOpsTest.TestClone 2023-01-11T22:10:09.7679790Z [ OK ] LazyOpsTest.TestClone (2 ms) 2023-01-11T22:10:09.7680219Z [ RUN ] LazyOpsTest.TestTo 2023-01-11T22:10:09.7680822Z [ OK ] LazyOpsTest.TestTo (0 ms) 2023-01-11T22:10:09.7681379Z [ RUN ] LazyOpsTest.TestIsFloatingPoint 2023-01-11T22:10:09.7681835Z [ OK ] LazyOpsTest.TestIsFloatingPoint (0 ms) 2023-01-11T22:10:09.7682138Z [ RUN ] LazyOpsTest.TestIsSigned 2023-01-11T22:10:09.7682445Z [ OK ] 
LazyOpsTest.TestIsSigned (0 ms) 2023-01-11T22:10:09.7682746Z [ RUN ] LazyOpsTest.TestCastByte 2023-01-11T22:10:09.8056981Z [ OK ] LazyOpsTest.TestCastByte (37 ms) 2023-01-11T22:10:09.8057416Z [ RUN ] LazyOpsTest.TestCastChar 2023-01-11T22:10:09.8059925Z [ OK ] LazyOpsTest.TestCastChar (0 ms) 2023-01-11T22:10:09.8060485Z [ RUN ] LazyOpsTest.TestCastShort 2023-01-11T22:10:09.8063166Z [ OK ] LazyOpsTest.TestCastShort (0 ms) 2023-01-11T22:10:09.8063717Z [ RUN ] LazyOpsTest.TestCastInt 2023-01-11T22:10:09.8066090Z [ OK ] LazyOpsTest.TestCastInt (0 ms) 2023-01-11T22:10:09.8066627Z [ RUN ] LazyOpsTest.TestCastLong 2023-01-11T22:10:09.8068776Z [ OK ] LazyOpsTest.TestCastLong (0 ms) 2023-01-11T22:10:09.8069171Z [ RUN ] LazyOpsTest.TestCastFloat 2023-01-11T22:10:09.8069487Z [ OK ] LazyOpsTest.TestCastFloat (0 ms) 2023-01-11T22:10:09.8069848Z [ RUN ] LazyOpsTest.TestRetainType 2023-01-11T22:10:09.8070892Z [ OK ] LazyOpsTest.TestRetainType (0 ms) 2023-01-11T22:10:09.8071301Z [ RUN ] LazyOpsTest.TestLogicalTypeWithInterop 2023-01-11T22:10:09.8114240Z [ OK ] LazyOpsTest.TestLogicalTypeWithInterop (4 ms) 2023-01-11T22:10:09.8114574Z [ RUN ] LazyOpsTest.TestAdd 2023-01-11T22:10:09.8117185Z [ OK ] LazyOpsTest.TestAdd (0 ms) 2023-01-11T22:10:09.8117521Z [ RUN ] LazyOpsTest.TestAddHalf 2023-01-11T22:10:09.8120270Z [ OK ] LazyOpsTest.TestAddHalf (0 ms) 2023-01-11T22:10:09.8120779Z [ RUN ] LazyOpsTest.TestAddMixedPrecision 2023-01-11T22:10:09.8125397Z [ OK ] LazyOpsTest.TestAddMixedPrecision (0 ms) 2023-01-11T22:10:09.8125804Z [ RUN ] LazyOpsTest.TestAddInPlace 2023-01-11T22:10:09.8130065Z [ OK ] LazyOpsTest.TestAddInPlace (0 ms) 2023-01-11T22:10:09.8130392Z [ RUN ] LazyOpsTest.TestAddScalar 2023-01-11T22:10:09.8133129Z [ OK ] LazyOpsTest.TestAddScalar (0 ms) 2023-01-11T22:10:09.8133498Z [ RUN ] LazyOpsTest.TestAddScalarInPlace 2023-01-11T22:10:09.8137384Z [ OK ] LazyOpsTest.TestAddScalarInPlace (0 ms) 2023-01-11T22:10:09.8137742Z [ RUN ] LazyOpsTest.TestAddZeroSizeDim 2023-01-11T22:10:09.8140345Z [ OK ] LazyOpsTest.TestAddZeroSizeDim (0 ms) 2023-01-11T22:10:09.8140696Z [ RUN ] LazyOpsTest.TestSub 2023-01-11T22:10:09.8143936Z [ OK ] LazyOpsTest.TestSub (0 ms) 2023-01-11T22:10:09.8144377Z [ RUN ] LazyOpsTest.TestSubInPlace 2023-01-11T22:10:09.8172712Z [ OK ] LazyOpsTest.TestSubInPlace (2 ms) 2023-01-11T22:10:09.8173040Z [ RUN ] LazyOpsTest.TestSubScalar 2023-01-11T22:10:09.8175853Z [ OK ] LazyOpsTest.TestSubScalar (0 ms) 2023-01-11T22:10:09.8176201Z [ RUN ] LazyOpsTest.TestSubScalarInPlace 2023-01-11T22:10:09.8179871Z [ OK ] LazyOpsTest.TestSubScalarInPlace (0 ms) 2023-01-11T22:10:09.8180220Z [ RUN ] LazyOpsTest.TestMul 2023-01-11T22:10:09.8189832Z [ OK ] LazyOpsTest.TestMul (0 ms) 2023-01-11T22:10:09.8190149Z [ RUN ] LazyOpsTest.TestMulInPlace 2023-01-11T22:10:09.8193800Z [ OK ] LazyOpsTest.TestMulInPlace (0 ms) 2023-01-11T22:10:09.8194142Z [ RUN ] LazyOpsTest.TestMulScalar 2023-01-11T22:10:09.8196660Z [ OK ] LazyOpsTest.TestMulScalar (0 ms) 2023-01-11T22:10:09.8197032Z [ RUN ] LazyOpsTest.TestMulScalarInPlace 2023-01-11T22:10:09.8200973Z [ OK ] LazyOpsTest.TestMulScalarInPlace (0 ms) 2023-01-11T22:10:09.8201281Z [ RUN ] LazyOpsTest.TestDiv 2023-01-11T22:10:09.8286505Z [ OK ] LazyOpsTest.TestDiv (8 ms) 2023-01-11T22:10:09.8286845Z [ RUN ] LazyOpsTest.TestDivWithRoundingMode 2023-01-11T22:10:09.8590981Z [ OK ] LazyOpsTest.TestDivWithRoundingMode (30 ms) 2023-01-11T22:10:09.8591317Z [ RUN ] LazyOpsTest.TestDivInPlace 2023-01-11T22:10:09.8594352Z [ OK ] LazyOpsTest.TestDivInPlace (0 ms) 
2023-01-11T22:10:09.8594828Z [ RUN ] LazyOpsTest.TestDivInPlaceWithRoundingMode 2023-01-11T22:10:09.8604595Z [ OK ] LazyOpsTest.TestDivInPlaceWithRoundingMode (1 ms) 2023-01-11T22:10:09.8604959Z [ RUN ] LazyOpsTest.TestDivScalar 2023-01-11T22:10:09.8633466Z [ OK ] LazyOpsTest.TestDivScalar (2 ms) 2023-01-11T22:10:09.8633829Z [ RUN ] LazyOpsTest.TestDivScalarInPlace 2023-01-11T22:10:09.8640250Z [ OK ] LazyOpsTest.TestDivScalarInPlace (0 ms) 2023-01-11T22:10:09.8640569Z [ RUN ] LazyOpsTest.TestDivOut 2023-01-11T22:10:09.8644507Z [ OK ] LazyOpsTest.TestDivOut (0 ms) 2023-01-11T22:10:09.8644813Z [ RUN ] LazyOpsTest.TestRsubScalar 2023-01-11T22:10:09.8648091Z [ OK ] LazyOpsTest.TestRsubScalar (0 ms) 2023-01-11T22:10:09.8648380Z [ RUN ] LazyOpsTest.TestNe 2023-01-11T22:10:09.8651076Z [ OK ] LazyOpsTest.TestNe (0 ms) 2023-01-11T22:10:09.8651381Z [ RUN ] LazyOpsTest.TestNeInplace 2023-01-11T22:10:09.8656181Z [ OK ] LazyOpsTest.TestNeInplace (0 ms) 2023-01-11T22:10:09.8656470Z [ RUN ] LazyOpsTest.TestEq 2023-01-11T22:10:09.8659136Z [ OK ] LazyOpsTest.TestEq (0 ms) 2023-01-11T22:10:09.8659450Z [ RUN ] LazyOpsTest.TestEqInplace 2023-01-11T22:10:09.8664241Z [ OK ] LazyOpsTest.TestEqInplace (0 ms) 2023-01-11T22:10:09.8664535Z [ RUN ] LazyOpsTest.TestGe 2023-01-11T22:10:09.8667240Z [ OK ] LazyOpsTest.TestGe (0 ms) 2023-01-11T22:10:09.8667542Z [ RUN ] LazyOpsTest.TestGeInplace 2023-01-11T22:10:09.8682091Z [ OK ] LazyOpsTest.TestGeInplace (1 ms) 2023-01-11T22:10:09.8682414Z [ RUN ] LazyOpsTest.TestLe 2023-01-11T22:10:09.8685048Z [ OK ] LazyOpsTest.TestLe (0 ms) 2023-01-11T22:10:09.8685355Z [ RUN ] LazyOpsTest.TestLeInplace 2023-01-11T22:10:09.8690014Z [ OK ] LazyOpsTest.TestLeInplace (0 ms) 2023-01-11T22:10:09.8690324Z [ RUN ] LazyOpsTest.TestGt 2023-01-11T22:10:09.8693246Z [ OK ] LazyOpsTest.TestGt (0 ms) 2023-01-11T22:10:09.8693578Z [ RUN ] LazyOpsTest.TestGtInplace 2023-01-11T22:10:09.8698197Z [ OK ] LazyOpsTest.TestGtInplace (0 ms) 2023-01-11T22:10:09.8698604Z [ RUN ] LazyOpsTest.TestLt 2023-01-11T22:10:09.8701219Z [ OK ] LazyOpsTest.TestLt (0 ms) 2023-01-11T22:10:09.8701534Z [ RUN ] LazyOpsTest.TestLtInplace 2023-01-11T22:10:09.8721960Z [ OK ] LazyOpsTest.TestLtInplace (2 ms) 2023-01-11T22:10:09.8722273Z [ RUN ] LazyOpsTest.TestNeScalar 2023-01-11T22:10:09.8724865Z [ OK ] LazyOpsTest.TestNeScalar (0 ms) 2023-01-11T22:10:09.8725175Z [ RUN ] LazyOpsTest.TestEqScalar 2023-01-11T22:10:09.8727737Z [ OK ] LazyOpsTest.TestEqScalar (0 ms) 2023-01-11T22:10:09.8728052Z [ RUN ] LazyOpsTest.TestGeScalar 2023-01-11T22:10:09.8730597Z [ OK ] LazyOpsTest.TestGeScalar (0 ms) 2023-01-11T22:10:09.8730920Z [ RUN ] LazyOpsTest.TestGeScalarInplace 2023-01-11T22:10:09.8746707Z [ OK ] LazyOpsTest.TestGeScalarInplace (1 ms) 2023-01-11T22:10:09.8747072Z [ RUN ] LazyOpsTest.TestLeScalar 2023-01-11T22:10:09.8749446Z [ OK ] LazyOpsTest.TestLeScalar (0 ms) 2023-01-11T22:10:09.8749795Z [ RUN ] LazyOpsTest.TestLeScalarInplace 2023-01-11T22:10:09.8753919Z [ OK ] LazyOpsTest.TestLeScalarInplace (0 ms) 2023-01-11T22:10:09.8754249Z [ RUN ] LazyOpsTest.TestGtScalar 2023-01-11T22:10:09.8756712Z [ OK ] LazyOpsTest.TestGtScalar (0 ms) 2023-01-11T22:10:09.8757059Z [ RUN ] LazyOpsTest.TestGtScalarInplace 2023-01-11T22:10:09.8761390Z [ OK ] LazyOpsTest.TestGtScalarInplace (0 ms) 2023-01-11T22:10:09.8761709Z [ RUN ] LazyOpsTest.TestLtScalar 2023-01-11T22:10:09.8764112Z [ OK ] LazyOpsTest.TestLtScalar (0 ms) 2023-01-11T22:10:09.8764457Z [ RUN ] LazyOpsTest.TestLtScalarInplace 2023-01-11T22:10:09.8768645Z [ OK ] LazyOpsTest.TestLtScalarInplace 
(0 ms) 2023-01-11T22:10:09.8768972Z [ RUN ] LazyOpsTest.TestIntegerAdd 2023-01-11T22:10:09.8780183Z [ OK ] LazyOpsTest.TestIntegerAdd (1 ms) 2023-01-11T22:10:09.8780501Z [ RUN ] LazyOpsTest.TestSVD 2023-01-11T22:10:09.8845163Z [ OK ] LazyOpsTest.TestSVD (6 ms) 2023-01-11T22:10:09.8845696Z [ RUN ] LazyOpsTest.TestQR 2023-01-11T22:10:09.8846364Z [W BatchLinearAlgebra.cpp:2459] Warning: torch.qr is deprecated in favor of torch.linalg.qr and will be removed in a future PyTorch release. 2023-01-11T22:10:09.8847191Z The boolean parameter 'some' has been replaced with a string parameter 'mode'. 2023-01-11T22:10:09.8847576Z Q, R = torch.qr(A, some) 2023-01-11T22:10:09.8847769Z should be replaced with 2023-01-11T22:10:09.8848098Z Q, R = torch.linalg.qr(A, 'reduced' if some else 'complete') (function operator()) 2023-01-11T22:10:09.8868883Z [ OK ] LazyOpsTest.TestQR (2 ms) 2023-01-11T22:10:09.8869288Z [ RUN ] LazyOpsTest.TestSymEig 2023-01-11T22:10:09.8871167Z [W BatchLinearAlgebra.cpp:2910] Warning: torch.symeig is deprecated in favor of torch.linalg.eigh and will be removed in a future PyTorch release. 2023-01-11T22:10:09.8872037Z The default behavior has changed from using the upper triangular portion of the matrix by default to using the lower triangular portion. 2023-01-11T22:10:09.8872520Z L, _ = torch.symeig(A, upper=upper) 2023-01-11T22:10:09.8872792Z should be replaced with 2023-01-11T22:10:09.8873188Z L = torch.linalg.eigvalsh(A, UPLO='U' if upper else 'L') 2023-01-11T22:10:09.8873459Z and 2023-01-11T22:10:09.8873699Z L, V = torch.symeig(A, eigenvectors=True) 2023-01-11T22:10:09.8873951Z should be replaced with 2023-01-11T22:10:09.8874368Z L, V = torch.linalg.eigh(A, UPLO='U' if upper else 'L') (function operator()) 2023-01-11T22:10:09.8889907Z [ OK ] LazyOpsTest.TestSymEig (2 ms) 2023-01-11T22:10:09.8890528Z [ RUN ] LazyOpsTest.TestCholesky 2023-01-11T22:10:09.8891495Z [W BatchLinearAlgebra.cpp:1730] Warning: torch.cholesky is deprecated in favor of torch.linalg.cholesky and will be removed in a future PyTorch release. 2023-01-11T22:10:09.8892148Z L = torch.cholesky(A) 2023-01-11T22:10:09.8892480Z should be replaced with 2023-01-11T22:10:09.8892827Z L = torch.linalg.cholesky(A) 2023-01-11T22:10:09.8893122Z and 2023-01-11T22:10:09.8893310Z U = torch.cholesky(A, upper=True) 2023-01-11T22:10:09.8893499Z should be replaced with 2023-01-11T22:10:09.8893708Z U = torch.linalg.cholesky(A).mH(). 2023-01-11T22:10:09.8894034Z This transform will produce equivalent results for all valid (symmetric positive definite) inputs. (function operator()) 2023-01-11T22:10:09.8899936Z [ OK ] LazyOpsTest.TestCholesky (1 ms) 2023-01-11T22:10:09.8900378Z [ RUN ] LazyOpsTest.TestLogDet 2023-01-11T22:10:09.8922408Z [ OK ] LazyOpsTest.TestLogDet (2 ms) 2023-01-11T22:10:09.8922954Z [ RUN ] LazyOpsTest.TestTriangularSolve 2023-01-11T22:10:09.8923591Z [W BatchLinearAlgebra.cpp:2225] Warning: torch.triangular_solve is deprecated in favor of torch.linalg.solve_triangularand will be removed in a future PyTorch release. 2023-01-11T22:10:09.8924178Z torch.linalg.solve_triangular has its arguments reversed and does not return a copy of one of the inputs. 2023-01-11T22:10:09.8924569Z X = torch.triangular_solve(B, A).solution 2023-01-11T22:10:09.8924824Z should be replaced with 2023-01-11T22:10:09.8925141Z X = torch.linalg.solve_triangular(A, B). 
(function operator()) 2023-01-11T22:10:09.9131863Z [ OK ] LazyOpsTest.TestTriangularSolve (20 ms) 2023-01-11T22:10:09.9132213Z [ RUN ] LazyOpsTest.TestKthValue 2023-01-11T22:10:09.9163627Z [ OK ] LazyOpsTest.TestKthValue (3 ms) 2023-01-11T22:10:09.9163992Z [ RUN ] LazyOpsTest.TestTopK 2023-01-11T22:10:09.9343559Z [ OK ] LazyOpsTest.TestTopK (17 ms) 2023-01-11T22:10:09.9343947Z [ RUN ] LazyOpsTest.TestSort 2023-01-11T22:10:09.9411069Z [ OK ] LazyOpsTest.TestSort (6 ms) 2023-01-11T22:10:09.9411594Z [ RUN ] LazyOpsTest.TestSortDescWithMinValue 2023-01-11T22:10:09.9416332Z [ OK ] LazyOpsTest.TestSortDescWithMinValue (0 ms) 2023-01-11T22:10:09.9416910Z [ RUN ] LazyOpsTest.TestArgSort 2023-01-11T22:10:09.9428455Z [ OK ] LazyOpsTest.TestArgSort (1 ms) 2023-01-11T22:10:09.9428748Z [ RUN ] LazyOpsTest.TestMin 2023-01-11T22:10:09.9431465Z [ OK ] LazyOpsTest.TestMin (0 ms) 2023-01-11T22:10:09.9431761Z [ RUN ] LazyOpsTest.TestMax 2023-01-11T22:10:09.9434546Z [ OK ] LazyOpsTest.TestMax (0 ms) 2023-01-11T22:10:09.9434881Z [ RUN ] LazyOpsTest.TestUnaryMin 2023-01-11T22:10:09.9437382Z [ OK ] LazyOpsTest.TestUnaryMin (0 ms) 2023-01-11T22:10:09.9437686Z [ RUN ] LazyOpsTest.TestUnaryMax 2023-01-11T22:10:09.9440050Z [ OK ] LazyOpsTest.TestUnaryMax (0 ms) 2023-01-11T22:10:09.9440336Z [ RUN ] LazyOpsTest.TestAll 2023-01-11T22:10:09.9450926Z [ OK ] LazyOpsTest.TestAll (1 ms) 2023-01-11T22:10:09.9451446Z [ RUN ] LazyOpsTest.TestAllDim 2023-01-11T22:10:09.9453857Z [ OK ] LazyOpsTest.TestAllDim (0 ms) 2023-01-11T22:10:09.9454180Z [ RUN ] LazyOpsTest.TestAllDimKeep 2023-01-11T22:10:09.9456516Z [ OK ] LazyOpsTest.TestAllDimKeep (0 ms) 2023-01-11T22:10:09.9456835Z [ RUN ] LazyOpsTest.TestAmax 2023-01-11T22:10:09.9492611Z [ OK ] LazyOpsTest.TestAmax (3 ms) 2023-01-11T22:10:09.9492920Z [ RUN ] LazyOpsTest.TestAmin 2023-01-11T22:10:09.9528439Z [ OK ] LazyOpsTest.TestAmin (3 ms) 2023-01-11T22:10:09.9528770Z [ RUN ] LazyOpsTest.TestAny 2023-01-11T22:10:09.9539769Z [ OK ] LazyOpsTest.TestAny (1 ms) 2023-01-11T22:10:09.9540089Z [ RUN ] LazyOpsTest.TestAnyDim 2023-01-11T22:10:09.9542545Z [ OK ] LazyOpsTest.TestAnyDim (0 ms) 2023-01-11T22:10:09.9543018Z [ RUN ] LazyOpsTest.TestAnyDimKeep 2023-01-11T22:10:09.9545080Z [ OK ] LazyOpsTest.TestAnyDimKeep (0 ms) 2023-01-11T22:10:09.9545432Z [ RUN ] LazyOpsTest.TestMean 2023-01-11T22:10:09.9548644Z [ OK ] LazyOpsTest.TestMean (0 ms) 2023-01-11T22:10:09.9548998Z [ RUN ] LazyOpsTest.TestMeanCast 2023-01-11T22:10:09.9551898Z [ OK ] LazyOpsTest.TestMeanCast (0 ms) 2023-01-11T22:10:09.9552279Z [ RUN ] LazyOpsTest.TestMeanInDim 2023-01-11T22:10:09.9571681Z [ OK ] LazyOpsTest.TestMeanInDim (1 ms) 2023-01-11T22:10:09.9572267Z [ RUN ] LazyOpsTest.TestMeanInDims 2023-01-11T22:10:09.9578953Z [ OK ] LazyOpsTest.TestMeanInDims (0 ms) 2023-01-11T22:10:09.9579545Z [ RUN ] LazyOpsTest.TestMeanInDimsKeepCast 2023-01-11T22:10:09.9586287Z [ OK ] LazyOpsTest.TestMeanInDimsKeepCast (0 ms) 2023-01-11T22:10:09.9586653Z [ RUN ] LazyOpsTest.TestMeanInDimOut 2023-01-11T22:10:09.9614825Z [ OK ] LazyOpsTest.TestMeanInDimOut (2 ms) 2023-01-11T22:10:09.9615362Z [ RUN ] LazyOpsTest.TestStd 2023-01-11T22:10:09.9622288Z [ OK ] LazyOpsTest.TestStd (0 ms) 2023-01-11T22:10:09.9622994Z [ RUN ] LazyOpsTest.TestStdInDim 2023-01-11T22:10:09.9693974Z [ OK ] LazyOpsTest.TestStdInDim (7 ms) 2023-01-11T22:10:09.9694551Z [ RUN ] LazyOpsTest.TestStdWithCorrection 2023-01-11T22:10:09.9729890Z [ OK ] LazyOpsTest.TestStdWithCorrection (3 ms) 2023-01-11T22:10:09.9730523Z [ RUN ] LazyOpsTest.TestStdMeanWithCorrection 
2023-01-11T22:10:09.9743910Z [ OK ] LazyOpsTest.TestStdMeanWithCorrection (1 ms) 2023-01-11T22:10:09.9744476Z [ RUN ] LazyOpsTest.TestSum 2023-01-11T22:10:09.9747578Z [ OK ] LazyOpsTest.TestSum (0 ms) 2023-01-11T22:10:09.9748095Z [ RUN ] LazyOpsTest.TestSumCast 2023-01-11T22:10:09.9751210Z [ OK ] LazyOpsTest.TestSumCast (0 ms) 2023-01-11T22:10:09.9751720Z [ RUN ] LazyOpsTest.TestSumU8 2023-01-11T22:10:09.9754397Z [ OK ] LazyOpsTest.TestSumU8 (0 ms) 2023-01-11T22:10:09.9754904Z [ RUN ] LazyOpsTest.TestSumInDim 2023-01-11T22:10:09.9772429Z [ OK ] LazyOpsTest.TestSumInDim (1 ms) 2023-01-11T22:10:09.9772989Z [ RUN ] LazyOpsTest.TestSumInDims 2023-01-11T22:10:09.9779479Z [ OK ] LazyOpsTest.TestSumInDims (0 ms) 2023-01-11T22:10:09.9780052Z [ RUN ] LazyOpsTest.TestSumInDimsKeep 2023-01-11T22:10:09.9786752Z [ OK ] LazyOpsTest.TestSumInDimsKeep (0 ms) 2023-01-11T22:10:09.9787370Z [ RUN ] LazyOpsTest.TestSumInDimsKeepCast 2023-01-11T22:10:09.9793682Z [ OK ] LazyOpsTest.TestSumInDimsKeepCast (0 ms) 2023-01-11T22:10:09.9794217Z [ RUN ] LazyOpsTest.TestVar 2023-01-11T22:10:09.9796578Z [ OK ] LazyOpsTest.TestVar (0 ms) 2023-01-11T22:10:09.9797314Z [ RUN ] LazyOpsTest.TestVarWithDim 2023-01-11T22:10:09.9803463Z [ OK ] LazyOpsTest.TestVarWithDim (0 ms) 2023-01-11T22:10:09.9804023Z [ RUN ] LazyOpsTest.TestVarWithCorrection 2023-01-11T22:10:09.9812088Z [ OK ] LazyOpsTest.TestVarWithCorrection (0 ms) 2023-01-11T22:10:09.9812715Z [ RUN ] LazyOpsTest.TestVarMeanWithCorrection 2023-01-11T22:10:09.9825797Z [ OK ] LazyOpsTest.TestVarMeanWithCorrection (1 ms) 2023-01-11T22:10:09.9826354Z [ RUN ] LazyOpsTest.TestMaxInDim 2023-01-11T22:10:09.9886894Z [ OK ] LazyOpsTest.TestMaxInDim (6 ms) 2023-01-11T22:10:09.9887414Z [ RUN ] LazyOpsTest.TestMinInDim 2023-01-11T22:10:09.9898501Z [ OK ] LazyOpsTest.TestMinInDim (1 ms) 2023-01-11T22:10:09.9898995Z [ RUN ] LazyOpsTest.TestNorm 2023-01-11T22:10:09.9902892Z [ OK ] LazyOpsTest.TestNorm (0 ms) 2023-01-11T22:10:09.9903595Z [ RUN ] LazyOpsTest.TestNormInDim 2023-01-11T22:10:09.9910590Z [ OK ] LazyOpsTest.TestNormInDim (0 ms) 2023-01-11T22:10:09.9911162Z [ RUN ] LazyOpsTest.TestNormInDims 2023-01-11T22:10:09.9917960Z [ OK ] LazyOpsTest.TestNormInDims (0 ms) 2023-01-11T22:10:09.9918545Z [ RUN ] LazyOpsTest.TestNormInDimsKeep 2023-01-11T22:10:09.9925092Z [ OK ] LazyOpsTest.TestNormInDimsKeep (0 ms) 2023-01-11T22:10:09.9925758Z [ RUN ] LazyOpsTest.TestNormalTwoTensor 2023-01-11T22:10:09.9931915Z [ OK ] LazyOpsTest.TestNormalTwoTensor (0 ms) 2023-01-11T22:10:09.9932490Z [ RUN ] LazyOpsTest.TestNormalDoubleMean 2023-01-11T22:10:09.9938850Z [ OK ] LazyOpsTest.TestNormalDoubleMean (0 ms) 2023-01-11T22:10:09.9939443Z [ RUN ] LazyOpsTest.TestNormalDoubleStd 2023-01-11T22:10:09.9941346Z [ OK ] LazyOpsTest.TestNormalDoubleStd (0 ms) 2023-01-11T22:10:09.9941894Z [ RUN ] LazyOpsTest.TestNormalInPlace 2023-01-11T22:10:09.9945501Z [ OK ] LazyOpsTest.TestNormalInPlace (0 ms) 2023-01-11T22:10:09.9946038Z [ RUN ] LazyOpsTest.TestUniformInPlace 2023-01-11T22:10:09.9948116Z [ OK ] LazyOpsTest.TestUniformInPlace (0 ms) 2023-01-11T22:10:09.9948674Z [ RUN ] LazyOpsTest.TestRandomInPlace 2023-01-11T22:10:09.9999758Z [ OK ] LazyOpsTest.TestRandomInPlace (5 ms) 2023-01-11T22:10:10.0000409Z [ RUN ] LazyOpsTest.TestRandomInPlaceDefaultFrom 2023-01-11T22:10:10.0049537Z [ OK ] LazyOpsTest.TestRandomInPlaceDefaultFrom (4 ms) 2023-01-11T22:10:10.0050181Z [ RUN ] LazyOpsTest.TestRandomInPlaceDefault 2023-01-11T22:10:10.0063261Z [ OK ] LazyOpsTest.TestRandomInPlaceDefault (1 ms) 2023-01-11T22:10:10.0063879Z [ 
RUN ] LazyOpsTest.TestNormGeneral 2023-01-11T22:10:10.0067274Z [ OK ] LazyOpsTest.TestNormGeneral (0 ms) 2023-01-11T22:10:10.0067878Z [ RUN ] LazyOpsTest.TestNormNuclear 2023-01-11T22:10:10.0071756Z [ OK ] LazyOpsTest.TestNormNuclear (0 ms) 2023-01-11T22:10:10.0072342Z [ RUN ] LazyOpsTest.TestFrobeniusNormInDim 2023-01-11T22:10:10.0073193Z [W LinearAlgebra.cpp:2783] Warning: at::frobenius_norm is deprecated and it is just left for JIT compatibility. It will be removed in a future PyTorch release. Please use `linalg.vector_norm(A, 2., dim, keepdim)` instead (function operator()) 2023-01-11T22:10:10.0079574Z [ OK ] LazyOpsTest.TestFrobeniusNormInDim (0 ms) 2023-01-11T22:10:10.0080227Z [ RUN ] LazyOpsTest.TestFrobeniusNormInDims 2023-01-11T22:10:10.0086561Z [ OK ] LazyOpsTest.TestFrobeniusNormInDims (0 ms) 2023-01-11T22:10:10.0087099Z [ RUN ] LazyOpsTest.TestGroupNorm 2023-01-11T22:10:10.0123792Z [ OK ] LazyOpsTest.TestGroupNorm (3 ms) 2023-01-11T22:10:10.0124631Z [ RUN ] LazyOpsTest.TestGroupNormBackward 2023-01-11T22:10:10.0574031Z [ OK ] LazyOpsTest.TestGroupNormBackward (44 ms) 2023-01-11T22:10:10.0574501Z [ RUN ] LazyOpsTest.TestInstanceNorm 2023-01-11T22:10:10.0601250Z [ OK ] LazyOpsTest.TestInstanceNorm (2 ms) 2023-01-11T22:10:10.0601701Z [ RUN ] LazyOpsTest.TestLayerNorm 2023-01-11T22:10:10.0637541Z [ OK ] LazyOpsTest.TestLayerNorm (3 ms) 2023-01-11T22:10:10.0638021Z [ RUN ] LazyOpsTest.TestLayerNormBackward 2023-01-11T22:10:10.0884852Z [ OK ] LazyOpsTest.TestLayerNormBackward (24 ms) 2023-01-11T22:10:10.0885547Z [ RUN ] LazyOpsTest.TestNuclearNorm 2023-01-11T22:10:10.0886822Z [W LinearAlgebra.cpp:2837] Warning: at::nuclear_norm is deprecated and it is just left for JIT compatibility. It will be removed in a future PyTorch release. Please use `linalg.matrix_norm(A, 'nuc', dim, keepdim)` instead (function operator()) 2023-01-11T22:10:10.0894524Z [ OK ] LazyOpsTest.TestNuclearNorm (1 ms) 2023-01-11T22:10:10.0895328Z [ RUN ] LazyOpsTest.TestPairwiseDistance 2023-01-11T22:10:10.1566296Z [ OK ] LazyOpsTest.TestPairwiseDistance (67 ms) 2023-01-11T22:10:10.1566699Z [ RUN ] LazyOpsTest.TestCosineSimilarity 2023-01-11T22:10:10.1611543Z [ OK ] LazyOpsTest.TestCosineSimilarity (4 ms) 2023-01-11T22:10:10.1612088Z [ RUN ] LazyOpsTest.TestCosineEmbeddingLoss 2023-01-11T22:10:10.2233941Z [ OK ] LazyOpsTest.TestCosineEmbeddingLoss (62 ms) 2023-01-11T22:10:10.2234293Z [ RUN ] LazyOpsTest.TestHingeEmbeddingLoss 2023-01-11T22:10:10.2271288Z [ OK ] LazyOpsTest.TestHingeEmbeddingLoss (3 ms) 2023-01-11T22:10:10.2271644Z [ RUN ] LazyOpsTest.TestTripletMarginLoss 2023-01-11T22:10:10.5070552Z [ OK ] LazyOpsTest.TestTripletMarginLoss (279 ms) 2023-01-11T22:10:10.5071334Z [ RUN ] LazyOpsTest.TestBinaryCrossEntropy 2023-01-11T22:10:10.5090759Z [ OK ] LazyOpsTest.TestBinaryCrossEntropy (2 ms) 2023-01-11T22:10:10.5091348Z [ RUN ] LazyOpsTest.TestMarginRankingLoss 2023-01-11T22:10:10.5791283Z [ OK ] LazyOpsTest.TestMarginRankingLoss (69 ms) 2023-01-11T22:10:10.5791975Z [ RUN ] LazyOpsTest.TestBCEWithLogits 2023-01-11T22:10:10.5806372Z [ OK ] LazyOpsTest.TestBCEWithLogits (1 ms) 2023-01-11T22:10:10.5806998Z [ RUN ] LazyOpsTest.TestKlDiv 2023-01-11T22:10:10.5827920Z [ OK ] LazyOpsTest.TestKlDiv (2 ms) 2023-01-11T22:10:10.5828433Z [ RUN ] LazyOpsTest.TestProd 2023-01-11T22:10:10.5828899Z [ OK ] LazyOpsTest.TestProd (0 ms) 2023-01-11T22:10:10.5829365Z [ RUN ] LazyOpsTest.TestProdCast 2023-01-11T22:10:10.5830115Z [ OK ] LazyOpsTest.TestProdCast (0 ms) 2023-01-11T22:10:10.5830441Z [ RUN ] LazyOpsTest.TestProdInDim 
2023-01-11T22:10:10.5834671Z [ OK ] LazyOpsTest.TestProdInDim (0 ms) 2023-01-11T22:10:10.5835024Z [ RUN ] LazyOpsTest.TestProdInDimKeepCast 2023-01-11T22:10:10.5838828Z [ OK ] LazyOpsTest.TestProdInDimKeepCast (0 ms) 2023-01-11T22:10:10.5839158Z [ RUN ] LazyOpsTest.TestProdInDimKeep 2023-01-11T22:10:10.5843274Z [ OK ] LazyOpsTest.TestProdInDimKeep (0 ms) 2023-01-11T22:10:10.5843589Z [ RUN ] LazyOpsTest.TestCumSum 2023-01-11T22:10:10.5857623Z [ OK ] LazyOpsTest.TestCumSum (1 ms) 2023-01-11T22:10:10.5858002Z [ RUN ] LazyOpsTest.TestCumSumCast 2023-01-11T22:10:10.5873515Z [ OK ] LazyOpsTest.TestCumSumCast (1 ms) 2023-01-11T22:10:10.5873864Z [ RUN ] LazyOpsTest.TestCumSumLong 2023-01-11T22:10:10.5886257Z [ OK ] LazyOpsTest.TestCumSumLong (1 ms) 2023-01-11T22:10:10.5886594Z [ RUN ] LazyOpsTest.TestCumSumCastLong 2023-01-11T22:10:10.5898979Z [ OK ] LazyOpsTest.TestCumSumCastLong (1 ms) 2023-01-11T22:10:10.5899293Z [ RUN ] LazyOpsTest.TestCumProd 2023-01-11T22:10:10.5903741Z [ OK ] LazyOpsTest.TestCumProd (0 ms) 2023-01-11T22:10:10.5904067Z [ RUN ] LazyOpsTest.TestCumProdCast 2023-01-11T22:10:10.5908008Z [ OK ] LazyOpsTest.TestCumProdCast (0 ms) 2023-01-11T22:10:10.5908319Z [ RUN ] LazyOpsTest.TestCumProdLong 2023-01-11T22:10:10.5916799Z [ OK ] LazyOpsTest.TestCumProdLong (0 ms) 2023-01-11T22:10:10.5917148Z [ RUN ] LazyOpsTest.TestCumProdCastLong 2023-01-11T22:10:10.5925835Z [ OK ] LazyOpsTest.TestCumProdCastLong (0 ms) 2023-01-11T22:10:10.5926325Z [ RUN ] LazyOpsTest.TestArgMin 2023-01-11T22:10:10.5927045Z [ OK ] LazyOpsTest.TestArgMin (0 ms) 2023-01-11T22:10:10.5927354Z [ RUN ] LazyOpsTest.TestArgMinDim 2023-01-11T22:10:10.5928982Z [ OK ] LazyOpsTest.TestArgMinDim (0 ms) 2023-01-11T22:10:10.5929505Z [ RUN ] LazyOpsTest.TestArgMinDimKeep 2023-01-11T22:10:10.5929990Z [ OK ] LazyOpsTest.TestArgMinDimKeep (0 ms) 2023-01-11T22:10:10.5930307Z [ RUN ] LazyOpsTest.TestArgMinSameValue 2023-01-11T22:10:10.5930651Z [ OK ] LazyOpsTest.TestArgMinSameValue (0 ms) 2023-01-11T22:10:10.5930979Z [ RUN ] LazyOpsTest.TestArgMinWrapper 2023-01-11T22:10:10.5931707Z [ OK ] LazyOpsTest.TestArgMinWrapper (0 ms) 2023-01-11T22:10:10.5932088Z [ RUN ] LazyOpsTest.TestArgMax 2023-01-11T22:10:10.5932390Z [ OK ] LazyOpsTest.TestArgMax (0 ms) 2023-01-11T22:10:10.5932690Z [ RUN ] LazyOpsTest.TestArgMaxDim 2023-01-11T22:10:10.5933556Z [ OK ] LazyOpsTest.TestArgMaxDim (0 ms) 2023-01-11T22:10:10.5933961Z [ RUN ] LazyOpsTest.TestArgMaxDimKeep 2023-01-11T22:10:10.5934905Z [ OK ] LazyOpsTest.TestArgMaxDimKeep (0 ms) 2023-01-11T22:10:10.5935468Z [ RUN ] LazyOpsTest.TestArgMaxSameValue 2023-01-11T22:10:10.5936063Z [ OK ] LazyOpsTest.TestArgMaxSameValue (0 ms) 2023-01-11T22:10:10.5936440Z [ RUN ] LazyOpsTest.TestArgMaxWrapper 2023-01-11T22:10:10.5937102Z [ OK ] LazyOpsTest.TestArgMaxWrapper (0 ms) 2023-01-11T22:10:10.5937618Z [ RUN ] LazyOpsTest.TestAsin 2023-01-11T22:10:10.5938667Z [ OK ] LazyOpsTest.TestAsin (0 ms) 2023-01-11T22:10:10.5939193Z [ RUN ] LazyOpsTest.TestAsinh 2023-01-11T22:10:10.5940338Z [ OK ] LazyOpsTest.TestAsinh (0 ms) 2023-01-11T22:10:10.5940889Z [ RUN ] LazyOpsTest.TestAsinhInPlace 2023-01-11T22:10:10.5942044Z [ OK ] LazyOpsTest.TestAsinhInPlace (0 ms) 2023-01-11T22:10:10.5942739Z [ RUN ] LazyOpsTest.TestSin 2023-01-11T22:10:10.5943720Z [ OK ] LazyOpsTest.TestSin (0 ms) 2023-01-11T22:10:10.5944254Z [ RUN ] LazyOpsTest.TestSinh 2023-01-11T22:10:10.5945037Z [ OK ] LazyOpsTest.TestSinh (0 ms) 2023-01-11T22:10:10.5945533Z [ RUN ] LazyOpsTest.TestAcos 2023-01-11T22:10:10.5946431Z [ OK ] LazyOpsTest.TestAcos (0 ms) 
2023-01-11T22:10:10.5946957Z [ RUN ] LazyOpsTest.TestAcosh 2023-01-11T22:10:10.5947685Z [ OK ] LazyOpsTest.TestAcosh (0 ms) 2023-01-11T22:10:10.5948228Z [ RUN ] LazyOpsTest.TestAcoshInPlace 2023-01-11T22:10:10.5949294Z [ OK ] LazyOpsTest.TestAcoshInPlace (0 ms) 2023-01-11T22:10:10.5949827Z [ RUN ] LazyOpsTest.TestCos 2023-01-11T22:10:10.5952511Z [ OK ] LazyOpsTest.TestCos (0 ms) 2023-01-11T22:10:10.5953020Z [ RUN ] LazyOpsTest.TestCosh 2023-01-11T22:10:10.5953601Z [ OK ] LazyOpsTest.TestCosh (0 ms) 2023-01-11T22:10:10.5954125Z [ RUN ] LazyOpsTest.TestAtan 2023-01-11T22:10:10.5954982Z [ OK ] LazyOpsTest.TestAtan (0 ms) 2023-01-11T22:10:10.5955633Z [ RUN ] LazyOpsTest.TestAtanh 2023-01-11T22:10:10.5956433Z [ OK ] LazyOpsTest.TestAtanh (0 ms) 2023-01-11T22:10:10.5956982Z [ RUN ] LazyOpsTest.TestAtanhInPlace 2023-01-11T22:10:10.5958106Z [ OK ] LazyOpsTest.TestAtanhInPlace (0 ms) 2023-01-11T22:10:10.5958648Z [ RUN ] LazyOpsTest.TestAtan2 2023-01-11T22:10:10.5959644Z [ OK ] LazyOpsTest.TestAtan2 (0 ms) 2023-01-11T22:10:10.5960137Z [ RUN ] LazyOpsTest.TestTan 2023-01-11T22:10:10.5960943Z [ OK ] LazyOpsTest.TestTan (0 ms) 2023-01-11T22:10:10.5961456Z [ RUN ] LazyOpsTest.TestTanh 2023-01-11T22:10:10.5964366Z [ OK ] LazyOpsTest.TestTanh (0 ms) 2023-01-11T22:10:10.5964933Z [ RUN ] LazyOpsTest.TestClampMinMax 2023-01-11T22:10:10.5967864Z [ OK ] LazyOpsTest.TestClampMinMax (0 ms) 2023-01-11T22:10:10.5968513Z [ RUN ] LazyOpsTest.TestClampMin 2023-01-11T22:10:10.5971177Z [ OK ] LazyOpsTest.TestClampMin (0 ms) 2023-01-11T22:10:10.5971730Z [ RUN ] LazyOpsTest.TestClampMax 2023-01-11T22:10:10.5974221Z [ OK ] LazyOpsTest.TestClampMax (0 ms) 2023-01-11T22:10:10.5974795Z [ RUN ] LazyOpsTest.TestClampMinExplicit 2023-01-11T22:10:10.5977084Z [ OK ] LazyOpsTest.TestClampMinExplicit (0 ms) 2023-01-11T22:10:10.5977681Z [ RUN ] LazyOpsTest.TestClampMaxExplicit 2023-01-11T22:10:10.5978356Z [ OK ] LazyOpsTest.TestClampMaxExplicit (0 ms) 2023-01-11T22:10:10.5978954Z [ RUN ] LazyOpsTest.TestClampMinExplicitInPlace 2023-01-11T22:10:10.5981859Z [ OK ] LazyOpsTest.TestClampMinExplicitInPlace (0 ms) 2023-01-11T22:10:10.5982605Z [ RUN ] LazyOpsTest.TestClampMaxExplicitInPlace 2023-01-11T22:10:10.5983537Z [ OK ] LazyOpsTest.TestClampMaxExplicitInPlace (0 ms) 2023-01-11T22:10:10.5984102Z [ RUN ] LazyOpsTest.TestCeil 2023-01-11T22:10:10.5984808Z [ OK ] LazyOpsTest.TestCeil (0 ms) 2023-01-11T22:10:10.5985332Z [ RUN ] LazyOpsTest.TestFloor 2023-01-11T22:10:10.5987659Z [ OK ] LazyOpsTest.TestFloor (0 ms) 2023-01-11T22:10:10.5987977Z [ RUN ] LazyOpsTest.TestRound 2023-01-11T22:10:10.5989402Z [ OK ] LazyOpsTest.TestRound (0 ms) 2023-01-11T22:10:10.5989913Z [ RUN ] LazyOpsTest.TestTrunc 2023-01-11T22:10:10.5992815Z [ OK ] LazyOpsTest.TestTrunc (0 ms) 2023-01-11T22:10:10.5993351Z [ RUN ] LazyOpsTest.TestFrac 2023-01-11T22:10:10.5995960Z [ OK ] LazyOpsTest.TestFrac (0 ms) 2023-01-11T22:10:10.5996463Z [ RUN ] LazyOpsTest.TestNeg 2023-01-11T22:10:10.5999029Z [ OK ] LazyOpsTest.TestNeg (0 ms) 2023-01-11T22:10:10.5999634Z [ RUN ] LazyOpsTest.TestBitwiseNot 2023-01-11T22:10:10.6001437Z [ OK ] LazyOpsTest.TestBitwiseNot (0 ms) 2023-01-11T22:10:10.6002029Z [ RUN ] LazyOpsTest.TestBitwiseNotInPlace 2023-01-11T22:10:10.6003512Z [ OK ] LazyOpsTest.TestBitwiseNotInPlace (0 ms) 2023-01-11T22:10:10.6004058Z [ RUN ] LazyOpsTest.TestSign 2023-01-11T22:10:10.6004798Z [ OK ] LazyOpsTest.TestSign (0 ms) 2023-01-11T22:10:10.6005332Z [ RUN ] LazyOpsTest.TestSignByte 2023-01-11T22:10:10.6005824Z [ OK ] LazyOpsTest.TestSignByte (0 ms) 
2023-01-11T22:10:10.6006096Z [ RUN ] LazyOpsTest.TestAbs 2023-01-11T22:10:10.6008278Z [ OK ] LazyOpsTest.TestAbs (0 ms) 2023-01-11T22:10:10.6008829Z [ RUN ] LazyOpsTest.TestAbsByte 2023-01-11T22:10:10.6010571Z [ OK ] LazyOpsTest.TestAbsByte (0 ms) 2023-01-11T22:10:10.6011126Z [ RUN ] LazyOpsTest.TestEmptyLike 2023-01-11T22:10:10.6011687Z [ OK ] LazyOpsTest.TestEmptyLike (0 ms) 2023-01-11T22:10:10.6012397Z [ RUN ] LazyOpsTest.TestEmptyLikeOptions 2023-01-11T22:10:10.6012761Z [ OK ] LazyOpsTest.TestEmptyLikeOptions (0 ms) 2023-01-11T22:10:10.6013073Z [ RUN ] LazyOpsTest.TestEmpty 2023-01-11T22:10:10.6013368Z [ OK ] LazyOpsTest.TestEmpty (0 ms) 2023-01-11T22:10:10.6013843Z [ RUN ] LazyOpsTest.TestZeroInPlace 2023-01-11T22:10:10.6014176Z [ OK ] LazyOpsTest.TestZeroInPlace (0 ms) 2023-01-11T22:10:10.6014537Z [ RUN ] LazyOpsTest.TestZerosLike 2023-01-11T22:10:10.6015074Z [ OK ] LazyOpsTest.TestZerosLike (0 ms) 2023-01-11T22:10:10.6015476Z [ RUN ] LazyOpsTest.TestZerosLikeOptions 2023-01-11T22:10:10.6016096Z [ OK ] LazyOpsTest.TestZerosLikeOptions (0 ms) 2023-01-11T22:10:10.6016409Z [ RUN ] LazyOpsTest.TestZeros 2023-01-11T22:10:10.6019237Z [ OK ] LazyOpsTest.TestZeros (0 ms) 2023-01-11T22:10:10.6019967Z [ RUN ] LazyOpsTest.TestOnes 2023-01-11T22:10:10.6023006Z [ OK ] LazyOpsTest.TestOnes (0 ms) 2023-01-11T22:10:10.6023542Z [ RUN ] LazyOpsTest.TestOnesLike 2023-01-11T22:10:10.6024216Z [ OK ] LazyOpsTest.TestOnesLike (0 ms) 2023-01-11T22:10:10.6024583Z [ RUN ] LazyOpsTest.TestOnesLikeOptions 2023-01-11T22:10:10.6024932Z [ OK ] LazyOpsTest.TestOnesLikeOptions (0 ms) 2023-01-11T22:10:10.6025223Z [ RUN ] LazyOpsTest.TestFull 2023-01-11T22:10:10.6029384Z [ OK ] LazyOpsTest.TestFull (0 ms) 2023-01-11T22:10:10.6029768Z [ RUN ] LazyOpsTest.TestFullLike 2023-01-11T22:10:10.6030578Z [ OK ] LazyOpsTest.TestFullLike (0 ms) 2023-01-11T22:10:10.6030976Z [ RUN ] LazyOpsTest.TestFullLikeOptions 2023-01-11T22:10:10.6040326Z [ OK ] LazyOpsTest.TestFullLikeOptions (0 ms) 2023-01-11T22:10:10.6040810Z [ RUN ] LazyOpsTest.TestARange 2023-01-11T22:10:10.6041287Z [ OK ] LazyOpsTest.TestARange (0 ms) 2023-01-11T22:10:10.6041601Z [ RUN ] LazyOpsTest.TestARangeOut 2023-01-11T22:10:10.6042086Z [W RangeFactories.cpp:216] Warning: The number of elements in the out tensor of shape [4] is 4 which does not match the computed number of elements 200. Note that this may occur as a result of rounding error. The out tensor will be resized to a tensor of shape (200,). 
(function operator()) 2023-01-11T22:10:10.6042601Z [ OK ] LazyOpsTest.TestARangeOut (0 ms) 2023-01-11T22:10:10.6042906Z [ RUN ] LazyOpsTest.TestDimARange 2023-01-11T22:10:10.6045079Z [ OK ] LazyOpsTest.TestDimARange (0 ms) 2023-01-11T22:10:10.6045401Z [ RUN ] LazyOpsTest.TestBartlettWindow 2023-01-11T22:10:10.6072191Z [ OK ] LazyOpsTest.TestBartlettWindow (2 ms) 2023-01-11T22:10:10.6072521Z [ RUN ] LazyOpsTest.TestBlackmanWindow 2023-01-11T22:10:10.6094493Z [ OK ] LazyOpsTest.TestBlackmanWindow (2 ms) 2023-01-11T22:10:10.6094861Z [ RUN ] LazyOpsTest.TestHammingWindow 2023-01-11T22:10:10.6110672Z [ OK ] LazyOpsTest.TestHammingWindow (1 ms) 2023-01-11T22:10:10.6111093Z [ RUN ] LazyOpsTest.TestHannWindow 2023-01-11T22:10:10.6690307Z [ OK ] LazyOpsTest.TestHannWindow (57 ms) 2023-01-11T22:10:10.6690723Z [ RUN ] LazyOpsTest.TestLogSigmoid 2023-01-11T22:10:10.6693499Z [ OK ] LazyOpsTest.TestLogSigmoid (0 ms) 2023-01-11T22:10:10.6693929Z [ RUN ] LazyOpsTest.TestLogSigmoidForward 2023-01-11T22:10:10.6699937Z [ OK ] LazyOpsTest.TestLogSigmoidForward (0 ms) 2023-01-11T22:10:10.6700385Z [ RUN ] LazyOpsTest.TestLogsumexp 2023-01-11T22:10:10.6738969Z [ OK ] LazyOpsTest.TestLogsumexp (3 ms) 2023-01-11T22:10:10.6739330Z [ RUN ] LazyOpsTest.TestSiLU 2023-01-11T22:10:10.6742866Z [ OK ] LazyOpsTest.TestSiLU (0 ms) 2023-01-11T22:10:10.6743408Z [ RUN ] LazyOpsTest.TestSigmoid 2023-01-11T22:10:10.6745934Z [ OK ] LazyOpsTest.TestSigmoid (0 ms) 2023-01-11T22:10:10.6746235Z [ RUN ] LazyOpsTest.TestMatmul_1x1 2023-01-11T22:10:10.6747828Z [ OK ] LazyOpsTest.TestMatmul_1x1 (0 ms) 2023-01-11T22:10:10.6748134Z [ RUN ] LazyOpsTest.TestMatmul_2x1 2023-01-11T22:10:10.6751738Z [ OK ] LazyOpsTest.TestMatmul_2x1 (0 ms) 2023-01-11T22:10:10.6752059Z [ RUN ] LazyOpsTest.TestMatmul_1x2 2023-01-11T22:10:10.6757327Z [ OK ] LazyOpsTest.TestMatmul_1x2 (0 ms) 2023-01-11T22:10:10.6757641Z [ RUN ] LazyOpsTest.TestMatmul_2x2 2023-01-11T22:10:10.6769589Z [ OK ] LazyOpsTest.TestMatmul_2x2 (0 ms) 2023-01-11T22:10:10.6769987Z [ RUN ] LazyOpsTest.TestMatmulBcast 2023-01-11T22:10:10.6770396Z [ OK ] LazyOpsTest.TestMatmulBcast (0 ms) 2023-01-11T22:10:10.6770880Z [ RUN ] LazyOpsTest.TestDot 2023-01-11T22:10:10.6771686Z [ OK ] LazyOpsTest.TestDot (0 ms) 2023-01-11T22:10:10.6772056Z [ RUN ] LazyOpsTest.TestTensorDot 2023-01-11T22:10:10.6823058Z [ OK ] LazyOpsTest.TestTensorDot (0 ms) 2023-01-11T22:10:10.6823453Z [ RUN ] LazyOpsTest.TestGer 2023-01-11T22:10:10.6823814Z [ OK ] LazyOpsTest.TestGer (1 ms) 2023-01-11T22:10:10.6824144Z [ RUN ] LazyOpsTest.TestMv 2023-01-11T22:10:10.6824502Z [ OK ] LazyOpsTest.TestMv (0 ms) 2023-01-11T22:10:10.6824860Z [ RUN ] LazyOpsTest.TestMvOut 2023-01-11T22:10:10.6825229Z [ OK ] LazyOpsTest.TestMvOut (1 ms) 2023-01-11T22:10:10.6825628Z [ RUN ] LazyOpsTest.TestBatchAddBatchMatMul 2023-01-11T22:10:10.6826085Z [ OK ] LazyOpsTest.TestBatchAddBatchMatMul (0 ms) 2023-01-11T22:10:10.6826567Z [ RUN ] LazyOpsTest.TestBatchAddBatchMatMulInPlace 2023-01-11T22:10:10.6827073Z [ OK ] LazyOpsTest.TestBatchAddBatchMatMulInPlace (0 ms) 2023-01-11T22:10:10.6827519Z [ RUN ] LazyOpsTest.TestBatchMatMul 2023-01-11T22:10:10.6827923Z [ OK ] LazyOpsTest.TestBatchMatMul (0 ms) 2023-01-11T22:10:10.6828306Z [ RUN ] LazyOpsTest.TestChainMatMul 2023-01-11T22:10:10.6828926Z [W LinearAlgebra.cpp:1077] Warning: torch.chain_matmul is deprecated and will be removed in a future PyTorch release. Use torch.linalg.multi_dot instead, which accepts a list of two or more tensors rather than multiple parameters. 
(function operator()) 2023-01-11T22:10:10.6862906Z [ OK ] LazyOpsTest.TestChainMatMul (0 ms) 2023-01-11T22:10:10.6863301Z [ RUN ] LazyOpsTest.TestLinear 2023-01-11T22:10:10.6863707Z [ OK ] LazyOpsTest.TestLinear (0 ms) 2023-01-11T22:10:10.6864100Z [ RUN ] LazyOpsTest.TestPinverse 2023-01-11T22:10:10.6864511Z [ OK ] LazyOpsTest.TestPinverse (1 ms) 2023-01-11T22:10:10.6864821Z [ RUN ] LazyOpsTest.TestEinsumOuter 2023-01-11T22:10:10.6865126Z [ OK ] LazyOpsTest.TestEinsumOuter (0 ms) 2023-01-11T22:10:10.6865459Z [ RUN ] LazyOpsTest.TestEinsumOuterBackward 2023-01-11T22:10:10.6888037Z [ OK ] LazyOpsTest.TestEinsumOuterBackward (2 ms) 2023-01-11T22:10:10.6888546Z [ RUN ] LazyOpsTest.TestEinsumBatchMatMul 2023-01-11T22:10:10.6898763Z [ OK ] LazyOpsTest.TestEinsumBatchMatMul (1 ms) 2023-01-11T22:10:10.6899343Z [ RUN ] LazyOpsTest.TestEinsumPyTorchLowerBilinear 2023-01-11T22:10:10.6943163Z [ OK ] LazyOpsTest.TestEinsumPyTorchLowerBilinear (1 ms) 2023-01-11T22:10:10.6944674Z [ RUN ] LazyOpsTest.TestEinsumPyTorchLowerDiagonal 2023-01-11T22:10:10.6945145Z [ OK ] LazyOpsTest.TestEinsumPyTorchLowerDiagonal (0 ms) 2023-01-11T22:10:10.6945617Z [ RUN ] LazyOpsTest.TestEinsumPyTorchLowerBatchDiagonal 2023-01-11T22:10:10.6946203Z [ OK ] LazyOpsTest.TestEinsumPyTorchLowerBatchDiagonal (0 ms) 2023-01-11T22:10:10.6946695Z [ RUN ] LazyOpsTest.TestEinsumPyTorchLowerBatchPermute 2023-01-11T22:10:10.6947139Z [ OK ] LazyOpsTest.TestEinsumPyTorchLowerBatchPermute (0 ms) 2023-01-11T22:10:10.6947593Z [ RUN ] LazyOpsTest.TestEinsumPyTorchLowerRepeatedAxis 2023-01-11T22:10:10.6948077Z [ OK ] LazyOpsTest.TestEinsumPyTorchLowerRepeatedAxis (0 ms) 2023-01-11T22:10:10.6948431Z [ RUN ] LazyOpsTest.TestBilinear 2023-01-11T22:10:10.7073359Z [ OK ] LazyOpsTest.TestBilinear (13 ms) 2023-01-11T22:10:10.7073940Z [ RUN ] LazyOpsTest.TestUpsampleNearest2D 2023-01-11T22:10:10.7078258Z [ OK ] LazyOpsTest.TestUpsampleNearest2D (0 ms) 2023-01-11T22:10:10.7078664Z [ RUN ] LazyOpsTest.TestUpsampleNearest2DBackward 2023-01-11T22:10:10.7088859Z [ OK ] LazyOpsTest.TestUpsampleNearest2DBackward (1 ms) 2023-01-11T22:10:10.7089330Z [ RUN ] LazyOpsTest.TestUpsampleNearest2DWithScale 2023-01-11T22:10:10.7093125Z [ OK ] LazyOpsTest.TestUpsampleNearest2DWithScale (0 ms) 2023-01-11T22:10:10.7093687Z [ RUN ] LazyOpsTest.TestUpsampleNearest2DBackwardWithScale 2023-01-11T22:10:10.7103106Z [ OK ] LazyOpsTest.TestUpsampleNearest2DBackwardWithScale (0 ms) 2023-01-11T22:10:10.7103619Z [ RUN ] LazyOpsTest.TestUpsampleBilinear2D 2023-01-11T22:10:10.7111280Z [ OK ] LazyOpsTest.TestUpsampleBilinear2D (0 ms) 2023-01-11T22:10:10.7111725Z [ RUN ] LazyOpsTest.TestUpsampleBilinear2DBackward 2023-01-11T22:10:10.7129236Z [ OK ] LazyOpsTest.TestUpsampleBilinear2DBackward (1 ms) 2023-01-11T22:10:10.7129645Z [ RUN ] LazyOpsTest.TestAddCMul 2023-01-11T22:10:10.7132666Z [ OK ] LazyOpsTest.TestAddCMul (0 ms) 2023-01-11T22:10:10.7133218Z [ RUN ] LazyOpsTest.TestAddCDiv 2023-01-11T22:10:10.7136238Z [ OK ] LazyOpsTest.TestAddCDiv (0 ms) 2023-01-11T22:10:10.7136815Z [ RUN ] LazyOpsTest.TestAddCDivWithBroadcast 2023-01-11T22:10:10.7139747Z [ OK ] LazyOpsTest.TestAddCDivWithBroadcast (0 ms) 2023-01-11T22:10:10.7140103Z [ RUN ] LazyOpsTest.TestSize 2023-01-11T22:10:10.7140440Z [ OK ] LazyOpsTest.TestSize (0 ms) 2023-01-11T22:10:10.7140714Z [ RUN ] LazyOpsTest.TestSelect 2023-01-11T22:10:10.7222188Z [ OK ] LazyOpsTest.TestSelect (8 ms) 2023-01-11T22:10:10.7222663Z [ RUN ] LazyOpsTest.TestBernoulliScalarProb 2023-01-11T22:10:10.7227647Z [ OK ] LazyOpsTest.TestBernoulliScalarProb (0 
ms) 2023-01-11T22:10:10.7228049Z [ RUN ] LazyOpsTest.TestBernoulliTensorProb 2023-01-11T22:10:10.7231694Z [ OK ] LazyOpsTest.TestBernoulliTensorProb (0 ms) 2023-01-11T22:10:10.7232208Z [ RUN ] LazyOpsTest.TestBernoulliScalarProbInPlace 2023-01-11T22:10:10.7236002Z [ OK ] LazyOpsTest.TestBernoulliScalarProbInPlace (0 ms) 2023-01-11T22:10:10.7236508Z [ RUN ] LazyOpsTest.TestBernoulliTensorProbInPlace 2023-01-11T22:10:10.7240465Z [ OK ] LazyOpsTest.TestBernoulliTensorProbInPlace (0 ms) 2023-01-11T22:10:10.7240887Z [ RUN ] LazyOpsTest.TestDropout 2023-01-11T22:10:10.7244529Z [ OK ] LazyOpsTest.TestDropout (0 ms) 2023-01-11T22:10:10.7244908Z [ RUN ] LazyOpsTest.TestDropoutInPlace 2023-01-11T22:10:10.7249935Z [ OK ] LazyOpsTest.TestDropoutInPlace (0 ms) 2023-01-11T22:10:10.7250301Z [ RUN ] LazyOpsTest.TestRandperm 2023-01-11T22:10:10.7252754Z [ OK ] LazyOpsTest.TestRandperm (0 ms) 2023-01-11T22:10:10.7253290Z [ RUN ] LazyOpsTest.TestSlice 2023-01-11T22:10:10.7262180Z [ OK ] LazyOpsTest.TestSlice (0 ms) 2023-01-11T22:10:10.7263461Z [ RUN ] LazyOpsTest.TestTake 2023-01-11T22:10:10.7263837Z [ OK ] LazyOpsTest.TestTake (0 ms) 2023-01-11T22:10:10.7264311Z [ RUN ] LazyOpsTest.TestTakeBackward 2023-01-11T22:10:10.7274682Z [ OK ] LazyOpsTest.TestTakeBackward (1 ms) 2023-01-11T22:10:10.7275092Z [ RUN ] LazyOpsTest.TestStack 2023-01-11T22:10:10.7299164Z [ OK ] LazyOpsTest.TestStack (2 ms) 2023-01-11T22:10:10.7299447Z [ RUN ] LazyOpsTest.TestCat 2023-01-11T22:10:10.7306664Z [ OK ] LazyOpsTest.TestCat (0 ms) 2023-01-11T22:10:10.7306948Z [ RUN ] LazyOpsTest.TestUnbind 2023-01-11T22:10:10.7318818Z [ OK ] LazyOpsTest.TestUnbind (1 ms) 2023-01-11T22:10:10.7319111Z [ RUN ] LazyOpsTest.TestRepeat 2023-01-11T22:10:10.7330201Z [ OK ] LazyOpsTest.TestRepeat (1 ms) 2023-01-11T22:10:10.7330494Z [ RUN ] LazyOpsTest.TestGather 2023-01-11T22:10:10.7337211Z [ OK ] LazyOpsTest.TestGather (0 ms) 2023-01-11T22:10:10.7337498Z [ RUN ] LazyOpsTest.TestScatter 2023-01-11T22:10:10.7340645Z [ OK ] LazyOpsTest.TestScatter (0 ms) 2023-01-11T22:10:10.7340997Z [ RUN ] LazyOpsTest.TestScatterR1 2023-01-11T22:10:10.7342553Z [ OK ] LazyOpsTest.TestScatterR1 (0 ms) 2023-01-11T22:10:10.7342876Z [ RUN ] LazyOpsTest.TestScatterR3 2023-01-11T22:10:10.7345114Z [ OK ] LazyOpsTest.TestScatterR3 (0 ms) 2023-01-11T22:10:10.7345450Z [ RUN ] LazyOpsTest.TestScatterBiggerSource 2023-01-11T22:10:10.7348423Z [ OK ] LazyOpsTest.TestScatterBiggerSource (0 ms) 2023-01-11T22:10:10.7348770Z [ RUN ] LazyOpsTest.TestScatterScalar 2023-01-11T22:10:10.7351069Z [ OK ] LazyOpsTest.TestScatterScalar (0 ms) 2023-01-11T22:10:10.7351603Z [ RUN ] LazyOpsTest.TestScatterReduceAdd 2023-01-11T22:10:10.7353992Z [ OK ] LazyOpsTest.TestScatterReduceAdd (0 ms) 2023-01-11T22:10:10.7354385Z [ RUN ] LazyOpsTest.TestScatterAdd 2023-01-11T22:10:10.7360581Z [ OK ] LazyOpsTest.TestScatterAdd (0 ms) 2023-01-11T22:10:10.7360912Z [ RUN ] LazyOpsTest.TestScatterAddInPlace 2023-01-11T22:10:10.7368112Z [ OK ] LazyOpsTest.TestScatterAddInPlace (0 ms) 2023-01-11T22:10:10.7368466Z [ RUN ] LazyOpsTest.TestIndexSelect 2023-01-11T22:10:10.7458407Z [ OK ] LazyOpsTest.TestIndexSelect (8 ms) 2023-01-11T22:10:10.7458763Z [ RUN ] LazyOpsTest.TestIndexSelectRank0 2023-01-11T22:10:10.7482179Z [ OK ] LazyOpsTest.TestIndexSelectRank0 (2 ms) 2023-01-11T22:10:10.7482877Z [ RUN ] LazyOpsTest.TestInverse 2023-01-11T22:10:10.7488346Z [ OK ] LazyOpsTest.TestInverse (0 ms) 2023-01-11T22:10:10.7488872Z [ RUN ] LazyOpsTest.TestIsnan 2023-01-11T22:10:10.7489388Z [ OK ] LazyOpsTest.TestIsnan (0 ms) 
2023-01-11T22:10:10.7489858Z [ RUN ] LazyOpsTest.TestExpand 2023-01-11T22:10:10.7492782Z [ OK ] LazyOpsTest.TestExpand (0 ms) 2023-01-11T22:10:10.7493308Z [ RUN ] LazyOpsTest.TestExpandBack 2023-01-11T22:10:10.7497088Z [ OK ] LazyOpsTest.TestExpandBack (0 ms) 2023-01-11T22:10:10.7497639Z [ RUN ] LazyOpsTest.TestExpandAs 2023-01-11T22:10:10.7501742Z [ OK ] LazyOpsTest.TestExpandAs (0 ms) 2023-01-11T22:10:10.7502247Z [ RUN ] LazyOpsTest.TestEye 2023-01-11T22:10:10.7502888Z [ OK ] LazyOpsTest.TestEye (0 ms) 2023-01-11T22:10:10.7503338Z [ RUN ] LazyOpsTest.TestEyeWide 2023-01-11T22:10:10.7504067Z [ OK ] LazyOpsTest.TestEyeWide (0 ms) 2023-01-11T22:10:10.7504534Z [ RUN ] LazyOpsTest.TestEyeNarrow 2023-01-11T22:10:10.7505097Z [ OK ] LazyOpsTest.TestEyeNarrow (0 ms) 2023-01-11T22:10:10.7505824Z [ RUN ] LazyOpsTest.TestBroadcastTensors 2023-01-11T22:10:10.7511807Z [ OK ] LazyOpsTest.TestBroadcastTensors (0 ms) 2023-01-11T22:10:10.7512239Z [ RUN ] LazyOpsTest.TestOneIndex 2023-01-11T22:10:10.7582896Z [ OK ] LazyOpsTest.TestOneIndex (1 ms) 2023-01-11T22:10:10.7583390Z [ RUN ] LazyOpsTest.TestOneIndexTransfer 2023-01-11T22:10:10.7584003Z [ OK ] LazyOpsTest.TestOneIndexTransfer (1 ms) 2023-01-11T22:10:10.7584520Z [ RUN ] LazyOpsTest.TestNonzero 2023-01-11T22:10:10.7585015Z [ OK ] LazyOpsTest.TestNonzero (0 ms) 2023-01-11T22:10:10.7585527Z [ RUN ] LazyOpsTest.TestMaskedSelect 2023-01-11T22:10:10.7586074Z [ OK ] LazyOpsTest.TestMaskedSelect (0 ms) 2023-01-11T22:10:10.7586551Z [ RUN ] LazyOpsTest.TestMaskedScatter 2023-01-11T22:10:10.7586956Z [ OK ] LazyOpsTest.TestMaskedScatter (0 ms) 2023-01-11T22:10:10.7587419Z [ RUN ] LazyOpsTest.TestMultiIndexHeadNull 2023-01-11T22:10:10.7587766Z [ OK ] LazyOpsTest.TestMultiIndexHeadNull (0 ms) 2023-01-11T22:10:10.7588120Z [ RUN ] LazyOpsTest.TestMultiIndexMiddleNull 2023-01-11T22:10:10.7588480Z [ OK ] LazyOpsTest.TestMultiIndexMiddleNull (0 ms) 2023-01-11T22:10:10.7588831Z [ RUN ] LazyOpsTest.TestMultiIndexTailNull 2023-01-11T22:10:10.7589168Z [ OK ] LazyOpsTest.TestMultiIndexTailNull (0 ms) 2023-01-11T22:10:10.7589535Z [ RUN ] LazyOpsTest.TestMultiIndexMiddleBroadcast 2023-01-11T22:10:10.7589926Z [ OK ] LazyOpsTest.TestMultiIndexMiddleBroadcast (0 ms) 2023-01-11T22:10:10.7590373Z [ RUN ] LazyOpsTest.TestMultiIndexTailBroadcast 2023-01-11T22:10:10.7590952Z [ OK ] LazyOpsTest.TestMultiIndexTailBroadcast (0 ms) 2023-01-11T22:10:10.7591349Z [ RUN ] LazyOpsTest.TestMaskIndex 2023-01-11T22:10:10.7591667Z [ OK ] LazyOpsTest.TestMaskIndex (0 ms) 2023-01-11T22:10:10.7591966Z [ RUN ] LazyOpsTest.TestOneIndexPut 2023-01-11T22:10:10.7622171Z [ OK ] LazyOpsTest.TestOneIndexPut (3 ms) 2023-01-11T22:10:10.7622752Z [ RUN ] LazyOpsTest.TestOneIndexPutInPlace 2023-01-11T22:10:10.7645081Z [ OK ] LazyOpsTest.TestOneIndexPutInPlace (2 ms) 2023-01-11T22:10:10.7645589Z [ RUN ] LazyOpsTest.TestOneIndexPutTransfer 2023-01-11T22:10:10.7661730Z [ OK ] LazyOpsTest.TestOneIndexPutTransfer (1 ms) 2023-01-11T22:10:10.7662164Z [ RUN ] LazyOpsTest.TestMultiIndexPut 2023-01-11T22:10:10.7680285Z [ OK ] LazyOpsTest.TestMultiIndexPut (1 ms) 2023-01-11T22:10:10.7680734Z [ RUN ] LazyOpsTest.TestMultiIndexPutHeadNull 2023-01-11T22:10:10.7708335Z [ OK ] LazyOpsTest.TestMultiIndexPutHeadNull (2 ms) 2023-01-11T22:10:10.7708977Z [ RUN ] LazyOpsTest.TestMultiIndexPutMiddleNull 2023-01-11T22:10:10.7727418Z [ OK ] LazyOpsTest.TestMultiIndexPutMiddleNull (1 ms) 2023-01-11T22:10:10.7728080Z [ RUN ] LazyOpsTest.TestMultiIndexPutTailNull 2023-01-11T22:10:10.7746898Z [ OK ] LazyOpsTest.TestMultiIndexPutTailNull (1 
ms) 2023-01-11T22:10:10.7747568Z [ RUN ] LazyOpsTest.TestMultiIndexPutMiddleBroadcast 2023-01-11T22:10:10.7767238Z [ OK ] LazyOpsTest.TestMultiIndexPutMiddleBroadcast (2 ms) 2023-01-11T22:10:10.7767788Z [ RUN ] LazyOpsTest.TestMultiIndexPutTailBroadcast 2023-01-11T22:10:10.7786227Z [ OK ] LazyOpsTest.TestMultiIndexPutTailBroadcast (1 ms) 2023-01-11T22:10:10.7786672Z [ RUN ] LazyOpsTest.TestMaskIndexPut 2023-01-11T22:10:10.7807314Z [ OK ] LazyOpsTest.TestMaskIndexPut (2 ms) 2023-01-11T22:10:10.7807731Z [ RUN ] LazyOpsTest.TestIndexPutImpl 2023-01-11T22:10:10.7831816Z [ OK ] LazyOpsTest.TestIndexPutImpl (2 ms) 2023-01-11T22:10:10.7832467Z [ RUN ] LazyOpsTest.TestIndexFillWithScalar 2023-01-11T22:10:10.7864939Z [ OK ] LazyOpsTest.TestIndexFillWithScalar (3 ms) 2023-01-11T22:10:10.7865428Z [ RUN ] LazyOpsTest.TestIndexFillWithScalarInPlace 2023-01-11T22:10:10.7894791Z [ OK ] LazyOpsTest.TestIndexFillWithScalarInPlace (2 ms) 2023-01-11T22:10:10.7895186Z [ RUN ] LazyOpsTest.TestIndexFillWithTensor 2023-01-11T22:10:10.7923087Z [ OK ] LazyOpsTest.TestIndexFillWithTensor (2 ms) 2023-01-11T22:10:10.7923470Z [ RUN ] LazyOpsTest.TestIndexFillWithTensorInPlace 2023-01-11T22:10:10.7956560Z [ OK ] LazyOpsTest.TestIndexFillWithTensorInPlace (3 ms) 2023-01-11T22:10:10.7956931Z [ RUN ] LazyOpsTest.TestIndexFillRank0 2023-01-11T22:10:10.7984255Z [ OK ] LazyOpsTest.TestIndexFillRank0 (2 ms) 2023-01-11T22:10:10.7984581Z [ RUN ] LazyOpsTest.TestIndexAdd 2023-01-11T22:10:10.8057011Z [ OK ] LazyOpsTest.TestIndexAdd (7 ms) 2023-01-11T22:10:10.8057367Z [ RUN ] LazyOpsTest.TestIndexAddInPlace 2023-01-11T22:10:10.8109912Z [ OK ] LazyOpsTest.TestIndexAddInPlace (5 ms) 2023-01-11T22:10:10.8110277Z [ RUN ] LazyOpsTest.TestIndexAddRank0 2023-01-11T22:10:10.8131461Z [ OK ] LazyOpsTest.TestIndexAddRank0 (2 ms) 2023-01-11T22:10:10.8131862Z [ RUN ] LazyOpsTest.TestIndexCopy 2023-01-11T22:10:10.8153892Z [ OK ] LazyOpsTest.TestIndexCopy (2 ms) 2023-01-11T22:10:10.8154293Z [ RUN ] LazyOpsTest.TestIndexCopyInPlace 2023-01-11T22:10:10.8182195Z [ OK ] LazyOpsTest.TestIndexCopyInPlace (2 ms) 2023-01-11T22:10:10.8182712Z [ RUN ] LazyOpsTest.TestIndexCopyRank0 2023-01-11T22:10:10.8203162Z [ OK ] LazyOpsTest.TestIndexCopyRank0 (2 ms) 2023-01-11T22:10:10.8203543Z [ RUN ] LazyOpsTest.TestRelu 2023-01-11T22:10:10.8206674Z [ OK ] LazyOpsTest.TestRelu (0 ms) 2023-01-11T22:10:10.8207030Z [ RUN ] LazyOpsTest.TestReluInPlace 2023-01-11T22:10:10.8210420Z [ OK ] LazyOpsTest.TestReluInPlace (0 ms) 2023-01-11T22:10:10.8210862Z [ RUN ] LazyOpsTest.TestHardshrink 2023-01-11T22:10:10.8211995Z [ OK ] LazyOpsTest.TestHardshrink (0 ms) 2023-01-11T22:10:10.8212293Z [ RUN ] LazyOpsTest.TestHardSigmoid 2023-01-11T22:10:10.8214844Z [ OK ] LazyOpsTest.TestHardSigmoid (0 ms) 2023-01-11T22:10:10.8215182Z [ RUN ] LazyOpsTest.TestHardSigmoidInPlace 2023-01-11T22:10:10.8218603Z [ OK ] LazyOpsTest.TestHardSigmoidInPlace (0 ms) 2023-01-11T22:10:10.8218965Z [ RUN ] LazyOpsTest.TestHardSigmoidBackward 2023-01-11T22:10:10.8228720Z [ OK ] LazyOpsTest.TestHardSigmoidBackward (0 ms) 2023-01-11T22:10:10.8229127Z [ RUN ] LazyOpsTest.TestSoftshrink 2023-01-11T22:10:10.8229979Z [ OK ] LazyOpsTest.TestSoftshrink (0 ms) 2023-01-11T22:10:10.8230294Z [ RUN ] LazyOpsTest.TestHardtanh 2023-01-11T22:10:10.8231278Z [ OK ] LazyOpsTest.TestHardtanh (0 ms) 2023-01-11T22:10:10.8231666Z [ RUN ] LazyOpsTest.TestHardtanhInPlace 2023-01-11T22:10:10.8233006Z [ OK ] LazyOpsTest.TestHardtanhInPlace (0 ms) 2023-01-11T22:10:10.8233338Z [ RUN ] LazyOpsTest.TestLeakyRelu 
2023-01-11T22:10:10.8235821Z [ OK ] LazyOpsTest.TestLeakyRelu (0 ms) 2023-01-11T22:10:10.8236192Z [ RUN ] LazyOpsTest.TestLeakyReluInPlace 2023-01-11T22:10:10.8239749Z [ OK ] LazyOpsTest.TestLeakyReluInPlace (0 ms) 2023-01-11T22:10:10.8240055Z [ RUN ] LazyOpsTest.TestExp 2023-01-11T22:10:10.8242754Z [ OK ] LazyOpsTest.TestExp (0 ms) 2023-01-11T22:10:10.8243091Z [ RUN ] LazyOpsTest.TestExpm1 2023-01-11T22:10:10.8244271Z [ OK ] LazyOpsTest.TestExpm1 (0 ms) 2023-01-11T22:10:10.8244763Z [ RUN ] LazyOpsTest.TestLog 2023-01-11T22:10:10.8247309Z [ OK ] LazyOpsTest.TestLog (0 ms) 2023-01-11T22:10:10.8247656Z [ RUN ] LazyOpsTest.TestLog2 2023-01-11T22:10:10.8250747Z [ OK ] LazyOpsTest.TestLog2 (0 ms) 2023-01-11T22:10:10.8251087Z [ RUN ] LazyOpsTest.TestLog10 2023-01-11T22:10:10.8252267Z [ OK ] LazyOpsTest.TestLog10 (0 ms) 2023-01-11T22:10:10.8252574Z [ RUN ] LazyOpsTest.TestLog1p 2023-01-11T22:10:10.8253615Z [ OK ] LazyOpsTest.TestLog1p (0 ms) 2023-01-11T22:10:10.8253883Z [ RUN ] LazyOpsTest.TestErf 2023-01-11T22:10:10.8255076Z [ OK ] LazyOpsTest.TestErf (0 ms) 2023-01-11T22:10:10.8255388Z [ RUN ] LazyOpsTest.TestErfc 2023-01-11T22:10:10.8256499Z [ OK ] LazyOpsTest.TestErfc (0 ms) 2023-01-11T22:10:10.8256830Z [ RUN ] LazyOpsTest.TestErfinv 2023-01-11T22:10:10.8257884Z [ OK ] LazyOpsTest.TestErfinv (0 ms) 2023-01-11T22:10:10.8258175Z [ RUN ] LazyOpsTest.TestSqrt 2023-01-11T22:10:10.8261395Z [ OK ] LazyOpsTest.TestSqrt (0 ms) 2023-01-11T22:10:10.8261683Z [ RUN ] LazyOpsTest.TestRsqrt 2023-01-11T22:10:10.8264688Z [ OK ] LazyOpsTest.TestRsqrt (0 ms) 2023-01-11T22:10:10.8264980Z [ RUN ] LazyOpsTest.TestReciprocal 2023-01-11T22:10:10.8267732Z [ OK ] LazyOpsTest.TestReciprocal (0 ms) 2023-01-11T22:10:10.8268062Z [ RUN ] LazyOpsTest.TestPowTensorScalar 2023-01-11T22:10:10.8270903Z [ OK ] LazyOpsTest.TestPowTensorScalar (0 ms) 2023-01-11T22:10:10.8271253Z [ RUN ] LazyOpsTest.TestPowTensorScalarInPlace 2023-01-11T22:10:10.8274747Z [ OK ] LazyOpsTest.TestPowTensorScalarInPlace (0 ms) 2023-01-11T22:10:10.8275104Z [ RUN ] LazyOpsTest.TestPowTensorTensor 2023-01-11T22:10:10.8278024Z [ OK ] LazyOpsTest.TestPowTensorTensor (0 ms) 2023-01-11T22:10:10.8278389Z [ RUN ] LazyOpsTest.TestPowTensorTensorInPlace 2023-01-11T22:10:10.8292098Z [ OK ] LazyOpsTest.TestPowTensorTensorInPlace (1 ms) 2023-01-11T22:10:10.8292497Z [ RUN ] LazyOpsTest.TestPowTensorTensorBroadcast 2023-01-11T22:10:10.8295407Z [ OK ] LazyOpsTest.TestPowTensorTensorBroadcast (0 ms) 2023-01-11T22:10:10.8295801Z [ RUN ] LazyOpsTest.TestPowScalarTensor 2023-01-11T22:10:10.8296952Z [ OK ] LazyOpsTest.TestPowScalarTensor (0 ms) 2023-01-11T22:10:10.8297276Z [ RUN ] LazyOpsTest.TestPowIntExponent 2023-01-11T22:10:10.8299841Z [ OK ] LazyOpsTest.TestPowIntExponent (0 ms) 2023-01-11T22:10:10.8300163Z [ RUN ] LazyOpsTest.TestFmodScalar 2023-01-11T22:10:10.8301281Z [ OK ] LazyOpsTest.TestFmodScalar (0 ms) 2023-01-11T22:10:10.8301620Z [ RUN ] LazyOpsTest.TestFmodScalarInPlace 2023-01-11T22:10:10.8303672Z [ OK ] LazyOpsTest.TestFmodScalarInPlace (0 ms) 2023-01-11T22:10:10.8304074Z [ RUN ] LazyOpsTest.TestFmodTensor 2023-01-11T22:10:10.8304644Z [ OK ] LazyOpsTest.TestFmodTensor (0 ms) 2023-01-11T22:10:10.8304992Z [ RUN ] LazyOpsTest.TestFmodTensorInPlace 2023-01-11T22:10:10.8316267Z [ OK ] LazyOpsTest.TestFmodTensorInPlace (1 ms) 2023-01-11T22:10:10.8316684Z [ RUN ] LazyOpsTest.TestRemainderScalar 2023-01-11T22:10:10.8319769Z [ OK ] LazyOpsTest.TestRemainderScalar (0 ms) 2023-01-11T22:10:10.8320170Z [ RUN ] LazyOpsTest.TestRemainderScalarInPlace 
2023-01-11T22:10:10.8324353Z [ OK ] LazyOpsTest.TestRemainderScalarInPlace (0 ms) 2023-01-11T22:10:10.8324702Z [ RUN ] LazyOpsTest.TestRemainderTensor 2023-01-11T22:10:10.8327493Z [ OK ] LazyOpsTest.TestRemainderTensor (0 ms) 2023-01-11T22:10:10.8327853Z [ RUN ] LazyOpsTest.TestRemainderTensorInPlace 2023-01-11T22:10:10.8332212Z [ OK ] LazyOpsTest.TestRemainderTensorInPlace (0 ms) 2023-01-11T22:10:10.8332536Z [ RUN ] LazyOpsTest.TestWhere 2023-01-11T22:10:10.8333117Z [W TensorCompare.cpp:493] Warning: where received a uint8 condition tensor. This behavior is deprecated and will be removed in a future version of PyTorch. Use a boolean condition instead. (function operator()) 2023-01-11T22:10:10.8334020Z [ OK ] LazyOpsTest.TestWhere (0 ms) 2023-01-11T22:10:10.8334396Z [ RUN ] LazyOpsTest.TestWhereBroadcast 2023-01-11T22:10:10.8335726Z [ OK ] LazyOpsTest.TestWhereBroadcast (0 ms) 2023-01-11T22:10:10.8336112Z [ RUN ] LazyOpsTest.TestThreshold 2023-01-11T22:10:10.8339028Z [ OK ] LazyOpsTest.TestThreshold (0 ms) 2023-01-11T22:10:10.8339391Z [ RUN ] LazyOpsTest.TestThresholdBackward 2023-01-11T22:10:10.8348212Z [ OK ] LazyOpsTest.TestThresholdBackward (0 ms) 2023-01-11T22:10:10.8348574Z [ RUN ] LazyOpsTest.TestThresholdInPlace 2023-01-11T22:10:10.8351620Z [ OK ] LazyOpsTest.TestThresholdInPlace (0 ms) 2023-01-11T22:10:10.8351915Z [ RUN ] LazyOpsTest.TestElu 2023-01-11T22:10:10.8354639Z [ OK ] LazyOpsTest.TestElu (0 ms) 2023-01-11T22:10:10.8354935Z [ RUN ] LazyOpsTest.TestEluInPlace 2023-01-11T22:10:10.8358463Z [ OK ] LazyOpsTest.TestEluInPlace (0 ms) 2023-01-11T22:10:10.8358747Z [ RUN ] LazyOpsTest.TestSelu 2023-01-11T22:10:10.8361564Z [ OK ] LazyOpsTest.TestSelu (0 ms) 2023-01-11T22:10:10.8361868Z [ RUN ] LazyOpsTest.TestSeluInPlace 2023-01-11T22:10:10.8365177Z [ OK ] LazyOpsTest.TestSeluInPlace (0 ms) 2023-01-11T22:10:10.8365483Z [ RUN ] LazyOpsTest.TestCelu 2023-01-11T22:10:10.8366964Z [ OK ] LazyOpsTest.TestCelu (0 ms) 2023-01-11T22:10:10.8367261Z [ RUN ] LazyOpsTest.TestCeluInPlace 2023-01-11T22:10:10.8369022Z [ OK ] LazyOpsTest.TestCeluInPlace (0 ms) 2023-01-11T22:10:10.8369382Z [ RUN ] LazyOpsTest.TestGelu 2023-01-11T22:10:10.8376966Z [ OK ] LazyOpsTest.TestGelu (0 ms) 2023-01-11T22:10:10.8377347Z [ RUN ] LazyOpsTest.TestAddMatMul 2023-01-11T22:10:10.8423301Z [ OK ] LazyOpsTest.TestAddMatMul (1 ms) 2023-01-11T22:10:10.8423733Z [ RUN ] LazyOpsTest.TestEmbedding 2023-01-11T22:10:10.8424060Z [ OK ] LazyOpsTest.TestEmbedding (0 ms) 2023-01-11T22:10:10.8424373Z [ RUN ] LazyOpsTest.TestOneHot 2023-01-11T22:10:10.8424714Z [ OK ] LazyOpsTest.TestOneHot (0 ms) 2023-01-11T22:10:10.8425090Z [ RUN ] LazyOpsTest.TestTranspose 2023-01-11T22:10:10.8425544Z [ OK ] LazyOpsTest.TestTranspose (0 ms) 2023-01-11T22:10:10.8425929Z [ RUN ] LazyOpsTest.TestTransposeInPlace 2023-01-11T22:10:10.8426336Z [ OK ] LazyOpsTest.TestTransposeInPlace (0 ms) 2023-01-11T22:10:10.8426655Z [ RUN ] LazyOpsTest.TestReshape 2023-01-11T22:10:10.8426943Z [ OK ] LazyOpsTest.TestReshape (0 ms) 2023-01-11T22:10:10.8427231Z [ RUN ] LazyOpsTest.TestResize 2023-01-11T22:10:10.8427525Z [ OK ] LazyOpsTest.TestResize (0 ms) 2023-01-11T22:10:10.8427809Z [ RUN ] LazyOpsTest.TestViewResize 2023-01-11T22:10:10.8432537Z [ OK ] LazyOpsTest.TestViewResize (0 ms) 2023-01-11T22:10:10.8432917Z [ RUN ] LazyOpsTest.TestView 2023-01-11T22:10:10.8439156Z [ OK ] LazyOpsTest.TestView (0 ms) 2023-01-11T22:10:10.8439560Z [ RUN ] LazyOpsTest.TestViewMod 2023-01-11T22:10:10.8456326Z [ OK ] LazyOpsTest.TestViewMod (1 ms) 2023-01-11T22:10:10.8456666Z [ RUN ] 
LazyOpsTest.TestViewModComplex 2023-01-11T22:10:10.8474273Z [ OK ] LazyOpsTest.TestViewModComplex (1 ms) 2023-01-11T22:10:10.8474803Z [ RUN ] LazyOpsTest.TestViewOfViewMod 2023-01-11T22:10:10.8495686Z [ OK ] LazyOpsTest.TestViewOfViewMod (2 ms) 2023-01-11T22:10:10.8496107Z [ RUN ] LazyOpsTest.TestViewSqueezeAddInPlace 2023-01-11T22:10:10.8506556Z [ OK ] LazyOpsTest.TestViewSqueezeAddInPlace (1 ms) 2023-01-11T22:10:10.8507173Z [ RUN ] LazyOpsTest.TestUnsafeView 2023-01-11T22:10:10.8513379Z [ OK ] LazyOpsTest.TestUnsafeView (0 ms) 2023-01-11T22:10:10.8513992Z [ RUN ] LazyOpsTest.TestNarrow 2023-01-11T22:10:10.8528852Z [ OK ] LazyOpsTest.TestNarrow (1 ms) 2023-01-11T22:10:10.8529410Z [ RUN ] LazyOpsTest.TestNarrowUpdate 2023-01-11T22:10:10.8555836Z [ OK ] LazyOpsTest.TestNarrowUpdate (2 ms) 2023-01-11T22:10:10.8556428Z [ RUN ] LazyOpsTest.TestNarrowUpdateBaseCheck 2023-01-11T22:10:10.8579292Z [ OK ] LazyOpsTest.TestNarrowUpdateBaseCheck (2 ms) 2023-01-11T22:10:10.8579810Z [ RUN ] LazyOpsTest.TestNarrowUpdateTwoSlices 2023-01-11T22:10:10.8697572Z [ OK ] LazyOpsTest.TestNarrowUpdateTwoSlices (11 ms) 2023-01-11T22:10:10.8698151Z [ RUN ] LazyOpsTest.TestNarrowUpdateView 2023-01-11T22:10:10.8732413Z [ OK ] LazyOpsTest.TestNarrowUpdateView (3 ms) 2023-01-11T22:10:10.8732957Z [ RUN ] LazyOpsTest.TestNarrowInNarrowUpdate 2023-01-11T22:10:10.8775404Z [ OK ] LazyOpsTest.TestNarrowInNarrowUpdate (4 ms) 2023-01-11T22:10:10.8775990Z [ RUN ] LazyOpsTest.TestNarrowCopy 2023-01-11T22:10:10.8783754Z [ OK ] LazyOpsTest.TestNarrowCopy (0 ms) 2023-01-11T22:10:10.8784290Z [ RUN ] LazyOpsTest.TestViewAs 2023-01-11T22:10:10.8792532Z [ OK ] LazyOpsTest.TestViewAs (0 ms) 2023-01-11T22:10:10.8793095Z [ RUN ] LazyOpsTest.TestLogSoftmax 2023-01-11T22:10:10.8813229Z [ OK ] LazyOpsTest.TestLogSoftmax (2 ms) 2023-01-11T22:10:10.8813769Z [ RUN ] LazyOpsTest.TestLogSoftmaxCast 2023-01-11T22:10:10.8842116Z [ OK ] LazyOpsTest.TestLogSoftmaxCast (2 ms) 2023-01-11T22:10:10.8842587Z [ RUN ] LazyOpsTest.TestLogSoftmaxWrapper 2023-01-11T22:10:10.8862167Z [ OK ] LazyOpsTest.TestLogSoftmaxWrapper (2 ms) 2023-01-11T22:10:10.8862711Z [ RUN ] LazyOpsTest.TestSoftmax 2023-01-11T22:10:10.8883150Z [ OK ] LazyOpsTest.TestSoftmax (2 ms) 2023-01-11T22:10:10.8883547Z [ RUN ] LazyOpsTest.TestSoftmaxCast 2023-01-11T22:10:10.8913492Z [ OK ] LazyOpsTest.TestSoftmaxCast (3 ms) 2023-01-11T22:10:10.8913859Z [ RUN ] LazyOpsTest.TestSoftmaxWrapper 2023-01-11T22:10:10.8935855Z [ OK ] LazyOpsTest.TestSoftmaxWrapper (2 ms) 2023-01-11T22:10:10.8936239Z [ RUN ] LazyOpsTest.TestSoftplus 2023-01-11T22:10:10.8939558Z [ OK ] LazyOpsTest.TestSoftplus (0 ms) 2023-01-11T22:10:10.8939991Z [ RUN ] LazyOpsTest.TestMaxPool1D 2023-01-11T22:10:10.9027825Z [ OK ] LazyOpsTest.TestMaxPool1D (8 ms) 2023-01-11T22:10:10.9028378Z [ RUN ] LazyOpsTest.TestMaxPool2D 2023-01-11T22:10:10.9088430Z [ OK ] LazyOpsTest.TestMaxPool2D (6 ms) 2023-01-11T22:10:10.9089073Z [ RUN ] LazyOpsTest.TestMaxPool2DWithIndices 2023-01-11T22:10:10.9211542Z [ OK ] LazyOpsTest.TestMaxPool2DWithIndices (12 ms) 2023-01-11T22:10:10.9212023Z [ RUN ] LazyOpsTest.TestMaxPool2DNonSquare 2023-01-11T22:10:10.9272206Z [ OK ] LazyOpsTest.TestMaxPool2DNonSquare (6 ms) 2023-01-11T22:10:10.9272656Z [ RUN ] LazyOpsTest.TestMaxPool3D 2023-01-11T22:10:10.9290867Z [ OK ] LazyOpsTest.TestMaxPool3D (1 ms) 2023-01-11T22:10:10.9291493Z [ RUN ] LazyOpsTest.TestMaxPool3DWithIndices 2023-01-11T22:10:10.9314203Z [ OK ] LazyOpsTest.TestMaxPool3DWithIndices (2 ms) 2023-01-11T22:10:10.9314903Z [ RUN ] 
LazyOpsTest.TestMaxPool3DIncompleteAttributes 2023-01-11T22:10:10.9327882Z [ OK ] LazyOpsTest.TestMaxPool3DIncompleteAttributes (1 ms) 2023-01-11T22:10:10.9328572Z [ RUN ] LazyOpsTest.TestMaxPool3DNonSquare 2023-01-11T22:10:10.9343805Z [ OK ] LazyOpsTest.TestMaxPool3DNonSquare (1 ms) 2023-01-11T22:10:10.9344423Z [ RUN ] LazyOpsTest.TestMaxPool2DNoBatch 2023-01-11T22:10:10.9404396Z [ OK ] LazyOpsTest.TestMaxPool2DNoBatch (6 ms) 2023-01-11T22:10:10.9405003Z [ RUN ] LazyOpsTest.TestMaxPool3DNoBatch 2023-01-11T22:10:10.9422490Z [ OK ] LazyOpsTest.TestMaxPool3DNoBatch (1 ms) 2023-01-11T22:10:10.9423060Z [ RUN ] LazyOpsTest.TestAvgPool1D 2023-01-11T22:10:10.9504518Z [ OK ] LazyOpsTest.TestAvgPool1D (8 ms) 2023-01-11T22:10:10.9504933Z [ RUN ] LazyOpsTest.TestAvgPool2D 2023-01-11T22:10:10.9596574Z [ OK ] LazyOpsTest.TestAvgPool2D (9 ms) 2023-01-11T22:10:10.9597015Z [ RUN ] LazyOpsTest.TestAvgPool2DNonSquare 2023-01-11T22:10:10.9661495Z [ OK ] LazyOpsTest.TestAvgPool2DNonSquare (6 ms) 2023-01-11T22:10:10.9661942Z [ RUN ] LazyOpsTest.TestAvgPool3D 2023-01-11T22:10:10.9675476Z [ OK ] LazyOpsTest.TestAvgPool3D (1 ms) 2023-01-11T22:10:10.9675954Z [ RUN ] LazyOpsTest.TestAvgPool3DIncompleteAttributes 2023-01-11T22:10:10.9686299Z [ OK ] LazyOpsTest.TestAvgPool3DIncompleteAttributes (1 ms) 2023-01-11T22:10:10.9686787Z [ RUN ] LazyOpsTest.TestAvgPool3DNonSquare 2023-01-11T22:10:10.9697908Z [ OK ] LazyOpsTest.TestAvgPool3DNonSquare (1 ms) 2023-01-11T22:10:10.9698348Z [ RUN ] LazyOpsTest.TestAvgPool2DNoBatch 2023-01-11T22:10:10.9751765Z [ OK ] LazyOpsTest.TestAvgPool2DNoBatch (5 ms) 2023-01-11T22:10:10.9752206Z [ RUN ] LazyOpsTest.TestAvgPool3DNoBatch 2023-01-11T22:10:10.9764179Z [ OK ] LazyOpsTest.TestAvgPool3DNoBatch (1 ms) 2023-01-11T22:10:10.9764621Z [ RUN ] LazyOpsTest.TestAdaptiveAvgPool2D 2023-01-11T22:10:10.9771004Z [ OK ] LazyOpsTest.TestAdaptiveAvgPool2D (0 ms) 2023-01-11T22:10:10.9771432Z [ RUN ] LazyOpsTest.TestAdaptiveAvgPool3D 2023-01-11T22:10:10.9982729Z [ OK ] LazyOpsTest.TestAdaptiveAvgPool3D (20 ms) 2023-01-11T22:10:10.9983251Z [ RUN ] LazyOpsTest.TestAdaptiveAvgPool3DNoBatch 2023-01-11T22:10:11.0004478Z [ OK ] LazyOpsTest.TestAdaptiveAvgPool3DNoBatch (2 ms) 2023-01-11T22:10:11.0004983Z [ RUN ] LazyOpsTest.TestAdaptiveAvgPool2DNoBatch 2023-01-11T22:10:11.0011696Z [ OK ] LazyOpsTest.TestAdaptiveAvgPool2DNoBatch (0 ms) 2023-01-11T22:10:11.0012146Z [ RUN ] LazyOpsTest.TestMaxUnpool2D 2023-01-11T22:10:11.0027642Z [ OK ] LazyOpsTest.TestMaxUnpool2D (1 ms) 2023-01-11T22:10:11.0028073Z [ RUN ] LazyOpsTest.TestMaxUnpool3D 2023-01-11T22:10:11.0041912Z [ OK ] LazyOpsTest.TestMaxUnpool3D (1 ms) 2023-01-11T22:10:11.0042310Z [ RUN ] LazyOpsTest.TestNllLoss 2023-01-11T22:10:11.0042654Z /var/lib/jenkins/workspace/test/cpp/lazy/test_lazy_ops.cpp:8173: Skipped 2023-01-11T22:10:11.0042879Z 2023-01-11T22:10:11.0043101Z [ SKIPPED ] LazyOpsTest.TestNllLoss (0 ms) 2023-01-11T22:10:11.0043492Z [ RUN ] LazyOpsTest.TestNllLoss2d 2023-01-11T22:10:11.0119590Z [ OK ] LazyOpsTest.TestNllLoss2d (7 ms) 2023-01-11T22:10:11.0120610Z [ RUN ] LazyOpsTest.TestSmoothL1Loss 2023-01-11T22:10:11.0138208Z [ OK ] LazyOpsTest.TestSmoothL1Loss (1 ms) 2023-01-11T22:10:11.0138693Z [ RUN ] LazyOpsTest.TestL1Loss 2023-01-11T22:10:11.0150620Z [ OK ] LazyOpsTest.TestL1Loss (1 ms) 2023-01-11T22:10:11.0151312Z [ RUN ] LazyOpsTest.TestL1LossBackward 2023-01-11T22:10:11.0190202Z [ OK ] LazyOpsTest.TestL1LossBackward (3 ms) 2023-01-11T22:10:11.0190773Z [ RUN ] LazyOpsTest.TestMseLoss 2023-01-11T22:10:11.0193339Z [ OK ] LazyOpsTest.TestMseLoss 
(0 ms) 2023-01-11T22:10:11.0193907Z [ RUN ] LazyOpsTest.TestMseLossBackward 2023-01-11T22:10:11.0216517Z [ OK ] LazyOpsTest.TestMseLossBackward (2 ms) 2023-01-11T22:10:11.0216933Z [ RUN ] LazyOpsTest.TestBatchNorm1D 2023-01-11T22:10:11.0237276Z [ OK ] LazyOpsTest.TestBatchNorm1D (2 ms) 2023-01-11T22:10:11.0237688Z [ RUN ] LazyOpsTest.TestBatchNorm2D 2023-01-11T22:10:11.0257465Z [ OK ] LazyOpsTest.TestBatchNorm2D (1 ms) 2023-01-11T22:10:11.0257851Z [ RUN ] LazyOpsTest.TestDim 2023-01-11T22:10:11.0258201Z [ OK ] LazyOpsTest.TestDim (0 ms) 2023-01-11T22:10:11.0258571Z [ RUN ] LazyOpsTest.TestContiguous 2023-01-11T22:10:11.0259349Z [ OK ] LazyOpsTest.TestContiguous (0 ms) 2023-01-11T22:10:11.0259685Z [ RUN ] LazyOpsTest.TestSqueezeAll 2023-01-11T22:10:11.0261617Z [ OK ] LazyOpsTest.TestSqueezeAll (0 ms) 2023-01-11T22:10:11.0262301Z [ RUN ] LazyOpsTest.TestSqueezeAllInPlace 2023-01-11T22:10:11.0265384Z [ OK ] LazyOpsTest.TestSqueezeAllInPlace (0 ms) 2023-01-11T22:10:11.0265909Z [ RUN ] LazyOpsTest.TestSqueezeOne 2023-01-11T22:10:11.0287326Z [ OK ] LazyOpsTest.TestSqueezeOne (2 ms) 2023-01-11T22:10:11.0287973Z [ RUN ] LazyOpsTest.TestSqueezeOneInPlace 2023-01-11T22:10:11.0312625Z [ OK ] LazyOpsTest.TestSqueezeOneInPlace (2 ms) 2023-01-11T22:10:11.0313041Z [ RUN ] LazyOpsTest.TestUnsqueeze 2023-01-11T22:10:11.0327121Z [ OK ] LazyOpsTest.TestUnsqueeze (1 ms) 2023-01-11T22:10:11.0327449Z [ RUN ] LazyOpsTest.TestUnsqueezeInPlace 2023-01-11T22:10:11.0344416Z [ OK ] LazyOpsTest.TestUnsqueezeInPlace (1 ms) 2023-01-11T22:10:11.0344791Z [ RUN ] LazyOpsTest.TestMaskedFill 2023-01-11T22:10:11.0347426Z [ OK ] LazyOpsTest.TestMaskedFill (0 ms) 2023-01-11T22:10:11.0347831Z [ RUN ] LazyOpsTest.TestMaskedFillInPlace 2023-01-11T22:10:11.0350805Z [ OK ] LazyOpsTest.TestMaskedFillInPlace (0 ms) 2023-01-11T22:10:11.0351214Z [ RUN ] LazyOpsTest.TestMaskedFillBroadcast 2023-01-11T22:10:11.0353793Z [ OK ] LazyOpsTest.TestMaskedFillBroadcast (0 ms) 2023-01-11T22:10:11.0354173Z [ RUN ] LazyOpsTest.TestFill 2023-01-11T22:10:11.0357708Z [ OK ] LazyOpsTest.TestFill (0 ms) 2023-01-11T22:10:11.0358041Z [ RUN ] LazyOpsTest.TestFillWithRank0 2023-01-11T22:10:11.0359872Z [ OK ] LazyOpsTest.TestFillWithRank0 (0 ms) 2023-01-11T22:10:11.0360263Z [ RUN ] LazyOpsTest.TestPermute 2023-01-11T22:10:11.0391623Z [ OK ] LazyOpsTest.TestPermute (3 ms) 2023-01-11T22:10:11.0392182Z [ RUN ] LazyOpsTest.TestPermuteMod 2023-01-11T22:10:11.0504373Z [ OK ] LazyOpsTest.TestPermuteMod (11 ms) 2023-01-11T22:10:11.0504890Z [ RUN ] LazyOpsTest.TestFlip 2023-01-11T22:10:11.0537129Z [ OK ] LazyOpsTest.TestFlip (3 ms) 2023-01-11T22:10:11.0537700Z [ RUN ] LazyOpsTest.TestPixelShuffle 2023-01-11T22:10:11.0544415Z [ OK ] LazyOpsTest.TestPixelShuffle (0 ms) 2023-01-11T22:10:11.0544969Z [ RUN ] LazyOpsTest.TestSumToSize 2023-01-11T22:10:11.0548440Z [ OK ] LazyOpsTest.TestSumToSize (0 ms) 2023-01-11T22:10:11.0548762Z [ RUN ] LazyOpsTest.TestTransposeDims 2023-01-11T22:10:11.0551982Z [ OK ] LazyOpsTest.TestTransposeDims (0 ms) 2023-01-11T22:10:11.0552332Z [ RUN ] LazyOpsTest.TestTransposeDimsMod 2023-01-11T22:10:11.0563422Z [ OK ] LazyOpsTest.TestTransposeDimsMod (1 ms) 2023-01-11T22:10:11.0563910Z [ RUN ] LazyOpsTest.TestTransposeDimsInPlace 2023-01-11T22:10:11.0567403Z [ OK ] LazyOpsTest.TestTransposeDimsInPlace (0 ms) 2023-01-11T22:10:11.0567805Z [ RUN ] LazyOpsTest.TestSplit 2023-01-11T22:10:11.0590965Z [ OK ] LazyOpsTest.TestSplit (2 ms) 2023-01-11T22:10:11.0591418Z [ RUN ] LazyOpsTest.TestSplitEmpty 2023-01-11T22:10:11.0591744Z [ OK ] 
LazyOpsTest.TestSplitEmpty (0 ms) 2023-01-11T22:10:11.0592055Z [ RUN ] LazyOpsTest.TestSplitWithSizes 2023-01-11T22:10:11.0603697Z [ OK ] LazyOpsTest.TestSplitWithSizes (1 ms) 2023-01-11T22:10:11.0604033Z [ RUN ] LazyOpsTest.TestCrossImplicitDim 2023-01-11T22:10:11.0606674Z [ OK ] LazyOpsTest.TestCrossImplicitDim (0 ms) 2023-01-11T22:10:11.0607048Z [ RUN ] LazyOpsTest.TestCrossExplicitDim 2023-01-11T22:10:11.0610428Z [ OK ] LazyOpsTest.TestCrossExplicitDim (0 ms) 2023-01-11T22:10:11.0610866Z [ RUN ] LazyOpsTest.TestCrossZeroDim 2023-01-11T22:10:11.0611264Z [ OK ] LazyOpsTest.TestCrossZeroDim (0 ms) 2023-01-11T22:10:11.0611642Z [ RUN ] LazyOpsTest.TestTriu 2023-01-11T22:10:11.0639635Z [ OK ] LazyOpsTest.TestTriu (2 ms) 2023-01-11T22:10:11.0639964Z [ RUN ] LazyOpsTest.TestTriuNonSquare 2023-01-11T22:10:11.0669656Z [ OK ] LazyOpsTest.TestTriuNonSquare (2 ms) 2023-01-11T22:10:11.0670069Z [ RUN ] LazyOpsTest.TestTriuBatch 2023-01-11T22:10:11.0700740Z [ OK ] LazyOpsTest.TestTriuBatch (3 ms) 2023-01-11T22:10:11.0701244Z [ RUN ] LazyOpsTest.TestTril 2023-01-11T22:10:11.0729983Z [ OK ] LazyOpsTest.TestTril (2 ms) 2023-01-11T22:10:11.0730359Z [ RUN ] LazyOpsTest.TestTrilNonSquare 2023-01-11T22:10:11.0758914Z [ OK ] LazyOpsTest.TestTrilNonSquare (2 ms) 2023-01-11T22:10:11.0759339Z [ RUN ] LazyOpsTest.TestTrilBatch 2023-01-11T22:10:11.0788062Z [ OK ] LazyOpsTest.TestTrilBatch (2 ms) 2023-01-11T22:10:11.0788765Z [ RUN ] LazyOpsTest.TestTriuInPlace 2023-01-11T22:10:11.0824743Z [ OK ] LazyOpsTest.TestTriuInPlace (3 ms) 2023-01-11T22:10:11.0825301Z [ RUN ] LazyOpsTest.TestTrilInPlace 2023-01-11T22:10:11.0860363Z [ OK ] LazyOpsTest.TestTrilInPlace (3 ms) 2023-01-11T22:10:11.0860902Z [ RUN ] LazyOpsTest.TestTrace 2023-01-11T22:10:11.0863393Z [ OK ] LazyOpsTest.TestTrace (0 ms) 2023-01-11T22:10:11.0863938Z [ RUN ] LazyOpsTest.TestTraceWide 2023-01-11T22:10:11.0866036Z [ OK ] LazyOpsTest.TestTraceWide (0 ms) 2023-01-11T22:10:11.0866403Z [ RUN ] LazyOpsTest.TestTraceNarrow 2023-01-11T22:10:11.0868542Z [ OK ] LazyOpsTest.TestTraceNarrow (0 ms) 2023-01-11T22:10:11.0868857Z [ RUN ] LazyOpsTest.TestDiagRank1 2023-01-11T22:10:11.1046344Z [ OK ] LazyOpsTest.TestDiagRank1 (17 ms) 2023-01-11T22:10:11.1046682Z [ RUN ] LazyOpsTest.TestDiagRank2 2023-01-11T22:10:11.1083992Z [ OK ] LazyOpsTest.TestDiagRank2 (3 ms) 2023-01-11T22:10:11.1084691Z [ RUN ] LazyOpsTest.TestDiagFlat 2023-01-11T22:10:11.1603319Z [ OK ] LazyOpsTest.TestDiagFlat (51 ms) 2023-01-11T22:10:11.1603752Z [ RUN ] LazyOpsTest.TestDiagonal 2023-01-11T22:10:11.1631776Z [ OK ] LazyOpsTest.TestDiagonal (3 ms) 2023-01-11T22:10:11.1632189Z [ RUN ] LazyOpsTest.TestDiagonalUpdate 2023-01-11T22:10:11.1725998Z [ OK ] LazyOpsTest.TestDiagonalUpdate (9 ms) 2023-01-11T22:10:11.1726366Z [ RUN ] LazyOpsTest.TestDiagonalNonSquare 2023-01-11T22:10:11.1754253Z [ OK ] LazyOpsTest.TestDiagonalNonSquare (2 ms) 2023-01-11T22:10:11.1754850Z [ RUN ] LazyOpsTest.TestDiagonalBatch 2023-01-11T22:10:11.1782781Z [ OK ] LazyOpsTest.TestDiagonalBatch (2 ms) 2023-01-11T22:10:11.1783080Z [ RUN ] LazyOpsTest.TestFlatten 2023-01-11T22:10:11.1842793Z [ OK ] LazyOpsTest.TestFlatten (6 ms) 2023-01-11T22:10:11.1843162Z [ RUN ] LazyOpsTest.TestLogicalAnd 2023-01-11T22:10:11.1861121Z [ OK ] LazyOpsTest.TestLogicalAnd (1 ms) 2023-01-11T22:10:11.1861490Z [ RUN ] LazyOpsTest.TestBitwiseAnd 2023-01-11T22:10:11.1864374Z [ OK ] LazyOpsTest.TestBitwiseAnd (0 ms) 2023-01-11T22:10:11.1864767Z [ RUN ] LazyOpsTest.TestBitwiseAndInPlace 2023-01-11T22:10:11.1867626Z [ OK ] LazyOpsTest.TestBitwiseAndInPlace (0 
ms) 2023-01-11T22:10:11.1868046Z [ RUN ] LazyOpsTest.TestBitwiseAndScalar 2023-01-11T22:10:11.1870765Z [ OK ] LazyOpsTest.TestBitwiseAndScalar (0 ms) 2023-01-11T22:10:11.1871310Z [ RUN ] LazyOpsTest.TestBitwiseAndScalarInPlace 2023-01-11T22:10:11.1874499Z [ OK ] LazyOpsTest.TestBitwiseAndScalarInPlace (0 ms) 2023-01-11T22:10:11.1874899Z [ RUN ] LazyOpsTest.TestBitwiseAndPromotion 2023-01-11T22:10:11.1879341Z [ OK ] LazyOpsTest.TestBitwiseAndPromotion (0 ms) 2023-01-11T22:10:11.1879758Z [ RUN ] LazyOpsTest.TestBitwiseOr 2023-01-11T22:10:11.1882466Z [ OK ] LazyOpsTest.TestBitwiseOr (0 ms) 2023-01-11T22:10:11.1882876Z [ RUN ] LazyOpsTest.TestBitwiseOrInPlace 2023-01-11T22:10:11.1885715Z [ OK ] LazyOpsTest.TestBitwiseOrInPlace (0 ms) 2023-01-11T22:10:11.1886101Z [ RUN ] LazyOpsTest.TestBitwiseOrScalar 2023-01-11T22:10:11.1888669Z [ OK ] LazyOpsTest.TestBitwiseOrScalar (0 ms) 2023-01-11T22:10:11.1889092Z [ RUN ] LazyOpsTest.TestBitwiseOrScalarInPlace 2023-01-11T22:10:11.1891839Z [ OK ] LazyOpsTest.TestBitwiseOrScalarInPlace (0 ms) 2023-01-11T22:10:11.1892336Z [ RUN ] LazyOpsTest.TestBitwiseXor 2023-01-11T22:10:11.1892784Z [ OK ] LazyOpsTest.TestBitwiseXor (0 ms) 2023-01-11T22:10:11.1893207Z [ RUN ] LazyOpsTest.TestBitwiseXorInPlace 2023-01-11T22:10:11.1894666Z [ OK ] LazyOpsTest.TestBitwiseXorInPlace (0 ms) 2023-01-11T22:10:11.1895160Z [ RUN ] LazyOpsTest.TestBitwiseXorScalar 2023-01-11T22:10:11.1895618Z [ OK ] LazyOpsTest.TestBitwiseXorScalar (0 ms) 2023-01-11T22:10:11.1896116Z [ RUN ] LazyOpsTest.TestBitwiseXorScalarInPlace 2023-01-11T22:10:11.1896502Z [ OK ] LazyOpsTest.TestBitwiseXorScalarInPlace (0 ms) 2023-01-11T22:10:11.1896829Z [ RUN ] LazyOpsTest.TestLshift 2023-01-11T22:10:11.1897817Z [ OK ] LazyOpsTest.TestLshift (0 ms) 2023-01-11T22:10:11.1898113Z [ RUN ] LazyOpsTest.TestLshiftInPlace 2023-01-11T22:10:11.1899877Z [ OK ] LazyOpsTest.TestLshiftInPlace (0 ms) 2023-01-11T22:10:11.1900312Z [ RUN ] LazyOpsTest.TestLshiftScalar 2023-01-11T22:10:11.1900832Z [ OK ] LazyOpsTest.TestLshiftScalar (0 ms) 2023-01-11T22:10:11.1901171Z [ RUN ] LazyOpsTest.TestLshiftScalarInPlace 2023-01-11T22:10:11.1903017Z [ OK ] LazyOpsTest.TestLshiftScalarInPlace (0 ms) 2023-01-11T22:10:11.1903592Z [ RUN ] LazyOpsTest.TestRshift 2023-01-11T22:10:11.1904099Z [ OK ] LazyOpsTest.TestRshift (0 ms) 2023-01-11T22:10:11.1904606Z [ RUN ] LazyOpsTest.TestRshiftInPlace 2023-01-11T22:10:11.1905678Z [ OK ] LazyOpsTest.TestRshiftInPlace (0 ms) 2023-01-11T22:10:11.1906247Z [ RUN ] LazyOpsTest.TestRshiftScalar 2023-01-11T22:10:11.1906939Z [ OK ] LazyOpsTest.TestRshiftScalar (0 ms) 2023-01-11T22:10:11.1907554Z [ RUN ] LazyOpsTest.TestRshiftScalarInPlace 2023-01-11T22:10:11.1908786Z [ OK ] LazyOpsTest.TestRshiftScalarInPlace (0 ms) 2023-01-11T22:10:11.1909488Z [ RUN ] LazyOpsTest.TestMeshgrid 2023-01-11T22:10:11.1909855Z [W TensorShape.cpp:3452] Warning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. 
(function operator()) 2023-01-11T22:10:11.1919799Z [ OK ] LazyOpsTest.TestMeshgrid (1 ms) 2023-01-11T22:10:11.1920114Z [ RUN ] LazyOpsTest.TestConstantPad 2023-01-11T22:10:11.1924100Z [ OK ] LazyOpsTest.TestConstantPad (0 ms) 2023-01-11T22:10:11.1924451Z [ RUN ] LazyOpsTest.TestConstantPadIncomplete 2023-01-11T22:10:11.1927538Z [ OK ] LazyOpsTest.TestConstantPadIncomplete (0 ms) 2023-01-11T22:10:11.1927995Z [ RUN ] LazyOpsTest.TestReflectionPad2dRank3 2023-01-11T22:10:11.1929912Z [ OK ] LazyOpsTest.TestReflectionPad2dRank3 (0 ms) 2023-01-11T22:10:11.1930571Z [ RUN ] LazyOpsTest.TestReflectionPad2dRank4 2023-01-11T22:10:11.1931389Z [ OK ] LazyOpsTest.TestReflectionPad2dRank4 (0 ms) 2023-01-11T22:10:11.1931976Z [ RUN ] LazyOpsTest.TestReflectionPad2dBackward 2023-01-11T22:10:11.1940148Z [ OK ] LazyOpsTest.TestReflectionPad2dBackward (0 ms) 2023-01-11T22:10:11.1940793Z [ RUN ] LazyOpsTest.TestReplicationPad1d 2023-01-11T22:10:11.1941465Z [ OK ] LazyOpsTest.TestReplicationPad1d (0 ms) 2023-01-11T22:10:11.1942197Z [ RUN ] LazyOpsTest.TestReplicationPad1dZeroPad 2023-01-11T22:10:11.1944281Z [ OK ] LazyOpsTest.TestReplicationPad1dZeroPad (0 ms) 2023-01-11T22:10:11.1944663Z [ RUN ] LazyOpsTest.TestReplicationPad1dBackward 2023-01-11T22:10:11.1950612Z [ OK ] LazyOpsTest.TestReplicationPad1dBackward (0 ms) 2023-01-11T22:10:11.1951012Z [ RUN ] LazyOpsTest.TestReplicationPad2d 2023-01-11T22:10:11.1951692Z [ OK ] LazyOpsTest.TestReplicationPad2d (0 ms) 2023-01-11T22:10:11.1953393Z [ RUN ] LazyOpsTest.TestReplicationPad2dZeroPad 2023-01-11T22:10:11.1953995Z [ OK ] LazyOpsTest.TestReplicationPad2dZeroPad (0 ms) 2023-01-11T22:10:11.1954380Z [ RUN ] LazyOpsTest.TestReplicationPad2dBackward 2023-01-11T22:10:11.1961512Z [ OK ] LazyOpsTest.TestReplicationPad2dBackward (0 ms) 2023-01-11T22:10:11.1962038Z [ RUN ] LazyOpsTest.TestAsStrided 2023-01-11T22:10:11.1973027Z [ OK ] LazyOpsTest.TestAsStrided (1 ms) 2023-01-11T22:10:11.1973600Z [ RUN ] LazyOpsTest.TestAsStridedInPlace 2023-01-11T22:10:11.1988904Z [ OK ] LazyOpsTest.TestAsStridedInPlace (1 ms) 2023-01-11T22:10:11.1989285Z [ RUN ] LazyOpsTest.TestAsStridedWithOffset 2023-01-11T22:10:11.1993710Z [ OK ] LazyOpsTest.TestAsStridedWithOffset (0 ms) 2023-01-11T22:10:11.1994102Z [ RUN ] LazyOpsTest.TestAsStridedWithInplaceCopy 2023-01-11T22:10:11.2001551Z [ OK ] LazyOpsTest.TestAsStridedWithInplaceCopy (0 ms) 2023-01-11T22:10:11.2001916Z [ RUN ] LazyOpsTest.TestEmptyStrided 2023-01-11T22:10:11.2002313Z [ OK ] LazyOpsTest.TestEmptyStrided (0 ms) 2023-01-11T22:10:11.2002632Z [ RUN ] LazyOpsTest.TestAvgPool2DBackward 2023-01-11T22:10:11.2162928Z [ OK ] LazyOpsTest.TestAvgPool2DBackward (16 ms) 2023-01-11T22:10:11.2163531Z [ RUN ] LazyOpsTest.TestAvgPool3DBackward 2023-01-11T22:10:11.2255370Z [ OK ] LazyOpsTest.TestAvgPool3DBackward (9 ms) 2023-01-11T22:10:11.2255853Z [ RUN ] LazyOpsTest.TestAvgPool2DNoBatchBackward 2023-01-11T22:10:11.2444678Z [ OK ] LazyOpsTest.TestAvgPool2DNoBatchBackward (18 ms) 2023-01-11T22:10:11.2445198Z [ RUN ] LazyOpsTest.TestAvgPool3DNoBatchBackward 2023-01-11T22:10:11.2540770Z [ OK ] LazyOpsTest.TestAvgPool3DNoBatchBackward (9 ms) 2023-01-11T22:10:11.2541219Z [ RUN ] LazyOpsTest.TestAdaptiveAvgPool3DNoBatchBackward 2023-01-11T22:10:11.2577203Z [ OK ] LazyOpsTest.TestAdaptiveAvgPool3DNoBatchBackward (3 ms) 2023-01-11T22:10:11.2577738Z [ RUN ] LazyOpsTest.TestAdaptiveAvgPool3DBackward 2023-01-11T22:10:11.2636460Z [ OK ] LazyOpsTest.TestAdaptiveAvgPool3DBackward (5 ms) 2023-01-11T22:10:11.2637011Z [ RUN ] 
LazyOpsTest.TestAdaptiveAvgPool2DBackward 2023-01-11T22:10:11.2660874Z [ OK ] LazyOpsTest.TestAdaptiveAvgPool2DBackward (2 ms) 2023-01-11T22:10:11.2661417Z [ RUN ] LazyOpsTest.TestAdaptiveAvgPool2DNoBatchBackward 2023-01-11T22:10:11.2684104Z [ OK ] LazyOpsTest.TestAdaptiveAvgPool2DNoBatchBackward (2 ms) 2023-01-11T22:10:11.2684569Z [ RUN ] LazyOpsTest.TestConv2D 2023-01-11T22:10:11.3791288Z [ OK ] LazyOpsTest.TestConv2D (110 ms) 2023-01-11T22:10:11.3791707Z [ RUN ] LazyOpsTest.TestConv2DBackward 2023-01-11T22:10:11.7894318Z [ OK ] LazyOpsTest.TestConv2DBackward (410 ms) 2023-01-11T22:10:11.7895242Z [ RUN ] LazyOpsTest.TestTransposedConv2DBackward 2023-01-11T22:10:12.0983864Z [ OK ] LazyOpsTest.TestTransposedConv2DBackward (308 ms) 2023-01-11T22:10:12.0984821Z [ RUN ] LazyOpsTest.TestConv3DBackward 2023-01-11T22:10:12.3835600Z [ OK ] LazyOpsTest.TestConv3DBackward (285 ms) 2023-01-11T22:10:12.3836518Z [ RUN ] LazyOpsTest.TestTransposedConv3DBackward 2023-01-11T22:10:13.0395868Z [ OK ] LazyOpsTest.TestTransposedConv3DBackward (655 ms) 2023-01-11T22:10:13.0396868Z [ RUN ] LazyOpsTest.TestMaxPool2DBackward 2023-01-11T22:10:13.0495671Z [ OK ] LazyOpsTest.TestMaxPool2DBackward (10 ms) 2023-01-11T22:10:13.0496321Z [ RUN ] LazyOpsTest.TestMaxPool3DBackward 2023-01-11T22:10:13.0550577Z [ OK ] LazyOpsTest.TestMaxPool3DBackward (5 ms) 2023-01-11T22:10:13.0551101Z [ RUN ] LazyOpsTest.TestMaxPool2DNoBatchBackward 2023-01-11T22:10:13.0648163Z [ OK ] LazyOpsTest.TestMaxPool2DNoBatchBackward (9 ms) 2023-01-11T22:10:13.0648674Z [ RUN ] LazyOpsTest.TestMaxPool3DNoBatchBackward 2023-01-11T22:10:13.0700728Z [ OK ] LazyOpsTest.TestMaxPool3DNoBatchBackward (5 ms) 2023-01-11T22:10:13.0701204Z [ RUN ] LazyOpsTest.TestMaxUnpool2DBackward 2023-01-11T22:10:13.0879083Z [ OK ] LazyOpsTest.TestMaxUnpool2DBackward (17 ms) 2023-01-11T22:10:13.0879584Z [ RUN ] LazyOpsTest.TestMaxUnpool3DBackward 2023-01-11T22:10:13.1025854Z [ OK ] LazyOpsTest.TestMaxUnpool3DBackward (14 ms) 2023-01-11T22:10:13.1026293Z [ RUN ] LazyOpsTest.TestTanhBackward 2023-01-11T22:10:13.1037107Z [ OK ] LazyOpsTest.TestTanhBackward (1 ms) 2023-01-11T22:10:13.1037563Z [ RUN ] LazyOpsTest.TestSigmoidBackward 2023-01-11T22:10:13.1047147Z [ OK ] LazyOpsTest.TestSigmoidBackward (1 ms) 2023-01-11T22:10:13.1047609Z [ RUN ] LazyOpsTest.TestLogSigmoidBackward 2023-01-11T22:10:13.1057565Z [ OK ] LazyOpsTest.TestLogSigmoidBackward (1 ms) 2023-01-11T22:10:13.1058011Z [ RUN ] LazyOpsTest.TestLogSoftmaxBackward 2023-01-11T22:10:13.1138360Z [ OK ] LazyOpsTest.TestLogSoftmaxBackward (8 ms) 2023-01-11T22:10:13.1138804Z [ RUN ] LazyOpsTest.TestSoftmaxBackward 2023-01-11T22:10:13.1217069Z [ OK ] LazyOpsTest.TestSoftmaxBackward (7 ms) 2023-01-11T22:10:13.1217499Z [ RUN ] LazyOpsTest.TestSoftplusBackward 2023-01-11T22:10:13.1227080Z [ OK ] LazyOpsTest.TestSoftplusBackward (0 ms) 2023-01-11T22:10:13.1227500Z [ RUN ] LazyOpsTest.TestReluBackward 2023-01-11T22:10:13.1236888Z [ OK ] LazyOpsTest.TestReluBackward (0 ms) 2023-01-11T22:10:13.1237294Z [ RUN ] LazyOpsTest.TestRreluBackward 2023-01-11T22:10:13.1247885Z [ OK ] LazyOpsTest.TestRreluBackward (1 ms) 2023-01-11T22:10:13.1248309Z [ RUN ] LazyOpsTest.TestHardshrinkBackward 2023-01-11T22:10:13.1256722Z [ OK ] LazyOpsTest.TestHardshrinkBackward (0 ms) 2023-01-11T22:10:13.1257102Z [ RUN ] LazyOpsTest.TestSoftshrinkBackward 2023-01-11T22:10:13.1265252Z [ OK ] LazyOpsTest.TestSoftshrinkBackward (0 ms) 2023-01-11T22:10:13.1265598Z [ RUN ] LazyOpsTest.TestHardtanhBackward 2023-01-11T22:10:13.1270007Z [ OK ] 
LazyOpsTest.TestHardtanhBackward (0 ms) 2023-01-11T22:10:13.1270395Z [ RUN ] LazyOpsTest.TestEluBackward 2023-01-11T22:10:13.1280231Z [ OK ] LazyOpsTest.TestEluBackward (0 ms) 2023-01-11T22:10:13.1280539Z [ RUN ] LazyOpsTest.TestGeluBackward 2023-01-11T22:10:13.1297157Z [ OK ] LazyOpsTest.TestGeluBackward (1 ms) 2023-01-11T22:10:13.1297584Z [ RUN ] LazyOpsTest.TestLeakyReluBackward 2023-01-11T22:10:13.1308287Z [ OK ] LazyOpsTest.TestLeakyReluBackward (1 ms) 2023-01-11T22:10:13.1308721Z [ RUN ] LazyOpsTest.TestTransposeBackward 2023-01-11T22:10:13.1316880Z [ OK ] LazyOpsTest.TestTransposeBackward (0 ms) 2023-01-11T22:10:13.1317318Z [ RUN ] LazyOpsTest.TestAddMatMulBackward 2023-01-11T22:10:13.1388389Z [ OK ] LazyOpsTest.TestAddMatMulBackward (7 ms) 2023-01-11T22:10:13.1388978Z [ RUN ] LazyOpsTest.TestBinaryCrossEntropyBackward 2023-01-11T22:10:13.1452456Z [ OK ] LazyOpsTest.TestBinaryCrossEntropyBackward (6 ms) 2023-01-11T22:10:13.1453133Z [ RUN ] LazyOpsTest.TestNllLossBackward 2023-01-11T22:10:13.1453441Z /var/lib/jenkins/workspace/test/cpp/lazy/test_lazy_ops.cpp:10954: Skipped 2023-01-11T22:10:13.1453611Z 2023-01-11T22:10:13.1453807Z [ SKIPPED ] LazyOpsTest.TestNllLossBackward (0 ms) 2023-01-11T22:10:13.1454145Z [ RUN ] LazyOpsTest.TestNllLoss2dBackward 2023-01-11T22:10:13.1737970Z [ OK ] LazyOpsTest.TestNllLoss2dBackward (28 ms) 2023-01-11T22:10:13.1738432Z [ RUN ] LazyOpsTest.TestSmoothL1LossBackward 2023-01-11T22:10:13.1799167Z [ OK ] LazyOpsTest.TestSmoothL1LossBackward (6 ms) 2023-01-11T22:10:13.1799625Z [ RUN ] LazyOpsTest.TestViewBackward 2023-01-11T22:10:13.1819444Z [ OK ] LazyOpsTest.TestViewBackward (2 ms) 2023-01-11T22:10:13.1819833Z [ RUN ] LazyOpsTest.TestBatchNorm2DBackward 2023-01-11T22:10:13.1870276Z [ OK ] LazyOpsTest.TestBatchNorm2DBackward (5 ms) 2023-01-11T22:10:13.1870734Z [ RUN ] LazyOpsTest.TestBatchNorm3DBackward 2023-01-11T22:10:13.1920579Z [ OK ] LazyOpsTest.TestBatchNorm3DBackward (5 ms) 2023-01-11T22:10:13.1921038Z [ RUN ] LazyOpsTest.TestBCEWithLogitsBackward 2023-01-11T22:10:13.2275500Z [ OK ] LazyOpsTest.TestBCEWithLogitsBackward (35 ms) 2023-01-11T22:10:13.2275986Z [ RUN ] LazyOpsTest.TestKlDivBackward 2023-01-11T22:10:13.2357442Z [ OK ] LazyOpsTest.TestKlDivBackward (8 ms) 2023-01-11T22:10:13.2357774Z [ RUN ] LazyOpsTest.TestEmbeddingBackward 2023-01-11T22:10:13.3071685Z [ OK ] LazyOpsTest.TestEmbeddingBackward (71 ms) 2023-01-11T22:10:13.3072428Z [ RUN ] LazyOpsTest.TestAmpForeachNonFiniteCheckAndUnscale 2023-01-11T22:10:13.3072932Z /var/lib/jenkins/workspace/test/cpp/lazy/test_lazy_ops.cpp:11351: Skipped 2023-01-11T22:10:13.3073093Z 2023-01-11T22:10:13.3073404Z [ SKIPPED ] LazyOpsTest.TestAmpForeachNonFiniteCheckAndUnscale (0 ms) 2023-01-11T22:10:13.3073837Z [ RUN ] LazyOpsTest.TestAmpUpdateScale 2023-01-11T22:10:13.3074219Z /var/lib/jenkins/workspace/test/cpp/lazy/test_lazy_ops.cpp:11400: Skipped 2023-01-11T22:10:13.3074478Z 2023-01-11T22:10:13.3074670Z [ SKIPPED ] LazyOpsTest.TestAmpUpdateScale (0 ms) 2023-01-11T22:10:13.3075299Z [ RUN ] LazyOpsTest.TestEarlySyncLiveTensors 2023-01-11T22:10:13.3075759Z [ OK ] LazyOpsTest.TestEarlySyncLiveTensors (0 ms) 2023-01-11T22:10:13.3076078Z [ RUN ] LazyOpsTest.TestLerp 2023-01-11T22:10:13.3076468Z [ OK ] LazyOpsTest.TestLerp (0 ms) 2023-01-11T22:10:13.3076772Z [ RUN ] LazyOpsTest.TestLerpScalar 2023-01-11T22:10:13.3077155Z [ OK ] LazyOpsTest.TestLerpScalar (0 ms) 2023-01-11T22:10:13.3077497Z [ RUN ] LazyOpsTest.TestLerpInplace 2023-01-11T22:10:13.3077970Z [ OK ] LazyOpsTest.TestLerpInplace (0 ms) 
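
(Editor's aside on the two deprecation warnings emitted earlier in this LazyOpsTest run, during TestWhere (TensorCompare.cpp:493, uint8 condition tensor) and TestMeshgrid (TensorShape.cpp:3452, missing indexing argument). Both warnings point at the non-deprecated call pattern; the lines below are a minimal, hedged Python sketch of those recommendations, with illustrative tensor names only.)

    import torch

    # TestWhere warning: a uint8 (byte) condition tensor is deprecated.
    # Build or cast the condition as bool before calling torch.where.
    cond = torch.tensor([1, 0, 1], dtype=torch.uint8)
    x = torch.tensor([1.0, 2.0, 3.0])
    y = torch.zeros(3)
    out = torch.where(cond.bool(), x, y)   # bool condition, no warning

    # TestMeshgrid warning: pass the indexing argument explicitly.
    a = torch.arange(3)
    b = torch.arange(4)
    grid_i, grid_j = torch.meshgrid(a, b, indexing="ij")  # matrix ("ij") indexing
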
2023-01-11T22:10:13.3078388Z [ RUN ] LazyOpsTest.TestLerpScalarInplace 2023-01-11T22:10:13.3079215Z [ OK ] LazyOpsTest.TestLerpScalarInplace (0 ms) 2023-01-11T22:10:13.3079707Z [ RUN ] LazyOpsTest.TestLerpOut 2023-01-11T22:10:13.3080476Z [ OK ] LazyOpsTest.TestLerpOut (0 ms) 2023-01-11T22:10:13.3081120Z [ RUN ] LazyOpsTest.TestLerpScalarOut 2023-01-11T22:10:13.3081842Z [ OK ] LazyOpsTest.TestLerpScalarOut (0 ms) 2023-01-11T22:10:13.3082200Z [ RUN ] LazyOpsTest.IsAliasOf 2023-01-11T22:10:13.3082496Z [ OK ] LazyOpsTest.IsAliasOf (0 ms) 2023-01-11T22:10:13.3082866Z [----------] 574 tests from LazyOpsTest (3556 ms total) 2023-01-11T22:10:13.3083026Z 2023-01-11T22:10:13.3083196Z [----------] Global test environment tear-down 2023-01-11T22:10:13.3141432Z [==========] 611 tests from 10 test suites ran. (3572 ms total) 2023-01-11T22:10:13.3141764Z [ PASSED ] 607 tests. 2023-01-11T22:10:13.3142020Z [ SKIPPED ] 4 tests, listed below: 2023-01-11T22:10:13.3142301Z [ SKIPPED ] LazyOpsTest.TestNllLoss 2023-01-11T22:10:13.3142762Z [ SKIPPED ] LazyOpsTest.TestNllLossBackward 2023-01-11T22:10:13.3143147Z [ SKIPPED ] LazyOpsTest.TestAmpForeachNonFiniteCheckAndUnscale 2023-01-11T22:10:13.3143541Z [ SKIPPED ] LazyOpsTest.TestAmpUpdateScale 2023-01-11T22:10:13.4044042Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *-tsan* ]] 2023-01-11T22:10:13.4044337Z + python test/cpp/jit/tests_setup.py shutdown 2023-01-11T22:10:14.7532911Z + wait 2023-01-11T22:10:14.7533309Z + OMP_NUM_THREADS=2 2023-01-11T22:10:14.7533671Z + TORCH_CPP_TEST_MNIST_PATH=test/cpp/api/mnist 2023-01-11T22:10:14.7534307Z + /opt/conda/lib/python3.10/site-packages/torch/bin/test_api '--gtest_filter=-IMethodTest.*' --gtest_output=xml:test/test-reports/cpp-unittest/test_libtorch/test_api.xml 2023-01-11T22:10:15.1417967Z CUDA not available. Disabling CUDA and MultiCUDA tests 2023-01-11T22:10:15.1425488Z Note: Google Test filter = -IMethodTest.*:*_CUDA:*_MultiCUDA 2023-01-11T22:10:15.1426094Z [==========] Running 992 tests from 48 test suites. 2023-01-11T22:10:15.1426534Z [----------] Global test environment set-up. 2023-01-11T22:10:15.1426827Z [----------] 9 tests from AutogradAPITests 2023-01-11T22:10:15.1427169Z [ RUN ] AutogradAPITests.BackwardSimpleTest 2023-01-11T22:10:15.1437041Z [ OK ] AutogradAPITests.BackwardSimpleTest (1 ms) 2023-01-11T22:10:15.1437393Z [ RUN ] AutogradAPITests.BackwardTest 2023-01-11T22:10:15.1438007Z [W engine.cpp:1134] Warning: Using backward() with create_graph=True will create a reference cycle between the parameter and its gradient which can cause a memory leak. We recommend using autograd.grad when creating the graph to avoid this. If you have to use this function, make sure to reset the .grad fields of your parameters to None after use to break the cycle and avoid the leak. (function operator()) 2023-01-11T22:10:15.1440081Z [ OK ] AutogradAPITests.BackwardTest (0 ms) 2023-01-11T22:10:15.1440517Z [ RUN ] AutogradAPITests.GradSimpleTest 2023-01-11T22:10:15.1441773Z [ OK ] AutogradAPITests.GradSimpleTest (0 ms) 2023-01-11T22:10:15.1442155Z [ RUN ] AutogradAPITests.GradTest 2023-01-11T22:10:15.1454043Z [ OK ] AutogradAPITests.GradTest (1 ms) 2023-01-11T22:10:15.1454640Z [ RUN ] AutogradAPITests.GradNonLeafTest 2023-01-11T22:10:15.1457041Z [W TensorBody.h:485] Warning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. 
If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (function grad) (the identical TensorBody.h:485 warning is emitted three more times, at 22:10:15.1459176Z, 22:10:15.1461300Z and 22:10:15.1463803Z) 2023-01-11T22:10:15.1464533Z [ OK ] AutogradAPITests.GradNonLeafTest (0 ms) 2023-01-11T22:10:15.1464875Z [ RUN ] AutogradAPITests.GradUnreachableTest 2023-01-11T22:10:15.1519168Z [ OK ] AutogradAPITests.GradUnreachableTest (5 ms) 2023-01-11T22:10:15.1519728Z [ RUN ] AutogradAPITests.EmptyInput 2023-01-11T22:10:15.1546165Z [ OK ] AutogradAPITests.EmptyInput (2 ms) 2023-01-11T22:10:15.1546731Z [ RUN ] AutogradAPITests.RetainGrad 2023-01-11T22:10:15.1549955Z [ OK ] AutogradAPITests.RetainGrad (0 ms) 2023-01-11T22:10:15.1550540Z [ RUN ] AutogradAPITests.AnomalyMode 2023-01-11T22:10:15.1550931Z [W anomaly_mode.cpp:27] Warning: This mode should be enabled only for debugging as the different tests will slow down your program execution. 
(function operator()) 2023-01-11T22:10:15.3167970Z [ OK ] AutogradAPITests.AnomalyMode (161 ms) 2023-01-11T22:10:15.3168507Z [----------] 9 tests from AutogradAPITests (174 ms total) 2023-01-11T22:10:15.3168674Z 2023-01-11T22:10:15.3168850Z [----------] 33 tests from CustomAutogradTest 2023-01-11T22:10:15.3169207Z [ RUN ] CustomAutogradTest.GradUnreachableDiscoveryTest 2023-01-11T22:10:15.3177348Z [ OK ] CustomAutogradTest.GradUnreachableDiscoveryTest (1 ms) 2023-01-11T22:10:15.3177831Z [ RUN ] CustomAutogradTest.CustomFunction 2023-01-11T22:10:15.3179931Z [ OK ] CustomAutogradTest.CustomFunction (0 ms) 2023-01-11T22:10:15.3180352Z [ RUN ] CustomAutogradTest.CustomFunctionWithTensorList 2023-01-11T22:10:15.3181768Z [ OK ] CustomAutogradTest.CustomFunctionWithTensorList (0 ms) 2023-01-11T22:10:15.3182169Z [ RUN ] CustomAutogradTest.GraphTaskTrimEdges 2023-01-11T22:10:15.3185326Z [ OK ] CustomAutogradTest.GraphTaskTrimEdges (0 ms) 2023-01-11T22:10:15.3185952Z [ RUN ] CustomAutogradTest.FunctionReturnsInput 2023-01-11T22:10:15.3186590Z [ OK ] CustomAutogradTest.FunctionReturnsInput (0 ms) 2023-01-11T22:10:15.3186988Z [ RUN ] CustomAutogradTest.FunctionReturnsUndefined 2023-01-11T22:10:15.3188075Z [ OK ] CustomAutogradTest.FunctionReturnsUndefined (0 ms) 2023-01-11T22:10:15.3188477Z [ RUN ] CustomAutogradTest.MaterializeGrads 2023-01-11T22:10:15.3190357Z [ OK ] CustomAutogradTest.MaterializeGrads (0 ms) 2023-01-11T22:10:15.3191008Z [ RUN ] CustomAutogradTest.DontMaterializeGrads 2023-01-11T22:10:15.3191665Z [ OK ] CustomAutogradTest.DontMaterializeGrads (0 ms) 2023-01-11T22:10:15.3192041Z [ RUN ] CustomAutogradTest.NoGradCustomFunction 2023-01-11T22:10:15.3192416Z [ OK ] CustomAutogradTest.NoGradCustomFunction (0 ms) 2023-01-11T22:10:15.3192870Z [ RUN ] CustomAutogradTest.MarkDirty 2023-01-11T22:10:15.3198223Z [ OK ] CustomAutogradTest.MarkDirty (0 ms) 2023-01-11T22:10:15.3198679Z [ RUN ] CustomAutogradTest.MarkNonDifferentiable 2023-01-11T22:10:15.3199756Z [ OK ] CustomAutogradTest.MarkNonDifferentiable (0 ms) 2023-01-11T22:10:15.3200145Z [ RUN ] CustomAutogradTest.MarkNonDifferentiableMixed 2023-01-11T22:10:15.3202135Z [ OK ] CustomAutogradTest.MarkNonDifferentiableMixed (0 ms) 2023-01-11T22:10:15.3202652Z [ RUN ] CustomAutogradTest.MarkNonDifferentiableNone 2023-01-11T22:10:15.3203664Z [ OK ] CustomAutogradTest.MarkNonDifferentiableNone (0 ms) 2023-01-11T22:10:15.3204057Z [ RUN ] CustomAutogradTest.ReturnLeafInplace 2023-01-11T22:10:15.3204488Z [ OK ] CustomAutogradTest.ReturnLeafInplace (0 ms) 2023-01-11T22:10:15.3204933Z [ RUN ] CustomAutogradTest.ReturnDuplicateInplace 2023-01-11T22:10:15.3236230Z [ OK ] CustomAutogradTest.ReturnDuplicateInplace (3 ms) 2023-01-11T22:10:15.3236911Z [ RUN ] CustomAutogradTest.ReturnDuplicate 2023-01-11T22:10:15.3237482Z [ OK ] CustomAutogradTest.ReturnDuplicate (0 ms) 2023-01-11T22:10:15.3238038Z [ RUN ] CustomAutogradTest.SaveEmptyForBackward 2023-01-11T22:10:15.3238661Z [ OK ] CustomAutogradTest.SaveEmptyForBackward (0 ms) 2023-01-11T22:10:15.3239027Z [ RUN ] CustomAutogradTest.InvalidGradients 2023-01-11T22:10:15.3297421Z [ OK ] CustomAutogradTest.InvalidGradients (5 ms) 2023-01-11T22:10:15.3298034Z [ RUN ] CustomAutogradTest.NoGradInput 2023-01-11T22:10:15.3298563Z [ OK ] CustomAutogradTest.NoGradInput (0 ms) 2023-01-11T22:10:15.3298992Z [ RUN ] CustomAutogradTest.TooManyGrads 2023-01-11T22:10:15.3299329Z [ OK ] CustomAutogradTest.TooManyGrads (0 ms) 2023-01-11T22:10:15.3299684Z [ RUN ] CustomAutogradTest.DepNoGrad 2023-01-11T22:10:15.3300082Z [ OK ] 
CustomAutogradTest.DepNoGrad (0 ms) 2023-01-11T22:10:15.3300399Z [ RUN ] CustomAutogradTest.Reentrant 2023-01-11T22:10:15.3300780Z [ OK ] CustomAutogradTest.Reentrant (0 ms) 2023-01-11T22:10:15.3301105Z [ RUN ] CustomAutogradTest.DeepReentrant 2023-01-11T22:10:15.7688600Z [ OK ] CustomAutogradTest.DeepReentrant (438 ms) 2023-01-11T22:10:15.7689006Z [ RUN ] CustomAutogradTest.ReentrantPriority 2023-01-11T22:10:15.7693030Z [ OK ] CustomAutogradTest.ReentrantPriority (0 ms) 2023-01-11T22:10:15.7693431Z [ RUN ] CustomAutogradTest.Hooks 2023-01-11T22:10:15.7729111Z [ OK ] CustomAutogradTest.Hooks (3 ms) 2023-01-11T22:10:15.7729428Z [ RUN ] CustomAutogradTest.HooksInplace 2023-01-11T22:10:15.7731360Z [ OK ] CustomAutogradTest.HooksInplace (0 ms) 2023-01-11T22:10:15.7731991Z [ RUN ] CustomAutogradTest.HooksInplaceWithRetainsGrad 2023-01-11T22:10:15.7733758Z [ OK ] CustomAutogradTest.HooksInplaceWithRetainsGrad (0 ms) 2023-01-11T22:10:15.7734187Z [ RUN ] CustomAutogradTest.HooksInplaceTwiceWithRetainsGrad 2023-01-11T22:10:15.7736320Z [ OK ] CustomAutogradTest.HooksInplaceTwiceWithRetainsGrad (0 ms) 2023-01-11T22:10:15.7736803Z [ RUN ] CustomAutogradTest.HookNone 2023-01-11T22:10:15.7737318Z [ OK ] CustomAutogradTest.HookNone (0 ms) 2023-01-11T22:10:15.7737708Z [ RUN ] CustomAutogradTest.BackwardWithInputs 2023-01-11T22:10:15.7739096Z [ OK ] CustomAutogradTest.BackwardWithInputs (0 ms) 2023-01-11T22:10:15.7739573Z [ RUN ] CustomAutogradTest.BackwardWithEmptyInputs 2023-01-11T22:10:15.7751344Z [ OK ] CustomAutogradTest.BackwardWithEmptyInputs (1 ms) 2023-01-11T22:10:15.7751901Z [ RUN ] CustomAutogradTest.BackwardWithNonLeafInputs 2023-01-11T22:10:15.7753762Z [ OK ] CustomAutogradTest.BackwardWithNonLeafInputs (0 ms) 2023-01-11T22:10:15.7754345Z [ RUN ] CustomAutogradTest.BackwardWithCreateGraphWarns 2023-01-11T22:10:15.7754958Z [ OK ] CustomAutogradTest.BackwardWithCreateGraphWarns (0 ms) 2023-01-11T22:10:15.7755376Z [----------] 33 tests from CustomAutogradTest (458 ms total) 2023-01-11T22:10:15.7755545Z 2023-01-11T22:10:15.7755764Z [----------] 13 tests from TestAutogradNotImplementedFallback 2023-01-11T22:10:15.7756195Z [ RUN ] TestAutogradNotImplementedFallback.RetSingleNonTensor 2023-01-11T22:10:15.7758713Z [ OK ] TestAutogradNotImplementedFallback.RetSingleNonTensor (0 ms) 2023-01-11T22:10:15.7759163Z [ RUN ] TestAutogradNotImplementedFallback.InplaceOp 2023-01-11T22:10:15.7804085Z [ OK ] TestAutogradNotImplementedFallback.InplaceOp (4 ms) 2023-01-11T22:10:15.7804520Z [ RUN ] TestAutogradNotImplementedFallback.DoubleInplaceOp 2023-01-11T22:10:15.7841809Z [ OK ] TestAutogradNotImplementedFallback.DoubleInplaceOp (3 ms) 2023-01-11T22:10:15.7842245Z [ RUN ] TestAutogradNotImplementedFallback.OptOp 2023-01-11T22:10:15.7844783Z [ OK ] TestAutogradNotImplementedFallback.OptOp (0 ms) 2023-01-11T22:10:15.7845209Z [ RUN ] TestAutogradNotImplementedFallback.OutOfPlaceAddition 2023-01-11T22:10:15.7878635Z [ OK ] TestAutogradNotImplementedFallback.OutOfPlaceAddition (3 ms) 2023-01-11T22:10:15.7879111Z [ RUN ] TestAutogradNotImplementedFallback.RetTupleNonTensor 2023-01-11T22:10:15.7912313Z [ OK ] TestAutogradNotImplementedFallback.RetTupleNonTensor (3 ms) 2023-01-11T22:10:15.7912735Z [ RUN ] TestAutogradNotImplementedFallback.ViewOp 2023-01-11T22:10:15.7977275Z [ OK ] TestAutogradNotImplementedFallback.ViewOp (6 ms) 2023-01-11T22:10:15.7977805Z [ RUN ] TestAutogradNotImplementedFallback.ViewOpWithExtraArg 2023-01-11T22:10:15.8010922Z [ OK ] TestAutogradNotImplementedFallback.ViewOpWithExtraArg (3 ms) 
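
(Editor's aside on the three autograd warnings printed above during AutogradAPITests: backward() with create_graph=True (engine.cpp:1134), .grad access on a non-leaf Tensor (TensorBody.h:485), and anomaly mode (anomaly_mode.cpp:27). Each warning names a recommended usage; the lines below are a minimal, hedged Python sketch of those recommendations, with illustrative tensor names only.)

    import torch

    x = torch.ones(2, requires_grad=True)
    y = (x * x).sum()

    # Instead of y.backward(create_graph=True), which the engine.cpp warning
    # flags for creating a parameter/gradient reference cycle, take the
    # graph-retaining gradient directly:
    (grad_x,) = torch.autograd.grad(y, x, create_graph=True)

    # Non-leaf tensors need retain_grad() before their .grad is populated.
    z = x * 2          # non-leaf tensor
    z.retain_grad()
    z.sum().backward()
    print(z.grad)      # populated only because retain_grad() was called

    # Anomaly detection is for debugging only; it slows execution down.
    with torch.autograd.detect_anomaly():
        torch.autograd.grad((x * 3).sum(), x)
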
2023-01-11T22:10:15.8011396Z [ RUN ] TestAutogradNotImplementedFallback.RetTensorVectorView 2023-01-11T22:10:15.8012937Z [ OK ] TestAutogradNotImplementedFallback.RetTensorVectorView (0 ms) 2023-01-11T22:10:15.8013397Z [ RUN ] TestAutogradNotImplementedFallback.DoubleViewOP 2023-01-11T22:10:15.8032457Z [ OK ] TestAutogradNotImplementedFallback.DoubleViewOP (1 ms) 2023-01-11T22:10:15.8032929Z [ RUN ] TestAutogradNotImplementedFallback.NonFirstViewOP 2023-01-11T22:10:15.8063046Z [ OK ] TestAutogradNotImplementedFallback.NonFirstViewOP (3 ms) 2023-01-11T22:10:15.8063535Z [ RUN ] TestAutogradNotImplementedFallback.RetTensorVector 2023-01-11T22:10:15.8096606Z [ OK ] TestAutogradNotImplementedFallback.RetTensorVector (3 ms) 2023-01-11T22:10:15.8097279Z [ RUN ] TestAutogradNotImplementedFallback.TensorlistOp 2023-01-11T22:10:15.8122208Z [ OK ] TestAutogradNotImplementedFallback.TensorlistOp (2 ms) 2023-01-11T22:10:15.8122914Z [----------] 13 tests from TestAutogradNotImplementedFallback (36 ms total) 2023-01-11T22:10:15.8123297Z 2023-01-11T22:10:15.8123549Z [----------] 18 tests from AnyModuleTest 2023-01-11T22:10:15.8123865Z [ RUN ] AnyModuleTest.SimpleReturnType 2023-01-11T22:10:15.8124205Z [ OK ] AnyModuleTest.SimpleReturnType (0 ms) 2023-01-11T22:10:15.8124602Z [ RUN ] AnyModuleTest.SimpleReturnTypeAndSingleArgument 2023-01-11T22:10:15.8125022Z [ OK ] AnyModuleTest.SimpleReturnTypeAndSingleArgument (0 ms) 2023-01-11T22:10:15.8125452Z [ RUN ] AnyModuleTest.StringLiteralReturnTypeAndArgument 2023-01-11T22:10:15.8125891Z [ OK ] AnyModuleTest.StringLiteralReturnTypeAndArgument (0 ms) 2023-01-11T22:10:15.8126514Z [ RUN ] AnyModuleTest.StringReturnTypeWithConstArgument 2023-01-11T22:10:15.8126953Z [ OK ] AnyModuleTest.StringReturnTypeWithConstArgument (0 ms) 2023-01-11T22:10:15.8127473Z [ RUN ] AnyModuleTest.TensorReturnTypeAndStringArgumentsWithFunkyQualifications 2023-01-11T22:10:15.8128079Z [ OK ] AnyModuleTest.TensorReturnTypeAndStringArgumentsWithFunkyQualifications (0 ms) 2023-01-11T22:10:15.8128529Z [ RUN ] AnyModuleTest.WrongArgumentType 2023-01-11T22:10:15.8136225Z [ OK ] AnyModuleTest.WrongArgumentType (1 ms) 2023-01-11T22:10:15.8136591Z [ RUN ] AnyModuleTest.WrongNumberOfArguments 2023-01-11T22:10:15.8173645Z [ OK ] AnyModuleTest.WrongNumberOfArguments (3 ms) 2023-01-11T22:10:15.8174183Z [ RUN ] AnyModuleTest.PassingArgumentsToModuleWithDefaultArgumentsInForwardMethod 2023-01-11T22:10:15.8257644Z [ OK ] AnyModuleTest.PassingArgumentsToModuleWithDefaultArgumentsInForwardMethod (8 ms) 2023-01-11T22:10:15.8258179Z [ RUN ] AnyModuleTest.GetWithCorrectTypeSucceeds 2023-01-11T22:10:15.8258564Z [ OK ] AnyModuleTest.GetWithCorrectTypeSucceeds (0 ms) 2023-01-11T22:10:15.8258954Z [ RUN ] AnyModuleTest.GetWithIncorrectTypeThrows 2023-01-11T22:10:15.8267801Z [ OK ] AnyModuleTest.GetWithIncorrectTypeThrows (1 ms) 2023-01-11T22:10:15.8268484Z [ RUN ] AnyModuleTest.PtrWithBaseClassSucceeds 2023-01-11T22:10:15.8268861Z [ OK ] AnyModuleTest.PtrWithBaseClassSucceeds (0 ms) 2023-01-11T22:10:15.8269253Z [ RUN ] AnyModuleTest.PtrWithGoodDowncastSuccceeds 2023-01-11T22:10:15.8269654Z [ OK ] AnyModuleTest.PtrWithGoodDowncastSuccceeds (0 ms) 2023-01-11T22:10:15.8270028Z [ RUN ] AnyModuleTest.PtrWithBadDowncastThrows 2023-01-11T22:10:15.8284063Z [ OK ] AnyModuleTest.PtrWithBadDowncastThrows (1 ms) 2023-01-11T22:10:15.8284734Z [ RUN ] AnyModuleTest.DefaultStateIsEmpty 2023-01-11T22:10:15.8285089Z [ OK ] AnyModuleTest.DefaultStateIsEmpty (0 ms) 2023-01-11T22:10:15.8285462Z [ RUN ] 
AnyModuleTest.AllMethodsThrowForEmptyAnyModule 2023-01-11T22:10:15.8336719Z [ OK ] AnyModuleTest.AllMethodsThrowForEmptyAnyModule (5 ms) 2023-01-11T22:10:15.8337393Z [ RUN ] AnyModuleTest.CanMoveAssignDifferentModules 2023-01-11T22:10:15.8338083Z [ OK ] AnyModuleTest.CanMoveAssignDifferentModules (0 ms) 2023-01-11T22:10:15.8338795Z [ RUN ] AnyModuleTest.ConstructsFromModuleHolder 2023-01-11T22:10:15.8339506Z [ OK ] AnyModuleTest.ConstructsFromModuleHolder (0 ms) 2023-01-11T22:10:15.8340045Z [ RUN ] AnyModuleTest.ConvertsVariableToTensorCorrectly 2023-01-11T22:10:15.8340463Z [ OK ] AnyModuleTest.ConvertsVariableToTensorCorrectly (0 ms) 2023-01-11T22:10:15.8341059Z [----------] 18 tests from AnyModuleTest (21 ms total) 2023-01-11T22:10:15.8341217Z 2023-01-11T22:10:15.8341369Z [----------] 12 tests from AnyValueTest 2023-01-11T22:10:15.8341734Z [ RUN ] AnyValueTest.CorrectlyAccessesIntWhenCorrectType 2023-01-11T22:10:15.8342163Z [ OK ] AnyValueTest.CorrectlyAccessesIntWhenCorrectType (0 ms) 2023-01-11T22:10:15.8342819Z [ RUN ] AnyValueTest.CorrectlyAccessesStringLiteralWhenCorrectType 2023-01-11T22:10:15.8343326Z [ OK ] AnyValueTest.CorrectlyAccessesStringLiteralWhenCorrectType (0 ms) 2023-01-11T22:10:15.8343787Z [ RUN ] AnyValueTest.CorrectlyAccessesStringWhenCorrectType 2023-01-11T22:10:15.8344243Z [ OK ] AnyValueTest.CorrectlyAccessesStringWhenCorrectType (0 ms) 2023-01-11T22:10:15.8344705Z [ RUN ] AnyValueTest.CorrectlyAccessesPointersWhenCorrectType 2023-01-11T22:10:15.8345187Z [ OK ] AnyValueTest.CorrectlyAccessesPointersWhenCorrectType (0 ms) 2023-01-11T22:10:15.8345718Z [ RUN ] AnyValueTest.CorrectlyAccessesReferencesWhenCorrectType 2023-01-11T22:10:15.8346210Z [ OK ] AnyValueTest.CorrectlyAccessesReferencesWhenCorrectType (0 ms) 2023-01-11T22:10:15.8346663Z [ RUN ] AnyValueTest.TryGetReturnsNullptrForTheWrongType 2023-01-11T22:10:15.8347088Z [ OK ] AnyValueTest.TryGetReturnsNullptrForTheWrongType (0 ms) 2023-01-11T22:10:15.8347485Z [ RUN ] AnyValueTest.GetThrowsForTheWrongType 2023-01-11T22:10:15.8361348Z [ OK ] AnyValueTest.GetThrowsForTheWrongType (2 ms) 2023-01-11T22:10:15.8362050Z [ RUN ] AnyValueTest.MoveConstructionIsAllowed 2023-01-11T22:10:15.8362440Z [ OK ] AnyValueTest.MoveConstructionIsAllowed (0 ms) 2023-01-11T22:10:15.8362800Z [ RUN ] AnyValueTest.MoveAssignmentIsAllowed 2023-01-11T22:10:15.8363163Z [ OK ] AnyValueTest.MoveAssignmentIsAllowed (0 ms) 2023-01-11T22:10:15.8363518Z [ RUN ] AnyValueTest.TypeInfoIsCorrectForInt 2023-01-11T22:10:15.8363879Z [ OK ] AnyValueTest.TypeInfoIsCorrectForInt (0 ms) 2023-01-11T22:10:15.8364271Z [ RUN ] AnyValueTest.TypeInfoIsCorrectForStringLiteral 2023-01-11T22:10:15.8364699Z [ OK ] AnyValueTest.TypeInfoIsCorrectForStringLiteral (0 ms) 2023-01-11T22:10:15.8365083Z [ RUN ] AnyValueTest.TypeInfoIsCorrectForString 2023-01-11T22:10:15.8365462Z [ OK ] AnyValueTest.TypeInfoIsCorrectForString (0 ms) 2023-01-11T22:10:15.8365822Z [----------] 12 tests from AnyValueTest (2 ms total) 2023-01-11T22:10:15.8365977Z 2023-01-11T22:10:15.8366106Z [----------] 50 tests from DataTest 2023-01-11T22:10:15.8366412Z [ RUN ] DataTest.DatasetCallsGetCorrectly 2023-01-11T22:10:15.8374206Z [ OK ] DataTest.DatasetCallsGetCorrectly (1 ms) 2023-01-11T22:10:15.8374683Z [ RUN ] DataTest.TransformCallsGetApplyCorrectly 2023-01-11T22:10:15.8375072Z [ OK ] DataTest.TransformCallsGetApplyCorrectly (0 ms) 2023-01-11T22:10:15.8375479Z [ RUN ] DataTest.ChunkDataSetWithInvalidInitParameter 2023-01-11T22:10:15.8440381Z [ OK ] DataTest.ChunkDataSetWithInvalidInitParameter (6 
ms) 2023-01-11T22:10:15.8440746Z [ RUN ] DataTest.InfiniteStreamDataset 2023-01-11T22:10:15.8441083Z [ OK ] DataTest.InfiniteStreamDataset (0 ms) 2023-01-11T22:10:15.8441407Z [ RUN ] DataTest.NoSequencerIsIdentity 2023-01-11T22:10:15.8451201Z [ OK ] DataTest.NoSequencerIsIdentity (1 ms) 2023-01-11T22:10:15.8451715Z [ RUN ] DataTest.OrderedSequencerIsSetUpWell 2023-01-11T22:10:15.8452171Z [ OK ] DataTest.OrderedSequencerIsSetUpWell (0 ms) 2023-01-11T22:10:15.8452707Z [ RUN ] DataTest.OrderedSequencerReOrdersValues 2023-01-11T22:10:15.8453357Z [ OK ] DataTest.OrderedSequencerReOrdersValues (0 ms) 2023-01-11T22:10:15.8453890Z [ RUN ] DataTest.BatchLambdaAppliesFunctionToBatch 2023-01-11T22:10:15.8454298Z [ OK ] DataTest.BatchLambdaAppliesFunctionToBatch (0 ms) 2023-01-11T22:10:15.8454686Z [ RUN ] DataTest.LambdaAppliesFunctionToExample 2023-01-11T22:10:15.8455070Z [ OK ] DataTest.LambdaAppliesFunctionToExample (0 ms) 2023-01-11T22:10:15.8455411Z [ RUN ] DataTest.CollateReducesBatch 2023-01-11T22:10:15.8455734Z [ OK ] DataTest.CollateReducesBatch (0 ms) 2023-01-11T22:10:15.8456044Z [ RUN ] DataTest.CollationReducesBatch 2023-01-11T22:10:15.8456377Z [ OK ] DataTest.CollationReducesBatch (0 ms) 2023-01-11T22:10:15.8456758Z [ RUN ] DataTest.SequentialSamplerReturnsIndicesInOrder 2023-01-11T22:10:15.8457172Z [ OK ] DataTest.SequentialSamplerReturnsIndicesInOrder (0 ms) 2023-01-11T22:10:15.8457703Z [ RUN ] DataTest.SequentialSamplerReturnsLessValuesForLastBatch 2023-01-11T22:10:15.8458195Z [ OK ] DataTest.SequentialSamplerReturnsLessValuesForLastBatch (0 ms) 2023-01-11T22:10:15.8458612Z [ RUN ] DataTest.SequentialSamplerResetsWell 2023-01-11T22:10:15.8458960Z [ OK ] DataTest.SequentialSamplerResetsWell (0 ms) 2023-01-11T22:10:15.8459354Z [ RUN ] DataTest.SequentialSamplerResetsWithNewSizeWell 2023-01-11T22:10:15.8459783Z [ OK ] DataTest.SequentialSamplerResetsWithNewSizeWell (0 ms) 2023-01-11T22:10:15.8460172Z [ RUN ] DataTest.CanSaveAndLoadSequentialSampler 2023-01-11T22:10:15.8595965Z [ OK ] DataTest.CanSaveAndLoadSequentialSampler (14 ms) 2023-01-11T22:10:15.8596678Z [ RUN ] DataTest.RandomSamplerReturnsIndicesInCorrectRange 2023-01-11T22:10:15.8597526Z [ OK ] DataTest.RandomSamplerReturnsIndicesInCorrectRange (0 ms) 2023-01-11T22:10:15.8597980Z [ RUN ] DataTest.RandomSamplerReturnsLessValuesForLastBatch 2023-01-11T22:10:15.8598437Z [ OK ] DataTest.RandomSamplerReturnsLessValuesForLastBatch (0 ms) 2023-01-11T22:10:15.8598825Z [ RUN ] DataTest.RandomSamplerResetsWell 2023-01-11T22:10:15.8599151Z [ OK ] DataTest.RandomSamplerResetsWell (0 ms) 2023-01-11T22:10:15.8599562Z [ RUN ] DataTest.RandomSamplerResetsWithNewSizeWell 2023-01-11T22:10:15.8599969Z [ OK ] DataTest.RandomSamplerResetsWithNewSizeWell (0 ms) 2023-01-11T22:10:15.8600532Z [ RUN ] DataTest.SavingAndLoadingRandomSamplerYieldsSameSequence 2023-01-11T22:10:15.8601689Z [ OK ] DataTest.SavingAndLoadingRandomSamplerYieldsSameSequence (0 ms) 2023-01-11T22:10:15.8602442Z [ RUN ] DataTest.StreamSamplerReturnsTheBatchSizeAndThenRemainder 2023-01-11T22:10:15.8603013Z [ OK ] DataTest.StreamSamplerReturnsTheBatchSizeAndThenRemainder (0 ms) 2023-01-11T22:10:15.8603526Z [ RUN ] DataTest.StreamSamplerResetsWell 2023-01-11T22:10:15.8603982Z [ OK ] DataTest.StreamSamplerResetsWell (0 ms) 2023-01-11T22:10:15.8604410Z [ RUN ] DataTest.StreamSamplerResetsWithNewSizeWell 2023-01-11T22:10:15.8604877Z [ OK ] DataTest.StreamSamplerResetsWithNewSizeWell (0 ms) 2023-01-11T22:10:15.8605280Z [ RUN ] DataTest.TensorDatasetConstructsFromSingleTensor 
2023-01-11T22:10:15.8605742Z [ OK ] DataTest.TensorDatasetConstructsFromSingleTensor (0 ms) 2023-01-11T22:10:15.8606316Z [ RUN ] DataTest.TensorDatasetConstructsFromInitializerListOfTensors 2023-01-11T22:10:15.8606906Z [ OK ] DataTest.TensorDatasetConstructsFromInitializerListOfTensors (0 ms) 2023-01-11T22:10:15.8607373Z [ RUN ] DataTest.StackTransformWorksForExample 2023-01-11T22:10:15.8607907Z [ OK ] DataTest.StackTransformWorksForExample (0 ms) 2023-01-11T22:10:15.8608306Z [ RUN ] DataTest.StackTransformWorksForTensorExample 2023-01-11T22:10:15.8609036Z [ OK ] DataTest.StackTransformWorksForTensorExample (0 ms) 2023-01-11T22:10:15.8609775Z [ RUN ] DataTest.TensorTransformWorksForAnyTargetType 2023-01-11T22:10:15.8610547Z [ OK ] DataTest.TensorTransformWorksForAnyTargetType (0 ms) 2023-01-11T22:10:15.8610989Z [ RUN ] DataTest.TensorLambdaWorksforAnyTargetType 2023-01-11T22:10:15.8611375Z [ OK ] DataTest.TensorLambdaWorksforAnyTargetType (0 ms) 2023-01-11T22:10:15.8611719Z [ RUN ] DataTest.NormalizeTransform 2023-01-11T22:10:15.8614956Z [ OK ] DataTest.NormalizeTransform (0 ms) 2023-01-11T22:10:15.8615496Z [ RUN ] DataTest.MapDoesNotCopy 2023-01-11T22:10:15.8615829Z [ OK ] DataTest.MapDoesNotCopy (0 ms) 2023-01-11T22:10:15.8616168Z [ RUN ] DataTest.QueuePushAndPopFromSameThread 2023-01-11T22:10:15.8616682Z [ OK ] DataTest.QueuePushAndPopFromSameThread (0 ms) 2023-01-11T22:10:15.8617077Z [ RUN ] DataTest.QueuePopWithTimeoutThrowsUponTimeout 2023-01-11T22:10:15.8727700Z [ OK ] DataTest.QueuePopWithTimeoutThrowsUponTimeout (11 ms) 2023-01-11T22:10:15.8728344Z [ RUN ] DataTest.QueuePushAndPopFromDifferentThreads 2023-01-11T22:10:15.8934305Z [ OK ] DataTest.QueuePushAndPopFromDifferentThreads (20 ms) 2023-01-11T22:10:15.8934885Z [ RUN ] DataTest.QueueClearEmptiesTheQueue 2023-01-11T22:10:15.8955957Z [ OK ] DataTest.QueueClearEmptiesTheQueue (2 ms) 2023-01-11T22:10:15.8956732Z [ RUN ] DataTest.DataShuttleCanPushAndPopJob 2023-01-11T22:10:15.8957120Z [ OK ] DataTest.DataShuttleCanPushAndPopJob (0 ms) 2023-01-11T22:10:15.8957493Z [ RUN ] DataTest.DataShuttleCanPushAndPopResult 2023-01-11T22:10:15.8957873Z [ OK ] DataTest.DataShuttleCanPushAndPopResult (0 ms) 2023-01-11T22:10:15.8958327Z [ RUN ] DataTest.DataShuttlePopResultReturnsNulloptWhenNoJobsInFlight 2023-01-11T22:10:15.8958867Z [ OK ] DataTest.DataShuttlePopResultReturnsNulloptWhenNoJobsInFlight (0 ms) 2023-01-11T22:10:15.8959360Z [ RUN ] DataTest.DataShuttleDrainMeansPopResultReturnsNullopt 2023-01-11T22:10:15.8959934Z [ OK ] DataTest.DataShuttleDrainMeansPopResultReturnsNullopt (0 ms) 2023-01-11T22:10:15.8960338Z [ RUN ] DataTest.DataShuttlePopResultTimesOut 2023-01-11T22:10:15.9068496Z [ OK ] DataTest.DataShuttlePopResultTimesOut (11 ms) 2023-01-11T22:10:15.9068890Z [ RUN ] DataTest.SharedBatchDatasetReallyIsShared 2023-01-11T22:10:15.9083707Z [ OK ] DataTest.SharedBatchDatasetReallyIsShared (1 ms) 2023-01-11T22:10:15.9084441Z [ RUN ] DataTest.SharedBatchDatasetDoesNotIncurCopyWhenPassedDatasetObject 2023-01-11T22:10:15.9085028Z [ OK ] DataTest.SharedBatchDatasetDoesNotIncurCopyWhenPassedDatasetObject (0 ms) 2023-01-11T22:10:15.9085504Z [ RUN ] DataTest.CanUseCustomTypeAsIndexType 2023-01-11T22:10:15.9085858Z [ OK ] DataTest.CanUseCustomTypeAsIndexType (0 ms) 2023-01-11T22:10:15.9086332Z [ RUN ] DataTest.DistributedRandomSamplerSingleReplicaProduceCorrectSamples 2023-01-11T22:10:15.9092410Z [ OK ] DataTest.DistributedRandomSamplerSingleReplicaProduceCorrectSamples (0 ms) 2023-01-11T22:10:15.9093107Z [ RUN ] 
DataTest.DistributedRandomSamplerMultiReplicaProduceCorrectSamples 2023-01-11T22:10:15.9094004Z [ OK ] DataTest.DistributedRandomSamplerMultiReplicaProduceCorrectSamples (0 ms) 2023-01-11T22:10:15.9094494Z [ RUN ] DataTest.CanSaveAndLoadDistributedRandomSampler 2023-01-11T22:10:15.9098932Z [ OK ] DataTest.CanSaveAndLoadDistributedRandomSampler (0 ms) 2023-01-11T22:10:15.9099781Z [ RUN ] DataTest.DistributedSequentialSamplerSingleReplicaProduceCorrectSamples 2023-01-11T22:10:15.9100532Z [ OK ] DataTest.DistributedSequentialSamplerSingleReplicaProduceCorrectSamples (0 ms) 2023-01-11T22:10:15.9101408Z [ RUN ] DataTest.DistributedSequentialSamplerMultiReplicaProduceCorrectSamples 2023-01-11T22:10:15.9102253Z [ OK ] DataTest.DistributedSequentialSamplerMultiReplicaProduceCorrectSamples (0 ms) 2023-01-11T22:10:15.9102924Z [ RUN ] DataTest.CanSaveAndLoadDistributedSequentialSampler 2023-01-11T22:10:15.9103401Z [ OK ] DataTest.CanSaveAndLoadDistributedSequentialSampler (0 ms) 2023-01-11T22:10:15.9103797Z [----------] 50 tests from DataTest (74 ms total) 2023-01-11T22:10:15.9103946Z 2023-01-11T22:10:15.9104100Z [----------] 37 tests from DataLoaderTest 2023-01-11T22:10:15.9104456Z [ RUN ] DataLoaderTest.DataLoaderOptionsDefaultAsExpected 2023-01-11T22:10:15.9104900Z [ OK ] DataLoaderTest.DataLoaderOptionsDefaultAsExpected (0 ms) 2023-01-11T22:10:15.9105450Z [ RUN ] DataLoaderTest.DataLoaderOptionsCoalesceOptionalValues 2023-01-11T22:10:15.9105937Z [ OK ] DataLoaderTest.DataLoaderOptionsCoalesceOptionalValues (0 ms) 2023-01-11T22:10:15.9106377Z [ RUN ] DataLoaderTest.MakeDataLoaderDefaultsAsExpected 2023-01-11T22:10:15.9106810Z [ OK ] DataLoaderTest.MakeDataLoaderDefaultsAsExpected (0 ms) 2023-01-11T22:10:15.9107355Z [ RUN ] DataLoaderTest.MakeDataLoaderThrowsWhenConstructingSamplerWithUnsizedDataset 2023-01-11T22:10:15.9114167Z [ OK ] DataLoaderTest.MakeDataLoaderThrowsWhenConstructingSamplerWithUnsizedDataset (1 ms) 2023-01-11T22:10:15.9114999Z [ RUN ] DataLoaderTest.IteratorsCompareEqualToThemselves 2023-01-11T22:10:15.9115794Z [ OK ] DataLoaderTest.IteratorsCompareEqualToThemselves (0 ms) 2023-01-11T22:10:15.9116337Z [ RUN ] DataLoaderTest.ValidIteratorsCompareUnequalToEachOther 2023-01-11T22:10:15.9116818Z [ OK ] DataLoaderTest.ValidIteratorsCompareUnequalToEachOther (0 ms) 2023-01-11T22:10:15.9117299Z [ RUN ] DataLoaderTest.SentinelIteratorsCompareEqualToEachOther 2023-01-11T22:10:15.9117786Z [ OK ] DataLoaderTest.SentinelIteratorsCompareEqualToEachOther (0 ms) 2023-01-11T22:10:15.9118276Z [ RUN ] DataLoaderTest.IteratorsCompareEqualToSentinelWhenExhausted 2023-01-11T22:10:15.9118795Z [ OK ] DataLoaderTest.IteratorsCompareEqualToSentinelWhenExhausted (0 ms) 2023-01-11T22:10:15.9119213Z [ RUN ] DataLoaderTest.IteratorsShareState 2023-01-11T22:10:15.9119624Z [ OK ] DataLoaderTest.IteratorsShareState (0 ms) 2023-01-11T22:10:15.9120017Z [ RUN ] DataLoaderTest.CanDereferenceIteratorMultipleTimes 2023-01-11T22:10:15.9120469Z [ OK ] DataLoaderTest.CanDereferenceIteratorMultipleTimes (0 ms) 2023-01-11T22:10:15.9120883Z [ RUN ] DataLoaderTest.CanUseIteratorAlgorithms 2023-01-11T22:10:15.9121257Z [ OK ] DataLoaderTest.CanUseIteratorAlgorithms (0 ms) 2023-01-11T22:10:15.9121719Z [ RUN ] DataLoaderTest.CallingBeginWhileOtherIteratorIsInFlightThrows 2023-01-11T22:10:15.9131685Z [ OK ] DataLoaderTest.CallingBeginWhileOtherIteratorIsInFlightThrows (1 ms) 2023-01-11T22:10:15.9132360Z [ RUN ] DataLoaderTest.IncrementingExhaustedValidIteratorThrows 2023-01-11T22:10:15.9142991Z [ OK ] 
DataLoaderTest.IncrementingExhaustedValidIteratorThrows (1 ms) 2023-01-11T22:10:15.9143636Z [ RUN ] DataLoaderTest.DereferencingExhaustedValidIteratorThrows 2023-01-11T22:10:15.9154391Z [ OK ] DataLoaderTest.DereferencingExhaustedValidIteratorThrows (1 ms) 2023-01-11T22:10:15.9155003Z [ RUN ] DataLoaderTest.IncrementingSentinelIteratorThrows 2023-01-11T22:10:15.9165691Z [ OK ] DataLoaderTest.IncrementingSentinelIteratorThrows (1 ms) 2023-01-11T22:10:15.9166418Z [ RUN ] DataLoaderTest.DereferencingSentinelIteratorThrows 2023-01-11T22:10:15.9177330Z [ OK ] DataLoaderTest.DereferencingSentinelIteratorThrows (1 ms) 2023-01-11T22:10:15.9177804Z [ RUN ] DataLoaderTest.YieldsCorrectBatchSize 2023-01-11T22:10:15.9178180Z [ OK ] DataLoaderTest.YieldsCorrectBatchSize (0 ms) 2023-01-11T22:10:15.9178694Z [ RUN ] DataLoaderTest.ReturnsLastBatchWhenSmallerThanBatchSizeWhenDropLastIsFalse 2023-01-11T22:10:15.9179333Z [ OK ] DataLoaderTest.ReturnsLastBatchWhenSmallerThanBatchSizeWhenDropLastIsFalse (0 ms) 2023-01-11T22:10:15.9179976Z [ RUN ] DataLoaderTest.DoesNotReturnLastBatchWhenSmallerThanBatchSizeWhenDropLastIsTrue 2023-01-11T22:10:15.9180656Z [ OK ] DataLoaderTest.DoesNotReturnLastBatchWhenSmallerThanBatchSizeWhenDropLastIsTrue (0 ms) 2023-01-11T22:10:15.9181148Z [ RUN ] DataLoaderTest.RespectsTimeout 2023-01-11T22:10:15.9294521Z [ OK ] DataLoaderTest.RespectsTimeout (11 ms) 2023-01-11T22:10:15.9295032Z [ RUN ] DataLoaderTest.EnforcesOrderingAmongThreadsWhenConfigured 2023-01-11T22:10:15.9319340Z [ OK ] DataLoaderTest.EnforcesOrderingAmongThreadsWhenConfigured (2 ms) 2023-01-11T22:10:15.9319826Z [ RUN ] DataLoaderTest.Reset 2023-01-11T22:10:15.9320123Z [ OK ] DataLoaderTest.Reset (0 ms) 2023-01-11T22:10:15.9320502Z [ RUN ] DataLoaderTest.TestExceptionsArePropagatedFromWorkers 2023-01-11T22:10:15.9324665Z [ OK ] DataLoaderTest.TestExceptionsArePropagatedFromWorkers (0 ms) 2023-01-11T22:10:15.9325410Z [ RUN ] DataLoaderTest.StatefulDatasetWithNoWorkers 2023-01-11T22:10:15.9325814Z [ OK ] DataLoaderTest.StatefulDatasetWithNoWorkers (0 ms) 2023-01-11T22:10:15.9326226Z [ RUN ] DataLoaderTest.StatefulDatasetWithManyWorkers 2023-01-11T22:10:15.9343869Z [ OK ] DataLoaderTest.StatefulDatasetWithManyWorkers (1 ms) 2023-01-11T22:10:15.9344282Z [ RUN ] DataLoaderTest.StatefulDatasetWithMap 2023-01-11T22:10:15.9344704Z [ OK ] DataLoaderTest.StatefulDatasetWithMap (0 ms) 2023-01-11T22:10:15.9345101Z [ RUN ] DataLoaderTest.StatefulDatasetWithCollate 2023-01-11T22:10:15.9346562Z [ OK ] DataLoaderTest.StatefulDatasetWithCollate (0 ms) 2023-01-11T22:10:15.9346925Z [ RUN ] DataLoaderTest.ChunkDataSetGetBatch 2023-01-11T22:10:15.9448519Z [ OK ] DataLoaderTest.ChunkDataSetGetBatch (10 ms) 2023-01-11T22:10:15.9448951Z [ RUN ] DataLoaderTest.ChunkDataSetWithBatchSizeMismatch 2023-01-11T22:10:15.9462908Z [ OK ] DataLoaderTest.ChunkDataSetWithBatchSizeMismatch (1 ms) 2023-01-11T22:10:15.9463445Z [ RUN ] DataLoaderTest.ChunkDataSetWithEmptyBatch 2023-01-11T22:10:15.9463987Z [ OK ] DataLoaderTest.ChunkDataSetWithEmptyBatch (0 ms) 2023-01-11T22:10:15.9464482Z [ RUN ] DataLoaderTest.ChunkDataSetGetBatchWithUnevenBatchSize 2023-01-11T22:10:15.9465333Z [ OK ] DataLoaderTest.ChunkDataSetGetBatchWithUnevenBatchSize (0 ms) 2023-01-11T22:10:15.9465975Z [ RUN ] DataLoaderTest.CanAccessChunkSamplerWithChunkDataSet 2023-01-11T22:10:15.9466834Z [ OK ] DataLoaderTest.CanAccessChunkSamplerWithChunkDataSet (0 ms) 2023-01-11T22:10:15.9467262Z [ RUN ] DataLoaderTest.ChunkDatasetDoesNotHang 2023-01-11T22:10:15.9467627Z [ OK ] 
DataLoaderTest.ChunkDatasetDoesNotHang (0 ms) 2023-01-11T22:10:15.9467979Z [ RUN ] DataLoaderTest.ChunkDatasetSave 2023-01-11T22:10:15.9626477Z [ OK ] DataLoaderTest.ChunkDatasetSave (15 ms) 2023-01-11T22:10:15.9626868Z [ RUN ] DataLoaderTest.ChunkDatasetLoad 2023-01-11T22:10:15.9631635Z [ OK ] DataLoaderTest.ChunkDatasetLoad (0 ms) 2023-01-11T22:10:15.9632149Z [ RUN ] DataLoaderTest.ChunkDatasetCrossChunkShuffle 2023-01-11T22:10:15.9636023Z [ OK ] DataLoaderTest.ChunkDatasetCrossChunkShuffle (0 ms) 2023-01-11T22:10:15.9636465Z [ RUN ] DataLoaderTest.CustomPreprocessPolicy 2023-01-11T22:10:15.9638719Z [ OK ] DataLoaderTest.CustomPreprocessPolicy (0 ms) 2023-01-11T22:10:15.9639386Z [----------] 37 tests from DataLoaderTest (53 ms total) 2023-01-11T22:10:15.9639739Z 2023-01-11T22:10:15.9639883Z [----------] 1 test from EnumTest 2023-01-11T22:10:15.9640129Z [ RUN ] EnumTest.AllEnums 2023-01-11T22:10:15.9640405Z [ OK ] EnumTest.AllEnums (0 ms) 2023-01-11T22:10:15.9640700Z [----------] 1 test from EnumTest (0 ms total) 2023-01-11T22:10:15.9640847Z 2023-01-11T22:10:15.9641012Z [----------] 6 tests from ExpandingArrayTest 2023-01-11T22:10:15.9641379Z [ RUN ] ExpandingArrayTest.CanConstructFromInitializerList 2023-01-11T22:10:15.9641964Z [ OK ] ExpandingArrayTest.CanConstructFromInitializerList (0 ms) 2023-01-11T22:10:15.9642396Z [ RUN ] ExpandingArrayTest.CanConstructFromVector 2023-01-11T22:10:15.9642778Z [ OK ] ExpandingArrayTest.CanConstructFromVector (0 ms) 2023-01-11T22:10:15.9643165Z [ RUN ] ExpandingArrayTest.CanConstructFromArray 2023-01-11T22:10:15.9643552Z [ OK ] ExpandingArrayTest.CanConstructFromArray (0 ms) 2023-01-11T22:10:15.9643962Z [ RUN ] ExpandingArrayTest.CanConstructFromSingleValue 2023-01-11T22:10:15.9644377Z [ OK ] ExpandingArrayTest.CanConstructFromSingleValue (0 ms) 2023-01-11T22:10:15.9644978Z [ RUN ] ExpandingArrayTest.ThrowsWhenConstructedWithIncorrectNumberOfArgumentsInInitializerList 2023-01-11T22:10:15.9651756Z [ OK ] ExpandingArrayTest.ThrowsWhenConstructedWithIncorrectNumberOfArgumentsInInitializerList (1 ms) 2023-01-11T22:10:15.9652462Z [ RUN ] ExpandingArrayTest.ThrowsWhenConstructedWithIncorrectNumberOfArgumentsInVector 2023-01-11T22:10:15.9662596Z [ OK ] ExpandingArrayTest.ThrowsWhenConstructedWithIncorrectNumberOfArgumentsInVector (1 ms) 2023-01-11T22:10:15.9663503Z [----------] 6 tests from ExpandingArrayTest (2 ms total) 2023-01-11T22:10:15.9663679Z 2023-01-11T22:10:15.9663820Z [----------] 10 tests from FFTTest 2023-01-11T22:10:15.9664054Z [ RUN ] FFTTest.fft 2023-01-11T22:10:15.9789158Z [ OK ] FFTTest.fft (12 ms) 2023-01-11T22:10:15.9789426Z [ RUN ] FFTTest.fft_real 2023-01-11T22:10:15.9790745Z [ OK ] FFTTest.fft_real (0 ms) 2023-01-11T22:10:15.9791209Z [ RUN ] FFTTest.fft_pad 2023-01-11T22:10:15.9804967Z [ OK ] FFTTest.fft_pad (1 ms) 2023-01-11T22:10:15.9805438Z [ RUN ] FFTTest.fft_norm 2023-01-11T22:10:15.9807922Z [ OK ] FFTTest.fft_norm (0 ms) 2023-01-11T22:10:15.9808322Z [ RUN ] FFTTest.ifft 2023-01-11T22:10:15.9811172Z [ OK ] FFTTest.ifft (0 ms) 2023-01-11T22:10:15.9811632Z [ RUN ] FFTTest.fft_ifft 2023-01-11T22:10:15.9960842Z [ OK ] FFTTest.fft_ifft (14 ms) 2023-01-11T22:10:15.9961318Z [ RUN ] FFTTest.rfft 2023-01-11T22:10:15.9986776Z [ OK ] FFTTest.rfft (2 ms) 2023-01-11T22:10:15.9987252Z [ RUN ] FFTTest.rfft_irfft 2023-01-11T22:10:15.9988192Z [ OK ] FFTTest.rfft_irfft (0 ms) 2023-01-11T22:10:15.9988666Z [ RUN ] FFTTest.ihfft 2023-01-11T22:10:15.9992596Z [ OK ] FFTTest.ihfft (0 ms) 2023-01-11T22:10:15.9993045Z [ RUN ] FFTTest.hfft_ihfft 
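The DataLoaderTest cases above exercise torch::data behaviour such as drop_last handling, worker ordering, and timeouts. As a hedged illustration only (nothing here is taken from this log), the following minimal C++ sketch shows the custom-dataset plus make_data_loader pattern those tests cover; TinyDataset and all option values are hypothetical.

#include <torch/torch.h>
#include <iostream>

// Hypothetical ten-element dataset, purely for illustration.
struct TinyDataset : torch::data::datasets::Dataset<TinyDataset> {
  torch::data::Example<> get(size_t index) override {
    // data = the index as a 1-element tensor, target = a dummy zero
    return {torch::full({1}, static_cast<int64_t>(index)), torch::zeros({1})};
  }
  torch::optional<size_t> size() const override { return 10; }
};

int main() {
  auto dataset = TinyDataset().map(torch::data::transforms::Stack<>());
  // With 10 samples and batch_size(4): drop_last(false) yields batches of 4, 4, 2;
  // drop_last(true) would drop the trailing batch of 2.
  auto loader = torch::data::make_data_loader(
      std::move(dataset),
      torch::data::DataLoaderOptions().batch_size(4).workers(2).drop_last(false));
  for (auto& batch : *loader) {
    std::cout << batch.data.sizes() << std::endl;  // e.g. [4, 1]
  }
}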
2023-01-11T22:10:16.0033339Z [ OK ] FFTTest.hfft_ihfft (4 ms) 2023-01-11T22:10:16.0033917Z [----------] 10 tests from FFTTest (37 ms total) 2023-01-11T22:10:16.0034169Z 2023-01-11T22:10:16.0034316Z [----------] 132 tests from FunctionalTest 2023-01-11T22:10:16.0034605Z [ RUN ] FunctionalTest.Conv1d 2023-01-11T22:10:16.0053590Z [ OK ] FunctionalTest.Conv1d (1 ms) 2023-01-11T22:10:16.0053909Z [ RUN ] FunctionalTest.Conv2dEven 2023-01-11T22:10:16.0059624Z [ OK ] FunctionalTest.Conv2dEven (0 ms) 2023-01-11T22:10:16.0059940Z [ RUN ] FunctionalTest.Conv2dUneven 2023-01-11T22:10:16.0061927Z [ OK ] FunctionalTest.Conv2dUneven (0 ms) 2023-01-11T22:10:16.0062222Z [ RUN ] FunctionalTest.Conv3d 2023-01-11T22:10:16.0066351Z [ OK ] FunctionalTest.Conv3d (0 ms) 2023-01-11T22:10:16.0066656Z [ RUN ] FunctionalTest.MaxPool1d 2023-01-11T22:10:16.0088541Z [ OK ] FunctionalTest.MaxPool1d (2 ms) 2023-01-11T22:10:16.0088922Z [ RUN ] FunctionalTest.MaxPool2d 2023-01-11T22:10:16.0089306Z [ OK ] FunctionalTest.MaxPool2d (0 ms) 2023-01-11T22:10:16.0089631Z [ RUN ] FunctionalTest.MaxPool2dBackward 2023-01-11T22:10:16.0091508Z [ OK ] FunctionalTest.MaxPool2dBackward (0 ms) 2023-01-11T22:10:16.0091911Z [ RUN ] FunctionalTest.MaxPool3d 2023-01-11T22:10:16.0092372Z [ OK ] FunctionalTest.MaxPool3d (0 ms) 2023-01-11T22:10:16.0092698Z [ RUN ] FunctionalTest.AvgPool1d 2023-01-11T22:10:16.0093207Z [ OK ] FunctionalTest.AvgPool1d (0 ms) 2023-01-11T22:10:16.0093725Z [ RUN ] FunctionalTest.AvgPool2d 2023-01-11T22:10:16.0094284Z [ OK ] FunctionalTest.AvgPool2d (0 ms) 2023-01-11T22:10:16.0094747Z [ RUN ] FunctionalTest.AvgPool3d 2023-01-11T22:10:16.0095317Z [ OK ] FunctionalTest.AvgPool3d (0 ms) 2023-01-11T22:10:16.0095735Z [ RUN ] FunctionalTest.FractionalMaxPool2d 2023-01-11T22:10:16.0098199Z [ OK ] FunctionalTest.FractionalMaxPool2d (0 ms) 2023-01-11T22:10:16.0098877Z [ RUN ] FunctionalTest.FractionalMaxPool3d 2023-01-11T22:10:16.0099863Z [ OK ] FunctionalTest.FractionalMaxPool3d (0 ms) 2023-01-11T22:10:16.0100460Z [ RUN ] FunctionalTest.LPPool1d 2023-01-11T22:10:16.0102577Z [ OK ] FunctionalTest.LPPool1d (0 ms) 2023-01-11T22:10:16.0103127Z [ RUN ] FunctionalTest.LPPool2d 2023-01-11T22:10:16.0103678Z [ OK ] FunctionalTest.LPPool2d (0 ms) 2023-01-11T22:10:16.0104032Z [ RUN ] FunctionalTest.CosineSimilarity 2023-01-11T22:10:16.0106259Z [ OK ] FunctionalTest.CosineSimilarity (0 ms) 2023-01-11T22:10:16.0106720Z [ RUN ] FunctionalTest.SmoothL1LossDefaultOptions 2023-01-11T22:10:16.0108855Z [ OK ] FunctionalTest.SmoothL1LossDefaultOptions (0 ms) 2023-01-11T22:10:16.0109559Z [ RUN ] FunctionalTest.SmoothL1LossBeta 2023-01-11T22:10:16.0110430Z [ OK ] FunctionalTest.SmoothL1LossBeta (0 ms) 2023-01-11T22:10:16.0111101Z [ RUN ] FunctionalTest.SmoothL1LossNoReduction 2023-01-11T22:10:16.0112376Z [ OK ] FunctionalTest.SmoothL1LossNoReduction (0 ms) 2023-01-11T22:10:16.0113072Z [ RUN ] FunctionalTest.HuberLossDefaultOptions 2023-01-11T22:10:16.0114571Z [ OK ] FunctionalTest.HuberLossDefaultOptions (0 ms) 2023-01-11T22:10:16.0115206Z [ RUN ] FunctionalTest.HuberLossDelta 2023-01-11T22:10:16.0116346Z [ OK ] FunctionalTest.HuberLossDelta (0 ms) 2023-01-11T22:10:16.0116990Z [ RUN ] FunctionalTest.HuberLossNoReduction 2023-01-11T22:10:16.0118093Z [ OK ] FunctionalTest.HuberLossNoReduction (0 ms) 2023-01-11T22:10:16.0118779Z [ RUN ] FunctionalTest.SoftMarginLossDefaultOptions 2023-01-11T22:10:16.0121620Z [ OK ] FunctionalTest.SoftMarginLossDefaultOptions (0 ms) 2023-01-11T22:10:16.0122431Z [ RUN ] 
FunctionalTest.MultiLabelSoftMarginLossDefaultOptions 2023-01-11T22:10:16.0126775Z [ OK ] FunctionalTest.MultiLabelSoftMarginLossDefaultOptions (0 ms) 2023-01-11T22:10:16.0127835Z [ RUN ] FunctionalTest.SoftMarginLossNoReduction 2023-01-11T22:10:16.0128540Z [ OK ] FunctionalTest.SoftMarginLossNoReduction (0 ms) 2023-01-11T22:10:16.0129384Z [ RUN ] FunctionalTest.MultiLabelSoftMarginLossWeightedNoReduction 2023-01-11T22:10:16.0132979Z [ OK ] FunctionalTest.MultiLabelSoftMarginLossWeightedNoReduction (0 ms) 2023-01-11T22:10:16.0133676Z [ RUN ] FunctionalTest.PairwiseDistance 2023-01-11T22:10:16.0134380Z [ OK ] FunctionalTest.PairwiseDistance (0 ms) 2023-01-11T22:10:16.0134895Z [ RUN ] FunctionalTest.PDist 2023-01-11T22:10:16.0136458Z [ OK ] FunctionalTest.PDist (0 ms) 2023-01-11T22:10:16.0136991Z [ RUN ] FunctionalTest.AdaptiveMaxPool1d 2023-01-11T22:10:16.0137676Z [ OK ] FunctionalTest.AdaptiveMaxPool1d (0 ms) 2023-01-11T22:10:16.0138218Z [ RUN ] FunctionalTest.AdaptiveMaxPool2d 2023-01-11T22:10:16.0138958Z [ OK ] FunctionalTest.AdaptiveMaxPool2d (0 ms) 2023-01-11T22:10:16.0139522Z [ RUN ] FunctionalTest.AdaptiveMaxPool3d 2023-01-11T22:10:16.0140343Z [ OK ] FunctionalTest.AdaptiveMaxPool3d (0 ms) 2023-01-11T22:10:16.0140893Z [ RUN ] FunctionalTest.AdaptiveAvgPool1d 2023-01-11T22:10:16.0141410Z [ OK ] FunctionalTest.AdaptiveAvgPool1d (0 ms) 2023-01-11T22:10:16.0141917Z [ RUN ] FunctionalTest.AdaptiveAvgPool2d 2023-01-11T22:10:16.0142709Z [ OK ] FunctionalTest.AdaptiveAvgPool2d (0 ms) 2023-01-11T22:10:16.0143251Z [ RUN ] FunctionalTest.AdaptiveAvgPool3d 2023-01-11T22:10:16.0143775Z [ OK ] FunctionalTest.AdaptiveAvgPool3d (0 ms) 2023-01-11T22:10:16.0144259Z [ RUN ] FunctionalTest.L1Loss 2023-01-11T22:10:16.0144976Z [ OK ] FunctionalTest.L1Loss (0 ms) 2023-01-11T22:10:16.0146958Z [ RUN ] FunctionalTest.MSELoss 2023-01-11T22:10:16.0147532Z [ OK ] FunctionalTest.MSELoss (0 ms) 2023-01-11T22:10:16.0148049Z [ RUN ] FunctionalTest.BCELoss 2023-01-11T22:10:16.0148346Z [ OK ] FunctionalTest.BCELoss (0 ms) 2023-01-11T22:10:16.0148677Z [ RUN ] FunctionalTest.KLDivLoss 2023-01-11T22:10:16.0149790Z [W loss.h:57] Warning: reduction: 'mean' divides the total loss by both the batch size and the support size.'batchmean' divides only by the batch size, and aligns with the KL div math definition.'mean' will be changed to behave the same as 'batchmean' in the next major release. (function kl_div) 2023-01-11T22:10:16.0150605Z [ OK ] FunctionalTest.KLDivLoss (0 ms) 2023-01-11T22:10:16.0150994Z [ RUN ] FunctionalTest.HingeEmbeddingLoss 2023-01-11T22:10:16.0151620Z [ OK ] FunctionalTest.HingeEmbeddingLoss (0 ms) 2023-01-11T22:10:16.0152053Z [ RUN ] FunctionalTest.GridSample 2023-01-11T22:10:16.0153813Z [W vision.h:87] Warning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details. 
(function grid_sample) 2023-01-11T22:10:16.0155793Z [ OK ] FunctionalTest.GridSample (0 ms) 2023-01-11T22:10:16.0156177Z [ RUN ] FunctionalTest.AffineGrid 2023-01-11T22:10:16.0296416Z [ OK ] FunctionalTest.AffineGrid (14 ms) 2023-01-11T22:10:16.0296849Z [ RUN ] FunctionalTest.MultiMarginLoss 2023-01-11T22:10:16.0297540Z [ OK ] FunctionalTest.MultiMarginLoss (0 ms) 2023-01-11T22:10:16.0297892Z [ RUN ] FunctionalTest.CosineEmbeddingLoss 2023-01-11T22:10:16.0299847Z [ OK ] FunctionalTest.CosineEmbeddingLoss (0 ms) 2023-01-11T22:10:16.0300284Z [ RUN ] FunctionalTest.MultiLabelMarginLossDefaultOptions 2023-01-11T22:10:16.0302012Z [ OK ] FunctionalTest.MultiLabelMarginLossDefaultOptions (0 ms) 2023-01-11T22:10:16.0302864Z [ RUN ] FunctionalTest.MultiLabelMarginLossNoReduction 2023-01-11T22:10:16.0303656Z [ OK ] FunctionalTest.MultiLabelMarginLossNoReduction (0 ms) 2023-01-11T22:10:16.0304121Z [ RUN ] FunctionalTest.TripletMarginLoss 2023-01-11T22:10:16.0305213Z [ OK ] FunctionalTest.TripletMarginLoss (0 ms) 2023-01-11T22:10:16.0517169Z [ RUN ] FunctionalTest.TripletMarginWithDistanceLossDefaultParity 2023-01-11T22:10:16.0517900Z [ OK ] FunctionalTest.TripletMarginWithDistanceLossDefaultParity (21 ms) 2023-01-11T22:10:16.0518373Z [ RUN ] FunctionalTest.NLLLoss 2023-01-11T22:10:16.0518687Z [ OK ] FunctionalTest.NLLLoss (0 ms) 2023-01-11T22:10:16.0518993Z [ RUN ] FunctionalTest.CrossEntropy 2023-01-11T22:10:16.0522132Z [ OK ] FunctionalTest.CrossEntropy (0 ms) 2023-01-11T22:10:16.0522798Z [ RUN ] FunctionalTest.MaxUnpool1d 2023-01-11T22:10:16.0525080Z [ OK ] FunctionalTest.MaxUnpool1d (0 ms) 2023-01-11T22:10:16.0525703Z [ RUN ] FunctionalTest.MaxUnpool2d 2023-01-11T22:10:16.0527345Z [ OK ] FunctionalTest.MaxUnpool2d (0 ms) 2023-01-11T22:10:16.0527952Z [ RUN ] FunctionalTest.MaxUnpool3d 2023-01-11T22:10:16.0528916Z [ OK ] FunctionalTest.MaxUnpool3d (0 ms) 2023-01-11T22:10:16.0529444Z [ RUN ] FunctionalTest.ELU 2023-01-11T22:10:16.0544411Z [ OK ] FunctionalTest.ELU (1 ms) 2023-01-11T22:10:16.0544779Z [ RUN ] FunctionalTest.SELU 2023-01-11T22:10:16.0548800Z [ OK ] FunctionalTest.SELU (0 ms) 2023-01-11T22:10:16.0549286Z [ RUN ] FunctionalTest.GLU 2023-01-11T22:10:16.0550418Z [ OK ] FunctionalTest.GLU (0 ms) 2023-01-11T22:10:16.0550929Z [ RUN ] FunctionalTest.GELU 2023-01-11T22:10:16.0554436Z [ OK ] FunctionalTest.GELU (0 ms) 2023-01-11T22:10:16.0554953Z [ RUN ] FunctionalTest.TanhGELU 2023-01-11T22:10:16.0556319Z [ OK ] FunctionalTest.TanhGELU (0 ms) 2023-01-11T22:10:16.0556879Z [ RUN ] FunctionalTest.Hardshrink 2023-01-11T22:10:16.0563671Z [ OK ] FunctionalTest.Hardshrink (0 ms) 2023-01-11T22:10:16.0564229Z [ RUN ] FunctionalTest.OneHot 2023-01-11T22:10:16.0567168Z [ OK ] FunctionalTest.OneHot (0 ms) 2023-01-11T22:10:16.0567693Z [ RUN ] FunctionalTest.Hardtanh 2023-01-11T22:10:16.0591563Z [ OK ] FunctionalTest.Hardtanh (2 ms) 2023-01-11T22:10:16.0592118Z [ RUN ] FunctionalTest.LeakyReLU 2023-01-11T22:10:16.0599921Z [ OK ] FunctionalTest.LeakyReLU (0 ms) 2023-01-11T22:10:16.0600458Z [ RUN ] FunctionalTest.LogSigmoid 2023-01-11T22:10:16.0601126Z [ OK ] FunctionalTest.LogSigmoid (0 ms) 2023-01-11T22:10:16.0601720Z [ RUN ] FunctionalTest.GumbelSoftmax 2023-01-11T22:10:16.0633113Z [ OK ] FunctionalTest.GumbelSoftmax (3 ms) 2023-01-11T22:10:16.0633638Z [ RUN ] FunctionalTest.Softmax 2023-01-11T22:10:16.0634738Z [ OK ] FunctionalTest.Softmax (0 ms) 2023-01-11T22:10:16.0635276Z [ RUN ] FunctionalTest.Softmin 2023-01-11T22:10:16.0636097Z [ OK ] FunctionalTest.Softmin (0 ms) 2023-01-11T22:10:16.0636652Z [ RUN ] 
FunctionalTest.LogSoftmax 2023-01-11T22:10:16.0637736Z [ OK ] FunctionalTest.LogSoftmax (0 ms) 2023-01-11T22:10:16.0638035Z [ RUN ] FunctionalTest.PReLU 2023-01-11T22:10:16.0640679Z [ OK ] FunctionalTest.PReLU (0 ms) 2023-01-11T22:10:16.0641219Z [ RUN ] FunctionalTest.LayerNorm 2023-01-11T22:10:16.0641772Z [ OK ] FunctionalTest.LayerNorm (0 ms) 2023-01-11T22:10:16.0642080Z [ RUN ] FunctionalTest.GroupNorm 2023-01-11T22:10:16.0642754Z [ OK ] FunctionalTest.GroupNorm (0 ms) 2023-01-11T22:10:16.0643081Z [ RUN ] FunctionalTest.LocalResponseNorm 2023-01-11T22:10:16.0645083Z [ OK ] FunctionalTest.LocalResponseNorm (0 ms) 2023-01-11T22:10:16.0645383Z [ RUN ] FunctionalTest.Linear 2023-01-11T22:10:16.0649400Z [ OK ] FunctionalTest.Linear (0 ms) 2023-01-11T22:10:16.0650199Z [ RUN ] FunctionalTest.Embedding 2023-01-11T22:10:16.0650741Z [ OK ] FunctionalTest.Embedding (0 ms) 2023-01-11T22:10:16.0651193Z [ RUN ] FunctionalTest.EmbeddingBag 2023-01-11T22:10:16.0655494Z [ OK ] FunctionalTest.EmbeddingBag (0 ms) 2023-01-11T22:10:16.0655881Z [ RUN ] FunctionalTest.Bilinear 2023-01-11T22:10:16.0658378Z [ OK ] FunctionalTest.Bilinear (0 ms) 2023-01-11T22:10:16.0658753Z [ RUN ] FunctionalTest.Normalize 2023-01-11T22:10:16.0662875Z [ OK ] FunctionalTest.Normalize (0 ms) 2023-01-11T22:10:16.0663279Z [ RUN ] FunctionalTest.ReLU 2023-01-11T22:10:16.0666298Z [ OK ] FunctionalTest.ReLU (0 ms) 2023-01-11T22:10:16.0666827Z [ RUN ] FunctionalTest.ReLUDefaultOptions 2023-01-11T22:10:16.0667477Z [ OK ] FunctionalTest.ReLUDefaultOptions (0 ms) 2023-01-11T22:10:16.0667776Z [ RUN ] FunctionalTest.ReLU6 2023-01-11T22:10:16.0671012Z [ OK ] FunctionalTest.ReLU6 (0 ms) 2023-01-11T22:10:16.0671552Z [ RUN ] FunctionalTest.ReLU6DefaultOptions 2023-01-11T22:10:16.0672147Z [ OK ] FunctionalTest.ReLU6DefaultOptions (0 ms) 2023-01-11T22:10:16.0672668Z [ RUN ] FunctionalTest.RReLU 2023-01-11T22:10:16.0705014Z [ OK ] FunctionalTest.RReLU (3 ms) 2023-01-11T22:10:16.0705352Z [ RUN ] FunctionalTest.RReLUDefaultOptions 2023-01-11T22:10:16.0707006Z [ OK ] FunctionalTest.RReLUDefaultOptions (0 ms) 2023-01-11T22:10:16.0707332Z [ RUN ] FunctionalTest.CELU 2023-01-11T22:10:16.0717900Z [ OK ] FunctionalTest.CELU (1 ms) 2023-01-11T22:10:16.0718213Z [ RUN ] FunctionalTest.CELUDefaultOptions 2023-01-11T22:10:16.0719647Z [ OK ] FunctionalTest.CELUDefaultOptions (0 ms) 2023-01-11T22:10:16.0719986Z [ RUN ] FunctionalTest.PixelShuffle 2023-01-11T22:10:16.0721582Z [ OK ] FunctionalTest.PixelShuffle (0 ms) 2023-01-11T22:10:16.0721901Z [ RUN ] FunctionalTest.PixelUnshuffle 2023-01-11T22:10:16.0723296Z [ OK ] FunctionalTest.PixelUnshuffle (0 ms) 2023-01-11T22:10:16.0723860Z [ RUN ] FunctionalTest.Softplus 2023-01-11T22:10:16.0730613Z [ OK ] FunctionalTest.Softplus (0 ms) 2023-01-11T22:10:16.0731226Z [ RUN ] FunctionalTest.SoftplusDefaultOptions 2023-01-11T22:10:16.0732023Z [ OK ] FunctionalTest.SoftplusDefaultOptions (0 ms) 2023-01-11T22:10:16.0732603Z [ RUN ] FunctionalTest.Fold 2023-01-11T22:10:16.0733436Z [ OK ] FunctionalTest.Fold (0 ms) 2023-01-11T22:10:16.0733952Z [ RUN ] FunctionalTest.Unfold 2023-01-11T22:10:16.0734970Z [ OK ] FunctionalTest.Unfold (0 ms) 2023-01-11T22:10:16.0735499Z [ RUN ] FunctionalTest.Softshrink 2023-01-11T22:10:16.0741231Z [ OK ] FunctionalTest.Softshrink (0 ms) 2023-01-11T22:10:16.0741856Z [ RUN ] FunctionalTest.SoftshrinkDefaultOptions 2023-01-11T22:10:16.0742668Z [ OK ] FunctionalTest.SoftshrinkDefaultOptions (0 ms) 2023-01-11T22:10:16.0743185Z [ RUN ] FunctionalTest.Softsign 2023-01-11T22:10:16.0743805Z [ OK ] 
FunctionalTest.Softsign (0 ms) 2023-01-11T22:10:16.0744990Z [ RUN ] FunctionalTest.Mish 2023-01-11T22:10:16.0745511Z [ OK ] FunctionalTest.Mish (0 ms) 2023-01-11T22:10:16.0745942Z [ RUN ] FunctionalTest.Tanhshrink 2023-01-11T22:10:16.0746684Z [ OK ] FunctionalTest.Tanhshrink (0 ms) 2023-01-11T22:10:16.0746986Z [ RUN ] FunctionalTest.Threshold 2023-01-11T22:10:16.0758687Z [ OK ] FunctionalTest.Threshold (1 ms) 2023-01-11T22:10:16.0759121Z [ RUN ] FunctionalTest.BatchNorm1d 2023-01-11T22:10:16.0760050Z [ OK ] FunctionalTest.BatchNorm1d (0 ms) 2023-01-11T22:10:16.0760466Z [ RUN ] FunctionalTest.BatchNorm1dDefaultOptions 2023-01-11T22:10:16.0761642Z [ OK ] FunctionalTest.BatchNorm1dDefaultOptions (0 ms) 2023-01-11T22:10:16.0762077Z [ RUN ] FunctionalTest.BatchNorm2d 2023-01-11T22:10:16.0763127Z [ OK ] FunctionalTest.BatchNorm2d (0 ms) 2023-01-11T22:10:16.0763598Z [ RUN ] FunctionalTest.BatchNorm2dDefaultOptions 2023-01-11T22:10:16.0764418Z [ OK ] FunctionalTest.BatchNorm2dDefaultOptions (0 ms) 2023-01-11T22:10:16.0765016Z [ RUN ] FunctionalTest.BatchNorm3d 2023-01-11T22:10:16.0765814Z [ OK ] FunctionalTest.BatchNorm3d (0 ms) 2023-01-11T22:10:16.0766343Z [ RUN ] FunctionalTest.BatchNorm3dDefaultOptions 2023-01-11T22:10:16.0767304Z [ OK ] FunctionalTest.BatchNorm3dDefaultOptions (0 ms) 2023-01-11T22:10:16.0767659Z [ RUN ] FunctionalTest.InstanceNorm1d 2023-01-11T22:10:16.0770255Z [ OK ] FunctionalTest.InstanceNorm1d (0 ms) 2023-01-11T22:10:16.0770618Z [ RUN ] FunctionalTest.InstanceNorm1dDefaultOptions 2023-01-11T22:10:16.0772408Z [ OK ] FunctionalTest.InstanceNorm1dDefaultOptions (0 ms) 2023-01-11T22:10:16.0772962Z [ RUN ] FunctionalTest.InstanceNorm2d 2023-01-11T22:10:16.0775108Z [ OK ] FunctionalTest.InstanceNorm2d (0 ms) 2023-01-11T22:10:16.0775642Z [ RUN ] FunctionalTest.InstanceNorm2dDefaultOptions 2023-01-11T22:10:16.0776899Z [ OK ] FunctionalTest.InstanceNorm2dDefaultOptions (0 ms) 2023-01-11T22:10:16.0777434Z [ RUN ] FunctionalTest.InstanceNorm3d 2023-01-11T22:10:16.0780191Z [ OK ] FunctionalTest.InstanceNorm3d (0 ms) 2023-01-11T22:10:16.0780712Z [ RUN ] FunctionalTest.InstanceNorm3dDefaultOptions 2023-01-11T22:10:16.0783303Z [ OK ] FunctionalTest.InstanceNorm3dDefaultOptions (0 ms) 2023-01-11T22:10:16.0783913Z [ RUN ] FunctionalTest.Interpolate 2023-01-11T22:10:16.0784822Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:16.0786337Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:16.0787522Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. 
(function _interp_output_size) 2023-01-11T22:10:16.0788677Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:16.0789969Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:16.0791138Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:16.0792373Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:16.0793544Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. 
(function _interp_output_size) 2023-01-11T22:10:16.0868051Z [ OK ] FunctionalTest.Interpolate (8 ms) 2023-01-11T22:10:16.0868480Z [ RUN ] FunctionalTest.Pad1 2023-01-11T22:10:16.0868804Z [ OK ] FunctionalTest.Pad1 (0 ms) 2023-01-11T22:10:16.0869073Z [ RUN ] FunctionalTest.Pad2 2023-01-11T22:10:16.0870659Z [ OK ] FunctionalTest.Pad2 (0 ms) 2023-01-11T22:10:16.0870939Z [ RUN ] FunctionalTest.Pad3 2023-01-11T22:10:16.0875495Z [ OK ] FunctionalTest.Pad3 (0 ms) 2023-01-11T22:10:16.0875908Z [ RUN ] FunctionalTest.Pad4 2023-01-11T22:10:16.0877288Z [ OK ] FunctionalTest.Pad4 (0 ms) 2023-01-11T22:10:16.0877699Z [ RUN ] FunctionalTest.Pad5 2023-01-11T22:10:16.0880259Z [ OK ] FunctionalTest.Pad5 (0 ms) 2023-01-11T22:10:16.0880668Z [ RUN ] FunctionalTest.Pad6 2023-01-11T22:10:16.0882387Z [ OK ] FunctionalTest.Pad6 (0 ms) 2023-01-11T22:10:16.0882743Z [ RUN ] FunctionalTest.Pad7 2023-01-11T22:10:16.0883087Z [ OK ] FunctionalTest.Pad7 (0 ms) 2023-01-11T22:10:16.0883440Z [ RUN ] FunctionalTest.Pad8 2023-01-11T22:10:16.0883812Z [ OK ] FunctionalTest.Pad8 (0 ms) 2023-01-11T22:10:16.0884158Z [ RUN ] FunctionalTest.CTCLoss 2023-01-11T22:10:16.0994039Z [ OK ] FunctionalTest.CTCLoss (11 ms) 2023-01-11T22:10:16.0994528Z [ RUN ] FunctionalTest.PoissonNLLLoss 2023-01-11T22:10:16.0996257Z [ OK ] FunctionalTest.PoissonNLLLoss (0 ms) 2023-01-11T22:10:16.0996734Z [ RUN ] FunctionalTest.MarginRankingLoss 2023-01-11T22:10:16.0999929Z [ OK ] FunctionalTest.MarginRankingLoss (0 ms) 2023-01-11T22:10:16.1000364Z [ RUN ] FunctionalTest.ConvTranspose1d 2023-01-11T22:10:16.1003291Z [ OK ] FunctionalTest.ConvTranspose1d (0 ms) 2023-01-11T22:10:16.1003739Z [ RUN ] FunctionalTest.ConvTranspose2dEven 2023-01-11T22:10:16.1008452Z [ OK ] FunctionalTest.ConvTranspose2dEven (0 ms) 2023-01-11T22:10:16.1008914Z [ RUN ] FunctionalTest.ConvTranspose2dUneven 2023-01-11T22:10:16.1012983Z [ OK ] FunctionalTest.ConvTranspose2dUneven (0 ms) 2023-01-11T22:10:16.1013367Z [ RUN ] FunctionalTest.ConvTranspose3d 2023-01-11T22:10:16.1016201Z [ OK ] FunctionalTest.ConvTranspose3d (0 ms) 2023-01-11T22:10:16.1016582Z [ RUN ] FunctionalTest.AlphaDropout 2023-01-11T22:10:16.1026871Z [ OK ] FunctionalTest.AlphaDropout (1 ms) 2023-01-11T22:10:16.1027333Z [ RUN ] FunctionalTest.FeatureAlphaDropout 2023-01-11T22:10:16.1035633Z [ OK ] FunctionalTest.FeatureAlphaDropout (0 ms) 2023-01-11T22:10:16.1035988Z [ RUN ] FunctionalTest.Dropout 2023-01-11T22:10:16.1041917Z [ OK ] FunctionalTest.Dropout (0 ms) 2023-01-11T22:10:16.1042223Z [ RUN ] FunctionalTest.Dropout2d 2023-01-11T22:10:16.1053180Z [ OK ] FunctionalTest.Dropout2d (1 ms) 2023-01-11T22:10:16.1053507Z [ RUN ] FunctionalTest.Dropout3d 2023-01-11T22:10:16.1063131Z [ OK ] FunctionalTest.Dropout3d (0 ms) 2023-01-11T22:10:16.1063438Z [ RUN ] FunctionalTest.isfinite 2023-01-11T22:10:16.1069192Z [ OK ] FunctionalTest.isfinite (0 ms) 2023-01-11T22:10:16.1069493Z [ RUN ] FunctionalTest.isinf 2023-01-11T22:10:16.1073737Z [ OK ] FunctionalTest.isinf (0 ms) 2023-01-11T22:10:16.1074028Z [ RUN ] FunctionalTest.AllClose 2023-01-11T22:10:16.1139113Z [ OK ] FunctionalTest.AllClose (6 ms) 2023-01-11T22:10:16.1139478Z [ RUN ] FunctionalTest.BCEWithLogitsLoss 2023-01-11T22:10:16.1176653Z [ OK ] FunctionalTest.BCEWithLogitsLoss (3 ms) 2023-01-11T22:10:16.1177015Z [----------] 132 tests from FunctionalTest (114 ms total) 2023-01-11T22:10:16.1177184Z 2023-01-11T22:10:16.1177341Z [----------] 1 test from IntegrationTest 2023-01-11T22:10:16.1177633Z [ RUN ] IntegrationTest.CartPole 2023-01-11T22:10:29.0932835Z [ OK ] 
IntegrationTest.CartPole (12975 ms) 2023-01-11T22:10:29.0933566Z [----------] 1 test from IntegrationTest (12975 ms total) 2023-01-11T22:10:29.0933793Z 2023-01-11T22:10:29.0934144Z [----------] 9 tests from InitTest 2023-01-11T22:10:29.0936065Z [ RUN ] InitTest.ProducesPyTorchValues_XavierUniform 2023-01-11T22:10:29.0964750Z [ OK ] InitTest.ProducesPyTorchValues_XavierUniform (3 ms) 2023-01-11T22:10:29.0965152Z [ RUN ] InitTest.ProducesPyTorchValues_XavierNormal 2023-01-11T22:10:29.0976855Z [ OK ] InitTest.ProducesPyTorchValues_XavierNormal (1 ms) 2023-01-11T22:10:29.0977275Z [ RUN ] InitTest.ProducesPyTorchValues_KaimingNormal 2023-01-11T22:10:29.0989581Z [ OK ] InitTest.ProducesPyTorchValues_KaimingNormal (1 ms) 2023-01-11T22:10:29.0990023Z [ RUN ] InitTest.ProducesPyTorchValues_KaimingUniform 2023-01-11T22:10:29.1002542Z [ OK ] InitTest.ProducesPyTorchValues_KaimingUniform (1 ms) 2023-01-11T22:10:29.1003014Z [ RUN ] InitTest.CanInitializeTensorThatRequiresGrad 2023-01-11T22:10:29.1029499Z [ OK ] InitTest.CanInitializeTensorThatRequiresGrad (2 ms) 2023-01-11T22:10:29.1030142Z [ RUN ] InitTest.CalculateGainWithTanh 2023-01-11T22:10:29.1030663Z [ OK ] InitTest.CalculateGainWithTanh (0 ms) 2023-01-11T22:10:29.1030976Z [ RUN ] InitTest.CalculateGainWithRelu 2023-01-11T22:10:29.1031304Z [ OK ] InitTest.CalculateGainWithRelu (0 ms) 2023-01-11T22:10:29.1031641Z [ RUN ] InitTest.CalculateGainWithLeakyRelu 2023-01-11T22:10:29.1031994Z [ OK ] InitTest.CalculateGainWithLeakyRelu (0 ms) 2023-01-11T22:10:29.1032344Z [ RUN ] InitTest.CanInitializeCnnWithOrthogonal 2023-01-11T22:10:29.1044986Z [ OK ] InitTest.CanInitializeCnnWithOrthogonal (1 ms) 2023-01-11T22:10:29.1045464Z [----------] 9 tests from InitTest (11 ms total) 2023-01-11T22:10:29.1045902Z 2023-01-11T22:10:29.1046096Z [----------] 6 tests from TorchScriptTest 2023-01-11T22:10:29.1046544Z [ RUN ] TorchScriptTest.CanCompileMultipleFunctions 2023-01-11T22:10:29.1427987Z [ OK ] TorchScriptTest.CanCompileMultipleFunctions (38 ms) 2023-01-11T22:10:29.1428513Z [ RUN ] TorchScriptTest.TestNestedIValueModuleArgMatching 2023-01-11T22:10:29.1462330Z [ OK ] TorchScriptTest.TestNestedIValueModuleArgMatching (3 ms) 2023-01-11T22:10:29.1462968Z [ RUN ] TorchScriptTest.TestDictArgMatching 2023-01-11T22:10:29.1465884Z [ OK ] TorchScriptTest.TestDictArgMatching (0 ms) 2023-01-11T22:10:29.1466417Z [ RUN ] TorchScriptTest.TestTupleArgMatching 2023-01-11T22:10:29.1467083Z [ OK ] TorchScriptTest.TestTupleArgMatching (0 ms) 2023-01-11T22:10:29.1467686Z [ RUN ] TorchScriptTest.TestOptionalArgMatching 2023-01-11T22:10:29.1472594Z [ OK ] TorchScriptTest.TestOptionalArgMatching (0 ms) 2023-01-11T22:10:29.1473285Z [ RUN ] TorchScriptTest.TestPickle 2023-01-11T22:10:29.1473874Z [ OK ] TorchScriptTest.TestPickle (0 ms) 2023-01-11T22:10:29.1474449Z [----------] 6 tests from TorchScriptTest (42 ms total) 2023-01-11T22:10:29.1474747Z 2023-01-11T22:10:29.1474967Z [----------] 3 tests from MakeUniqueTest 2023-01-11T22:10:29.1475394Z [ RUN ] MakeUniqueTest.ForwardRvaluesCorrectly 2023-01-11T22:10:29.1476063Z [ OK ] MakeUniqueTest.ForwardRvaluesCorrectly (0 ms) 2023-01-11T22:10:29.1476538Z [ RUN ] MakeUniqueTest.ForwardLvaluesCorrectly 2023-01-11T22:10:29.1476902Z [ OK ] MakeUniqueTest.ForwardLvaluesCorrectly (0 ms) 2023-01-11T22:10:29.1477288Z [ RUN ] MakeUniqueTest.CanConstructUniquePtrOfArray 2023-01-11T22:10:29.1477700Z [ OK ] MakeUniqueTest.CanConstructUniquePtrOfArray (0 ms) 2023-01-11T22:10:29.1478087Z [----------] 3 tests from MakeUniqueTest (0 ms total) 
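The FunctionalTest run above emitted warnings from kl_div (reduction 'mean' vs 'batchmean'), grid_sample/affine_grid (the align_corners default change), and interpolate/upsample (scale_factor handling). As a hedged sketch, assuming the torch::nn::functional options API, the snippet below passes those options explicitly, which is what the warnings recommend; all tensor shapes are illustrative.

#include <torch/torch.h>
#include <iostream>

namespace F = torch::nn::functional;

int main() {
  // kl_div: request the 'batchmean' reduction explicitly, as the warning suggests.
  auto input  = torch::log_softmax(torch::randn({3, 5}), /*dim=*/1);  // log-probabilities
  auto target = torch::softmax(torch::randn({3, 5}), /*dim=*/1);      // probabilities
  auto loss = F::kl_div(input, target, F::KLDivFuncOptions().reduction(torch::kBatchMean));

  // grid_sample: state align_corners explicitly instead of relying on the changed default.
  auto img  = torch::randn({1, 1, 4, 4});
  auto grid = torch::rand({1, 3, 3, 2}) * 2 - 1;  // normalized sampling coordinates in [-1, 1]
  auto sampled = F::grid_sample(img, grid, F::GridSampleFuncOptions().align_corners(false));

  // interpolate: set recompute_scale_factor explicitly, per the warning's recommendation.
  auto up = F::interpolate(img, F::InterpolateFuncOptions()
                                    .scale_factor(std::vector<double>{2.0, 2.0})
                                    .mode(torch::kBilinear)
                                    .align_corners(false)
                                    .recompute_scale_factor(false));

  std::cout << loss.item<float>() << " " << sampled.sizes() << " " << up.sizes() << std::endl;
}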
2023-01-11T22:10:29.1478229Z 2023-01-11T22:10:29.1478383Z [----------] 2 tests from MetaTensorTest 2023-01-11T22:10:29.1478681Z [ RUN ] MetaTensorTest.MetaDeviceApi 2023-01-11T22:10:29.1479016Z [ OK ] MetaTensorTest.MetaDeviceApi (0 ms) 2023-01-11T22:10:29.1479328Z [ RUN ] MetaTensorTest.MetaNamespaceApi 2023-01-11T22:10:29.1479733Z [ OK ] MetaTensorTest.MetaNamespaceApi (0 ms) 2023-01-11T22:10:29.1480075Z [----------] 2 tests from MetaTensorTest (0 ms total) 2023-01-11T22:10:29.1480231Z 2023-01-11T22:10:29.1480373Z [----------] 2 tests from UtilsTest 2023-01-11T22:10:29.1480623Z [ RUN ] UtilsTest.WarnOnce 2023-01-11T22:10:29.1480900Z [ OK ] UtilsTest.WarnOnce (0 ms) 2023-01-11T22:10:29.1481222Z [ RUN ] UtilsTest.AmbiguousOperatorDefaults 2023-01-11T22:10:29.1481571Z [ OK ] UtilsTest.AmbiguousOperatorDefaults (0 ms) 2023-01-11T22:10:29.1481911Z [----------] 2 tests from UtilsTest (0 ms total) 2023-01-11T22:10:29.1482058Z 2023-01-11T22:10:29.1482199Z [----------] 1 test from NoGradTest 2023-01-11T22:10:29.1482486Z [ RUN ] NoGradTest.SetsGradModeCorrectly 2023-01-11T22:10:29.1508580Z [ OK ] NoGradTest.SetsGradModeCorrectly (3 ms) 2023-01-11T22:10:29.1509185Z [----------] 1 test from NoGradTest (3 ms total) 2023-01-11T22:10:29.1509352Z 2023-01-11T22:10:29.1509503Z [----------] 3 tests from AutogradTest 2023-01-11T22:10:29.1509793Z [ RUN ] AutogradTest.CanTakeDerivatives 2023-01-11T22:10:29.1510503Z [ OK ] AutogradTest.CanTakeDerivatives (0 ms) 2023-01-11T22:10:29.1511064Z [ RUN ] AutogradTest.CanTakeDerivativesOfZeroDimTensors 2023-01-11T22:10:29.1511747Z [ OK ] AutogradTest.CanTakeDerivativesOfZeroDimTensors (0 ms) 2023-01-11T22:10:29.1512506Z [ RUN ] AutogradTest.CanPassCustomGradientInputs 2023-01-11T22:10:29.1513266Z [ OK ] AutogradTest.CanPassCustomGradientInputs (0 ms) 2023-01-11T22:10:29.1513899Z [----------] 3 tests from AutogradTest (0 ms total) 2023-01-11T22:10:29.1514179Z 2023-01-11T22:10:29.1514423Z [----------] 1 test from OptionalArrayRefTest 2023-01-11T22:10:29.1514757Z [ RUN ] OptionalArrayRefTest.DanglingPointerFix 2023-01-11T22:10:29.1515289Z [ OK ] OptionalArrayRefTest.DanglingPointerFix (0 ms) 2023-01-11T22:10:29.1515863Z [----------] 1 test from OptionalArrayRefTest (0 ms total) 2023-01-11T22:10:29.1516036Z 2023-01-11T22:10:29.1516243Z [----------] 52 tests from ModuleTest 2023-01-11T22:10:29.1516634Z [ RUN ] ModuleTest.CanEnableAndDisableTrainingMode 2023-01-11T22:10:29.1517140Z [ OK ] ModuleTest.CanEnableAndDisableTrainingMode (0 ms) 2023-01-11T22:10:29.1517472Z [ RUN ] ModuleTest.ZeroGrad 2023-01-11T22:10:29.1517938Z [ OK ] ModuleTest.ZeroGrad (0 ms) 2023-01-11T22:10:29.1518259Z [ RUN ] ModuleTest.ZeroGradWithUndefined 2023-01-11T22:10:29.1518605Z [ OK ] ModuleTest.ZeroGradWithUndefined (0 ms) 2023-01-11T22:10:29.1518992Z [ RUN ] ModuleTest.RegisterModuleThrowsForEmptyOrDottedName 2023-01-11T22:10:29.1541504Z [ OK ] ModuleTest.RegisterModuleThrowsForEmptyOrDottedName (2 ms) 2023-01-11T22:10:29.1542040Z [ RUN ] ModuleTest.RegisterModuleThrowsForDuplicateModuleName 2023-01-11T22:10:29.1558551Z [ OK ] ModuleTest.RegisterModuleThrowsForDuplicateModuleName (1 ms) 2023-01-11T22:10:29.1559067Z [ RUN ] ModuleTest.ReplaceModuleThrowsForUnknownModuleName 2023-01-11T22:10:29.1571398Z [ OK ] ModuleTest.ReplaceModuleThrowsForUnknownModuleName (1 ms) 2023-01-11T22:10:29.1571777Z [ RUN ] ModuleTest.ReplaceModule 2023-01-11T22:10:29.1572093Z [ OK ] ModuleTest.ReplaceModule (0 ms) 2023-01-11T22:10:29.1572386Z [ RUN ] ModuleTest.UnregisterModule 2023-01-11T22:10:29.1588223Z [ OK ] 
ModuleTest.UnregisterModule (1 ms) 2023-01-11T22:10:29.1588679Z [ RUN ] ModuleTest.RegisterParameterThrowsForEmptyOrDottedName 2023-01-11T22:10:29.1620501Z [ OK ] ModuleTest.RegisterParameterThrowsForEmptyOrDottedName (3 ms) 2023-01-11T22:10:29.1621006Z [ RUN ] ModuleTest.RegisterParameterThrowsForDuplicateModuleName 2023-01-11T22:10:29.1642684Z [ OK ] ModuleTest.RegisterParameterThrowsForDuplicateModuleName (2 ms) 2023-01-11T22:10:29.1643142Z [ RUN ] ModuleTest.RegisterParameterUndefinedTensor 2023-01-11T22:10:29.1643543Z [ OK ] ModuleTest.RegisterParameterUndefinedTensor (0 ms) 2023-01-11T22:10:29.1643974Z [ RUN ] ModuleTest.RegisterBufferThrowsForEmptyOrDottedName 2023-01-11T22:10:29.1674910Z [ OK ] ModuleTest.RegisterBufferThrowsForEmptyOrDottedName (3 ms) 2023-01-11T22:10:29.1675432Z [ RUN ] ModuleTest.RegisterBufferThrowsForDuplicateModuleName 2023-01-11T22:10:29.1697106Z [ OK ] ModuleTest.RegisterBufferThrowsForDuplicateModuleName (2 ms) 2023-01-11T22:10:29.1697658Z [ RUN ] ModuleTest.CanGetName 2023-01-11T22:10:29.1698146Z [ OK ] ModuleTest.CanGetName (0 ms) 2023-01-11T22:10:29.1698706Z [ RUN ] ModuleTest.AsCastsModulesCorrectly 2023-01-11T22:10:29.1699198Z [ OK ] ModuleTest.AsCastsModulesCorrectly (0 ms) 2023-01-11T22:10:29.1699685Z [ RUN ] ModuleTest.DeviceOrDtypeConversionSkipsUndefinedTensor 2023-01-11T22:10:29.1700185Z [ OK ] ModuleTest.DeviceOrDtypeConversionSkipsUndefinedTensor (0 ms) 2023-01-11T22:10:29.1700705Z [ RUN ] ModuleTest.ParametersAndBuffersAccessorSkipsUndefinedTensor 2023-01-11T22:10:29.1701292Z [ OK ] ModuleTest.ParametersAndBuffersAccessorSkipsUndefinedTensor (0 ms) 2023-01-11T22:10:29.1701990Z [ RUN ] ModuleTest.CallingCloneOnModuleThatDoesNotOverrideCloneThrows 2023-01-11T22:10:29.1715661Z [ OK ] ModuleTest.CallingCloneOnModuleThatDoesNotOverrideCloneThrows (1 ms) 2023-01-11T22:10:29.1716464Z [ RUN ] ModuleTest.CallingCloneOnModuleThatDoesOverrideCloneDoesNotThrow 2023-01-11T22:10:29.1717025Z [ OK ] ModuleTest.CallingCloneOnModuleThatDoesOverrideCloneDoesNotThrow (0 ms) 2023-01-11T22:10:29.1717485Z [ RUN ] ModuleTest.CloneCreatesDistinctParameters 2023-01-11T22:10:29.1721405Z [ OK ] ModuleTest.CloneCreatesDistinctParameters (0 ms) 2023-01-11T22:10:29.1722052Z [ RUN ] ModuleTest.ClonePreservesExternalReferences 2023-01-11T22:10:29.1723262Z [ OK ] ModuleTest.ClonePreservesExternalReferences (0 ms) 2023-01-11T22:10:29.1723909Z [ RUN ] ModuleTest.CloneCopiesTheValuesOfVariablesOfSubmodules 2023-01-11T22:10:29.1724816Z [ OK ] ModuleTest.CloneCopiesTheValuesOfVariablesOfSubmodules (0 ms) 2023-01-11T22:10:29.1725347Z [ RUN ] ModuleTest.HasCorrectNumberOfParameters 2023-01-11T22:10:29.1725735Z [ OK ] ModuleTest.HasCorrectNumberOfParameters (0 ms) 2023-01-11T22:10:29.1726128Z [ RUN ] ModuleTest.ContainsParametersWithTheCorrectName 2023-01-11T22:10:29.1726564Z [ OK ] ModuleTest.ContainsParametersWithTheCorrectName (0 ms) 2023-01-11T22:10:29.1726958Z [ RUN ] ModuleTest.HasCorrectNumberOfBuffers 2023-01-11T22:10:29.1727321Z [ OK ] ModuleTest.HasCorrectNumberOfBuffers (0 ms) 2023-01-11T22:10:29.1727689Z [ RUN ] ModuleTest.ContainsBuffersWithTheCorrectName 2023-01-11T22:10:29.1728107Z [ OK ] ModuleTest.ContainsBuffersWithTheCorrectName (0 ms) 2023-01-11T22:10:29.1728624Z [ RUN ] ModuleTest.DefaultConstructorOfModuleHolderCallsDefaultConstructorOfImpl 2023-01-11T22:10:29.1729232Z [ OK ] ModuleTest.DefaultConstructorOfModuleHolderCallsDefaultConstructorOfImpl (0 ms) 2023-01-11T22:10:29.1729832Z [ RUN ] ModuleTest.ValueConstructorOfModuleHolderCallsCorrectConstructorInImpl 
2023-01-11T22:10:29.1730432Z [ OK ] ModuleTest.ValueConstructorOfModuleHolderCallsCorrectConstructorInImpl (0 ms) 2023-01-11T22:10:29.1730993Z [ RUN ] ModuleTest.NullptrConstructorLeavesTheModuleHolderInEmptyState 2023-01-11T22:10:29.1737217Z [ OK ] ModuleTest.NullptrConstructorLeavesTheModuleHolderInEmptyState (1 ms) 2023-01-11T22:10:29.1737973Z [ RUN ] ModuleTest.ModulesReturnsExpectedSubmodulesForFlatModel 2023-01-11T22:10:29.1738702Z [ OK ] ModuleTest.ModulesReturnsExpectedSubmodulesForFlatModel (0 ms) 2023-01-11T22:10:29.1739475Z [ RUN ] ModuleTest.ModulesExcludesSelfWhenIncludeSelfSetToFalse 2023-01-11T22:10:29.1740116Z [ OK ] ModuleTest.ModulesExcludesSelfWhenIncludeSelfSetToFalse (0 ms) 2023-01-11T22:10:29.1741107Z [ RUN ] ModuleTest.NamedModulesReturnsExpectedNamedSubmodulesForFlatModel 2023-01-11T22:10:29.1742055Z [ OK ] ModuleTest.NamedModulesReturnsExpectedNamedSubmodulesForFlatModel (0 ms) 2023-01-11T22:10:29.1743071Z [ RUN ] ModuleTest.NamedModulesExcludesSelfWhenIncludeSelfSetToFalse 2023-01-11T22:10:29.1744001Z [ OK ] ModuleTest.NamedModulesExcludesSelfWhenIncludeSelfSetToFalse (0 ms) 2023-01-11T22:10:29.1744760Z [ RUN ] ModuleTest.ChildrenReturnsExpectedSubmodulesForFlatModel 2023-01-11T22:10:29.1745570Z [ OK ] ModuleTest.ChildrenReturnsExpectedSubmodulesForFlatModel (0 ms) 2023-01-11T22:10:29.1746214Z [ RUN ] ModuleTest.NamedChildrenReturnsExpectedNamedSubmodulesForFlatModel 2023-01-11T22:10:29.1746786Z [ OK ] ModuleTest.NamedChildrenReturnsExpectedNamedSubmodulesForFlatModel (0 ms) 2023-01-11T22:10:29.1747318Z [ RUN ] ModuleTest.ParametersReturnsExpectedTensorsForFlatModel 2023-01-11T22:10:29.1747949Z [ OK ] ModuleTest.ParametersReturnsExpectedTensorsForFlatModel (0 ms) 2023-01-11T22:10:29.1748441Z [ RUN ] ModuleTest.NamedParametersReturnsExpectedTensorsForFlatModel 2023-01-11T22:10:29.1748965Z [ OK ] ModuleTest.NamedParametersReturnsExpectedTensorsForFlatModel (0 ms) 2023-01-11T22:10:29.1749455Z [ RUN ] ModuleTest.BuffersReturnsExpectedTensorsForFlatModel 2023-01-11T22:10:29.1749906Z [ OK ] ModuleTest.BuffersReturnsExpectedTensorsForFlatModel (0 ms) 2023-01-11T22:10:29.1750384Z [ RUN ] ModuleTest.NamedBuffersReturnsExpectedTensorsForFlatModel 2023-01-11T22:10:29.1750883Z [ OK ] ModuleTest.NamedBuffersReturnsExpectedTensorsForFlatModel (0 ms) 2023-01-11T22:10:29.1751371Z [ RUN ] ModuleTest.ModulesReturnsExpectedSubmodulesForDeepModel 2023-01-11T22:10:29.1751911Z [ OK ] ModuleTest.ModulesReturnsExpectedSubmodulesForDeepModel (0 ms) 2023-01-11T22:10:29.1752440Z [ RUN ] ModuleTest.NamedModulesReturnsExpectedNamedSubmodulesForDeepModel 2023-01-11T22:10:29.1753002Z [ OK ] ModuleTest.NamedModulesReturnsExpectedNamedSubmodulesForDeepModel (0 ms) 2023-01-11T22:10:29.1753527Z [ RUN ] ModuleTest.ChildrensReturnsExpectedSubmodulesForDeepModel 2023-01-11T22:10:29.1754014Z [ OK ] ModuleTest.ChildrensReturnsExpectedSubmodulesForDeepModel (0 ms) 2023-01-11T22:10:29.1754549Z [ RUN ] ModuleTest.NamedChildrensReturnsExpectedNamedSubmodulesForDeepModel 2023-01-11T22:10:29.1755130Z [ OK ] ModuleTest.NamedChildrensReturnsExpectedNamedSubmodulesForDeepModel (0 ms) 2023-01-11T22:10:29.1755586Z [ RUN ] ModuleTest.ModuleApplyIteratesCorreclty 2023-01-11T22:10:29.1755972Z [ OK ] ModuleTest.ModuleApplyIteratesCorreclty (0 ms) 2023-01-11T22:10:29.1756373Z [ RUN ] ModuleTest.ConstModuleApplyIteratesCorreclty 2023-01-11T22:10:29.1756780Z [ OK ] ModuleTest.ConstModuleApplyIteratesCorreclty (0 ms) 2023-01-11T22:10:29.1757184Z [ RUN ] ModuleTest.NamedModuleApplyIteratesCorreclty 
2023-01-11T22:10:29.1757594Z [ OK ] ModuleTest.NamedModuleApplyIteratesCorreclty (0 ms) 2023-01-11T22:10:29.1758020Z [ RUN ] ModuleTest.ConstNamedModuleApplyIteratesCorreclty 2023-01-11T22:10:29.1758450Z [ OK ] ModuleTest.ConstNamedModuleApplyIteratesCorreclty (0 ms) 2023-01-11T22:10:29.1758881Z [ RUN ] ModuleTest.ModulePointerApplyIteratesCorreclty 2023-01-11T22:10:29.1759308Z [ OK ] ModuleTest.ModulePointerApplyIteratesCorreclty (0 ms) 2023-01-11T22:10:29.1759807Z [ RUN ] ModuleTest.NamedModulePointerApplyIteratesCorreclty 2023-01-11T22:10:29.1760270Z [ OK ] ModuleTest.NamedModulePointerApplyIteratesCorreclty (0 ms) 2023-01-11T22:10:29.1760772Z [ RUN ] ModuleTest.ThrowsWhenAttemptingtoGetTopLevelModuleAsSharedPtr 2023-01-11T22:10:29.1775203Z [ OK ] ModuleTest.ThrowsWhenAttemptingtoGetTopLevelModuleAsSharedPtr (2 ms) 2023-01-11T22:10:29.1775906Z [ RUN ] ModuleTest.PrettyPrint 2023-01-11T22:10:29.1776298Z [ OK ] ModuleTest.PrettyPrint (0 ms) 2023-01-11T22:10:29.1776701Z [ RUN ] ModuleTest.CanCallForwardOnNonTensorForwardThroughPimpl 2023-01-11T22:10:29.1777195Z [ OK ] ModuleTest.CanCallForwardOnNonTensorForwardThroughPimpl (0 ms) 2023-01-11T22:10:29.1777590Z [----------] 52 tests from ModuleTest (26 ms total) 2023-01-11T22:10:29.1777743Z 2023-01-11T22:10:29.1777899Z [----------] 11 tests from ModuleDictTest 2023-01-11T22:10:29.1778213Z [ RUN ] ModuleDictTest.ConstructsFromList 2023-01-11T22:10:29.1778550Z [ OK ] ModuleDictTest.ConstructsFromList (0 ms) 2023-01-11T22:10:29.1778917Z [ RUN ] ModuleDictTest.ConstructsFromordereddict 2023-01-11T22:10:29.1779392Z [ OK ] ModuleDictTest.ConstructsFromordereddict (0 ms) 2023-01-11T22:10:29.1779757Z [ RUN ] ModuleDictTest.UpdatePopClearContains 2023-01-11T22:10:29.1789525Z [ OK ] ModuleDictTest.UpdatePopClearContains (1 ms) 2023-01-11T22:10:29.1790068Z [ RUN ] ModuleDictTest.UpdateExist 2023-01-11T22:10:29.1790399Z [ OK ] ModuleDictTest.UpdateExist (0 ms) 2023-01-11T22:10:29.1790679Z [ RUN ] ModuleDictTest.Keys 2023-01-11T22:10:29.1801583Z [ OK ] ModuleDictTest.Keys (1 ms) 2023-01-11T22:10:29.1802114Z [ RUN ] ModuleDictTest.Values 2023-01-11T22:10:29.1802485Z [ OK ] ModuleDictTest.Values (0 ms) 2023-01-11T22:10:29.1802915Z [ RUN ] ModuleDictTest.SanityCheckForHoldingStandardModules 2023-01-11T22:10:29.1803689Z [ OK ] ModuleDictTest.SanityCheckForHoldingStandardModules (0 ms) 2023-01-11T22:10:29.1804275Z [ RUN ] ModuleDictTest.HasReferenceSemantics 2023-01-11T22:10:29.1804674Z [ OK ] ModuleDictTest.HasReferenceSemantics (0 ms) 2023-01-11T22:10:29.1805005Z [ RUN ] ModuleDictTest.IsCloneable 2023-01-11T22:10:29.1807836Z [ OK ] ModuleDictTest.IsCloneable (0 ms) 2023-01-11T22:10:29.1808313Z [ RUN ] ModuleDictTest.RegistersElementsAsSubmodules 2023-01-11T22:10:29.1809171Z [ OK ] ModuleDictTest.RegistersElementsAsSubmodules (0 ms) 2023-01-11T22:10:29.1809698Z [ RUN ] ModuleDictTest.PrettyPrintModuleDict 2023-01-11T22:10:29.1810903Z [ OK ] ModuleDictTest.PrettyPrintModuleDict (0 ms) 2023-01-11T22:10:29.1811486Z [----------] 11 tests from ModuleDictTest (3 ms total) 2023-01-11T22:10:29.1811756Z 2023-01-11T22:10:29.1812015Z [----------] 15 tests from ModuleListTest 2023-01-11T22:10:29.1812486Z [ RUN ] ModuleListTest.ConstructsFromSharedPointer 2023-01-11T22:10:29.1812906Z [ OK ] ModuleListTest.ConstructsFromSharedPointer (0 ms) 2023-01-11T22:10:29.1813290Z [ RUN ] ModuleListTest.ConstructsFromConcreteType 2023-01-11T22:10:29.1813688Z [ OK ] ModuleListTest.ConstructsFromConcreteType (0 ms) 2023-01-11T22:10:29.1814075Z [ RUN ] 
ModuleListTest.ConstructsFromModuleHolder 2023-01-11T22:10:29.1814454Z [ OK ] ModuleListTest.ConstructsFromModuleHolder (0 ms) 2023-01-11T22:10:29.1814825Z [ RUN ] ModuleListTest.PushBackAddsAnElement 2023-01-11T22:10:29.1815193Z [ OK ] ModuleListTest.PushBackAddsAnElement (0 ms) 2023-01-11T22:10:29.1815515Z [ RUN ] ModuleListTest.Insertion 2023-01-11T22:10:29.1815802Z [ OK ] ModuleListTest.Insertion (0 ms) 2023-01-11T22:10:29.1816104Z [ RUN ] ModuleListTest.AccessWithAt 2023-01-11T22:10:29.1836075Z [ OK ] ModuleListTest.AccessWithAt (2 ms) 2023-01-11T22:10:29.1836390Z [ RUN ] ModuleListTest.AccessWithPtr 2023-01-11T22:10:29.1858530Z [ OK ] ModuleListTest.AccessWithPtr (2 ms) 2023-01-11T22:10:29.1858966Z [ RUN ] ModuleListTest.SanityCheckForHoldingStandardModules 2023-01-11T22:10:29.1860375Z [ OK ] ModuleListTest.SanityCheckForHoldingStandardModules (0 ms) 2023-01-11T22:10:29.1861180Z [ RUN ] ModuleListTest.ExtendPushesModulesFromOtherModuleList 2023-01-11T22:10:29.1861668Z [ OK ] ModuleListTest.ExtendPushesModulesFromOtherModuleList (0 ms) 2023-01-11T22:10:29.1862080Z [ RUN ] ModuleListTest.HasReferenceSemantics 2023-01-11T22:10:29.1862626Z [ OK ] ModuleListTest.HasReferenceSemantics (0 ms) 2023-01-11T22:10:29.1862961Z [ RUN ] ModuleListTest.IsCloneable 2023-01-11T22:10:29.1864813Z [ OK ] ModuleListTest.IsCloneable (0 ms) 2023-01-11T22:10:29.1865300Z [ RUN ] ModuleListTest.RegistersElementsAsSubmodules 2023-01-11T22:10:29.1865901Z [ OK ] ModuleListTest.RegistersElementsAsSubmodules (0 ms) 2023-01-11T22:10:29.1866401Z [ RUN ] ModuleListTest.NestingIsPossible 2023-01-11T22:10:29.1866786Z [ OK ] ModuleListTest.NestingIsPossible (0 ms) 2023-01-11T22:10:29.1867325Z [ RUN ] ModuleListTest.PrettyPrintModuleList 2023-01-11T22:10:29.1867963Z [ OK ] ModuleListTest.PrettyPrintModuleList (0 ms) 2023-01-11T22:10:29.1868315Z [ RUN ] ModuleListTest.RangeBasedForLoop 2023-01-11T22:10:29.1868657Z [ OK ] ModuleListTest.RangeBasedForLoop (0 ms) 2023-01-11T22:10:29.1868990Z [----------] 15 tests from ModuleListTest (5 ms total) 2023-01-11T22:10:29.1869146Z 2023-01-11T22:10:29.1869295Z [----------] 256 tests from ModulesTest 2023-01-11T22:10:29.1869562Z [ RUN ] ModulesTest.Conv1d 2023-01-11T22:10:29.1886686Z [ OK ] ModulesTest.Conv1d (1 ms) 2023-01-11T22:10:29.1887360Z [ RUN ] ModulesTest.Conv1dSameStrided 2023-01-11T22:10:29.1923060Z [ OK ] ModulesTest.Conv1dSameStrided (3 ms) 2023-01-11T22:10:29.1923594Z [ RUN ] ModulesTest.Conv2dEven 2023-01-11T22:10:29.1926846Z [ OK ] ModulesTest.Conv2dEven (0 ms) 2023-01-11T22:10:29.1927393Z [ RUN ] ModulesTest.Conv2dUneven 2023-01-11T22:10:29.1929839Z [ OK ] ModulesTest.Conv2dUneven (0 ms) 2023-01-11T22:10:29.1930385Z [ RUN ] ModulesTest.Conv2dSameStrided 2023-01-11T22:10:29.1977006Z [ OK ] ModulesTest.Conv2dSameStrided (4 ms) 2023-01-11T22:10:29.1977541Z [ RUN ] ModulesTest.Conv3d 2023-01-11T22:10:29.1982087Z [ OK ] ModulesTest.Conv3d (0 ms) 2023-01-11T22:10:29.1982761Z [ RUN ] ModulesTest.Conv3dSameStrided 2023-01-11T22:10:29.2037556Z [ OK ] ModulesTest.Conv3dSameStrided (5 ms) 2023-01-11T22:10:29.2038127Z [ RUN ] ModulesTest.ConvTranspose1d 2023-01-11T22:10:29.2041300Z [ OK ] ModulesTest.ConvTranspose1d (0 ms) 2023-01-11T22:10:29.2041876Z [ RUN ] ModulesTest.ConvTranspose2dEven 2023-01-11T22:10:29.2046013Z [ OK ] ModulesTest.ConvTranspose2dEven (0 ms) 2023-01-11T22:10:29.2046599Z [ RUN ] ModulesTest.ConvTranspose2dUneven 2023-01-11T22:10:29.2050574Z [ OK ] ModulesTest.ConvTranspose2dUneven (0 ms) 2023-01-11T22:10:29.2051138Z [ RUN ] ModulesTest.ConvTranspose3d 
2023-01-11T22:10:29.2054377Z [ OK ] ModulesTest.ConvTranspose3d (0 ms) 2023-01-11T22:10:29.2054904Z [ RUN ] ModulesTest.MaxPool1d 2023-01-11T22:10:29.2069829Z [ OK ] ModulesTest.MaxPool1d (1 ms) 2023-01-11T22:10:29.2070401Z [ RUN ] ModulesTest.MaxPool1dReturnIndices 2023-01-11T22:10:29.2071274Z [ OK ] ModulesTest.MaxPool1dReturnIndices (0 ms) 2023-01-11T22:10:29.2071827Z [ RUN ] ModulesTest.MaxPool2dEven 2023-01-11T22:10:29.2072468Z [ OK ] ModulesTest.MaxPool2dEven (0 ms) 2023-01-11T22:10:29.2073044Z [ RUN ] ModulesTest.MaxPool2dUneven 2023-01-11T22:10:29.2074092Z [ OK ] ModulesTest.MaxPool2dUneven (0 ms) 2023-01-11T22:10:29.2074702Z [ RUN ] ModulesTest.MaxPool2dReturnIndices 2023-01-11T22:10:29.2075438Z [ OK ] ModulesTest.MaxPool2dReturnIndices (0 ms) 2023-01-11T22:10:29.2076003Z [ RUN ] ModulesTest.MaxPool3d 2023-01-11T22:10:29.2076945Z [ OK ] ModulesTest.MaxPool3d (0 ms) 2023-01-11T22:10:29.2077526Z [ RUN ] ModulesTest.MaxPool3dReturnIndices 2023-01-11T22:10:29.2078540Z [ OK ] ModulesTest.MaxPool3dReturnIndices (0 ms) 2023-01-11T22:10:29.2079087Z [ RUN ] ModulesTest.AvgPool1d 2023-01-11T22:10:29.2080116Z [ OK ] ModulesTest.AvgPool1d (0 ms) 2023-01-11T22:10:29.2080646Z [ RUN ] ModulesTest.AvgPool2dEven 2023-01-11T22:10:29.2081542Z [ OK ] ModulesTest.AvgPool2dEven (0 ms) 2023-01-11T22:10:29.2082074Z [ RUN ] ModulesTest.AvgPool2dUneven 2023-01-11T22:10:29.2082649Z [ OK ] ModulesTest.AvgPool2dUneven (0 ms) 2023-01-11T22:10:29.2083142Z [ RUN ] ModulesTest.AvgPool3d 2023-01-11T22:10:29.2084546Z [ OK ] ModulesTest.AvgPool3d (0 ms) 2023-01-11T22:10:29.2085103Z [ RUN ] ModulesTest.FractionalMaxPool2d 2023-01-11T22:10:29.2085827Z [ OK ] ModulesTest.FractionalMaxPool2d (0 ms) 2023-01-11T22:10:29.2086402Z [ RUN ] ModulesTest.FractionalMaxPool2dReturnIndices 2023-01-11T22:10:29.2087244Z [ OK ] ModulesTest.FractionalMaxPool2dReturnIndices (0 ms) 2023-01-11T22:10:29.2087607Z [ RUN ] ModulesTest.FractionalMaxPool3d 2023-01-11T22:10:29.2089261Z [ OK ] ModulesTest.FractionalMaxPool3d (0 ms) 2023-01-11T22:10:29.2089682Z [ RUN ] ModulesTest.FractionalMaxPool3dReturnIndices 2023-01-11T22:10:29.2091034Z [ OK ] ModulesTest.FractionalMaxPool3dReturnIndices (0 ms) 2023-01-11T22:10:29.2091544Z [ RUN ] ModulesTest.LPPool1d 2023-01-11T22:10:29.2092526Z [ OK ] ModulesTest.LPPool1d (0 ms) 2023-01-11T22:10:29.2092996Z [ RUN ] ModulesTest.LPPool2d 2023-01-11T22:10:29.2093533Z [ OK ] ModulesTest.LPPool2d (0 ms) 2023-01-11T22:10:29.2093974Z [ RUN ] ModulesTest.Identity 2023-01-11T22:10:29.2094632Z [ OK ] ModulesTest.Identity (0 ms) 2023-01-11T22:10:29.2094892Z [ RUN ] ModulesTest.Flatten 2023-01-11T22:10:29.2097259Z [ OK ] ModulesTest.Flatten (0 ms) 2023-01-11T22:10:29.2097839Z [ RUN ] ModulesTest.Unflatten 2023-01-11T22:10:29.2098894Z [W TensorImpl.h:1816] Warning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. 
(function operator()) 2023-01-11T22:10:29.2099504Z [ OK ] ModulesTest.Unflatten (0 ms) 2023-01-11T22:10:29.2099805Z [ RUN ] ModulesTest.AdaptiveMaxPool1d 2023-01-11T22:10:29.2101237Z [ OK ] ModulesTest.AdaptiveMaxPool1d (0 ms) 2023-01-11T22:10:29.2101782Z [ RUN ] ModulesTest.AdaptiveMaxPool1dReturnIndices 2023-01-11T22:10:29.2102841Z [ OK ] ModulesTest.AdaptiveMaxPool1dReturnIndices (0 ms) 2023-01-11T22:10:29.2103224Z [ RUN ] ModulesTest.AdaptiveMaxPool2dEven 2023-01-11T22:10:29.2106018Z [ OK ] ModulesTest.AdaptiveMaxPool2dEven (0 ms) 2023-01-11T22:10:29.2106398Z [ RUN ] ModulesTest.AdaptiveMaxPool2dUneven 2023-01-11T22:10:29.2107435Z [ OK ] ModulesTest.AdaptiveMaxPool2dUneven (0 ms) 2023-01-11T22:10:29.2107900Z [ RUN ] ModulesTest.AdaptiveMaxPool2dReturnIndicesEven 2023-01-11T22:10:29.2110378Z [ OK ] ModulesTest.AdaptiveMaxPool2dReturnIndicesEven (0 ms) 2023-01-11T22:10:29.2110829Z [ RUN ] ModulesTest.AdaptiveMaxPool2dReturnIndicesUneven 2023-01-11T22:10:29.2112993Z [ OK ] ModulesTest.AdaptiveMaxPool2dReturnIndicesUneven (0 ms) 2023-01-11T22:10:29.2113379Z [ RUN ] ModulesTest.AdaptiveMaxPool3d 2023-01-11T22:10:29.2115223Z [ OK ] ModulesTest.AdaptiveMaxPool3d (0 ms) 2023-01-11T22:10:29.2115579Z [ RUN ] ModulesTest.AdaptiveMaxPool3dReturnIndices 2023-01-11T22:10:29.2118107Z [ OK ] ModulesTest.AdaptiveMaxPool3dReturnIndices (0 ms) 2023-01-11T22:10:29.2118495Z [ RUN ] ModulesTest.AdaptiveAvgPool1d 2023-01-11T22:10:29.2119584Z [ OK ] ModulesTest.AdaptiveAvgPool1d (0 ms) 2023-01-11T22:10:29.2119950Z [ RUN ] ModulesTest.AdaptiveAvgPool2dEven 2023-01-11T22:10:29.2121796Z [ OK ] ModulesTest.AdaptiveAvgPool2dEven (0 ms) 2023-01-11T22:10:29.2122174Z [ RUN ] ModulesTest.AdaptiveAvgPool2dUneven 2023-01-11T22:10:29.2123633Z [ OK ] ModulesTest.AdaptiveAvgPool2dUneven (0 ms) 2023-01-11T22:10:29.2123993Z [ RUN ] ModulesTest.AdaptiveAvgPool3d 2023-01-11T22:10:29.2125777Z [ OK ] ModulesTest.AdaptiveAvgPool3d (0 ms) 2023-01-11T22:10:29.2126146Z [ RUN ] ModulesTest.MaxUnpool1d 2023-01-11T22:10:29.2127932Z [ OK ] ModulesTest.MaxUnpool1d (0 ms) 2023-01-11T22:10:29.2128307Z [ RUN ] ModulesTest.MaxPool1d_MaxUnpool1d 2023-01-11T22:10:29.2130587Z [ OK ] ModulesTest.MaxPool1d_MaxUnpool1d (0 ms) 2023-01-11T22:10:29.2130935Z [ RUN ] ModulesTest.MaxUnpool2d 2023-01-11T22:10:29.2133098Z [ OK ] ModulesTest.MaxUnpool2d (0 ms) 2023-01-11T22:10:29.2133477Z [ RUN ] ModulesTest.MaxPool2d_MaxUnpool2d 2023-01-11T22:10:29.2135520Z [ OK ] ModulesTest.MaxPool2d_MaxUnpool2d (0 ms) 2023-01-11T22:10:29.2135822Z [ RUN ] ModulesTest.MaxUnpool3d 2023-01-11T22:10:29.2137052Z [ OK ] ModulesTest.MaxUnpool3d (0 ms) 2023-01-11T22:10:29.2137390Z [ RUN ] ModulesTest.MaxUnpool3dOutputSize 2023-01-11T22:10:29.2140006Z [ OK ] ModulesTest.MaxUnpool3dOutputSize (0 ms) 2023-01-11T22:10:29.2140423Z [ RUN ] ModulesTest.MaxPool3d_MaxUnpool3d 2023-01-11T22:10:29.3906623Z [ OK ] ModulesTest.MaxPool3d_MaxUnpool3d (176 ms) 2023-01-11T22:10:29.3906955Z [ RUN ] ModulesTest.Linear 2023-01-11T22:10:29.3911531Z [ OK ] ModulesTest.Linear (0 ms) 2023-01-11T22:10:29.3911870Z [ RUN ] ModulesTest.LocalResponseNorm 2023-01-11T22:10:29.3916093Z [ OK ] ModulesTest.LocalResponseNorm (0 ms) 2023-01-11T22:10:29.3916395Z [ RUN ] ModulesTest.LayerNorm 2023-01-11T22:10:29.3918403Z [ OK ] ModulesTest.LayerNorm (0 ms) 2023-01-11T22:10:29.3918701Z [ RUN ] ModulesTest.GroupNorm 2023-01-11T22:10:29.3928272Z [ OK ] ModulesTest.GroupNorm (0 ms) 2023-01-11T22:10:29.3928616Z [ RUN ] ModulesTest.Bilinear 2023-01-11T22:10:29.3931735Z [ OK ] ModulesTest.Bilinear (0 ms) 
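The TensorImpl.h warning attached to ModulesTest.Unflatten above is emitted by the named-tensor variant of that test; named tensors are still flagged as experimental. For reference, a minimal sketch of the plain, index-based torch::nn::Unflatten usage (the shapes here are illustrative assumptions, not values taken from the test) could look like:

    #include <torch/torch.h>
    #include <iostream>

    int main() {
      // Split dimension 0 (size 4) into a {2, 2} pair of dimensions.
      torch::nn::Unflatten unflatten(torch::nn::UnflattenOptions(0, {2, 2}));
      auto y = unflatten(torch::randn({4, 5}));
      std::cout << y.sizes() << std::endl;  // prints [2, 2, 5]
    }

Presumably it is the named-shape overload of UnflattenOptions, not this index-based form, that touches the experimental named-tensor API and triggers the warning seen in the log.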
2023-01-11T22:10:29.3932220Z [ RUN ] ModulesTest.Fold 2023-01-11T22:10:29.3957536Z [ OK ] ModulesTest.Fold (2 ms) 2023-01-11T22:10:29.3958064Z [ RUN ] ModulesTest.Unfold 2023-01-11T22:10:29.4024369Z [ OK ] ModulesTest.Unfold (6 ms) 2023-01-11T22:10:29.4024906Z [ RUN ] ModulesTest.SimpleContainer 2023-01-11T22:10:29.4045154Z [ OK ] ModulesTest.SimpleContainer (2 ms) 2023-01-11T22:10:29.4045487Z [ RUN ] ModulesTest.EmbeddingBasic 2023-01-11T22:10:29.4047330Z [ OK ] ModulesTest.EmbeddingBasic (0 ms) 2023-01-11T22:10:29.4047699Z [ RUN ] ModulesTest.EmbeddingList 2023-01-11T22:10:29.4048793Z [ OK ] ModulesTest.EmbeddingList (0 ms) 2023-01-11T22:10:29.4049612Z [ RUN ] ModulesTest.EmbeddingFromPretrained 2023-01-11T22:10:29.4050251Z [ OK ] ModulesTest.EmbeddingFromPretrained (0 ms) 2023-01-11T22:10:29.4050629Z [ RUN ] ModulesTest.EmbeddingBagFromPretrained 2023-01-11T22:10:29.4052695Z [ OK ] ModulesTest.EmbeddingBagFromPretrained (0 ms) 2023-01-11T22:10:29.4053077Z [ RUN ] ModulesTest.AlphaDropout 2023-01-11T22:10:29.4054148Z [ OK ] ModulesTest.AlphaDropout (0 ms) 2023-01-11T22:10:29.4054505Z [ RUN ] ModulesTest.FeatureAlphaDropout 2023-01-11T22:10:29.4055807Z [ OK ] ModulesTest.FeatureAlphaDropout (0 ms) 2023-01-11T22:10:29.4056119Z [ RUN ] ModulesTest.Dropout 2023-01-11T22:10:29.4058112Z [ OK ] ModulesTest.Dropout (0 ms) 2023-01-11T22:10:29.4058644Z [ RUN ] ModulesTest.Dropout2d 2023-01-11T22:10:29.4063935Z [ OK ] ModulesTest.Dropout2d (0 ms) 2023-01-11T22:10:29.4064480Z [ RUN ] ModulesTest.Dropout3d 2023-01-11T22:10:29.4071876Z [ OK ] ModulesTest.Dropout3d (0 ms) 2023-01-11T22:10:29.4072296Z [ RUN ] ModulesTest.Parameters 2023-01-11T22:10:29.4072793Z [ OK ] ModulesTest.Parameters (0 ms) 2023-01-11T22:10:29.4073373Z [ RUN ] ModulesTest.FunctionalCallsSuppliedFunction 2023-01-11T22:10:29.4074025Z [ OK ] ModulesTest.FunctionalCallsSuppliedFunction (0 ms) 2023-01-11T22:10:29.4074420Z [ RUN ] ModulesTest.FunctionalWithTorchFunction 2023-01-11T22:10:29.4074818Z [ OK ] ModulesTest.FunctionalWithTorchFunction (0 ms) 2023-01-11T22:10:29.4075195Z [ RUN ] ModulesTest.FunctionalArgumentBinding 2023-01-11T22:10:29.4075554Z [ OK ] ModulesTest.FunctionalArgumentBinding (0 ms) 2023-01-11T22:10:29.4075902Z [ RUN ] ModulesTest.BatchNorm1dStateful 2023-01-11T22:10:29.4076238Z [ OK ] ModulesTest.BatchNorm1dStateful (0 ms) 2023-01-11T22:10:29.4076745Z [ RUN ] ModulesTest.BatchNorm1dStateless 2023-01-11T22:10:29.4077218Z [ OK ] ModulesTest.BatchNorm1dStateless (0 ms) 2023-01-11T22:10:29.4077581Z [ RUN ] ModulesTest.BatchNorm1d 2023-01-11T22:10:29.4077892Z [ OK ] ModulesTest.BatchNorm1d (0 ms) 2023-01-11T22:10:29.4078190Z [ RUN ] ModulesTest.BatchNorm2dStateful 2023-01-11T22:10:29.4078522Z [ OK ] ModulesTest.BatchNorm2dStateful (0 ms) 2023-01-11T22:10:29.4078852Z [ RUN ] ModulesTest.BatchNorm2dStateless 2023-01-11T22:10:29.4079178Z [ OK ] ModulesTest.BatchNorm2dStateless (0 ms) 2023-01-11T22:10:29.4079482Z [ RUN ] ModulesTest.BatchNorm2d 2023-01-11T22:10:29.4081345Z [ OK ] ModulesTest.BatchNorm2d (0 ms) 2023-01-11T22:10:29.4081921Z [ RUN ] ModulesTest.BatchNorm3dStateful 2023-01-11T22:10:29.4082391Z [ OK ] ModulesTest.BatchNorm3dStateful (0 ms) 2023-01-11T22:10:29.4082732Z [ RUN ] ModulesTest.BatchNorm3dStateless 2023-01-11T22:10:29.4083071Z [ OK ] ModulesTest.BatchNorm3dStateless (0 ms) 2023-01-11T22:10:29.4083367Z [ RUN ] ModulesTest.BatchNorm3d 2023-01-11T22:10:29.4086529Z [ OK ] ModulesTest.BatchNorm3d (0 ms) 2023-01-11T22:10:29.4087060Z [ RUN ] ModulesTest.InstanceNorm1dStateful 2023-01-11T22:10:29.4087430Z [ 
OK ] ModulesTest.InstanceNorm1dStateful (0 ms) 2023-01-11T22:10:29.4087820Z [ RUN ] ModulesTest.InstanceNorm1dStateless 2023-01-11T22:10:29.4088181Z [ OK ] ModulesTest.InstanceNorm1dStateless (0 ms) 2023-01-11T22:10:29.4088506Z [ RUN ] ModulesTest.InstanceNorm1d 2023-01-11T22:10:29.4088986Z [ OK ] ModulesTest.InstanceNorm1d (0 ms) 2023-01-11T22:10:29.4089552Z [ RUN ] ModulesTest.InstanceNorm2dStateful 2023-01-11T22:10:29.4089908Z [ OK ] ModulesTest.InstanceNorm2dStateful (0 ms) 2023-01-11T22:10:29.4090260Z [ RUN ] ModulesTest.InstanceNorm2dStateless 2023-01-11T22:10:29.4090614Z [ OK ] ModulesTest.InstanceNorm2dStateless (0 ms) 2023-01-11T22:10:29.4090983Z [ RUN ] ModulesTest.InstanceNorm2d 2023-01-11T22:10:29.4091937Z [ OK ] ModulesTest.InstanceNorm2d (0 ms) 2023-01-11T22:10:29.4092532Z [ RUN ] ModulesTest.InstanceNorm3dStateful 2023-01-11T22:10:29.4092933Z [ OK ] ModulesTest.InstanceNorm3dStateful (0 ms) 2023-01-11T22:10:29.4093286Z [ RUN ] ModulesTest.InstanceNorm3dStateless 2023-01-11T22:10:29.4093627Z [ OK ] ModulesTest.InstanceNorm3dStateless (0 ms) 2023-01-11T22:10:29.4093953Z [ RUN ] ModulesTest.InstanceNorm3d 2023-01-11T22:10:29.4096453Z [ OK ] ModulesTest.InstanceNorm3d (0 ms) 2023-01-11T22:10:29.4096985Z [ RUN ] ModulesTest.L1Loss 2023-01-11T22:10:29.4097727Z [ OK ] ModulesTest.L1Loss (0 ms) 2023-01-11T22:10:29.4098144Z [ RUN ] ModulesTest.MSELoss 2023-01-11T22:10:29.4099027Z [ OK ] ModulesTest.MSELoss (0 ms) 2023-01-11T22:10:29.4099368Z [ RUN ] ModulesTest.BCELoss 2023-01-11T22:10:29.4100388Z [ OK ] ModulesTest.BCELoss (0 ms) 2023-01-11T22:10:29.4100728Z [ RUN ] ModulesTest.KLDivLoss 2023-01-11T22:10:29.4101504Z [W loss.h:57] Warning: reduction: 'mean' divides the total loss by both the batch size and the support size.'batchmean' divides only by the batch size, and aligns with the KL div math definition.'mean' will be changed to behave the same as 'batchmean' in the next major release. 
(function kl_div) 2023-01-11T22:10:29.4102239Z [ OK ] ModulesTest.KLDivLoss (0 ms) 2023-01-11T22:10:29.4102708Z [ RUN ] ModulesTest.HingeEmbeddingLoss 2023-01-11T22:10:29.4104912Z [ OK ] ModulesTest.HingeEmbeddingLoss (0 ms) 2023-01-11T22:10:29.4105386Z [ RUN ] ModulesTest.MultiMarginLoss 2023-01-11T22:10:29.4106792Z [ OK ] ModulesTest.MultiMarginLoss (0 ms) 2023-01-11T22:10:29.4107386Z [ RUN ] ModulesTest.CosineEmbeddingLoss 2023-01-11T22:10:29.4111143Z [ OK ] ModulesTest.CosineEmbeddingLoss (0 ms) 2023-01-11T22:10:29.4111538Z [ RUN ] ModulesTest.SmoothL1LossDefaultOptions 2023-01-11T22:10:29.4112651Z [ OK ] ModulesTest.SmoothL1LossDefaultOptions (0 ms) 2023-01-11T22:10:29.4113021Z [ RUN ] ModulesTest.HuberLossDefaultOptions 2023-01-11T22:10:29.4114518Z [ OK ] ModulesTest.HuberLossDefaultOptions (0 ms) 2023-01-11T22:10:29.4115015Z [ RUN ] ModulesTest.MultiLabelMarginLossDefaultOptions 2023-01-11T22:10:29.4116021Z [ OK ] ModulesTest.MultiLabelMarginLossDefaultOptions (0 ms) 2023-01-11T22:10:29.4116491Z [ RUN ] ModulesTest.SmoothL1LossNoReduction 2023-01-11T22:10:29.4117382Z [ OK ] ModulesTest.SmoothL1LossNoReduction (0 ms) 2023-01-11T22:10:29.4117732Z [ RUN ] ModulesTest.HuberLossNoReduction 2023-01-11T22:10:29.4118831Z [ OK ] ModulesTest.HuberLossNoReduction (0 ms) 2023-01-11T22:10:29.4119234Z [ RUN ] ModulesTest.MultiLabelMarginLossNoReduction 2023-01-11T22:10:29.4120495Z [ OK ] ModulesTest.MultiLabelMarginLossNoReduction (0 ms) 2023-01-11T22:10:29.4120911Z [ RUN ] ModulesTest.SmoothL1LossBeta 2023-01-11T22:10:29.4122218Z [ OK ] ModulesTest.SmoothL1LossBeta (0 ms) 2023-01-11T22:10:29.4122606Z [ RUN ] ModulesTest.HuberLossDelta 2023-01-11T22:10:29.4123506Z [ OK ] ModulesTest.HuberLossDelta (0 ms) 2023-01-11T22:10:29.4123918Z [ RUN ] ModulesTest.TripletMarginLoss 2023-01-11T22:10:29.4126589Z [ OK ] ModulesTest.TripletMarginLoss (0 ms) 2023-01-11T22:10:29.4126988Z [ RUN ] ModulesTest.TripletMarginWithDistanceLossDefaultParity 2023-01-11T22:10:29.4368714Z [ OK ] ModulesTest.TripletMarginWithDistanceLossDefaultParity (24 ms) 2023-01-11T22:10:29.4369227Z [ RUN ] ModulesTest.TripletMarginWithDistanceLossFunctionalParity 2023-01-11T22:10:29.4866425Z [ OK ] ModulesTest.TripletMarginWithDistanceLossFunctionalParity (49 ms) 2023-01-11T22:10:29.4866835Z [ RUN ] ModulesTest.NLLLoss 2023-01-11T22:10:29.4868430Z [ OK ] ModulesTest.NLLLoss (0 ms) 2023-01-11T22:10:29.4868797Z [ RUN ] ModulesTest.CrossEntropyLoss 2023-01-11T22:10:29.4875338Z [ OK ] ModulesTest.CrossEntropyLoss (0 ms) 2023-01-11T22:10:29.4875674Z [ RUN ] ModulesTest.CosineSimilarity 2023-01-11T22:10:29.4878898Z [ OK ] ModulesTest.CosineSimilarity (0 ms) 2023-01-11T22:10:29.4879308Z [ RUN ] ModulesTest.SoftMarginLossDefaultOptions 2023-01-11T22:10:29.4881372Z [ OK ] ModulesTest.SoftMarginLossDefaultOptions (0 ms) 2023-01-11T22:10:29.4882086Z [ RUN ] ModulesTest.MultiLabelSoftMarginLossDefaultOptions 2023-01-11T22:10:29.4884719Z [ OK ] ModulesTest.MultiLabelSoftMarginLossDefaultOptions (0 ms) 2023-01-11T22:10:29.4885228Z [ RUN ] ModulesTest.SoftMarginLossNoReduction 2023-01-11T22:10:29.4886715Z [ OK ] ModulesTest.SoftMarginLossNoReduction (0 ms) 2023-01-11T22:10:29.4887180Z [ RUN ] ModulesTest.MultiLabelSoftMarginLossWeightedNoReduction 2023-01-11T22:10:29.4890141Z [ OK ] ModulesTest.MultiLabelSoftMarginLossWeightedNoReduction (0 ms) 2023-01-11T22:10:29.4890568Z [ RUN ] ModulesTest.PairwiseDistance 2023-01-11T22:10:29.4892103Z [ OK ] ModulesTest.PairwiseDistance (0 ms) 2023-01-11T22:10:29.4892398Z [ RUN ] ModulesTest.ELU 
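The kl_div warning above notes that reduction 'mean' currently divides the total loss by both the batch size and the support size, while 'batchmean' divides by the batch size only and matches the mathematical definition of KL divergence. A minimal sketch of opting into 'batchmean' through the C++ frontend (the input and target shapes are illustrative assumptions):

    #include <torch/torch.h>
    #include <iostream>

    int main() {
      // 'batchmean' divides the summed loss by the batch size only.
      torch::nn::KLDivLoss loss(
          torch::nn::KLDivLossOptions().reduction(torch::kBatchMean));
      // KLDivLoss expects log-probabilities as input and probabilities as target.
      auto input = torch::log_softmax(torch::randn({3, 5}), /*dim=*/1);
      auto target = torch::softmax(torch::randn({3, 5}), /*dim=*/1);
      std::cout << loss(input, target) << std::endl;
    }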
2023-01-11T22:10:29.4905019Z [ OK ] ModulesTest.ELU (1 ms) 2023-01-11T22:10:29.4905360Z [ RUN ] ModulesTest.SELU 2023-01-11T22:10:29.4907993Z [ OK ] ModulesTest.SELU (0 ms) 2023-01-11T22:10:29.4908341Z [ RUN ] ModulesTest.Hardshrink 2023-01-11T22:10:29.4915281Z [ OK ] ModulesTest.Hardshrink (0 ms) 2023-01-11T22:10:29.4915638Z [ RUN ] ModulesTest.Hardtanh 2023-01-11T22:10:29.4940698Z [ OK ] ModulesTest.Hardtanh (2 ms) 2023-01-11T22:10:29.4941057Z [ RUN ] ModulesTest.HardtanhMinValGEMaxVal 2023-01-11T22:10:29.5018610Z [ OK ] ModulesTest.HardtanhMinValGEMaxVal (7 ms) 2023-01-11T22:10:29.5018954Z [ RUN ] ModulesTest.LeakyReLU 2023-01-11T22:10:29.5030348Z [ OK ] ModulesTest.LeakyReLU (1 ms) 2023-01-11T22:10:29.5030712Z [ RUN ] ModulesTest.LogSigmoid 2023-01-11T22:10:29.5032202Z [ OK ] ModulesTest.LogSigmoid (0 ms) 2023-01-11T22:10:29.5032519Z [ RUN ] ModulesTest.Softmax 2023-01-11T22:10:29.5033792Z [ OK ] ModulesTest.Softmax (0 ms) 2023-01-11T22:10:29.5034082Z [ RUN ] ModulesTest.Softmin 2023-01-11T22:10:29.5035269Z [ OK ] ModulesTest.Softmin (0 ms) 2023-01-11T22:10:29.5035576Z [ RUN ] ModulesTest.LogSoftmax 2023-01-11T22:10:29.5036644Z [ OK ] ModulesTest.LogSoftmax (0 ms) 2023-01-11T22:10:29.5037000Z [ RUN ] ModulesTest.AdaptiveLogSoftmaxWithLoss 2023-01-11T22:10:29.5063050Z [ OK ] ModulesTest.AdaptiveLogSoftmaxWithLoss (2 ms) 2023-01-11T22:10:29.5063414Z [ RUN ] ModulesTest.Softmax2d 2023-01-11T22:10:29.5073917Z [ OK ] ModulesTest.Softmax2d (1 ms) 2023-01-11T22:10:29.5074284Z [ RUN ] ModulesTest.PReLU 2023-01-11T22:10:29.5079057Z [ OK ] ModulesTest.PReLU (0 ms) 2023-01-11T22:10:29.5079369Z [ RUN ] ModulesTest.ReLU 2023-01-11T22:10:29.5081968Z [ OK ] ModulesTest.ReLU (0 ms) 2023-01-11T22:10:29.5082239Z [ RUN ] ModulesTest.ReLU6 2023-01-11T22:10:29.5084681Z [ OK ] ModulesTest.ReLU6 (0 ms) 2023-01-11T22:10:29.5084934Z [ RUN ] ModulesTest.RReLU 2023-01-11T22:10:29.5121049Z [ OK ] ModulesTest.RReLU (3 ms) 2023-01-11T22:10:29.5121334Z [ RUN ] ModulesTest.CELU 2023-01-11T22:10:29.5130948Z [ OK ] ModulesTest.CELU (1 ms) 2023-01-11T22:10:29.5131218Z [ RUN ] ModulesTest.GLU 2023-01-11T22:10:29.5133305Z [ OK ] ModulesTest.GLU (0 ms) 2023-01-11T22:10:29.5133552Z [ RUN ] ModulesTest.GELU 2023-01-11T22:10:29.5137049Z [ OK ] ModulesTest.GELU (0 ms) 2023-01-11T22:10:29.5137470Z [ RUN ] ModulesTest.TanhGELU 2023-01-11T22:10:29.5138303Z [ OK ] ModulesTest.TanhGELU (0 ms) 2023-01-11T22:10:29.5138629Z [ RUN ] ModulesTest.Mish 2023-01-11T22:10:29.5139831Z [ OK ] ModulesTest.Mish (0 ms) 2023-01-11T22:10:29.5140136Z [ RUN ] ModulesTest.Sigmoid 2023-01-11T22:10:29.5141139Z [ OK ] ModulesTest.Sigmoid (0 ms) 2023-01-11T22:10:29.5141696Z [ RUN ] ModulesTest.PixelShuffle 2023-01-11T22:10:29.5143451Z [ OK ] ModulesTest.PixelShuffle (0 ms) 2023-01-11T22:10:29.5144114Z [ RUN ] ModulesTest.PixelUnshuffle 2023-01-11T22:10:29.5145740Z [ OK ] ModulesTest.PixelUnshuffle (0 ms) 2023-01-11T22:10:29.5146057Z [ RUN ] ModulesTest.Softplus 2023-01-11T22:10:29.5152980Z [ OK ] ModulesTest.Softplus (0 ms) 2023-01-11T22:10:29.5153481Z [ RUN ] ModulesTest.Softshrink 2023-01-11T22:10:29.5159181Z [ OK ] ModulesTest.Softshrink (0 ms) 2023-01-11T22:10:29.5159755Z [ RUN ] ModulesTest.Softsign 2023-01-11T22:10:29.5160238Z [ OK ] ModulesTest.Softsign (0 ms) 2023-01-11T22:10:29.5160710Z [ RUN ] ModulesTest.Tanh 2023-01-11T22:10:29.5161503Z [ OK ] ModulesTest.Tanh (0 ms) 2023-01-11T22:10:29.5162203Z [ RUN ] ModulesTest.Tanhshrink 2023-01-11T22:10:29.5162738Z [ OK ] ModulesTest.Tanhshrink (0 ms) 2023-01-11T22:10:29.5163082Z [ RUN ] 
ModulesTest.Threshold 2023-01-11T22:10:29.5174991Z [ OK ] ModulesTest.Threshold (1 ms) 2023-01-11T22:10:29.5175524Z [ RUN ] ModulesTest.Upsampling1D 2023-01-11T22:10:29.5177859Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:29.5179316Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:29.5181123Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:29.5182662Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:29.5184924Z [ OK ] ModulesTest.Upsampling1D (0 ms) 2023-01-11T22:10:29.5185487Z [ RUN ] ModulesTest.Upsampling2D 2023-01-11T22:10:29.5186924Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:29.5188254Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:29.5190116Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:29.5191435Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. 
If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:29.5193194Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:29.5194609Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:29.5197037Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:29.5198370Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:29.5200341Z [ OK ] ModulesTest.Upsampling2D (1 ms) 2023-01-11T22:10:29.5200659Z [ RUN ] ModulesTest.Upsampling3D 2023-01-11T22:10:29.5202297Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:29.5203505Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:29.5206526Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:29.5207929Z [W upsampling.h:66] Warning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and uses scale_factor directly, instead of relying on the computed output size. 
If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. (function _interp_output_size) 2023-01-11T22:10:29.5209821Z [ OK ] ModulesTest.Upsampling3D (0 ms) 2023-01-11T22:10:29.5210341Z [ RUN ] ModulesTest.CTCLoss 2023-01-11T22:10:29.5212986Z [ OK ] ModulesTest.CTCLoss (0 ms) 2023-01-11T22:10:29.5213302Z [ RUN ] ModulesTest.PoissonNLLLoss 2023-01-11T22:10:29.5215273Z [ OK ] ModulesTest.PoissonNLLLoss (0 ms) 2023-01-11T22:10:29.5215861Z [ RUN ] ModulesTest.MarginRankingLoss 2023-01-11T22:10:29.5218413Z [ OK ] ModulesTest.MarginRankingLoss (0 ms) 2023-01-11T22:10:29.5218976Z [ RUN ] ModulesTest.BCEWithLogitsLoss 2023-01-11T22:10:29.5278109Z [ OK ] ModulesTest.BCEWithLogitsLoss (5 ms) 2023-01-11T22:10:29.5278601Z [ RUN ] ModulesTest.MultiheadAttention 2023-01-11T22:10:38.8636718Z [ OK ] ModulesTest.MultiheadAttention (9335 ms) 2023-01-11T22:10:38.8637501Z [ RUN ] ModulesTest.PrettyPrintIdentity 2023-01-11T22:10:38.8637989Z [ OK ] ModulesTest.PrettyPrintIdentity (0 ms) 2023-01-11T22:10:38.8638508Z [ RUN ] ModulesTest.PrettyPrintFlatten 2023-01-11T22:10:38.8638853Z [ OK ] ModulesTest.PrettyPrintFlatten (0 ms) 2023-01-11T22:10:38.8639238Z [ RUN ] ModulesTest.PrettyPrintUnflatten 2023-01-11T22:10:38.8639630Z [ OK ] ModulesTest.PrettyPrintUnflatten (0 ms) 2023-01-11T22:10:38.8640054Z [ RUN ] ModulesTest.ReflectionPad1d 2023-01-11T22:10:38.8640492Z [ OK ] ModulesTest.ReflectionPad1d (0 ms) 2023-01-11T22:10:38.8640874Z [ RUN ] ModulesTest.ReflectionPad2d 2023-01-11T22:10:38.8641490Z [ OK ] ModulesTest.ReflectionPad2d (0 ms) 2023-01-11T22:10:38.8642053Z [ RUN ] ModulesTest.ReflectionPad3d 2023-01-11T22:10:38.8644922Z [ OK ] ModulesTest.ReflectionPad3d (0 ms) 2023-01-11T22:10:38.8645510Z [ RUN ] ModulesTest.ReplicationPad1d 2023-01-11T22:10:38.8646732Z [ OK ] ModulesTest.ReplicationPad1d (0 ms) 2023-01-11T22:10:38.8647306Z [ RUN ] ModulesTest.ReplicationPad2d 2023-01-11T22:10:38.8649678Z [ OK ] ModulesTest.ReplicationPad2d (0 ms) 2023-01-11T22:10:38.8650267Z [ RUN ] ModulesTest.ReplicationPad3d 2023-01-11T22:10:38.8654005Z [ OK ] ModulesTest.ReplicationPad3d (0 ms) 2023-01-11T22:10:38.8654294Z [ RUN ] ModulesTest.ZeroPad2d 2023-01-11T22:10:38.8656588Z [ OK ] ModulesTest.ZeroPad2d (0 ms) 2023-01-11T22:10:38.8656938Z [ RUN ] ModulesTest.ConstantPad1d 2023-01-11T22:10:38.8658769Z [ OK ] ModulesTest.ConstantPad1d (0 ms) 2023-01-11T22:10:38.8659069Z [ RUN ] ModulesTest.ConstantPad2d 2023-01-11T22:10:38.8661037Z [ OK ] ModulesTest.ConstantPad2d (0 ms) 2023-01-11T22:10:38.8661592Z [ RUN ] ModulesTest.ConstantPad3d 2023-01-11T22:10:38.8665673Z [ OK ] ModulesTest.ConstantPad3d (0 ms) 2023-01-11T22:10:38.8665983Z [ RUN ] ModulesTest.CrossMapLRN2d 2023-01-11T22:10:38.8672546Z [ OK ] ModulesTest.CrossMapLRN2d (0 ms) 2023-01-11T22:10:38.8672834Z [ RUN ] ModulesTest.RNNCell 2023-01-11T22:10:38.8675742Z [ OK ] ModulesTest.RNNCell (0 ms) 2023-01-11T22:10:38.8676196Z [ RUN ] ModulesTest.LSTMCell 2023-01-11T22:10:38.8680858Z [ OK ] ModulesTest.LSTMCell (0 ms) 2023-01-11T22:10:38.8681124Z [ RUN ] ModulesTest.GRUCell 2023-01-11T22:10:38.8685143Z [ OK ] ModulesTest.GRUCell (0 ms) 2023-01-11T22:10:38.8685811Z [ RUN ] ModulesTest.PrettyPrintLinear 2023-01-11T22:10:38.8686422Z [ OK ] ModulesTest.PrettyPrintLinear (0 ms) 2023-01-11T22:10:38.8686749Z [ RUN ] ModulesTest.PrettyPrintBilinear 2023-01-11T22:10:38.8687089Z [ OK ] ModulesTest.PrettyPrintBilinear (0 ms) 2023-01-11T22:10:38.8687546Z [ RUN ] ModulesTest.PrettyPrintConv 2023-01-11T22:10:38.8688028Z [ 
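The repeated upsampling.h warnings above describe the 1.6.0 change for interpolate/upsample with a float scale_factor and point at recompute_scale_factor=True as the way to keep the old behavior. A minimal sketch of passing that option through torch::nn::functional::interpolate (the input shape and scale factor are illustrative assumptions):

    #include <torch/torch.h>
    #include <iostream>

    namespace F = torch::nn::functional;

    int main() {
      auto x = torch::randn({1, 1, 4});
      // recompute_scale_factor(true) restores the pre-1.6.0 behavior the warning
      // refers to: the output size is computed from the scale factor up front.
      auto y = F::interpolate(
          x,
          F::InterpolateFuncOptions()
              .scale_factor(std::vector<double>{1.5})
              .mode(torch::kNearest)
              .recompute_scale_factor(true));
      std::cout << y.sizes() << std::endl;  // prints [1, 1, 6]
    }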
OK ] ModulesTest.PrettyPrintConv (0 ms) 2023-01-11T22:10:38.8688367Z [ RUN ] ModulesTest.PrettyPrintConvTranspose 2023-01-11T22:10:38.8690701Z [ OK ] ModulesTest.PrettyPrintConvTranspose (0 ms) 2023-01-11T22:10:38.8691217Z [ RUN ] ModulesTest.PrettyPrintUpsample 2023-01-11T22:10:38.8691792Z [ OK ] ModulesTest.PrettyPrintUpsample (0 ms) 2023-01-11T22:10:38.8692314Z [ RUN ] ModulesTest.PrettyPrintFold 2023-01-11T22:10:38.8692826Z [ OK ] ModulesTest.PrettyPrintFold (0 ms) 2023-01-11T22:10:38.8693442Z [ RUN ] ModulesTest.PrettyPrintUnfold 2023-01-11T22:10:38.8693909Z [ OK ] ModulesTest.PrettyPrintUnfold (0 ms) 2023-01-11T22:10:38.8694423Z [ RUN ] ModulesTest.PrettyPrintMaxPool 2023-01-11T22:10:38.8694825Z [ OK ] ModulesTest.PrettyPrintMaxPool (0 ms) 2023-01-11T22:10:38.8695278Z [ RUN ] ModulesTest.PrettyPrintAvgPool 2023-01-11T22:10:38.8695613Z [ OK ] ModulesTest.PrettyPrintAvgPool (0 ms) 2023-01-11T22:10:38.8695954Z [ RUN ] ModulesTest.PrettyPrinFractionalMaxPool 2023-01-11T22:10:38.8696359Z [ OK ] ModulesTest.PrettyPrinFractionalMaxPool (0 ms) 2023-01-11T22:10:38.8696808Z [ RUN ] ModulesTest.PrettyPrintLPPool 2023-01-11T22:10:38.8697349Z [ OK ] ModulesTest.PrettyPrintLPPool (0 ms) 2023-01-11T22:10:38.8697893Z [ RUN ] ModulesTest.PrettyPrintAdaptiveMaxPool 2023-01-11T22:10:38.8698387Z [ OK ] ModulesTest.PrettyPrintAdaptiveMaxPool (0 ms) 2023-01-11T22:10:38.8698864Z [ RUN ] ModulesTest.PrettyPrintAdaptiveAvgPool 2023-01-11T22:10:38.8699397Z [ OK ] ModulesTest.PrettyPrintAdaptiveAvgPool (0 ms) 2023-01-11T22:10:38.8699933Z [ RUN ] ModulesTest.PrettyPrintMaxUnpool 2023-01-11T22:10:38.8700429Z [ OK ] ModulesTest.PrettyPrintMaxUnpool (0 ms) 2023-01-11T22:10:38.8700960Z [ RUN ] ModulesTest.PrettyPrintDropout 2023-01-11T22:10:38.8701468Z [ OK ] ModulesTest.PrettyPrintDropout (0 ms) 2023-01-11T22:10:38.8701965Z [ RUN ] ModulesTest.PrettyPrintDropout2d 2023-01-11T22:10:38.8702617Z [ OK ] ModulesTest.PrettyPrintDropout2d (0 ms) 2023-01-11T22:10:38.8703149Z [ RUN ] ModulesTest.PrettyPrintDropout3d 2023-01-11T22:10:38.8703705Z [ OK ] ModulesTest.PrettyPrintDropout3d (0 ms) 2023-01-11T22:10:38.8704299Z [ RUN ] ModulesTest.PrettyPrintFunctional 2023-01-11T22:10:38.8704872Z [ OK ] ModulesTest.PrettyPrintFunctional (0 ms) 2023-01-11T22:10:38.8705401Z [ RUN ] ModulesTest.PrettyPrintBatchNorm1d 2023-01-11T22:10:38.8705922Z [ OK ] ModulesTest.PrettyPrintBatchNorm1d (0 ms) 2023-01-11T22:10:38.8706565Z [ RUN ] ModulesTest.PrettyPrintBatchNorm2d 2023-01-11T22:10:38.8706926Z [ OK ] ModulesTest.PrettyPrintBatchNorm2d (0 ms) 2023-01-11T22:10:38.8707315Z [ RUN ] ModulesTest.PrettyPrintBatchNorm3d 2023-01-11T22:10:38.8707852Z [ OK ] ModulesTest.PrettyPrintBatchNorm3d (0 ms) 2023-01-11T22:10:38.8708326Z [ RUN ] ModulesTest.PrettyPrintInstanceNorm1d 2023-01-11T22:10:38.8708865Z [ OK ] ModulesTest.PrettyPrintInstanceNorm1d (0 ms) 2023-01-11T22:10:38.8709412Z [ RUN ] ModulesTest.PrettyPrintInstanceNorm2d 2023-01-11T22:10:38.8710022Z [ OK ] ModulesTest.PrettyPrintInstanceNorm2d (0 ms) 2023-01-11T22:10:38.8710477Z [ RUN ] ModulesTest.PrettyPrintInstanceNorm3d 2023-01-11T22:10:38.8710994Z [ OK ] ModulesTest.PrettyPrintInstanceNorm3d (0 ms) 2023-01-11T22:10:38.8711477Z [ RUN ] ModulesTest.PrettyPrintLayerNorm 2023-01-11T22:10:38.8711863Z [ OK ] ModulesTest.PrettyPrintLayerNorm (0 ms) 2023-01-11T22:10:38.8712202Z [ RUN ] ModulesTest.PrettyPrintGroupNorm 2023-01-11T22:10:38.8712624Z [ OK ] ModulesTest.PrettyPrintGroupNorm (0 ms) 2023-01-11T22:10:38.8713096Z [ RUN ] ModulesTest.PrettyPrintLocalResponseNorm 
2023-01-11T22:10:38.8713472Z [ OK ] ModulesTest.PrettyPrintLocalResponseNorm (0 ms) 2023-01-11T22:10:38.8713830Z [ RUN ] ModulesTest.PrettyPrintEmbedding 2023-01-11T22:10:38.8714245Z [ OK ] ModulesTest.PrettyPrintEmbedding (0 ms) 2023-01-11T22:10:38.8714579Z [ RUN ] ModulesTest.PrettyPrintEmbeddingBag 2023-01-11T22:10:38.8715024Z [ OK ] ModulesTest.PrettyPrintEmbeddingBag (0 ms) 2023-01-11T22:10:38.8715501Z [ RUN ] ModulesTest.PrettyPrintL1Loss 2023-01-11T22:10:38.8715971Z [ OK ] ModulesTest.PrettyPrintL1Loss (0 ms) 2023-01-11T22:10:38.8716391Z [ RUN ] ModulesTest.PrettyPrintKLDivLoss 2023-01-11T22:10:38.8716872Z [ OK ] ModulesTest.PrettyPrintKLDivLoss (0 ms) 2023-01-11T22:10:38.8717243Z [ RUN ] ModulesTest.PrettyPrintMSELoss 2023-01-11T22:10:38.8717586Z [ OK ] ModulesTest.PrettyPrintMSELoss (0 ms) 2023-01-11T22:10:38.8717912Z [ RUN ] ModulesTest.PrettyPrintBCELoss 2023-01-11T22:10:38.8718244Z [ OK ] ModulesTest.PrettyPrintBCELoss (0 ms) 2023-01-11T22:10:38.8718588Z [ RUN ] ModulesTest.PrettyPrintHingeEmbeddingLoss 2023-01-11T22:10:38.8718983Z [ OK ] ModulesTest.PrettyPrintHingeEmbeddingLoss (0 ms) 2023-01-11T22:10:38.8719376Z [ RUN ] ModulesTest.PrettyPrintCosineEmbeddingLoss 2023-01-11T22:10:38.8719848Z [ OK ] ModulesTest.PrettyPrintCosineEmbeddingLoss (0 ms) 2023-01-11T22:10:38.8720239Z [ RUN ] ModulesTest.PrettyPrintTripletMarginLoss 2023-01-11T22:10:38.8720627Z [ OK ] ModulesTest.PrettyPrintTripletMarginLoss (0 ms) 2023-01-11T22:10:38.8721050Z [ RUN ] ModulesTest.PrettyPrintTripletMarginWithDistanceLoss 2023-01-11T22:10:38.8721504Z [ OK ] ModulesTest.PrettyPrintTripletMarginWithDistanceLoss (0 ms) 2023-01-11T22:10:38.8721892Z [ RUN ] ModulesTest.PrettyPrintNLLLoss 2023-01-11T22:10:38.8722222Z [ OK ] ModulesTest.PrettyPrintNLLLoss (0 ms) 2023-01-11T22:10:38.8722559Z [ RUN ] ModulesTest.PrettyPrinCrossEntropyLoss 2023-01-11T22:10:38.8722935Z [ OK ] ModulesTest.PrettyPrinCrossEntropyLoss (0 ms) 2023-01-11T22:10:38.8723322Z [ RUN ] ModulesTest.PrettyPrintMultiLabelMarginLoss 2023-01-11T22:10:38.8723716Z [ OK ] ModulesTest.PrettyPrintMultiLabelMarginLoss (0 ms) 2023-01-11T22:10:38.8724133Z [ RUN ] ModulesTest.PrettyPrintMultiLabelSoftMarginLoss 2023-01-11T22:10:38.8724563Z [ OK ] ModulesTest.PrettyPrintMultiLabelSoftMarginLoss (0 ms) 2023-01-11T22:10:38.8724955Z [ RUN ] ModulesTest.PrettyPrintSoftMarginLoss 2023-01-11T22:10:38.8725312Z [ OK ] ModulesTest.PrettyPrintSoftMarginLoss (0 ms) 2023-01-11T22:10:38.8725682Z [ RUN ] ModulesTest.PrettyPrintCosineSimilarity 2023-01-11T22:10:38.8726062Z [ OK ] ModulesTest.PrettyPrintCosineSimilarity (0 ms) 2023-01-11T22:10:38.8726424Z [ RUN ] ModulesTest.PrettyPrintPairwiseDistance 2023-01-11T22:10:38.8726802Z [ OK ] ModulesTest.PrettyPrintPairwiseDistance (0 ms) 2023-01-11T22:10:38.8727241Z [ RUN ] ModulesTest.PrettyPrintReflectionPad 2023-01-11T22:10:38.8727609Z [ OK ] ModulesTest.PrettyPrintReflectionPad (0 ms) 2023-01-11T22:10:38.8728032Z [ RUN ] ModulesTest.PrettyPrintReplicationPad 2023-01-11T22:10:38.8728404Z [ OK ] ModulesTest.PrettyPrintReplicationPad (0 ms) 2023-01-11T22:10:38.8728758Z [ RUN ] ModulesTest.PrettyPrintZeroPad2d 2023-01-11T22:10:38.8729100Z [ OK ] ModulesTest.PrettyPrintZeroPad2d (0 ms) 2023-01-11T22:10:38.8729488Z [ RUN ] ModulesTest.PrettyPrintConstantPad 2023-01-11T22:10:38.8729913Z [ OK ] ModulesTest.PrettyPrintConstantPad (0 ms) 2023-01-11T22:10:38.8730251Z [ RUN ] ModulesTest.PrettyPrintNestedModel 2023-01-11T22:10:38.8730603Z [ OK ] ModulesTest.PrettyPrintNestedModel (0 ms) 2023-01-11T22:10:38.8730924Z [ RUN ] 
ModulesTest.PrettyPrintELU 2023-01-11T22:10:38.8731236Z [ OK ] ModulesTest.PrettyPrintELU (0 ms) 2023-01-11T22:10:38.8731532Z [ RUN ] ModulesTest.PrettyPrintSELU 2023-01-11T22:10:38.8731850Z [ OK ] ModulesTest.PrettyPrintSELU (0 ms) 2023-01-11T22:10:38.8732205Z [ RUN ] ModulesTest.PrettyPrintGLU 2023-01-11T22:10:38.8732505Z [ OK ] ModulesTest.PrettyPrintGLU (0 ms) 2023-01-11T22:10:38.8732829Z [ RUN ] ModulesTest.PrettyPrintHardshrink 2023-01-11T22:10:38.8733174Z [ OK ] ModulesTest.PrettyPrintHardshrink (0 ms) 2023-01-11T22:10:38.8733496Z [ RUN ] ModulesTest.PrettyPrintHardtanh 2023-01-11T22:10:38.8733835Z [ OK ] ModulesTest.PrettyPrintHardtanh (0 ms) 2023-01-11T22:10:38.8734166Z [ RUN ] ModulesTest.PrettyPrintLeakyReLU 2023-01-11T22:10:38.8737100Z [ OK ] ModulesTest.PrettyPrintLeakyReLU (0 ms) 2023-01-11T22:10:38.8737649Z [ RUN ] ModulesTest.PrettyPrintLogSigmoid 2023-01-11T22:10:38.8738251Z [ OK ] ModulesTest.PrettyPrintLogSigmoid (0 ms) 2023-01-11T22:10:38.8738852Z [ RUN ] ModulesTest.PrettyPrintSoftmax 2023-01-11T22:10:38.8739458Z [ OK ] ModulesTest.PrettyPrintSoftmax (0 ms) 2023-01-11T22:10:38.8740060Z [ RUN ] ModulesTest.PrettyPrintSoftmin 2023-01-11T22:10:38.8740665Z [ OK ] ModulesTest.PrettyPrintSoftmin (0 ms) 2023-01-11T22:10:38.8741251Z [ RUN ] ModulesTest.PrettyPrintLogSoftmax 2023-01-11T22:10:38.8741836Z [ OK ] ModulesTest.PrettyPrintLogSoftmax (0 ms) 2023-01-11T22:10:38.8742568Z [ RUN ] ModulesTest.PrettyPrintSoftmax2d 2023-01-11T22:10:38.8742931Z [ OK ] ModulesTest.PrettyPrintSoftmax2d (0 ms) 2023-01-11T22:10:38.8743395Z [ RUN ] ModulesTest.PrettyPrintPReLU 2023-01-11T22:10:38.8743920Z [ OK ] ModulesTest.PrettyPrintPReLU (0 ms) 2023-01-11T22:10:38.8744460Z [ RUN ] ModulesTest.PrettyPrintReLU 2023-01-11T22:10:38.8744960Z [ OK ] ModulesTest.PrettyPrintReLU (0 ms) 2023-01-11T22:10:38.8745428Z [ RUN ] ModulesTest.PrettyPrintReLU6 2023-01-11T22:10:38.8745954Z [ OK ] ModulesTest.PrettyPrintReLU6 (0 ms) 2023-01-11T22:10:38.8746489Z [ RUN ] ModulesTest.PrettyPrintRReLU 2023-01-11T22:10:38.8747017Z [ OK ] ModulesTest.PrettyPrintRReLU (0 ms) 2023-01-11T22:10:38.8747416Z [ RUN ] ModulesTest.PrettyPrintCELU 2023-01-11T22:10:38.8747833Z [ OK ] ModulesTest.PrettyPrintCELU (0 ms) 2023-01-11T22:10:38.8748144Z [ RUN ] ModulesTest.PrettyPrintSigmoid 2023-01-11T22:10:38.8748477Z [ OK ] ModulesTest.PrettyPrintSigmoid (0 ms) 2023-01-11T22:10:38.8748818Z [ RUN ] ModulesTest.PrettyPrintPixelShuffle 2023-01-11T22:10:38.8749165Z [ OK ] ModulesTest.PrettyPrintPixelShuffle (0 ms) 2023-01-11T22:10:38.8749527Z [ RUN ] ModulesTest.PrettyPrintPixelUnshuffle 2023-01-11T22:10:38.8749896Z [ OK ] ModulesTest.PrettyPrintPixelUnshuffle (0 ms) 2023-01-11T22:10:38.8750232Z [ RUN ] ModulesTest.PrettyPrintSoftplus 2023-01-11T22:10:38.8750685Z [ OK ] ModulesTest.PrettyPrintSoftplus (0 ms) 2023-01-11T22:10:38.8751023Z [ RUN ] ModulesTest.PrettyPrintSoftshrink 2023-01-11T22:10:38.8751372Z [ OK ] ModulesTest.PrettyPrintSoftshrink (0 ms) 2023-01-11T22:10:38.8751693Z [ RUN ] ModulesTest.PrettyPrintSoftsign 2023-01-11T22:10:38.8752031Z [ OK ] ModulesTest.PrettyPrintSoftsign (0 ms) 2023-01-11T22:10:38.8752348Z [ RUN ] ModulesTest.PrettyPrintTanh 2023-01-11T22:10:38.8752694Z [ OK ] ModulesTest.PrettyPrintTanh (0 ms) 2023-01-11T22:10:38.8753010Z [ RUN ] ModulesTest.PrettyPrintTanhshrink 2023-01-11T22:10:38.8753358Z [ OK ] ModulesTest.PrettyPrintTanhshrink (0 ms) 2023-01-11T22:10:38.8753694Z [ RUN ] ModulesTest.PrettyPrintThreshold 2023-01-11T22:10:38.8754025Z [ OK ] ModulesTest.PrettyPrintThreshold (0 ms) 
2023-01-11T22:10:38.8754356Z [ RUN ] ModulesTest.PrettyPrintCTCLoss 2023-01-11T22:10:38.8754744Z [ OK ] ModulesTest.PrettyPrintCTCLoss (0 ms) 2023-01-11T22:10:38.8755082Z [ RUN ] ModulesTest.PrettyPrintPoissonNLLLoss 2023-01-11T22:10:38.8755452Z [ OK ] ModulesTest.PrettyPrintPoissonNLLLoss (0 ms) 2023-01-11T22:10:38.8755828Z [ RUN ] ModulesTest.PrettyPrintMarginRankingLoss 2023-01-11T22:10:38.8756218Z [ OK ] ModulesTest.PrettyPrintMarginRankingLoss (0 ms) 2023-01-11T22:10:38.8756579Z [ RUN ] ModulesTest.PrettyPrintCrossMapLRN2d 2023-01-11T22:10:38.8756948Z [ OK ] ModulesTest.PrettyPrintCrossMapLRN2d (0 ms) 2023-01-11T22:10:38.8757303Z [ RUN ] ModulesTest.PrettyPrintAlphaDropout 2023-01-11T22:10:38.8757649Z [ OK ] ModulesTest.PrettyPrintAlphaDropout (0 ms) 2023-01-11T22:10:38.8758029Z [ RUN ] ModulesTest.PrettyPrintFeatureAlphaDropout 2023-01-11T22:10:38.8758430Z [ OK ] ModulesTest.PrettyPrintFeatureAlphaDropout (0 ms) 2023-01-11T22:10:38.8758813Z [ RUN ] ModulesTest.PrettyPrintBCEWithLogitsLoss 2023-01-11T22:10:38.8759198Z [ OK ] ModulesTest.PrettyPrintBCEWithLogitsLoss (0 ms) 2023-01-11T22:10:38.8759649Z [ RUN ] ModulesTest.PrettyPrintMultiheadAttention 2023-01-11T22:10:38.8760047Z [ OK ] ModulesTest.PrettyPrintMultiheadAttention (0 ms) 2023-01-11T22:10:38.8760389Z [ RUN ] ModulesTest.PrettyPrintRNNCell 2023-01-11T22:10:38.8760721Z [ OK ] ModulesTest.PrettyPrintRNNCell (0 ms) 2023-01-11T22:10:38.8761048Z [ RUN ] ModulesTest.PrettyPrintLSTMCell 2023-01-11T22:10:38.8761376Z [ OK ] ModulesTest.PrettyPrintLSTMCell (0 ms) 2023-01-11T22:10:38.8761701Z [ RUN ] ModulesTest.PrettyPrintGRUCell 2023-01-11T22:10:38.8762032Z [ OK ] ModulesTest.PrettyPrintGRUCell (0 ms) 2023-01-11T22:10:38.8762407Z [ RUN ] ModulesTest.PrettyPrintAdaptiveLogSoftmaxWithLoss 2023-01-11T22:10:38.8762860Z [ OK ] ModulesTest.PrettyPrintAdaptiveLogSoftmaxWithLoss (0 ms) 2023-01-11T22:10:38.8763258Z [----------] 256 tests from ModulesTest (9684 ms total) 2023-01-11T22:10:38.8763414Z 2023-01-11T22:10:38.8763557Z [----------] 1 test from NestedTest 2023-01-11T22:10:38.8763804Z [ RUN ] NestedTest.Nested 2023-01-11T22:10:38.8764158Z [W NestedTensorImpl.cpp:179] Warning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. 
(function operator()) 2023-01-11T22:10:38.8764552Z [ OK ] NestedTest.Nested (0 ms) 2023-01-11T22:10:38.8764835Z [----------] 1 test from NestedTest (0 ms total) 2023-01-11T22:10:38.8764981Z 2023-01-11T22:10:38.8765141Z [----------] 10 tests from ParameterDictTest 2023-01-11T22:10:38.8765470Z [ RUN ] ParameterDictTest.ConstructFromTensor 2023-01-11T22:10:38.8765835Z [ OK ] ParameterDictTest.ConstructFromTensor (0 ms) 2023-01-11T22:10:38.8766261Z [ RUN ] ParameterDictTest.ConstructFromOrderedDict 2023-01-11T22:10:38.8766661Z [ OK ] ParameterDictTest.ConstructFromOrderedDict (0 ms) 2023-01-11T22:10:38.8767029Z [ RUN ] ParameterDictTest.InsertAndContains 2023-01-11T22:10:38.8767371Z [ OK ] ParameterDictTest.InsertAndContains (0 ms) 2023-01-11T22:10:38.8767712Z [ RUN ] ParameterDictTest.InsertAndClear 2023-01-11T22:10:38.8768053Z [ OK ] ParameterDictTest.InsertAndClear (0 ms) 2023-01-11T22:10:38.8772362Z [ RUN ] ParameterDictTest.InsertAndPop 2023-01-11T22:10:38.8773148Z [ OK ] ParameterDictTest.InsertAndPop (1 ms) 2023-01-11T22:10:38.8773857Z [ RUN ] ParameterDictTest.SimpleUpdate 2023-01-11T22:10:38.8774568Z [ OK ] ParameterDictTest.SimpleUpdate (1 ms) 2023-01-11T22:10:38.8775191Z [ RUN ] ParameterDictTest.Keys 2023-01-11T22:10:38.8775950Z [ OK ] ParameterDictTest.Keys (0 ms) 2023-01-11T22:10:38.8776585Z [ RUN ] ParameterDictTest.Values 2023-01-11T22:10:38.8777204Z [ OK ] ParameterDictTest.Values (0 ms) 2023-01-11T22:10:38.8777809Z [ RUN ] ParameterDictTest.Get 2023-01-11T22:10:38.8778418Z [ OK ] ParameterDictTest.Get (0 ms) 2023-01-11T22:10:38.8779161Z [ RUN ] ParameterDictTest.PrettyPrintParameterDict 2023-01-11T22:10:38.8780011Z [ OK ] ParameterDictTest.PrettyPrintParameterDict (0 ms) 2023-01-11T22:10:38.8780830Z [----------] 10 tests from ParameterDictTest (2 ms total) 2023-01-11T22:10:38.8781175Z 2023-01-11T22:10:38.8781520Z [----------] 8 tests from ParameterListTest 2023-01-11T22:10:38.8782268Z [ RUN ] ParameterListTest.ConstructsFromSharedPointer 2023-01-11T22:10:38.8783259Z [ OK ] ParameterListTest.ConstructsFromSharedPointer (0 ms) 2023-01-11T22:10:38.8783760Z [ RUN ] ParameterListTest.isEmpty 2023-01-11T22:10:38.8784157Z [ OK ] ParameterListTest.isEmpty (0 ms) 2023-01-11T22:10:38.8784617Z [ RUN ] ParameterListTest.PushBackAddsAnElement 2023-01-11T22:10:38.8785068Z [ OK ] ParameterListTest.PushBackAddsAnElement (0 ms) 2023-01-11T22:10:38.8785573Z [ RUN ] ParameterListTest.ForEachLoop 2023-01-11T22:10:38.8785932Z [ OK ] ParameterListTest.ForEachLoop (0 ms) 2023-01-11T22:10:38.8786369Z [ RUN ] ParameterListTest.AccessWithAt 2023-01-11T22:10:38.8786921Z [ OK ] ParameterListTest.AccessWithAt (3 ms) 2023-01-11T22:10:38.8787613Z [ RUN ] ParameterListTest.ExtendPushesParametersFromOtherParameterList 2023-01-11T22:10:38.8788573Z [ OK ] ParameterListTest.ExtendPushesParametersFromOtherParameterList (0 ms) 2023-01-11T22:10:38.8789431Z [ RUN ] ParameterListTest.PrettyPrintParameterList 2023-01-11T22:10:38.8790120Z [ OK ] ParameterListTest.PrettyPrintParameterList (0 ms) 2023-01-11T22:10:38.8790474Z [ RUN ] ParameterListTest.IncrementAdd 2023-01-11T22:10:38.8790816Z [ OK ] ParameterListTest.IncrementAdd (0 ms) 2023-01-11T22:10:38.8791167Z [----------] 8 tests from ParameterListTest (4 ms total) 2023-01-11T22:10:38.8791334Z 2023-01-11T22:10:38.8791489Z [----------] 1 test from NamespaceTests 2023-01-11T22:10:38.8791874Z [ RUN ] NamespaceTests.NotLeakingSymbolsFromTorchAutogradNamespace 2023-01-11T22:10:38.8792385Z [ OK ] NamespaceTests.NotLeakingSymbolsFromTorchAutogradNamespace (0 ms) 
2023-01-11T22:10:38.8792801Z [----------] 1 test from NamespaceTests (0 ms total) 2023-01-11T22:10:38.8792954Z 2023-01-11T22:10:38.8793089Z [----------] 7 tests from NNUtilsTest 2023-01-11T22:10:38.8793371Z [ RUN ] NNUtilsTest.ClipGradNorm 2023-01-11T22:10:38.8803347Z [ OK ] NNUtilsTest.ClipGradNorm (1 ms) 2023-01-11T22:10:38.8803833Z [ RUN ] NNUtilsTest.ClipGradNormErrorIfNonfinite 2023-01-11T22:10:39.0594740Z [ OK ] NNUtilsTest.ClipGradNormErrorIfNonfinite (178 ms) 2023-01-11T22:10:39.0595213Z [ RUN ] NNUtilsTest.ClipGradValue 2023-01-11T22:10:39.0595714Z [ OK ] NNUtilsTest.ClipGradValue (0 ms) 2023-01-11T22:10:39.0596071Z [ RUN ] NNUtilsTest.ConvertParameters 2023-01-11T22:10:39.0599814Z [ OK ] NNUtilsTest.ConvertParameters (0 ms) 2023-01-11T22:10:39.0600232Z [ RUN ] NNUtilsTest.PackSequence 2023-01-11T22:10:39.0997288Z [ OK ] NNUtilsTest.PackSequence (39 ms) 2023-01-11T22:10:39.0997688Z [ RUN ] NNUtilsTest.PackPaddedSequence 2023-01-11T22:10:39.1155067Z [ OK ] NNUtilsTest.PackPaddedSequence (15 ms) 2023-01-11T22:10:39.1155666Z [ RUN ] NNUtilsTest.PadSequence 2023-01-11T22:10:39.1219040Z [ OK ] NNUtilsTest.PadSequence (6 ms) 2023-01-11T22:10:39.1220066Z [----------] 7 tests from NNUtilsTest (243 ms total) 2023-01-11T22:10:39.1220344Z 2023-01-11T22:10:39.1220651Z [----------] 3 tests from PackedSequenceTest 2023-01-11T22:10:39.1221179Z [ RUN ] PackedSequenceTest.WrongOrder 2023-01-11T22:10:39.1259243Z [ OK ] PackedSequenceTest.WrongOrder (4 ms) 2023-01-11T22:10:39.1259567Z [ RUN ] PackedSequenceTest.TotalLength 2023-01-11T22:10:39.1334633Z [ OK ] PackedSequenceTest.TotalLength (7 ms) 2023-01-11T22:10:39.1335026Z [ RUN ] PackedSequenceTest.To 2023-01-11T22:10:39.1336649Z [ OK ] PackedSequenceTest.To (0 ms) 2023-01-11T22:10:39.1337069Z [----------] 3 tests from PackedSequenceTest (11 ms total) 2023-01-11T22:10:39.1337269Z 2023-01-11T22:10:39.1337419Z [----------] 34 tests from OptimTest 2023-01-11T22:10:39.1337712Z [ RUN ] OptimTest.OptimizerAccessors 2023-01-11T22:10:39.1366034Z [ OK ] OptimTest.OptimizerAccessors (2 ms) 2023-01-11T22:10:39.1366606Z [ RUN ] OptimTest.OldInterface 2023-01-11T22:10:39.1367372Z [ OK ] OptimTest.OldInterface (0 ms) 2023-01-11T22:10:39.1367892Z [ RUN ] OptimTest.XORConvergence_SGD 2023-01-11T22:10:40.5780466Z [ OK ] OptimTest.XORConvergence_SGD (1441 ms) 2023-01-11T22:10:40.5780852Z [ RUN ] OptimTest.XORConvergence_LBFGS 2023-01-11T22:10:41.5842737Z [ OK ] OptimTest.XORConvergence_LBFGS (1006 ms) 2023-01-11T22:10:41.5843263Z [ RUN ] OptimTest.XORConvergence_Adagrad 2023-01-11T22:10:42.1359307Z [ OK ] OptimTest.XORConvergence_Adagrad (551 ms) 2023-01-11T22:10:42.1359859Z [ RUN ] OptimTest.XORConvergence_RMSprop 2023-01-11T22:10:42.6890219Z [ OK ] OptimTest.XORConvergence_RMSprop (552 ms) 2023-01-11T22:10:42.6890765Z [ RUN ] OptimTest.XORConvergence_RMSpropWithMomentum 2023-01-11T22:10:44.2402189Z [ OK ] OptimTest.XORConvergence_RMSpropWithMomentum (1551 ms) 2023-01-11T22:10:44.2402582Z [ RUN ] OptimTest.XORConvergence_Adam 2023-01-11T22:10:44.8488501Z [ OK ] OptimTest.XORConvergence_Adam (608 ms) 2023-01-11T22:10:44.8489101Z [ RUN ] OptimTest.XORConvergence_AdamWithAmsgrad 2023-01-11T22:10:45.4596139Z [ OK ] OptimTest.XORConvergence_AdamWithAmsgrad (610 ms) 2023-01-11T22:10:45.4596842Z [ RUN ] OptimTest.ProducesPyTorchValues_Adam 2023-01-11T22:10:45.6549689Z [ OK ] OptimTest.ProducesPyTorchValues_Adam (195 ms) 2023-01-11T22:10:45.6550158Z [ RUN ] OptimTest.ProducesPyTorchValues_AdamWithWeightDecay 2023-01-11T22:10:45.8557874Z [ OK ] 
OptimTest.ProducesPyTorchValues_AdamWithWeightDecay (200 ms) 2023-01-11T22:10:45.8558361Z [ RUN ] OptimTest.ProducesPyTorchValues_AdamWithWeightDecayAndAMSGrad 2023-01-11T22:10:46.0626586Z [ OK ] OptimTest.ProducesPyTorchValues_AdamWithWeightDecayAndAMSGrad (206 ms) 2023-01-11T22:10:46.0627310Z [ RUN ] OptimTest.XORConvergence_AdamW 2023-01-11T22:10:46.6760618Z [ OK ] OptimTest.XORConvergence_AdamW (613 ms) 2023-01-11T22:10:46.6761068Z [ RUN ] OptimTest.XORConvergence_AdamWWithAmsgrad 2023-01-11T22:10:47.2891173Z [ OK ] OptimTest.XORConvergence_AdamWWithAmsgrad (613 ms) 2023-01-11T22:10:47.2891609Z [ RUN ] OptimTest.ProducesPyTorchValues_AdamW 2023-01-11T22:10:47.4904158Z [ OK ] OptimTest.ProducesPyTorchValues_AdamW (201 ms) 2023-01-11T22:10:47.4904728Z [ RUN ] OptimTest.ProducesPyTorchValues_AdamWWithoutWeightDecay 2023-01-11T22:10:47.6861174Z [ OK ] OptimTest.ProducesPyTorchValues_AdamWWithoutWeightDecay (195 ms) 2023-01-11T22:10:47.6861630Z [ RUN ] OptimTest.ProducesPyTorchValues_AdamWWithAMSGrad 2023-01-11T22:10:47.8932307Z [ OK ] OptimTest.ProducesPyTorchValues_AdamWWithAMSGrad (207 ms) 2023-01-11T22:10:47.8932769Z [ RUN ] OptimTest.ProducesPyTorchValues_Adagrad 2023-01-11T22:10:48.0560780Z [ OK ] OptimTest.ProducesPyTorchValues_Adagrad (162 ms) 2023-01-11T22:10:48.0561295Z [ RUN ] OptimTest.ProducesPyTorchValues_AdagradWithWeightDecay 2023-01-11T22:10:48.2250395Z [ OK ] OptimTest.ProducesPyTorchValues_AdagradWithWeightDecay (168 ms) 2023-01-11T22:10:48.2250904Z [ RUN ] OptimTest.ProducesPyTorchValues_AdagradWithWeightDecayAndLRDecay 2023-01-11T22:10:48.3946090Z [ OK ] OptimTest.ProducesPyTorchValues_AdagradWithWeightDecayAndLRDecay (169 ms) 2023-01-11T22:10:48.3946551Z [ RUN ] OptimTest.ProducesPyTorchValues_RMSprop 2023-01-11T22:10:48.5655689Z [ OK ] OptimTest.ProducesPyTorchValues_RMSprop (170 ms) 2023-01-11T22:10:48.5656220Z [ RUN ] OptimTest.ProducesPyTorchValues_RMSpropWithWeightDecay 2023-01-11T22:10:48.7428774Z [ OK ] OptimTest.ProducesPyTorchValues_RMSpropWithWeightDecay (177 ms) 2023-01-11T22:10:48.7429338Z [ RUN ] OptimTest.ProducesPyTorchValues_RMSpropWithWeightDecayAndCentered 2023-01-11T22:10:48.9369405Z [ OK ] OptimTest.ProducesPyTorchValues_RMSpropWithWeightDecayAndCentered (194 ms) 2023-01-11T22:10:48.9369983Z [ RUN ] OptimTest.ProducesPyTorchValues_RMSpropWithWeightDecayAndCenteredAndMomentum 2023-01-11T22:10:49.1402109Z [ OK ] OptimTest.ProducesPyTorchValues_RMSpropWithWeightDecayAndCenteredAndMomentum (203 ms) 2023-01-11T22:10:49.1402583Z [ RUN ] OptimTest.ProducesPyTorchValues_SGD 2023-01-11T22:10:49.2724424Z [ OK ] OptimTest.ProducesPyTorchValues_SGD (132 ms) 2023-01-11T22:10:49.2725066Z [ RUN ] OptimTest.ProducesPyTorchValues_SGDWithWeightDecay 2023-01-11T22:10:49.4141976Z [ OK ] OptimTest.ProducesPyTorchValues_SGDWithWeightDecay (141 ms) 2023-01-11T22:10:49.4142665Z [ RUN ] OptimTest.ProducesPyTorchValues_SGDWithWeightDecayAndMomentum 2023-01-11T22:10:49.5733296Z [ OK ] OptimTest.ProducesPyTorchValues_SGDWithWeightDecayAndMomentum (159 ms) 2023-01-11T22:10:49.5733854Z [ RUN ] OptimTest.ProducesPyTorchValues_SGDWithWeightDecayAndNesterovMomentum 2023-01-11T22:10:49.7373604Z [ OK ] OptimTest.ProducesPyTorchValues_SGDWithWeightDecayAndNesterovMomentum (164 ms) 2023-01-11T22:10:49.7374094Z [ RUN ] OptimTest.ProducesPyTorchValues_LBFGS 2023-01-11T22:10:49.8885937Z [ OK ] OptimTest.ProducesPyTorchValues_LBFGS (151 ms) 2023-01-11T22:10:49.8886502Z [ RUN ] OptimTest.ProducesPyTorchValues_LBFGS_with_line_search 2023-01-11T22:10:50.5667791Z [ OK ] 
OptimTest.ProducesPyTorchValues_LBFGS_with_line_search (678 ms) 2023-01-11T22:10:50.5668243Z [ RUN ] OptimTest.ZeroGrad 2023-01-11T22:10:50.5668545Z [ OK ] OptimTest.ZeroGrad (0 ms) 2023-01-11T22:10:50.5669277Z [ RUN ] OptimTest.ExternalVectorOfParameters 2023-01-11T22:10:50.5669839Z [ OK ] OptimTest.ExternalVectorOfParameters (0 ms) 2023-01-11T22:10:50.5670281Z [ RUN ] OptimTest.AddParameter_LBFGS 2023-01-11T22:10:50.5672034Z [ OK ] OptimTest.AddParameter_LBFGS (0 ms) 2023-01-11T22:10:50.5672420Z [ RUN ] OptimTest.CheckLRChange_StepLR_Adam 2023-01-11T22:10:50.5672805Z [ OK ] OptimTest.CheckLRChange_StepLR_Adam (0 ms) 2023-01-11T22:10:50.5673151Z [----------] 34 tests from OptimTest (11433 ms total) 2023-01-11T22:10:50.5673360Z 2023-01-11T22:10:50.5673527Z [----------] 29 tests from OrderedDictTest 2023-01-11T22:10:50.5673880Z [ RUN ] OrderedDictTest.IsEmptyAfterDefaultConstruction 2023-01-11T22:10:50.5674379Z [ OK ] OrderedDictTest.IsEmptyAfterDefaultConstruction (0 ms) 2023-01-11T22:10:50.5675009Z [ RUN ] OrderedDictTest.InsertAddsElementsWhenTheyAreYetNotPresent 2023-01-11T22:10:50.5675540Z [ OK ] OrderedDictTest.InsertAddsElementsWhenTheyAreYetNotPresent (0 ms) 2023-01-11T22:10:50.5676063Z [ RUN ] OrderedDictTest.GetReturnsValuesWhenTheyArePresent 2023-01-11T22:10:50.5676574Z [ OK ] OrderedDictTest.GetReturnsValuesWhenTheyArePresent (0 ms) 2023-01-11T22:10:50.5677047Z [ RUN ] OrderedDictTest.GetThrowsWhenPassedKeysThatAreNotPresent 2023-01-11T22:10:50.5699752Z [ OK ] OrderedDictTest.GetThrowsWhenPassedKeysThatAreNotPresent (2 ms) 2023-01-11T22:10:50.5700175Z [ RUN ] OrderedDictTest.CanInitializeFromList 2023-01-11T22:10:50.5700553Z [ OK ] OrderedDictTest.CanInitializeFromList (0 ms) 2023-01-11T22:10:50.5701006Z [ RUN ] OrderedDictTest.InsertThrowsWhenPassedElementsThatArePresent 2023-01-11T22:10:50.5723506Z [ OK ] OrderedDictTest.InsertThrowsWhenPassedElementsThatArePresent (2 ms) 2023-01-11T22:10:50.5724007Z [ RUN ] OrderedDictTest.FrontReturnsTheFirstItem 2023-01-11T22:10:50.5724403Z [ OK ] OrderedDictTest.FrontReturnsTheFirstItem (0 ms) 2023-01-11T22:10:50.5724775Z [ RUN ] OrderedDictTest.FrontThrowsWhenEmpty 2023-01-11T22:10:50.5734016Z [ OK ] OrderedDictTest.FrontThrowsWhenEmpty (1 ms) 2023-01-11T22:10:50.5734459Z [ RUN ] OrderedDictTest.BackReturnsTheLastItem 2023-01-11T22:10:50.5734843Z [ OK ] OrderedDictTest.BackReturnsTheLastItem (0 ms) 2023-01-11T22:10:50.5735195Z [ RUN ] OrderedDictTest.BackThrowsWhenEmpty 2023-01-11T22:10:50.5744860Z [ OK ] OrderedDictTest.BackThrowsWhenEmpty (1 ms) 2023-01-11T22:10:50.5745398Z [ RUN ] OrderedDictTest.FindReturnsPointersToValuesWhenPresent 2023-01-11T22:10:50.5745899Z [ OK ] OrderedDictTest.FindReturnsPointersToValuesWhenPresent (0 ms) 2023-01-11T22:10:50.5746431Z [ RUN ] OrderedDictTest.FindReturnsNullPointersWhenPasesdKeysThatAreNotPresent 2023-01-11T22:10:50.5747044Z [ OK ] OrderedDictTest.FindReturnsNullPointersWhenPasesdKeysThatAreNotPresent (0 ms) 2023-01-11T22:10:50.5747639Z [ RUN ] OrderedDictTest.SubscriptOperatorThrowsWhenPassedKeysThatAreNotPresent 2023-01-11T22:10:50.5748241Z [ OK ] OrderedDictTest.SubscriptOperatorThrowsWhenPassedKeysThatAreNotPresent (0 ms) 2023-01-11T22:10:50.5748845Z [ RUN ] OrderedDictTest.SubscriptOperatorReturnsItemsPositionallyWhenPassedIntegers 2023-01-11T22:10:50.5749493Z [ OK ] OrderedDictTest.SubscriptOperatorReturnsItemsPositionallyWhenPassedIntegers (0 ms) 2023-01-11T22:10:50.5750114Z [ RUN ] OrderedDictTest.SubscriptOperatorsThrowswhenPassedKeysThatAreNotPresent 2023-01-11T22:10:50.5768519Z [ OK ] 
OrderedDictTest.SubscriptOperatorsThrowswhenPassedKeysThatAreNotPresent (2 ms) 2023-01-11T22:10:50.5769140Z [ RUN ] OrderedDictTest.UpdateInsertsAllItemsFromAnotherOrderedDict 2023-01-11T22:10:50.5769812Z [ OK ] OrderedDictTest.UpdateInsertsAllItemsFromAnotherOrderedDict (0 ms) 2023-01-11T22:10:50.5770277Z [ RUN ] OrderedDictTest.UpdateAlsoChecksForDuplicates 2023-01-11T22:10:50.5780275Z [ OK ] OrderedDictTest.UpdateAlsoChecksForDuplicates (1 ms) 2023-01-11T22:10:50.5780931Z [ RUN ] OrderedDictTest.CanIterateItems 2023-01-11T22:10:50.5781563Z [ OK ] OrderedDictTest.CanIterateItems (0 ms) 2023-01-11T22:10:50.5782139Z [ RUN ] OrderedDictTest.EraseWorks 2023-01-11T22:10:50.5782816Z [ OK ] OrderedDictTest.EraseWorks (0 ms) 2023-01-11T22:10:50.5783165Z [ RUN ] OrderedDictTest.ClearMakesTheDictEmpty 2023-01-11T22:10:50.5783546Z [ OK ] OrderedDictTest.ClearMakesTheDictEmpty (0 ms) 2023-01-11T22:10:50.5783890Z [ RUN ] OrderedDictTest.CanCopyConstruct 2023-01-11T22:10:50.5784336Z [ OK ] OrderedDictTest.CanCopyConstruct (0 ms) 2023-01-11T22:10:50.5784668Z [ RUN ] OrderedDictTest.CanCopyAssign 2023-01-11T22:10:50.5784985Z [ OK ] OrderedDictTest.CanCopyAssign (0 ms) 2023-01-11T22:10:50.5785317Z [ RUN ] OrderedDictTest.CanMoveConstruct 2023-01-11T22:10:50.5785663Z [ OK ] OrderedDictTest.CanMoveConstruct (0 ms) 2023-01-11T22:10:50.5785987Z [ RUN ] OrderedDictTest.CanMoveAssign 2023-01-11T22:10:50.5786303Z [ OK ] OrderedDictTest.CanMoveAssign (0 ms) 2023-01-11T22:10:50.5786642Z [ RUN ] OrderedDictTest.CanInsertWithBraces 2023-01-11T22:10:50.5787001Z [ OK ] OrderedDictTest.CanInsertWithBraces (0 ms) 2023-01-11T22:10:50.5787402Z [ RUN ] OrderedDictTest.ErrorMessagesIncludeTheKeyDescription 2023-01-11T22:10:50.5805088Z [ OK ] OrderedDictTest.ErrorMessagesIncludeTheKeyDescription (2 ms) 2023-01-11T22:10:50.5805681Z [ RUN ] OrderedDictTest.KeysReturnsAllKeys 2023-01-11T22:10:50.5806067Z [ OK ] OrderedDictTest.KeysReturnsAllKeys (0 ms) 2023-01-11T22:10:50.5806419Z [ RUN ] OrderedDictTest.ValuesReturnsAllValues 2023-01-11T22:10:50.5806796Z [ OK ] OrderedDictTest.ValuesReturnsAllValues (0 ms) 2023-01-11T22:10:50.5807165Z [ RUN ] OrderedDictTest.ItemsReturnsAllItems 2023-01-11T22:10:50.5807515Z [ OK ] OrderedDictTest.ItemsReturnsAllItems (0 ms) 2023-01-11T22:10:50.5807876Z [----------] 29 tests from OrderedDictTest (13 ms total) 2023-01-11T22:10:50.5808036Z 2023-01-11T22:10:50.5808177Z [----------] 13 tests from RNNTest 2023-01-11T22:10:50.5808461Z [ RUN ] RNNTest.CheckOutputSizes 2023-01-11T22:10:50.5886656Z [ OK ] RNNTest.CheckOutputSizes (8 ms) 2023-01-11T22:10:50.5887123Z [ RUN ] RNNTest.CheckOutputSizesProj 2023-01-11T22:10:50.5965270Z [ OK ] RNNTest.CheckOutputSizesProj (7 ms) 2023-01-11T22:10:50.5965828Z [ RUN ] RNNTest.CheckOutputValuesMatchPyTorch 2023-01-11T22:10:50.5969751Z [ OK ] RNNTest.CheckOutputValuesMatchPyTorch (0 ms) 2023-01-11T22:10:50.5970328Z [ RUN ] RNNTest.EndToEndLSTM 2023-01-11T22:10:52.5403493Z [ OK ] RNNTest.EndToEndLSTM (1943 ms) 2023-01-11T22:10:52.5404202Z [ RUN ] RNNTest.EndToEndLSTMProj 2023-01-11T22:10:54.4102791Z [ OK ] RNNTest.EndToEndLSTMProj (1869 ms) 2023-01-11T22:10:54.4103190Z [ RUN ] RNNTest.EndToEndGRU 2023-01-11T22:10:56.0850753Z [ OK ] RNNTest.EndToEndGRU (1674 ms) 2023-01-11T22:10:56.0851362Z [ RUN ] RNNTest.EndToEndRNNRelu 2023-01-11T22:10:56.9697611Z [ OK ] RNNTest.EndToEndRNNRelu (884 ms) 2023-01-11T22:10:56.9698377Z [ RUN ] RNNTest.EndToEndRNNTanh 2023-01-11T22:10:57.9568840Z [ OK ] RNNTest.EndToEndRNNTanh (987 ms) 2023-01-11T22:10:57.9569776Z [ RUN ] 
RNNTest.PrettyPrintRNNs 2023-01-11T22:10:57.9592544Z [ OK ] RNNTest.PrettyPrintRNNs (2 ms) 2023-01-11T22:10:57.9593161Z [ RUN ] RNNTest.BidirectionalFlattenParameters 2023-01-11T22:10:57.9704534Z [ OK ] RNNTest.BidirectionalFlattenParameters (11 ms) 2023-01-11T22:10:57.9705137Z [ RUN ] RNNTest.BidirectionalGRUReverseForward 2023-01-11T22:10:57.9718135Z [ OK ] RNNTest.BidirectionalGRUReverseForward (1 ms) 2023-01-11T22:10:57.9718893Z [ RUN ] RNNTest.BidirectionalLSTMReverseForward 2023-01-11T22:10:57.9730199Z [ OK ] RNNTest.BidirectionalLSTMReverseForward (1 ms) 2023-01-11T22:10:57.9730599Z [ RUN ] RNNTest.UsePackedSequenceAsInput 2023-01-11T22:10:57.9748061Z [ OK ] RNNTest.UsePackedSequenceAsInput (1 ms) 2023-01-11T22:10:57.9748692Z [----------] 13 tests from RNNTest (7394 ms total) 2023-01-11T22:10:57.9748978Z 2023-01-11T22:10:57.9749566Z [----------] 19 tests from SequentialTest 2023-01-11T22:10:57.9750106Z [ RUN ] SequentialTest.CanContainThings 2023-01-11T22:10:57.9750689Z [ OK ] SequentialTest.CanContainThings (0 ms) 2023-01-11T22:10:57.9751123Z [ RUN ] SequentialTest.ConstructsFromSharedPointer 2023-01-11T22:10:57.9751576Z [ OK ] SequentialTest.ConstructsFromSharedPointer (0 ms) 2023-01-11T22:10:57.9752017Z [ RUN ] SequentialTest.ConstructsFromConcreteType 2023-01-11T22:10:57.9752474Z [ OK ] SequentialTest.ConstructsFromConcreteType (0 ms) 2023-01-11T22:10:57.9752861Z [ RUN ] SequentialTest.ConstructsFromModuleHolder 2023-01-11T22:10:57.9753310Z [ OK ] SequentialTest.ConstructsFromModuleHolder (0 ms) 2023-01-11T22:10:57.9753668Z [ RUN ] SequentialTest.PushBackAddsAnElement 2023-01-11T22:10:57.9754094Z [ OK ] SequentialTest.PushBackAddsAnElement (0 ms) 2023-01-11T22:10:57.9754430Z [ RUN ] SequentialTest.AccessWithAt 2023-01-11T22:10:57.9783951Z [ OK ] SequentialTest.AccessWithAt (3 ms) 2023-01-11T22:10:57.9784295Z [ RUN ] SequentialTest.AccessWithPtr 2023-01-11T22:10:57.9807043Z [ OK ] SequentialTest.AccessWithPtr (2 ms) 2023-01-11T22:10:57.9807492Z [ RUN ] SequentialTest.CallingForwardOnEmptySequentialIsDisallowed 2023-01-11T22:10:57.9819117Z [ OK ] SequentialTest.CallingForwardOnEmptySequentialIsDisallowed (1 ms) 2023-01-11T22:10:57.9819580Z [ RUN ] SequentialTest.CallingForwardChainsCorrectly 2023-01-11T22:10:57.9819996Z [ OK ] SequentialTest.CallingForwardChainsCorrectly (0 ms) 2023-01-11T22:10:57.9820440Z [ RUN ] SequentialTest.CallingForwardWithTheWrongReturnTypeThrows 2023-01-11T22:10:57.9830051Z [ OK ] SequentialTest.CallingForwardWithTheWrongReturnTypeThrows (1 ms) 2023-01-11T22:10:57.9830551Z [ RUN ] SequentialTest.TheReturnTypeOfForwardDefaultsToTensor 2023-01-11T22:10:57.9831035Z [ OK ] SequentialTest.TheReturnTypeOfForwardDefaultsToTensor (0 ms) 2023-01-11T22:10:57.9831447Z [ RUN ] SequentialTest.ForwardReturnsTheLastValue 2023-01-11T22:10:57.9834901Z [ OK ] SequentialTest.ForwardReturnsTheLastValue (0 ms) 2023-01-11T22:10:57.9835554Z [ RUN ] SequentialTest.SanityCheckForHoldingStandardModules 2023-01-11T22:10:57.9836706Z [ OK ] SequentialTest.SanityCheckForHoldingStandardModules (0 ms) 2023-01-11T22:10:57.9837521Z [ RUN ] SequentialTest.ExtendPushesModulesFromOtherSequential 2023-01-11T22:10:57.9838003Z [ OK ] SequentialTest.ExtendPushesModulesFromOtherSequential (0 ms) 2023-01-11T22:10:57.9838409Z [ RUN ] SequentialTest.HasReferenceSemantics 2023-01-11T22:10:57.9838756Z [ OK ] SequentialTest.HasReferenceSemantics (0 ms) 2023-01-11T22:10:57.9839250Z [ RUN ] SequentialTest.IsCloneable 2023-01-11T22:10:57.9842431Z [ OK ] SequentialTest.IsCloneable (0 ms) 
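[Editor's note] The RNNTest suite that finishes just above (EndToEndLSTM, EndToEndLSTMProj, EndToEndGRU, EndToEndRNNRelu/Tanh, UsePackedSequenceAsInput) drives the recurrent modules of the libtorch C++ frontend end to end. For orientation only, here is a minimal sketch of the torch::nn::LSTM usage those test names refer to; the sizes and the two-layer option are illustrative assumptions, not values taken from the tests.

#include <torch/torch.h>
#include <iostream>

int main() {
  // Two-layer LSTM: input_size=10, hidden_size=20 (illustrative numbers).
  torch::nn::LSTM lstm(torch::nn::LSTMOptions(10, 20).num_layers(2));

  // Default layout is (seq_len, batch, input_size).
  torch::Tensor input = torch::randn({5, 3, 10});

  // forward() returns (output, (h_n, c_n)) as nested std::tuple.
  auto result = lstm->forward(input);
  torch::Tensor output = std::get<0>(result);
  auto final_state = std::get<1>(result);  // tuple of (h_n, c_n)

  std::cout << output.sizes() << std::endl;  // [5, 3, 20]
  return 0;
}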
2023-01-11T22:10:57.9843091Z [ RUN ] SequentialTest.RegistersElementsAsSubmodules 2023-01-11T22:10:57.9843499Z [ OK ] SequentialTest.RegistersElementsAsSubmodules (0 ms) 2023-01-11T22:10:57.9843881Z [ RUN ] SequentialTest.PrettyPrintSequential 2023-01-11T22:10:57.9845233Z [ OK ] SequentialTest.PrettyPrintSequential (0 ms) 2023-01-11T22:10:57.9845883Z [ RUN ] SequentialTest.ModuleForwardMethodOptionalArg 2023-01-11T22:10:57.9877113Z [ OK ] SequentialTest.ModuleForwardMethodOptionalArg (3 ms) 2023-01-11T22:10:57.9877787Z [----------] 19 tests from SequentialTest (12 ms total) 2023-01-11T22:10:57.9877952Z 2023-01-11T22:10:57.9878114Z [----------] 11 tests from TransformerTest 2023-01-11T22:10:57.9878435Z [ RUN ] TransformerTest.TransformerEncoderLayer 2023-01-11T22:10:57.9966952Z [ OK ] TransformerTest.TransformerEncoderLayer (8 ms) 2023-01-11T22:10:57.9967501Z [ RUN ] TransformerTest.TransformerDecoderLayer 2023-01-11T22:10:58.0050470Z [ OK ] TransformerTest.TransformerDecoderLayer (8 ms) 2023-01-11T22:10:58.0051047Z [ RUN ] TransformerTest.TransformerDecoderLayer_gelu 2023-01-11T22:10:58.0098288Z [ OK ] TransformerTest.TransformerDecoderLayer_gelu (4 ms) 2023-01-11T22:10:58.0098890Z [ RUN ] TransformerTest.TransformerEncoder 2023-01-11T22:10:58.0262573Z [ OK ] TransformerTest.TransformerEncoder (16 ms) 2023-01-11T22:10:58.0263212Z [ RUN ] TransformerTest.PrettyPrintTransformerEncoderLayer 2023-01-11T22:10:58.0265836Z [ OK ] TransformerTest.PrettyPrintTransformerEncoderLayer (0 ms) 2023-01-11T22:10:58.0266548Z [ RUN ] TransformerTest.PrettyPrintTransformerEncoder 2023-01-11T22:10:58.0274653Z [ OK ] TransformerTest.PrettyPrintTransformerEncoder (0 ms) 2023-01-11T22:10:58.0275303Z [ RUN ] TransformerTest.PrettyPrintTransformerDecoderLayer 2023-01-11T22:10:58.0276912Z [ OK ] TransformerTest.PrettyPrintTransformerDecoderLayer (0 ms) 2023-01-11T22:10:58.0277585Z [ RUN ] TransformerTest.TransformerDecoder 2023-01-11T22:10:58.0723343Z [ OK ] TransformerTest.TransformerDecoder (44 ms) 2023-01-11T22:10:58.0723827Z [ RUN ] TransformerTest.PrettyPrintTransformerDecoder 2023-01-11T22:10:58.0733951Z [ OK ] TransformerTest.PrettyPrintTransformerDecoder (1 ms) 2023-01-11T22:10:58.0734382Z [ RUN ] TransformerTest.Transformer 2023-01-11T22:10:58.0896556Z [ OK ] TransformerTest.Transformer (16 ms) 2023-01-11T22:10:58.0897068Z [ RUN ] TransformerTest.TransformerArgsCorrectness 2023-01-11T22:10:58.0956301Z [ OK ] TransformerTest.TransformerArgsCorrectness (5 ms) 2023-01-11T22:10:58.0957024Z [----------] 11 tests from TransformerTest (107 ms total) 2023-01-11T22:10:58.0957196Z 2023-01-11T22:10:58.0957351Z [----------] 23 tests from SerializeTest 2023-01-11T22:10:58.0957635Z [ RUN ] SerializeTest.KeysFunc 2023-01-11T22:10:58.0961046Z [ OK ] SerializeTest.KeysFunc (0 ms) 2023-01-11T22:10:58.0961595Z [ RUN ] SerializeTest.TryReadFunc 2023-01-11T22:10:58.0963687Z [ OK ] SerializeTest.TryReadFunc (0 ms) 2023-01-11T22:10:58.0964199Z [ RUN ] SerializeTest.Basic 2023-01-11T22:10:58.0966042Z [ OK ] SerializeTest.Basic (0 ms) 2023-01-11T22:10:58.0966577Z [ RUN ] SerializeTest.MathBits 2023-01-11T22:10:58.1041380Z [ OK ] SerializeTest.MathBits (7 ms) 2023-01-11T22:10:58.1041917Z [ RUN ] SerializeTest.BasicToFile 2023-01-11T22:10:58.1044868Z [ OK ] SerializeTest.BasicToFile (0 ms) 2023-01-11T22:10:58.1045563Z [ RUN ] SerializeTest.BasicViaFunc 2023-01-11T22:10:58.1048160Z [ OK ] SerializeTest.BasicViaFunc (0 ms) 2023-01-11T22:10:58.1048707Z [ RUN ] SerializeTest.Resized 2023-01-11T22:10:58.1050330Z [ OK ] 
SerializeTest.Resized (0 ms) 2023-01-11T22:10:58.1050959Z [ RUN ] SerializeTest.Sliced 2023-01-11T22:10:58.1052493Z [ OK ] SerializeTest.Sliced (0 ms) 2023-01-11T22:10:58.1053129Z [ RUN ] SerializeTest.NonContiguous 2023-01-11T22:10:58.1054638Z [ OK ] SerializeTest.NonContiguous (0 ms) 2023-01-11T22:10:58.1055270Z [ RUN ] SerializeTest.ErrorOnMissingKey 2023-01-11T22:10:58.1133632Z [ OK ] SerializeTest.ErrorOnMissingKey (7 ms) 2023-01-11T22:10:58.1134145Z [ RUN ] SerializeTest.XOR 2023-01-11T22:10:58.2871051Z [ OK ] SerializeTest.XOR (173 ms) 2023-01-11T22:10:58.2871565Z [ RUN ] SerializeTest.Optim 2023-01-11T22:10:58.2893553Z [ OK ] SerializeTest.Optim (2 ms) 2023-01-11T22:10:58.2894011Z [ RUN ] SerializeTest.Optim_Adagrad 2023-01-11T22:10:58.2924003Z [ OK ] SerializeTest.Optim_Adagrad (3 ms) 2023-01-11T22:10:58.2924565Z [ RUN ] SerializeTest.Optim_SGD 2023-01-11T22:10:58.2951720Z [ OK ] SerializeTest.Optim_SGD (2 ms) 2023-01-11T22:10:58.2952288Z [ RUN ] SerializeTest.Optim_Adam 2023-01-11T22:10:58.2986647Z [ OK ] SerializeTest.Optim_Adam (3 ms) 2023-01-11T22:10:58.2987206Z [ RUN ] SerializeTest.Optim_AdamW 2023-01-11T22:10:58.3021327Z [ OK ] SerializeTest.Optim_AdamW (3 ms) 2023-01-11T22:10:58.3021858Z [ RUN ] SerializeTest.Optim_RMSprop 2023-01-11T22:10:58.3054880Z [ OK ] SerializeTest.Optim_RMSprop (3 ms) 2023-01-11T22:10:58.3055435Z [ RUN ] SerializeTest.Optim_LBFGS 2023-01-11T22:10:58.3088779Z [ OK ] SerializeTest.Optim_LBFGS (3 ms) 2023-01-11T22:10:58.3089577Z [ RUN ] SerializeTest.CanSerializeModulesWithIntermediateModulesWithoutParametersOrBuffers 2023-01-11T22:10:58.3093083Z [ OK ] SerializeTest.CanSerializeModulesWithIntermediateModulesWithoutParametersOrBuffers (0 ms) 2023-01-11T22:10:58.3093643Z [ RUN ] SerializeTest.VectorOfTensors 2023-01-11T22:10:58.3095475Z [ OK ] SerializeTest.VectorOfTensors (0 ms) 2023-01-11T22:10:58.3096035Z [ RUN ] SerializeTest.IValue 2023-01-11T22:10:58.3098337Z [ OK ] SerializeTest.IValue (0 ms) 2023-01-11T22:10:58.3099030Z [ RUN ] SerializeTest.UnserializableSubmoduleIsSkippedWhenSavingModule 2023-01-11T22:10:58.3099896Z [ OK ] SerializeTest.UnserializableSubmoduleIsSkippedWhenSavingModule (0 ms) 2023-01-11T22:10:58.3100575Z [ RUN ] SerializeTest.UnserializableSubmoduleIsIgnoredWhenLoadingModule 2023-01-11T22:10:58.3106022Z [ OK ] SerializeTest.UnserializableSubmoduleIsIgnoredWhenLoadingModule (0 ms) 2023-01-11T22:10:58.3106767Z [----------] 23 tests from SerializeTest (214 ms total) 2023-01-11T22:10:58.3107056Z 2023-01-11T22:10:58.3107296Z [----------] 1 test from SpecialTest 2023-01-11T22:10:58.3107759Z [ RUN ] SpecialTest.special 2023-01-11T22:10:58.3108285Z [ OK ] SpecialTest.special (0 ms) 2023-01-11T22:10:58.3108669Z [----------] 1 test from SpecialTest (0 ms total) 2023-01-11T22:10:58.3108823Z 2023-01-11T22:10:58.3108966Z [----------] 5 tests from TestStatic 2023-01-11T22:10:58.3109226Z [ RUN ] TestStatic.AllOf 2023-01-11T22:10:58.3109495Z [ OK ] TestStatic.AllOf (0 ms) 2023-01-11T22:10:58.3109740Z [ RUN ] TestStatic.AnyOf 2023-01-11T22:10:58.3110014Z [ OK ] TestStatic.AnyOf (0 ms) 2023-01-11T22:10:58.3110298Z [ RUN ] TestStatic.EnableIfModule 2023-01-11T22:10:58.3110595Z [ OK ] TestStatic.EnableIfModule (0 ms) 2023-01-11T22:10:58.3111078Z [ RUN ] TestStatic.ReturnTypeOfForward 2023-01-11T22:10:58.3111413Z [ OK ] TestStatic.ReturnTypeOfForward (0 ms) 2023-01-11T22:10:58.3111684Z [ RUN ] TestStatic.Apply 2023-01-11T22:10:58.3111954Z [ OK ] TestStatic.Apply (0 ms) 2023-01-11T22:10:58.3112254Z [----------] 5 tests from TestStatic (0 ms total) 
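[Editor's note] The SerializeTest suite completed above covers torch::save / torch::load round trips for tensors, modules, and optimizer state. As a point of reference, a minimal round-trip sketch using those same public APIs follows; the file names and the Linear dimensions are illustrative assumptions rather than details of the tests.

#include <torch/torch.h>
#include <iostream>

int main() {
  // Round-trip a plain tensor through the serialization API.
  torch::Tensor t = torch::rand({3, 3});
  torch::save(t, "tensor.pt");
  torch::Tensor t2;
  torch::load(t2, "tensor.pt");
  std::cout << torch::allclose(t, t2) << std::endl;  // 1

  // Round-trip a module: the loaded module receives the saved parameters.
  torch::nn::Linear lin(4, 2);
  torch::save(lin, "linear.pt");
  torch::nn::Linear restored(4, 2);
  torch::load(restored, "linear.pt");
  std::cout << torch::allclose(lin->weight, restored->weight) << std::endl;  // 1
  return 0;
}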
2023-01-11T22:10:58.3112403Z 2023-01-11T22:10:58.3112548Z [----------] 45 tests from TensorTest 2023-01-11T22:10:58.3112801Z [ RUN ] TensorTest.ToDtype 2023-01-11T22:10:58.3119032Z [ OK ] TensorTest.ToDtype (1 ms) 2023-01-11T22:10:58.3119648Z [ RUN ] TensorTest.ToTensorAndTensorAttributes 2023-01-11T22:10:58.3120138Z [ OK ] TensorTest.ToTensorAndTensorAttributes (0 ms) 2023-01-11T22:10:58.3120514Z [ RUN ] TensorTest.ToOptionsWithRequiresGrad 2023-01-11T22:10:58.3143619Z [ OK ] TensorTest.ToOptionsWithRequiresGrad (2 ms) 2023-01-11T22:10:58.3144359Z [ RUN ] TensorTest.ToDoesNotCopyWhenOptionsAreAllTheSame 2023-01-11T22:10:58.3145153Z [ OK ] TensorTest.ToDoesNotCopyWhenOptionsAreAllTheSame (0 ms) 2023-01-11T22:10:58.3145694Z [ RUN ] TensorTest.AtTensorCtorScalar 2023-01-11T22:10:58.3146288Z [ OK ] TensorTest.AtTensorCtorScalar (0 ms) 2023-01-11T22:10:58.3146726Z [ RUN ] TensorTest.AtTensorCtorSingleDim 2023-01-11T22:10:58.3147078Z [ OK ] TensorTest.AtTensorCtorSingleDim (0 ms) 2023-01-11T22:10:58.3147423Z [ RUN ] TensorTest.AtTensorCastRealToComplex 2023-01-11T22:10:58.3147794Z [ OK ] TensorTest.AtTensorCastRealToComplex (0 ms) 2023-01-11T22:10:58.3148180Z [ RUN ] TensorTest.AtTensorCastComplexToRealErrorChecks 2023-01-11T22:10:58.3200268Z [ OK ] TensorTest.AtTensorCastComplexToRealErrorChecks (5 ms) 2023-01-11T22:10:58.3200873Z [ RUN ] TensorTest.TorchTensorCtorScalarIntegralType 2023-01-11T22:10:58.3201284Z [ OK ] TensorTest.TorchTensorCtorScalarIntegralType (0 ms) 2023-01-11T22:10:58.3201696Z [ RUN ] TensorTest.TorchTensorCtorScalarFloatingType 2023-01-11T22:10:58.3202105Z [ OK ] TensorTest.TorchTensorCtorScalarFloatingType (0 ms) 2023-01-11T22:10:58.3202500Z [ RUN ] TensorTest.TorchTensorCtorScalarBoolType 2023-01-11T22:10:58.3202875Z [ OK ] TensorTest.TorchTensorCtorScalarBoolType (0 ms) 2023-01-11T22:10:58.3203275Z [ RUN ] TensorTest.TorchTensorCtorSingleDimIntegralType 2023-01-11T22:10:58.3207360Z [ OK ] TensorTest.TorchTensorCtorSingleDimIntegralType (0 ms) 2023-01-11T22:10:58.3208027Z [ RUN ] TensorTest.TorchTensorCtorSingleDimFloatingType 2023-01-11T22:10:58.3208818Z [ OK ] TensorTest.TorchTensorCtorSingleDimFloatingType (0 ms) 2023-01-11T22:10:58.3209269Z [ RUN ] TensorTest.TorchTensorCtorSingleDimBoolType 2023-01-11T22:10:58.3209674Z [ OK ] TensorTest.TorchTensorCtorSingleDimBoolType (0 ms) 2023-01-11T22:10:58.3210076Z [ RUN ] TensorTest.TorchTensorCtorMultiDimIntegralType 2023-01-11T22:10:58.3212772Z [ OK ] TensorTest.TorchTensorCtorMultiDimIntegralType (0 ms) 2023-01-11T22:10:58.3213389Z [ RUN ] TensorTest.TorchTensorCtorMultiDimFloatingType 2023-01-11T22:10:58.3215199Z [ OK ] TensorTest.TorchTensorCtorMultiDimFloatingType (0 ms) 2023-01-11T22:10:58.3215766Z [ RUN ] TensorTest.TorchTensorCtorMultiDimBoolType 2023-01-11T22:10:58.3216351Z [ OK ] TensorTest.TorchTensorCtorMultiDimBoolType (0 ms) 2023-01-11T22:10:58.3217019Z [ RUN ] TensorTest.TorchTensorCtorMultiDimWithOptions 2023-01-11T22:10:58.3217592Z [ OK ] TensorTest.TorchTensorCtorMultiDimWithOptions (0 ms) 2023-01-11T22:10:58.3218015Z [ RUN ] TensorTest.TorchTensorCtorMultiDimErrorChecks 2023-01-11T22:10:58.3274845Z [ OK ] TensorTest.TorchTensorCtorMultiDimErrorChecks (5 ms) 2023-01-11T22:10:58.3275626Z [ RUN ] TensorTest.TorchTensorCastRealToComplex 2023-01-11T22:10:58.3276335Z [ OK ] TensorTest.TorchTensorCastRealToComplex (0 ms) 2023-01-11T22:10:58.3277095Z [ RUN ] TensorTest.TorchTensorCastComplexToRealErrorChecks 2023-01-11T22:10:58.3277789Z [W Copy.cpp:276] Warning: Casting complex values to real discards the 
imaginary part (function operator()) 2023-01-11T22:10:58.3289010Z [ OK ] TensorTest.TorchTensorCastComplexToRealErrorChecks (1 ms) 2023-01-11T22:10:58.3289728Z [ RUN ] TensorTest.TorchTensorCtorZeroSizedDim 2023-01-11T22:10:58.3290356Z [ OK ] TensorTest.TorchTensorCtorZeroSizedDim (0 ms) 2023-01-11T22:10:58.3291043Z [ RUN ] TensorTest.TorchTensorCtorWithoutSpecifyingDtype 2023-01-11T22:10:58.3291919Z [ OK ] TensorTest.TorchTensorCtorWithoutSpecifyingDtype (0 ms) 2023-01-11T22:10:58.3292694Z [ RUN ] TensorTest.TorchTensorCtorWithNonDtypeOptions 2023-01-11T22:10:58.3293463Z [ OK ] TensorTest.TorchTensorCtorWithNonDtypeOptions (0 ms) 2023-01-11T22:10:58.3294048Z [ RUN ] TensorTest.Arange 2023-01-11T22:10:58.3294507Z [ OK ] TensorTest.Arange (0 ms) 2023-01-11T22:10:58.3295109Z [ RUN ] TensorTest.PrettyPrintTensorDataContainer 2023-01-11T22:10:58.3295833Z [ OK ] TensorTest.PrettyPrintTensorDataContainer (0 ms) 2023-01-11T22:10:58.3296628Z [ RUN ] TensorTest.TensorDataContainerCallingAccessorOfWrongType 2023-01-11T22:10:58.3358112Z [ OK ] TensorTest.TensorDataContainerCallingAccessorOfWrongType (6 ms) 2023-01-11T22:10:58.3358724Z [ RUN ] TensorTest.FromBlob 2023-01-11T22:10:58.3359185Z [ OK ] TensorTest.FromBlob (0 ms) 2023-01-11T22:10:58.3359778Z [ RUN ] TensorTest.FromBlobUsesDeleter 2023-01-11T22:10:58.3360367Z [ OK ] TensorTest.FromBlobUsesDeleter (0 ms) 2023-01-11T22:10:58.3360938Z [ RUN ] TensorTest.FromBlobWithStrides 2023-01-11T22:10:58.3361504Z [ OK ] TensorTest.FromBlobWithStrides (0 ms) 2023-01-11T22:10:58.3361998Z [ RUN ] TensorTest.Item 2023-01-11T22:10:58.3362451Z [ OK ] TensorTest.Item (0 ms) 2023-01-11T22:10:58.3362910Z [ RUN ] TensorTest.DataPtr 2023-01-11T22:10:58.3363374Z [ OK ] TensorTest.DataPtr (0 ms) 2023-01-11T22:10:58.3363822Z [ RUN ] TensorTest.Data 2023-01-11T22:10:58.3364273Z [ OK ] TensorTest.Data (0 ms) 2023-01-11T22:10:58.3364766Z [ RUN ] TensorTest.BackwardAndGrad 2023-01-11T22:10:58.3365308Z [ OK ] TensorTest.BackwardAndGrad (0 ms) 2023-01-11T22:10:58.3365887Z [ RUN ] TensorTest.BackwardCreatesOnesGrad 2023-01-11T22:10:58.3366502Z [ OK ] TensorTest.BackwardCreatesOnesGrad (0 ms) 2023-01-11T22:10:58.3367122Z [ RUN ] TensorTest.BackwardNonScalarOutputs 2023-01-11T22:10:58.3394603Z [ OK ] TensorTest.BackwardNonScalarOutputs (3 ms) 2023-01-11T22:10:58.3395156Z [ RUN ] TensorTest.IsLeaf 2023-01-11T22:10:58.3395617Z [ OK ] TensorTest.IsLeaf (0 ms) 2023-01-11T22:10:58.3396089Z [ RUN ] TensorTest.OutputNr 2023-01-11T22:10:58.3396592Z [ OK ] TensorTest.OutputNr (0 ms) 2023-01-11T22:10:58.3397046Z [ RUN ] TensorTest.Version 2023-01-11T22:10:58.3397523Z [ OK ] TensorTest.Version (0 ms) 2023-01-11T22:10:58.3397983Z [ RUN ] TensorTest.Detach 2023-01-11T22:10:58.3398442Z [ OK ] TensorTest.Detach (0 ms) 2023-01-11T22:10:58.3398934Z [ RUN ] TensorTest.DetachInplace 2023-01-11T22:10:58.3399467Z [ OK ] TensorTest.DetachInplace (0 ms) 2023-01-11T22:10:58.3400252Z [ RUN ] TensorTest.SetData 2023-01-11T22:10:58.3400717Z [ OK ] TensorTest.SetData (0 ms) 2023-01-11T22:10:58.3401246Z [ RUN ] TensorTest.RequiresGradInplace 2023-01-11T22:10:58.3423536Z [ OK ] TensorTest.RequiresGradInplace (2 ms) 2023-01-11T22:10:58.3423994Z [ RUN ] TensorTest.StdDimension 2023-01-11T22:10:58.3424447Z [ OK ] TensorTest.StdDimension (0 ms) 2023-01-11T22:10:58.3424914Z [ RUN ] TensorTest.ReshapeAlias 2023-01-11T22:10:58.3427526Z [ OK ] TensorTest.ReshapeAlias (0 ms) 2023-01-11T22:10:58.3428106Z [----------] 45 tests from TensorTest (31 ms total) 2023-01-11T22:10:58.3428388Z 2023-01-11T22:10:58.3428684Z 
[----------] 36 tests from TensorIndexingTest 2023-01-11T22:10:58.3429202Z [ RUN ] TensorIndexingTest.Slice 2023-01-11T22:10:58.3429722Z [ OK ] TensorIndexingTest.Slice (0 ms) 2023-01-11T22:10:58.3430421Z [ RUN ] TensorIndexingTest.TensorIndex 2023-01-11T22:10:58.3441530Z [ OK ] TensorIndexingTest.TensorIndex (1 ms) 2023-01-11T22:10:58.3442166Z [ RUN ] TensorIndexingTest.TestNoIndices 2023-01-11T22:10:58.3529068Z [ OK ] TensorIndexingTest.TestNoIndices (8 ms) 2023-01-11T22:10:58.3529769Z [ RUN ] TensorIndexingTest.TestAdvancedIndexingWithListOfTensor 2023-01-11T22:10:58.3530526Z [ OK ] TensorIndexingTest.TestAdvancedIndexingWithListOfTensor (0 ms) 2023-01-11T22:10:58.3531208Z [ RUN ] TensorIndexingTest.TestSingleInt 2023-01-11T22:10:58.3531758Z [ OK ] TensorIndexingTest.TestSingleInt (0 ms) 2023-01-11T22:10:58.3532286Z [ RUN ] TensorIndexingTest.TestMultipleInt 2023-01-11T22:10:58.3532840Z [ OK ] TensorIndexingTest.TestMultipleInt (0 ms) 2023-01-11T22:10:58.3533338Z [ RUN ] TensorIndexingTest.TestNone 2023-01-11T22:10:58.3533840Z [ OK ] TensorIndexingTest.TestNone (0 ms) 2023-01-11T22:10:58.3534363Z [ RUN ] TensorIndexingTest.TestStep 2023-01-11T22:10:58.3534876Z [ OK ] TensorIndexingTest.TestStep (0 ms) 2023-01-11T22:10:58.3535432Z [ RUN ] TensorIndexingTest.TestStepAssignment 2023-01-11T22:10:58.3536051Z [ OK ] TensorIndexingTest.TestStepAssignment (0 ms) 2023-01-11T22:10:58.3536639Z [ RUN ] TensorIndexingTest.TestBoolIndices 2023-01-11T22:10:58.3537213Z [ OK ] TensorIndexingTest.TestBoolIndices (0 ms) 2023-01-11T22:10:58.3537827Z [ RUN ] TensorIndexingTest.TestBoolIndicesAccumulate 2023-01-11T22:10:58.3538514Z [ OK ] TensorIndexingTest.TestBoolIndicesAccumulate (0 ms) 2023-01-11T22:10:58.3539175Z [ RUN ] TensorIndexingTest.TestMultipleBoolIndices 2023-01-11T22:10:58.3539847Z [ OK ] TensorIndexingTest.TestMultipleBoolIndices (0 ms) 2023-01-11T22:10:58.3540448Z [ RUN ] TensorIndexingTest.TestByteMask 2023-01-11T22:10:58.3541031Z [ OK ] TensorIndexingTest.TestByteMask (0 ms) 2023-01-11T22:10:58.3541620Z [ RUN ] TensorIndexingTest.TestByteMaskAccumulate 2023-01-11T22:10:58.3542274Z [ OK ] TensorIndexingTest.TestByteMaskAccumulate (0 ms) 2023-01-11T22:10:58.3543108Z [ RUN ] TensorIndexingTest.TestMultipleByteMask 2023-01-11T22:10:58.3543719Z [ OK ] TensorIndexingTest.TestMultipleByteMask (0 ms) 2023-01-11T22:10:58.3544326Z [ RUN ] TensorIndexingTest.TestByteMask2d 2023-01-11T22:10:58.3544936Z [ OK ] TensorIndexingTest.TestByteMask2d (0 ms) 2023-01-11T22:10:58.3545546Z [ RUN ] TensorIndexingTest.TestIntIndices 2023-01-11T22:10:58.3546153Z [ OK ] TensorIndexingTest.TestIntIndices (0 ms) 2023-01-11T22:10:58.3546781Z [ RUN ] TensorIndexingTest.TestIntIndices2d 2023-01-11T22:10:58.3547447Z [ OK ] TensorIndexingTest.TestIntIndices2d (0 ms) 2023-01-11T22:10:58.3548302Z [ RUN ] TensorIndexingTest.TestIntIndicesBroadcast 2023-01-11T22:10:58.3549026Z [ OK ] TensorIndexingTest.TestIntIndicesBroadcast (0 ms) 2023-01-11T22:10:58.3549692Z [ RUN ] TensorIndexingTest.TestEmptyIndex 2023-01-11T22:10:58.3550300Z [ OK ] TensorIndexingTest.TestEmptyIndex (0 ms) 2023-01-11T22:10:58.3550945Z [ RUN ] TensorIndexingTest.TestEmptyNdimIndex 2023-01-11T22:10:58.3605084Z [ OK ] TensorIndexingTest.TestEmptyNdimIndex (5 ms) 2023-01-11T22:10:58.3605507Z [ RUN ] TensorIndexingTest.TestEmptyNdimIndexBool 2023-01-11T22:10:58.3626277Z [ OK ] TensorIndexingTest.TestEmptyNdimIndexBool (2 ms) 2023-01-11T22:10:58.3626889Z [ RUN ] TensorIndexingTest.TestEmptySlice 2023-01-11T22:10:58.3627393Z [ OK ] 
TensorIndexingTest.TestEmptySlice (0 ms) 2023-01-11T22:10:58.3628245Z [ RUN ] TensorIndexingTest.TestIndexGetitemCopyBoolsSlices 2023-01-11T22:10:58.3628705Z [ OK ] TensorIndexingTest.TestIndexGetitemCopyBoolsSlices (0 ms) 2023-01-11T22:10:58.3629150Z [ RUN ] TensorIndexingTest.TestIndexSetitemBoolsSlices 2023-01-11T22:10:58.3716562Z [ OK ] TensorIndexingTest.TestIndexSetitemBoolsSlices (8 ms) 2023-01-11T22:10:58.3717174Z [ RUN ] TensorIndexingTest.TestIndexScalarWithBoolMask 2023-01-11T22:10:58.3718247Z [ OK ] TensorIndexingTest.TestIndexScalarWithBoolMask (0 ms) 2023-01-11T22:10:58.3719050Z [ RUN ] TensorIndexingTest.TestSetitemExpansionError 2023-01-11T22:10:58.3837480Z [ OK ] TensorIndexingTest.TestSetitemExpansionError (11 ms) 2023-01-11T22:10:58.3946976Z [ RUN ] TensorIndexingTest.TestGetitemScalars 2023-01-11T22:10:58.3947685Z [ OK ] TensorIndexingTest.TestGetitemScalars (10 ms) 2023-01-11T22:10:58.3948369Z [ RUN ] TensorIndexingTest.TestSetitemScalars 2023-01-11T22:10:58.4056399Z [ OK ] TensorIndexingTest.TestSetitemScalars (10 ms) 2023-01-11T22:10:58.4056943Z [ RUN ] TensorIndexingTest.TestBasicAdvancedCombined 2023-01-11T22:10:58.4057528Z [ OK ] TensorIndexingTest.TestBasicAdvancedCombined (0 ms) 2023-01-11T22:10:58.4058134Z [ RUN ] TensorIndexingTest.TestIntAssignment 2023-01-11T22:10:58.4058612Z [ OK ] TensorIndexingTest.TestIntAssignment (0 ms) 2023-01-11T22:10:58.4059051Z [ RUN ] TensorIndexingTest.TestByteTensorAssignment 2023-01-11T22:10:58.4060168Z [ OK ] TensorIndexingTest.TestByteTensorAssignment (0 ms) 2023-01-11T22:10:58.4060605Z [ RUN ] TensorIndexingTest.TestVariableSlicing 2023-01-11T22:10:58.4061196Z [ OK ] TensorIndexingTest.TestVariableSlicing (0 ms) 2023-01-11T22:10:58.4061672Z [ RUN ] TensorIndexingTest.TestEllipsisTensor 2023-01-11T22:10:58.4062056Z [ OK ] TensorIndexingTest.TestEllipsisTensor (0 ms) 2023-01-11T22:10:58.4062575Z [ RUN ] TensorIndexingTest.TestOutOfBoundIndex 2023-01-11T22:10:58.4158540Z [ OK ] TensorIndexingTest.TestOutOfBoundIndex (9 ms) 2023-01-11T22:10:58.4159102Z [ RUN ] TensorIndexingTest.TestZeroDimIndex 2023-01-11T22:10:58.4179751Z [ OK ] TensorIndexingTest.TestZeroDimIndex (2 ms) 2023-01-11T22:10:58.4180477Z [----------] 36 tests from TensorIndexingTest (75 ms total) 2023-01-11T22:10:58.4180779Z 2023-01-11T22:10:58.4181057Z [----------] 18 tests from NumpyTests 2023-01-11T22:10:58.4181556Z [ RUN ] NumpyTests.TestNoneIndex 2023-01-11T22:10:58.4181895Z [ OK ] NumpyTests.TestNoneIndex (0 ms) 2023-01-11T22:10:58.4182200Z [ RUN ] NumpyTests.TestEmptyFancyIndex 2023-01-11T22:10:58.4233217Z [ OK ] NumpyTests.TestEmptyFancyIndex (5 ms) 2023-01-11T22:10:58.4233840Z [ RUN ] NumpyTests.TestEllipsisIndex 2023-01-11T22:10:58.4234708Z [ OK ] NumpyTests.TestEllipsisIndex (0 ms) 2023-01-11T22:10:58.4235142Z [ RUN ] NumpyTests.TestSingleIntIndex 2023-01-11T22:10:58.4255733Z [ OK ] NumpyTests.TestSingleIntIndex (2 ms) 2023-01-11T22:10:58.4256173Z [ RUN ] NumpyTests.TestSingleBoolIndex 2023-01-11T22:10:58.4256787Z [ OK ] NumpyTests.TestSingleBoolIndex (0 ms) 2023-01-11T22:10:58.4257186Z [ RUN ] NumpyTests.TestBooleanShapeMismatch 2023-01-11T22:10:58.4466711Z [ OK ] NumpyTests.TestBooleanShapeMismatch (20 ms) 2023-01-11T22:10:58.4467204Z [ RUN ] NumpyTests.TestBooleanIndexingOnedim 2023-01-11T22:10:58.4467781Z [ OK ] NumpyTests.TestBooleanIndexingOnedim (0 ms) 2023-01-11T22:10:58.4468180Z [ RUN ] NumpyTests.TestBooleanAssignmentValueMismatch 2023-01-11T22:10:58.4618444Z [ OK ] NumpyTests.TestBooleanAssignmentValueMismatch (15 ms) 
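[Editor's note] The TensorIndexingTest suite completed above exercises the C++ counterpart of Python-style indexing via torch::indexing (Slice, None, Ellipsis, boolean masks, index_put_), and the NumpyTests entries around this point continue in the same vein. A small sketch of that API for reference; the shapes and values are illustrative assumptions.

#include <torch/torch.h>
#include <iostream>

int main() {
  using namespace torch::indexing;

  torch::Tensor t = torch::arange(12).reshape({3, 4});

  // Python t[1:, ::2] becomes index({Slice(1, None), Slice(None, None, 2)}).
  torch::Tensor a = t.index({Slice(1, None), Slice(None, None, 2)});
  std::cout << a << std::endl;

  // Boolean-mask indexing, as in the bool/byte mask tests.
  torch::Tensor mask = t > 5;
  std::cout << t.index({mask}) << std::endl;

  // Python t[0, 2] = 100 becomes index_put_.
  t.index_put_({0, 2}, 100);
  std::cout << t[0][2].item<int64_t>() << std::endl;  // 100
  return 0;
}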
2023-01-11T22:10:58.4619018Z [ RUN ] NumpyTests.TestBooleanIndexingTwodim 2023-01-11T22:10:58.4620043Z [ OK ] NumpyTests.TestBooleanIndexingTwodim (0 ms) 2023-01-11T22:10:58.4620419Z [ RUN ] NumpyTests.TestBooleanIndexingWeirdness 2023-01-11T22:10:58.4729223Z [ OK ] NumpyTests.TestBooleanIndexingWeirdness (10 ms) 2023-01-11T22:10:58.4729637Z [ RUN ] NumpyTests.TestBooleanIndexingWeirdnessTensors 2023-01-11T22:10:58.4838826Z [ OK ] NumpyTests.TestBooleanIndexingWeirdnessTensors (10 ms) 2023-01-11T22:10:58.4839569Z [ RUN ] NumpyTests.TestBooleanIndexingAlldims 2023-01-11T22:10:58.4840099Z [ OK ] NumpyTests.TestBooleanIndexingAlldims (0 ms) 2023-01-11T22:10:58.4840485Z [ RUN ] NumpyTests.TestBooleanListIndexing 2023-01-11T22:10:58.4841022Z [ OK ] NumpyTests.TestBooleanListIndexing (0 ms) 2023-01-11T22:10:58.4841392Z [ RUN ] NumpyTests.TestEverythingReturnsViews 2023-01-11T22:10:58.4841755Z [ OK ] NumpyTests.TestEverythingReturnsViews (0 ms) 2023-01-11T22:10:58.4842111Z [ RUN ] NumpyTests.TestBroaderrorsIndexing 2023-01-11T22:10:58.5056937Z [ OK ] NumpyTests.TestBroaderrorsIndexing (21 ms) 2023-01-11T22:10:58.5057385Z [ RUN ] NumpyTests.TestTrivialFancyOutOfBounds 2023-01-11T22:10:58.5322216Z [ OK ] NumpyTests.TestTrivialFancyOutOfBounds (26 ms) 2023-01-11T22:10:58.5322698Z [ RUN ] NumpyTests.TestIndexIsLarger 2023-01-11T22:10:58.5323123Z [ OK ] NumpyTests.TestIndexIsLarger (0 ms) 2023-01-11T22:10:58.5323532Z [ RUN ] NumpyTests.TestBroadcastSubspace 2023-01-11T22:10:58.5325626Z [ OK ] NumpyTests.TestBroadcastSubspace (0 ms) 2023-01-11T22:10:58.5326281Z [----------] 18 tests from NumpyTests (114 ms total) 2023-01-11T22:10:58.5326561Z 2023-01-11T22:10:58.5326737Z [----------] 5 tests from TensorOptionsTest 2023-01-11T22:10:58.5327093Z [ RUN ] TensorOptionsTest.DefaultsToTheRightValues 2023-01-11T22:10:58.5327502Z [ OK ] TensorOptionsTest.DefaultsToTheRightValues (0 ms) 2023-01-11T22:10:58.5327972Z [ RUN ] TensorOptionsTest.UtilityFunctionsReturnTheRightTensorOptions 2023-01-11T22:10:58.5328492Z [ OK ] TensorOptionsTest.UtilityFunctionsReturnTheRightTensorOptions (0 ms) 2023-01-11T22:10:58.5328956Z [ RUN ] TensorOptionsTest.ConstructsWellFromCPUTypes 2023-01-11T22:10:58.5329376Z [ OK ] TensorOptionsTest.ConstructsWellFromCPUTypes (0 ms) 2023-01-11T22:10:58.5329793Z [ RUN ] TensorOptionsTest.ConstructsWellFromCPUTensors 2023-01-11T22:10:58.5330210Z [ OK ] TensorOptionsTest.ConstructsWellFromCPUTensors (0 ms) 2023-01-11T22:10:58.5330626Z [ RUN ] TensorOptionsTest.ConstructsWellFromVariables 2023-01-11T22:10:58.5331045Z [ OK ] TensorOptionsTest.ConstructsWellFromVariables (0 ms) 2023-01-11T22:10:58.5331586Z [----------] 5 tests from TensorOptionsTest (0 ms total) 2023-01-11T22:10:58.5331751Z 2023-01-11T22:10:58.5331898Z [----------] 1 test from DeviceTest 2023-01-11T22:10:58.5332223Z [ RUN ] DeviceTest.ParsesCorrectlyFromString 2023-01-11T22:10:58.5410721Z [ OK ] DeviceTest.ParsesCorrectlyFromString (8 ms) 2023-01-11T22:10:58.5411364Z [----------] 1 test from DeviceTest (8 ms total) 2023-01-11T22:10:58.5411631Z 2023-01-11T22:10:58.5411905Z [----------] 3 tests from DefaultDtypeTest 2023-01-11T22:10:58.5412474Z [ RUN ] DefaultDtypeTest.CanSetAndGetDefaultDtype 2023-01-11T22:10:58.5413175Z [ OK ] DefaultDtypeTest.CanSetAndGetDefaultDtype (0 ms) 2023-01-11T22:10:58.5413877Z [ RUN ] DefaultDtypeTest.NewTensorOptionsHasCorrectDefault 2023-01-11T22:10:58.5414469Z [ OK ] DefaultDtypeTest.NewTensorOptionsHasCorrectDefault (0 ms) 2023-01-11T22:10:58.5415203Z [ RUN ] 
DefaultDtypeTest.NewTensorsHaveCorrectDefaultDtype 2023-01-11T22:10:58.5415792Z [ OK ] DefaultDtypeTest.NewTensorsHaveCorrectDefaultDtype (0 ms) 2023-01-11T22:10:58.5416311Z [----------] 3 tests from DefaultDtypeTest (0 ms total) 2023-01-11T22:10:58.5416520Z 2023-01-11T22:10:58.5416709Z [----------] 1 test from TorchIncludeTest 2023-01-11T22:10:58.5417121Z [ RUN ] TorchIncludeTest.GetSetNumThreads 2023-01-11T22:10:58.5570663Z [ OK ] TorchIncludeTest.GetSetNumThreads (15 ms) 2023-01-11T22:10:58.5571364Z [----------] 1 test from TorchIncludeTest (15 ms total) 2023-01-11T22:10:58.5571584Z 2023-01-11T22:10:58.5571760Z [----------] 28 tests from InferenceModeTest 2023-01-11T22:10:58.5572084Z [ RUN ] InferenceModeTest.TestTLSState 2023-01-11T22:10:58.5572408Z [ OK ] InferenceModeTest.TestTLSState (0 ms) 2023-01-11T22:10:58.5572797Z [ RUN ] InferenceModeTest.TestInferenceTensorCreation 2023-01-11T22:10:58.5573215Z [ OK ] InferenceModeTest.TestInferenceTensorCreation (0 ms) 2023-01-11T22:10:58.5573615Z [ RUN ] InferenceModeTest.TestExistingAutogradSession 2023-01-11T22:10:58.5642471Z [ OK ] InferenceModeTest.TestExistingAutogradSession (7 ms) 2023-01-11T22:10:58.5643410Z [ RUN ] InferenceModeTest.TestInferenceTensorInInferenceModeFunctionalOp 2023-01-11T22:10:58.5644463Z [ OK ] InferenceModeTest.TestInferenceTensorInInferenceModeFunctionalOp (0 ms) 2023-01-11T22:10:58.5645456Z [ RUN ] InferenceModeTest.TestInferenceTensorInInferenceModeInplaceOp 2023-01-11T22:10:58.5646438Z [ OK ] InferenceModeTest.TestInferenceTensorInInferenceModeInplaceOp (0 ms) 2023-01-11T22:10:58.5647361Z [ RUN ] InferenceModeTest.TestInferenceTensorInInferenceModeViewOp 2023-01-11T22:10:58.5648303Z [ OK ] InferenceModeTest.TestInferenceTensorInInferenceModeViewOp (0 ms) 2023-01-11T22:10:58.5649257Z [ RUN ] InferenceModeTest.TestInferenceTensorInNormalModeFunctionalOp 2023-01-11T22:10:58.5650216Z [ OK ] InferenceModeTest.TestInferenceTensorInNormalModeFunctionalOp (0 ms) 2023-01-11T22:10:58.5651159Z [ RUN ] InferenceModeTest.TestInferenceTensorInNormalModeInplaceOp 2023-01-11T22:10:58.5698876Z [ OK ] InferenceModeTest.TestInferenceTensorInNormalModeInplaceOp (5 ms) 2023-01-11T22:10:58.5699780Z [ RUN ] InferenceModeTest.TestInferenceTensorInNormalModeViewOp 2023-01-11T22:10:58.5700608Z [ OK ] InferenceModeTest.TestInferenceTensorInNormalModeViewOp (0 ms) 2023-01-11T22:10:58.5701127Z [ RUN ] InferenceModeTest.TestNormalTensorInplaceOutputInInferenceMode 2023-01-11T22:10:58.5701667Z [ OK ] InferenceModeTest.TestNormalTensorInplaceOutputInInferenceMode (0 ms) 2023-01-11T22:10:58.5702199Z [ RUN ] InferenceModeTest.TestNormalTensorInplaceOutputInNormalMode 2023-01-11T22:10:58.5702976Z [ OK ] InferenceModeTest.TestNormalTensorInplaceOutputInNormalMode (0 ms) 2023-01-11T22:10:58.5703492Z [ RUN ] InferenceModeTest.TestNormalTensorViewOutputInInferenceMode 2023-01-11T22:10:58.5704005Z [ OK ] InferenceModeTest.TestNormalTensorViewOutputInInferenceMode (0 ms) 2023-01-11T22:10:58.5704490Z [ RUN ] InferenceModeTest.TestNormalTensorViewOutputInNormalMode 2023-01-11T22:10:58.5732884Z [ OK ] InferenceModeTest.TestNormalTensorViewOutputInNormalMode (3 ms) 2023-01-11T22:10:58.5733536Z [ RUN ] InferenceModeTest.TestMixInferenceAndNormalTensorFunctionalOp 2023-01-11T22:10:58.5764642Z [ OK ] InferenceModeTest.TestMixInferenceAndNormalTensorFunctionalOp (3 ms) 2023-01-11T22:10:58.5765222Z [ RUN ] InferenceModeTest.TestMixInferenceAndNormalTensorInplaceOp 2023-01-11T22:10:58.5845842Z [ OK ] InferenceModeTest.TestMixInferenceAndNormalTensorInplaceOp 
(8 ms) 2023-01-11T22:10:58.5846494Z [ RUN ] InferenceModeTest.TestMixInferenceAndNormalTensorViewOp 2023-01-11T22:10:58.5846991Z [ OK ] InferenceModeTest.TestMixInferenceAndNormalTensorViewOp (0 ms) 2023-01-11T22:10:58.5847427Z [ RUN ] InferenceModeTest.TestHandleDirectViewOnRebase 2023-01-11T22:10:58.5877966Z [ OK ] InferenceModeTest.TestHandleDirectViewOnRebase (3 ms) 2023-01-11T22:10:58.5878405Z [ RUN ] InferenceModeTest.TestHandleInDirectViewOnRebase 2023-01-11T22:10:58.5900025Z [ OK ] InferenceModeTest.TestHandleInDirectViewOnRebase (2 ms) 2023-01-11T22:10:58.5900462Z [ RUN ] InferenceModeTest.TestCreationMetaPropagation 2023-01-11T22:10:58.5961749Z [ OK ] InferenceModeTest.TestCreationMetaPropagation (6 ms) 2023-01-11T22:10:58.5962190Z [ RUN ] InferenceModeTest.TestCreationMetaPropagationInput 2023-01-11T22:10:58.6084120Z [ OK ] InferenceModeTest.TestCreationMetaPropagationInput (12 ms) 2023-01-11T22:10:58.6084583Z [ RUN ] InferenceModeTest.TestInplaceCopyOnInferenceTensor 2023-01-11T22:10:58.6155818Z [ OK ] InferenceModeTest.TestInplaceCopyOnInferenceTensor (7 ms) 2023-01-11T22:10:58.6156273Z [ RUN ] InferenceModeTest.TestSetRequiresGradInNormalMode 2023-01-11T22:10:58.6166625Z [ OK ] InferenceModeTest.TestSetRequiresGradInNormalMode (1 ms) 2023-01-11T22:10:58.6167048Z [ RUN ] InferenceModeTest.TestAccessVersionCounter 2023-01-11T22:10:58.6200033Z [ OK ] InferenceModeTest.TestAccessVersionCounter (3 ms) 2023-01-11T22:10:58.6200547Z [ RUN ] InferenceModeTest.TestInplaceUpdateInferenceTensorWithNormalTensor 2023-01-11T22:10:58.6270970Z [ OK ] InferenceModeTest.TestInplaceUpdateInferenceTensorWithNormalTensor (7 ms) 2023-01-11T22:10:58.6271673Z [ RUN ] InferenceModeTest.TestComplexViewInInferenceMode 2023-01-11T22:10:58.6272259Z [ OK ] InferenceModeTest.TestComplexViewInInferenceMode (0 ms) 2023-01-11T22:10:58.6272967Z [ RUN ] InferenceModeTest.TestComplexViewInNormalMode 2023-01-11T22:10:58.6273739Z [ OK ] InferenceModeTest.TestComplexViewInNormalMode (0 ms) 2023-01-11T22:10:58.6274225Z [ RUN ] InferenceModeTest.TestCustomFunction 2023-01-11T22:10:58.6274602Z [ OK ] InferenceModeTest.TestCustomFunction (0 ms) 2023-01-11T22:10:58.6275025Z [ RUN ] InferenceModeTest.TestLegacyAutoNonVariableTypeModeWarning 2023-01-11T22:10:58.6275530Z [ OK ] InferenceModeTest.TestLegacyAutoNonVariableTypeModeWarning (0 ms) 2023-01-11T22:10:58.6275963Z [----------] 28 tests from InferenceModeTest (70 ms total) 2023-01-11T22:10:58.6276131Z 2023-01-11T22:10:58.6276269Z [----------] 4 tests from GradModeTest 2023-01-11T22:10:58.6276609Z [ RUN ] GradModeTest.TestRequiresGradFunctionalOp 2023-01-11T22:10:58.6277121Z [ OK ] GradModeTest.TestRequiresGradFunctionalOp (0 ms) 2023-01-11T22:10:58.6277503Z [ RUN ] GradModeTest.TestRequiresGradInplaceOp 2023-01-11T22:10:58.6277867Z [ OK ] GradModeTest.TestRequiresGradInplaceOp (0 ms) 2023-01-11T22:10:58.6278231Z [ RUN ] GradModeTest.TestRequiresGradViewOp 2023-01-11T22:10:58.6278593Z [ OK ] GradModeTest.TestRequiresGradViewOp (0 ms) 2023-01-11T22:10:58.6278955Z [ RUN ] GradModeTest.TestRequiresGradViewOpExiting 2023-01-11T22:10:58.6305747Z [ OK ] GradModeTest.TestRequiresGradViewOpExiting (3 ms) 2023-01-11T22:10:58.6306384Z [----------] 4 tests from GradModeTest (3 ms total) 2023-01-11T22:10:58.6306685Z 2023-01-11T22:10:58.6306947Z [----------] 3 tests from OperationTest 2023-01-11T22:10:58.6307213Z [ RUN ] OperationTest.Lerp 2023-01-11T22:10:58.6313747Z [ OK ] OperationTest.Lerp (0 ms) 2023-01-11T22:10:58.6314255Z [ RUN ] OperationTest.Cross 
2023-01-11T22:10:58.6347446Z [ OK ] OperationTest.Cross (3 ms) 2023-01-11T22:10:58.6347994Z [ RUN ] OperationTest.Linear_out 2023-01-11T22:10:58.6350501Z [ OK ] OperationTest.Linear_out (0 ms) 2023-01-11T22:10:58.6351099Z [----------] 3 tests from OperationTest (4 ms total) 2023-01-11T22:10:58.6351284Z 2023-01-11T22:10:58.6351455Z [----------] Global test environment tear-down 2023-01-11T22:10:58.6451469Z [==========] 992 tests from 48 test suites ran. (43492 ms total) 2023-01-11T22:10:58.6451958Z [ PASSED ] 992 tests. 2023-01-11T22:10:58.7326480Z + /opt/conda/lib/python3.10/site-packages/torch/bin/test_tensorexpr --gtest_output=xml:test/test-reports/cpp-unittest/test_libtorch/test_tensorexpr.xml 2023-01-11T22:10:59.1096751Z CUDA not available. Disabling CUDA and MultiCUDA tests 2023-01-11T22:10:59.1101234Z Note: Google Test filter = *-*_CUDA:*_MultiCUDA 2023-01-11T22:10:59.1101832Z [==========] Running 801 tests from 25 test suites. 2023-01-11T22:10:59.1102216Z [----------] Global test environment set-up. 2023-01-11T22:10:59.1102638Z [----------] 1 test from Approx 2023-01-11T22:10:59.1102889Z [ RUN ] Approx.log_vml 2023-01-11T22:11:00.3533744Z [ OK ] Approx.log_vml (1242 ms) 2023-01-11T22:11:00.3534298Z [----------] 1 test from Approx (1243 ms total) 2023-01-11T22:11:00.3534487Z 2023-01-11T22:11:00.3534662Z [----------] 34 tests from ATen 2023-01-11T22:11:00.3534900Z [ RUN ] ATen._cast_Float 2023-01-11T22:11:00.3535168Z [ OK ] ATen._cast_Float (0 ms) 2023-01-11T22:11:00.3535416Z [ RUN ] ATen.negInt 2023-01-11T22:11:00.3538080Z [ OK ] ATen.negInt (0 ms) 2023-01-11T22:11:00.3538337Z [ RUN ] ATen.negFloat 2023-01-11T22:11:00.3541725Z [ OK ] ATen.negFloat (0 ms) 2023-01-11T22:11:00.3541979Z [ RUN ] ATen.addInt 2023-01-11T22:11:00.3548696Z [ OK ] ATen.addInt (0 ms) 2023-01-11T22:11:00.3548949Z [ RUN ] ATen.addFloat 2023-01-11T22:11:00.3555280Z [ OK ] ATen.addFloat (0 ms) 2023-01-11T22:11:00.3555648Z [ RUN ] ATen.subInt 2023-01-11T22:11:00.3561845Z [ OK ] ATen.subInt (0 ms) 2023-01-11T22:11:00.3562095Z [ RUN ] ATen.subFloat 2023-01-11T22:11:00.3568507Z [ OK ] ATen.subFloat (0 ms) 2023-01-11T22:11:00.3568824Z [ RUN ] ATen.lerp 2023-01-11T22:11:00.3576312Z [ OK ] ATen.lerp (0 ms) 2023-01-11T22:11:00.3576786Z [ RUN ] ATen.addcmulInt 2023-01-11T22:11:00.3584016Z [ OK ] ATen.addcmulInt (0 ms) 2023-01-11T22:11:00.3584526Z [ RUN ] ATen.addcmulFloat 2023-01-11T22:11:00.3591712Z [ OK ] ATen.addcmulFloat (0 ms) 2023-01-11T22:11:00.3592434Z [ RUN ] ATen.mulInt 2023-01-11T22:11:00.3595992Z [ OK ] ATen.mulInt (0 ms) 2023-01-11T22:11:00.3596454Z [ RUN ] ATen.mulFloat 2023-01-11T22:11:00.3600442Z [ OK ] ATen.mulFloat (0 ms) 2023-01-11T22:11:00.3600887Z [ RUN ] ATen.divInt 2023-01-11T22:11:00.3604863Z [ OK ] ATen.divInt (0 ms) 2023-01-11T22:11:00.3605323Z [ RUN ] ATen.divFloat 2023-01-11T22:11:00.3609283Z [ OK ] ATen.divFloat (0 ms) 2023-01-11T22:11:00.3609738Z [ RUN ] ATen.maxInt 2023-01-11T22:11:00.3613720Z [ OK ] ATen.maxInt (0 ms) 2023-01-11T22:11:00.3614161Z [ RUN ] ATen.maxFloat 2023-01-11T22:11:00.3618130Z [ OK ] ATen.maxFloat (0 ms) 2023-01-11T22:11:00.3618590Z [ RUN ] ATen.minInt 2023-01-11T22:11:00.3622580Z [ OK ] ATen.minInt (0 ms) 2023-01-11T22:11:00.3623022Z [ RUN ] ATen.minFloat 2023-01-11T22:11:00.3627080Z [ OK ] ATen.minFloat (0 ms) 2023-01-11T22:11:00.3627525Z [ RUN ] ATen.reluInt 2023-01-11T22:11:00.3630142Z [ OK ] ATen.reluInt (0 ms) 2023-01-11T22:11:00.3630607Z [ RUN ] ATen.reluFloat 2023-01-11T22:11:00.3633677Z [ OK ] ATen.reluFloat (0 ms) 2023-01-11T22:11:00.3634122Z [ RUN ] 
ATen.logFloat 2023-01-11T22:11:00.3637062Z [ OK ] ATen.logFloat (0 ms) 2023-01-11T22:11:00.3637538Z [ RUN ] ATen.fastLogFloat 2023-01-11T22:11:00.3786008Z [ OK ] ATen.fastLogFloat (14 ms) 2023-01-11T22:11:00.3786495Z [ RUN ] ATen.fastTanhFloat 2023-01-11T22:11:00.3843075Z [ OK ] ATen.fastTanhFloat (5 ms) 2023-01-11T22:11:00.3843607Z [ RUN ] ATen.fastSigmoidFloat 2023-01-11T22:11:00.3918872Z [ OK ] ATen.fastSigmoidFloat (7 ms) 2023-01-11T22:11:00.3919395Z [ RUN ] ATen.log10Float 2023-01-11T22:11:00.3922321Z [ OK ] ATen.log10Float (0 ms) 2023-01-11T22:11:00.3922813Z [ RUN ] ATen.log2Float 2023-01-11T22:11:00.3925639Z [ OK ] ATen.log2Float (0 ms) 2023-01-11T22:11:00.3926118Z [ RUN ] ATen.expFloat 2023-01-11T22:11:00.3929005Z [ OK ] ATen.expFloat (0 ms) 2023-01-11T22:11:00.3929456Z [ RUN ] ATen.erfFloat 2023-01-11T22:11:00.3932434Z [ OK ] ATen.erfFloat (0 ms) 2023-01-11T22:11:00.3932899Z [ RUN ] ATen.cosFloat 2023-01-11T22:11:00.3935836Z [ OK ] ATen.cosFloat (0 ms) 2023-01-11T22:11:00.3936285Z [ RUN ] ATen.eqInt 2023-01-11T22:11:00.3940909Z [ OK ] ATen.eqInt (0 ms) 2023-01-11T22:11:00.3941355Z [ RUN ] ATen.geInt 2023-01-11T22:11:00.3946010Z [ OK ] ATen.geInt (0 ms) 2023-01-11T22:11:00.3946445Z [ RUN ] ATen.gtInt 2023-01-11T22:11:00.3950967Z [ OK ] ATen.gtInt (0 ms) 2023-01-11T22:11:00.3951409Z [ RUN ] ATen.leInt 2023-01-11T22:11:00.3955949Z [ OK ] ATen.leInt (0 ms) 2023-01-11T22:11:00.3956401Z [ RUN ] ATen.ltInt 2023-01-11T22:11:00.3960997Z [ OK ] ATen.ltInt (0 ms) 2023-01-11T22:11:00.3961524Z [----------] 34 tests from ATen (42 ms total) 2023-01-11T22:11:00.3961789Z 2023-01-11T22:11:00.3961969Z [----------] 26 tests from BoundsInference 2023-01-11T22:11:00.3962253Z [ RUN ] BoundsInference._1 2023-01-11T22:11:00.3963900Z [ OK ] BoundsInference._1 (0 ms) 2023-01-11T22:11:00.3964512Z [ RUN ] BoundsInference._2 2023-01-11T22:11:00.3966007Z [ OK ] BoundsInference._2 (0 ms) 2023-01-11T22:11:00.3966482Z [ RUN ] BoundsInference._3 2023-01-11T22:11:00.3968685Z [ OK ] BoundsInference._3 (0 ms) 2023-01-11T22:11:00.3969180Z [ RUN ] BoundsInference._4 2023-01-11T22:11:00.3974281Z [ OK ] BoundsInference._4 (0 ms) 2023-01-11T22:11:00.3974978Z [ RUN ] BoundsInference._5 2023-01-11T22:11:00.3987197Z [ OK ] BoundsInference._5 (1 ms) 2023-01-11T22:11:00.3987707Z [ RUN ] BoundsInference._6 2023-01-11T22:11:00.3996068Z [ OK ] BoundsInference._6 (0 ms) 2023-01-11T22:11:00.3996598Z [ RUN ] BoundsInference.Adjacent 2023-01-11T22:11:00.4001841Z [ OK ] BoundsInference.Adjacent (0 ms) 2023-01-11T22:11:00.4002461Z [ RUN ] BoundsInference.MultipleTopLoopLoad 2023-01-11T22:11:00.4006857Z [ OK ] BoundsInference.MultipleTopLoopLoad (0 ms) 2023-01-11T22:11:00.4007496Z [ RUN ] BoundsInference.MultipleTopLoopStore 2023-01-11T22:11:00.4011691Z [ OK ] BoundsInference.MultipleTopLoopStore (0 ms) 2023-01-11T22:11:00.4012276Z [ RUN ] BoundsInference.CacheReads 2023-01-11T22:11:00.4038324Z [ OK ] BoundsInference.CacheReads (2 ms) 2023-01-11T22:11:00.4039012Z [ RUN ] BoundsInference.Flattened 2023-01-11T22:11:00.4050762Z [ OK ] BoundsInference.Flattened (1 ms) 2023-01-11T22:11:00.4051369Z [ RUN ] BoundsInference.GetPotentialHazards 2023-01-11T22:11:00.4052114Z [ OK ] BoundsInference.GetPotentialHazards (0 ms) 2023-01-11T22:11:00.4052821Z [ RUN ] BoundsInference.GetPotentialHazardsLoopNoHazard 2023-01-11T22:11:00.4056100Z [ OK ] BoundsInference.GetPotentialHazardsLoopNoHazard (0 ms) 2023-01-11T22:11:00.4056754Z [ RUN ] BoundsInference.GetPotentialHazardsLoopCall 2023-01-11T22:11:00.4060881Z [ OK ] 
BoundsInference.GetPotentialHazardsLoopCall (0 ms) 2023-01-11T22:11:00.4061523Z [ RUN ] BoundsInference.GetPotentialHazardsLoopSplit 2023-01-11T22:11:00.4072631Z [ OK ] BoundsInference.GetPotentialHazardsLoopSplit (1 ms) 2023-01-11T22:11:00.4073361Z [ RUN ] BoundsInference.HasConflictingOverlapSameBufferWithPartialOverlap 2023-01-11T22:11:00.4077891Z [ OK ] BoundsInference.HasConflictingOverlapSameBufferWithPartialOverlap (0 ms) 2023-01-11T22:11:00.4078584Z [ RUN ] BoundsInference.HasConflictingOverlapSameBufferWithFullOverlap 2023-01-11T22:11:00.4079735Z [ OK ] BoundsInference.HasConflictingOverlapSameBufferWithFullOverlap (0 ms) 2023-01-11T22:11:00.4080501Z [ RUN ] BoundsInference.HasConflictingOverlapSameBufferWithFullOverlapRAW 2023-01-11T22:11:00.4082206Z [ OK ] BoundsInference.HasConflictingOverlapSameBufferWithFullOverlapRAW (0 ms) 2023-01-11T22:11:00.4082914Z [ RUN ] BoundsInference.HasConflictingOverlapSameBufferNotOverlapping 2023-01-11T22:11:00.4085407Z [ OK ] BoundsInference.HasConflictingOverlapSameBufferNotOverlapping (0 ms) 2023-01-11T22:11:00.4086074Z [ RUN ] BoundsInference.HasConflictingOverlap2DBufferWithOverlap 2023-01-11T22:11:00.4099352Z [ OK ] BoundsInference.HasConflictingOverlap2DBufferWithOverlap (1 ms) 2023-01-11T22:11:00.4100022Z [ RUN ] BoundsInference.HasConflictingOverlap2DBufferWithNoOverlap 2023-01-11T22:11:00.4109322Z [ OK ] BoundsInference.HasConflictingOverlap2DBufferWithNoOverlap (0 ms) 2023-01-11T22:11:00.4110213Z [ RUN ] BoundsInference.HasConflictingOverlapDifferentBuffers 2023-01-11T22:11:00.4112358Z [ OK ] BoundsInference.HasConflictingOverlapDifferentBuffers (0 ms) 2023-01-11T22:11:00.4112830Z [ RUN ] BoundsInference.HasConflictingOverlapDueToRAWDependence 2023-01-11T22:11:00.4115203Z [ OK ] BoundsInference.HasConflictingOverlapDueToRAWDependence (0 ms) 2023-01-11T22:11:00.4115753Z [ RUN ] BoundsInference.HasConflictingOverlapDueToWARDependence 2023-01-11T22:11:00.4118011Z [ OK ] BoundsInference.HasConflictingOverlapDueToWARDependence (0 ms) 2023-01-11T22:11:00.4118539Z [ RUN ] BoundsInference.HasConflictingOverlapWithLoads 2023-01-11T22:11:00.4121021Z [ OK ] BoundsInference.HasConflictingOverlapWithLoads (0 ms) 2023-01-11T22:11:00.4121477Z [ RUN ] BoundsInference.IsOverlapping 2023-01-11T22:11:00.4143267Z [ OK ] BoundsInference.IsOverlapping (2 ms) 2023-01-11T22:11:00.4143894Z [----------] 26 tests from BoundsInference (18 ms total) 2023-01-11T22:11:00.4144153Z 2023-01-11T22:11:00.4144293Z [----------] 4 tests from Conv 2023-01-11T22:11:00.4144560Z [ RUN ] Conv.DepthwiseConv2D 2023-01-11T22:11:00.6757895Z [ OK ] Conv.DepthwiseConv2D (261 ms) 2023-01-11T22:11:00.6758324Z [ RUN ] Conv.DepthwiseConv2DNoBias 2023-01-11T22:11:00.9305367Z [ OK ] Conv.DepthwiseConv2DNoBias (254 ms) 2023-01-11T22:11:00.9306053Z [ RUN ] Conv.DepthwiseConv2DDynamicShapes 2023-01-11T22:11:01.1272817Z [ OK ] Conv.DepthwiseConv2DDynamicShapes (196 ms) 2023-01-11T22:11:01.1273491Z [ RUN ] Conv.Conv2D 2023-01-11T22:11:02.1009589Z [ OK ] Conv.Conv2D (973 ms) 2023-01-11T22:11:02.1010172Z [----------] 4 tests from Conv (1686 ms total) 2023-01-11T22:11:02.1010444Z 2023-01-11T22:11:02.1010743Z [----------] 28 tests from CppPrinter 2023-01-11T22:11:02.1011157Z [ RUN ] CppPrinter.IntImm 2023-01-11T22:11:02.1011561Z [ OK ] CppPrinter.IntImm (0 ms) 2023-01-11T22:11:02.1011984Z [ RUN ] CppPrinter.FloatImm 2023-01-11T22:11:02.1012450Z [ OK ] CppPrinter.FloatImm (0 ms) 2023-01-11T22:11:02.1012846Z [ RUN ] CppPrinter.FloatImm1 2023-01-11T22:11:02.1013298Z [ OK ] CppPrinter.FloatImm1 (0 ms) 
2023-01-11T22:11:02.1013626Z [ RUN ] CppPrinter.DoubleImm 2023-01-11T22:11:02.1014008Z [ OK ] CppPrinter.DoubleImm (0 ms) 2023-01-11T22:11:02.1014477Z [ RUN ] CppPrinter.DoubleImm1 2023-01-11T22:11:02.1015004Z [ OK ] CppPrinter.DoubleImm1 (0 ms) 2023-01-11T22:11:02.1015476Z [ RUN ] CppPrinter.HalfImm 2023-01-11T22:11:02.1015789Z [ OK ] CppPrinter.HalfImm (0 ms) 2023-01-11T22:11:02.1016052Z [ RUN ] CppPrinter.Add 2023-01-11T22:11:02.1016314Z [ OK ] CppPrinter.Add (0 ms) 2023-01-11T22:11:02.1016567Z [ RUN ] CppPrinter.AddExpr1 2023-01-11T22:11:02.1016848Z [ OK ] CppPrinter.AddExpr1 (0 ms) 2023-01-11T22:11:02.1017124Z [ RUN ] CppPrinter.AddExpr2 2023-01-11T22:11:02.1017392Z [ OK ] CppPrinter.AddExpr2 (0 ms) 2023-01-11T22:11:02.1017665Z [ RUN ] CppPrinter.AddExpr3 2023-01-11T22:11:02.1017945Z [ OK ] CppPrinter.AddExpr3 (0 ms) 2023-01-11T22:11:02.1018204Z [ RUN ] CppPrinter.Mod 2023-01-11T22:11:02.1018450Z [ OK ] CppPrinter.Mod (0 ms) 2023-01-11T22:11:02.1018712Z [ RUN ] CppPrinter.ModFloat 2023-01-11T22:11:02.1018995Z [ OK ] CppPrinter.ModFloat (0 ms) 2023-01-11T22:11:02.1019246Z [ RUN ] CppPrinter.Max 2023-01-11T22:11:02.1019504Z [ OK ] CppPrinter.Max (0 ms) 2023-01-11T22:11:02.1019766Z [ RUN ] CppPrinter.MaxFloat 2023-01-11T22:11:02.1020028Z [ OK ] CppPrinter.MaxFloat (0 ms) 2023-01-11T22:11:02.1020296Z [ RUN ] CppPrinter.MaxHalf 2023-01-11T22:11:02.1020569Z [ OK ] CppPrinter.MaxHalf (0 ms) 2023-01-11T22:11:02.1020813Z [ RUN ] CppPrinter.And 2023-01-11T22:11:02.1021073Z [ OK ] CppPrinter.And (0 ms) 2023-01-11T22:11:02.1021354Z [ RUN ] CppPrinter.CompareSelect 2023-01-11T22:11:02.1021645Z [ OK ] CppPrinter.CompareSelect (0 ms) 2023-01-11T22:11:02.1021940Z [ RUN ] CppPrinter.IfThenElse 2023-01-11T22:11:02.1022230Z [ OK ] CppPrinter.IfThenElse (0 ms) 2023-01-11T22:11:02.1022804Z [ RUN ] CppPrinter.AllocateFree 2023-01-11T22:11:02.1023311Z [ OK ] CppPrinter.AllocateFree (0 ms) 2023-01-11T22:11:02.1023598Z [ RUN ] CppPrinter.LoadStore 2023-01-11T22:11:02.1023880Z [ OK ] CppPrinter.LoadStore (0 ms) 2023-01-11T22:11:02.1024132Z [ RUN ] CppPrinter.Var 2023-01-11T22:11:02.1024394Z [ OK ] CppPrinter.Var (0 ms) 2023-01-11T22:11:02.1024648Z [ RUN ] CppPrinter.Cast 2023-01-11T22:11:02.1024903Z [ OK ] CppPrinter.Cast (0 ms) 2023-01-11T22:11:02.1025165Z [ RUN ] CppPrinter.BitCast 2023-01-11T22:11:02.1025438Z [ OK ] CppPrinter.BitCast (0 ms) 2023-01-11T22:11:02.1025685Z [ RUN ] CppPrinter.Let 2023-01-11T22:11:02.1025947Z [ OK ] CppPrinter.Let (0 ms) 2023-01-11T22:11:02.1028641Z [ RUN ] CppPrinter.For 2023-01-11T22:11:02.1029095Z [ OK ] CppPrinter.For (0 ms) 2023-01-11T22:11:02.1029560Z [ RUN ] CppPrinter.Cond 2023-01-11T22:11:02.1030192Z [ OK ] CppPrinter.Cond (0 ms) 2023-01-11T22:11:02.1030694Z [ RUN ] CppPrinter.Intrinsics 2023-01-11T22:11:02.1031200Z [ OK ] CppPrinter.Intrinsics (0 ms) 2023-01-11T22:11:02.1031736Z [ RUN ] CppPrinter.ExternalCall 2023-01-11T22:11:02.1032172Z [ OK ] CppPrinter.ExternalCall (0 ms) 2023-01-11T22:11:02.1032746Z [----------] 28 tests from CppPrinter (0 ms total) 2023-01-11T22:11:02.1033033Z 2023-01-11T22:11:02.1033261Z [----------] 8 tests from DynamicShapes 2023-01-11T22:11:02.1033554Z [ RUN ] DynamicShapes.SimpleGraph 2023-01-11T22:11:02.1867966Z [ OK ] DynamicShapes.SimpleGraph (85 ms) 2023-01-11T22:11:02.1868599Z [ RUN ] DynamicShapes.GraphWith2InputsSameDims 2023-01-11T22:11:02.2606551Z [ OK ] DynamicShapes.GraphWith2InputsSameDims (73 ms) 2023-01-11T22:11:02.2607147Z [ RUN ] DynamicShapes.GraphWith2InputsAndBroadcast 2023-01-11T22:11:02.3317618Z [ OK ] 
DynamicShapes.GraphWith2InputsAndBroadcast (71 ms) 2023-01-11T22:11:02.3318258Z [ RUN ] DynamicShapes.GraphWithPartiallySymbolicOutput 2023-01-11T22:11:02.3712738Z [ OK ] DynamicShapes.GraphWithPartiallySymbolicOutput (39 ms) 2023-01-11T22:11:02.3713364Z [ RUN ] DynamicShapes.GraphWithSymbolicStrides 2023-01-11T22:11:02.5370193Z [ OK ] DynamicShapes.GraphWithSymbolicStrides (165 ms) 2023-01-11T22:11:02.5370627Z [ RUN ] DynamicShapes.GraphWithCatAndBroadcast 2023-01-11T22:11:02.9745329Z [ OK ] DynamicShapes.GraphWithCatAndBroadcast (437 ms) 2023-01-11T22:11:02.9745733Z [ RUN ] DynamicShapes.GraphFromModel 2023-01-11T22:11:03.3504437Z [ OK ] DynamicShapes.GraphFromModel (375 ms) 2023-01-11T22:11:03.3504825Z [ RUN ] DynamicShapes.MultiThreadedExecution 2023-01-11T22:11:03.4257947Z [ OK ] DynamicShapes.MultiThreadedExecution (75 ms) 2023-01-11T22:11:03.4258596Z [----------] 8 tests from DynamicShapes (1324 ms total) 2023-01-11T22:11:03.4258851Z 2023-01-11T22:11:03.4259097Z [----------] 30 tests from Expr 2023-01-11T22:11:03.4259570Z [ RUN ] Expr.BasicValueTest 2023-01-11T22:11:03.4259973Z [ OK ] Expr.BasicValueTest (0 ms) 2023-01-11T22:11:03.4260398Z [ RUN ] Expr.BasicValueTest02 2023-01-11T22:11:03.4260803Z [ OK ] Expr.BasicValueTest02 (0 ms) 2023-01-11T22:11:03.4261212Z [ RUN ] Expr.IsChannelsLastContiguous 2023-01-11T22:11:03.4261559Z [ OK ] Expr.IsChannelsLastContiguous (0 ms) 2023-01-11T22:11:03.4261841Z [ RUN ] Expr.LetTest01 2023-01-11T22:11:03.4262099Z [ OK ] Expr.LetTest01 (0 ms) 2023-01-11T22:11:03.4262338Z [ RUN ] Expr.LetTest02 2023-01-11T22:11:03.4262774Z [ OK ] Expr.LetTest02 (0 ms) 2023-01-11T22:11:03.4263240Z [ RUN ] Expr.LetStmtTest01 2023-01-11T22:11:03.4263501Z [ OK ] Expr.LetStmtTest01 (0 ms) 2023-01-11T22:11:03.4263754Z [ RUN ] Expr.IntTest 2023-01-11T22:11:03.4264019Z [ OK ] Expr.IntTest (0 ms) 2023-01-11T22:11:03.4264267Z [ RUN ] Expr.FloatTest 2023-01-11T22:11:03.4264512Z [ OK ] Expr.FloatTest (0 ms) 2023-01-11T22:11:03.4264759Z [ RUN ] Expr.ByteTest 2023-01-11T22:11:03.4265011Z [ OK ] Expr.ByteTest (0 ms) 2023-01-11T22:11:03.4265243Z [ RUN ] Expr.CharTest 2023-01-11T22:11:03.4265495Z [ OK ] Expr.CharTest (0 ms) 2023-01-11T22:11:03.4265740Z [ RUN ] Expr.ShortTest 2023-01-11T22:11:03.4265985Z [ OK ] Expr.ShortTest (0 ms) 2023-01-11T22:11:03.4266231Z [ RUN ] Expr.LongTest 2023-01-11T22:11:03.4266480Z [ OK ] Expr.LongTest (0 ms) 2023-01-11T22:11:03.4266710Z [ RUN ] Expr.HalfTest 2023-01-11T22:11:03.4267021Z [ OK ] Expr.HalfTest (0 ms) 2023-01-11T22:11:03.4267273Z [ RUN ] Expr.DoubleTest 2023-01-11T22:11:03.4267526Z [ OK ] Expr.DoubleTest (0 ms) 2023-01-11T22:11:03.4267780Z [ RUN ] Expr.VectorAdd01 2023-01-11T22:11:03.4276002Z [ OK ] Expr.VectorAdd01 (1 ms) 2023-01-11T22:11:03.4276264Z [ RUN ] Expr.CompareSelectEQ 2023-01-11T22:11:03.4312588Z [ OK ] Expr.CompareSelectEQ (3 ms) 2023-01-11T22:11:03.4312888Z [ RUN ] Expr.CompareSelectDtypes 2023-01-11T22:11:03.4350142Z [ OK ] Expr.CompareSelectDtypes (3 ms) 2023-01-11T22:11:03.4350464Z [ RUN ] Expr.IntrinsicsDtypes 2023-01-11T22:11:03.4356509Z [ OK ] Expr.IntrinsicsDtypes (0 ms) 2023-01-11T22:11:03.4357041Z [ RUN ] Expr.Substitute01 2023-01-11T22:11:03.4357513Z [ OK ] Expr.Substitute01 (0 ms) 2023-01-11T22:11:03.4357945Z [ RUN ] Expr.Math01 2023-01-11T22:11:03.4358376Z [ OK ] Expr.Math01 (0 ms) 2023-01-11T22:11:03.4358786Z [ RUN ] Expr.UnaryMath01 2023-01-11T22:11:03.4359688Z [ OK ] Expr.UnaryMath01 (0 ms) 2023-01-11T22:11:03.4360204Z [ RUN ] Expr.BinaryMath01 2023-01-11T22:11:03.4360487Z [ OK ] Expr.BinaryMath01 (0 ms) 
2023-01-11T22:11:03.4360844Z [ RUN ] Expr.LogicalOps01 2023-01-11T22:11:03.4361306Z [ OK ] Expr.LogicalOps01 (0 ms) 2023-01-11T22:11:03.4361761Z [ RUN ] Expr.LogicalOps02 2023-01-11T22:11:03.4362264Z [ OK ] Expr.LogicalOps02 (0 ms) 2023-01-11T22:11:03.4362591Z [ RUN ] Expr.LogicalOps03 2023-01-11T22:11:03.4363450Z [ OK ] Expr.LogicalOps03 (0 ms) 2023-01-11T22:11:03.4363925Z [ RUN ] Expr.BitwiseOps 2023-01-11T22:11:03.4364334Z [ OK ] Expr.BitwiseOps (0 ms) 2023-01-11T22:11:03.4364595Z [ RUN ] Expr.DynamicShapeAdd 2023-01-11T22:11:03.4366451Z [ OK ] Expr.DynamicShapeAdd (0 ms) 2023-01-11T22:11:03.4366725Z [ RUN ] Expr.OutOfBounds 2023-01-11T22:11:03.4392379Z [ OK ] Expr.OutOfBounds (2 ms) 2023-01-11T22:11:03.4392667Z [ RUN ] Expr.OutOfBounds2d 2023-01-11T22:11:03.4400419Z [ OK ] Expr.OutOfBounds2d (0 ms) 2023-01-11T22:11:03.4400735Z [ RUN ] Expr.OutOfBounds2dFlattenedIndex 2023-01-11T22:11:03.4405181Z [ OK ] Expr.OutOfBounds2dFlattenedIndex (0 ms) 2023-01-11T22:11:03.4405584Z [----------] 30 tests from Expr (14 ms total) 2023-01-11T22:11:03.4405736Z 2023-01-11T22:11:03.4405889Z [----------] 16 tests from ExternalCall 2023-01-11T22:11:03.4406159Z [ RUN ] ExternalCall.Conv1d_float 2023-01-11T22:11:03.4683250Z [ OK ] ExternalCall.Conv1d_float (27 ms) 2023-01-11T22:11:03.4683658Z [ RUN ] ExternalCall.Conv1d_int 2023-01-11T22:11:03.4970584Z [ OK ] ExternalCall.Conv1d_int (28 ms) 2023-01-11T22:11:03.4970984Z [ RUN ] ExternalCall.Conv1d_nobias_noargs 2023-01-11T22:11:03.5212821Z [ OK ] ExternalCall.Conv1d_nobias_noargs (24 ms) 2023-01-11T22:11:03.5213232Z [ RUN ] ExternalCall.Conv2d_float 2023-01-11T22:11:03.5528270Z [ OK ] ExternalCall.Conv2d_float (31 ms) 2023-01-11T22:11:03.5528851Z [ RUN ] ExternalCall.Conv2d_int 2023-01-11T22:11:03.5909941Z [ OK ] ExternalCall.Conv2d_int (38 ms) 2023-01-11T22:11:03.5910406Z [ RUN ] ExternalCall.Conv2d_nobias_noargs 2023-01-11T22:11:03.6189384Z [ OK ] ExternalCall.Conv2d_nobias_noargs (27 ms) 2023-01-11T22:11:03.6190470Z [ RUN ] ExternalCall.Addmm_float 2023-01-11T22:11:03.6453601Z [ OK ] ExternalCall.Addmm_float (26 ms) 2023-01-11T22:11:03.6454123Z [ RUN ] ExternalCall.Embedding 2023-01-11T22:11:03.6703339Z [ OK ] ExternalCall.Embedding (24 ms) 2023-01-11T22:11:03.6703910Z [ RUN ] ExternalCall.MaxReduction 2023-01-11T22:11:03.6933019Z [ OK ] ExternalCall.MaxReduction (22 ms) 2023-01-11T22:11:03.6933868Z [ RUN ] ExternalCall.Prepacked_Linear_float 2023-01-11T22:11:03.7282639Z [ OK ] ExternalCall.Prepacked_Linear_float (34 ms) 2023-01-11T22:11:03.7283427Z [ RUN ] ExternalCall.Prepacked_Conv2d_float 2023-01-11T22:11:03.7765874Z [ OK ] ExternalCall.Prepacked_Conv2d_float (48 ms) 2023-01-11T22:11:03.7766453Z [ RUN ] ExternalCall.BinaryFloat 2023-01-11T22:11:03.8501486Z [ OK ] ExternalCall.BinaryFloat (74 ms) 2023-01-11T22:11:03.8501927Z [ RUN ] ExternalCall.UnaryFloat 2023-01-11T22:11:03.8956803Z [ OK ] ExternalCall.UnaryFloat (45 ms) 2023-01-11T22:11:03.8957162Z [ RUN ] ExternalCall.ComputeInterop 2023-01-11T22:11:04.9803795Z [ OK ] ExternalCall.ComputeInterop (1084 ms) 2023-01-11T22:11:04.9804382Z [ RUN ] ExternalCall.Inlining 2023-01-11T22:11:05.0539796Z [ OK ] ExternalCall.Inlining (73 ms) 2023-01-11T22:11:05.0540436Z [ RUN ] ExternalCall.JitCustomFusionOp 2023-01-11T22:11:05.1790322Z [ OK ] ExternalCall.JitCustomFusionOp (125 ms) 2023-01-11T22:11:05.1791040Z [----------] 16 tests from ExternalCall (1738 ms total) 2023-01-11T22:11:05.1791324Z 2023-01-11T22:11:05.1791590Z [----------] 8 tests from GraphOpt 2023-01-11T22:11:05.1792077Z [ RUN ] 
GraphOpt.OptimizeCat 2023-01-11T22:11:05.2122502Z [ OK ] GraphOpt.OptimizeCat (33 ms) 2023-01-11T22:11:05.2123054Z [ RUN ] GraphOpt.OptimizeCat2 2023-01-11T22:11:05.2490231Z [ OK ] GraphOpt.OptimizeCat2 (36 ms) 2023-01-11T22:11:05.2490775Z [ RUN ] GraphOpt.OptimizeCat3 2023-01-11T22:11:05.2913030Z [ OK ] GraphOpt.OptimizeCat3 (42 ms) 2023-01-11T22:11:05.2913723Z [ RUN ] GraphOpt.OptimizeCatWithTypePromotionInUser 2023-01-11T22:11:05.3252370Z [ OK ] GraphOpt.OptimizeCatWithTypePromotionInUser (33 ms) 2023-01-11T22:11:05.3253168Z [ RUN ] GraphOpt.OptimizeCatWithTypePromotionInCat 2023-01-11T22:11:05.3882969Z [ OK ] GraphOpt.OptimizeCatWithTypePromotionInCat (63 ms) 2023-01-11T22:11:05.3883758Z [ RUN ] GraphOpt.OptimizeCatNoSingleTensorElementwiseOp 2023-01-11T22:11:05.4277886Z [ OK ] GraphOpt.OptimizeCatNoSingleTensorElementwiseOp (39 ms) 2023-01-11T22:11:05.4278713Z [ RUN ] GraphOpt.OptimizeCatNoSingleTensorElementwiseOp2 2023-01-11T22:11:05.4700955Z [ OK ] GraphOpt.OptimizeCatNoSingleTensorElementwiseOp2 (42 ms) 2023-01-11T22:11:05.4701698Z [ RUN ] GraphOpt.AOTGraphPrepPasses 2023-01-11T22:11:05.4702286Z [ OK ] GraphOpt.AOTGraphPrepPasses (0 ms) 2023-01-11T22:11:05.4703469Z [----------] 8 tests from GraphOpt (291 ms total) 2023-01-11T22:11:05.4703742Z 2023-01-11T22:11:05.4704003Z [----------] 4 tests from IRPrinter 2023-01-11T22:11:05.4704490Z [ RUN ] IRPrinter.BasicValueTest 2023-01-11T22:11:05.4705036Z [ OK ] IRPrinter.BasicValueTest (0 ms) 2023-01-11T22:11:05.4705484Z [ RUN ] IRPrinter.BasicValueTest02 2023-01-11T22:11:05.4705881Z [ OK ] IRPrinter.BasicValueTest02 (0 ms) 2023-01-11T22:11:05.4706285Z [ RUN ] IRPrinter.CastTest 2023-01-11T22:11:05.4706605Z [ OK ] IRPrinter.CastTest (0 ms) 2023-01-11T22:11:05.4706994Z [ RUN ] IRPrinter.FunctionName 2023-01-11T22:11:05.4707649Z [ OK ] IRPrinter.FunctionName (0 ms) 2023-01-11T22:11:05.4708150Z [----------] 4 tests from IRPrinter (0 ms total) 2023-01-11T22:11:05.4708401Z 2023-01-11T22:11:05.4708622Z [----------] 8 tests from IRVerifier 2023-01-11T22:11:05.4709272Z [ RUN ] IRVerifier.BitwiseOps 2023-01-11T22:11:05.4709783Z [ OK ] IRVerifier.BitwiseOps (0 ms) 2023-01-11T22:11:05.4710308Z [ RUN ] IRVerifier.CompareSelect 2023-01-11T22:11:05.4710844Z [ OK ] IRVerifier.CompareSelect (0 ms) 2023-01-11T22:11:05.4711111Z [ RUN ] IRVerifier.Ramp 2023-01-11T22:11:05.4711382Z [ OK ] IRVerifier.Ramp (0 ms) 2023-01-11T22:11:05.4711636Z [ RUN ] IRVerifier.Load 2023-01-11T22:11:05.4711896Z [ OK ] IRVerifier.Load (0 ms) 2023-01-11T22:11:05.4712151Z [ RUN ] IRVerifier.IfThenElse 2023-01-11T22:11:05.4712437Z [ OK ] IRVerifier.IfThenElse (0 ms) 2023-01-11T22:11:05.4712698Z [ RUN ] IRVerifier.For 2023-01-11T22:11:05.4712945Z [ OK ] IRVerifier.For (0 ms) 2023-01-11T22:11:05.4713202Z [ RUN ] IRVerifier.Block 2023-01-11T22:11:05.4713471Z [ OK ] IRVerifier.Block (0 ms) 2023-01-11T22:11:05.4713719Z [ RUN ] IRVerifier.Store 2023-01-11T22:11:05.4713979Z [ OK ] IRVerifier.Store (0 ms) 2023-01-11T22:11:05.4714274Z [----------] 8 tests from IRVerifier (0 ms total) 2023-01-11T22:11:05.4714424Z 2023-01-11T22:11:05.4714548Z [----------] 37 tests from Kernel 2023-01-11T22:11:05.4714840Z [ RUN ] Kernel.ParallelExternalCallBuf 2023-01-11T22:11:05.5293523Z [ OK ] Kernel.ParallelExternalCallBuf (58 ms) 2023-01-11T22:11:05.5294165Z [ RUN ] Kernel.InliningIntermediates 2023-01-11T22:11:05.5985190Z [ OK ] Kernel.InliningIntermediates (69 ms) 2023-01-11T22:11:05.5985915Z [ RUN ] Kernel.PreAllocIntermediateBufs 2023-01-11T22:11:05.7115579Z [ OK ] Kernel.PreAllocIntermediateBufs (113 
ms) 2023-01-11T22:11:05.7116151Z [ RUN ] Kernel._1 2023-01-11T22:11:05.7420108Z [ OK ] Kernel._1 (30 ms) 2023-01-11T22:11:05.7420558Z [ RUN ] Kernel._2 2023-01-11T22:11:05.7747033Z [ OK ] Kernel._2 (32 ms) 2023-01-11T22:11:05.7747472Z [ RUN ] Kernel._3 2023-01-11T22:11:05.8079233Z [ OK ] Kernel._3 (33 ms) 2023-01-11T22:11:05.8079796Z [ RUN ] Kernel.Huge 2023-01-11T22:11:05.8341161Z [ OK ] Kernel.Huge (26 ms) 2023-01-11T22:11:05.8341695Z [ RUN ] Kernel.ParallelStrided 2023-01-11T22:11:05.9554612Z [ OK ] Kernel.ParallelStrided (121 ms) 2023-01-11T22:11:05.9555331Z [ RUN ] Kernel.CatInputTypesPromotion 2023-01-11T22:11:06.0486895Z [ OK ] Kernel.CatInputTypesPromotion (93 ms) 2023-01-11T22:11:06.0487442Z [ RUN ] Kernel.ToDType 2023-01-11T22:11:06.0762852Z [ OK ] Kernel.ToDType (27 ms) 2023-01-11T22:11:06.0763463Z [ RUN ] Kernel.CatAndInlineWithAConstantDim 2023-01-11T22:11:06.1041095Z [ OK ] Kernel.CatAndInlineWithAConstantDim (27 ms) 2023-01-11T22:11:06.1042010Z [ RUN ] Kernel.CatWithEmptyInputs 2023-01-11T22:11:06.1746501Z [ OK ] Kernel.CatWithEmptyInputs (70 ms) 2023-01-11T22:11:06.1746860Z [ RUN ] Kernel.CatWoConditionals 2023-01-11T22:11:06.2479149Z [ OK ] Kernel.CatWoConditionals (73 ms) 2023-01-11T22:11:06.2479490Z [ RUN ] Kernel.OptimizeConditionals 2023-01-11T22:11:06.3433262Z [ OK ] Kernel.OptimizeConditionals (95 ms) 2023-01-11T22:11:06.3433924Z [ RUN ] Kernel.SumAllAxes 2023-01-11T22:11:06.3926502Z [ OK ] Kernel.SumAllAxes (49 ms) 2023-01-11T22:11:06.3927037Z [ RUN ] Kernel.SumOneAxis 2023-01-11T22:11:06.7943038Z [ OK ] Kernel.SumOneAxis (401 ms) 2023-01-11T22:11:06.7943592Z [ RUN ] Kernel.SumMultipleAxes 2023-01-11T22:11:07.2234985Z [ OK ] Kernel.SumMultipleAxes (429 ms) 2023-01-11T22:11:07.2235516Z [ RUN ] Kernel.Softmax2D 2023-01-11T22:11:07.6649221Z [ OK ] Kernel.Softmax2D (441 ms) 2023-01-11T22:11:07.6649752Z [ RUN ] Kernel.Softmax3D 2023-01-11T22:11:08.5261627Z [ OK ] Kernel.Softmax3D (861 ms) 2023-01-11T22:11:08.5262180Z [ RUN ] Kernel.Softmax4D 2023-01-11T22:11:09.6552019Z [ OK ] Kernel.Softmax4D (1128 ms) 2023-01-11T22:11:09.6552600Z [ RUN ] Kernel.SignTest 2023-01-11T22:11:09.7244544Z [ OK ] Kernel.SignTest (69 ms) 2023-01-11T22:11:09.7522771Z [ RUN ] Kernel.InlineProducerIntoReduction 2023-01-11T22:11:09.7523458Z [ OK ] Kernel.InlineProducerIntoReduction (27 ms) 2023-01-11T22:11:09.7524115Z [ RUN ] Kernel.InlineReductionIntoConsumer 2023-01-11T22:11:09.7866477Z [ OK ] Kernel.InlineReductionIntoConsumer (34 ms) 2023-01-11T22:11:09.7867073Z [ RUN ] Kernel.ConstantTensors 2023-01-11T22:11:09.8257652Z [ OK ] Kernel.ConstantTensors (39 ms) 2023-01-11T22:11:09.8258242Z [ RUN ] Kernel.ConstantTensorsNonContiguous 2023-01-11T22:11:09.8653157Z [ OK ] Kernel.ConstantTensorsNonContiguous (39 ms) 2023-01-11T22:11:09.8653736Z [ RUN ] Kernel.RunFast 2023-01-11T22:11:09.8968113Z [ OK ] Kernel.RunFast (31 ms) 2023-01-11T22:11:09.8968695Z [ RUN ] Kernel.RunWithAllocatedOutputs 2023-01-11T22:11:09.9282529Z [ OK ] Kernel.RunWithAllocatedOutputs (31 ms) 2023-01-11T22:11:09.9283052Z [ RUN ] Kernel.CodegenInspection 2023-01-11T22:11:09.9680972Z [ OK ] Kernel.CodegenInspection (39 ms) 2023-01-11T22:11:09.9681559Z [ RUN ] Kernel.CustomLowering 2023-01-11T22:11:09.9926551Z [ OK ] Kernel.CustomLowering (24 ms) 2023-01-11T22:11:09.9927133Z [ RUN ] Kernel.Vectorize 2023-01-11T22:11:10.0233572Z [ OK ] Kernel.Vectorize (30 ms) 2023-01-11T22:11:10.0234169Z [ RUN ] Kernel.Strided1dWithinBounds 2023-01-11T22:11:10.0460494Z [ OK ] Kernel.Strided1dWithinBounds (22 ms) 2023-01-11T22:11:10.0461132Z [ RUN ] 
Kernel.InputAsOutput 2023-01-11T22:11:10.0804187Z [ OK ] Kernel.InputAsOutput (34 ms) 2023-01-11T22:11:10.0804729Z [ RUN ] Kernel.ScalarOut 2023-01-11T22:11:10.1006288Z [ OK ] Kernel.ScalarOut (20 ms) 2023-01-11T22:11:10.1006654Z [ RUN ] Kernel.ScalarTensorOut 2023-01-11T22:11:10.1262822Z [ OK ] Kernel.ScalarTensorOut (25 ms) 2023-01-11T22:11:10.1263399Z [ RUN ] Kernel.FuseLoopsWithVariableBounds 2023-01-11T22:11:10.5658957Z [ OK ] Kernel.FuseLoopsWithVariableBounds (439 ms) 2023-01-11T22:11:10.5659370Z [ RUN ] Kernel.FuseLoopsWithVariableConcatDim 2023-01-11T22:11:11.0650782Z [ OK ] Kernel.FuseLoopsWithVariableConcatDim (499 ms) 2023-01-11T22:11:11.0651204Z [ RUN ] Kernel.DoNotFuseLoopsWithMismatchingVariableDims 2023-01-11T22:11:11.3892534Z [ OK ] Kernel.DoNotFuseLoopsWithMismatchingVariableDims (324 ms) 2023-01-11T22:11:11.3892983Z [----------] 37 tests from Kernel (5918 ms total) 2023-01-11T22:11:11.3893138Z 2023-01-11T22:11:11.3893285Z [----------] 174 tests from LoopNest 2023-01-11T22:11:11.3893556Z [ RUN ] LoopNest.ExprSimple01 2023-01-11T22:11:11.3907080Z [ OK ] LoopNest.ExprSimple01 (1 ms) 2023-01-11T22:11:11.3907618Z [ RUN ] LoopNest.ExprLower01 2023-01-11T22:11:11.3922234Z [ OK ] LoopNest.ExprLower01 (1 ms) 2023-01-11T22:11:11.3922749Z [ RUN ] LoopNest.ExprSimple02 2023-01-11T22:11:11.3944260Z [ OK ] LoopNest.ExprSimple02 (2 ms) 2023-01-11T22:11:11.3944861Z [ RUN ] LoopNest.ExprSliceHeadWithLoopOptions 2023-01-11T22:11:11.3946213Z [ OK ] LoopNest.ExprSliceHeadWithLoopOptions (0 ms) 2023-01-11T22:11:11.3947057Z [ RUN ] LoopNest.ExprSliceTailWithLoopOptions 2023-01-11T22:11:11.3948980Z [ OK ] LoopNest.ExprSliceTailWithLoopOptions (0 ms) 2023-01-11T22:11:11.3949667Z [ RUN ] LoopNest.ExprSliceHeadWhenFactorEqualsSize 2023-01-11T22:11:11.3966021Z [ OK ] LoopNest.ExprSliceHeadWhenFactorEqualsSize (1 ms) 2023-01-11T22:11:11.3966780Z [ RUN ] LoopNest.ExprSliceHeadWhenFactorLargerThanSize 2023-01-11T22:11:11.3967352Z [ OK ] LoopNest.ExprSliceHeadWhenFactorLargerThanSize (0 ms) 2023-01-11T22:11:11.3967963Z [ RUN ] LoopNest.ExprSliceHead 2023-01-11T22:11:11.3968363Z [ OK ] LoopNest.ExprSliceHead (0 ms) 2023-01-11T22:11:11.3968730Z [ RUN ] LoopNest.ExprSliceHeadWithNonZeroStart 2023-01-11T22:11:11.3969453Z [ OK ] LoopNest.ExprSliceHeadWithNonZeroStart (0 ms) 2023-01-11T22:11:11.3970123Z [ RUN ] LoopNest.ExprSliceTailWhenFactorEqualsSize 2023-01-11T22:11:11.3970856Z [ OK ] LoopNest.ExprSliceTailWhenFactorEqualsSize (0 ms) 2023-01-11T22:11:11.3971436Z [ RUN ] LoopNest.ExprSliceTailWhenFactorLargerThanSize 2023-01-11T22:11:11.3971940Z [ OK ] LoopNest.ExprSliceTailWhenFactorLargerThanSize (0 ms) 2023-01-11T22:11:11.3972404Z [ RUN ] LoopNest.ExprSliceTail 2023-01-11T22:11:11.3972701Z [ OK ] LoopNest.ExprSliceTail (0 ms) 2023-01-11T22:11:11.3973001Z [ RUN ] LoopNest.ExprSplitAndSlice 2023-01-11T22:11:11.3978527Z [ OK ] LoopNest.ExprSplitAndSlice (0 ms) 2023-01-11T22:11:11.3978853Z [ RUN ] LoopNest.ExprSliceAndNormalize 2023-01-11T22:11:11.3980395Z [ OK ] LoopNest.ExprSliceAndNormalize (0 ms) 2023-01-11T22:11:11.3980786Z [ RUN ] LoopNest.ExprSliceWithVariableDimension 2023-01-11T22:11:11.3996015Z [ OK ] LoopNest.ExprSliceWithVariableDimension (1 ms) 2023-01-11T22:11:11.3996364Z [ RUN ] LoopNest.ExprSplitWithTail 2023-01-11T22:11:11.4002960Z [ OK ] LoopNest.ExprSplitWithTail (0 ms) 2023-01-11T22:11:11.4003331Z [ RUN ] LoopNest.ExprSplitWithTailNone 2023-01-11T22:11:11.4015655Z [ OK ] LoopNest.ExprSplitWithTailNone (1 ms) 2023-01-11T22:11:11.4015988Z [ RUN ] LoopNest.ExprSplitWithMask01 
2023-01-11T22:11:11.4055649Z [ OK ] LoopNest.ExprSplitWithMask01 (3 ms) 2023-01-11T22:11:11.4056023Z [ RUN ] LoopNest.ExprSplitWithMaskRepeatedNoMask 2023-01-11T22:11:11.4060970Z [ OK ] LoopNest.ExprSplitWithMaskRepeatedNoMask (0 ms) 2023-01-11T22:11:11.4061386Z [ RUN ] LoopNest.getLoopAt 2023-01-11T22:11:11.4061664Z [ OK ] LoopNest.getLoopAt (0 ms) 2023-01-11T22:11:11.4061942Z [ RUN ] LoopNest.TileSimple 2023-01-11T22:11:11.4941140Z [ OK ] LoopNest.TileSimple (87 ms) 2023-01-11T22:11:11.4941449Z [ RUN ] LoopNest.TileWithTails 2023-01-11T22:11:11.5818894Z [ OK ] LoopNest.TileWithTails (87 ms) 2023-01-11T22:11:11.5819233Z [ RUN ] LoopNest.TileInMiddle 2023-01-11T22:11:11.7226429Z [ OK ] LoopNest.TileInMiddle (140 ms) 2023-01-11T22:11:11.7227092Z [ RUN ] LoopNest.SplitWithTailWithLoopOptions 2023-01-11T22:11:11.7227599Z [ OK ] LoopNest.SplitWithTailWithLoopOptions (0 ms) 2023-01-11T22:11:11.7227977Z [ RUN ] LoopNest.SplitWithMaskWithLoopOptions 2023-01-11T22:11:11.7228350Z [ OK ] LoopNest.SplitWithMaskWithLoopOptions (0 ms) 2023-01-11T22:11:11.7228701Z [ RUN ] LoopNest.ScheduleBroadcastAddBuffer 2023-01-11T22:11:11.7245457Z [ OK ] LoopNest.ScheduleBroadcastAddBuffer (1 ms) 2023-01-11T22:11:11.7245822Z [ RUN ] LoopNest.ScheduleFunctionCall01 2023-01-11T22:11:11.7321386Z [ OK ] LoopNest.ScheduleFunctionCall01 (7 ms) 2023-01-11T22:11:11.7321730Z [ RUN ] LoopNest.ScheduleInlineSimple 2023-01-11T22:11:11.7440157Z [ OK ] LoopNest.ScheduleInlineSimple (11 ms) 2023-01-11T22:11:11.7440503Z [ RUN ] LoopNest.ScheduleInlineFunc01 2023-01-11T22:11:11.7995511Z [ OK ] LoopNest.ScheduleInlineFunc01 (55 ms) 2023-01-11T22:11:11.7995871Z [ RUN ] LoopNest.ScheduleInlineRandom 2023-01-11T22:11:11.7999187Z [ OK ] LoopNest.ScheduleInlineRandom (0 ms) 2023-01-11T22:11:11.7999618Z [ RUN ] LoopNest.ScheduleInlineRandomUnrelated 2023-01-11T22:11:11.8002605Z [ OK ] LoopNest.ScheduleInlineRandomUnrelated (0 ms) 2023-01-11T22:11:11.8003030Z [ RUN ] LoopNest.ScheduleInlineRandomLowerDimensions 2023-01-11T22:11:11.8005733Z [ OK ] LoopNest.ScheduleInlineRandomLowerDimensions (0 ms) 2023-01-11T22:11:11.8006143Z [ RUN ] LoopNest.ScheduleInlineIntrinsics 2023-01-11T22:11:11.8093145Z [ OK ] LoopNest.ScheduleInlineIntrinsics (8 ms) 2023-01-11T22:11:11.8093546Z [ RUN ] LoopNest.ScheduleInlineRandWithIntrinsics 2023-01-11T22:11:11.8096211Z [ OK ] LoopNest.ScheduleInlineRandWithIntrinsics (0 ms) 2023-01-11T22:11:11.8096569Z [ RUN ] LoopNest.ScheduleSplitAThenInline 2023-01-11T22:11:11.8097924Z [ OK ] LoopNest.ScheduleSplitAThenInline (0 ms) 2023-01-11T22:11:11.8098273Z [ RUN ] LoopNest.ScheduleSplitBThenInline 2023-01-11T22:11:11.8101746Z [ OK ] LoopNest.ScheduleSplitBThenInline (0 ms) 2023-01-11T22:11:11.8102094Z [ RUN ] LoopNest.ScheduleSplitTwiceThenInline 2023-01-11T22:11:11.8103650Z [ OK ] LoopNest.ScheduleSplitTwiceThenInline (0 ms) 2023-01-11T22:11:11.8104011Z [ RUN ] LoopNest.ScheduleInlineThenSplit 2023-01-11T22:11:11.8107090Z [ OK ] LoopNest.ScheduleInlineThenSplit (0 ms) 2023-01-11T22:11:11.8107453Z [ RUN ] LoopNest.ScheduleSplitInlineThenSplit 2023-01-11T22:11:11.8114396Z [ OK ] LoopNest.ScheduleSplitInlineThenSplit (0 ms) 2023-01-11T22:11:11.8114772Z [ RUN ] LoopNest.ScheduleSplitInlineSimplify 2023-01-11T22:11:11.8121719Z [ OK ] LoopNest.ScheduleSplitInlineSimplify (0 ms) 2023-01-11T22:11:11.8122087Z [ RUN ] LoopNest.ScheduleInlineThreeMixedOnce 2023-01-11T22:11:11.8126390Z [ OK ] LoopNest.ScheduleInlineThreeMixedOnce (0 ms) 2023-01-11T22:11:11.8126755Z [ RUN ] LoopNest.ScheduleInlineThreeMixedTwice 
2023-01-11T22:11:11.8130938Z [ OK ] LoopNest.ScheduleInlineThreeMixedTwice (0 ms) 2023-01-11T22:11:11.8131315Z [ RUN ] LoopNest.ScheduleInlineThreeMixedInner 2023-01-11T22:11:11.8135800Z [ OK ] LoopNest.ScheduleInlineThreeMixedInner (0 ms) 2023-01-11T22:11:11.8136160Z [ RUN ] LoopNest.ScheduleInlineThreeMixedSplit 2023-01-11T22:11:11.8138118Z [ OK ] LoopNest.ScheduleInlineThreeMixedSplit (0 ms) 2023-01-11T22:11:11.8138677Z [ RUN ] LoopNest.ScheduleInlineOutputTensors 2023-01-11T22:11:11.8142502Z [ OK ] LoopNest.ScheduleInlineOutputTensors (0 ms) 2023-01-11T22:11:11.8142885Z [ RUN ] LoopNest.ScheduleInlineWithCompoundIndices 2023-01-11T22:11:11.8161191Z [ OK ] LoopNest.ScheduleInlineWithCompoundIndices (1 ms) 2023-01-11T22:11:11.8161726Z [ RUN ] LoopNest.ScheduleInlineConsumerIndicesWithCast 2023-01-11T22:11:11.8162376Z [ OK ] LoopNest.ScheduleInlineConsumerIndicesWithCast (0 ms) 2023-01-11T22:11:11.8163067Z [ RUN ] LoopNest.ScheduleInlineProducerIndicesWithCast 2023-01-11T22:11:11.8163763Z [ OK ] LoopNest.ScheduleInlineProducerIndicesWithCast (0 ms) 2023-01-11T22:11:11.8164302Z [ RUN ] LoopNest.ScheduleFuserStyle 2023-01-11T22:11:11.8211208Z [ OK ] LoopNest.ScheduleFuserStyle (4 ms) 2023-01-11T22:11:11.8211642Z [ RUN ] LoopNest.ScheduleFuserThreeArg 2023-01-11T22:11:11.8268021Z [ OK ] LoopNest.ScheduleFuserThreeArg (5 ms) 2023-01-11T22:11:11.8268451Z [ RUN ] LoopNest.ScheduleDynamicShape2D 2023-01-11T22:11:11.8388053Z [ OK ] LoopNest.ScheduleDynamicShape2D (11 ms) 2023-01-11T22:11:11.8388476Z [ RUN ] LoopNest.LoopNestComputeAt_1 2023-01-11T22:11:11.8396697Z [ OK ] LoopNest.LoopNestComputeAt_1 (0 ms) 2023-01-11T22:11:11.8397088Z [ RUN ] LoopNest.LoopNestComputeAt_2 2023-01-11T22:11:11.8883984Z [ OK ] LoopNest.LoopNestComputeAt_2 (48 ms) 2023-01-11T22:11:11.8884337Z [ RUN ] LoopNest.LoopNestComputeAt_3 2023-01-11T22:11:11.9368703Z [ OK ] LoopNest.LoopNestComputeAt_3 (48 ms) 2023-01-11T22:11:11.9369095Z [ RUN ] LoopNest.Reduce2dComputeAt 2023-01-11T22:11:12.0168923Z [ OK ] LoopNest.Reduce2dComputeAt (79 ms) 2023-01-11T22:11:12.0169333Z [ RUN ] LoopNest.LoopNestReorderAxis1 2023-01-11T22:11:12.0170798Z [ OK ] LoopNest.LoopNestReorderAxis1 (0 ms) 2023-01-11T22:11:12.0171181Z [ RUN ] LoopNest.LoopNestReorderPartialAxes 2023-01-11T22:11:12.0181007Z [ OK ] LoopNest.LoopNestReorderPartialAxes (0 ms) 2023-01-11T22:11:12.0181475Z [ RUN ] LoopNest.LoopNestReorderInternalAxis 2023-01-11T22:11:12.0189728Z [ OK ] LoopNest.LoopNestReorderInternalAxis (0 ms) 2023-01-11T22:11:12.0190343Z [ RUN ] LoopNest.LoopNestReorderEnclosingAxis 2023-01-11T22:11:12.0198886Z [ OK ] LoopNest.LoopNestReorderEnclosingAxis (0 ms) 2023-01-11T22:11:12.0199475Z [ RUN ] LoopNest.LoopNestReorderSameAxis 2023-01-11T22:11:12.0200136Z [ OK ] LoopNest.LoopNestReorderSameAxis (0 ms) 2023-01-11T22:11:12.0200695Z [ RUN ] LoopNest.LoopNestReorderExtraStatements 2023-01-11T22:11:12.0214246Z [ OK ] LoopNest.LoopNestReorderExtraStatements (1 ms) 2023-01-11T22:11:12.0214817Z [ RUN ] LoopNest.LoopNestReorderLongStringOfPreOrphans 2023-01-11T22:11:12.0755847Z [ OK ] LoopNest.LoopNestReorderLongStringOfPreOrphans (54 ms) 2023-01-11T22:11:12.0756392Z [ RUN ] LoopNest.LoopNestReorderLongStringOfPostOrphans 2023-01-11T22:11:12.1302216Z [ OK ] LoopNest.LoopNestReorderLongStringOfPostOrphans (54 ms) 2023-01-11T22:11:12.1302918Z [ RUN ] LoopNest.LoopNestReorderLongStringFull 2023-01-11T22:11:12.1983632Z [ OK ] LoopNest.LoopNestReorderLongStringFull (68 ms) 2023-01-11T22:11:12.1984220Z [ RUN ] LoopNest.LoopNestReorderInternalLoopNest 
2023-01-11T22:11:12.2096675Z [ OK ] LoopNest.LoopNestReorderInternalLoopNest (11 ms) 2023-01-11T22:11:12.2097301Z [ RUN ] LoopNest.OuterLoopVectorization 2023-01-11T22:11:12.2098152Z [ OK ] LoopNest.OuterLoopVectorization (0 ms) 2023-01-11T22:11:12.2098506Z [ RUN ] LoopNest.VectorizeLoopNotNormalized 2023-01-11T22:11:12.2100321Z [ OK ] LoopNest.VectorizeLoopNotNormalized (0 ms) 2023-01-11T22:11:12.2100741Z [ RUN ] LoopNest.Unroll 2023-01-11T22:11:12.2101414Z [ OK ] LoopNest.Unroll (0 ms) 2023-01-11T22:11:12.2101712Z [ RUN ] LoopNest.UnrollOuter 2023-01-11T22:11:12.2103522Z [ OK ] LoopNest.UnrollOuter (0 ms) 2023-01-11T22:11:12.2103826Z [ RUN ] LoopNest.UnrollInner 2023-01-11T22:11:12.2105234Z [ OK ] LoopNest.UnrollInner (0 ms) 2023-01-11T22:11:12.2105552Z [ RUN ] LoopNest.UnrollMultipleStatements 2023-01-11T22:11:12.2106351Z [ OK ] LoopNest.UnrollMultipleStatements (0 ms) 2023-01-11T22:11:12.2106740Z [ RUN ] LoopNest.UnrollNonLiteralConstantBounds 2023-01-11T22:11:12.2107797Z [ OK ] LoopNest.UnrollNonLiteralConstantBounds (0 ms) 2023-01-11T22:11:12.2108158Z [ RUN ] LoopNest.UnrollNonConstantBounds 2023-01-11T22:11:12.2122780Z [ OK ] LoopNest.UnrollNonConstantBounds (1 ms) 2023-01-11T22:11:12.2123204Z [ RUN ] LoopNest.UnrollByFactorsLessThan2 2023-01-11T22:11:12.2123547Z [ OK ] LoopNest.UnrollByFactorsLessThan2 (0 ms) 2023-01-11T22:11:12.2123896Z [ RUN ] LoopNest.UnrollByFactorEqualToIters 2023-01-11T22:11:12.2126562Z [ OK ] LoopNest.UnrollByFactorEqualToIters (0 ms) 2023-01-11T22:11:12.2127041Z [ RUN ] LoopNest.UnrollEmpty 2023-01-11T22:11:12.2127441Z [ OK ] LoopNest.UnrollEmpty (0 ms) 2023-01-11T22:11:12.2127716Z [ RUN ] LoopNest.NoUnroll 2023-01-11T22:11:12.2127990Z [ OK ] LoopNest.NoUnroll (0 ms) 2023-01-11T22:11:12.2128256Z [ RUN ] LoopNest.UnrollWithLet 2023-01-11T22:11:12.2129863Z [ OK ] LoopNest.UnrollWithLet (0 ms) 2023-01-11T22:11:12.2130167Z [ RUN ] LoopNest.IsNormalized 2023-01-11T22:11:12.2130449Z [ OK ] LoopNest.IsNormalized (0 ms) 2023-01-11T22:11:12.2130766Z [ RUN ] LoopNest.NormalizeStartPositive 2023-01-11T22:11:12.2132261Z [ OK ] LoopNest.NormalizeStartPositive (0 ms) 2023-01-11T22:11:12.2132599Z [ RUN ] LoopNest.NormalizeStartNegative 2023-01-11T22:11:12.2134469Z [ OK ] LoopNest.NormalizeStartNegative (0 ms) 2023-01-11T22:11:12.2134863Z [ RUN ] LoopNest.NormalizeStartZero 2023-01-11T22:11:12.2135188Z [ OK ] LoopNest.NormalizeStartZero (0 ms) 2023-01-11T22:11:12.2135500Z [ RUN ] LoopNest.NormalizeStartVariable 2023-01-11T22:11:12.2138124Z [ OK ] LoopNest.NormalizeStartVariable (0 ms) 2023-01-11T22:11:12.2138474Z [ RUN ] LoopNest.NormalizeOnNestedOuterLoop 2023-01-11T22:11:12.2140365Z [ OK ] LoopNest.NormalizeOnNestedOuterLoop (0 ms) 2023-01-11T22:11:12.2140710Z [ RUN ] LoopNest.NormalizeOnNestedInnerLoop 2023-01-11T22:11:12.2143160Z [ OK ] LoopNest.NormalizeOnNestedInnerLoop (0 ms) 2023-01-11T22:11:12.2143520Z [ RUN ] LoopNest.NormalizeAndSplitWithTail 2023-01-11T22:11:12.2147018Z [ OK ] LoopNest.NormalizeAndSplitWithTail (0 ms) 2023-01-11T22:11:12.2147423Z [ RUN ] LoopNest.NotNormalizeAndSplitWithTail 2023-01-11T22:11:12.2152004Z [ OK ] LoopNest.NotNormalizeAndSplitWithTail (0 ms) 2023-01-11T22:11:12.2152377Z [ RUN ] LoopNest.FlattenSimpleLoopNest2D 2023-01-11T22:11:12.2161805Z [ OK ] LoopNest.FlattenSimpleLoopNest2D (0 ms) 2023-01-11T22:11:12.2162176Z [ RUN ] LoopNest.FlattenSimpleLoopNest3D 2023-01-11T22:11:12.2242927Z [ OK ] LoopNest.FlattenSimpleLoopNest3D (8 ms) 2023-01-11T22:11:12.2243302Z [ RUN ] LoopNest.FlattenLoopNestAfterNormalize 2023-01-11T22:11:12.2268244Z 
[ OK ] LoopNest.FlattenLoopNestAfterNormalize (2 ms) 2023-01-11T22:11:12.2268680Z [ RUN ] LoopNest.FlattenLoopNestWithNonLiteralConstantBounds 2023-01-11T22:11:12.2279808Z [ OK ] LoopNest.FlattenLoopNestWithNonLiteralConstantBounds (1 ms) 2023-01-11T22:11:12.2280507Z [ RUN ] LoopNest.FlattenImperfectLoopNest 2023-01-11T22:11:12.2281183Z [ OK ] LoopNest.FlattenImperfectLoopNest (0 ms) 2023-01-11T22:11:12.2281618Z [ RUN ] LoopNest.FlattenReductionLoopNest 2023-01-11T22:11:12.2282114Z [ OK ] LoopNest.FlattenReductionLoopNest (0 ms) 2023-01-11T22:11:12.2282695Z [ RUN ] LoopNest.FlattenReductionLoopNestFromTensor 2023-01-11T22:11:12.2283398Z [ OK ] LoopNest.FlattenReductionLoopNestFromTensor (0 ms) 2023-01-11T22:11:12.2283785Z [ RUN ] LoopNest.FlattenIncorrectLoopsAsInput 2023-01-11T22:11:12.2284145Z [ OK ] LoopNest.FlattenIncorrectLoopsAsInput (0 ms) 2023-01-11T22:11:12.2284501Z [ RUN ] LoopNest.DetectInlineRankMismatch 2023-01-11T22:11:12.2284971Z [ OK ] LoopNest.DetectInlineRankMismatch (0 ms) 2023-01-11T22:11:12.2285282Z [ RUN ] LoopNest.CacheReadsSimple 2023-01-11T22:11:12.2633644Z [ OK ] LoopNest.CacheReadsSimple (35 ms) 2023-01-11T22:11:12.2634170Z [ RUN ] LoopNest.CacheReadsOuter 2023-01-11T22:11:12.3013259Z [ OK ] LoopNest.CacheReadsOuter (37 ms) 2023-01-11T22:11:12.3013812Z [ RUN ] LoopNest.CacheReadsInternal 2023-01-11T22:11:12.3417809Z [ OK ] LoopNest.CacheReadsInternal (40 ms) 2023-01-11T22:11:12.3418350Z [ RUN ] LoopNest.CacheReadsInner 2023-01-11T22:11:12.4052831Z [ OK ] LoopNest.CacheReadsInner (63 ms) 2023-01-11T22:11:12.4053419Z [ RUN ] LoopNest.CacheWritesSimple 2023-01-11T22:11:12.4766699Z [ OK ] LoopNest.CacheWritesSimple (71 ms) 2023-01-11T22:11:12.4767266Z [ RUN ] LoopNest.DeadStoreElimination 2023-01-11T22:11:12.4779781Z [ OK ] LoopNest.DeadStoreElimination (1 ms) 2023-01-11T22:11:12.4780485Z [ RUN ] LoopNest.DeadStoreEliminationWithIntermediates 2023-01-11T22:11:12.4791510Z [ OK ] LoopNest.DeadStoreEliminationWithIntermediates (1 ms) 2023-01-11T22:11:12.4792081Z [ RUN ] LoopNest.CompoundTensorSimple 2023-01-11T22:11:12.4804280Z [ OK ] LoopNest.CompoundTensorSimple (1 ms) 2023-01-11T22:11:12.4804864Z [ RUN ] LoopNest.InlineConstantIndex 2023-01-11T22:11:12.4807680Z [ OK ] LoopNest.InlineConstantIndex (0 ms) 2023-01-11T22:11:12.4808228Z [ RUN ] LoopNest.CompoundTensorUsed 2023-01-11T22:11:12.4829289Z [ OK ] LoopNest.CompoundTensorUsed (2 ms) 2023-01-11T22:11:12.4829839Z [ RUN ] LoopNest.InlineFromLoad 2023-01-11T22:11:12.4830375Z [ OK ] LoopNest.InlineFromLoad (0 ms) 2023-01-11T22:11:12.4830822Z [ RUN ] LoopNest.OptimizeConditionalsSimple 2023-01-11T22:11:12.4831586Z [ OK ] LoopNest.OptimizeConditionalsSimple (0 ms) 2023-01-11T22:11:12.4832240Z [ RUN ] LoopNest.OptimizeConditionalsNestedConditions 2023-01-11T22:11:12.4833830Z [ OK ] LoopNest.OptimizeConditionalsNestedConditions (0 ms) 2023-01-11T22:11:12.4836573Z [ RUN ] LoopNest.OptimizeConditionalsMultipleStores 2023-01-11T22:11:12.4837026Z [ OK ] LoopNest.OptimizeConditionalsMultipleStores (0 ms) 2023-01-11T22:11:12.4837474Z [ RUN ] LoopNest.OptimizeConditionalsMultipleStoresInOneLoop 2023-01-11T22:11:12.4840984Z [ OK ] LoopNest.OptimizeConditionalsMultipleStoresInOneLoop (0 ms) 2023-01-11T22:11:12.4841424Z [ RUN ] LoopNest.OptimizeConditionalsOuterLoopVar 2023-01-11T22:11:12.4844231Z [ OK ] LoopNest.OptimizeConditionalsOuterLoopVar (0 ms) 2023-01-11T22:11:12.4844644Z [ RUN ] LoopNest.OptimizeConditionalsCompValuesNotOrdered 2023-01-11T22:11:12.4847150Z [ OK ] LoopNest.OptimizeConditionalsCompValuesNotOrdered (0 ms) 
2023-01-11T22:11:12.4847824Z [ RUN ] LoopNest.OptimizeConditionalsCompValuesNotConstants 2023-01-11T22:11:12.4850076Z [ OK ] LoopNest.OptimizeConditionalsCompValuesNotConstants (0 ms) 2023-01-11T22:11:12.4850506Z [ RUN ] LoopNest.OptimizeConditionalsInvalidCondition 2023-01-11T22:11:12.4852999Z [ OK ] LoopNest.OptimizeConditionalsInvalidCondition (0 ms) 2023-01-11T22:11:12.4853429Z [ RUN ] LoopNest.OptimizeConditionalsInvalidCondition2 2023-01-11T22:11:12.4856014Z [ OK ] LoopNest.OptimizeConditionalsInvalidCondition2 (0 ms) 2023-01-11T22:11:12.4856427Z [ RUN ] LoopNest.OptimizeConditionalsInvalidCondition3 2023-01-11T22:11:12.4858671Z [ OK ] LoopNest.OptimizeConditionalsInvalidCondition3 (0 ms) 2023-01-11T22:11:12.4859099Z [ RUN ] LoopNest.OptimizeConditionalsInvalidCondition4 2023-01-11T22:11:12.4861158Z [ OK ] LoopNest.OptimizeConditionalsInvalidCondition4 (0 ms) 2023-01-11T22:11:12.4861622Z [ RUN ] LoopNest.OptimizeConditionalsNotNormalized 2023-01-11T22:11:12.4862729Z [ OK ] LoopNest.OptimizeConditionalsNotNormalized (0 ms) 2023-01-11T22:11:12.4863164Z [ RUN ] LoopNest.ColReduceSplitTailEvenReorder 2023-01-11T22:11:12.6293417Z [ OK ] LoopNest.ColReduceSplitTailEvenReorder (142 ms) 2023-01-11T22:11:12.6293869Z [ RUN ] LoopNest.ColReduceSplitTailUnevenReorder 2023-01-11T22:11:12.7429959Z [ OK ] LoopNest.ColReduceSplitTailUnevenReorder (113 ms) 2023-01-11T22:11:12.7430447Z [ RUN ] LoopNest.ColReduceSplitMaskEvenReorder 2023-01-11T22:11:12.8853166Z [ OK ] LoopNest.ColReduceSplitMaskEvenReorder (142 ms) 2023-01-11T22:11:12.8853937Z [ RUN ] LoopNest.ColReduceSplitMaskUnevenReorder 2023-01-11T22:11:13.0132717Z [ OK ] LoopNest.ColReduceSplitMaskUnevenReorder (127 ms) 2023-01-11T22:11:13.0133160Z [ RUN ] LoopNest.ReorderAxisWithMultipleConds 2023-01-11T22:11:13.0135279Z [ OK ] LoopNest.ReorderAxisWithMultipleConds (0 ms) 2023-01-11T22:11:13.0135850Z [ RUN ] LoopNest.VectorizeUse 2023-01-11T22:11:13.0136708Z [ OK ] LoopNest.VectorizeUse (0 ms) 2023-01-11T22:11:13.0137194Z [ RUN ] LoopNest.Int64Direct 2023-01-11T22:11:13.0137700Z [ OK ] LoopNest.Int64Direct (0 ms) 2023-01-11T22:11:13.0138096Z [ RUN ] LoopNest.Int64Compute 2023-01-11T22:11:13.0138625Z [ OK ] LoopNest.Int64Compute (0 ms) 2023-01-11T22:11:13.0139107Z [ RUN ] LoopNest.DistributeLoopWithAllStmtsAsPivots 2023-01-11T22:11:13.0139678Z [ OK ] LoopNest.DistributeLoopWithAllStmtsAsPivots (0 ms) 2023-01-11T22:11:13.0140192Z [ RUN ] LoopNest.DistributeLoopWithOneStmtAsPivot 2023-01-11T22:11:13.0140895Z [ OK ] LoopNest.DistributeLoopWithOneStmtAsPivot (0 ms) 2023-01-11T22:11:13.0141349Z [ RUN ] LoopNest.DistributeLoopWithoutAnyPivot 2023-01-11T22:11:13.0141827Z [ OK ] LoopNest.DistributeLoopWithoutAnyPivot (0 ms) 2023-01-11T22:11:13.0142201Z [ RUN ] LoopNest.DistributeLoopOverInnerLoops 2023-01-11T22:11:13.0142739Z [ OK ] LoopNest.DistributeLoopOverInnerLoops (0 ms) 2023-01-11T22:11:13.0143136Z [ RUN ] LoopNest.DistributeLoopAndParentsWithoutAnyPivot 2023-01-11T22:11:13.0145063Z [ OK ] LoopNest.DistributeLoopAndParentsWithoutAnyPivot (0 ms) 2023-01-11T22:11:13.0145478Z [ RUN ] LoopNest.fuseLoopsSimple 2023-01-11T22:11:13.0146891Z [ OK ] LoopNest.fuseLoopsSimple (0 ms) 2023-01-11T22:11:13.0147202Z [ RUN ] LoopNest.fuseLoopsMultiple 2023-01-11T22:11:13.0152122Z [ OK ] LoopNest.fuseLoopsMultiple (0 ms) 2023-01-11T22:11:13.0152441Z [ RUN ] LoopNest.fuseLoopsNested 2023-01-11T22:11:13.0158437Z [ OK ] LoopNest.fuseLoopsNested (0 ms) 2023-01-11T22:11:13.0158765Z [ RUN ] LoopNest.fuseLoopsNested2D 2023-01-11T22:11:13.0161816Z [ OK ] 
LoopNest.fuseLoopsNested2D (0 ms) 2023-01-11T22:11:13.0162189Z [ RUN ] LoopNest.fuseLoopsNested2DInner 2023-01-11T22:11:13.0164368Z [ OK ] LoopNest.fuseLoopsNested2DInner (0 ms) 2023-01-11T22:11:13.0164786Z [ RUN ] LoopNest.fuseLoopsDifferentStopBounds 2023-01-11T22:11:13.0165224Z [ OK ] LoopNest.fuseLoopsDifferentStopBounds (0 ms) 2023-01-11T22:11:13.0165642Z [ RUN ] LoopNest.fuseLoopsDifferentStartBounds 2023-01-11T22:11:13.0166112Z [ OK ] LoopNest.fuseLoopsDifferentStartBounds (0 ms) 2023-01-11T22:11:13.0166547Z [ RUN ] LoopNest.fuseLoopsNotContiguous 2023-01-11T22:11:13.0166891Z [ OK ] LoopNest.fuseLoopsNotContiguous (0 ms) 2023-01-11T22:11:13.0167378Z [ RUN ] LoopNest.fuseLoopsWithDifferentParents 2023-01-11T22:11:13.0167767Z [ OK ] LoopNest.fuseLoopsWithDifferentParents (0 ms) 2023-01-11T22:11:13.0168136Z [ RUN ] LoopNest.fuseLoopsWithVariableBounds 2023-01-11T22:11:13.0169879Z [ OK ] LoopNest.fuseLoopsWithVariableBounds (0 ms) 2023-01-11T22:11:13.0170277Z [ RUN ] LoopNest.fuseLoopsWithExprBounds 2023-01-11T22:11:13.0174658Z [ OK ] LoopNest.fuseLoopsWithExprBounds (0 ms) 2023-01-11T22:11:13.0175116Z [ RUN ] LoopNest.fuseLoopsWithDifferentExprBounds 2023-01-11T22:11:13.0178942Z [ OK ] LoopNest.fuseLoopsWithDifferentExprBounds (0 ms) 2023-01-11T22:11:13.0179413Z [ RUN ] LoopNest.fuseLoopsWithNonOverlappingBufferAccesses 2023-01-11T22:11:13.0182997Z [ OK ] LoopNest.fuseLoopsWithNonOverlappingBufferAccesses (0 ms) 2023-01-11T22:11:13.0183459Z [ RUN ] LoopNest.fuseLoopsWithNonOverlapping2DBufferAccesses 2023-01-11T22:11:13.0190364Z [ OK ] LoopNest.fuseLoopsWithNonOverlapping2DBufferAccesses (0 ms) 2023-01-11T22:11:13.0190770Z [ RUN ] LoopNest.fuseLoopsWithReductions 2023-01-11T22:11:13.0195143Z [ OK ] LoopNest.fuseLoopsWithReductions (0 ms) 2023-01-11T22:11:13.0195490Z [ RUN ] LoopNest.fuseLoopsWith2DReductions 2023-01-11T22:11:13.0204030Z [ OK ] LoopNest.fuseLoopsWith2DReductions (0 ms) 2023-01-11T22:11:13.0204405Z [ RUN ] LoopNest.fuseLoopsWithComplexIndices 2023-01-11T22:11:13.0211504Z [ OK ] LoopNest.fuseLoopsWithComplexIndices (0 ms) 2023-01-11T22:11:13.0211912Z [ RUN ] LoopNest.fuseLoopsWithMixedLoopVarsAsIndices 2023-01-11T22:11:13.0221811Z [ OK ] LoopNest.fuseLoopsWithMixedLoopVarsAsIndices (1 ms) 2023-01-11T22:11:13.0222199Z [ RUN ] LoopNest.fuseLoopsWithTranspose 2023-01-11T22:11:13.0227194Z [ OK ] LoopNest.fuseLoopsWithTranspose (0 ms) 2023-01-11T22:11:13.0227597Z [ RUN ] LoopNest.fuseLoopsThatViolateDependencies1 2023-01-11T22:11:13.0231376Z [ OK ] LoopNest.fuseLoopsThatViolateDependencies1 (0 ms) 2023-01-11T22:11:13.0231788Z [ RUN ] LoopNest.fuseLoopsThatViolateDependencies2 2023-01-11T22:11:13.0235250Z [ OK ] LoopNest.fuseLoopsThatViolateDependencies2 (0 ms) 2023-01-11T22:11:13.0235658Z [ RUN ] LoopNest.fuseLoopsThatViolateDependencies3 2023-01-11T22:11:13.0243029Z [ OK ] LoopNest.fuseLoopsThatViolateDependencies3 (0 ms) 2023-01-11T22:11:13.0243428Z [ RUN ] LoopNest.fuseLoopsThatViolateDependencies4 2023-01-11T22:11:13.0252275Z [ OK ] LoopNest.fuseLoopsThatViolateDependencies4 (0 ms) 2023-01-11T22:11:13.0252683Z [ RUN ] LoopNest.fuseLoopsThatViolateDependencies5 2023-01-11T22:11:13.0256751Z [ OK ] LoopNest.fuseLoopsThatViolateDependencies5 (0 ms) 2023-01-11T22:11:13.0257351Z [ RUN ] LoopNest.fuseLoopsThatViolateDependencies6 2023-01-11T22:11:13.0261424Z [ OK ] LoopNest.fuseLoopsThatViolateDependencies6 (0 ms) 2023-01-11T22:11:13.0261835Z [ RUN ] LoopNest.fuseLoopsThatViolateDependencies7 2023-01-11T22:11:13.0266712Z [ OK ] LoopNest.fuseLoopsThatViolateDependencies7 (0 ms) 
2023-01-11T22:11:13.0267386Z [ RUN ] LoopNest.areLoopsPerfectlyNested 2023-01-11T22:11:13.0267925Z [ OK ] LoopNest.areLoopsPerfectlyNested (0 ms) 2023-01-11T22:11:13.0268463Z [ RUN ] LoopNest.reorderNestedLoops2D 2023-01-11T22:11:13.0269053Z [ OK ] LoopNest.reorderNestedLoops2D (0 ms) 2023-01-11T22:11:13.0269624Z [ RUN ] LoopNest.reorderNestedLoops3D 2023-01-11T22:11:13.0270094Z [ OK ] LoopNest.reorderNestedLoops3D (0 ms) 2023-01-11T22:11:13.0270446Z [ RUN ] LoopNest.reorderNestedLoops4D 2023-01-11T22:11:13.0271074Z [ OK ] LoopNest.reorderNestedLoops4D (0 ms) 2023-01-11T22:11:13.0271648Z [ RUN ] LoopNest.reorderTrivialPermutation 2023-01-11T22:11:13.0272231Z [ OK ] LoopNest.reorderTrivialPermutation (0 ms) 2023-01-11T22:11:13.0272875Z [ RUN ] LoopNest.reorderInvalidPermutations 2023-01-11T22:11:13.0273266Z [ OK ] LoopNest.reorderInvalidPermutations (0 ms) 2023-01-11T22:11:13.0273655Z [ RUN ] LoopNest.reorderInvalidLoopNest 2023-01-11T22:11:13.0274076Z [ OK ] LoopNest.reorderInvalidLoopNest (0 ms) 2023-01-11T22:11:13.0274484Z [ RUN ] LoopNest.compressBufferSimple 2023-01-11T22:11:13.0274814Z [ OK ] LoopNest.compressBufferSimple (0 ms) 2023-01-11T22:11:13.0275156Z [ RUN ] LoopNest.compressBufferMultipleDims 2023-01-11T22:11:13.0275498Z [ OK ] LoopNest.compressBufferMultipleDims (0 ms) 2023-01-11T22:11:13.0275862Z [ RUN ] LoopNest.compressBufferMultipleDims2 2023-01-11T22:11:13.0276247Z [ OK ] LoopNest.compressBufferMultipleDims2 (0 ms) 2023-01-11T22:11:13.0276621Z [ RUN ] LoopNest.compressBufferDifferentOrderIndices 2023-01-11T22:11:13.0277037Z [ OK ] LoopNest.compressBufferDifferentOrderIndices (0 ms) 2023-01-11T22:11:13.0277422Z [ RUN ] LoopNest.compressBufferVariableBounds 2023-01-11T22:11:13.0277788Z [ OK ] LoopNest.compressBufferVariableBounds (0 ms) 2023-01-11T22:11:13.0278159Z [ RUN ] LoopNest.compressBufferNoCommonParentLoops 2023-01-11T22:11:13.0278560Z [ OK ] LoopNest.compressBufferNoCommonParentLoops (0 ms) 2023-01-11T22:11:13.0278934Z [ RUN ] LoopNest.compressBufferIndicesMixed 2023-01-11T22:11:13.0279278Z [ OK ] LoopNest.compressBufferIndicesMixed (0 ms) 2023-01-11T22:11:13.0279620Z [ RUN ] LoopNest.compressMultipleBuffers 2023-01-11T22:11:13.0280065Z [ OK ] LoopNest.compressMultipleBuffers (0 ms) 2023-01-11T22:11:13.0280368Z [ RUN ] LoopNest.sanitizeNames 2023-01-11T22:11:13.0290247Z [ OK ] LoopNest.sanitizeNames (1 ms) 2023-01-11T22:11:13.0290852Z [----------] 174 tests from LoopNest (1639 ms total) 2023-01-11T22:11:13.0291094Z 2023-01-11T22:11:13.0291254Z [----------] 31 tests from MemDependency 2023-01-11T22:11:13.0291534Z [ RUN ] MemDependency.BoundOverlap 2023-01-11T22:11:13.0301201Z [ OK ] MemDependency.BoundOverlap (1 ms) 2023-01-11T22:11:13.0312263Z [ RUN ] MemDependency.BoundComparison 2023-01-11T22:11:13.0312856Z [ OK ] MemDependency.BoundComparison (0 ms) 2023-01-11T22:11:13.0313207Z [ RUN ] MemDependency.BoundOverlapSymbolic 2023-01-11T22:11:13.0317259Z [ OK ] MemDependency.BoundOverlapSymbolic (0 ms) 2023-01-11T22:11:13.0317860Z [ RUN ] MemDependency.BoundOverlapMultiDim 2023-01-11T22:11:13.0325370Z [ OK ] MemDependency.BoundOverlapMultiDim (0 ms) 2023-01-11T22:11:13.0326051Z [ RUN ] MemDependency.BoundSubtract 2023-01-11T22:11:13.0332119Z [ OK ] MemDependency.BoundSubtract (0 ms) 2023-01-11T22:11:13.0332738Z [ RUN ] MemDependency.BoundSubtractSymbolic 2023-01-11T22:11:13.0352774Z [ OK ] MemDependency.BoundSubtractSymbolic (2 ms) 2023-01-11T22:11:13.0353436Z [ RUN ] MemDependency.BoundSubtractMultiDim 2023-01-11T22:11:13.0375986Z [ OK ] 
MemDependency.BoundSubtractMultiDim (2 ms) 2023-01-11T22:11:13.0376605Z [ RUN ] MemDependency.BoundSubtractMultiDimSymbolic 2023-01-11T22:11:13.0407769Z [ OK ] MemDependency.BoundSubtractMultiDimSymbolic (3 ms) 2023-01-11T22:11:13.0408405Z [ RUN ] MemDependency.MemDependencyCheckerSimple 2023-01-11T22:11:13.0409005Z [ OK ] MemDependency.MemDependencyCheckerSimple (0 ms) 2023-01-11T22:11:13.0409595Z [ RUN ] MemDependency.MemDependencyCheckerMultiStmt 2023-01-11T22:11:13.0410163Z [ OK ] MemDependency.MemDependencyCheckerMultiStmt (0 ms) 2023-01-11T22:11:13.0410699Z [ RUN ] MemDependency.MemDependencyCheckerOverlap 2023-01-11T22:11:13.0411170Z [ OK ] MemDependency.MemDependencyCheckerOverlap (0 ms) 2023-01-11T22:11:13.0411547Z [ RUN ] MemDependency.MemDependencyCheckerLoop 2023-01-11T22:11:13.0411931Z [ OK ] MemDependency.MemDependencyCheckerLoop (0 ms) 2023-01-11T22:11:13.0412388Z [ RUN ] MemDependency.MemDependencyCheckerLoopReduce 2023-01-11T22:11:13.0413507Z [ OK ] MemDependency.MemDependencyCheckerLoopReduce (0 ms) 2023-01-11T22:11:13.0413939Z [ RUN ] MemDependency.MemDependencyCheckerLoopReduceExpanded 2023-01-11T22:11:13.0415981Z [ OK ] MemDependency.MemDependencyCheckerLoopReduceExpanded (0 ms) 2023-01-11T22:11:13.0416485Z [ RUN ] MemDependency.MemDependencyCheckerInputsOutputs 2023-01-11T22:11:13.0417796Z [ OK ] MemDependency.MemDependencyCheckerInputsOutputs (0 ms) 2023-01-11T22:11:13.0418418Z [ RUN ] MemDependency.MemDependencyCheckerOutputDoesntDepend 2023-01-11T22:11:13.0418921Z [ OK ] MemDependency.MemDependencyCheckerOutputDoesntDepend (0 ms) 2023-01-11T22:11:13.0419361Z [ RUN ] MemDependency.MemDependencyCheckerLoopBounds 2023-01-11T22:11:13.0429278Z [ OK ] MemDependency.MemDependencyCheckerLoopBounds (1 ms) 2023-01-11T22:11:13.0429749Z [ RUN ] MemDependency.MemDependencyCheckerLoopBoundsIndexShift 2023-01-11T22:11:13.0455440Z [ OK ] MemDependency.MemDependencyCheckerLoopBoundsIndexShift (2 ms) 2023-01-11T22:11:13.0455933Z [ RUN ] MemDependency.MemDependencyCheckerLoopSelfDependency 2023-01-11T22:11:13.0585400Z [ OK ] MemDependency.MemDependencyCheckerLoopSelfDependency (12 ms) 2023-01-11T22:11:13.0585917Z [ RUN ] MemDependency.MemDependencyCheckerLoopDistinctStrides 2023-01-11T22:11:13.0592937Z [ OK ] MemDependency.MemDependencyCheckerLoopDistinctStrides (0 ms) 2023-01-11T22:11:13.0593403Z [ RUN ] MemDependency.MemDependencyCheckerLoopBoundsCond 2023-01-11T22:11:13.0610196Z [ OK ] MemDependency.MemDependencyCheckerLoopBoundsCond (1 ms) 2023-01-11T22:11:13.0610661Z [ RUN ] MemDependency.MemDependencyCheckerIfThenElse 2023-01-11T22:11:13.0620889Z [ OK ] MemDependency.MemDependencyCheckerIfThenElse (1 ms) 2023-01-11T22:11:13.0621318Z [ RUN ] MemDependency.MemDependencyCheckerCutLoop 2023-01-11T22:11:13.0639218Z [ OK ] MemDependency.MemDependencyCheckerCutLoop (1 ms) 2023-01-11T22:11:13.0639657Z [ RUN ] MemDependency.MemDependencyCheckerDynamicShapes 2023-01-11T22:11:13.0666774Z [ OK ] MemDependency.MemDependencyCheckerDynamicShapes (2 ms) 2023-01-11T22:11:13.0667226Z [ RUN ] MemDependency.MemDependencyCheckerMultiDim 2023-01-11T22:11:13.0699782Z [ OK ] MemDependency.MemDependencyCheckerMultiDim (3 ms) 2023-01-11T22:11:13.0700276Z [ RUN ] MemDependency.MemDependencyCheckerComputeAPI 2023-01-11T22:11:13.0710269Z [ OK ] MemDependency.MemDependencyCheckerComputeAPI (1 ms) 2023-01-11T22:11:13.0710700Z [ RUN ] MemDependency.MemDependencyCheckerComputeInline 2023-01-11T22:11:13.0718015Z [ OK ] MemDependency.MemDependencyCheckerComputeInline (0 ms) 2023-01-11T22:11:13.0718464Z [ RUN ] 
MemDependency.MemDependencyCheckerComputeSplit 2023-01-11T22:11:13.0738947Z [ OK ] MemDependency.MemDependencyCheckerComputeSplit (2 ms) 2023-01-11T22:11:13.0739386Z [ RUN ] MemDependency.MemDependencyCheckerComputeReorder 2023-01-11T22:11:13.0752403Z [ OK ] MemDependency.MemDependencyCheckerComputeReorder (1 ms) 2023-01-11T22:11:13.0752999Z [ RUN ] MemDependency.MemDependencyCheckerComputeReduce 2023-01-11T22:11:13.0763312Z [ OK ] MemDependency.MemDependencyCheckerComputeReduce (1 ms) 2023-01-11T22:11:13.0763730Z [ RUN ] MemDependency.MemDependencyCheckerComputeGEMM 2023-01-11T22:11:13.0918299Z [ OK ] MemDependency.MemDependencyCheckerComputeGEMM (15 ms) 2023-01-11T22:11:13.0918718Z [----------] 31 tests from MemDependency (62 ms total) 2023-01-11T22:11:13.0918885Z 2023-01-11T22:11:13.0919017Z [----------] 2 tests from Ops 2023-01-11T22:11:13.0919239Z [ RUN ] Ops.Sum 2023-01-11T22:11:13.0958411Z [ OK ] Ops.Sum (4 ms) 2023-01-11T22:11:13.0958678Z [ RUN ] Ops.ChannelsLastSum 2023-01-11T22:11:13.2041606Z [ OK ] Ops.ChannelsLastSum (108 ms) 2023-01-11T22:11:13.2042039Z [----------] 2 tests from Ops (112 ms total) 2023-01-11T22:11:13.2042251Z 2023-01-11T22:11:13.2042412Z [----------] 10 tests from Quantization 2023-01-11T22:11:13.2042740Z [ RUN ] Quantization.QuantDequantInt8 2023-01-11T22:11:13.2315558Z [ OK ] Quantization.QuantDequantInt8 (27 ms) 2023-01-11T22:11:13.2315974Z [ RUN ] Quantization.QuantDequantUInt8 2023-01-11T22:11:13.2580112Z [ OK ] Quantization.QuantDequantUInt8 (26 ms) 2023-01-11T22:11:13.2580735Z [ RUN ] Quantization.QuantDequantUInt8_NLC 2023-01-11T22:11:13.2852712Z [ OK ] Quantization.QuantDequantUInt8_NLC (27 ms) 2023-01-11T22:11:13.2853267Z [ RUN ] Quantization.QuantAddDequantInt8 2023-01-11T22:11:13.3161394Z [ OK ] Quantization.QuantAddDequantInt8 (30 ms) 2023-01-11T22:11:13.3162037Z [ RUN ] Quantization.QuantAddDequantUInt8 2023-01-11T22:11:13.3468389Z [ OK ] Quantization.QuantAddDequantUInt8 (30 ms) 2023-01-11T22:11:13.3468895Z [ RUN ] Quantization.QuantSigmoidDequantUInt8 2023-01-11T22:11:13.3823557Z [ OK ] Quantization.QuantSigmoidDequantUInt8 (35 ms) 2023-01-11T22:11:13.3824029Z [ RUN ] Quantization.QuantMulDequantUInt8 2023-01-11T22:11:13.4229858Z [ OK ] Quantization.QuantMulDequantUInt8 (40 ms) 2023-01-11T22:11:13.4230461Z [ RUN ] Quantization.QuantUpsampleNearst2dDequantUInt8 2023-01-11T22:11:13.4698897Z [ OK ] Quantization.QuantUpsampleNearst2dDequantUInt8 (46 ms) 2023-01-11T22:11:13.4699363Z [ RUN ] Quantization.UpsampleNearst2d 2023-01-11T22:11:13.5006685Z [ OK ] Quantization.UpsampleNearst2d (30 ms) 2023-01-11T22:11:13.5007135Z [ RUN ] Quantization.QuantCatDequantUInt8 2023-01-11T22:11:13.5605112Z [ OK ] Quantization.QuantCatDequantUInt8 (59 ms) 2023-01-11T22:11:13.5606135Z [----------] 10 tests from Quantization (356 ms total) 2023-01-11T22:11:13.5606419Z 2023-01-11T22:11:13.5606692Z [----------] 2 tests from BufLiveRange 2023-01-11T22:11:13.5607634Z [ RUN ] BufLiveRange.SingleRangeLine 2023-01-11T22:11:13.5608193Z [ OK ] BufLiveRange.SingleRangeLine (0 ms) 2023-01-11T22:11:13.5608730Z [ RUN ] BufLiveRange.MulRangeLine 2023-01-11T22:11:13.5609272Z [ OK ] BufLiveRange.MulRangeLine (0 ms) 2023-01-11T22:11:13.5609827Z [----------] 2 tests from BufLiveRange (0 ms total) 2023-01-11T22:11:13.5610081Z 2023-01-11T22:11:13.5610338Z [----------] 6 tests from MemPlanning 2023-01-11T22:11:13.5610866Z [ RUN ] MemPlanning.MemReuseWithTypeCast 2023-01-11T22:11:13.6239447Z [ OK ] MemPlanning.MemReuseWithTypeCast (63 ms) 2023-01-11T22:11:13.6240366Z [ RUN ] 
MemPlanning.NoMemReuseForLargerType 2023-01-11T22:11:13.6967698Z [ OK ] MemPlanning.NoMemReuseForLargerType (72 ms) 2023-01-11T22:11:13.6968551Z [ RUN ] MemPlanning.SameBufSizeMemReuse 2023-01-11T22:11:13.9529034Z [ OK ] MemPlanning.SameBufSizeMemReuse (256 ms) 2023-01-11T22:11:13.9530015Z [ RUN ] MemPlanning.SameBufSizeMultiMemReuses 2023-01-11T22:11:14.2145381Z [ OK ] MemPlanning.SameBufSizeMultiMemReuses (261 ms) 2023-01-11T22:11:14.2146140Z [ RUN ] MemPlanning.SameBufSizeMultiMemReusesOfOneBuf 2023-01-11T22:11:14.5498421Z [ OK ] MemPlanning.SameBufSizeMultiMemReusesOfOneBuf (335 ms) 2023-01-11T22:11:14.5498862Z [ RUN ] MemPlanning.SmallerBufSizeNonMemReuse 2023-01-11T22:11:14.6958050Z [ OK ] MemPlanning.SmallerBufSizeNonMemReuse (145 ms) 2023-01-11T22:11:14.6958669Z [----------] 6 tests from MemPlanning (1135 ms total) 2023-01-11T22:11:14.6958870Z 2023-01-11T22:11:14.6959106Z [----------] 45 tests from Reductions 2023-01-11T22:11:14.6959453Z [ RUN ] Reductions.ReduceSum0D_1 2023-01-11T22:11:14.6959932Z [ OK ] Reductions.ReduceSum0D_1 (0 ms) 2023-01-11T22:11:14.6960339Z [ RUN ] Reductions.ReduceSum0D_2 2023-01-11T22:11:14.6960655Z [ OK ] Reductions.ReduceSum0D_2 (0 ms) 2023-01-11T22:11:14.6960931Z [ RUN ] Reductions.ReduceSum1D 2023-01-11T22:11:14.6961282Z [ OK ] Reductions.ReduceSum1D (0 ms) 2023-01-11T22:11:14.6961568Z [ RUN ] Reductions.ReduceSum2D 2023-01-11T22:11:14.6965066Z [ OK ] Reductions.ReduceSum2D (0 ms) 2023-01-11T22:11:14.6965358Z [ RUN ] Reductions.ReduceSum3D 2023-01-11T22:11:14.6995885Z [ OK ] Reductions.ReduceSum3D (3 ms) 2023-01-11T22:11:14.6996303Z [ RUN ] Reductions.ReduceSum10D 2023-01-11T22:11:15.3623074Z [ OK ] Reductions.ReduceSum10D (662 ms) 2023-01-11T22:11:15.3623468Z [ RUN ] Reductions.ReduceProduct 2023-01-11T22:11:15.3625193Z [ OK ] Reductions.ReduceProduct (0 ms) 2023-01-11T22:11:15.3625734Z [ RUN ] Reductions.ReduceMax 2023-01-11T22:11:15.3629291Z [ OK ] Reductions.ReduceMax (0 ms) 2023-01-11T22:11:15.3629912Z [ RUN ] Reductions.ReduceMinCustomInitializer 2023-01-11T22:11:15.3631768Z [ OK ] Reductions.ReduceMinCustomInitializer (0 ms) 2023-01-11T22:11:15.3632154Z [ RUN ] Reductions.ReduceAnyAll 2023-01-11T22:11:15.3652717Z [ OK ] Reductions.ReduceAnyAll (2 ms) 2023-01-11T22:11:15.3653104Z [ RUN ] Reductions.ReduceMatmul2D 2023-01-11T22:11:15.3661795Z [ OK ] Reductions.ReduceMatmul2D (0 ms) 2023-01-11T22:11:15.3662637Z [ RUN ] Reductions.ReduceRfactorLike 2023-01-11T22:11:15.3673009Z [ OK ] Reductions.ReduceRfactorLike (1 ms) 2023-01-11T22:11:15.3673332Z [ RUN ] Reductions.ReduceAsProducer 2023-01-11T22:11:15.3694629Z [ OK ] Reductions.ReduceAsProducer (2 ms) 2023-01-11T22:11:15.3694935Z [ RUN ] Reductions.ReduceAsConsumer 2023-01-11T22:11:15.3728935Z [ OK ] Reductions.ReduceAsConsumer (3 ms) 2023-01-11T22:11:15.3729460Z [ RUN ] Reductions.SplitReduceAxis 2023-01-11T22:11:15.3744835Z [ OK ] Reductions.SplitReduceAxis (1 ms) 2023-01-11T22:11:15.3745169Z [ RUN ] Reductions.SplitNonReduceAxis 2023-01-11T22:11:15.3777283Z [ OK ] Reductions.SplitNonReduceAxis (3 ms) 2023-01-11T22:11:15.3777646Z [ RUN ] Reductions.ReorderedReductionInitializer 2023-01-11T22:11:15.3815720Z [ OK ] Reductions.ReorderedReductionInitializer (3 ms) 2023-01-11T22:11:15.3816079Z [ RUN ] Reductions.ReduceRfactor 2023-01-11T22:11:15.3826454Z [ OK ] Reductions.ReduceRfactor (1 ms) 2023-01-11T22:11:15.3826793Z [ RUN ] Reductions.Reduce3DRfactorInner 2023-01-11T22:11:15.3950734Z [ OK ] Reductions.Reduce3DRfactorInner (12 ms) 2023-01-11T22:11:15.3951080Z [ RUN ] 
Reductions.Reduce3DRfactorOuter 2023-01-11T22:11:15.4079684Z [ OK ] Reductions.Reduce3DRfactorOuter (12 ms) 2023-01-11T22:11:15.4080111Z [ RUN ] Reductions.ReduceRepeatedInternalRfactor 2023-01-11T22:11:15.5791009Z [ OK ] Reductions.ReduceRepeatedInternalRfactor (171 ms) 2023-01-11T22:11:15.5791453Z [ RUN ] Reductions.ReduceSplitTail 2023-01-11T22:11:15.6179622Z [ OK ] Reductions.ReduceSplitTail (38 ms) 2023-01-11T22:11:15.6179966Z [ RUN ] Reductions.ReduceSplitNoTail 2023-01-11T22:11:15.6627275Z [ OK ] Reductions.ReduceSplitNoTail (44 ms) 2023-01-11T22:11:15.6627629Z [ RUN ] Reductions.ReduceOverSplitTail 2023-01-11T22:11:15.7002616Z [ OK ] Reductions.ReduceOverSplitTail (37 ms) 2023-01-11T22:11:15.7002960Z [ RUN ] Reductions.ReduceSplitMask 2023-01-11T22:11:15.7492638Z [ OK ] Reductions.ReduceSplitMask (49 ms) 2023-01-11T22:11:15.7492966Z [ RUN ] Reductions.ReduceSplitNoMask 2023-01-11T22:11:15.7940878Z [ OK ] Reductions.ReduceSplitNoMask (44 ms) 2023-01-11T22:11:15.7941241Z [ RUN ] Reductions.ReduceOverSplitMask 2023-01-11T22:11:15.8340886Z [ OK ] Reductions.ReduceOverSplitMask (39 ms) 2023-01-11T22:11:15.8341221Z [ RUN ] Reductions.ReduceSplitRfactor 2023-01-11T22:11:15.8383497Z [ OK ] Reductions.ReduceSplitRfactor (4 ms) 2023-01-11T22:11:15.8383853Z [ RUN ] Reductions.ReduceOverSplitRfactor 2023-01-11T22:11:15.8397286Z [ OK ] Reductions.ReduceOverSplitRfactor (1 ms) 2023-01-11T22:11:15.8397690Z [ RUN ] Reductions.ReduceInlineReduction 2023-01-11T22:11:15.8398483Z [ OK ] Reductions.ReduceInlineReduction (0 ms) 2023-01-11T22:11:15.8398820Z [ RUN ] Reductions.ReduceInlineConsumer 2023-01-11T22:11:15.8486367Z [ OK ] Reductions.ReduceInlineConsumer (8 ms) 2023-01-11T22:11:15.8486726Z [ RUN ] Reductions.ReduceInlineReducerInternal 2023-01-11T22:11:15.8575896Z [ OK ] Reductions.ReduceInlineReducerInternal (8 ms) 2023-01-11T22:11:15.8576322Z [ RUN ] Reductions.ReductionCacheAccessesOperatorAxis 2023-01-11T22:11:15.8622238Z [ OK ] Reductions.ReductionCacheAccessesOperatorAxis (4 ms) 2023-01-11T22:11:15.8622933Z [ RUN ] Reductions.ReductionCacheAccessesOuterReduceAxis 2023-01-11T22:11:15.8665455Z [ OK ] Reductions.ReductionCacheAccessesOuterReduceAxis (4 ms) 2023-01-11T22:11:15.8665891Z [ RUN ] Reductions.ReductionCacheAccessesInnerReduceAxis 2023-01-11T22:11:15.8708310Z [ OK ] Reductions.ReductionCacheAccessesInnerReduceAxis (4 ms) 2023-01-11T22:11:15.8708707Z [ RUN ] Reductions.ReductionCacheBodyAccess 2023-01-11T22:11:15.8723047Z [ OK ] Reductions.ReductionCacheBodyAccess (1 ms) 2023-01-11T22:11:15.8723439Z [ RUN ] Reductions.ReductionCacheConsumerAccess 2023-01-11T22:11:15.8740194Z [ OK ] Reductions.ReductionCacheConsumerAccess (1 ms) 2023-01-11T22:11:15.8740817Z [ RUN ] Reductions.ReductionSplitCacheConsumerAccess 2023-01-11T22:11:15.8759973Z [ OK ] Reductions.ReductionSplitCacheConsumerAccess (1 ms) 2023-01-11T22:11:15.8760398Z [ RUN ] Reductions.ReductionReorderCacheConsumerAccess 2023-01-11T22:11:15.8777367Z [ OK ] Reductions.ReductionReorderCacheConsumerAccess (1 ms) 2023-01-11T22:11:15.8777794Z [ RUN ] Reductions.ReductionRfactorCacheTempOuter 2023-01-11T22:11:15.8952498Z [ OK ] Reductions.ReductionRfactorCacheTempOuter (17 ms) 2023-01-11T22:11:15.8952958Z [ RUN ] Reductions.ReductionRfactorCacheTempInner 2023-01-11T22:11:15.9091769Z [ OK ] Reductions.ReductionRfactorCacheTempInner (13 ms) 2023-01-11T22:11:15.9092130Z [ RUN ] Reductions.ReductionVectorize 2023-01-11T22:11:15.9103151Z [ OK ] Reductions.ReductionVectorize (1 ms) 2023-01-11T22:11:15.9103671Z [ RUN ] 
Reductions.ReductionVectorizeInner 2023-01-11T22:11:15.9104028Z [ OK ] Reductions.ReductionVectorizeInner (0 ms) 2023-01-11T22:11:15.9104381Z [ RUN ] Reductions.ReductionVectorizeRfactor 2023-01-11T22:11:15.9117134Z [ OK ] Reductions.ReductionVectorizeRfactor (1 ms) 2023-01-11T22:11:15.9117527Z [ RUN ] Reductions.InitFunction 2023-01-11T22:11:15.9119102Z [ OK ] Reductions.InitFunction (0 ms) 2023-01-11T22:11:15.9119473Z [----------] 45 tests from Reductions (1216 ms total) 2023-01-11T22:11:15.9119620Z 2023-01-11T22:11:15.9119823Z [----------] 69 tests from Registerizer 2023-01-11T22:11:15.9120133Z [ RUN ] Registerizer.RegisterizerSimple 2023-01-11T22:11:15.9121418Z [ OK ] Registerizer.RegisterizerSimple (0 ms) 2023-01-11T22:11:15.9121818Z [ RUN ] Registerizer.RegisterizerLoop 2023-01-11T22:11:15.9122656Z [ OK ] Registerizer.RegisterizerLoop (0 ms) 2023-01-11T22:11:15.9123094Z [ RUN ] Registerizer.RegisterizerLoopFixedLoad 2023-01-11T22:11:15.9124113Z [ OK ] Registerizer.RegisterizerLoopFixedLoad (0 ms) 2023-01-11T22:11:15.9124478Z [ RUN ] Registerizer.RegisterizerLoopInternal 2023-01-11T22:11:15.9126002Z [ OK ] Registerizer.RegisterizerLoopInternal (0 ms) 2023-01-11T22:11:15.9126410Z [ RUN ] Registerizer.RegisterizerLoopInternalLoadOverlap 2023-01-11T22:11:15.9128250Z [ OK ] Registerizer.RegisterizerLoopInternalLoadOverlap (0 ms) 2023-01-11T22:11:15.9128763Z [ RUN ] Registerizer.RegisterizerLoopInternalRepeated 2023-01-11T22:11:15.9132297Z [ OK ] Registerizer.RegisterizerLoopInternalRepeated (0 ms) 2023-01-11T22:11:15.9132810Z [ RUN ] Registerizer.RegisterizerLoopInternalRepeatedOverlapLoopVar 2023-01-11T22:11:15.9135681Z [ OK ] Registerizer.RegisterizerLoopInternalRepeatedOverlapLoopVar (0 ms) 2023-01-11T22:11:15.9136208Z [ RUN ] Registerizer.RegisterizerLoopInternalRepeatedOverlapOther 2023-01-11T22:11:15.9139278Z [ OK ] Registerizer.RegisterizerLoopInternalRepeatedOverlapOther (0 ms) 2023-01-11T22:11:15.9139696Z [ RUN ] Registerizer.RegisterizerMultiVar 2023-01-11T22:11:15.9164591Z [ OK ] Registerizer.RegisterizerMultiVar (2 ms) 2023-01-11T22:11:15.9164959Z [ RUN ] Registerizer.RegisterizerVariableLoad 2023-01-11T22:11:15.9166740Z [ OK ] Registerizer.RegisterizerVariableLoad (0 ms) 2023-01-11T22:11:15.9167119Z [ RUN ] Registerizer.RegisterizerSymbolicIndices 2023-01-11T22:11:15.9168623Z [ OK ] Registerizer.RegisterizerSymbolicIndices (0 ms) 2023-01-11T22:11:15.9169024Z [ RUN ] Registerizer.RegisterizerMultiLoop 2023-01-11T22:11:15.9170979Z [ OK ] Registerizer.RegisterizerMultiLoop (0 ms) 2023-01-11T22:11:15.9171481Z [ RUN ] Registerizer.RegisterizerRepeated 2023-01-11T22:11:15.9174126Z [ OK ] Registerizer.RegisterizerRepeated (0 ms) 2023-01-11T22:11:15.9174746Z [ RUN ] Registerizer.RegisterizerNoLoads 2023-01-11T22:11:15.9175603Z [ OK ] Registerizer.RegisterizerNoLoads (0 ms) 2023-01-11T22:11:15.9176206Z [ RUN ] Registerizer.RegisterizerNoRepeatedStores 2023-01-11T22:11:15.9177757Z [ OK ] Registerizer.RegisterizerNoRepeatedStores (0 ms) 2023-01-11T22:11:15.9178361Z [ RUN ] Registerizer.RegisterizerMultiVarOverlap 2023-01-11T22:11:15.9181056Z [ OK ] Registerizer.RegisterizerMultiVarOverlap (0 ms) 2023-01-11T22:11:15.9181647Z [ RUN ] Registerizer.RegisterizerAllocs 2023-01-11T22:11:15.9184291Z [ OK ] Registerizer.RegisterizerAllocs (0 ms) 2023-01-11T22:11:15.9184900Z [ RUN ] Registerizer.RegisterizerNoInitializer 2023-01-11T22:11:15.9185987Z [ OK ] Registerizer.RegisterizerNoInitializer (0 ms) 2023-01-11T22:11:15.9186571Z [ RUN ] Registerizer.RegisterizerNoInitializerLoopVar 
2023-01-11T22:11:15.9187416Z [ OK ] Registerizer.RegisterizerNoInitializerLoopVar (0 ms) 2023-01-11T22:11:15.9188027Z [ RUN ] Registerizer.RegisterizerLoadThenStore 2023-01-11T22:11:15.9189410Z [ OK ] Registerizer.RegisterizerLoadThenStore (0 ms) 2023-01-11T22:11:15.9189998Z [ RUN ] Registerizer.RegisterizerParallelized 2023-01-11T22:11:15.9210888Z [ OK ] Registerizer.RegisterizerParallelized (2 ms) 2023-01-11T22:11:15.9211468Z [ RUN ] Registerizer.RegisterizerConditionAfter 2023-01-11T22:11:15.9212726Z [ OK ] Registerizer.RegisterizerConditionAfter (0 ms) 2023-01-11T22:11:15.9213338Z [ RUN ] Registerizer.RegisterizerConditionBefore 2023-01-11T22:11:15.9214429Z [ OK ] Registerizer.RegisterizerConditionBefore (0 ms) 2023-01-11T22:11:15.9215047Z [ RUN ] Registerizer.RegisterizerConditionInside 2023-01-11T22:11:15.9216775Z [ OK ] Registerizer.RegisterizerConditionInside (0 ms) 2023-01-11T22:11:15.9217424Z [ RUN ] Registerizer.RegisterizerConditionInsideOverlap1 2023-01-11T22:11:15.9219672Z [ OK ] Registerizer.RegisterizerConditionInsideOverlap1 (0 ms) 2023-01-11T22:11:15.9220309Z [ RUN ] Registerizer.RegisterizerConditionInsideOverlap2 2023-01-11T22:11:15.9223636Z [ OK ] Registerizer.RegisterizerConditionInsideOverlap2 (0 ms) 2023-01-11T22:11:15.9224238Z [ RUN ] Registerizer.RegisterizerConditionHidden 2023-01-11T22:11:15.9225130Z [ OK ] Registerizer.RegisterizerConditionHidden (0 ms) 2023-01-11T22:11:15.9225765Z [ RUN ] Registerizer.RegisterizerConditionUnhidden 2023-01-11T22:11:15.9227453Z [ OK ] Registerizer.RegisterizerConditionUnhidden (0 ms) 2023-01-11T22:11:15.9228055Z [ RUN ] Registerizer.RegisterizerCondCondition 2023-01-11T22:11:15.9229224Z [ OK ] Registerizer.RegisterizerCondCondition (0 ms) 2023-01-11T22:11:15.9229843Z [ RUN ] Registerizer.RegisterizerCondConditionUnhidden 2023-01-11T22:11:15.9231582Z [ OK ] Registerizer.RegisterizerCondConditionUnhidden (0 ms) 2023-01-11T22:11:15.9232185Z [ RUN ] Registerizer.RegisterizerIfThenElseHidden 2023-01-11T22:11:15.9235344Z [ OK ] Registerizer.RegisterizerIfThenElseHidden (0 ms) 2023-01-11T22:11:15.9235955Z [ RUN ] Registerizer.RegisterizerIfThenElseUnhidden 2023-01-11T22:11:15.9239635Z [ OK ] Registerizer.RegisterizerIfThenElseUnhidden (0 ms) 2023-01-11T22:11:15.9240272Z [ RUN ] Registerizer.RegisterizerIfThenElseNested 2023-01-11T22:11:15.9241802Z [ OK ] Registerizer.RegisterizerIfThenElseNested (0 ms) 2023-01-11T22:11:15.9242422Z [ RUN ] Registerizer.RegisterizerIfThenElseInternal 2023-01-11T22:11:15.9244440Z [ OK ] Registerizer.RegisterizerIfThenElseInternal (0 ms) 2023-01-11T22:11:15.9245070Z [ RUN ] Registerizer.RegisterizerIfThenElseCondition 2023-01-11T22:11:15.9246120Z [ OK ] Registerizer.RegisterizerIfThenElseCondition (0 ms) 2023-01-11T22:11:15.9246783Z [ RUN ] Registerizer.RegisterizerIfThenElseConditionUnhidden 2023-01-11T22:11:15.9247993Z [ OK ] Registerizer.RegisterizerIfThenElseConditionUnhidden (0 ms) 2023-01-11T22:11:15.9248431Z [ RUN ] Registerizer.RegisterizerConditionBranchOnly 2023-01-11T22:11:15.9259403Z [ OK ] Registerizer.RegisterizerConditionBranchOnly (1 ms) 2023-01-11T22:11:15.9259856Z [ RUN ] Registerizer.RegisterizerCondIfThenElse 2023-01-11T22:11:15.9261627Z [ OK ] Registerizer.RegisterizerCondIfThenElse (0 ms) 2023-01-11T22:11:15.9262104Z [ RUN ] Registerizer.RegisterizerIfThenElseLoop 2023-01-11T22:11:15.9263851Z [ OK ] Registerizer.RegisterizerIfThenElseLoop (0 ms) 2023-01-11T22:11:15.9264249Z [ RUN ] Registerizer.RegisterizerIfThenElseLoopCut 2023-01-11T22:11:15.9265822Z [ OK ] 
Registerizer.RegisterizerIfThenElseLoopCut (0 ms) 2023-01-11T22:11:15.9266280Z [ RUN ] Registerizer.RegisterizerPartialAfter 2023-01-11T22:11:15.9269075Z [ OK ] Registerizer.RegisterizerPartialAfter (0 ms) 2023-01-11T22:11:15.9269513Z [ RUN ] Registerizer.RegisterizerPartialBefore 2023-01-11T22:11:15.9272090Z [ OK ] Registerizer.RegisterizerPartialBefore (0 ms) 2023-01-11T22:11:15.9272598Z [ RUN ] Registerizer.RegisterizerPartialInside 2023-01-11T22:11:15.9275958Z [ OK ] Registerizer.RegisterizerPartialInside (0 ms) 2023-01-11T22:11:15.9276415Z [ RUN ] Registerizer.RegisterizerPartialCondition 2023-01-11T22:11:15.9280295Z [ OK ] Registerizer.RegisterizerPartialCondition (0 ms) 2023-01-11T22:11:15.9280732Z [ RUN ] Registerizer.RegisterizerPartialConditionInternalCut 2023-01-11T22:11:15.9282160Z [ OK ] Registerizer.RegisterizerPartialConditionInternalCut (0 ms) 2023-01-11T22:11:15.9282681Z [ RUN ] Registerizer.RegisterizerPartialConditionInternalStart 2023-01-11T22:11:15.9284096Z [ OK ] Registerizer.RegisterizerPartialConditionInternalStart (0 ms) 2023-01-11T22:11:15.9284582Z [ RUN ] Registerizer.RegisterizerPartialOverlapsTwo 2023-01-11T22:11:15.9287080Z [ OK ] Registerizer.RegisterizerPartialOverlapsTwo (0 ms) 2023-01-11T22:11:15.9287535Z [ RUN ] Registerizer.RegisterizerNestedBlocks 2023-01-11T22:11:15.9289379Z [ OK ] Registerizer.RegisterizerNestedBlocks (0 ms) 2023-01-11T22:11:15.9289758Z [ RUN ] Registerizer.RegisterizerNestedConditions 2023-01-11T22:11:15.9291475Z [ OK ] Registerizer.RegisterizerNestedConditions (0 ms) 2023-01-11T22:11:15.9291895Z [ RUN ] Registerizer.RegisterizerNestedConditionsUnhidden 2023-01-11T22:11:15.9293840Z [ OK ] Registerizer.RegisterizerNestedConditionsUnhidden (0 ms) 2023-01-11T22:11:15.9294288Z [ RUN ] Registerizer.RegisterizerNestedConditionsHiddenFirst 2023-01-11T22:11:15.9296830Z [ OK ] Registerizer.RegisterizerNestedConditionsHiddenFirst (0 ms) 2023-01-11T22:11:15.9297296Z [ RUN ] Registerizer.RegisterizerNestedConditionsHiddenSecond 2023-01-11T22:11:15.9299745Z [ OK ] Registerizer.RegisterizerNestedConditionsHiddenSecond (0 ms) 2023-01-11T22:11:15.9300171Z [ RUN ] Registerizer.RegisterizerNestedConditionsCut 2023-01-11T22:11:15.9302009Z [ OK ] Registerizer.RegisterizerNestedConditionsCut (0 ms) 2023-01-11T22:11:15.9302738Z [ RUN ] Registerizer.RegisterizerNestedConditionLoopHidden 2023-01-11T22:11:15.9304575Z [ OK ] Registerizer.RegisterizerNestedConditionLoopHidden (0 ms) 2023-01-11T22:11:15.9305027Z [ RUN ] Registerizer.RegisterizerNestedConditionThreeDeep 2023-01-11T22:11:15.9310225Z [ OK ] Registerizer.RegisterizerNestedConditionThreeDeep (0 ms) 2023-01-11T22:11:15.9310681Z [ RUN ] Registerizer.RegisterizerNestedLoopSimple 2023-01-11T22:11:15.9312212Z [ OK ] Registerizer.RegisterizerNestedLoopSimple (0 ms) 2023-01-11T22:11:15.9312657Z [ RUN ] Registerizer.RegisterizerHiddenAccessYes 2023-01-11T22:11:15.9315395Z [ OK ] Registerizer.RegisterizerHiddenAccessYes (0 ms) 2023-01-11T22:11:15.9315841Z [ RUN ] Registerizer.RegisterizerHiddenAccessNo 2023-01-11T22:11:15.9318354Z [ OK ] Registerizer.RegisterizerHiddenAccessNo (0 ms) 2023-01-11T22:11:15.9318822Z [ RUN ] Registerizer.RegisterizerHiddenAccessMultiLoop 2023-01-11T22:11:15.9322369Z [ OK ] Registerizer.RegisterizerHiddenAccessMultiLoop (0 ms) 2023-01-11T22:11:15.9322805Z [ RUN ] Registerizer.RegisterizerTwoConditionalLoops 2023-01-11T22:11:15.9325043Z [ OK ] Registerizer.RegisterizerTwoConditionalLoops (0 ms) 2023-01-11T22:11:15.9325517Z [ RUN ] Registerizer.RegisterizerTwoConditionalLoopsCut 
2023-01-11T22:11:15.9328234Z [ OK ] Registerizer.RegisterizerTwoConditionalLoopsCut (0 ms) 2023-01-11T22:11:15.9328660Z [ RUN ] Registerizer.RegisterizerLoopLetVar 2023-01-11T22:11:15.9330050Z [ OK ] Registerizer.RegisterizerLoopLetVar (0 ms) 2023-01-11T22:11:15.9330491Z [ RUN ] Registerizer.RegisterizerLoopLetVarOuter 2023-01-11T22:11:15.9331621Z [ OK ] Registerizer.RegisterizerLoopLetVarOuter (0 ms) 2023-01-11T22:11:15.9332047Z [ RUN ] Registerizer.RegisterizerMultiDim 2023-01-11T22:11:15.9333822Z [ OK ] Registerizer.RegisterizerMultiDim (0 ms) 2023-01-11T22:11:15.9334267Z [ RUN ] Registerizer.RegisterizerMultiDimPartial 2023-01-11T22:11:15.9336596Z [ OK ] Registerizer.RegisterizerMultiDimPartial (0 ms) 2023-01-11T22:11:15.9337024Z [ RUN ] Registerizer.RegisterizerMultiDimOverlap 2023-01-11T22:11:15.9339495Z [ OK ] Registerizer.RegisterizerMultiDimOverlap (0 ms) 2023-01-11T22:11:15.9339955Z [ RUN ] Registerizer.RegisterizerMultiDimPartialOverlap 2023-01-11T22:11:15.9342592Z [ OK ] Registerizer.RegisterizerMultiDimPartialOverlap (0 ms) 2023-01-11T22:11:15.9343029Z [ RUN ] Registerizer.RegisterizerMultiDim3DReduction1 2023-01-11T22:11:15.9345950Z [ OK ] Registerizer.RegisterizerMultiDim3DReduction1 (0 ms) 2023-01-11T22:11:15.9346366Z [ RUN ] Registerizer.RegisterizerMultiDim3DReduction2 2023-01-11T22:11:15.9349907Z [ OK ] Registerizer.RegisterizerMultiDim3DReduction2 (0 ms) 2023-01-11T22:11:15.9350426Z [----------] 69 tests from Registerizer (23 ms total) 2023-01-11T22:11:15.9350612Z 2023-01-11T22:11:15.9350793Z [----------] 92 tests from Simplify 2023-01-11T22:11:15.9351122Z [ RUN ] Simplify.ConstantFoldSimple 2023-01-11T22:11:15.9351444Z [ OK ] Simplify.ConstantFoldSimple (0 ms) 2023-01-11T22:11:15.9351762Z [ RUN ] Simplify.ConstantFoldTwoLayer 2023-01-11T22:11:15.9352089Z [ OK ] Simplify.ConstantFoldTwoLayer (0 ms) 2023-01-11T22:11:15.9352390Z [ RUN ] Simplify.ConstantFoldShifts 2023-01-11T22:11:15.9373854Z [ OK ] Simplify.ConstantFoldShifts (2 ms) 2023-01-11T22:11:15.9374369Z [ RUN ] Simplify.ConstantFoldBitwise 2023-01-11T22:11:15.9374769Z [ OK ] Simplify.ConstantFoldBitwise (0 ms) 2023-01-11T22:11:15.9375268Z [ RUN ] Simplify.ConstantFoldMultiOp 2023-01-11T22:11:15.9375686Z [ OK ] Simplify.ConstantFoldMultiOp (0 ms) 2023-01-11T22:11:15.9376304Z [ RUN ] Simplify.ConstantFoldMinMax 2023-01-11T22:11:15.9376701Z [ OK ] Simplify.ConstantFoldMinMax (0 ms) 2023-01-11T22:11:15.9377108Z [ RUN ] Simplify.ConstantFoldIntrinsics 2023-01-11T22:11:15.9377484Z [ OK ] Simplify.ConstantFoldIntrinsics (0 ms) 2023-01-11T22:11:15.9377879Z [ RUN ] Simplify.ConstantFoldCastToBool 2023-01-11T22:11:15.9378328Z [ OK ] Simplify.ConstantFoldCastToBool (0 ms) 2023-01-11T22:11:15.9378722Z [ RUN ] Simplify.ConstantFoldWithVar 2023-01-11T22:11:15.9379044Z [ OK ] Simplify.ConstantFoldWithVar (0 ms) 2023-01-11T22:11:15.9379397Z [ RUN ] Simplify.ConditionalSelectFoldSimple 2023-01-11T22:11:15.9379884Z [ OK ] Simplify.ConditionalSelectFoldSimple (0 ms) 2023-01-11T22:11:15.9380319Z [ RUN ] Simplify.ConditionalSelectFoldTwoLayer 2023-01-11T22:11:15.9380981Z [ OK ] Simplify.ConditionalSelectFoldTwoLayer (0 ms) 2023-01-11T22:11:15.9381472Z [ RUN ] Simplify.ConditionalSelectFoldWithVar 2023-01-11T22:11:15.9381996Z [ OK ] Simplify.ConditionalSelectFoldWithVar (0 ms) 2023-01-11T22:11:15.9382667Z [ RUN ] Simplify.UnFoldableExpr 2023-01-11T22:11:15.9383209Z [ OK ] Simplify.UnFoldableExpr (0 ms) 2023-01-11T22:11:15.9383716Z [ RUN ] Simplify.HashSimple 2023-01-11T22:11:15.9384018Z [ OK ] Simplify.HashSimple (0 ms) 
2023-01-11T22:11:15.9384292Z [ RUN ] Simplify.HashEquivalence 2023-01-11T22:11:15.9384590Z [ OK ] Simplify.HashEquivalence (0 ms) 2023-01-11T22:11:15.9384895Z [ RUN ] Simplify.HashEquivalenceRand 2023-01-11T22:11:15.9385201Z [ OK ] Simplify.HashEquivalenceRand (0 ms) 2023-01-11T22:11:15.9385539Z [ RUN ] Simplify.HashEquivalenceAfterFolding 2023-01-11T22:11:15.9385901Z [ OK ] Simplify.HashEquivalenceAfterFolding (0 ms) 2023-01-11T22:11:15.9386241Z [ RUN ] Simplify.HashDifferenceTypes 2023-01-11T22:11:15.9386548Z [ OK ] Simplify.HashDifferenceTypes (0 ms) 2023-01-11T22:11:15.9386857Z [ RUN ] Simplify.HashLargeExpression 2023-01-11T22:11:15.9394551Z [ OK ] Simplify.HashLargeExpression (1 ms) 2023-01-11T22:11:15.9395112Z [ RUN ] Simplify.HashForLoopOptions 2023-01-11T22:11:15.9395703Z [ OK ] Simplify.HashForLoopOptions (0 ms) 2023-01-11T22:11:15.9396222Z [ RUN ] Simplify.SimplifyAdd 2023-01-11T22:11:15.9396719Z [ OK ] Simplify.SimplifyAdd (0 ms) 2023-01-11T22:11:15.9397187Z [ RUN ] Simplify.SimplifySub 2023-01-11T22:11:15.9397656Z [ OK ] Simplify.SimplifySub (0 ms) 2023-01-11T22:11:15.9398179Z [ RUN ] Simplify.SimplifyMultiLayer 2023-01-11T22:11:15.9398749Z [ OK ] Simplify.SimplifyMultiLayer (0 ms) 2023-01-11T22:11:15.9399324Z [ RUN ] Simplify.SimplifyMultiTerm 2023-01-11T22:11:15.9399980Z [ OK ] Simplify.SimplifyMultiTerm (0 ms) 2023-01-11T22:11:15.9400325Z [ RUN ] Simplify.SimplifyCasts 2023-01-11T22:11:15.9400615Z [ OK ] Simplify.SimplifyCasts (0 ms) 2023-01-11T22:11:15.9400930Z [ RUN ] Simplify.SimplifyEliminatesNoOps 2023-01-11T22:11:15.9401258Z [ OK ] Simplify.SimplifyEliminatesNoOps (0 ms) 2023-01-11T22:11:15.9401575Z [ RUN ] Simplify.SimplifyMultiVar 2023-01-11T22:11:15.9401880Z [ OK ] Simplify.SimplifyMultiVar (0 ms) 2023-01-11T22:11:15.9402195Z [ RUN ] Simplify.SimplifyEliminatesVar 2023-01-11T22:11:15.9409878Z [ OK ] Simplify.SimplifyEliminatesVar (1 ms) 2023-01-11T22:11:15.9410458Z [ RUN ] Simplify.SimplifyAdds 2023-01-11T22:11:15.9412175Z [ OK ] Simplify.SimplifyAdds (0 ms) 2023-01-11T22:11:15.9412682Z [ RUN ] Simplify.SimplifyMuls 2023-01-11T22:11:15.9416224Z [ OK ] Simplify.SimplifyMuls (0 ms) 2023-01-11T22:11:15.9416530Z [ RUN ] Simplify.SimplifySubs 2023-01-11T22:11:15.9422013Z [ OK ] Simplify.SimplifySubs (0 ms) 2023-01-11T22:11:15.9422500Z [ RUN ] Simplify.SimplifyDiv 2023-01-11T22:11:15.9422795Z [ OK ] Simplify.SimplifyDiv (0 ms) 2023-01-11T22:11:15.9423120Z [ RUN ] Simplify.SimplifyDivWithLoopContext0 2023-01-11T22:11:15.9424115Z [ OK ] Simplify.SimplifyDivWithLoopContext0 (0 ms) 2023-01-11T22:11:15.9424481Z [ RUN ] Simplify.SimplifyDivWithLoopContext1 2023-01-11T22:11:15.9426058Z [ OK ] Simplify.SimplifyDivWithLoopContext1 (0 ms) 2023-01-11T22:11:15.9426450Z [ RUN ] Simplify.SimplifyDivWithLoopContext2 2023-01-11T22:11:15.9427956Z [ OK ] Simplify.SimplifyDivWithLoopContext2 (0 ms) 2023-01-11T22:11:15.9428375Z [ RUN ] Simplify.SimplifyDivWithLoopContext3 2023-01-11T22:11:15.9429263Z [ OK ] Simplify.SimplifyDivWithLoopContext3 (0 ms) 2023-01-11T22:11:15.9429621Z [ RUN ] Simplify.SimplifyDivWithLoopContext4 2023-01-11T22:11:15.9431756Z [ OK ] Simplify.SimplifyDivWithLoopContext4 (0 ms) 2023-01-11T22:11:15.9432121Z [ RUN ] Simplify.SimplifyDivWithLoopContext5 2023-01-11T22:11:15.9434640Z [ OK ] Simplify.SimplifyDivWithLoopContext5 (0 ms) 2023-01-11T22:11:15.9434994Z [ RUN ] Simplify.SimplifyDivWithLoopContext6 2023-01-11T22:11:15.9437892Z [ OK ] Simplify.SimplifyDivWithLoopContext6 (0 ms) 2023-01-11T22:11:15.9438300Z [ RUN ] Simplify.SimplifyDivWithLoopContext7 
2023-01-11T22:11:15.9440001Z [ OK ] Simplify.SimplifyDivWithLoopContext7 (0 ms) 2023-01-11T22:11:15.9440370Z [ RUN ] Simplify.SimplifyModWithLoopContext0 2023-01-11T22:11:15.9440734Z [ OK ] Simplify.SimplifyModWithLoopContext0 (0 ms) 2023-01-11T22:11:15.9441097Z [ RUN ] Simplify.SimplifyModWithLoopContext1 2023-01-11T22:11:15.9442756Z [ OK ] Simplify.SimplifyModWithLoopContext1 (0 ms) 2023-01-11T22:11:15.9443115Z [ RUN ] Simplify.SimplifyModWithLoopContext2 2023-01-11T22:11:15.9444638Z [ OK ] Simplify.SimplifyModWithLoopContext2 (0 ms) 2023-01-11T22:11:15.9445002Z [ RUN ] Simplify.SimplifyModWithLoopContext3 2023-01-11T22:11:15.9445412Z [ OK ] Simplify.SimplifyModWithLoopContext3 (0 ms) 2023-01-11T22:11:15.9445768Z [ RUN ] Simplify.SimplifyModWithLoopContext4 2023-01-11T22:11:15.9448082Z [ OK ] Simplify.SimplifyModWithLoopContext4 (0 ms) 2023-01-11T22:11:15.9448431Z [ RUN ] Simplify.SimplifyModWithLoopContext5 2023-01-11T22:11:15.9450915Z [ OK ] Simplify.SimplifyModWithLoopContext5 (0 ms) 2023-01-11T22:11:15.9451275Z [ RUN ] Simplify.SimplifyModWithLoopContext6 2023-01-11T22:11:15.9453912Z [ OK ] Simplify.SimplifyModWithLoopContext6 (0 ms) 2023-01-11T22:11:15.9454437Z [ RUN ] Simplify.SimplifyModWithLoopContext7 2023-01-11T22:11:15.9454877Z [ OK ] Simplify.SimplifyModWithLoopContext7 (0 ms) 2023-01-11T22:11:15.9455191Z [ RUN ] Simplify.SimplifyMod 2023-01-11T22:11:15.9457625Z [ OK ] Simplify.SimplifyMod (0 ms) 2023-01-11T22:11:15.9457946Z [ RUN ] Simplify.SimplifyMultiOp 2023-01-11T22:11:15.9459960Z [ OK ] Simplify.SimplifyMultiOp (0 ms) 2023-01-11T22:11:15.9460267Z [ RUN ] Simplify.SimplifyManyOps 2023-01-11T22:11:15.9463342Z [ OK ] Simplify.SimplifyManyOps (0 ms) 2023-01-11T22:11:15.9463661Z [ RUN ] Simplify.SimplifyFactorization 2023-01-11T22:11:15.9467819Z [ OK ] Simplify.SimplifyFactorization (0 ms) 2023-01-11T22:11:15.9468177Z [ RUN ] Simplify.SimplifyFactorizeUneven 2023-01-11T22:11:15.9469300Z [ OK ] Simplify.SimplifyFactorizeUneven (0 ms) 2023-01-11T22:11:15.9469650Z [ RUN ] Simplify.SimplifyDeeperTerms 2023-01-11T22:11:15.9471430Z [ OK ] Simplify.SimplifyDeeperTerms (0 ms) 2023-01-11T22:11:15.9471802Z [ RUN ] Simplify.SimplifyDeeperDifference 2023-01-11T22:11:15.9472201Z [ OK ] Simplify.SimplifyDeeperDifference (0 ms) 2023-01-11T22:11:15.9472623Z [ RUN ] Simplify.SimplifyFoldComplexDifference 2023-01-11T22:11:15.9473116Z [ OK ] Simplify.SimplifyFoldComplexDifference (0 ms) 2023-01-11T22:11:15.9473526Z [ RUN ] Simplify.SimplifyIfComponents 2023-01-11T22:11:15.9473885Z [ OK ] Simplify.SimplifyIfComponents (0 ms) 2023-01-11T22:11:15.9474339Z [ RUN ] Simplify.SimplifyOpaqueTerms 2023-01-11T22:11:15.9475694Z [ OK ] Simplify.SimplifyOpaqueTerms (0 ms) 2023-01-11T22:11:15.9476201Z [ RUN ] Simplify.SimplifySymbolicMinMax 2023-01-11T22:11:15.9476990Z [ OK ] Simplify.SimplifySymbolicMinMax (0 ms) 2023-01-11T22:11:15.9477306Z [ RUN ] Simplify.SimplifyNestedMax 2023-01-11T22:11:15.9488789Z [ OK ] Simplify.SimplifyNestedMax (1 ms) 2023-01-11T22:11:15.9489344Z [ RUN ] Simplify.SimplifyNestedMin 2023-01-11T22:11:15.9501072Z [ OK ] Simplify.SimplifyNestedMin (1 ms) 2023-01-11T22:11:15.9501639Z [ RUN ] Simplify.SimplifyWontReorderFloat 2023-01-11T22:11:15.9503297Z [ OK ] Simplify.SimplifyWontReorderFloat (0 ms) 2023-01-11T22:11:15.9503899Z [ RUN ] Simplify.SimplifyRoundModPattern 2023-01-11T22:11:15.9511092Z [ OK ] Simplify.SimplifyRoundModPattern (0 ms) 2023-01-11T22:11:15.9511721Z [ RUN ] Simplify.SimplifyRoundModPatternFactorization 2023-01-11T22:11:15.9515323Z [ OK ] 
Simplify.SimplifyRoundModPatternFactorization (0 ms) 2023-01-11T22:11:15.9515767Z [ RUN ] Simplify.SimplifyRoundModPatternMultivar 2023-01-11T22:11:15.9518674Z [ OK ] Simplify.SimplifyRoundModPatternMultivar (0 ms) 2023-01-11T22:11:15.9519058Z [ RUN ] Simplify.SimplifyModRoundModPattern 2023-01-11T22:11:15.9523305Z [ OK ] Simplify.SimplifyModRoundModPattern (0 ms) 2023-01-11T22:11:15.9523711Z [ RUN ] Simplify.SimplifyModRoundModPatternFactorization 2023-01-11T22:11:15.9530414Z [ OK ] Simplify.SimplifyModRoundModPatternFactorization (0 ms) 2023-01-11T22:11:15.9530893Z [ RUN ] Simplify.SimplifyModRoundModPatternMultivar 2023-01-11T22:11:15.9544191Z [ OK ] Simplify.SimplifyModRoundModPatternMultivar (1 ms) 2023-01-11T22:11:15.9544628Z [ RUN ] Simplify.SimplifyDivisionScalarFactorization 2023-01-11T22:11:15.9545402Z [ OK ] Simplify.SimplifyDivisionScalarFactorization (0 ms) 2023-01-11T22:11:15.9546038Z [ RUN ] Simplify.SimplifyConstantBranches 2023-01-11T22:11:15.9546845Z [ OK ] Simplify.SimplifyConstantBranches (0 ms) 2023-01-11T22:11:15.9547466Z [ RUN ] Simplify.SimplifyConstantCond 2023-01-11T22:11:15.9548202Z [ OK ] Simplify.SimplifyConstantCond (0 ms) 2023-01-11T22:11:15.9548807Z [ RUN ] Simplify.SimplifyEliminateEmptyCond 2023-01-11T22:11:15.9549464Z [ OK ] Simplify.SimplifyEliminateEmptyCond (0 ms) 2023-01-11T22:11:15.9550114Z [ RUN ] Simplify.SimplifyConstantComparisons 2023-01-11T22:11:15.9554277Z [ OK ] Simplify.SimplifyConstantComparisons (0 ms) 2023-01-11T22:11:15.9554919Z [ RUN ] Simplify.SimplifySymbolicComparisons 2023-01-11T22:11:15.9561774Z [ OK ] Simplify.SimplifySymbolicComparisons (0 ms) 2023-01-11T22:11:15.9562453Z [ RUN ] Simplify.SimplifyEliminateZeroLengthFor 2023-01-11T22:11:15.9563107Z [ OK ] Simplify.SimplifyEliminateZeroLengthFor (0 ms) 2023-01-11T22:11:15.9563920Z [ RUN ] Simplify.SimplifyOneLoopFor 2023-01-11T22:11:15.9564499Z [ OK ] Simplify.SimplifyOneLoopFor (0 ms) 2023-01-11T22:11:15.9565093Z [ RUN ] Simplify.SimplifyForWontLoseLoopOptions 2023-01-11T22:11:15.9565761Z [ OK ] Simplify.SimplifyForWontLoseLoopOptions (0 ms) 2023-01-11T22:11:15.9566360Z [ RUN ] Simplify.SimplifyMultilevelFor 2023-01-11T22:11:15.9566936Z [ OK ] Simplify.SimplifyMultilevelFor (0 ms) 2023-01-11T22:11:15.9567487Z [ RUN ] Simplify.SimplifyForCleansUp 2023-01-11T22:11:15.9570207Z [ OK ] Simplify.SimplifyForCleansUp (0 ms) 2023-01-11T22:11:15.9570816Z [ RUN ] Simplify.SimplifyEliminateEmptyFor 2023-01-11T22:11:15.9571562Z [ OK ] Simplify.SimplifyEliminateEmptyFor (0 ms) 2023-01-11T22:11:15.9572143Z [ RUN ] Simplify.SimplifyFlattenBlock 2023-01-11T22:11:15.9572832Z [ OK ] Simplify.SimplifyFlattenBlock (0 ms) 2023-01-11T22:11:15.9573455Z [ RUN ] Simplify.SimplifyEliminateZeroLengthAlloc 2023-01-11T22:11:15.9574144Z [ OK ] Simplify.SimplifyEliminateZeroLengthAlloc (0 ms) 2023-01-11T22:11:15.9574756Z [ RUN ] Simplify.DontSimplifyRand 2023-01-11T22:11:15.9575298Z [ OK ] Simplify.DontSimplifyRand (0 ms) 2023-01-11T22:11:15.9575867Z [ RUN ] Simplify.SimplifyReorderForCond 2023-01-11T22:11:15.9580712Z [ OK ] Simplify.SimplifyReorderForCond (0 ms) 2023-01-11T22:11:15.9581334Z [ RUN ] Simplify.SimplifyFuseConditions 2023-01-11T22:11:15.9587557Z [ OK ] Simplify.SimplifyFuseConditions (0 ms) 2023-01-11T22:11:15.9588103Z [ RUN ] Simplify.SimplifySyncThreads 2023-01-11T22:11:15.9588678Z [ OK ] Simplify.SimplifySyncThreads (0 ms) 2023-01-11T22:11:15.9589222Z [ RUN ] Simplify.SimplifyRampSubBroadcast 2023-01-11T22:11:15.9589863Z [ OK ] Simplify.SimplifyRampSubBroadcast (0 ms) 
2023-01-11T22:11:15.9590486Z [ RUN ] Simplify.SimplifyBroadcastTermExpander 2023-01-11T22:11:15.9591157Z [ OK ] Simplify.SimplifyBroadcastTermExpander (0 ms) 2023-01-11T22:11:15.9591791Z [ RUN ] Simplify.CompareSelectLoopBounds 2023-01-11T22:11:15.9673445Z [ OK ] Simplify.CompareSelectLoopBounds (8 ms) 2023-01-11T22:11:15.9674105Z [ RUN ] Simplify.CompareSelectCondAlwaysInLoopBounds 2023-01-11T22:11:15.9674868Z [ OK ] Simplify.CompareSelectCondAlwaysInLoopBounds (0 ms) 2023-01-11T22:11:15.9675462Z [ RUN ] Simplify.IfThenCondAlwaysInLoopBounds 2023-01-11T22:11:15.9675944Z [ OK ] Simplify.IfThenCondAlwaysInLoopBounds (0 ms) 2023-01-11T22:11:15.9676367Z [ RUN ] Simplify.MultiClauseCondAlwaysInLoopBounds 2023-01-11T22:11:15.9676992Z [ OK ] Simplify.MultiClauseCondAlwaysInLoopBounds (0 ms) 2023-01-11T22:11:15.9677660Z [----------] 92 tests from Simplify (32 ms total) 2023-01-11T22:11:15.9677824Z 2023-01-11T22:11:15.9677975Z [----------] 12 tests from TEFuserPass 2023-01-11T22:11:15.9678260Z [ RUN ] TEFuserPass.FuserPass_1 2023-01-11T22:11:15.9683454Z [ OK ] TEFuserPass.FuserPass_1 (0 ms) 2023-01-11T22:11:15.9683752Z [ RUN ] TEFuserPass.FuserPass_2 2023-01-11T22:11:15.9686685Z [ OK ] TEFuserPass.FuserPass_2 (0 ms) 2023-01-11T22:11:15.9687030Z [ RUN ] TEFuserPass.FuserPass_3 2023-01-11T22:11:15.9689284Z [ OK ] TEFuserPass.FuserPass_3 (0 ms) 2023-01-11T22:11:15.9689634Z [ RUN ] TEFuserPass.FuserPass_0DimInput 2023-01-11T22:11:15.9691688Z [ OK ] TEFuserPass.FuserPass_0DimInput (0 ms) 2023-01-11T22:11:15.9692285Z [ RUN ] TEFuserPass.FuserPass_UnfusibleDevice 2023-01-11T22:11:15.9692896Z [ OK ] TEFuserPass.FuserPass_UnfusibleDevice (0 ms) 2023-01-11T22:11:15.9693535Z [ RUN ] TEFuserPass.FuserPass_UnknownShapes 2023-01-11T22:11:15.9694161Z [ OK ] TEFuserPass.FuserPass_UnknownShapes (0 ms) 2023-01-11T22:11:15.9694528Z [ RUN ] TEFuserPass.FuserPass_Multidevice 2023-01-11T22:11:15.9700808Z [ OK ] TEFuserPass.FuserPass_Multidevice (0 ms) 2023-01-11T22:11:15.9701375Z [ RUN ] TEFuserPass.FuserPass_MergeGroups 2023-01-11T22:11:15.9703423Z [ OK ] TEFuserPass.FuserPass_MergeGroups (0 ms) 2023-01-11T22:11:15.9704013Z [ RUN ] TEFuserPass.FuserPass_IgnoreUnknownShapeAtStart 2023-01-11T22:11:15.9704659Z [ OK ] TEFuserPass.FuserPass_IgnoreUnknownShapeAtStart (0 ms) 2023-01-11T22:11:15.9705266Z [ RUN ] TEFuserPass.FuserPass_Where 2023-01-11T22:11:15.9707296Z [ OK ] TEFuserPass.FuserPass_Where (0 ms) 2023-01-11T22:11:15.9707609Z [ RUN ] TEFuserPass.FuserPass_WhereList 2023-01-11T22:11:15.9708609Z [ OK ] TEFuserPass.FuserPass_WhereList (0 ms) 2023-01-11T22:11:15.9708944Z [ RUN ] TEFuserPass.DynamicShapeFusion 2023-01-11T22:11:16.2298782Z [ OK ] TEFuserPass.DynamicShapeFusion (258 ms) 2023-01-11T22:11:16.2299242Z [----------] 12 tests from TEFuserPass (261 ms total) 2023-01-11T22:11:16.2299406Z 2023-01-11T22:11:16.2299552Z [----------] 3 tests from Type 2023-01-11T22:11:16.2299790Z [ RUN ] Type.Test01 2023-01-11T22:11:16.2300038Z [ OK ] Type.Test01 (0 ms) 2023-01-11T22:11:16.2300286Z [ RUN ] Type.BitCasting 2023-01-11T22:11:16.2300546Z [ OK ] Type.BitCasting (0 ms) 2023-01-11T22:11:16.2300793Z [ RUN ] Type.Propagation 2023-01-11T22:11:16.2301062Z [ OK ] Type.Propagation (0 ms) 2023-01-11T22:11:16.2301347Z [----------] 3 tests from Type (0 ms total) 2023-01-11T22:11:16.2301474Z 2023-01-11T22:11:16.2301676Z [----------] 1 test from SpecializationsInCustomPasses 2023-01-11T22:11:16.2302030Z [ RUN ] SpecializationsInCustomPasses.Basic 2023-01-11T22:11:16.2315413Z [ OK ] SpecializationsInCustomPasses.Basic (1 ms) 
2023-01-11T22:11:16.2316127Z [----------] 1 test from SpecializationsInCustomPasses (1 ms total) 2023-01-11T22:11:16.2316322Z 2023-01-11T22:11:16.2316458Z [----------] 150 tests from LLVM 2023-01-11T22:11:16.2316709Z [ RUN ] LLVM.ByteImmTest 2023-01-11T22:11:16.2520916Z [ OK ] LLVM.ByteImmTest (20 ms) 2023-01-11T22:11:16.2521391Z [ RUN ] LLVM.CharImmTest 2023-01-11T22:11:16.2709024Z [ OK ] LLVM.CharImmTest (18 ms) 2023-01-11T22:11:16.2709327Z [ RUN ] LLVM.ShortImmTest 2023-01-11T22:11:16.2901555Z [ OK ] LLVM.ShortImmTest (19 ms) 2023-01-11T22:11:16.2901839Z [ RUN ] LLVM.IntImmTest 2023-01-11T22:11:16.3090241Z [ OK ] LLVM.IntImmTest (18 ms) 2023-01-11T22:11:16.3090535Z [ RUN ] LLVM.LongImmTest 2023-01-11T22:11:16.3282016Z [ OK ] LLVM.LongImmTest (19 ms) 2023-01-11T22:11:16.3282329Z [ RUN ] LLVM.FloatImmTest 2023-01-11T22:11:16.3469241Z [ OK ] LLVM.FloatImmTest (18 ms) 2023-01-11T22:11:16.3469693Z [ RUN ] LLVM.DoubleImmTest 2023-01-11T22:11:16.3658915Z [ OK ] LLVM.DoubleImmTest (18 ms) 2023-01-11T22:11:16.3659420Z [ RUN ] LLVM.HalfImmTest 2023-01-11T22:11:16.3846695Z [ OK ] LLVM.HalfImmTest (18 ms) 2023-01-11T22:11:16.3847158Z [ RUN ] LLVM.ByteAddTest 2023-01-11T22:11:16.4031807Z [ OK ] LLVM.ByteAddTest (18 ms) 2023-01-11T22:11:16.4032096Z [ RUN ] LLVM.CharAddTest 2023-01-11T22:11:16.4216363Z [ OK ] LLVM.CharAddTest (18 ms) 2023-01-11T22:11:16.4216647Z [ RUN ] LLVM.ShortAddTest 2023-01-11T22:11:16.4406482Z [ OK ] LLVM.ShortAddTest (18 ms) 2023-01-11T22:11:16.4407042Z [ RUN ] LLVM.IntAddTest 2023-01-11T22:11:16.4595385Z [ OK ] LLVM.IntAddTest (18 ms) 2023-01-11T22:11:16.4595696Z [ RUN ] LLVM.LongAddTest 2023-01-11T22:11:16.4786430Z [ OK ] LLVM.LongAddTest (19 ms) 2023-01-11T22:11:16.4786695Z [ RUN ] LLVM.FloatAddTest 2023-01-11T22:11:16.4976593Z [ OK ] LLVM.FloatAddTest (18 ms) 2023-01-11T22:11:16.4976913Z [ RUN ] LLVM.DoubleAddTest 2023-01-11T22:11:16.5169670Z [ OK ] LLVM.DoubleAddTest (19 ms) 2023-01-11T22:11:16.5169960Z [ RUN ] LLVM.HalfAddTest 2023-01-11T22:11:16.5360943Z [ OK ] LLVM.HalfAddTest (19 ms) 2023-01-11T22:11:16.5361333Z [ RUN ] LLVM.ByteSubTest 2023-01-11T22:11:16.5550219Z [ OK ] LLVM.ByteSubTest (18 ms) 2023-01-11T22:11:16.5550487Z [ RUN ] LLVM.CharSubTest 2023-01-11T22:11:16.5743800Z [ OK ] LLVM.CharSubTest (19 ms) 2023-01-11T22:11:16.5744276Z [ RUN ] LLVM.ShortSubTest 2023-01-11T22:11:16.5933301Z [ OK ] LLVM.ShortSubTest (18 ms) 2023-01-11T22:11:16.5933581Z [ RUN ] LLVM.IntSubTest 2023-01-11T22:11:16.6127035Z [ OK ] LLVM.IntSubTest (19 ms) 2023-01-11T22:11:16.6127314Z [ RUN ] LLVM.LongSubTest 2023-01-11T22:11:16.6318015Z [ OK ] LLVM.LongSubTest (19 ms) 2023-01-11T22:11:16.6318291Z [ RUN ] LLVM.FloatSubTest 2023-01-11T22:11:16.6508289Z [ OK ] LLVM.FloatSubTest (19 ms) 2023-01-11T22:11:16.6508559Z [ RUN ] LLVM.DoubleSubTest 2023-01-11T22:11:16.6700746Z [ OK ] LLVM.DoubleSubTest (19 ms) 2023-01-11T22:11:16.6701020Z [ RUN ] LLVM.HalfSubTest 2023-01-11T22:11:16.6891504Z [ OK ] LLVM.HalfSubTest (19 ms) 2023-01-11T22:11:16.6891944Z [ RUN ] LLVM.ByteMulTest 2023-01-11T22:11:16.7081099Z [ OK ] LLVM.ByteMulTest (18 ms) 2023-01-11T22:11:16.7081453Z [ RUN ] LLVM.CharMulTest 2023-01-11T22:11:16.7271031Z [ OK ] LLVM.CharMulTest (19 ms) 2023-01-11T22:11:16.7271536Z [ RUN ] LLVM.ShortMulTest 2023-01-11T22:11:16.7461885Z [ OK ] LLVM.ShortMulTest (18 ms) 2023-01-11T22:11:16.7462625Z [ RUN ] LLVM.IntMulTest 2023-01-11T22:11:16.7652515Z [ OK ] LLVM.IntMulTest (19 ms) 2023-01-11T22:11:16.7652794Z [ RUN ] LLVM.LongMulTest 2023-01-11T22:11:16.7844714Z [ OK ] LLVM.LongMulTest (19 ms) 
2023-01-11T22:11:16.7844972Z [ RUN ] LLVM.FloatMulTest 2023-01-11T22:11:16.8035954Z [ OK ] LLVM.FloatMulTest (19 ms) 2023-01-11T22:11:16.8036235Z [ RUN ] LLVM.DoubleMulTest 2023-01-11T22:11:16.8226873Z [ OK ] LLVM.DoubleMulTest (19 ms) 2023-01-11T22:11:16.8227143Z [ RUN ] LLVM.HalfMulTest 2023-01-11T22:11:16.8417419Z [ OK ] LLVM.HalfMulTest (19 ms) 2023-01-11T22:11:16.8417714Z [ RUN ] LLVM.ByteDivTest 2023-01-11T22:11:16.8607139Z [ OK ] LLVM.ByteDivTest (18 ms) 2023-01-11T22:11:16.8607426Z [ RUN ] LLVM.CharDivTest 2023-01-11T22:11:16.8798689Z [ OK ] LLVM.CharDivTest (19 ms) 2023-01-11T22:11:16.8799083Z [ RUN ] LLVM.ShortDivTest 2023-01-11T22:11:16.8990483Z [ OK ] LLVM.ShortDivTest (19 ms) 2023-01-11T22:11:16.8990757Z [ RUN ] LLVM.IntDivTest 2023-01-11T22:11:16.9182872Z [ OK ] LLVM.IntDivTest (19 ms) 2023-01-11T22:11:16.9183386Z [ RUN ] LLVM.LongDivTest 2023-01-11T22:11:16.9376317Z [ OK ] LLVM.LongDivTest (19 ms) 2023-01-11T22:11:16.9568440Z [ RUN ] LLVM.FloatDivTest 2023-01-11T22:11:16.9568958Z [ OK ] LLVM.FloatDivTest (19 ms) 2023-01-11T22:11:16.9569415Z [ RUN ] LLVM.DoubleDivTest 2023-01-11T22:11:16.9762006Z [ OK ] LLVM.DoubleDivTest (19 ms) 2023-01-11T22:11:16.9762796Z [ RUN ] LLVM.HalfDivTest 2023-01-11T22:11:16.9954769Z [ OK ] LLVM.HalfDivTest (19 ms) 2023-01-11T22:11:16.9955287Z [ RUN ] LLVM.IntToFloatCastTest 2023-01-11T22:11:17.0147582Z [ OK ] LLVM.IntToFloatCastTest (19 ms) 2023-01-11T22:11:17.0148120Z [ RUN ] LLVM.FloatToIntCastTest 2023-01-11T22:11:17.0338007Z [ OK ] LLVM.FloatToIntCastTest (19 ms) 2023-01-11T22:11:17.0338562Z [ RUN ] LLVM.IntToLongCastTest 2023-01-11T22:11:17.0529459Z [ OK ] LLVM.IntToLongCastTest (19 ms) 2023-01-11T22:11:17.0530048Z [ RUN ] LLVM.ByteToCharCastTest 2023-01-11T22:11:17.0721887Z [ OK ] LLVM.ByteToCharCastTest (19 ms) 2023-01-11T22:11:17.0722434Z [ RUN ] LLVM.HalfToLongCastTest 2023-01-11T22:11:17.0913245Z [ OK ] LLVM.HalfToLongCastTest (19 ms) 2023-01-11T22:11:17.0913802Z [ RUN ] LLVM.ByteToDoubleCastTest 2023-01-11T22:11:17.1105387Z [ OK ] LLVM.ByteToDoubleCastTest (19 ms) 2023-01-11T22:11:17.1105935Z [ RUN ] LLVM.FloatToByteCastTest 2023-01-11T22:11:17.1295998Z [ OK ] LLVM.FloatToByteCastTest (19 ms) 2023-01-11T22:11:17.1296558Z [ RUN ] LLVM.FloatToCharCastTest 2023-01-11T22:11:17.1487548Z [ OK ] LLVM.FloatToCharCastTest (19 ms) 2023-01-11T22:11:17.1488084Z [ RUN ] LLVM.ByteToFloatCastTest 2023-01-11T22:11:17.1678915Z [ OK ] LLVM.ByteToFloatCastTest (19 ms) 2023-01-11T22:11:17.1679475Z [ RUN ] LLVM.CharToFloatCastTest 2023-01-11T22:11:17.1870705Z [ OK ] LLVM.CharToFloatCastTest (19 ms) 2023-01-11T22:11:17.1871044Z [ RUN ] LLVM.BitCast 2023-01-11T22:11:17.2630413Z [ OK ] LLVM.BitCast (75 ms) 2023-01-11T22:11:17.2630884Z [ RUN ] LLVM.fastLogFloat 2023-01-11T22:11:17.3248303Z [ OK ] LLVM.fastLogFloat (61 ms) 2023-01-11T22:11:17.3248827Z [ RUN ] LLVM.LetTest01 2023-01-11T22:11:17.3440743Z [ OK ] LLVM.LetTest01 (19 ms) 2023-01-11T22:11:17.3441206Z [ RUN ] LLVM.LetTest02 2023-01-11T22:11:17.3631751Z [ OK ] LLVM.LetTest02 (19 ms) 2023-01-11T22:11:17.3632283Z [ RUN ] LLVM.LetTestMultitype 2023-01-11T22:11:17.3823444Z [ OK ] LLVM.LetTestMultitype (19 ms) 2023-01-11T22:11:17.3823970Z [ RUN ] LLVM.BufferTest 2023-01-11T22:11:17.4014987Z [ OK ] LLVM.BufferTest (19 ms) 2023-01-11T22:11:17.4015600Z [ RUN ] LLVM.BlockTest 2023-01-11T22:11:17.4208565Z [ OK ] LLVM.BlockTest (19 ms) 2023-01-11T22:11:17.4208878Z [ RUN ] LLVM.LoadStoreTest 2023-01-11T22:11:17.4403194Z [ OK ] LLVM.LoadStoreTest (19 ms) 2023-01-11T22:11:17.4403547Z [ RUN ] LLVM.IfThenElseTest 
2023-01-11T22:11:17.4611952Z [ OK ] LLVM.IfThenElseTest (20 ms) 2023-01-11T22:11:17.4612288Z [ RUN ] LLVM.CondNoFalseBlockTest 2023-01-11T22:11:17.5219281Z [ OK ] LLVM.CondNoFalseBlockTest (60 ms) 2023-01-11T22:11:17.5219561Z [ RUN ] LLVM.CondTest 2023-01-11T22:11:17.5834396Z [ OK ] LLVM.CondTest (61 ms) 2023-01-11T22:11:17.5834680Z [ RUN ] LLVM.CondNestedTest 2023-01-11T22:11:17.6697128Z [ OK ] LLVM.CondNestedTest (86 ms) 2023-01-11T22:11:17.6697475Z [ RUN ] LLVM.DirectVectorization 2023-01-11T22:11:17.6965799Z [ OK ] LLVM.DirectVectorization (26 ms) 2023-01-11T22:11:17.6966111Z [ RUN ] LLVM.VecLoadStoreTest 2023-01-11T22:11:17.7168748Z [ OK ] LLVM.VecLoadStoreTest (20 ms) 2023-01-11T22:11:17.7169061Z [ RUN ] LLVM.VecFloat_erfLane4Test 2023-01-11T22:11:17.7363189Z [ OK ] LLVM.VecFloat_erfLane4Test (19 ms) 2023-01-11T22:11:17.7363495Z [ RUN ] LLVM.VecFloat_erfcLane4Test 2023-01-11T22:11:17.7559513Z [ OK ] LLVM.VecFloat_erfcLane4Test (19 ms) 2023-01-11T22:11:17.7559956Z [ RUN ] LLVM.VecFloat_acosLane4Test 2023-01-11T22:11:17.7759923Z [ OK ] LLVM.VecFloat_acosLane4Test (20 ms) 2023-01-11T22:11:17.7760292Z [ RUN ] LLVM.VecFloat_asinLane4Test 2023-01-11T22:11:17.7957710Z [ OK ] LLVM.VecFloat_asinLane4Test (19 ms) 2023-01-11T22:11:17.7958073Z [ RUN ] LLVM.VecFloat_atanLane4Test 2023-01-11T22:11:17.8157423Z [ OK ] LLVM.VecFloat_atanLane4Test (19 ms) 2023-01-11T22:11:17.8157711Z [ RUN ] LLVM.VecFloat_coshLane4Test 2023-01-11T22:11:17.8357842Z [ OK ] LLVM.VecFloat_coshLane4Test (20 ms) 2023-01-11T22:11:17.8358151Z [ RUN ] LLVM.VecFloat_sinhLane4Test 2023-01-11T22:11:17.8557649Z [ OK ] LLVM.VecFloat_sinhLane4Test (19 ms) 2023-01-11T22:11:17.8557938Z [ RUN ] LLVM.VecFloat_tanhLane4Test 2023-01-11T22:11:17.8758652Z [ OK ] LLVM.VecFloat_tanhLane4Test (20 ms) 2023-01-11T22:11:17.8758975Z [ RUN ] LLVM.VecFloat_expm1Lane4Test 2023-01-11T22:11:17.8958526Z [ OK ] LLVM.VecFloat_expm1Lane4Test (19 ms) 2023-01-11T22:11:17.8958976Z [ RUN ] LLVM.VecFloat_lgammaLane4Test 2023-01-11T22:11:17.9158984Z [ OK ] LLVM.VecFloat_lgammaLane4Test (19 ms) 2023-01-11T22:11:17.9159299Z [ RUN ] LLVM.VecFloat_erfLane8Test 2023-01-11T22:11:17.9358802Z [ OK ] LLVM.VecFloat_erfLane8Test (19 ms) 2023-01-11T22:11:17.9359115Z [ RUN ] LLVM.VecFloat_erfcLane8Test 2023-01-11T22:11:17.9556704Z [ OK ] LLVM.VecFloat_erfcLane8Test (19 ms) 2023-01-11T22:11:17.9556993Z [ RUN ] LLVM.VecFloat_acosLane8Test 2023-01-11T22:11:17.9752317Z [ OK ] LLVM.VecFloat_acosLane8Test (19 ms) 2023-01-11T22:11:17.9752620Z [ RUN ] LLVM.VecFloat_asinLane8Test 2023-01-11T22:11:17.9946886Z [ OK ] LLVM.VecFloat_asinLane8Test (19 ms) 2023-01-11T22:11:17.9947304Z [ RUN ] LLVM.VecFloat_atanLane8Test 2023-01-11T22:11:18.0142923Z [ OK ] LLVM.VecFloat_atanLane8Test (19 ms) 2023-01-11T22:11:18.0143273Z [ RUN ] LLVM.VecFloat_coshLane8Test 2023-01-11T22:11:18.0339988Z [ OK ] LLVM.VecFloat_coshLane8Test (19 ms) 2023-01-11T22:11:18.0340279Z [ RUN ] LLVM.VecFloat_sinhLane8Test 2023-01-11T22:11:18.0535623Z [ OK ] LLVM.VecFloat_sinhLane8Test (19 ms) 2023-01-11T22:11:18.0535933Z [ RUN ] LLVM.VecFloat_tanhLane8Test 2023-01-11T22:11:18.0732111Z [ OK ] LLVM.VecFloat_tanhLane8Test (19 ms) 2023-01-11T22:11:18.0732506Z [ RUN ] LLVM.VecFloat_expm1Lane8Test 2023-01-11T22:11:18.0932892Z [ OK ] LLVM.VecFloat_expm1Lane8Test (20 ms) 2023-01-11T22:11:18.0933208Z [ RUN ] LLVM.VecFloat_lgammaLane8Test 2023-01-11T22:11:18.1133518Z [ OK ] LLVM.VecFloat_lgammaLane8Test (19 ms) 2023-01-11T22:11:18.1133892Z [ RUN ] LLVM.VecDouble_erfLane2Test 2023-01-11T22:11:18.1348724Z [ OK ] 
LLVM.VecDouble_erfLane2Test (21 ms) 2023-01-11T22:11:18.1349098Z [ RUN ] LLVM.VecDouble_erfcLane2Test 2023-01-11T22:11:18.1550389Z [ OK ] LLVM.VecDouble_erfcLane2Test (20 ms) 2023-01-11T22:11:18.1550707Z [ RUN ] LLVM.VecDouble_acosLane2Test 2023-01-11T22:11:18.1751432Z [ OK ] LLVM.VecDouble_acosLane2Test (20 ms) 2023-01-11T22:11:18.1751752Z [ RUN ] LLVM.VecDouble_asinLane2Test 2023-01-11T22:11:18.1951809Z [ OK ] LLVM.VecDouble_asinLane2Test (19 ms) 2023-01-11T22:11:18.1952119Z [ RUN ] LLVM.VecDouble_atanLane2Test 2023-01-11T22:11:18.2147986Z [ OK ] LLVM.VecDouble_atanLane2Test (19 ms) 2023-01-11T22:11:18.2148335Z [ RUN ] LLVM.VecDouble_coshLane2Test 2023-01-11T22:11:18.2348161Z [ OK ] LLVM.VecDouble_coshLane2Test (19 ms) 2023-01-11T22:11:18.2348479Z [ RUN ] LLVM.VecDouble_sinhLane2Test 2023-01-11T22:11:18.2546168Z [ OK ] LLVM.VecDouble_sinhLane2Test (19 ms) 2023-01-11T22:11:18.2546464Z [ RUN ] LLVM.VecDouble_tanhLane2Test 2023-01-11T22:11:18.2746196Z [ OK ] LLVM.VecDouble_tanhLane2Test (19 ms) 2023-01-11T22:11:18.2746529Z [ RUN ] LLVM.VecDouble_expm1Lane2Test 2023-01-11T22:11:18.2946467Z [ OK ] LLVM.VecDouble_expm1Lane2Test (19 ms) 2023-01-11T22:11:18.2946782Z [ RUN ] LLVM.VecDouble_lgammaLane2Test 2023-01-11T22:11:18.3147119Z [ OK ] LLVM.VecDouble_lgammaLane2Test (19 ms) 2023-01-11T22:11:18.3147429Z [ RUN ] LLVM.VecDouble_erfLane4Test 2023-01-11T22:11:18.3346606Z [ OK ] LLVM.VecDouble_erfLane4Test (19 ms) 2023-01-11T22:11:18.3346937Z [ RUN ] LLVM.VecDouble_erfcLane4Test 2023-01-11T22:11:18.3547041Z [ OK ] LLVM.VecDouble_erfcLane4Test (19 ms) 2023-01-11T22:11:18.3547353Z [ RUN ] LLVM.VecDouble_acosLane4Test 2023-01-11T22:11:18.3745836Z [ OK ] LLVM.VecDouble_acosLane4Test (19 ms) 2023-01-11T22:11:18.3746162Z [ RUN ] LLVM.VecDouble_asinLane4Test 2023-01-11T22:11:18.3944985Z [ OK ] LLVM.VecDouble_asinLane4Test (19 ms) 2023-01-11T22:11:18.3945299Z [ RUN ] LLVM.VecDouble_atanLane4Test 2023-01-11T22:11:18.4145319Z [ OK ] LLVM.VecDouble_atanLane4Test (19 ms) 2023-01-11T22:11:18.4145637Z [ RUN ] LLVM.VecDouble_coshLane4Test 2023-01-11T22:11:18.4344638Z [ OK ] LLVM.VecDouble_coshLane4Test (19 ms) 2023-01-11T22:11:18.4344936Z [ RUN ] LLVM.VecDouble_sinhLane4Test 2023-01-11T22:11:18.4545137Z [ OK ] LLVM.VecDouble_sinhLane4Test (19 ms) 2023-01-11T22:11:18.4545500Z [ RUN ] LLVM.VecDouble_tanhLane4Test 2023-01-11T22:11:18.4742094Z [ OK ] LLVM.VecDouble_tanhLane4Test (19 ms) 2023-01-11T22:11:18.4742556Z [ RUN ] LLVM.VecDouble_expm1Lane4Test 2023-01-11T22:11:18.4936935Z [ OK ] LLVM.VecDouble_expm1Lane4Test (19 ms) 2023-01-11T22:11:18.4937252Z [ RUN ] LLVM.VecDouble_lgammaLane4Test 2023-01-11T22:11:18.5130542Z [ OK ] LLVM.VecDouble_lgammaLane4Test (19 ms) 2023-01-11T22:11:18.5130870Z [ RUN ] LLVM.VectorizerLoadStoreTest 2023-01-11T22:11:18.5320578Z [ OK ] LLVM.VectorizerLoadStoreTest (18 ms) 2023-01-11T22:11:18.5320891Z [ RUN ] LLVM.VectorizeBitCast 2023-01-11T22:11:18.5529235Z [ OK ] LLVM.VectorizeBitCast (20 ms) 2023-01-11T22:11:18.5529526Z [ RUN ] LLVM.MemcpyTest 2023-01-11T22:11:18.5733434Z [ OK ] LLVM.MemcpyTest (20 ms) 2023-01-11T22:11:18.5733715Z [ RUN ] LLVM.BzeroTest 2023-01-11T22:11:18.5931039Z [ OK ] LLVM.BzeroTest (19 ms) 2023-01-11T22:11:18.5931314Z [ RUN ] LLVM.ElemwiseAdd 2023-01-11T22:11:18.6412609Z [ OK ] LLVM.ElemwiseAdd (48 ms) 2023-01-11T22:11:18.6412901Z [ RUN ] LLVM.ElemwiseAddFloat 2023-01-11T22:11:18.6878075Z [ OK ] LLVM.ElemwiseAddFloat (46 ms) 2023-01-11T22:11:18.6878387Z [ RUN ] LLVM.ElemwiseLog10Float 2023-01-11T22:11:18.7109115Z [ OK ] LLVM.ElemwiseLog10Float (23 ms) 
2023-01-11T22:11:18.7109480Z [ RUN ] LLVM.ElemwiseLog1pFloat 2023-01-11T22:11:18.7338039Z [ OK ] LLVM.ElemwiseLog1pFloat (22 ms) 2023-01-11T22:11:18.7338368Z [ RUN ] LLVM.ElemwiseMaxInt 2023-01-11T22:11:18.7661349Z [ OK ] LLVM.ElemwiseMaxInt (32 ms) 2023-01-11T22:11:18.7661647Z [ RUN ] LLVM.ElemwiseMinInt 2023-01-11T22:11:18.7989073Z [ OK ] LLVM.ElemwiseMinInt (32 ms) 2023-01-11T22:11:18.7989406Z [ RUN ] LLVM.ElemwiseMaxFloat 2023-01-11T22:11:18.8277438Z [ OK ] LLVM.ElemwiseMaxFloat (28 ms) 2023-01-11T22:11:18.8277747Z [ RUN ] LLVM.ElemwiseMaxNaNFloat 2023-01-11T22:11:18.8565905Z [ OK ] LLVM.ElemwiseMaxNaNFloat (28 ms) 2023-01-11T22:11:18.8566212Z [ RUN ] LLVM.ElemwiseMinFloat 2023-01-11T22:11:18.8853818Z [ OK ] LLVM.ElemwiseMinFloat (28 ms) 2023-01-11T22:11:18.8854138Z [ RUN ] LLVM.ElemwiseMinNaNFloat 2023-01-11T22:11:18.9141699Z [ OK ] LLVM.ElemwiseMinNaNFloat (28 ms) 2023-01-11T22:11:18.9142035Z [ RUN ] LLVM.ElemwiseMod 2023-01-11T22:11:18.9413441Z [ OK ] LLVM.ElemwiseMod (27 ms) 2023-01-11T22:11:18.9413758Z [ RUN ] LLVM.CompareSelectIntEQ 2023-01-11T22:11:18.9752083Z [ OK ] LLVM.CompareSelectIntEQ (33 ms) 2023-01-11T22:11:18.9752408Z [ RUN ] LLVM.CompareSelectFloatEQ 2023-01-11T22:11:19.0084036Z [ OK ] LLVM.CompareSelectFloatEQ (33 ms) 2023-01-11T22:11:19.0084653Z [ RUN ] LLVM.CompareSelectByteGT 2023-01-11T22:11:19.0418882Z [ OK ] LLVM.CompareSelectByteGT (33 ms) 2023-01-11T22:11:19.0419184Z [ RUN ] LLVM.CompareSelectByteGE 2023-01-11T22:11:19.0751193Z [ OK ] LLVM.CompareSelectByteGE (33 ms) 2023-01-11T22:11:19.0751523Z [ RUN ] LLVM.CompareSelectByteLT 2023-01-11T22:11:19.1081007Z [ OK ] LLVM.CompareSelectByteLT (32 ms) 2023-01-11T22:11:19.1081312Z [ RUN ] LLVM.CompareSelectByteLE 2023-01-11T22:11:19.1413414Z [ OK ] LLVM.CompareSelectByteLE (33 ms) 2023-01-11T22:11:19.1413940Z [ RUN ] LLVM.StoreFloat 2023-01-11T22:11:19.1595396Z [ OK ] LLVM.StoreFloat (18 ms) 2023-01-11T22:11:19.1595662Z [ RUN ] LLVM.SimpleMath01 2023-01-11T22:11:19.1964046Z [ OK ] LLVM.SimpleMath01 (36 ms) 2023-01-11T22:11:19.1964320Z [ RUN ] LLVM.ComputeMul 2023-01-11T22:11:19.2425295Z [ OK ] LLVM.ComputeMul (46 ms) 2023-01-11T22:11:19.2425651Z [ RUN ] LLVM.BroadcastAdd 2023-01-11T22:11:19.3532990Z [ OK ] LLVM.BroadcastAdd (110 ms) 2023-01-11T22:11:19.3533346Z [ RUN ] LLVM.BitwiseOps 2023-01-11T22:11:19.3717147Z [ OK ] LLVM.BitwiseOps (18 ms) 2023-01-11T22:11:19.3717500Z [ RUN ] LLVM.ArithmeticRightShift 2023-01-11T22:11:19.3903687Z [ OK ] LLVM.ArithmeticRightShift (18 ms) 2023-01-11T22:11:19.3904012Z [ RUN ] LLVM.LogicalRightShift 2023-01-11T22:11:19.4089385Z [ OK ] LLVM.LogicalRightShift (18 ms) 2023-01-11T22:11:19.4089784Z [ RUN ] LLVM.DynamicShapeAdd 2023-01-11T22:11:19.5390033Z [ OK ] LLVM.DynamicShapeAdd (130 ms) 2023-01-11T22:11:19.5390350Z [ RUN ] LLVM.BindDynamicShapeAdd 2023-01-11T22:11:19.6673406Z [ OK ] LLVM.BindDynamicShapeAdd (128 ms) 2023-01-11T22:11:19.6673749Z [ RUN ] LLVM.TensorDynamicShapeAdd 2023-01-11T22:11:19.7951871Z [ OK ] LLVM.TensorDynamicShapeAdd (127 ms) 2023-01-11T22:11:19.7952186Z [ RUN ] LLVM.DynamicShape2D 2023-01-11T22:11:20.0084754Z [ OK ] LLVM.DynamicShape2D (213 ms) 2023-01-11T22:11:20.0085145Z [ RUN ] LLVM.EmptyStmt 2023-01-11T22:11:20.0257324Z [ OK ] LLVM.EmptyStmt (17 ms) 2023-01-11T22:11:20.0257622Z [ RUN ] LLVM.EliminatedStmt 2023-01-11T22:11:20.0428893Z [ OK ] LLVM.EliminatedStmt (17 ms) 2023-01-11T22:11:20.0429200Z [ RUN ] LLVM.SimpleReduction 2023-01-11T22:11:20.1237723Z [ OK ] LLVM.SimpleReduction (80 ms) 2023-01-11T22:11:20.1238035Z [ RUN ] LLVM.RFactorReduction 
2023-01-11T22:11:20.1549630Z [ OK ] LLVM.RFactorReduction (31 ms) 2023-01-11T22:11:20.1549956Z [ RUN ] LLVM.RFactorVectorizedReduction 2023-01-11T22:11:20.2030177Z [ OK ] LLVM.RFactorVectorizedReduction (48 ms) 2023-01-11T22:11:20.2030747Z [ RUN ] LLVM.SimpleParallelSS 2023-01-11T22:11:20.2263582Z [ OK ] LLVM.SimpleParallelSS (23 ms) 2023-01-11T22:11:20.2263884Z [ RUN ] LLVM.SimpleParallelSP 2023-01-11T22:11:20.2521813Z [ OK ] LLVM.SimpleParallelSP (25 ms) 2023-01-11T22:11:20.2522332Z [ RUN ] LLVM.SimpleParallelPS 2023-01-11T22:11:20.2787189Z [ OK ] LLVM.SimpleParallelPS (26 ms) 2023-01-11T22:11:20.2787601Z [ RUN ] LLVM.SimpleParallelPP 2023-01-11T22:11:20.3052369Z [ OK ] LLVM.SimpleParallelPP (26 ms) 2023-01-11T22:11:23.3184040Z [ RUN ] LLVM.CompositeParallel 2023-01-11T22:11:23.3184586Z [ OK ] LLVM.CompositeParallel (3013 ms) 2023-01-11T22:11:23.3185093Z [ RUN ] LLVM.VectorizedGEMM 2023-01-11T22:11:23.3982675Z [ OK ] LLVM.VectorizedGEMM (79 ms) 2023-01-11T22:11:23.3983007Z [ RUN ] LLVM.CallRaw 2023-01-11T22:11:24.0543925Z [ OK ] LLVM.CallRaw (656 ms) 2023-01-11T22:11:24.0544248Z [ RUN ] LLVM.CustomTarget 2023-01-11T22:11:24.0769234Z [ OK ] LLVM.CustomTarget (22 ms) 2023-01-11T22:11:24.0769558Z [ RUN ] LLVM.CodeGenKernelFuncName 2023-01-11T22:11:24.1128155Z [ OK ] LLVM.CodeGenKernelFuncName (35 ms) 2023-01-11T22:11:24.1128803Z [----------] 150 tests from LLVM (7881 ms total) 2023-01-11T22:11:24.1129013Z 2023-01-11T22:11:24.1129188Z [----------] Global test environment tear-down 2023-01-11T22:11:24.1205641Z [==========] 801 tests from 25 test suites ran. (25002 ms total) 2023-01-11T22:11:24.1206113Z [ PASSED ] 801 tests. 2023-01-11T22:11:24.1206350Z 2023-01-11T22:11:24.1206575Z  YOU HAVE 5 DISABLED TESTS 2023-01-11T22:11:24.1206711Z 2023-01-11T22:11:24.2028985Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *android* ]] 2023-01-11T22:11:24.2029434Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *cuda* ]] 2023-01-11T22:11:24.2029738Z + assert_git_not_dirty 2023-01-11T22:11:24.2030032Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *rocm* ]] 2023-01-11T22:11:24.2030405Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *xla* ]] 2023-01-11T22:11:24.2030821Z ++ git status --porcelain 2023-01-11T22:11:24.2856406Z + git_status= 2023-01-11T22:11:24.2856940Z + [[ -n '' ]] 2023-01-11T22:11:24.2857222Z + test_aot_compilation 2023-01-11T22:11:24.2857503Z + echo 'Testing Ahead of Time compilation' 2023-01-11T22:11:24.2857718Z Testing Ahead of Time compilation 2023-01-11T22:11:24.2858356Z + ln -sf /opt/conda/lib/python3.10/site-packages/torch/lib/libc10.so /opt/conda/lib/python3.10/site-packages/torch/lib/libc10_cuda.so /opt/conda/lib/python3.10/site-packages/torch/lib/libc10d_cuda_test.so /opt/conda/lib/python3.10/site-packages/torch/bin 2023-01-11T22:11:24.2868518Z + ln -sf /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch.so /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_cuda_linalg.so /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_global_deps.so /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_python.so /opt/conda/lib/python3.10/site-packages/torch/lib/libtorchbind_test.so /opt/conda/lib/python3.10/site-packages/torch/bin 2023-01-11T22:11:24.2878358Z + TEST_REPORTS_DIR=test/test-reports/cpp-unittest/test_aot_compilation 2023-01-11T22:11:24.2878804Z + mkdir -p test/test-reports/cpp-unittest/test_aot_compilation 2023-01-11T22:11:24.2889420Z + '[' -f 
/opt/conda/lib/python3.10/site-packages/torch/bin/test_mobile_nnc ']' 2023-01-11T22:11:24.2890285Z + /opt/conda/lib/python3.10/site-packages/torch/bin/test_mobile_nnc --gtest_output=xml:test/test-reports/cpp-unittest/test_aot_compilation/test_mobile_nnc.xml 2023-01-11T22:11:24.6276497Z Note: Google Test filter = *-*_CUDA:*_MultiCUDA 2023-01-11T22:11:24.6277030Z [==========] Running 6 tests from 2 test suites. 2023-01-11T22:11:24.6277491Z [----------] Global test environment set-up. 2023-01-11T22:11:24.6277786Z [----------] 4 tests from Function 2023-01-11T22:11:24.6278052Z [ RUN ] Function.ExecuteSlowMul 2023-01-11T22:11:24.6280843Z [ OK ] Function.ExecuteSlowMul (0 ms) 2023-01-11T22:11:24.6281307Z [ RUN ] Function.Serialization 2023-01-11T22:11:24.6281611Z [ OK ] Function.Serialization (0 ms) 2023-01-11T22:11:24.6281884Z [ RUN ] Function.ValidInput 2023-01-11T22:11:24.6282167Z [ OK ] Function.ValidInput (0 ms) 2023-01-11T22:11:24.6282452Z [ RUN ] Function.InvalidInput 2023-01-11T22:11:24.6290989Z [ OK ] Function.InvalidInput (0 ms) 2023-01-11T22:11:24.6291515Z [----------] 4 tests from Function (1 ms total) 2023-01-11T22:11:24.6291806Z 2023-01-11T22:11:24.6292119Z [----------] 2 tests from MobileNNCRegistryTest 2023-01-11T22:11:24.6292592Z [ RUN ] MobileNNCRegistryTest.FindAndRun 2023-01-11T22:11:24.6292973Z [ OK ] MobileNNCRegistryTest.FindAndRun (0 ms) 2023-01-11T22:11:24.6293350Z [ RUN ] MobileNNCRegistryTest.NoKernel 2023-01-11T22:11:24.6293758Z [ OK ] MobileNNCRegistryTest.NoKernel (0 ms) 2023-01-11T22:11:24.6294169Z [----------] 2 tests from MobileNNCRegistryTest (0 ms total) 2023-01-11T22:11:24.6294377Z 2023-01-11T22:11:24.6294549Z [----------] Global test environment tear-down 2023-01-11T22:11:24.6294869Z [==========] 6 tests from 2 test suites ran. (1 ms total) 2023-01-11T22:11:24.6295127Z [ PASSED ] 6 tests. 
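The assert_git_not_dirty step traced above, and repeated after each subsequent test phase, appears to reduce to a git status --porcelain call whose output must be empty, with rocm and xla build environments exempted. A minimal bash sketch of such a helper, reconstructed from the trace and assuming the build string is carried in a BUILD_ENVIRONMENT variable; the message text is illustrative and the real definition lives in the repository's shell test scripts:

    assert_git_not_dirty() {
      # rocm and xla builds modify the checkout on purpose, so they skip the check
      if [[ "$BUILD_ENVIRONMENT" != *rocm* ]] && [[ "$BUILD_ENVIRONMENT" != *xla* ]]; then
        # --porcelain prints one line per modified or untracked file; empty output means a clean tree
        git_status=$(git status --porcelain)
        if [[ -n "$git_status" ]]; then
          echo "Test run left the git checkout dirty:"   # illustrative message, not the actual script's wording
          echo "$git_status"
          exit 1
        fi
      fi
    }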
2023-01-11T22:11:24.6295232Z 2023-01-11T22:11:24.6295356Z  YOU HAVE 1 DISABLED TEST 2023-01-11T22:11:24.6295479Z 2023-01-11T22:11:24.6989813Z + '[' -f /opt/conda/lib/python3.10/site-packages/torch/bin/aot_model_compiler_test ']' 2023-01-11T22:11:24.6990143Z + source test/mobile/nnc/test_aot_compile.sh 2023-01-11T22:11:24.6994539Z ++ set -e -o pipefail 2023-01-11T22:11:24.6998672Z +++ python -c 'import site; print(site.getsitepackages()[0])' 2023-01-11T22:11:24.7162297Z ++ TORCH_INSTALL_DIR=/opt/conda/lib/python3.10/site-packages/torch 2023-01-11T22:11:24.7162672Z ++ TORCH_BIN_DIR=/opt/conda/lib/python3.10/site-packages/torch/bin 2023-01-11T22:11:24.7165717Z +++ dirname test/mobile/nnc/test_aot_compile.sh 2023-01-11T22:11:24.7177179Z ++ CURRENT_DIR=test/mobile/nnc 2023-01-11T22:11:24.7177477Z ++ MODEL=aot_test_model.pt 2023-01-11T22:11:24.7177720Z ++ COMPILED_MODEL=aot_test_model.compiled.pt 2023-01-11T22:11:24.7177950Z ++ COMPILED_CODE=aot_test_model.compiled.ll 2023-01-11T22:11:24.7180621Z +++ mktemp -d -t build_XXX 2023-01-11T22:11:24.7228723Z ++ TMP_DIR=/tmp/build_RrY 2023-01-11T22:11:24.7229726Z + test_custom_script_ops 2023-01-11T22:11:24.7230170Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *asan* ]] 2023-01-11T22:11:24.7230564Z + echo 'Testing custom script operators' 2023-01-11T22:11:24.7230791Z Testing custom script operators 2023-01-11T22:11:24.7231150Z + CUSTOM_OP_BUILD=/var/lib/jenkins/workspace/build/custom_test_artifacts/custom-op-build 2023-01-11T22:11:24.7231422Z + pushd test/custom_operator 2023-01-11T22:11:24.7231648Z ~/workspace/test/custom_operator ~/workspace 2023-01-11T22:11:24.7231994Z + cp -a /var/lib/jenkins/workspace/build/custom_test_artifacts/custom-op-build build 2023-01-11T22:11:24.8539384Z + python test_custom_ops.py -v 2023-01-11T22:11:26.1634508Z Test results will be stored in test-reports/python-unittest/test_custom_ops 2023-01-11T22:11:26.1646948Z 2023-01-11T22:11:26.1647230Z Running tests... 2023-01-11T22:11:26.1647623Z ---------------------------------------------------------------------- 2023-01-11T22:11:26.1686955Z test_calling_custom_op (__main__.TestCustomOperators) ... ok (0.004s) 2023-01-11T22:11:26.2131129Z test_calling_custom_op_inside_script_module (__main__.TestCustomOperators) ... ok (0.044s) 2023-01-11T22:11:26.2137468Z test_calling_custom_op_string (__main__.TestCustomOperators) ... ok (0.001s) 2023-01-11T22:11:26.2156380Z test_calling_custom_op_with_autograd (__main__.TestCustomOperators) ... /opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py:197: UserWarning: Using backward() with create_graph=True will create a reference cycle between the parameter and its gradient which can cause a memory leak. We recommend using autograd.grad when creating the graph to avoid this. If you have to use this function, make sure to reset the .grad fields of your parameters to None after use to break the cycle and avoid the leak. (Triggered internally at /var/lib/jenkins/workspace/torch/csrc/autograd/engine.cpp:1134.) 2023-01-11T22:11:26.2157232Z Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-01-11T22:11:26.2170558Z ok (0.003s) 2023-01-11T22:11:26.2178319Z test_calling_custom_op_with_autograd_in_nograd_mode (__main__.TestCustomOperators) ... ok (0.001s) 2023-01-11T22:11:26.2182291Z test_custom_library_is_loaded (__main__.TestCustomOperators) ... ok (0.000s) 2023-01-11T22:11:26.2260063Z test_saving_and_loading_script_module_with_custom_op (__main__.TestCustomOperators) ... 
ok (0.008s) 2023-01-11T22:11:26.2260431Z 2023-01-11T22:11:26.2260887Z ---------------------------------------------------------------------- 2023-01-11T22:11:26.2261304Z Ran 7 tests in 0.061s 2023-01-11T22:11:26.2261420Z 2023-01-11T22:11:26.2261469Z OK 2023-01-11T22:11:26.2261561Z 2023-01-11T22:11:26.2261652Z Generating XML reports... 2023-01-11T22:11:26.2289688Z Generated XML report: test-reports/python-unittest/test_custom_ops/TEST-TestCustomOperators-20230111221126.xml 2023-01-11T22:11:26.4876094Z + python model.py --export-script-module=model.pt 2023-01-11T22:11:27.8332007Z + build/test_custom_ops ./model.pt 2023-01-11T22:11:28.1970454Z [W engine.cpp:1134] Warning: Using backward() with create_graph=True will create a reference cycle between the parameter and its gradient which can cause a memory leak. We recommend using autograd.grad when creating the graph to avoid this. If you have to use this function, make sure to reset the .grad fields of your parameters to None after use to break the cycle and avoid the leak. (function operator()) 2023-01-11T22:11:28.2550005Z ok 2023-01-11T22:11:28.3308588Z + popd 2023-01-11T22:11:28.3308853Z ~/workspace 2023-01-11T22:11:28.3309187Z + assert_git_not_dirty 2023-01-11T22:11:28.3309803Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *rocm* ]] 2023-01-11T22:11:28.3310132Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *xla* ]] 2023-01-11T22:11:28.3312772Z ++ git status --porcelain 2023-01-11T22:11:28.4127131Z + git_status= 2023-01-11T22:11:28.4127573Z + [[ -n '' ]] 2023-01-11T22:11:28.4127799Z + test_custom_backend 2023-01-11T22:11:28.4128144Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *asan* ]] 2023-01-11T22:11:28.4128481Z + echo 'Testing custom backends' 2023-01-11T22:11:28.4128685Z Testing custom backends 2023-01-11T22:11:28.4129066Z + CUSTOM_BACKEND_BUILD=/var/lib/jenkins/workspace/build/custom_test_artifacts/custom-backend-build 2023-01-11T22:11:28.4129342Z + pushd test/custom_backend 2023-01-11T22:11:28.4129560Z ~/workspace/test/custom_backend ~/workspace 2023-01-11T22:11:28.4129928Z + cp -a /var/lib/jenkins/workspace/build/custom_test_artifacts/custom-backend-build build 2023-01-11T22:11:28.5183640Z + python test_custom_backend.py -v 2023-01-11T22:11:29.8256235Z Test results will be stored in test-reports/python-unittest/test_custom_backend 2023-01-11T22:11:29.8267206Z 2023-01-11T22:11:29.8267299Z Running tests... 2023-01-11T22:11:29.8268242Z ---------------------------------------------------------------------- 2023-01-11T22:11:29.8273667Z test_execute (__main__.TestCustomBackend) 2023-01-11T22:11:29.8794161Z Test execution using the custom backend. ... ok (0.052s) 2023-01-11T22:11:29.8801399Z test_save_load (__main__.TestCustomBackend) 2023-01-11T22:11:29.8976430Z Test that a lowered module can be executed correctly ... ok (0.018s) 2023-01-11T22:11:29.8977649Z 2023-01-11T22:11:29.8978357Z ---------------------------------------------------------------------- 2023-01-11T22:11:29.8978813Z Ran 2 tests in 0.071s 2023-01-11T22:11:29.8978938Z 2023-01-11T22:11:29.8978999Z OK 2023-01-11T22:11:29.8979090Z 2023-01-11T22:11:29.8979180Z Generating XML reports... 
2023-01-11T22:11:29.9006166Z Generated XML report: test-reports/python-unittest/test_custom_backend/TEST-TestCustomBackend-20230111221129.xml 2023-01-11T22:11:30.1786779Z + python backend.py --export-module-to=model.pt 2023-01-11T22:11:31.5762545Z + build/test_custom_backend ./model.pt 2023-01-11T22:11:31.9666897Z Testing custom_backend 2023-01-11T22:11:32.0405985Z OK 2023-01-11T22:11:32.1143995Z + rm -f ./model.pt 2023-01-11T22:11:32.1170236Z + popd 2023-01-11T22:11:32.1170485Z ~/workspace 2023-01-11T22:11:32.1170662Z + assert_git_not_dirty 2023-01-11T22:11:32.1171075Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *rocm* ]] 2023-01-11T22:11:32.1171407Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *xla* ]] 2023-01-11T22:11:32.1173302Z ++ git status --porcelain 2023-01-11T22:11:32.1982241Z + git_status= 2023-01-11T22:11:32.1982711Z + [[ -n '' ]] 2023-01-11T22:11:32.1982896Z + test_torch_function_benchmark 2023-01-11T22:11:32.1983229Z + echo 'Testing __torch_function__ benchmarks' 2023-01-11T22:11:32.1983475Z Testing __torch_function__ benchmarks 2023-01-11T22:11:32.1983695Z + pushd benchmarks/overrides_benchmark 2023-01-11T22:11:32.1984008Z ~/workspace/benchmarks/overrides_benchmark ~/workspace 2023-01-11T22:11:32.1984289Z + python bench.py -n 1 -m 2 2023-01-11T22:11:33.3157175Z Type tensor had a minimum time of 0.004291534423828125 us and a standard deviation of 0.0532736157765612 us. 2023-01-11T22:11:33.3157714Z Type SubTensor had a minimum time of 0.015735626220703125 us and a standard deviation of 0.03371747880009934 us. 2023-01-11T22:11:33.3158232Z Type WithTorchFunction had a minimum time of 0.010728836059570312 us and a standard deviation of 0.018207438188255765 us. 2023-01-11T22:11:33.3159592Z Type SubWithTorchFunction had a minimum time of 0.016689300537109375 us and a standard deviation of 0.010115243640029803 us. 2023-01-11T22:11:33.5403953Z + python pyspybench.py Tensor -n 1 2023-01-11T22:11:34.8818493Z + python pyspybench.py SubTensor -n 1 2023-01-11T22:11:36.2290826Z + python pyspybench.py WithTorchFunction -n 1 2023-01-11T22:11:37.5766694Z + python pyspybench.py SubWithTorchFunction -n 1 2023-01-11T22:11:38.9272944Z + popd 2023-01-11T22:11:38.9273317Z ~/workspace 2023-01-11T22:11:38.9273644Z + assert_git_not_dirty 2023-01-11T22:11:38.9274295Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *rocm* ]] 2023-01-11T22:11:38.9274864Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *xla* ]] 2023-01-11T22:11:38.9276436Z ++ git status --porcelain 2023-01-11T22:11:39.0113161Z + git_status= 2023-01-11T22:11:39.0113715Z + [[ -n '' ]] 2023-01-11T22:11:39.0114034Z + test_benchmarks 2023-01-11T22:11:39.0114472Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 == *cuda* ]] 2023-01-11T22:11:39.0114723Z + [[ nogpu_NO_AVX2 != *nogpu* ]] 2023-01-11T22:11:39.0114913Z + test_executorch 2023-01-11T22:11:39.0115189Z + echo 'Testing Executorch op registration' 2023-01-11T22:11:39.0115422Z Testing Executorch op registration 2023-01-11T22:11:39.0115645Z + build/bin/test_edge_op_registration 2023-01-11T22:11:39.2750458Z Note: Google Test filter = *-*_CUDA:*_MultiCUDA 2023-01-11T22:11:39.2751028Z [==========] Running 1 test from 1 test suite. 2023-01-11T22:11:39.2751486Z [----------] Global test environment set-up. 
2023-01-11T22:11:39.2752008Z [----------] 1 test from OperatorRegistrationTest 2023-01-11T22:11:39.2752488Z [ RUN ] OperatorRegistrationTest.Add 2023-01-11T22:11:39.2753005Z [ OK ] OperatorRegistrationTest.Add (0 ms) 2023-01-11T22:11:39.2753512Z [----------] 1 test from OperatorRegistrationTest (0 ms total) 2023-01-11T22:11:39.2753740Z 2023-01-11T22:11:39.2753964Z [----------] Global test environment tear-down 2023-01-11T22:11:39.2754367Z [==========] 1 test from 1 test suite ran. (0 ms total) 2023-01-11T22:11:39.2755084Z [ PASSED ] 1 test. 2023-01-11T22:11:39.3376841Z + assert_git_not_dirty 2023-01-11T22:11:39.3377430Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *rocm* ]] 2023-01-11T22:11:39.3377759Z + [[ linux-bionic-cuda11.7-py3.10-gcc7 != *xla* ]] 2023-01-11T22:11:39.3378946Z ++ git status --porcelain 2023-01-11T22:11:39.4198649Z + git_status= 2023-01-11T22:11:39.4199112Z + [[ -n '' ]] 2023-01-11T22:11:39.4329908Z Prepare all required actions 2023-01-11T22:11:39.4330240Z Getting action download info 2023-01-11T22:11:39.6516129Z ##[group]Run ./.github/actions/get-workflow-job-id 2023-01-11T22:11:39.6516335Z with: 2023-01-11T22:11:39.6516658Z github-token: *** 2023-01-11T22:11:39.6516828Z env: 2023-01-11T22:11:39.6516988Z GIT_DEFAULT_BRANCH: master 2023-01-11T22:11:39.6517255Z DOCKER_CONTAINER_ID: a71a466242c1f37c1e0011471b01ee39ce7e013772770194af88dbad63e7c256 2023-01-11T22:11:39.6517504Z ##[endgroup] 2023-01-11T22:11:39.6541800Z ##[group]Run nick-fields/retry@3e91a01664abd3c5cd539100d10d33b9c5b68482 2023-01-11T22:11:39.6542044Z with: 2023-01-11T22:11:39.6542206Z shell: bash 2023-01-11T22:11:39.6542554Z timeout_minutes: 10 2023-01-11T22:11:39.6542744Z max_attempts: 5 2023-01-11T22:11:39.6542926Z retry_wait_seconds: 30 2023-01-11T22:11:39.6543339Z command: set -eux python3 -m pip install requests==2.26.0 GHA_WORKFLOW_JOB_ID=$(python3 .github/scripts/get_workflow_job_id.py "${GITHUB_RUN_ID}" "${RUNNER_NAME}") echo "job-id=${GHA_WORKFLOW_JOB_ID}" >> "${GITHUB_OUTPUT}" 2023-01-11T22:11:39.6543721Z polling_interval_seconds: 1 2023-01-11T22:11:39.6543903Z warning_on_retry: true 2023-01-11T22:11:39.6544093Z continue_on_error: false 2023-01-11T22:11:39.6544265Z env: 2023-01-11T22:11:39.6544423Z GIT_DEFAULT_BRANCH: master 2023-01-11T22:11:39.6544686Z DOCKER_CONTAINER_ID: a71a466242c1f37c1e0011471b01ee39ce7e013772770194af88dbad63e7c256 2023-01-11T22:11:39.6545048Z GITHUB_TOKEN: *** 2023-01-11T22:11:39.6545221Z ##[endgroup] 2023-01-11T22:11:39.9651365Z + python3 -m pip install requests==2.26.0 2023-01-11T22:11:40.6178411Z Defaulting to user installation because normal site-packages is not writeable 2023-01-11T22:11:40.6468877Z Requirement already satisfied: requests==2.26.0 in /home/ec2-user/.local/lib/python3.7/site-packages (2.26.0) 2023-01-11T22:11:40.6601946Z Requirement already satisfied: idna<4,>=2.5; python_version >= "3" in /home/ec2-user/.local/lib/python3.7/site-packages (from requests==2.26.0) (3.4) 2023-01-11T22:11:40.6620602Z Requirement already satisfied: certifi>=2017.4.17 in /home/ec2-user/.local/lib/python3.7/site-packages (from requests==2.26.0) (2022.12.7) 2023-01-11T22:11:40.6636567Z Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/ec2-user/.local/lib/python3.7/site-packages (from requests==2.26.0) (1.26.14) 2023-01-11T22:11:40.6799960Z Requirement already satisfied: charset-normalizer~=2.0.0; python_version >= "3" in /home/ec2-user/.local/lib/python3.7/site-packages (from requests==2.26.0) (2.0.12) 2023-01-11T22:11:40.8839559Z ++ python3 
.github/scripts/get_workflow_job_id.py 3896346758 i-0c70567fa7dfd6397 2023-01-11T22:11:47.5580149Z + GHA_WORKFLOW_JOB_ID=10589559916 2023-01-11T22:11:47.5580924Z + echo job-id=10589559916 2023-01-11T22:11:47.9699318Z Command completed after 1 attempt(s). 2023-01-11T22:11:47.9804426Z ##[group]Run kill "$MONITOR_SCRIPT_PID" 2023-01-11T22:11:47.9804664Z kill "$MONITOR_SCRIPT_PID" 2023-01-11T22:11:48.0328298Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-01-11T22:11:48.0328529Z env: 2023-01-11T22:11:48.0328707Z GIT_DEFAULT_BRANCH: master 2023-01-11T22:11:48.0328968Z DOCKER_CONTAINER_ID: a71a466242c1f37c1e0011471b01ee39ce7e013772770194af88dbad63e7c256 2023-01-11T22:11:48.0329232Z MONITOR_SCRIPT_PID: 30785 2023-01-11T22:11:48.0329419Z ##[endgroup] 2023-01-11T22:11:48.0411219Z Prepare all required actions 2023-01-11T22:11:48.0411490Z Getting action download info 2023-01-11T22:11:48.2160696Z Download action repository 'actions/upload-artifact@v3' (SHA:0b7f8abb1508181956e8e162db84b466c27e18ce) 2023-01-11T22:11:48.4276574Z ##[group]Run ./.github/actions/upload-test-artifacts 2023-01-11T22:11:48.4276789Z with: 2023-01-11T22:11:48.4277016Z file-suffix: test-nogpu_NO_AVX2-1-1-linux.2xlarge_10589559916 2023-01-11T22:11:48.4277239Z env: 2023-01-11T22:11:48.4277401Z GIT_DEFAULT_BRANCH: master 2023-01-11T22:11:48.4277668Z DOCKER_CONTAINER_ID: a71a466242c1f37c1e0011471b01ee39ce7e013772770194af88dbad63e7c256 2023-01-11T22:11:48.4277915Z ##[endgroup] 2023-01-11T22:11:48.4300433Z ##[group]Run # Remove any previous test jsons if they exist 2023-01-11T22:11:48.4300698Z # Remove any previous test jsons if they exist 2023-01-11T22:11:48.4300929Z rm -f test-jsons-*.zip 2023-01-11T22:11:48.4301208Z zip -r "test-jsons-${FILE_SUFFIX}.zip" test -i '*.json' 2023-01-11T22:11:48.4312321Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-01-11T22:11:48.4312540Z env: 2023-01-11T22:11:48.4312721Z GIT_DEFAULT_BRANCH: master 2023-01-11T22:11:48.4312976Z DOCKER_CONTAINER_ID: a71a466242c1f37c1e0011471b01ee39ce7e013772770194af88dbad63e7c256 2023-01-11T22:11:48.4313290Z FILE_SUFFIX: test-nogpu_NO_AVX2-1-1-linux.2xlarge_10589559916 2023-01-11T22:11:48.4313513Z ##[endgroup] 2023-01-11T22:11:48.4562471Z adding: test/allowlist_for_publicAPI.json (deflated 78%) 2023-01-11T22:11:48.4621581Z adding: test/benchmark_utils/callgrind_artifacts.json (deflated 92%) 2023-01-11T22:11:48.4635158Z adding: test/profiler/profiler_utils_mock_events.json (deflated 87%) 2023-01-11T22:11:48.4636801Z adding: test/.pytorch-slow-tests.json (deflated 77%) 2023-01-11T22:11:48.4641379Z adding: test/.pytorch-disabled-tests.json (deflated 84%) 2023-01-11T22:11:48.4660443Z ##[group]Run # Remove any previous test reports if they exist 2023-01-11T22:11:48.4660726Z # Remove any previous test reports if they exist 2023-01-11T22:11:48.4660963Z rm -f test-reports-*.zip 2023-01-11T22:11:48.4661226Z zip -r "test-reports-${FILE_SUFFIX}.zip" test -i '*.xml' -i '*.csv' 2023-01-11T22:11:48.4672545Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-01-11T22:11:48.4672886Z env: 2023-01-11T22:11:48.4673209Z GIT_DEFAULT_BRANCH: master 2023-01-11T22:11:48.4673608Z DOCKER_CONTAINER_ID: a71a466242c1f37c1e0011471b01ee39ce7e013772770194af88dbad63e7c256 2023-01-11T22:11:48.4673920Z FILE_SUFFIX: test-nogpu_NO_AVX2-1-1-linux.2xlarge_10589559916 2023-01-11T22:11:48.4674149Z ##[endgroup] 2023-01-11T22:11:48.4830584Z adding: test/custom_backend/test-reports/python-unittest/test_custom_backend/TEST-TestCustomBackend-20230111221129.xml (deflated 
56%) 2023-01-11T22:11:48.4831310Z adding: test/custom_operator/test-reports/python-unittest/test_custom_ops/TEST-TestCustomOperators-20230111221126.xml (deflated 65%) 2023-01-11T22:11:48.4839528Z adding: test/test-reports/python-unittest/dynamo.test_export/TEST-ExportTests-20230111212231.xml (deflated 92%) 2023-01-11T22:11:48.4854908Z adding: test/test-reports/python-unittest/dynamo.test_misc/TEST-MiscTests-20230111212231.xml (deflated 89%) 2023-01-11T22:11:48.4859989Z adding: test/test-reports/python-unittest/dynamo.test_misc/TEST-TestTracer-20230111212231.xml (deflated 39%) 2023-01-11T22:11:48.4864710Z adding: test/test-reports/python-unittest/dynamo.test_optimizations/TEST-NormalizeIRTests-20230111212242.xml (deflated 42%) 2023-01-11T22:11:48.4869124Z adding: test/test-reports/python-unittest/dynamo.test_optimizations/TEST-TestOptimizations-20230111212242.xml (deflated 77%) 2023-01-11T22:11:48.4879060Z adding: test/test-reports/python-unittest/dynamo.test_repros/TEST-ReproTests-20230111212244.xml (deflated 87%) 2023-01-11T22:11:48.4883496Z adding: test/test-reports/python-unittest/dynamo.test_torchxla_integration/TEST-TorchXLAReuseGraphTest-20230111212247.xml (deflated 76%) 2023-01-11T22:11:48.4892055Z adding: test/test-reports/python-unittest/dynamo.test_unspec/TEST-UnspecNNModuleTests-20230111212251.xml (deflated 91%) 2023-01-11T22:11:48.4902621Z adding: test/test-reports/python-unittest/dynamo.test_unspec/TEST-UnspecReproTests-20230111212251.xml (deflated 88%) 2023-01-11T22:11:48.4907831Z adding: test/test-reports/python-unittest/dynamo.test_unspec/TEST-UnspecTests-20230111212251.xml (deflated 75%) 2023-01-11T22:11:48.4912837Z adding: test/test-reports/python-unittest/profiler.test_profiler_tree/TEST-TestProfilerTree-20230111212302.xml (deflated 82%) 2023-01-11T22:11:48.4918698Z adding: test/test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_activation_sparsifier.TestActivationSparsifier-20230111212306.xml (deflated 54%) 2023-01-11T22:11:48.4923482Z adding: test/test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_data_scheduler.TestBaseDataScheduler-20230111212306.xml (deflated 71%) 2023-01-11T22:11:48.4928288Z adding: test/test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_data_sparsifier.TestBaseDataSparsifier-20230111212306.xml (deflated 65%) 2023-01-11T22:11:48.4932574Z adding: test/test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_sparsifier.TestBaseSparsifier-20230111212306.xml (deflated 82%) 2023-01-11T22:11:48.4937949Z adding: test/test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_structured_sparsifier.TestBaseStructuredSparsifier-20230111212306.xml (deflated 85%) 2023-01-11T22:11:48.4942870Z adding: test/test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_composability.TestComposability-20230111212306.xml (deflated 74%) 2023-01-11T22:11:48.4949433Z adding: test/test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_scheduler.TestCubicScheduler-20230111212306.xml (deflated 58%) 2023-01-11T22:11:48.4952735Z adding: test/test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_parametrization.TestFakeSparsity-20230111212306.xml (deflated 73%) 2023-01-11T22:11:48.4956182Z adding: test/test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_composability.TestFxComposability-20230111212306.xml (deflated 80%) 2023-01-11T22:11:48.4960086Z adding: 
test/test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_sparsifier.TestNearlyDiagonalSparsifier-20230111212306.xml (deflated 76%) 2023-01-11T22:11:48.4963548Z adding: test/test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_data_sparsifier.TestNormDataSparsifiers-20230111212306.xml (deflated 70%) 2023-01-11T22:11:48.4967564Z adding: test/test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_data_sparsifier.TestQuantizationUtils-20230111212306.xml (deflated 71%) 2023-01-11T22:11:48.4971627Z adding: test/test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_kernels.TestQuantizedSparseKernels-20230111212306.xml (deflated 50%) 2023-01-11T22:11:48.4975401Z adding: test/test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_kernels.TestQuantizedSparseLayers-20230111212306.xml (deflated 62%) 2023-01-11T22:11:48.4979770Z adding: test/test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_scheduler.TestScheduler-20230111212306.xml (deflated 70%) 2023-01-11T22:11:48.4983806Z adding: test/test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_sparsity_utils.TestSparsityUtilFunctions-20230111212306.xml (deflated 78%) 2023-01-11T22:11:48.4987875Z adding: test/test-reports/python-unittest/test_ao_sparsity/TEST-ao.sparsity.test_sparsifier.TestWeightNormSparsifier-20230111212306.xml (deflated 78%) 2023-01-11T22:11:48.4994902Z adding: test/test-reports/python-unittest/test_function_schema/TEST-TestFunctionSchema-20230111212321.xml (deflated 82%) 2023-01-11T22:11:48.5000509Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_alias_analysis.TestAliasAnalysis-20230111212326.xml (deflated 66%) 2023-01-11T22:11:48.5005508Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_async.TestAsync-20230111212326.xml (deflated 85%) 2023-01-11T22:11:48.5009540Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_aten_pow.TestAtenPow-20230111212326.xml (deflated 48%) 2023-01-11T22:11:48.5017111Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_autodiff.TestAutodiffJit-20230111212326.xml (deflated 74%) 2023-01-11T22:11:48.5033617Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_autodiff_subgraph_slicing.TestAutodiffSubgraphSlicing-20230111212326.xml (deflated 86%) 2023-01-11T22:11:48.5038495Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_backends.TestBackends-20230111212326.xml (deflated 73%) 2023-01-11T22:11:48.5043893Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_backends.TestBackendsWithCompiler-20230111212326.xml (deflated 59%) 2023-01-11T22:11:48.5050351Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_batch_mm.TestBatchMM-20230111212326.xml (deflated 82%) 2023-01-11T22:11:48.5053994Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_builtins.TestBuiltins-20230111212326.xml (deflated 71%) 2023-01-11T22:11:48.5063239Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_class_type.TestClassType-20230111212326.xml (deflated 83%) 2023-01-11T22:11:48.5067257Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_complex.TestComplex-20230111212326.xml (deflated 84%) 2023-01-11T22:11:48.5071278Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_custom_operators.TestCustomOperators-20230111212326.xml (deflated 83%) 2023-01-11T22:11:48.5074536Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_dce.TestDCE-20230111212326.xml (deflated 
56%) 2023-01-11T22:11:48.5082181Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_dataclasses.TestDataclasses-20230111212326.xml (deflated 77%) 2023-01-11T22:11:48.5086301Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_device_analysis.TestDeviceAnalysis-20230111212326.xml (deflated 85%) 2023-01-11T22:11:48.5090831Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_list_dict.TestDict-20230111212326.xml (deflated 82%) 2023-01-11T22:11:48.5098068Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_dtype_analysis.TestDtypeAnalysis-20230111212326.xml (deflated 70%) 2023-01-11T22:11:48.5101433Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_enum.TestEnum-20230111212326.xml (deflated 84%) 2023-01-11T22:11:48.5110484Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_freezing.TestFreezing-20230111212326.xml (deflated 86%) 2023-01-11T22:11:48.5114083Z adding: test/test-reports/python-unittest/test_jit/TEST-TestFrontend-20230111212326.xml (deflated 51%) 2023-01-11T22:11:48.5119811Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_freezing.TestFrozenOptimizations-20230111212326.xml (deflated 86%) 2023-01-11T22:11:48.5127687Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_functional_blocks.TestFunctionalBlocks-20230111212326.xml (deflated 51%) 2023-01-11T22:11:48.5131344Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_convert_activation.TestFunctionalToInplaceActivation-20230111212326.xml (deflated 73%) 2023-01-11T22:11:48.5135173Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_attr.TestGetDefaultAttr-20230111212326.xml (deflated 46%) 2023-01-11T22:11:48.5139076Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_graph_rewrite_passes.TestGraphRewritePasses-20230111212326.xml (deflated 52%) 2023-01-11T22:11:48.5143762Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_hash.TestHash-20230111212326.xml (deflated 79%) 2023-01-11T22:11:48.5148631Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_hooks.TestHooks-20230111212326.xml (deflated 87%) 2023-01-11T22:11:48.5152717Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_ignorable_args.TestIgnorableArgs-20230111212326.xml (deflated 62%) 2023-01-11T22:11:48.5156494Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_ignore_context_manager.TestIgnoreContextManager-20230111212326.xml (deflated 72%) 2023-01-11T22:11:48.5159901Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_convert_activation.TestInplaceToFunctionalActivation-20230111212326.xml (deflated 62%) 2023-01-11T22:11:48.5164358Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_isinstance.TestIsinstance-20230111212326.xml (deflated 90%) 2023-01-11T22:11:48.5171282Z adding: test/test-reports/python-unittest/test_jit/TEST-TestJit-20230111212326.xml (deflated 84%) 2023-01-11T22:11:48.5187880Z adding: test/test-reports/python-unittest/test_jit/TEST-TestJitGeneratedModule-20230111212326.xml (deflated 95%) 2023-01-11T22:11:48.5191659Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_jit_utils.TestJitUtils-20230111212326.xml (deflated 80%) 2023-01-11T22:11:48.5200706Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_list_dict.TestList-20230111212326.xml (deflated 89%) 2023-01-11T22:11:48.5204210Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_logging.TestLogging-20230111212326.xml 
(deflated 76%) 2023-01-11T22:11:48.5209049Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_freezing.TestMKLDNNReinplacing-20230111212326.xml (deflated 70%) 2023-01-11T22:11:48.5213433Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_misc.TestMisc-20230111212326.xml (deflated 82%) 2023-01-11T22:11:48.5217998Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_tracer.TestMixTracingScripting-20230111212326.xml (deflated 84%) 2023-01-11T22:11:48.5222769Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_models.TestModels-20230111212326.xml (deflated 86%) 2023-01-11T22:11:48.5226131Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_module_apis.TestModuleAPIs-20230111212326.xml (deflated 68%) 2023-01-11T22:11:48.5231162Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_module_containers.TestModuleContainers-20230111212326.xml (deflated 84%) 2023-01-11T22:11:48.5235077Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_module_interface.TestModuleInterface-20230111212326.xml (deflated 86%) 2023-01-11T22:11:48.5239068Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_modules.TestModules-20230111212326.xml (deflated 49%) 2023-01-11T22:11:48.5242802Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_list_dict.TestNamedTuple-20230111212326.xml (deflated 81%) 2023-01-11T22:11:48.5250732Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_backend_nnapi.TestNnapiBackend-20230111212326.xml (deflated 84%) 2023-01-11T22:11:48.5254516Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_op_decompositions.TestOpDecompositions-20230111212326.xml (deflated 64%) 2023-01-11T22:11:48.5258952Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_optimize_for_mobile_preserve_debug_info.TestOptimizeForMobilePreserveDebugInfo-20230111212326.xml (deflated 83%) 2023-01-11T22:11:48.5263510Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_pdt.TestPDT-20230111212326.xml (deflated 84%) 2023-01-11T22:11:48.5271078Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_parametrization.TestParametrization-20230111212326.xml (deflated 61%) 2023-01-11T22:11:48.5276489Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_peephole.TestPeephole-20230111212326.xml (deflated 88%) 2023-01-11T22:11:48.5280389Z adding: test/test-reports/python-unittest/test_jit/TEST-TestProducerVersion-20230111212326.xml (deflated 41%) 2023-01-11T22:11:48.5286957Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_profiler.TestProfiler-20230111212326.xml (deflated 81%) 2023-01-11T22:11:48.5291195Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_python_bindings.TestPythonBindings-20230111212326.xml (deflated 81%) 2023-01-11T22:11:48.5295624Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_python_builtins.TestPythonBuiltinOP-20230111212326.xml (deflated 86%) 2023-01-11T22:11:48.5310590Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_python_ir.TestPythonIr-20230111212326.xml (deflated 47%) 2023-01-11T22:11:48.5323906Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_recursive_script.TestRecursiveScript-20230111212326.xml (deflated 88%) 2023-01-11T22:11:48.5329806Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_remove_mutation.TestRemoveMutation-20230111212326.xml (deflated 80%) 2023-01-11T22:11:48.5335323Z adding: 
test/test-reports/python-unittest/test_jit/TEST-jit.test_save_load.TestSaveLoad-20230111212326.xml (deflated 80%) 2023-01-11T22:11:48.5341594Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_save_load_for_op_version.TestSaveLoadForOpVersion-20230111212326.xml (deflated 82%) 2023-01-11T22:11:48.5364940Z adding: test/test-reports/python-unittest/test_jit/TEST-TestScript-20230111212326.xml (deflated 88%) 2023-01-11T22:11:48.5370725Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_list_dict.TestScriptDict-20230111212326.xml (deflated 79%) 2023-01-11T22:11:48.5375857Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_list_dict.TestScriptList-20230111212326.xml (deflated 83%) 2023-01-11T22:11:48.5383780Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_scriptmod_ann.TestScriptModuleInstanceAttributeTypeAnnotation-20230111212326.xml (deflated 87%) 2023-01-11T22:11:48.5388318Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_script_profile.TestScriptProfile-20230111212326.xml (deflated 75%) 2023-01-11T22:11:48.5393412Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_slice.TestSlice-20230111212326.xml (deflated 86%) 2023-01-11T22:11:48.5397523Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_sparse.TestSparse-20230111212326.xml (deflated 71%) 2023-01-11T22:11:48.5405442Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_string_formatting.TestStringFormatting-20230111212326.xml (deflated 90%) 2023-01-11T22:11:48.5410368Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_symbolic_shape_analysis.TestSymbolicShapeAnalysis-20230111212326.xml (deflated 87%) 2023-01-11T22:11:48.5414587Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_builtins.TestTensorBuiltins-20230111212326.xml (deflated 74%) 2023-01-11T22:11:48.5421030Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_tensor_creation_ops.TestTensorCreationOps-20230111212326.xml (deflated 81%) 2023-01-11T22:11:48.5424527Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_tensor_methods.TestTensorMethods-20230111212326.xml (deflated 61%) 2023-01-11T22:11:48.5428642Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_torchbind.TestTorchbind-20230111212326.xml (deflated 88%) 2023-01-11T22:11:48.5437598Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_tracer.TestTracer-20230111212326.xml (deflated 87%) 2023-01-11T22:11:48.5442155Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_type_sharing.TestTypeSharing-20230111212326.xml (deflated 83%) 2023-01-11T22:11:48.5446394Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_types.TestTypesAndAnnotation-20230111212326.xml (deflated 83%) 2023-01-11T22:11:48.5454334Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_typing.TestTyping-20230111212326.xml (deflated 88%) 2023-01-11T22:11:48.5458805Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_union.TestUnion-20230111212326.xml (deflated 86%) 2023-01-11T22:11:48.5463229Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_unsupported_ops.TestUnsupportedOps-20230111212326.xml (deflated 61%) 2023-01-11T22:11:48.5466937Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_upgraders.TestUpgraders-20230111212326.xml (deflated 85%) 2023-01-11T22:11:48.5470333Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_warn.TestWarn-20230111212326.xml (deflated 
78%) 2023-01-11T22:11:48.5474653Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_with.TestWith-20230111212326.xml (deflated 73%) 2023-01-11T22:11:48.5482170Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_data_parallel.TestDataParallel-20230111212326.xml (deflated 78%) 2023-01-11T22:11:48.5486189Z adding: test/test-reports/python-unittest/test_jit/TEST-jit.test_save_load.TestSaveLoadFlatbuffer-20230111212326.xml (deflated 84%) 2023-01-11T22:11:48.5499994Z adding: test/test-reports/python-unittest/inductor.test_torchinductor/TEST-CPUReproTests-20230111212338.xml (deflated 90%) 2023-01-11T22:11:48.5701966Z adding: test/test-reports/python-unittest/inductor.test_torchinductor/TEST-CpuTests-20230111212338.xml (deflated 94%) 2023-01-11T22:11:48.5706097Z adding: test/test-reports/python-unittest/inductor.test_torchinductor/TEST-ExprPrinterTests-20230111212338.xml (deflated 40%) 2023-01-11T22:11:48.5732332Z adding: test/test-reports/python-unittest/inductor.test_torchinductor/TEST-SweepInputsCpuTest-20230111212338.xml (deflated 97%) 2023-01-11T22:11:48.5736977Z adding: test/test-reports/python-unittest/inductor.test_torchinductor/TEST-TestIndexingSimplification-20230111212338.xml (deflated 57%) 2023-01-11T22:11:48.5741583Z adding: test/test-reports/python-unittest/test_meta/TEST-TestMetaConverter-20230111212547.xml (deflated 77%) 2023-01-11T22:11:48.5745942Z adding: test/test-reports/python-unittest/test_proxy_tensor/TEST-TestFakeProxyTensor-20230111212552.xml (deflated 68%) 2023-01-11T22:11:48.5753783Z adding: test/test-reports/python-unittest/test_proxy_tensor/TEST-TestGenericProxyTensorFake-20230111212552.xml (deflated 85%) 2023-01-11T22:11:48.5759487Z adding: test/test-reports/python-unittest/test_proxy_tensor/TEST-TestGenericProxyTensorReal-20230111212552.xml (deflated 84%) 2023-01-11T22:11:48.5764791Z adding: test/test-reports/python-unittest/test_proxy_tensor/TEST-TestGenericProxyTensorSymbolic-20230111212552.xml (deflated 81%) 2023-01-11T22:11:48.5771243Z adding: test/test-reports/python-unittest/test_proxy_tensor/TEST-TestSymbolicTracing-20230111212552.xml (deflated 83%) 2023-01-11T22:11:48.5775677Z adding: test/test-reports/python-unittest/test_public_bindings/TEST-TestPublicBindings-20230111212634.xml (deflated 56%) 2023-01-11T22:11:48.5781444Z adding: test/test-reports/python-unittest/test_python_dispatch/TEST-TestPythonDispatch-20230111212640.xml (deflated 87%) 2023-01-11T22:11:48.5787547Z adding: test/test-reports/python-unittest/test_python_dispatch/TEST-TestPythonDispatcher-20230111212640.xml (deflated 55%) 2023-01-11T22:11:48.5792432Z adding: test/test-reports/python-unittest/test_python_dispatch/TEST-TestPythonRegistration-20230111212640.xml (deflated 68%) 2023-01-11T22:11:48.5797401Z adding: test/test-reports/python-unittest/test_sparse/TEST-TestSparseMeta-20230111212653.xml (deflated 40%) 2023-01-11T22:11:48.5805890Z adding: test/test-reports/python-unittest/test_sparse/TEST-TestSparseOneOff-20230111212653.xml (deflated 58%) 2023-01-11T22:11:48.5810331Z adding: test/test-reports/python-unittest/test_stateless/TEST-TestPythonOptimizeMode-20230111212657.xml (deflated 40%) 2023-01-11T22:11:48.5814507Z adding: test/test-reports/python-unittest/test_stateless/TEST-TestStatelessDeprecation-20230111212657.xml (deflated 43%) 2023-01-11T22:11:48.5819099Z adding: test/test-reports/python-unittest/test_stateless/TEST-TestStatelessFunctionalAPI-20230111212657.xml (deflated 78%) 2023-01-11T22:11:48.5823844Z adding: 
test/test-reports/python-unittest/test_testing/TEST-TestAssertClose-20230111212708.xml (deflated 84%) 2023-01-11T22:11:48.5829215Z adding: test/test-reports/python-unittest/test_testing/TEST-TestAssertCloseContainer-20230111212708.xml (deflated 70%) 2023-01-11T22:11:48.5833849Z adding: test/test-reports/python-unittest/test_testing/TEST-TestAssertCloseErrorMessage-20230111212708.xml (deflated 84%) 2023-01-11T22:11:48.5838234Z adding: test/test-reports/python-unittest/test_testing/TEST-TestAssertCloseQuantized-20230111212708.xml (deflated 69%) 2023-01-11T22:11:48.5845777Z adding: test/test-reports/python-unittest/test_testing/TEST-TestAssertCloseSparseBSC-20230111212708.xml (deflated 70%) 2023-01-11T22:11:48.5849635Z adding: test/test-reports/python-unittest/test_testing/TEST-TestAssertCloseSparseBSR-20230111212708.xml (deflated 70%) 2023-01-11T22:11:48.5855546Z adding: test/test-reports/python-unittest/test_testing/TEST-TestAssertCloseSparseCOO-20230111212708.xml (deflated 76%) 2023-01-11T22:11:48.5863923Z adding: test/test-reports/python-unittest/test_testing/TEST-TestAssertCloseSparseCSC-20230111212708.xml (deflated 70%) 2023-01-11T22:11:48.5869049Z adding: test/test-reports/python-unittest/test_testing/TEST-TestAssertCloseSparseCSR-20230111212708.xml (deflated 63%) 2023-01-11T22:11:48.5872871Z adding: test/test-reports/python-unittest/test_testing/TEST-TestFrameworkUtils-20230111212708.xml (deflated 41%) 2023-01-11T22:11:48.5876068Z adding: test/test-reports/python-unittest/test_testing/TEST-TestImports-20230111212708.xml (deflated 58%) 2023-01-11T22:11:48.5879579Z adding: test/test-reports/python-unittest/test_testing/TEST-TestOpInfos-20230111212708.xml (deflated 53%) 2023-01-11T22:11:48.5897885Z adding: test/test-reports/python-unittest/test_testing/TEST-TestTestParametrization-20230111212708.xml (deflated 88%) 2023-01-11T22:11:48.5905767Z adding: test/test-reports/python-unittest/test_transformers/TEST-TestTransformers-20230111212735.xml (deflated 91%) 2023-01-11T22:11:48.5909838Z adding: test/test-reports/python-unittest/test_utils/TEST-TestAssert-20230111212745.xml (deflated 53%) 2023-01-11T22:11:48.5915440Z adding: test/test-reports/python-unittest/test_utils/TEST-TestCheckpoint-20230111212745.xml (deflated 78%) 2023-01-11T22:11:48.5925893Z adding: test/test-reports/python-unittest/test_utils/TEST-TestCppExtensionUtils-20230111212745.xml (deflated 55%) 2023-01-11T22:11:48.5931396Z adding: test/test-reports/python-unittest/test_utils/TEST-TestDataLoaderUtils-20230111212745.xml (deflated 62%) 2023-01-11T22:11:48.5936281Z adding: test/test-reports/python-unittest/test_utils/TEST-TestExtensionUtils-20230111212745.xml (deflated 41%) 2023-01-11T22:11:48.5945301Z adding: test/test-reports/python-unittest/test_utils/TEST-TestHipify-20230111212745.xml (deflated 39%) 2023-01-11T22:11:48.5949884Z adding: test/test-reports/python-unittest/test_utils/TEST-TestONNXUtils-20230111212745.xml (deflated 52%) 2023-01-11T22:11:48.5954172Z adding: test/test-reports/python-unittest/test_utils/TEST-TestStandaloneCPPJIT-20230111212745.xml (deflated 40%) 2023-01-11T22:11:48.5966542Z adding: test/test-reports/python-unittest/test_utils/TEST-TestBottleneck-20230111212745.xml (deflated 52%) 2023-01-11T22:11:48.5970874Z adding: test/test-reports/python-unittest/test_utils/TEST-TestCollectEnv-20230111212745.xml (deflated 41%) 2023-01-11T22:11:48.5975237Z adding: test/test-reports/python-unittest/backends.xeon.test_launch/TEST-TestTorchrun-20230111212751.xml (deflated 47%) 2023-01-11T22:11:48.5979548Z adding: 
test/test-reports/python-unittest/benchmark_utils.test_benchmark_utils/TEST-TestBenchmarkUtils-20230111212757.xml (deflated 78%) 2023-01-11T22:11:48.5984366Z adding: test/test-reports/python-unittest/dynamo.test_aot_autograd/TEST-AotAutogradFallbackTests-20230111212802.xml (deflated 82%) 2023-01-11T22:11:48.5990084Z adding: test/test-reports/python-unittest/dynamo.test_aot_cudagraphs/TEST-TestAotCudagraphs-20230111212810.xml (deflated 83%) 2023-01-11T22:11:48.6001527Z adding: test/test-reports/python-unittest/dynamo.test_comptime/TEST-ComptimeTests-20230111212813.xml (deflated 77%) 2023-01-11T22:11:48.6011753Z adding: test/test-reports/python-unittest/dynamo.test_dynamic_shapes/TEST-torch._dynamo.testing.DynamicShapesExportTests-20230111212818.xml (deflated 93%) 2023-01-11T22:11:48.6023234Z adding: test/test-reports/python-unittest/dynamo.test_dynamic_shapes/TEST-torch._dynamo.testing.DynamicShapesFunctionTests-20230111212818.xml (deflated 95%) 2023-01-11T22:11:48.6049424Z adding: test/test-reports/python-unittest/dynamo.test_dynamic_shapes/TEST-torch._dynamo.testing.DynamicShapesMiscTests-20230111212818.xml (deflated 92%) 2023-01-11T22:11:48.6059083Z adding: test/test-reports/python-unittest/dynamo.test_dynamic_shapes/TEST-torch._dynamo.testing.DynamicShapesNNModuleTests-20230111212818.xml (deflated 92%) 2023-01-11T22:11:48.6082774Z adding: test/test-reports/python-unittest/dynamo.test_dynamic_shapes/TEST-torch._dynamo.testing.DynamicShapesReproTests-20230111212818.xml (deflated 89%) 2023-01-11T22:11:48.6090848Z adding: test/test-reports/python-unittest/dynamo.test_dynamic_shapes/TEST-torch._dynamo.testing.DynamicShapesSubGraphTests-20230111212818.xml (deflated 92%) 2023-01-11T22:11:48.6095938Z adding: test/test-reports/python-unittest/dynamo.test_dynamic_shapes/TEST-torch._dynamo.testing.DynamicShapesUnspecTests-20230111212818.xml (deflated 82%) 2023-01-11T22:11:48.6100860Z adding: test/test-reports/python-unittest/dynamo.test_export_mutations/TEST-MutationExportTests-20230111212902.xml (deflated 81%) 2023-01-11T22:11:48.6110732Z adding: test/test-reports/python-unittest/dynamo.test_functions/TEST-FunctionTests-20230111212906.xml (deflated 95%) 2023-01-11T22:11:48.6115437Z adding: test/test-reports/python-unittest/dynamo.test_global/TEST-TestGlobals-20230111212912.xml (deflated 85%) 2023-01-11T22:11:48.6123731Z adding: test/test-reports/python-unittest/dynamo.test_minifier/TEST-MinifierTests-20230111212919.xml (deflated 85%) 2023-01-11T22:11:48.6129206Z adding: test/test-reports/python-unittest/dynamo.test_model_output/TEST-TestHFPretrained-20230111213005.xml (deflated 42%) 2023-01-11T22:11:48.6136222Z adding: test/test-reports/python-unittest/dynamo.test_model_output/TEST-TestModelOutput-20230111213005.xml (deflated 82%) 2023-01-11T22:11:48.6147202Z adding: test/test-reports/python-unittest/dynamo.test_modules/TEST-NNModuleTests-20230111213009.xml (deflated 90%) 2023-01-11T22:11:48.6152691Z adding: test/test-reports/python-unittest/dynamo.test_modules/TEST-OptimizedModuleTest-20230111213009.xml (deflated 76%) 2023-01-11T22:11:48.6157068Z adding: test/test-reports/python-unittest/dynamo.test_nops/TEST-NopTests-20230111213014.xml (deflated 67%) 2023-01-11T22:11:48.6164412Z adding: test/test-reports/python-unittest/dynamo.test_optimizers/TEST-End2EndTests-20230111213018.xml (deflated 48%) 2023-01-11T22:11:48.6173641Z adding: test/test-reports/python-unittest/dynamo.test_optimizers/TEST-OptimizerTests-20230111213018.xml (deflated 87%) 2023-01-11T22:11:48.6178775Z adding: 
test/test-reports/python-unittest/dynamo.test_python_autograd/TEST-TestPythonAutograd-20230111213024.xml (deflated 77%) 2023-01-11T22:11:48.6184229Z adding: test/test-reports/python-unittest/dynamo.test_replay_record/TEST-ReplayRecordTests-20230111213033.xml (deflated 82%) 2023-01-11T22:11:48.6188828Z adding: test/test-reports/python-unittest/dynamo.test_skip_non_tensor/TEST-SkipNonTensorTests-20230111213037.xml (deflated 79%) 2023-01-11T22:11:48.6195803Z adding: test/test-reports/python-unittest/dynamo.test_subgraphs/TEST-SubGraphTests-20230111213041.xml (deflated 92%) 2023-01-11T22:11:48.6200962Z adding: test/test-reports/python-unittest/dynamo.test_verify_correctness/TEST-TestVerifyCorrectness-20230111213053.xml (deflated 69%) 2023-01-11T22:11:48.6206325Z adding: test/test-reports/python-unittest/inductor.test_minifier/TEST-MinifierTests-20230111213058.xml (deflated 87%) 2023-01-11T22:11:48.6212037Z adding: test/test-reports/python-unittest/lazy.test_debug_util/TEST-DebugUtilTest-20230111213254.xml (deflated 39%) 2023-01-11T22:11:48.6216997Z adding: test/test-reports/python-unittest/lazy.test_reuse_ir/TEST-TestLazyReuseIr-20230111213306.xml (deflated 66%) 2023-01-11T22:11:48.6224091Z adding: test/test-reports/python-unittest/lazy.test_step_closures/TEST-ClosuresTest-20230111213310.xml (deflated 67%) 2023-01-11T22:11:48.6229354Z adding: test/test-reports/python-unittest/lazy.test_ts_opinfo/TEST-TestLazyDynamicOps-20230111213317.xml (deflated 40%) 2023-01-11T22:11:48.6233786Z adding: test/test-reports/python-unittest/lazy.test_ts_opinfo/TEST-TestLazyTensor-20230111213317.xml (deflated 58%) 2023-01-11T22:11:48.6242783Z adding: test/test-reports/python-unittest/nn.test_dropout/TEST-TestDropoutNN-20230111213321.xml (deflated 65%) 2023-01-11T22:11:48.6248093Z adding: test/test-reports/python-unittest/nn.test_embedding/TEST-TestEmbeddingNN-20230111213325.xml (deflated 87%) 2023-01-11T22:11:48.6256796Z adding: test/test-reports/python-unittest/nn.test_lazy_modules/TEST-TestLazyModules-20230111213334.xml (deflated 89%) 2023-01-11T22:11:48.6261768Z adding: test/test-reports/python-unittest/nn.test_module_hooks/TEST-TestModuleGlobalHooks-20230111213338.xml (deflated 71%) 2023-01-11T22:11:48.6266589Z adding: test/test-reports/python-unittest/nn.test_module_hooks/TEST-TestModuleHookNN-20230111213338.xml (deflated 79%) 2023-01-11T22:11:48.6270788Z adding: test/test-reports/python-unittest/nn.test_module_hooks/TEST-TestModuleHooks-20230111213338.xml (deflated 78%) 2023-01-11T22:11:48.6287738Z adding: test/test-reports/python-unittest/nn.test_module_hooks/TEST-TestStateDictHooks-20230111213338.xml (deflated 75%) 2023-01-11T22:11:48.6293801Z adding: test/test-reports/python-unittest/nn.test_multihead_attention/TEST-TestMultiheadAttentionNN-20230111213342.xml (deflated 71%) 2023-01-11T22:11:48.6303551Z adding: test/test-reports/python-unittest/nn.test_packed_sequence/TEST-PackedSequenceTest-20230111213350.xml (deflated 81%) 2023-01-11T22:11:48.6311843Z adding: test/test-reports/python-unittest/nn.test_parametrization/TEST-TestNNParametrization-20230111213355.xml (deflated 83%) 2023-01-11T22:11:48.6318172Z adding: test/test-reports/python-unittest/nn.test_pruning/TEST-TestPruningNN-20230111213359.xml (deflated 79%) 2023-01-11T22:11:48.6326503Z adding: test/test-reports/python-unittest/profiler.test_memory_profiler/TEST-TestDataFlow-20230111213403.xml (deflated 73%) 2023-01-11T22:11:48.6330896Z adding: 
test/test-reports/python-unittest/profiler.test_memory_profiler/TEST-TestIdentifyGradients-20230111213403.xml (deflated 71%) 2023-01-11T22:11:48.6335720Z adding: test/test-reports/python-unittest/profiler.test_memory_profiler/TEST-TestMemoryProfiler-20230111213403.xml (deflated 42%) 2023-01-11T22:11:48.6343973Z adding: test/test-reports/python-unittest/profiler.test_memory_profiler/TEST-TestMemoryProfilerE2E-20230111213403.xml (deflated 84%) 2023-01-11T22:11:48.6348917Z adding: test/test-reports/python-unittest/profiler.test_profiler/TEST-TestExecutionGraph-20230111213411.xml (deflated 73%) 2023-01-11T22:11:48.6354354Z adding: test/test-reports/python-unittest/profiler.test_profiler/TEST-TestExperimentalUtils-20230111213411.xml (deflated 83%) 2023-01-11T22:11:48.6362341Z adding: test/test-reports/python-unittest/profiler.test_profiler/TEST-TestProfiler-20230111213411.xml (deflated 75%) 2023-01-11T22:11:48.6367400Z adding: test/test-reports/python-unittest/profiler.test_profiler/TEST-TestProfilerITT-20230111213411.xml (deflated 41%) 2023-01-11T22:11:48.6371521Z adding: test/test-reports/python-unittest/profiler.test_profiler/TEST-TestRecordFunction-20230111213411.xml (deflated 70%) 2023-01-11T22:11:48.6382201Z adding: test/test-reports/python-unittest/profiler.test_profiler/TEST-TestTorchTidyProfiler-20230111213411.xml (deflated 90%) 2023-01-11T22:11:48.6387254Z adding: test/test-reports/python-unittest/profiler.test_profiler/TEST-TestProfilerCUDA-20230111213411.xml (deflated 57%) 2023-01-11T22:11:48.6391338Z adding: test/test-reports/python-unittest/test_autocast/TEST-TestAutocastCPU-20230111213421.xml (deflated 77%) 2023-01-11T22:11:48.6396538Z adding: test/test-reports/python-unittest/test_autocast/TEST-TestTorchAutocast-20230111213421.xml (deflated 42%) 2023-01-11T22:11:48.6402975Z adding: test/test-reports/python-unittest/test_autocast/TEST-TestAutocastGPU-20230111213421.xml (deflated 45%) 2023-01-11T22:11:48.6408248Z adding: test/test-reports/python-unittest/test_bundled_inputs/TEST-TestBundledInputs-20230111213430.xml (deflated 77%) 2023-01-11T22:11:48.6416739Z adding: test/test-reports/python-unittest/test_dataloader/TEST-TestConcatDataset-20230111213447.xml (deflated 74%) 2023-01-11T22:11:48.6425554Z adding: test/test-reports/python-unittest/test_dataloader/TEST-TestDataLoader-20230111213447.xml (deflated 82%) 2023-01-11T22:11:48.6453654Z adding: test/test-reports/python-unittest/test_dataloader/TEST-TestDataLoaderPersistentWorkers-20230111213447.xml (deflated 85%) 2023-01-11T22:11:48.6454617Z adding: test/test-reports/python-unittest/test_dataloader/TEST-TestDatasetRandomSplit-20230111213447.xml (deflated 76%) 2023-01-11T22:11:48.6455473Z adding: test/test-reports/python-unittest/test_dataloader/TEST-TestDictDataLoader-20230111213447.xml (deflated 70%) 2023-01-11T22:11:48.6456172Z adding: test/test-reports/python-unittest/test_dataloader/TEST-TestNamedTupleDataLoader-20230111213447.xml (deflated 42%) 2023-01-11T22:11:48.6457538Z adding: test/test-reports/python-unittest/test_dataloader/TEST-TestSetAffinity-20230111213447.xml (deflated 40%) 2023-01-11T22:11:48.6461800Z adding: test/test-reports/python-unittest/test_dataloader/TEST-TestTensorDataset-20230111213447.xml (deflated 72%) 2023-01-11T22:11:48.6466456Z adding: test/test-reports/python-unittest/test_dataloader/TEST-IntegrationTestDataLoaderDataPipe-20230111213447.xml (deflated 43%) 2023-01-11T22:11:48.6470754Z adding: test/test-reports/python-unittest/test_dataloader/TEST-TestConvAfterFork-20230111213447.xml (deflated 41%) 
2023-01-11T22:11:48.6475037Z adding: test/test-reports/python-unittest/test_dataloader/TEST-TestCustomPinFn-20230111213447.xml (deflated 58%) 2023-01-11T22:11:48.6481361Z adding: test/test-reports/python-unittest/test_dataloader/TEST-TestIndividualWorkerQueue-20230111213447.xml (deflated 42%) 2023-01-11T22:11:48.6484682Z adding: test/test-reports/python-unittest/test_dataloader/TEST-TestStringDataLoader-20230111213447.xml (deflated 41%) 2023-01-11T22:11:48.6490981Z adding: test/test-reports/python-unittest/test_datapipe/TEST-TestCircularSerialization-20230111213618.xml (deflated 54%) 2023-01-11T22:11:48.6499451Z adding: test/test-reports/python-unittest/test_datapipe/TEST-TestDataChunk-20230111213618.xml (deflated 76%) 2023-01-11T22:11:48.6505564Z adding: test/test-reports/python-unittest/test_datapipe/TEST-TestFunctionalIterDataPipe-20230111213618.xml (deflated 85%) 2023-01-11T22:11:48.6510078Z adding: test/test-reports/python-unittest/test_datapipe/TEST-TestFunctionalMapDataPipe-20230111213618.xml (deflated 71%) 2023-01-11T22:11:48.6515138Z adding: test/test-reports/python-unittest/test_datapipe/TEST-TestGraph-20230111213618.xml (deflated 67%) 2023-01-11T22:11:48.6519531Z adding: test/test-reports/python-unittest/test_datapipe/TEST-TestIterDataPipeCountSampleYielded-20230111213618.xml (deflated 75%) 2023-01-11T22:11:48.6524626Z adding: test/test-reports/python-unittest/test_datapipe/TEST-TestIterDataPipeGraphFastForward-20230111213618.xml (deflated 66%) 2023-01-11T22:11:48.6529586Z adding: test/test-reports/python-unittest/test_datapipe/TEST-TestIterDataPipeSingletonConstraint-20230111213618.xml (deflated 74%) 2023-01-11T22:11:48.6537656Z adding: test/test-reports/python-unittest/test_datapipe/TEST-TestIterableDataPipeBasic-20230111213618.xml (deflated 70%) 2023-01-11T22:11:48.6542239Z adding: test/test-reports/python-unittest/test_datapipe/TEST-TestSharding-20230111213618.xml (deflated 70%) 2023-01-11T22:11:48.6547019Z adding: test/test-reports/python-unittest/test_datapipe/TEST-TestStreamWrapper-20230111213618.xml (deflated 68%) 2023-01-11T22:11:48.6551938Z adding: test/test-reports/python-unittest/test_datapipe/TEST-TestTyping-20230111213618.xml (deflated 80%) 2023-01-11T22:11:48.6559991Z adding: test/test-reports/python-unittest/test_datapipe/TEST-TestCaptureDataFrame-20230111213618.xml (deflated 42%) 2023-01-11T22:11:48.6564828Z adding: test/test-reports/python-unittest/test_datapipe/TEST-TestDataFramesPipes-20230111213618.xml (deflated 79%) 2023-01-11T22:11:48.6569999Z adding: test/test-reports/python-unittest/test_datapipe/TEST-TestSerialization-20230111213618.xml (deflated 59%) 2023-01-11T22:11:48.6574768Z adding: test/test-reports/python-unittest/test_dynamic_shapes/TEST-TestPySymInt-20230111213636.xml (deflated 84%) 2023-01-11T22:11:48.6582006Z adding: test/test-reports/python-unittest/test_dynamic_shapes/TEST-TestSymNumberMagicMethods-20230111213636.xml (deflated 96%) 2023-01-11T22:11:48.6588613Z adding: test/test-reports/python-unittest/test_expanded_weights/TEST-TestExpandedWeightModule-20230111213641.xml (deflated 95%) 2023-01-11T22:11:48.6592846Z adding: test/test-reports/python-unittest/test_functional_autograd_benchmark/TEST-TestFunctionalAutogradBenchmark-20230111213646.xml (deflated 54%) 2023-01-11T22:11:48.6598992Z adding: test/test-reports/python-unittest/test_functional_optim/TEST-TestFunctionalOptimParity-20230111213714.xml (deflated 72%) 2023-01-11T22:11:48.6609068Z adding: 
test/test-reports/python-unittest/test_functionalization/TEST-TestCrossRefFunctionalization-20230111213719.xml (deflated 85%) 2023-01-11T22:11:48.6614571Z adding: test/test-reports/python-unittest/test_functionalization/TEST-TestFunctionalization-20230111213719.xml (deflated 83%) 2023-01-11T22:11:48.6622119Z adding: test/test-reports/python-unittest/test_futures/TEST-TestFuture-20230111213738.xml (deflated 84%) 2023-01-11T22:11:48.6635292Z adding: test/test-reports/python-unittest/test_fx_experimental/TEST-TestFXExperimental-20230111213744.xml (deflated 79%) 2023-01-11T22:11:48.6641168Z adding: test/test-reports/python-unittest/test_fx_passes/TEST-TestFXGraphPasses-20230111213805.xml (deflated 92%) 2023-01-11T22:11:48.6645878Z adding: test/test-reports/python-unittest/test_fx_passes/TEST-TestFXMatcherUtils-20230111213805.xml (deflated 87%) 2023-01-11T22:11:48.6651579Z adding: test/test-reports/python-unittest/test_fx_reinplace_pass/TEST-TestReinplacePass-20230111213809.xml (deflated 80%) 2023-01-11T22:11:48.6655919Z adding: test/test-reports/python-unittest/test_import_stats/TEST-TestImportTime-20230111213818.xml (deflated 52%) 2023-01-11T22:11:48.6663341Z adding: test/test-reports/python-unittest/test_itt/TEST-TestItt-20230111213824.xml (deflated 39%) 2023-01-11T22:11:48.6669058Z adding: test/test-reports/python-unittest/test_jit_autocast/TEST-TestAutocast-20230111213829.xml (deflated 89%) 2023-01-11T22:11:48.6673815Z adding: test/test-reports/python-unittest/test_jit_autocast/TEST-TestJitTraceAutocast-20230111213829.xml (deflated 75%) 2023-01-11T22:11:48.6678322Z adding: test/test-reports/python-unittest/test_jit_fuser_te/TEST-jit.test_fuser_common.TestFuserCommon-20230111213849.xml (deflated 48%) 2023-01-11T22:11:48.6688266Z adding: test/test-reports/python-unittest/test_jit_fuser_te/TEST-TestTEFuserDynamic-20230111213849.xml (deflated 86%) 2023-01-11T22:11:48.6696362Z adding: test/test-reports/python-unittest/test_jit_fuser_te/TEST-TestTEFuserStatic-20230111213849.xml (deflated 86%) 2023-01-11T22:11:48.6700657Z adding: test/test-reports/python-unittest/test_jit_llga_fuser/TEST-TestEnableDisableLlgaFuser-20230111214027.xml (deflated 41%) 2023-01-11T22:11:48.6706048Z adding: test/test-reports/python-unittest/test_jit_llga_fuser/TEST-TestModel-20230111214027.xml (deflated 93%) 2023-01-11T22:11:48.6711844Z adding: test/test-reports/python-unittest/test_jit_llga_fuser/TEST-TestDynamoAOT-20230111214027.xml (deflated 41%) 2023-01-11T22:11:48.6720957Z adding: test/test-reports/python-unittest/test_legacy_vmap/TEST-TestVmapAPI-20230111214037.xml (deflated 91%) 2023-01-11T22:11:48.6726879Z adding: test/test-reports/python-unittest/test_legacy_vmap/TEST-TestVmapOperators-20230111214037.xml (deflated 88%) 2023-01-11T22:11:48.6731403Z adding: test/test-reports/python-unittest/test_license/TEST-TestLicense-20230111214042.xml (deflated 50%) 2023-01-11T22:11:48.6735540Z adding: test/test-reports/python-unittest/test_logging/TEST-LoggingTest-20230111214046.xml (deflated 43%) 2023-01-11T22:11:48.6741136Z adding: test/test-reports/python-unittest/test_maskedtensor/TEST-TestBinary-20230111214053.xml (deflated 92%) 2023-01-11T22:11:48.6749491Z adding: test/test-reports/python-unittest/test_maskedtensor/TEST-TestReductions-20230111214053.xml (deflated 85%) 2023-01-11T22:11:48.6758366Z adding: test/test-reports/python-unittest/test_maskedtensor/TEST-TestUnary-20230111214053.xml (deflated 96%) 2023-01-11T22:11:48.6762920Z adding: 
test/test-reports/python-unittest/test_mkl_verbose/TEST-TestMKLVerbose-20230111214101.xml (deflated 53%) 2023-01-11T22:11:48.6768825Z adding: test/test-reports/python-unittest/test_mkldnn/TEST-TestMkldnn-20230111214109.xml (deflated 83%) 2023-01-11T22:11:48.6776355Z adding: test/test-reports/python-unittest/test_mkldnn_fusion/TEST-TestMkldnnFusion-20230111214128.xml (deflated 71%) 2023-01-11T22:11:48.6780507Z adding: test/test-reports/python-unittest/test_mkldnn_verbose/TEST-TestMKLDNNVerbose-20230111214202.xml (deflated 53%) 2023-01-11T22:11:48.6785522Z adding: test/test-reports/python-unittest/test_model_dump/TEST-TestModelDump-20230111214211.xml (deflated 75%) 2023-01-11T22:11:48.6792960Z adding: test/test-reports/python-unittest/test_monitor/TEST-TestMonitor-20230111214221.xml (deflated 66%) 2023-01-11T22:11:48.6797892Z adding: test/test-reports/python-unittest/test_monitor/TEST-TestMonitorTensorboard-20230111214221.xml (deflated 41%) 2023-01-11T22:11:48.6806170Z adding: test/test-reports/python-unittest/test_namedtensor/TEST-TestNamedTensor-20230111214226.xml (deflated 88%) 2023-01-11T22:11:48.6811026Z adding: test/test-reports/python-unittest/test_native_functions/TEST-TestNativeFunctions-20230111214232.xml (deflated 91%) 2023-01-11T22:11:48.6816821Z adding: test/test-reports/python-unittest/test_nestedtensor/TEST-TestNestedTensor-20230111214241.xml (deflated 90%) 2023-01-11T22:11:48.6821222Z adding: test/test-reports/python-unittest/test_numba_integration/TEST-TestNumbaIntegration-20230111214245.xml (deflated 75%) 2023-01-11T22:11:48.6825908Z adding: test/test-reports/python-unittest/test_nvfuser_dynamo/TEST-TestNvFuserDynamo-20230111214252.xml (deflated 71%) 2023-01-11T22:11:48.6830716Z adding: test/test-reports/python-unittest/test_nvfuser_frontend/TEST-TestNvFuserFrontend-20230111214254.xml (deflated 84%) 2023-01-11T22:11:48.6836616Z adding: test/test-reports/python-unittest/test_openmp/TEST-TestOpenMP_ParallelFor-20230111214256.xml (deflated 55%) 2023-01-11T22:11:48.6841000Z adding: test/test-reports/python-unittest/test_optim/TEST-TestDifferentiableOptimizer-20230111214258.xml (deflated 82%) 2023-01-11T22:11:48.6851229Z adding: test/test-reports/python-unittest/test_optim/TEST-TestLRScheduler-20230111214258.xml (deflated 91%) 2023-01-11T22:11:48.6855905Z adding: test/test-reports/python-unittest/test_optim/TEST-TestOptim-20230111214258.xml (deflated 81%) 2023-01-11T22:11:48.6860473Z adding: test/test-reports/python-unittest/test_optim/TEST-TestSWAUtils-20230111214258.xml (deflated 76%) 2023-01-11T22:11:48.6868102Z adding: test/test-reports/python-unittest/test_package/TEST-test_dependency_api.TestDependencyAPI-20230111214303.xml (deflated 81%) 2023-01-11T22:11:48.6872374Z adding: test/test-reports/python-unittest/test_package/TEST-test_dependency_hooks.TestDependencyHooks-20230111214303.xml (deflated 77%) 2023-01-11T22:11:48.6877352Z adding: test/test-reports/python-unittest/test_package/TEST-test_digraph.TestDiGraph-20230111214303.xml (deflated 84%) 2023-01-11T22:11:48.6882845Z adding: test/test-reports/python-unittest/test_package/TEST-test_directory_reader.DirectoryReaderTest-20230111214303.xml (deflated 76%) 2023-01-11T22:11:48.6887386Z adding: test/test-reports/python-unittest/test_package/TEST-test_glob_group.TestGlobGroup-20230111214303.xml (deflated 86%) 2023-01-11T22:11:48.6893713Z adding: test/test-reports/python-unittest/test_package/TEST-test_importer.TestImporter-20230111214303.xml (deflated 72%) 2023-01-11T22:11:48.6899196Z adding: 
test/test-reports/python-unittest/test_package/TEST-test_load_bc_packages.TestLoadBCPackages-20230111214303.xml (deflated 71%) 2023-01-11T22:11:48.6905595Z adding: test/test-reports/python-unittest/test_package/TEST-test_mangling.TestMangling-20230111214303.xml (deflated 78%) 2023-01-11T22:11:48.6913581Z adding: test/test-reports/python-unittest/test_package/TEST-test_misc.TestMisc-20230111214303.xml (deflated 73%) 2023-01-11T22:11:48.6918131Z adding: test/test-reports/python-unittest/test_package/TEST-test_package_fx.TestPackageFX-20230111214303.xml (deflated 75%) 2023-01-11T22:11:48.6931212Z adding: test/test-reports/python-unittest/test_package/TEST-test_package_script.TestPackageScript-20230111214303.xml (deflated 86%) 2023-01-11T22:11:48.6936572Z adding: test/test-reports/python-unittest/test_package/TEST-test_repackage.TestRepackage-20230111214303.xml (deflated 48%) 2023-01-11T22:11:48.6948963Z adding: test/test-reports/python-unittest/test_package/TEST-test_resources.TestResources-20230111214303.xml (deflated 69%) 2023-01-11T22:11:48.6953605Z adding: test/test-reports/python-unittest/test_package/TEST-test_save_load.TestSaveLoad-20230111214303.xml (deflated 76%) 2023-01-11T22:11:48.6960421Z adding: test/test-reports/python-unittest/test_package/TEST-test_analyze.TestAnalyze-20230111214303.xml (deflated 44%) 2023-01-11T22:11:48.6964299Z adding: test/test-reports/python-unittest/test_package/TEST-test_model.ModelTest-20230111214303.xml (deflated 71%) 2023-01-11T22:11:48.6968558Z adding: test/test-reports/python-unittest/test_per_overload_api/TEST-TestPerOverloadAPI-20230111214309.xml (deflated 63%) 2023-01-11T22:11:48.6972681Z adding: test/test-reports/python-unittest/test_pytree/TEST-TestPytree-20230111214318.xml (deflated 84%) 2023-01-11T22:11:48.6979709Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.ao_migration.test_ao_migration.TestAOMigrationNNIntrinsic-20230111214322.xml (deflated 89%) 2023-01-11T22:11:48.6984584Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.ao_migration.test_ao_migration.TestAOMigrationNNQuantized-20230111214322.xml (deflated 92%) 2023-01-11T22:11:48.6993148Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.ao_migration.test_quantization.TestAOMigrationQuantization-20230111214322.xml (deflated 92%) 2023-01-11T22:11:48.6997901Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.ao_migration.test_quantization_fx.TestAOMigrationQuantizationFx-20230111214322.xml (deflated 92%) 2023-01-11T22:11:48.7002304Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_backend_config.TestBackendConfig-20230111214322.xml (deflated 90%) 2023-01-11T22:11:48.7006953Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_bias_correction_eager.TestBiasCorrectionEager-20230111214322.xml (deflated 65%) 2023-01-11T22:11:48.7011903Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_op.TestComparatorOps-20230111214322.xml (deflated 63%) 2023-01-11T22:11:48.7015832Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.jit.test_deprecated_jit_quant.TestDeprecatedJitQuantized-20230111214322.xml (deflated 82%) 2023-01-11T22:11:48.7020237Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_workflow_module.TestDistributed-20230111214322.xml (deflated 72%) 2023-01-11T22:11:48.7025074Z adding: 
test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_module.TestDynamicQuantizedModule-20230111214322.xml (deflated 81%) 2023-01-11T22:11:48.7029367Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_op.TestDynamicQuantizedOps-20230111214322.xml (deflated 86%) 2023-01-11T22:11:48.7043796Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_equalize_eager.TestEqualizeEager-20230111214322.xml (deflated 75%) 2023-01-11T22:11:48.7048487Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_equalize_fx.TestEqualizeFx-20230111214322.xml (deflated 81%) 2023-01-11T22:11:48.7056235Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_numeric_suite_fx.TestFXGraphMatcher-20230111214322.xml (deflated 79%) 2023-01-11T22:11:48.7059840Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_numeric_suite_fx.TestFXGraphMatcherModels-20230111214322.xml (deflated 58%) 2023-01-11T22:11:48.7072809Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIs-20230111214322.xml (deflated 88%) 2023-01-11T22:11:48.7081327Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteCoreAPIsModels-20230111214322.xml (deflated 92%) 2023-01-11T22:11:48.7085786Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_numeric_suite_fx.TestFXNumericSuiteNShadows-20230111214322.xml (deflated 81%) 2023-01-11T22:11:48.7089767Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_workflow_module.TestFakeQuantize-20230111214322.xml (deflated 69%) 2023-01-11T22:11:48.7094984Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_workflow_ops.TestFakeQuantizeOps-20230111214322.xml (deflated 88%) 2023-01-11T22:11:48.7103876Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_fuse_eager.TestFuseEager-20230111214322.xml (deflated 77%) 2023-01-11T22:11:48.7107923Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_quantize_fx.TestFuseFx-20230111214322.xml (deflated 77%) 2023-01-11T22:11:48.7111668Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_workflow_ops.TestFusedObsFakeQuant-20230111214322.xml (deflated 75%) 2023-01-11T22:11:48.7118329Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_workflow_module.TestFusedObsFakeQuantModule-20230111214322.xml (deflated 82%) 2023-01-11T22:11:48.7122045Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.jit.test_fusion_passes.TestFusionPasses-20230111214322.xml (deflated 53%) 2023-01-11T22:11:48.7125648Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_model_report_fx.TestFxDetectInputWeightEqualization-20230111214322.xml (deflated 71%) 2023-01-11T22:11:48.7129570Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_model_report_fx.TestFxDetectOutliers-20230111214322.xml (deflated 73%) 2023-01-11T22:11:48.7134211Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_model_report_fx.TestFxModelReportClass-20230111214322.xml (deflated 72%) 2023-01-11T22:11:48.7138361Z 
adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_model_report_fx.TestFxModelReportDetectDynamicStatic-20230111214322.xml (deflated 52%) 2023-01-11T22:11:48.7141641Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_model_report_fx.TestFxModelReportDetector-20230111214322.xml (deflated 79%) 2023-01-11T22:11:48.7146314Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_model_report_fx.TestFxModelReportObserver-20230111214322.xml (deflated 73%) 2023-01-11T22:11:48.7154545Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_model_report_fx.TestFxModelReportVisualizer-20230111214322.xml (deflated 77%) 2023-01-11T22:11:48.7159251Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_workflow_module.TestHistogramObserver-20230111214322.xml (deflated 78%) 2023-01-11T22:11:48.7164958Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_model_numerics.TestModelNumericsEager-20230111214322.xml (deflated 74%) 2023-01-11T22:11:48.7170611Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_numeric_suite_eager.TestNumericSuiteEager-20230111214322.xml (deflated 85%) 2023-01-11T22:11:48.7175063Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_workflow_module.TestObserver-20230111214322.xml (deflated 79%) 2023-01-11T22:11:48.7181087Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_op.TestPadding-20230111214322.xml (deflated 62%) 2023-01-11T22:11:48.7186270Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_op.TestQNNPackOps-20230111214322.xml (deflated 85%) 2023-01-11T22:11:48.7193760Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_docs.TestQuantizationDocs-20230111214322.xml (deflated 70%) 2023-01-11T22:11:48.7200162Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.jit.test_quantize_jit.TestQuantizeDynamicJitOps-20230111214322.xml (deflated 58%) 2023-01-11T22:11:48.7204695Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.jit.test_quantize_jit.TestQuantizeDynamicJitPasses-20230111214322.xml (deflated 83%) 2023-01-11T22:11:48.7208440Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_quantize_eager_ptq.TestQuantizeEagerOps-20230111214322.xml (deflated 86%) 2023-01-11T22:11:48.7214930Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_quantize_eager_ptq.TestQuantizeEagerPTQDynamic-20230111214322.xml (deflated 80%) 2023-01-11T22:11:48.7219742Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_quantize_eager_ptq.TestQuantizeEagerPTQStatic-20230111214322.xml (deflated 79%) 2023-01-11T22:11:48.7227070Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_quantize_eager_qat.TestQuantizeEagerQAT-20230111214322.xml (deflated 80%) 2023-01-11T22:11:48.7232614Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_quantize_eager_qat.TestQuantizeEagerQATNumerics-20230111214322.xml (deflated 83%) 2023-01-11T22:11:48.7244889Z adding: 
test/test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_quantize_fx.TestQuantizeFx-20230111214322.xml (deflated 87%) 2023-01-11T22:11:48.7255612Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_quantize_fx.TestQuantizeFxModels-20230111214322.xml (deflated 81%) 2023-01-11T22:11:48.7264506Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_quantize_fx.TestQuantizeFxOps-20230111214322.xml (deflated 87%) 2023-01-11T22:11:48.7269514Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.jit.test_quantize_jit.TestQuantizeJit-20230111214322.xml (deflated 71%) 2023-01-11T22:11:48.7276279Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.jit.test_quantize_jit.TestQuantizeJitOps-20230111214322.xml (deflated 85%) 2023-01-11T22:11:48.7282223Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.jit.test_quantize_jit.TestQuantizeJitPasses-20230111214322.xml (deflated 86%) 2023-01-11T22:11:48.7289977Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_quantize_pt2e.TestQuantizePT2EModels-20230111214322.xml (deflated 50%) 2023-01-11T22:11:48.7305049Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_op.TestQuantizedConv-20230111214322.xml (deflated 81%) 2023-01-11T22:11:48.7319056Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_op.TestQuantizedEmbeddingOps-20230111214322.xml (deflated 82%) 2023-01-11T22:11:48.7324237Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_functional.TestQuantizedFunctionalOps-20230111214322.xml (deflated 71%) 2023-01-11T22:11:48.7329949Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_op.TestQuantizedLinear-20230111214322.xml (deflated 79%) 2023-01-11T22:11:48.7339900Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_op.TestQuantizedOps-20230111214322.xml (deflated 87%) 2023-01-11T22:11:48.7349381Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_tensor.TestQuantizedTensor-20230111214322.xml (deflated 89%) 2023-01-11T22:11:48.7355007Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_workflow_module.TestRecordHistogramObserver-20230111214322.xml (deflated 64%) 2023-01-11T22:11:48.7361898Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_module.TestReferenceQuantizedModule-20230111214322.xml (deflated 71%) 2023-01-11T22:11:48.7368050Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.bc.test_backward_compatibility.TestSerialization-20230111214322.xml (deflated 84%) 2023-01-11T22:11:48.7373559Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_module.TestStaticQuantizedModule-20230111214322.xml (deflated 86%) 2023-01-11T22:11:48.7378980Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_subgraph_rewriter.TestSubgraphRewriter-20230111214322.xml (deflated 85%) 2023-01-11T22:11:48.7384143Z adding: test/test-reports/python-unittest/test_quantization/TEST-quantization.core.test_utils.TestUtils-20230111214322.xml (deflated 74%) 2023-01-11T22:11:48.7389390Z adding: 
test/test-reports/python-unittest/test_schema_check/TEST-TestSchemaCheck-20230111214428.xml (deflated 85%) 2023-01-11T22:11:48.7399523Z adding: test/test-reports/python-unittest/test_serialization/TEST-TestOldSerialization-20230111214432.xml (deflated 92%) 2023-01-11T22:11:48.7408321Z adding: test/test-reports/python-unittest/test_serialization/TEST-TestSerialization-20230111214432.xml (deflated 92%) 2023-01-11T22:11:48.7412157Z adding: test/test-reports/python-unittest/test_serialization/TEST-TestSubclassSerialization-20230111214432.xml (deflated 76%) 2023-01-11T22:11:48.7417536Z adding: test/test-reports/python-unittest/test_set_default_mobile_cpu_allocator/TEST-TestSetDefaultMobileCPUAllocator-20230111214521.xml (deflated 57%) 2023-01-11T22:11:48.7428523Z adding: test/test-reports/python-unittest/test_subclass/TEST-TestSubclass-20230111214529.xml (deflated 91%) 2023-01-11T22:11:48.7433160Z adding: test/test-reports/python-unittest/test_tensorboard/TEST-TestTensorBoardEmbedding-20230111214534.xml (deflated 60%) 2023-01-11T22:11:48.7438841Z adding: test/test-reports/python-unittest/test_tensorboard/TEST-TestTensorBoardNumpy-20230111214534.xml (deflated 77%) 2023-01-11T22:11:48.7443982Z adding: test/test-reports/python-unittest/test_tensorboard/TEST-TestTensorBoardPyTorchNumpy-20230111214534.xml (deflated 73%) 2023-01-11T22:11:48.7448309Z adding: test/test-reports/python-unittest/test_tensorboard/TEST-TestTensorBoardPytorchGraph-20230111214534.xml (deflated 64%) 2023-01-11T22:11:48.7453157Z adding: test/test-reports/python-unittest/test_tensorboard/TEST-TestTensorBoardSummary-20230111214534.xml (deflated 84%) 2023-01-11T22:11:48.7457839Z adding: test/test-reports/python-unittest/test_tensorboard/TEST-TestTensorBoardSummaryWriter-20230111214534.xml (deflated 64%) 2023-01-11T22:11:48.7464466Z adding: test/test-reports/python-unittest/test_tensorboard/TEST-TestTensorBoardUtils-20230111214534.xml (deflated 66%) 2023-01-11T22:11:48.7474784Z adding: test/test-reports/python-unittest/test_tensorboard/TEST-TestTensorBoardWriter-20230111214534.xml (deflated 40%) 2023-01-11T22:11:48.7479892Z adding: test/test-reports/python-unittest/test_tensorboard/TEST-TestTensorBoardFigure-20230111214534.xml (deflated 60%) 2023-01-11T22:11:48.7484982Z adding: test/test-reports/python-unittest/test_tensorexpr_pybind/TEST-TestExprHandlePyBind-20230111214607.xml (deflated 39%) 2023-01-11T22:11:48.7492580Z adding: test/test-reports/python-unittest/test_tensorexpr_pybind/TEST-TestTensorExprPyBind-20230111214607.xml (deflated 84%) 2023-01-11T22:11:48.7496943Z adding: test/test-reports/python-unittest/test_type_hints/TEST-TestTypeHints-20230111214611.xml (deflated 40%) 2023-01-11T22:11:48.7501062Z adding: test/test-reports/python-unittest/test_type_info/TEST-TestDTypeInfo-20230111214616.xml (deflated 61%) 2023-01-11T22:11:48.7507021Z adding: test/test-reports/python-unittest/test_vulkan/TEST-TestVulkanRewritePass-20230111214634.xml (deflated 42%) 2023-01-11T22:11:48.7515124Z adding: test/test-reports/python-unittest/test_weak/TEST-WeakKeyDictionaryTestCase-20230111214638.xml (deflated 84%) 2023-01-11T22:11:48.7521752Z adding: test/test-reports/python-unittest/test_weak/TEST-WeakTest-20230111214638.xml (deflated 80%) 2023-01-11T22:11:48.7527074Z adding: test/test-reports/python-unittest/test_xnnpack_integration/TEST-TestXNNPACKOps-20230111214646.xml (deflated 62%) 2023-01-11T22:11:48.7531781Z adding: test/test-reports/python-unittest/test_xnnpack_integration/TEST-TestXNNPACKRewritePass-20230111214646.xml (deflated 51%) 
2023-01-11T22:11:48.7536495Z adding: test/test-reports/python-unittest/test_xnnpack_integration/TEST-TestXNNPACKSerDes-20230111214646.xml (deflated 66%) 2023-01-11T22:11:48.7542014Z adding: test/test-reports/python-unittest/test_xnnpack_integration/TEST-TestXNNPACKConv1dTransformPass-20230111214646.xml (deflated 58%) 2023-01-11T22:11:48.7546620Z adding: test/test-reports/python-unittest/test_autograd/TEST-TestAllowMutationOnSaved-20230111215658.xml (deflated 81%) 2023-01-11T22:11:48.7566318Z adding: test/test-reports/python-unittest/test_autograd/TEST-TestAutograd-20230111215658.xml (deflated 87%) 2023-01-11T22:11:48.7570558Z adding: test/test-reports/python-unittest/test_autograd/TEST-autograd.test_complex.TestAutogradComplex-20230111215658.xml (deflated 60%) 2023-01-11T22:11:48.7577790Z adding: test/test-reports/python-unittest/test_autograd/TEST-TestAutogradForwardMode-20230111215658.xml (deflated 86%) 2023-01-11T22:11:48.7583378Z adding: test/test-reports/python-unittest/test_autograd/TEST-TestAutogradForwardModeBatchedGrad-20230111215658.xml (deflated 64%) 2023-01-11T22:11:48.7591907Z adding: test/test-reports/python-unittest/test_autograd/TEST-autograd.test_functional.TestAutogradFunctional-20230111215658.xml (deflated 94%) 2023-01-11T22:11:48.7596250Z adding: test/test-reports/python-unittest/test_autograd/TEST-TestAutogradInferenceMode-20230111215658.xml (deflated 87%) 2023-01-11T22:11:48.7602849Z adding: test/test-reports/python-unittest/test_autograd/TEST-TestMultithreadAutograd-20230111215658.xml (deflated 76%) 2023-01-11T22:11:48.7609170Z adding: test/test-reports/python-unittest/test_cpp_extensions_jit/TEST-TestCppExtensionJIT-20230111215717.xml (deflated 87%) 2023-01-11T22:11:48.7613435Z adding: test/test-reports/python-unittest/test_fake_tensor/TEST-FakeTensorConstHandling-20230111215755.xml (deflated 76%) 2023-01-11T22:11:48.7618292Z adding: test/test-reports/python-unittest/test_fake_tensor/TEST-FakeTensorConverterTest-20230111215755.xml (deflated 73%) 2023-01-11T22:11:48.7635535Z adding: test/test-reports/python-unittest/test_fake_tensor/TEST-FakeTensorOperatorInvariants-20230111215755.xml (deflated 60%) 2023-01-11T22:11:48.7645946Z adding: test/test-reports/python-unittest/test_fake_tensor/TEST-FakeTensorPropTest-20230111215755.xml (deflated 41%) 2023-01-11T22:11:48.7650784Z adding: test/test-reports/python-unittest/test_fake_tensor/TEST-FakeTensorTest-20230111215755.xml (deflated 85%) 2023-01-11T22:11:48.7655091Z adding: test/test-reports/python-unittest/test_nn/TEST-TestAddRelu-20230111215758.xml (deflated 52%) 2023-01-11T22:11:48.7660910Z adding: test/test-reports/python-unittest/test_nn/TEST-TestConstantPadNd-20230111215758.xml (deflated 53%) 2023-01-11T22:11:48.7665384Z adding: test/test-reports/python-unittest/test_nn/TEST-TestFunctionalPickle-20230111215758.xml (deflated 41%) 2023-01-11T22:11:48.7669502Z adding: test/test-reports/python-unittest/test_nn/TEST-TestFusionEval-20230111215758.xml (deflated 39%) 2023-01-11T22:11:48.7673466Z adding: test/test-reports/python-unittest/test_nn/TEST-TestFusionUtils-20230111215758.xml (deflated 54%) 2023-01-11T22:11:48.7724890Z adding: test/test-reports/python-unittest/test_nn/TEST-TestNN-20230111215758.xml (deflated 96%) 2023-01-11T22:11:48.7729927Z adding: test/test-reports/python-unittest/test_overrides/TEST-TestBroadcastAllOverride-20230111215838.xml (deflated 42%) 2023-01-11T22:11:48.7733833Z adding: test/test-reports/python-unittest/test_overrides/TEST-TestDisabledTorchFunction-20230111215838.xml (deflated 41%) 
2023-01-11T22:11:48.7737986Z adding: test/test-reports/python-unittest/test_overrides/TEST-TestEinsumOverride-20230111215838.xml (deflated 41%) 2023-01-11T22:11:48.7742584Z adding: test/test-reports/python-unittest/test_overrides/TEST-TestGradCheckOverride-20230111215838.xml (deflated 41%) 2023-01-11T22:11:48.7746928Z adding: test/test-reports/python-unittest/test_overrides/TEST-TestGradNewOnesOverride-20230111215838.xml (deflated 41%) 2023-01-11T22:11:48.7751150Z adding: test/test-reports/python-unittest/test_overrides/TEST-TestIndexing-20230111215838.xml (deflated 73%) 2023-01-11T22:11:48.7757378Z adding: test/test-reports/python-unittest/test_overrides/TEST-TestIterator-20230111215838.xml (deflated 40%) 2023-01-11T22:11:48.7764983Z adding: test/test-reports/python-unittest/test_overrides/TEST-TestNamedTuple-20230111215838.xml (deflated 39%) 2023-01-11T22:11:48.7769171Z adding: test/test-reports/python-unittest/test_overrides/TEST-TestPickle-20230111215838.xml (deflated 39%) 2023-01-11T22:11:48.7773117Z adding: test/test-reports/python-unittest/test_overrides/TEST-TestRNN-20230111215838.xml (deflated 38%) 2023-01-11T22:11:48.7780484Z adding: test/test-reports/python-unittest/test_overrides/TEST-TestResolveName-20230111215838.xml (deflated 40%) 2023-01-11T22:11:48.7796657Z adding: test/test-reports/python-unittest/test_overrides/TEST-TestTorchFunctionMode-20230111215838.xml (deflated 85%) 2023-01-11T22:11:48.7821868Z adding: test/test-reports/python-unittest/test_overrides/TEST-TestTorchFunctionOverride-20230111215838.xml (deflated 96%) 2023-01-11T22:11:48.7827622Z adding: test/test-reports/python-unittest/test_overrides/TEST-TestTorchFunctionWarning-20230111215838.xml (deflated 42%) 2023-01-11T22:11:48.7831969Z adding: test/test-reports/python-unittest/test_overrides/TEST-TestWrapTorchFunction-20230111215838.xml (deflated 41%) 2023-01-11T22:11:48.7845710Z adding: test/test-reports/python-unittest/test_sparse_csr/TEST-TestSparseCSRSampler-20230111215842.xml (deflated 40%) 2023-01-11T22:11:48.7853210Z adding: test/test-reports/python-unittest/test_torch/TEST-TestBasicVitalSigns-20230111215845.xml (deflated 63%) 2023-01-11T22:11:48.7872952Z adding: test/test-reports/python-unittest/test_torch/TEST-TestTorch-20230111215845.xml (deflated 93%) 2023-01-11T22:11:48.7878185Z adding: test/test-reports/python-unittest/distributions.test_distributions/TEST-TestAgainstScipy-20230111215849.xml (deflated 62%) 2023-01-11T22:11:48.7882548Z adding: test/test-reports/python-unittest/distributions.test_distributions/TEST-TestConstraints-20230111215849.xml (deflated 56%) 2023-01-11T22:11:48.7887138Z adding: test/test-reports/python-unittest/distributions.test_distributions/TEST-TestDistributionShapes-20230111215849.xml (deflated 91%) 2023-01-11T22:11:48.7897749Z adding: test/test-reports/python-unittest/distributions.test_distributions/TEST-TestDistributions-20230111215849.xml (deflated 91%) 2023-01-11T22:11:48.7902099Z adding: test/test-reports/python-unittest/distributions.test_distributions/TEST-TestFunctors-20230111215849.xml (deflated 70%) 2023-01-11T22:11:48.7908017Z adding: test/test-reports/python-unittest/distributions.test_distributions/TEST-TestJit-20230111215849.xml (deflated 73%) 2023-01-11T22:11:48.7912430Z adding: test/test-reports/python-unittest/distributions.test_distributions/TEST-TestKL-20230111215849.xml (deflated 82%) 2023-01-11T22:11:48.7916837Z adding: test/test-reports/python-unittest/distributions.test_distributions/TEST-TestLazyLogitsInitialization-20230111215849.xml (deflated 58%) 
2023-01-11T22:11:48.7921386Z adding: test/test-reports/python-unittest/distributions.test_distributions/TEST-TestNumericalStability-20230111215849.xml (deflated 83%) 2023-01-11T22:11:48.7926378Z adding: test/test-reports/python-unittest/distributions.test_distributions/TEST-TestRsample-20230111215849.xml (deflated 75%) 2023-01-11T22:11:48.7931344Z adding: test/test-reports/python-unittest/distributions.test_distributions/TEST-TestValidation-20230111215849.xml (deflated 68%) 2023-01-11T22:11:48.7938829Z adding: test/test-reports/python-unittest/nn.test_convolution/TEST-TestConvolutionNN-20230111215931.xml (deflated 84%) 2023-01-11T22:11:48.7944225Z adding: test/test-reports/python-unittest/nn.test_pooling/TEST-TestAvgPool-20230111215939.xml (deflated 76%) 2023-01-11T22:11:48.7949302Z adding: test/test-reports/python-unittest/nn.test_pooling/TEST-TestPoolingNN-20230111215939.xml (deflated 80%) 2023-01-11T22:11:48.7984810Z adding: test/test-reports/python-unittest/test_cpp_api_parity/TEST-TestCppApiParity-20230111215947.xml (deflated 96%) 2023-01-11T22:11:48.7990014Z adding: test/test-reports/python-unittest/test_cpp_extensions_aot_ninja/TEST-TestCppExtensionAOT-20230111220005.xml (deflated 80%) 2023-01-11T22:11:48.7994812Z adding: test/test-reports/python-unittest/test_cpp_extensions_aot_ninja/TEST-TestORTTensor-20230111220005.xml (deflated 67%) 2023-01-11T22:11:48.7999402Z adding: test/test-reports/python-unittest/test_cpp_extensions_aot_ninja/TEST-TestPybindTypeCasters-20230111220005.xml (deflated 41%) 2023-01-11T22:11:48.8004793Z adding: test/test-reports/python-unittest/test_cpp_extensions_aot_ninja/TEST-TestRNGExtension-20230111220005.xml (deflated 40%) 2023-01-11T22:11:48.8011601Z adding: test/test-reports/python-unittest/test_cpp_extensions_aot_ninja/TEST-TestTorchLibrary-20230111220005.xml (deflated 41%) 2023-01-11T22:11:48.8016566Z adding: test/test-reports/python-unittest/test_cpp_extensions_aot_no_ninja/TEST-TestCppExtensionAOT-20230111220015.xml (deflated 81%) 2023-01-11T22:11:48.8022796Z adding: test/test-reports/python-unittest/test_cpp_extensions_aot_no_ninja/TEST-TestORTTensor-20230111220015.xml (deflated 68%) 2023-01-11T22:11:48.8027178Z adding: test/test-reports/python-unittest/test_cpp_extensions_aot_no_ninja/TEST-TestPybindTypeCasters-20230111220015.xml (deflated 41%) 2023-01-11T22:11:48.8031427Z adding: test/test-reports/python-unittest/test_cpp_extensions_aot_no_ninja/TEST-TestRNGExtension-20230111220015.xml (deflated 40%) 2023-01-11T22:11:48.8035780Z adding: test/test-reports/python-unittest/test_cpp_extensions_aot_no_ninja/TEST-TestTorchLibrary-20230111220015.xml (deflated 41%) 2023-01-11T22:11:48.8040970Z adding: test/test-reports/python-unittest/test_cpp_extensions_open_device_registration/TEST-TestCppExtensionOpenRgistration-20230111220017.xml (deflated 55%) 2023-01-11T22:11:48.8046058Z adding: test/test-reports/python-unittest/test_dispatch/TEST-TestDispatch-20230111220045.xml (deflated 81%) 2023-01-11T22:11:48.8055336Z adding: test/test-reports/python-unittest/test_dispatch/TEST-TestPythonDispatcher-20230111220045.xml (deflated 75%) 2023-01-11T22:11:48.8055994Z adding: test/test-reports/python-unittest/test_fx/TEST-fx.test_gradual_type.AnnotationsTest-20230111220111.xml (deflated 78%) 2023-01-11T22:11:48.8056583Z adding: test/test-reports/python-unittest/test_fx/TEST-fx.test_cse_pass.TestCSEPass-20230111220111.xml (deflated 84%) 2023-01-11T22:11:48.8057125Z adding: 
test/test-reports/python-unittest/test_fx/TEST-fx.test_common_passes.TestCommonPass-20230111220111.xml (deflated 82%) 2023-01-11T22:11:48.8058939Z adding: test/test-reports/python-unittest/test_fx/TEST-fx.test_fx_const_fold.TestConstFold-20230111220111.xml (deflated 83%) 2023-01-11T22:11:48.8059681Z adding: test/test-reports/python-unittest/test_fx/TEST-fx.test_fx_param_shape_control_flow.TestConstParamShapeInControlFlow-20230111220111.xml (deflated 82%) 2023-01-11T22:11:48.8060359Z adding: test/test-reports/python-unittest/test_fx/TEST-fx.test_dce_pass.TestDCE-20230111220111.xml (deflated 82%) 2023-01-11T22:11:48.8063508Z adding: test/test-reports/python-unittest/test_fx/TEST-TestFX-20230111220111.xml (deflated 86%) 2023-01-11T22:11:48.8064112Z adding: test/test-reports/python-unittest/test_fx/TEST-TestFXAPIBackwardCompatibility-20230111220111.xml (deflated 63%) 2023-01-11T22:11:48.8065667Z adding: test/test-reports/python-unittest/test_fx/TEST-TestFunctionalTracing-20230111220111.xml (deflated 94%) 2023-01-11T22:11:48.8066294Z adding: test/test-reports/python-unittest/test_fx/TEST-fx.test_pass_infra.TestPassManager-20230111220111.xml (deflated 75%) 2023-01-11T22:11:48.8067391Z adding: test/test-reports/python-unittest/test_fx/TEST-fx.test_subgraph_rewriter.TestSubgraphRewriter-20230111220111.xml (deflated 86%) 2023-01-11T22:11:48.8069654Z adding: test/test-reports/python-unittest/test_fx/TEST-TestVisionTracing-20230111220111.xml (deflated 88%) 2023-01-11T22:11:48.8070397Z adding: test/test-reports/python-unittest/test_fx/TEST-fx.test_gradual_type.TypeCheckerTest-20230111220111.xml (deflated 90%) 2023-01-11T22:11:48.8071084Z adding: test/test-reports/python-unittest/test_jit_cuda_fuser/TEST-TestEnableDisableCudaFuser-20230111220548.xml (deflated 69%) 2023-01-11T22:11:48.8071683Z adding: test/test-reports/python-unittest/test_jit_cuda_fuser/TEST-jit.test_fuser_common.TestFuserCommon-20230111220548.xml (deflated 48%) 2023-01-11T22:11:48.8074396Z adding: test/test-reports/python-unittest/test_jit_cuda_fuser/TEST-TestCudaFuser-20230111220548.xml (deflated 92%) 2023-01-11T22:11:48.8074997Z adding: test/test-reports/python-unittest/test_jit_disabled/TEST-TestJitDisabled-20230111220550.xml (deflated 61%) 2023-01-11T22:11:48.8075524Z adding: test/test-reports/python-unittest/test_mobile_optimizer/TEST-TestOptimizer-20230111220554.xml (deflated 64%) 2023-01-11T22:11:48.8076828Z adding: test/test-reports/python-unittest/test_multiprocessing/TEST-TestMultiprocessing-20230111220600.xml (deflated 86%) 2023-01-11T22:11:48.8077445Z adding: test/test-reports/python-unittest/test_multiprocessing_spawn/TEST-ErrorTest-20230111220612.xml (deflated 39%) 2023-01-11T22:11:48.8077994Z adding: test/test-reports/python-unittest/test_multiprocessing_spawn/TEST-ForkTest-20230111220612.xml (deflated 76%) 2023-01-11T22:11:48.8078585Z adding: test/test-reports/python-unittest/test_multiprocessing_spawn/TEST-SpawnTest-20230111220612.xml (deflated 78%) 2023-01-11T22:11:48.8079259Z adding: test/test-reports/python-unittest/test_namedtuple_return_api/TEST-TestNamedTupleAPI-20230111220631.xml (deflated 77%) 2023-01-11T22:11:48.8079840Z adding: test/test-reports/python-unittest/test_prims/TEST-TestPrimsBasic-20230111220706.xml (deflated 53%) 2023-01-11T22:11:48.8080340Z adding: test/test-reports/python-unittest/test_show_pickle/TEST-TestShowPickle-20230111220710.xml (deflated 40%) 2023-01-11T22:11:48.8081060Z adding: test/test-reports/python-unittest/test_tensorexpr/TEST-TestTensorExprFuser-20230111220717.xml (deflated 88%) 
2023-01-11T22:11:48.8086033Z adding: test/test-reports/python-pytest/inductor.test_torchinductor_opinfo/inductor.test_torchinductor_opinfo-484174d0a16e76ad.xml (deflated 28%) 2023-01-11T22:11:48.8089992Z adding: test/test-reports/python-pytest/inductor.test_torchinductor_opinfo/inductor.test_torchinductor_opinfo-e42ebd2ecf6eaa23.xml (deflated 28%) 2023-01-11T22:11:48.8108208Z adding: test/test-reports/python-pytest/inductor.test_torchinductor_opinfo/inductor.test_torchinductor_opinfo-b7995a19757f5615.xml (deflated 28%) 2023-01-11T22:11:48.8108858Z adding: test/test-reports/python-pytest/test_ops/test_ops-b5301370d4336bb2.xml (deflated 28%) 2023-01-11T22:11:48.8109369Z adding: test/test-reports/python-pytest/test_ops/test_ops-b489e73817dffd05.xml (deflated 28%) 2023-01-11T22:11:48.8109881Z adding: test/test-reports/python-pytest/test_ops/test_ops-a5a4f1a02e3ee27c.xml (deflated 29%) 2023-01-11T22:11:48.8110566Z adding: test/test-reports/python-pytest/test_ops_fwd_gradients/test_ops_fwd_gradients-b9403dcd25081305.xml (deflated 28%) 2023-01-11T22:11:48.8111110Z adding: test/test-reports/python-pytest/test_ops_fwd_gradients/test_ops_fwd_gradients-1e0b07ffbf426a50.xml (deflated 28%) 2023-01-11T22:11:48.8111629Z adding: test/test-reports/python-pytest/test_ops_fwd_gradients/test_ops_fwd_gradients-27a816f975c007fa.xml (deflated 28%) 2023-01-11T22:11:48.8112131Z adding: test/test-reports/python-pytest/test_ops_gradients/test_ops_gradients-57f53faeab72d232.xml (deflated 28%) 2023-01-11T22:11:48.8112630Z adding: test/test-reports/python-pytest/test_ops_gradients/test_ops_gradients-fac064cd5b4e0e08.xml (deflated 28%) 2023-01-11T22:11:48.8113203Z adding: test/test-reports/python-pytest/test_ops_gradients/test_ops_gradients-94cc7b66651244d1.xml (deflated 28%) 2023-01-11T22:11:48.8113688Z adding: test/test-reports/python-pytest/test_ops_jit/test_ops_jit-fe16d0ecf9be8ab0.xml (deflated 29%) 2023-01-11T22:11:48.8114150Z adding: test/test-reports/python-pytest/test_ops_jit/test_ops_jit-f889cd96ee3679f1.xml (deflated 28%) 2023-01-11T22:11:48.8114604Z adding: test/test-reports/python-pytest/test_ops_jit/test_ops_jit-0627f0d222682e84.xml (deflated 28%) 2023-01-11T22:11:48.8123792Z adding: test/test-reports/cpp-unittest/test_libtorch/test_jit.xml (deflated 92%) 2023-01-11T22:11:48.8131655Z adding: test/test-reports/cpp-unittest/test_libtorch/test_lazy.xml (deflated 93%) 2023-01-11T22:11:48.8148314Z adding: test/test-reports/cpp-unittest/test_libtorch/test_api.xml (deflated 92%) 2023-01-11T22:11:48.8161937Z adding: test/test-reports/cpp-unittest/test_libtorch/test_tensorexpr.xml (deflated 92%) 2023-01-11T22:11:48.8162440Z adding: test/test-reports/cpp-unittest/test_aot_compilation/test_mobile_nnc.xml (deflated 75%) 2023-01-11T22:11:48.8186620Z ##[group]Run # Remove any previous test reports if they exist 2023-01-11T22:11:48.8186957Z # Remove any previous test reports if they exist 2023-01-11T22:11:48.8187233Z rm -f usage-log-*.zip 2023-01-11T22:11:48.8187545Z # this workflow is also run in bazel build test, but we dont generate usage reports for it 2023-01-11T22:11:48.8187857Z # so check to see if the file exists first 2023-01-11T22:11:48.8188120Z if [ -f 'usage_log.txt' ]; then 2023-01-11T22:11:48.8188399Z  zip "usage-log-${FILE_SUFFIX}.zip" 'usage_log.txt' 2023-01-11T22:11:48.8188633Z fi 2023-01-11T22:11:48.8199498Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-01-11T22:11:48.8199826Z env: 2023-01-11T22:11:48.8200027Z GIT_DEFAULT_BRANCH: master 2023-01-11T22:11:48.8200338Z DOCKER_CONTAINER_ID: 
a71a466242c1f37c1e0011471b01ee39ce7e013772770194af88dbad63e7c256 2023-01-11T22:11:48.8200692Z FILE_SUFFIX: test-nogpu_NO_AVX2-1-1-linux.2xlarge_10589559916 2023-01-11T22:11:48.8200958Z ##[endgroup] 2023-01-11T22:11:48.8570368Z adding: usage_log.txt (deflated 97%) 2023-01-11T22:11:48.8616516Z ##[group]Run seemethere/upload-artifact-s3@v5 2023-01-11T22:11:48.8616731Z with: 2023-01-11T22:11:48.8616921Z s3-prefix: pytorch/pytorch/3896346758/1/artifact 2023-01-11T22:11:48.8617140Z retention-days: 14 2023-01-11T22:11:48.8617335Z if-no-files-found: warn 2023-01-11T22:11:48.8617534Z path: test-jsons-*.zip 2023-01-11T22:11:48.8617704Z name: artifact 2023-01-11T22:11:48.8618127Z s3-bucket: gha-artifacts 2023-01-11T22:11:48.8618317Z region: us-east-1 2023-01-11T22:11:48.8618468Z env: 2023-01-11T22:11:48.8618640Z GIT_DEFAULT_BRANCH: master 2023-01-11T22:11:48.8618907Z DOCKER_CONTAINER_ID: a71a466242c1f37c1e0011471b01ee39ce7e013772770194af88dbad63e7c256 2023-01-11T22:11:48.8619141Z ##[endgroup] 2023-01-11T22:11:49.2196369Z NOTE: s3-prefix specified, ignoring name parameter 2023-01-11T22:11:49.2196753Z With the provided path, there will be 1 file uploaded 2023-01-11T22:11:49.2197076Z Uploading to s3 prefix: pytorch/pytorch/3896346758/1/artifact 2023-01-11T22:11:49.2262776Z Starting upload of test-jsons-test-nogpu_NO_AVX2-1-1-linux.2xlarge_10589559916.zip 2023-01-11T22:11:49.3661586Z Finished upload of test-jsons-test-nogpu_NO_AVX2-1-1-linux.2xlarge_10589559916.zip 2023-01-11T22:11:49.3789779Z ##[group]Run seemethere/upload-artifact-s3@v5 2023-01-11T22:11:49.3790027Z with: 2023-01-11T22:11:49.3790248Z s3-prefix: pytorch/pytorch/3896346758/1/artifact 2023-01-11T22:11:49.3790494Z retention-days: 14 2023-01-11T22:11:49.3790722Z if-no-files-found: error 2023-01-11T22:11:49.3790942Z path: test-reports-*.zip 2023-01-11T22:11:49.3791157Z name: artifact 2023-01-11T22:11:49.3791368Z s3-bucket: gha-artifacts 2023-01-11T22:11:49.3791574Z region: us-east-1 2023-01-11T22:11:49.3791770Z env: 2023-01-11T22:11:49.3791975Z GIT_DEFAULT_BRANCH: master 2023-01-11T22:11:49.3792258Z DOCKER_CONTAINER_ID: a71a466242c1f37c1e0011471b01ee39ce7e013772770194af88dbad63e7c256 2023-01-11T22:11:49.3792533Z ##[endgroup] 2023-01-11T22:11:49.7109305Z NOTE: s3-prefix specified, ignoring name parameter 2023-01-11T22:11:49.7109831Z With the provided path, there will be 1 file uploaded 2023-01-11T22:11:49.7110274Z Uploading to s3 prefix: pytorch/pytorch/3896346758/1/artifact 2023-01-11T22:11:49.7118311Z Starting upload of test-reports-test-nogpu_NO_AVX2-1-1-linux.2xlarge_10589559916.zip 2023-01-11T22:11:49.9601993Z Finished upload of test-reports-test-nogpu_NO_AVX2-1-1-linux.2xlarge_10589559916.zip 2023-01-11T22:11:49.9723718Z ##[group]Run seemethere/upload-artifact-s3@v5 2023-01-11T22:11:49.9723935Z with: 2023-01-11T22:11:49.9724139Z s3-prefix: pytorch/pytorch/3896346758/1/artifact 2023-01-11T22:11:49.9724347Z retention-days: 14 2023-01-11T22:11:49.9724541Z if-no-files-found: ignore 2023-01-11T22:11:49.9724739Z path: usage-log-*.zip 2023-01-11T22:11:49.9724911Z name: artifact 2023-01-11T22:11:49.9725090Z s3-bucket: gha-artifacts 2023-01-11T22:11:49.9725279Z region: us-east-1 2023-01-11T22:11:49.9725428Z env: 2023-01-11T22:11:49.9725597Z GIT_DEFAULT_BRANCH: master 2023-01-11T22:11:49.9725866Z DOCKER_CONTAINER_ID: a71a466242c1f37c1e0011471b01ee39ce7e013772770194af88dbad63e7c256 2023-01-11T22:11:49.9726101Z ##[endgroup] 2023-01-11T22:11:50.3027395Z NOTE: s3-prefix specified, ignoring name parameter 2023-01-11T22:11:50.3027987Z With the provided 
path, there will be 1 file uploaded 2023-01-11T22:11:50.3028375Z Uploading to s3 prefix: pytorch/pytorch/3896346758/1/artifact 2023-01-11T22:11:50.3035162Z Starting upload of usage-log-test-nogpu_NO_AVX2-1-1-linux.2xlarge_10589559916.zip 2023-01-11T22:11:50.6047231Z Finished upload of usage-log-test-nogpu_NO_AVX2-1-1-linux.2xlarge_10589559916.zip 2023-01-11T22:11:50.6166430Z ##[group]Run # shellcheck disable=SC2156 2023-01-11T22:11:50.6166659Z # shellcheck disable=SC2156 2023-01-11T22:11:50.6166967Z find . -iname "core.[1-9]*" -exec docker exec "${DOCKER_CONTAINER_ID}" sh -c "gdb python {} -ex 'bt' -ex 'q'" \; 2023-01-11T22:11:50.6177828Z shell: /usr/bin/bash -e {0} 2023-01-11T22:11:50.6178004Z env: 2023-01-11T22:11:50.6178188Z GIT_DEFAULT_BRANCH: master 2023-01-11T22:11:50.6178459Z DOCKER_CONTAINER_ID: a71a466242c1f37c1e0011471b01ee39ce7e013772770194af88dbad63e7c256 2023-01-11T22:11:50.6178701Z ##[endgroup] 2023-01-11T22:11:52.2034778Z GNU gdb (Ubuntu 8.1.1-0ubuntu1) 8.1.1 2023-01-11T22:11:52.2035171Z Copyright (C) 2018 Free Software Foundation, Inc. 2023-01-11T22:11:52.2035653Z License GPLv3+: GNU GPL version 3 or later 2023-01-11T22:11:52.2035958Z This is free software: you are free to change and redistribute it. 2023-01-11T22:11:52.2036323Z There is NO WARRANTY, to the extent permitted by law. Type "show copying" 2023-01-11T22:11:52.2036582Z and "show warranty" for details. 2023-01-11T22:11:52.2036876Z This GDB was configured as "x86_64-linux-gnu". 2023-01-11T22:11:52.2037131Z Type "show configuration" for configuration details. 2023-01-11T22:11:52.2037373Z For bug reporting instructions, please see: 2023-01-11T22:11:52.2037613Z . 2023-01-11T22:11:52.2038069Z Find the GDB manual and other documentation resources online at: 2023-01-11T22:11:52.2038340Z . 2023-01-11T22:11:52.2038565Z For help, type "help". 2023-01-11T22:11:52.2038802Z Type "apropos word" to search for commands related to "word"... 2023-01-11T22:11:52.3843137Z Reading symbols from python...done. 2023-01-11T22:11:52.9453352Z 2023-01-11T22:11:52.9453746Z warning: core file may not match specified executable file. 
2023-01-11T22:11:53.0304948Z BFD: warning: /opt/conda/lib/libgomp.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:11:53.0305595Z BFD: warning: /opt/conda/lib/libgomp.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:11:53.0306078Z [New LWP 8157] 2023-01-11T22:11:53.0306315Z [New LWP 8164] 2023-01-11T22:11:53.0306599Z BFD: warning: /opt/conda/bin/../lib/libstdc++.so.6: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:11:53.0306940Z BFD: warning: /opt/conda/bin/../lib/libstdc++.so.6: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:11:53.0307296Z BFD: warning: /opt/conda/bin/../lib/libgcc_s.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:11:53.0307644Z BFD: warning: /opt/conda/bin/../lib/libgcc_s.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:11:53.0307875Z [New LWP 8161] 2023-01-11T22:11:53.0308041Z [New LWP 8162] 2023-01-11T22:11:53.0308205Z [New LWP 8171] 2023-01-11T22:11:53.0308368Z [New LWP 8167] 2023-01-11T22:11:53.0308540Z [New LWP 8169] 2023-01-11T22:11:53.0308687Z [New LWP 8173] 2023-01-11T22:11:53.0381413Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libgfortran.so.5: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:11:53.0381980Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libgfortran.so.5: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:11:53.0423767Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libquadmath.so.0: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:11:53.0424353Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libquadmath.so.0: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:11:53.0831277Z [Thread debugging using libthread_db enabled] 2023-01-11T22:11:53.0831745Z Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". 2023-01-11T22:12:00.5877768Z 51 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory. 2023-01-11T22:12:00.5878782Z warning: File "/var/lib/jenkins/workspace/.gdbinit" auto-loading has been declined by your `auto-load safe-path' set to "$debugdir:$datadir/auto-load". 2023-01-11T22:12:00.5879314Z Core was generated by `/opt/conda/bin/python -bb -c from multiprocessing.spawn import spawn_main; spaw'. 2023-01-11T22:12:00.5879660Z Program terminated with signal SIGSEGV, Segmentation fault. 2023-01-11T22:12:00.5880016Z #0 raise (sig=) at ../sysdeps/unix/sysv/linux/raise.c:51 2023-01-11T22:12:00.5880284Z [Current thread is 1 (Thread 0x7fcb46f39200 (LWP 8157))] 2023-01-11T22:12:00.5956290Z To enable execution of this file add 2023-01-11T22:12:00.5956869Z add-auto-load-safe-path /var/lib/jenkins/workspace/.gdbinit 2023-01-11T22:12:00.5957405Z line to your configuration file "/var/lib/jenkins/.gdbinit". 2023-01-11T22:12:00.5957758Z To completely disable this security protection add 2023-01-11T22:12:00.5958025Z set auto-load safe-path / 2023-01-11T22:12:00.5958256Z line to your configuration file "/var/lib/jenkins/.gdbinit". 2023-01-11T22:12:00.5958531Z For more information about this security protection see the 2023-01-11T22:12:00.5958888Z "Auto-loading safe path" section in the GDB manual. 
E.g., run from the shell: 2023-01-11T22:12:00.5959189Z info "(gdb)Auto-loading safe path" 2023-01-11T22:12:00.6021477Z #0 raise (sig=) at ../sysdeps/unix/sysv/linux/raise.c:51 2023-01-11T22:12:00.6024215Z #1 0x00007fcb0f5ca65b in handler_SIGSEGV(int, siginfo_t*, void*) () 2023-01-11T22:12:00.6024768Z from /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_python.so 2023-01-11T22:12:00.6033874Z #2 2023-01-11T22:12:00.6034271Z #3 __strlen_avx2 () at ../sysdeps/x86_64/multiarch/strlen-avx2.S:65 2023-01-11T22:12:00.6177005Z #4 0x00007fcb459f703b in string_at (ptr=0x0, size=-1) at :5564 2023-01-11T22:12:00.6179168Z #5 0x00007fcb46dee052 in ffi_call_unix64 () 2023-01-11T22:12:00.6179485Z from /opt/conda/lib/python3.10/lib-dynload/../../libffi.so.8 2023-01-11T22:12:00.6188075Z #6 0x00007fcb46dec8cd in ffi_call_int () 2023-01-11T22:12:00.6188399Z from /opt/conda/lib/python3.10/lib-dynload/../../libffi.so.8 2023-01-11T22:12:00.6195599Z #7 0x00007fcb459ff879 in _call_function_pointer (argtypecount=2, argcount=2, 2023-01-11T22:12:00.6196035Z resmem=0x7fff9a96d270, restype=, atypes=, 2023-01-11T22:12:00.6196338Z avalues=, pProc=0x7fcb459f7002 , flags=4357) 2023-01-11T22:12:00.6196718Z at /usr/local/src/conda/python-3.10.8/build-static/stgdict.c:916 2023-01-11T22:12:00.6196945Z #8 _ctypes_callproc () 2023-01-11T22:12:00.6197249Z at /usr/local/src/conda/python-3.10.8/build-static/stgdict.c:1259 2023-01-11T22:12:00.6197534Z #9 0x00007fcb459ff3fe in PyCFuncPtr_call () at :4201 2023-01-11T22:12:00.6199059Z #10 0x00000000004f7b8b in _PyObject_MakeTpCall.localalias () 2023-01-11T22:12:00.6199403Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:224 2023-01-11T22:12:00.6230212Z #11 0x00000000004f37ae in _PyObject_VectorcallTstate ( 2023-01-11T22:12:00.6230671Z kwnames=, 2023-01-11T22:12:00.6231056Z nargsf=, args=0x7fca9ff90358, 2023-01-11T22:12:00.6231442Z callable=, 2023-01-11T22:12:00.6231811Z tstate=) 2023-01-11T22:12:00.6232221Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:112 2023-01-11T22:12:00.6235045Z #12 _PyObject_VectorcallTstate (kwnames=0x0, nargsf=, 2023-01-11T22:12:00.6235382Z args=0x7fca9ff90358, callable=0x7fcb45cb0880, tstate=) 2023-01-11T22:12:00.6235751Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:99 2023-01-11T22:12:00.6239615Z #13 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:00.6239927Z args=0x7fca9ff90358, callable=0x7fcb45cb0880) 2023-01-11T22:12:00.6240274Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:00.6243572Z #14 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:00.6243928Z pp_stack=, trace_info=0x7fff9a96d580, 2023-01-11T22:12:00.6244171Z tstate=) 2023-01-11T22:12:00.6244484Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:00.6244719Z #15 _PyEval_EvalFrameDefault () 2023-01-11T22:12:00.6245026Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4213 2023-01-11T22:12:00.6255861Z #16 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:00.6256376Z throwflag=, 2023-01-11T22:12:00.6256971Z f=, 2023-01-11T22:12:00.6257337Z tstate=) 2023-01-11T22:12:00.6257736Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:00.6261955Z #17 _PyEval_Vector (kwnames=, 2023-01-11T22:12:00.6262782Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:00.6263523Z args@entry=, locals=0x0, 2023-01-11T22:12:00.6263932Z locals@entry=, con=0x7fcb45c6f890, 2023-01-11T22:12:00.6264338Z con@entry=, tstate=0x1b2bbf0, 
2023-01-11T22:12:00.6264717Z tstate@entry=) 2023-01-11T22:12:00.6265125Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:00.6265365Z #18 _PyFunction_Vectorcall () 2023-01-11T22:12:00.6265663Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:00.6271143Z #19 0x00000000004f351e in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:00.6271478Z nargsf=, args=0x7fca9ff901c0, callable=0x7fcb45c6f880, 2023-01-11T22:12:00.6271720Z tstate=0x1b2bbf0) 2023-01-11T22:12:00.6272028Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:00.6275332Z #20 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:00.6275700Z args=0x7fca9ff901c0, callable=0x7fcb45c6f880) 2023-01-11T22:12:00.6276045Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:00.6278468Z #21 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:00.6278871Z pp_stack=, trace_info=0x7fff9a96d740, 2023-01-11T22:12:00.6279169Z tstate=) 2023-01-11T22:12:00.6279473Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:00.6279713Z #22 _PyEval_EvalFrameDefault () 2023-01-11T22:12:00.6280076Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4181 2023-01-11T22:12:00.6291658Z #23 0x0000000000543a33 in _PyEval_EvalFrame ( 2023-01-11T22:12:00.6292009Z throwflag=, 2023-01-11T22:12:00.6292372Z f=, 2023-01-11T22:12:00.6292731Z tstate=) 2023-01-11T22:12:00.6293113Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:00.6297603Z #24 _PyEval_Vector (kwnames=0x0, 2023-01-11T22:12:00.6298050Z argcount=, 2023-01-11T22:12:00.6298461Z args=, locals=0x0, con=0x7fca9feb8680, tstate=0x1b2bbf0) 2023-01-11T22:12:00.6298875Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:00.6300690Z #25 _PyFunction_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:00.6301027Z stack=, func=0x7fca9feb8670) 2023-01-11T22:12:00.6301347Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:00.6306499Z #26 _PyObject_VectorcallTstate (kwnames=0x0, nargsf=, 2023-01-11T22:12:00.6306952Z args=, callable=0x7fca9feb8670, tstate=0x1b2bbf0) 2023-01-11T22:12:00.6307477Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:114 2023-01-11T22:12:00.6307709Z #27 vectorcall_unbound ( 2023-01-11T22:12:00.6308020Z nargs=, args=, 2023-01-11T22:12:00.6308401Z func=, 2023-01-11T22:12:00.6308764Z unbound=, 2023-01-11T22:12:00.6309197Z tstate=) 2023-01-11T22:12:00.6309585Z at /usr/local/src/conda/python-3.10.8/Programs/gcmodule.c:1629 2023-01-11T22:12:00.6309817Z #28 vectorcall_method () 2023-01-11T22:12:00.6310103Z at /usr/local/src/conda/python-3.10.8/Programs/gcmodule.c:1661 2023-01-11T22:12:00.6311939Z #29 0x0000000000543898 in slot_mp_subscript (self=, 2023-01-11T22:12:00.6312384Z arg1=) 2023-01-11T22:12:00.6312957Z at /usr/local/src/conda/python-3.10.8/Programs/gcmodule.c:7258 2023-01-11T22:12:00.6313464Z #30 0x00000000004ef56e in _PyEval_EvalFrameDefault () 2023-01-11T22:12:00.6313847Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:2109 2023-01-11T22:12:00.6315775Z #31 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:00.6316317Z throwflag=, 2023-01-11T22:12:00.6316866Z f=, 2023-01-11T22:12:00.6317229Z tstate=) 2023-01-11T22:12:00.6317632Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:00.6322276Z #32 _PyEval_Vector (kwnames=, 2023-01-11T22:12:00.6322900Z kwnames@entry=, argcount=, args=, 
2023-01-11T22:12:00.6323617Z args@entry=, locals=0x0, 2023-01-11T22:12:00.6324020Z locals@entry=, con=0x7fcaa062c170, 2023-01-11T22:12:00.6324414Z con@entry=, tstate=0x1b2bbf0, 2023-01-11T22:12:00.6324803Z tstate@entry=) 2023-01-11T22:12:00.6325209Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:00.6325439Z #33 _PyFunction_Vectorcall () 2023-01-11T22:12:00.6325745Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:00.6330085Z #34 0x00000000004eecef in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:00.6330631Z nargsf=, args=0x7fcaa0122a48, callable=0x7fcaa062c160, 2023-01-11T22:12:00.6330929Z tstate=0x1b2bbf0) 2023-01-11T22:12:00.6331376Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:00.6334405Z #35 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:00.6334867Z args=0x7fcaa0122a48, callable=0x7fcaa062c160) 2023-01-11T22:12:00.6335384Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:00.6337488Z #36 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:00.6337979Z pp_stack=, trace_info=0x7fff9a96db40, 2023-01-11T22:12:00.6338363Z tstate=) 2023-01-11T22:12:00.6338751Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:00.6339037Z #37 _PyEval_EvalFrameDefault () 2023-01-11T22:12:00.6339457Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4213 2023-01-11T22:12:00.6339765Z #38 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:00.6340072Z throwflag=, 2023-01-11T22:12:00.6340429Z f=, 2023-01-11T22:12:00.6340777Z tstate=) 2023-01-11T22:12:00.6341159Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:00.6345797Z #39 _PyEval_Vector (kwnames=, 2023-01-11T22:12:00.6346431Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:00.6347010Z args@entry=, locals=0x0, 2023-01-11T22:12:00.6347414Z locals@entry=, con=0x7fcaa2379400, 2023-01-11T22:12:00.6347820Z con@entry=, tstate=0x1b2bbf0, 2023-01-11T22:12:00.6348210Z tstate@entry=) 2023-01-11T22:12:00.6348593Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:00.6348834Z #40 _PyFunction_Vectorcall () 2023-01-11T22:12:00.6349138Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:00.6353585Z #41 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:00.6354092Z nargsf=, args=0x6adf088, callable=0x7fcaa23793f0, 2023-01-11T22:12:00.6354458Z tstate=0x1b2bbf0) 2023-01-11T22:12:00.6354772Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:00.6358032Z #42 PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0x6adf088, 2023-01-11T22:12:00.6358287Z callable=0x7fcaa23793f0) 2023-01-11T22:12:00.6358628Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:00.6361285Z #43 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:00.6361777Z pp_stack=, trace_info=0x7fff9a96dd00, 2023-01-11T22:12:00.6362161Z tstate=) 2023-01-11T22:12:00.6362516Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:00.6362752Z #44 _PyEval_EvalFrameDefault () 2023-01-11T22:12:00.6363170Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:00.6363533Z #45 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:00.6363856Z throwflag=, 2023-01-11T22:12:00.6364221Z f=, 2023-01-11T22:12:00.6364577Z tstate=) 2023-01-11T22:12:00.6365066Z at 
/usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:00.6369758Z #46 _PyEval_Vector (kwnames=, 2023-01-11T22:12:00.6370315Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:00.6371086Z args@entry=, locals=0x0, 2023-01-11T22:12:00.6371496Z locals@entry=, con=0x7fcaa2378710, 2023-01-11T22:12:00.6371885Z con@entry=, tstate=0x1b2bbf0, 2023-01-11T22:12:00.6372271Z tstate@entry=) 2023-01-11T22:12:00.6372678Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:00.6372908Z #47 _PyFunction_Vectorcall () 2023-01-11T22:12:00.6373207Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:00.6377849Z #48 0x00000000004f141c in do_call_core (kwdict=0x7fcb45dba780, 2023-01-11T22:12:00.6378250Z callargs=0x7fcaa0016520, func=0x7fcaa2378700, trace_info=0x7fff9a96dec0, 2023-01-11T22:12:00.6378485Z tstate=) 2023-01-11T22:12:00.6378911Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5943 2023-01-11T22:12:00.6379159Z #49 _PyEval_EvalFrameDefault () 2023-01-11T22:12:00.6379455Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4277 2023-01-11T22:12:00.6380281Z #50 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:00.6380804Z throwflag=, 2023-01-11T22:12:00.6381184Z f=, 2023-01-11T22:12:00.6381555Z tstate=) 2023-01-11T22:12:00.6382067Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:00.6386327Z #51 _PyEval_Vector (kwnames=, 2023-01-11T22:12:00.6386989Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:00.6387557Z args@entry=, locals=0x0, 2023-01-11T22:12:00.6387951Z locals@entry=, con=0x7fcb45de55b0, 2023-01-11T22:12:00.6388357Z con@entry=, tstate=0x1b2bbf0, 2023-01-11T22:12:00.6388748Z tstate@entry=) 2023-01-11T22:12:00.6389164Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:00.6389397Z #52 _PyFunction_Vectorcall () 2023-01-11T22:12:00.6389703Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:00.6394315Z #53 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:00.6394812Z nargsf=, args=0x6a51af8, callable=0x7fcb45de55a0, 2023-01-11T22:12:00.6395176Z tstate=0x1b2bbf0) 2023-01-11T22:12:00.6395504Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:00.6399045Z #54 PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0x6a51af8, 2023-01-11T22:12:00.6399485Z callable=0x7fcb45de55a0) 2023-01-11T22:12:00.6400020Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:00.6402484Z #55 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:00.6403128Z pp_stack=, trace_info=0x7fff9a96e080, 2023-01-11T22:12:00.6403490Z tstate=) 2023-01-11T22:12:00.6403861Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:00.6404178Z #56 _PyEval_EvalFrameDefault () 2023-01-11T22:12:00.6404478Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:00.6404808Z #57 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:00.6405343Z throwflag=, 2023-01-11T22:12:00.6406001Z f=, 2023-01-11T22:12:00.6406663Z tstate=) 2023-01-11T22:12:00.6407181Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:00.6411099Z #58 _PyEval_Vector (kwnames=, 2023-01-11T22:12:00.6411736Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:00.6412295Z args@entry=, locals=0x0, 2023-01-11T22:12:00.6412697Z locals@entry=, con=0x7fcb45de5f40, 2023-01-11T22:12:00.6413192Z con@entry=, tstate=0x1b2bbf0, 
2023-01-11T22:12:00.6413565Z tstate@entry=) 2023-01-11T22:12:00.6413961Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:00.6414201Z #59 _PyFunction_Vectorcall () 2023-01-11T22:12:00.6414502Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:00.6419012Z #60 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:00.6419386Z nargsf=, args=0x1ba85a0, callable=0x7fcb45de5f30, 2023-01-11T22:12:00.6419621Z tstate=0x1b2bbf0) 2023-01-11T22:12:00.6419924Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:00.6424008Z #61 PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0x1ba85a0, 2023-01-11T22:12:00.6424287Z callable=0x7fcb45de5f30) 2023-01-11T22:12:00.6424627Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:00.6426738Z #62 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:00.6427197Z pp_stack=, trace_info=0x7fff9a96e240, 2023-01-11T22:12:00.6427624Z tstate=) 2023-01-11T22:12:00.6428178Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:00.6428618Z #63 _PyEval_EvalFrameDefault () 2023-01-11T22:12:00.6429132Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:00.6429599Z #64 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:00.6430135Z throwflag=, 2023-01-11T22:12:00.6430808Z f=, 2023-01-11T22:12:00.6431315Z tstate=) 2023-01-11T22:12:00.6431685Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:00.6435369Z #65 _PyEval_Vector (kwnames=, 2023-01-11T22:12:00.6435993Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:00.6436818Z args@entry=, locals=0x0, 2023-01-11T22:12:00.6437327Z locals@entry=, con=0x7fcb45c66de0, 2023-01-11T22:12:00.6437804Z con@entry=, tstate=0x1b2bbf0, 2023-01-11T22:12:00.6438327Z tstate@entry=) 2023-01-11T22:12:00.6438746Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:00.6439044Z #66 _PyFunction_Vectorcall () 2023-01-11T22:12:00.6439414Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:00.6443466Z #67 0x00000000004eecef in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:00.6443923Z nargsf=, args=0x7fcb45f30200, callable=0x7fcb45c66dd0, 2023-01-11T22:12:00.6444223Z tstate=0x1b2bbf0) 2023-01-11T22:12:00.6444601Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:00.6448122Z #68 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:00.6448555Z args=0x7fcb45f30200, callable=0x7fcb45c66dd0) 2023-01-11T22:12:00.6448954Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:00.6451441Z #69 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:00.6451799Z pp_stack=, trace_info=0x7fff9a96e400, 2023-01-11T22:12:00.6452180Z tstate=) 2023-01-11T22:12:00.6452611Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:00.6452913Z #70 _PyEval_EvalFrameDefault () 2023-01-11T22:12:00.6453337Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4213 2023-01-11T22:12:00.6453708Z #71 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:00.6454116Z throwflag=, 2023-01-11T22:12:00.6454574Z f=, 2023-01-11T22:12:00.6454984Z tstate=) 2023-01-11T22:12:00.6455424Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:00.6460358Z #72 _PyEval_Vector (kwnames=, 2023-01-11T22:12:00.6461167Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:00.6461670Z 
args@entry=, locals=0x0, 2023-01-11T22:12:00.6462128Z locals@entry=, con=0x7fcb45c66d50, 2023-01-11T22:12:00.6462705Z con@entry=, tstate=0x1b2bbf0, 2023-01-11T22:12:00.6463159Z tstate@entry=) 2023-01-11T22:12:00.6463652Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:00.6463907Z #73 _PyFunction_Vectorcall () 2023-01-11T22:12:00.6464264Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:00.6467087Z #74 0x00000000004efd83 in _PyObject_VectorcallTstate (kwnames=0x7fcb45f09e40, 2023-01-11T22:12:00.6467617Z nargsf=, args=, callable=0x7fcb45c66d40, 2023-01-11T22:12:00.6467871Z tstate=0x1b2bbf0) 2023-01-11T22:12:00.6468252Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:00.6470090Z #75 PyObject_Vectorcall (kwnames=0x7fcb45f09e40, nargsf=, 2023-01-11T22:12:00.6470403Z args=, callable=0x7fcb45c66d40) 2023-01-11T22:12:00.6470837Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:00.6473332Z #76 call_function (kwnames=0x7fcb45f09e40, oparg=, 2023-01-11T22:12:00.6473685Z pp_stack=, trace_info=0x7fff9a96e5c0, 2023-01-11T22:12:00.6473937Z tstate=) 2023-01-11T22:12:00.6474356Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:00.6474655Z #77 _PyEval_EvalFrameDefault () 2023-01-11T22:12:00.6474980Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4231 2023-01-11T22:12:00.6476334Z #78 0x0000000000594b72 in _PyEval_EvalFrame ( 2023-01-11T22:12:00.6476772Z throwflag=, 2023-01-11T22:12:00.6477525Z f=, 2023-01-11T22:12:00.6477904Z tstate=) 2023-01-11T22:12:00.6478417Z at /croot/python-split_1669298683653/_build_env/x86_64-conda-linux-gnu/sysroot/usr/include/bits/call.c:46 2023-01-11T22:12:00.6478744Z #79 _PyEval_Vector () 2023-01-11T22:12:00.6479133Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:00.6484826Z #80 0x0000000000594ab7 in PyEval_EvalCode (co=co@entry=0x7fcb45ecc920, 2023-01-11T22:12:00.6485442Z globals=globals@entry=0x7fcb45ebe600, locals=locals@entry=0x7fcb45ebe600) 2023-01-11T22:12:00.6485951Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:1134 2023-01-11T22:12:00.6486598Z #81 0x00000000005c6e57 in run_eval_code_obj () 2023-01-11T22:12:00.6487042Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:1291 2023-01-11T22:12:00.6489162Z #82 0x00000000005c1d40 in run_mod () 2023-01-11T22:12:00.6489544Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:1312 2023-01-11T22:12:00.6491600Z #83 0x00000000005b9ebb in PyRun_StringFlags.localalias () 2023-01-11T22:12:00.6492045Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:1183 2023-01-11T22:12:00.6494481Z #84 0x00000000005b9cfb in PyRun_SimpleStringFlags.localalias () 2023-01-11T22:12:00.6495148Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:503 2023-01-11T22:12:00.6508317Z #85 0x00000000005b8d5c in pymain_run_command ( 2023-01-11T22:12:00.6508971Z command=) 2023-01-11T22:12:00.6509599Z at /croot/python-split_1669298683653/work/build-static/python.c:252 2023-01-11T22:12:00.6511066Z #86 pymain_run_python (exitcode=0x7fff9a96e820) 2023-01-11T22:12:00.6511768Z at /croot/python-split_1669298683653/work/build-static/python.c:582 2023-01-11T22:12:00.6512209Z #87 Py_RunMain.localalias () 2023-01-11T22:12:00.6512633Z at /croot/python-split_1669298683653/work/build-static/python.c:670 2023-01-11T22:12:00.6527990Z #88 0x0000000000587c29 in Py_BytesMain (argc=, 2023-01-11T22:12:00.6528495Z 
argv=) 2023-01-11T22:12:00.6528965Z at /croot/python-split_1669298683653/work/build-static/python.c:1090 2023-01-11T22:12:00.6545174Z #89 0x00007fcb45f9dc87 in __libc_start_main (main=0x587be0
, argc=5, 2023-01-11T22:12:00.6545714Z argv=0x7fff9a96ea28, init=, fini=, 2023-01-11T22:12:00.6546150Z rtld_fini=, stack_end=0x7fff9a96ea18) 2023-01-11T22:12:00.6546444Z at ../csu/libc-start.c:310 2023-01-11T22:12:00.6546651Z #90 0x0000000000587ade in _start () 2023-01-11T22:12:00.6546968Z at /usr/local/src/conda/python-3.10.8/Modules/_io/clinic/peg_api.c:880 2023-01-11T22:12:00.8130164Z GNU gdb (Ubuntu 8.1.1-0ubuntu1) 8.1.1 2023-01-11T22:12:00.8130663Z Copyright (C) 2018 Free Software Foundation, Inc. 2023-01-11T22:12:00.8130981Z License GPLv3+: GNU GPL version 3 or later 2023-01-11T22:12:00.8131297Z This is free software: you are free to change and redistribute it. 2023-01-11T22:12:00.8131650Z There is NO WARRANTY, to the extent permitted by law. Type "show copying" 2023-01-11T22:12:00.8132110Z and "show warranty" for details. 2023-01-11T22:12:00.8132404Z This GDB was configured as "x86_64-linux-gnu". 2023-01-11T22:12:00.8132647Z Type "show configuration" for configuration details. 2023-01-11T22:12:00.8132902Z For bug reporting instructions, please see: 2023-01-11T22:12:00.8133145Z . 2023-01-11T22:12:00.8133396Z Find the GDB manual and other documentation resources online at: 2023-01-11T22:12:00.8133679Z . 2023-01-11T22:12:00.8133903Z For help, type "help". 2023-01-11T22:12:00.8134133Z Type "apropos word" to search for commands related to "word"... 2023-01-11T22:12:00.9269927Z Reading symbols from python...done. 2023-01-11T22:12:01.4753083Z 2023-01-11T22:12:01.4753427Z warning: core file may not match specified executable file. 2023-01-11T22:12:01.5613577Z BFD: warning: /opt/conda/lib/libgomp.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:01.5613993Z [New LWP 8156] 2023-01-11T22:12:01.5614447Z BFD: warning: /opt/conda/lib/libgomp.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:01.5614912Z BFD: warning: /opt/conda/bin/../lib/libstdc++.so.6: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:01.5615272Z BFD: warning: /opt/conda/bin/../lib/libstdc++.so.6: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:01.5615766Z [New LWP 8163] 2023-01-11T22:12:01.5615929Z [New LWP 8168] 2023-01-11T22:12:01.5616097Z [New LWP 8165] 2023-01-11T22:12:01.5616267Z [New LWP 8160] 2023-01-11T22:12:01.5616421Z [New LWP 8170] 2023-01-11T22:12:01.5616587Z [New LWP 8166] 2023-01-11T22:12:01.5616752Z [New LWP 8172] 2023-01-11T22:12:01.5617021Z BFD: warning: /opt/conda/bin/../lib/libgcc_s.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:01.5617356Z BFD: warning: /opt/conda/bin/../lib/libgcc_s.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:01.5630258Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libgfortran.so.5: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:01.5631064Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libgfortran.so.5: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:01.5631612Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libquadmath.so.0: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:01.5632147Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libquadmath.so.0: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:01.5936864Z [Thread debugging using libthread_db enabled] 2023-01-11T22:12:01.5937352Z Using host libthread_db library 
"/lib/x86_64-linux-gnu/libthread_db.so.1". 2023-01-11T22:12:08.5232872Z 51 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory. 2023-01-11T22:12:08.5233630Z warning: File "/var/lib/jenkins/workspace/.gdbinit" auto-loading has been declined by your `auto-load safe-path' set to "$debugdir:$datadir/auto-load". 2023-01-11T22:12:08.5234155Z Core was generated by `/opt/conda/bin/python -bb -c from multiprocessing.spawn import spawn_main; spaw'. 2023-01-11T22:12:08.5234484Z Program terminated with signal SIGSEGV, Segmentation fault. 2023-01-11T22:12:08.5234777Z #0 raise (sig=) at ../sysdeps/unix/sysv/linux/raise.c:51 2023-01-11T22:12:08.5235065Z [Current thread is 1 (Thread 0x7f8762811200 (LWP 8156))] 2023-01-11T22:12:08.5262798Z To enable execution of this file add 2023-01-11T22:12:08.5263375Z add-auto-load-safe-path /var/lib/jenkins/workspace/.gdbinit 2023-01-11T22:12:08.5263821Z line to your configuration file "/var/lib/jenkins/.gdbinit". 2023-01-11T22:12:08.5264086Z To completely disable this security protection add 2023-01-11T22:12:08.5264338Z set auto-load safe-path / 2023-01-11T22:12:08.5264581Z line to your configuration file "/var/lib/jenkins/.gdbinit". 2023-01-11T22:12:08.5265071Z For more information about this security protection see the 2023-01-11T22:12:08.5265423Z "Auto-loading safe path" section in the GDB manual. E.g., run from the shell: 2023-01-11T22:12:08.5265732Z info "(gdb)Auto-loading safe path" 2023-01-11T22:12:08.5308372Z #0 raise (sig=) at ../sysdeps/unix/sysv/linux/raise.c:51 2023-01-11T22:12:08.5311028Z #1 0x00007f872aea265b in handler_SIGSEGV(int, siginfo_t*, void*) () 2023-01-11T22:12:08.5311482Z from /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_python.so 2023-01-11T22:12:08.5321008Z #2 2023-01-11T22:12:08.5321412Z #3 __strlen_avx2 () at ../sysdeps/x86_64/multiarch/strlen-avx2.S:65 2023-01-11T22:12:08.5465425Z #4 0x00007f87612cf03b in string_at (ptr=0x0, size=-1) at :5564 2023-01-11T22:12:08.5468517Z #5 0x00007f87626c6052 in ffi_call_unix64 () 2023-01-11T22:12:08.5469209Z from /opt/conda/lib/python3.10/lib-dynload/../../libffi.so.8 2023-01-11T22:12:08.5477525Z #6 0x00007f87626c48cd in ffi_call_int () 2023-01-11T22:12:08.5478144Z from /opt/conda/lib/python3.10/lib-dynload/../../libffi.so.8 2023-01-11T22:12:08.5485954Z #7 0x00007f87612d7879 in _call_function_pointer (argtypecount=2, argcount=2, 2023-01-11T22:12:08.5486434Z resmem=0x7ffd22900310, restype=, atypes=, 2023-01-11T22:12:08.5486858Z avalues=, pProc=0x7f87612cf002 , flags=4357) 2023-01-11T22:12:08.5487665Z at /usr/local/src/conda/python-3.10.8/build-static/stgdict.c:916 2023-01-11T22:12:08.5488016Z #8 _ctypes_callproc () 2023-01-11T22:12:08.5488488Z at /usr/local/src/conda/python-3.10.8/build-static/stgdict.c:1259 2023-01-11T22:12:08.5488932Z #9 0x00007f87612d73fe in PyCFuncPtr_call () at :4201 2023-01-11T22:12:08.5489467Z #10 0x00000000004f7b8b in _PyObject_MakeTpCall.localalias () 2023-01-11T22:12:08.5489847Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:224 2023-01-11T22:12:08.5495022Z #11 0x00000000004f37ae in _PyObject_VectorcallTstate ( 2023-01-11T22:12:08.5495567Z kwnames=, 2023-01-11T22:12:08.5496126Z nargsf=, args=0x7f86bb868358, 2023-01-11T22:12:08.5496510Z callable=, 2023-01-11T22:12:08.5496883Z tstate=) 2023-01-11T22:12:08.5497278Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:112 2023-01-11T22:12:08.5499906Z #12 _PyObject_VectorcallTstate (kwnames=0x0, nargsf=, 2023-01-11T22:12:08.5500426Z args=0x7f86bb868358, callable=0x7f8761588880, tstate=) 
2023-01-11T22:12:08.5500929Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:99 2023-01-11T22:12:08.5504890Z #13 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:08.5505203Z args=0x7f86bb868358, callable=0x7f8761588880) 2023-01-11T22:12:08.5505546Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:08.5508904Z #14 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:08.5509328Z pp_stack=, trace_info=0x7ffd22900620, 2023-01-11T22:12:08.5509749Z tstate=) 2023-01-11T22:12:08.5510224Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:08.5510455Z #15 _PyEval_EvalFrameDefault () 2023-01-11T22:12:08.5510848Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4213 2023-01-11T22:12:08.5511525Z #16 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:08.5512044Z throwflag=, 2023-01-11T22:12:08.5512613Z f=, 2023-01-11T22:12:08.5513128Z tstate=) 2023-01-11T22:12:08.5513527Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:08.5517922Z #17 _PyEval_Vector (kwnames=, 2023-01-11T22:12:08.5518551Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:08.5519226Z args@entry=, locals=0x0, 2023-01-11T22:12:08.5519628Z locals@entry=, con=0x7f8761547890, 2023-01-11T22:12:08.5520070Z con@entry=, tstate=0x1596bf0, 2023-01-11T22:12:08.5520454Z tstate@entry=) 2023-01-11T22:12:08.5520863Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:08.5521103Z #18 _PyFunction_Vectorcall () 2023-01-11T22:12:08.5521393Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:08.5526110Z #19 0x00000000004f351e in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:08.5526451Z nargsf=, args=0x7f86bb8681c0, callable=0x7f8761547880, 2023-01-11T22:12:08.5526802Z tstate=0x1596bf0) 2023-01-11T22:12:08.5527109Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:08.5530543Z #20 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:08.5530855Z args=0x7f86bb8681c0, callable=0x7f8761547880) 2023-01-11T22:12:08.5531189Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:08.5533626Z #21 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:08.5533983Z pp_stack=, trace_info=0x7ffd229007e0, 2023-01-11T22:12:08.5534224Z tstate=) 2023-01-11T22:12:08.5534524Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:08.5534771Z #22 _PyEval_EvalFrameDefault () 2023-01-11T22:12:08.5535080Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4181 2023-01-11T22:12:08.5536163Z #23 0x0000000000543a33 in _PyEval_EvalFrame ( 2023-01-11T22:12:08.5536555Z throwflag=, 2023-01-11T22:12:08.5536922Z f=, 2023-01-11T22:12:08.5537279Z tstate=) 2023-01-11T22:12:08.5537650Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:08.5542167Z #24 _PyEval_Vector (kwnames=0x0, 2023-01-11T22:12:08.5542727Z argcount=, 2023-01-11T22:12:08.5543125Z args=, locals=0x0, con=0x7f86bb794680, tstate=0x1596bf0) 2023-01-11T22:12:08.5543548Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:08.5545388Z #25 _PyFunction_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:08.5545735Z stack=, func=0x7f86bb794670) 2023-01-11T22:12:08.5546074Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:08.5550918Z #26 _PyObject_VectorcallTstate (kwnames=0x0, nargsf=, 2023-01-11T22:12:08.5551430Z args=, callable=0x7f86bb794670, 
tstate=0x1596bf0) 2023-01-11T22:12:08.5552054Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:114 2023-01-11T22:12:08.5552484Z #27 vectorcall_unbound ( 2023-01-11T22:12:08.5553186Z nargs=, args=, 2023-01-11T22:12:08.5553775Z func=, 2023-01-11T22:12:08.5554135Z unbound=, 2023-01-11T22:12:08.5554496Z tstate=) 2023-01-11T22:12:08.5554892Z at /usr/local/src/conda/python-3.10.8/Programs/gcmodule.c:1629 2023-01-11T22:12:08.5555128Z #28 vectorcall_method () 2023-01-11T22:12:08.5555416Z at /usr/local/src/conda/python-3.10.8/Programs/gcmodule.c:1661 2023-01-11T22:12:08.5556062Z #29 0x0000000000543898 in slot_mp_subscript (self=, 2023-01-11T22:12:08.5556405Z arg1=) 2023-01-11T22:12:08.5556824Z at /usr/local/src/conda/python-3.10.8/Programs/gcmodule.c:7258 2023-01-11T22:12:08.5557242Z #30 0x00000000004ef56e in _PyEval_EvalFrameDefault () 2023-01-11T22:12:08.5557578Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:2109 2023-01-11T22:12:08.5559727Z #31 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:08.5560124Z throwflag=, 2023-01-11T22:12:08.5560488Z f=, 2023-01-11T22:12:08.5560945Z tstate=) 2023-01-11T22:12:08.5561341Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:08.5566091Z #32 _PyEval_Vector (kwnames=, 2023-01-11T22:12:08.5566767Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:08.5567587Z args@entry=, locals=0x0, 2023-01-11T22:12:08.5568319Z locals@entry=, con=0x7f86bbf04170, 2023-01-11T22:12:08.5568722Z con@entry=, tstate=0x1596bf0, 2023-01-11T22:12:08.5569108Z tstate@entry=) 2023-01-11T22:12:08.5569551Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:08.5569782Z #33 _PyFunction_Vectorcall () 2023-01-11T22:12:08.5570094Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:08.5573499Z #34 0x00000000004eecef in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:08.5574043Z nargsf=, args=0x7f86bb9faa48, callable=0x7f86bbf04160, 2023-01-11T22:12:08.5574370Z tstate=0x1596bf0) 2023-01-11T22:12:08.5574697Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:08.5577875Z #35 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:08.5578311Z args=0x7f86bb9faa48, callable=0x7f86bbf04160) 2023-01-11T22:12:08.5578854Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:08.5580999Z #36 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:08.5581489Z pp_stack=, trace_info=0x7ffd22900be0, 2023-01-11T22:12:08.5581899Z tstate=) 2023-01-11T22:12:08.5582565Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:08.5582862Z #37 _PyEval_EvalFrameDefault () 2023-01-11T22:12:08.5583283Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4213 2023-01-11T22:12:08.5583585Z #38 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:08.5583891Z throwflag=, 2023-01-11T22:12:08.5584387Z f=, 2023-01-11T22:12:08.5584733Z tstate=) 2023-01-11T22:12:08.5585119Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:08.5589254Z #39 _PyEval_Vector (kwnames=, 2023-01-11T22:12:08.5589794Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:08.5590571Z args@entry=, locals=0x0, 2023-01-11T22:12:08.5591304Z locals@entry=, con=0x7f86bdc51400, 2023-01-11T22:12:08.5592018Z con@entry=, tstate=0x1596bf0, 2023-01-11T22:12:08.5592391Z tstate@entry=) 2023-01-11T22:12:08.5592789Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:08.5593028Z #40 
_PyFunction_Vectorcall () 2023-01-11T22:12:08.5593334Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:08.5597369Z #41 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:08.5597870Z nargsf=, args=0x654a958, callable=0x7f86bdc513f0, 2023-01-11T22:12:08.5598279Z tstate=0x1596bf0) 2023-01-11T22:12:08.5598588Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:08.5601837Z #42 PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0x654a958, 2023-01-11T22:12:08.5602309Z callable=0x7f86bdc513f0) 2023-01-11T22:12:08.5602750Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:08.5604978Z #43 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:08.5605444Z pp_stack=, trace_info=0x7ffd22900da0, 2023-01-11T22:12:08.5605872Z tstate=) 2023-01-11T22:12:08.5606187Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:08.5606434Z #44 _PyEval_EvalFrameDefault () 2023-01-11T22:12:08.5606863Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:08.5607320Z #45 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:08.5607787Z throwflag=, 2023-01-11T22:12:08.5608152Z f=, 2023-01-11T22:12:08.5608512Z tstate=) 2023-01-11T22:12:08.5608890Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:08.5612933Z #46 _PyEval_Vector (kwnames=, 2023-01-11T22:12:08.5613398Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:08.5613950Z args@entry=, locals=0x0, 2023-01-11T22:12:08.5614339Z locals@entry=, con=0x7f86bdc50710, 2023-01-11T22:12:08.5614738Z con@entry=, tstate=0x1596bf0, 2023-01-11T22:12:08.5615127Z tstate@entry=) 2023-01-11T22:12:08.5615520Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:08.5615746Z #47 _PyFunction_Vectorcall () 2023-01-11T22:12:08.5616120Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:08.5621505Z #48 0x00000000004f141c in do_call_core (kwdict=0x7f8761692780, 2023-01-11T22:12:08.5621857Z callargs=0x7f86bb8ee520, func=0x7f86bdc50700, trace_info=0x7ffd22900f60, 2023-01-11T22:12:08.5622113Z tstate=) 2023-01-11T22:12:08.5622621Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5943 2023-01-11T22:12:08.5622886Z #49 _PyEval_EvalFrameDefault () 2023-01-11T22:12:08.5623241Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4277 2023-01-11T22:12:08.5623895Z #50 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:08.5624429Z throwflag=, 2023-01-11T22:12:08.5624948Z f=, 2023-01-11T22:12:08.5625312Z tstate=) 2023-01-11T22:12:08.5625699Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:08.5630849Z #51 _PyEval_Vector (kwnames=, 2023-01-11T22:12:08.5631492Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:08.5632217Z args@entry=, locals=0x0, 2023-01-11T22:12:08.5632617Z locals@entry=, con=0x7f87616bd5b0, 2023-01-11T22:12:08.5633019Z con@entry=, tstate=0x1596bf0, 2023-01-11T22:12:08.5633392Z tstate@entry=) 2023-01-11T22:12:08.5633827Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:08.5634075Z #52 _PyFunction_Vectorcall () 2023-01-11T22:12:08.5634382Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:08.5638758Z #53 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:08.5639275Z nargsf=, args=0x64bd348, callable=0x7f87616bd5a0, 2023-01-11T22:12:08.5639613Z tstate=0x1596bf0) 2023-01-11T22:12:08.5639981Z at 
/usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:08.5643457Z #54 PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0x64bd348, 2023-01-11T22:12:08.5643911Z callable=0x7f87616bd5a0) 2023-01-11T22:12:08.5644394Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:08.5646778Z #55 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:08.5647258Z pp_stack=, trace_info=0x7ffd22901120, 2023-01-11T22:12:08.5647679Z tstate=) 2023-01-11T22:12:08.5647987Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:08.5648231Z #56 _PyEval_EvalFrameDefault () 2023-01-11T22:12:08.5648723Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:08.5649009Z #57 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:08.5649377Z throwflag=, 2023-01-11T22:12:08.5649744Z f=, 2023-01-11T22:12:08.5650104Z tstate=) 2023-01-11T22:12:08.5650471Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:08.5655701Z #58 _PyEval_Vector (kwnames=, 2023-01-11T22:12:08.5656532Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:08.5656980Z args@entry=, locals=0x0, 2023-01-11T22:12:08.5657372Z locals@entry=, con=0x7f87616bdf40, 2023-01-11T22:12:08.5657773Z con@entry=, tstate=0x1596bf0, 2023-01-11T22:12:08.5658166Z tstate@entry=) 2023-01-11T22:12:08.5658589Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:08.5658819Z #59 _PyFunction_Vectorcall () 2023-01-11T22:12:08.5659123Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:08.5663698Z #60 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:08.5664009Z nargsf=, args=0x16135a0, callable=0x7f87616bdf30, 2023-01-11T22:12:08.5664246Z tstate=0x1596bf0) 2023-01-11T22:12:08.5664572Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:08.5668238Z #61 PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0x16135a0, 2023-01-11T22:12:08.5668503Z callable=0x7f87616bdf30) 2023-01-11T22:12:08.5668939Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:08.5671508Z #62 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:08.5671836Z pp_stack=, trace_info=0x7ffd229012e0, 2023-01-11T22:12:08.5672131Z tstate=) 2023-01-11T22:12:08.5672450Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:08.5672698Z #63 _PyEval_EvalFrameDefault () 2023-01-11T22:12:08.5673004Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:08.5673722Z #64 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:08.5674129Z throwflag=, 2023-01-11T22:12:08.5674479Z f=, 2023-01-11T22:12:08.5674838Z tstate=) 2023-01-11T22:12:08.5675230Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:08.5680257Z #65 _PyEval_Vector (kwnames=, 2023-01-11T22:12:08.5680749Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:08.5681179Z args@entry=, locals=0x0, 2023-01-11T22:12:08.5681580Z locals@entry=, con=0x7f876153ede0, 2023-01-11T22:12:08.5681975Z con@entry=, tstate=0x1596bf0, 2023-01-11T22:12:08.5682350Z tstate@entry=) 2023-01-11T22:12:08.5682744Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:08.5682992Z #66 _PyFunction_Vectorcall () 2023-01-11T22:12:08.5683295Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:08.5688735Z #67 0x00000000004eecef in _PyObject_VectorcallTstate (kwnames=0x0, 
2023-01-11T22:12:08.5689256Z nargsf=, args=0x7f8761808200, callable=0x7f876153edd0, 2023-01-11T22:12:08.5689687Z tstate=0x1596bf0) 2023-01-11T22:12:08.5690057Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:08.5693005Z #68 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:08.5693623Z args=0x7f8761808200, callable=0x7f876153edd0) 2023-01-11T22:12:08.5694036Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:08.5695920Z #69 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:08.5696368Z pp_stack=, trace_info=0x7ffd229014a0, 2023-01-11T22:12:08.5696781Z tstate=) 2023-01-11T22:12:08.5697150Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:08.5697464Z #70 _PyEval_EvalFrameDefault () 2023-01-11T22:12:08.5697832Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4213 2023-01-11T22:12:08.5698181Z #71 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:08.5698742Z throwflag=, 2023-01-11T22:12:08.5699201Z f=, 2023-01-11T22:12:08.5699551Z tstate=) 2023-01-11T22:12:08.5699934Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:08.5705119Z #72 _PyEval_Vector (kwnames=, 2023-01-11T22:12:08.5705766Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:08.5706610Z args@entry=, locals=0x0, 2023-01-11T22:12:08.5707011Z locals@entry=, con=0x7f876153ed50, 2023-01-11T22:12:08.5707410Z con@entry=, tstate=0x1596bf0, 2023-01-11T22:12:08.5707798Z tstate@entry=) 2023-01-11T22:12:08.5708202Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:08.5708443Z #73 _PyFunction_Vectorcall () 2023-01-11T22:12:08.5708747Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:08.5711652Z #74 0x00000000004efd83 in _PyObject_VectorcallTstate (kwnames=0x7f87617e1e40, 2023-01-11T22:12:08.5712207Z nargsf=, args=, callable=0x7f876153ed40, 2023-01-11T22:12:08.5712562Z tstate=0x1596bf0) 2023-01-11T22:12:08.5712882Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:08.5714999Z #75 PyObject_Vectorcall (kwnames=0x7f87617e1e40, nargsf=, 2023-01-11T22:12:08.5715315Z args=, callable=0x7f876153ed40) 2023-01-11T22:12:08.5715671Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:08.5718187Z #76 call_function (kwnames=0x7f87617e1e40, oparg=, 2023-01-11T22:12:08.5718663Z pp_stack=, trace_info=0x7ffd22901660, 2023-01-11T22:12:08.5719094Z tstate=) 2023-01-11T22:12:08.5719486Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:08.5719716Z #77 _PyEval_EvalFrameDefault () 2023-01-11T22:12:08.5720080Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4231 2023-01-11T22:12:08.5720859Z #78 0x0000000000594b72 in _PyEval_EvalFrame ( 2023-01-11T22:12:08.5721369Z throwflag=, 2023-01-11T22:12:08.5721954Z f=, 2023-01-11T22:12:08.5722316Z tstate=) 2023-01-11T22:12:08.5722777Z at /croot/python-split_1669298683653/_build_env/x86_64-conda-linux-gnu/sysroot/usr/include/bits/call.c:46 2023-01-11T22:12:08.5723032Z #79 _PyEval_Vector () 2023-01-11T22:12:08.5723447Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:08.5728838Z #80 0x0000000000594ab7 in PyEval_EvalCode (co=co@entry=0x7f87617a4920, 2023-01-11T22:12:08.5729236Z globals=globals@entry=0x7f8761796600, locals=locals@entry=0x7f8761796600) 2023-01-11T22:12:08.5729581Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:1134 2023-01-11T22:12:08.5730941Z #81 
0x00000000005c6e57 in run_eval_code_obj () 2023-01-11T22:12:08.5731357Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:1291 2023-01-11T22:12:08.5733337Z #82 0x00000000005c1d40 in run_mod () 2023-01-11T22:12:08.5733749Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:1312 2023-01-11T22:12:08.5735607Z #83 0x00000000005b9ebb in PyRun_StringFlags.localalias () 2023-01-11T22:12:08.5736017Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:1183 2023-01-11T22:12:08.5737987Z #84 0x00000000005b9cfb in PyRun_SimpleStringFlags.localalias () 2023-01-11T22:12:08.5738404Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:503 2023-01-11T22:12:08.5740440Z #85 0x00000000005b8d5c in pymain_run_command ( 2023-01-11T22:12:08.5740811Z command=) 2023-01-11T22:12:08.5741215Z at /croot/python-split_1669298683653/work/build-static/python.c:252 2023-01-11T22:12:08.5743108Z #86 pymain_run_python (exitcode=0x7ffd229018c0) 2023-01-11T22:12:08.5743520Z at /croot/python-split_1669298683653/work/build-static/python.c:582 2023-01-11T22:12:08.5743760Z #87 Py_RunMain.localalias () 2023-01-11T22:12:08.5744076Z at /croot/python-split_1669298683653/work/build-static/python.c:670 2023-01-11T22:12:08.5760046Z #88 0x0000000000587c29 in Py_BytesMain (argc=, 2023-01-11T22:12:08.5760333Z argv=) 2023-01-11T22:12:08.5760658Z at /croot/python-split_1669298683653/work/build-static/python.c:1090 2023-01-11T22:12:08.5765666Z #89 0x00007f8761875c87 in __libc_start_main (main=0x587be0
, argc=5, 2023-01-11T22:12:08.5765974Z argv=0x7ffd22901ac8, init=, fini=, 2023-01-11T22:12:08.5766238Z rtld_fini=, stack_end=0x7ffd22901ab8) 2023-01-11T22:12:08.5766513Z at ../csu/libc-start.c:310 2023-01-11T22:12:08.5767250Z #90 0x0000000000587ade in _start () 2023-01-11T22:12:08.5767653Z at /usr/local/src/conda/python-3.10.8/Modules/_io/clinic/peg_api.c:880 2023-01-11T22:12:08.7367271Z GNU gdb (Ubuntu 8.1.1-0ubuntu1) 8.1.1 2023-01-11T22:12:08.7367743Z Copyright (C) 2018 Free Software Foundation, Inc. 2023-01-11T22:12:08.7368216Z License GPLv3+: GNU GPL version 3 or later 2023-01-11T22:12:08.7368735Z This is free software: you are free to change and redistribute it. 2023-01-11T22:12:08.7369212Z There is NO WARRANTY, to the extent permitted by law. Type "show copying" 2023-01-11T22:12:08.7369601Z and "show warranty" for details. 2023-01-11T22:12:08.7370108Z This GDB was configured as "x86_64-linux-gnu". 2023-01-11T22:12:08.7370517Z Type "show configuration" for configuration details. 2023-01-11T22:12:08.7370921Z For bug reporting instructions, please see: 2023-01-11T22:12:08.7371304Z . 2023-01-11T22:12:08.7371736Z Find the GDB manual and other documentation resources online at: 2023-01-11T22:12:08.7372201Z . 2023-01-11T22:12:08.7372552Z For help, type "help". 2023-01-11T22:12:08.7372940Z Type "apropos word" to search for commands related to "word"... 2023-01-11T22:12:08.8510807Z Reading symbols from python...done. 2023-01-11T22:12:09.3944944Z 2023-01-11T22:12:09.3945560Z warning: core file may not match specified executable file. 2023-01-11T22:12:09.4746154Z BFD: warning: /opt/conda/lib/libgomp.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:09.4747020Z BFD: warning: /opt/conda/lib/libgomp.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:09.4747477Z [New LWP 10634] 2023-01-11T22:12:09.4747789Z [New LWP 10644] 2023-01-11T22:12:09.4748310Z BFD: warning: /opt/conda/bin/../lib/libstdc++.so.6: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:09.4748965Z BFD: warning: /opt/conda/bin/../lib/libstdc++.so.6: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:09.4749393Z BFD: warning: /opt/conda/bin/../lib/libgcc_s.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:09.4749743Z BFD: warning: /opt/conda/bin/../lib/libgcc_s.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:09.4749994Z [New LWP 10645] 2023-01-11T22:12:09.4750148Z [New LWP 10651] 2023-01-11T22:12:09.4750313Z [New LWP 10648] 2023-01-11T22:12:09.4750481Z [New LWP 10656] 2023-01-11T22:12:09.4750633Z [New LWP 10653] 2023-01-11T22:12:09.4750795Z [New LWP 10655] 2023-01-11T22:12:09.4763339Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libgfortran.so.5: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:09.4764408Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libgfortran.so.5: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:09.4765462Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libquadmath.so.0: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:09.4766788Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libquadmath.so.0: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:09.5047808Z [Thread debugging using libthread_db enabled] 2023-01-11T22:12:09.5048781Z Using host libthread_db library 
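The auto-load warning above repeats for every core file GDB opens here: the workspace carries a /var/lib/jenkins/workspace/.gdbinit that GDB declines to source because it lies outside the auto-load safe path. A minimal sketch of the change GDB's own message describes, assuming the runner's /var/lib/jenkins/.gdbinit is writable, is to add either of these lines to that file:

    # trust only the workspace .gdbinit (the option GDB recommends)
    add-auto-load-safe-path /var/lib/jenkins/workspace/.gdbinit
    # or, less safely, disable the protection entirely
    set auto-load safe-path /

This only controls whether GDB sources the project's .gdbinit helpers on later runs; it has no bearing on why the process crashed.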
"/lib/x86_64-linux-gnu/libthread_db.so.1". 2023-01-11T22:12:16.4853408Z 51 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory. 2023-01-11T22:12:16.4854070Z warning: File "/var/lib/jenkins/workspace/.gdbinit" auto-loading has been declined by your `auto-load safe-path' set to "$debugdir:$datadir/auto-load". 2023-01-11T22:12:16.4854599Z Core was generated by `/opt/conda/bin/python -bb -c from multiprocessing.spawn import spawn_main; spaw'. 2023-01-11T22:12:16.4854931Z Program terminated with signal SIGSEGV, Segmentation fault. 2023-01-11T22:12:16.4855226Z #0 raise (sig=) at ../sysdeps/unix/sysv/linux/raise.c:51 2023-01-11T22:12:16.4855516Z [Current thread is 1 (Thread 0x7efc8d5be200 (LWP 10634))] 2023-01-11T22:12:16.4882976Z To enable execution of this file add 2023-01-11T22:12:16.4883487Z add-auto-load-safe-path /var/lib/jenkins/workspace/.gdbinit 2023-01-11T22:12:16.4883829Z line to your configuration file "/var/lib/jenkins/.gdbinit". 2023-01-11T22:12:16.4884101Z To completely disable this security protection add 2023-01-11T22:12:16.4884360Z set auto-load safe-path / 2023-01-11T22:12:16.4884603Z line to your configuration file "/var/lib/jenkins/.gdbinit". 2023-01-11T22:12:16.4884874Z For more information about this security protection see the 2023-01-11T22:12:16.4885239Z "Auto-loading safe path" section in the GDB manual. E.g., run from the shell: 2023-01-11T22:12:16.4885528Z info "(gdb)Auto-loading safe path" 2023-01-11T22:12:16.4928569Z #0 raise (sig=) at ../sysdeps/unix/sysv/linux/raise.c:51 2023-01-11T22:12:16.4930907Z #1 0x00007efc55c4f65b in handler_SIGSEGV(int, siginfo_t*, void*) () 2023-01-11T22:12:16.4931278Z from /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_python.so 2023-01-11T22:12:16.4940502Z #2 2023-01-11T22:12:16.4940834Z #3 __strlen_avx2 () at ../sysdeps/x86_64/multiarch/strlen-avx2.S:65 2023-01-11T22:12:16.5084901Z #4 0x00007efc8c07c03b in string_at (ptr=0x0, size=-1) at :5564 2023-01-11T22:12:16.5086661Z #5 0x00007efc8d473052 in ffi_call_unix64 () 2023-01-11T22:12:16.5087066Z from /opt/conda/lib/python3.10/lib-dynload/../../libffi.so.8 2023-01-11T22:12:16.5094966Z #6 0x00007efc8d4718cd in ffi_call_int () 2023-01-11T22:12:16.5095567Z from /opt/conda/lib/python3.10/lib-dynload/../../libffi.so.8 2023-01-11T22:12:16.5102850Z #7 0x00007efc8c084879 in _call_function_pointer (argtypecount=2, argcount=2, 2023-01-11T22:12:16.5103430Z resmem=0x7ffe60c85840, restype=, atypes=, 2023-01-11T22:12:16.5103910Z avalues=, pProc=0x7efc8c07c002 , flags=4357) 2023-01-11T22:12:16.5104307Z at /usr/local/src/conda/python-3.10.8/build-static/stgdict.c:916 2023-01-11T22:12:16.5104558Z #8 _ctypes_callproc () 2023-01-11T22:12:16.5104869Z at /usr/local/src/conda/python-3.10.8/build-static/stgdict.c:1259 2023-01-11T22:12:16.5105296Z #9 0x00007efc8c0843fe in PyCFuncPtr_call () at :4201 2023-01-11T22:12:16.5105686Z #10 0x00000000004f7b8b in _PyObject_MakeTpCall.localalias () 2023-01-11T22:12:16.5106078Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:224 2023-01-11T22:12:16.5110765Z #11 0x00000000004f37ae in _PyObject_VectorcallTstate ( 2023-01-11T22:12:16.5111314Z kwnames=, 2023-01-11T22:12:16.5111887Z nargsf=, args=0x7efbe647c358, 2023-01-11T22:12:16.5112271Z callable=, 2023-01-11T22:12:16.5112634Z tstate=) 2023-01-11T22:12:16.5113184Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:112 2023-01-11T22:12:16.5115513Z #12 _PyObject_VectorcallTstate (kwnames=0x0, nargsf=, 2023-01-11T22:12:16.5115910Z args=0x7efbe647c358, callable=0x7efc8c338880, tstate=) 
2023-01-11T22:12:16.5116309Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:99 2023-01-11T22:12:16.5119759Z #13 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:16.5120167Z args=0x7efbe647c358, callable=0x7efc8c338880) 2023-01-11T22:12:16.5120529Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:16.5123676Z #14 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:16.5124052Z pp_stack=, trace_info=0x7ffe60c85b50, 2023-01-11T22:12:16.5124292Z tstate=) 2023-01-11T22:12:16.5124607Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:16.5124845Z #15 _PyEval_EvalFrameDefault () 2023-01-11T22:12:16.5125155Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4213 2023-01-11T22:12:16.5125911Z #16 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:16.5126292Z throwflag=, 2023-01-11T22:12:16.5126667Z f=, 2023-01-11T22:12:16.5127032Z tstate=) 2023-01-11T22:12:16.5127417Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:16.5132189Z #17 _PyEval_Vector (kwnames=, 2023-01-11T22:12:16.5132777Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:16.5133457Z args@entry=, locals=0x0, 2023-01-11T22:12:16.5133861Z locals@entry=, con=0x7efc8c2f7890, 2023-01-11T22:12:16.5134251Z con@entry=, tstate=0xf7cbf0, 2023-01-11T22:12:16.5134639Z tstate@entry=) 2023-01-11T22:12:16.5135158Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:16.5135388Z #18 _PyFunction_Vectorcall () 2023-01-11T22:12:16.5135692Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:16.5140185Z #19 0x00000000004f351e in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:16.5140704Z nargsf=, args=0x7efbe647c1c0, callable=0x7efc8c2f7880, 2023-01-11T22:12:16.5141059Z tstate=0xf7cbf0) 2023-01-11T22:12:16.5141373Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:16.5145921Z #20 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:16.5146209Z args=0x7efbe647c1c0, callable=0x7efc8c2f7880) 2023-01-11T22:12:16.5146557Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:16.5148976Z #21 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:16.5149348Z pp_stack=, trace_info=0x7ffe60c85d10, 2023-01-11T22:12:16.5149639Z tstate=) 2023-01-11T22:12:16.5149950Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:16.5150192Z #22 _PyEval_EvalFrameDefault () 2023-01-11T22:12:16.5150516Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4181 2023-01-11T22:12:16.5151308Z #23 0x0000000000543a33 in _PyEval_EvalFrame ( 2023-01-11T22:12:16.5151755Z throwflag=, 2023-01-11T22:12:16.5152483Z f=, 2023-01-11T22:12:16.5152832Z tstate=) 2023-01-11T22:12:16.5153223Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:16.5157413Z #24 _PyEval_Vector (kwnames=0x0, 2023-01-11T22:12:16.5157775Z argcount=, 2023-01-11T22:12:16.5158174Z args=, locals=0x0, con=0x7efbe65a0680, tstate=0xf7cbf0) 2023-01-11T22:12:16.5158601Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:16.5160436Z #25 _PyFunction_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:16.5160713Z stack=, func=0x7efbe65a0670) 2023-01-11T22:12:16.5161048Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:16.5165888Z #26 _PyObject_VectorcallTstate (kwnames=0x0, nargsf=, 2023-01-11T22:12:16.5166529Z args=, callable=0x7efbe65a0670, 
tstate=0xf7cbf0) 2023-01-11T22:12:16.5167062Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:114 2023-01-11T22:12:16.5167431Z #27 vectorcall_unbound ( 2023-01-11T22:12:16.5167976Z nargs=, args=, 2023-01-11T22:12:16.5168570Z func=, 2023-01-11T22:12:16.5168920Z unbound=, 2023-01-11T22:12:16.5169283Z tstate=) 2023-01-11T22:12:16.5169723Z at /usr/local/src/conda/python-3.10.8/Programs/gcmodule.c:1629 2023-01-11T22:12:16.5170061Z #28 vectorcall_method () 2023-01-11T22:12:16.5170369Z at /usr/local/src/conda/python-3.10.8/Programs/gcmodule.c:1661 2023-01-11T22:12:16.5171081Z #29 0x0000000000543898 in slot_mp_subscript (self=, 2023-01-11T22:12:16.5171355Z arg1=) 2023-01-11T22:12:16.5171795Z at /usr/local/src/conda/python-3.10.8/Programs/gcmodule.c:7258 2023-01-11T22:12:16.5172074Z #30 0x00000000004ef56e in _PyEval_EvalFrameDefault () 2023-01-11T22:12:16.5172505Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:2109 2023-01-11T22:12:16.5174493Z #31 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:16.5175021Z throwflag=, 2023-01-11T22:12:16.5175562Z f=, 2023-01-11T22:12:16.5175912Z tstate=) 2023-01-11T22:12:16.5176302Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:16.5181112Z #32 _PyEval_Vector (kwnames=, 2023-01-11T22:12:16.5181705Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:16.5182656Z args@entry=, locals=0x0, 2023-01-11T22:12:16.5183286Z locals@entry=, con=0x7efbe6cf8170, 2023-01-11T22:12:16.5183694Z con@entry=, tstate=0xf7cbf0, 2023-01-11T22:12:16.5184069Z tstate@entry=) 2023-01-11T22:12:16.5184598Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:16.5184841Z #33 _PyFunction_Vectorcall () 2023-01-11T22:12:16.5185141Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:16.5189520Z #34 0x00000000004eecef in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:16.5189891Z nargsf=, args=0x7efbe6802a48, callable=0x7efbe6cf8160, 2023-01-11T22:12:16.5190125Z tstate=0xf7cbf0) 2023-01-11T22:12:16.5190430Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:16.5194578Z #35 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:16.5195077Z args=0x7efbe6802a48, callable=0x7efbe6cf8160) 2023-01-11T22:12:16.5195554Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:16.5197973Z #36 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:16.5198319Z pp_stack=, trace_info=0x7ffe60c86110, 2023-01-11T22:12:16.5198564Z tstate=) 2023-01-11T22:12:16.5198893Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:16.5199143Z #37 _PyEval_EvalFrameDefault () 2023-01-11T22:12:16.5199503Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4213 2023-01-11T22:12:16.5200043Z #38 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:16.5200543Z throwflag=, 2023-01-11T22:12:16.5201109Z f=, 2023-01-11T22:12:16.5201469Z tstate=) 2023-01-11T22:12:16.5201844Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:16.5206574Z #39 _PyEval_Vector (kwnames=, 2023-01-11T22:12:16.5207043Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:16.5207487Z args@entry=, locals=0x0, 2023-01-11T22:12:16.5207875Z locals@entry=, con=0x7efbe8a05400, 2023-01-11T22:12:16.5208276Z con@entry=, tstate=0xf7cbf0, 2023-01-11T22:12:16.5208770Z tstate@entry=) 2023-01-11T22:12:16.5209175Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:16.5209401Z #40 
_PyFunction_Vectorcall () 2023-01-11T22:12:16.5209704Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:16.5214390Z #41 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:16.5214723Z nargsf=, args=0x5f30068, callable=0x7efbe8a053f0, 2023-01-11T22:12:16.5214943Z tstate=0xf7cbf0) 2023-01-11T22:12:16.5215259Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:16.5218820Z #42 PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0x5f30068, 2023-01-11T22:12:16.5219125Z callable=0x7efbe8a053f0) 2023-01-11T22:12:16.5219455Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:16.5222145Z #43 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:16.5222757Z pp_stack=, trace_info=0x7ffe60c862d0, 2023-01-11T22:12:16.5223163Z tstate=) 2023-01-11T22:12:16.5223697Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:16.5224105Z #44 _PyEval_EvalFrameDefault () 2023-01-11T22:12:16.5224583Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:16.5225186Z #45 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:16.5225744Z throwflag=, 2023-01-11T22:12:16.5226408Z f=, 2023-01-11T22:12:16.5226817Z tstate=) 2023-01-11T22:12:16.5227202Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:16.5230689Z #46 _PyEval_Vector (kwnames=, 2023-01-11T22:12:16.5231299Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:16.5232100Z args@entry=, locals=0x0, 2023-01-11T22:12:16.5232838Z locals@entry=, con=0x7efbe8a04710, 2023-01-11T22:12:16.5233315Z con@entry=, tstate=0xf7cbf0, 2023-01-11T22:12:16.5233688Z tstate@entry=) 2023-01-11T22:12:16.5234095Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:16.5234333Z #47 _PyFunction_Vectorcall () 2023-01-11T22:12:16.5234635Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:16.5238550Z #48 0x00000000004f141c in do_call_core (kwdict=0x7efc8c442800, 2023-01-11T22:12:16.5239053Z callargs=0x7efbe66fa520, func=0x7efbe8a04700, trace_info=0x7ffe60c86490, 2023-01-11T22:12:16.5239468Z tstate=) 2023-01-11T22:12:16.5239892Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5943 2023-01-11T22:12:16.5240145Z #49 _PyEval_EvalFrameDefault () 2023-01-11T22:12:16.5240451Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4277 2023-01-11T22:12:16.5240779Z #50 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:16.5241324Z throwflag=, 2023-01-11T22:12:16.5241798Z f=, 2023-01-11T22:12:16.5242154Z tstate=) 2023-01-11T22:12:16.5242615Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:16.5246769Z #51 _PyEval_Vector (kwnames=, 2023-01-11T22:12:16.5247403Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:16.5248197Z args@entry=, locals=0x0, 2023-01-11T22:12:16.5248590Z locals@entry=, con=0x7efc8c46d5b0, 2023-01-11T22:12:16.5248990Z con@entry=, tstate=0xf7cbf0, 2023-01-11T22:12:16.5249374Z tstate@entry=) 2023-01-11T22:12:16.5249766Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:16.5249994Z #52 _PyFunction_Vectorcall () 2023-01-11T22:12:16.5250291Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:16.5255032Z #53 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:16.5255673Z nargsf=, args=0x5ea2ad8, callable=0x7efc8c46d5a0, 2023-01-11T22:12:16.5255914Z tstate=0xf7cbf0) 2023-01-11T22:12:16.5256345Z at 
/usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:16.5259174Z #54 PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0x5ea2ad8, 2023-01-11T22:12:16.5259614Z callable=0x7efc8c46d5a0) 2023-01-11T22:12:16.5260105Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:16.5262627Z #55 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:16.5263136Z pp_stack=, trace_info=0x7ffe60c86650, 2023-01-11T22:12:16.5263454Z tstate=) 2023-01-11T22:12:16.5263977Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:16.5264229Z #56 _PyEval_EvalFrameDefault () 2023-01-11T22:12:16.5264702Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:16.5265178Z #57 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:16.5265522Z throwflag=, 2023-01-11T22:12:16.5265876Z f=, 2023-01-11T22:12:16.5266233Z tstate=) 2023-01-11T22:12:16.5266615Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:16.5270830Z #58 _PyEval_Vector (kwnames=, 2023-01-11T22:12:16.5271498Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:16.5271967Z args@entry=, locals=0x0, 2023-01-11T22:12:16.5272367Z locals@entry=, con=0x7efc8c46df40, 2023-01-11T22:12:16.5272768Z con@entry=, tstate=0xf7cbf0, 2023-01-11T22:12:16.5273144Z tstate@entry=) 2023-01-11T22:12:16.5273543Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:16.5273781Z #59 _PyFunction_Vectorcall () 2023-01-11T22:12:16.5274072Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:16.5278520Z #60 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:16.5279131Z nargsf=, args=0xff95a0, callable=0x7efc8c46df30, 2023-01-11T22:12:16.5279532Z tstate=0xf7cbf0) 2023-01-11T22:12:16.5279908Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:16.5283310Z #61 PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0xff95a0, 2023-01-11T22:12:16.5283783Z callable=0x7efc8c46df30) 2023-01-11T22:12:16.5284202Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:16.5286334Z #62 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:16.5286800Z pp_stack=, trace_info=0x7ffe60c86810, 2023-01-11T22:12:16.5287228Z tstate=) 2023-01-11T22:12:16.5287552Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:16.5287900Z #63 _PyEval_EvalFrameDefault () 2023-01-11T22:12:16.5288291Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:16.5288744Z #64 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:16.5289174Z throwflag=, 2023-01-11T22:12:16.5289535Z f=, 2023-01-11T22:12:16.5289890Z tstate=) 2023-01-11T22:12:16.5290257Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:16.5294324Z #65 _PyEval_Vector (kwnames=, 2023-01-11T22:12:16.5294942Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:16.5295613Z args@entry=, locals=0x0, 2023-01-11T22:12:16.5296000Z locals@entry=, con=0x7efc8c2eede0, 2023-01-11T22:12:16.5296402Z con@entry=, tstate=0xf7cbf0, 2023-01-11T22:12:16.5296789Z tstate@entry=) 2023-01-11T22:12:16.5297183Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:16.5297408Z #66 _PyFunction_Vectorcall () 2023-01-11T22:12:16.5297713Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:16.5302112Z #67 0x00000000004eecef in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:16.5302749Z 
nargsf=, args=0x7efc8c5b4200, callable=0x7efc8c2eedd0, 2023-01-11T22:12:16.5303167Z tstate=0xf7cbf0) 2023-01-11T22:12:16.5303501Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:16.5306676Z #68 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:16.5307142Z args=0x7efc8c5b4200, callable=0x7efc8c2eedd0) 2023-01-11T22:12:16.5307603Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:16.5309829Z #69 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:16.5310279Z pp_stack=, trace_info=0x7ffe60c869d0, 2023-01-11T22:12:16.5310714Z tstate=) 2023-01-11T22:12:16.5311061Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:16.5311402Z #70 _PyEval_EvalFrameDefault () 2023-01-11T22:12:16.5311746Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4213 2023-01-11T22:12:16.5312149Z #71 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:16.5312684Z throwflag=, 2023-01-11T22:12:16.5313031Z f=, 2023-01-11T22:12:16.5313471Z tstate=) 2023-01-11T22:12:16.5313855Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:16.5318222Z #72 _PyEval_Vector (kwnames=, 2023-01-11T22:12:16.5318717Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:16.5319151Z args@entry=, locals=0x0, 2023-01-11T22:12:16.5319551Z locals@entry=, con=0x7efc8c2eed50, 2023-01-11T22:12:16.5320003Z con@entry=, tstate=0xf7cbf0, 2023-01-11T22:12:16.5320377Z tstate@entry=) 2023-01-11T22:12:16.5320781Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:16.5321018Z #73 _PyFunction_Vectorcall () 2023-01-11T22:12:16.5321305Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:16.5325002Z #74 0x00000000004efd83 in _PyObject_VectorcallTstate (kwnames=0x7efc8c58de40, 2023-01-11T22:12:16.5325406Z nargsf=, args=, callable=0x7efc8c2eed40, 2023-01-11T22:12:16.5325738Z tstate=0xf7cbf0) 2023-01-11T22:12:16.5326043Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:16.5328391Z #75 PyObject_Vectorcall (kwnames=0x7efc8c58de40, nargsf=, 2023-01-11T22:12:16.5328764Z args=, callable=0x7efc8c2eed40) 2023-01-11T22:12:16.5329132Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:16.5331840Z #76 call_function (kwnames=0x7efc8c58de40, oparg=, 2023-01-11T22:12:16.5332436Z pp_stack=, trace_info=0x7ffe60c86b90, 2023-01-11T22:12:16.5332817Z tstate=) 2023-01-11T22:12:16.5333122Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:16.5333369Z #77 _PyEval_EvalFrameDefault () 2023-01-11T22:12:16.5333672Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4231 2023-01-11T22:12:16.5335210Z #78 0x0000000000594b72 in _PyEval_EvalFrame ( 2023-01-11T22:12:16.5335728Z throwflag=, 2023-01-11T22:12:16.5336342Z f=, 2023-01-11T22:12:16.5336696Z tstate=) 2023-01-11T22:12:16.5337147Z at /croot/python-split_1669298683653/_build_env/x86_64-conda-linux-gnu/sysroot/usr/include/bits/call.c:46 2023-01-11T22:12:16.5337414Z #79 _PyEval_Vector () 2023-01-11T22:12:16.5337708Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:16.5343597Z #80 0x0000000000594ab7 in PyEval_EvalCode (co=co@entry=0x7efc8c550920, 2023-01-11T22:12:16.5344119Z globals=globals@entry=0x7efc8c542600, locals=locals@entry=0x7efc8c542600) 2023-01-11T22:12:16.5344564Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:1134 2023-01-11T22:12:16.5345712Z #81 0x00000000005c6e57 in run_eval_code_obj 
() 2023-01-11T22:12:16.5346309Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:1291 2023-01-11T22:12:16.5348227Z #82 0x00000000005c1d40 in run_mod () 2023-01-11T22:12:16.5348932Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:1312 2023-01-11T22:12:16.5350617Z #83 0x00000000005b9ebb in PyRun_StringFlags.localalias () 2023-01-11T22:12:16.5351255Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:1183 2023-01-11T22:12:16.5353084Z #84 0x00000000005b9cfb in PyRun_SimpleStringFlags.localalias () 2023-01-11T22:12:16.5353834Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:503 2023-01-11T22:12:16.5355696Z #85 0x00000000005b8d5c in pymain_run_command ( 2023-01-11T22:12:16.5356245Z command=) 2023-01-11T22:12:16.5356770Z at /croot/python-split_1669298683653/work/build-static/python.c:252 2023-01-11T22:12:16.5358305Z #86 pymain_run_python (exitcode=0x7ffe60c86df0) 2023-01-11T22:12:16.5358915Z at /croot/python-split_1669298683653/work/build-static/python.c:582 2023-01-11T22:12:16.5359301Z #87 Py_RunMain.localalias () 2023-01-11T22:12:16.5359616Z at /croot/python-split_1669298683653/work/build-static/python.c:670 2023-01-11T22:12:16.5375014Z #88 0x0000000000587c29 in Py_BytesMain (argc=, 2023-01-11T22:12:16.5375411Z argv=) 2023-01-11T22:12:16.5375887Z at /croot/python-split_1669298683653/work/build-static/python.c:1090 2023-01-11T22:12:16.5380416Z #89 0x00007efc8c622c87 in __libc_start_main (main=0x587be0
, argc=5, 2023-01-11T22:12:16.5380907Z argv=0x7ffe60c86ff8, init=, fini=, 2023-01-11T22:12:16.5381315Z rtld_fini=, stack_end=0x7ffe60c86fe8) 2023-01-11T22:12:16.5381633Z at ../csu/libc-start.c:310 2023-01-11T22:12:16.5382192Z #90 0x0000000000587ade in _start () 2023-01-11T22:12:16.5382695Z at /usr/local/src/conda/python-3.10.8/Modules/_io/clinic/peg_api.c:880 2023-01-11T22:12:16.6968700Z GNU gdb (Ubuntu 8.1.1-0ubuntu1) 8.1.1 2023-01-11T22:12:16.6969109Z Copyright (C) 2018 Free Software Foundation, Inc. 2023-01-11T22:12:16.6969715Z License GPLv3+: GNU GPL version 3 or later 2023-01-11T22:12:16.6970393Z This is free software: you are free to change and redistribute it. 2023-01-11T22:12:16.6970762Z There is NO WARRANTY, to the extent permitted by law. Type "show copying" 2023-01-11T22:12:16.6971020Z and "show warranty" for details. 2023-01-11T22:12:16.6971313Z This GDB was configured as "x86_64-linux-gnu". 2023-01-11T22:12:16.6971566Z Type "show configuration" for configuration details. 2023-01-11T22:12:16.6971821Z For bug reporting instructions, please see: 2023-01-11T22:12:16.6972062Z . 2023-01-11T22:12:16.6972327Z Find the GDB manual and other documentation resources online at: 2023-01-11T22:12:16.6972587Z . 2023-01-11T22:12:16.6972813Z For help, type "help". 2023-01-11T22:12:16.6973055Z Type "apropos word" to search for commands related to "word"... 2023-01-11T22:12:16.8107903Z Reading symbols from python...done. 2023-01-11T22:12:17.3735448Z 2023-01-11T22:12:17.3735754Z warning: core file may not match specified executable file. 2023-01-11T22:12:17.4507699Z BFD: warning: /opt/conda/lib/libgomp.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:17.4508181Z BFD: warning: /opt/conda/lib/libgomp.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:17.4508539Z BFD: warning: /opt/conda/bin/../lib/libstdc++.so.6: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:17.4508898Z BFD: warning: /opt/conda/bin/../lib/libstdc++.so.6: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:17.4509160Z [New LWP 10635] 2023-01-11T22:12:17.4509334Z [New LWP 10643] 2023-01-11T22:12:17.4509488Z [New LWP 10646] 2023-01-11T22:12:17.4509653Z [New LWP 10649] 2023-01-11T22:12:17.4509827Z [New LWP 10647] 2023-01-11T22:12:17.4509979Z [New LWP 10654] 2023-01-11T22:12:17.4510145Z [New LWP 10650] 2023-01-11T22:12:17.4510311Z [New LWP 10652] 2023-01-11T22:12:17.4510968Z BFD: warning: /opt/conda/bin/../lib/libgcc_s.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:17.4511355Z BFD: warning: /opt/conda/bin/../lib/libgcc_s.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:17.4522805Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libgfortran.so.5: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:17.4524011Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libgfortran.so.5: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:17.4524601Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libquadmath.so.0: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:17.4525151Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libquadmath.so.0: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:17.4814851Z [Thread debugging using libthread_db enabled] 2023-01-11T22:12:17.4815493Z Using host libthread_db library 
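Each of these core dumps bottoms out the same way: __strlen_avx2 is reached from ctypes' string_at (ptr=0x0, size=-1) through libffi and PyCFuncPtr_call (frames #3 through #9), i.e. a ctypes call handed a NULL pointer to string_at, whose size=-1 path runs strlen() on it, and the handler_SIGSEGV in libtorch_python.so is what ends up calling raise in frames #1 and #0. A minimal, hypothetical Python sketch of this failure mode (not the actual call site in the failing test, which the log does not show) is:

    # hypothetical reproduction of the crash signature above, not the test's code:
    # string_at() with a NULL pointer and the default size=-1 makes the C helper
    # call strlen(NULL), matching string_at (ptr=0x0, size=-1) -> __strlen_avx2.
    import ctypes
    ctypes.string_at(0)   # SIGSEGV: strlen on a NULL char pointer

Running such a snippet kills the interpreter with SIGSEGV, consistent with the multiprocessing.spawn worker named in the "Core was generated by" lines above.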
"/lib/x86_64-linux-gnu/libthread_db.so.1". 2023-01-11T22:12:24.5098250Z 51 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory. 2023-01-11T22:12:24.5099271Z warning: File "/var/lib/jenkins/workspace/.gdbinit" auto-loading has been declined by your `auto-load safe-path' set to "$debugdir:$datadir/auto-load". 2023-01-11T22:12:24.5100200Z Core was generated by `/opt/conda/bin/python -bb -c from multiprocessing.spawn import spawn_main; spaw'. 2023-01-11T22:12:24.5100813Z Program terminated with signal SIGSEGV, Segmentation fault. 2023-01-11T22:12:24.5101312Z #0 raise (sig=) at ../sysdeps/unix/sysv/linux/raise.c:51 2023-01-11T22:12:24.5101812Z [Current thread is 1 (Thread 0x7fddb82d6200 (LWP 10635))] 2023-01-11T22:12:24.5131534Z To enable execution of this file add 2023-01-11T22:12:24.5132598Z add-auto-load-safe-path /var/lib/jenkins/workspace/.gdbinit 2023-01-11T22:12:24.5133141Z line to your configuration file "/var/lib/jenkins/.gdbinit". 2023-01-11T22:12:24.5133533Z To completely disable this security protection add 2023-01-11T22:12:24.5133808Z set auto-load safe-path / 2023-01-11T22:12:24.5134040Z line to your configuration file "/var/lib/jenkins/.gdbinit". 2023-01-11T22:12:24.5134313Z For more information about this security protection see the 2023-01-11T22:12:24.5134684Z "Auto-loading safe path" section in the GDB manual. E.g., run from the shell: 2023-01-11T22:12:24.5134978Z info "(gdb)Auto-loading safe path" 2023-01-11T22:12:24.5178313Z #0 raise (sig=) at ../sysdeps/unix/sysv/linux/raise.c:51 2023-01-11T22:12:24.5180748Z #1 0x00007fdd8096765b in handler_SIGSEGV(int, siginfo_t*, void*) () 2023-01-11T22:12:24.5181446Z from /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_python.so 2023-01-11T22:12:24.5191425Z #2 2023-01-11T22:12:24.5191972Z #3 __strlen_avx2 () at ../sysdeps/x86_64/multiarch/strlen-avx2.S:65 2023-01-11T22:12:24.5346028Z #4 0x00007fddb6d9403b in string_at (ptr=0x0, size=-1) at :5564 2023-01-11T22:12:24.5347667Z #5 0x00007fddb818b052 in ffi_call_unix64 () 2023-01-11T22:12:24.5348196Z from /opt/conda/lib/python3.10/lib-dynload/../../libffi.so.8 2023-01-11T22:12:24.5356515Z #6 0x00007fddb81898cd in ffi_call_int () 2023-01-11T22:12:24.5357093Z from /opt/conda/lib/python3.10/lib-dynload/../../libffi.so.8 2023-01-11T22:12:24.5364656Z #7 0x00007fddb6d9c879 in _call_function_pointer (argtypecount=2, argcount=2, 2023-01-11T22:12:24.5365231Z resmem=0x7ffd6cef99c0, restype=, atypes=, 2023-01-11T22:12:24.5365806Z avalues=, pProc=0x7fddb6d94002 , flags=4357) 2023-01-11T22:12:24.5366186Z at /usr/local/src/conda/python-3.10.8/build-static/stgdict.c:916 2023-01-11T22:12:24.5366412Z #8 _ctypes_callproc () 2023-01-11T22:12:24.5366727Z at /usr/local/src/conda/python-3.10.8/build-static/stgdict.c:1259 2023-01-11T22:12:24.5367015Z #9 0x00007fddb6d9c3fe in PyCFuncPtr_call () at :4201 2023-01-11T22:12:24.5368114Z #10 0x00000000004f7b8b in _PyObject_MakeTpCall.localalias () 2023-01-11T22:12:24.5368463Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:224 2023-01-11T22:12:24.5373973Z #11 0x00000000004f37ae in _PyObject_VectorcallTstate ( 2023-01-11T22:12:24.5374405Z kwnames=, 2023-01-11T22:12:24.5374972Z nargsf=, args=0x7fdd11194358, 2023-01-11T22:12:24.5375364Z callable=, 2023-01-11T22:12:24.5375734Z tstate=) 2023-01-11T22:12:24.5376160Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:112 2023-01-11T22:12:24.5379151Z #12 _PyObject_VectorcallTstate (kwnames=0x0, nargsf=, 2023-01-11T22:12:24.5379458Z args=0x7fdd11194358, callable=0x7fddb7050880, tstate=) 
2023-01-11T22:12:24.5379833Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:99 2023-01-11T22:12:24.5383748Z #13 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:24.5384077Z args=0x7fdd11194358, callable=0x7fddb7050880) 2023-01-11T22:12:24.5384431Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:24.5387977Z #14 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:24.5388430Z pp_stack=, trace_info=0x7ffd6cef9cd0, 2023-01-11T22:12:24.5388852Z tstate=) 2023-01-11T22:12:24.5389185Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:24.5389555Z #15 _PyEval_EvalFrameDefault () 2023-01-11T22:12:24.5389867Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4213 2023-01-11T22:12:24.5390666Z #16 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:24.5391092Z throwflag=, 2023-01-11T22:12:24.5391443Z f=, 2023-01-11T22:12:24.5391797Z tstate=) 2023-01-11T22:12:24.5392194Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:24.5397232Z #17 _PyEval_Vector (kwnames=, 2023-01-11T22:12:24.5397876Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:24.5398478Z args@entry=, locals=0x0, 2023-01-11T22:12:24.5398886Z locals@entry=, con=0x7fddb700f890, 2023-01-11T22:12:24.5399272Z con@entry=, tstate=0x2695bf0, 2023-01-11T22:12:24.5399654Z tstate@entry=) 2023-01-11T22:12:24.5400112Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:24.5400360Z #18 _PyFunction_Vectorcall () 2023-01-11T22:12:24.5400649Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:24.5406568Z #19 0x00000000004f351e in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:24.5406956Z nargsf=, args=0x7fdd111941c0, callable=0x7fddb700f880, 2023-01-11T22:12:24.5407200Z tstate=0x2695bf0) 2023-01-11T22:12:24.5407513Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:24.5411567Z #20 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:24.5411871Z args=0x7fdd111941c0, callable=0x7fddb700f880) 2023-01-11T22:12:24.5412207Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:24.5414880Z #21 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:24.5415349Z pp_stack=, trace_info=0x7ffd6cef9e90, 2023-01-11T22:12:24.5415770Z tstate=) 2023-01-11T22:12:24.5416202Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:24.5416452Z #22 _PyEval_EvalFrameDefault () 2023-01-11T22:12:24.5416764Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4181 2023-01-11T22:12:24.5418027Z #23 0x0000000000543a33 in _PyEval_EvalFrame ( 2023-01-11T22:12:24.5418398Z throwflag=, 2023-01-11T22:12:24.5418752Z f=, 2023-01-11T22:12:24.5419105Z tstate=) 2023-01-11T22:12:24.5419490Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:24.5424643Z #24 _PyEval_Vector (kwnames=0x0, 2023-01-11T22:12:24.5424999Z argcount=, 2023-01-11T22:12:24.5425414Z args=, locals=0x0, con=0x7fdd112bc680, tstate=0x2695bf0) 2023-01-11T22:12:24.5425837Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:24.5427839Z #25 _PyFunction_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:24.5428282Z stack=, func=0x7fdd112bc670) 2023-01-11T22:12:24.5428901Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:24.5433781Z #26 _PyObject_VectorcallTstate (kwnames=0x0, nargsf=, 2023-01-11T22:12:24.5434305Z args=, callable=0x7fdd112bc670, 
tstate=0x2695bf0) 2023-01-11T22:12:24.5434867Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:114 2023-01-11T22:12:24.5435275Z #27 vectorcall_unbound ( 2023-01-11T22:12:24.5435829Z nargs=, args=, 2023-01-11T22:12:24.5436207Z func=, 2023-01-11T22:12:24.5436578Z unbound=, 2023-01-11T22:12:24.5436938Z tstate=) 2023-01-11T22:12:24.5437322Z at /usr/local/src/conda/python-3.10.8/Programs/gcmodule.c:1629 2023-01-11T22:12:24.5437546Z #28 vectorcall_method () 2023-01-11T22:12:24.5437839Z at /usr/local/src/conda/python-3.10.8/Programs/gcmodule.c:1661 2023-01-11T22:12:24.5439345Z #29 0x0000000000543898 in slot_mp_subscript (self=, 2023-01-11T22:12:24.5439616Z arg1=) 2023-01-11T22:12:24.5440103Z at /usr/local/src/conda/python-3.10.8/Programs/gcmodule.c:7258 2023-01-11T22:12:24.5441048Z #30 0x00000000004ef56e in _PyEval_EvalFrameDefault () 2023-01-11T22:12:24.5441379Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:2109 2023-01-11T22:12:24.5443727Z #31 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:24.5444120Z throwflag=, 2023-01-11T22:12:24.5444479Z f=, 2023-01-11T22:12:24.5444824Z tstate=) 2023-01-11T22:12:24.5445218Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:24.5449926Z #32 _PyEval_Vector (kwnames=, 2023-01-11T22:12:24.5450512Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:24.5451192Z args@entry=, locals=0x0, 2023-01-11T22:12:24.5451756Z locals@entry=, con=0x7fdd11a14170, 2023-01-11T22:12:24.5452166Z con@entry=, tstate=0x2695bf0, 2023-01-11T22:12:24.5452553Z tstate@entry=) 2023-01-11T22:12:24.5452943Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:24.5453190Z #33 _PyFunction_Vectorcall () 2023-01-11T22:12:24.5453492Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:24.5458583Z #34 0x00000000004eecef in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:24.5459107Z nargsf=, args=0x7fdd1151aa48, callable=0x7fdd11a14160, 2023-01-11T22:12:24.5459351Z tstate=0x2695bf0) 2023-01-11T22:12:24.5459738Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:24.5462928Z #35 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:24.5463455Z args=0x7fdd1151aa48, callable=0x7fdd11a14160) 2023-01-11T22:12:24.5463885Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:24.5466116Z #36 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:24.5466578Z pp_stack=, trace_info=0x7ffd6cefa290, 2023-01-11T22:12:24.5467174Z tstate=) 2023-01-11T22:12:24.5467499Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:24.5467860Z #37 _PyEval_EvalFrameDefault () 2023-01-11T22:12:24.5468250Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4213 2023-01-11T22:12:24.5468558Z #38 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:24.5468861Z throwflag=, 2023-01-11T22:12:24.5469211Z f=, 2023-01-11T22:12:24.5469576Z tstate=) 2023-01-11T22:12:24.5469959Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:24.5474778Z #39 _PyEval_Vector (kwnames=, 2023-01-11T22:12:24.5475584Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:24.5476386Z args@entry=, locals=0x0, 2023-01-11T22:12:24.5476808Z locals@entry=, con=0x7fdd1371d400, 2023-01-11T22:12:24.5477196Z con@entry=, tstate=0x2695bf0, 2023-01-11T22:12:24.5477581Z tstate@entry=) 2023-01-11T22:12:24.5477996Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:24.5478236Z #40 
_PyFunction_Vectorcall () 2023-01-11T22:12:24.5478525Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:24.5483412Z #41 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:24.5483724Z nargsf=, args=0x7649028, callable=0x7fdd1371d3f0, 2023-01-11T22:12:24.5483957Z tstate=0x2695bf0) 2023-01-11T22:12:24.5484295Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:24.5487901Z #42 PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0x7649028, 2023-01-11T22:12:24.5488230Z callable=0x7fdd1371d3f0) 2023-01-11T22:12:24.5488550Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:24.5491379Z #43 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:24.5491924Z pp_stack=, trace_info=0x7ffd6cefa450, 2023-01-11T22:12:24.5492169Z tstate=) 2023-01-11T22:12:24.5492475Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:24.5492717Z #44 _PyEval_EvalFrameDefault () 2023-01-11T22:12:24.5493140Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:24.5493590Z #45 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:24.5494111Z throwflag=, 2023-01-11T22:12:24.5494710Z f=, 2023-01-11T22:12:24.5495072Z tstate=) 2023-01-11T22:12:24.5495449Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:24.5500474Z #46 _PyEval_Vector (kwnames=, 2023-01-11T22:12:24.5501165Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:24.5501667Z args@entry=, locals=0x0, 2023-01-11T22:12:24.5502072Z locals@entry=, con=0x7fdd1371c710, 2023-01-11T22:12:24.5502788Z con@entry=, tstate=0x2695bf0, 2023-01-11T22:12:24.5503176Z tstate@entry=) 2023-01-11T22:12:24.5503591Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:24.5503833Z #47 _PyFunction_Vectorcall () 2023-01-11T22:12:24.5504138Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:24.5508752Z #48 0x00000000004f141c in do_call_core (kwdict=0x7fddb715a800, 2023-01-11T22:12:24.5509207Z callargs=0x7fdd11412520, func=0x7fdd1371c700, trace_info=0x7ffd6cefa610, 2023-01-11T22:12:24.5509644Z tstate=) 2023-01-11T22:12:24.5510092Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5943 2023-01-11T22:12:24.5510394Z #49 _PyEval_EvalFrameDefault () 2023-01-11T22:12:24.5510792Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4277 2023-01-11T22:12:24.5511275Z #50 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:24.5511668Z throwflag=, 2023-01-11T22:12:24.5512019Z f=, 2023-01-11T22:12:24.5512376Z tstate=) 2023-01-11T22:12:24.5512763Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:24.5517085Z #51 _PyEval_Vector (kwnames=, 2023-01-11T22:12:24.5518018Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:24.5518522Z args@entry=, locals=0x0, 2023-01-11T22:12:24.5519292Z locals@entry=, con=0x7fddb71855b0, 2023-01-11T22:12:24.5519931Z con@entry=, tstate=0x2695bf0, 2023-01-11T22:12:24.5520320Z tstate@entry=) 2023-01-11T22:12:24.5520747Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:24.5520990Z #52 _PyFunction_Vectorcall () 2023-01-11T22:12:24.5521399Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:24.5525236Z #53 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:24.5525737Z nargsf=, args=0x75bba18, callable=0x7fddb71855a0, 2023-01-11T22:12:24.5526124Z tstate=0x2695bf0) 2023-01-11T22:12:24.5526453Z at 
/usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:24.5529798Z #54 PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0x75bba18, 2023-01-11T22:12:24.5530266Z callable=0x7fddb71855a0) 2023-01-11T22:12:24.5530697Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:24.5532908Z #55 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:24.5533393Z pp_stack=, trace_info=0x7ffd6cefa7d0, 2023-01-11T22:12:24.5533798Z tstate=) 2023-01-11T22:12:24.5534122Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:24.5534424Z #56 _PyEval_EvalFrameDefault () 2023-01-11T22:12:24.5534817Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:24.5535202Z #57 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:24.5535768Z throwflag=, 2023-01-11T22:12:24.5536281Z f=, 2023-01-11T22:12:24.5536625Z tstate=) 2023-01-11T22:12:24.5537009Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:24.5541408Z #58 _PyEval_Vector (kwnames=, 2023-01-11T22:12:24.5542089Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:24.5542808Z args@entry=, locals=0x0, 2023-01-11T22:12:24.5543211Z locals@entry=, con=0x7fddb7185f40, 2023-01-11T22:12:24.5543615Z con@entry=, tstate=0x2695bf0, 2023-01-11T22:12:24.5544004Z tstate@entry=) 2023-01-11T22:12:24.5544423Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:24.5544664Z #59 _PyFunction_Vectorcall () 2023-01-11T22:12:24.5544965Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:24.5549331Z #60 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:24.5549815Z nargsf=, args=0x27125a0, callable=0x7fddb7185f30, 2023-01-11T22:12:24.5550225Z tstate=0x2695bf0) 2023-01-11T22:12:24.5550567Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:24.5553627Z #61 PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0x27125a0, 2023-01-11T22:12:24.5554081Z callable=0x7fddb7185f30) 2023-01-11T22:12:24.5554562Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:24.5556657Z #62 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:24.5557104Z pp_stack=, trace_info=0x7ffd6cefa990, 2023-01-11T22:12:24.5557538Z tstate=) 2023-01-11T22:12:24.5557892Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:24.5558156Z #63 _PyEval_EvalFrameDefault () 2023-01-11T22:12:24.5558626Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:24.5559037Z #64 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:24.5559753Z throwflag=, 2023-01-11T22:12:24.5560180Z f=, 2023-01-11T22:12:24.5560536Z tstate=) 2023-01-11T22:12:24.5560928Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:24.5565473Z #65 _PyEval_Vector (kwnames=, 2023-01-11T22:12:24.5566104Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:24.5566743Z args@entry=, locals=0x0, 2023-01-11T22:12:24.5567147Z locals@entry=, con=0x7fddb7006de0, 2023-01-11T22:12:24.5567543Z con@entry=, tstate=0x2695bf0, 2023-01-11T22:12:24.5567925Z tstate@entry=) 2023-01-11T22:12:24.5568326Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:24.5568567Z #66 _PyFunction_Vectorcall () 2023-01-11T22:12:24.5568959Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:24.5573717Z #67 0x00000000004eecef in _PyObject_VectorcallTstate (kwnames=0x0, 
2023-01-11T22:12:24.5574263Z nargsf=, args=0x7fddb72cc200, callable=0x7fddb7006dd0, 2023-01-11T22:12:24.5574529Z tstate=0x2695bf0) 2023-01-11T22:12:24.5574857Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:24.5577894Z #68 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:24.5578362Z args=0x7fddb72cc200, callable=0x7fddb7006dd0) 2023-01-11T22:12:24.5578903Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:24.5581197Z #69 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:24.5581687Z pp_stack=, trace_info=0x7ffd6cefab50, 2023-01-11T22:12:24.5582062Z tstate=) 2023-01-11T22:12:24.5582557Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:24.5582812Z #70 _PyEval_EvalFrameDefault () 2023-01-11T22:12:24.5583175Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4213 2023-01-11T22:12:24.5584007Z #71 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:24.5584573Z throwflag=, 2023-01-11T22:12:24.5585263Z f=, 2023-01-11T22:12:24.5585923Z tstate=) 2023-01-11T22:12:24.5586561Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:24.5591250Z #72 _PyEval_Vector (kwnames=, 2023-01-11T22:12:24.5591848Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:24.5592557Z args@entry=, locals=0x0, 2023-01-11T22:12:24.5593303Z locals@entry=, con=0x7fddb7006d50, 2023-01-11T22:12:24.5594038Z con@entry=, tstate=0x2695bf0, 2023-01-11T22:12:24.5594684Z tstate@entry=) 2023-01-11T22:12:24.5595449Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:24.5596023Z #73 _PyFunction_Vectorcall () 2023-01-11T22:12:24.5596621Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:24.5598984Z #74 0x00000000004efd83 in _PyObject_VectorcallTstate (kwnames=0x7fddb72a5e40, 2023-01-11T22:12:24.5599521Z nargsf=, args=, callable=0x7fddb7006d40, 2023-01-11T22:12:24.5599980Z tstate=0x2695bf0) 2023-01-11T22:12:24.5600597Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:24.5602971Z #75 PyObject_Vectorcall (kwnames=0x7fddb72a5e40, nargsf=, 2023-01-11T22:12:24.5603483Z args=, callable=0x7fddb7006d40) 2023-01-11T22:12:24.5604081Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:24.5606935Z #76 call_function (kwnames=0x7fddb72a5e40, oparg=, 2023-01-11T22:12:24.5607406Z pp_stack=, trace_info=0x7ffd6cefad10, 2023-01-11T22:12:24.5607789Z tstate=) 2023-01-11T22:12:24.5608302Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:24.5608718Z #77 _PyEval_EvalFrameDefault () 2023-01-11T22:12:24.5609257Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4231 2023-01-11T22:12:24.5610267Z #78 0x0000000000594b72 in _PyEval_EvalFrame ( 2023-01-11T22:12:24.5610756Z throwflag=, 2023-01-11T22:12:24.5611524Z f=, 2023-01-11T22:12:24.5612095Z tstate=) 2023-01-11T22:12:24.5612888Z at /croot/python-split_1669298683653/_build_env/x86_64-conda-linux-gnu/sysroot/usr/include/bits/call.c:46 2023-01-11T22:12:24.5613291Z #79 _PyEval_Vector () 2023-01-11T22:12:24.5613739Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:24.5620753Z #80 0x0000000000594ab7 in PyEval_EvalCode (co=co@entry=0x7fddb7268920, 2023-01-11T22:12:24.5621254Z globals=globals@entry=0x7fddb725a600, locals=locals@entry=0x7fddb725a600) 2023-01-11T22:12:24.5621854Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:1134 2023-01-11T22:12:24.5622926Z #81 
0x00000000005c6e57 in run_eval_code_obj () 2023-01-11T22:12:24.5623548Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:1291 2023-01-11T22:12:24.5626762Z #82 0x00000000005c1d40 in run_mod () 2023-01-11T22:12:24.5627376Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:1312 2023-01-11T22:12:24.5630148Z #83 0x00000000005b9ebb in PyRun_StringFlags.localalias () 2023-01-11T22:12:24.5630857Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:1183 2023-01-11T22:12:24.5633608Z #84 0x00000000005b9cfb in PyRun_SimpleStringFlags.localalias () 2023-01-11T22:12:24.5634315Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:503 2023-01-11T22:12:24.5637134Z #85 0x00000000005b8d5c in pymain_run_command ( 2023-01-11T22:12:24.5637718Z command=) 2023-01-11T22:12:24.5638481Z at /croot/python-split_1669298683653/work/build-static/python.c:252 2023-01-11T22:12:24.5639705Z #86 pymain_run_python (exitcode=0x7ffd6cefaf70) 2023-01-11T22:12:24.5640426Z at /croot/python-split_1669298683653/work/build-static/python.c:582 2023-01-11T22:12:24.5640871Z #87 Py_RunMain.localalias () 2023-01-11T22:12:24.5641374Z at /croot/python-split_1669298683653/work/build-static/python.c:670 2023-01-11T22:12:24.5658996Z #88 0x0000000000587c29 in Py_BytesMain (argc=, 2023-01-11T22:12:24.5659416Z argv=) 2023-01-11T22:12:24.5660037Z at /croot/python-split_1669298683653/work/build-static/python.c:1090 2023-01-11T22:12:24.5665305Z #89 0x00007fddb733ac87 in __libc_start_main (main=0x587be0
, argc=5, 2023-01-11T22:12:24.5666010Z argv=0x7ffd6cefb178, init=, fini=, 2023-01-11T22:12:24.5666424Z rtld_fini=, stack_end=0x7ffd6cefb168) 2023-01-11T22:12:24.5666811Z at ../csu/libc-start.c:310 2023-01-11T22:12:24.5667072Z #90 0x0000000000587ade in _start () 2023-01-11T22:12:24.5667522Z at /usr/local/src/conda/python-3.10.8/Modules/_io/clinic/peg_api.c:880 2023-01-11T22:12:24.7342818Z GNU gdb (Ubuntu 8.1.1-0ubuntu1) 8.1.1 2023-01-11T22:12:24.7343320Z Copyright (C) 2018 Free Software Foundation, Inc. 2023-01-11T22:12:24.7343881Z License GPLv3+: GNU GPL version 3 or later 2023-01-11T22:12:24.7344482Z This is free software: you are free to change and redistribute it. 2023-01-11T22:12:24.7344923Z There is NO WARRANTY, to the extent permitted by law. Type "show copying" 2023-01-11T22:12:24.7345170Z and "show warranty" for details. 2023-01-11T22:12:24.7345463Z This GDB was configured as "x86_64-linux-gnu". 2023-01-11T22:12:24.7345735Z Type "show configuration" for configuration details. 2023-01-11T22:12:24.7345978Z For bug reporting instructions, please see: 2023-01-11T22:12:24.7346221Z . 2023-01-11T22:12:24.7346488Z Find the GDB manual and other documentation resources online at: 2023-01-11T22:12:24.7346752Z . 2023-01-11T22:12:24.7346975Z For help, type "help". 2023-01-11T22:12:24.7347450Z Type "apropos word" to search for commands related to "word"... 2023-01-11T22:12:24.8491623Z Reading symbols from python...done. 2023-01-11T22:12:25.4230762Z 2023-01-11T22:12:25.4231585Z warning: core file may not match specified executable file. 2023-01-11T22:12:25.4263225Z BFD: warning: /opt/conda/lib/libgomp.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:25.4263698Z [New LWP 23769] 2023-01-11T22:12:25.4264184Z BFD: warning: /opt/conda/lib/libgomp.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:25.4265639Z BFD: warning: /opt/conda/bin/../lib/libstdc++.so.6: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:25.4266273Z BFD: warning: /opt/conda/bin/../lib/libstdc++.so.6: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:25.4266869Z BFD: warning: /opt/conda/bin/../lib/libgcc_s.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:25.4267489Z BFD: warning: /opt/conda/bin/../lib/libgcc_s.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:25.4281263Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libgfortran.so.5: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:25.4282261Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libgfortran.so.5: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:25.4282807Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libquadmath.so.0: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:25.4283360Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libquadmath.so.0: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:25.4596890Z [Thread debugging using libthread_db enabled] 2023-01-11T22:12:25.4597472Z Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". 2023-01-11T22:12:33.0412236Z 78 ../sysdeps/unix/syscall-template.S: No such file or directory. 2023-01-11T22:12:33.0413135Z warning: File "/var/lib/jenkins/workspace/.gdbinit" auto-loading has been declined by your `auto-load safe-path' set to "$debugdir:$datadir/auto-load". 
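Note on the backtrace that follows: GDB reports the program (test_multiprocessing_spawn.py) terminated with SIGABRT, and frames #0-#2 show the process inside Python's os.kill wrapper (kill() -> os_kill_impl -> os_kill), i.e. Python code delivered the signal through os.kill. A minimal, purely illustrative Python sketch of that call pattern (for clarity only; this is not the test's actual code) is:

    import os
    import signal

    # Sending SIGABRT to the current process terminates it with "Aborted",
    # which matches the termination recorded in the core dump below:
    # kill() at the top of the stack, reached via os_kill_impl / os_kill.
    os.kill(os.getpid(), signal.SIGABRT)

Run in a shell with core dumps enabled (ulimit -c unlimited), a script like this would leave a core file that GDB opens the same way as below, with kill() as frame #0.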
2023-01-11T22:12:33.0413663Z Core was generated by `/opt/conda/bin/python -bb test_multiprocessing_spawn.py -v --import-slow-tests'. 2023-01-11T22:12:33.0413983Z Program terminated with signal SIGABRT, Aborted. 2023-01-11T22:12:33.0414339Z #0 0x00007fc8511c6177 in kill () at ../sysdeps/unix/syscall-template.S:78 2023-01-11T22:12:33.0440854Z To enable execution of this file add 2023-01-11T22:12:33.0441753Z add-auto-load-safe-path /var/lib/jenkins/workspace/.gdbinit 2023-01-11T22:12:33.0442054Z line to your configuration file "/var/lib/jenkins/.gdbinit". 2023-01-11T22:12:33.0442324Z To completely disable this security protection add 2023-01-11T22:12:33.0442594Z set auto-load safe-path / 2023-01-11T22:12:33.0442828Z line to your configuration file "/var/lib/jenkins/.gdbinit". 2023-01-11T22:12:33.0443106Z For more information about this security protection see the 2023-01-11T22:12:33.0443466Z "Auto-loading safe path" section in the GDB manual. E.g., run from the shell: 2023-01-11T22:12:33.0443769Z info "(gdb)Auto-loading safe path" 2023-01-11T22:12:33.0488652Z #0 0x00007fc8511c6177 in kill () at ../sysdeps/unix/syscall-template.S:78 2023-01-11T22:12:33.0504705Z #1 0x00000000004cb0d3 in os_kill_impl ( 2023-01-11T22:12:33.0505203Z module=, 2023-01-11T22:12:33.0505822Z signal=, 2023-01-11T22:12:33.0506186Z pid=) 2023-01-11T22:12:33.0506666Z at /croot/python-split_1669298683653/_build_env/x86_64-conda-linux-gnu/sysroot/usr/include/sys/_iomodule.c:7929 2023-01-11T22:12:33.0510709Z #2 os_kill (module=, args=args@entry=0x7fc7f030c358, 2023-01-11T22:12:33.0511311Z nargs=) 2023-01-11T22:12:33.0511786Z at /usr/local/src/conda/python-3.10.8/Modules/codecs.c:3581 2023-01-11T22:12:33.0519452Z #3 0x00000000004fe7d4 in cfunction_vectorcall_FASTCALL (func=0x7fc852109a30, 2023-01-11T22:12:33.0520078Z args=0x7fc7f030c358, nargsf=, kwnames=) 2023-01-11T22:12:33.0520513Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_bitutils.h:430 2023-01-11T22:12:33.0527637Z #4 0x00000000004f351e in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:33.0528181Z nargsf=, args=0x7fc7f030c358, callable=0x7fc852109a30, 2023-01-11T22:12:33.0528518Z tstate=0x1f51b80) 2023-01-11T22:12:33.0528840Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0532458Z #5 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:33.0532756Z args=0x7fc7f030c358, callable=0x7fc852109a30) 2023-01-11T22:12:33.0533167Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0535156Z #6 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:33.0535520Z pp_stack=, trace_info=0x7ffe1b69d5c0, 2023-01-11T22:12:33.0535747Z tstate=) 2023-01-11T22:12:33.0536069Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:33.0536438Z #7 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.0536853Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4181 2023-01-11T22:12:33.0537184Z #8 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:33.0537570Z throwflag=, 2023-01-11T22:12:33.0537933Z f=, 2023-01-11T22:12:33.0538279Z tstate=) 2023-01-11T22:12:33.0538669Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.0543593Z #9 _PyEval_Vector (kwnames=, 2023-01-11T22:12:33.0544299Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:33.0544737Z args@entry=, locals=0x0, 2023-01-11T22:12:33.0545304Z locals@entry=, con=0x7fc7ab1f36e0, 2023-01-11T22:12:33.0545721Z con@entry=, tstate=0x1f51b80, 2023-01-11T22:12:33.0546116Z tstate@entry=) 
2023-01-11T22:12:33.0546537Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.0546789Z #10 _PyFunction_Vectorcall () 2023-01-11T22:12:33.0547098Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.0551084Z #11 0x00000000004f141c in do_call_core (kwdict=0x0, callargs=0x7fc7a4f1bd60, 2023-01-11T22:12:33.0551563Z func=0x7fc7ab1f36d0, trace_info=0x7ffe1b69d780, tstate=) 2023-01-11T22:12:33.0552131Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5943 2023-01-11T22:12:33.0552377Z #12 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.0552755Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4277 2023-01-11T22:12:33.0553339Z #13 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:33.0553910Z throwflag=, 2023-01-11T22:12:33.0554546Z f=, 2023-01-11T22:12:33.0555209Z tstate=) 2023-01-11T22:12:33.0556120Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.0560915Z #14 _PyEval_Vector (kwnames=, 2023-01-11T22:12:33.0561519Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:33.0562324Z args@entry=, locals=0x0, 2023-01-11T22:12:33.0562999Z locals@entry=, con=0x7fc7adfef1d0, 2023-01-11T22:12:33.0563768Z con@entry=, tstate=0x1f51b80, 2023-01-11T22:12:33.0564455Z tstate@entry=) 2023-01-11T22:12:33.0565129Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.0565528Z #15 _PyFunction_Vectorcall () 2023-01-11T22:12:33.0566068Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.0570184Z #16 0x00000000004f141c in do_call_core (kwdict=0x7fc7ae4aa740, 2023-01-11T22:12:33.0570682Z callargs=0x7fc7a50013f0, func=0x7fc7adfef1c0, trace_info=0x7ffe1b69d940, 2023-01-11T22:12:33.0571111Z tstate=) 2023-01-11T22:12:33.0571632Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5943 2023-01-11T22:12:33.0572002Z #17 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.0572555Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4277 2023-01-11T22:12:33.0573012Z #18 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:33.0573506Z throwflag=, 2023-01-11T22:12:33.0574191Z f=, 2023-01-11T22:12:33.0574848Z tstate=) 2023-01-11T22:12:33.0575504Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.0580031Z #19 _PyEval_Vector (kwnames=, 2023-01-11T22:12:33.0580661Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:33.0581607Z args@entry=, locals=0x0, 2023-01-11T22:12:33.0582275Z locals@entry=, con=0x7fc7adf9b650, 2023-01-11T22:12:33.0583269Z con@entry=, tstate=0x1f51b80, 2023-01-11T22:12:33.0583882Z tstate@entry=) 2023-01-11T22:12:33.0584656Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.0585109Z #20 _PyFunction_Vectorcall () 2023-01-11T22:12:33.0585598Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.0589655Z #21 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:33.0590120Z nargsf=, args=0x744db48, callable=0x7fc7adf9b640, 2023-01-11T22:12:33.0590481Z tstate=0x1f51b80) 2023-01-11T22:12:33.0591101Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0594814Z #22 PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0x744db48, 2023-01-11T22:12:33.0595251Z callable=0x7fc7adf9b640) 2023-01-11T22:12:33.0595795Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0598509Z #23 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:33.0599125Z pp_stack=, 
trace_info=0x7ffe1b69db00, 2023-01-11T22:12:33.0599360Z tstate=) 2023-01-11T22:12:33.0611983Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:33.0612231Z #24 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.0612521Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:33.0618614Z #25 0x0000000000509dbe in _PyEval_EvalFrame ( 2023-01-11T22:12:33.0619127Z throwflag=, 2023-01-11T22:12:33.0619771Z f=, 2023-01-11T22:12:33.0620451Z tstate=) 2023-01-11T22:12:33.0621118Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.0627054Z #26 _PyEval_Vector (kwnames=, argcount=, 2023-01-11T22:12:33.0627617Z args=0x7fc7ab48f1d8, locals=0x0, con=0x7fc7adfc4050, tstate=0x1f51b80) 2023-01-11T22:12:33.0628187Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.0631013Z #27 _PyFunction_Vectorcall (kwnames=, nargsf=, 2023-01-11T22:12:33.0631558Z stack=0x7fc7ab48f1d8, func=0x7fc7adfc4040) 2023-01-11T22:12:33.0632105Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.0636602Z #28 _PyObject_VectorcallTstate (kwnames=, 2023-01-11T22:12:33.0637096Z nargsf=, args=0x7fc7ab48f1d8, callable=0x7fc7adfc4040, 2023-01-11T22:12:33.0637429Z tstate=0x1f51b80) 2023-01-11T22:12:33.0637964Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:114 2023-01-11T22:12:33.0638416Z #29 method_vectorcall () 2023-01-11T22:12:33.0639023Z at /usr/local/src/conda/python-3.10.8/Programs/_functoolsmodule.c:53 2023-01-11T22:12:33.0644254Z #30 0x00000000004efd83 in _PyObject_VectorcallTstate (kwnames=0x7fc7a4f40400, 2023-01-11T22:12:33.0644771Z nargsf=, args=, callable=0x7fc7aaeb9200, 2023-01-11T22:12:33.0645160Z tstate=0x1f51b80) 2023-01-11T22:12:33.0645750Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0647639Z #31 PyObject_Vectorcall (kwnames=0x7fc7a4f40400, nargsf=, 2023-01-11T22:12:33.0648175Z args=, callable=0x7fc7aaeb9200) 2023-01-11T22:12:33.0648908Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0651741Z #32 call_function (kwnames=0x7fc7a4f40400, oparg=, 2023-01-11T22:12:33.0652277Z pp_stack=, trace_info=0x7ffe1b69dd10, 2023-01-11T22:12:33.0652618Z tstate=) 2023-01-11T22:12:33.0652939Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:33.0653185Z #33 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.0653681Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4231 2023-01-11T22:12:33.0653952Z #34 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:33.0654244Z throwflag=, 2023-01-11T22:12:33.0654606Z f=, 2023-01-11T22:12:33.0654962Z tstate=) 2023-01-11T22:12:33.0655339Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.0660024Z #35 _PyEval_Vector (kwnames=, 2023-01-11T22:12:33.0660445Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:33.0660880Z args@entry=, locals=0x0, 2023-01-11T22:12:33.0661399Z locals@entry=, con=0x7fc7a4ffb9b0, 2023-01-11T22:12:33.0661799Z con@entry=, tstate=0x1f51b80, 2023-01-11T22:12:33.0662180Z tstate@entry=) 2023-01-11T22:12:33.0662713Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.0662943Z #36 _PyFunction_Vectorcall () 2023-01-11T22:12:33.0663245Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.0668459Z #37 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:33.0668778Z nargsf=, args=0x7fc7a50fa578, 
callable=0x7fc7a4ffb9a0, 2023-01-11T22:12:33.0669016Z tstate=0x1f51b80) 2023-01-11T22:12:33.0669331Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0672874Z #38 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:33.0673125Z args=0x7fc7a50fa578, callable=0x7fc7a4ffb9a0) 2023-01-11T22:12:33.0673468Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0676288Z #39 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:33.0676822Z pp_stack=, trace_info=0x7ffe1b69ded0, 2023-01-11T22:12:33.0677148Z tstate=) 2023-01-11T22:12:33.0677493Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:33.0677764Z #40 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.0678099Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:33.0693979Z #41 0x00000000004f706d in _PyEval_EvalFrame ( 2023-01-11T22:12:33.0694512Z throwflag=, 2023-01-11T22:12:33.0695044Z f=, 2023-01-11T22:12:33.0695405Z tstate=) 2023-01-11T22:12:33.0695806Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.0701052Z #42 _PyEval_Vector (kwnames=0x0, 2023-01-11T22:12:33.0701687Z argcount=, args=, locals=0x0, con=0x7fc7a4ffb5c0, 2023-01-11T22:12:33.0702244Z tstate=0x1f51b80) 2023-01-11T22:12:33.0702688Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.0705050Z #43 _PyFunction_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:33.0705540Z stack=, func=0x7fc7a4ffb5b0) 2023-01-11T22:12:33.0706007Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.0706276Z #44 _PyObject_FastCallDictTstate.localalias () 2023-01-11T22:12:33.0706610Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:142 2023-01-11T22:12:33.0713656Z #45 0x0000000000507af8 in _PyObject_Call_Prepend (kwargs=0x0, 2023-01-11T22:12:33.0714260Z kwargs@entry=, args=0x7fc7ab1da530, 2023-01-11T22:12:33.0714808Z args@entry=, obj=, 2023-01-11T22:12:33.0715225Z obj@entry=, callable=0x7fc7a4ffb5b0, 2023-01-11T22:12:33.0715635Z callable@entry=, tstate=0x1f51b80, 2023-01-11T22:12:33.0716015Z tstate@entry=) 2023-01-11T22:12:33.0716527Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:431 2023-01-11T22:12:33.0716754Z #46 slot_tp_init () 2023-01-11T22:12:33.0717052Z at /usr/local/src/conda/python-3.10.8/Programs/gcmodule.c:7734 2023-01-11T22:12:33.0719454Z #47 0x00000000004f7bdb in type_call (kwds=0x0, args=0x7fc7ab1da530, 2023-01-11T22:12:33.0719864Z type=) 2023-01-11T22:12:33.0720449Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:224 2023-01-11T22:12:33.0720695Z #48 _PyObject_MakeTpCall.localalias () 2023-01-11T22:12:33.0721024Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:215 2023-01-11T22:12:33.0723184Z #49 0x00000000004f37ae in _PyObject_VectorcallTstate ( 2023-01-11T22:12:33.0723618Z kwnames=, 2023-01-11T22:12:33.0723999Z nargsf=, args=0x7fc7a50fa3d8, 2023-01-11T22:12:33.0724393Z callable=, 2023-01-11T22:12:33.0724764Z tstate=) 2023-01-11T22:12:33.0725170Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:112 2023-01-11T22:12:33.0727762Z #50 _PyObject_VectorcallTstate (kwnames=0x0, nargsf=, 2023-01-11T22:12:33.0728299Z args=0x7fc7a50fa3d8, callable=0x74f77a0, tstate=) 2023-01-11T22:12:33.0728794Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:99 2023-01-11T22:12:33.0732219Z #51 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:33.0732695Z args=0x7fc7a50fa3d8, callable=0x74f77a0) 
2023-01-11T22:12:33.0733181Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0736239Z #52 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:33.0736720Z pp_stack=, trace_info=0x7ffe1b69e1b0, 2023-01-11T22:12:33.0737128Z tstate=) 2023-01-11T22:12:33.0737508Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:33.0737853Z #53 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.0738378Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4213 2023-01-11T22:12:33.0738841Z #54 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:33.0739165Z throwflag=, 2023-01-11T22:12:33.0739613Z f=, 2023-01-11T22:12:33.0739977Z tstate=) 2023-01-11T22:12:33.0740362Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.0745604Z #55 _PyEval_Vector (kwnames=, 2023-01-11T22:12:33.0746227Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:33.0746873Z args@entry=, locals=0x0, 2023-01-11T22:12:33.0747276Z locals@entry=, con=0x7fc7adfec200, 2023-01-11T22:12:33.0747668Z con@entry=, tstate=0x1f51b80, 2023-01-11T22:12:33.0748059Z tstate@entry=) 2023-01-11T22:12:33.0748462Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.0748705Z #56 _PyFunction_Vectorcall () 2023-01-11T22:12:33.0748993Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.0753491Z #57 0x00000000004f351e in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:33.0753918Z nargsf=, args=0x7fc7a50fa098, callable=0x7fc7adfec1f0, 2023-01-11T22:12:33.0754162Z tstate=0x1f51b80) 2023-01-11T22:12:33.0754467Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0757905Z #58 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:33.0758370Z args=0x7fc7a50fa098, callable=0x7fc7adfec1f0) 2023-01-11T22:12:33.0758872Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0761273Z #59 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:33.0761744Z pp_stack=, trace_info=0x7ffe1b69e370, 2023-01-11T22:12:33.0762179Z tstate=) 2023-01-11T22:12:33.0762616Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:33.0762887Z #60 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.0763408Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4181 2023-01-11T22:12:33.0763872Z #61 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:33.0764220Z throwflag=, 2023-01-11T22:12:33.0764584Z f=, 2023-01-11T22:12:33.0764930Z tstate=) 2023-01-11T22:12:33.0765313Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.0769321Z #62 _PyEval_Vector (kwnames=, 2023-01-11T22:12:33.0769994Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:33.0770521Z args@entry=, locals=0x0, 2023-01-11T22:12:33.0770931Z locals@entry=, con=0x7fc7adf9b6e0, 2023-01-11T22:12:33.0771337Z con@entry=, tstate=0x1f51b80, 2023-01-11T22:12:33.0771724Z tstate@entry=) 2023-01-11T22:12:33.0772103Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.0772342Z #63 _PyFunction_Vectorcall () 2023-01-11T22:12:33.0772733Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.0777086Z #64 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:33.0777579Z nargsf=, args=0x72a77f8, callable=0x7fc7adf9b6d0, 2023-01-11T22:12:33.0777958Z tstate=0x1f51b80) 2023-01-11T22:12:33.0778285Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 
2023-01-11T22:12:33.0781485Z #65 PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0x72a77f8, 2023-01-11T22:12:33.0782076Z callable=0x7fc7adf9b6d0) 2023-01-11T22:12:33.0782640Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0784887Z #66 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:33.0785355Z pp_stack=, trace_info=0x7ffe1b69e530, 2023-01-11T22:12:33.0785757Z tstate=) 2023-01-11T22:12:33.0786118Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:33.0786398Z #67 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.0786801Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:33.0787230Z #68 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:33.0787774Z throwflag=, 2023-01-11T22:12:33.0788126Z f=, 2023-01-11T22:12:33.0788588Z tstate=) 2023-01-11T22:12:33.0788975Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.0793011Z #69 _PyEval_Vector (kwnames=, 2023-01-11T22:12:33.0793668Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:33.0794250Z args@entry=, locals=0x0, 2023-01-11T22:12:33.0794655Z locals@entry=, con=0x7fc7ae009b50, 2023-01-11T22:12:33.0795047Z con@entry=, tstate=0x1f51b80, 2023-01-11T22:12:33.0795434Z tstate@entry=) 2023-01-11T22:12:33.0795827Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.0796068Z #70 _PyFunction_Vectorcall () 2023-01-11T22:12:33.0796359Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.0799735Z #71 0x00000000004efd83 in _PyObject_VectorcallTstate (kwnames=0x7fc850fbdb80, 2023-01-11T22:12:33.0800342Z nargsf=, args=, callable=0x7fc7ae009b40, 2023-01-11T22:12:33.0800662Z tstate=0x1f51b80) 2023-01-11T22:12:33.0800990Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0803107Z #72 PyObject_Vectorcall (kwnames=0x7fc850fbdb80, nargsf=, 2023-01-11T22:12:33.0803736Z args=, callable=0x7fc7ae009b40) 2023-01-11T22:12:33.0804137Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0806674Z #73 call_function (kwnames=0x7fc850fbdb80, oparg=, 2023-01-11T22:12:33.0807220Z pp_stack=, trace_info=0x7ffe1b69e6f0, 2023-01-11T22:12:33.0807502Z tstate=) 2023-01-11T22:12:33.0807883Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:33.0808147Z #74 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.0808457Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4231 2023-01-11T22:12:33.0809033Z #75 0x0000000000509dbe in _PyEval_EvalFrame ( 2023-01-11T22:12:33.0809685Z throwflag=, 2023-01-11T22:12:33.0810396Z f=, 2023-01-11T22:12:33.0811035Z tstate=) 2023-01-11T22:12:33.0811749Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.0817154Z #76 _PyEval_Vector (kwnames=, argcount=, 2023-01-11T22:12:33.0817738Z args=0x7fc7a5029150, locals=0x0, con=0x7fc7ab1f3e30, tstate=0x1f51b80) 2023-01-11T22:12:33.0818303Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.0821264Z #77 _PyFunction_Vectorcall (kwnames=, nargsf=, 2023-01-11T22:12:33.0821802Z stack=0x7fc7a5029150, func=0x7fc7ab1f3e20) 2023-01-11T22:12:33.0822577Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.0827059Z #78 _PyObject_VectorcallTstate (kwnames=, 2023-01-11T22:12:33.0827510Z nargsf=, args=0x7fc7a5029150, callable=0x7fc7ab1f3e20, 2023-01-11T22:12:33.0827946Z tstate=0x1f51b80) 2023-01-11T22:12:33.0828558Z at 
/usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:114 2023-01-11T22:12:33.0828939Z #79 method_vectorcall () 2023-01-11T22:12:33.0829484Z at /usr/local/src/conda/python-3.10.8/Programs/_functoolsmodule.c:53 2023-01-11T22:12:33.0835848Z #80 0x00000000004eecef in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:33.0836343Z nargsf=, args=0x7fc7a5029158, callable=0x7fc7aae938c0, 2023-01-11T22:12:33.0836711Z tstate=0x1f51b80) 2023-01-11T22:12:33.0837317Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0840921Z #81 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:33.0841410Z args=0x7fc7a5029158, callable=0x7fc7aae938c0) 2023-01-11T22:12:33.0841988Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0844909Z #82 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:33.0845363Z pp_stack=, trace_info=0x7ffe1b69e900, 2023-01-11T22:12:33.0845780Z tstate=) 2023-01-11T22:12:33.0846294Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:33.0846702Z #83 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.0847193Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4213 2023-01-11T22:12:33.0847650Z #84 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:33.0848187Z throwflag=, 2023-01-11T22:12:33.0848802Z f=, 2023-01-11T22:12:33.0849479Z tstate=) 2023-01-11T22:12:33.0850165Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.0854856Z #85 _PyEval_Vector (kwnames=, 2023-01-11T22:12:33.0855472Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:33.0856295Z args@entry=, locals=0x0, 2023-01-11T22:12:33.0856935Z locals@entry=, con=0x7fc850f18d40, 2023-01-11T22:12:33.0857698Z con@entry=, tstate=0x1f51b80, 2023-01-11T22:12:33.0858350Z tstate@entry=) 2023-01-11T22:12:33.0859097Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.0859700Z #86 _PyFunction_Vectorcall () 2023-01-11T22:12:33.0860212Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.0864301Z #87 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:33.0864779Z nargsf=, args=0x6fefc20, callable=0x7fc850f18d30, 2023-01-11T22:12:33.0865149Z tstate=0x1f51b80) 2023-01-11T22:12:33.0865744Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0869477Z #88 PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0x6fefc20, 2023-01-11T22:12:33.0869924Z callable=0x7fc850f18d30) 2023-01-11T22:12:33.0870460Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0873136Z #89 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:33.0873610Z pp_stack=, trace_info=0x7ffe1b69eac0, 2023-01-11T22:12:33.0874033Z tstate=) 2023-01-11T22:12:33.0874534Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:33.0874930Z #90 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.0875442Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:33.0875875Z #91 0x0000000000509dbe in _PyEval_EvalFrame ( 2023-01-11T22:12:33.0876370Z throwflag=, 2023-01-11T22:12:33.0877225Z f=, 2023-01-11T22:12:33.0877829Z tstate=) 2023-01-11T22:12:33.0878549Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.0883928Z #92 _PyEval_Vector (kwnames=, argcount=, 2023-01-11T22:12:33.0884502Z args=0x6fe1d78, locals=0x0, con=0x7fc850f18ef0, tstate=0x1f51b80) 2023-01-11T22:12:33.0885064Z at 
/usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.0887826Z #93 _PyFunction_Vectorcall (kwnames=, nargsf=, 2023-01-11T22:12:33.0888346Z stack=0x6fe1d78, func=0x7fc850f18ee0) 2023-01-11T22:12:33.0888905Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.0893327Z #94 _PyObject_VectorcallTstate (kwnames=, 2023-01-11T22:12:33.0893815Z nargsf=, args=0x6fe1d78, callable=0x7fc850f18ee0, 2023-01-11T22:12:33.0894140Z tstate=0x1f51b80) 2023-01-11T22:12:33.0894692Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:114 2023-01-11T22:12:33.0895100Z #95 method_vectorcall () 2023-01-11T22:12:33.0895699Z at /usr/local/src/conda/python-3.10.8/Programs/_functoolsmodule.c:53 2023-01-11T22:12:33.0900932Z #96 0x00000000004efd83 in _PyObject_VectorcallTstate (kwnames=0x7fc7abb9a2c0, 2023-01-11T22:12:33.0901444Z nargsf=, args=, callable=0x7fc7aaf1b400, 2023-01-11T22:12:33.0901833Z tstate=0x1f51b80) 2023-01-11T22:12:33.0902684Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0904582Z #97 PyObject_Vectorcall (kwnames=0x7fc7abb9a2c0, nargsf=, 2023-01-11T22:12:33.0905128Z args=, callable=0x7fc7aaf1b400) 2023-01-11T22:12:33.0905684Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0908730Z #98 call_function (kwnames=0x7fc7abb9a2c0, oparg=, 2023-01-11T22:12:33.0909242Z pp_stack=, trace_info=0x7ffe1b69ecd0, 2023-01-11T22:12:33.0909643Z tstate=) 2023-01-11T22:12:33.0910157Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:33.0910517Z #99 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.0911061Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4231 2023-01-11T22:12:33.0911488Z #100 0x0000000000509dbe in _PyEval_EvalFrame ( 2023-01-11T22:12:33.0912180Z throwflag=, 2023-01-11T22:12:33.0912865Z f=, 2023-01-11T22:12:33.0913445Z tstate=) 2023-01-11T22:12:33.0914206Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.0919281Z #101 _PyEval_Vector (kwnames=, argcount=, 2023-01-11T22:12:33.0919846Z args=0x7fc7ab4e6b08, locals=0x0, con=0x7fc7ab1f0320, tstate=0x1f51b80) 2023-01-11T22:12:33.0920496Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.0923380Z #102 _PyFunction_Vectorcall (kwnames=, nargsf=, 2023-01-11T22:12:33.0923911Z stack=0x7fc7ab4e6b08, func=0x7fc7ab1f0310) 2023-01-11T22:12:33.0924477Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.0928907Z #103 _PyObject_VectorcallTstate (kwnames=, 2023-01-11T22:12:33.0929380Z nargsf=, args=0x7fc7ab4e6b08, callable=0x7fc7ab1f0310, 2023-01-11T22:12:33.0929716Z tstate=0x1f51b80) 2023-01-11T22:12:33.0930282Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:114 2023-01-11T22:12:33.0930875Z #104 method_vectorcall () 2023-01-11T22:12:33.0931475Z at /usr/local/src/conda/python-3.10.8/Programs/_functoolsmodule.c:53 2023-01-11T22:12:33.0936465Z #105 0x00000000004efd83 in _PyObject_VectorcallTstate ( 2023-01-11T22:12:33.0936942Z kwnames=0x7fc7ab9ab970, nargsf=, args=, 2023-01-11T22:12:33.0937354Z callable=0x7fc7aad1e3c0, tstate=0x1f51b80) 2023-01-11T22:12:33.0938009Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.0940161Z #106 PyObject_Vectorcall (kwnames=0x7fc7ab9ab970, nargsf=, 2023-01-11T22:12:33.0940678Z args=, callable=0x7fc7aad1e3c0) 2023-01-11T22:12:33.0941242Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 
2023-01-11T22:12:33.0944258Z #107 call_function (kwnames=0x7fc7ab9ab970, oparg=, 2023-01-11T22:12:33.0944750Z pp_stack=, trace_info=0x7ffe1b69eee0, 2023-01-11T22:12:33.0945170Z tstate=) 2023-01-11T22:12:33.0945723Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:33.0946155Z #108 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.0946629Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4231 2023-01-11T22:12:33.0947084Z #109 0x0000000000509f16 in _PyEval_EvalFrame ( 2023-01-11T22:12:33.0947648Z throwflag=, 2023-01-11T22:12:33.0948225Z f=, 2023-01-11T22:12:33.0948905Z tstate=) 2023-01-11T22:12:33.0949583Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.0954151Z #110 _PyEval_Vector ( 2023-01-11T22:12:33.0954670Z kwnames=, 2023-01-11T22:12:33.0955406Z argcount=, args=0x7ffe1b69efd0, locals=0x0, con=0x7fc7ab1f03b0, 2023-01-11T22:12:33.0956000Z tstate=0x1f51b80, 2023-01-11T22:12:33.0956464Z tstate@entry=) 2023-01-11T22:12:33.0957156Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.0959540Z #111 _PyFunction_Vectorcall ( 2023-01-11T22:12:33.0960197Z kwnames=, 2023-01-11T22:12:33.0961049Z nargsf=, stack=0x7ffe1b69efd0, func=0x7fc7ab1f03a0) 2023-01-11T22:12:33.0961789Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.0964066Z #112 _PyObject_VectorcallTstate ( 2023-01-11T22:12:33.0964614Z kwnames=, nargsf=, args=, 2023-01-11T22:12:33.0965237Z callable=0x7fc7ab1f03a0, tstate=0x1f51b80) 2023-01-11T22:12:33.0965892Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:114 2023-01-11T22:12:33.0966250Z #113 method_vectorcall () 2023-01-11T22:12:33.0966828Z at /usr/local/src/conda/python-3.10.8/Programs/_functoolsmodule.c:83 2023-01-11T22:12:33.0973054Z #114 0x00000000004f141c in do_call_core (kwdict=0x7fc7ab220580, 2023-01-11T22:12:33.0973574Z callargs=0x7fc7ab1b7e20, func=0x7fc7ac65fa00, trace_info=0x7ffe1b69f0f0, 2023-01-11T22:12:33.0973993Z tstate=) 2023-01-11T22:12:33.0974509Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5943 2023-01-11T22:12:33.0974966Z #115 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.0975542Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4277 2023-01-11T22:12:33.0975982Z #116 0x00000000004f706d in _PyEval_EvalFrame ( 2023-01-11T22:12:33.0976641Z throwflag=, 2023-01-11T22:12:33.0977248Z f=, 2023-01-11T22:12:33.0977867Z tstate=) 2023-01-11T22:12:33.0978618Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.0983553Z #117 _PyEval_Vector (kwnames=0x0, 2023-01-11T22:12:33.0984147Z argcount=, args=, locals=0x0, con=0x7fc850f190a0, 2023-01-11T22:12:33.0984669Z tstate=0x1f51b80) 2023-01-11T22:12:33.0985206Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.0987650Z #118 _PyFunction_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:33.0988093Z stack=, func=0x7fc850f19090) 2023-01-11T22:12:33.0988619Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.0989082Z #119 _PyObject_FastCallDictTstate.localalias () 2023-01-11T22:12:33.0989643Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:142 2023-01-11T22:12:33.0990054Z #120 0x00000000005084a6 in _PyObject_Call_Prepend () 2023-01-11T22:12:33.0990579Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:431 2023-01-11T22:12:33.0993291Z #121 0x00000000005d04d3 in slot_tp_call () 2023-01-11T22:12:33.0993916Z at 
/usr/local/src/conda/python-3.10.8/Programs/gcmodule.c:7494 2023-01-11T22:12:33.0995407Z #122 0x00000000004f7b8b in _PyObject_MakeTpCall.localalias () 2023-01-11T22:12:33.0995860Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:224 2023-01-11T22:12:33.0999964Z #123 0x00000000004f37ae in _PyObject_VectorcallTstate ( 2023-01-11T22:12:33.1000511Z kwnames=, 2023-01-11T22:12:33.1001187Z nargsf=, args=0x7fc7ab435970, 2023-01-11T22:12:33.1001577Z callable=, 2023-01-11T22:12:33.1001947Z tstate=) 2023-01-11T22:12:33.1002353Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:112 2023-01-11T22:12:33.1004950Z #124 _PyObject_VectorcallTstate (kwnames=0x0, nargsf=, 2023-01-11T22:12:33.1005632Z args=0x7fc7ab435970, callable=0x7fc7ab1d94e0, tstate=) 2023-01-11T22:12:33.1006084Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:99 2023-01-11T22:12:33.1009546Z #125 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:33.1010034Z args=0x7fc7ab435970, callable=0x7fc7ab1d94e0) 2023-01-11T22:12:33.1010501Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.1013855Z #126 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:33.1014252Z pp_stack=, trace_info=0x7ffe1b69f3f0, 2023-01-11T22:12:33.1014496Z tstate=) 2023-01-11T22:12:33.1014818Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:33.1015167Z #127 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.1015638Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4213 2023-01-11T22:12:33.1016217Z #128 0x0000000000509f16 in _PyEval_EvalFrame ( 2023-01-11T22:12:33.1016546Z throwflag=, 2023-01-11T22:12:33.1016918Z f=, 2023-01-11T22:12:33.1017262Z tstate=) 2023-01-11T22:12:33.1017815Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.1022619Z #129 _PyEval_Vector ( 2023-01-11T22:12:33.1023048Z kwnames=, 2023-01-11T22:12:33.1023455Z argcount=, args=0x7ffe1b69f4e0, locals=0x0, con=0x7fc850f24200, 2023-01-11T22:12:33.1023766Z tstate=0x1f51b80, 2023-01-11T22:12:33.1024061Z tstate@entry=) 2023-01-11T22:12:33.1024471Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.1027330Z #130 _PyFunction_Vectorcall ( 2023-01-11T22:12:33.1027705Z kwnames=, 2023-01-11T22:12:33.1028150Z nargsf=, stack=0x7ffe1b69f4e0, func=0x7fc850f241f0) 2023-01-11T22:12:33.1028579Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.1031236Z #131 _PyObject_VectorcallTstate ( 2023-01-11T22:12:33.1031739Z kwnames=, nargsf=, args=, 2023-01-11T22:12:33.1032109Z callable=0x7fc850f241f0, tstate=0x1f51b80) 2023-01-11T22:12:33.1032445Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:114 2023-01-11T22:12:33.1032698Z #132 method_vectorcall () 2023-01-11T22:12:33.1033016Z at /usr/local/src/conda/python-3.10.8/Programs/_functoolsmodule.c:83 2023-01-11T22:12:33.1039437Z #133 0x00000000004f141c in do_call_core (kwdict=0x7fc7ab261180, 2023-01-11T22:12:33.1039817Z callargs=0x7fc7ab1a4100, func=0x7fc7ab3fb9c0, trace_info=0x7ffe1b69f600, 2023-01-11T22:12:33.1040147Z tstate=) 2023-01-11T22:12:33.1040474Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5943 2023-01-11T22:12:33.1040737Z #134 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.1041227Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4277 2023-01-11T22:12:33.1041663Z #135 0x00000000004f706d in _PyEval_EvalFrame ( 2023-01-11T22:12:33.1042034Z throwflag=, 2023-01-11T22:12:33.1042386Z f=, 2023-01-11T22:12:33.1042874Z tstate=) 
2023-01-11T22:12:33.1043271Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.1048834Z #136 _PyEval_Vector (kwnames=0x0, 2023-01-11T22:12:33.1049295Z argcount=, args=, locals=0x0, con=0x7fc850f240e0, 2023-01-11T22:12:33.1049623Z tstate=0x1f51b80) 2023-01-11T22:12:33.1049932Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.1052510Z #137 _PyFunction_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:33.1052964Z stack=, func=0x7fc850f240d0) 2023-01-11T22:12:33.1053542Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.1053916Z #138 _PyObject_FastCallDictTstate.localalias () 2023-01-11T22:12:33.1054308Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:142 2023-01-11T22:12:33.1054574Z #139 0x00000000005084a6 in _PyObject_Call_Prepend () 2023-01-11T22:12:33.1054888Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:431 2023-01-11T22:12:33.1057250Z #140 0x00000000005d04d3 in slot_tp_call () 2023-01-11T22:12:33.1057637Z at /usr/local/src/conda/python-3.10.8/Programs/gcmodule.c:7494 2023-01-11T22:12:33.1059251Z #141 0x00000000004f7b8b in _PyObject_MakeTpCall.localalias () 2023-01-11T22:12:33.1059744Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:224 2023-01-11T22:12:33.1063492Z #142 0x00000000004f37ae in _PyObject_VectorcallTstate ( 2023-01-11T22:12:33.1064047Z kwnames=, 2023-01-11T22:12:33.1064634Z nargsf=, args=0x7fc7ab435060, 2023-01-11T22:12:33.1065010Z callable=, 2023-01-11T22:12:33.1065381Z tstate=) 2023-01-11T22:12:33.1065788Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:112 2023-01-11T22:12:33.1068375Z #143 _PyObject_VectorcallTstate (kwnames=0x0, nargsf=, 2023-01-11T22:12:33.1068718Z args=0x7fc7ab435060, callable=0x7fc7ab1dbd90, tstate=) 2023-01-11T22:12:33.1069098Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:99 2023-01-11T22:12:33.1072914Z #144 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:33.1073209Z args=0x7fc7ab435060, callable=0x7fc7ab1dbd90) 2023-01-11T22:12:33.1073544Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.1077091Z #145 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:33.1077602Z pp_stack=, trace_info=0x7ffe1b69f900, 2023-01-11T22:12:33.1077945Z tstate=) 2023-01-11T22:12:33.1078266Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:33.1078640Z #146 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.1079056Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4213 2023-01-11T22:12:33.1079533Z #147 0x0000000000509f16 in _PyEval_EvalFrame ( 2023-01-11T22:12:33.1080001Z throwflag=, 2023-01-11T22:12:33.1080370Z f=, 2023-01-11T22:12:33.1080714Z tstate=) 2023-01-11T22:12:33.1081099Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.1085482Z #148 _PyEval_Vector ( 2023-01-11T22:12:33.1085954Z kwnames=, 2023-01-11T22:12:33.1086747Z argcount=, args=0x7ffe1b69f9f0, locals=0x0, con=0x7fc850f24200, 2023-01-11T22:12:33.1087067Z tstate=0x1f51b80, 2023-01-11T22:12:33.1087353Z tstate@entry=) 2023-01-11T22:12:33.1087742Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.1090317Z #149 _PyFunction_Vectorcall ( 2023-01-11T22:12:33.1090843Z kwnames=, 2023-01-11T22:12:33.1091417Z nargsf=, stack=0x7ffe1b69f9f0, func=0x7fc850f241f0) 2023-01-11T22:12:33.1091822Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.1094254Z #150 _PyObject_VectorcallTstate ( 
2023-01-11T22:12:33.1094912Z kwnames=, nargsf=, args=, 2023-01-11T22:12:33.1095332Z callable=0x7fc850f241f0, tstate=0x1f51b80) 2023-01-11T22:12:33.1095666Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:114 2023-01-11T22:12:33.1095911Z #151 method_vectorcall () 2023-01-11T22:12:33.1096230Z at /usr/local/src/conda/python-3.10.8/Programs/_functoolsmodule.c:83 2023-01-11T22:12:33.1102019Z #152 0x00000000004f141c in do_call_core (kwdict=0x7fc7ac6fb000, 2023-01-11T22:12:33.1102475Z callargs=0x7fc7ab1c2560, func=0x7fc7ac24cf00, trace_info=0x7ffe1b69fb10, 2023-01-11T22:12:33.1102728Z tstate=) 2023-01-11T22:12:33.1103048Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5943 2023-01-11T22:12:33.1103283Z #153 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.1103731Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4277 2023-01-11T22:12:33.1104002Z #154 0x00000000004f706d in _PyEval_EvalFrame ( 2023-01-11T22:12:33.1104538Z throwflag=, 2023-01-11T22:12:33.1105049Z f=, 2023-01-11T22:12:33.1105411Z tstate=) 2023-01-11T22:12:33.1105798Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.1111087Z #155 _PyEval_Vector (kwnames=0x0, 2023-01-11T22:12:33.1111736Z argcount=, args=, locals=0x0, con=0x7fc850f240e0, 2023-01-11T22:12:33.1112109Z tstate=0x1f51b80) 2023-01-11T22:12:33.1112402Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.1114543Z #156 _PyFunction_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:33.1115006Z stack=, func=0x7fc850f240d0) 2023-01-11T22:12:33.1116091Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.1116666Z #157 _PyObject_FastCallDictTstate.localalias () 2023-01-11T22:12:33.1117010Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:142 2023-01-11T22:12:33.1117267Z #158 0x00000000005084a6 in _PyObject_Call_Prepend () 2023-01-11T22:12:33.1117566Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:431 2023-01-11T22:12:33.1119092Z #159 0x00000000005d04d3 in slot_tp_call () 2023-01-11T22:12:33.1119525Z at /usr/local/src/conda/python-3.10.8/Programs/gcmodule.c:7494 2023-01-11T22:12:33.1120895Z #160 0x00000000004f7b8b in _PyObject_MakeTpCall.localalias () 2023-01-11T22:12:33.1121297Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:224 2023-01-11T22:12:33.1125001Z #161 0x00000000004f37ae in _PyObject_VectorcallTstate ( 2023-01-11T22:12:33.1125416Z kwnames=, 2023-01-11T22:12:33.1125927Z nargsf=, args=0x6fe43d0, 2023-01-11T22:12:33.1126299Z callable=, 2023-01-11T22:12:33.1126667Z tstate=) 2023-01-11T22:12:33.1127097Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:112 2023-01-11T22:12:33.1129815Z #162 _PyObject_VectorcallTstate (kwnames=0x0, nargsf=, 2023-01-11T22:12:33.1130493Z args=0x6fe43d0, callable=0x7fc7ab1b5fc0, tstate=) 2023-01-11T22:12:33.1130889Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:99 2023-01-11T22:12:33.1134395Z #163 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:33.1134833Z args=0x6fe43d0, callable=0x7fc7ab1b5fc0) 2023-01-11T22:12:33.1135373Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.1138731Z #164 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:33.1139108Z pp_stack=, trace_info=0x7ffe1b69fe10, 2023-01-11T22:12:33.1139332Z tstate=) 2023-01-11T22:12:33.1139676Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:33.1140179Z #165 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.1140506Z at 
/usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4213 2023-01-11T22:12:33.1140930Z #166 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:33.1141492Z throwflag=, 2023-01-11T22:12:33.1141854Z f=, 2023-01-11T22:12:33.1142214Z tstate=) 2023-01-11T22:12:33.1142725Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.1147773Z #167 _PyEval_Vector (kwnames=, 2023-01-11T22:12:33.1148383Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:33.1149032Z args@entry=, locals=0x0, 2023-01-11T22:12:33.1149441Z locals@entry=, con=0x7fc7ab21c7a0, 2023-01-11T22:12:33.1149845Z con@entry=, tstate=0x1f51b80, 2023-01-11T22:12:33.1150221Z tstate@entry=) 2023-01-11T22:12:33.1150626Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.1150874Z #168 _PyFunction_Vectorcall () 2023-01-11T22:12:33.1151179Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.1156377Z #169 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:33.1156715Z nargsf=, args=0x6eb4108, callable=0x7fc7ab21c790, 2023-01-11T22:12:33.1156953Z tstate=0x1f51b80) 2023-01-11T22:12:33.1157290Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.1160931Z #170 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:33.1161192Z args=0x6eb4108, callable=0x7fc7ab21c790) 2023-01-11T22:12:33.1161546Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.1164205Z #171 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:33.1164652Z pp_stack=, trace_info=0x7ffe1b69ffd0, 2023-01-11T22:12:33.1164896Z tstate=) 2023-01-11T22:12:33.1165346Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:33.1165709Z #172 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.1166060Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:33.1166410Z #173 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:33.1166783Z throwflag=, 2023-01-11T22:12:33.1167155Z f=, 2023-01-11T22:12:33.1167519Z tstate=) 2023-01-11T22:12:33.1167888Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.1172803Z #174 _PyEval_Vector (kwnames=, 2023-01-11T22:12:33.1173417Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:33.1174148Z args@entry=, locals=0x0, 2023-01-11T22:12:33.1174535Z locals@entry=, con=0x7fc850d723c0, 2023-01-11T22:12:33.1174938Z con@entry=, tstate=0x1f51b80, 2023-01-11T22:12:33.1175424Z tstate@entry=) 2023-01-11T22:12:33.1175825Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.1176055Z #175 _PyFunction_Vectorcall () 2023-01-11T22:12:33.1176356Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.1181152Z #176 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:33.1181682Z nargsf=, args=0x6e52cc8, callable=0x7fc850d723b0, 2023-01-11T22:12:33.1181974Z tstate=0x1f51b80) 2023-01-11T22:12:33.1182300Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.1186006Z #177 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:33.1186442Z args=0x6e52cc8, callable=0x7fc850d723b0) 2023-01-11T22:12:33.1186901Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.1189220Z #178 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:33.1189666Z pp_stack=, trace_info=0x7ffe1b6a0190, 2023-01-11T22:12:33.1190089Z tstate=) 2023-01-11T22:12:33.1190417Z at 
/usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:33.1190664Z #179 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.1190960Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:33.1191288Z #180 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:33.1191723Z throwflag=, 2023-01-11T22:12:33.1192075Z f=, 2023-01-11T22:12:33.1192434Z tstate=) 2023-01-11T22:12:33.1192821Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.1198054Z #181 _PyEval_Vector (kwnames=, 2023-01-11T22:12:33.1198593Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:33.1199323Z args@entry=, locals=0x0, 2023-01-11T22:12:33.1199781Z locals@entry=, con=0x7fc850d71e20, 2023-01-11T22:12:33.1200332Z con@entry=, tstate=0x1f51b80, 2023-01-11T22:12:33.1200705Z tstate@entry=) 2023-01-11T22:12:33.1201116Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.1201361Z #182 _PyFunction_Vectorcall () 2023-01-11T22:12:33.1201652Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.1201950Z #183 0x00000000004f711d in _PyObject_FastCallDictTstate.localalias () 2023-01-11T22:12:33.1202302Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:153 2023-01-11T22:12:33.1209129Z #184 0x0000000000507af8 in _PyObject_Call_Prepend (kwargs=0x7fc7ac65fe00, 2023-01-11T22:12:33.1209684Z kwargs@entry=, args=0x7fc852018070, 2023-01-11T22:12:33.1210391Z args@entry=, obj=, 2023-01-11T22:12:33.1210802Z obj@entry=, callable=0x7fc850d71e10, 2023-01-11T22:12:33.1211210Z callable@entry=, tstate=0x1f51b80, 2023-01-11T22:12:33.1211695Z tstate@entry=) 2023-01-11T22:12:33.1212092Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:431 2023-01-11T22:12:33.1212319Z #185 slot_tp_init () 2023-01-11T22:12:33.1212612Z at /usr/local/src/conda/python-3.10.8/Programs/gcmodule.c:7734 2023-01-11T22:12:33.1215324Z #186 0x00000000004f7bdb in type_call (kwds=0x7fc7ac65fe00, 2023-01-11T22:12:33.1215746Z args=0x7fc852018070, type=) 2023-01-11T22:12:33.1216284Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:224 2023-01-11T22:12:33.1216531Z #187 _PyObject_MakeTpCall.localalias () 2023-01-11T22:12:33.1216851Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:215 2023-01-11T22:12:33.1219195Z #188 0x00000000004f428d in _PyObject_VectorcallTstate ( 2023-01-11T22:12:33.1219551Z kwnames=0x7fc7ab9a50c0, 2023-01-11T22:12:33.1220145Z nargsf=, args=, 2023-01-11T22:12:33.1220564Z callable=, 2023-01-11T22:12:33.1220930Z tstate=) 2023-01-11T22:12:33.1221323Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:112 2023-01-11T22:12:33.1224123Z #189 _PyObject_VectorcallTstate (kwnames=0x7fc7ab9a50c0, 2023-01-11T22:12:33.1224433Z nargsf=, args=, callable=0x20eeb40, 2023-01-11T22:12:33.1224673Z tstate=0x1f51b80) 2023-01-11T22:12:33.1224980Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:99 2023-01-11T22:12:33.1227263Z #190 PyObject_Vectorcall (kwnames=0x7fc7ab9a50c0, nargsf=, 2023-01-11T22:12:33.1227613Z args=, callable=0x20eeb40) 2023-01-11T22:12:33.1227949Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.1230611Z #191 call_function (kwnames=0x7fc7ab9a50c0, oparg=, 2023-01-11T22:12:33.1231087Z pp_stack=, trace_info=0x7ffe1b6a04b0, 2023-01-11T22:12:33.1231518Z tstate=) 2023-01-11T22:12:33.1231862Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:33.1232153Z #192 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.1232523Z at 
/usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4231 2023-01-11T22:12:33.1232943Z #193 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:33.1233582Z throwflag=, 2023-01-11T22:12:33.1233951Z f=, 2023-01-11T22:12:33.1234308Z tstate=) 2023-01-11T22:12:33.1234684Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:33.1238773Z #194 _PyEval_Vector (kwnames=, 2023-01-11T22:12:33.1239322Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:33.1239940Z args@entry=, locals=0x0, 2023-01-11T22:12:33.1240336Z locals@entry=, con=0x7fc7ab96e8d0, 2023-01-11T22:12:33.1240748Z con@entry=, tstate=0x1f51b80, 2023-01-11T22:12:33.1241140Z tstate@entry=) 2023-01-11T22:12:33.1241538Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.1241850Z #195 _PyFunction_Vectorcall () 2023-01-11T22:12:33.1242159Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:33.1247071Z #196 0x00000000004eecef in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:33.1247376Z nargsf=, args=0x7fc85203dba8, callable=0x7fc7ab96e8c0, 2023-01-11T22:12:33.1247618Z tstate=0x1f51b80) 2023-01-11T22:12:33.1247933Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.1251712Z #197 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:33.1251966Z args=0x7fc85203dba8, callable=0x7fc7ab96e8c0) 2023-01-11T22:12:33.1252311Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:33.1255065Z #198 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:33.1255382Z pp_stack=, trace_info=0x7ffe1b6a0670, 2023-01-11T22:12:33.1255618Z tstate=) 2023-01-11T22:12:33.1255927Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:33.1256158Z #199 _PyEval_EvalFrameDefault () 2023-01-11T22:12:33.1256469Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4213 2023-01-11T22:12:33.1257868Z #200 0x0000000000594b72 in _PyEval_EvalFrame ( 2023-01-11T22:12:33.1258310Z throwflag=, 2023-01-11T22:12:33.1258663Z f=, 2023-01-11T22:12:33.1259021Z tstate=) 2023-01-11T22:12:33.1259484Z at /croot/python-split_1669298683653/_build_env/x86_64-conda-linux-gnu/sysroot/usr/include/bits/call.c:46 2023-01-11T22:12:33.1259752Z #201 _PyEval_Vector () 2023-01-11T22:12:33.1260033Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:33.1266281Z #202 0x0000000000594ab7 in PyEval_EvalCode (co=co@entry=0x7fc850f8f730, 2023-01-11T22:12:33.1266595Z globals=globals@entry=0x7fc8510ca480, locals=locals@entry=0x7fc8510ca480) 2023-01-11T22:12:33.1266943Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:1134 2023-01-11T22:12:33.1268048Z #203 0x00000000005c6e57 in run_eval_code_obj () 2023-01-11T22:12:33.1268382Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:1291 2023-01-11T22:12:33.1270474Z #204 0x00000000005c1d40 in run_mod () 2023-01-11T22:12:33.1270783Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:1312 2023-01-11T22:12:33.1287209Z #205 0x000000000045adf2 in pyrun_file ( 2023-01-11T22:12:33.1287689Z fp=, 2023-01-11T22:12:33.1288354Z filename=, 2023-01-11T22:12:33.1288768Z start=, 2023-01-11T22:12:33.1289196Z globals=, 2023-01-11T22:12:33.1289617Z locals=, 2023-01-11T22:12:33.1289991Z closeit=, flags=0x7ffe1b6a0968) 2023-01-11T22:12:33.1290421Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:1208 2023-01-11T22:12:33.1290729Z #206 0x00000000005bc25f in _PyRun_SimpleFileObject.localalias () 
2023-01-11T22:12:33.1291266Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:456 2023-01-11T22:12:33.1291675Z #207 0x00000000005bc063 in _PyRun_AnyFileObject.localalias () 2023-01-11T22:12:33.1292087Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:90 2023-01-11T22:12:33.1298180Z #208 0x00000000005b8e7d in pymain_run_file_obj (skip_source_first_line=0, 2023-01-11T22:12:33.1298549Z filename=0x7fc851145840, program_name=0x7fc8510b7a00) 2023-01-11T22:12:33.1298887Z at /croot/python-split_1669298683653/work/build-static/python.c:357 2023-01-11T22:12:33.1300044Z #209 pymain_run_file (config=0x1f35e50) 2023-01-11T22:12:33.1300615Z at /croot/python-split_1669298683653/work/build-static/python.c:376 2023-01-11T22:12:33.1302155Z #210 pymain_run_python (exitcode=0x7ffe1b6a0960) 2023-01-11T22:12:33.1302916Z at /croot/python-split_1669298683653/work/build-static/python.c:591 2023-01-11T22:12:33.1303268Z #211 Py_RunMain.localalias () 2023-01-11T22:12:33.1303589Z at /croot/python-split_1669298683653/work/build-static/python.c:670 2023-01-11T22:12:33.1318028Z #212 0x0000000000587c29 in Py_BytesMain (argc=, 2023-01-11T22:12:33.1318317Z argv=) 2023-01-11T22:12:33.1318636Z at /croot/python-split_1669298683653/work/build-static/python.c:1090 2023-01-11T22:12:33.1323878Z #213 0x00007fc8511a8c87 in __libc_start_main (main=0x587be0
, argc=6, 2023-01-11T22:12:33.1324232Z argv=0x7ffe1b6a0b68, init=, fini=, 2023-01-11T22:12:33.1324507Z rtld_fini=, stack_end=0x7ffe1b6a0b58) 2023-01-11T22:12:33.1324781Z at ../csu/libc-start.c:310 2023-01-11T22:12:33.1326087Z #214 0x0000000000587ade in _start () 2023-01-11T22:12:33.1326419Z at /usr/local/src/conda/python-3.10.8/Modules/_io/clinic/peg_api.c:880 2023-01-11T22:12:33.2931170Z GNU gdb (Ubuntu 8.1.1-0ubuntu1) 8.1.1 2023-01-11T22:12:33.2931642Z Copyright (C) 2018 Free Software Foundation, Inc. 2023-01-11T22:12:33.2932080Z License GPLv3+: GNU GPL version 3 or later 2023-01-11T22:12:33.2932656Z This is free software: you are free to change and redistribute it. 2023-01-11T22:12:33.2933131Z There is NO WARRANTY, to the extent permitted by law. Type "show copying" 2023-01-11T22:12:33.2933376Z and "show warranty" for details. 2023-01-11T22:12:33.2933672Z This GDB was configured as "x86_64-linux-gnu". 2023-01-11T22:12:33.2933941Z Type "show configuration" for configuration details. 2023-01-11T22:12:33.2934182Z For bug reporting instructions, please see: 2023-01-11T22:12:33.2934423Z . 2023-01-11T22:12:33.2934691Z Find the GDB manual and other documentation resources online at: 2023-01-11T22:12:33.2934970Z . 2023-01-11T22:12:33.2935183Z For help, type "help". 2023-01-11T22:12:33.2935424Z Type "apropos word" to search for commands related to "word"... 2023-01-11T22:12:33.4071675Z Reading symbols from python...done. 2023-01-11T22:12:33.9656349Z 2023-01-11T22:12:33.9656668Z warning: core file may not match specified executable file. 2023-01-11T22:12:33.9689353Z BFD: warning: /opt/conda/lib/libgomp.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:33.9689984Z BFD: warning: /opt/conda/lib/libgomp.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:33.9690451Z [New LWP 23909] 2023-01-11T22:12:33.9690745Z [New LWP 23921] 2023-01-11T22:12:33.9691249Z BFD: warning: /opt/conda/bin/../lib/libstdc++.so.6: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:33.9691888Z BFD: warning: /opt/conda/bin/../lib/libstdc++.so.6: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:33.9692363Z BFD: warning: /opt/conda/bin/../lib/libgcc_s.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:33.9692716Z BFD: warning: /opt/conda/bin/../lib/libgcc_s.so.1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:33.9692975Z [New LWP 23916] 2023-01-11T22:12:33.9693143Z [New LWP 23913] 2023-01-11T22:12:33.9693299Z [New LWP 23918] 2023-01-11T22:12:33.9693467Z [New LWP 23911] 2023-01-11T22:12:33.9693633Z [New LWP 23923] 2023-01-11T22:12:33.9693785Z [New LWP 23924] 2023-01-11T22:12:33.9706427Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libgfortran.so.5: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:33.9707530Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libgfortran.so.5: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:33.9708083Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libquadmath.so.0: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001 2023-01-11T22:12:33.9708613Z BFD: warning: /opt/conda/lib/python3.10/site-packages/numpy/core/../../../.././libquadmath.so.0: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002 2023-01-11T22:12:33.9932354Z [Thread debugging using libthread_db enabled] 2023-01-11T22:12:33.9932823Z Using host libthread_db library 
"/lib/x86_64-linux-gnu/libthread_db.so.1". 2023-01-11T22:12:40.8850040Z 78 ../sysdeps/unix/syscall-template.S: No such file or directory. 2023-01-11T22:12:40.8850901Z warning: File "/var/lib/jenkins/workspace/.gdbinit" auto-loading has been declined by your `auto-load safe-path' set to "$debugdir:$datadir/auto-load". 2023-01-11T22:12:40.8851445Z Core was generated by `/opt/conda/bin/python -bb -c from multiprocessing.spawn import spawn_main; spaw'. 2023-01-11T22:12:40.8851776Z Program terminated with signal SIGABRT, Aborted. 2023-01-11T22:12:40.8852122Z #0 0x00007fd4cad5d177 in kill () at ../sysdeps/unix/syscall-template.S:78 2023-01-11T22:12:40.8852411Z [Current thread is 1 (Thread 0x7fd4cbcdb200 (LWP 23909))] 2023-01-11T22:12:40.8879231Z To enable execution of this file add 2023-01-11T22:12:40.8879817Z add-auto-load-safe-path /var/lib/jenkins/workspace/.gdbinit 2023-01-11T22:12:40.8880371Z line to your configuration file "/var/lib/jenkins/.gdbinit". 2023-01-11T22:12:40.8880868Z To completely disable this security protection add 2023-01-11T22:12:40.8881356Z set auto-load safe-path / 2023-01-11T22:12:40.8881746Z line to your configuration file "/var/lib/jenkins/.gdbinit". 2023-01-11T22:12:40.8882027Z For more information about this security protection see the 2023-01-11T22:12:40.8882389Z "Auto-loading safe path" section in the GDB manual. E.g., run from the shell: 2023-01-11T22:12:40.8882685Z info "(gdb)Auto-loading safe path" 2023-01-11T22:12:40.8921860Z #0 0x00007fd4cad5d177 in kill () at ../sysdeps/unix/syscall-template.S:78 2023-01-11T22:12:40.8924549Z #1 0x00000000004cb0d3 in os_kill_impl ( 2023-01-11T22:12:40.8924921Z module=, 2023-01-11T22:12:40.8925287Z signal=, 2023-01-11T22:12:40.8925874Z pid=) 2023-01-11T22:12:40.8926424Z at /croot/python-split_1669298683653/_build_env/x86_64-conda-linux-gnu/sysroot/usr/include/sys/_iomodule.c:7929 2023-01-11T22:12:40.8929858Z #2 os_kill (module=, args=args@entry=0x7fd424f8ebf8, 2023-01-11T22:12:40.8930208Z nargs=) 2023-01-11T22:12:40.8930532Z at /usr/local/src/conda/python-3.10.8/Modules/codecs.c:3581 2023-01-11T22:12:40.8938204Z #3 0x00000000004fe7d4 in cfunction_vectorcall_FASTCALL (func=0x7fd4cbca18f0, 2023-01-11T22:12:40.8938777Z args=0x7fd424f8ebf8, nargsf=, kwnames=) 2023-01-11T22:12:40.8939197Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_bitutils.h:430 2023-01-11T22:12:40.8946141Z #4 0x00000000004f351e in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:40.8946518Z nargsf=, args=0x7fd424f8ebf8, callable=0x7fd4cbca18f0, 2023-01-11T22:12:40.8946765Z tstate=0x2096d10) 2023-01-11T22:12:40.8947113Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:40.8950716Z #5 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:40.8950996Z args=0x7fd424f8ebf8, callable=0x7fd4cbca18f0) 2023-01-11T22:12:40.8951354Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:40.8953908Z #6 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:40.8954462Z pp_stack=, trace_info=0x7ffcf67604f0, 2023-01-11T22:12:40.8954705Z tstate=) 2023-01-11T22:12:40.8955030Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:40.8955277Z #7 _PyEval_EvalFrameDefault () 2023-01-11T22:12:40.8955678Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4181 2023-01-11T22:12:40.8956163Z #8 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:40.8956518Z throwflag=, 2023-01-11T22:12:40.8956895Z f=, 2023-01-11T22:12:40.8957264Z tstate=) 
2023-01-11T22:12:40.8957676Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:40.8962377Z #9 _PyEval_Vector (kwnames=, 2023-01-11T22:12:40.8963002Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:40.8963710Z args@entry=, locals=0x0, 2023-01-11T22:12:40.8964111Z locals@entry=, con=0x7fd424dc7890, 2023-01-11T22:12:40.8964501Z con@entry=, tstate=0x2096d10, 2023-01-11T22:12:40.8964888Z tstate@entry=) 2023-01-11T22:12:40.8965300Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:40.8965542Z #10 _PyFunction_Vectorcall () 2023-01-11T22:12:40.8965832Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:40.8970165Z #11 0x00000000004f141c in do_call_core (kwdict=0x0, callargs=0x7fd4caa53be0, 2023-01-11T22:12:40.8970570Z func=0x7fd424dc7880, trace_info=0x7ffcf67606b0, tstate=) 2023-01-11T22:12:40.8970930Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5943 2023-01-11T22:12:40.8971213Z #12 _PyEval_EvalFrameDefault () 2023-01-11T22:12:40.8971648Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4277 2023-01-11T22:12:40.8972130Z #13 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:40.8972852Z throwflag=, 2023-01-11T22:12:40.8973510Z f=, 2023-01-11T22:12:40.8974169Z tstate=) 2023-01-11T22:12:40.8974652Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:40.8978241Z #14 _PyEval_Vector (kwnames=, 2023-01-11T22:12:40.8978838Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:40.8979540Z args@entry=, locals=0x0, 2023-01-11T22:12:40.8979942Z locals@entry=, con=0x7fd427b629f0, 2023-01-11T22:12:40.8980349Z con@entry=, tstate=0x2096d10, 2023-01-11T22:12:40.8980721Z tstate@entry=) 2023-01-11T22:12:40.8981118Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:40.8981360Z #15 _PyFunction_Vectorcall () 2023-01-11T22:12:40.8981728Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:40.8986642Z #16 0x00000000004f141c in do_call_core (kwdict=0x7fd4cab627c0, 2023-01-11T22:12:40.8987176Z callargs=0x7fd424dca9d0, func=0x7fd427b629e0, trace_info=0x7ffcf6760870, 2023-01-11T22:12:40.8987524Z tstate=) 2023-01-11T22:12:40.8987864Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5943 2023-01-11T22:12:40.8988220Z #17 _PyEval_EvalFrameDefault () 2023-01-11T22:12:40.8988594Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4277 2023-01-11T22:12:40.8988849Z #18 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:40.8989158Z throwflag=, 2023-01-11T22:12:40.8989520Z f=, 2023-01-11T22:12:40.8989878Z tstate=) 2023-01-11T22:12:40.8990249Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:40.8994448Z #19 _PyEval_Vector (kwnames=, 2023-01-11T22:12:40.8994953Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:40.8995395Z args@entry=, locals=0x0, 2023-01-11T22:12:40.8995788Z locals@entry=, con=0x7fd4cab8d5b0, 2023-01-11T22:12:40.8996192Z con@entry=, tstate=0x2096d10, 2023-01-11T22:12:40.8996581Z tstate@entry=) 2023-01-11T22:12:40.8996980Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:40.8997214Z #20 _PyFunction_Vectorcall () 2023-01-11T22:12:40.8997514Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:40.9003096Z #21 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:40.9003576Z nargsf=, args=0x6fa5608, callable=0x7fd4cab8d5a0, 2023-01-11T22:12:40.9003988Z tstate=0x2096d10) 
2023-01-11T22:12:40.9004313Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:40.9008221Z #22 PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0x6fa5608, 2023-01-11T22:12:40.9008671Z callable=0x7fd4cab8d5a0) 2023-01-11T22:12:40.9009262Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:40.9011725Z #23 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:40.9012194Z pp_stack=, trace_info=0x7ffcf6760a30, 2023-01-11T22:12:40.9012626Z tstate=) 2023-01-11T22:12:40.9013336Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:40.9013867Z #24 _PyEval_EvalFrameDefault () 2023-01-11T22:12:40.9014404Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:40.9014823Z #25 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:40.9015134Z throwflag=, 2023-01-11T22:12:40.9015483Z f=, 2023-01-11T22:12:40.9015846Z tstate=) 2023-01-11T22:12:40.9016225Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:40.9020200Z #26 _PyEval_Vector (kwnames=, 2023-01-11T22:12:40.9020984Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:40.9021565Z args@entry=, locals=0x0, 2023-01-11T22:12:40.9021977Z locals@entry=, con=0x7fd4cab8df40, 2023-01-11T22:12:40.9022712Z con@entry=, tstate=0x2096d10, 2023-01-11T22:12:40.9023170Z tstate@entry=) 2023-01-11T22:12:40.9023605Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:40.9023854Z #27 _PyFunction_Vectorcall () 2023-01-11T22:12:40.9024162Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:40.9028675Z #28 0x00000000004ef101 in _PyObject_VectorcallTstate (kwnames=0x0, 2023-01-11T22:12:40.9029072Z nargsf=, args=0x21136c0, callable=0x7fd4cab8df30, 2023-01-11T22:12:40.9029314Z tstate=0x2096d10) 2023-01-11T22:12:40.9029624Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:40.9033893Z #29 PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0x21136c0, 2023-01-11T22:12:40.9034369Z callable=0x7fd4cab8df30) 2023-01-11T22:12:40.9034828Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:40.9037239Z #30 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:40.9037732Z pp_stack=, trace_info=0x7ffcf6760bf0, 2023-01-11T22:12:40.9038137Z tstate=) 2023-01-11T22:12:40.9038443Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:40.9038688Z #31 _PyEval_EvalFrameDefault () 2023-01-11T22:12:40.9038994Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4198 2023-01-11T22:12:40.9039350Z #32 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:40.9039945Z throwflag=, 2023-01-11T22:12:40.9040479Z f=, 2023-01-11T22:12:40.9040839Z tstate=) 2023-01-11T22:12:40.9041215Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:40.9045973Z #33 _PyEval_Vector (kwnames=, 2023-01-11T22:12:40.9046527Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:40.9046963Z args@entry=, locals=0x0, 2023-01-11T22:12:40.9047351Z locals@entry=, con=0x7fd4caa12de0, 2023-01-11T22:12:40.9047759Z con@entry=, tstate=0x2096d10, 2023-01-11T22:12:40.9048151Z tstate@entry=) 2023-01-11T22:12:40.9048546Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:40.9048775Z #34 _PyFunction_Vectorcall () 2023-01-11T22:12:40.9049077Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:40.9054219Z #35 0x00000000004eecef in _PyObject_VectorcallTstate 
(kwnames=0x0, 2023-01-11T22:12:40.9054638Z nargsf=, args=0x7fd4cacd4200, callable=0x7fd4caa12dd0, 2023-01-11T22:12:40.9054889Z tstate=0x2096d10) 2023-01-11T22:12:40.9055204Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:40.9059003Z #36 PyObject_Vectorcall (kwnames=0x0, nargsf=, 2023-01-11T22:12:40.9059386Z args=0x7fd4cacd4200, callable=0x7fd4caa12dd0) 2023-01-11T22:12:40.9059735Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:40.9062876Z #37 call_function (kwnames=0x0, oparg=, 2023-01-11T22:12:40.9063335Z pp_stack=, trace_info=0x7ffcf6760db0, 2023-01-11T22:12:40.9063792Z tstate=) 2023-01-11T22:12:40.9064208Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:40.9064523Z #38 _PyEval_EvalFrameDefault () 2023-01-11T22:12:40.9065089Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4213 2023-01-11T22:12:40.9065579Z #39 0x00000000004fe5ef in _PyEval_EvalFrame ( 2023-01-11T22:12:40.9065947Z throwflag=, 2023-01-11T22:12:40.9066298Z f=, 2023-01-11T22:12:40.9066655Z tstate=) 2023-01-11T22:12:40.9067041Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5052 2023-01-11T22:12:40.9071091Z #40 _PyEval_Vector (kwnames=, 2023-01-11T22:12:40.9071736Z kwnames@entry=, argcount=, args=, 2023-01-11T22:12:40.9072290Z args@entry=, locals=0x0, 2023-01-11T22:12:40.9072699Z locals@entry=, con=0x7fd4caa12d50, 2023-01-11T22:12:40.9073103Z con@entry=, tstate=0x2096d10, 2023-01-11T22:12:40.9073475Z tstate@entry=) 2023-01-11T22:12:40.9073872Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:40.9074113Z #41 _PyFunction_Vectorcall () 2023-01-11T22:12:40.9074402Z at /usr/local/src/conda/python-3.10.8/Include/abstract.h:342 2023-01-11T22:12:40.9077895Z #42 0x00000000004efd83 in _PyObject_VectorcallTstate (kwnames=0x7fd4cacadd80, 2023-01-11T22:12:40.9078464Z nargsf=, args=, callable=0x7fd4caa12d40, 2023-01-11T22:12:40.9078796Z tstate=0x2096d10) 2023-01-11T22:12:40.9079111Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:40.9081422Z #43 PyObject_Vectorcall (kwnames=0x7fd4cacadd80, nargsf=, 2023-01-11T22:12:40.9081934Z args=, callable=0x7fd4caa12d40) 2023-01-11T22:12:40.9082289Z at /usr/local/src/conda/python-3.10.8/Objects/pycore_pyerrors.h:123 2023-01-11T22:12:40.9084469Z #44 call_function (kwnames=0x7fd4cacadd80, oparg=, 2023-01-11T22:12:40.9084989Z pp_stack=, trace_info=0x7ffcf6760f70, 2023-01-11T22:12:40.9085361Z tstate=) 2023-01-11T22:12:40.9085668Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5891 2023-01-11T22:12:40.9085909Z #45 _PyEval_EvalFrameDefault () 2023-01-11T22:12:40.9086212Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:4231 2023-01-11T22:12:40.9087384Z #46 0x0000000000594b72 in _PyEval_EvalFrame ( 2023-01-11T22:12:40.9087786Z throwflag=, 2023-01-11T22:12:40.9088161Z f=, 2023-01-11T22:12:40.9088513Z tstate=) 2023-01-11T22:12:40.9088960Z at /croot/python-split_1669298683653/_build_env/x86_64-conda-linux-gnu/sysroot/usr/include/bits/call.c:46 2023-01-11T22:12:40.9089225Z #47 _PyEval_Vector () 2023-01-11T22:12:40.9089622Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:5065 2023-01-11T22:12:40.9096110Z #48 0x0000000000594ab7 in PyEval_EvalCode (co=co@entry=0x7fd4cac70920, 2023-01-11T22:12:40.9096669Z globals=globals@entry=0x7fd4cac625c0, locals=locals@entry=0x7fd4cac625c0) 2023-01-11T22:12:40.9097148Z at /usr/local/src/conda/python-3.10.8/Modules/ceval_gil.h:1134 
2023-01-11T22:12:40.9097831Z #49 0x00000000005c6e57 in run_eval_code_obj () 2023-01-11T22:12:40.9098430Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:1291 2023-01-11T22:12:40.9100776Z #50 0x00000000005c1d40 in run_mod () 2023-01-11T22:12:40.9101199Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:1312 2023-01-11T22:12:40.9103224Z #51 0x00000000005b9ebb in PyRun_StringFlags.localalias () 2023-01-11T22:12:40.9103625Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:1183 2023-01-11T22:12:40.9105849Z #52 0x00000000005b9cfb in PyRun_SimpleStringFlags.localalias () 2023-01-11T22:12:40.9106470Z at /usr/local/src/conda/python-3.10.8/Objects/clinic/marshal.c.h:503 2023-01-11T22:12:40.9108400Z #53 0x00000000005b8d5c in pymain_run_command ( 2023-01-11T22:12:40.9108771Z command=) 2023-01-11T22:12:40.9109189Z at /croot/python-split_1669298683653/work/build-static/python.c:252 2023-01-11T22:12:40.9110992Z #54 pymain_run_python (exitcode=0x7ffcf67611d0) 2023-01-11T22:12:40.9111611Z at /croot/python-split_1669298683653/work/build-static/python.c:582 2023-01-11T22:12:40.9111948Z #55 Py_RunMain.localalias () 2023-01-11T22:12:40.9112272Z at /croot/python-split_1669298683653/work/build-static/python.c:670 2023-01-11T22:12:40.9128184Z #56 0x0000000000587c29 in Py_BytesMain (argc=, 2023-01-11T22:12:40.9128467Z argv=) 2023-01-11T22:12:40.9128792Z at /croot/python-split_1669298683653/work/build-static/python.c:1090 2023-01-11T22:12:40.9133903Z #57 0x00007fd4cad3fc87 in __libc_start_main (main=0x587be0
, argc=5, 2023-01-11T22:12:40.9134240Z argv=0x7ffcf67613d8, init=, fini=, 2023-01-11T22:12:40.9134515Z rtld_fini=, stack_end=0x7ffcf67613c8) 2023-01-11T22:12:40.9134793Z at ../csu/libc-start.c:310 2023-01-11T22:12:40.9135719Z #58 0x0000000000587ade in _start () 2023-01-11T22:12:41.6774989Z at /usr/local/src/conda/python-3.10.8/Modules/_io/clinic/peg_api.c:880 2023-01-11T22:12:41.6800602Z ##[group]Run set -x 2023-01-11T22:12:41.6800837Z set -x 2023-01-11T22:12:41.6801092Z python3 -m pip install -r requirements.txt 2023-01-11T22:12:41.6801361Z python3 -m pip install boto3==1.19.12 2023-01-11T22:12:41.6801685Z python3 -m tools.stats.print_test_stats --upload-to-s3 --compare-with-s3 test 2023-01-11T22:12:41.6812630Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-01-11T22:12:41.6812865Z env: 2023-01-11T22:12:41.6813072Z GIT_DEFAULT_BRANCH: master 2023-01-11T22:12:41.6813379Z DOCKER_CONTAINER_ID: a71a466242c1f37c1e0011471b01ee39ce7e013772770194af88dbad63e7c256 2023-01-11T22:12:41.6813660Z AWS_DEFAULT_REGION: us-east-1 2023-01-11T22:12:41.6813876Z BRANCH: 2023-01-11T22:12:41.6814083Z TEST_CONFIG: nogpu_NO_AVX2 2023-01-11T22:12:41.6814302Z SHARD_NUMBER: 1 2023-01-11T22:12:41.6814552Z BUILD_ENVIRONMENT: linux-bionic-cuda11.7-py3.10-gcc7 2023-01-11T22:12:41.6814813Z PR_NUMBER: 2023-01-11T22:12:41.6815048Z PYTORCH_RETRY_TEST_CASES: 1 2023-01-11T22:12:41.6815293Z PYTORCH_OVERRIDE_FLAKY_SIGNAL: 1 2023-01-11T22:12:41.6815560Z SHA1: 8419ddda87c8a47eacc63b54bc7ec98c1f27c26e 2023-01-11T22:12:41.6815808Z TAG: ciflow/trunk/91627 2023-01-11T22:12:41.6816012Z WORKFLOW_ID: 3896346758 2023-01-11T22:12:41.6816366Z GITHUB_TOKEN: *** 2023-01-11T22:12:41.6816592Z GHA_WORKFLOW_JOB_ID: 10589559916 2023-01-11T22:12:41.6816797Z ##[endgroup] 2023-01-11T22:12:41.6841554Z + python3 -m pip install -r requirements.txt 2023-01-11T22:12:41.8994417Z Defaulting to user installation because normal site-packages is not writeable 2023-01-11T22:12:41.9283991Z Requirement already satisfied: astunparse in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 2)) (1.6.3) 2023-01-11T22:12:41.9311475Z Requirement already satisfied: expecttest in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 3)) (0.1.4) 2023-01-11T22:12:41.9319418Z Requirement already satisfied: future in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (0.18.2) 2023-01-11T22:12:41.9327529Z Requirement already satisfied: hypothesis in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 5)) (6.62.0) 2023-01-11T22:12:41.9691392Z Requirement already satisfied: numpy in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 6)) (1.21.6) 2023-01-11T22:12:41.9700327Z Requirement already satisfied: psutil in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 7)) (5.9.1) 2023-01-11T22:12:41.9776116Z Requirement already satisfied: pyyaml in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 8)) (6.0) 2023-01-11T22:12:41.9784214Z Requirement already satisfied: requests in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 9)) (2.26.0) 2023-01-11T22:12:42.0010683Z Requirement already satisfied: setuptools in /usr/lib/python3.7/site-packages (from -r requirements.txt (line 10)) (49.1.3) 2023-01-11T22:12:42.0176506Z Requirement already satisfied: six in /home/ec2-user/.local/lib/python3.7/site-packages (from -r 
requirements.txt (line 11)) (1.16.0) 2023-01-11T22:12:42.0185237Z Requirement already satisfied: types-dataclasses in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 12)) (0.6.6) 2023-01-11T22:12:42.0190703Z Requirement already satisfied: typing_extensions in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 13)) (4.4.0) 2023-01-11T22:12:42.0199971Z Requirement already satisfied: sympy in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 14)) (1.10.1) 2023-01-11T22:12:42.0218250Z Requirement already satisfied: filelock in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 15)) (3.9.0) 2023-01-11T22:12:42.0289389Z Requirement already satisfied: networkx in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 16)) (2.6.3) 2023-01-11T22:12:42.0445516Z Requirement already satisfied: jinja2 in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 17)) (3.1.2) 2023-01-11T22:12:42.0470476Z Requirement already satisfied: wheel<1.0,>=0.23.0 in /home/ec2-user/.local/lib/python3.7/site-packages (from astunparse->-r requirements.txt (line 2)) (0.38.4) 2023-01-11T22:12:42.0486284Z Requirement already satisfied: attrs>=19.2.0 in /home/ec2-user/.local/lib/python3.7/site-packages (from hypothesis->-r requirements.txt (line 5)) (22.2.0) 2023-01-11T22:12:42.0741013Z Requirement already satisfied: sortedcontainers<3.0.0,>=2.1.0 in /home/ec2-user/.local/lib/python3.7/site-packages (from hypothesis->-r requirements.txt (line 5)) (2.4.0) 2023-01-11T22:12:42.0752485Z Requirement already satisfied: exceptiongroup>=1.0.0; python_version < "3.11" in /home/ec2-user/.local/lib/python3.7/site-packages (from hypothesis->-r requirements.txt (line 5)) (1.1.0) 2023-01-11T22:12:42.0769075Z Requirement already satisfied: certifi>=2017.4.17 in /home/ec2-user/.local/lib/python3.7/site-packages (from requests->-r requirements.txt (line 9)) (2022.12.7) 2023-01-11T22:12:42.0777473Z Requirement already satisfied: idna<4,>=2.5; python_version >= "3" in /home/ec2-user/.local/lib/python3.7/site-packages (from requests->-r requirements.txt (line 9)) (3.4) 2023-01-11T22:12:42.0788572Z Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/ec2-user/.local/lib/python3.7/site-packages (from requests->-r requirements.txt (line 9)) (1.26.14) 2023-01-11T22:12:42.0942691Z Requirement already satisfied: charset-normalizer~=2.0.0; python_version >= "3" in /home/ec2-user/.local/lib/python3.7/site-packages (from requests->-r requirements.txt (line 9)) (2.0.12) 2023-01-11T22:12:42.0961264Z Requirement already satisfied: mpmath>=0.19 in /home/ec2-user/.local/lib/python3.7/site-packages (from sympy->-r requirements.txt (line 14)) (1.2.1) 2023-01-11T22:12:42.1016886Z Requirement already satisfied: MarkupSafe>=2.0 in /home/ec2-user/.local/lib/python3.7/site-packages (from jinja2->-r requirements.txt (line 17)) (2.1.1) 2023-01-11T22:12:42.1651380Z + python3 -m pip install boto3==1.19.12 2023-01-11T22:12:42.3770955Z Defaulting to user installation because normal site-packages is not writeable 2023-01-11T22:12:42.3950164Z Requirement already satisfied: boto3==1.19.12 in /home/ec2-user/.local/lib/python3.7/site-packages (1.19.12) 2023-01-11T22:12:42.3998664Z Requirement already satisfied: botocore<1.23.0,>=1.22.12 in /home/ec2-user/.local/lib/python3.7/site-packages (from boto3==1.19.12) (1.22.12) 2023-01-11T22:12:42.4040783Z Requirement already satisfied: 
s3transfer<0.6.0,>=0.5.0 in /home/ec2-user/.local/lib/python3.7/site-packages (from boto3==1.19.12) (0.5.2) 2023-01-11T22:12:42.4073784Z Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/.local/lib/python3.7/site-packages (from boto3==1.19.12) (0.10.0) 2023-01-11T22:12:42.4085957Z Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/.local/lib/python3.7/site-packages (from botocore<1.23.0,>=1.22.12->boto3==1.19.12) (1.26.14) 2023-01-11T22:12:42.4236718Z Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/.local/lib/python3.7/site-packages (from botocore<1.23.0,>=1.22.12->boto3==1.19.12) (2.8.2) 2023-01-11T22:12:42.4254313Z Requirement already satisfied: six>=1.5 in /home/ec2-user/.local/lib/python3.7/site-packages (from python-dateutil<3.0.0,>=2.1->botocore<1.23.0,>=1.22.12->boto3==1.19.12) (1.16.0) 2023-01-11T22:12:42.6051766Z + python3 -m tools.stats.print_test_stats --upload-to-s3 --compare-with-s3 test 2023-01-11T22:15:03.8392290Z [scribe] Scribe access token not provided, sending report via boto3... 2023-01-11T22:15:03.8393433Z ERROR ENCOUNTERED WHEN UPLOADING TO SCRIBE: {"errorMessage":"2023-01-11T22:14:47.494Z 00aa022d-ed99-4cd4-8cab-bb5449156722 Task timed out after 60.01 seconds"} 2023-01-11T22:15:03.8393917Z 2023-01-11T22:15:03.8395103Z ----- Historic stats comparison result ------ 2023-01-11T22:15:03.8395609Z 2023-01-11T22:15:03.8395802Z job: linux-bionic-cuda11.7-py3.10-gcc7 2023-01-11T22:15:03.8396078Z commit: 8419ddda87c8a47eacc63b54bc7ec98c1f27c26e 2023-01-11T22:15:03.8396230Z 2023-01-11T22:15:03.8396380Z Commit graph (base is most recent master ancestor with at least one S3 report): 2023-01-11T22:15:03.8396545Z 2023-01-11T22:15:03.8396613Z : (master) 2023-01-11T22:15:03.8396774Z | 2023-01-11T22:15:03.8396973Z | * 8419ddda87 (HEAD) total time 3511.30s 2023-01-11T22:15:03.8397159Z | | 2023-01-11T22:15:03.8401078Z | : (2 commits) 2023-01-11T22:15:03.8401313Z |/ 2023-01-11T22:15:03.8401834Z * db2a237763 (base) 11 reports, total time 4966.48s ± 3495.23s 2023-01-11T22:15:03.8402329Z * 2b0abd4ce3 11 reports, total time 4990.75s ± 3463.36s 2023-01-11T22:15:03.8402871Z * f7939b21e1 33 reports, total time 3500.97s ± 3537.57s 2023-01-11T22:15:03.8403359Z * cb3204823e 11 reports, total time 4951.99s ± 3458.09s 2023-01-11T22:15:03.8403685Z * 6e236553f5 11 reports, total time 4964.39s ± 3513.19s 2023-01-11T22:15:03.8404867Z * cce577b391 11 reports, total time 4938.35s ± 3358.29s 2023-01-11T22:15:03.8405186Z * fae821c2f1 11 reports, total time 4751.06s ± 3169.39s 2023-01-11T22:15:03.8405497Z * 0c3659586d 11 reports, total time 4713.33s ± 3185.77s 2023-01-11T22:15:03.8405808Z * 122245985a 11 reports, total time 4767.91s ± 3184.40s 2023-01-11T22:15:03.8406104Z * b797a24259 11 reports, total time 4784.26s ± 3253.10s 2023-01-11T22:15:03.8406487Z | 2023-01-11T22:15:03.8406636Z : 2023-01-11T22:15:03.8406714Z 2023-01-11T22:15:03.8406834Z Removed (across 859 suites) 0 tests, totaling 0.00s 2023-01-11T22:15:03.8407095Z Modified (across 0 suites) 0 tests, totaling 0.00s 2023-01-11T22:15:03.8407360Z Added (across 682 suites) 17743 tests, totaling +3511.30s 2023-01-11T22:15:03.9233633Z ##[group]Run pytorch/test-infra/.github/actions/teardown-linux@main 2023-01-11T22:15:03.9233874Z with: 2023-01-11T22:15:03.9234012Z env: 2023-01-11T22:15:03.9234416Z GIT_DEFAULT_BRANCH: master 2023-01-11T22:15:03.9234685Z DOCKER_CONTAINER_ID: a71a466242c1f37c1e0011471b01ee39ce7e013772770194af88dbad63e7c256 2023-01-11T22:15:03.9234921Z ##[endgroup] 
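Note on the per-commit summaries printed above (e.g. "11 reports, total time 4966.48s ± 3495.23s"): each line appears to aggregate the S3 test reports for that commit into a report count plus a total time with a spread. A minimal sketch of that kind of aggregation, assuming a hypothetical list of per-report total times in seconds and a mean ± sample standard deviation; this is illustrative only, not the actual tools.stats.print_test_stats implementation:

    # Sketch only: aggregate hypothetical per-report totals into "N reports, total time mean ± stdev".
    from statistics import mean, stdev

    def summarize_reports(report_totals_s):
        # report_totals_s: hypothetical per-report total times, in seconds
        n = len(report_totals_s)
        avg = mean(report_totals_s)
        spread = stdev(report_totals_s) if n > 1 else 0.0
        return f"{n} reports, total time {avg:.2f}s ± {spread:.2f}s"

    # Example with made-up values:
    print(summarize_reports([3600.0, 4800.0, 1200.0]))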
2023-01-11T22:15:03.9248149Z ##[group]Run set -eou pipefail 2023-01-11T22:15:03.9248363Z set -eou pipefail 2023-01-11T22:15:03.9248542Z  2023-01-11T22:15:03.9248777Z echo "Holding runner for 2 hours until all ssh sessions have logged out" 2023-01-11T22:15:03.9249022Z for _ in $(seq 1440); do 2023-01-11T22:15:03.9249238Z  # Break if no ssh session exists anymore 2023-01-11T22:15:03.9249453Z  if [ "$(who)" = "" ]; then 2023-01-11T22:15:03.9249633Z  break 2023-01-11T22:15:03.9249785Z  fi 2023-01-11T22:15:03.9249981Z  echo "." 2023-01-11T22:15:03.9250140Z  sleep 5 2023-01-11T22:15:03.9250305Z done 2023-01-11T22:15:03.9261313Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-01-11T22:15:03.9261531Z env: 2023-01-11T22:15:03.9261703Z GIT_DEFAULT_BRANCH: master 2023-01-11T22:15:03.9261958Z DOCKER_CONTAINER_ID: a71a466242c1f37c1e0011471b01ee39ce7e013772770194af88dbad63e7c256 2023-01-11T22:15:03.9262205Z ##[endgroup] 2023-01-11T22:15:03.9286089Z Holding runner for 2 hours until all ssh sessions have logged out 2023-01-11T22:15:03.9373873Z ##[group]Run # ignore expansion of "docker ps -q" since it could be empty 2023-01-11T22:15:03.9374178Z # ignore expansion of "docker ps -q" since it could be empty 2023-01-11T22:15:03.9374426Z # shellcheck disable=SC2046 2023-01-11T22:15:03.9374650Z docker stop $(docker ps -q) || true 2023-01-11T22:15:03.9374875Z # Prune all of the docker images 2023-01-11T22:15:03.9375079Z docker system prune -af 2023-01-11T22:15:03.9385687Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-01-11T22:15:03.9385905Z env: 2023-01-11T22:15:03.9386082Z GIT_DEFAULT_BRANCH: master 2023-01-11T22:15:03.9386338Z DOCKER_CONTAINER_ID: a71a466242c1f37c1e0011471b01ee39ce7e013772770194af88dbad63e7c256 2023-01-11T22:15:03.9386588Z ##[endgroup] 2023-01-11T22:15:04.3461790Z a71a466242c1 2023-01-11T22:15:06.7688971Z Deleted Containers: 2023-01-11T22:15:06.7689276Z a71a466242c1f37c1e0011471b01ee39ce7e013772770194af88dbad63e7c256 2023-01-11T22:15:06.7689447Z 2023-01-11T22:15:12.5395334Z Deleted Images: 2023-01-11T22:15:12.5396468Z untagged: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.7-cudnn8-py3-gcc7:fd224c2e6c79d7fdec6408da598bf52bc5b201dd 2023-01-11T22:15:12.5397853Z untagged: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.7-cudnn8-py3-gcc7@sha256:0da23f4faf0ce20770149c4a783e08eaa91c07112511dc5511c77937c66edb24 2023-01-11T22:15:12.5398452Z deleted: sha256:dd055998e88c3bb7db98caef99cc4aaaa492114a459a38a5f0ab49c735f40318 2023-01-11T22:15:12.5398776Z deleted: sha256:e4008aa27d9451086197883cfac22b827879bbe380f63c8c39e3db8313773f3c 2023-01-11T22:15:12.5399112Z deleted: sha256:acc638ed73c788f1c8fdbf04e65d27fa42e6c32d67dbdb50616e173ef284a563 2023-01-11T22:15:12.5399572Z deleted: sha256:f5db2d6ac11f27c63a5f2d0250a45efbd078c37a32d2e2973e544ea526501ba9 2023-01-11T22:15:12.5400215Z deleted: sha256:ce58ef265c69d549d5071cb5418ec43a703978b2c7a88b1141673272cd29de77 2023-01-11T22:15:12.5400682Z deleted: sha256:2e1d5c5ea4a63305e9617c6ab380960f330e1755c39c714f3a3eefb2b603e92a 2023-01-11T22:15:12.5401145Z deleted: sha256:f6e5b9727392978f694412774dde4231ab32666b8604ff0d64727308d45a9163 2023-01-11T22:15:12.5401478Z deleted: sha256:0cfd1beb2f29eb03cb53178c729bf68f37f74bbd713aa6e3a6b1dd0b8121eb61 2023-01-11T22:15:12.5401807Z deleted: sha256:8ec9997c9a1e58cb7d553f0862f1f975bdc1bbd0d0297b6c483bf8731508824b 2023-01-11T22:15:12.5402284Z deleted: sha256:367991b9b8c74307719e7037512b4d1d67917a888c40c4ed86af9843f0c38c77 
2023-01-11T22:15:12.5402770Z deleted: sha256:83b736f4ed1139df1136ab38e29568f048e4ce111951d4fdc1a508713303ca00 2023-01-11T22:15:12.5403352Z deleted: sha256:d2ca56a8d719ce8cfe4b65b165cdee296b1b456c56eb3e06990ca62ba42bc18a 2023-01-11T22:15:12.5403783Z deleted: sha256:cdfc9a191a57609f700ced389b19973ef4436b66b1d47313c799be166d6fce4b 2023-01-11T22:15:12.5404105Z deleted: sha256:293a82b424c32cb77ae038879dff6856d5fd08e5b9db2c5b015f424ebf88d24b 2023-01-11T22:15:12.5404432Z deleted: sha256:21a63ac2015ba16ccb51b9090e0f4b78a8437ca4dd189f376d1e9f45e6c74d3e 2023-01-11T22:15:12.5404747Z deleted: sha256:afc58208e3d75e15f79b0f474b361bbc1e21b8eb232fe613a1c79b8415827c86 2023-01-11T22:15:12.5405090Z deleted: sha256:86179776c9e36c00cf4c4a64f717303555b00dc594ab664fb29cfdb707eece2b 2023-01-11T22:15:12.5405392Z deleted: sha256:45f06a2900391711a143b22a31156d3013a0f37430ff8367ab4dbb27ac33381e 2023-01-11T22:15:12.5405706Z deleted: sha256:d7d350150f2a9a9ac81a59d44fd01f74d44f5f354f07a2f055aea97c8be52d92 2023-01-11T22:15:12.5406037Z deleted: sha256:eae59ad1f09ea2b1f99a568bfd5f85dae44a3af9b46b5ac5af19931de9e8fb8d 2023-01-11T22:15:12.5406377Z deleted: sha256:d734a02f47eda33d1022c9642ccfc469b44f5b104ab8ae7aa855ea76be550288 2023-01-11T22:15:12.5406723Z deleted: sha256:643cbebfdfe0cee71fed16a3233b2ee4b6392d91833a41269cdc80c8d0841ae9 2023-01-11T22:15:12.5407055Z deleted: sha256:3a4db9e7d414af3a157be5297f5b7bcddc4c63c0e83221d36728ddc32f84bc2d 2023-01-11T22:15:12.5407398Z deleted: sha256:d93fc8ff6f3d356d012ef7cca3d02018fa3c0a4be9ca2ae4ce78d174274ce530 2023-01-11T22:15:12.5407749Z deleted: sha256:cdd9af1450d9ea8f070e7c03dfe076d49ab615a4d3558be68bd7a4f16d804038 2023-01-11T22:15:12.5408080Z deleted: sha256:de611f0f78a22b1c6c68620370fb99a959c668d8c38a2cc832912e784f94869f 2023-01-11T22:15:12.5408380Z deleted: sha256:0c7878e2d089271e8e5c181eef49d0a43c99b827d1a60dafd15a54627d9146b4 2023-01-11T22:15:12.5408698Z deleted: sha256:55fe9bdaa629b37baee75e1f6878ab1941e000e0937ef45452a9482f29de4577 2023-01-11T22:15:12.5409032Z deleted: sha256:1e641da8b29087f06ad852506d00c1eaaedaeed0bf2a451c1d462c0b476c169f 2023-01-11T22:15:12.5409338Z deleted: sha256:77079a0a22a485736fdd6052b5270d4fb8ee1771976ed0d55e0f315cbc6d1da5 2023-01-11T22:15:12.5409650Z deleted: sha256:b3dda998ed6389c88d249f3aa8d96b2f94876931c45284ef00425cdea77b7c07 2023-01-11T22:15:12.5409968Z deleted: sha256:7dcf69834443047244edadb0a7016bc391279d485de12e8d2d8aad25af532912 2023-01-11T22:15:12.5410288Z deleted: sha256:932db4f0d0b27a0b32d81f5e108fdec1c5433e7b4614cfdc39d64189a59bc228 2023-01-11T22:15:12.5410609Z deleted: sha256:99ea7bfa823ad0725ffcc42e5dd47b90270c3a3e49a0b2a31adae4497d029331 2023-01-11T22:15:12.5410942Z deleted: sha256:58c3ab544a412f163fd3613ab991ea85fa1ae7c97f5a6cbc7b86a1b97fdd5484 2023-01-11T22:15:12.5411268Z deleted: sha256:fd75951227e5b4de5b96f5ee360cb3f1caf2f32a33ca89976013a385d23a34be 2023-01-11T22:15:12.5411585Z deleted: sha256:980d2a371b758adf16fc78370ed6b8bcf77846721ef4f20da94a6d1299457ad6 2023-01-11T22:15:12.5411902Z deleted: sha256:311c72e743e14e8490d04f4331dcc0a35309e9b94266986b0b5badd5fe499765 2023-01-11T22:15:12.5412203Z deleted: sha256:857d2698f0a0af2341318f6ed93060f21d498e989b339e2867c5236bab9c63d5 2023-01-11T22:15:12.5412511Z deleted: sha256:df69730c501d9b3ce0f2316b2b638e20515ce1e9aad01098b13234f4a2154927 2023-01-11T22:15:12.5412815Z deleted: sha256:288a6d7efd5c9e470341b16b3ea2cd41124c769de8d10643ac688d651e9767c9 2023-01-11T22:15:12.5413133Z deleted: sha256:6a565f7b04668396c32d11cad845e1cd1b84d09e6f970457c5494adfde59690f 2023-01-11T22:15:12.5413467Z 
deleted: sha256:b4e45ce76a79fe9f3dbbf723401c4bf189f0e70bc81fc2dbe4e70c80044c2fac 2023-01-11T22:15:12.5413853Z deleted: sha256:e4a3cd7f0f84ce7171b04c994248707950295feb437bc5feaadcb66a3f7bf5a3 2023-01-11T22:15:12.5414177Z deleted: sha256:24ac908f9f592af03d6a51011147428f9e682c79b8ca2ad5afd2b1a44aeed617 2023-01-11T22:15:12.5414498Z deleted: sha256:50cc7186fda7f64aa964824a32c139a1085ce030491c0da5fab99fdecae66fdf 2023-01-11T22:15:12.5414869Z deleted: sha256:6e9c802974cfa887b7300715840ef7aaf5765df415d8d4680f72c6034a10292b 2023-01-11T22:15:12.5415157Z deleted: sha256:30109e8b967541225d66d116256e57334c2b63c25b456f9f7cd72d14d46d8da3 2023-01-11T22:15:12.5415461Z deleted: sha256:18ce8ec73f72efbc789a00688f5d57c798690e22048389f236dbc593cec31d6e 2023-01-11T22:15:12.5415775Z deleted: sha256:195741932e0b070b4fed22eee8d97719dc71f1f569594b418d777b87dbe76a6d 2023-01-11T22:15:12.5416082Z deleted: sha256:6f099faae794c47a468400004f89aed66ec84fa1bd6c606a9877ab09c84a5289 2023-01-11T22:15:12.5416402Z deleted: sha256:5bddaa98761511a0e16047132a49704d0cf176bec84f42b91644b8e7adb3cb88 2023-01-11T22:15:12.5416699Z deleted: sha256:5089072a88c6788d2594696a16346c495f97fd117430602f033541a0f333de5f 2023-01-11T22:15:12.5416990Z deleted: sha256:9bc67bb187c368480f186819831faa7998ba6d4f2e4ab8bd5b5fbc8a5aada045 2023-01-11T22:15:12.5417299Z deleted: sha256:45bbe3d22998589317c7f6c4dd591475423bb37ca9b922529c5878653483b18d 2023-01-11T22:15:12.5417466Z 2023-01-11T22:15:12.5420563Z Total reclaimed space: 20.62GB 2023-01-11T22:15:12.5484452Z Post job cleanup. 2023-01-11T22:15:12.5529191Z Post job cleanup. 2023-01-11T22:15:12.6593176Z [command]/usr/bin/git version 2023-01-11T22:15:12.6635669Z git version 2.38.1 2023-01-11T22:15:12.6672928Z Temporarily overriding HOME='/home/ec2-user/actions-runner/_work/_temp/a9be2f8a-2de5-4873-b8cf-f8c7169e2aa2' before making global git config changes 2023-01-11T22:15:12.6673777Z Adding repository directory to the temporary git global config as a safe directory 2023-01-11T22:15:12.6677176Z [command]/usr/bin/git config --global --add safe.directory /home/ec2-user/actions-runner/_work/pytorch/pytorch 2023-01-11T22:15:12.6725212Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand 2023-01-11T22:15:12.6756231Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || : 2023-01-11T22:15:12.7174029Z Entering 'android/libs/fbjni' 2023-01-11T22:15:12.7222566Z Entering 'third_party/FP16' 2023-01-11T22:15:12.7269502Z Entering 'third_party/FXdiv' 2023-01-11T22:15:12.7318530Z Entering 'third_party/NNPACK' 2023-01-11T22:15:12.7367116Z Entering 'third_party/QNNPACK' 2023-01-11T22:15:12.7421104Z Entering 'third_party/VulkanMemoryAllocator' 2023-01-11T22:15:12.7454222Z Entering 'third_party/XNNPACK' 2023-01-11T22:15:12.7567392Z Entering 'third_party/benchmark' 2023-01-11T22:15:12.7614695Z Entering 'third_party/cpuinfo' 2023-01-11T22:15:12.7663284Z Entering 'third_party/cub' 2023-01-11T22:15:12.7707992Z Entering 'third_party/cudnn_frontend' 2023-01-11T22:15:12.7748125Z Entering 'third_party/cutlass' 2023-01-11T22:15:12.7788054Z Entering 'third_party/eigen' 2023-01-11T22:15:12.7847550Z Entering 'third_party/fbgemm' 2023-01-11T22:15:12.7913569Z Entering 'third_party/fbgemm/third_party/asmjit' 2023-01-11T22:15:12.7973243Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2023-01-11T22:15:12.8039953Z Entering 'third_party/fbgemm/third_party/googletest' 2023-01-11T22:15:12.8106757Z 
Entering 'third_party/fbgemm/third_party/hipify_torch' 2023-01-11T22:15:12.8149139Z Entering 'third_party/flatbuffers' 2023-01-11T22:15:12.8185172Z Entering 'third_party/fmt' 2023-01-11T22:15:12.8231707Z Entering 'third_party/foxi' 2023-01-11T22:15:12.8276312Z Entering 'third_party/gemmlowp/gemmlowp' 2023-01-11T22:15:12.8320096Z Entering 'third_party/gloo' 2023-01-11T22:15:12.8365472Z Entering 'third_party/googletest' 2023-01-11T22:15:12.8417161Z Entering 'third_party/ideep' 2023-01-11T22:15:12.8479255Z Entering 'third_party/ideep/mkl-dnn' 2023-01-11T22:15:12.8572488Z Entering 'third_party/ideep/mkl-dnn/third_party/oneDNN' 2023-01-11T22:15:12.8647826Z Entering 'third_party/ios-cmake' 2023-01-11T22:15:12.8691310Z Entering 'third_party/ittapi' 2023-01-11T22:15:12.8724383Z Entering 'third_party/kineto' 2023-01-11T22:15:12.8758359Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2023-01-11T22:15:12.8791720Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2023-01-11T22:15:12.8825864Z Entering 'third_party/nccl/nccl' 2023-01-11T22:15:12.8874101Z Entering 'third_party/neon2sse' 2023-01-11T22:15:12.8917014Z Entering 'third_party/nlohmann' 2023-01-11T22:15:12.8951736Z Entering 'third_party/onnx' 2023-01-11T22:15:12.9065452Z Entering 'third_party/onnx/third_party/benchmark' 2023-01-11T22:15:12.9129710Z Entering 'third_party/onnx/third_party/pybind11' 2023-01-11T22:15:12.9176747Z Entering 'third_party/onnx-tensorrt' 2023-01-11T22:15:12.9234948Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2023-01-11T22:15:12.9328672Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2023-01-11T22:15:12.9382129Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2023-01-11T22:15:12.9447906Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2023-01-11T22:15:12.9498405Z Entering 'third_party/pocketfft' 2023-01-11T22:15:12.9532269Z Entering 'third_party/protobuf' 2023-01-11T22:15:12.9638028Z Entering 'third_party/protobuf/third_party/benchmark' 2023-01-11T22:15:12.9705554Z Entering 'third_party/protobuf/third_party/googletest' 2023-01-11T22:15:12.9750505Z Entering 'third_party/psimd' 2023-01-11T22:15:12.9793510Z Entering 'third_party/pthreadpool' 2023-01-11T22:15:12.9838854Z Entering 'third_party/pybind11' 2023-01-11T22:15:12.9885539Z Entering 'third_party/python-enum' 2023-01-11T22:15:12.9928529Z Entering 'third_party/python-peachpy' 2023-01-11T22:15:12.9973456Z Entering 'third_party/python-six' 2023-01-11T22:15:13.0018094Z Entering 'third_party/sleef' 2023-01-11T22:15:13.0062142Z Entering 'third_party/tbb' 2023-01-11T22:15:13.0097343Z Entering 'third_party/tensorpipe' 2023-01-11T22:15:13.0162276Z Entering 'third_party/tensorpipe/third_party/googletest' 2023-01-11T22:15:13.0234482Z Entering 'third_party/tensorpipe/third_party/libnop' 2023-01-11T22:15:13.0286481Z Entering 'third_party/tensorpipe/third_party/libuv' 2023-01-11T22:15:13.0345944Z Entering 'third_party/tensorpipe/third_party/pybind11' 2023-01-11T22:15:13.0405967Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2023-01-11T22:15:13.0455141Z Entering 'third_party/zstd' 2023-01-11T22:15:13.0516373Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader 2023-01-11T22:15:13.0542801Z http.https://github.com/.extraheader 2023-01-11T22:15:13.0550202Z [command]/usr/bin/git config --local --unset-all http.https://github.com/.extraheader 2023-01-11T22:15:13.0581168Z 
[command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || : 2023-01-11T22:15:13.0830824Z Entering 'android/libs/fbjni' 2023-01-11T22:15:13.0850010Z http.https://github.com/.extraheader 2023-01-11T22:15:13.0875962Z Entering 'third_party/FP16' 2023-01-11T22:15:13.0895839Z http.https://github.com/.extraheader 2023-01-11T22:15:13.0921404Z Entering 'third_party/FXdiv' 2023-01-11T22:15:13.0941388Z http.https://github.com/.extraheader 2023-01-11T22:15:13.0968179Z Entering 'third_party/NNPACK' 2023-01-11T22:15:13.0988468Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1014662Z Entering 'third_party/QNNPACK' 2023-01-11T22:15:13.1034283Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1060173Z Entering 'third_party/VulkanMemoryAllocator' 2023-01-11T22:15:13.1080156Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1106407Z Entering 'third_party/XNNPACK' 2023-01-11T22:15:13.1126229Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1161973Z Entering 'third_party/benchmark' 2023-01-11T22:15:13.1181583Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1208298Z Entering 'third_party/cpuinfo' 2023-01-11T22:15:13.1227925Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1254174Z Entering 'third_party/cub' 2023-01-11T22:15:13.1273747Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1299308Z Entering 'third_party/cudnn_frontend' 2023-01-11T22:15:13.1319026Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1349445Z Entering 'third_party/cutlass' 2023-01-11T22:15:13.1369392Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1402026Z Entering 'third_party/eigen' 2023-01-11T22:15:13.1421003Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1449130Z Entering 'third_party/fbgemm' 2023-01-11T22:15:13.1468967Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1494599Z Entering 'third_party/fbgemm/third_party/asmjit' 2023-01-11T22:15:13.1514063Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1539719Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2023-01-11T22:15:13.1558763Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1584439Z Entering 'third_party/fbgemm/third_party/googletest' 2023-01-11T22:15:13.1603773Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1629439Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2023-01-11T22:15:13.1648930Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1675378Z Entering 'third_party/flatbuffers' 2023-01-11T22:15:13.1695350Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1722975Z Entering 'third_party/fmt' 2023-01-11T22:15:13.1742142Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1766979Z Entering 'third_party/foxi' 2023-01-11T22:15:13.1787208Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1812311Z Entering 'third_party/gemmlowp/gemmlowp' 2023-01-11T22:15:13.1832359Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1858155Z Entering 'third_party/gloo' 2023-01-11T22:15:13.1877465Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1903457Z Entering 'third_party/googletest' 2023-01-11T22:15:13.1922711Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1949345Z Entering 'third_party/ideep' 2023-01-11T22:15:13.1969179Z http.https://github.com/.extraheader 2023-01-11T22:15:13.1994095Z Entering 'third_party/ideep/mkl-dnn' 
2023-01-11T22:15:13.2013444Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2041334Z Entering 'third_party/ideep/mkl-dnn/third_party/oneDNN' 2023-01-11T22:15:13.2060509Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2092302Z Entering 'third_party/ios-cmake' 2023-01-11T22:15:13.2112722Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2138437Z Entering 'third_party/ittapi' 2023-01-11T22:15:13.2158188Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2184492Z Entering 'third_party/kineto' 2023-01-11T22:15:13.2204298Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2230421Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2023-01-11T22:15:13.2249953Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2274956Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2023-01-11T22:15:13.2294234Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2322206Z Entering 'third_party/nccl/nccl' 2023-01-11T22:15:13.2342939Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2368817Z Entering 'third_party/neon2sse' 2023-01-11T22:15:13.2389379Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2414386Z Entering 'third_party/nlohmann' 2023-01-11T22:15:13.2434413Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2460895Z Entering 'third_party/onnx' 2023-01-11T22:15:13.2481504Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2517823Z Entering 'third_party/onnx/third_party/benchmark' 2023-01-11T22:15:13.2536906Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2562667Z Entering 'third_party/onnx/third_party/pybind11' 2023-01-11T22:15:13.2581841Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2609359Z Entering 'third_party/onnx-tensorrt' 2023-01-11T22:15:13.2628760Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2654198Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2023-01-11T22:15:13.2673307Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2703706Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2023-01-11T22:15:13.2723076Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2749179Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2023-01-11T22:15:13.2768598Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2795481Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2023-01-11T22:15:13.2814415Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2844532Z Entering 'third_party/pocketfft' 2023-01-11T22:15:13.2864165Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2889290Z Entering 'third_party/protobuf' 2023-01-11T22:15:13.2908600Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2935971Z Entering 'third_party/protobuf/third_party/benchmark' 2023-01-11T22:15:13.2955532Z http.https://github.com/.extraheader 2023-01-11T22:15:13.2981509Z Entering 'third_party/protobuf/third_party/googletest' 2023-01-11T22:15:13.3001107Z http.https://github.com/.extraheader 2023-01-11T22:15:13.3028718Z Entering 'third_party/psimd' 2023-01-11T22:15:13.3048266Z http.https://github.com/.extraheader 2023-01-11T22:15:13.3073625Z Entering 'third_party/pthreadpool' 2023-01-11T22:15:13.3092754Z http.https://github.com/.extraheader 2023-01-11T22:15:13.3118383Z Entering 'third_party/pybind11' 2023-01-11T22:15:13.3137961Z http.https://github.com/.extraheader 2023-01-11T22:15:13.3164716Z Entering 'third_party/python-enum' 2023-01-11T22:15:13.3184235Z 
http.https://github.com/.extraheader 2023-01-11T22:15:13.3209311Z Entering 'third_party/python-peachpy' 2023-01-11T22:15:13.3229248Z http.https://github.com/.extraheader 2023-01-11T22:15:13.3254868Z Entering 'third_party/python-six' 2023-01-11T22:15:13.3275019Z http.https://github.com/.extraheader 2023-01-11T22:15:13.3300732Z Entering 'third_party/sleef' 2023-01-11T22:15:13.3320660Z http.https://github.com/.extraheader 2023-01-11T22:15:13.3346263Z Entering 'third_party/tbb' 2023-01-11T22:15:13.3366212Z http.https://github.com/.extraheader 2023-01-11T22:15:13.3393895Z Entering 'third_party/tensorpipe' 2023-01-11T22:15:13.3413678Z http.https://github.com/.extraheader 2023-01-11T22:15:13.3440538Z Entering 'third_party/tensorpipe/third_party/googletest' 2023-01-11T22:15:13.3460728Z http.https://github.com/.extraheader 2023-01-11T22:15:13.3486491Z Entering 'third_party/tensorpipe/third_party/libnop' 2023-01-11T22:15:13.3507231Z http.https://github.com/.extraheader 2023-01-11T22:15:13.3533082Z Entering 'third_party/tensorpipe/third_party/libuv' 2023-01-11T22:15:13.3551893Z http.https://github.com/.extraheader 2023-01-11T22:15:13.3577725Z Entering 'third_party/tensorpipe/third_party/pybind11' 2023-01-11T22:15:13.3598134Z http.https://github.com/.extraheader 2023-01-11T22:15:13.3624256Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2023-01-11T22:15:13.3642465Z http.https://github.com/.extraheader 2023-01-11T22:15:13.3671510Z Entering 'third_party/zstd' 2023-01-11T22:15:13.3691085Z http.https://github.com/.extraheader 2023-01-11T22:15:13.3914157Z Cleaning up orphan processes
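Note on the "Holding runner for 2 hours" teardown step shown earlier in this log: the loop runs `seq 1440` with a 5-second sleep, i.e. at most 1440 × 5 s = 7200 s = 2 hours, and breaks as soon as `who` reports no remaining login sessions. A minimal Python sketch of the same idea (illustrative only, not the actual teardown action):

    import subprocess
    import time

    def hold_runner(max_iterations=1440, interval_s=5):
        # 1440 iterations x 5 s sleep = 7200 s = 2 hours maximum hold time
        for _ in range(max_iterations):
            sessions = subprocess.run(["who"], capture_output=True, text=True).stdout.strip()
            if sessions == "":
                break  # no ssh/login sessions remain, so release the runner early
            print(".")
            time.sleep(interval_s)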