2022-05-18T03:29:46.5666769Z Requested labels: linux.2xlarge
2022-05-18T03:29:46.5666896Z Job defined at: pytorch/pytorch/.github/workflows/_linux-test.yml@refs/heads/master
2022-05-18T03:29:46.5666923Z Waiting for a runner to pick up this job...
2022-05-18T03:29:48.3783352Z Job is about to start running on the runner: i-0dae033c09f631bd6 (repository)
2022-05-18T03:29:53.8978111Z Current runner version: '2.291.1'
2022-05-18T03:29:53.8983746Z Runner name: 'i-0dae033c09f631bd6'
2022-05-18T03:29:53.8984368Z Runner group name: 'Default'
2022-05-18T03:29:53.8984962Z Machine name: 'ip-10-0-3-68'
2022-05-18T03:29:53.8987211Z ##[group]GITHUB_TOKEN Permissions
2022-05-18T03:29:53.8987950Z Actions: write
2022-05-18T03:29:53.8988282Z Checks: write
2022-05-18T03:29:53.8988581Z Contents: write
2022-05-18T03:29:53.8988904Z Deployments: write
2022-05-18T03:29:53.8989232Z Discussions: write
2022-05-18T03:29:53.8989515Z Issues: write
2022-05-18T03:29:53.8989823Z Metadata: read
2022-05-18T03:29:53.8990194Z Packages: write
2022-05-18T03:29:53.8990469Z Pages: write
2022-05-18T03:29:53.8990809Z PullRequests: write
2022-05-18T03:29:53.8991170Z RepositoryProjects: write
2022-05-18T03:29:53.8991483Z SecurityEvents: write
2022-05-18T03:29:53.8991837Z Statuses: write
2022-05-18T03:29:53.8992156Z ##[endgroup]
2022-05-18T03:29:53.8995297Z Secret source: Actions
2022-05-18T03:29:53.8995935Z Prepare workflow directory
2022-05-18T03:29:54.2260758Z Prepare all required actions
2022-05-18T03:29:54.2441592Z Getting action download info
2022-05-18T03:29:54.4220383Z Download action repository 'pytorch/pytorch@master' (SHA:acf7136a525422459d97d5f993e30afdff18b1b9)
2022-05-18T03:29:56.8813082Z Download action repository 'nick-fields/retry@71062288b76e2b6214ebde0e673ce0de1755740a' (SHA:71062288b76e2b6214ebde0e673ce0de1755740a)
2022-05-18T03:29:56.9696146Z Download action repository 'seemethere/upload-artifact-s3@v4' (SHA:c1c31f57581a11fe6d4d052da6276adb2df71f1e)
2022-05-18T03:29:57.1937262Z Getting action download info
2022-05-18T03:29:57.3239015Z Download action repository 'malfet/checkout@silent-checkout' (SHA:f63e9e15406be6060f159846cd2e098f759c5246)
2022-05-18T03:29:57.5512088Z Getting action download info
2022-05-18T03:29:57.7753482Z ##[group]Run pytorch/pytorch/.github/actions/checkout-pytorch@master
2022-05-18T03:29:57.7753813Z with:
2022-05-18T03:29:57.7754013Z   submodules: recursive
2022-05-18T03:29:57.7754242Z   fetch-depth: 0
2022-05-18T03:29:57.7754465Z env:
2022-05-18T03:29:57.7754644Z   IN_CI: 1
2022-05-18T03:29:57.7754836Z   IS_GHA: 1
2022-05-18T03:29:57.7755047Z   GIT_DEFAULT_BRANCH: master
2022-05-18T03:29:57.7755282Z ##[endgroup]
2022-05-18T03:29:57.7981937Z ##[group]Run echo "${GITHUB_WORKSPACE}"
2022-05-18T03:29:57.7982270Z echo "${GITHUB_WORKSPACE}"
2022-05-18T03:29:57.7982534Z if [ -z "${NO_SUDO}" ]; then
2022-05-18T03:29:57.7982787Z  sudo rm -rf "${GITHUB_WORKSPACE}"
2022-05-18T03:29:57.7983002Z else
2022-05-18T03:29:57.7983225Z  rm -rf "${GITHUB_WORKSPACE}"
2022-05-18T03:29:57.7983442Z fi
2022-05-18T03:29:57.7983642Z mkdir "${GITHUB_WORKSPACE}"
2022-05-18T03:29:57.8000023Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2022-05-18T03:29:57.8000303Z env:
2022-05-18T03:29:57.8000485Z   IN_CI: 1
2022-05-18T03:29:57.8000681Z   IS_GHA: 1
2022-05-18T03:29:57.8000897Z   GIT_DEFAULT_BRANCH: master
2022-05-18T03:29:57.8001108Z   NO_SUDO:
2022-05-18T03:29:57.8001315Z ##[endgroup]
2022-05-18T03:29:57.8166796Z /home/ec2-user/actions-runner/_work/pytorch/pytorch
2022-05-18T03:29:59.9474403Z ##[group]Run malfet/checkout@silent-checkout
2022-05-18T03:29:59.9474695Z with:
2022-05-18T03:29:59.9474928Z   ref: 3b2375291aab7b48442f2e6fb1ef66cebc761e24
2022-05-18T03:29:59.9475152Z   fetch-depth: 0
2022-05-18T03:29:59.9475372Z   submodules: recursive
2022-05-18T03:29:59.9475591Z   quiet-checkout: true
2022-05-18T03:29:59.9475809Z   repository: pytorch/pytorch
2022-05-18T03:29:59.9476227Z   token: ***
2022-05-18T03:29:59.9476433Z   ssh-strict: true
2022-05-18T03:29:59.9476661Z   persist-credentials: true
2022-05-18T03:29:59.9476870Z   clean: true
2022-05-18T03:29:59.9477063Z   lfs: false
2022-05-18T03:29:59.9477289Z   set-safe-directory: true
2022-05-18T03:29:59.9477485Z env:
2022-05-18T03:29:59.9477669Z   IN_CI: 1
2022-05-18T03:29:59.9477856Z   IS_GHA: 1
2022-05-18T03:29:59.9478053Z   GIT_DEFAULT_BRANCH: master
2022-05-18T03:29:59.9478266Z ##[endgroup]
2022-05-18T03:30:00.0623714Z Syncing repository: pytorch/pytorch
2022-05-18T03:30:00.0625117Z ##[group]Getting Git version info
2022-05-18T03:30:00.0625692Z Working directory is '/home/ec2-user/actions-runner/_work/pytorch/pytorch'
2022-05-18T03:30:00.0626155Z [command]/usr/bin/git version
2022-05-18T03:30:00.0626338Z git version 2.32.0
2022-05-18T03:30:00.0626905Z ##[endgroup]
2022-05-18T03:30:00.0638149Z Temporarily overriding HOME='/home/ec2-user/actions-runner/_work/_temp/fb560b5a-c6c2-491e-8ffc-b946a97af168' before making global git config changes
2022-05-18T03:30:00.0638582Z Adding repository directory to the temporary git global config as a safe directory
2022-05-18T03:30:00.0640782Z [command]/usr/bin/git config --global --add safe.directory /home/ec2-user/actions-runner/_work/pytorch/pytorch
2022-05-18T03:30:00.0679906Z Deleting the contents of '/home/ec2-user/actions-runner/_work/pytorch/pytorch'
2022-05-18T03:30:00.0683389Z ##[group]Initializing the repository
2022-05-18T03:30:00.0687717Z [command]/usr/bin/git init /home/ec2-user/actions-runner/_work/pytorch/pytorch
2022-05-18T03:30:00.0827585Z hint: Using 'master' as the name for the initial branch. This default branch name
2022-05-18T03:30:00.0828129Z hint: is subject to change. To configure the initial branch name to use in all
2022-05-18T03:30:00.0828449Z hint: of your new repositories, which will suppress this warning, call:
2022-05-18T03:30:00.0828718Z hint:
2022-05-18T03:30:00.0829027Z hint:   git config --global init.defaultBranch <name>
2022-05-18T03:30:00.0829228Z hint:
2022-05-18T03:30:00.0829525Z hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
2022-05-18T03:30:00.0831729Z hint: 'development'.
The just-created branch can be renamed via this command: 2022-05-18T03:30:00.0832108Z hint: 2022-05-18T03:30:00.0832454Z hint: git branch -m 2022-05-18T03:30:00.0832835Z Initialized empty Git repository in /home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/ 2022-05-18T03:30:00.0839481Z [command]/usr/bin/git remote add origin https://github.com/pytorch/pytorch 2022-05-18T03:30:00.0872668Z ##[endgroup] 2022-05-18T03:30:00.0873053Z ##[group]Disabling automatic garbage collection 2022-05-18T03:30:00.0876844Z [command]/usr/bin/git config --local gc.auto 0 2022-05-18T03:30:00.0905419Z ##[endgroup] 2022-05-18T03:30:00.0905840Z ##[group]Setting up auth 2022-05-18T03:30:00.0912471Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand 2022-05-18T03:30:00.0943679Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || : 2022-05-18T03:30:00.1198207Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader 2022-05-18T03:30:00.1231623Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || : 2022-05-18T03:30:00.1490046Z [command]/usr/bin/git config --local http.https://github.com/.extraheader AUTHORIZATION: basic *** 2022-05-18T03:30:00.1546400Z ##[endgroup] 2022-05-18T03:30:00.1546757Z ##[group]Fetching the repository 2022-05-18T03:30:00.1552926Z [command]/usr/bin/git -c protocol.version=2 fetch --prune --quiet --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/tags/*:refs/tags/* 2022-05-18T03:30:42.7962773Z [command]/usr/bin/git rev-parse --verify --quiet 3b2375291aab7b48442f2e6fb1ef66cebc761e24^{object} 2022-05-18T03:30:42.7990841Z 3b2375291aab7b48442f2e6fb1ef66cebc761e24 2022-05-18T03:30:42.7996063Z ##[endgroup] 2022-05-18T03:30:42.7996433Z ##[group]Determining the checkout info 2022-05-18T03:30:42.7997919Z ##[endgroup] 2022-05-18T03:30:42.7998261Z ##[group]Checking out the ref 2022-05-18T03:30:42.8002622Z [command]/usr/bin/git checkout --quiet --force 3b2375291aab7b48442f2e6fb1ef66cebc761e24 2022-05-18T03:30:43.9845395Z ##[endgroup] 2022-05-18T03:30:43.9846006Z ##[group]Setting up auth for fetching submodules 2022-05-18T03:30:43.9852166Z [command]/usr/bin/git config --global http.https://github.com/.extraheader AUTHORIZATION: basic *** 2022-05-18T03:30:43.9906021Z [command]/usr/bin/git config --global --unset-all url.https://github.com/.insteadOf 2022-05-18T03:30:43.9938299Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf git@github.com: 2022-05-18T03:30:43.9967435Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf org-21003710@github.com: 2022-05-18T03:30:43.9993908Z ##[endgroup] 2022-05-18T03:30:43.9994263Z ##[group]Fetching submodules 2022-05-18T03:30:43.9998623Z [command]/usr/bin/git submodule sync --recursive 2022-05-18T03:30:44.0267160Z [command]/usr/bin/git -c protocol.version=2 submodule update --init --force --recursive 2022-05-18T03:30:44.0533478Z Submodule 'android/libs/fbjni' (https://github.com/facebookincubator/fbjni.git) registered for path 'android/libs/fbjni' 2022-05-18T03:30:44.0534853Z Submodule 'third_party/NNPACK_deps/FP16' (https://github.com/Maratyszcza/FP16.git) registered for path 'third_party/FP16' 2022-05-18T03:30:44.0535752Z Submodule 
'third_party/NNPACK_deps/FXdiv' (https://github.com/Maratyszcza/FXdiv.git) registered for path 'third_party/FXdiv' 2022-05-18T03:30:44.0537176Z Submodule 'third_party/NNPACK' (https://github.com/Maratyszcza/NNPACK.git) registered for path 'third_party/NNPACK' 2022-05-18T03:30:44.0539365Z Submodule 'third_party/QNNPACK' (https://github.com/pytorch/QNNPACK) registered for path 'third_party/QNNPACK' 2022-05-18T03:30:44.0541619Z Submodule 'third_party/XNNPACK' (https://github.com/google/XNNPACK.git) registered for path 'third_party/XNNPACK' 2022-05-18T03:30:44.0544264Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/benchmark' 2022-05-18T03:30:44.0546718Z Submodule 'third_party/cpuinfo' (https://github.com/pytorch/cpuinfo.git) registered for path 'third_party/cpuinfo' 2022-05-18T03:30:44.0549225Z Submodule 'third_party/cub' (https://github.com/NVlabs/cub.git) registered for path 'third_party/cub' 2022-05-18T03:30:44.0552113Z Submodule 'third_party/cudnn_frontend' (https://github.com/NVIDIA/cudnn-frontend.git) registered for path 'third_party/cudnn_frontend' 2022-05-18T03:30:44.0554736Z Submodule 'third_party/eigen' (https://gitlab.com/libeigen/eigen.git) registered for path 'third_party/eigen' 2022-05-18T03:30:44.0557596Z Submodule 'third_party/fbgemm' (https://github.com/pytorch/fbgemm) registered for path 'third_party/fbgemm' 2022-05-18T03:30:44.0561258Z Submodule 'third_party/flatbuffers' (https://github.com/google/flatbuffers.git) registered for path 'third_party/flatbuffers' 2022-05-18T03:30:44.0564364Z Submodule 'third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/fmt' 2022-05-18T03:30:44.0567403Z Submodule 'third_party/foxi' (https://github.com/houseroad/foxi.git) registered for path 'third_party/foxi' 2022-05-18T03:30:44.0570764Z Submodule 'third_party/gemmlowp/gemmlowp' (https://github.com/google/gemmlowp.git) registered for path 'third_party/gemmlowp/gemmlowp' 2022-05-18T03:30:44.0574136Z Submodule 'third_party/gloo' (https://github.com/facebookincubator/gloo) registered for path 'third_party/gloo' 2022-05-18T03:30:44.0577648Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/googletest' 2022-05-18T03:30:44.0581097Z Submodule 'third_party/ideep' (https://github.com/intel/ideep) registered for path 'third_party/ideep' 2022-05-18T03:30:44.0584908Z Submodule 'third_party/ios-cmake' (https://github.com/Yangqing/ios-cmake.git) registered for path 'third_party/ios-cmake' 2022-05-18T03:30:44.0588741Z Submodule 'third_party/kineto' (https://github.com/pytorch/kineto) registered for path 'third_party/kineto' 2022-05-18T03:30:44.0592602Z Submodule 'third_party/nccl/nccl' (https://github.com/NVIDIA/nccl) registered for path 'third_party/nccl/nccl' 2022-05-18T03:30:44.0596641Z Submodule 'third_party/neon2sse' (https://github.com/intel/ARM_NEON_2_x86_SSE.git) registered for path 'third_party/neon2sse' 2022-05-18T03:30:44.0600905Z Submodule 'third_party/onnx' (https://github.com/onnx/onnx.git) registered for path 'third_party/onnx' 2022-05-18T03:30:44.0605074Z Submodule 'third_party/onnx-tensorrt' (https://github.com/onnx/onnx-tensorrt) registered for path 'third_party/onnx-tensorrt' 2022-05-18T03:30:44.0609323Z Submodule 'third_party/pocketfft' (https://github.com/mreineck/pocketfft) registered for path 'third_party/pocketfft' 2022-05-18T03:30:44.0613796Z Submodule 'third_party/protobuf' (https://github.com/protocolbuffers/protobuf.git) 
registered for path 'third_party/protobuf' 2022-05-18T03:30:44.0618323Z Submodule 'third_party/NNPACK_deps/psimd' (https://github.com/Maratyszcza/psimd.git) registered for path 'third_party/psimd' 2022-05-18T03:30:44.0622906Z Submodule 'third_party/NNPACK_deps/pthreadpool' (https://github.com/Maratyszcza/pthreadpool.git) registered for path 'third_party/pthreadpool' 2022-05-18T03:30:44.0627568Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/pybind11' 2022-05-18T03:30:44.0632404Z Submodule 'third_party/python-enum' (https://github.com/PeachPy/enum34.git) registered for path 'third_party/python-enum' 2022-05-18T03:30:44.0637419Z Submodule 'third_party/python-peachpy' (https://github.com/Maratyszcza/PeachPy.git) registered for path 'third_party/python-peachpy' 2022-05-18T03:30:44.0642454Z Submodule 'third_party/python-six' (https://github.com/benjaminp/six.git) registered for path 'third_party/python-six' 2022-05-18T03:30:44.0647471Z Submodule 'third_party/sleef' (https://github.com/shibatch/sleef) registered for path 'third_party/sleef' 2022-05-18T03:30:44.0652733Z Submodule 'third_party/tbb' (https://github.com/01org/tbb) registered for path 'third_party/tbb' 2022-05-18T03:30:44.0657932Z Submodule 'third_party/tensorpipe' (https://github.com/pytorch/tensorpipe.git) registered for path 'third_party/tensorpipe' 2022-05-18T03:30:44.0663432Z Submodule 'third_party/zstd' (https://github.com/facebook/zstd.git) registered for path 'third_party/zstd' 2022-05-18T03:30:44.0723527Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/android/libs/fbjni'... 2022-05-18T03:30:44.3135803Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/FP16'... 2022-05-18T03:30:44.4840098Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/FXdiv'... 2022-05-18T03:30:44.6585641Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/NNPACK'... 2022-05-18T03:30:44.9111737Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/QNNPACK'... 2022-05-18T03:30:45.1329330Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/XNNPACK'... 2022-05-18T03:30:49.6932451Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/benchmark'... 2022-05-18T03:30:50.0323867Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cpuinfo'... 2022-05-18T03:30:50.4758377Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cub'... 2022-05-18T03:30:51.6852877Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cudnn_frontend'... 2022-05-18T03:30:52.6371857Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/eigen'... 2022-05-18T03:30:56.8007080Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm'... 2022-05-18T03:30:57.4737475Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/flatbuffers'... 2022-05-18T03:30:58.3848553Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fmt'... 2022-05-18T03:30:59.3450870Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/foxi'... 2022-05-18T03:30:59.5166125Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/gemmlowp/gemmlowp'... 2022-05-18T03:30:59.9045700Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/gloo'... 
2022-05-18T03:31:00.1845114Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/googletest'... 2022-05-18T03:31:01.0470929Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ideep'... 2022-05-18T03:31:01.4122175Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ios-cmake'... 2022-05-18T03:31:01.6752409Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto'... 2022-05-18T03:31:02.9303638Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/nccl/nccl'... 2022-05-18T03:31:03.2724850Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/neon2sse'... 2022-05-18T03:31:03.6179852Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx'... 2022-05-18T03:31:04.8372615Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx-tensorrt'... 2022-05-18T03:31:05.1676477Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pocketfft'... 2022-05-18T03:31:05.3600773Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf'... 2022-05-18T03:31:09.6395539Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/psimd'... 2022-05-18T03:31:09.8244402Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pthreadpool'... 2022-05-18T03:31:10.0671599Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pybind11'... 2022-05-18T03:31:10.7318566Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/python-enum'... 2022-05-18T03:31:10.9472174Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/python-peachpy'... 2022-05-18T03:31:11.2098095Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/python-six'... 2022-05-18T03:31:11.4722230Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/sleef'... 2022-05-18T03:31:11.9730167Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tbb'... 2022-05-18T03:31:13.5169022Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe'... 2022-05-18T03:31:13.9459731Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/zstd'... 
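
(Annotation: the run of "Cloning into ..." lines above is the fan-out of the single "git -c protocol.version=2 submodule update --init --force --recursive" invocation logged earlier; each registered submodule is cloned into the workspace and then pinned to its recorded commit, which is what the "Submodule path ... checked out ..." lines below report. A minimal sketch for replaying the same checkout outside the runner, using only commands that already appear in this log; it assumes network access to github.com, and the SHA is the "ref" passed to malfet/checkout@silent-checkout above.)

    # Sketch only: reproduce this job's source checkout locally.
    git clone https://github.com/pytorch/pytorch
    cd pytorch
    # Pin the working tree to the exact commit the job tested.
    git checkout --quiet --force 3b2375291aab7b48442f2e6fb1ef66cebc761e24
    # Fetch and pin every submodule, recursively, as the action does.
    git submodule sync --recursive
    git -c protocol.version=2 submodule update --init --force --recursive
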
2022-05-18T03:31:15.8109190Z Submodule path 'android/libs/fbjni': checked out '7e1e1fe3858c63c251c637ae41a20de425dde96f' 2022-05-18T03:31:15.8443751Z Submodule path 'third_party/FP16': checked out '4dfe081cf6bcd15db339cf2680b9281b8451eeb3' 2022-05-18T03:31:15.8752754Z Submodule path 'third_party/FXdiv': checked out 'b408327ac2a15ec3e43352421954f5b1967701d1' 2022-05-18T03:31:15.9198805Z Submodule path 'third_party/NNPACK': checked out 'c07e3a0400713d546e0dea2d5466dd22ea389c73' 2022-05-18T03:31:15.9645049Z Submodule path 'third_party/QNNPACK': checked out '7d2a4e9931a82adc3814275b6219a03e24e36b4c' 2022-05-18T03:31:16.5630679Z Submodule path 'third_party/XNNPACK': checked out 'ae108ef49aa5623b896fc93d4298c49d1750d9ba' 2022-05-18T03:31:16.6062534Z Submodule path 'third_party/benchmark': checked out 'e991355c02b93fe17713efe04cbc2e278e00fdbd' 2022-05-18T03:31:16.7222714Z Submodule path 'third_party/cpuinfo': checked out '5916273f79a21551890fd3d56fc5375a78d1598d' 2022-05-18T03:31:16.7775280Z Submodule path 'third_party/cub': checked out 'd106ddb991a56c3df1b6d51b2409e36ba8181ce4' 2022-05-18T03:31:17.0730404Z Submodule path 'third_party/cudnn_frontend': checked out '43709ab96c47e26eebcdac72f93f946d44ceffa8' 2022-05-18T03:31:17.3223751Z Submodule path 'third_party/eigen': checked out '3147391d946bb4b6c68edd901f2add6ac1f31f8c' 2022-05-18T03:31:17.3862984Z Submodule path 'third_party/fbgemm': checked out '2e9be65810107a9595da717f95d21924b73be833' 2022-05-18T03:31:17.3908971Z Submodule 'third_party/asmjit' (https://github.com/asmjit/asmjit.git) registered for path 'third_party/fbgemm/third_party/asmjit' 2022-05-18T03:31:17.3910082Z Submodule 'third_party/cpuinfo' (https://github.com/pytorch/cpuinfo) registered for path 'third_party/fbgemm/third_party/cpuinfo' 2022-05-18T03:31:17.3912624Z Submodule 'third_party/googletest' (https://github.com/google/googletest) registered for path 'third_party/fbgemm/third_party/googletest' 2022-05-18T03:31:17.3951671Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/asmjit'... 2022-05-18T03:31:18.0500187Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/cpuinfo'... 2022-05-18T03:31:18.5743128Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/googletest'... 
2022-05-18T03:31:19.5399736Z Submodule path 'third_party/fbgemm/third_party/asmjit': checked out '8b35b4cffb62ecb58a903bf91cb7537d7a672211' 2022-05-18T03:31:19.6563208Z Submodule path 'third_party/fbgemm/third_party/cpuinfo': checked out 'ed8b86a253800bafdb7b25c5c399f91bff9cb1f3' 2022-05-18T03:31:19.7342208Z Submodule path 'third_party/fbgemm/third_party/googletest': checked out 'cbf019de22c8dd37b2108da35b2748fd702d1796' 2022-05-18T03:31:19.8380922Z Submodule path 'third_party/flatbuffers': checked out 'd0cede9c90c5257537c293517a21376408b549fa' 2022-05-18T03:31:19.8927705Z Submodule path 'third_party/fmt': checked out 'cd4af11efc9c622896a3e4cb599fa28668ca3d05' 2022-05-18T03:31:19.9233855Z Submodule path 'third_party/foxi': checked out 'c278588e34e535f0bb8f00df3880d26928038cad' 2022-05-18T03:31:19.9837904Z Submodule path 'third_party/gemmlowp/gemmlowp': checked out '3fb5c176c17c765a3492cd2f0321b0dab712f350' 2022-05-18T03:31:20.0288549Z Submodule path 'third_party/gloo': checked out 'c22a5cfba94edf8ea4f53a174d38aa0c629d070f' 2022-05-18T03:31:20.0950494Z Submodule path 'third_party/googletest': checked out 'e2239ee6043f73722e7aa812a459f54a28552929' 2022-05-18T03:31:20.1278473Z Submodule path 'third_party/ideep': checked out '02b17c5748c9349dcc586c359af800c684d9b1ab' 2022-05-18T03:31:20.1322603Z Submodule 'mkl-dnn' (https://github.com/intel/mkl-dnn.git) registered for path 'third_party/ideep/mkl-dnn' 2022-05-18T03:31:20.1358827Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ideep/mkl-dnn'... 2022-05-18T03:31:25.1878378Z Submodule path 'third_party/ideep/mkl-dnn': checked out '888a87a954e4fddb4d81fd10858eb834f2441b46' 2022-05-18T03:31:25.1933316Z Submodule 'third_party/oneDNN' (https://github.com/oneapi-src/oneDNN.git) registered for path 'third_party/ideep/mkl-dnn/third_party/oneDNN' 2022-05-18T03:31:25.1972939Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ideep/mkl-dnn/third_party/oneDNN'... 2022-05-18T03:31:30.2601288Z Submodule path 'third_party/ideep/mkl-dnn/third_party/oneDNN': checked out '52b5f107dd9cf10910aaa19cb47f3abf9b349815' 2022-05-18T03:31:30.2937003Z Submodule path 'third_party/ios-cmake': checked out '8abaed637d56f1337d6e1d2c4026e25c1eade724' 2022-05-18T03:31:30.4027088Z Submodule path 'third_party/kineto': checked out 'b2b48c00c6e5bd8e807e2231adb229db6a1d1c22' 2022-05-18T03:31:30.4072299Z Submodule 'libkineto/third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/kineto/libkineto/third_party/fmt' 2022-05-18T03:31:30.4074645Z Submodule 'libkineto/third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/kineto/libkineto/third_party/googletest' 2022-05-18T03:31:30.4111645Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/fmt'... 2022-05-18T03:31:31.3969728Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/googletest'... 
2022-05-18T03:31:32.3261273Z Submodule path 'third_party/kineto/libkineto/third_party/fmt': checked out '2591ab91c3898c9f6544fff04660276537d32ffd' 2022-05-18T03:31:32.3985566Z Submodule path 'third_party/kineto/libkineto/third_party/googletest': checked out '7aca84427f224eeed3144123d5230d5871e93347' 2022-05-18T03:31:32.4403667Z Submodule path 'third_party/nccl/nccl': checked out '7e515921295adaab72adf56ea71a0fafb0ecb5f3' 2022-05-18T03:31:32.4766494Z Submodule path 'third_party/neon2sse': checked out '97a126f08ce318023be604d03f88bf0820a9464a' 2022-05-18T03:31:32.7205014Z Submodule path 'third_party/onnx': checked out '96046b8ccfb8e6fa82f6b2b34b3d56add2e8849c' 2022-05-18T03:31:32.7262018Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/onnx/third_party/benchmark' 2022-05-18T03:31:32.7262722Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/onnx/third_party/pybind11' 2022-05-18T03:31:32.7312687Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx/third_party/benchmark'... 2022-05-18T03:31:33.0604789Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx/third_party/pybind11'... 2022-05-18T03:31:33.7619084Z Submodule path 'third_party/onnx/third_party/benchmark': checked out 'e776aa0275e293707b6a0901e0e8d8a8a3679508' 2022-05-18T03:31:33.8142272Z Submodule path 'third_party/onnx/third_party/pybind11': checked out '59a2ac2745d8a57ac94c6accced73620d59fb844' 2022-05-18T03:31:33.8516482Z Submodule path 'third_party/onnx-tensorrt': checked out 'c153211418a7c57ce071d9ce2a41f8d1c85a878f' 2022-05-18T03:31:33.8561185Z Submodule 'third_party/onnx' (https://github.com/onnx/onnx.git) registered for path 'third_party/onnx-tensorrt/third_party/onnx' 2022-05-18T03:31:33.8598670Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx-tensorrt/third_party/onnx'... 2022-05-18T03:31:35.2002903Z Submodule path 'third_party/onnx-tensorrt/third_party/onnx': checked out '765f5ee823a67a866f4bd28a9860e81f3c811ce8' 2022-05-18T03:31:35.2060942Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2022-05-18T03:31:35.2062148Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2022-05-18T03:31:35.2106442Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark'... 2022-05-18T03:31:35.6623233Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11'... 2022-05-18T03:31:36.3663865Z Submodule path 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark': checked out 'e776aa0275e293707b6a0901e0e8d8a8a3679508' 2022-05-18T03:31:36.4489385Z Submodule path 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11': checked out 'a1041190c8b8ff0cd9e2f0752248ad5e3789ea0c' 2022-05-18T03:31:36.4542673Z Submodule 'tools/clang' (https://github.com/wjakob/clang-cindex-python3) registered for path 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2022-05-18T03:31:36.4581309Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang'... 
2022-05-18T03:31:36.6666439Z Submodule path 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5' 2022-05-18T03:31:36.6986111Z Submodule path 'third_party/pocketfft': checked out 'ea778e37710c07723435b1be58235996d1d43a5a' 2022-05-18T03:31:36.9558491Z Submodule path 'third_party/protobuf': checked out 'd1eca4e4b421cd2997495c4b4e65cea6be4e9b8a' 2022-05-18T03:31:36.9614511Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/protobuf/third_party/benchmark' 2022-05-18T03:31:36.9615555Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/protobuf/third_party/googletest' 2022-05-18T03:31:36.9650953Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf/third_party/benchmark'... 2022-05-18T03:31:37.2929284Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf/third_party/googletest'... 2022-05-18T03:31:38.1865745Z Submodule path 'third_party/protobuf/third_party/benchmark': checked out '5b7683f49e1e9223cf9927b24f6fd3d6bd82e3f8' 2022-05-18T03:31:38.2724553Z Submodule path 'third_party/protobuf/third_party/googletest': checked out '5ec7f0c4a113e2f18ac2c6cc7df51ad6afc24081' 2022-05-18T03:31:38.3041832Z Submodule path 'third_party/psimd': checked out '072586a71b55b7f8c584153d223e95687148a900' 2022-05-18T03:31:38.3367336Z Submodule path 'third_party/pthreadpool': checked out 'a134dd5d4cee80cce15db81a72e7f929d71dd413' 2022-05-18T03:31:38.3867907Z Submodule path 'third_party/pybind11': checked out '8de7772cc72daca8e947b79b83fea46214931604' 2022-05-18T03:31:38.4173992Z Submodule path 'third_party/python-enum': checked out '4cfedc426c4e2fc52e3f5c2b4297e15ed8d6b8c7' 2022-05-18T03:31:38.4665763Z Submodule path 'third_party/python-peachpy': checked out '07d8fde8ac45d7705129475c0f94ed8925b93473' 2022-05-18T03:31:38.4974996Z Submodule path 'third_party/python-six': checked out '15e31431af97e5e64b80af0a3f598d382bcdd49a' 2022-05-18T03:31:38.5607203Z Submodule path 'third_party/sleef': checked out 'e0a003ee838b75d11763aa9c3ef17bf71a725bff' 2022-05-18T03:31:38.6835075Z Submodule path 'third_party/tbb': checked out 'a51a90bc609bb73db8ea13841b5cf7aa4344d4a9' 2022-05-18T03:31:38.7297368Z Submodule path 'third_party/tensorpipe': checked out '52791a2fd214b2a9dc5759d36725909c1daa7f2e' 2022-05-18T03:31:38.7341598Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/tensorpipe/third_party/googletest' 2022-05-18T03:31:38.7342939Z Submodule 'third_party/libnop' (https://github.com/google/libnop.git) registered for path 'third_party/tensorpipe/third_party/libnop' 2022-05-18T03:31:38.7345428Z Submodule 'third_party/libuv' (https://github.com/libuv/libuv.git) registered for path 'third_party/tensorpipe/third_party/libuv' 2022-05-18T03:31:38.7348056Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/tensorpipe/third_party/pybind11' 2022-05-18T03:31:38.7387104Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/googletest'... 2022-05-18T03:31:39.5995522Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/libnop'... 2022-05-18T03:31:39.8093504Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/libuv'... 
2022-05-18T03:31:40.7536869Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/pybind11'... 2022-05-18T03:31:41.4713295Z Submodule path 'third_party/tensorpipe/third_party/googletest': checked out 'aee0f9d9b5b87796ee8a0ab26b7587ec30e8858e' 2022-05-18T03:31:41.5071222Z Submodule path 'third_party/tensorpipe/third_party/libnop': checked out '910b55815be16109f04f4180e9adee14fb4ce281' 2022-05-18T03:31:41.5907018Z Submodule path 'third_party/tensorpipe/third_party/libuv': checked out '1dff88e5161cba5c59276d2070d2e304e4dcb242' 2022-05-18T03:31:41.6385131Z Submodule path 'third_party/tensorpipe/third_party/pybind11': checked out 'a23996fce38ff6ccfbcdc09f1e63f2c4be5ea2ef' 2022-05-18T03:31:41.6437329Z Submodule 'tools/clang' (https://github.com/wjakob/clang-cindex-python3) registered for path 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2022-05-18T03:31:41.6474296Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/pybind11/tools/clang'... 2022-05-18T03:31:41.8571203Z Submodule path 'third_party/tensorpipe/third_party/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5' 2022-05-18T03:31:42.0026485Z Submodule path 'third_party/zstd': checked out 'aec56a52fbab207fc639a1937d1e708a282edca8' 2022-05-18T03:31:42.0105911Z [command]/usr/bin/git submodule foreach --recursive git config --local gc.auto 0 2022-05-18T03:31:42.0373013Z Entering 'android/libs/fbjni' 2022-05-18T03:31:42.0409520Z Entering 'third_party/FP16' 2022-05-18T03:31:42.0444824Z Entering 'third_party/FXdiv' 2022-05-18T03:31:42.0480028Z Entering 'third_party/NNPACK' 2022-05-18T03:31:42.0515040Z Entering 'third_party/QNNPACK' 2022-05-18T03:31:42.0551418Z Entering 'third_party/XNNPACK' 2022-05-18T03:31:42.0596185Z Entering 'third_party/benchmark' 2022-05-18T03:31:42.0630852Z Entering 'third_party/cpuinfo' 2022-05-18T03:31:42.0666422Z Entering 'third_party/cub' 2022-05-18T03:31:42.0701681Z Entering 'third_party/cudnn_frontend' 2022-05-18T03:31:42.0742026Z Entering 'third_party/eigen' 2022-05-18T03:31:42.0780530Z Entering 'third_party/fbgemm' 2022-05-18T03:31:42.0816671Z Entering 'third_party/fbgemm/third_party/asmjit' 2022-05-18T03:31:42.0851025Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2022-05-18T03:31:42.0886569Z Entering 'third_party/fbgemm/third_party/googletest' 2022-05-18T03:31:42.0921849Z Entering 'third_party/flatbuffers' 2022-05-18T03:31:42.0958364Z Entering 'third_party/fmt' 2022-05-18T03:31:42.0995803Z Entering 'third_party/foxi' 2022-05-18T03:31:42.1030233Z Entering 'third_party/gemmlowp/gemmlowp' 2022-05-18T03:31:42.1065123Z Entering 'third_party/gloo' 2022-05-18T03:31:42.1101308Z Entering 'third_party/googletest' 2022-05-18T03:31:42.1136546Z Entering 'third_party/ideep' 2022-05-18T03:31:42.1171840Z Entering 'third_party/ideep/mkl-dnn' 2022-05-18T03:31:42.1207538Z Entering 'third_party/ideep/mkl-dnn/third_party/oneDNN' 2022-05-18T03:31:42.1247432Z Entering 'third_party/ios-cmake' 2022-05-18T03:31:42.1282594Z Entering 'third_party/kineto' 2022-05-18T03:31:42.1317584Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2022-05-18T03:31:42.1352468Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2022-05-18T03:31:42.1388581Z Entering 'third_party/nccl/nccl' 2022-05-18T03:31:42.1423348Z Entering 'third_party/neon2sse' 2022-05-18T03:31:42.1457709Z Entering 'third_party/onnx' 2022-05-18T03:31:42.1503808Z Entering 'third_party/onnx/third_party/benchmark' 
2022-05-18T03:31:42.1538209Z Entering 'third_party/onnx/third_party/pybind11' 2022-05-18T03:31:42.1574983Z Entering 'third_party/onnx-tensorrt' 2022-05-18T03:31:42.1609673Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2022-05-18T03:31:42.1649098Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2022-05-18T03:31:42.1683188Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2022-05-18T03:31:42.1717140Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2022-05-18T03:31:42.1756389Z Entering 'third_party/pocketfft' 2022-05-18T03:31:42.1791349Z Entering 'third_party/protobuf' 2022-05-18T03:31:42.1830925Z Entering 'third_party/protobuf/third_party/benchmark' 2022-05-18T03:31:42.1864494Z Entering 'third_party/protobuf/third_party/googletest' 2022-05-18T03:31:42.1900001Z Entering 'third_party/psimd' 2022-05-18T03:31:42.1934368Z Entering 'third_party/pthreadpool' 2022-05-18T03:31:42.1970141Z Entering 'third_party/pybind11' 2022-05-18T03:31:42.2005999Z Entering 'third_party/python-enum' 2022-05-18T03:31:42.2039917Z Entering 'third_party/python-peachpy' 2022-05-18T03:31:42.2074491Z Entering 'third_party/python-six' 2022-05-18T03:31:42.2109578Z Entering 'third_party/sleef' 2022-05-18T03:31:42.2145567Z Entering 'third_party/tbb' 2022-05-18T03:31:42.2182841Z Entering 'third_party/tensorpipe' 2022-05-18T03:31:42.2220147Z Entering 'third_party/tensorpipe/third_party/googletest' 2022-05-18T03:31:42.2254277Z Entering 'third_party/tensorpipe/third_party/libnop' 2022-05-18T03:31:42.2288123Z Entering 'third_party/tensorpipe/third_party/libuv' 2022-05-18T03:31:42.2322894Z Entering 'third_party/tensorpipe/third_party/pybind11' 2022-05-18T03:31:42.2356562Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2022-05-18T03:31:42.2393043Z Entering 'third_party/zstd' 2022-05-18T03:31:42.2437786Z ##[endgroup] 2022-05-18T03:31:42.2438165Z ##[group]Persisting credentials for submodules 2022-05-18T03:31:42.2446182Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'url\.https\:\/\/github\.com\/\.insteadOf' && git config --local --unset-all 'url.https://github.com/.insteadOf' || : 2022-05-18T03:31:42.2719691Z Entering 'android/libs/fbjni' 2022-05-18T03:31:42.2753358Z Entering 'third_party/FP16' 2022-05-18T03:31:42.2787924Z Entering 'third_party/FXdiv' 2022-05-18T03:31:42.2823072Z Entering 'third_party/NNPACK' 2022-05-18T03:31:42.2857951Z Entering 'third_party/QNNPACK' 2022-05-18T03:31:42.2894026Z Entering 'third_party/XNNPACK' 2022-05-18T03:31:42.2938598Z Entering 'third_party/benchmark' 2022-05-18T03:31:42.2973468Z Entering 'third_party/cpuinfo' 2022-05-18T03:31:42.3009699Z Entering 'third_party/cub' 2022-05-18T03:31:42.3043996Z Entering 'third_party/cudnn_frontend' 2022-05-18T03:31:42.3083215Z Entering 'third_party/eigen' 2022-05-18T03:31:42.3119651Z Entering 'third_party/fbgemm' 2022-05-18T03:31:42.3153836Z Entering 'third_party/fbgemm/third_party/asmjit' 2022-05-18T03:31:42.3189110Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2022-05-18T03:31:42.3223035Z Entering 'third_party/fbgemm/third_party/googletest' 2022-05-18T03:31:42.3258257Z Entering 'third_party/flatbuffers' 2022-05-18T03:31:42.3295800Z Entering 'third_party/fmt' 2022-05-18T03:31:42.3331392Z Entering 'third_party/foxi' 2022-05-18T03:31:42.3366160Z Entering 'third_party/gemmlowp/gemmlowp' 2022-05-18T03:31:42.3400736Z Entering 'third_party/gloo' 2022-05-18T03:31:42.3435346Z Entering 
'third_party/googletest' 2022-05-18T03:31:42.3470121Z Entering 'third_party/ideep' 2022-05-18T03:31:42.3503422Z Entering 'third_party/ideep/mkl-dnn' 2022-05-18T03:31:42.3540938Z Entering 'third_party/ideep/mkl-dnn/third_party/oneDNN' 2022-05-18T03:31:42.3581152Z Entering 'third_party/ios-cmake' 2022-05-18T03:31:42.3615644Z Entering 'third_party/kineto' 2022-05-18T03:31:42.3650047Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2022-05-18T03:31:42.3685809Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2022-05-18T03:31:42.3721038Z Entering 'third_party/nccl/nccl' 2022-05-18T03:31:42.3755890Z Entering 'third_party/neon2sse' 2022-05-18T03:31:42.3791356Z Entering 'third_party/onnx' 2022-05-18T03:31:42.3836931Z Entering 'third_party/onnx/third_party/benchmark' 2022-05-18T03:31:42.3871069Z Entering 'third_party/onnx/third_party/pybind11' 2022-05-18T03:31:42.3907228Z Entering 'third_party/onnx-tensorrt' 2022-05-18T03:31:42.3940473Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2022-05-18T03:31:42.3979370Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2022-05-18T03:31:42.4013117Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2022-05-18T03:31:42.4047828Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2022-05-18T03:31:42.4086223Z Entering 'third_party/pocketfft' 2022-05-18T03:31:42.4121507Z Entering 'third_party/protobuf' 2022-05-18T03:31:42.4159841Z Entering 'third_party/protobuf/third_party/benchmark' 2022-05-18T03:31:42.4193021Z Entering 'third_party/protobuf/third_party/googletest' 2022-05-18T03:31:42.4228524Z Entering 'third_party/psimd' 2022-05-18T03:31:42.4264111Z Entering 'third_party/pthreadpool' 2022-05-18T03:31:42.4297877Z Entering 'third_party/pybind11' 2022-05-18T03:31:42.4332566Z Entering 'third_party/python-enum' 2022-05-18T03:31:42.4367893Z Entering 'third_party/python-peachpy' 2022-05-18T03:31:42.4402279Z Entering 'third_party/python-six' 2022-05-18T03:31:42.4436126Z Entering 'third_party/sleef' 2022-05-18T03:31:42.4470314Z Entering 'third_party/tbb' 2022-05-18T03:31:42.4506149Z Entering 'third_party/tensorpipe' 2022-05-18T03:31:42.4540613Z Entering 'third_party/tensorpipe/third_party/googletest' 2022-05-18T03:31:42.4574650Z Entering 'third_party/tensorpipe/third_party/libnop' 2022-05-18T03:31:42.4607734Z Entering 'third_party/tensorpipe/third_party/libuv' 2022-05-18T03:31:42.4642374Z Entering 'third_party/tensorpipe/third_party/pybind11' 2022-05-18T03:31:42.4674968Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2022-05-18T03:31:42.4711267Z Entering 'third_party/zstd' 2022-05-18T03:31:42.4757123Z [command]/usr/bin/git submodule foreach --recursive git config --local 'http.https://github.com/.extraheader' 'AUTHORIZATION: basic ***' && git config --local --show-origin --name-only --get-regexp remote.origin.url 2022-05-18T03:31:42.5027636Z Entering 'android/libs/fbjni' 2022-05-18T03:31:42.5061837Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/android/libs/fbjni/config remote.origin.url 2022-05-18T03:31:42.5076046Z Entering 'third_party/FP16' 2022-05-18T03:31:42.5108374Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FP16/config remote.origin.url 2022-05-18T03:31:42.5122524Z Entering 'third_party/FXdiv' 2022-05-18T03:31:42.5154930Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FXdiv/config remote.origin.url 
2022-05-18T03:31:42.5168520Z Entering 'third_party/NNPACK' 2022-05-18T03:31:42.5202803Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK/config remote.origin.url 2022-05-18T03:31:42.5216708Z Entering 'third_party/QNNPACK' 2022-05-18T03:31:42.5249313Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/QNNPACK/config remote.origin.url 2022-05-18T03:31:42.5263712Z Entering 'third_party/XNNPACK' 2022-05-18T03:31:42.5296157Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/XNNPACK/config remote.origin.url 2022-05-18T03:31:42.5321012Z Entering 'third_party/benchmark' 2022-05-18T03:31:42.5352923Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/benchmark/config remote.origin.url 2022-05-18T03:31:42.5366509Z Entering 'third_party/cpuinfo' 2022-05-18T03:31:42.5398298Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cpuinfo/config remote.origin.url 2022-05-18T03:31:42.5412912Z Entering 'third_party/cub' 2022-05-18T03:31:42.5445127Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cub/config remote.origin.url 2022-05-18T03:31:42.5458917Z Entering 'third_party/cudnn_frontend' 2022-05-18T03:31:42.5490845Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cudnn_frontend/config remote.origin.url 2022-05-18T03:31:42.5510467Z Entering 'third_party/eigen' 2022-05-18T03:31:42.5542490Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/eigen/config remote.origin.url 2022-05-18T03:31:42.5559219Z Entering 'third_party/fbgemm' 2022-05-18T03:31:42.5590965Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/config remote.origin.url 2022-05-18T03:31:42.5605640Z Entering 'third_party/fbgemm/third_party/asmjit' 2022-05-18T03:31:42.5636909Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/asmjit/config remote.origin.url 2022-05-18T03:31:42.5650268Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2022-05-18T03:31:42.5682003Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/cpuinfo/config remote.origin.url 2022-05-18T03:31:42.5695928Z Entering 'third_party/fbgemm/third_party/googletest' 2022-05-18T03:31:42.5727867Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/googletest/config remote.origin.url 2022-05-18T03:31:42.5743154Z Entering 'third_party/flatbuffers' 2022-05-18T03:31:42.5775188Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flatbuffers/config remote.origin.url 2022-05-18T03:31:42.5791012Z Entering 'third_party/fmt' 2022-05-18T03:31:42.5823208Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fmt/config remote.origin.url 2022-05-18T03:31:42.5837274Z Entering 'third_party/foxi' 2022-05-18T03:31:42.5869784Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/foxi/config remote.origin.url 2022-05-18T03:31:42.5884494Z Entering 'third_party/gemmlowp/gemmlowp' 2022-05-18T03:31:42.5917655Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/gemmlowp/gemmlowp/config remote.origin.url 2022-05-18T03:31:42.5931289Z Entering 'third_party/gloo' 2022-05-18T03:31:42.5964502Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/gloo/config remote.origin.url 2022-05-18T03:31:42.5978594Z Entering 'third_party/googletest' 2022-05-18T03:31:42.6012896Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/googletest/config remote.origin.url 2022-05-18T03:31:42.6027174Z Entering 'third_party/ideep' 2022-05-18T03:31:42.6059488Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/config remote.origin.url 2022-05-18T03:31:42.6072436Z Entering 'third_party/ideep/mkl-dnn' 2022-05-18T03:31:42.6104717Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/modules/mkl-dnn/config remote.origin.url 2022-05-18T03:31:42.6120680Z Entering 'third_party/ideep/mkl-dnn/third_party/oneDNN' 2022-05-18T03:31:42.6152543Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/modules/mkl-dnn/modules/third_party/oneDNN/config remote.origin.url 2022-05-18T03:31:42.6173551Z Entering 'third_party/ios-cmake' 2022-05-18T03:31:42.6206886Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ios-cmake/config remote.origin.url 2022-05-18T03:31:42.6220479Z Entering 'third_party/kineto' 2022-05-18T03:31:42.6252781Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/config remote.origin.url 2022-05-18T03:31:42.6266457Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2022-05-18T03:31:42.6298559Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/fmt/config remote.origin.url 2022-05-18T03:31:42.6312478Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2022-05-18T03:31:42.6344024Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/googletest/config remote.origin.url 2022-05-18T03:31:42.6359700Z Entering 'third_party/nccl/nccl' 2022-05-18T03:31:42.6397759Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/nccl/nccl/config remote.origin.url 2022-05-18T03:31:42.6409403Z Entering 'third_party/neon2sse' 2022-05-18T03:31:42.6441847Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/neon2sse/config remote.origin.url 2022-05-18T03:31:42.6454797Z Entering 'third_party/onnx' 2022-05-18T03:31:42.6488143Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/config remote.origin.url 2022-05-18T03:31:42.6512591Z Entering 'third_party/onnx/third_party/benchmark' 2022-05-18T03:31:42.6545298Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/modules/third_party/benchmark/config remote.origin.url 2022-05-18T03:31:42.6559717Z Entering 'third_party/onnx/third_party/pybind11' 2022-05-18T03:31:42.6591494Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/modules/third_party/pybind11/config remote.origin.url 2022-05-18T03:31:42.6607848Z Entering 'third_party/onnx-tensorrt' 2022-05-18T03:31:42.6639836Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx-tensorrt/config remote.origin.url 2022-05-18T03:31:42.6654049Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2022-05-18T03:31:42.6686093Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx-tensorrt/modules/third_party/onnx/config remote.origin.url 
2022-05-18T03:31:42.6705428Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2022-05-18T03:31:42.6737457Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx-tensorrt/modules/third_party/onnx/modules/third_party/benchmark/config remote.origin.url 2022-05-18T03:31:42.6751516Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2022-05-18T03:31:42.6783955Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx-tensorrt/modules/third_party/onnx/modules/third_party/pybind11/config remote.origin.url 2022-05-18T03:31:42.6797730Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2022-05-18T03:31:42.6831404Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx-tensorrt/modules/third_party/onnx/modules/third_party/pybind11/modules/tools/clang/config remote.origin.url 2022-05-18T03:31:42.6851030Z Entering 'third_party/pocketfft' 2022-05-18T03:31:42.6883333Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/pocketfft/config remote.origin.url 2022-05-18T03:31:42.6897382Z Entering 'third_party/protobuf' 2022-05-18T03:31:42.6929501Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/config remote.origin.url 2022-05-18T03:31:42.6946527Z Entering 'third_party/protobuf/third_party/benchmark' 2022-05-18T03:31:42.6980135Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/benchmark/config remote.origin.url 2022-05-18T03:31:42.6994293Z Entering 'third_party/protobuf/third_party/googletest' 2022-05-18T03:31:42.7026993Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/googletest/config remote.origin.url 2022-05-18T03:31:42.7043751Z Entering 'third_party/psimd' 2022-05-18T03:31:42.7075793Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/psimd/config remote.origin.url 2022-05-18T03:31:42.7090491Z Entering 'third_party/pthreadpool' 2022-05-18T03:31:42.7123491Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/pthreadpool/config remote.origin.url 2022-05-18T03:31:42.7137428Z Entering 'third_party/pybind11' 2022-05-18T03:31:42.7169385Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/pybind11/config remote.origin.url 2022-05-18T03:31:42.7184235Z Entering 'third_party/python-enum' 2022-05-18T03:31:42.7217109Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/python-enum/config remote.origin.url 2022-05-18T03:31:42.7231540Z Entering 'third_party/python-peachpy' 2022-05-18T03:31:42.7263742Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/python-peachpy/config remote.origin.url 2022-05-18T03:31:42.7277994Z Entering 'third_party/python-six' 2022-05-18T03:31:42.7310759Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/python-six/config remote.origin.url 2022-05-18T03:31:42.7325878Z Entering 'third_party/sleef' 2022-05-18T03:31:42.7357560Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/sleef/config remote.origin.url 2022-05-18T03:31:42.7371401Z Entering 'third_party/tbb' 2022-05-18T03:31:42.7403859Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tbb/config 
remote.origin.url 2022-05-18T03:31:42.7420333Z Entering 'third_party/tensorpipe' 2022-05-18T03:31:42.7453976Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/config remote.origin.url 2022-05-18T03:31:42.7468334Z Entering 'third_party/tensorpipe/third_party/googletest' 2022-05-18T03:31:42.7500943Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/googletest/config remote.origin.url 2022-05-18T03:31:42.7516133Z Entering 'third_party/tensorpipe/third_party/libnop' 2022-05-18T03:31:42.7548314Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libnop/config remote.origin.url 2022-05-18T03:31:42.7563678Z Entering 'third_party/tensorpipe/third_party/libuv' 2022-05-18T03:31:42.7595186Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libuv/config remote.origin.url 2022-05-18T03:31:42.7609556Z Entering 'third_party/tensorpipe/third_party/pybind11' 2022-05-18T03:31:42.7642583Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/config remote.origin.url 2022-05-18T03:31:42.7655844Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2022-05-18T03:31:42.7688748Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/modules/tools/clang/config remote.origin.url 2022-05-18T03:31:42.7705402Z Entering 'third_party/zstd' 2022-05-18T03:31:42.7737715Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/zstd/config remote.origin.url 2022-05-18T03:31:42.8503843Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'git@github.com:' 2022-05-18T03:31:42.8779167Z Entering 'android/libs/fbjni' 2022-05-18T03:31:42.8814704Z Entering 'third_party/FP16' 2022-05-18T03:31:42.8850290Z Entering 'third_party/FXdiv' 2022-05-18T03:31:42.8886020Z Entering 'third_party/NNPACK' 2022-05-18T03:31:42.8922100Z Entering 'third_party/QNNPACK' 2022-05-18T03:31:42.8957572Z Entering 'third_party/XNNPACK' 2022-05-18T03:31:42.9003400Z Entering 'third_party/benchmark' 2022-05-18T03:31:42.9039227Z Entering 'third_party/cpuinfo' 2022-05-18T03:31:42.9074757Z Entering 'third_party/cub' 2022-05-18T03:31:42.9110096Z Entering 'third_party/cudnn_frontend' 2022-05-18T03:31:42.9150294Z Entering 'third_party/eigen' 2022-05-18T03:31:42.9187193Z Entering 'third_party/fbgemm' 2022-05-18T03:31:42.9222793Z Entering 'third_party/fbgemm/third_party/asmjit' 2022-05-18T03:31:42.9260977Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2022-05-18T03:31:42.9296201Z Entering 'third_party/fbgemm/third_party/googletest' 2022-05-18T03:31:42.9332317Z Entering 'third_party/flatbuffers' 2022-05-18T03:31:42.9370577Z Entering 'third_party/fmt' 2022-05-18T03:31:42.9405793Z Entering 'third_party/foxi' 2022-05-18T03:31:42.9441212Z Entering 'third_party/gemmlowp/gemmlowp' 2022-05-18T03:31:42.9476013Z Entering 'third_party/gloo' 2022-05-18T03:31:42.9512550Z Entering 'third_party/googletest' 2022-05-18T03:31:42.9548459Z Entering 'third_party/ideep' 2022-05-18T03:31:42.9583462Z Entering 'third_party/ideep/mkl-dnn' 2022-05-18T03:31:42.9620016Z Entering 'third_party/ideep/mkl-dnn/third_party/oneDNN' 2022-05-18T03:31:42.9661050Z Entering 'third_party/ios-cmake' 2022-05-18T03:31:42.9696463Z Entering 
'third_party/kineto' 2022-05-18T03:31:42.9731717Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2022-05-18T03:31:42.9766800Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2022-05-18T03:31:42.9802886Z Entering 'third_party/nccl/nccl' 2022-05-18T03:31:42.9837220Z Entering 'third_party/neon2sse' 2022-05-18T03:31:42.9871738Z Entering 'third_party/onnx' 2022-05-18T03:31:42.9918154Z Entering 'third_party/onnx/third_party/benchmark' 2022-05-18T03:31:42.9952951Z Entering 'third_party/onnx/third_party/pybind11' 2022-05-18T03:31:42.9989225Z Entering 'third_party/onnx-tensorrt' 2022-05-18T03:31:43.0024230Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2022-05-18T03:31:43.0063450Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2022-05-18T03:31:43.0098849Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2022-05-18T03:31:43.0133501Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2022-05-18T03:31:43.0173627Z Entering 'third_party/pocketfft' 2022-05-18T03:31:43.0208700Z Entering 'third_party/protobuf' 2022-05-18T03:31:43.0248293Z Entering 'third_party/protobuf/third_party/benchmark' 2022-05-18T03:31:43.0283529Z Entering 'third_party/protobuf/third_party/googletest' 2022-05-18T03:31:43.0320496Z Entering 'third_party/psimd' 2022-05-18T03:31:43.0355900Z Entering 'third_party/pthreadpool' 2022-05-18T03:31:43.0392150Z Entering 'third_party/pybind11' 2022-05-18T03:31:43.0427909Z Entering 'third_party/python-enum' 2022-05-18T03:31:43.0463455Z Entering 'third_party/python-peachpy' 2022-05-18T03:31:43.0500528Z Entering 'third_party/python-six' 2022-05-18T03:31:43.0535528Z Entering 'third_party/sleef' 2022-05-18T03:31:43.0571229Z Entering 'third_party/tbb' 2022-05-18T03:31:43.0608681Z Entering 'third_party/tensorpipe' 2022-05-18T03:31:43.0644135Z Entering 'third_party/tensorpipe/third_party/googletest' 2022-05-18T03:31:43.0678312Z Entering 'third_party/tensorpipe/third_party/libnop' 2022-05-18T03:31:43.0712941Z Entering 'third_party/tensorpipe/third_party/libuv' 2022-05-18T03:31:43.0747153Z Entering 'third_party/tensorpipe/third_party/pybind11' 2022-05-18T03:31:43.0781175Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2022-05-18T03:31:43.0818529Z Entering 'third_party/zstd' 2022-05-18T03:31:43.0866267Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'org-21003710@github.com:' 2022-05-18T03:31:43.1134115Z Entering 'android/libs/fbjni' 2022-05-18T03:31:43.1169116Z Entering 'third_party/FP16' 2022-05-18T03:31:43.1204347Z Entering 'third_party/FXdiv' 2022-05-18T03:31:43.1238794Z Entering 'third_party/NNPACK' 2022-05-18T03:31:43.1274097Z Entering 'third_party/QNNPACK' 2022-05-18T03:31:43.1309470Z Entering 'third_party/XNNPACK' 2022-05-18T03:31:43.1357362Z Entering 'third_party/benchmark' 2022-05-18T03:31:43.1392893Z Entering 'third_party/cpuinfo' 2022-05-18T03:31:43.1428790Z Entering 'third_party/cub' 2022-05-18T03:31:43.1464581Z Entering 'third_party/cudnn_frontend' 2022-05-18T03:31:43.1504394Z Entering 'third_party/eigen' 2022-05-18T03:31:43.1542055Z Entering 'third_party/fbgemm' 2022-05-18T03:31:43.1578434Z Entering 'third_party/fbgemm/third_party/asmjit' 2022-05-18T03:31:43.1612907Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2022-05-18T03:31:43.1648505Z Entering 'third_party/fbgemm/third_party/googletest' 2022-05-18T03:31:43.1684558Z Entering 'third_party/flatbuffers' 2022-05-18T03:31:43.1721743Z 
Entering 'third_party/fmt' 2022-05-18T03:31:43.1755547Z Entering 'third_party/foxi' 2022-05-18T03:31:43.1790517Z Entering 'third_party/gemmlowp/gemmlowp' 2022-05-18T03:31:43.1824902Z Entering 'third_party/gloo' 2022-05-18T03:31:43.1860716Z Entering 'third_party/googletest' 2022-05-18T03:31:43.1896174Z Entering 'third_party/ideep' 2022-05-18T03:31:43.1930347Z Entering 'third_party/ideep/mkl-dnn' 2022-05-18T03:31:43.1966364Z Entering 'third_party/ideep/mkl-dnn/third_party/oneDNN' 2022-05-18T03:31:43.2007059Z Entering 'third_party/ios-cmake' 2022-05-18T03:31:43.2042011Z Entering 'third_party/kineto' 2022-05-18T03:31:43.2076733Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2022-05-18T03:31:43.2112634Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2022-05-18T03:31:43.2149477Z Entering 'third_party/nccl/nccl' 2022-05-18T03:31:43.2184687Z Entering 'third_party/neon2sse' 2022-05-18T03:31:43.2220591Z Entering 'third_party/onnx' 2022-05-18T03:31:43.2266596Z Entering 'third_party/onnx/third_party/benchmark' 2022-05-18T03:31:43.2302068Z Entering 'third_party/onnx/third_party/pybind11' 2022-05-18T03:31:43.2340376Z Entering 'third_party/onnx-tensorrt' 2022-05-18T03:31:43.2376145Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2022-05-18T03:31:43.2414803Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2022-05-18T03:31:43.2449908Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2022-05-18T03:31:43.2484271Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2022-05-18T03:31:43.2523779Z Entering 'third_party/pocketfft' 2022-05-18T03:31:43.2558371Z Entering 'third_party/protobuf' 2022-05-18T03:31:43.2597771Z Entering 'third_party/protobuf/third_party/benchmark' 2022-05-18T03:31:43.2632437Z Entering 'third_party/protobuf/third_party/googletest' 2022-05-18T03:31:43.2668973Z Entering 'third_party/psimd' 2022-05-18T03:31:43.2706096Z Entering 'third_party/pthreadpool' 2022-05-18T03:31:43.2740380Z Entering 'third_party/pybind11' 2022-05-18T03:31:43.2774735Z Entering 'third_party/python-enum' 2022-05-18T03:31:43.2810003Z Entering 'third_party/python-peachpy' 2022-05-18T03:31:43.2843810Z Entering 'third_party/python-six' 2022-05-18T03:31:43.2878611Z Entering 'third_party/sleef' 2022-05-18T03:31:43.2913851Z Entering 'third_party/tbb' 2022-05-18T03:31:43.2950853Z Entering 'third_party/tensorpipe' 2022-05-18T03:31:43.2985732Z Entering 'third_party/tensorpipe/third_party/googletest' 2022-05-18T03:31:43.3019681Z Entering 'third_party/tensorpipe/third_party/libnop' 2022-05-18T03:31:43.3054585Z Entering 'third_party/tensorpipe/third_party/libuv' 2022-05-18T03:31:43.3089754Z Entering 'third_party/tensorpipe/third_party/pybind11' 2022-05-18T03:31:43.3122718Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2022-05-18T03:31:43.3160596Z Entering 'third_party/zstd' 2022-05-18T03:31:43.3204110Z ##[endgroup] 2022-05-18T03:31:43.3245683Z [command]/usr/bin/git log -1 --format='%H' 2022-05-18T03:31:43.3272798Z '3b2375291aab7b48442f2e6fb1ef66cebc761e24' 2022-05-18T03:31:43.3402480Z Prepare all required actions 2022-05-18T03:31:43.3425860Z ##[group]Run ./.github/actions/setup-linux 2022-05-18T03:31:43.3426071Z env: 2022-05-18T03:31:43.3426212Z IN_CI: 1 2022-05-18T03:31:43.3426371Z IS_GHA: 1 2022-05-18T03:31:43.3426554Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:31:43.3426726Z ##[endgroup] 2022-05-18T03:31:43.3440214Z ##[group]Run set -euo pipefail 2022-05-18T03:31:43.3440453Z set -euo pipefail 
2022-05-18T03:31:43.3440649Z function get_ec2_metadata() { 2022-05-18T03:31:43.3440897Z  # Pulled from instance metadata endpoint for EC2 2022-05-18T03:31:43.3441251Z  # see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html 2022-05-18T03:31:43.3441558Z  category=$1 2022-05-18T03:31:43.3441787Z  curl -fsSL "http://169.254.169.254/latest/meta-data/${category}" 2022-05-18T03:31:43.3442014Z } 2022-05-18T03:31:43.3442234Z echo "ami-id: $(get_ec2_metadata ami-id)" 2022-05-18T03:31:43.3442492Z echo "instance-id: $(get_ec2_metadata instance-id)" 2022-05-18T03:31:43.3442766Z echo "instance-type: $(get_ec2_metadata instance-type)" 2022-05-18T03:31:43.3443015Z echo "system info $(uname -a)" 2022-05-18T03:31:43.3454717Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2022-05-18T03:31:43.3454935Z env: 2022-05-18T03:31:43.3455098Z IN_CI: 1 2022-05-18T03:31:43.3455242Z IS_GHA: 1 2022-05-18T03:31:43.3455420Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:31:43.3455611Z ##[endgroup] 2022-05-18T03:31:43.3541370Z ami-id: ami-096198a0bccc6bad4 2022-05-18T03:31:43.3592304Z instance-id: i-0dae033c09f631bd6 2022-05-18T03:31:43.3642471Z instance-type: c5.2xlarge 2022-05-18T03:31:43.3648688Z system info Linux ip-10-0-3-68.ec2.internal 4.14.252-195.483.amzn2.x86_64 #1 SMP Mon Nov 1 20:58:46 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux 2022-05-18T03:31:43.3663976Z ##[group]Run if systemctl is-active --quiet docker; then 2022-05-18T03:31:43.3664280Z if systemctl is-active --quiet docker; then 2022-05-18T03:31:43.3664533Z  echo "Docker daemon is running..."; 2022-05-18T03:31:43.3664740Z else 2022-05-18T03:31:43.3665087Z  echo "Starting docker deamon..." && sudo systemctl start docker; 2022-05-18T03:31:43.3665322Z fi 2022-05-18T03:31:43.3676974Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2022-05-18T03:31:43.3677189Z env: 2022-05-18T03:31:43.3677354Z IN_CI: 1 2022-05-18T03:31:43.3677518Z IS_GHA: 1 2022-05-18T03:31:43.3677686Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:31:43.3677876Z ##[endgroup] 2022-05-18T03:31:43.3782826Z Docker daemon is running... 2022-05-18T03:31:43.3798005Z ##[group]Run AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\") 2022-05-18T03:31:43.3798370Z AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\") 2022-05-18T03:31:43.3798650Z retry () { "$@" || (sleep 1 && "$@") || (sleep 2 && "$@") } 2022-05-18T03:31:43.3799043Z retry aws ecr get-login*** "$AWS_DEFAULT_REGION" | docker login --username AWS \ 2022-05-18T03:31:43.3799584Z  --password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com" 2022-05-18T03:31:43.3810535Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2022-05-18T03:31:43.3810760Z env: 2022-05-18T03:31:43.3810904Z IN_CI: 1 2022-05-18T03:31:43.3811068Z IS_GHA: 1 2022-05-18T03:31:43.3811251Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:31:43.3811437Z AWS_RETRY_MODE: standard 2022-05-18T03:31:43.3811631Z AWS_MAX_ATTEMPTS: 5 2022-05-18T03:31:43.3811831Z AWS_DEFAULT_REGION: us-east-1 2022-05-18T03:31:43.3812019Z ##[endgroup] 2022-05-18T03:31:44.3484070Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2022-05-18T03:31:44.3484603Z Configure a credential helper to remove this warning. 
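The setup-linux step above relies on two tiny shell helpers that are echoed into the log verbatim: get_ec2_metadata, which reads one field from the EC2 instance metadata endpoint at 169.254.169.254, and retry, which re-runs a command up to three times with 1s and 2s pauses (the same one-liner wraps the ECR login in this step and the docker pull a little further down). Below is a minimal standalone sketch of that pattern, reconstructed from the commands shown in this log; it assumes bash and curl on an EC2 instance, and combining the two helpers in one script (wrapping the metadata queries in retry) is illustrative rather than something the workflow itself does.

    #!/usr/bin/env bash
    # Minimal sketch of the two helpers echoed by the setup-linux step above.
    # Assumes bash and curl on an EC2 instance (the 169.254.169.254 metadata
    # service is only reachable from inside EC2).
    set -euo pipefail

    # Run a command, retrying up to two more times with 1s and 2s pauses.
    retry () { "$@" || (sleep 1 && "$@") || (sleep 2 && "$@") }

    # Read one field from the EC2 instance metadata service (IMDS), e.g.
    # ami-id, instance-id or instance-type.
    # https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
    get_ec2_metadata() {
      category=$1
      curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
    }

    # Print the same fields the workflow logs above; wrapping them in retry
    # here is illustrative (the workflow only retries the ECR/docker calls).
    for category in ami-id instance-id instance-type; do
      echo "${category}: $(retry get_ec2_metadata "${category}")"
    done

Printing the AMI, instance id, and instance type up front, as the step does here (ami-096198a0bccc6bad4 / i-0dae033c09f631bd6 / c5.2xlarge), makes it possible to tell from the log alone which runner image and hardware a failing job ran on.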
See 2022-05-18T03:31:44.3485431Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2022-05-18T03:31:44.3485738Z 2022-05-18T03:31:44.3485843Z Login Succeeded 2022-05-18T03:31:44.3521474Z ##[group]Run env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}" 2022-05-18T03:31:44.3521779Z env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}" 2022-05-18T03:31:44.3532978Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2022-05-18T03:31:44.3533207Z env: 2022-05-18T03:31:44.3533369Z IN_CI: 1 2022-05-18T03:31:44.3533521Z IS_GHA: 1 2022-05-18T03:31:44.3533711Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:31:44.3533897Z ##[endgroup] 2022-05-18T03:31:44.3585583Z Prepare all required actions 2022-05-18T03:31:44.3585825Z Getting action download info 2022-05-18T03:31:44.4926325Z Download action repository 'seemethere/add-github-ssh-key@v1' (SHA:1ecffedb1e192a50aa67dba2f0e048e5d3bfa144) 2022-05-18T03:31:44.6010343Z ##[group]Run ./.github/actions/setup-ssh 2022-05-18T03:31:44.6010552Z with: 2022-05-18T03:31:44.6010855Z github-secret: *** 2022-05-18T03:31:44.6011027Z env: 2022-05-18T03:31:44.6011184Z IN_CI: 1 2022-05-18T03:31:44.6011330Z IS_GHA: 1 2022-05-18T03:31:44.6011507Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:31:44.6011690Z ##[endgroup] 2022-05-18T03:31:44.6030358Z ##[group]Run seemethere/add-github-ssh-key@v1 2022-05-18T03:31:44.6030577Z with: 2022-05-18T03:31:44.6030864Z GITHUB_TOKEN: *** 2022-05-18T03:31:44.6031067Z activate-with-label: false 2022-05-18T03:31:44.6031251Z label: with-ssh 2022-05-18T03:31:44.6031448Z remove-existing-keys: true 2022-05-18T03:31:44.6031632Z env: 2022-05-18T03:31:44.6031771Z IN_CI: 1 2022-05-18T03:31:44.6031971Z IS_GHA: 1 2022-05-18T03:31:44.6032140Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:31:44.6032333Z ##[endgroup] 2022-05-18T03:31:44.6541997Z Not on pull request and ciflow reference could not be extracted, skipping adding ssh keys 2022-05-18T03:31:44.6588505Z Prepare all required actions 2022-05-18T03:31:44.6604391Z ##[group]Run ./.github/actions/pull-docker-image 2022-05-18T03:31:44.6604603Z with: 2022-05-18T03:31:44.6604951Z docker-image: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3.7-gcc5.4:6deab82db6a72ca54cd3e3322ee4f13864536734 2022-05-18T03:31:44.6605273Z env: 2022-05-18T03:31:44.6605427Z IN_CI: 1 2022-05-18T03:31:44.6605586Z IS_GHA: 1 2022-05-18T03:31:44.6605753Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:31:44.6605939Z ##[endgroup] 2022-05-18T03:31:44.6616792Z ##[group]Run retry () { "$@" || (sleep 1 && "$@") || (sleep 2 && "$@") } 2022-05-18T03:31:44.6617061Z retry () { "$@" || (sleep 1 && "$@") || (sleep 2 && "$@") } 2022-05-18T03:31:44.6617302Z retry docker pull "${DOCKER_IMAGE}" 2022-05-18T03:31:44.6628467Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2022-05-18T03:31:44.6628787Z env: 2022-05-18T03:31:44.6628942Z IN_CI: 1 2022-05-18T03:31:44.6629120Z IS_GHA: 1 2022-05-18T03:31:44.6629297Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:31:44.6629651Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3.7-gcc5.4:6deab82db6a72ca54cd3e3322ee4f13864536734 2022-05-18T03:31:44.6629993Z ##[endgroup] 2022-05-18T03:31:44.8704455Z 6deab82db6a72ca54cd3e3322ee4f13864536734: Pulling from pytorch/pytorch-linux-xenial-py3.7-gcc5.4 2022-05-18T03:31:44.8715719Z 58690f9b18fc: Pulling fs layer 2022-05-18T03:31:44.8716485Z b51569e7c507: Pulling fs layer 2022-05-18T03:31:44.8716839Z da8ef40b9eca: Pulling fs layer 2022-05-18T03:31:44.8717230Z fb15d46c38dc: 
Pulling fs layer 2022-05-18T03:31:44.8717607Z 5ba54a79e67d: Pulling fs layer 2022-05-18T03:31:44.8717955Z 0a20b8d84c46: Pulling fs layer 2022-05-18T03:31:44.8718319Z 5877c23144ae: Pulling fs layer 2022-05-18T03:31:44.8718628Z d3e83054f718: Pulling fs layer 2022-05-18T03:31:44.8718987Z 140e7e919a6a: Pulling fs layer 2022-05-18T03:31:44.8719536Z 00fe5dff19d6: Pulling fs layer 2022-05-18T03:31:44.8719932Z 5253901bce0a: Pulling fs layer 2022-05-18T03:31:44.8720285Z f2ad3e4779d8: Pulling fs layer 2022-05-18T03:31:44.8720638Z fb15d46c38dc: Waiting 2022-05-18T03:31:44.8720981Z d1935ca92dc4: Pulling fs layer 2022-05-18T03:31:44.8721336Z 0a20b8d84c46: Waiting 2022-05-18T03:31:44.8721799Z 370a68d9a452: Pulling fs layer 2022-05-18T03:31:44.8722207Z 92c75209b8cf: Pulling fs layer 2022-05-18T03:31:44.8722677Z 5ba54a79e67d: Waiting 2022-05-18T03:31:44.8722860Z fdd1b5b4d4e2: Pulling fs layer 2022-05-18T03:31:44.8723192Z 5877c23144ae: Waiting 2022-05-18T03:31:44.8723524Z 641ed2a0ee80: Pulling fs layer 2022-05-18T03:31:44.8723734Z d3e83054f718: Waiting 2022-05-18T03:31:44.8723938Z 17ceb3758ec4: Pulling fs layer 2022-05-18T03:31:44.8724148Z 81ca05b8cd5a: Pulling fs layer 2022-05-18T03:31:44.8724319Z 140e7e919a6a: Waiting 2022-05-18T03:31:44.8724549Z d124e94e1971: Pulling fs layer 2022-05-18T03:31:44.8724738Z f2ad3e4779d8: Waiting 2022-05-18T03:31:44.8724901Z 00fe5dff19d6: Waiting 2022-05-18T03:31:44.8725113Z d4bdbe109a27: Pulling fs layer 2022-05-18T03:31:44.8725362Z 070594bc61e9: Pulling fs layer 2022-05-18T03:31:44.8725551Z 5253901bce0a: Waiting 2022-05-18T03:31:44.8725841Z 2c9ca9e145e6: Pulling fs layer 2022-05-18T03:31:44.8726118Z d1935ca92dc4: Waiting 2022-05-18T03:31:44.8726405Z 9d031d383e17: Pulling fs layer 2022-05-18T03:31:44.8726700Z 62e4a89ba8d3: Pulling fs layer 2022-05-18T03:31:44.8726971Z 171756362e4e: Pulling fs layer 2022-05-18T03:31:44.8727248Z 370a68d9a452: Waiting 2022-05-18T03:31:44.8727590Z ea726e35e256: Pulling fs layer 2022-05-18T03:31:44.8727916Z 81ca05b8cd5a: Waiting 2022-05-18T03:31:44.8728131Z 721b024a3031: Pulling fs layer 2022-05-18T03:31:44.8728313Z 92c75209b8cf: Waiting 2022-05-18T03:31:44.8728499Z 206a6f5bfe61: Pulling fs layer 2022-05-18T03:31:44.8728684Z d124e94e1971: Waiting 2022-05-18T03:31:44.8728853Z 45b7b7460778: Pulling fs layer 2022-05-18T03:31:44.8729054Z 28ef77622cff: Pulling fs layer 2022-05-18T03:31:44.8729281Z fdd1b5b4d4e2: Waiting 2022-05-18T03:31:44.8729734Z ead995a9636d: Pulling fs layer 2022-05-18T03:31:44.8730038Z d4bdbe109a27: Waiting 2022-05-18T03:31:44.8730346Z 55366a8087ad: Pulling fs layer 2022-05-18T03:31:44.8730627Z 641ed2a0ee80: Waiting 2022-05-18T03:31:44.8730936Z 17ceb3758ec4: Waiting 2022-05-18T03:31:44.8731145Z a01ab60b3807: Pulling fs layer 2022-05-18T03:31:44.8731327Z c9a9d301cafd: Pulling fs layer 2022-05-18T03:31:44.8731528Z 070594bc61e9: Waiting 2022-05-18T03:31:44.8731795Z 275239b0f78d: Pulling fs layer 2022-05-18T03:31:44.8732057Z 62e4a89ba8d3: Waiting 2022-05-18T03:31:44.8732373Z 3550d2a21107: Pulling fs layer 2022-05-18T03:31:44.8732696Z 2c9ca9e145e6: Waiting 2022-05-18T03:31:44.8732994Z 586f2f9bc005: Pulling fs layer 2022-05-18T03:31:44.8733237Z 11fd06f0243a: Pulling fs layer 2022-05-18T03:31:44.8733440Z 9d031d383e17: Waiting 2022-05-18T03:31:44.8733607Z 477485598060: Pulling fs layer 2022-05-18T03:31:44.8733934Z 171756362e4e: Waiting 2022-05-18T03:31:44.8734122Z aaeef6a5d26a: Pulling fs layer 2022-05-18T03:31:44.8734313Z ea726e35e256: Waiting 2022-05-18T03:31:44.8734485Z e9b66d11d0f7: Pulling fs layer 2022-05-18T03:31:44.8734675Z 
242315b336c5: Pulling fs layer 2022-05-18T03:31:44.8734853Z 721b024a3031: Waiting 2022-05-18T03:31:44.8735022Z 7e414c970966: Pulling fs layer 2022-05-18T03:31:44.8735213Z 81551a9ff750: Pulling fs layer 2022-05-18T03:31:44.8735394Z 55366a8087ad: Waiting 2022-05-18T03:31:44.8735551Z ead995a9636d: Waiting 2022-05-18T03:31:44.8735725Z c9a9d301cafd: Waiting 2022-05-18T03:31:44.8735894Z 206a6f5bfe61: Waiting 2022-05-18T03:31:44.8736063Z e673be102bed: Pulling fs layer 2022-05-18T03:31:44.8736441Z 4f94094bd9bd: Pulling fs layer 2022-05-18T03:31:44.8736733Z a01ab60b3807: Waiting 2022-05-18T03:31:44.8736891Z 45b7b7460778: Waiting 2022-05-18T03:31:44.8737072Z 98756dfdd888: Pulling fs layer 2022-05-18T03:31:44.8737274Z 275239b0f78d: Waiting 2022-05-18T03:31:44.8737457Z 586f2f9bc005: Waiting 2022-05-18T03:31:44.8737643Z 1238debabecc: Pulling fs layer 2022-05-18T03:31:44.8737821Z 3550d2a21107: Waiting 2022-05-18T03:31:44.8738007Z cefaab4f809a: Pulling fs layer 2022-05-18T03:31:44.8738242Z ebced6807dae: Pulling fs layer 2022-05-18T03:31:44.8742479Z aaeef6a5d26a: Waiting 2022-05-18T03:31:44.8743036Z 912549afe5e1: Pulling fs layer 2022-05-18T03:31:44.8743372Z 215c8a788eb9: Pulling fs layer 2022-05-18T03:31:44.8743719Z 242315b336c5: Waiting 2022-05-18T03:31:44.8744022Z 11fd06f0243a: Waiting 2022-05-18T03:31:44.8744342Z e9b66d11d0f7: Waiting 2022-05-18T03:31:44.8744647Z 477485598060: Waiting 2022-05-18T03:31:44.8744931Z 7e414c970966: Waiting 2022-05-18T03:31:44.8745332Z e673be102bed: Waiting 2022-05-18T03:31:44.8745661Z 61717ae21dd2: Pulling fs layer 2022-05-18T03:31:44.8745957Z 81551a9ff750: Waiting 2022-05-18T03:31:44.8746277Z 4f94094bd9bd: Waiting 2022-05-18T03:31:44.8746574Z 98756dfdd888: Waiting 2022-05-18T03:31:44.8746739Z 1238debabecc: Waiting 2022-05-18T03:31:44.8746927Z 61717ae21dd2: Waiting 2022-05-18T03:31:44.8747123Z 912549afe5e1: Waiting 2022-05-18T03:31:44.8747282Z ebced6807dae: Waiting 2022-05-18T03:31:44.8747468Z cefaab4f809a: Waiting 2022-05-18T03:31:44.8747645Z 215c8a788eb9: Waiting 2022-05-18T03:31:44.9408892Z b51569e7c507: Verifying Checksum 2022-05-18T03:31:44.9409293Z b51569e7c507: Download complete 2022-05-18T03:31:44.9432289Z da8ef40b9eca: Verifying Checksum 2022-05-18T03:31:44.9432655Z da8ef40b9eca: Download complete 2022-05-18T03:31:45.0062246Z fb15d46c38dc: Verifying Checksum 2022-05-18T03:31:45.0063095Z fb15d46c38dc: Download complete 2022-05-18T03:31:45.0172829Z 5ba54a79e67d: Verifying Checksum 2022-05-18T03:31:45.1405743Z 5ba54a79e67d: Download complete 2022-05-18T03:31:45.1406088Z 5877c23144ae: Verifying Checksum 2022-05-18T03:31:45.1406382Z 5877c23144ae: Download complete 2022-05-18T03:31:45.2144885Z d3e83054f718: Verifying Checksum 2022-05-18T03:31:45.2145445Z d3e83054f718: Download complete 2022-05-18T03:31:45.2909484Z 140e7e919a6a: Verifying Checksum 2022-05-18T03:31:45.2909981Z 140e7e919a6a: Download complete 2022-05-18T03:31:45.3671636Z 00fe5dff19d6: Verifying Checksum 2022-05-18T03:31:45.3672236Z 00fe5dff19d6: Download complete 2022-05-18T03:31:45.3824379Z 58690f9b18fc: Verifying Checksum 2022-05-18T03:31:45.3824845Z 58690f9b18fc: Download complete 2022-05-18T03:31:45.4277212Z 5253901bce0a: Verifying Checksum 2022-05-18T03:31:45.4277489Z 5253901bce0a: Download complete 2022-05-18T03:31:45.4492226Z f2ad3e4779d8: Verifying Checksum 2022-05-18T03:31:45.4492661Z f2ad3e4779d8: Download complete 2022-05-18T03:31:45.4975575Z d1935ca92dc4: Verifying Checksum 2022-05-18T03:31:45.4975906Z d1935ca92dc4: Download complete 2022-05-18T03:31:45.5635964Z 92c75209b8cf: Verifying Checksum 
2022-05-18T03:31:45.6281169Z fdd1b5b4d4e2: Download complete 2022-05-18T03:31:45.8450915Z 370a68d9a452: Verifying Checksum 2022-05-18T03:31:45.8451363Z 370a68d9a452: Download complete 2022-05-18T03:31:45.9140641Z 17ceb3758ec4: Verifying Checksum 2022-05-18T03:31:45.9141105Z 17ceb3758ec4: Download complete 2022-05-18T03:31:45.9771507Z 81ca05b8cd5a: Download complete 2022-05-18T03:31:46.2516194Z d124e94e1971: Download complete 2022-05-18T03:31:46.3305575Z d4bdbe109a27: Download complete 2022-05-18T03:31:46.4103853Z 070594bc61e9: Verifying Checksum 2022-05-18T03:31:46.4105088Z 070594bc61e9: Download complete 2022-05-18T03:31:46.5101459Z 2c9ca9e145e6: Verifying Checksum 2022-05-18T03:31:46.5101984Z 2c9ca9e145e6: Download complete 2022-05-18T03:31:46.6005013Z 58690f9b18fc: Pull complete 2022-05-18T03:31:46.6961804Z b51569e7c507: Pull complete 2022-05-18T03:31:46.8180827Z da8ef40b9eca: Pull complete 2022-05-18T03:31:46.9211809Z fb15d46c38dc: Pull complete 2022-05-18T03:31:47.0332469Z 5ba54a79e67d: Pull complete 2022-05-18T03:31:47.3336979Z 9d031d383e17: Verifying Checksum 2022-05-18T03:31:47.3337433Z 9d031d383e17: Download complete 2022-05-18T03:31:47.4021563Z 62e4a89ba8d3: Verifying Checksum 2022-05-18T03:31:47.4258914Z 62e4a89ba8d3: Download complete 2022-05-18T03:31:47.4259517Z 0a20b8d84c46: Verifying Checksum 2022-05-18T03:31:47.4259973Z 0a20b8d84c46: Download complete 2022-05-18T03:31:47.4895444Z 171756362e4e: Verifying Checksum 2022-05-18T03:31:47.4896000Z 171756362e4e: Download complete 2022-05-18T03:31:47.5066168Z ea726e35e256: Verifying Checksum 2022-05-18T03:31:47.5066790Z ea726e35e256: Download complete 2022-05-18T03:31:47.5623343Z 721b024a3031: Verifying Checksum 2022-05-18T03:31:47.5623949Z 721b024a3031: Download complete 2022-05-18T03:31:47.6024544Z 206a6f5bfe61: Verifying Checksum 2022-05-18T03:31:47.6025784Z 206a6f5bfe61: Download complete 2022-05-18T03:31:47.6897195Z 28ef77622cff: Verifying Checksum 2022-05-18T03:31:47.6897665Z 28ef77622cff: Download complete 2022-05-18T03:31:47.7620948Z ead995a9636d: Verifying Checksum 2022-05-18T03:31:47.7621518Z ead995a9636d: Download complete 2022-05-18T03:31:47.8742701Z 55366a8087ad: Verifying Checksum 2022-05-18T03:31:47.8743246Z 55366a8087ad: Download complete 2022-05-18T03:31:47.9524316Z a01ab60b3807: Verifying Checksum 2022-05-18T03:31:47.9525224Z a01ab60b3807: Download complete 2022-05-18T03:31:48.0152701Z c9a9d301cafd: Verifying Checksum 2022-05-18T03:31:48.0153228Z c9a9d301cafd: Download complete 2022-05-18T03:31:48.1165763Z 275239b0f78d: Download complete 2022-05-18T03:31:48.1951162Z 3550d2a21107: Download complete 2022-05-18T03:31:48.2779266Z 586f2f9bc005: Download complete 2022-05-18T03:31:48.3527055Z 11fd06f0243a: Download complete 2022-05-18T03:31:48.6954214Z 477485598060: Verifying Checksum 2022-05-18T03:31:48.6954641Z 477485598060: Download complete 2022-05-18T03:31:48.7982136Z aaeef6a5d26a: Download complete 2022-05-18T03:31:48.8121851Z 45b7b7460778: Verifying Checksum 2022-05-18T03:31:48.8122245Z 45b7b7460778: Download complete 2022-05-18T03:31:48.8771808Z e9b66d11d0f7: Verifying Checksum 2022-05-18T03:31:48.8772367Z e9b66d11d0f7: Download complete 2022-05-18T03:31:48.9478697Z 7e414c970966: Verifying Checksum 2022-05-18T03:31:48.9479278Z 7e414c970966: Download complete 2022-05-18T03:31:49.0445506Z 81551a9ff750: Verifying Checksum 2022-05-18T03:31:49.0446045Z 81551a9ff750: Download complete 2022-05-18T03:31:49.1188198Z e673be102bed: Download complete 2022-05-18T03:31:49.1869209Z 4f94094bd9bd: Verifying Checksum 
2022-05-18T03:31:49.1872070Z 4f94094bd9bd: Download complete 2022-05-18T03:31:49.2178615Z 242315b336c5: Verifying Checksum 2022-05-18T03:31:49.2182242Z 242315b336c5: Download complete 2022-05-18T03:31:49.2830842Z 1238debabecc: Verifying Checksum 2022-05-18T03:31:49.2860120Z 1238debabecc: Download complete 2022-05-18T03:31:49.4314973Z 98756dfdd888: Verifying Checksum 2022-05-18T03:31:49.4315267Z 98756dfdd888: Download complete 2022-05-18T03:31:49.5071697Z ebced6807dae: Verifying Checksum 2022-05-18T03:31:49.5072061Z ebced6807dae: Download complete 2022-05-18T03:31:49.5682712Z 912549afe5e1: Verifying Checksum 2022-05-18T03:31:49.5683360Z 912549afe5e1: Download complete 2022-05-18T03:31:49.6339524Z 215c8a788eb9: Verifying Checksum 2022-05-18T03:31:49.6343857Z 215c8a788eb9: Download complete 2022-05-18T03:31:50.2224749Z 61717ae21dd2: Verifying Checksum 2022-05-18T03:31:50.2225435Z 61717ae21dd2: Download complete 2022-05-18T03:31:52.9221171Z cefaab4f809a: Verifying Checksum 2022-05-18T03:31:52.9221481Z cefaab4f809a: Download complete 2022-05-18T03:31:52.9620848Z 0a20b8d84c46: Pull complete 2022-05-18T03:31:53.1453375Z 5877c23144ae: Pull complete 2022-05-18T03:31:53.3754353Z d3e83054f718: Pull complete 2022-05-18T03:31:53.6069142Z 140e7e919a6a: Pull complete 2022-05-18T03:31:53.8287496Z 00fe5dff19d6: Pull complete 2022-05-18T03:31:53.9942288Z 5253901bce0a: Pull complete 2022-05-18T03:31:54.2392430Z f2ad3e4779d8: Pull complete 2022-05-18T03:31:54.4773927Z d1935ca92dc4: Pull complete 2022-05-18T03:31:55.6627548Z 370a68d9a452: Pull complete 2022-05-18T03:31:55.8714309Z 92c75209b8cf: Pull complete 2022-05-18T03:31:56.0590536Z fdd1b5b4d4e2: Pull complete 2022-05-18T03:31:57.9581241Z 641ed2a0ee80: Verifying Checksum 2022-05-18T03:31:57.9581656Z 641ed2a0ee80: Download complete 2022-05-18T03:32:17.6960493Z 641ed2a0ee80: Pull complete 2022-05-18T03:32:18.0125897Z 17ceb3758ec4: Pull complete 2022-05-18T03:32:18.2328396Z 81ca05b8cd5a: Pull complete 2022-05-18T03:32:18.3325487Z d124e94e1971: Pull complete 2022-05-18T03:32:18.5133684Z d4bdbe109a27: Pull complete 2022-05-18T03:32:18.7022577Z 070594bc61e9: Pull complete 2022-05-18T03:32:18.7965178Z 2c9ca9e145e6: Pull complete 2022-05-18T03:32:20.4526073Z 9d031d383e17: Pull complete 2022-05-18T03:32:20.6407652Z 62e4a89ba8d3: Pull complete 2022-05-18T03:32:20.7964385Z 171756362e4e: Pull complete 2022-05-18T03:32:21.0303565Z ea726e35e256: Pull complete 2022-05-18T03:32:21.2504942Z 721b024a3031: Pull complete 2022-05-18T03:32:21.4341341Z 206a6f5bfe61: Pull complete 2022-05-18T03:32:23.5120713Z 45b7b7460778: Pull complete 2022-05-18T03:32:23.7476027Z 28ef77622cff: Pull complete 2022-05-18T03:32:23.9828359Z ead995a9636d: Pull complete 2022-05-18T03:32:24.1961909Z 55366a8087ad: Pull complete 2022-05-18T03:32:24.4217788Z a01ab60b3807: Pull complete 2022-05-18T03:32:24.5788080Z c9a9d301cafd: Pull complete 2022-05-18T03:32:24.7771156Z 275239b0f78d: Pull complete 2022-05-18T03:32:25.0157638Z 3550d2a21107: Pull complete 2022-05-18T03:32:25.2656070Z 586f2f9bc005: Pull complete 2022-05-18T03:32:25.4690365Z 11fd06f0243a: Pull complete 2022-05-18T03:32:25.6725002Z 477485598060: Pull complete 2022-05-18T03:32:25.9224704Z aaeef6a5d26a: Pull complete 2022-05-18T03:32:26.0779481Z e9b66d11d0f7: Pull complete 2022-05-18T03:32:27.2581665Z 242315b336c5: Pull complete 2022-05-18T03:32:27.5263450Z 7e414c970966: Pull complete 2022-05-18T03:32:27.7627931Z 81551a9ff750: Pull complete 2022-05-18T03:32:28.0317689Z e673be102bed: Pull complete 2022-05-18T03:32:28.1604722Z 4f94094bd9bd: Pull 
complete 2022-05-18T03:32:28.4430655Z 98756dfdd888: Pull complete 2022-05-18T03:32:28.5612063Z 1238debabecc: Pull complete 2022-05-18T03:32:32.9150031Z cefaab4f809a: Pull complete 2022-05-18T03:32:33.0107605Z ebced6807dae: Pull complete 2022-05-18T03:32:33.1020840Z 912549afe5e1: Pull complete 2022-05-18T03:32:33.2013666Z 215c8a788eb9: Pull complete 2022-05-18T03:32:34.8169825Z 61717ae21dd2: Pull complete 2022-05-18T03:32:34.8598846Z Digest: sha256:9c228d64aeaa1a84153f684d8bf8d2b818b53df05ec50809bfb8bb625f2aea5c 2022-05-18T03:32:34.8654702Z Status: Downloaded newer image for 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3.7-gcc5.4:6deab82db6a72ca54cd3e3322ee4f13864536734 2022-05-18T03:32:34.8685815Z 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3.7-gcc5.4:6deab82db6a72ca54cd3e3322ee4f13864536734 2022-05-18T03:32:34.8765957Z Prepare all required actions 2022-05-18T03:32:34.8766353Z Getting action download info 2022-05-18T03:32:35.0500963Z Download action repository 'seemethere/download-artifact-s3@v3' (SHA:64048a097659c8ca71ceacbb3c01cee9ed6f1b05) 2022-05-18T03:32:35.2632330Z Download action repository 'actions/download-artifact@v2' (SHA:f023be2c48cc18debc3bacd34cb396e0295e2869) 2022-05-18T03:32:35.3658070Z ##[group]Run ./.github/actions/download-build-artifacts 2022-05-18T03:32:35.3658287Z with: 2022-05-18T03:32:35.3658568Z name: linux-xenial-py3.7-gcc5.4 2022-05-18T03:32:35.3658760Z env: 2022-05-18T03:32:35.3658897Z IN_CI: 1 2022-05-18T03:32:35.3659058Z IS_GHA: 1 2022-05-18T03:32:35.3659238Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:32:35.3659410Z ##[endgroup] 2022-05-18T03:32:35.3682197Z ##[group]Run seemethere/download-artifact-s3@v3 2022-05-18T03:32:35.3682413Z with: 2022-05-18T03:32:35.3682589Z name: linux-xenial-py3.7-gcc5.4 2022-05-18T03:32:35.3682798Z s3-bucket: gha-artifacts 2022-05-18T03:32:35.3683030Z region: us-east-1 2022-05-18T03:32:35.3683181Z env: 2022-05-18T03:32:35.3683332Z IN_CI: 1 2022-05-18T03:32:35.3683487Z IS_GHA: 1 2022-05-18T03:32:35.3683650Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:32:35.3683835Z ##[endgroup] 2022-05-18T03:32:35.8063775Z Found 1 objects with prefix pytorch/pytorch/2342799944/1/linux-xenial-py3.7-gcc5.4/ 2022-05-18T03:32:35.8064572Z Starting download (1/1): /home/ec2-user/actions-runner/_work/pytorch/pytorch/artifacts.zip 2022-05-18T03:32:38.6422888Z Finished download (1/1): /home/ec2-user/actions-runner/_work/pytorch/pytorch/artifacts.zip 2022-05-18T03:32:38.6423168Z 2022-05-18T03:32:38.6427600Z Artifact download has finished successfully 2022-05-18T03:32:38.6531541Z ##[group]Run unzip -o artifacts.zip 2022-05-18T03:32:38.6531776Z unzip -o artifacts.zip 2022-05-18T03:32:38.6542894Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2022-05-18T03:32:38.6543118Z env: 2022-05-18T03:32:38.6543277Z IN_CI: 1 2022-05-18T03:32:38.6543433Z IS_GHA: 1 2022-05-18T03:32:38.6543614Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:32:38.6543807Z ##[endgroup] 2022-05-18T03:32:38.6578845Z Archive: artifacts.zip 2022-05-18T03:32:38.6580373Z creating: dist/ 2022-05-18T03:32:39.3606743Z inflating: dist/torch-1.12.0a0+git3b23752-cp37-cp37m-linux_x86_64.whl 2022-05-18T03:32:39.3607292Z creating: build/custom_test_artifacts/ 2022-05-18T03:32:39.3607855Z creating: build/custom_test_artifacts/custom-op-build/ 2022-05-18T03:32:39.3608450Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/ 2022-05-18T03:32:39.3610086Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeOutput.log 
2022-05-18T03:32:39.3610808Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.12.4/ 2022-05-18T03:32:39.3611589Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.12.4/CMakeSystem.cmake 2022-05-18T03:32:39.3612333Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.12.4/CompilerIdC/ 2022-05-18T03:32:39.3613066Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.12.4/CompilerIdC/tmp/ 2022-05-18T03:32:39.3614137Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.12.4/CompilerIdC/CMakeCCompilerId.c 2022-05-18T03:32:39.3615822Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.12.4/CompilerIdC/a.out 2022-05-18T03:32:39.3616362Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.12.4/CompilerIdCXX/ 2022-05-18T03:32:39.3616819Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.12.4/CompilerIdCXX/tmp/ 2022-05-18T03:32:39.3618326Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.12.4/CompilerIdCXX/CMakeCXXCompilerId.cpp 2022-05-18T03:32:39.3619580Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.12.4/CompilerIdCXX/a.out 2022-05-18T03:32:39.3621187Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.12.4/CMakeDetermineCompilerABI_C.bin 2022-05-18T03:32:39.3621790Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.12.4/CMakeCCompiler.cmake 2022-05-18T03:32:39.3623550Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.12.4/CMakeDetermineCompilerABI_CXX.bin 2022-05-18T03:32:39.3624452Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.12.4/CMakeCXXCompiler.cmake 2022-05-18T03:32:39.3625272Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeTmp/ 2022-05-18T03:32:39.3626160Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/feature_tests.c 2022-05-18T03:32:39.3626902Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/feature_tests.cxx 2022-05-18T03:32:39.3627792Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/feature_tests.bin 2022-05-18T03:32:39.3628556Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeError.log 2022-05-18T03:32:39.3629153Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/cmake.check_cache 2022-05-18T03:32:39.3629620Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/ 2022-05-18T03:32:39.3647695Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/depend.make 2022-05-18T03:32:39.3648249Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/link.txt 2022-05-18T03:32:39.3648824Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/cmake_clean.cmake 2022-05-18T03:32:39.3649670Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/build.make 2022-05-18T03:32:39.3650240Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/DependInfo.cmake 2022-05-18T03:32:39.3651162Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/flags.make 2022-05-18T03:32:39.3652017Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/progress.make 2022-05-18T03:32:39.3695220Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/CXX.includecache 2022-05-18T03:32:39.3709280Z inflating: 
build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/depend.internal 2022-05-18T03:32:39.3795032Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/op.cpp.o 2022-05-18T03:32:39.3795826Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/ 2022-05-18T03:32:39.3815687Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/depend.make 2022-05-18T03:32:39.3816538Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/link.txt 2022-05-18T03:32:39.3817373Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/cmake_clean.cmake 2022-05-18T03:32:39.3818259Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/build.make 2022-05-18T03:32:39.3819113Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/DependInfo.cmake 2022-05-18T03:32:39.3819958Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/flags.make 2022-05-18T03:32:39.3820776Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/progress.make 2022-05-18T03:32:39.3863137Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/CXX.includecache 2022-05-18T03:32:39.3877070Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/depend.internal 2022-05-18T03:32:39.3938993Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/test_custom_ops.cpp.o 2022-05-18T03:32:39.3939897Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeDirectoryInformation.cmake 2022-05-18T03:32:39.3940856Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/TargetDirectories.txt 2022-05-18T03:32:39.3941650Z extracting: build/custom_test_artifacts/custom-op-build/CMakeFiles/progress.marks 2022-05-18T03:32:39.3942398Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/Makefile2 2022-05-18T03:32:39.3943113Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/Makefile.cmake 2022-05-18T03:32:39.3943978Z inflating: build/custom_test_artifacts/custom-op-build/CMakeCache.txt 2022-05-18T03:32:39.3944983Z inflating: build/custom_test_artifacts/custom-op-build/Makefile 2022-05-18T03:32:39.3945829Z inflating: build/custom_test_artifacts/custom-op-build/cmake_install.cmake 2022-05-18T03:32:39.4018399Z inflating: build/custom_test_artifacts/custom-op-build/libcustom_ops.so 2022-05-18T03:32:39.4066912Z inflating: build/custom_test_artifacts/custom-op-build/test_custom_ops 2022-05-18T03:32:39.4067308Z creating: build/custom_test_artifacts/jit-hook-build/ 2022-05-18T03:32:39.4067821Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/ 2022-05-18T03:32:39.4071004Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeOutput.log 2022-05-18T03:32:39.4071513Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.12.4/ 2022-05-18T03:32:39.4072028Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.12.4/CMakeSystem.cmake 2022-05-18T03:32:39.4072502Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.12.4/CompilerIdC/ 2022-05-18T03:32:39.4072948Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.12.4/CompilerIdC/tmp/ 2022-05-18T03:32:39.4074338Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.12.4/CompilerIdC/CMakeCCompilerId.c 
2022-05-18T03:32:39.4075602Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.12.4/CompilerIdC/a.out 2022-05-18T03:32:39.4076093Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.12.4/CompilerIdCXX/ 2022-05-18T03:32:39.4076554Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.12.4/CompilerIdCXX/tmp/ 2022-05-18T03:32:39.4078208Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.12.4/CompilerIdCXX/CMakeCXXCompilerId.cpp 2022-05-18T03:32:39.4079643Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.12.4/CompilerIdCXX/a.out 2022-05-18T03:32:39.4080964Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.12.4/CMakeDetermineCompilerABI_C.bin 2022-05-18T03:32:39.4082035Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.12.4/CMakeCCompiler.cmake 2022-05-18T03:32:39.4083150Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.12.4/CMakeDetermineCompilerABI_CXX.bin 2022-05-18T03:32:39.4084131Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.12.4/CMakeCXXCompiler.cmake 2022-05-18T03:32:39.4084917Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeTmp/ 2022-05-18T03:32:39.4085640Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/feature_tests.c 2022-05-18T03:32:39.4086373Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/feature_tests.cxx 2022-05-18T03:32:39.4087756Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/feature_tests.bin 2022-05-18T03:32:39.4088518Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeError.log 2022-05-18T03:32:39.4089262Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/cmake.check_cache 2022-05-18T03:32:39.4089980Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/ 2022-05-18T03:32:39.4110421Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/depend.make 2022-05-18T03:32:39.4111238Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/link.txt 2022-05-18T03:32:39.4112004Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/cmake_clean.cmake 2022-05-18T03:32:39.4112755Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/build.make 2022-05-18T03:32:39.4113686Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/DependInfo.cmake 2022-05-18T03:32:39.4114554Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/flags.make 2022-05-18T03:32:39.4115378Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/progress.make 2022-05-18T03:32:39.4158951Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/CXX.includecache 2022-05-18T03:32:39.4172874Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/depend.internal 2022-05-18T03:32:39.4222127Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/test_jit_hooks.cpp.o 2022-05-18T03:32:39.4223037Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeDirectoryInformation.cmake 2022-05-18T03:32:39.4223901Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/TargetDirectories.txt 2022-05-18T03:32:39.4224759Z extracting: build/custom_test_artifacts/jit-hook-build/CMakeFiles/progress.marks 2022-05-18T03:32:39.4225493Z 
inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/Makefile2 2022-05-18T03:32:39.4226169Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/Makefile.cmake 2022-05-18T03:32:39.4226888Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeCache.txt 2022-05-18T03:32:39.4227769Z inflating: build/custom_test_artifacts/jit-hook-build/Makefile 2022-05-18T03:32:39.4228457Z inflating: build/custom_test_artifacts/jit-hook-build/cmake_install.cmake 2022-05-18T03:32:39.4268189Z inflating: build/custom_test_artifacts/jit-hook-build/test_jit_hooks 2022-05-18T03:32:39.4268870Z creating: build/custom_test_artifacts/custom-backend-build/ 2022-05-18T03:32:39.4269530Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/ 2022-05-18T03:32:39.4271885Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeOutput.log 2022-05-18T03:32:39.4272637Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.12.4/ 2022-05-18T03:32:39.4273432Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.12.4/CMakeSystem.cmake 2022-05-18T03:32:39.4274245Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.12.4/CompilerIdC/ 2022-05-18T03:32:39.4275040Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.12.4/CompilerIdC/tmp/ 2022-05-18T03:32:39.4275921Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.12.4/CompilerIdC/CMakeCCompilerId.c 2022-05-18T03:32:39.4276779Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.12.4/CompilerIdC/a.out 2022-05-18T03:32:39.4277595Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.12.4/CompilerIdCXX/ 2022-05-18T03:32:39.4278393Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.12.4/CompilerIdCXX/tmp/ 2022-05-18T03:32:39.4279671Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.12.4/CompilerIdCXX/CMakeCXXCompilerId.cpp 2022-05-18T03:32:39.4280919Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.12.4/CompilerIdCXX/a.out 2022-05-18T03:32:39.4282468Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.12.4/CMakeDetermineCompilerABI_C.bin 2022-05-18T03:32:39.4283372Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.12.4/CMakeCCompiler.cmake 2022-05-18T03:32:39.4284680Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.12.4/CMakeDetermineCompilerABI_CXX.bin 2022-05-18T03:32:39.4285826Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.12.4/CMakeCXXCompiler.cmake 2022-05-18T03:32:39.4286645Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeTmp/ 2022-05-18T03:32:39.4287421Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/feature_tests.c 2022-05-18T03:32:39.4288195Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/feature_tests.cxx 2022-05-18T03:32:39.4289301Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/feature_tests.bin 2022-05-18T03:32:39.4290230Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeError.log 2022-05-18T03:32:39.4290992Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/cmake.check_cache 2022-05-18T03:32:39.4291774Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/ 2022-05-18T03:32:39.4313353Z inflating: 
build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/depend.make 2022-05-18T03:32:39.4314275Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/link.txt 2022-05-18T03:32:39.4315195Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/cmake_clean.cmake 2022-05-18T03:32:39.4316098Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/build.make 2022-05-18T03:32:39.4317019Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/DependInfo.cmake 2022-05-18T03:32:39.4317935Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/flags.make 2022-05-18T03:32:39.4318832Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/progress.make 2022-05-18T03:32:39.4361069Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/CXX.includecache 2022-05-18T03:32:39.4375083Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/depend.internal 2022-05-18T03:32:39.4419342Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/test_custom_backend.cpp.o 2022-05-18T03:32:39.4420210Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/ 2022-05-18T03:32:39.4423591Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/depend.make 2022-05-18T03:32:39.4424484Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/link.txt 2022-05-18T03:32:39.4425466Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/cmake_clean.cmake 2022-05-18T03:32:39.4426357Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/build.make 2022-05-18T03:32:39.4427267Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/DependInfo.cmake 2022-05-18T03:32:39.4428131Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/flags.make 2022-05-18T03:32:39.4428987Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/progress.make 2022-05-18T03:32:39.4433415Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/CXX.includecache 2022-05-18T03:32:39.4436450Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/depend.internal 2022-05-18T03:32:39.4549348Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/custom_backend.cpp.o 2022-05-18T03:32:39.4550258Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeDirectoryInformation.cmake 2022-05-18T03:32:39.4551281Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/TargetDirectories.txt 2022-05-18T03:32:39.4552099Z extracting: build/custom_test_artifacts/custom-backend-build/CMakeFiles/progress.marks 2022-05-18T03:32:39.4552854Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/Makefile2 2022-05-18T03:32:39.4553591Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/Makefile.cmake 2022-05-18T03:32:39.4554352Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeCache.txt 2022-05-18T03:32:39.4555287Z inflating: 
build/custom_test_artifacts/custom-backend-build/Makefile 2022-05-18T03:32:39.4556149Z inflating: build/custom_test_artifacts/custom-backend-build/cmake_install.cmake 2022-05-18T03:32:39.4650753Z inflating: build/custom_test_artifacts/custom-backend-build/libcustom_backend.so 2022-05-18T03:32:39.4686159Z inflating: build/custom_test_artifacts/custom-backend-build/test_custom_backend 2022-05-18T03:32:39.4686651Z creating: build/lib/ 2022-05-18T03:32:39.4687024Z inflating: build/lib/libclog.a 2022-05-18T03:32:39.4739758Z inflating: build/lib/libgtest.a 2022-05-18T03:32:39.4748020Z inflating: build/lib/libpthreadpool.a 2022-05-18T03:32:39.4814889Z inflating: build/lib/libbenchmark.a 2022-05-18T03:32:39.4900563Z inflating: build/lib/libprotobuf-lite.a 2022-05-18T03:32:39.4963037Z inflating: build/lib/libasmjit.a 2022-05-18T03:32:39.4988651Z inflating: build/lib/libtensorpipe_uv.a 2022-05-18T03:32:39.5082328Z inflating: build/lib/libgloo.a 2022-05-18T03:32:39.5507605Z inflating: build/lib/libprotobuf.a 2022-05-18T03:32:39.5524065Z inflating: build/lib/libfmt.a 2022-05-18T03:32:39.5524747Z inflating: build/lib/libfoxi_loader.a 2022-05-18T03:32:39.5525596Z inflating: build/lib/libtorch_global_deps.so 2022-05-18T03:32:39.5577870Z inflating: build/lib/libc10.so 2022-05-18T03:32:39.5585654Z inflating: build/lib/libcpuinfo.a 2022-05-18T03:32:39.5592755Z inflating: build/lib/libcpuinfo_internals.a 2022-05-18T03:32:39.5606033Z inflating: build/lib/libqnnpack.a 2022-05-18T03:32:39.5608406Z inflating: build/lib/libnnpack_reference_layers.a 2022-05-18T03:32:39.5627753Z inflating: build/lib/libpytorch_qnnpack.a 2022-05-18T03:32:39.6082785Z inflating: build/lib/libprotoc.a 2022-05-18T03:32:39.6097601Z inflating: build/lib/libgmock.a 2022-05-18T03:32:39.6098277Z inflating: build/lib/libgtest_main.a 2022-05-18T03:32:39.6098953Z inflating: build/lib/libbenchmark_main.a 2022-05-18T03:32:40.2750605Z inflating: build/lib/libdnnl.a 2022-05-18T03:32:40.2767681Z inflating: build/lib/libnnpack.a 2022-05-18T03:32:40.3319436Z inflating: build/lib/libtensorpipe.a 2022-05-18T03:32:40.3319959Z inflating: build/lib/libgmock_main.a 2022-05-18T03:32:40.4512903Z inflating: build/lib/libfbgemm.a 2022-05-18T03:32:40.5454330Z inflating: build/lib/libdnnl_graph.a 2022-05-18T03:32:40.5660538Z inflating: build/lib/libkineto.a 2022-05-18T03:32:40.5697323Z inflating: build/lib/libcaffe2_protos.a 2022-05-18T03:32:40.5810088Z inflating: build/lib/libXNNPACK.a 2022-05-18T03:32:40.5848876Z inflating: build/lib/libonnx_proto.a 2022-05-18T03:32:40.6384204Z inflating: build/lib/libonnx.a 2022-05-18T03:32:42.2919374Z inflating: build/lib/libtorch_cpu.so 2022-05-18T03:32:42.2919832Z inflating: build/lib/libtorch.so 2022-05-18T03:32:42.2939580Z inflating: build/lib/libjitbackend_test.so 2022-05-18T03:32:42.2964808Z inflating: build/lib/libbackend_with_compiler.so 2022-05-18T03:32:42.3007445Z inflating: build/lib/libtorchbind_test.so 2022-05-18T03:32:42.3011445Z inflating: build/lib/libshm.so 2022-05-18T03:32:42.4277888Z inflating: build/lib/libtorch_python.so 2022-05-18T03:32:42.4308984Z inflating: build/lib/libnnapi_backend.so 2022-05-18T03:32:42.4309396Z creating: build/bin/ 2022-05-18T03:32:42.4354641Z inflating: build/bin/c10_registry_test 2022-05-18T03:32:42.4415182Z inflating: build/bin/c10_optional_test 2022-05-18T03:32:42.4551697Z inflating: build/bin/c10_intrusive_ptr_test 2022-05-18T03:32:42.4592327Z inflating: build/bin/c10_flags_test 2022-05-18T03:32:42.4635319Z inflating: build/bin/c10_exception_test 2022-05-18T03:32:42.4681662Z 
inflating: build/bin/c10_logging_test 2022-05-18T03:32:42.4726725Z inflating: build/bin/c10_complex_test 2022-05-18T03:32:42.4814911Z inflating: build/bin/c10_either_test 2022-05-18T03:32:42.4855573Z inflating: build/bin/c10_irange_test 2022-05-18T03:32:42.4901165Z inflating: build/bin/c10_bfloat16_test 2022-05-18T03:32:42.4948598Z inflating: build/bin/c10_string_view_test 2022-05-18T03:32:42.4990978Z inflating: build/bin/c10_accumulate_test 2022-05-18T03:32:42.5035254Z inflating: build/bin/c10_complex_math_test 2022-05-18T03:32:42.5078425Z inflating: build/bin/c10_Bitset_test 2022-05-18T03:32:42.5192024Z inflating: build/bin/c10_SmallVectorTest 2022-05-18T03:32:42.5238132Z inflating: build/bin/c10_typeid_test 2022-05-18T03:32:42.5283409Z inflating: build/bin/c10_InlineDeviceGuard_test 2022-05-18T03:32:42.5329342Z inflating: build/bin/c10_InlineStreamGuard_test 2022-05-18T03:32:42.5369840Z inflating: build/bin/c10_CompileTimeFunctionPointer_test 2022-05-18T03:32:42.5411892Z inflating: build/bin/c10_tempfile_test 2022-05-18T03:32:42.5457907Z inflating: build/bin/c10_SizesAndStrides_test 2022-05-18T03:32:42.5497425Z inflating: build/bin/c10_StreamGuard_test 2022-05-18T03:32:42.5548316Z inflating: build/bin/c10_ordered_preserving_dict_test 2022-05-18T03:32:42.5594092Z inflating: build/bin/c10_ThreadLocal_test 2022-05-18T03:32:42.5641313Z inflating: build/bin/c10_DispatchKeySet_test 2022-05-18T03:32:42.5683333Z inflating: build/bin/c10_DeviceGuard_test 2022-05-18T03:32:42.5724578Z inflating: build/bin/c10_C++17_test 2022-05-18T03:32:42.5764373Z inflating: build/bin/c10_TypeTraits_test 2022-05-18T03:32:42.5805436Z inflating: build/bin/c10_Device_test 2022-05-18T03:32:42.5845900Z inflating: build/bin/c10_DeadlockDetection_test 2022-05-18T03:32:42.5886687Z inflating: build/bin/c10_Half_test 2022-05-18T03:32:42.5933796Z inflating: build/bin/c10_LeftRight_test 2022-05-18T03:32:42.5973584Z inflating: build/bin/c10_ConstexprCrc_test 2022-05-18T03:32:42.6024853Z inflating: build/bin/c10_Metaprogramming_test 2022-05-18T03:32:42.6064249Z inflating: build/bin/c10_Array_test 2022-05-18T03:32:42.6105303Z inflating: build/bin/c10_Synchronized_test 2022-05-18T03:32:42.6146393Z inflating: build/bin/c10_TypeList_test 2022-05-18T03:32:42.6189332Z inflating: build/bin/c10_TypeIndex_test 2022-05-18T03:32:42.6231741Z inflating: build/bin/c10_intrusive_ptr_benchmark 2022-05-18T03:32:42.6624348Z inflating: build/bin/protoc-3.13.0.0 2022-05-18T03:32:42.7016257Z inflating: build/bin/protoc 2022-05-18T03:32:42.7261067Z inflating: build/bin/vec_test_all_types_DEFAULT 2022-05-18T03:32:42.7530612Z inflating: build/bin/vec_test_all_types_AVX2 2022-05-18T03:32:42.7574500Z inflating: build/bin/FileStoreTest 2022-05-18T03:32:42.7618567Z inflating: build/bin/HashStoreTest 2022-05-18T03:32:42.7667643Z inflating: build/bin/TCPStoreTest 2022-05-18T03:32:42.7707103Z inflating: build/bin/op_allowlist_test 2022-05-18T03:32:42.7709762Z inflating: build/bin/example_allreduce 2022-05-18T03:32:42.7764665Z inflating: build/bin/ProcessGroupGlooTest 2022-05-18T03:32:42.7813354Z inflating: build/bin/kernel_stackbased_test 2022-05-18T03:32:42.7891445Z inflating: build/bin/make_boxed_from_unboxed_functor_test 2022-05-18T03:32:42.7969379Z inflating: build/bin/kernel_function_test 2022-05-18T03:32:42.8070203Z inflating: build/bin/kernel_function_legacy_test 2022-05-18T03:32:42.8115312Z inflating: build/bin/backend_fallback_test 2022-05-18T03:32:42.8167440Z inflating: build/bin/KernelFunction_test 2022-05-18T03:32:42.8216332Z inflating: 
build/bin/IListRef_test 2022-05-18T03:32:42.8258365Z inflating: build/bin/stride_properties_test 2022-05-18T03:32:42.8299169Z inflating: build/bin/dispatch_key_set_test 2022-05-18T03:32:42.8355713Z inflating: build/bin/vmap_test 2022-05-18T03:32:42.8405737Z inflating: build/bin/type_test 2022-05-18T03:32:42.8480191Z inflating: build/bin/cpu_rng_test 2022-05-18T03:32:42.8520314Z inflating: build/bin/reduce_ops_test 2022-05-18T03:32:42.8563605Z inflating: build/bin/undefined_tensor_test 2022-05-18T03:32:42.8643231Z inflating: build/bin/ivalue_test 2022-05-18T03:32:42.8692022Z inflating: build/bin/apply_utils_test 2022-05-18T03:32:42.8742360Z inflating: build/bin/basic 2022-05-18T03:32:42.8786433Z inflating: build/bin/broadcast_test 2022-05-18T03:32:42.8871180Z inflating: build/bin/kernel_lambda_test 2022-05-18T03:32:42.8918990Z inflating: build/bin/cpu_generator_test 2022-05-18T03:32:42.9151390Z inflating: build/bin/op_registration_test 2022-05-18T03:32:42.9197154Z inflating: build/bin/half_test 2022-05-18T03:32:42.9239019Z inflating: build/bin/reportMemoryUsage_test 2022-05-18T03:32:42.9281619Z inflating: build/bin/Dimname_test 2022-05-18T03:32:42.9324787Z inflating: build/bin/memory_format_test 2022-05-18T03:32:42.9370601Z inflating: build/bin/test_parallel 2022-05-18T03:32:42.9413823Z inflating: build/bin/cpu_profiling_allocator_test 2022-05-18T03:32:42.9516869Z inflating: build/bin/kernel_lambda_legacy_test 2022-05-18T03:32:42.9517845Z inflating: build/bin/verify_api_visibility 2022-05-18T03:32:42.9577271Z inflating: build/bin/Dict_test 2022-05-18T03:32:42.9623542Z inflating: build/bin/scalar_test 2022-05-18T03:32:42.9670316Z inflating: build/bin/extension_backend_test 2022-05-18T03:32:42.9713579Z inflating: build/bin/inline_container_test 2022-05-18T03:32:42.9802787Z inflating: build/bin/List_test 2022-05-18T03:32:42.9845064Z inflating: build/bin/wrapdim_test 2022-05-18T03:32:42.9891611Z inflating: build/bin/native_test 2022-05-18T03:32:42.9938419Z inflating: build/bin/scalar_tensor_test 2022-05-18T03:32:42.9978607Z inflating: build/bin/lazy_tensor_test 2022-05-18T03:32:43.0020889Z inflating: build/bin/memory_overlapping_test 2022-05-18T03:32:43.0069826Z inflating: build/bin/atest 2022-05-18T03:32:43.0117028Z inflating: build/bin/quantized_test 2022-05-18T03:32:43.0164264Z inflating: build/bin/NamedTensor_test 2022-05-18T03:32:43.0205021Z inflating: build/bin/dlconvertor_test 2022-05-18T03:32:43.0247458Z inflating: build/bin/weakref_test 2022-05-18T03:32:43.0250175Z inflating: build/bin/thread_init_test 2022-05-18T03:32:43.0291128Z inflating: build/bin/operators_test 2022-05-18T03:32:43.0332624Z inflating: build/bin/CppSignature_test 2022-05-18T03:32:43.0395733Z inflating: build/bin/tensor_iterator_test 2022-05-18T03:32:43.0436096Z inflating: build/bin/variant_test 2022-05-18T03:32:43.0479366Z inflating: build/bin/math_kernel_test 2022-05-18T03:32:43.0533858Z inflating: build/bin/pow_test 2022-05-18T03:32:43.0576402Z inflating: build/bin/mobile_memory_cleanup 2022-05-18T03:32:43.0590865Z inflating: build/bin/tutorial_tensorexpr 2022-05-18T03:32:43.0635531Z inflating: build/bin/test_dist_autograd 2022-05-18T03:32:43.0693698Z inflating: build/bin/test_cpp_rpc 2022-05-18T03:32:43.0695999Z inflating: build/bin/parallel_benchmark 2022-05-18T03:32:43.0752564Z inflating: build/bin/test_mobile_nnc 2022-05-18T03:32:43.0761746Z inflating: build/bin/aot_model_compiler_test 2022-05-18T03:32:43.1056286Z inflating: build/bin/test_lazy 2022-05-18T03:32:43.1060681Z inflating: 
build/bin/torch_shm_manager 2022-05-18T03:32:43.1742559Z inflating: build/bin/test_tensorexpr 2022-05-18T03:32:43.2752489Z inflating: build/bin/test_api 2022-05-18T03:32:43.3219534Z inflating: build/bin/test_jit 2022-05-18T03:32:43.3220580Z inflating: .pytorch-test-times.json 2022-05-18T03:32:43.3242524Z ##[group]Run df -H 2022-05-18T03:32:43.3242718Z df -H 2022-05-18T03:32:43.3254050Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2022-05-18T03:32:43.3254279Z env: 2022-05-18T03:32:43.3254436Z IN_CI: 1 2022-05-18T03:32:43.3254589Z IS_GHA: 1 2022-05-18T03:32:43.3254770Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:32:43.3254960Z ##[endgroup] 2022-05-18T03:32:43.3287112Z Filesystem Size Used Avail Use% Mounted on 2022-05-18T03:32:43.3287562Z devtmpfs 8.2G 0 8.2G 0% /dev 2022-05-18T03:32:43.3287927Z tmpfs 8.2G 103k 8.2G 1% /dev/shm 2022-05-18T03:32:43.3288139Z tmpfs 8.2G 410k 8.2G 1% /run 2022-05-18T03:32:43.3288462Z tmpfs 8.2G 0 8.2G 0% /sys/fs/cgroup 2022-05-18T03:32:43.3288697Z /dev/nvme0n1p1 162G 14G 148G 9% / 2022-05-18T03:32:43.3304255Z ##[group]Run .github/scripts/parse_ref.py 2022-05-18T03:32:43.3304500Z .github/scripts/parse_ref.py 2022-05-18T03:32:43.3315085Z shell: /usr/bin/bash -e {0} 2022-05-18T03:32:43.3315258Z env: 2022-05-18T03:32:43.3315416Z IN_CI: 1 2022-05-18T03:32:43.3315579Z IS_GHA: 1 2022-05-18T03:32:43.3315746Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:32:43.3315932Z ##[endgroup] 2022-05-18T03:32:43.3583589Z ##[group]Run set -x 2022-05-18T03:32:43.3583846Z set -x 2022-05-18T03:32:43.3584012Z  2022-05-18T03:32:43.3584211Z if [[ $TEST_CONFIG == 'multigpu' ]]; then 2022-05-18T03:32:43.3584457Z  TEST_COMMAND=.jenkins/pytorch/multigpu-test.sh 2022-05-18T03:32:43.3584812Z elif [[ $BUILD_ENVIRONMENT == *onnx* ]]; then 2022-05-18T03:32:43.3585051Z  TEST_COMMAND=.jenkins/caffe2/test.sh 2022-05-18T03:32:43.3585246Z else 2022-05-18T03:32:43.3585451Z  TEST_COMMAND=.jenkins/pytorch/test.sh 2022-05-18T03:32:43.3585649Z fi 2022-05-18T03:32:43.3585807Z  2022-05-18T03:32:43.3586029Z COMMIT_MESSAGES=$(git cherry -v "origin/${GIT_DEFAULT_BRANCH:-master}") 2022-05-18T03:32:43.3586282Z export COMMIT_MESSAGES 2022-05-18T03:32:43.3586461Z  2022-05-18T03:32:43.3586670Z # detached container should get cleaned up by teardown_ec2_linux 2022-05-18T03:32:43.3587080Z # TODO: Stop building test binaries as part of the build phase 2022-05-18T03:32:43.3587354Z # Used for GPU_FLAG since that doesn't play nice 2022-05-18T03:32:43.3587579Z # shellcheck disable=SC2086,SC2090 2022-05-18T03:32:43.3587798Z container_name=$(docker run \ 2022-05-18T03:32:43.3587995Z  ${GPU_FLAG:-} \ 2022-05-18T03:32:43.3588177Z  -e BUILD_ENVIRONMENT \ 2022-05-18T03:32:43.3588376Z  -e PR_NUMBER \ 2022-05-18T03:32:43.3588589Z  -e CUSTOM_TEST_ARTIFACT_BUILD_DIR \ 2022-05-18T03:32:43.3588788Z  -e GITHUB_ACTIONS \ 2022-05-18T03:32:43.3588972Z  -e IN_CI \ 2022-05-18T03:32:43.3589150Z  -e IS_GHA \ 2022-05-18T03:32:43.3589314Z  -e BRANCH \ 2022-05-18T03:32:43.3589487Z  -e SHA1 \ 2022-05-18T03:32:43.3589674Z  -e AWS_DEFAULT_REGION \ 2022-05-18T03:32:43.3589868Z  -e IN_WHEEL_TEST \ 2022-05-18T03:32:43.3590047Z  -e SHARD_NUMBER \ 2022-05-18T03:32:43.3590234Z  -e JOB_BASE_NAME \ 2022-05-18T03:32:43.3590421Z  -e TEST_CONFIG \ 2022-05-18T03:32:43.3590598Z  -e NUM_TEST_SHARDS \ 2022-05-18T03:32:43.3590788Z  -e PR_BODY \ 2022-05-18T03:32:43.3590973Z  -e COMMIT_MESSAGES \ 2022-05-18T03:32:43.3591167Z  -e PYTORCH_RETRY_TEST_CASES \ 2022-05-18T03:32:43.3591367Z  -e PR_LABELS \ 2022-05-18T03:32:43.3591576Z  -e MAX_JOBS="$(nproc --ignore=2)" 
\ 2022-05-18T03:32:43.3591772Z  -e SCCACHE_BUCKET \ 2022-05-18T03:32:43.3591956Z  -e XLA_CUDA \ 2022-05-18T03:32:43.3592159Z  -e XLA_CLANG_CACHE_S3_BUCKET_NAME \ 2022-05-18T03:32:43.3592393Z  --env-file="/tmp/github_env_${GITHUB_RUN_ID}" \ 2022-05-18T03:32:43.3592625Z  --ulimit stack=10485760:83886080 \ 2022-05-18T03:32:43.3592853Z  --security-opt seccomp=unconfined \ 2022-05-18T03:32:43.3593140Z  --cap-add=SYS_PTRACE \ 2022-05-18T03:32:43.3593353Z  --ipc=host \ 2022-05-18T03:32:43.3593547Z  --shm-size="${SHM_SIZE}" \ 2022-05-18T03:32:43.3593737Z  --tty \ 2022-05-18T03:32:43.3593898Z  --detach \ 2022-05-18T03:32:43.3594095Z  --name="${container_name}" \ 2022-05-18T03:32:43.3594295Z  --user jenkins \ 2022-05-18T03:32:43.3594518Z  -v "${GITHUB_WORKSPACE}:/var/lib/jenkins/workspace" \ 2022-05-18T03:32:43.3594880Z  -w /var/lib/jenkins/workspace \ 2022-05-18T03:32:43.3595086Z  "${DOCKER_IMAGE}" 2022-05-18T03:32:43.3595250Z ) 2022-05-18T03:32:43.3595500Z docker exec -t "${container_name}" sh -c "pip install dist/*.whl && ${TEST_COMMAND}" 2022-05-18T03:32:43.3606750Z shell: /usr/bin/bash -e {0} 2022-05-18T03:32:43.3606925Z env: 2022-05-18T03:32:43.3607082Z IN_CI: 1 2022-05-18T03:32:43.3607250Z IS_GHA: 1 2022-05-18T03:32:43.3607419Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:32:43.3607749Z BUILD_ENVIRONMENT: linux-xenial-py3.7-gcc5.4 2022-05-18T03:32:43.3607969Z PR_NUMBER: 2022-05-18T03:32:43.3608141Z BRANCH: master 2022-05-18T03:32:43.3608351Z CUSTOM_TEST_ARTIFACT_BUILD_DIR: build/custom_test_artifacts 2022-05-18T03:32:43.3608610Z SHA1: 3b2375291aab7b48442f2e6fb1ef66cebc761e24 2022-05-18T03:32:43.3608830Z PYTORCH_RETRY_TEST_CASES: 1 2022-05-18T03:32:43.3609057Z JOB_BASE_NAME: linux-xenial-py3.7-gcc5.4-test 2022-05-18T03:32:43.3609307Z TEST_CONFIG: backwards_compat 2022-05-18T03:32:43.3609497Z SHARD_NUMBER: 1 2022-05-18T03:32:43.3609655Z NUM_TEST_SHARDS: 1 2022-05-18T03:32:43.3609828Z PR_BODY: 2022-05-18T03:32:43.3610047Z SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2 2022-05-18T03:32:43.3610262Z SHM_SIZE: 1g 2022-05-18T03:32:43.3610606Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3.7-gcc5.4:6deab82db6a72ca54cd3e3322ee4f13864536734 2022-05-18T03:32:43.3610939Z XLA_CUDA: 2022-05-18T03:32:43.3611195Z XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla 2022-05-18T03:32:43.3611441Z ##[endgroup] 2022-05-18T03:32:43.3636517Z + [[ backwards_compat == \m\u\l\t\i\g\p\u ]] 2022-05-18T03:32:43.3637145Z + [[ linux-xenial-py3.7-gcc5.4 == *onnx* ]] 2022-05-18T03:32:43.3637441Z + TEST_COMMAND=.jenkins/pytorch/test.sh 2022-05-18T03:32:43.3639656Z ++ git cherry -v origin/master 2022-05-18T03:32:43.3664736Z + COMMIT_MESSAGES= 2022-05-18T03:32:43.3665092Z + export COMMIT_MESSAGES 2022-05-18T03:32:43.3673084Z +++ nproc --ignore=2 2022-05-18T03:32:43.3690061Z ++ docker run -e BUILD_ENVIRONMENT -e PR_NUMBER -e CUSTOM_TEST_ARTIFACT_BUILD_DIR -e GITHUB_ACTIONS -e IN_CI -e IS_GHA -e BRANCH -e SHA1 -e AWS_DEFAULT_REGION -e IN_WHEEL_TEST -e SHARD_NUMBER -e JOB_BASE_NAME -e TEST_CONFIG -e NUM_TEST_SHARDS -e PR_BODY -e COMMIT_MESSAGES -e PYTORCH_RETRY_TEST_CASES -e PR_LABELS -e MAX_JOBS=6 -e SCCACHE_BUCKET -e XLA_CUDA -e XLA_CLANG_CACHE_S3_BUCKET_NAME --env-file=/tmp/github_env_2342799944 --ulimit stack=10485760:83886080 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --ipc=host --shm-size=1g --tty --detach --name= --user jenkins -v /home/ec2-user/actions-runner/_work/pytorch/pytorch:/var/lib/jenkins/workspace -w /var/lib/jenkins/workspace 
308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3.7-gcc5.4:6deab82db6a72ca54cd3e3322ee4f13864536734 2022-05-18T03:32:54.3246326Z + container_name=55bd3be61eda7b9683d31d97f73e33a89445917b6839b0a67cacf21da99c10c3 2022-05-18T03:32:54.3247145Z + docker exec -t 55bd3be61eda7b9683d31d97f73e33a89445917b6839b0a67cacf21da99c10c3 sh -c 'pip install dist/*.whl && .jenkins/pytorch/test.sh' 2022-05-18T03:32:54.6899210Z Processing ./dist/torch-1.12.0a0+git3b23752-cp37-cp37m-linux_x86_64.whl 2022-05-18T03:32:54.7698890Z Requirement already satisfied: typing-extensions in /opt/conda/lib/python3.7/site-packages (from torch==1.12.0a0+git3b23752) (4.1.1) 2022-05-18T03:32:55.1465352Z Installing collected packages: torch 2022-05-18T03:33:00.8032888Z Successfully installed torch-1.12.0a0+git3b23752 2022-05-18T03:33:00.8662101Z + COMPACT_JOB_NAME=linux-xenial-py3.7-gcc5.4 2022-05-18T03:33:00.8662761Z ++ python -c 'import site; print(site.getsitepackages()[0])' 2022-05-18T03:33:00.8826281Z + TORCH_INSTALL_DIR=/opt/conda/lib/python3.7/site-packages/torch 2022-05-18T03:33:00.8867455Z + TORCH_BIN_DIR=/opt/conda/lib/python3.7/site-packages/torch/bin 2022-05-18T03:33:00.8868021Z + TORCH_LIB_DIR=/opt/conda/lib/python3.7/site-packages/torch/lib 2022-05-18T03:33:00.8868763Z + TORCH_TEST_DIR=/opt/conda/lib/python3.7/site-packages/torch/test 2022-05-18T03:33:00.8869064Z + BUILD_DIR=build 2022-05-18T03:33:00.8869363Z + BUILD_RENAMED_DIR=build_renamed 2022-05-18T03:33:00.8869659Z + BUILD_BIN_DIR=build/bin 2022-05-18T03:33:00.8870006Z + [[ -n backwards_compat ]] 2022-05-18T03:33:00.8870462Z + BUILD_ENVIRONMENT=linux-xenial-py3.7-gcc5.4-backwards_compat 2022-05-18T03:33:00.8870952Z + [[ linux-xenial-py3.7-gcc5.4-backwards_compat != *bazel* ]] 2022-05-18T03:33:00.8871391Z ++ realpath build/custom_test_artifacts 2022-05-18T03:33:00.8871682Z + CUSTOM_TEST_ARTIFACT_BUILD_DIR=/var/lib/jenkins/workspace/build/custom_test_artifacts 2022-05-18T03:33:00.8871940Z ++ dirname .jenkins/pytorch/test.sh 2022-05-18T03:33:00.8872142Z + source .jenkins/pytorch/common.sh 2022-05-18T03:33:00.8872346Z +++ dirname .jenkins/pytorch/common.sh 2022-05-18T03:33:00.8872563Z ++ source .jenkins/pytorch/common_utils.sh 2022-05-18T03:33:00.8872812Z +++ TORCHVISION_COMMIT=8a2dc6f22ac4389ccba8859aa1e1cb14f1ee53db 2022-05-18T03:33:00.8873064Z ++ set -ex 2022-05-18T03:33:00.8879697Z ++++ dirname .jenkins/pytorch/common.sh 2022-05-18T03:33:00.8888080Z +++ cd .jenkins/pytorch 2022-05-18T03:33:00.8888366Z +++ pwd -P 2022-05-18T03:33:00.8890803Z ++ SCRIPT_DIR=/var/lib/jenkins/workspace/.jenkins/pytorch 2022-05-18T03:33:00.8891192Z ++ [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *linux* ]] 2022-05-18T03:33:00.8893582Z +++ find /etc/apt/ -type f -name '*.list' 2022-05-18T03:33:00.8907115Z ++ sudo sed -i 's/.*nvidia.*/# &/' /etc/apt/sources.list /etc/apt/sources.list.d/nodesource.list /etc/apt/sources.list.d/ubuntu-toolchain-r-ubuntu-test-xenial.list /etc/apt/sources.list.d/yarn.list 2022-05-18T03:33:00.8952161Z ++ [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *rocm* ]] 2022-05-18T03:33:00.8952618Z ++ echo ENTERED_USER_LAND 2022-05-18T03:33:00.8952914Z ENTERED_USER_LAND 2022-05-18T03:33:00.8953092Z ++ export IN_CI=1 2022-05-18T03:33:00.8953298Z ++ IN_CI=1 2022-05-18T03:33:00.8953678Z ++ declare -f -t trap_add 2022-05-18T03:33:00.8954034Z ++ trap_add cleanup EXIT 2022-05-18T03:33:00.8954378Z ++ trap_add_cmd=cleanup 2022-05-18T03:33:00.8954550Z ++ shift 2022-05-18T03:33:00.8954798Z ++ for trap_add_name in '"$@"' 
2022-05-18T03:33:00.8961104Z ++++ trap -p EXIT 2022-05-18T03:33:00.8963845Z +++ eval 'extract_trap_cmd ' 2022-05-18T03:33:00.8964071Z ++++ extract_trap_cmd 2022-05-18T03:33:00.8964289Z ++++ printf '%s\n' '' 2022-05-18T03:33:00.8964494Z +++ printf '%s\n' cleanup 2022-05-18T03:33:00.8966288Z ++ trap -- ' 2022-05-18T03:33:00.8966537Z cleanup' EXIT 2022-05-18T03:33:00.8969006Z ++ [[ linux-xenial-py3.7-gcc5.4-backwards_compat != *win-* ]] 2022-05-18T03:33:00.8969269Z ++ which sccache 2022-05-18T03:33:00.8977808Z ++ sccache --stop-server 2022-05-18T03:33:00.8999015Z ++ true 2022-05-18T03:33:00.8999478Z ++ rm -f /var/lib/jenkins/sccache_error.log 2022-05-18T03:33:00.9005621Z ++ [[ -n '' ]] 2022-05-18T03:33:00.9006128Z ++ [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *rocm* ]] 2022-05-18T03:33:00.9006652Z ++ SCCACHE_ERROR_LOG=/var/lib/jenkins/sccache_error.log 2022-05-18T03:33:00.9007045Z ++ SCCACHE_IDLE_TIMEOUT=1200 2022-05-18T03:33:00.9024343Z ++ RUST_LOG=sccache::server=error 2022-05-18T03:33:00.9024915Z ++ sccache --start-server 2022-05-18T03:33:00.9030219Z sccache: Starting the server... 2022-05-18T03:33:00.9172112Z ++ sccache --zero-stats 2022-05-18T03:33:00.9191514Z Compile requests 0 2022-05-18T03:33:00.9191920Z Compile requests executed 0 2022-05-18T03:33:00.9192273Z Cache hits 0 2022-05-18T03:33:00.9192662Z Cache misses 0 2022-05-18T03:33:00.9192907Z Cache timeouts 0 2022-05-18T03:33:00.9193092Z Cache read errors 0 2022-05-18T03:33:00.9193346Z Forced recaches 0 2022-05-18T03:33:00.9193547Z Cache write errors 0 2022-05-18T03:33:00.9193830Z Compilation failures 0 2022-05-18T03:33:00.9194369Z Cache errors 0 2022-05-18T03:33:00.9194766Z Non-cacheable compilations 0 2022-05-18T03:33:00.9195014Z Non-cacheable calls 0 2022-05-18T03:33:00.9195260Z Non-compilation calls 0 2022-05-18T03:33:00.9195477Z Unsupported compiler calls 0 2022-05-18T03:33:00.9195681Z Average cache write 0.000 s 2022-05-18T03:33:00.9195898Z Average cache read miss 0.000 s 2022-05-18T03:33:00.9196108Z Average cache read hit 0.000 s 2022-05-18T03:33:00.9196411Z Failed distributed compilations 0 2022-05-18T03:33:00.9197083Z Cache location S3, bucket: Bucket(name=ossci-compiler-cache-circleci-v2, base_url=http://ossci-compiler-cache-circleci-v2.s3.amazonaws.com/) 2022-05-18T03:33:00.9197580Z ++ [[ linux-xenial-py3.7-gcc5.4-test == *-build ]] 2022-05-18T03:33:00.9197800Z ++ which ccache 2022-05-18T03:33:00.9203728Z ++ '[' -z linux-xenial-py3.7-gcc5.4 ']' 2022-05-18T03:33:00.9204155Z ++ [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *linux-trusty-py3.6-gcc7* ]] 2022-05-18T03:33:00.9204524Z ++ BUILD_TEST_LIBTORCH=0 2022-05-18T03:33:00.9205523Z ++ [[ backwards_compat == *xla* ]] 2022-05-18T03:33:00.9205902Z ++ [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *centos* ]] 2022-05-18T03:33:00.9206298Z ++ [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *linux-bionic* ]] 2022-05-18T03:33:00.9206684Z ++ [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *linux-focal* ]] 2022-05-18T03:33:00.9206954Z + echo 'Testing pytorch' 2022-05-18T03:33:00.9207138Z Testing pytorch 2022-05-18T03:33:00.9207342Z + export LANG=C.UTF-8 2022-05-18T03:33:00.9207520Z + LANG=C.UTF-8 2022-05-18T03:33:00.9208944Z + PR_NUMBER= 2022-05-18T03:33:00.9209276Z + [[ backwards_compat == \d\e\f\a\u\l\t ]] 2022-05-18T03:33:00.9209625Z + [[ backwards_compat == \d\i\s\t\r\i\b\u\t\e\d ]] 2022-05-18T03:33:00.9210229Z + [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *-slow-* ]] 2022-05-18T03:33:00.9210610Z + [[ backwards_compat == \s\l\o\w ]] 2022-05-18T03:33:00.9211159Z 
+ [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *slow-gradcheck* ]] 2022-05-18T03:33:00.9211802Z + [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *cuda* ]] 2022-05-18T03:33:00.9212284Z + [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *rocm* ]] 2022-05-18T03:33:00.9212631Z + [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *cuda11* ]] 2022-05-18T03:33:00.9213013Z + [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *crossref* ]] 2022-05-18T03:33:00.9213454Z + [[ -n '' ]] 2022-05-18T03:33:00.9213850Z + export PYTORCH_TEST_SKIP_CUDA_MEM_LEAK_CHECK=0 2022-05-18T03:33:00.9214247Z + PYTORCH_TEST_SKIP_CUDA_MEM_LEAK_CHECK=0 2022-05-18T03:33:00.9214600Z + [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *rocm* ]] 2022-05-18T03:33:00.9214966Z + [[ linux-xenial-py3.7-gcc5.4-backwards_compat != *ppc64le* ]] 2022-05-18T03:33:00.9215324Z + [[ linux-xenial-py3.7-gcc5.4-backwards_compat != *-bazel-* ]] 2022-05-18T03:33:00.9215585Z + pip_install --user ninja 2022-05-18T03:33:00.9215853Z + pip install --progress-bar off --user ninja 2022-05-18T03:33:01.3184666Z Collecting ninja 2022-05-18T03:33:01.3357007Z Downloading ninja-1.10.2.3-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.whl (108 kB) 2022-05-18T03:33:01.3421660Z [?25l 2022-05-18T03:33:01.6610455Z [?25hInstalling collected packages: ninja 2022-05-18T03:33:01.6696244Z  WARNING: The script ninja is installed in '/var/lib/jenkins/.local/bin' which is not on PATH. 2022-05-18T03:33:01.6696814Z Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. 2022-05-18T03:33:01.6755006Z Successfully installed ninja-1.10.2.3 2022-05-18T03:33:01.7412012Z + export PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2022-05-18T03:33:01.7412678Z + PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2022-05-18T03:33:01.7413222Z + [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *asan* ]] 2022-05-18T03:33:01.7414219Z + [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *-NO_AVX-* ]] 2022-05-18T03:33:01.7414699Z + [[ backwards_compat == \n\o\g\p\u\_\N\O\_\A\V\X ]] 2022-05-18T03:33:01.7415323Z + [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *-NO_AVX2-* ]] 2022-05-18T03:33:01.7415654Z + [[ backwards_compat == \n\o\g\p\u\_\N\O\_\A\V\X\2 ]] 2022-05-18T03:33:01.7415992Z + [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *-NO_AVX512-* ]] 2022-05-18T03:33:01.7416330Z + [[ backwards_compat == \n\o\g\p\u\_\N\O\_\A\V\X\5\1\2 ]] 2022-05-18T03:33:01.7416741Z + [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *tbb* ]] 2022-05-18T03:33:01.7428010Z + [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *libtorch* ]] 2022-05-18T03:33:01.7428604Z + [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *-bazel-* ]] 2022-05-18T03:33:01.7430911Z + cd test 2022-05-18T03:33:01.7431865Z + python -c 'import torch; print(torch.__config__.show())' 2022-05-18T03:33:02.2934553Z PyTorch built with: 2022-05-18T03:33:02.2935083Z - GCC 5.4 2022-05-18T03:33:02.2935307Z - C++ Version: 201402 2022-05-18T03:33:02.2935725Z - Intel(R) oneAPI Math Kernel Library Version 2022.0-Product Build 20211112 for Intel(R) 64 architecture applications 2022-05-18T03:33:02.2936209Z - Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815) 2022-05-18T03:33:02.2936700Z - OpenMP 201307 (a.k.a. 
OpenMP 4.0) 2022-05-18T03:33:02.2937083Z - LAPACK is enabled (usually provided by MKL) 2022-05-18T03:33:02.2937410Z - NNPACK is enabled 2022-05-18T03:33:02.2937804Z - CPU capability usage: AVX2 2022-05-18T03:33:02.2941445Z - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CXX_COMPILER=/opt/cache/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-attributes -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Werror -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.12.0, USE_CUDA=OFF, USE_CUDNN=OFF, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, 2022-05-18T03:33:02.2943085Z 2022-05-18T03:33:02.4004062Z + cd test 2022-05-18T03:33:02.4004609Z + python -c 'import torch; print(torch.__config__.parallel_info())' 2022-05-18T03:33:02.9273949Z ATen/Parallel: 2022-05-18T03:33:02.9274480Z at::get_num_threads() : 4 2022-05-18T03:33:02.9274857Z at::get_num_interop_threads() : 4 2022-05-18T03:33:02.9275190Z OpenMP 201307 (a.k.a. OpenMP 4.0) 2022-05-18T03:33:02.9275398Z omp_get_max_threads() : 4 2022-05-18T03:33:02.9275930Z Intel(R) oneAPI Math Kernel Library Version 2022.0-Product Build 20211112 for Intel(R) 64 architecture applications 2022-05-18T03:33:02.9276227Z mkl_get_max_threads() : 4 2022-05-18T03:33:02.9276549Z Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815) 2022-05-18T03:33:02.9276828Z std::thread::hardware_concurrency() : 8 2022-05-18T03:33:02.9277059Z Environment variables: 2022-05-18T03:33:02.9277243Z OMP_NUM_THREADS : [not set] 2022-05-18T03:33:02.9277439Z MKL_NUM_THREADS : [not set] 2022-05-18T03:33:02.9277640Z ATen parallel backend: OpenMP 2022-05-18T03:33:02.9277766Z 2022-05-18T03:33:03.0340813Z + [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *deploy* ]] 2022-05-18T03:33:03.0341255Z + [[ linux-xenial-py3.7-gcc5.4-backwards_compat == *backward* ]] 2022-05-18T03:33:03.0341789Z + test_forward_backward_compatibility 2022-05-18T03:33:03.0342046Z + set -x 2022-05-18T03:33:03.0342277Z + python test/create_dummy_torchscript_model.py /tmp/model_new.pt 2022-05-18T03:33:03.7024127Z + pushd test/forward_backward_compatibility 2022-05-18T03:33:03.7024710Z ~/workspace/test/forward_backward_compatibility ~/workspace 2022-05-18T03:33:03.7025084Z + python -m venv venv 2022-05-18T03:33:06.3065260Z + . venv/bin/activate 2022-05-18T03:33:06.3066426Z ++ deactivate nondestructive 2022-05-18T03:33:06.3066884Z ++ '[' -n '' ']' 2022-05-18T03:33:06.3067543Z ++ '[' -n '' ']' 2022-05-18T03:33:06.3067923Z ++ '[' -n /bin/bash -o -n '' ']' 2022-05-18T03:33:06.3068223Z ++ hash -r 2022-05-18T03:33:06.3068530Z ++ '[' -n '' ']' 2022-05-18T03:33:06.3068785Z ++ unset VIRTUAL_ENV 2022-05-18T03:33:06.3069209Z ++ '[' '!' 
nondestructive = nondestructive ']' 2022-05-18T03:33:06.3069639Z ++ VIRTUAL_ENV=/var/lib/jenkins/workspace/test/forward_backward_compatibility/venv 2022-05-18T03:33:06.3070063Z ++ export VIRTUAL_ENV 2022-05-18T03:33:06.3071159Z ++ _OLD_VIRTUAL_PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2022-05-18T03:33:06.3072248Z ++ PATH=/var/lib/jenkins/workspace/test/forward_backward_compatibility/venv/bin:/var/lib/jenkins/.local/bin:/opt/cache/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2022-05-18T03:33:06.3072993Z ++ export PATH 2022-05-18T03:33:06.3073517Z ++ '[' -n '' ']' 2022-05-18T03:33:06.3073911Z ++ '[' -z '' ']' 2022-05-18T03:33:06.3074302Z ++ _OLD_VIRTUAL_PS1= 2022-05-18T03:33:06.3074774Z ++ '[' 'x(venv) ' '!=' x ']' 2022-05-18T03:33:06.3075198Z ++ PS1='(venv) ' 2022-05-18T03:33:06.3075569Z ++ export PS1 2022-05-18T03:33:06.3076042Z ++ '[' -n /bin/bash -o -n '' ']' 2022-05-18T03:33:06.3076440Z ++ hash -r 2022-05-18T03:33:06.3077246Z + pip_install --pre torch -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html 2022-05-18T03:33:06.3078378Z + pip install --progress-bar off --pre torch -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html 2022-05-18T03:33:06.6889198Z Looking in links: https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html 2022-05-18T03:33:07.2025367Z Collecting torch 2022-05-18T03:33:07.2076434Z Downloading https://download.pytorch.org/whl/nightly/cpu/torch-1.12.0.dev20220517%2Bcpu-cp37-cp37m-linux_x86_64.whl (187.9 MB) 2022-05-18T03:33:09.3934701Z Collecting typing-extensions 2022-05-18T03:33:09.4114524Z Downloading typing_extensions-4.2.0-py3-none-any.whl (24 kB) 2022-05-18T03:33:09.5078668Z Installing collected packages: typing-extensions, torch 2022-05-18T03:33:16.4388969Z Successfully installed torch-1.12.0.dev20220517+cpu typing-extensions-4.2.0 2022-05-18T03:33:16.6512506Z WARNING: You are using pip version 22.0.4; however, version 22.1 is available. 2022-05-18T03:33:16.6513100Z You should consider upgrading via the '/var/lib/jenkins/workspace/test/forward_backward_compatibility/venv/bin/python -m pip install --upgrade pip' command. 
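[editor's note] At this point the job has installed the nightly CPU wheel (torch-1.12.0.dev20220517+cpu) into a throwaway venv, kept separate from the locally built torch-1.12.0a0+git3b23752, so that the nightly's operator schemas can be dumped on their own. The dump_all_function_schemas.py step that follows writes those schemas to nightly_schemas.txt. A minimal sketch of that kind of dump (not the actual script) could look like the following, assuming torch._C._jit_get_all_schemas() is available as in these builds:

```python
# Hypothetical sketch in the spirit of dump_all_function_schemas.py:
# write every operator schema registered in the currently installed torch,
# one per line, to a text file.
import torch

def dump_schemas(filename: str) -> None:
    with open(filename, "w") as f:
        for schema in torch._C._jit_get_all_schemas():
            # str(schema) renders the full signature, e.g. "aten::cosh(Tensor self) -> Tensor"
            f.write(f"{schema}\n")

if __name__ == "__main__":
    dump_schemas("nightly_schemas.txt")
```

Running such a dump inside the venv captures the nightly's view of the operator surface; the log then shows the venv being deactivated and removed so the remaining steps run against the freshly built wheel.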
2022-05-18T03:33:16.7628081Z + pip show torch 2022-05-18T03:33:17.1282723Z Name: torch 2022-05-18T03:33:17.1283112Z Version: 1.12.0.dev20220517+cpu 2022-05-18T03:33:17.1284425Z Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration 2022-05-18T03:33:17.1287194Z Home-page: https://pytorch.org/ 2022-05-18T03:33:17.1288421Z Author: PyTorch Team 2022-05-18T03:33:17.1291020Z Author-email: packages@pytorch.org 2022-05-18T03:33:17.1292635Z License: BSD-3 2022-05-18T03:33:17.1295446Z Location: /var/lib/jenkins/workspace/test/forward_backward_compatibility/venv/lib/python3.7/site-packages 2022-05-18T03:33:17.1296680Z Requires: typing-extensions 2022-05-18T03:33:17.1299142Z Required-by: 2022-05-18T03:33:17.1756577Z + python dump_all_function_schemas.py --filename nightly_schemas.txt 2022-05-18T03:33:17.7399802Z + python ../load_torchscript_model.py /tmp/model_new.pt 2022-05-18T03:33:18.1770459Z RecursiveScriptModule( 2022-05-18T03:33:18.1771169Z original_name=NeuralNetwork 2022-05-18T03:33:18.1771419Z (flatten): RecursiveScriptModule(original_name=Flatten) 2022-05-18T03:33:18.1771716Z (linear_relu_stack): RecursiveScriptModule( 2022-05-18T03:33:18.1771943Z original_name=Sequential 2022-05-18T03:33:18.1772183Z (0): RecursiveScriptModule(original_name=Linear) 2022-05-18T03:33:18.1772433Z (1): RecursiveScriptModule(original_name=ReLU) 2022-05-18T03:33:18.1772696Z (2): RecursiveScriptModule(original_name=Linear) 2022-05-18T03:33:18.1772957Z (3): RecursiveScriptModule(original_name=ReLU) 2022-05-18T03:33:18.1773299Z (4): RecursiveScriptModule(original_name=Linear) 2022-05-18T03:33:18.1773506Z ) 2022-05-18T03:33:18.1773657Z ) 2022-05-18T03:33:18.2790253Z + python ../create_dummy_torchscript_model.py /tmp/model_old.pt 2022-05-18T03:33:18.7036331Z /var/lib/jenkins/workspace/test/forward_backward_compatibility/venv/lib/python3.7/site-packages/torch/nn/modules/linear.py:94: UserWarning: Failed to initialize NumPy: numpy.core.multiarray failed to import (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:68.) 2022-05-18T03:33:18.7036926Z self.weight = Parameter(torch.empty((out_features, in_features), **factory_kwargs)) 2022-05-18T03:33:18.8362518Z + deactivate 2022-05-18T03:33:18.8363233Z + '[' -n /var/lib/jenkins/.local/bin:/opt/cache/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin ']' 2022-05-18T03:33:18.8363678Z + PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2022-05-18T03:33:18.8364101Z + export PATH 2022-05-18T03:33:18.8364447Z + unset _OLD_VIRTUAL_PATH 2022-05-18T03:33:18.8364822Z + '[' -n '' ']' 2022-05-18T03:33:18.8365204Z + '[' -n /bin/bash -o -n '' ']' 2022-05-18T03:33:18.8365566Z + hash -r 2022-05-18T03:33:18.8365813Z + '[' -n '' ']' 2022-05-18T03:33:18.8365991Z + unset VIRTUAL_ENV 2022-05-18T03:33:18.8366217Z + '[' '!' 
'' = nondestructive ']' 2022-05-18T03:33:18.8366427Z + unset -f deactivate 2022-05-18T03:33:18.8366630Z + rm -r venv 2022-05-18T03:33:19.0891828Z + pip show torch 2022-05-18T03:33:19.6673228Z Name: torch 2022-05-18T03:33:19.6673625Z Version: 1.12.0a0+git3b23752 2022-05-18T03:33:19.6674001Z Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration 2022-05-18T03:33:19.6674495Z Home-page: https://pytorch.org/ 2022-05-18T03:33:19.6674837Z Author: PyTorch Team 2022-05-18T03:33:19.6675286Z Author-email: packages@pytorch.org 2022-05-18T03:33:19.6675684Z License: BSD-3 2022-05-18T03:33:19.6676199Z Location: /opt/conda/lib/python3.7/site-packages 2022-05-18T03:33:19.6676511Z Requires: typing-extensions 2022-05-18T03:33:19.6676732Z Required-by: 2022-05-18T03:33:19.7109191Z + python check_forward_backward_compatibility.py --existing-schemas nightly_schemas.txt 2022-05-18T03:33:20.3356235Z processing existing schema: prim::rpc_async(...) -> (...) 2022-05-18T03:33:20.3356821Z processing existing schema: prim::rpc_remote(...) -> (...) 2022-05-18T03:33:20.3357226Z processing existing schema: prim::rpc_sync(...) -> (...) 2022-05-18T03:33:20.3358025Z processing existing schema: aten::dist_backward(int context_id, Tensor[] roots, bool retain_graph=False) -> () 2022-05-18T03:33:20.3359717Z processing existing schema: aten::confirmed_by_owner(RRef(t) self) -> (bool) 2022-05-18T03:33:20.3361531Z processing existing schema: aten::owner_name(RRef(t) self) -> (str) 2022-05-18T03:33:20.3363555Z processing existing schema: aten::owner(RRef(t) self) -> (__torch__.torch.classes.dist_rpc.WorkerInfo) 2022-05-18T03:33:20.3365083Z processing existing schema: aten::is_owner(RRef(t) self) -> (bool) 2022-05-18T03:33:20.3367238Z processing existing schema: aten::local_value(RRef(t) self) -> (t(*)) 2022-05-18T03:33:20.3369252Z processing existing schema: aten::to_here(RRef(t) self, float timeout=60.) -> (t(*)) 2022-05-18T03:33:20.3369975Z processing existing schema: prim::PythonOp(...) -> (...) 2022-05-18T03:33:20.3372004Z processing existing schema: quantization::_FloatToBfloat16Quantized(Tensor input) -> (Tensor) 2022-05-18T03:33:20.3373176Z processing existing schema: quantization::_Bfloat16QuantizedToFloat(Tensor input) -> (Tensor) 2022-05-18T03:33:20.3375019Z processing existing schema: aten::set_grad_enabled(bool val) -> () 2022-05-18T03:33:20.3375884Z processing existing schema: aten::is_grad_enabled() -> (bool) 2022-05-18T03:33:20.3378167Z processing existing schema: aten::_no_grad_zero_(Tensor(a!) tensor) -> (Tensor(a!)) 2022-05-18T03:33:20.3380001Z processing existing schema: aten::_no_grad_fill_(Tensor(a!) tensor, float val) -> (Tensor(a!)) 2022-05-18T03:33:20.3381955Z processing existing schema: aten::_no_grad_normal_(Tensor(a!) tensor, float mean, float std) -> (Tensor(a!)) 2022-05-18T03:33:20.3383998Z processing existing schema: aten::_no_grad_uniform_(Tensor(a!) tensor, float a, float b) -> (Tensor(a!)) 2022-05-18T03:33:20.3385221Z processing existing schema: aten::has_torch_function(...) -> (bool) 2022-05-18T03:33:20.3386512Z processing existing schema: aten::is_scripting() -> (bool) 2022-05-18T03:33:20.3388476Z processing existing schema: aten::_get_tracing_state() -> (bool) 2022-05-18T03:33:20.3391395Z processing existing schema: aten::_pack_sequence(Tensor output, Tensor batch_sizes, Tensor? sorted_indices, Tensor? unsorted_indices) -> (Tensor, Tensor, Tensor?, Tensor?) 
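[editor's note] The check_forward_backward_compatibility.py run above iterates over every schema recorded from the nightly (the "processing existing schema: ..." lines) and checks it against the operators registered in the locally built torch, skipping entries that are explicitly allowlisted ("found on allowlist, skipping"). A simplified, hypothetical sketch of that comparison, assuming torch._C.parse_schema and FunctionSchema.is_backward_compatible_with are exposed as in these builds (the real script's allowlist handling and failure reporting are more involved), could look like:

```python
# Simplified sketch of the backward-compatibility check, not the actual script.
import re
import torch

# Example allowlist patterns, taken from entries the log shows being skipped;
# the real allowlist lives in check_forward_backward_compatibility.py.
ALLOWLIST = [
    r"aten::grid_sampler_3d_backward",
    r"aten::_embedding_bag_dense_backward",
    r"prepacked::unpack_prepacked_sizes_conv2d",
]

def check(existing_schemas_file: str) -> bool:
    current = torch._C._jit_get_all_schemas()  # schemas of the freshly built torch
    ok = True
    with open(existing_schemas_file) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            old = torch._C.parse_schema(line)
            if any(re.match(pattern, str(old)) for pattern in ALLOWLIST):
                continue  # corresponds to "found on allowlist, skipping"
            print(f"processing existing schema: {old}")
            # The nightly schema must still be servable by some schema in the
            # new build for the change to be considered backward compatible.
            if not any(new.is_backward_compatible_with(old) for new in current):
                print(f"Can't find backward compatible schema for: {old}")
                ok = False
    return ok
```

The long stream of "processing existing schema" records that follows is this loop walking through the nightly schema dump; the job fails only if a non-allowlisted schema no longer has a compatible counterpart.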
2022-05-18T03:33:20.3393006Z processing existing schema: aten::_no_grad_embedding_renorm_(Tensor weight, Tensor input, float max_norm, float norm_type) -> (Tensor) 2022-05-18T03:33:20.3396251Z processing existing schema: aten::_infer_size(int[] a, int[] b) -> (int[]) 2022-05-18T03:33:20.3398372Z processing existing schema: aten::as_tensor.float(float t, *, int? dtype=None, Device? device=None) -> (Tensor) 2022-05-18T03:33:20.3400853Z processing existing schema: aten::as_tensor.int(int t, *, int? dtype=None, Device? device=None) -> (Tensor) 2022-05-18T03:33:20.3402628Z processing existing schema: aten::as_tensor.bool(bool t, *, int? dtype=None, Device? device=None) -> (Tensor) 2022-05-18T03:33:20.3404243Z processing existing schema: aten::as_tensor.complex(complex t, *, int? dtype=None, Device? device=None) -> (Tensor) 2022-05-18T03:33:20.3406055Z processing existing schema: aten::as_tensor(Tensor(a) data, *, int? dtype=None, Device? device=None) -> (Tensor(a|b)) 2022-05-18T03:33:20.3408013Z processing existing schema: aten::as_tensor.list(t[] data, *, int? dtype=None, Device? device=None) -> (Tensor) 2022-05-18T03:33:20.3409879Z processing existing schema: aten::tensor.float(float t, *, int? dtype=None, Device? device=None, bool requires_grad=False) -> (Tensor) 2022-05-18T03:33:20.3411640Z processing existing schema: aten::tensor.int(int t, *, int? dtype=None, Device? device=None, bool requires_grad=False) -> (Tensor) 2022-05-18T03:33:20.3413448Z processing existing schema: aten::tensor.bool(bool t, *, int? dtype=None, Device? device=None, bool requires_grad=False) -> (Tensor) 2022-05-18T03:33:20.3415296Z processing existing schema: aten::tensor.complex(complex t, *, int? dtype=None, Device? device=None, bool requires_grad=False) -> (Tensor) 2022-05-18T03:33:20.3417372Z processing existing schema: aten::tensor(t[] data, *, int? dtype=None, Device? device=None, bool requires_grad=False) -> (Tensor) 2022-05-18T03:33:20.3419185Z processing existing schema: _test::get_first(str[][] _0) -> (str _0) 2022-05-18T03:33:20.3420842Z processing existing schema: _test::cat(Tensor[] inputs) -> (Tensor) 2022-05-18T03:33:20.3422853Z processing existing schema: _test::leaky_relu(Tensor self, float v=0.01) -> (Tensor) 2022-05-18T03:33:20.3424797Z processing existing schema: aten::__upsample_bilinear(Tensor input, int? size=None, int? scale_factor=None) -> (Tensor) 2022-05-18T03:33:20.3426943Z processing existing schema: aten::__upsample_bilinear.size_list(Tensor input, int[]? size=None, int? scale_factor=None) -> (Tensor) 2022-05-18T03:33:20.3429045Z processing existing schema: aten::__upsample_bilinear.scale_list(Tensor input, int? size=None, int[]? scale_factor=None) -> (Tensor) 2022-05-18T03:33:20.3431482Z processing existing schema: aten::__upsample_bilinear.size_list_scale_list(Tensor input, int[]? size=None, int[]? scale_factor=None) -> (Tensor) 2022-05-18T03:33:20.3433800Z processing existing schema: aten::__upsample(Tensor input, int? size=None, int? scale_factor=None, str mode="nearest", bool? align_corners=None) -> (Tensor) 2022-05-18T03:33:20.3436363Z processing existing schema: aten::__upsample.size_list(Tensor input, int[]? size=None, int? scale_factor=None, str mode="nearest", bool? align_corners=None) -> (Tensor) 2022-05-18T03:33:20.3438147Z processing existing schema: aten::__upsample_nearest(Tensor input, int? size=None, int? scale_factor=None) -> (Tensor) 2022-05-18T03:33:20.3440433Z processing existing schema: aten::__upsample_nearest.size_list(Tensor input, int[]? size=None, int? 
scale_factor=None) -> (Tensor) 2022-05-18T03:33:20.3443552Z processing existing schema: aten::__interpolate.scale_list(Tensor input, int? size=None, float[]? scale_factor=None, str mode="nearest", bool? align_corners=None, bool? recompute_scale_factor=None, bool antialias=False) -> (Tensor) 2022-05-18T03:33:20.3446711Z processing existing schema: aten::__interpolate.size_list_scale_list(Tensor input, int[]? size=None, float[]? scale_factor=None, str mode="nearest", bool? align_corners=None, bool? recompute_scale_factor=None, bool antialias=False) -> (Tensor) 2022-05-18T03:33:20.3449244Z processing existing schema: aten::__interpolate(Tensor input, int? size=None, float? scale_factor=None, str mode="nearest", bool? align_corners=None, bool? recompute_scale_factor=None, bool antialias=False) -> (Tensor) 2022-05-18T03:33:20.3452101Z processing existing schema: aten::__interpolate.size_list(Tensor input, int[]? size=None, float? scale_factor=None, str mode="nearest", bool? align_corners=None, bool? recompute_scale_factor=None, bool antialias=False) -> (Tensor) 2022-05-18T03:33:20.3453040Z processing existing schema: prim::TimePoint() -> (int) 2022-05-18T03:33:20.3454523Z processing existing schema: prim::AddStatValue(str key, int val) -> () 2022-05-18T03:33:20.3456138Z processing existing schema: aten::wait(Future(t) self) -> (t) 2022-05-18T03:33:20.3457454Z processing existing schema: prim::IgnoredPythonOp(...) -> (NoneType) 2022-05-18T03:33:20.3458917Z processing existing schema: aten::save(t item, str filename) -> () 2022-05-18T03:33:20.3462929Z processing existing schema: aten::grad(Tensor[] outputs, Tensor[] inputs, Tensor?[]? grad_outputs=None, bool? retain_graph=None, bool create_graph=False, bool allow_unused=False) -> (Tensor?[]) 2022-05-18T03:33:20.3463843Z processing existing schema: prim::BailoutTemplate() -> (int) 2022-05-18T03:33:20.3465395Z processing existing schema: prim::BailOut(...) -> (Tensor(a)) 2022-05-18T03:33:20.3466994Z processing existing schema: prim::Guard(Tensor(a) t) -> (Tensor(a)) 2022-05-18T03:33:20.3468270Z processing existing schema: prim::FallbackGraph(...) -> (...) 2022-05-18T03:33:20.3469416Z processing existing schema: prim::TypeCheck(...) -> (...) 2022-05-18T03:33:20.3471600Z processing existing schema: aten::_grad_sum_to_size(Tensor(a) self, int[]? size) -> (Tensor(a)) 2022-05-18T03:33:20.3472700Z processing existing schema: prim::ChunkSizes(...) -> (...) 2022-05-18T03:33:20.3474063Z processing existing schema: prim::ConstantChunk(...) -> (...) 2022-05-18T03:33:20.3475251Z processing existing schema: prim::RequiresGradCheck(...) -> (...) 2022-05-18T03:33:20.3476582Z processing existing schema: prim::FusionGroup(...) -> (...) 2022-05-18T03:33:20.3477890Z processing existing schema: prim::profile_ivalue(...) -> (...) 2022-05-18T03:33:20.3479526Z processing existing schema: prim::profile(...) -> (...) 2022-05-18T03:33:20.3480824Z processing existing schema: aten::hash.generic(t value) -> (int) 2022-05-18T03:33:20.3482388Z processing existing schema: prim::ModuleContainerIndex.list(Any self, int ind) -> (Any) 2022-05-18T03:33:20.3483982Z processing existing schema: prim::ModuleContainerIndex.dict(Any self, str ind) -> (Any) 2022-05-18T03:33:20.3485126Z processing existing schema: prim::id(AnyClassType? 
x) -> (int) 2022-05-18T03:33:20.3486583Z processing existing schema: aten::divmod.int(int x, int y) -> (int, int) 2022-05-18T03:33:20.3488163Z processing existing schema: aten::divmod.float(float x, float y) -> (float, float) 2022-05-18T03:33:20.3489742Z processing existing schema: aten::divmod.int_float(int x, float y) -> (float, float) 2022-05-18T03:33:20.3491040Z processing existing schema: aten::divmod.float_int(float x, int y) -> (float, float) 2022-05-18T03:33:20.3492849Z processing existing schema: aten::_list_to_tensor(int[] self) -> (Tensor) 2022-05-18T03:33:20.3494364Z processing existing schema: aten::_tensor_to_list(Tensor self) -> (int[]) 2022-05-18T03:33:20.3495618Z processing existing schema: prim::abs.int(int a) -> (int) 2022-05-18T03:33:20.3497081Z processing existing schema: prim::abs.float(float a) -> (float) 2022-05-18T03:33:20.3498393Z processing existing schema: prim::abs.complex(complex a) -> (float) 2022-05-18T03:33:20.3499742Z processing existing schema: prim::abs.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.3501089Z processing existing schema: prim::abs(Tensor x) -> (Tensor) 2022-05-18T03:33:20.3502420Z processing existing schema: aten::fabs.int(int a) -> (float) 2022-05-18T03:33:20.3503854Z processing existing schema: aten::fabs.float(float a) -> (float) 2022-05-18T03:33:20.3505350Z processing existing schema: aten::fabs.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.3506710Z processing existing schema: aten::gamma.int(int a) -> (float) 2022-05-18T03:33:20.3508193Z processing existing schema: aten::gamma.float(float a) -> (float) 2022-05-18T03:33:20.3509588Z processing existing schema: aten::gamma.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.3511078Z processing existing schema: aten::factorial.int(int a) -> (int) 2022-05-18T03:33:20.3512598Z processing existing schema: aten::_softmax(Tensor self, int dim, bool half_to_float) -> (Tensor) 2022-05-18T03:33:20.3514708Z processing existing schema: aten::_softmax.out(Tensor self, int dim, bool half_to_float, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3516138Z processing existing schema: aten::sinc_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.3517787Z processing existing schema: aten::logit_(Tensor(a!) self, float? eps=None) -> (Tensor(a!)) 2022-05-18T03:33:20.3518987Z processing existing schema: aten::mish_backward(Tensor grad_output, Tensor self) -> (Tensor) 2022-05-18T03:33:20.3520882Z processing existing schema: aten::mish_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.3521956Z processing existing schema: aten::mish(Tensor self) -> (Tensor) 2022-05-18T03:33:20.3523918Z processing existing schema: aten::mish.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3525530Z processing existing schema: aten::silu_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.3527192Z processing existing schema: aten::hardshrink_backward(Tensor grad_out, Tensor self, Scalar lambd) -> (Tensor) 2022-05-18T03:33:20.3529353Z processing existing schema: aten::hardshrink_backward.grad_input(Tensor grad_out, Tensor self, Scalar lambd, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.3531227Z processing existing schema: aten::hardshrink(Tensor self, Scalar lambd=0.5) -> (Tensor) 2022-05-18T03:33:20.3533589Z processing existing schema: aten::hardshrink.out(Tensor self, Scalar lambd=0.5, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.3535401Z processing existing schema: aten::gelu_backward(Tensor grad_output, Tensor self, *, str approximate="none") -> (Tensor) 2022-05-18T03:33:20.3537655Z processing existing schema: aten::gelu_backward.grad_input(Tensor grad_output, Tensor self, *, str approximate="none", Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.3539530Z processing existing schema: aten::gelu_(Tensor(a!) self, *, str approximate="none") -> (Tensor(a!)) 2022-05-18T03:33:20.3541214Z processing existing schema: aten::gelu(Tensor self, *, str approximate="none") -> (Tensor) 2022-05-18T03:33:20.3543282Z processing existing schema: aten::gelu.out(Tensor self, *, str approximate="none", Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3545045Z processing existing schema: aten::prelu_backward(Tensor grad_output, Tensor self, Tensor weight) -> (Tensor, Tensor) 2022-05-18T03:33:20.3546516Z processing existing schema: aten::native_channel_shuffle(Tensor self, int groups) -> (Tensor) 2022-05-18T03:33:20.3548413Z processing existing schema: aten::batch_norm_update_stats(Tensor input, Tensor? running_mean, Tensor? running_var, float momentum) -> (Tensor, Tensor) 2022-05-18T03:33:20.3550444Z processing existing schema: quantized::conv_transpose3d_dynamic(Tensor qx, __torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weight, bool reduce_range=False) -> (Tensor) 2022-05-18T03:33:20.3552974Z processing existing schema: aten::native_batch_norm_backward(Tensor grad_out, Tensor input, Tensor? weight, Tensor? running_mean, Tensor? running_var, Tensor? save_mean, Tensor? save_invstd, bool train, float eps, bool[3] output_mask) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.3554199Z processing existing schema: aten::narrow_copy(Tensor self, int dim, int start, int length) -> (Tensor) 2022-05-18T03:33:20.3556424Z processing existing schema: aten::narrow_copy.out(Tensor self, int dim, int start, int length, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3558095Z processing existing schema: aten::narrow_copy.SymInt(Tensor self, int dim, int start, SymInt length) -> (Tensor) 2022-05-18T03:33:20.3560216Z processing existing schema: aten::mvlgamma.out(Tensor self, int p, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3561588Z processing existing schema: aten::mvlgamma(Tensor self, int p) -> (Tensor) 2022-05-18T03:33:20.3564017Z processing existing schema: aten::nan_to_num.out(Tensor self, float? nan=None, float? posinf=None, float? neginf=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3565927Z processing existing schema: aten::nan_to_num(Tensor self, float? nan=None, float? posinf=None, float? neginf=None) -> (Tensor) 2022-05-18T03:33:20.3568728Z processing existing schema: aten::native_layer_norm_backward(Tensor grad_out, Tensor input, int[] normalized_shape, Tensor mean, Tensor rstd, Tensor? weight, Tensor? bias, bool[3] output_mask) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.3570552Z processing existing schema: aten::kl_div_backward(Tensor grad_output, Tensor self, Tensor target, int reduction=1, *, bool log_target=False) -> (Tensor) 2022-05-18T03:33:20.3572420Z processing existing schema: aten::isin.Tensor_Tensor(Tensor elements, Tensor test_elements, *, bool assume_unique=False, bool invert=False) -> (Tensor) 2022-05-18T03:33:20.3574650Z processing existing schema: aten::isin.Tensor_Tensor_out(Tensor elements, Tensor test_elements, *, bool assume_unique=False, bool invert=False, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.3576459Z processing existing schema: aten::isin.Tensor_Scalar(Tensor elements, Scalar test_element, *, bool assume_unique=False, bool invert=False) -> (Tensor) 2022-05-18T03:33:20.3578783Z processing existing schema: aten::isin.Tensor_Scalar_out(Tensor elements, Scalar test_element, *, bool assume_unique=False, bool invert=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3580611Z processing existing schema: aten::isin.Scalar_Tensor(Scalar element, Tensor test_elements, *, bool assume_unique=False, bool invert=False) -> (Tensor) 2022-05-18T03:33:20.3582888Z processing existing schema: aten::isin.Scalar_Tensor_out(Scalar element, Tensor test_elements, *, bool assume_unique=False, bool invert=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3585737Z processing existing schema: aten::_index_put_impl_(Tensor(a!) self, Tensor?[] indices, Tensor values, bool accumulate=False, bool unsafe=False) -> (Tensor(a!)) 2022-05-18T03:33:20.3588257Z processing existing schema: aten::_index_put_impl_.hacked_twin(Tensor(a!) self, Tensor[] indices, Tensor values, bool accumulate=False, bool unsafe=False) -> (Tensor(a!)) 2022-05-18T03:33:20.3590124Z processing existing schema: aten::index_copy_(Tensor(a!) self, int dim, Tensor index, Tensor source) -> (Tensor(a!)) 2022-05-18T03:33:20.3592116Z processing existing schema: aten::index_copy_.dimname(Tensor(a!) self, str dim, Tensor index, Tensor source) -> (Tensor(a!)) 2022-05-18T03:33:20.3594174Z processing existing schema: aten::index.Tensor(Tensor self, Tensor?[] indices) -> (Tensor) 2022-05-18T03:33:20.3596008Z processing existing schema: aten::index.Tensor_hacked_twin(Tensor self, Tensor[] indices) -> (Tensor) 2022-05-18T03:33:20.3597938Z processing existing schema: aten::index.str(str self, str substr, int start=0, int end=-1) -> (int) 2022-05-18T03:33:20.3599878Z processing existing schema: aten::index.list_int(int[] self, int el) -> (int) 2022-05-18T03:33:20.3602376Z processing existing schema: aten::index.list_float(float[] self, float el) -> (int) 2022-05-18T03:33:20.3603478Z processing existing schema: aten::index.list_bool(bool[] self, bool el) -> (int) 2022-05-18T03:33:20.3605112Z processing existing schema: aten::index.list_Tensor(Tensor[] self, Tensor el) -> (int) 2022-05-18T03:33:20.3606875Z processing existing schema: aten::index.list_str(str[] self, str el) -> (int) 2022-05-18T03:33:20.3608662Z processing existing schema: aten::_fft_r2c(Tensor self, int[] dim, int normalization, bool onesided) -> (Tensor) 2022-05-18T03:33:20.3611338Z processing existing schema: aten::_fft_r2c.out(Tensor self, int[] dim, int normalization, bool onesided, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3613081Z processing existing schema: aten::native_group_norm(Tensor input, Tensor? weight, Tensor? bias, int N, int C, int HxW, int group, float eps) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.3614891Z schema: aten::grid_sampler_3d_backward(Tensor grad_output, Tensor input, Tensor grid, int interpolation_mode, int padding_mode, bool align_corners, bool[2] output_mask) -> (Tensor, Tensor) found on allowlist, skipping 2022-05-18T03:33:20.3616745Z processing existing schema: aten::grid_sampler_2d_backward(Tensor grad_output, Tensor input, Tensor grid, int interpolation_mode, int padding_mode, bool align_corners, bool[2] output_mask) -> (Tensor, Tensor) 2022-05-18T03:33:20.3618205Z processing existing schema: aten::lcm_(Tensor(a!) 
self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.3619210Z processing existing schema: aten::lcm(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.3621055Z processing existing schema: aten::lcm.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3622683Z processing existing schema: aten::gcd_(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.3623817Z processing existing schema: aten::gcd(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.3625696Z processing existing schema: aten::gcd.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3626896Z processing existing schema: aten::gcd.int(int a, int b) -> (int) 2022-05-18T03:33:20.3629072Z processing existing schema: aten::_embedding_bag_per_sample_weights_backward(Tensor grad, Tensor weight, Tensor indices, Tensor offsets, Tensor offset2bag, int mode, int padding_idx=-1) -> (Tensor) 2022-05-18T03:33:20.3630391Z schema: aten::_embedding_bag_dense_backward(Tensor grad, Tensor indices, Tensor offset2bag, Tensor bag_size, Tensor maximum_indices, int num_weights, bool scale_grad_by_freq, int mode, Tensor? per_sample_weights, int padding_idx=-1) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.3631371Z processing existing schema: aten::logit(Tensor self, float? eps=None) -> (Tensor) 2022-05-18T03:33:20.3633140Z processing existing schema: aten::logit.out(Tensor self, float? eps=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3634784Z processing existing schema: aten::_cummin_helper(Tensor self, Tensor(a!) values, Tensor(b!) indices, int dim) -> () 2022-05-18T03:33:20.3636458Z processing existing schema: aten::bincount(Tensor self, Tensor? weights=None, int minlength=0) -> (Tensor) 2022-05-18T03:33:20.3639429Z processing existing schema: _quantized::conv2d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv2dPackedParamsBase) 2022-05-18T03:33:20.3641204Z processing existing schema: aten::binary_cross_entropy_backward(Tensor grad_output, Tensor self, Tensor target, Tensor? weight=None, int reduction=1) -> (Tensor) 2022-05-18T03:33:20.3643390Z processing existing schema: aten::binary_cross_entropy_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, Tensor? weight=None, int reduction=1, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.3646275Z processing existing schema: quantized::conv_transpose1d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] output_padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv2dPackedParamsBase) 2022-05-18T03:33:20.3647443Z processing existing schema: aten::argmin(Tensor self, int? dim=None, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.3649433Z processing existing schema: aten::argmin.out(Tensor self, int? dim=None, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3651047Z processing existing schema: quantized::add_scalar_relu_out(Tensor qa, Scalar b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:20.3652602Z processing existing schema: quantized::add_scalar_relu_out.Tensor(Tensor qa, Tensor b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:20.3654080Z processing existing schema: aten::argmax(Tensor self, int? dim=None, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.3655896Z processing existing schema: aten::argmax.out(Tensor self, int? dim=None, bool keepdim=False, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.3657542Z processing existing schema: quantized::add_scalar_out(Tensor qa, Scalar b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:20.3659147Z processing existing schema: quantized::add_scalar_out.Tensor(Tensor qa, Tensor b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:20.3660701Z processing existing schema: aten::native_dropout_backward(Tensor grad_output, Tensor mask, float scale) -> (Tensor) 2022-05-18T03:33:20.3661627Z processing existing schema: aten::_assert_async(Tensor self) -> () 2022-05-18T03:33:20.3664728Z processing existing schema: aten::_sparse_coo_tensor_unsafe(Tensor indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3666646Z processing existing schema: aten::sparse_coo_tensor.size(int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:20.3668799Z processing existing schema: aten::sparse_coo_tensor.indices(Tensor indices, Tensor values, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3671912Z processing existing schema: aten::sparse_coo_tensor.indices_size(Tensor indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3674401Z processing existing schema: aten::_sparse_bsc_tensor_unsafe(Tensor ccol_indices, Tensor row_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3676760Z processing existing schema: aten::_sparse_bsr_tensor_unsafe(Tensor crow_indices, Tensor col_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3679566Z processing existing schema: aten::_sparse_compressed_tensor_unsafe(Tensor compressed_indices, Tensor plain_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3681982Z processing existing schema: aten::sparse_bsc_tensor.ccol_row_value_size(Tensor ccol_indices, Tensor row_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:20.3684028Z processing existing schema: aten::sparse_bsc_tensor.ccol_row_value(Tensor ccol_indices, Tensor row_indices, Tensor values, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:20.3686146Z processing existing schema: aten::_efficientzerotensor(int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3688470Z processing existing schema: aten::range.step(Scalar start, Scalar end, Scalar step=1, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3690219Z processing existing schema: aten::range(Scalar start, Scalar end, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3691978Z processing existing schema: aten::range.out(Scalar start, Scalar end, Scalar step=1, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3693998Z processing existing schema: aten::scalar_tensor(Scalar s, *, int? dtype=None, int? 
layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3696715Z processing existing schema: aten::ones.names(int[] size, *, str[]? names, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3698573Z processing existing schema: aten::ones(int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3700365Z processing existing schema: aten::ones.out(int[] size, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3702948Z processing existing schema: aten::logspace(Scalar start, Scalar end, int steps, float base=10., *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3704991Z processing existing schema: aten::logspace.out(Scalar start, Scalar end, int steps, float base=10., *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3707151Z processing existing schema: aten::linspace(Scalar start, Scalar end, int steps, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3709019Z processing existing schema: aten::linspace.out(Scalar start, Scalar end, int steps, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3711177Z processing existing schema: aten::kaiser_window(int window_length, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3713371Z processing existing schema: aten::kaiser_window.periodic(int window_length, bool periodic, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3715678Z processing existing schema: aten::kaiser_window.beta(int window_length, bool periodic, float beta, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3717665Z processing existing schema: aten::hamming_window(int window_length, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3720072Z processing existing schema: aten::hamming_window.periodic(int window_length, bool periodic, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3722292Z processing existing schema: aten::hamming_window.periodic_alpha(int window_length, bool periodic, float alpha, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3724659Z processing existing schema: aten::hamming_window.periodic_alpha_beta(int window_length, bool periodic, float alpha, float beta, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3726596Z processing existing schema: aten::hann_window(int window_length, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3728795Z processing existing schema: aten::hann_window.periodic(int window_length, bool periodic, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3731253Z processing existing schema: aten::from_file(str filename, bool? shared=None, int? size=0, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3734164Z processing existing schema: aten::full.names(int[] size, Scalar fill_value, *, str[]? names, int? 
dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3736496Z processing existing schema: aten::full(int[] size, Scalar fill_value, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3738532Z processing existing schema: aten::full.out(int[] size, Scalar fill_value, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3740654Z processing existing schema: aten::bartlett_window(int window_length, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3742841Z processing existing schema: aten::bartlett_window.periodic(int window_length, bool periodic, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3744756Z processing existing schema: _quantized::conv2d_relu(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.3746590Z processing existing schema: aten::arange(Scalar end, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3748824Z processing existing schema: aten::arange.start(Scalar start, Scalar end, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3751177Z processing existing schema: aten::arange.start_step(Scalar start, Scalar end, Scalar step, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3753085Z processing existing schema: aten::arange.start_out(Scalar start, Scalar end, Scalar step=1, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3754785Z processing existing schema: aten::arange.out(Scalar end, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3757096Z processing existing schema: quantized::quantized_rnn_relu_cell_dynamic(Tensor input, Tensor hx, __torch__.torch.classes.quantized.LinearPackedParamsBase w_ih, __torch__.torch.classes.quantized.LinearPackedParamsBase w_hh, Tensor b_ih, Tensor b_hh) -> (Tensor) 2022-05-18T03:33:20.3759367Z processing existing schema: aten::_cudnn_init_dropout_state(float dropout, bool train, int dropout_seed, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:20.3760759Z processing existing schema: prepacked::conv2d_transpose_clamp_run(Tensor X, __torch__.torch.classes.xnnpack.TransposeConv2dOpContext W_prepack) -> (Tensor Y) 2022-05-18T03:33:20.3763606Z processing existing schema: aten::cudnn_convolution_relu(Tensor self, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, int groups) -> (Tensor) 2022-05-18T03:33:20.3764986Z processing existing schema: prepacked::conv2d_clamp_run(Tensor X, __torch__.torch.classes.xnnpack.Conv2dOpContext W_prepack) -> (Tensor Y) 2022-05-18T03:33:20.3768209Z processing existing schema: aten::cudnn_convolution_add_relu(Tensor self, Tensor weight, Tensor z, Scalar? alpha, Tensor? bias, int[] stride, int[] padding, int[] dilation, int groups) -> (Tensor) 2022-05-18T03:33:20.3770983Z processing existing schema: prepacked::conv2d_transpose_clamp_prepack(Tensor W, Tensor? B, int[2] stride, int[2] padding, int[2] output_padding, int[2] dilation, int groups, Scalar? output_min=None, Scalar? 
output_max=None) -> (__torch__.torch.classes.xnnpack.TransposeConv2dOpContext) 2022-05-18T03:33:20.3773684Z processing existing schema: aten::cudnn_convolution(Tensor self, Tensor weight, int[] padding, int[] stride, int[] dilation, int groups, bool benchmark, bool deterministic, bool allow_tf32) -> (Tensor) 2022-05-18T03:33:20.3776272Z processing existing schema: prepacked::conv2d_clamp_prepack(Tensor W, Tensor? B, int[2] stride, int[2] padding, int[2] dilation, int groups, Scalar? output_min=None, Scalar? output_max=None) -> (__torch__.torch.classes.xnnpack.Conv2dOpContext) 2022-05-18T03:33:20.3778610Z processing existing schema: aten::cudnn_batch_norm_backward(Tensor input, Tensor grad_output, Tensor weight, Tensor? running_mean, Tensor? running_var, Tensor? save_mean, Tensor? save_var, float epsilon, Tensor reserveSpace) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.3779831Z processing existing schema: prepacked::linear_clamp_run(Tensor X, __torch__.torch.classes.xnnpack.LinearOpContext W_prepack) -> (Tensor Y) 2022-05-18T03:33:20.3782115Z processing existing schema: aten::cudnn_batch_norm(Tensor input, Tensor weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool training, float exponential_average_factor, float epsilon) -> (Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:20.3784145Z processing existing schema: prepacked::linear_clamp_prepack(Tensor W, Tensor? B=None, Scalar? output_min=None, Scalar? output_max=None) -> (__torch__.torch.classes.xnnpack.LinearOpContext) 2022-05-18T03:33:20.3785653Z processing existing schema: aten::cudnn_affine_grid_generator_backward(Tensor grad, int N, int C, int H, int W) -> (Tensor grad_theta) 2022-05-18T03:33:20.3786748Z schema: prepacked::unpack_prepacked_sizes_linear(Any W_prepack) -> (Any) found on allowlist, skipping 2022-05-18T03:33:20.3788640Z processing existing schema: aten::cudnn_affine_grid_generator(Tensor theta, int N, int C, int H, int W) -> (Tensor grid) 2022-05-18T03:33:20.3789610Z schema: prepacked::unpack_prepacked_sizes_conv2d(Any W_prepack) -> (Any) found on allowlist, skipping 2022-05-18T03:33:20.3792255Z processing existing schema: aten::ctc_loss.IntList(Tensor log_probs, Tensor targets, int[] input_lengths, int[] target_lengths, int blank=0, int reduction=1, bool zero_infinity=False) -> (Tensor) 2022-05-18T03:33:20.3794413Z processing existing schema: aten::ctc_loss.Tensor(Tensor log_probs, Tensor targets, Tensor input_lengths, Tensor target_lengths, int blank=0, int reduction=1, bool zero_infinity=False) -> (Tensor) 2022-05-18T03:33:20.3795376Z processing existing schema: _quantized::linear_prepack_legacy(Tensor W, Tensor? B=None) -> (Tensor W_prepack) 2022-05-18T03:33:20.3797027Z processing existing schema: aten::crow_indices(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.3799035Z processing existing schema: _quantized::conv3d_relu(Tensor qx, __torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.3801176Z processing existing schema: aten::cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=1, int ignore_index=-100, float label_smoothing=0.) -> (Tensor) 2022-05-18T03:33:20.3802672Z processing existing schema: quantized::linear_unpack_fp16(__torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack) -> (Tensor W_origin, Tensor? B_origin) 2022-05-18T03:33:20.3804029Z processing existing schema: quantized::linear_unpack_fp16.legacy(Tensor W_prepack) -> (Tensor W_origin, Tensor? 
B_origin) 2022-05-18T03:33:20.3805996Z processing existing schema: aten::cov(Tensor self, *, int correction=1, Tensor? fweights=None, Tensor? aweights=None) -> (Tensor) 2022-05-18T03:33:20.3807736Z processing existing schema: quantized::linear_unpack(__torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack) -> (Tensor W_origin, Tensor? B_origin) 2022-05-18T03:33:20.3808955Z processing existing schema: quantized::linear_unpack.legacy(Tensor W_prepack) -> (Tensor W_origin, Tensor? B_origin) 2022-05-18T03:33:20.3810761Z processing existing schema: aten::count_nonzero.dim_IntList(Tensor self, int[] dim) -> (Tensor) 2022-05-18T03:33:20.3812264Z processing existing schema: aten::count_nonzero(Tensor self, int? dim=None) -> (Tensor) 2022-05-18T03:33:20.3813830Z processing existing schema: quantized::conv_transpose3d_transpose(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int) 2022-05-18T03:33:20.3815937Z processing existing schema: aten::cosine_similarity(Tensor x1, Tensor x2, int dim=1, float eps=1e-08) -> (Tensor) 2022-05-18T03:33:20.3817926Z processing existing schema: aten::eye(int n, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3820072Z processing existing schema: aten::eye.m(int n, int m, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.3821659Z processing existing schema: aten::eye.out(int n, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3823536Z processing existing schema: aten::eye.m_out(int n, int m, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3824853Z processing existing schema: prim::index(Device self) -> (int?) 2022-05-18T03:33:20.3826656Z processing existing schema: quantized::conv_transpose3d_groups(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int) 2022-05-18T03:33:20.3828479Z processing existing schema: aten::cosine_embedding_loss(Tensor input1, Tensor input2, Tensor target, float margin=0., int reduction=1) -> (Tensor) 2022-05-18T03:33:20.3830385Z processing existing schema: quantized::conv_transpose3d_output_padding(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:20.3831239Z processing existing schema: aten::cosh(Tensor self) -> (Tensor) 2022-05-18T03:33:20.3833295Z processing existing schema: aten::cosh.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3834300Z processing existing schema: aten::cosh.int(int a) -> (float) 2022-05-18T03:33:20.3835588Z processing existing schema: aten::cosh.float(float a) -> (float) 2022-05-18T03:33:20.3837283Z processing existing schema: aten::cosh.complex(complex a) -> (complex) 2022-05-18T03:33:20.3838748Z processing existing schema: aten::cosh.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.3840757Z processing existing schema: quantized::conv_transpose3d_unpack(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (Tensor unpacked_weights, Tensor? B_origin) 2022-05-18T03:33:20.3841571Z processing existing schema: aten::corrcoef(Tensor self) -> (Tensor) 2022-05-18T03:33:20.3843195Z processing existing schema: aten::exp2_(Tensor(a!) 
self) -> (Tensor(a!)) 2022-05-18T03:33:20.3844350Z processing existing schema: prim::is_mkldnn(Tensor a) -> (bool) 2022-05-18T03:33:20.3846362Z processing existing schema: quantized::conv_transpose2d_output_padding(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:20.3847657Z processing existing schema: aten::exp2(Tensor self) -> (Tensor) 2022-05-18T03:33:20.3849543Z processing existing schema: aten::exp2.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3850580Z processing existing schema: prim::is_sparse_csr(Tensor a) -> (bool) 2022-05-18T03:33:20.3852576Z processing existing schema: quantized::conv_transpose2d_padding(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:20.3854253Z processing existing schema: aten::copy_(Tensor(a!) self, Tensor src, bool non_blocking=False) -> (Tensor(a!)) 2022-05-18T03:33:20.3855976Z processing existing schema: aten::copy_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.3857736Z processing existing schema: aten::copy_.int(Tensor(a!) self, int other) -> (Tensor(a!)) 2022-05-18T03:33:20.3860032Z processing existing schema: aten::copy_.float(Tensor(a!) self, float other) -> (Tensor(a!)) 2022-05-18T03:33:20.3861558Z processing existing schema: quantized::conv3d_stride(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:20.3863907Z processing existing schema: aten::conv_tbc_backward(Tensor self, Tensor input, Tensor weight, Tensor bias, int pad) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.3866405Z processing existing schema: aten::empty_quantized(int[] size, Tensor qtensor, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:20.3867620Z processing existing schema: aten::isprintable(str self) -> (bool) 2022-05-18T03:33:20.3869414Z processing existing schema: quantized::conv2d_dilation(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:20.3872328Z processing existing schema: aten::conv3d(Tensor input, Tensor weight, Tensor? bias=None, int[3] stride=[1, 1, 1], int[3] padding=[0, 0, 0], int[3] dilation=[1, 1, 1], int groups=1) -> (Tensor) 2022-05-18T03:33:20.3875189Z processing existing schema: aten::conv3d.padding(Tensor input, Tensor weight, Tensor? bias=None, int[3] stride=[1, 1, 1], str padding="valid", int[3] dilation=[1, 1, 1], int groups=1) -> (Tensor) 2022-05-18T03:33:20.3876859Z processing existing schema: quantized::conv2d_stride(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:20.3878484Z processing existing schema: aten::contiguous(Tensor(a) self, *, int memory_format=0) -> (Tensor(a)) 2022-05-18T03:33:20.3880713Z processing existing schema: aten::embedding_renorm_(Tensor(a!) self, Tensor indices, float max_norm, float norm_type) -> (Tensor(a!)) 2022-05-18T03:33:20.3882336Z processing existing schema: aten::rfind(str self, str substr, int start=0, int end=-1) -> (int) 2022-05-18T03:33:20.3884242Z processing existing schema: quantized::conv3d_unpack(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (Tensor unpacked_weights, Tensor? 
B_origin) 2022-05-18T03:33:20.3885958Z processing existing schema: aten::constant_pad_nd(Tensor self, int[] pad, Scalar value=0) -> (Tensor) 2022-05-18T03:33:20.3887165Z processing existing schema: quantized::conv2d_unpack_sizes(Any packed_weights) -> (Any) 2022-05-18T03:33:20.3889049Z processing existing schema: aten::conj_physical_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.3890925Z processing existing schema: aten::embedding_dense_backward(Tensor grad_output, Tensor indices, int num_weights, int padding_idx, bool scale_grad_by_freq) -> (Tensor) 2022-05-18T03:33:20.3892010Z processing existing schema: aten::expandtabs(str self, int tabsize=8) -> (str) 2022-05-18T03:33:20.3893984Z processing existing schema: quantized::conv2d_unpack(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (Tensor unpacked_weights, Tensor? B_origin) 2022-05-18T03:33:20.3894820Z processing existing schema: aten::conj_physical(Tensor self) -> (Tensor) 2022-05-18T03:33:20.3896566Z processing existing schema: aten::conj_physical.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3898289Z processing existing schema: quantized::conv1d_unpack(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (Tensor unpacked_weights, Tensor? B_origin) 2022-05-18T03:33:20.3899396Z processing existing schema: aten::conj(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.3901198Z processing existing schema: quantized::conv_unpack(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (Tensor unpacked_weights, Tensor? B_origin) 2022-05-18T03:33:20.3902650Z processing existing schema: aten::concat(Tensor[] tensors, int dim=0) -> (Tensor) 2022-05-18T03:33:20.3904930Z processing existing schema: aten::concat.out(Tensor[] tensors, int dim=0, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3906699Z processing existing schema: aten::concat.names(Tensor[] tensors, str dim) -> (Tensor) 2022-05-18T03:33:20.3908738Z processing existing schema: aten::concat.names_out(Tensor[] tensors, str dim, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3910316Z processing existing schema: quantized::threshold(Tensor qx, Scalar threshold, Scalar value) -> (Tensor qy) 2022-05-18T03:33:20.3912055Z processing existing schema: aten::complex.out(Tensor real, Tensor imag, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3913411Z processing existing schema: aten::complex(Tensor real, Tensor imag) -> (Tensor) 2022-05-18T03:33:20.3915166Z processing existing schema: quantized::softmax(Tensor qx, int dim, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.3916455Z processing existing schema: aten::combinations(Tensor self, int r=2, bool with_replacement=False) -> (Tensor) 2022-05-18T03:33:20.3918095Z processing existing schema: quantized::sigmoid(Tensor qx, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.3919760Z processing existing schema: aten::column_stack(Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:20.3921574Z processing existing schema: aten::column_stack.out(Tensor[] tensors, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.3924542Z processing existing schema: quantized::max_pool1d(Tensor qx, int[] kernel_size, int[] stride, int[] padding, int[] dilation, bool ceil_mode) -> (Tensor) 2022-05-18T03:33:20.3926559Z processing existing schema: aten::col2im(Tensor self, int[2] output_size, int[2] kernel_size, int[2] dilation, int[2] padding, int[2] stride) -> (Tensor) 2022-05-18T03:33:20.3929532Z processing existing schema: aten::col2im.out(Tensor self, int[2] output_size, int[2] kernel_size, int[2] dilation, int[2] padding, int[2] stride, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3930959Z processing existing schema: quantized::mul_scalar_out(Tensor qa, Scalar b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:20.3932665Z processing existing schema: quantized::mul_scalar_out.Tensor(Tensor qa, Tensor b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:20.3934273Z processing existing schema: aten::clamp_min_(Tensor(a!) self, Scalar min) -> (Tensor(a!)) 2022-05-18T03:33:20.3935916Z processing existing schema: aten::clamp_min_.Tensor(Tensor(a!) self, Tensor min) -> (Tensor(a!)) 2022-05-18T03:33:20.3937388Z processing existing schema: quantized::mul_scalar_relu(Tensor qa, Scalar b) -> (Tensor qc) 2022-05-18T03:33:20.3938511Z processing existing schema: quantized::mul_scalar_relu.Tensor(Tensor qa, Tensor b) -> (Tensor qc) 2022-05-18T03:33:20.3939918Z processing existing schema: aten::clamp_min(Tensor self, Scalar min) -> (Tensor) 2022-05-18T03:33:20.3941294Z processing existing schema: aten::clamp_min.Tensor(Tensor self, Tensor min) -> (Tensor) 2022-05-18T03:33:20.3943161Z processing existing schema: aten::clamp_min.out(Tensor self, Scalar min, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3945163Z processing existing schema: aten::clamp_min.Tensor_out(Tensor self, Tensor min, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3946943Z processing existing schema: aten::xlogy_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.3948613Z processing existing schema: aten::xlogy_.Scalar_Other(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.3950214Z processing existing schema: quantized::matmul(Tensor qa, Tensor qb, float scale, int zero_point) -> (Tensor qc) 2022-05-18T03:33:20.3952003Z processing existing schema: aten::choose_qparams_optimized(Tensor input, int numel, int n_bins, float ratio, int bit_width) -> (Tensor, Tensor) 2022-05-18T03:33:20.3953102Z processing existing schema: aten::xlogy.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.3955051Z processing existing schema: aten::xlogy.OutTensor(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3956205Z processing existing schema: aten::xlogy.Scalar_Self(Scalar self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.3958155Z processing existing schema: aten::xlogy.OutScalar_Self(Scalar self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3959455Z processing existing schema: aten::xlogy.Scalar_Other(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.3961351Z processing existing schema: aten::xlogy.OutScalar_Other(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3962781Z processing existing schema: _quantized::linear_prepack_fp16_legacy(Tensor W, Tensor? 
B=None) -> (Tensor W_prepack) 2022-05-18T03:33:20.3964297Z processing existing schema: aten::cholesky_solve(Tensor self, Tensor input2, bool upper=False) -> (Tensor) 2022-05-18T03:33:20.3966113Z processing existing schema: aten::cholesky_solve.out(Tensor self, Tensor input2, bool upper=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3967757Z processing existing schema: _quantized::linear_prepack_fp16(Tensor W, Tensor? B=None) -> (__torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack) 2022-05-18T03:33:20.3968855Z processing existing schema: aten::cholesky_inverse(Tensor self, bool upper=False) -> (Tensor) 2022-05-18T03:33:20.3970814Z processing existing schema: aten::cholesky_inverse.out(Tensor self, bool upper=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3972228Z processing existing schema: quantized::linear_prepack_legacy(Tensor W, Tensor? B=None) -> (Tensor W_prepack) 2022-05-18T03:33:20.3973733Z processing existing schema: aten::chain_matmul(Tensor[] matrices) -> (Tensor) 2022-05-18T03:33:20.3975727Z processing existing schema: aten::chain_matmul.out(Tensor[] matrices, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3977102Z processing existing schema: quantized::embedding_bag_2bit_unpack(Tensor weight) -> (Tensor) 2022-05-18T03:33:20.3978219Z processing existing schema: aten::can_cast(int from, int to) -> (bool) 2022-05-18T03:33:20.3980288Z processing existing schema: aten::cumsum_(Tensor(a!) self, int dim, *, int? dtype=None) -> (Tensor(a!)) 2022-05-18T03:33:20.3982201Z processing existing schema: aten::cumsum_.dimname(Tensor(a!) self, str dim, *, int? dtype=None) -> (Tensor(a!)) 2022-05-18T03:33:20.3982716Z schema: static_runtime::signed_log1p(Tensor input) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.3983618Z processing existing schema: quantized::embedding_bag_4bit_unpack(Tensor weight) -> (Tensor) 2022-05-18T03:33:20.3985537Z processing existing schema: aten::bucketize.Tensor(Tensor self, Tensor boundaries, *, bool out_int32=False, bool right=False) -> (Tensor) 2022-05-18T03:33:20.3987600Z processing existing schema: aten::bucketize.Tensor_out(Tensor self, Tensor boundaries, *, bool out_int32=False, bool right=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.3989144Z processing existing schema: aten::bucketize.Scalar(Scalar self, Tensor boundaries, *, bool out_int32=False, bool right=False) -> (Tensor) 2022-05-18T03:33:20.3990470Z processing existing schema: quantized::embedding_bag_prepack(Tensor weight) -> (__torch__.torch.classes.quantized.EmbeddingPackedParamsBase W_prepack) 2022-05-18T03:33:20.3992259Z processing existing schema: aten::broadcast_tensors(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.3995140Z processing existing schema: quantized::embedding_bag_2bit_rowwise_offsets(Tensor weight, Tensor indices, Tensor? offsets=None, bool scale_grad_by_freq=False, int mode=0, bool pruned_weights=False, Tensor? per_sample_weights=None, Tensor? compressed_indices_mapping=None, bool include_last_offset=False) -> (Tensor) 2022-05-18T03:33:20.3996764Z processing existing schema: aten::bitwise_xor_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.3998306Z processing existing schema: aten::bitwise_xor_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.4001189Z processing existing schema: quantized::embedding_bag_byte_rowwise_offsets(Tensor weight, Tensor indices, Tensor? 
offsets=None, bool scale_grad_by_freq=False, int mode=0, bool pruned_weights=False, Tensor? per_sample_weights=None, Tensor? compressed_indices_mapping=None, bool include_last_offset=False) -> (Tensor) 2022-05-18T03:33:20.4002399Z processing existing schema: aten::bitwise_right_shift_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.4004145Z processing existing schema: aten::bitwise_right_shift_.Tensor_Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.4005851Z processing existing schema: quantized::embedding_byte(__torch__.torch.classes.quantized.EmbeddingPackedParamsBase weight, Tensor indices, bool pruned_weights=False) -> (Tensor) 2022-05-18T03:33:20.4007220Z processing existing schema: aten::bitwise_or_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.4009418Z processing existing schema: aten::bitwise_or_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.4012378Z processing existing schema: quantized::embedding_bag_byte(__torch__.torch.classes.quantized.EmbeddingPackedParamsBase weight, Tensor indices, Tensor? offsets=None, bool scale_grad_by_freq=False, int mode=0, bool pruned_weights=False, Tensor? per_sample_weights=None, Tensor? compressed_indices_mapping=None, bool include_last_offset=False) -> (Tensor) 2022-05-18T03:33:20.4013471Z processing existing schema: aten::bitwise_not_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4015312Z processing existing schema: quantized::celu(Tensor self, float output_scale, int output_zero_point, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.4016272Z processing existing schema: aten::bitwise_not(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4017933Z processing existing schema: aten::bitwise_not.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4021396Z processing existing schema: quantized::conv_transpose3d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] output_padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv3dPackedParamsBase) 2022-05-18T03:33:20.4023303Z processing existing schema: aten::binary_cross_entropy_with_logits_backward(Tensor grad_output, Tensor self, Tensor target, Tensor? weight=None, Tensor? pos_weight=None, int reduction=1) -> (Tensor) 2022-05-18T03:33:20.4026654Z processing existing schema: quantized::conv_transpose2d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] output_padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv2dPackedParamsBase) 2022-05-18T03:33:20.4028454Z processing existing schema: aten::binary_cross_entropy_with_logits(Tensor self, Tensor target, Tensor? weight=None, Tensor? pos_weight=None, int reduction=1) -> (Tensor) 2022-05-18T03:33:20.4031306Z processing existing schema: quantized::conv2d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv2dPackedParamsBase) 2022-05-18T03:33:20.4032811Z processing existing schema: aten::bilinear(Tensor input1, Tensor input2, Tensor weight, Tensor? bias=None) -> (Tensor) 2022-05-18T03:33:20.4035752Z processing existing schema: quantized::conv1d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv2dPackedParamsBase) 2022-05-18T03:33:20.4037509Z processing existing schema: aten::bernoulli_.Tensor(Tensor(a!) self, Tensor p, *, Generator? 
generator=None) -> (Tensor(a!)) 2022-05-18T03:33:20.4040188Z processing existing schema: aten::bernoulli_.float(Tensor(a!) self, float p=0.5, *, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:20.4041951Z processing existing schema: quantized::conv1d_dynamic(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, bool reduce_range=False) -> (Tensor) 2022-05-18T03:33:20.4044113Z processing existing schema: aten::batch_norm_backward_reduce(Tensor grad_out, Tensor input, Tensor mean, Tensor invstd, Tensor? weight, bool input_g, bool weight_g, bool bias_g) -> (Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:20.4045765Z processing existing schema: quantized::conv_transpose2d(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.4048002Z processing existing schema: aten::avg_pool3d_backward(Tensor grad_output, Tensor self, int[3] kernel_size, int[3] stride, int[3] padding, bool ceil_mode, bool count_include_pad, int? divisor_override) -> (Tensor) 2022-05-18T03:33:20.4050809Z processing existing schema: aten::avg_pool3d_backward.grad_input(Tensor grad_output, Tensor self, int[3] kernel_size, int[3] stride, int[3] padding, bool ceil_mode, bool count_include_pad, int? divisor_override, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.4052231Z processing existing schema: quantized::conv_transpose1d(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.4054932Z processing existing schema: aten::avg_pool3d(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=[0, 0, 0], bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None) -> (Tensor) 2022-05-18T03:33:20.4058115Z processing existing schema: aten::avg_pool3d.out(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=[0, 0, 0], bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4060280Z processing existing schema: aten::tril_indices(int row, int col, int offset=0, *, int? dtype=4, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.4062122Z processing existing schema: quantized::conv3d.new(Tensor qx, __torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.4065467Z processing existing schema: quantized::conv3d(Tensor qx, __torch__.torch.classes.quantized.Conv3dPackedParamsBase weight, int[] stride, int[] padding, int[] dilation, int groups, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.4067954Z processing existing schema: aten::avg_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None) -> (Tensor) 2022-05-18T03:33:20.4071021Z processing existing schema: aten::avg_pool2d.out(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4072949Z processing existing schema: quantized::cat_relu_out(Tensor[] qx, int dim, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4074544Z processing existing schema: aten::atanh_(Tensor(a!) 
self) -> (Tensor(a!)) 2022-05-18T03:33:20.4076800Z processing existing schema: quantized::batch_norm2d_relu(Tensor qx, Tensor? weight, Tensor? bias, Tensor mean, Tensor var, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.4078295Z processing existing schema: aten::asinh(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4080070Z processing existing schema: aten::asinh.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4081382Z processing existing schema: aten::asinh.int(int a) -> (float) 2022-05-18T03:33:20.4082681Z processing existing schema: aten::asinh.float(float a) -> (float) 2022-05-18T03:33:20.4083835Z processing existing schema: aten::asinh.complex(complex a) -> (complex) 2022-05-18T03:33:20.4085271Z processing existing schema: aten::asinh.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.4087250Z processing existing schema: quantized::batch_norm_relu(Tensor qx, Tensor? weight, Tensor? bias, Tensor mean, Tensor var, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.4089516Z processing existing schema: aten::as_strided_(Tensor(a!) self, int[] size, int[] stride, int? storage_offset=None) -> (Tensor(a!)) 2022-05-18T03:33:20.4091471Z processing existing schema: quantized::batch_norm(Tensor qx, Tensor? weight, Tensor? bias, Tensor mean, Tensor var, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.4093707Z processing existing schema: aten::as_strided(Tensor(a) self, int[] size, int[] stride, int? storage_offset=None) -> (Tensor(a)) 2022-05-18T03:33:20.4095414Z processing existing schema: _quantized::add(Tensor qa, Tensor qb, float scale, int zero_point) -> (Tensor qc) 2022-05-18T03:33:20.4096383Z processing existing schema: aten::argwhere(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4097691Z processing existing schema: quantized::add_scalar(Tensor qa, Scalar b) -> (Tensor qc) 2022-05-18T03:33:20.4099318Z processing existing schema: quantized::add_scalar.Tensor(Tensor qa, Tensor b) -> (Tensor qc) 2022-05-18T03:33:20.4100093Z processing existing schema: aten::arctanh(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4102212Z processing existing schema: aten::arctanh.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4103459Z processing existing schema: quantized::add(Tensor qa, Tensor qb, float scale, int zero_point) -> (Tensor qc) 2022-05-18T03:33:20.4105551Z processing existing schema: quantized::add.out(Tensor qa, Tensor qb, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:20.4106754Z processing existing schema: quantized::add.Scalar(Tensor qa, Scalar b) -> (Tensor qc) 2022-05-18T03:33:20.4107870Z processing existing schema: quantized::add.Scalar2(Scalar b, Tensor qa) -> (Tensor qc) 2022-05-18T03:33:20.4110160Z processing existing schema: quantized::add.Scalar_out(Tensor qa, Scalar b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:20.4111046Z processing existing schema: aten::arctan(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4112999Z processing existing schema: aten::arctan.out(Tensor self, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.4115387Z processing existing schema: quantized::quantized_rnn_tanh_cell_dynamic(Tensor input, Tensor hx, __torch__.torch.classes.quantized.LinearPackedParamsBase w_ih, __torch__.torch.classes.quantized.LinearPackedParamsBase w_hh, Tensor b_ih, Tensor b_hh) -> (Tensor) 2022-05-18T03:33:20.4116378Z processing existing schema: aten::arccos(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4118033Z processing existing schema: aten::arccos.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4120513Z processing existing schema: aten::native_batch_norm(Tensor input, Tensor? weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool training, float momentum, float eps) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.4123755Z processing existing schema: aten::native_batch_norm.out(Tensor input, Tensor? weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool training, float momentum, float eps, *, Tensor(a!) out, Tensor(b!) save_mean, Tensor(c!) save_invstd) -> (Tensor(a!), Tensor(b!), Tensor(c!)) 2022-05-18T03:33:20.4124624Z processing existing schema: aten::_fw_primal(Tensor(a) self, int level) -> (Tensor(a)) 2022-05-18T03:33:20.4126318Z processing existing schema: aten::retain_grad(Tensor(a!) self) -> () 2022-05-18T03:33:20.4127860Z processing existing schema: aten::is_leaf(Tensor self) -> (bool) 2022-05-18T03:33:20.4129963Z processing existing schema: quantized::embedding_bag_unpack(__torch__.torch.classes.quantized.EmbeddingPackedParamsBase W_prepack) -> (Tensor W_origin) 2022-05-18T03:33:20.4131721Z processing existing schema: aten::cartesian_prod(Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:20.4133334Z processing existing schema: aten::data(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4134129Z schema: static_runtime::select_tensor(Tensor(a) a, Tensor(b) b, bool use_b) -> (Tensor(a|b)) found on allowlist, skipping 2022-05-18T03:33:20.4135810Z processing existing schema: _quantized::linear(Tensor X, __torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack, float Y_scale_i, int Y_zero_point_i) -> (Tensor Y) 2022-05-18T03:33:20.4136997Z processing existing schema: aten::ccol_indices(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.4138630Z processing existing schema: aten::var_mean(Tensor self, bool unbiased=True) -> (Tensor, Tensor) 2022-05-18T03:33:20.4140336Z processing existing schema: aten::var_mean.dim(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor, Tensor) 2022-05-18T03:33:20.4142106Z processing existing schema: aten::var_mean.names_dim(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor, Tensor) 2022-05-18T03:33:20.4144006Z processing existing schema: aten::var_mean.correction(Tensor self, int[1]? dim, *, int? correction, bool keepdim=False) -> (Tensor, Tensor) 2022-05-18T03:33:20.4145973Z processing existing schema: aten::var_mean.correction_names(Tensor self, str[1] dim, *, int? correction, bool keepdim=False) -> (Tensor, Tensor) 2022-05-18T03:33:20.4147972Z processing existing schema: quantized::linear_relu(Tensor X, __torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack, float Y_scale_i, int Y_zero_point_i) -> (Tensor Y) 2022-05-18T03:33:20.4150021Z processing existing schema: aten::cauchy_(Tensor(a!) self, float median=0., float sigma=1., *, Generator? 
generator=None) -> (Tensor(a!)) 2022-05-18T03:33:20.4151227Z processing existing schema: aten::var(Tensor self, bool unbiased=True) -> (Tensor) 2022-05-18T03:33:20.4153187Z processing existing schema: aten::var.dim(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.4155092Z processing existing schema: aten::var.names_dim(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.4157450Z processing existing schema: aten::var.names_out(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4159774Z processing existing schema: aten::var.out(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4161650Z processing existing schema: aten::var.correction(Tensor self, int[1]? dim, *, int? correction, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.4163902Z processing existing schema: aten::var.correction_out(Tensor self, int[1]? dim, *, int? correction, bool keepdim=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4165740Z processing existing schema: aten::var.correction_names(Tensor self, str[1] dim, *, int? correction, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.4167901Z processing existing schema: aten::var.correction_names_out(Tensor self, str[1] dim, *, int? correction, bool keepdim=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4171261Z processing existing schema: _quantized::conv_transpose1d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] output_padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv2dPackedParamsBase) 2022-05-18T03:33:20.4172280Z processing existing schema: aten::bitwise_and.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4174212Z processing existing schema: aten::bitwise_and.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4175587Z processing existing schema: aten::bitwise_and.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.4177357Z processing existing schema: aten::bitwise_and.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4179394Z processing existing schema: aten::unsafe_split_with_sizes(Tensor self, int[] split_sizes, int dim=0) -> (Tensor[]) 2022-05-18T03:33:20.4182207Z processing existing schema: _quantized::conv3d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv3dPackedParamsBase) 2022-05-18T03:33:20.4183501Z processing existing schema: aten::binomial(Tensor count, Tensor prob, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:20.4185544Z processing existing schema: aten::unsafe_split.Tensor(Tensor self, int split_size, int dim=0) -> (Tensor[]) 2022-05-18T03:33:20.4187184Z processing existing schema: aten::copysign_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.4188814Z processing existing schema: aten::copysign_.Scalar(Tensor(a!) 
self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.4190368Z processing existing schema: quantized::conv_transpose2d_transpose(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int) 2022-05-18T03:33:20.4192145Z processing existing schema: _quantized::conv_transpose2d(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.4193864Z processing existing schema: aten::batch_norm_backward_elemt(Tensor grad_out, Tensor input, Tensor mean, Tensor invstd, Tensor? weight, Tensor mean_dy, Tensor mean_dy_xmu, Tensor count) -> (Tensor) 2022-05-18T03:33:20.4195038Z processing existing schema: aten::trunc_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4196696Z processing existing schema: aten::true_divide_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.4198259Z processing existing schema: aten::true_divide_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.4200166Z processing existing schema: _quantized::conv2d(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.4202132Z processing existing schema: aten::baddbmm_(Tensor(a!) self, Tensor batch1, Tensor batch2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:20.4203187Z processing existing schema: aten::true_divide.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.4204476Z processing existing schema: aten::true_divide.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4206328Z processing existing schema: aten::true_divide.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4207952Z processing existing schema: quantized::add_relu(Tensor qa, Tensor qb, float scale, int zero_point) -> (Tensor qc) 2022-05-18T03:33:20.4209617Z processing existing schema: quantized::add_relu.out(Tensor qa, Tensor qb, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:20.4211051Z processing existing schema: quantized::add_relu.Scalar(Tensor qa, Scalar b) -> (Tensor qc) 2022-05-18T03:33:20.4212498Z processing existing schema: quantized::add_relu.Scalar2(Scalar b, Tensor qa) -> (Tensor qc) 2022-05-18T03:33:20.4214475Z processing existing schema: quantized::add_relu.Scalar_out(Tensor qa, Scalar b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:20.4215913Z processing existing schema: aten::arctan2(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4217798Z processing existing schema: aten::arctan2.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4219176Z processing existing schema: aten::threshold(Tensor self, Scalar threshold, Scalar value) -> (Tensor) 2022-05-18T03:33:20.4221025Z processing existing schema: aten::threshold.out(Tensor self, Scalar threshold, Scalar value, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4222454Z processing existing schema: aten::square_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4224579Z processing existing schema: quantized::elu(Tensor self, float output_scale, int output_zero_point, Scalar alpha=1, Scalar scale=1, Scalar input_scale=1) -> (Tensor) 2022-05-18T03:33:20.4226497Z processing existing schema: aten::addcdiv_(Tensor(a!) 
self, Tensor tensor1, Tensor tensor2, *, Scalar value=1) -> (Tensor(a!)) 2022-05-18T03:33:20.4227507Z processing existing schema: aten::square(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4229395Z processing existing schema: aten::square.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4230815Z processing existing schema: aten::sqrt_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4232332Z processing existing schema: aten::sinh_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4233482Z processing existing schema: aten::signbit(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4235403Z processing existing schema: aten::signbit.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4236961Z processing existing schema: aten::sign_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4237902Z processing existing schema: aten::_version(Tensor self) -> (int) 2022-05-18T03:33:20.4239642Z processing existing schema: aten::round_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4241423Z processing existing schema: aten::round_.decimals(Tensor(a!) self, *, int decimals) -> (Tensor(a!)) 2022-05-18T03:33:20.4243264Z processing existing schema: aten::resize_as_(Tensor(a!) self, Tensor the_template, *, int? memory_format=None) -> (Tensor(a!)) 2022-05-18T03:33:20.4245172Z processing existing schema: aten::rename_(Tensor(a!) self, str[]? names) -> (Tensor(a!)) 2022-05-18T03:33:20.4246671Z processing existing schema: aten::relu_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4248718Z processing existing schema: aten::refine_names(Tensor(a) self, str[] names) -> (Tensor(a)) 2022-05-18T03:33:20.4250325Z processing existing schema: aten::reciprocal_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4253209Z processing existing schema: aten::_sparse_coo_tensor_with_dims_and_tensors(int sparse_dim, int dense_dim, int[] size, Tensor indices, Tensor values, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:20.4254516Z processing existing schema: aten::rad2deg_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4257237Z processing existing schema: aten::_sparse_coo_tensor_with_dims(int sparse_dim, int dense_dim, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:20.4258278Z processing existing schema: aten::rad2deg(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4260138Z processing existing schema: aten::rad2deg.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4261920Z processing existing schema: aten::pow_.Scalar(Tensor(a!) self, Scalar exponent) -> (Tensor(a!)) 2022-05-18T03:33:20.4263560Z processing existing schema: aten::pow_.Tensor(Tensor(a!) self, Tensor exponent) -> (Tensor(a!)) 2022-05-18T03:33:20.4265042Z processing existing schema: aten::output_nr(Tensor self) -> (int) 2022-05-18T03:33:20.4267247Z processing existing schema: aten::ones_like(Tensor self, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:20.4268403Z processing existing schema: aten::_logcumsumexp(Tensor self, int dim) -> (Tensor) 2022-05-18T03:33:20.4270364Z processing existing schema: aten::_logcumsumexp.out(Tensor self, int dim, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4272078Z processing existing schema: aten::nextafter_(Tensor(a!) 
self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.4273649Z processing existing schema: aten::_log_softmax_backward_data(Tensor grad_output, Tensor output, int dim, int input_dtype) -> (Tensor) 2022-05-18T03:33:20.4275683Z processing existing schema: aten::_log_softmax_backward_data.out(Tensor grad_output, Tensor output, int dim, int input_dtype, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4276890Z processing existing schema: aten::nextafter(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4278857Z processing existing schema: aten::nextafter.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4280459Z processing existing schema: aten::neg_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4282691Z processing existing schema: aten::mul_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.4283708Z processing existing schema: aten::mul_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.4285843Z processing existing schema: aten::mul_.t(t[](a!) l, int n) -> (t[](a!)) 2022-05-18T03:33:20.4287644Z processing existing schema: aten::mode(Tensor self, int dim=-1, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.4289187Z processing existing schema: aten::mode.dimname(Tensor self, str dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.4291547Z processing existing schema: aten::mode.dimname_out(Tensor self, str dim, bool keepdim=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.4294049Z processing existing schema: aten::mode.values(Tensor self, int dim=-1, bool keepdim=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.4294912Z processing existing schema: aten::min(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4296663Z processing existing schema: aten::min.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.4299174Z processing existing schema: aten::min.dim_min(Tensor self, int dim, bool keepdim=False, *, Tensor(a!) min, Tensor(b!) min_indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.4300836Z processing existing schema: aten::min.names_dim(Tensor self, str dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.4303180Z processing existing schema: aten::min.names_dim_min(Tensor self, str dim, bool keepdim=False, *, Tensor(a!) min, Tensor(b!) min_indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.4304292Z processing existing schema: aten::min.other(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4306542Z processing existing schema: aten::min.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4308006Z processing existing schema: aten::nanmedian(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4309595Z processing existing schema: aten::nanmedian.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.4312456Z processing existing schema: aten::nanmedian.dim_values(Tensor self, int dim, bool keepdim=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) 
indices) 2022-05-18T03:33:20.4314100Z processing existing schema: aten::nanmedian.names_dim(Tensor self, str dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.4316652Z processing existing schema: aten::nanmedian.names_dim_values(Tensor self, str dim, bool keepdim=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.4317774Z processing existing schema: aten::median(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4319644Z processing existing schema: aten::median.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.4322016Z processing existing schema: aten::median.dim_values(Tensor self, int dim, bool keepdim=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.4323698Z processing existing schema: aten::median.names_dim(Tensor self, str dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.4326219Z processing existing schema: aten::median.names_dim_values(Tensor self, str dim, bool keepdim=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.4327538Z processing existing schema: aten::mean(Tensor self, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.4329551Z processing existing schema: aten::mean.dim(Tensor self, int[1] dim, bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.4331474Z processing existing schema: aten::mean.names_dim(Tensor self, str[1] dim, bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.4333745Z processing existing schema: aten::mean.names_out(Tensor self, str[1] dim, bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4336013Z processing existing schema: aten::mean.out(Tensor self, int[1] dim, bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4338941Z processing existing schema: aten::max_pool3d_with_indices(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=[0, 0, 0], int[3] dilation=[1, 1, 1], bool ceil_mode=False) -> (Tensor, Tensor) 2022-05-18T03:33:20.4342593Z processing existing schema: aten::max_pool3d_with_indices.out(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=[0, 0, 0], int[3] dilation=[1, 1, 1], bool ceil_mode=False, *, Tensor(a!) out, Tensor(b!) indices) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:20.4345225Z processing existing schema: aten::max_pool2d_with_indices(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], int[2] dilation=[1, 1], bool ceil_mode=False) -> (Tensor, Tensor) 2022-05-18T03:33:20.4348944Z processing existing schema: aten::max_pool2d_with_indices.out(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], int[2] dilation=[1, 1], bool ceil_mode=False, *, Tensor(a!) out, Tensor(b!) 
indices) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:20.4351320Z processing existing schema: aten::max_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], int[2] dilation=[1, 1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:20.4353937Z processing existing schema: aten::max_pool1d_with_indices(Tensor self, int[1] kernel_size, int[1] stride=[], int[1] padding=[0], int[1] dilation=[1], bool ceil_mode=False) -> (Tensor, Tensor) 2022-05-18T03:33:20.4356433Z processing existing schema: aten::max_pool1d(Tensor self, int[1] kernel_size, int[1] stride=[], int[1] padding=[0], int[1] dilation=[1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:20.4357419Z processing existing schema: aten::max(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4359660Z processing existing schema: aten::max.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.4362085Z processing existing schema: aten::max.dim_max(Tensor self, int dim, bool keepdim=False, *, Tensor(a!) max, Tensor(b!) max_values) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.4363559Z processing existing schema: aten::max.names_dim(Tensor self, str dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.4365909Z processing existing schema: aten::max.names_dim_max(Tensor self, str dim, bool keepdim=False, *, Tensor(a!) max, Tensor(b!) max_values) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.4367051Z processing existing schema: aten::max.other(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4368958Z processing existing schema: aten::max.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4370375Z processing existing schema: aten::masked_select(Tensor self, Tensor mask) -> (Tensor) 2022-05-18T03:33:20.4371944Z processing existing schema: aten::masked_select.out(Tensor self, Tensor mask, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4373690Z processing existing schema: aten::masked_fill_.Scalar(Tensor(a!) self, Tensor mask, Scalar value) -> (Tensor(a!)) 2022-05-18T03:33:20.4375507Z processing existing schema: aten::masked_fill_.Tensor(Tensor(a!) self, Tensor mask, Tensor value) -> (Tensor(a!)) 2022-05-18T03:33:20.4377037Z processing existing schema: aten::masked_fill.Scalar(Tensor self, Tensor mask, Scalar value) -> (Tensor) 2022-05-18T03:33:20.4378579Z processing existing schema: aten::masked_fill.Tensor(Tensor self, Tensor mask, Tensor value) -> (Tensor) 2022-05-18T03:33:20.4380178Z processing existing schema: aten::logsumexp(Tensor self, int[1] dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.4381871Z processing existing schema: aten::logsumexp.names(Tensor self, str[1] dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.4384022Z processing existing schema: aten::logsumexp.names_out(Tensor self, str[1] dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4386202Z processing existing schema: aten::logsumexp.out(Tensor self, int[1] dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4387987Z processing existing schema: aten::_cummax_helper(Tensor self, Tensor(a!) values, Tensor(b!) indices, int dim) -> () 2022-05-18T03:33:20.4389625Z processing existing schema: aten::logical_xor_(Tensor(a!) 
self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.4391028Z processing existing schema: aten::logical_xor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4392898Z processing existing schema: aten::logical_xor.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4394430Z processing existing schema: aten::logical_or_(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.4396016Z processing existing schema: aten::logical_or(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4397618Z processing existing schema: aten::logical_or.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4399266Z processing existing schema: aten::logical_not_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4400513Z processing existing schema: aten::logical_not(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4402213Z processing existing schema: aten::logical_not.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4403837Z processing existing schema: aten::logical_and_(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.4404940Z processing existing schema: aten::logical_and(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4406643Z processing existing schema: aten::logical_and.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4409620Z processing existing schema: aten::_ctc_loss_backward(Tensor grad, Tensor log_probs, Tensor targets, int[] input_lengths, int[] target_lengths, Tensor neg_log_likelihood, Tensor log_alpha, int blank, bool zero_infinity=False) -> (Tensor) 2022-05-18T03:33:20.4410715Z processing existing schema: aten::logaddexp2(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4412351Z processing existing schema: aten::logaddexp2.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4414940Z processing existing schema: aten::_ctc_loss(Tensor log_probs, Tensor targets, int[] input_lengths, int[] target_lengths, int blank=0, bool zero_infinity=False) -> (Tensor, Tensor) 2022-05-18T03:33:20.4415939Z processing existing schema: aten::logaddexp(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4417799Z processing existing schema: aten::logaddexp.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4419301Z processing existing schema: aten::log_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4420626Z processing existing schema: aten::log2_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4422114Z processing existing schema: aten::log1p_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4423680Z processing existing schema: aten::_compute_linear_combination(Tensor input, Tensor coefficients) -> (Tensor) 2022-05-18T03:33:20.4425305Z processing existing schema: aten::_compute_linear_combination.out(Tensor input, Tensor coefficients, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4427069Z processing existing schema: aten::log10_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4428258Z processing existing schema: aten::lgamma_(Tensor(a!) 
self) -> (Tensor(a!)) 2022-05-18T03:33:20.4430036Z processing existing schema: aten::kthvalue(Tensor self, int k, int dim=-1, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.4431686Z processing existing schema: aten::kthvalue.dimname(Tensor self, int k, str dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.4434060Z processing existing schema: aten::kthvalue.dimname_out(Tensor self, int k, str dim, bool keepdim=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.4436522Z processing existing schema: aten::kthvalue.values(Tensor self, int k, int dim=-1, bool keepdim=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.4437405Z processing existing schema: aten::item(Tensor self) -> (Scalar) 2022-05-18T03:33:20.4438872Z processing existing schema: aten::isnan(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4439892Z processing existing schema: aten::isnan.float(float a) -> (bool) 2022-05-18T03:33:20.4441039Z processing existing schema: aten::isnan.complex(complex a) -> (bool) 2022-05-18T03:33:20.4442545Z processing existing schema: aten::isinf(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4443447Z processing existing schema: aten::isinf.float(float a) -> (bool) 2022-05-18T03:33:20.4444942Z processing existing schema: aten::isinf.complex(complex a) -> (bool) 2022-05-18T03:33:20.4445991Z processing existing schema: aten::isfinite(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4447356Z processing existing schema: aten::isfinite.float(float a) -> (bool) 2022-05-18T03:33:20.4448805Z processing existing schema: aten::isfinite.complex(complex a) -> (bool) 2022-05-18T03:33:20.4449791Z processing existing schema: aten::is_signed(Tensor self) -> (bool) 2022-05-18T03:33:20.4451191Z processing existing schema: aten::is_pinned(Tensor self, Device? device=None) -> (bool) 2022-05-18T03:33:20.4452174Z processing existing schema: aten::is_nonzero(Tensor self) -> (bool) 2022-05-18T03:33:20.4453768Z processing existing schema: aten::is_inference(Tensor self) -> (bool) 2022-05-18T03:33:20.4455179Z processing existing schema: aten::is_coalesced(Tensor self) -> (bool) 2022-05-18T03:33:20.4457154Z processing existing schema: aten::index_fill_.Dimname_Scalar(Tensor(a!) self, str dim, Tensor index, Scalar value) -> (Tensor(a!)) 2022-05-18T03:33:20.4459054Z processing existing schema: aten::index_fill_.Dimname_Tensor(Tensor(a!) self, str dim, Tensor index, Tensor value) -> (Tensor(a!)) 2022-05-18T03:33:20.4460943Z processing existing schema: aten::index_fill_.int_Scalar(Tensor(a!) self, int dim, Tensor index, Scalar value) -> (Tensor(a!)) 2022-05-18T03:33:20.4462779Z processing existing schema: aten::index_fill_.int_Tensor(Tensor(a!) 
self, int dim, Tensor index, Tensor value) -> (Tensor(a!)) 2022-05-18T03:33:20.4464689Z processing existing schema: aten::index_fill.Dimname_Scalar(Tensor self, str dim, Tensor index, Scalar value) -> (Tensor) 2022-05-18T03:33:20.4466218Z processing existing schema: aten::index_fill.Dimname_Tensor(Tensor self, str dim, Tensor index, Tensor value) -> (Tensor) 2022-05-18T03:33:20.4467764Z processing existing schema: aten::index_fill.int_Scalar(Tensor self, int dim, Tensor index, Scalar value) -> (Tensor) 2022-05-18T03:33:20.4469343Z processing existing schema: aten::index_fill.int_Tensor(Tensor self, int dim, Tensor index, Tensor value) -> (Tensor) 2022-05-18T03:33:20.4470812Z processing existing schema: aten::igammac(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4472563Z processing existing schema: aten::igammac.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4474327Z processing existing schema: aten::igamma_(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.4475755Z processing existing schema: aten::igamma(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4477729Z processing existing schema: aten::igamma.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4479277Z processing existing schema: aten::i0_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4480765Z processing existing schema: aten::i0(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4482377Z processing existing schema: aten::i0.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4484041Z processing existing schema: aten::hypot_(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.4485407Z processing existing schema: aten::hypot(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4487293Z processing existing schema: aten::hypot.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4489169Z processing existing schema: aten::frac_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4490428Z processing existing schema: aten::floor_divide_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.4492008Z processing existing schema: aten::floor_divide_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.4493329Z processing existing schema: aten::floor_divide(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4494466Z processing existing schema: aten::floor_divide.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.4496308Z processing existing schema: aten::floor_divide.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4497733Z processing existing schema: aten::floor_(Tensor(a!) 
self) -> (Tensor(a!)) 2022-05-18T03:33:20.4499751Z processing existing schema: aten::flatten.DimnameList(Tensor(a) self, str[] dims, str out_dim) -> (Tensor(a)) 2022-05-18T03:33:20.4501497Z processing existing schema: aten::flatten.named_out_dim(Tensor(a) self, int start_dim, int end_dim, str out_dim) -> (Tensor(a)) 2022-05-18T03:33:20.4503215Z processing existing schema: aten::flatten.using_ints(Tensor(a) self, int start_dim=0, int end_dim=-1) -> (Tensor(a)) 2022-05-18T03:33:20.4505073Z processing existing schema: aten::flatten.using_names(Tensor(a) self, str start_dim, str end_dim, str out_dim) -> (Tensor(a)) 2022-05-18T03:33:20.4506850Z processing existing schema: quantized::conv_transpose3d_padding(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:20.4508072Z processing existing schema: aten::cos_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4509626Z processing existing schema: aten::expm1_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4510794Z processing existing schema: prim::is_ort(Tensor a) -> (bool) 2022-05-18T03:33:20.4512605Z processing existing schema: quantized::conv_transpose2d_dilation(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:20.4514262Z processing existing schema: aten::copy_sparse_to_sparse_(Tensor(a!) self, Tensor src, bool non_blocking=False) -> (Tensor(a!)) 2022-05-18T03:33:20.4515625Z processing existing schema: aten::exp_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4516771Z processing existing schema: prim::is_mps(Tensor a) -> (bool) 2022-05-18T03:33:20.4518651Z processing existing schema: quantized::conv_transpose2d_unpack(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (Tensor unpacked_weights, Tensor? B_origin) 2022-05-18T03:33:20.4521798Z processing existing schema: aten::convolution_overrideable(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups) -> (Tensor) 2022-05-18T03:33:20.4523112Z processing existing schema: aten::erfinv_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4524743Z processing existing schema: aten::join(str self, str[] values) -> (str) 2022-05-18T03:33:20.4526417Z processing existing schema: quantized::conv3d_transpose(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int) 2022-05-18T03:33:20.4530154Z processing existing schema: aten::convolution_backward(Tensor grad_output, Tensor input, Tensor weight, int[]? bias_sizes, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool[3] output_mask) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.4531196Z processing existing schema: aten::erfc_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4532775Z processing existing schema: aten::rpartition(str self, str separator) -> (str, str, str) 2022-05-18T03:33:20.4534177Z processing existing schema: quantized::conv3d_groups(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int) 2022-05-18T03:33:20.4537262Z processing existing schema: aten::convolution(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups) -> (Tensor) 2022-05-18T03:33:20.4538209Z processing existing schema: aten::erfc(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4539907Z processing existing schema: aten::erfc.out(Tensor self, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.4541039Z processing existing schema: aten::erfc.int(int a) -> (float) 2022-05-18T03:33:20.4542510Z processing existing schema: aten::erfc.float(float a) -> (float) 2022-05-18T03:33:20.4543582Z processing existing schema: aten::erfc.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.4545290Z processing existing schema: aten::partition(str self, str separator) -> (str, str, str) 2022-05-18T03:33:20.4547033Z processing existing schema: quantized::conv3d_dilation(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:20.4550379Z processing existing schema: aten::conv_transpose3d.input(Tensor input, Tensor weight, Tensor? bias=None, int[3] stride=[1, 1, 1], int[3] padding=[0, 0, 0], int[3] output_padding=[0, 0, 0], int groups=1, int[3] dilation=[1, 1, 1]) -> (Tensor) 2022-05-18T03:33:20.4551579Z processing existing schema: aten::erf_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4553142Z processing existing schema: aten::replace(str self, str old, str new, int max=-1) -> (str) 2022-05-18T03:33:20.4554903Z processing existing schema: quantized::conv3d_output_padding(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:20.4558012Z processing existing schema: aten::conv_transpose2d.input(Tensor input, Tensor weight, Tensor? bias=None, int[2] stride=[1, 1], int[2] padding=[0, 0], int[2] output_padding=[0, 0], int groups=1, int[2] dilation=[1, 1]) -> (Tensor) 2022-05-18T03:33:20.4558809Z processing existing schema: aten::erf(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4560912Z processing existing schema: aten::erf.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4561942Z processing existing schema: aten::erf.int(int a) -> (float) 2022-05-18T03:33:20.4562984Z processing existing schema: aten::erf.float(float a) -> (float) 2022-05-18T03:33:20.4564493Z processing existing schema: aten::erf.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.4565895Z processing existing schema: aten::rstrip(str self, str chars=" \n\t\f\v") -> (str) 2022-05-18T03:33:20.4567746Z processing existing schema: quantized::conv3d_padding(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:20.4570681Z processing existing schema: aten::conv_transpose1d(Tensor input, Tensor weight, Tensor? bias=None, int[1] stride=[1], int[1] padding=[0], int[1] output_padding=[0], int groups=1, int[1] dilation=[1]) -> (Tensor) 2022-05-18T03:33:20.4571912Z processing existing schema: aten::equal(Tensor self, Tensor other) -> (bool) 2022-05-18T03:33:20.4573450Z processing existing schema: aten::lstrip(str self, str chars=" \n\t\f\v") -> (str) 2022-05-18T03:33:20.4575543Z processing existing schema: quantized::group_norm(Tensor input, int num_groups, Tensor? weight, Tensor? bias, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.4576811Z processing existing schema: aten::clone(Tensor self, *, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:20.4578727Z processing existing schema: aten::dropout_(Tensor(a!) 
self, float p, bool train) -> (Tensor(a!)) 2022-05-18T03:33:20.4581083Z processing existing schema: aten::items.str(Dict(str, t) self) -> ((str, t)[]) 2022-05-18T03:33:20.4583346Z processing existing schema: aten::items.int(Dict(int, t) self) -> ((int, t)[]) 2022-05-18T03:33:20.4585907Z processing existing schema: aten::items.bool(Dict(bool, t) self) -> ((bool, t)[]) 2022-05-18T03:33:20.4588317Z processing existing schema: aten::items.float(Dict(float, t) self) -> ((float, t)[]) 2022-05-18T03:33:20.4590686Z processing existing schema: aten::items.complex(Dict(complex, t) self) -> ((complex, t)[]) 2022-05-18T03:33:20.4593030Z processing existing schema: aten::items.Tensor(Dict(Tensor, t) self) -> ((Tensor, t)[]) 2022-05-18T03:33:20.4595362Z processing existing schema: quantized::layer_norm(Tensor input, int[] normalized_shape, Tensor? weight, Tensor? bias, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.4597440Z processing existing schema: aten::clip_(Tensor(a!) self, Scalar? min=None, Scalar? max=None) -> (Tensor(a!)) 2022-05-18T03:33:20.4599689Z processing existing schema: aten::clip_.Tensor(Tensor(a!) self, Tensor? min=None, Tensor? max=None) -> (Tensor(a!)) 2022-05-18T03:33:20.4600823Z processing existing schema: aten::dropout(Tensor input, float p, bool train) -> (Tensor) 2022-05-18T03:33:20.4603227Z processing existing schema: aten::update.str(Dict(str, t)(a!) self, Dict(str, t)(a!) to_add) -> () 2022-05-18T03:33:20.4605582Z processing existing schema: aten::update.int(Dict(int, t)(a!) self, Dict(int, t)(a!) to_add) -> () 2022-05-18T03:33:20.4607861Z processing existing schema: aten::update.bool(Dict(bool, t)(a!) self, Dict(bool, t)(a!) to_add) -> () 2022-05-18T03:33:20.4610194Z processing existing schema: aten::update.float(Dict(float, t)(a!) self, Dict(float, t)(a!) to_add) -> () 2022-05-18T03:33:20.4612779Z processing existing schema: aten::update.complex(Dict(complex, t)(a!) self, Dict(complex, t)(a!) to_add) -> () 2022-05-18T03:33:20.4615191Z processing existing schema: aten::update.Tensor(Dict(Tensor, t)(a!) self, Dict(Tensor, t)(a!) to_add) -> () 2022-05-18T03:33:20.4616698Z processing existing schema: quantized::mul_scalar(Tensor qa, Scalar b) -> (Tensor qc) 2022-05-18T03:33:20.4618284Z processing existing schema: quantized::mul_scalar.Tensor(Tensor qa, Tensor b) -> (Tensor qc) 2022-05-18T03:33:20.4619848Z processing existing schema: aten::clamp_max_(Tensor(a!) self, Scalar max) -> (Tensor(a!)) 2022-05-18T03:33:20.4621555Z processing existing schema: aten::clamp_max_.Tensor(Tensor(a!) self, Tensor max) -> (Tensor(a!)) 2022-05-18T03:33:20.4623341Z processing existing schema: aten::div_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.4625090Z processing existing schema: aten::div_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.4627104Z processing existing schema: aten::div_.Tensor_mode(Tensor(a!) self, Tensor other, *, str? rounding_mode) -> (Tensor(a!)) 2022-05-18T03:33:20.4629211Z processing existing schema: aten::div_.Scalar_mode(Tensor(a!) self, Scalar other, *, str? 
rounding_mode) -> (Tensor(a!)) 2022-05-18T03:33:20.4630221Z processing existing schema: aten::upper(str self) -> (str) 2022-05-18T03:33:20.4631905Z processing existing schema: quantized::hardswish(Tensor input, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.4633428Z processing existing schema: aten::cat(Tensor[] tensors, int dim=0) -> (Tensor) 2022-05-18T03:33:20.4635135Z processing existing schema: aten::cat.names(Tensor[] tensors, str dim) -> (Tensor) 2022-05-18T03:33:20.4637476Z processing existing schema: aten::cat.names_out(Tensor[] tensors, str dim, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4639904Z processing existing schema: aten::cat.out(Tensor[] tensors, int dim=0, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4641099Z processing existing schema: aten::deg2rad(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4642959Z processing existing schema: aten::deg2rad.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4643998Z schema: static_runtime::embedding_bag(Tensor weight, Tensor indices, Tensor offsets, bool scale_grad_by_freq=False, int mode=0, bool sparse=False, Tensor? per_sample_weights=None, bool include_last_offset=False) -> (Tensor, Tensor, Tensor) found on allowlist, skipping 2022-05-18T03:33:20.4645073Z schema: static_runtime::embedding_bag.padding_idx(Tensor weight, Tensor indices, Tensor offsets, bool scale_grad_by_freq, int mode, bool sparse, Tensor? per_sample_weights, bool include_last_offset, int? padding_idx) -> (Tensor, Tensor, Tensor) found on allowlist, skipping 2022-05-18T03:33:20.4646364Z processing existing schema: quantized::conv_transpose2d_dynamic(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, bool reduce_range=False) -> (Tensor) 2022-05-18T03:33:20.4647311Z processing existing schema: aten::batch_norm_stats(Tensor input, float eps) -> (Tensor, Tensor) 2022-05-18T03:33:20.4648470Z processing existing schema: aten::cosh_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4650350Z processing existing schema: quantized::conv_transpose3d_dilation(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:20.4651819Z processing existing schema: quantized::conv3d_dynamic(Tensor qx, __torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weight, bool reduce_range=False) -> (Tensor) 2022-05-18T03:33:20.4653848Z processing existing schema: aten::batch_norm_gather_stats(Tensor input, Tensor mean, Tensor invstd, Tensor? running_mean, Tensor? running_var, float momentum, float eps, int count) -> (Tensor, Tensor) 2022-05-18T03:33:20.4655056Z processing existing schema: sparse::qlinear_relu_dynamic(Tensor X, __torch__.torch.classes.sparse.LinearPackedParamsBase W_prepack) -> (Tensor Y) 2022-05-18T03:33:20.4656230Z processing existing schema: aten::arcsin_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4658016Z processing existing schema: aten::_upsample_nearest_exact1d(Tensor self, int[1] output_size, float? scales=None) -> (Tensor) 2022-05-18T03:33:20.4660142Z processing existing schema: aten::_upsample_nearest_exact1d.vec(Tensor input, int[]? output_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.4662360Z processing existing schema: aten::_upsample_nearest_exact1d.out(Tensor self, int[1] output_size, float? scales=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4663848Z processing existing schema: aten::multinomial(Tensor self, int num_samples, bool replacement=False, *, Generator? 
generator=None) -> (Tensor) 2022-05-18T03:33:20.4666322Z processing existing schema: aten::multinomial.out(Tensor self, int num_samples, bool replacement=False, *, Generator? generator=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4667038Z processing existing schema: aten::is_floating_point(Tensor self) -> (bool) 2022-05-18T03:33:20.4669694Z processing existing schema: aten::full_like(Tensor self, Scalar fill_value, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:20.4670498Z processing existing schema: aten::reciprocal(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4672350Z processing existing schema: aten::reciprocal.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4674577Z processing existing schema: aten::resize_(Tensor(a!) self, int[] size, *, int? memory_format=None) -> (Tensor(a!)) 2022-05-18T03:33:20.4676717Z processing existing schema: quantized::mul(Tensor qa, Tensor qb, float scale, int zero_point) -> (Tensor qc) 2022-05-18T03:33:20.4678055Z processing existing schema: quantized::mul.out(Tensor qa, Tensor qb, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:20.4679643Z processing existing schema: quantized::mul.Scalar(Tensor qa, Scalar b) -> (Tensor qc) 2022-05-18T03:33:20.4681358Z processing existing schema: quantized::mul.Scalar2(Scalar b, Tensor qa) -> (Tensor qc) 2022-05-18T03:33:20.4683070Z processing existing schema: quantized::mul.Scalar_out(Tensor qa, Scalar b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:20.4684849Z processing existing schema: aten::chunk(Tensor(a -> *) self, int chunks, int dim=0) -> (Tensor[]) 2022-05-18T03:33:20.4686047Z processing existing schema: aten::digamma(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4687686Z processing existing schema: aten::digamma.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4688820Z processing existing schema: aten::isalnum(str self) -> (bool) 2022-05-18T03:33:20.4690533Z processing existing schema: aten::multilabel_margin_loss_forward(Tensor self, Tensor target, int reduction) -> (Tensor output, Tensor is_target) 2022-05-18T03:33:20.4692756Z processing existing schema: aten::multilabel_margin_loss_forward.output(Tensor self, Tensor target, int reduction, *, Tensor(a!) output, Tensor(b!) is_target) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:20.4694572Z processing existing schema: aten::hsplit.int(Tensor(a -> *) self, int sections) -> (Tensor[]) 2022-05-18T03:33:20.4696886Z processing existing schema: aten::hsplit.array(Tensor(a -> *) self, int[] indices) -> (Tensor[]) 2022-05-18T03:33:20.4698166Z processing existing schema: aten::absolute(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4699784Z processing existing schema: aten::absolute.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4703064Z processing existing schema: _quantized::conv_transpose3d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] output_padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv3dPackedParamsBase) 2022-05-18T03:33:20.4704081Z processing existing schema: aten::bitwise_left_shift.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4706028Z processing existing schema: aten::bitwise_left_shift.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.4707237Z processing existing schema: aten::bitwise_left_shift.Tensor_Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.4709070Z processing existing schema: aten::bitwise_left_shift.Tensor_Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4710342Z processing existing schema: aten::bitwise_left_shift.Scalar_Tensor(Scalar self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4711940Z processing existing schema: aten::unsqueeze_(Tensor(a!) self, int dim) -> (Tensor(a!)) 2022-05-18T03:33:20.4713799Z processing existing schema: quantized::linear(Tensor X, __torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack, float Y_scale_i, int Y_zero_point_i) -> (Tensor Y) 2022-05-18T03:33:20.4715110Z processing existing schema: aten::deg2rad_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4715623Z schema: static_runtime::clamp_nan_to_num(Tensor input, Scalar? min, Scalar? max, float? nan, float? posinf, float? posinf) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.4717369Z processing existing schema: aten::vander(Tensor x, int? N=None, bool increasing=False) -> (Tensor) 2022-05-18T03:33:20.4718647Z processing existing schema: aten::sgn(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4721170Z processing existing schema: aten::sgn.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4723670Z processing existing schema: aten::addmm_(Tensor(a!) self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:20.4724883Z processing existing schema: prim::MKLDNNHardSwish_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4726895Z processing existing schema: aten::set_data(Tensor(a!) self, Tensor new_data) -> () 2022-05-18T03:33:20.4729012Z processing existing schema: aten::addmm(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.4731521Z processing existing schema: aten::addmm.out(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4733744Z processing existing schema: aten::_ncf_view(Tensor(a) self, int[] input_shape, int normalized_ndim) -> (Tensor(a)) 2022-05-18T03:33:20.4736159Z processing existing schema: aten::index_put(Tensor self, Tensor?[] indices, Tensor values, bool accumulate=False) -> (Tensor) 2022-05-18T03:33:20.4738419Z processing existing schema: aten::index_put.hacked_twin(Tensor self, Tensor[] indices, Tensor values, bool accumulate=False) -> (Tensor) 2022-05-18T03:33:20.4740204Z processing existing schema: aten::repeat_interleave.Tensor(Tensor repeats, *, int? output_size=None) -> (Tensor) 2022-05-18T03:33:20.4742324Z processing existing schema: aten::repeat_interleave.self_Tensor(Tensor self, Tensor repeats, int? dim=None, *, int? output_size=None) -> (Tensor) 2022-05-18T03:33:20.4744333Z processing existing schema: aten::repeat_interleave.self_int(Tensor self, int repeats, int? dim=None, *, int? output_size=None) -> (Tensor) 2022-05-18T03:33:20.4745386Z processing existing schema: aten::log(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4748193Z processing existing schema: aten::log.out(Tensor self, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.4748838Z processing existing schema: aten::log.int(int a) -> (float) 2022-05-18T03:33:20.4750958Z processing existing schema: aten::log.float(float a) -> (float) 2022-05-18T03:33:20.4751938Z processing existing schema: aten::log.complex(complex a) -> (complex) 2022-05-18T03:33:20.4753548Z processing existing schema: aten::log.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.4754940Z processing existing schema: aten::log.int_int(int a, int b) -> (float) 2022-05-18T03:33:20.4756318Z processing existing schema: aten::log.float_float(float a, float b) -> (float) 2022-05-18T03:33:20.4758542Z processing existing schema: aten::log.complex_complex(complex a, complex b) -> (complex) 2022-05-18T03:33:20.4759394Z processing existing schema: aten::log.int_float(int a, float b) -> (float) 2022-05-18T03:33:20.4760619Z processing existing schema: aten::log.float_int(float a, int b) -> (float) 2022-05-18T03:33:20.4762188Z processing existing schema: aten::log.int_complex(int a, complex b) -> (complex) 2022-05-18T03:33:20.4763560Z processing existing schema: aten::log.complex_int(complex a, int b) -> (complex) 2022-05-18T03:33:20.4765232Z processing existing schema: aten::log.float_complex(float a, complex b) -> (complex) 2022-05-18T03:33:20.4766819Z processing existing schema: aten::log.complex_float(complex a, float b) -> (complex) 2022-05-18T03:33:20.4768414Z processing existing schema: aten::log.Scalar_Scalar(Scalar a, Scalar b) -> (float) 2022-05-18T03:33:20.4770606Z processing existing schema: quantized::instance_norm(Tensor input, Tensor? weight, Tensor? bias, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.4771907Z processing existing schema: aten::coalesce(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.4774172Z processing existing schema: aten::dsplit.int(Tensor(a -> *) self, int sections) -> (Tensor[]) 2022-05-18T03:33:20.4776420Z processing existing schema: aten::dsplit.array(Tensor(a -> *) self, int[] indices) -> (Tensor[]) 2022-05-18T03:33:20.4778064Z processing existing schema: aten::strip(str self, str chars=" \n\t\f\v") -> (str) 2022-05-18T03:33:20.4779745Z processing existing schema: aten::sigmoid_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4781836Z processing existing schema: aten::addr(Tensor self, Tensor vec1, Tensor vec2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.4783897Z processing existing schema: aten::addr.out(Tensor self, Tensor vec1, Tensor vec2, *, Scalar beta=1, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4785261Z processing existing schema: prim::MKLDNNClamp_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4786647Z processing existing schema: aten::ne.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4788000Z processing existing schema: aten::ne.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.4790022Z processing existing schema: aten::ne.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4791844Z processing existing schema: aten::ne.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.4793716Z processing existing schema: aten::ne.int_list(int[] a, int[] b) -> (bool) 2022-05-18T03:33:20.4795064Z processing existing schema: aten::ne.device(Device a, Device b) -> (bool) 2022-05-18T03:33:20.4796407Z processing existing schema: aten::ne.bool(bool a, bool b) -> (bool) 2022-05-18T03:33:20.4797862Z processing existing schema: aten::ne.enum(AnyEnumType a, AnyEnumType b) -> (bool) 2022-05-18T03:33:20.4799313Z processing existing schema: aten::ne.int(int a, int b) -> (bool) 2022-05-18T03:33:20.4800961Z processing existing schema: aten::ne.complex(complex a, complex b) -> (bool) 2022-05-18T03:33:20.4802248Z processing existing schema: aten::ne.float(float a, float b) -> (bool) 2022-05-18T03:33:20.4803918Z processing existing schema: aten::ne.int_float(int a, float b) -> (bool) 2022-05-18T03:33:20.4805150Z processing existing schema: aten::ne.float_int(float a, int b) -> (bool) 2022-05-18T03:33:20.4806719Z processing existing schema: aten::ne.float_complex(float a, complex b) -> (bool) 2022-05-18T03:33:20.4808130Z processing existing schema: aten::ne.complex_float(complex a, float b) -> (bool) 2022-05-18T03:33:20.4809485Z processing existing schema: aten::ne(Scalar a, Scalar b) -> (bool) 2022-05-18T03:33:20.4811037Z processing existing schema: aten::ne.str(str a, str b) -> (bool) 2022-05-18T03:33:20.4813263Z processing existing schema: aten::ne.float_list(float[] a, float[] b) -> (bool) 2022-05-18T03:33:20.4815437Z processing existing schema: aten::ne.Tensor_list(Tensor[] a, Tensor[] b) -> (bool) 2022-05-18T03:33:20.4817535Z processing existing schema: aten::ne.bool_list(bool[] a, bool[] b) -> (bool) 2022-05-18T03:33:20.4819615Z processing existing schema: aten::ne.str_list(str[] a, str[] b) -> (bool) 2022-05-18T03:33:20.4821878Z processing existing schema: quantized::conv2d_output_padding(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:20.4824689Z processing existing schema: aten::conv2d(Tensor input, Tensor weight, Tensor? bias=None, int[2] stride=[1, 1], int[2] padding=[0, 0], int[2] dilation=[1, 1], int groups=1) -> (Tensor) 2022-05-18T03:33:20.4827520Z processing existing schema: aten::conv2d.padding(Tensor input, Tensor weight, Tensor? bias=None, int[2] stride=[1, 1], str padding="valid", int[2] dilation=[1, 1], int groups=1) -> (Tensor) 2022-05-18T03:33:20.4829737Z processing existing schema: aten::empty_like(Tensor self, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:20.4830734Z processing existing schema: aten::istitle(str self) -> (bool) 2022-05-18T03:33:20.4833193Z processing existing schema: quantized::batch_norm2d(Tensor qx, Tensor? weight, Tensor? bias, Tensor mean, Tensor var, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.4834413Z processing existing schema: aten::asin_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4837797Z processing existing schema: aten::_embedding_bag_forward_only(Tensor weight, Tensor indices, Tensor offsets, bool scale_grad_by_freq=False, int mode=0, bool sparse=False, Tensor? 
per_sample_weights=None, bool include_last_offset=False, int padding_idx=-1) -> (Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:20.4838789Z processing existing schema: aten::lu_solve(Tensor self, Tensor LU_data, Tensor LU_pivots) -> (Tensor) 2022-05-18T03:33:20.4840833Z processing existing schema: aten::lu_solve.out(Tensor self, Tensor LU_data, Tensor LU_pivots, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4842093Z processing existing schema: aten::ge.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4843363Z processing existing schema: aten::ge.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.4845544Z processing existing schema: aten::ge.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4847358Z processing existing schema: aten::ge.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4848417Z processing existing schema: aten::ge.int(int a, int b) -> (bool) 2022-05-18T03:33:20.4849865Z processing existing schema: aten::ge.float(float a, float b) -> (bool) 2022-05-18T03:33:20.4851021Z processing existing schema: aten::ge.int_float(int a, float b) -> (bool) 2022-05-18T03:33:20.4852960Z processing existing schema: aten::ge.float_int(float a, int b) -> (bool) 2022-05-18T03:33:20.4853908Z processing existing schema: aten::ge(Scalar a, Scalar b) -> (bool) 2022-05-18T03:33:20.4855326Z processing existing schema: aten::ge.str(str a, str b) -> (bool) 2022-05-18T03:33:20.4856997Z processing existing schema: aten::reflection_pad2d(Tensor self, int[4] padding) -> (Tensor) 2022-05-18T03:33:20.4858774Z processing existing schema: aten::reflection_pad2d.out(Tensor self, int[4] padding, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4860307Z processing existing schema: sparse::qlinear(Tensor X, __torch__.torch.classes.sparse.LinearPackedParamsBase W_prepack, float Y_scale_i, int Y_zero_point_i) -> (Tensor Y) 2022-05-18T03:33:20.4861328Z processing existing schema: aten::arccosh(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4863233Z processing existing schema: aten::arccosh.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4864841Z processing existing schema: aten::tan_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4866372Z processing existing schema: aten::clamp(Tensor self, Scalar? min=None, Scalar? max=None) -> (Tensor) 2022-05-18T03:33:20.4867950Z processing existing schema: aten::clamp.Tensor(Tensor self, Tensor? min=None, Tensor? max=None) -> (Tensor) 2022-05-18T03:33:20.4870086Z processing existing schema: aten::clamp.out(Tensor self, Scalar? min=None, Scalar? max=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4872127Z processing existing schema: aten::clamp.Tensor_out(Tensor self, Tensor? min=None, Tensor? max=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4873802Z processing existing schema: quantized::mul_relu(Tensor qa, Tensor qb, float scale, int zero_point) -> (Tensor qc) 2022-05-18T03:33:20.4875415Z processing existing schema: quantized::mul_relu.out(Tensor qa, Tensor qb, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:20.4876676Z processing existing schema: quantized::mul_relu.Scalar(Tensor qa, Scalar b) -> (Tensor qc) 2022-05-18T03:33:20.4878184Z processing existing schema: quantized::mul_relu.Scalar2(Scalar b, Tensor qa) -> (Tensor qc) 2022-05-18T03:33:20.4880341Z processing existing schema: quantized::mul_relu.Scalar_out(Tensor qa, Scalar b, Tensor(a!) out) -> (Tensor(a!) 
out) 2022-05-18T03:33:20.4882255Z processing existing schema: quantized::batch_norm3d(Tensor qx, Tensor? weight, Tensor? bias, Tensor mean, Tensor var, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.4883465Z processing existing schema: aten::asinh_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.4884911Z processing existing schema: aten::mH(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.4886476Z processing existing schema: aten::mH.a(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.4888388Z processing existing schema: _quantized::linear_prepack(Tensor W, Tensor? B=None) -> (__torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack) 2022-05-18T03:33:20.4889261Z processing existing schema: aten::cholesky(Tensor self, bool upper=False) -> (Tensor) 2022-05-18T03:33:20.4891299Z processing existing schema: aten::cholesky.out(Tensor self, bool upper=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4892253Z schema: aten::diagonal_backward(Tensor grad_output, int[] input_sizes, int offset, int dim1, int dim2) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.4893039Z processing existing schema: prim::is_xpu(Tensor a) -> (bool) 2022-05-18T03:33:20.4895303Z processing existing schema: aten::multi_margin_loss(Tensor self, Tensor target, Scalar p=1, Scalar margin=1, Tensor? weight=None, int reduction=1) -> (Tensor) 2022-05-18T03:33:20.4897781Z processing existing schema: aten::multi_margin_loss.out(Tensor self, Tensor target, Scalar p=1, Scalar margin=1, Tensor? weight=None, int reduction=1, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4899554Z processing existing schema: aten::aminmax(Tensor self, *, int? dim=None, bool keepdim=False) -> (Tensor min, Tensor max) 2022-05-18T03:33:20.4902655Z processing existing schema: aten::aminmax.out(Tensor self, *, int? dim=None, bool keepdim=False, Tensor(a!) min, Tensor(b!) max) -> (Tensor(a!) min, Tensor(b!) max) 2022-05-18T03:33:20.4904313Z processing existing schema: quantized::make_quantized_cell_params(Tensor w_ih, Tensor w_hh, Tensor b_ih, Tensor b_hh) -> (__torch__.torch.classes.rnn.CellParamsBase) 2022-05-18T03:33:20.4905389Z schema: aten::slice_backward(Tensor grad_output, int[] input_sizes, int dim, int start, int end, int step) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.4906428Z processing existing schema: aten::addcmul(Tensor self, Tensor tensor1, Tensor tensor2, *, Scalar value=1) -> (Tensor) 2022-05-18T03:33:20.4908609Z processing existing schema: aten::addcmul.out(Tensor self, Tensor tensor1, Tensor tensor2, *, Scalar value=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4909652Z processing existing schema: aten::mm(Tensor self, Tensor mat2) -> (Tensor) 2022-05-18T03:33:20.4911689Z processing existing schema: aten::mm.out(Tensor self, Tensor mat2, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4912997Z processing existing schema: aten::linalg_tensorinv(Tensor self, int ind=2) -> (Tensor) 2022-05-18T03:33:20.4915093Z processing existing schema: aten::linalg_tensorinv.out(Tensor self, int ind=2, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4916932Z processing existing schema: quantized::mul_scalar_relu_out(Tensor qa, Scalar b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:20.4918847Z processing existing schema: quantized::mul_scalar_relu_out.Tensor(Tensor qa, Tensor b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:20.4920353Z processing existing schema: aten::clip(Tensor self, Scalar? min=None, Scalar? 
max=None) -> (Tensor) 2022-05-18T03:33:20.4922852Z processing existing schema: aten::clip.out(Tensor self, Scalar? min=None, Scalar? max=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4924182Z processing existing schema: aten::clip.Tensor(Tensor self, Tensor? min=None, Tensor? max=None) -> (Tensor) 2022-05-18T03:33:20.4926433Z processing existing schema: aten::clip.Tensor_out(Tensor self, Tensor? min=None, Tensor? max=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4927596Z processing existing schema: aten::dot(Tensor self, Tensor tensor) -> (Tensor) 2022-05-18T03:33:20.4929634Z processing existing schema: aten::dot.out(Tensor self, Tensor tensor, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4931482Z processing existing schema: aten::popitem.str(Dict(str, t)(a!) self) -> ((str, t)) 2022-05-18T03:33:20.4933549Z processing existing schema: aten::popitem.int(Dict(int, t)(a!) self) -> ((int, t)) 2022-05-18T03:33:20.4935644Z processing existing schema: aten::popitem.bool(Dict(bool, t)(a!) self) -> ((bool, t)) 2022-05-18T03:33:20.4937645Z processing existing schema: aten::popitem.float(Dict(float, t)(a!) self) -> ((float, t)) 2022-05-18T03:33:20.4939724Z processing existing schema: aten::popitem.complex(Dict(complex, t)(a!) self) -> ((complex, t)) 2022-05-18T03:33:20.4941855Z processing existing schema: aten::popitem.Tensor(Dict(Tensor, t)(a!) self) -> ((Tensor, t)) 2022-05-18T03:33:20.4943277Z processing existing schema: aten::resolve_neg(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.4945332Z processing existing schema: aten::_upsample_nearest_exact3d(Tensor self, int[3] output_size, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:20.4947340Z processing existing schema: aten::_upsample_nearest_exact3d.vec(Tensor input, int[]? output_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.4949784Z processing existing schema: aten::_upsample_nearest_exact3d.out(Tensor self, int[3] output_size, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4950977Z processing existing schema: aten::resolve_conj(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.4952566Z processing existing schema: aten::linear(Tensor input, Tensor weight, Tensor? bias=None) -> (Tensor) 2022-05-18T03:33:20.4954546Z processing existing schema: aten::linear.out(Tensor input, Tensor weight, Tensor? bias=None, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.4957446Z processing existing schema: aten::quantized_gru.input(Tensor input, Tensor hx, __torch__.torch.classes.rnn.CellParamsBase[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor) 2022-05-18T03:33:20.4960195Z processing existing schema: aten::quantized_gru.data(Tensor data, Tensor batch_sizes, Tensor hx, __torch__.torch.classes.rnn.CellParamsBase[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional) -> (Tensor, Tensor) 2022-05-18T03:33:20.4962341Z processing existing schema: aten::quantized_gru.input_legacy(Tensor input, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor) 2022-05-18T03:33:20.4964534Z processing existing schema: aten::quantized_gru.data_legacy(Tensor data, Tensor batch_sizes, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional) -> (Tensor, Tensor) 2022-05-18T03:33:20.4966171Z processing existing schema: aten::alpha_dropout_(Tensor(a!) self, float p, bool train) -> (Tensor(a!)) 2022-05-18T03:33:20.4967555Z processing existing schema: aten::swapdims(Tensor(a) self, int dim0, int dim1) -> (Tensor(a)) 2022-05-18T03:33:20.4969688Z processing existing schema: quantized::quantized_gru_cell_dynamic(Tensor input, Tensor hx, __torch__.torch.classes.quantized.LinearPackedParamsBase w_ih, __torch__.torch.classes.quantized.LinearPackedParamsBase w_hh, Tensor b_ih, Tensor b_hh) -> (Tensor) 2022-05-18T03:33:20.4970423Z processing existing schema: aten::any(Tensor self) -> (Tensor) 2022-05-18T03:33:20.4972081Z processing existing schema: aten::any.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.4973954Z processing existing schema: aten::any.out(Tensor self, int dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4975865Z processing existing schema: aten::any.all_out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4977191Z processing existing schema: aten::any.dimname(Tensor self, str dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.4979314Z processing existing schema: aten::any.dimname_out(Tensor self, str dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4980831Z processing existing schema: aten::any.str(str[] self) -> (bool) 2022-05-18T03:33:20.4982414Z processing existing schema: aten::any.int(int[] self) -> (bool) 2022-05-18T03:33:20.4984045Z processing existing schema: aten::any.float(float[] self) -> (bool) 2022-05-18T03:33:20.4985798Z processing existing schema: aten::any.bool(bool[] self) -> (bool) 2022-05-18T03:33:20.4987906Z processing existing schema: quantized::batch_norm1d(Tensor qx, Tensor? weight, Tensor? bias, Tensor mean, Tensor var, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.4990018Z processing existing schema: aten::as_strided_copy(Tensor self, int[] size, int[] stride, int? storage_offset=None) -> (Tensor) 2022-05-18T03:33:20.4991409Z processing existing schema: aten::lt.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.4992878Z processing existing schema: aten::lt.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.4994635Z processing existing schema: aten::lt.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.4996375Z processing existing schema: aten::lt.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.4998323Z processing existing schema: aten::lt.int(int a, int b) -> (bool) 2022-05-18T03:33:20.4998903Z processing existing schema: aten::lt.float(float a, float b) -> (bool) 2022-05-18T03:33:20.5000641Z processing existing schema: aten::lt.int_float(int a, float b) -> (bool) 2022-05-18T03:33:20.5002061Z processing existing schema: aten::lt.float_int(float a, int b) -> (bool) 2022-05-18T03:33:20.5003391Z processing existing schema: aten::lt(Scalar a, Scalar b) -> (bool) 2022-05-18T03:33:20.5005040Z processing existing schema: aten::lt.str(str a, str b) -> (bool) 2022-05-18T03:33:20.5007057Z processing existing schema: quantized::linear_prepack(Tensor W, Tensor? B=None) -> (__torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack) 2022-05-18T03:33:20.5008531Z processing existing schema: aten::celu_(Tensor(a!) self, Scalar alpha=1.) -> (Tensor(a!)) 2022-05-18T03:33:20.5009917Z processing existing schema: aten::view_as_real(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.5011382Z processing existing schema: aten::real(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.5013011Z processing existing schema: aten::imag(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.5014411Z processing existing schema: aten::result_type.Tensor(Tensor tensor, Tensor other) -> (int) 2022-05-18T03:33:20.5015816Z processing existing schema: aten::result_type.Scalar(Tensor tensor, Scalar other) -> (int) 2022-05-18T03:33:20.5017258Z processing existing schema: aten::result_type.Scalar_Tensor(Scalar scalar, Tensor tensor) -> (int) 2022-05-18T03:33:20.5018700Z processing existing schema: aten::result_type.Scalar_Scalar(Scalar scalar1, Scalar scalar2) -> (int) 2022-05-18T03:33:20.5020310Z processing existing schema: prim::FusedConcat(...) -> (...) 2022-05-18T03:33:20.5021791Z processing existing schema: aten::linalg_svd(Tensor A, bool full_matrices=True) -> (Tensor U, Tensor S, Tensor Vh) 2022-05-18T03:33:20.5024598Z processing existing schema: aten::linalg_svd.U(Tensor A, bool full_matrices=True, *, Tensor(a!) U, Tensor(b!) S, Tensor(c!) Vh) -> (Tensor(a!) U, Tensor(b!) S, Tensor(c!) Vh) 2022-05-18T03:33:20.5025727Z processing existing schema: aten::sqrt(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5027468Z processing existing schema: aten::sqrt.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5028926Z processing existing schema: aten::sqrt.int(int a) -> (float) 2022-05-18T03:33:20.5029922Z processing existing schema: aten::sqrt.float(float a) -> (float) 2022-05-18T03:33:20.5031432Z processing existing schema: aten::sqrt.complex(complex a) -> (complex) 2022-05-18T03:33:20.5032815Z processing existing schema: aten::sqrt.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.5034276Z processing existing schema: aten::pow.Tensor_Tensor(Tensor self, Tensor exponent) -> (Tensor) 2022-05-18T03:33:20.5035607Z processing existing schema: aten::pow.Tensor_Scalar(Tensor self, Scalar exponent) -> (Tensor) 2022-05-18T03:33:20.5036890Z processing existing schema: aten::pow.Scalar(Scalar self, Tensor exponent) -> (Tensor) 2022-05-18T03:33:20.5038771Z processing existing schema: aten::pow.Scalar_out(Scalar self, Tensor exponent, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5040724Z processing existing schema: aten::pow.Tensor_Scalar_out(Tensor self, Scalar exponent, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.5042470Z processing existing schema: aten::pow.Tensor_Tensor_out(Tensor self, Tensor exponent, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5043557Z processing existing schema: aten::pow.int(int a, int b) -> (float) 2022-05-18T03:33:20.5045087Z processing existing schema: aten::pow.complex(complex a, complex b) -> (complex) 2022-05-18T03:33:20.5046438Z processing existing schema: aten::pow.float(float a, float b) -> (float) 2022-05-18T03:33:20.5047833Z processing existing schema: aten::pow.int_float(int a, float b) -> (float) 2022-05-18T03:33:20.5049250Z processing existing schema: aten::pow.float_int(float a, int b) -> (float) 2022-05-18T03:33:20.5050708Z processing existing schema: aten::pow.float_complex(float a, complex b) -> (complex) 2022-05-18T03:33:20.5052149Z processing existing schema: aten::pow.complex_float(complex a, float b) -> (complex) 2022-05-18T03:33:20.5053561Z processing existing schema: aten::pow.Scalar_Scalar(Scalar a, Scalar b) -> (float) 2022-05-18T03:33:20.5055031Z processing existing schema: aten::pow.int_to_int(int a, int b) -> (int) 2022-05-18T03:33:20.5056325Z processing existing schema: aten::silu(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5058083Z processing existing schema: aten::silu.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5059551Z processing existing schema: aten::alias(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.5061228Z processing existing schema: prim::MKLDNNScalarMul_(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.5062318Z processing existing schema: aten::_conj_physical(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5063620Z processing existing schema: aten::log2(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5065495Z processing existing schema: aten::log2.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5067221Z processing existing schema: aten::stack(Tensor[] tensors, int dim=0) -> (Tensor) 2022-05-18T03:33:20.5069465Z processing existing schema: aten::stack.out(Tensor[] tensors, int dim=0, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5070871Z processing existing schema: prim::MKLDNNHardTanh_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.5073042Z processing existing schema: aten::addmv_(Tensor(a!) self, Tensor mat, Tensor vec, *, Scalar beta=1, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:20.5074145Z processing existing schema: aten::bmm(Tensor self, Tensor mat2) -> (Tensor) 2022-05-18T03:33:20.5076109Z processing existing schema: aten::bmm.out(Tensor self, Tensor mat2, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5078464Z processing existing schema: quantized::embedding_bag_2bit_prepack(Tensor weight, bool optimized_qparams=False, int nbins=200, float ratio=0.16) -> (Tensor) 2022-05-18T03:33:20.5080004Z processing existing schema: prim::MKLDNNHardSigmoid_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.5081765Z processing existing schema: aten::addmv(Tensor self, Tensor mat, Tensor vec, *, Scalar beta=1, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.5083893Z processing existing schema: aten::addmv.out(Tensor self, Tensor mat, Tensor vec, *, Scalar beta=1, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5085620Z processing existing schema: quantized::add_out(Tensor qa, Tensor qb, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:20.5087361Z processing existing schema: aten::arctan2_(Tensor(a!) 
self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.5089192Z processing existing schema: aten::threshold_(Tensor(a!) self, Scalar threshold, Scalar value) -> (Tensor(a!)) 2022-05-18T03:33:20.5090396Z processing existing schema: aten::copysign.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.5092158Z processing existing schema: aten::copysign.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5093288Z processing existing schema: aten::copysign.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.5095275Z processing existing schema: aten::copysign.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5096747Z processing existing schema: aten::copysign.int(int a, int b) -> (float) 2022-05-18T03:33:20.5098025Z processing existing schema: aten::copysign.float(float a, float b) -> (float) 2022-05-18T03:33:20.5099138Z processing existing schema: aten::copysign.int_float(int a, float b) -> (float) 2022-05-18T03:33:20.5101481Z processing existing schema: aten::copysign.float_int(float a, int b) -> (float) 2022-05-18T03:33:20.5101964Z processing existing schema: aten::copysign(Scalar a, Scalar b) -> (float) 2022-05-18T03:33:20.5103625Z processing existing schema: quantized::conv_transpose2d_groups(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int) 2022-05-18T03:33:20.5105403Z processing existing schema: _quantized::conv_transpose1d(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.5128520Z processing existing schema: aten::batch_norm(Tensor input, Tensor? weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool training, float momentum, float eps, bool cudnn_enabled) -> (Tensor) 2022-05-18T03:33:20.5129509Z processing existing schema: aten::trunc(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5130347Z processing existing schema: aten::trunc.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5131213Z processing existing schema: quantized::conv_transpose1d_dynamic(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, bool reduce_range=False) -> (Tensor) 2022-05-18T03:33:20.5132170Z processing existing schema: aten::batch_norm_gather_stats_with_counts(Tensor input, Tensor mean, Tensor invstd, Tensor? running_mean, Tensor? running_var, float momentum, float eps, Tensor counts) -> (Tensor, Tensor) 2022-05-18T03:33:20.5132815Z processing existing schema: aten::unflatten.int(Tensor(a) self, int dim, int[] sizes, str[]? names=None) -> (Tensor(a)) 2022-05-18T03:33:20.5133495Z processing existing schema: aten::unflatten.Dimname(Tensor(a) self, str dim, int[] sizes, str[] names) -> (Tensor(a)) 2022-05-18T03:33:20.5134106Z processing existing schema: aten::geometric_(Tensor(a!) self, float p, *, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:20.5134838Z processing existing schema: quantized::conv2d_groups(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int) 2022-05-18T03:33:20.5135461Z processing existing schema: aten::conv_depthwise3d(Tensor self, Tensor weight, int[3] kernel_size, Tensor? bias, int[3] stride, int[3] padding, int[3] dilation) -> (Tensor) 2022-05-18T03:33:20.5136174Z processing existing schema: aten::empty_strided(int[] size, int[] stride, *, int? dtype=None, int? layout=None, Device? device=None, bool? 
pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.5136631Z processing existing schema: aten::ljust(str self, int width, str fillchar=" ") -> (str) 2022-05-18T03:33:20.5136994Z processing existing schema: aten::neg(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5137367Z processing existing schema: aten::neg.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5137716Z processing existing schema: aten::neg.int(int a) -> (int) 2022-05-18T03:33:20.5138041Z processing existing schema: aten::neg.float(float a) -> (float) 2022-05-18T03:33:20.5138456Z processing existing schema: aten::neg.complex(complex a) -> (complex) 2022-05-18T03:33:20.5138897Z processing existing schema: aten::neg.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.5140402Z processing existing schema: aten::sparse_compressed_tensor.comp_plain_value_size(Tensor compressed_indices, Tensor plain_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:20.5143135Z processing existing schema: aten::sparse_compressed_tensor.comp_plain_value(Tensor compressed_indices, Tensor plain_indices, Tensor values, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:20.5144275Z processing existing schema: aten::sinh(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5147165Z processing existing schema: aten::sinh.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5148183Z processing existing schema: aten::sinh.int(int a) -> (float) 2022-05-18T03:33:20.5150405Z processing existing schema: aten::sinh.float(float a) -> (float) 2022-05-18T03:33:20.5151801Z processing existing schema: aten::sinh.complex(complex a) -> (complex) 2022-05-18T03:33:20.5154087Z processing existing schema: aten::sinh.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.5155709Z processing existing schema: aten::angle(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5158377Z processing existing schema: aten::angle.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5159632Z processing existing schema: aten::angle.int(int a) -> (float) 2022-05-18T03:33:20.5161853Z processing existing schema: aten::angle.float(float a) -> (float) 2022-05-18T03:33:20.5163323Z processing existing schema: aten::angle.complex(complex a) -> (float) 2022-05-18T03:33:20.5165480Z processing existing schema: aten::angle.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.5169421Z processing existing schema: quantized::quantized_lstm_cell_dynamic(Tensor input, Tensor[] hx, __torch__.torch.classes.quantized.LinearPackedParamsBase w_ih, __torch__.torch.classes.quantized.LinearPackedParamsBase w_hh, Tensor bias_ih, Tensor bias_hh) -> (Tensor, Tensor) 2022-05-18T03:33:20.5170930Z processing existing schema: quantized::linear_relu_dynamic_fp16(Tensor X, __torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack) -> (Tensor Y) 2022-05-18T03:33:20.5172524Z processing existing schema: aten::ceil_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.5174975Z processing existing schema: aten::view_as_complex(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.5176854Z processing existing schema: quantized::linear_prepack_fp16_legacy(Tensor W, Tensor? 
B=None) -> (Tensor W_prepack) 2022-05-18T03:33:20.5178880Z processing existing schema: aten::channel_shuffle(Tensor self, int groups) -> (Tensor) 2022-05-18T03:33:20.5181354Z processing existing schema: aten::vsplit.int(Tensor(a -> *) self, int sections) -> (Tensor[]) 2022-05-18T03:33:20.5184017Z processing existing schema: aten::vsplit.array(Tensor(a -> *) self, int[] indices) -> (Tensor[]) 2022-05-18T03:33:20.5186618Z processing existing schema: aten::diagonal(Tensor(a) self, int offset=0, int dim1=0, int dim2=1) -> (Tensor(a)) 2022-05-18T03:33:20.5189124Z processing existing schema: aten::diagonal.Dimname(Tensor(a) self, *, str outdim, str dim1, str dim2, int offset=0) -> (Tensor(a)) 2022-05-18T03:33:20.5190126Z processing existing schema: aten::lower(str self) -> (str) 2022-05-18T03:33:20.5192322Z processing existing schema: aten::sign(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5194574Z processing existing schema: aten::sign.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5196351Z processing existing schema: aten::silu_backward(Tensor grad_output, Tensor self) -> (Tensor) 2022-05-18T03:33:20.5198911Z processing existing schema: aten::silu_backward.grad_input(Tensor grad_output, Tensor self, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.5200263Z processing existing schema: aten::align_as(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.5202368Z processing existing schema: prim::CudaFusionSizeEq(...) -> (bool) 2022-05-18T03:33:20.5205031Z processing existing schema: quantized::linear_dynamic(Tensor X, __torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack, bool reduce_range=False) -> (Tensor Y) 2022-05-18T03:33:20.5206083Z processing existing schema: aten::ccol_indices_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5208239Z processing existing schema: aten::vdot(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.5210502Z processing existing schema: aten::vdot.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5211962Z processing existing schema: aten::logcumsumexp(Tensor self, int dim) -> (Tensor) 2022-05-18T03:33:20.5214074Z processing existing schema: aten::logcumsumexp.dimname(Tensor self, str dim) -> (Tensor) 2022-05-18T03:33:20.5216511Z processing existing schema: aten::logcumsumexp.dimname_out(Tensor self, str dim, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5218575Z processing existing schema: aten::logcumsumexp.out(Tensor self, int dim, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5221265Z processing existing schema: quantized::make_quantized_cell_params_fp16(__torch__.torch.classes.quantized.LinearPackedParamsBase w_ih, __torch__.torch.classes.quantized.LinearPackedParamsBase w_hh) -> (__torch__.torch.classes.rnn.CellParamsBase) 2022-05-18T03:33:20.5222837Z processing existing schema: aten::amin(Tensor self, int[1] dim=[], bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.5226026Z processing existing schema: aten::amin.out(Tensor self, int[1] dim=[], bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5228147Z processing existing schema: aten::symeig(Tensor self, bool eigenvectors=False, bool upper=True) -> (Tensor eigenvalues, Tensor eigenvectors) 2022-05-18T03:33:20.5231410Z processing existing schema: aten::symeig.e(Tensor self, bool eigenvectors=False, bool upper=True, *, Tensor(a!) e, Tensor(b!) V) -> (Tensor(a!) eigenvalues, Tensor(b!) 
eigenvectors) 2022-05-18T03:33:20.5232343Z processing existing schema: aten::polygamma(int n, Tensor self) -> (Tensor) 2022-05-18T03:33:20.5235086Z processing existing schema: aten::polygamma.out(int n, Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5236826Z processing existing schema: aten::_remove_batch_dim(Tensor self, int level, int batch_size, int out_dim) -> (Tensor) 2022-05-18T03:33:20.5238407Z processing existing schema: prim::TensorExprDynamicGroup(...) -> (...) 2022-05-18T03:33:20.5240685Z processing existing schema: aten::t(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.5242052Z processing existing schema: aten::tan(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5244537Z processing existing schema: aten::tan.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5245863Z processing existing schema: aten::tan.int(int a) -> (float) 2022-05-18T03:33:20.5247878Z processing existing schema: aten::tan.float(float a) -> (float) 2022-05-18T03:33:20.5249377Z processing existing schema: aten::tan.complex(complex a) -> (complex) 2022-05-18T03:33:20.5251428Z processing existing schema: aten::tan.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.5253601Z processing existing schema: aten::swapaxes(Tensor(a) self, int axis0, int axis1) -> (Tensor(a)) 2022-05-18T03:33:20.5255894Z schema: prim::infer_squeeze_size.dim(int[] a, int dim) -> (int[]) found on allowlist, skipping 2022-05-18T03:33:20.5257877Z schema: prim::infer_squeeze_size(int[] a) -> (int[]) found on allowlist, skipping 2022-05-18T03:33:20.5261185Z processing existing schema: aten::allclose(Tensor self, Tensor other, float rtol=1.0000000000000001e-05, float atol=1e-08, bool equal_nan=False) -> (bool) 2022-05-18T03:33:20.5263981Z processing existing schema: aten::fft_rfftfreq(int n, float d=1., *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.5266194Z processing existing schema: aten::fft_rfftfreq.out(int n, float d=1., *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5268401Z processing existing schema: prim::reshape_copy(Tensor self, int[] shape) -> (Tensor) 2022-05-18T03:33:20.5269884Z processing existing schema: aten::prelu(Tensor self, Tensor weight) -> (Tensor) 2022-05-18T03:33:20.5272717Z processing existing schema: quantized::conv1d(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.5273499Z processing existing schema: aten::atleast_1d(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5276291Z processing existing schema: aten::atleast_1d.Sequence(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.5278229Z processing existing schema: aten::igammac_(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.5280549Z processing existing schema: aten::_ncf_unsqueeze(Tensor(a) self, int ndim) -> (Tensor(a)) 2022-05-18T03:33:20.5282121Z processing existing schema: aten::bernoulli(Tensor self, *, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:20.5284843Z processing existing schema: aten::bernoulli.out(Tensor self, *, Generator? generator=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5286552Z processing existing schema: aten::bernoulli.p(Tensor self, float p, *, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:20.5290897Z processing existing schema: quantized::conv_prepack(Tensor weight, Tensor? 
bias, int[] stride, int[] padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv2dPackedParamsBase) 2022-05-18T03:33:20.5291399Z processing existing schema: aten::linalg_inv(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5294101Z processing existing schema: aten::linalg_inv.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5296264Z processing existing schema: aten::linalg_cholesky_ex(Tensor self, *, bool upper=False, bool check_errors=False) -> (Tensor L, Tensor info) 2022-05-18T03:33:20.5299447Z processing existing schema: aten::linalg_cholesky_ex.L(Tensor self, *, bool upper=False, bool check_errors=False, Tensor(a!) L, Tensor(b!) info) -> (Tensor(a!) L, Tensor(b!) info) 2022-05-18T03:33:20.5300999Z processing existing schema: aten::_new_zeros_with_same_feature_meta(Tensor self, Tensor other, *, int self_num_batch_dims=0) -> (Tensor) 2022-05-18T03:33:20.5303071Z processing existing schema: aten::orgqr(Tensor self, Tensor input2) -> (Tensor) 2022-05-18T03:33:20.5305631Z processing existing schema: aten::orgqr.out(Tensor self, Tensor input2, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5307079Z processing existing schema: aten::lift(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5309872Z processing existing schema: quantized::conv_transpose2d_stride(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:20.5311169Z processing existing schema: aten::copy(Tensor self, Tensor src, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:20.5313918Z processing existing schema: aten::copy.t(t[](a) self) -> (t[]) 2022-05-18T03:33:20.5316814Z processing existing schema: aten::copy.Dict_str(Dict(str, t)(a) self) -> (Dict(str, t)) 2022-05-18T03:33:20.5319511Z processing existing schema: aten::copy.Dict_int(Dict(int, t)(a) self) -> (Dict(int, t)) 2022-05-18T03:33:20.5321968Z processing existing schema: aten::copy.Dict_bool(Dict(bool, t)(a) self) -> (Dict(bool, t)) 2022-05-18T03:33:20.5324619Z processing existing schema: aten::copy.Dict_float(Dict(float, t)(a) self) -> (Dict(float, t)) 2022-05-18T03:33:20.5327450Z processing existing schema: aten::copy.Dict_complex(Dict(complex, t)(a) self) -> (Dict(complex, t)) 2022-05-18T03:33:20.5330054Z processing existing schema: aten::copy.Dict_Tensor(Dict(Tensor, t)(a) self) -> (Dict(Tensor, t)) 2022-05-18T03:33:20.5331347Z processing existing schema: aten::exp(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5333888Z processing existing schema: aten::exp.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5334985Z processing existing schema: aten::exp.int(int a) -> (float) 2022-05-18T03:33:20.5337208Z processing existing schema: aten::exp.float(float a) -> (float) 2022-05-18T03:33:20.5338692Z processing existing schema: aten::exp.complex(complex a) -> (complex) 2022-05-18T03:33:20.5340567Z processing existing schema: aten::exp.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.5342126Z processing existing schema: prim::is_sparse(Tensor a) -> (bool) 2022-05-18T03:33:20.5345043Z processing existing schema: aten::cdist(Tensor x1, Tensor x2, float p=2., int? compute_mode=None) -> (Tensor) 2022-05-18T03:33:20.5347029Z processing existing schema: quantized::linear_relu_dynamic(Tensor X, __torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack, bool reduce_range=False) -> (Tensor Y) 2022-05-18T03:33:20.5348505Z processing existing schema: prim::CudaFusionIvalGuard(...) 
-> (bool) 2022-05-18T03:33:20.5351197Z processing existing schema: aten::align_to(Tensor(a) self, str[] names) -> (Tensor(a)) 2022-05-18T03:33:20.5353923Z processing existing schema: aten::align_to.ellipsis_idx(Tensor(a) self, str[] order, int ellipsis_idx) -> (Tensor(a)) 2022-05-18T03:33:20.5354806Z processing existing schema: prim::CudaFusionGuard(...) -> (bool) 2022-05-18T03:33:20.5357432Z processing existing schema: aten::_pin_memory(Tensor self, Device? device=None) -> (Tensor) 2022-05-18T03:33:20.5358282Z processing existing schema: aten::polar(Tensor abs, Tensor angle) -> (Tensor) 2022-05-18T03:33:20.5360287Z processing existing schema: aten::polar.out(Tensor abs, Tensor angle, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5361665Z processing existing schema: aten::polar.int(int a, int b) -> (complex) 2022-05-18T03:33:20.5363435Z processing existing schema: aten::polar.float(float a, float b) -> (complex) 2022-05-18T03:33:20.5364883Z processing existing schema: aten::polar.int_float(int a, float b) -> (complex) 2022-05-18T03:33:20.5366227Z processing existing schema: aten::polar.float_int(float a, int b) -> (complex) 2022-05-18T03:33:20.5367807Z processing existing schema: aten::polar.Scalar_Scalar(Scalar a, Scalar b) -> (Scalar) 2022-05-18T03:33:20.5370253Z processing existing schema: aten::fft_ifft2(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None) -> (Tensor) 2022-05-18T03:33:20.5372908Z processing existing schema: aten::fft_ifft2.out(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5373958Z processing existing schema: prim::CudaFusionGroup(...) -> (...) 2022-05-18T03:33:20.5375612Z processing existing schema: aten::_pdist_forward(Tensor self, float p=2.) -> (Tensor) 2022-05-18T03:33:20.5377767Z processing existing schema: aten::poisson_nll_loss(Tensor input, Tensor target, bool log_input, bool full, float eps, int reduction) -> (Tensor) 2022-05-18T03:33:20.5379041Z processing existing schema: aten::log_softmax.int(Tensor self, int dim, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.5380726Z processing existing schema: aten::log_softmax.Dimname(Tensor self, str dim, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.5382754Z processing existing schema: aten::log_softmax.int_out(Tensor self, int dim, int? dtype=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5384162Z processing existing schema: aten::t_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.5385598Z processing existing schema: prim::TensorExprGroup(...) -> (...) 2022-05-18T03:33:20.5387234Z processing existing schema: prim::view_copy(Tensor self, int[] size) -> (Tensor) 2022-05-18T03:33:20.5389395Z processing existing schema: aten::fft_rfft2(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None) -> (Tensor) 2022-05-18T03:33:20.5391911Z processing existing schema: aten::fft_rfft2.out(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5393104Z processing existing schema: quantized::add_scalar_relu(Tensor qa, Scalar b) -> (Tensor qc) 2022-05-18T03:33:20.5394592Z processing existing schema: quantized::add_scalar_relu.Tensor(Tensor qa, Tensor b) -> (Tensor qc) 2022-05-18T03:33:20.5396087Z processing existing schema: aten::arctanh_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.5398483Z processing existing schema: aten::to.device(Tensor(a) self, Device device, int dtype, bool non_blocking=False, bool copy=False, int? 
memory_format=None) -> (Tensor(a)) 2022-05-18T03:33:20.5400682Z processing existing schema: aten::to.dtype(Tensor(a) self, int dtype, bool non_blocking=False, bool copy=False, int? memory_format=None) -> (Tensor(a)) 2022-05-18T03:33:20.5403004Z processing existing schema: aten::to.other(Tensor(a) self, Tensor other, bool non_blocking=False, bool copy=False, int? memory_format=None) -> (Tensor(a)) 2022-05-18T03:33:20.5405533Z processing existing schema: aten::to.dtype_layout(Tensor(a) self, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, bool non_blocking=False, bool copy=False, int? memory_format=None) -> (Tensor(a)) 2022-05-18T03:33:20.5407617Z processing existing schema: aten::to.prim_Device(Tensor(a) self, Device? device, int? dtype=None, bool non_blocking=False, bool copy=False) -> (Tensor(a|b)) 2022-05-18T03:33:20.5409653Z processing existing schema: aten::to.prim_dtype(Tensor(a) self, int? dtype=None, bool non_blocking=False, bool copy=False) -> (Tensor(a|b)) 2022-05-18T03:33:20.5411705Z processing existing schema: aten::to.prim_other(Tensor(a) self, bool non_blocking=False, bool copy=False) -> (Tensor(a|b)) 2022-05-18T03:33:20.5413042Z processing existing schema: aten::_make_dual(Tensor(a) primal, Tensor tangent, int level) -> (Tensor(a)) 2022-05-18T03:33:20.5414602Z processing existing schema: aten::_log_softmax(Tensor self, int dim, bool half_to_float) -> (Tensor) 2022-05-18T03:33:20.5416882Z processing existing schema: aten::_log_softmax.out(Tensor self, int dim, bool half_to_float, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5419104Z processing existing schema: aten::new_zeros(Tensor self, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.5421081Z processing existing schema: aten::split.Tensor(Tensor(a -> *) self, int split_size, int dim=0) -> (Tensor[]) 2022-05-18T03:33:20.5423671Z processing existing schema: aten::split.sizes(Tensor(a -> *) self, int[] split_size, int dim=0) -> (Tensor[]) 2022-05-18T03:33:20.5425553Z processing existing schema: aten::split.str(str self, str? separator=None, int max=-1) -> (str[]) 2022-05-18T03:33:20.5427801Z processing existing schema: aten::split(Tensor(a -> *) self, int[] split_sizes, int dim=0) -> (Tensor[]) 2022-05-18T03:33:20.5428746Z schema: aten::linalg_qr(Tensor A, str mode="reduced") -> (Tensor Q, Tensor R) found on allowlist, skipping 2022-05-18T03:33:20.5430081Z schema: aten::linalg_qr.out(Tensor A, str mode="reduced", *, Tensor(a!) Q, Tensor(b!) R) -> (Tensor(a!) Q, Tensor(b!) R) found on allowlist, skipping 2022-05-18T03:33:20.5432232Z processing existing schema: aten::exponential_(Tensor(a!) self, float lambd=1., *, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:20.5433164Z processing existing schema: prim::name(Tensor a) -> (str?) 2022-05-18T03:33:20.5435332Z processing existing schema: aten::baddbmm(Tensor self, Tensor batch1, Tensor batch2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.5437767Z processing existing schema: aten::baddbmm.out(Tensor self, Tensor batch1, Tensor batch2, *, Scalar beta=1, Scalar alpha=1, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.5439408Z processing existing schema: quantized::conv_transpose3d(Tensor qx, __torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.5440545Z processing existing schema: aten::linalg_cholesky(Tensor self, *, bool upper=False) -> (Tensor) 2022-05-18T03:33:20.5442328Z processing existing schema: aten::linalg_cholesky.out(Tensor self, *, bool upper=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5443949Z processing existing schema: aten::fft_ifft(Tensor self, int? n=None, int dim=-1, str? norm=None) -> (Tensor) 2022-05-18T03:33:20.5446246Z processing existing schema: aten::fft_ifft.out(Tensor self, int? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5447256Z processing existing schema: prim::StaticRuntimeCopyOuts(...) -> (...) 2022-05-18T03:33:20.5449467Z processing existing schema: aten::fft_fftn(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None) -> (Tensor) 2022-05-18T03:33:20.5451763Z processing existing schema: aten::fft_fftn.out(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5453621Z processing existing schema: aten::reshape(Tensor(a) self, int[] shape) -> (Tensor(a)) 2022-05-18T03:33:20.5455511Z processing existing schema: aten::gru_cell(Tensor input, Tensor hx, Tensor w_ih, Tensor w_hh, Tensor? b_ih=None, Tensor? b_hh=None) -> (Tensor) 2022-05-18T03:33:20.5457354Z processing existing schema: sparse::qlinear_relu(Tensor X, __torch__.torch.classes.sparse.LinearPackedParamsBase W_prepack, float Y_scale_i, int Y_zero_point_i) -> (Tensor Y) 2022-05-18T03:33:20.5458710Z processing existing schema: aten::arccosh_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.5460639Z processing existing schema: aten::clamp_(Tensor(a!) self, Scalar? min=None, Scalar? max=None) -> (Tensor(a!)) 2022-05-18T03:33:20.5462526Z processing existing schema: aten::clamp_.Tensor(Tensor(a!) self, Tensor? min=None, Tensor? max=None) -> (Tensor(a!)) 2022-05-18T03:33:20.5464197Z processing existing schema: quantized::mul_out(Tensor qa, Tensor qb, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:20.5465776Z processing existing schema: aten::tanh(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5467773Z processing existing schema: aten::tanh.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5468867Z processing existing schema: aten::tanh.int(int a) -> (float) 2022-05-18T03:33:20.5469944Z processing existing schema: aten::tanh.float(float a) -> (float) 2022-05-18T03:33:20.5471623Z processing existing schema: aten::tanh.complex(complex a) -> (complex) 2022-05-18T03:33:20.5472712Z processing existing schema: aten::tanh.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.5475105Z processing existing schema: aten::blackman_window(int window_length, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.5477306Z processing existing schema: aten::blackman_window.periodic(int window_length, bool periodic, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.5478579Z processing existing schema: quantized::embedding_bag_byte_prepack(Tensor weight) -> (Tensor) 2022-05-18T03:33:20.5480418Z processing existing schema: aten::squeeze_(Tensor(a!) 
self) -> (Tensor(a!)) 2022-05-18T03:33:20.5481978Z processing existing schema: aten::squeeze_.dim(Tensor(a!) self, int dim) -> (Tensor(a!)) 2022-05-18T03:33:20.5483715Z processing existing schema: aten::squeeze_.dimname(Tensor(a!) self, str dim) -> (Tensor(a!)) 2022-05-18T03:33:20.5485060Z processing existing schema: prim::TensorExprDynamicGuard(...) -> (bool) 2022-05-18T03:33:20.5486055Z processing existing schema: aten::alias_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5488319Z processing existing schema: quantized::batch_norm1d_relu(Tensor qx, Tensor? weight, Tensor? bias, Tensor mean, Tensor var, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.5489383Z processing existing schema: aten::asin(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5491043Z processing existing schema: aten::asin.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5492501Z processing existing schema: aten::asin.int(int a) -> (float) 2022-05-18T03:33:20.5493811Z processing existing schema: aten::asin.float(float a) -> (float) 2022-05-18T03:33:20.5495134Z processing existing schema: aten::asin.complex(complex a) -> (complex) 2022-05-18T03:33:20.5496461Z processing existing schema: aten::asin.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.5497911Z processing existing schema: prim::MKLDNNHardTanh(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5499724Z processing existing schema: aten::permute(Tensor(a) self, int[] dims) -> (Tensor(a)) 2022-05-18T03:33:20.5501069Z processing existing schema: prim::ConstantMKLDNNTensor(...) -> (...) 2022-05-18T03:33:20.5503340Z processing existing schema: aten::fft_fft2(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None) -> (Tensor) 2022-05-18T03:33:20.5505807Z processing existing schema: aten::fft_fft2.out(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5506840Z processing existing schema: aten::_conj(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.5508152Z processing existing schema: aten::log1p(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5509947Z processing existing schema: aten::log1p.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5511580Z processing existing schema: aten::log1p.int(int a) -> (float) 2022-05-18T03:33:20.5513006Z processing existing schema: aten::log1p.float(float a) -> (float) 2022-05-18T03:33:20.5514526Z processing existing schema: aten::log1p.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.5516383Z processing existing schema: prim::add_optional(Tensor(a) input, Tensor? bias) -> (Tensor(a)) 2022-05-18T03:33:20.5518463Z processing existing schema: aten::fft_rfft(Tensor self, int? n=None, int dim=-1, str? norm=None) -> (Tensor) 2022-05-18T03:33:20.5520996Z processing existing schema: aten::fft_rfft.out(Tensor self, int? n=None, int dim=-1, str? norm=None, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.5522805Z processing existing schema: aten::expand_as(Tensor(a) self, Tensor other) -> (Tensor(a)) 2022-05-18T03:33:20.5524196Z processing existing schema: prim::is_ipu(Tensor a) -> (bool) 2022-05-18T03:33:20.5526487Z processing existing schema: aten::expand(Tensor(a) self, int[] size, *, bool implicit=False) -> (Tensor(a)) 2022-05-18T03:33:20.5528796Z processing existing schema: aten::expand.SymInt(Tensor(a) self, SymInt[] size, *, bool implicit=False) -> (Tensor(a)) 2022-05-18T03:33:20.5530237Z processing existing schema: prim::is_vulkan(Tensor a) -> (bool) 2022-05-18T03:33:20.5532602Z processing existing schema: aten::fft_fftfreq(int n, float d=1., *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.5534573Z processing existing schema: aten::fft_fftfreq.out(int n, float d=1., *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5536042Z processing existing schema: prim::BroadcastMKLDNNTensors(...) -> (...) 2022-05-18T03:33:20.5539200Z processing existing schema: aten::sparse_bsr_tensor.crow_col_value_size(Tensor crow_indices, Tensor col_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:20.5541593Z processing existing schema: aten::sparse_bsr_tensor.crow_col_value(Tensor crow_indices, Tensor col_indices, Tensor values, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:20.5542987Z processing existing schema: aten::l1_loss(Tensor self, Tensor target, int reduction=1) -> (Tensor) 2022-05-18T03:33:20.5545301Z processing existing schema: aten::l1_loss.out(Tensor self, Tensor target, int reduction=1, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5546735Z processing existing schema: prim::MKLDNNClamp(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5548286Z processing existing schema: aten::is_same_size(Tensor self, Tensor other) -> (bool) 2022-05-18T03:33:20.5550068Z processing existing schema: aten::amax(Tensor self, int[1] dim=[], bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.5552422Z processing existing schema: aten::amax.out(Tensor self, int[1] dim=[], bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5555035Z processing existing schema: quantized::make_quantized_cell_params_dynamic(__torch__.torch.classes.quantized.LinearPackedParamsBase w_ih, __torch__.torch.classes.quantized.LinearPackedParamsBase w_hh, Tensor bias_ih, Tensor bias_hh, bool reduce_range=False) -> (__torch__.torch.classes.rnn.CellParamsBase) 2022-05-18T03:33:20.5556000Z processing existing schema: aten::size.int(Tensor self, int dim) -> (int) 2022-05-18T03:33:20.5557666Z processing existing schema: aten::size.Dimname(Tensor self, str dim) -> (int) 2022-05-18T03:33:20.5559729Z processing existing schema: aten::size(Tensor self) -> (int[]) 2022-05-18T03:33:20.5561385Z processing existing schema: aten::reshape_as(Tensor(a) self, Tensor other) -> (Tensor(a)) 2022-05-18T03:33:20.5562925Z processing existing schema: aten::gt.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.5564372Z processing existing schema: aten::gt.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.5566272Z processing existing schema: aten::gt.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5568192Z processing existing schema: aten::gt.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.5569698Z processing existing schema: aten::gt.int(int a, int b) -> (bool) 2022-05-18T03:33:20.5571211Z processing existing schema: aten::gt.float(float a, float b) -> (bool) 2022-05-18T03:33:20.5572746Z processing existing schema: aten::gt.int_float(int a, float b) -> (bool) 2022-05-18T03:33:20.5574257Z processing existing schema: aten::gt.float_int(float a, int b) -> (bool) 2022-05-18T03:33:20.5575761Z processing existing schema: aten::gt(Scalar a, Scalar b) -> (bool) 2022-05-18T03:33:20.5577377Z processing existing schema: aten::gt.str(str a, str b) -> (bool) 2022-05-18T03:33:20.5579439Z processing existing schema: aten::_upsample_nearest_exact2d(Tensor self, int[2] output_size, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:20.5581696Z processing existing schema: aten::_upsample_nearest_exact2d.vec(Tensor input, int[]? output_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.5584141Z processing existing schema: aten::_upsample_nearest_exact2d.out(Tensor self, int[2] output_size, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5586737Z processing existing schema: aten::new_empty(Tensor self, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.5588761Z processing existing schema: prim::MKLDNNHardSwish(Tensor a) -> (Tensor) 2022-05-18T03:33:20.5590113Z processing existing schema: aten::linalg_inv_ex(Tensor self, *, bool check_errors=False) -> (Tensor inverse, Tensor info) 2022-05-18T03:33:20.5592295Z processing existing schema: aten::linalg_inv_ex.inverse(Tensor self, *, bool check_errors=False, Tensor(a!) inverse, Tensor(b!) info) -> (Tensor(a!) inverse, Tensor(b!) info) 2022-05-18T03:33:20.5593887Z processing existing schema: aten::_add_batch_dim(Tensor self, int batch_dim, int level) -> (Tensor) 2022-05-18T03:33:20.5595533Z processing existing schema: aten::linalg_eigh(Tensor self, str UPLO="L") -> (Tensor eigenvalues, Tensor eigenvectors) 2022-05-18T03:33:20.5598274Z processing existing schema: aten::linalg_eigh.eigvals(Tensor self, str UPLO="L", *, Tensor(a!) eigvals, Tensor(b!) eigvecs) -> (Tensor(a!) eigenvalues, Tensor(b!) eigenvectors) 2022-05-18T03:33:20.5598936Z processing existing schema: aten::atan(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5600876Z processing existing schema: aten::atan.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5601910Z processing existing schema: aten::atan.int(int a) -> (float) 2022-05-18T03:33:20.5603374Z processing existing schema: aten::atan.float(float a) -> (float) 2022-05-18T03:33:20.5604534Z processing existing schema: aten::atan.complex(complex a) -> (complex) 2022-05-18T03:33:20.5606018Z processing existing schema: aten::atan.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.5608402Z processing existing schema: quantized::batch_norm3d_relu(Tensor qx, Tensor? weight, Tensor? bias, Tensor mean, Tensor var, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.5609428Z processing existing schema: aten::le.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.5610942Z processing existing schema: aten::le.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.5612697Z processing existing schema: aten::le.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.5614471Z processing existing schema: aten::le.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5615531Z processing existing schema: aten::le.int(int a, int b) -> (bool) 2022-05-18T03:33:20.5617055Z processing existing schema: aten::le.float(float a, float b) -> (bool) 2022-05-18T03:33:20.5618404Z processing existing schema: aten::le.int_float(int a, float b) -> (bool) 2022-05-18T03:33:20.5619949Z processing existing schema: aten::le.float_int(float a, int b) -> (bool) 2022-05-18T03:33:20.5620958Z processing existing schema: aten::le(Scalar a, Scalar b) -> (bool) 2022-05-18T03:33:20.5622524Z processing existing schema: aten::le.str(str a, str b) -> (bool) 2022-05-18T03:33:20.5624619Z processing existing schema: aten::fft_irfft(Tensor self, int? n=None, int dim=-1, str? norm=None) -> (Tensor) 2022-05-18T03:33:20.5626874Z processing existing schema: aten::fft_irfft.out(Tensor self, int? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5628061Z processing existing schema: prim::oneDNNFusionGroup(...) -> (...) 2022-05-18T03:33:20.5630218Z processing existing schema: aten::fft_ifftn(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None) -> (Tensor) 2022-05-18T03:33:20.5632706Z processing existing schema: aten::fft_ifftn.out(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5634455Z processing existing schema: quantized::conv2d_padding(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:20.5637136Z processing existing schema: aten::conv1d(Tensor input, Tensor weight, Tensor? bias=None, int[1] stride=[1], int[1] padding=[0], int[1] dilation=[1], int groups=1) -> (Tensor) 2022-05-18T03:33:20.5640114Z processing existing schema: aten::conv1d.padding(Tensor input, Tensor weight, Tensor? bias=None, int[1] stride=[1], str padding="valid", int[1] dilation=[1], int groups=1) -> (Tensor) 2022-05-18T03:33:20.5642576Z processing existing schema: aten::empty.memory_format(int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:20.5644702Z processing existing schema: aten::empty.out(int[] size, *, int? memory_format=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5647865Z processing existing schema: aten::empty.names(int[] size, *, str[]? names, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:20.5648848Z processing existing schema: aten::isidentifier(str self) -> (bool) 2022-05-18T03:33:20.5650488Z processing existing schema: quantized::relu6(Tensor qx, bool inplace=False) -> (Tensor) 2022-05-18T03:33:20.5651865Z processing existing schema: aten::col_indices(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.5653504Z processing existing schema: aten::einsum(str equation, Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:20.5654701Z processing existing schema: aten::einsum.sublist(Tensor a, ...) -> (Tensor) 2022-05-18T03:33:20.5656138Z processing existing schema: aten::islower(str self) -> (bool) 2022-05-18T03:33:20.5658430Z processing existing schema: aten::triu_indices(int row, int col, int offset=0, *, int? dtype=4, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.5660270Z processing existing schema: aten::sub_.Scalar(Tensor(a!) 
self, Scalar other, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:20.5662004Z processing existing schema: aten::sub_.Tensor(Tensor(a!) self, Tensor other, *, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:20.5663166Z processing existing schema: aten::lgamma(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5664910Z processing existing schema: aten::lgamma.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5665969Z processing existing schema: aten::lgamma.int(int a) -> (float) 2022-05-18T03:33:20.5667395Z processing existing schema: aten::lgamma.float(float a) -> (float) 2022-05-18T03:33:20.5668673Z processing existing schema: aten::lgamma.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.5670878Z processing existing schema: aten::fft_irfft2(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None) -> (Tensor) 2022-05-18T03:33:20.5673390Z processing existing schema: aten::fft_irfft2.out(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5674643Z processing existing schema: prim::oneDNNFusionGuard(...) -> (...) 2022-05-18T03:33:20.5676015Z processing existing schema: aten::mv(Tensor self, Tensor vec) -> (Tensor) 2022-05-18T03:33:20.5677864Z processing existing schema: aten::mv.out(Tensor self, Tensor vec, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5679486Z processing existing schema: aten::detach(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.5680978Z processing existing schema: aten::numel(Tensor self) -> (int) 2022-05-18T03:33:20.5682860Z processing existing schema: aten::view(Tensor(a) self, int[] size) -> (Tensor(a)) 2022-05-18T03:33:20.5684628Z processing existing schema: aten::view.dtype(Tensor(a) self, int dtype) -> (Tensor(a)) 2022-05-18T03:33:20.5685704Z processing existing schema: prim::StaticSubgraph(...) -> (...) 2022-05-18T03:33:20.5687166Z processing existing schema: prim::squeeze_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5688678Z processing existing schema: prim::squeeze_copy.dim(Tensor self, int dim) -> (Tensor) 2022-05-18T03:33:20.5690763Z processing existing schema: aten::fft_rfftn(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None) -> (Tensor) 2022-05-18T03:33:20.5693293Z processing existing schema: aten::fft_rfftn.out(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5695578Z processing existing schema: aten::slice.Tensor(Tensor(a) self, int dim=0, int? start=None, int? end=None, int step=1) -> (Tensor(a)) 2022-05-18T03:33:20.5697962Z processing existing schema: aten::slice.t(t[] l, int? start=None, int? end=None, int step=1) -> (t[]) 2022-05-18T03:33:20.5699923Z processing existing schema: aten::slice.str(str string, int? start=None, int? end=None, int step=1) -> (str) 2022-05-18T03:33:20.5701663Z processing existing schema: aten::requires_grad_(Tensor(a!) self, bool requires_grad=True) -> (Tensor(a!)) 2022-05-18T03:33:20.5703368Z processing existing schema: aten::_unsafe_view(Tensor self, int[] size) -> (Tensor) 2022-05-18T03:33:20.5705320Z processing existing schema: aten::grid_sampler_2d(Tensor input, Tensor grid, int interpolation_mode, int padding_mode, bool align_corners) -> (Tensor) 2022-05-18T03:33:20.5706866Z processing existing schema: aten::replication_pad2d(Tensor self, int[4] padding) -> (Tensor) 2022-05-18T03:33:20.5708794Z processing existing schema: aten::replication_pad2d.out(Tensor self, int[4] padding, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.5709899Z processing existing schema: aten::log10(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5711800Z processing existing schema: aten::log10.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5713250Z processing existing schema: aten::log10.int(int a) -> (float) 2022-05-18T03:33:20.5714536Z processing existing schema: aten::log10.float(float a) -> (float) 2022-05-18T03:33:20.5715956Z processing existing schema: aten::log10.complex(complex a) -> (complex) 2022-05-18T03:33:20.5717376Z processing existing schema: aten::log10.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.5720535Z processing existing schema: aten::new_empty_strided(Tensor self, int[] size, int[] stride, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.5722298Z processing existing schema: quantized::conv_transpose3d_stride(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:20.5723322Z processing existing schema: aten::cos(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5725201Z processing existing schema: aten::cos.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5726588Z processing existing schema: aten::cos.int(int a) -> (float) 2022-05-18T03:33:20.5727945Z processing existing schema: aten::cos.float(float a) -> (float) 2022-05-18T03:33:20.5729370Z processing existing schema: aten::cos.complex(complex a) -> (complex) 2022-05-18T03:33:20.5730762Z processing existing schema: aten::cos.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.5732108Z processing existing schema: aten::expm1(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5733838Z processing existing schema: aten::expm1.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5735139Z processing existing schema: aten::expm1.int(int a) -> (float) 2022-05-18T03:33:20.5736558Z processing existing schema: aten::expm1.float(float a) -> (float) 2022-05-18T03:33:20.5737959Z processing existing schema: aten::expm1.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.5739361Z processing existing schema: prim::is_meta(Tensor a) -> (bool) 2022-05-18T03:33:20.5742019Z processing existing schema: aten::fft_hfft(Tensor self, int? n=None, int dim=-1, str? norm=None) -> (Tensor) 2022-05-18T03:33:20.5743802Z processing existing schema: aten::fft_hfft.out(Tensor self, int? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5744925Z processing existing schema: prim::MKLDNNHardSigmoid(Tensor a) -> (Tensor) 2022-05-18T03:33:20.5746733Z processing existing schema: aten::pdist(Tensor self, float p=2.) -> (Tensor) 2022-05-18T03:33:20.5748492Z processing existing schema: aten::fft_fft(Tensor self, int? n=None, int dim=-1, str? norm=None) -> (Tensor) 2022-05-18T03:33:20.5750781Z processing existing schema: aten::fft_fft.out(Tensor self, int? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5753565Z processing existing schema: aten::sparse_csc_tensor.ccol_row_value_size(Tensor ccol_indices, Tensor row_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:20.5755684Z processing existing schema: aten::sparse_csc_tensor.ccol_row_value(Tensor ccol_indices, Tensor row_indices, Tensor values, *, int? dtype=None, int? layout=None, Device? device=None, bool? 
pin_memory=False) -> (Tensor) 2022-05-18T03:33:20.5756867Z processing existing schema: aten::std(Tensor self, bool unbiased=True) -> (Tensor) 2022-05-18T03:33:20.5758704Z processing existing schema: aten::std.dim(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.5760590Z processing existing schema: aten::std.names_dim(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.5762734Z processing existing schema: aten::std.names_out(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5764786Z processing existing schema: aten::std.out(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5766484Z processing existing schema: aten::std.correction(Tensor self, int[1]? dim, *, int? correction, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.5768616Z processing existing schema: aten::std.correction_out(Tensor self, int[1]? dim, *, int? correction, bool keepdim=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5770362Z processing existing schema: aten::std.correction_names(Tensor self, str[1] dim, *, int? correction, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.5772556Z processing existing schema: aten::std.correction_names_out(Tensor self, str[1] dim, *, int? correction, bool keepdim=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5774370Z processing existing schema: prim::infer_unsqueeze_size(int[] a, int dim) -> (int[]) 2022-05-18T03:33:20.5775654Z processing existing schema: aten::all(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5777223Z processing existing schema: aten::all.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.5779227Z processing existing schema: aten::all.out(Tensor self, int dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5780999Z processing existing schema: aten::all.all_out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5782525Z processing existing schema: aten::all.dimname(Tensor self, str dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.5784685Z processing existing schema: aten::all.dimname_out(Tensor self, str dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5786394Z processing existing schema: aten::all.int(int[] self) -> (bool) 2022-05-18T03:33:20.5787998Z processing existing schema: aten::all.float(float[] self) -> (bool) 2022-05-18T03:33:20.5789611Z processing existing schema: aten::all.bool(bool[] self) -> (bool) 2022-05-18T03:33:20.5791408Z processing existing schema: aten::svd(Tensor self, bool some=True, bool compute_uv=True) -> (Tensor U, Tensor S, Tensor V) 2022-05-18T03:33:20.5794272Z processing existing schema: aten::svd.U(Tensor self, bool some=True, bool compute_uv=True, *, Tensor(a!) U, Tensor(b!) S, Tensor(c!) V) -> (Tensor(a!) U, Tensor(b!) S, Tensor(c!) V) 2022-05-18T03:33:20.5795502Z processing existing schema: aten::ceil(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5797130Z processing existing schema: aten::ceil.out(Tensor self, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.5798611Z processing existing schema: aten::ceil.int(int a) -> (int) 2022-05-18T03:33:20.5799750Z processing existing schema: aten::ceil.float(float a) -> (int) 2022-05-18T03:33:20.5801276Z processing existing schema: aten::ceil.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.5803068Z processing existing schema: quantized::linear_dynamic_fp16(Tensor X, __torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack) -> (Tensor Y) 2022-05-18T03:33:20.5804174Z processing existing schema: aten::numpy_T(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.5805929Z processing existing schema: aten::numpy_T.a(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.5807122Z processing existing schema: aten::relu(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5808562Z processing existing schema: aten::stride.int(Tensor self, int dim) -> (int) 2022-05-18T03:33:20.5810015Z processing existing schema: aten::stride.Dimname(Tensor self, str dim) -> (int) 2022-05-18T03:33:20.5812840Z processing existing schema: prim::mkldnn_convolution(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, int groups) -> (Tensor) 2022-05-18T03:33:20.5814531Z processing existing schema: aten::affine_grid_generator(Tensor theta, int[] size, bool align_corners) -> (Tensor) 2022-05-18T03:33:20.5815661Z processing existing schema: prim::CudaFusionViewGuard(...) -> (bool) 2022-05-18T03:33:20.5817610Z processing existing schema: aten::align_tensors(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.5819450Z processing existing schema: aten::sum.dim_IntList(Tensor self, int[1] dim, bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.5820963Z processing existing schema: aten::sum(Tensor self, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.5822827Z processing existing schema: aten::sum.dim_DimnameList(Tensor self, str[1] dim, bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.5825307Z processing existing schema: aten::sum.DimnameList_out(Tensor self, str[1] dim, bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5827526Z processing existing schema: aten::sum.IntList_out(Tensor self, int[1] dim, bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5829007Z processing existing schema: aten::sum.int(int[] self) -> (int) 2022-05-18T03:33:20.5830652Z processing existing schema: aten::sum.float(float[] self) -> (float) 2022-05-18T03:33:20.5832354Z processing existing schema: aten::sum.complex(complex[] self) -> (complex) 2022-05-18T03:33:20.5833851Z processing existing schema: aten::sum.bool(bool[] self) -> (int) 2022-05-18T03:33:20.5837740Z processing existing schema: aten::_convolution.deprecated(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool benchmark, bool deterministic, bool cudnn_enabled) -> (Tensor) 2022-05-18T03:33:20.5841357Z processing existing schema: aten::_convolution(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool benchmark, bool deterministic, bool cudnn_enabled, bool allow_tf32) -> (Tensor) 2022-05-18T03:33:20.5843197Z processing existing schema: aten::log_normal_(Tensor(a!) self, float mean=1., float std=2., *, Generator? 
generator=None) -> (Tensor(a!)) 2022-05-18T03:33:20.5844764Z processing existing schema: aten::mul.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.5845899Z processing existing schema: aten::mul.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.5848308Z processing existing schema: aten::mul.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5849951Z processing existing schema: aten::mul.left_t(t[] l, int n) -> (t[]) 2022-05-18T03:33:20.5851960Z processing existing schema: aten::mul.right_(int n, t[] l) -> (t[]) 2022-05-18T03:33:20.5853679Z processing existing schema: aten::mul.int(int a, int b) -> (int) 2022-05-18T03:33:20.5854821Z processing existing schema: aten::mul.complex(complex a, complex b) -> (complex) 2022-05-18T03:33:20.5856334Z processing existing schema: aten::mul.float(float a, float b) -> (float) 2022-05-18T03:33:20.5857939Z processing existing schema: aten::mul.int_complex(int a, complex b) -> (complex) 2022-05-18T03:33:20.5859096Z processing existing schema: aten::mul.complex_int(complex a, int b) -> (complex) 2022-05-18T03:33:20.5860856Z processing existing schema: aten::mul.float_complex(float a, complex b) -> (complex) 2022-05-18T03:33:20.5862561Z processing existing schema: aten::mul.complex_float(complex a, float b) -> (complex) 2022-05-18T03:33:20.5863629Z processing existing schema: aten::mul.int_float(int a, float b) -> (float) 2022-05-18T03:33:20.5865332Z processing existing schema: aten::mul.float_int(float a, int b) -> (float) 2022-05-18T03:33:20.5866803Z processing existing schema: aten::mul(Scalar a, Scalar b) -> (Scalar) 2022-05-18T03:33:20.5868511Z processing existing schema: aten::detach_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.5869967Z processing existing schema: aten::get_device(Tensor self) -> (int) 2022-05-18T03:33:20.5871353Z processing existing schema: aten::view_as(Tensor(a) self, Tensor other) -> (Tensor(a)) 2022-05-18T03:33:20.5873447Z processing existing schema: quantized::linear_prepack_fp16(Tensor W, Tensor? B=None) -> (__torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack) 2022-05-18T03:33:20.5874976Z processing existing schema: aten::chalf(Tensor self, *, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:20.5876001Z processing existing schema: aten::diagflat(Tensor self, int offset=0) -> (Tensor) 2022-05-18T03:33:20.5877584Z processing existing schema: prim::type(Device self) -> (str) 2022-05-18T03:33:20.5880038Z processing existing schema: aten::native_group_norm_backward(Tensor grad_out, Tensor input, Tensor mean, Tensor rstd, Tensor? weight, int N, int C, int HxW, int group, bool[3] output_mask) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.5881126Z processing existing schema: aten::_has_same_storage_numel(Tensor self, Tensor other) -> (bool) 2022-05-18T03:33:20.5883082Z processing existing schema: quantized::add_relu_out(Tensor qa, Tensor qb, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:20.5884632Z processing existing schema: aten::arctan_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.5886187Z processing existing schema: aten::threshold_backward(Tensor grad_output, Tensor self, Scalar threshold) -> (Tensor) 2022-05-18T03:33:20.5888189Z processing existing schema: aten::threshold_backward.grad_input(Tensor grad_output, Tensor self, Scalar threshold, *, Tensor(a!) 
grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.5889764Z processing existing schema: aten::sub.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.5891286Z processing existing schema: aten::sub.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.5893256Z processing existing schema: aten::sub.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5894616Z processing existing schema: aten::sub.int(int a, int b) -> (int) 2022-05-18T03:33:20.5895951Z processing existing schema: aten::sub.complex(complex a, complex b) -> (complex) 2022-05-18T03:33:20.5898031Z processing existing schema: aten::sub.float(float a, float b) -> (float) 2022-05-18T03:33:20.5898958Z processing existing schema: aten::sub.int_complex(int a, complex b) -> (complex) 2022-05-18T03:33:20.5900057Z processing existing schema: aten::sub.complex_int(complex a, int b) -> (complex) 2022-05-18T03:33:20.5901725Z processing existing schema: aten::sub.float_complex(float a, complex b) -> (complex) 2022-05-18T03:33:20.5902740Z processing existing schema: aten::sub.complex_float(complex a, float b) -> (complex) 2022-05-18T03:33:20.5904353Z processing existing schema: aten::sub.int_float(int a, float b) -> (float) 2022-05-18T03:33:20.5906071Z processing existing schema: aten::sub.float_int(float a, int b) -> (float) 2022-05-18T03:33:20.5907508Z processing existing schema: aten::sub(Scalar a, Scalar b) -> (Scalar) 2022-05-18T03:33:20.5909204Z processing existing schema: prim::MKLDNNScalarMul(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.5911045Z processing existing schema: aten::affine_grid_generator_backward(Tensor grad, int[] size, bool align_corners) -> (Tensor) 2022-05-18T03:33:20.5912440Z processing existing schema: aten::sigmoid_backward(Tensor grad_output, Tensor output) -> (Tensor) 2022-05-18T03:33:20.5914374Z processing existing schema: aten::sigmoid_backward.grad_input(Tensor grad_output, Tensor output, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.5916610Z processing existing schema: aten::instance_norm(Tensor input, Tensor? weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool use_input_stats, float momentum, float eps, bool cudnn_enabled) -> (Tensor) 2022-05-18T03:33:20.5917762Z processing existing schema: aten::is_complex(Tensor self) -> (bool) 2022-05-18T03:33:20.5919322Z processing existing schema: aten::sinc(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5921187Z processing existing schema: aten::sinc.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5923051Z processing existing schema: aten::linalg_solve_triangular(Tensor self, Tensor B, *, bool upper, bool left=True, bool unitriangular=False) -> (Tensor) 2022-05-18T03:33:20.5925378Z processing existing schema: aten::linalg_solve_triangular.out(Tensor self, Tensor B, *, bool upper, bool left=True, bool unitriangular=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5926868Z processing existing schema: aten::_cdist_forward(Tensor x1, Tensor x2, float p, int? compute_mode) -> (Tensor) 2022-05-18T03:33:20.5928281Z processing existing schema: prim::DifferentiableGraph(...) -> (...) 2022-05-18T03:33:20.5929918Z processing existing schema: aten::fill_.Scalar(Tensor(a!) self, Scalar value) -> (Tensor(a!)) 2022-05-18T03:33:20.5931635Z processing existing schema: aten::fill_.Tensor(Tensor(a!) 
self, Tensor value) -> (Tensor(a!)) 2022-05-18T03:33:20.5933279Z processing existing schema: quantized::conv2d_transpose(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int) 2022-05-18T03:33:20.5934795Z processing existing schema: aten::conv_tbc(Tensor self, Tensor weight, Tensor bias, int pad=0) -> (Tensor) 2022-05-18T03:33:20.5936243Z processing existing schema: aten::eq.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.5937884Z processing existing schema: aten::eq.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.5939752Z processing existing schema: aten::eq.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5941626Z processing existing schema: aten::eq.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5943635Z processing existing schema: aten::eq.int_list(int[] a, int[] b) -> (bool) 2022-05-18T03:33:20.5945204Z processing existing schema: aten::eq.device(Device a, Device b) -> (bool) 2022-05-18T03:33:20.5946729Z processing existing schema: aten::eq.bool(bool a, bool b) -> (bool) 2022-05-18T03:33:20.5948280Z processing existing schema: aten::eq.enum(AnyEnumType a, AnyEnumType b) -> (bool) 2022-05-18T03:33:20.5949707Z processing existing schema: aten::eq.int(int a, int b) -> (bool) 2022-05-18T03:33:20.5951248Z processing existing schema: aten::eq.complex(complex a, complex b) -> (bool) 2022-05-18T03:33:20.5952729Z processing existing schema: aten::eq.float(float a, float b) -> (bool) 2022-05-18T03:33:20.5954304Z processing existing schema: aten::eq.int_float(int a, float b) -> (bool) 2022-05-18T03:33:20.5955744Z processing existing schema: aten::eq.float_int(float a, int b) -> (bool) 2022-05-18T03:33:20.5957299Z processing existing schema: aten::eq.float_complex(float a, complex b) -> (bool) 2022-05-18T03:33:20.5958846Z processing existing schema: aten::eq.complex_float(complex a, float b) -> (bool) 2022-05-18T03:33:20.5960443Z processing existing schema: aten::eq(Scalar a, Scalar b) -> (bool) 2022-05-18T03:33:20.5961946Z processing existing schema: aten::eq.str(str a, str b) -> (bool) 2022-05-18T03:33:20.5964161Z processing existing schema: aten::eq.float_list(float[] a, float[] b) -> (bool) 2022-05-18T03:33:20.5966317Z processing existing schema: aten::eq.Tensor_list(Tensor[] a, Tensor[] b) -> (bool) 2022-05-18T03:33:20.5968486Z processing existing schema: aten::eq.bool_list(bool[] a, bool[] b) -> (bool) 2022-05-18T03:33:20.5970624Z processing existing schema: aten::eq.str_list(str[] a, str[] b) -> (bool) 2022-05-18T03:33:20.5972418Z processing existing schema: aten::rjust(str self, int width, str fillchar=" ") -> (str) 2022-05-18T03:33:20.5974042Z processing existing schema: aten::digamma_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.5975442Z processing existing schema: aten::isalpha(str self) -> (bool) 2022-05-18T03:33:20.5977050Z processing existing schema: aten::zero_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.5979095Z processing existing schema: sparse::qlinear_prepack(Tensor W, Tensor? B, int out_features_block_size, int in_features_block_size) -> (__torch__.torch.classes.sparse.LinearPackedParamsBase W_prepack) 2022-05-18T03:33:20.5980127Z processing existing schema: aten::arcsinh(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5981988Z processing existing schema: aten::arcsinh.out(Tensor self, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.5984131Z processing existing schema: aten::tensor_split.sections(Tensor(a -> *) self, int sections, int dim=0) -> (Tensor[]) 2022-05-18T03:33:20.5986647Z processing existing schema: aten::tensor_split.indices(Tensor(a -> *) self, int[] indices, int dim=0) -> (Tensor[]) 2022-05-18T03:33:20.5989483Z processing existing schema: aten::tensor_split.tensor_indices_or_sections(Tensor(a -> *) self, Tensor tensor_indices_or_sections, int dim=0) -> (Tensor[]) 2022-05-18T03:33:20.5991480Z processing existing schema: aten::movedim.intlist(Tensor(a) self, int[] source, int[] destination) -> (Tensor(a)) 2022-05-18T03:33:20.5993290Z processing existing schema: aten::movedim.int(Tensor(a) self, int source, int destination) -> (Tensor(a)) 2022-05-18T03:33:20.5995197Z processing existing schema: aten::frac(Tensor self) -> (Tensor) 2022-05-18T03:33:20.5996564Z processing existing schema: aten::frac.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.5999362Z processing existing schema: aten::randint(int high, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.6001964Z processing existing schema: aten::randint.generator(int high, int[] size, *, Generator? generator, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.6004337Z processing existing schema: aten::randint.low(int low, int high, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.6007222Z processing existing schema: aten::randint.low_generator(int low, int high, int[] size, *, Generator? generator, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.6008959Z processing existing schema: aten::randint.out(int high, int[] size, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6011199Z processing existing schema: aten::randint.generator_out(int high, int[] size, *, Generator? generator, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6013420Z processing existing schema: aten::randint.low_out(int low, int high, int[] size, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6015642Z processing existing schema: aten::randint.low_generator_out(int low, int high, int[] size, *, Generator? generator, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6017332Z processing existing schema: prim::MMTreeReduce(...) -> (Tensor) 2022-05-18T03:33:20.6019134Z schema: aten::stft(Tensor self, int n_fft, int? hop_length=None, int? win_length=None, Tensor? window=None, bool normalized=False, bool? onesided=None, bool? return_complex=None) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.6021807Z schema: aten::stft.center(Tensor self, int n_fft, int? hop_length=None, int? win_length=None, Tensor? window=None, bool center=True, str pad_mode="reflect", bool normalized=False, bool? onesided=None, bool? return_complex=None) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.6024984Z processing existing schema: prim::MKLDNNLayerNorm_(Tensor(a!) input, int[] normalized_shape, Tensor? weight=None, Tensor? 
bias=None, float eps=1.0000000000000001e-05, bool cudnn_enable=True) -> (Tensor(a!)) 2022-05-18T03:33:20.6026522Z processing existing schema: aten::adjoint(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.6030138Z processing existing schema: aten::quantized_lstm.input(Tensor input, Tensor[] hx, __torch__.torch.classes.rnn.CellParamsBase[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first, *, int? dtype=None, bool use_dynamic=False) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.6033310Z processing existing schema: aten::quantized_lstm.data(Tensor data, Tensor batch_sizes, Tensor[] hx, __torch__.torch.classes.rnn.CellParamsBase[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, *, int? dtype=None, bool use_dynamic=False) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.6036126Z processing existing schema: aten::quantized_lstm.input_legacy(Tensor input, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first, *, int? dtype=None, bool use_dynamic=False) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.6039347Z processing existing schema: aten::quantized_lstm.data_legacy(Tensor data, Tensor batch_sizes, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, *, int? dtype=None, bool use_dynamic=False) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.6040432Z processing existing schema: aten::alpha_dropout(Tensor input, float p, bool train) -> (Tensor) 2022-05-18T03:33:20.6041820Z processing existing schema: aten::celu(Tensor self, Scalar alpha=1.) -> (Tensor) 2022-05-18T03:33:20.6043667Z processing existing schema: _quantized::linear_dynamic(Tensor X, __torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack, bool reduce_range=False) -> (Tensor Y) 2022-05-18T03:33:20.6046573Z processing existing schema: aten::_empty_affine_quantized(int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, float scale=1., int zero_point=0, int? memory_format=0) -> (Tensor) 2022-05-18T03:33:20.6047769Z processing existing schema: aten::mT(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.6049431Z processing existing schema: aten::mT.a(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.6050871Z processing existing schema: aten::trace(Tensor self) -> (Tensor) 2022-05-18T03:33:20.6052429Z processing existing schema: aten::std_mean(Tensor self, bool unbiased=True) -> (Tensor, Tensor) 2022-05-18T03:33:20.6054362Z processing existing schema: aten::std_mean.dim(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor, Tensor) 2022-05-18T03:33:20.6056328Z processing existing schema: aten::std_mean.names_dim(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor, Tensor) 2022-05-18T03:33:20.6058215Z processing existing schema: aten::std_mean.correction(Tensor self, int[1]? dim, *, int? correction, bool keepdim=False) -> (Tensor, Tensor) 2022-05-18T03:33:20.6060262Z processing existing schema: aten::std_mean.correction_names(Tensor self, str[1] dim, *, int? correction, bool keepdim=False) -> (Tensor, Tensor) 2022-05-18T03:33:20.6063161Z processing existing schema: prim::MKLDNNLayerNorm(Tensor input, int[] normalized_shape, Tensor? weight=None, Tensor? bias=None, float eps=1.0000000000000001e-05, bool cudnn_enable=True) -> (Tensor) 2022-05-18T03:33:20.6065318Z processing existing schema: aten::addr_(Tensor(a!) 
self, Tensor vec1, Tensor vec2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:20.6067261Z processing existing schema: aten::_fft_c2r(Tensor self, int[] dim, int normalization, int last_dim_size) -> (Tensor) 2022-05-18T03:33:20.6069684Z processing existing schema: aten::_fft_c2r.out(Tensor self, int[] dim, int normalization, int last_dim_size, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6071218Z processing existing schema: aten::matrix_H(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.6072790Z processing existing schema: aten::matrix_H.a(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.6074786Z processing existing schema: quantized::conv2d_relu.new(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.6077751Z processing existing schema: quantized::conv2d_relu(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase weight, int[] stride, int[] padding, int[] dilation, int groups, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.6080321Z processing existing schema: aten::avg_pool1d(Tensor self, int[1] kernel_size, int[1] stride=[], int[1] padding=[0], bool ceil_mode=False, bool count_include_pad=True) -> (Tensor) 2022-05-18T03:33:20.6082936Z processing existing schema: aten::max_pool3d(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=[0, 0, 0], int[3] dilation=[1, 1, 1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:20.6084674Z processing existing schema: aten::normal.Tensor_float(Tensor mean, float std=1., *, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:20.6086813Z processing existing schema: aten::normal.Tensor_float_out(Tensor mean, float std=1., *, Generator? generator=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6088865Z processing existing schema: aten::normal.float_Tensor_out(float mean, Tensor std, *, Generator? generator=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6090602Z processing existing schema: aten::normal.float_Tensor(float mean, Tensor std, *, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:20.6092395Z processing existing schema: aten::normal.Tensor_Tensor(Tensor mean, Tensor std, *, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:20.6094401Z processing existing schema: aten::normal.Tensor_Tensor_out(Tensor mean, Tensor std, *, Generator? generator=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6097394Z processing existing schema: aten::normal.float_float(float mean, float std, int[] size, *, Generator? generator=None, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.6099778Z processing existing schema: aten::normal.float_float_out(float mean, float std, int[] size, *, Generator? generator=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6101391Z processing existing schema: prim::unsqueeze_copy(Tensor self, int dim) -> (Tensor) 2022-05-18T03:33:20.6102732Z processing existing schema: aten::prod(Tensor self, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.6104498Z processing existing schema: aten::prod.dim_int(Tensor self, int dim, bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.6106376Z processing existing schema: aten::prod.dim_Dimname(Tensor self, str dim, bool keepdim=False, *, int? 
dtype=None) -> (Tensor) 2022-05-18T03:33:20.6108447Z processing existing schema: aten::prod.Dimname_out(Tensor self, str dim, bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6110707Z processing existing schema: aten::prod.int_out(Tensor self, int dim, bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6112589Z processing existing schema: aten::fft_irfftn(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None) -> (Tensor) 2022-05-18T03:33:20.6115081Z processing existing schema: aten::fft_irfftn.out(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6116770Z processing existing schema: aten::polygamma_(Tensor(a!) self, int n) -> (Tensor(a!)) 2022-05-18T03:33:20.6119007Z processing existing schema: aten::_reshape_alias(Tensor(a) self, int[] size, int[] stride) -> (Tensor(a)) 2022-05-18T03:33:20.6121122Z processing existing schema: quantized::cat_relu(Tensor[] qx, int dim, float? scale, int? zero_point) -> (Tensor) 2022-05-18T03:33:20.6122727Z processing existing schema: aten::atan_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.6124361Z processing existing schema: aten::transpose_(Tensor(a!) self, int dim0, int dim1) -> (Tensor(a!)) 2022-05-18T03:33:20.6126362Z processing existing schema: quantized::conv2d.new(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.6129401Z processing existing schema: quantized::conv2d(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase weight, int[] stride, int[] padding, int[] dilation, int groups, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.6130195Z processing existing schema: aten::atleast_3d(Tensor self) -> (Tensor) 2022-05-18T03:33:20.6132179Z processing existing schema: aten::atleast_3d.Sequence(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.6134160Z processing existing schema: aten::_fft_c2c(Tensor self, int[] dim, int normalization, bool forward) -> (Tensor) 2022-05-18T03:33:20.6136605Z processing existing schema: aten::_fft_c2c.out(Tensor self, int[] dim, int normalization, bool forward, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6138118Z processing existing schema: aten::matmul(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.6139972Z processing existing schema: aten::matmul.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6140894Z processing existing schema: aten::abs(Tensor self) -> (Tensor) 2022-05-18T03:33:20.6142750Z processing existing schema: aten::abs.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6144299Z processing existing schema: aten::add.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.6145809Z processing existing schema: aten::add.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.6147759Z processing existing schema: aten::add.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.6149960Z processing existing schema: aten::add.t(t[] a, t[] b) -> (t[]) 2022-05-18T03:33:20.6151404Z processing existing schema: aten::add.str(str a, str b) -> (str) 2022-05-18T03:33:20.6152784Z processing existing schema: aten::add.int(int a, int b) -> (int) 2022-05-18T03:33:20.6154262Z processing existing schema: aten::add.complex(complex a, complex b) -> (complex) 2022-05-18T03:33:20.6155662Z processing existing schema: aten::add.float(float a, float b) -> (float) 2022-05-18T03:33:20.6157163Z processing existing schema: aten::add.int_complex(int a, complex b) -> (complex) 2022-05-18T03:33:20.6158552Z processing existing schema: aten::add.complex_int(complex a, int b) -> (complex) 2022-05-18T03:33:20.6160104Z processing existing schema: aten::add.float_complex(float a, complex b) -> (complex) 2022-05-18T03:33:20.6161520Z processing existing schema: aten::add.complex_float(complex a, float b) -> (complex) 2022-05-18T03:33:20.6162980Z processing existing schema: aten::add.int_float(int a, float b) -> (float) 2022-05-18T03:33:20.6164441Z processing existing schema: aten::add.float_int(float a, int b) -> (float) 2022-05-18T03:33:20.6165953Z processing existing schema: aten::add(Scalar a, Scalar b) -> (Scalar) 2022-05-18T03:33:20.6166410Z schema: static_runtime::VarTupleUnpack(...) -> (...) found on allowlist, skipping 2022-05-18T03:33:20.6167816Z processing existing schema: aten::select.int(Tensor(a) self, int dim, int index) -> (Tensor(a)) 2022-05-18T03:33:20.6169517Z processing existing schema: aten::select.Dimname(Tensor(a) self, str dim, int index) -> (Tensor(a)) 2022-05-18T03:33:20.6171258Z processing existing schema: aten::select.t(t[](a) list, int idx) -> (t(*)) 2022-05-18T03:33:20.6173660Z processing existing schema: aten::split_with_sizes(Tensor(a -> *) self, int[] split_sizes, int dim=0) -> (Tensor[]) 2022-05-18T03:33:20.6174772Z processing existing schema: aten::linalg_solve(Tensor input, Tensor other) -> (Tensor) 2022-05-18T03:33:20.6176575Z processing existing schema: aten::linalg_solve.out(Tensor input, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6178261Z processing existing schema: quantized::conv2d_dynamic(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, bool reduce_range=False) -> (Tensor) 2022-05-18T03:33:20.6180360Z processing existing schema: aten::batch_norm_elemt.out(Tensor input, Tensor? weight, Tensor? bias, Tensor mean, Tensor invstd, float eps, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6181977Z processing existing schema: aten::batch_norm_elemt(Tensor input, Tensor? weight, Tensor? bias, Tensor mean, Tensor invstd, float eps) -> (Tensor) 2022-05-18T03:33:20.6183814Z processing existing schema: aten::unbind.int(Tensor(a -> *) self, int dim=0) -> (Tensor[]) 2022-05-18T03:33:20.6185732Z processing existing schema: aten::unbind.Dimname(Tensor(a -> *) self, str dim) -> (Tensor[]) 2022-05-18T03:33:20.6186894Z processing existing schema: aten::round(Tensor self) -> (Tensor) 2022-05-18T03:33:20.6188739Z processing existing schema: aten::round.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6190136Z processing existing schema: aten::round.decimals(Tensor self, *, int decimals) -> (Tensor) 2022-05-18T03:33:20.6191925Z processing existing schema: aten::round.decimals_out(Tensor self, *, int decimals, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.6193063Z processing existing schema: aten::round.int(int a) -> (float) 2022-05-18T03:33:20.6194505Z processing existing schema: aten::round.float(float a) -> (float) 2022-05-18T03:33:20.6195992Z processing existing schema: aten::round.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.6198005Z processing existing schema: aten::histc(Tensor self, int bins=100, Scalar min=0, Scalar max=0) -> (Tensor) 2022-05-18T03:33:20.6200132Z processing existing schema: aten::histc.out(Tensor self, int bins=100, Scalar min=0, Scalar max=0, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6201423Z processing existing schema: aten::adaptive_avg_pool3d(Tensor self, int[3] output_size) -> (Tensor) 2022-05-18T03:33:20.6203325Z processing existing schema: aten::adaptive_avg_pool3d.out(Tensor self, int[3] output_size, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6206377Z processing existing schema: _quantized::conv_transpose2d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] output_padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv2dPackedParamsBase) 2022-05-18T03:33:20.6207729Z processing existing schema: aten::bitwise_and_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.6209509Z processing existing schema: aten::bitwise_and_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.6211593Z processing existing schema: aten::unsqueeze(Tensor(a) self, int dim) -> (Tensor(a)) 2022-05-18T03:33:20.6212065Z schema: profiler::_call_end_callbacks_on_jit_fut(Tensor x, Future(t) y) -> (Future(t)) found on allowlist, skipping 2022-05-18T03:33:20.6212739Z schema: profiler::_call_end_callbacks_on_jit_fut._RecordFunction(__torch__.torch.classes.profiler._RecordFunction x, Future(t) y) -> (Future(t)) found on allowlist, skipping 2022-05-18T03:33:20.6213954Z processing existing schema: aten::addcmul_(Tensor(a!) self, Tensor tensor1, Tensor tensor2, *, Scalar value=1) -> (Tensor(a!)) 2022-05-18T03:33:20.6215868Z processing existing schema: aten::squeeze(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.6217759Z processing existing schema: aten::squeeze.dim(Tensor(a) self, int dim) -> (Tensor(a)) 2022-05-18T03:33:20.6219711Z processing existing schema: aten::squeeze.dimname(Tensor(a) self, str dim) -> (Tensor(a)) 2022-05-18T03:33:20.6221664Z processing existing schema: sparse::qlinear_dynamic(Tensor X, __torch__.torch.classes.sparse.LinearPackedParamsBase W_prepack) -> (Tensor Y) 2022-05-18T03:33:20.6223040Z processing existing schema: aten::arcsin(Tensor self) -> (Tensor) 2022-05-18T03:33:20.6225271Z processing existing schema: aten::arcsin.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6227082Z processing existing schema: aten::tanh_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.6228580Z processing existing schema: aten::clamp_max(Tensor self, Scalar max) -> (Tensor) 2022-05-18T03:33:20.6230468Z processing existing schema: aten::clamp_max.Tensor(Tensor self, Tensor max) -> (Tensor) 2022-05-18T03:33:20.6232623Z processing existing schema: aten::clamp_max.out(Tensor self, Scalar max, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6234703Z processing existing schema: aten::clamp_max.Tensor_out(Tensor self, Tensor max, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6236733Z processing existing schema: quantized::mul_relu_out(Tensor qa, Tensor qb, Tensor(a!) out) -> (Tensor(a!) 
out) 2022-05-18T03:33:20.6238067Z processing existing schema: aten::acos(Tensor self) -> (Tensor) 2022-05-18T03:33:20.6240494Z processing existing schema: aten::acos.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6241863Z processing existing schema: aten::acos.int(int a) -> (float) 2022-05-18T03:33:20.6243757Z processing existing schema: aten::acos.float(float a) -> (float) 2022-05-18T03:33:20.6245256Z processing existing schema: aten::acos.complex(complex a) -> (complex) 2022-05-18T03:33:20.6247127Z processing existing schema: aten::acos.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.6248614Z processing existing schema: aten::floor(Tensor self) -> (Tensor) 2022-05-18T03:33:20.6250820Z processing existing schema: aten::floor.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6252202Z processing existing schema: aten::floor.int(int a) -> (int) 2022-05-18T03:33:20.6254071Z processing existing schema: aten::floor.float(float a) -> (int) 2022-05-18T03:33:20.6255570Z processing existing schema: aten::floor.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.6258356Z processing existing schema: aten::normal_(Tensor(a!) self, float mean=0., float std=1., *, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:20.6260346Z processing existing schema: aten::_pdist_backward(Tensor grad, Tensor self, float p, Tensor pdist) -> (Tensor) 2022-05-18T03:33:20.6261978Z processing existing schema: aten::poisson(Tensor self, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:20.6264498Z processing existing schema: aten::random_.from(Tensor(a!) self, int from, int? to, *, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:20.6266744Z processing existing schema: aten::random_.to(Tensor(a!) self, int to, *, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:20.6268828Z processing existing schema: aten::random_(Tensor(a!) self, *, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:20.6271456Z processing existing schema: aten::rand_like(Tensor self, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:20.6274021Z processing existing schema: aten::randn_like(Tensor self, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:20.6276998Z processing existing schema: aten::_sparse_csr_tensor_unsafe(Tensor crow_indices, Tensor col_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.6279689Z processing existing schema: aten::randint_like(Tensor self, int high, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:20.6282368Z processing existing schema: aten::randint_like.low_dtype(Tensor self, int low, int high, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:20.6285259Z processing existing schema: aten::_sparse_csc_tensor_unsafe(Tensor ccol_indices, Tensor row_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.6287750Z processing existing schema: aten::rand(int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? 
pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.6290620Z processing existing schema: aten::rand.generator(int[] size, *, Generator? generator, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.6294253Z processing existing schema: aten::rand.names(int[] size, *, str[]? names, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.6297019Z processing existing schema: aten::rand.generator_with_names(int[] size, *, Generator? generator, str[]? names, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.6299544Z processing existing schema: aten::rand.out(int[] size, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6301982Z processing existing schema: aten::rand.generator_out(int[] size, *, Generator? generator, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6303183Z processing existing schema: aten::fmod.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.6305346Z processing existing schema: aten::fmod.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.6307716Z processing existing schema: aten::fmod.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6309621Z processing existing schema: aten::fmod.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6311126Z processing existing schema: aten::fmod.int(int a, int b) -> (float) 2022-05-18T03:33:20.6312962Z processing existing schema: aten::fmod.float(float a, float b) -> (float) 2022-05-18T03:33:20.6314746Z processing existing schema: aten::fmod.int_float(int a, float b) -> (float) 2022-05-18T03:33:20.6316446Z processing existing schema: aten::fmod.float_int(float a, int b) -> (float) 2022-05-18T03:33:20.6318266Z processing existing schema: aten::fmod(Scalar a, Scalar b) -> (float) 2022-05-18T03:33:20.6320854Z processing existing schema: aten::fractional_max_pool2d(Tensor self, int[2] kernel_size, int[2] output_size, Tensor random_samples) -> (Tensor, Tensor) 2022-05-18T03:33:20.6324522Z processing existing schema: aten::fractional_max_pool2d.output(Tensor self, int[2] kernel_size, int[2] output_size, Tensor random_samples, *, Tensor(a!) output, Tensor(b!) indices) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:20.6326118Z processing existing schema: aten::_sparse_softmax.Dimname(Tensor self, str dim, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.6328608Z processing existing schema: aten::_sparse_softmax.int(Tensor self, int dim, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.6329841Z processing existing schema: aten::_sparse_softmax(Tensor self, int dim, bool half_to_float) -> (Tensor) 2022-05-18T03:33:20.6331633Z schema: aten::randperm(int n, *, int? dtype=4, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.6332821Z schema: aten::randperm.generator(int n, *, Generator? generator, int? dtype=4, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.6333700Z schema: aten::randperm.out(int n, *, Tensor(a!) out) -> (Tensor(a!)) found on allowlist, skipping 2022-05-18T03:33:20.6334903Z schema: aten::randperm.generator_out(int n, *, Generator? generator, Tensor(a!) 
out) -> (Tensor(a!)) found on allowlist, skipping 2022-05-18T03:33:20.6336984Z processing existing schema: aten::div.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.6338613Z processing existing schema: aten::div.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.6341078Z processing existing schema: aten::div.Tensor_mode(Tensor self, Tensor other, *, str? rounding_mode) -> (Tensor) 2022-05-18T03:33:20.6342942Z processing existing schema: aten::div.Scalar_mode(Tensor self, Scalar other, *, str? rounding_mode) -> (Tensor) 2022-05-18T03:33:20.6345456Z processing existing schema: aten::div.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6347999Z processing existing schema: aten::div.out_mode(Tensor self, Tensor other, *, str? rounding_mode, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6349140Z processing existing schema: aten::div.int(int a, int b) -> (float) 2022-05-18T03:33:20.6351496Z processing existing schema: aten::div.complex(complex a, complex b) -> (complex) 2022-05-18T03:33:20.6352997Z processing existing schema: aten::div.float(float a, float b) -> (float) 2022-05-18T03:33:20.6355169Z processing existing schema: aten::div(Scalar a, Scalar b) -> (float) 2022-05-18T03:33:20.6356543Z processing existing schema: aten::isnumeric(str self) -> (bool) 2022-05-18T03:33:20.6360017Z processing existing schema: aten::zeros_like(Tensor self, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:20.6361774Z processing existing schema: aten::narrow(Tensor(a) self, int dim, int start, int length) -> (Tensor(a)) 2022-05-18T03:33:20.6364324Z processing existing schema: aten::narrow.Tensor(Tensor(a) self, int dim, Tensor start, int length) -> (Tensor(a)) 2022-05-18T03:33:20.6366173Z processing existing schema: aten::_fused_dropout(Tensor self, float p, Generator? generator=None) -> (Tensor, Tensor) 2022-05-18T03:33:20.6368526Z processing existing schema: quantized::clamp(Tensor qx, Scalar? min=None, Scalar? max=None) -> (Tensor qy) 2022-05-18T03:33:20.6369878Z processing existing schema: aten::atan2(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.6372530Z processing existing schema: aten::atan2.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6373837Z processing existing schema: aten::atan2.int(int a, int b) -> (float) 2022-05-18T03:33:20.6376113Z processing existing schema: aten::atan2.float(float a, float b) -> (float) 2022-05-18T03:33:20.6377573Z processing existing schema: aten::atan2.int_float(int a, float b) -> (float) 2022-05-18T03:33:20.6379750Z processing existing schema: aten::atan2.float_int(float a, int b) -> (float) 2022-05-18T03:33:20.6381367Z processing existing schema: aten::atan2.Scalar_Scalar(Scalar a, Scalar b) -> (float) 2022-05-18T03:33:20.6383862Z processing existing schema: aten::trace_backward(Tensor grad, int[] sizes) -> (Tensor) 2022-05-18T03:33:20.6387809Z processing existing schema: aten::_empty_per_channel_affine_quantized(int[] size, *, Tensor scales, Tensor zero_points, int axis, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=0) -> (Tensor) 2022-05-18T03:33:20.6389760Z processing existing schema: aten::margin_ranking_loss(Tensor input1, Tensor input2, Tensor target, float margin=0., int reduction=1) -> (Tensor) 2022-05-18T03:33:20.6392630Z processing existing schema: quantized::cat(Tensor[] qx, int dim, float? 
scale, int? zero_point) -> (Tensor) 2022-05-18T03:33:20.6394076Z processing existing schema: aten::atan2_(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.6396655Z processing existing schema: aten::transpose.int(Tensor(a) self, int dim0, int dim1) -> (Tensor(a)) 2022-05-18T03:33:20.6398624Z processing existing schema: aten::transpose.Dimname(Tensor(a) self, str dim0, str dim1) -> (Tensor(a)) 2022-05-18T03:33:20.6401468Z processing existing schema: quantized::cat_out(Tensor[] qx, int dim, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6402497Z processing existing schema: aten::atanh(Tensor self) -> (Tensor) 2022-05-18T03:33:20.6405082Z processing existing schema: aten::atanh.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6406119Z processing existing schema: aten::atanh.int(int a) -> (float) 2022-05-18T03:33:20.6408342Z processing existing schema: aten::atanh.float(float a) -> (float) 2022-05-18T03:33:20.6409830Z processing existing schema: aten::atanh.complex(complex a) -> (complex) 2022-05-18T03:33:20.6411805Z processing existing schema: aten::atanh.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.6414160Z processing existing schema: quantized::dropout(Tensor self, float output_scale, int output_zero_point, Scalar p=0.5, bool training=False) -> (Tensor) 2022-05-18T03:33:20.6415723Z processing existing schema: aten::bitwise_left_shift_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.6417223Z processing existing schema: aten::bitwise_left_shift_.Tensor_Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.6418678Z processing existing schema: aten::sgn_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.6420459Z processing existing schema: aten::addbmm(Tensor self, Tensor batch1, Tensor batch2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.6422750Z processing existing schema: aten::addbmm.out(Tensor self, Tensor batch1, Tensor batch2, *, Scalar beta=1, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6422960Z schema: static_runtime::create_owned_ref(...) -> (...) found on allowlist, skipping 2022-05-18T03:33:20.6424539Z processing existing schema: aten::linalg_multi_dot(Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:20.6426638Z processing existing schema: aten::linalg_multi_dot.out(Tensor[] tensors, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6428607Z processing existing schema: aten::rename(Tensor(a) self, str[]? names) -> (Tensor(a)) 2022-05-18T03:33:20.6430848Z processing existing schema: aten::_thnn_fused_lstm_cell(Tensor input_gates, Tensor hidden_gates, Tensor cx, Tensor? input_bias=None, Tensor? hidden_bias=None) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.6432758Z processing existing schema: aten::_thnn_fused_gru_cell(Tensor input_gates, Tensor hidden_gates, Tensor hx, Tensor? input_bias=None, Tensor? hidden_bias=None) -> (Tensor, Tensor) 2022-05-18T03:33:20.6435028Z processing existing schema: aten::lstm_cell(Tensor input, Tensor[] hx, Tensor w_ih, Tensor w_hh, Tensor? b_ih=None, Tensor? b_hh=None) -> (Tensor, Tensor) 2022-05-18T03:33:20.6437027Z processing existing schema: aten::rnn_tanh_cell(Tensor input, Tensor hx, Tensor w_ih, Tensor w_hh, Tensor? b_ih=None, Tensor? b_hh=None) -> (Tensor) 2022-05-18T03:33:20.6438923Z processing existing schema: aten::rnn_relu_cell(Tensor input, Tensor hx, Tensor w_ih, Tensor w_hh, Tensor? b_ih=None, Tensor? 
b_hh=None) -> (Tensor) 2022-05-18T03:33:20.6440856Z processing existing schema: quantized::conv_transpose1d_unpack(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (Tensor unpacked_weights, Tensor? B_origin) 2022-05-18T03:33:20.6444509Z processing existing schema: aten::convolution_backward_overrideable(Tensor grad_output, Tensor input, Tensor weight, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool[3] output_mask) -> (Tensor grad_input, Tensor grad_weight, Tensor grad_bias) 2022-05-18T03:33:20.6445614Z processing existing schema: aten::erfinv(Tensor self) -> (Tensor) 2022-05-18T03:33:20.6447204Z processing existing schema: aten::erfinv.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6449176Z processing existing schema: aten::rsplit(str self, str separator=" ", int max=-1) -> (str[]) 2022-05-18T03:33:20.6450809Z processing existing schema: aten::softplus(Tensor self, Scalar beta=1, Scalar threshold=20) -> (Tensor) 2022-05-18T03:33:20.6452785Z processing existing schema: aten::softplus.out(Tensor self, Scalar beta=1, Scalar threshold=20, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6455656Z processing existing schema: aten::layer_norm(Tensor input, int[] normalized_shape, Tensor? weight=None, Tensor? bias=None, float eps=1.0000000000000001e-05, bool cudnn_enable=True) -> (Tensor) 2022-05-18T03:33:20.6457662Z processing existing schema: aten::native_layer_norm(Tensor input, int[] normalized_shape, Tensor? weight, Tensor? bias, float eps) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.6460092Z processing existing schema: aten::group_norm(Tensor input, int num_groups, Tensor? weight=None, Tensor? bias=None, float eps=1.0000000000000001e-05, bool cudnn_enabled=True) -> (Tensor) 2022-05-18T03:33:20.6461100Z processing existing schema: aten::frobenius_norm(Tensor self) -> (Tensor) 2022-05-18T03:33:20.6462813Z processing existing schema: aten::frobenius_norm.dim(Tensor self, int[1] dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.6464915Z processing existing schema: aten::frobenius_norm.out(Tensor self, int[1] dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6466304Z processing existing schema: aten::nuclear_norm(Tensor self, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.6467879Z processing existing schema: aten::nuclear_norm.dim(Tensor self, int[2] dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.6469705Z processing existing schema: aten::nuclear_norm.out(Tensor self, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6471763Z processing existing schema: aten::nuclear_norm.dim_out(Tensor self, int[2] dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6473452Z processing existing schema: aten::unfold(Tensor(a) self, int dimension, int size, int step) -> (Tensor(a)) 2022-05-18T03:33:20.6475357Z processing existing schema: aten::max_unpool3d(Tensor self, Tensor indices, int[3] output_size, int[3] stride, int[3] padding) -> (Tensor) 2022-05-18T03:33:20.6477654Z processing existing schema: aten::max_unpool3d.out(Tensor self, Tensor indices, int[3] output_size, int[3] stride, int[3] padding, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6479668Z processing existing schema: aten::nll_loss(Tensor self, Tensor target, Tensor? 
weight=None, int reduction=1, int ignore_index=-100) -> (Tensor) 2022-05-18T03:33:20.6481843Z processing existing schema: aten::nll_loss.out(Tensor self, Tensor target, Tensor? weight=None, int reduction=1, int ignore_index=-100, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6483499Z processing existing schema: aten::_lu_with_info(Tensor self, bool pivot=True, bool check_errors=True) -> (Tensor LU, Tensor pivots, Tensor info) 2022-05-18T03:33:20.6485308Z processing existing schema: aten::nll_loss2d(Tensor self, Tensor target, Tensor? weight=None, int reduction=1, int ignore_index=-100) -> (Tensor) 2022-05-18T03:33:20.6487628Z processing existing schema: aten::nll_loss2d.out(Tensor self, Tensor target, Tensor? weight=None, int reduction=1, int ignore_index=-100, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6488595Z processing existing schema: prim::MMBatchSide(...) -> (...) 2022-05-18T03:33:20.6490429Z processing existing schema: aten::hinge_embedding_loss(Tensor self, Tensor target, float margin=1., int reduction=1) -> (Tensor) 2022-05-18T03:33:20.6492071Z processing existing schema: aten::kl_div(Tensor self, Tensor target, int reduction=1, *, bool log_target=False) -> (Tensor) 2022-05-18T03:33:20.6493561Z processing existing schema: aten::soft_margin_loss(Tensor self, Tensor target, int reduction=1) -> (Tensor) 2022-05-18T03:33:20.6495365Z processing existing schema: aten::soft_margin_loss.out(Tensor self, Tensor target, int reduction=1, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6496937Z processing existing schema: aten::smooth_l1_loss(Tensor self, Tensor target, int reduction=1, float beta=1.) -> (Tensor) 2022-05-18T03:33:20.6499040Z processing existing schema: aten::smooth_l1_loss.out(Tensor self, Tensor target, int reduction=1, float beta=1., *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6500581Z processing existing schema: aten::huber_loss(Tensor self, Tensor target, int reduction=1, float delta=1.) -> (Tensor) 2022-05-18T03:33:20.6502684Z processing existing schema: aten::huber_loss.out(Tensor self, Tensor target, int reduction=1, float delta=1., *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6503676Z processing existing schema: aten::rsqrt(Tensor self) -> (Tensor) 2022-05-18T03:33:20.6505366Z processing existing schema: aten::rsqrt.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6506821Z processing existing schema: aten::acos_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.6508265Z processing existing schema: aten::mse_loss(Tensor self, Tensor target, int reduction=1) -> (Tensor) 2022-05-18T03:33:20.6510243Z processing existing schema: aten::mse_loss.out(Tensor self, Tensor target, int reduction=1, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6511550Z processing existing schema: aten::diag(Tensor self, int diagonal=0) -> (Tensor) 2022-05-18T03:33:20.6513347Z processing existing schema: aten::diag.out(Tensor self, int diagonal=0, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6514669Z processing existing schema: aten::is_contiguous(Tensor self) -> (bool) 2022-05-18T03:33:20.6516173Z processing existing schema: aten::multilabel_margin_loss(Tensor self, Tensor target, int reduction=1) -> (Tensor) 2022-05-18T03:33:20.6518056Z processing existing schema: aten::multilabel_margin_loss.out(Tensor self, Tensor target, int reduction=1, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.6519968Z processing existing schema: quantized::conv3d_relu.new(Tensor qx, __torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.6522856Z processing existing schema: quantized::conv3d_relu(Tensor qx, __torch__.torch.classes.quantized.Conv3dPackedParamsBase weight, int[] stride, int[] padding, int[] dilation, int groups, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.6524851Z processing existing schema: aten::avg_pool2d_backward(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] stride, int[2] padding, bool ceil_mode, bool count_include_pad, int? divisor_override) -> (Tensor) 2022-05-18T03:33:20.6527457Z processing existing schema: aten::avg_pool2d_backward.grad_input(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] stride, int[2] padding, bool ceil_mode, bool count_include_pad, int? divisor_override, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.6530066Z processing existing schema: aten::triplet_margin_loss(Tensor anchor, Tensor positive, Tensor negative, float margin=1., float p=2., float eps=9.9999999999999995e-07, bool swap=False, int reduction=1) -> (Tensor) 2022-05-18T03:33:20.6531365Z processing existing schema: aten::_aminmax(Tensor self) -> (Tensor, Tensor) 2022-05-18T03:33:20.6532946Z processing existing schema: aten::_aminmax.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor, Tensor) 2022-05-18T03:33:20.6534977Z processing existing schema: aten::linalg_lstsq(Tensor self, Tensor b, float? rcond=None, *, str? driver=None) -> (Tensor solution, Tensor residuals, Tensor rank, Tensor singular_values) 2022-05-18T03:33:20.6538560Z processing existing schema: aten::linalg_lstsq.out(Tensor self, Tensor b, float? rcond=None, *, str? driver=None, Tensor(a!) solution, Tensor(b!) residuals, Tensor(c!) rank, Tensor(d!) singular_values) -> (Tensor(a!) solution, Tensor(b!) residuals, Tensor(c!) rank, Tensor(d!) singular_values) 2022-05-18T03:33:20.6540966Z processing existing schema: aten::zeros.names(int[] size, *, str[]? names, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.6543118Z processing existing schema: aten::zeros(int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.6545003Z processing existing schema: aten::zeros.out(int[] size, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6546722Z processing existing schema: aten::dist(Tensor self, Tensor other, Scalar p=2) -> (Tensor) 2022-05-18T03:33:20.6547840Z processing existing schema: aten::isdecimal(str self) -> (bool) 2022-05-18T03:33:20.6549499Z processing existing schema: aten::renorm(Tensor self, Scalar p, int dim, Scalar maxnorm) -> (Tensor) 2022-05-18T03:33:20.6551432Z processing existing schema: aten::renorm.out(Tensor self, Scalar p, int dim, Scalar maxnorm, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6552953Z processing existing schema: aten::softmax.int(Tensor self, int dim, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.6554681Z processing existing schema: aten::softmax.Dimname(Tensor self, str dim, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.6556501Z processing existing schema: aten::softmax.int_out(Tensor self, int dim, int? dtype=None, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.6558752Z processing existing schema: quantized::embedding_bag_4bit_prepack(Tensor weight, bool optimized_qparams=False, int nbins=200, float ratio=0.16) -> (Tensor) 2022-05-18T03:33:20.6560856Z processing existing schema: aten::block_diag(Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:20.6561682Z processing existing schema: aten::cumprod(Tensor self, int dim, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.6563420Z processing existing schema: aten::cumprod.dimname(Tensor self, str dim, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.6565368Z processing existing schema: aten::cumprod.dimname_out(Tensor self, str dim, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6567531Z processing existing schema: aten::cumprod.out(Tensor self, int dim, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6567794Z schema: static_runtime::expand_dims_copy(Tensor input, int[] dims) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.6569332Z processing existing schema: quantized::embedding_4bit(__torch__.torch.classes.quantized.EmbeddingPackedParamsBase weight, Tensor indices, bool pruned_weights=False) -> (Tensor) 2022-05-18T03:33:20.6570494Z processing existing schema: aten::bitwise_right_shift.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.6572319Z processing existing schema: aten::bitwise_right_shift.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6573665Z processing existing schema: aten::bitwise_right_shift.Tensor_Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.6575493Z processing existing schema: aten::bitwise_right_shift.Tensor_Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6576942Z processing existing schema: aten::bitwise_right_shift.Scalar_Tensor(Scalar self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.6578776Z processing existing schema: aten::upsample_linear1d(Tensor self, int[1] output_size, bool align_corners, float? scales=None) -> (Tensor) 2022-05-18T03:33:20.6580940Z processing existing schema: aten::upsample_linear1d.vec(Tensor input, int[]? output_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.6583057Z processing existing schema: aten::upsample_linear1d.out(Tensor self, int[1] output_size, bool align_corners, float? scales=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6584970Z processing existing schema: aten::norm.Scalar(Tensor self, Scalar p=2) -> (Tensor) 2022-05-18T03:33:20.6586452Z processing existing schema: aten::norm.ScalarOpt_dim(Tensor self, Scalar? p, int[1] dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.6588427Z processing existing schema: aten::norm.names_ScalarOpt_dim(Tensor self, Scalar? p, str[1] dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.6589997Z processing existing schema: aten::norm.ScalarOpt_dim_dtype(Tensor self, Scalar? p, int[1] dim, bool keepdim, *, int dtype) -> (Tensor) 2022-05-18T03:33:20.6592346Z processing existing schema: aten::norm.dtype_out(Tensor self, Scalar? p, int[1] dim, bool keepdim, *, int dtype, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6594313Z processing existing schema: aten::norm.out(Tensor self, Scalar? p, int[1] dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6596049Z processing existing schema: aten::norm.ScalarOpt_dtype(Tensor self, Scalar? 
p, *, int dtype) -> (Tensor) 2022-05-18T03:33:20.6597959Z processing existing schema: aten::norm.names_ScalarOpt_dim_dtype(Tensor self, Scalar? p, str[1] dim, bool keepdim, *, int dtype) -> (Tensor) 2022-05-18T03:33:20.6600282Z processing existing schema: aten::norm.names_dtype_out(Tensor self, Scalar? p, str[1] dim, bool keepdim, *, int dtype, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6602441Z processing existing schema: aten::norm.names_out(Tensor self, Scalar? p, str[1] dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6603858Z processing existing schema: aten::selu(Tensor self) -> (Tensor) 2022-05-18T03:33:20.6605595Z processing existing schema: aten::addcdiv(Tensor self, Tensor tensor1, Tensor tensor2, *, Scalar value=1) -> (Tensor) 2022-05-18T03:33:20.6607541Z processing existing schema: aten::addcdiv.out(Tensor self, Tensor tensor1, Tensor tensor2, *, Scalar value=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6609131Z processing existing schema: aten::index_copy(Tensor self, int dim, Tensor index, Tensor source) -> (Tensor) 2022-05-18T03:33:20.6610845Z processing existing schema: aten::index_copy.dimname(Tensor self, str dim, Tensor index, Tensor source) -> (Tensor) 2022-05-18T03:33:20.6612959Z processing existing schema: aten::index_copy.out(Tensor self, int dim, Tensor index, Tensor source, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6616035Z processing existing schema: quantized::conv3d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv3dPackedParamsBase) 2022-05-18T03:33:20.6618198Z processing existing schema: aten::binary_cross_entropy(Tensor self, Tensor target, Tensor? weight=None, int reduction=1) -> (Tensor) 2022-05-18T03:33:20.6620332Z processing existing schema: aten::binary_cross_entropy.out(Tensor self, Tensor target, Tensor? weight=None, int reduction=1, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6622135Z processing existing schema: aten::uniform_(Tensor(a!) self, float from=0., float to=1., *, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:20.6623509Z processing existing schema: aten::cross(Tensor self, Tensor other, int? dim=None) -> (Tensor) 2022-05-18T03:33:20.6625671Z processing existing schema: aten::cross.out(Tensor self, Tensor other, int? dim=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6627586Z processing existing schema: _quantized::conv3d(Tensor qx, __torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.6628827Z processing existing schema: aten::grid_sampler(Tensor input, Tensor grid, int interpolation_mode, int padding_mode, bool align_corners) -> (Tensor) 2022-05-18T03:33:20.6631032Z processing existing schema: sparse::qlinear_unpack(__torch__.torch.classes.sparse.LinearPackedParamsBase W_prepack) -> (Tensor W_origin, Tensor? B_origin, int[] block_pattern) 2022-05-18T03:33:20.6632649Z processing existing schema: aten::arcsinh_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.6634958Z processing existing schema: aten::tensordot(Tensor self, Tensor other, int[] dims_self, int[] dims_other) -> (Tensor) 2022-05-18T03:33:20.6637583Z processing existing schema: aten::tensordot.out(Tensor self, Tensor other, int[] dims_self, int[] dims_other, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.6639292Z processing existing schema: aten::scatter_add(Tensor self, int dim, Tensor index, Tensor src) -> (Tensor) 2022-05-18T03:33:20.6641351Z processing existing schema: aten::scatter_add.out(Tensor self, int dim, Tensor index, Tensor src, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6643010Z processing existing schema: aten::scatter_add.dimname(Tensor self, str dim, Tensor index, Tensor src) -> (Tensor) 2022-05-18T03:33:20.6646086Z processing existing schema: quantized::embedding_bag_4bit_rowwise_offsets(Tensor weight, Tensor indices, Tensor? offsets=None, bool scale_grad_by_freq=False, int mode=0, bool pruned_weights=False, Tensor? per_sample_weights=None, Tensor? compressed_indices_mapping=None, bool include_last_offset=False) -> (Tensor) 2022-05-18T03:33:20.6647293Z processing existing schema: aten::bitwise_xor.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.6649223Z processing existing schema: aten::bitwise_xor.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6651140Z processing existing schema: aten::bitwise_xor.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6652663Z processing existing schema: aten::bitwise_xor.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.6654282Z processing existing schema: aten::cummax(Tensor self, int dim) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.6656022Z processing existing schema: aten::cummax.dimname(Tensor self, str dim) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.6658633Z processing existing schema: aten::cummax.dimname_out(Tensor self, str dim, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.6661129Z processing existing schema: aten::cummax.out(Tensor self, int dim, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.6661386Z schema: static_runtime::permute_copy(Tensor self, int[] dims) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.6662816Z processing existing schema: aten::upsample_nearest1d(Tensor self, int[1] output_size, float? scales=None) -> (Tensor) 2022-05-18T03:33:20.6665072Z processing existing schema: aten::upsample_nearest1d.vec(Tensor input, int[]? output_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.6667121Z processing existing schema: aten::upsample_nearest1d.out(Tensor self, int[1] output_size, float? scales=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6668548Z processing existing schema: aten::cummin(Tensor self, int dim) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.6670026Z processing existing schema: aten::cummin.dimname(Tensor self, str dim) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.6672400Z processing existing schema: aten::cummin.dimname_out(Tensor self, str dim, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.6674609Z processing existing schema: aten::cummin.out(Tensor self, int dim, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.6675077Z schema: static_runtime::flatten_copy.using_ints(Tensor self, int start_dim=0, int end_dim=-1) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.6676668Z processing existing schema: aten::upsample_nearest2d(Tensor self, int[2] output_size, float? scales_h=None, float? 
scales_w=None) -> (Tensor) 2022-05-18T03:33:20.6678718Z processing existing schema: aten::upsample_nearest2d.vec(Tensor input, int[]? output_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.6681561Z processing existing schema: aten::upsample_nearest2d.out(Tensor self, int[2] output_size, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6682999Z processing existing schema: aten::cumprod_(Tensor(a!) self, int dim, *, int? dtype=None) -> (Tensor(a!)) 2022-05-18T03:33:20.6685090Z processing existing schema: aten::cumprod_.dimname(Tensor(a!) self, str dim, *, int? dtype=None) -> (Tensor(a!)) 2022-05-18T03:33:20.6685465Z schema: static_runtime::to_maybe_copy_out.prim_dtype(Tensor self, int? dtype=None, bool non_blocking=False, bool copy=False) -> (Tensor, bool) found on allowlist, skipping 2022-05-18T03:33:20.6685858Z schema: static_runtime::to_maybe_copy_out.dtype(Tensor self, int dtype, bool non_blocking=False, bool copy=False, int? memory_format=None) -> (Tensor, bool) found on allowlist, skipping 2022-05-18T03:33:20.6686260Z schema: static_runtime::to_maybe_copy_out.other(Tensor self, Tensor other, bool non_blocking=False, bool copy=False, int? memory_format=None) -> (Tensor, bool) found on allowlist, skipping 2022-05-18T03:33:20.6687578Z processing existing schema: aten::upsample_nearest3d(Tensor self, int[3] output_size, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:20.6689797Z processing existing schema: aten::upsample_nearest3d.vec(Tensor input, int[]? output_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.6692059Z processing existing schema: aten::upsample_nearest3d.out(Tensor self, int[3] output_size, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6695092Z processing existing schema: quantized::embedding_bag_4bit(__torch__.torch.classes.quantized.EmbeddingPackedParamsBase weight, Tensor indices, Tensor? offsets=None, bool scale_grad_by_freq=False, int mode=0, bool pruned_weights=False, Tensor? per_sample_weights=None, Tensor? compressed_indices_mapping=None, bool include_last_offset=False) -> (Tensor) 2022-05-18T03:33:20.6695477Z processing existing schema: aten::bitwise_or.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.6697461Z processing existing schema: aten::bitwise_or.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6699284Z processing existing schema: aten::bitwise_or.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6700326Z processing existing schema: aten::bitwise_or.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.6703836Z processing existing schema: aten::cudnn_convolution_transpose(Tensor self, Tensor weight, int[] padding, int[] output_padding, int[] stride, int[] dilation, int groups, bool benchmark, bool deterministic, bool allow_tf32) -> (Tensor) 2022-05-18T03:33:20.6705448Z processing existing schema: aten::get_gradients(int context_id) -> (Dict(Tensor, Tensor)) 2022-05-18T03:33:20.6707503Z processing existing schema: aten::upsample_bilinear2d(Tensor self, int[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:20.6709749Z processing existing schema: aten::upsample_bilinear2d.vec(Tensor input, int[]? output_size, bool align_corners, float[]? 
scale_factors) -> (Tensor) 2022-05-18T03:33:20.6712259Z processing existing schema: aten::upsample_bilinear2d.out(Tensor self, int[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6713330Z processing existing schema: quantized::embedding_bag_byte_unpack(Tensor weight) -> (Tensor) 2022-05-18T03:33:20.6715340Z processing existing schema: aten::broadcast_to(Tensor(a) self, int[] size) -> (Tensor(a)) 2022-05-18T03:33:20.6716958Z processing existing schema: aten::cumsum(Tensor self, int dim, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.6718659Z processing existing schema: aten::cumsum.dimname(Tensor self, str dim, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.6720854Z processing existing schema: aten::cumsum.dimname_out(Tensor self, str dim, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6722874Z processing existing schema: aten::cumsum.out(Tensor self, int dim, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6723321Z schema: static_runtime::layer_norm(Tensor input, int[] normalized_shape, Tensor? weight=None, Tensor? bias=None, float eps=1.0000000000000001e-05, bool cudnn_enable=True) -> (Tensor, Tensor, Tensor) found on allowlist, skipping 2022-05-18T03:33:20.6725499Z processing existing schema: aten::upsample_trilinear3d(Tensor self, int[3] output_size, bool align_corners, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:20.6727509Z processing existing schema: aten::upsample_trilinear3d.vec(Tensor input, int[]? output_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.6730025Z processing existing schema: aten::upsample_trilinear3d.out(Tensor self, int[3] output_size, bool align_corners, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6731487Z schema: aten::quantile(Tensor self, Tensor q, int? dim=None, bool keepdim=False, *, str interpolation="linear") -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.6733083Z schema: aten::quantile.scalar(Tensor self, float q, int? dim=None, bool keepdim=False, *, str interpolation="linear") -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.6734999Z schema: aten::quantile.out(Tensor self, Tensor q, int? dim=None, bool keepdim=False, *, str interpolation="linear", Tensor(a!) out) -> (Tensor(a!)) found on allowlist, skipping 2022-05-18T03:33:20.6736894Z schema: aten::quantile.scalar_out(Tensor self, float q, int? dim=None, bool keepdim=False, *, str interpolation="linear", Tensor(a!) out) -> (Tensor(a!)) found on allowlist, skipping 2022-05-18T03:33:20.6738455Z schema: aten::nanquantile(Tensor self, Tensor q, int? dim=None, bool keepdim=False, *, str interpolation="linear") -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.6740084Z schema: aten::nanquantile.scalar(Tensor self, float q, int? dim=None, bool keepdim=False, *, str interpolation="linear") -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.6742026Z schema: aten::nanquantile.out(Tensor self, Tensor q, int? dim=None, bool keepdim=False, *, str interpolation="linear", Tensor(a!) out) -> (Tensor(a!)) found on allowlist, skipping 2022-05-18T03:33:20.6743930Z schema: aten::nanquantile.scalar_out(Tensor self, float q, int? dim=None, bool keepdim=False, *, str interpolation="linear", Tensor(a!) 
out) -> (Tensor(a!)) found on allowlist, skipping 2022-05-18T03:33:20.6745563Z processing existing schema: aten::grid_sampler_3d(Tensor input, Tensor grid, int interpolation_mode, int padding_mode, bool align_corners) -> (Tensor) 2022-05-18T03:33:20.6747003Z processing existing schema: aten::replication_pad3d(Tensor self, int[6] padding) -> (Tensor) 2022-05-18T03:33:20.6748938Z processing existing schema: aten::replication_pad3d.out(Tensor self, int[6] padding, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6750267Z processing existing schema: aten::inverse(Tensor self) -> (Tensor) 2022-05-18T03:33:20.6751879Z processing existing schema: aten::inverse.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6753234Z processing existing schema: aten::sin(Tensor self) -> (Tensor) 2022-05-18T03:33:20.6754957Z processing existing schema: aten::sin.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6756312Z processing existing schema: aten::sin.int(int a) -> (float) 2022-05-18T03:33:20.6757982Z processing existing schema: aten::sin.float(float a) -> (float) 2022-05-18T03:33:20.6758764Z processing existing schema: aten::sin.complex(complex a) -> (complex) 2022-05-18T03:33:20.6761183Z processing existing schema: aten::sin.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.6761976Z processing existing schema: aten::matrix_rank(Tensor self, bool symmetric=False) -> (Tensor) 2022-05-18T03:33:20.6763851Z processing existing schema: aten::matrix_rank.tol(Tensor self, float tol, bool symmetric=False) -> (Tensor) 2022-05-18T03:33:20.6765425Z processing existing schema: aten::ormqr(Tensor self, Tensor input2, Tensor input3, bool left=True, bool transpose=False) -> (Tensor) 2022-05-18T03:33:20.6767525Z processing existing schema: aten::ormqr.out(Tensor self, Tensor input2, Tensor input3, bool left=True, bool transpose=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6769166Z processing existing schema: aten::pinverse(Tensor self, float rcond=1.0000000000000001e-15) -> (Tensor) 2022-05-18T03:33:20.6770780Z processing existing schema: aten::max_unpool2d(Tensor self, Tensor indices, int[2] output_size) -> (Tensor) 2022-05-18T03:33:20.6772728Z processing existing schema: aten::max_unpool2d.out(Tensor self, Tensor indices, int[2] output_size, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6774067Z processing existing schema: aten::reflection_pad1d(Tensor self, int[2] padding) -> (Tensor) 2022-05-18T03:33:20.6775866Z processing existing schema: aten::reflection_pad1d.out(Tensor self, int[2] padding, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6777308Z processing existing schema: aten::replication_pad1d(Tensor self, int[2] padding) -> (Tensor) 2022-05-18T03:33:20.6779346Z processing existing schema: aten::replication_pad1d.out(Tensor self, int[2] padding, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6781113Z processing existing schema: quantized::leaky_relu(Tensor qx, Scalar negative_slope, bool inplace, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.6782426Z processing existing schema: aten::col_indices_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:20.6784607Z processing existing schema: aten::elu(Tensor self, Scalar alpha=1, Scalar scale=1, Scalar input_scale=1) -> (Tensor) 2022-05-18T03:33:20.6786801Z processing existing schema: aten::elu.out(Tensor self, Scalar alpha=1, Scalar scale=1, Scalar input_scale=1, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.6788036Z processing existing schema: aten::capitalize(str self) -> (str) 2022-05-18T03:33:20.6790076Z processing existing schema: aten::unsafe_chunk(Tensor self, int chunks, int dim=0) -> (Tensor[]) 2022-05-18T03:33:20.6792035Z processing existing schema: aten::fft_ihfft(Tensor self, int? n=None, int dim=-1, str? norm=None) -> (Tensor) 2022-05-18T03:33:20.6794461Z processing existing schema: aten::fft_ihfft.out(Tensor self, int? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6797099Z processing existing schema: aten::linalg_matrix_norm(Tensor self, Scalar ord, int[] dim=[-2, -1], bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.6799889Z processing existing schema: aten::linalg_matrix_norm.str_ord(Tensor self, str ord="fro", int[] dim=[-2, -1], bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.6802667Z processing existing schema: aten::linalg_matrix_norm.out(Tensor self, Scalar ord, int[] dim=[-2, -1], bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6805652Z processing existing schema: aten::linalg_matrix_norm.str_ord_out(Tensor self, str ord="fro", int[] dim=[-2, -1], bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6807071Z processing existing schema: aten::linalg_cond(Tensor self, Scalar? p=None) -> (Tensor) 2022-05-18T03:33:20.6808680Z processing existing schema: aten::linalg_cond.p_str(Tensor self, str p) -> (Tensor) 2022-05-18T03:33:20.6810627Z processing existing schema: aten::linalg_cond.out(Tensor self, Scalar? p=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6812700Z processing existing schema: aten::linalg_cond.p_str_out(Tensor self, str p, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6814895Z processing existing schema: aten::_backward(Tensor self, Tensor[] inputs, Tensor? gradient=None, bool? retain_graph=None, bool create_graph=False) -> () 2022-05-18T03:33:20.6816512Z processing existing schema: aten::linalg_matrix_rank(Tensor self, float tol, bool hermitian=False) -> (Tensor) 2022-05-18T03:33:20.6818217Z processing existing schema: aten::linalg_matrix_rank.tol_tensor(Tensor input, Tensor tol, bool hermitian=False) -> (Tensor) 2022-05-18T03:33:20.6820132Z processing existing schema: aten::linalg_matrix_rank.atol_rtol_tensor(Tensor input, *, Tensor? atol=None, Tensor? rtol=None, bool hermitian=False) -> (Tensor) 2022-05-18T03:33:20.6822070Z processing existing schema: aten::linalg_matrix_rank.atol_rtol_float(Tensor self, *, float? atol=None, float? rtol=None, bool hermitian=False) -> (Tensor) 2022-05-18T03:33:20.6824527Z processing existing schema: aten::linalg_matrix_rank.atol_rtol_tensor_out(Tensor input, *, Tensor? atol=None, Tensor? rtol=None, bool hermitian=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6826836Z processing existing schema: aten::linalg_matrix_rank.atol_rtol_float_out(Tensor self, *, float? atol=None, float? rtol=None, bool hermitian=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6828801Z processing existing schema: aten::linalg_matrix_rank.out(Tensor self, float tol, bool hermitian=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6830869Z processing existing schema: aten::linalg_matrix_rank.out_tol_tensor(Tensor input, Tensor tol, bool hermitian=False, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.6832318Z processing existing schema: aten::linalg_svdvals(Tensor A) -> (Tensor) 2022-05-18T03:33:20.6834091Z processing existing schema: aten::linalg_svdvals.out(Tensor A, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6835553Z processing existing schema: aten::linalg_eigvals(Tensor self) -> (Tensor) 2022-05-18T03:33:20.6837284Z processing existing schema: aten::linalg_eigvals.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6839226Z processing existing schema: aten::_add_relu.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.6841421Z processing existing schema: aten::_add_relu.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6842790Z processing existing schema: aten::_add_relu.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.6844445Z processing existing schema: aten::linalg_eigvalsh(Tensor self, str UPLO="L") -> (Tensor) 2022-05-18T03:33:20.6846543Z processing existing schema: aten::linalg_eigvalsh.out(Tensor self, str UPLO="L", *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6848597Z processing existing schema: aten::_add_relu_.Tensor(Tensor(a!) self, Tensor other, *, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:20.6850405Z processing existing schema: aten::_add_relu_.Scalar(Tensor(a!) self, Scalar other, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:20.6851919Z processing existing schema: aten::linalg_householder_product(Tensor input, Tensor tau) -> (Tensor) 2022-05-18T03:33:20.6853864Z processing existing schema: aten::linalg_householder_product.out(Tensor input, Tensor tau, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6855590Z processing existing schema: aten::_cdist_backward(Tensor grad, Tensor x1, Tensor x2, float p, Tensor cdist) -> (Tensor) 2022-05-18T03:33:20.6857564Z processing existing schema: aten::linalg_tensorsolve(Tensor self, Tensor other, int[]? dims=None) -> (Tensor) 2022-05-18T03:33:20.6859975Z processing existing schema: aten::linalg_tensorsolve.out(Tensor self, Tensor other, int[]? dims=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6861787Z processing existing schema: aten::fake_quantize_per_tensor_affine(Tensor self, float scale, int zero_point, int quant_min, int quant_max) -> (Tensor) 2022-05-18T03:33:20.6863494Z processing existing schema: aten::fake_quantize_per_tensor_affine.tensor_qparams(Tensor self, Tensor scale, Tensor zero_point, int quant_min, int quant_max) -> (Tensor) 2022-05-18T03:33:20.6864939Z processing existing schema: aten::mathremainder.int(int a, int b) -> (float) 2022-05-18T03:33:20.6866485Z processing existing schema: aten::mathremainder.float(float a, float b) -> (float) 2022-05-18T03:33:20.6868053Z processing existing schema: aten::mathremainder.int_float(int a, float b) -> (float) 2022-05-18T03:33:20.6869493Z processing existing schema: aten::mathremainder.float_int(float a, int b) -> (float) 2022-05-18T03:33:20.6870963Z processing existing schema: aten::mathremainder(Scalar a, Scalar b) -> (float) 2022-05-18T03:33:20.6872525Z processing existing schema: aten::glu(Tensor self, int dim=-1) -> (Tensor) 2022-05-18T03:33:20.6874550Z processing existing schema: aten::glu.out(Tensor self, int dim=-1, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.6877647Z processing existing schema: quantized::max_pool2d(Tensor qx, int[] kernel_size, int[] stride, int[] padding, int[] dilation, bool ceil_mode) -> (Tensor) 2022-05-18T03:33:20.6879861Z processing existing schema: aten::col2im_backward(Tensor grad_output, int[2] kernel_size, int[2] dilation, int[2] padding, int[2] stride) -> (Tensor) 2022-05-18T03:33:20.6882317Z processing existing schema: aten::col2im_backward.grad_input(Tensor grad_output, int[2] kernel_size, int[2] dilation, int[2] padding, int[2] stride, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.6883817Z processing existing schema: aten::eig(Tensor self, bool eigenvectors=False) -> (Tensor eigenvalues, Tensor eigenvectors) 2022-05-18T03:33:20.6886294Z processing existing schema: aten::eig.e(Tensor self, bool eigenvectors=False, *, Tensor(a!) e, Tensor(b!) v) -> (Tensor(a!) eigenvalues, Tensor(b!) eigenvectors) 2022-05-18T03:33:20.6887554Z processing existing schema: aten::isupper(str self) -> (bool) 2022-05-18T03:33:20.6889031Z processing existing schema: aten::geqrf(Tensor self) -> (Tensor a, Tensor tau) 2022-05-18T03:33:20.6891284Z processing existing schema: aten::geqrf.a(Tensor self, *, Tensor(a!) a, Tensor(b!) tau) -> (Tensor(a!) a, Tensor(b!) tau) 2022-05-18T03:33:20.6894350Z processing existing schema: aten::_embedding_bag(Tensor weight, Tensor indices, Tensor offsets, bool scale_grad_by_freq=False, int mode=0, bool sparse=False, Tensor? per_sample_weights=None, bool include_last_offset=False, int padding_idx=-1) -> (Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:20.6895602Z processing existing schema: aten::lstsq(Tensor self, Tensor A) -> (Tensor solution, Tensor QR) 2022-05-18T03:33:20.6897984Z processing existing schema: aten::lstsq.X(Tensor self, Tensor A, *, Tensor(a!) X, Tensor(b!) qr) -> (Tensor(a!) solution, Tensor(b!) QR) 2022-05-18T03:33:20.6899510Z processing existing schema: aten::qr(Tensor self, bool some=True) -> (Tensor Q, Tensor R) 2022-05-18T03:33:20.6901933Z processing existing schema: aten::qr.Q(Tensor self, bool some=True, *, Tensor(a!) Q, Tensor(b!) R) -> (Tensor(a!) Q, Tensor(b!) R) 2022-05-18T03:33:20.6903870Z processing existing schema: quantized::conv1d_relu(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.6905118Z processing existing schema: aten::atleast_2d(Tensor self) -> (Tensor) 2022-05-18T03:33:20.6907135Z processing existing schema: aten::atleast_2d.Sequence(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.6909343Z processing existing schema: aten::triangular_solve(Tensor self, Tensor A, bool upper=True, bool transpose=False, bool unitriangular=False) -> (Tensor solution, Tensor cloned_coefficient) 2022-05-18T03:33:20.6912219Z processing existing schema: aten::triangular_solve.X(Tensor self, Tensor A, bool upper=True, bool transpose=False, bool unitriangular=False, *, Tensor(a!) X, Tensor(b!) M) -> (Tensor(a!) solution, Tensor(b!) cloned_coefficient) 2022-05-18T03:33:20.6913929Z processing existing schema: aten::fractional_max_pool3d(Tensor self, int[3] kernel_size, int[3] output_size, Tensor random_samples) -> (Tensor, Tensor) 2022-05-18T03:33:20.6916767Z processing existing schema: aten::fractional_max_pool3d.output(Tensor self, int[3] kernel_size, int[3] output_size, Tensor random_samples, *, Tensor(a!) output, Tensor(b!) 
indices) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:20.6918149Z processing existing schema: aten::adaptive_max_pool3d(Tensor self, int[3] output_size) -> (Tensor, Tensor) 2022-05-18T03:33:20.6920726Z processing existing schema: aten::adaptive_max_pool3d.out(Tensor self, int[3] output_size, *, Tensor(a!) out, Tensor(b!) indices) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:20.6922159Z processing existing schema: aten::linalg_eig(Tensor self) -> (Tensor eigenvalues, Tensor eigenvectors) 2022-05-18T03:33:20.6924465Z processing existing schema: aten::linalg_eig.out(Tensor self, *, Tensor(a!) eigenvalues, Tensor(b!) eigenvectors) -> (Tensor(a!) eigenvalues, Tensor(b!) eigenvectors) 2022-05-18T03:33:20.6926195Z processing existing schema: aten::_grid_sampler_2d_cpu_fallback(Tensor input, Tensor grid, int interpolation_mode, int padding_mode, bool align_corners) -> (Tensor) 2022-05-18T03:33:20.6927745Z processing existing schema: aten::native_dropout(Tensor input, float p, bool? train) -> (Tensor, Tensor) 2022-05-18T03:33:20.6929119Z processing existing schema: aten::_local_scalar_dense(Tensor self) -> (Scalar) 2022-05-18T03:33:20.6931579Z processing existing schema: aten::randn(int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.6934181Z processing existing schema: aten::randn.generator(int[] size, *, Generator? generator, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.6937014Z processing existing schema: aten::randn.names(int[] size, *, str[]? names, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.6940075Z processing existing schema: aten::randn.generator_with_names(int[] size, *, Generator? generator, str[]? names, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.6941992Z processing existing schema: aten::randn.out(int[] size, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6944322Z processing existing schema: aten::randn.generator_out(int[] size, *, Generator? generator, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6946227Z processing existing schema: aten::_sparse_log_softmax.Dimname(Tensor self, str dim, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.6947853Z processing existing schema: aten::_sparse_log_softmax.int(Tensor self, int dim, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.6949482Z processing existing schema: aten::_sparse_log_softmax(Tensor self, int dim, bool half_to_float) -> (Tensor) 2022-05-18T03:33:20.6952058Z processing existing schema: aten::_to_copy(Tensor self, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, bool non_blocking=False, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:20.6953511Z processing existing schema: aten::abs_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.6955166Z processing existing schema: aten::absolute_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.6956807Z processing existing schema: aten::rsqrt_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.6958221Z processing existing schema: aten::acosh(Tensor self) -> (Tensor) 2022-05-18T03:33:20.6960133Z processing existing schema: aten::acosh.out(Tensor self, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.6961498Z processing existing schema: aten::acosh.int(int a) -> (float) 2022-05-18T03:33:20.6962946Z processing existing schema: aten::acosh.float(float a) -> (float) 2022-05-18T03:33:20.6964390Z processing existing schema: aten::acosh.complex(complex a) -> (complex) 2022-05-18T03:33:20.6965815Z processing existing schema: aten::acosh.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.6967592Z processing existing schema: aten::rsub.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.6969280Z processing existing schema: aten::rsub.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.6971521Z processing existing schema: aten::acosh_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.6972120Z schema: aten::select_backward(Tensor grad_output, int[] input_sizes, int dim, int index) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.6973764Z processing existing schema: aten::add_.Scalar(Tensor(a!) self, Scalar other, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:20.6975602Z processing existing schema: aten::add_.Tensor(Tensor(a!) self, Tensor other, *, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:20.6977967Z processing existing schema: aten::add_.t(t[](a!) self, t[] b) -> (t[]) 2022-05-18T03:33:20.6978502Z schema: static_runtime::fused_equally_split(Tensor input, int num_split, int dim) -> (...) found on allowlist, skipping 2022-05-18T03:33:20.6981052Z processing existing schema: aten::set_.source_Storage_storage_offset(Tensor(a!) self, Storage source, int storage_offset, int[] size, int[] stride=[]) -> (Tensor(a!)) 2022-05-18T03:33:20.6982395Z processing existing schema: aten::set_.source_Tensor(Tensor(a!) self, Tensor source) -> (Tensor(a!)) 2022-05-18T03:33:20.6983946Z processing existing schema: aten::set_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.6985762Z processing existing schema: aten::set_.source_Storage(Tensor(a!) self, Storage source) -> (Tensor(a!)) 2022-05-18T03:33:20.6988659Z processing existing schema: aten::set_.source_Tensor_storage_offset(Tensor(a!) self, Tensor source, int storage_offset, int[] size, int[] stride=[]) -> (Tensor(a!)) 2022-05-18T03:33:20.6990046Z processing existing schema: aten::sigmoid(Tensor self) -> (Tensor) 2022-05-18T03:33:20.6991826Z processing existing schema: aten::sigmoid.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.6993353Z processing existing schema: aten::sin_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.6996414Z processing existing schema: aten::sparse_csr_tensor.crow_col_value_size(Tensor crow_indices, Tensor col_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:20.6998650Z processing existing schema: aten::sparse_csr_tensor.crow_col_value(Tensor crow_indices, Tensor col_indices, Tensor values, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:20.7000377Z processing existing schema: aten::_softmax_backward_data(Tensor grad_output, Tensor output, int dim, int input_dtype) -> (Tensor) 2022-05-18T03:33:20.7002330Z processing existing schema: aten::_softmax_backward_data.out(Tensor grad_output, Tensor output, int dim, int input_dtype, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7004610Z processing existing schema: aten::sspaddmm.out(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.7006426Z processing existing schema: aten::sspaddmm(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.7008204Z processing existing schema: aten::_stack(Tensor[] tensors, int dim=0) -> (Tensor) 2022-05-18T03:33:20.7010513Z processing existing schema: aten::_stack.out(Tensor[] tensors, int dim=0, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7012558Z processing existing schema: aten::nansum(Tensor self, int[1] dim=[], bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.7015044Z processing existing schema: aten::nansum.out(Tensor self, int[1] dim=[], bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7016731Z processing existing schema: aten::flip(Tensor self, int[] dims) -> (Tensor) 2022-05-18T03:33:20.7018593Z processing existing schema: aten::roll(Tensor self, int[1] shifts, int[1] dims=[]) -> (Tensor) 2022-05-18T03:33:20.7020258Z schema: aten::_transform_bias_rescale_qkv(Tensor qkv, Tensor qkv_bias, int num_heads) -> (Tensor, Tensor, Tensor) found on allowlist, skipping 2022-05-18T03:33:20.7021442Z schema: aten::_nested_tensor_from_mask(Tensor t, Tensor mask) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.7022941Z processing existing schema: aten::_nested_from_padded(Tensor padded, Tensor cpu_nested_shape_example, bool fuse_transform_0213=False) -> (Tensor) 2022-05-18T03:33:20.7024742Z processing existing schema: aten::_unique(Tensor self, bool sorted=True, bool return_inverse=False) -> (Tensor, Tensor) 2022-05-18T03:33:20.7026789Z processing existing schema: aten::unique_dim(Tensor self, int dim, bool sorted=True, bool return_inverse=False, bool return_counts=False) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.7028770Z processing existing schema: aten::unique_consecutive(Tensor self, bool return_inverse=False, bool return_counts=False, int? dim=None) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.7030599Z processing existing schema: aten::unique_dim_consecutive(Tensor self, int dim, bool return_inverse=False, bool return_counts=False) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.7032509Z processing existing schema: aten::_unique2(Tensor self, bool sorted=True, bool return_inverse=False, bool return_counts=False) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.7034095Z processing existing schema: aten::where.self(Tensor condition, Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.7035971Z processing existing schema: aten::where.self_out(Tensor condition, Tensor self, Tensor other, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.7037336Z processing existing schema: aten::where.ScalarSelf(Tensor condition, Scalar self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.7038874Z processing existing schema: aten::where.ScalarOther(Tensor condition, Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.7040519Z processing existing schema: aten::where.Scalar(Tensor condition, Scalar self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.7042026Z processing existing schema: aten::where(Tensor condition) -> (Tensor[]) 2022-05-18T03:33:20.7043608Z processing existing schema: aten::_weight_norm_interface(Tensor v, Tensor g, int dim=0) -> (Tensor, Tensor) 2022-05-18T03:33:20.7045359Z processing existing schema: aten::_weight_norm_interface_backward(Tensor grad_w, Tensor saved_v, Tensor saved_g, Tensor saved_norms, int dim) -> (Tensor, Tensor) 2022-05-18T03:33:20.7046711Z processing existing schema: aten::_standard_gamma_grad(Tensor self, Tensor output) -> (Tensor) 2022-05-18T03:33:20.7048150Z processing existing schema: aten::_standard_gamma(Tensor self, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:20.7049619Z processing existing schema: aten::_dirichlet_grad(Tensor x, Tensor alpha, Tensor total) -> (Tensor) 2022-05-18T03:33:20.7051046Z processing existing schema: aten::_sample_dirichlet(Tensor self, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:20.7053215Z processing existing schema: aten::frexp.Tensor_out(Tensor self, *, Tensor(a!) mantissa, Tensor(b!) exponent) -> (Tensor(a!) mantissa, Tensor(b!) exponent) 2022-05-18T03:33:20.7054630Z processing existing schema: aten::frexp.Tensor(Tensor self) -> (Tensor mantissa, Tensor exponent) 2022-05-18T03:33:20.7055894Z processing existing schema: aten::frexp(float a) -> (float, int) 2022-05-18T03:33:20.7057211Z processing existing schema: aten::heaviside(Tensor self, Tensor values) -> (Tensor) 2022-05-18T03:33:20.7058960Z processing existing schema: aten::heaviside.out(Tensor self, Tensor values, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7060591Z processing existing schema: aten::heaviside_(Tensor(a!) self, Tensor values) -> (Tensor(a!)) 2022-05-18T03:33:20.7062635Z processing existing schema: aten::_addmm_activation(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1, bool use_gelu=False) -> (Tensor) 2022-05-18T03:33:20.7065124Z processing existing schema: aten::_addmm_activation.out(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1, bool use_gelu=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7066529Z processing existing schema: aten::to_sparse.sparse_dim(Tensor self, int sparse_dim) -> (Tensor) 2022-05-18T03:33:20.7067518Z processing existing schema: aten::to_sparse(Tensor self) -> (Tensor) 2022-05-18T03:33:20.7068903Z processing existing schema: aten::to_sparse_csr(Tensor self) -> (Tensor) 2022-05-18T03:33:20.7070264Z processing existing schema: aten::to_sparse_csc(Tensor self) -> (Tensor) 2022-05-18T03:33:20.7072114Z processing existing schema: aten::to_sparse_bsr(Tensor self, int[2] blocksize) -> (Tensor) 2022-05-18T03:33:20.7073226Z processing existing schema: aten::to_sparse_bsc(Tensor self, int[2] blocksize) -> (Tensor) 2022-05-18T03:33:20.7074845Z processing existing schema: aten::to_mkldnn(Tensor self, int? 
dtype=None) -> (Tensor) 2022-05-18T03:33:20.7076110Z processing existing schema: aten::quantize_per_tensor_dynamic(Tensor self, int dtype, bool reduce_range) -> (Tensor) 2022-05-18T03:33:20.7077594Z processing existing schema: aten::quantize_per_tensor(Tensor self, float scale, int zero_point, int dtype) -> (Tensor) 2022-05-18T03:33:20.7079265Z processing existing schema: aten::quantize_per_tensor.tensor_qparams(Tensor self, Tensor scale, Tensor zero_point, int dtype) -> (Tensor) 2022-05-18T03:33:20.7081281Z processing existing schema: aten::quantize_per_tensor.tensors(Tensor[] tensors, Tensor scales, Tensor zero_points, int dtype) -> (Tensor[]) 2022-05-18T03:33:20.7082762Z processing existing schema: aten::quantize_per_channel(Tensor self, Tensor scales, Tensor zero_points, int axis, int dtype) -> (Tensor) 2022-05-18T03:33:20.7083981Z processing existing schema: aten::dequantize.self(Tensor self) -> (Tensor) 2022-05-18T03:33:20.7085779Z processing existing schema: aten::dequantize.tensors(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7087457Z processing existing schema: aten::dequantize.tensor(Tensor qtensor) -> (Tensor) 2022-05-18T03:33:20.7089100Z processing existing schema: aten::dequantize.list(Tensor[] qtensors) -> (Tensor[]) 2022-05-18T03:33:20.7090468Z processing existing schema: aten::dequantize.any(Any tensors) -> (Any) 2022-05-18T03:33:20.7092192Z processing existing schema: aten::Size(int[] sizes) -> (int[]) 2022-05-18T03:33:20.7093823Z processing existing schema: aten::_make_per_tensor_quantized_tensor(Tensor self, float scale, int zero_point) -> (Tensor) 2022-05-18T03:33:20.7095532Z processing existing schema: aten::_make_per_channel_quantized_tensor(Tensor self, Tensor scale, Tensor zero_point, int axis) -> (Tensor) 2022-05-18T03:33:20.7097396Z processing existing schema: aten::fake_quantize_per_tensor_affine_cachemask(Tensor self, float scale, int zero_point, int quant_min, int quant_max) -> (Tensor output, Tensor mask) 2022-05-18T03:33:20.7098623Z processing existing schema: aten::degrees.int(int a) -> (float) 2022-05-18T03:33:20.7100088Z processing existing schema: aten::degrees.float(float a) -> (float) 2022-05-18T03:33:20.7101463Z processing existing schema: aten::degrees.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.7103755Z processing existing schema: aten::_fake_quantize_per_tensor_affine_cachemask_tensor_qparams(Tensor self, Tensor scale, Tensor zero_point, Tensor fake_quant_enabled, int quant_min, int quant_max) -> (Tensor output, Tensor mask) 2022-05-18T03:33:20.7105495Z processing existing schema: aten::_fake_quantize_learnable_per_tensor_affine(Tensor self, Tensor scale, Tensor zero_point, int quant_min, int quant_max, float grad_factor=1.) -> (Tensor) 2022-05-18T03:33:20.7107496Z processing existing schema: aten::fake_quantize_per_channel_affine_cachemask(Tensor self, Tensor scale, Tensor zero_point, int axis, int quant_min, int quant_max) -> (Tensor output, Tensor mask) 2022-05-18T03:33:20.7109329Z processing existing schema: aten::remove.int(int[](a!) self, int el) -> () 2022-05-18T03:33:20.7111401Z processing existing schema: aten::remove.float(float[](a!) self, float el) -> () 2022-05-18T03:33:20.7113299Z processing existing schema: aten::remove.bool(bool[](a!) self, bool el) -> () 2022-05-18T03:33:20.7115326Z processing existing schema: aten::remove.Tensor(Tensor[](a!) self, Tensor el) -> () 2022-05-18T03:33:20.7117327Z processing existing schema: aten::remove.str(str[](a!) 
self, str el) -> () 2022-05-18T03:33:20.7119708Z processing existing schema: aten::_fake_quantize_learnable_per_channel_affine(Tensor self, Tensor scale, Tensor zero_point, int axis, int quant_min, int quant_max, float grad_factor=1.) -> (Tensor) 2022-05-18T03:33:20.7123204Z processing existing schema: aten::_fused_moving_avg_obs_fq_helper(Tensor self, Tensor observer_on, Tensor fake_quant_on, Tensor(a!) running_min, Tensor(b!) running_max, Tensor(c!) scale, Tensor(d!) zero_point, float averaging_const, int quant_min, int quant_max, int ch_axis, bool per_row_fake_quant=False, bool symmetric_quant=False) -> (Tensor output, Tensor mask) 2022-05-18T03:33:20.7124039Z processing existing schema: aten::is_set_to(Tensor self, Tensor tensor) -> (bool) 2022-05-18T03:33:20.7125841Z processing existing schema: aten::masked_scatter_(Tensor(a!) self, Tensor mask, Tensor source) -> (Tensor(a!)) 2022-05-18T03:33:20.7127496Z processing existing schema: aten::_masked_softmax(Tensor self, Tensor mask, int? dim=None) -> (Tensor) 2022-05-18T03:33:20.7129541Z processing existing schema: aten::_masked_softmax_backward(Tensor grad_output, Tensor output, Tensor mask, int? dim=None) -> (Tensor) 2022-05-18T03:33:20.7131183Z processing existing schema: aten::put_(Tensor(a!) self, Tensor index, Tensor source, bool accumulate=False) -> (Tensor(a!)) 2022-05-18T03:33:20.7132801Z processing existing schema: aten::index_add(Tensor self, int dim, Tensor index, Tensor source, *, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.7134917Z processing existing schema: aten::index_add.out(Tensor self, int dim, Tensor index, Tensor source, *, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7136681Z processing existing schema: aten::index_add.dimname(Tensor self, str dim, Tensor index, Tensor source, *, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.7138630Z processing existing schema: aten::index_add_(Tensor(a!) self, int dim, Tensor index, Tensor source, *, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:20.7140420Z processing existing schema: aten::index_reduce(Tensor self, int dim, Tensor index, Tensor source, str reduce, *, bool include_self=True) -> (Tensor) 2022-05-18T03:33:20.7142616Z processing existing schema: aten::index_reduce.out(Tensor self, int dim, Tensor index, Tensor source, str reduce, *, bool include_self=True, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7144785Z processing existing schema: aten::index_reduce_(Tensor(a!) self, int dim, Tensor index, Tensor source, str reduce, *, bool include_self=True) -> (Tensor(a!)) 2022-05-18T03:33:20.7146317Z processing existing schema: aten::scatter.src(Tensor self, int dim, Tensor index, Tensor src) -> (Tensor) 2022-05-18T03:33:20.7148207Z processing existing schema: aten::scatter.src_out(Tensor self, int dim, Tensor index, Tensor src, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7149738Z processing existing schema: aten::scatter.value(Tensor self, int dim, Tensor index, Scalar value) -> (Tensor) 2022-05-18T03:33:20.7151717Z processing existing schema: aten::scatter.value_out(Tensor self, int dim, Tensor index, Scalar value, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7153413Z processing existing schema: aten::scatter.reduce(Tensor self, int dim, Tensor index, Tensor src, *, str reduce) -> (Tensor) 2022-05-18T03:33:20.7155603Z processing existing schema: aten::scatter.reduce_out(Tensor self, int dim, Tensor index, Tensor src, *, str reduce, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.7157248Z processing existing schema: aten::scatter.value_reduce(Tensor self, int dim, Tensor index, Scalar value, *, str reduce) -> (Tensor) 2022-05-18T03:33:20.7159511Z processing existing schema: aten::scatter.value_reduce_out(Tensor self, int dim, Tensor index, Scalar value, *, str reduce, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7161132Z processing existing schema: aten::scatter.dimname_src(Tensor self, str dim, Tensor index, Tensor src) -> (Tensor) 2022-05-18T03:33:20.7162682Z processing existing schema: aten::scatter.dimname_value(Tensor self, str dim, Tensor index, Scalar value) -> (Tensor) 2022-05-18T03:33:20.7164525Z processing existing schema: aten::scatter_.src(Tensor(a!) self, int dim, Tensor index, Tensor src) -> (Tensor(a!)) 2022-05-18T03:33:20.7166278Z processing existing schema: aten::scatter_.value(Tensor(a!) self, int dim, Tensor index, Scalar value) -> (Tensor(a!)) 2022-05-18T03:33:20.7168245Z processing existing schema: aten::scatter_.reduce(Tensor(a!) self, int dim, Tensor index, Tensor src, *, str reduce) -> (Tensor(a!)) 2022-05-18T03:33:20.7170255Z processing existing schema: aten::scatter_.value_reduce(Tensor(a!) self, int dim, Tensor index, Scalar value, *, str reduce) -> (Tensor(a!)) 2022-05-18T03:33:20.7171989Z processing existing schema: aten::scatter_add_(Tensor(a!) self, int dim, Tensor index, Tensor src) -> (Tensor(a!)) 2022-05-18T03:33:20.7173921Z processing existing schema: aten::scatter_reduce.two(Tensor self, int dim, Tensor index, Tensor src, str reduce, *, bool include_self=True) -> (Tensor) 2022-05-18T03:33:20.7176117Z processing existing schema: aten::scatter_reduce.two_out(Tensor self, int dim, Tensor index, Tensor src, str reduce, *, bool include_self=True, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7178234Z processing existing schema: aten::scatter_reduce_.two(Tensor(a!) self, int dim, Tensor index, Tensor src, str reduce, *, bool include_self=True) -> (Tensor(a!)) 2022-05-18T03:33:20.7179781Z processing existing schema: aten::eq_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.7181404Z processing existing schema: aten::eq_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.7183318Z processing existing schema: aten::zfill(str self, int width) -> (str) 2022-05-18T03:33:20.7184282Z processing existing schema: aten::__lshift__.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.7185849Z processing existing schema: aten::__lshift__.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.7187177Z processing existing schema: aten::__lshift__.int(int a, int b) -> (int) 2022-05-18T03:33:20.7188761Z processing existing schema: aten::__ilshift__.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.7190420Z processing existing schema: aten::__ilshift__.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.7191776Z processing existing schema: aten::__rshift__.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.7193170Z processing existing schema: aten::__rshift__.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.7194393Z processing existing schema: aten::__rshift__.int(int a, int b) -> (int) 2022-05-18T03:33:20.7195991Z processing existing schema: aten::__irshift__.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.7197708Z processing existing schema: aten::__irshift__.Tensor(Tensor(a!) 
self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.7199201Z processing existing schema: aten::tril(Tensor self, int diagonal=0) -> (Tensor) 2022-05-18T03:33:20.7201041Z processing existing schema: aten::tril.out(Tensor self, int diagonal=0, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7202774Z processing existing schema: aten::tril_(Tensor(a!) self, int diagonal=0) -> (Tensor(a!)) 2022-05-18T03:33:20.7204382Z processing existing schema: aten::triu(Tensor self, int diagonal=0) -> (Tensor) 2022-05-18T03:33:20.7206285Z processing existing schema: aten::triu.out(Tensor self, int diagonal=0, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7207804Z processing existing schema: aten::triu_(Tensor(a!) self, int diagonal=0) -> (Tensor(a!)) 2022-05-18T03:33:20.7209292Z processing existing schema: aten::lerp.Scalar(Tensor self, Tensor end, Scalar weight) -> (Tensor) 2022-05-18T03:33:20.7211207Z processing existing schema: aten::lerp.Scalar_out(Tensor self, Tensor end, Scalar weight, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7212636Z processing existing schema: aten::lerp.Tensor(Tensor self, Tensor end, Tensor weight) -> (Tensor) 2022-05-18T03:33:20.7214389Z processing existing schema: aten::lerp.Tensor_out(Tensor self, Tensor end, Tensor weight, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7216087Z processing existing schema: aten::lerp_.Scalar(Tensor(a!) self, Tensor end, Scalar weight) -> (Tensor(a!)) 2022-05-18T03:33:20.7217754Z processing existing schema: aten::lerp_.Tensor(Tensor(a!) self, Tensor end, Tensor weight) -> (Tensor(a!)) 2022-05-18T03:33:20.7219776Z processing existing schema: aten::addbmm_(Tensor(a!) self, Tensor batch1, Tensor batch2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:20.7221372Z processing existing schema: aten::ne_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.7222954Z processing existing schema: aten::ne_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.7224700Z processing existing schema: aten::ge_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.7226329Z processing existing schema: aten::ge_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.7227985Z processing existing schema: aten::le_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.7229620Z processing existing schema: aten::le_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.7231218Z processing existing schema: aten::gt_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.7232759Z processing existing schema: aten::gt_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.7234449Z processing existing schema: aten::lt_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.7236116Z processing existing schema: aten::lt_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.7237580Z processing existing schema: aten::take(Tensor self, Tensor index) -> (Tensor) 2022-05-18T03:33:20.7239316Z processing existing schema: aten::take.out(Tensor self, Tensor index, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7240765Z processing existing schema: aten::index_select(Tensor self, int dim, Tensor index) -> (Tensor) 2022-05-18T03:33:20.7242635Z processing existing schema: aten::index_select.out(Tensor self, int dim, Tensor index, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.7244071Z processing existing schema: aten::index_select.dimname(Tensor self, str dim, Tensor index) -> (Tensor) 2022-05-18T03:33:20.7246427Z processing existing schema: aten::index_select.dimname_out(Tensor self, str dim, Tensor index, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7247231Z processing existing schema: aten::nonzero(Tensor self) -> (Tensor) 2022-05-18T03:33:20.7249092Z processing existing schema: aten::nonzero.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7250724Z processing existing schema: aten::gather(Tensor self, int dim, Tensor index, *, bool sparse_grad=False) -> (Tensor) 2022-05-18T03:33:20.7252729Z processing existing schema: aten::gather.out(Tensor self, int dim, Tensor index, *, bool sparse_grad=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7254647Z processing existing schema: aten::gather.dimname(Tensor self, str dim, Tensor index, *, bool sparse_grad=False) -> (Tensor) 2022-05-18T03:33:20.7256643Z processing existing schema: aten::gather.dimname_out(Tensor self, str dim, Tensor index, *, bool sparse_grad=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7258294Z processing existing schema: aten::_symeig_helper(Tensor self, bool eigenvectors, bool upper) -> (Tensor, Tensor) 2022-05-18T03:33:20.7259510Z processing existing schema: aten::_cholesky_solve_helper(Tensor self, Tensor A, bool upper) -> (Tensor) 2022-05-18T03:33:20.7261572Z processing existing schema: aten::lu_unpack(Tensor LU_data, Tensor LU_pivots, bool unpack_data=True, bool unpack_pivots=True) -> (Tensor P, Tensor L, Tensor U) 2022-05-18T03:33:20.7264745Z processing existing schema: aten::lu_unpack.out(Tensor LU_data, Tensor LU_pivots, bool unpack_data=True, bool unpack_pivots=True, *, Tensor(a!) P, Tensor(b!) L, Tensor(c!) U) -> (Tensor(a!) P, Tensor(b!) L, Tensor(c!) U) 2022-05-18T03:33:20.7266499Z processing existing schema: aten::histogram.bins_tensor(Tensor self, Tensor bins, *, Tensor? weight=None, bool density=False) -> (Tensor hist, Tensor bin_edges) 2022-05-18T03:33:20.7269329Z processing existing schema: aten::histogram.bins_tensor_out(Tensor self, Tensor bins, *, Tensor? weight=None, bool density=False, Tensor(a!) hist, Tensor(b!) bin_edges) -> (Tensor(a!) hist, Tensor(b!) bin_edges) 2022-05-18T03:33:20.7271651Z processing existing schema: aten::histogram.bin_ct(Tensor self, int bins=100, *, float[]? range=None, Tensor? weight=None, bool density=False) -> (Tensor hist, Tensor bin_edges) 2022-05-18T03:33:20.7274829Z processing existing schema: aten::histogram.bin_ct_out(Tensor self, int bins=100, *, float[]? range=None, Tensor? weight=None, bool density=False, Tensor(a!) hist, Tensor(b!) bin_edges) -> (Tensor(a!) hist, Tensor(b!) bin_edges) 2022-05-18T03:33:20.7277319Z processing existing schema: aten::_histogramdd_bin_edges(Tensor self, int[] bins, *, float[]? range=None, Tensor? weight=None, bool density=False) -> (Tensor[]) 2022-05-18T03:33:20.7279891Z processing existing schema: aten::_histogramdd_from_bin_cts(Tensor self, int[] bins, *, float[]? range=None, Tensor? weight=None, bool density=False) -> (Tensor) 2022-05-18T03:33:20.7281763Z processing existing schema: aten::_histogramdd_from_bin_tensors(Tensor self, Tensor[] bins, *, Tensor? weight=None, bool density=False) -> (Tensor) 2022-05-18T03:33:20.7283404Z processing existing schema: aten::fmod_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.7285168Z processing existing schema: aten::fmod_.Scalar(Tensor(a!) 
self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.7286689Z processing existing schema: aten::remainder.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.7288535Z processing existing schema: aten::remainder.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7289947Z processing existing schema: aten::remainder.Scalar_Tensor(Scalar self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.7291473Z processing existing schema: aten::remainder.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.7293295Z processing existing schema: aten::remainder.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7294714Z processing existing schema: aten::remainder.int(int a, int b) -> (int) 2022-05-18T03:33:20.7296366Z processing existing schema: aten::remainder.float(float a, float b) -> (float) 2022-05-18T03:33:20.7297776Z processing existing schema: aten::remainder.int_float(int a, float b) -> (float) 2022-05-18T03:33:20.7299109Z processing existing schema: aten::remainder.float_int(float a, int b) -> (float) 2022-05-18T03:33:20.7300780Z processing existing schema: aten::remainder(Scalar a, Scalar b) -> (Scalar) 2022-05-18T03:33:20.7302310Z processing existing schema: aten::remainder_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.7303832Z processing existing schema: aten::remainder_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.7305315Z processing existing schema: aten::fmin(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.7307077Z processing existing schema: aten::fmin.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7308566Z processing existing schema: aten::fmax(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.7310447Z processing existing schema: aten::fmax.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7311941Z processing existing schema: aten::maximum(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.7313989Z processing existing schema: aten::maximum.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7315434Z processing existing schema: aten::minimum(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.7317275Z processing existing schema: aten::minimum.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7319520Z processing existing schema: aten::sort.stable(Tensor self, *, bool? stable, int dim=-1, bool descending=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.7322252Z processing existing schema: aten::sort.values_stable(Tensor self, *, bool? stable, int dim=-1, bool descending=False, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.7323887Z processing existing schema: aten::sort(Tensor self, int dim=-1, bool descending=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.7326509Z processing existing schema: aten::sort.values(Tensor self, int dim=-1, bool descending=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.7328196Z processing existing schema: aten::sort.dimname(Tensor self, str dim, bool descending=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.7330893Z processing existing schema: aten::sort.dimname_values(Tensor self, str dim, bool descending=False, *, Tensor(a!) values, Tensor(b!) 
indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.7332882Z processing existing schema: aten::sort.dimname_stable(Tensor self, *, bool? stable, str dim, bool descending=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.7335772Z processing existing schema: aten::sort.dimname_values_stable(Tensor self, *, bool? stable, str dim, bool descending=False, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.7337601Z processing existing schema: aten::sort.int(int[](a!) self, bool reverse=False) -> () 2022-05-18T03:33:20.7339629Z processing existing schema: aten::sort.float(float[](a!) self, bool reverse=False) -> () 2022-05-18T03:33:20.7341744Z processing existing schema: aten::sort.Tensor(Tensor[](a!) self, bool reverse=False) -> () 2022-05-18T03:33:20.7343622Z processing existing schema: aten::sort.bool(bool[](a!) self, bool reverse=False) -> () 2022-05-18T03:33:20.7345823Z processing existing schema: aten::sort.str(str[](a!) self, bool reverse=False) -> () 2022-05-18T03:33:20.7348011Z processing existing schema: aten::sort.any(t[](a!) self, bool reverse=False) -> () 2022-05-18T03:33:20.7350162Z processing existing schema: aten::topk(Tensor self, int k, int dim=-1, bool largest=True, bool sorted=True) -> (Tensor values, Tensor indices) 2022-05-18T03:33:20.7353205Z processing existing schema: aten::topk.values(Tensor self, int k, int dim=-1, bool largest=True, bool sorted=True, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:20.7354807Z processing existing schema: aten::renorm_(Tensor(a!) self, Scalar p, int dim, Scalar maxnorm) -> (Tensor(a!)) 2022-05-18T03:33:20.7356936Z processing existing schema: aten::unfold_backward(Tensor grad_in, int[] input_sizes, int dim, int size, int step) -> (Tensor) 2022-05-18T03:33:20.7358973Z processing existing schema: aten::_foreach_add.Scalar(Tensor[] tensors, Scalar scalar) -> (Tensor[]) 2022-05-18T03:33:20.7361724Z processing existing schema: aten::_foreach_add.List(Tensor[] tensors1, Tensor[] tensors2, *, Scalar alpha=1) -> (Tensor[]) 2022-05-18T03:33:20.7363909Z processing existing schema: aten::_foreach_add.ScalarList(Tensor[] tensors, Scalar[] scalars) -> (Tensor[]) 2022-05-18T03:33:20.7365703Z processing existing schema: aten::_foreach_add_.Scalar(Tensor[] self, Scalar scalar) -> () 2022-05-18T03:33:20.7367947Z processing existing schema: aten::_foreach_add_.List(Tensor[] self, Tensor[] other, *, Scalar alpha=1) -> () 2022-05-18T03:33:20.7370097Z processing existing schema: aten::_foreach_add_.ScalarList(Tensor[] self, Scalar[] scalars) -> () 2022-05-18T03:33:20.7372117Z processing existing schema: aten::_foreach_sub.Scalar(Tensor[] tensors, Scalar scalar) -> (Tensor[]) 2022-05-18T03:33:20.7374525Z processing existing schema: aten::_foreach_sub.List(Tensor[] tensors1, Tensor[] tensors2, *, Scalar alpha=1) -> (Tensor[]) 2022-05-18T03:33:20.7376831Z processing existing schema: aten::_foreach_sub.ScalarList(Tensor[] tensors, Scalar[] scalars) -> (Tensor[]) 2022-05-18T03:33:20.7378567Z processing existing schema: aten::_foreach_sub_.Scalar(Tensor[] self, Scalar scalar) -> () 2022-05-18T03:33:20.7380829Z processing existing schema: aten::_foreach_sub_.List(Tensor[] self, Tensor[] other, *, Scalar alpha=1) -> () 2022-05-18T03:33:20.7382918Z processing existing schema: aten::_foreach_sub_.ScalarList(Tensor[] self, Scalar[] scalars) -> () 2022-05-18T03:33:20.7384913Z processing existing schema: aten::_foreach_mul.Scalar(Tensor[] 
tensors, Scalar scalar) -> (Tensor[]) 2022-05-18T03:33:20.7387282Z processing existing schema: aten::_foreach_mul.List(Tensor[] tensors1, Tensor[] tensors2) -> (Tensor[]) 2022-05-18T03:33:20.7389581Z processing existing schema: aten::_foreach_mul.ScalarList(Tensor[] tensors, Scalar[] scalars) -> (Tensor[]) 2022-05-18T03:33:20.7391315Z processing existing schema: aten::_foreach_mul_.Scalar(Tensor[] self, Scalar scalar) -> () 2022-05-18T03:33:20.7393308Z processing existing schema: aten::_foreach_mul_.List(Tensor[] self, Tensor[] other) -> () 2022-05-18T03:33:20.7395472Z processing existing schema: aten::_foreach_mul_.ScalarList(Tensor[] self, Scalar[] scalars) -> () 2022-05-18T03:33:20.7397512Z processing existing schema: aten::_foreach_div.Scalar(Tensor[] tensors, Scalar scalar) -> (Tensor[]) 2022-05-18T03:33:20.7400035Z processing existing schema: aten::_foreach_div.List(Tensor[] tensors1, Tensor[] tensors2) -> (Tensor[]) 2022-05-18T03:33:20.7402318Z processing existing schema: aten::_foreach_div.ScalarList(Tensor[] tensors, Scalar[] scalars) -> (Tensor[]) 2022-05-18T03:33:20.7404066Z processing existing schema: aten::_foreach_div_.Scalar(Tensor[] self, Scalar scalar) -> () 2022-05-18T03:33:20.7406032Z processing existing schema: aten::_foreach_div_.List(Tensor[] self, Tensor[] other) -> () 2022-05-18T03:33:20.7408108Z processing existing schema: aten::_foreach_div_.ScalarList(Tensor[] self, Scalar[] scalars) -> () 2022-05-18T03:33:20.7410022Z processing existing schema: aten::_foreach_exp(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7411662Z processing existing schema: aten::_foreach_zero_(Tensor[] self) -> () 2022-05-18T03:33:20.7413273Z processing existing schema: aten::_foreach_exp_(Tensor[] self) -> () 2022-05-18T03:33:20.7415259Z processing existing schema: aten::_foreach_sqrt(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7416841Z processing existing schema: aten::_foreach_sqrt_(Tensor[] self) -> () 2022-05-18T03:33:20.7418893Z processing existing schema: aten::_foreach_abs(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7420483Z processing existing schema: aten::_foreach_abs_(Tensor[] self) -> () 2022-05-18T03:33:20.7422462Z processing existing schema: aten::_foreach_acos(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7424045Z processing existing schema: aten::_foreach_acos_(Tensor[] self) -> () 2022-05-18T03:33:20.7426163Z processing existing schema: aten::_foreach_asin(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7427738Z processing existing schema: aten::_foreach_asin_(Tensor[] self) -> () 2022-05-18T03:33:20.7429800Z processing existing schema: aten::_foreach_atan(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7431339Z processing existing schema: aten::_foreach_atan_(Tensor[] self) -> () 2022-05-18T03:33:20.7433369Z processing existing schema: aten::_foreach_ceil(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7435018Z processing existing schema: aten::_foreach_ceil_(Tensor[] self) -> () 2022-05-18T03:33:20.7436954Z processing existing schema: aten::_foreach_cos(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7438681Z processing existing schema: aten::_foreach_cos_(Tensor[] self) -> () 2022-05-18T03:33:20.7440758Z processing existing schema: aten::_foreach_cosh(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7442308Z processing existing schema: aten::_foreach_cosh_(Tensor[] self) -> () 2022-05-18T03:33:20.7444308Z processing existing schema: aten::_foreach_erf(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7445897Z processing 
existing schema: aten::_foreach_erf_(Tensor[] self) -> () 2022-05-18T03:33:20.7447881Z processing existing schema: aten::_foreach_erfc(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7449479Z processing existing schema: aten::_foreach_erfc_(Tensor[] self) -> () 2022-05-18T03:33:20.7451479Z processing existing schema: aten::_foreach_expm1(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7453050Z processing existing schema: aten::_foreach_expm1_(Tensor[] self) -> () 2022-05-18T03:33:20.7455044Z processing existing schema: aten::_foreach_floor(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7456649Z processing existing schema: aten::_foreach_floor_(Tensor[] self) -> () 2022-05-18T03:33:20.7458666Z processing existing schema: aten::_foreach_log(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7460274Z processing existing schema: aten::_foreach_log_(Tensor[] self) -> () 2022-05-18T03:33:20.7462240Z processing existing schema: aten::_foreach_log10(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7463868Z processing existing schema: aten::_foreach_log10_(Tensor[] self) -> () 2022-05-18T03:33:20.7466011Z processing existing schema: aten::_foreach_log1p(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7467569Z processing existing schema: aten::_foreach_log1p_(Tensor[] self) -> () 2022-05-18T03:33:20.7469575Z processing existing schema: aten::_foreach_log2(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7471196Z processing existing schema: aten::_foreach_log2_(Tensor[] self) -> () 2022-05-18T03:33:20.7473186Z processing existing schema: aten::_foreach_neg(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7474818Z processing existing schema: aten::_foreach_neg_(Tensor[] self) -> () 2022-05-18T03:33:20.7476837Z processing existing schema: aten::_foreach_tan(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7478399Z processing existing schema: aten::_foreach_tan_(Tensor[] self) -> () 2022-05-18T03:33:20.7480515Z processing existing schema: aten::_foreach_tanh(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7482043Z processing existing schema: aten::_foreach_tanh_(Tensor[] self) -> () 2022-05-18T03:33:20.7484029Z processing existing schema: aten::_foreach_sin(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7485670Z processing existing schema: aten::_foreach_sin_(Tensor[] self) -> () 2022-05-18T03:33:20.7487628Z processing existing schema: aten::_foreach_sinh(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7489245Z processing existing schema: aten::_foreach_sinh_(Tensor[] self) -> () 2022-05-18T03:33:20.7491185Z processing existing schema: aten::_foreach_round(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7492773Z processing existing schema: aten::_foreach_round_(Tensor[] self) -> () 2022-05-18T03:33:20.7494770Z processing existing schema: aten::_foreach_lgamma(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7496367Z processing existing schema: aten::_foreach_lgamma_(Tensor[] self) -> () 2022-05-18T03:33:20.7498293Z processing existing schema: aten::_foreach_frac(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7499955Z processing existing schema: aten::_foreach_frac_(Tensor[] self) -> () 2022-05-18T03:33:20.7501906Z processing existing schema: aten::_foreach_reciprocal(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7503547Z processing existing schema: aten::_foreach_reciprocal_(Tensor[] self) -> () 2022-05-18T03:33:20.7505615Z processing existing schema: aten::_foreach_sigmoid(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7507305Z 
processing existing schema: aten::_foreach_sigmoid_(Tensor[] self) -> () 2022-05-18T03:33:20.7509419Z processing existing schema: aten::_foreach_trunc(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.7510961Z processing existing schema: aten::_foreach_trunc_(Tensor[] self) -> () 2022-05-18T03:33:20.7513644Z processing existing schema: aten::_foreach_addcdiv_.Scalar(Tensor[] self, Tensor[] tensor1, Tensor[] tensor2, Scalar value=1) -> () 2022-05-18T03:33:20.7516411Z processing existing schema: aten::_foreach_addcdiv_.ScalarList(Tensor[] self, Tensor[] tensor1, Tensor[] tensor2, Scalar[] scalars) -> () 2022-05-18T03:33:20.7518996Z processing existing schema: aten::_foreach_addcmul_.Scalar(Tensor[] self, Tensor[] tensor1, Tensor[] tensor2, Scalar value=1) -> () 2022-05-18T03:33:20.7521950Z processing existing schema: aten::_foreach_addcmul_.ScalarList(Tensor[] self, Tensor[] tensor1, Tensor[] tensor2, Scalar[] scalars) -> () 2022-05-18T03:33:20.7524718Z processing existing schema: aten::_foreach_addcdiv.Scalar(Tensor[] input, Tensor[] tensor1, Tensor[] tensor2, Scalar value=1) -> (Tensor[]) 2022-05-18T03:33:20.7527765Z processing existing schema: aten::_foreach_addcdiv.ScalarList(Tensor[] input, Tensor[] tensor1, Tensor[] tensor2, Scalar[] scalars) -> (Tensor[]) 2022-05-18T03:33:20.7530606Z processing existing schema: aten::_foreach_addcmul.Scalar(Tensor[] input, Tensor[] tensor1, Tensor[] tensor2, Scalar value=1) -> (Tensor[]) 2022-05-18T03:33:20.7533652Z processing existing schema: aten::_foreach_addcmul.ScalarList(Tensor[] input, Tensor[] tensor1, Tensor[] tensor2, Scalar[] scalars) -> (Tensor[]) 2022-05-18T03:33:20.7535898Z processing existing schema: aten::_foreach_maximum.List(Tensor[] tensors1, Tensor[] tensors2) -> (Tensor[]) 2022-05-18T03:33:20.7538243Z processing existing schema: aten::_foreach_minimum.List(Tensor[] tensors1, Tensor[] tensors2) -> (Tensor[]) 2022-05-18T03:33:20.7540358Z processing existing schema: aten::_foreach_norm.Scalar(Tensor[] tensors, Scalar ord=2) -> (Tensor[]) 2022-05-18T03:33:20.7542733Z processing existing schema: aten::searchsorted.Tensor(Tensor sorted_sequence, Tensor self, *, bool out_int32=False, bool right=False, str? side=None, Tensor? sorter=None) -> (Tensor) 2022-05-18T03:33:20.7545497Z processing existing schema: aten::searchsorted.Tensor_out(Tensor sorted_sequence, Tensor self, *, bool out_int32=False, bool right=False, str? side=None, Tensor? sorter=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7547666Z processing existing schema: aten::searchsorted.Scalar(Tensor sorted_sequence, Scalar self, *, bool out_int32=False, bool right=False, str? side=None, Tensor? sorter=None) -> (Tensor) 2022-05-18T03:33:20.7549500Z processing existing schema: aten::_convert_indices_from_coo_to_csr(Tensor self, int size, *, bool out_int32=False) -> (Tensor) 2022-05-18T03:33:20.7551436Z processing existing schema: aten::_convert_indices_from_coo_to_csr.out(Tensor self, int size, *, bool out_int32=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7553173Z processing existing schema: aten::_convert_indices_from_csr_to_coo(Tensor crow_indices, Tensor col_indices, *, bool out_int32=False, bool transpose=False) -> (Tensor) 2022-05-18T03:33:20.7555577Z processing existing schema: aten::_convert_indices_from_csr_to_coo.out(Tensor crow_indices, Tensor col_indices, *, bool out_int32=False, bool transpose=False, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.7557065Z processing existing schema: aten::mse_loss_backward(Tensor grad_output, Tensor self, Tensor target, int reduction) -> (Tensor) 2022-05-18T03:33:20.7559297Z processing existing schema: aten::mse_loss_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, int reduction, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7561334Z processing existing schema: aten::l1_loss_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, int reduction, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7562893Z processing existing schema: aten::l1_loss_backward(Tensor grad_output, Tensor self, Tensor target, int reduction) -> (Tensor) 2022-05-18T03:33:20.7565312Z processing existing schema: aten::multi_margin_loss_backward(Tensor grad_output, Tensor self, Tensor target, Scalar p, Scalar margin, Tensor? weight=None, int reduction=1) -> (Tensor) 2022-05-18T03:33:20.7567729Z processing existing schema: aten::multi_margin_loss_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, Scalar p, Scalar margin, Tensor? weight=None, int reduction=1, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7569388Z processing existing schema: aten::multilabel_margin_loss_backward(Tensor grad_output, Tensor self, Tensor target, int reduction, Tensor is_target) -> (Tensor) 2022-05-18T03:33:20.7571654Z processing existing schema: aten::multilabel_margin_loss_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, int reduction, Tensor is_target, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7573500Z processing existing schema: aten::nll_loss_forward(Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index) -> (Tensor output, Tensor total_weight) 2022-05-18T03:33:20.7576166Z processing existing schema: aten::nll_loss_forward.output(Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index, *, Tensor(a!) output, Tensor(b!) total_weight) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:20.7578041Z processing existing schema: aten::nll_loss_backward(Tensor grad_output, Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index, Tensor total_weight) -> (Tensor) 2022-05-18T03:33:20.7580483Z processing existing schema: aten::nll_loss_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index, Tensor total_weight, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7582293Z processing existing schema: aten::nll_loss2d_forward(Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index) -> (Tensor output, Tensor total_weight) 2022-05-18T03:33:20.7585049Z processing existing schema: aten::nll_loss2d_forward.output(Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index, *, Tensor(a!) output, Tensor(b!) total_weight) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:20.7587069Z processing existing schema: aten::nll_loss2d_backward(Tensor grad_output, Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index, Tensor total_weight) -> (Tensor) 2022-05-18T03:33:20.7589483Z processing existing schema: aten::nll_loss2d_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index, Tensor total_weight, *, Tensor(a!) 
grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7591618Z processing existing schema: aten::smooth_l1_loss_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, int reduction, float beta, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7593411Z processing existing schema: aten::smooth_l1_loss_backward(Tensor grad_output, Tensor self, Tensor target, int reduction, float beta) -> (Tensor) 2022-05-18T03:33:20.7595659Z processing existing schema: aten::huber_loss_backward.out(Tensor grad_output, Tensor self, Tensor target, int reduction, float delta, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7597257Z processing existing schema: aten::huber_loss_backward(Tensor grad_output, Tensor self, Tensor target, int reduction, float delta) -> (Tensor) 2022-05-18T03:33:20.7599649Z processing existing schema: aten::elu_(Tensor(a!) self, Scalar alpha=1, Scalar scale=1, Scalar input_scale=1) -> (Tensor(a!)) 2022-05-18T03:33:20.7601015Z processing existing schema: aten::title(str self) -> (str) 2022-05-18T03:33:20.7602929Z processing existing schema: aten::elu_backward(Tensor grad_output, Scalar alpha, Scalar scale, Scalar input_scale, bool is_result, Tensor self_or_result) -> (Tensor) 2022-05-18T03:33:20.7605474Z processing existing schema: aten::elu_backward.grad_input(Tensor grad_output, Scalar alpha, Scalar scale, Scalar input_scale, bool is_result, Tensor self_or_result, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7606774Z processing existing schema: aten::center(str self, int width, str fillchar=" ") -> (str) 2022-05-18T03:33:20.7608240Z processing existing schema: aten::glu_backward(Tensor grad_output, Tensor self, int dim) -> (Tensor) 2022-05-18T03:33:20.7610251Z processing existing schema: aten::glu_backward.grad_input(Tensor grad_output, Tensor self, int dim, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7611675Z processing existing schema: aten::glu_jvp(Tensor glu, Tensor x, Tensor dx, int dim) -> (Tensor) 2022-05-18T03:33:20.7613654Z processing existing schema: aten::hardsigmoid(Tensor self) -> (Tensor) 2022-05-18T03:33:20.7615011Z processing existing schema: aten::hardsigmoid.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7616456Z processing existing schema: aten::hardsigmoid_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.7618006Z processing existing schema: aten::hardsigmoid_backward(Tensor grad_output, Tensor self) -> (Tensor) 2022-05-18T03:33:20.7620051Z processing existing schema: aten::hardsigmoid_backward.grad_input(Tensor grad_output, Tensor self, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7621440Z processing existing schema: aten::hardtanh(Tensor self, Scalar min_val=-1, Scalar max_val=1) -> (Tensor) 2022-05-18T03:33:20.7623672Z processing existing schema: aten::hardtanh.out(Tensor self, Scalar min_val=-1, Scalar max_val=1, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7625695Z processing existing schema: aten::hardtanh_(Tensor(a!) self, Scalar min_val=-1, Scalar max_val=1) -> (Tensor(a!)) 2022-05-18T03:33:20.7627453Z processing existing schema: aten::hardtanh_backward(Tensor grad_output, Tensor self, Scalar min_val, Scalar max_val) -> (Tensor) 2022-05-18T03:33:20.7629586Z processing existing schema: aten::hardtanh_backward.grad_input(Tensor grad_output, Tensor self, Scalar min_val, Scalar max_val, *, Tensor(a!) 
grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7630630Z processing existing schema: aten::hardswish(Tensor self) -> (Tensor) 2022-05-18T03:33:20.7632864Z processing existing schema: aten::hardswish.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7634527Z processing existing schema: aten::hardswish_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.7635666Z processing existing schema: aten::hardswish_backward(Tensor grad_output, Tensor self) -> (Tensor) 2022-05-18T03:33:20.7637478Z processing existing schema: aten::leaky_relu(Tensor self, Scalar negative_slope=0.01) -> (Tensor) 2022-05-18T03:33:20.7639891Z processing existing schema: aten::leaky_relu.out(Tensor self, Scalar negative_slope=0.01, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7642172Z processing existing schema: aten::leaky_relu_(Tensor(a!) self, Scalar negative_slope=0.01) -> (Tensor(a!)) 2022-05-18T03:33:20.7644137Z processing existing schema: aten::leaky_relu_backward(Tensor grad_output, Tensor self, Scalar negative_slope, bool self_is_result) -> (Tensor) 2022-05-18T03:33:20.7646603Z processing existing schema: aten::leaky_relu_backward.grad_input(Tensor grad_output, Tensor self, Scalar negative_slope, bool self_is_result, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7648058Z processing existing schema: aten::log_sigmoid_forward(Tensor self) -> (Tensor output, Tensor buffer) 2022-05-18T03:33:20.7650525Z processing existing schema: aten::log_sigmoid_forward.output(Tensor self, *, Tensor(a!) output, Tensor(b!) buffer) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:20.7651966Z processing existing schema: aten::log_sigmoid_backward(Tensor grad_output, Tensor self, Tensor buffer) -> (Tensor) 2022-05-18T03:33:20.7654213Z processing existing schema: aten::log_sigmoid_backward.grad_input(Tensor grad_output, Tensor self, Tensor buffer, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7657219Z processing existing schema: aten::rrelu_with_noise(Tensor self, Tensor noise, Scalar lower=0.125, Scalar upper=0.33333333333333331, bool training=False, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:20.7660483Z processing existing schema: aten::rrelu_with_noise.out(Tensor self, Tensor noise, Scalar lower=0.125, Scalar upper=0.33333333333333331, bool training=False, Generator? generator=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7663653Z processing existing schema: aten::rrelu_with_noise_(Tensor(a!) self, Tensor noise, Scalar lower=0.125, Scalar upper=0.33333333333333331, bool training=False, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:20.7665252Z processing existing schema: aten::softplus_backward(Tensor grad_output, Tensor self, Scalar beta, Scalar threshold) -> (Tensor) 2022-05-18T03:33:20.7667741Z processing existing schema: aten::softplus_backward.grad_input(Tensor grad_output, Tensor self, Scalar beta, Scalar threshold, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7669497Z processing existing schema: aten::softshrink(Tensor self, Scalar lambd=0.5) -> (Tensor) 2022-05-18T03:33:20.7671961Z processing existing schema: aten::softshrink.out(Tensor self, Scalar lambd=0.5, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7673321Z processing existing schema: aten::softshrink_backward(Tensor grad_output, Tensor self, Scalar lambd) -> (Tensor) 2022-05-18T03:33:20.7675881Z processing existing schema: aten::softshrink_backward.grad_input(Tensor grad_output, Tensor self, Scalar lambd, *, Tensor(a!) 
grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7677992Z processing existing schema: aten::adaptive_avg_pool2d.out(Tensor self, int[2] output_size, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7679866Z processing existing schema: aten::adaptive_avg_pool2d(Tensor self, int[2] output_size) -> (Tensor) 2022-05-18T03:33:20.7681261Z processing existing schema: aten::_adaptive_avg_pool2d(Tensor self, int[2] output_size) -> (Tensor) 2022-05-18T03:33:20.7683136Z processing existing schema: aten::_adaptive_avg_pool2d_backward(Tensor grad_output, Tensor self) -> (Tensor) 2022-05-18T03:33:20.7685134Z processing existing schema: aten::_adaptive_avg_pool3d(Tensor self, int[3] output_size) -> (Tensor) 2022-05-18T03:33:20.7685651Z schema: aten::adaptive_avg_pool3d_backward.grad_input(Tensor grad_output, Tensor self, *, Tensor(a!) grad_input) -> (Tensor(a!)) found on allowlist, skipping 2022-05-18T03:33:20.7687597Z processing existing schema: aten::_adaptive_avg_pool3d_backward(Tensor grad_output, Tensor self) -> (Tensor) 2022-05-18T03:33:20.7689016Z processing existing schema: aten::adaptive_max_pool2d(Tensor self, int[2] output_size) -> (Tensor, Tensor) 2022-05-18T03:33:20.7691822Z processing existing schema: aten::adaptive_max_pool2d.out(Tensor self, int[2] output_size, *, Tensor(a!) out, Tensor(b!) indices) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:20.7693452Z processing existing schema: aten::adaptive_max_pool2d_backward(Tensor grad_output, Tensor self, Tensor indices) -> (Tensor) 2022-05-18T03:33:20.7695859Z processing existing schema: aten::adaptive_max_pool2d_backward.grad_input(Tensor grad_output, Tensor self, Tensor indices, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7697512Z processing existing schema: aten::adaptive_max_pool3d_backward(Tensor grad_output, Tensor self, Tensor indices) -> (Tensor) 2022-05-18T03:33:20.7699924Z processing existing schema: aten::adaptive_max_pool3d_backward.grad_input(Tensor grad_output, Tensor self, Tensor indices, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7700277Z schema: static_runtime::dict_unpack(...) -> (...) found on allowlist, skipping 2022-05-18T03:33:20.7702314Z processing existing schema: aten::fractional_max_pool2d_backward(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] output_size, Tensor indices) -> (Tensor) 2022-05-18T03:33:20.7705101Z processing existing schema: aten::fractional_max_pool2d_backward.grad_input(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] output_size, Tensor indices, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7707200Z processing existing schema: aten::fractional_max_pool3d_backward(Tensor grad_output, Tensor self, int[3] kernel_size, int[3] output_size, Tensor indices) -> (Tensor) 2022-05-18T03:33:20.7709690Z processing existing schema: aten::fractional_max_pool3d_backward.grad_input(Tensor grad_output, Tensor self, int[3] kernel_size, int[3] output_size, Tensor indices, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7712191Z processing existing schema: aten::max_pool2d_with_indices_backward(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] stride, int[2] padding, int[2] dilation, bool ceil_mode, Tensor indices) -> (Tensor) 2022-05-18T03:33:20.7715162Z processing existing schema: aten::max_pool2d_with_indices_backward.grad_input(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] stride, int[2] padding, int[2] dilation, bool ceil_mode, Tensor indices, *, Tensor(a!) 
grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7717580Z processing existing schema: aten::max_pool3d_with_indices_backward(Tensor grad_output, Tensor self, int[3] kernel_size, int[3] stride, int[3] padding, int[3] dilation, bool ceil_mode, Tensor indices) -> (Tensor) 2022-05-18T03:33:20.7720690Z processing existing schema: aten::max_pool3d_with_indices_backward.grad_input(Tensor grad_output, Tensor self, int[3] kernel_size, int[3] stride, int[3] padding, int[3] dilation, bool ceil_mode, Tensor indices, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7722281Z processing existing schema: aten::reflection_pad1d_backward(Tensor grad_output, Tensor self, int[2] padding) -> (Tensor) 2022-05-18T03:33:20.7724717Z processing existing schema: aten::reflection_pad1d_backward.grad_input(Tensor grad_output, Tensor self, int[2] padding, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7726453Z processing existing schema: aten::reflection_pad2d_backward(Tensor grad_output, Tensor self, int[4] padding) -> (Tensor) 2022-05-18T03:33:20.7728821Z processing existing schema: aten::reflection_pad2d_backward.grad_input(Tensor grad_output, Tensor self, int[4] padding, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7730333Z processing existing schema: aten::reflection_pad3d(Tensor self, int[6] padding) -> (Tensor) 2022-05-18T03:33:20.7732889Z processing existing schema: aten::reflection_pad3d.out(Tensor self, int[6] padding, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7734749Z processing existing schema: aten::reflection_pad3d_backward(Tensor grad_output, Tensor self, int[6] padding) -> (Tensor) 2022-05-18T03:33:20.7737142Z processing existing schema: aten::reflection_pad3d_backward.grad_input(Tensor grad_output, Tensor self, int[6] padding, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7738909Z processing existing schema: aten::replication_pad1d_backward(Tensor grad_output, Tensor self, int[2] padding) -> (Tensor) 2022-05-18T03:33:20.7741228Z processing existing schema: aten::replication_pad1d_backward.grad_input(Tensor grad_output, Tensor self, int[2] padding, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7742964Z processing existing schema: aten::replication_pad2d_backward(Tensor grad_output, Tensor self, int[4] padding) -> (Tensor) 2022-05-18T03:33:20.7745373Z processing existing schema: aten::replication_pad2d_backward.grad_input(Tensor grad_output, Tensor self, int[4] padding, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7747118Z processing existing schema: aten::replication_pad3d_backward(Tensor grad_output, Tensor self, int[6] padding) -> (Tensor) 2022-05-18T03:33:20.7749480Z processing existing schema: aten::replication_pad3d_backward.grad_input(Tensor grad_output, Tensor self, int[6] padding, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7752262Z processing existing schema: aten::upsample_nearest3d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.7754898Z processing existing schema: aten::upsample_nearest3d_backward(Tensor grad_output, int[3] output_size, int[5] input_size, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:20.7757741Z processing existing schema: aten::upsample_nearest3d_backward.grad_input(Tensor grad_output, int[3] output_size, int[5] input_size, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) 
grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7760624Z processing existing schema: aten::_upsample_nearest_exact3d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.7762976Z processing existing schema: aten::_upsample_nearest_exact3d_backward(Tensor grad_output, int[3] output_size, int[5] input_size, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:20.7765860Z processing existing schema: aten::_upsample_nearest_exact3d_backward.grad_input(Tensor grad_output, int[3] output_size, int[5] input_size, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7767969Z processing existing schema: aten::upsample_linear1d_backward(Tensor grad_output, int[1] output_size, int[3] input_size, bool align_corners, float? scales=None) -> (Tensor) 2022-05-18T03:33:20.7770633Z processing existing schema: aten::upsample_linear1d_backward.grad_input(Tensor grad_output, int[1] output_size, int[3] input_size, bool align_corners, float? scales=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7773443Z processing existing schema: aten::upsample_linear1d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.7775820Z processing existing schema: aten::upsample_bilinear2d_backward(Tensor grad_output, int[2] output_size, int[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:20.7778642Z processing existing schema: aten::upsample_bilinear2d_backward.grad_input(Tensor grad_output, int[2] output_size, int[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7781443Z processing existing schema: aten::upsample_bilinear2d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.7783598Z processing existing schema: aten::_upsample_bilinear2d_aa(Tensor self, int[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:20.7786372Z processing existing schema: aten::_upsample_bilinear2d_aa.out(Tensor self, int[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7788832Z processing existing schema: aten::_upsample_bilinear2d_aa.vec(Tensor input, int[]? output_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.7791259Z processing existing schema: aten::_upsample_bilinear2d_aa_backward(Tensor grad_output, int[2] output_size, int[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:20.7794086Z processing existing schema: aten::_upsample_bilinear2d_aa_backward.grad_input(Tensor grad_output, int[2] output_size, int[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7796856Z processing existing schema: aten::_upsample_bilinear2d_aa_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.7798984Z processing existing schema: aten::upsample_bicubic2d(Tensor self, int[2] output_size, bool align_corners, float? 
scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:20.7801843Z processing existing schema: aten::upsample_bicubic2d.out(Tensor self, int[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7804218Z processing existing schema: aten::upsample_bicubic2d.vec(Tensor input, int[]? output_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.7806690Z processing existing schema: aten::upsample_bicubic2d_backward(Tensor grad_output, int[2] output_size, int[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:20.7809514Z processing existing schema: aten::upsample_bicubic2d_backward.grad_input(Tensor grad_output, int[2] output_size, int[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7812327Z processing existing schema: aten::upsample_bicubic2d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.7814458Z processing existing schema: aten::_upsample_bicubic2d_aa(Tensor self, int[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:20.7817086Z processing existing schema: aten::_upsample_bicubic2d_aa.out(Tensor self, int[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7819564Z processing existing schema: aten::_upsample_bicubic2d_aa.vec(Tensor input, int[]? output_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.7821945Z processing existing schema: aten::_upsample_bicubic2d_aa_backward(Tensor grad_output, int[2] output_size, int[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:20.7824942Z processing existing schema: aten::_upsample_bicubic2d_aa_backward.grad_input(Tensor grad_output, int[2] output_size, int[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7827694Z processing existing schema: aten::_upsample_bicubic2d_aa_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.7830250Z processing existing schema: aten::upsample_trilinear3d_backward(Tensor grad_output, int[3] output_size, int[5] input_size, bool align_corners, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:20.7833272Z processing existing schema: aten::upsample_trilinear3d_backward.grad_input(Tensor grad_output, int[3] output_size, int[5] input_size, bool align_corners, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7836412Z processing existing schema: aten::upsample_trilinear3d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.7838633Z processing existing schema: aten::upsample_nearest1d_backward(Tensor grad_output, int[1] output_size, int[3] input_size, float? 
scales=None) -> (Tensor) 2022-05-18T03:33:20.7841103Z processing existing schema: aten::upsample_nearest1d_backward.grad_input(Tensor grad_output, int[1] output_size, int[3] input_size, float? scales=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7843703Z processing existing schema: aten::upsample_nearest1d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.7845793Z processing existing schema: aten::_upsample_nearest_exact1d_backward(Tensor grad_output, int[1] output_size, int[3] input_size, float? scales=None) -> (Tensor) 2022-05-18T03:33:20.7848339Z processing existing schema: aten::_upsample_nearest_exact1d_backward.grad_input(Tensor grad_output, int[1] output_size, int[3] input_size, float? scales=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7850661Z processing existing schema: aten::_upsample_nearest_exact1d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.7852347Z processing existing schema: aten::upsample_nearest2d_backward(Tensor grad_output, int[2] output_size, int[4] input_size, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:20.7854696Z processing existing schema: aten::upsample_nearest2d_backward.grad_input(Tensor grad_output, int[2] output_size, int[4] input_size, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7857190Z processing existing schema: aten::upsample_nearest2d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.7859008Z processing existing schema: aten::_upsample_nearest_exact2d_backward(Tensor grad_output, int[2] output_size, int[4] input_size, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:20.7861681Z processing existing schema: aten::_upsample_nearest_exact2d_backward.grad_input(Tensor grad_output, int[2] output_size, int[4] input_size, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7864055Z processing existing schema: aten::_upsample_nearest_exact2d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:20.7865412Z processing existing schema: aten::logit_backward(Tensor grad_output, Tensor self, float? eps=None) -> (Tensor) 2022-05-18T03:33:20.7867369Z processing existing schema: aten::logit_backward.grad_input(Tensor grad_output, Tensor self, float? eps=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7868578Z processing existing schema: aten::tanh_backward(Tensor grad_output, Tensor output) -> (Tensor) 2022-05-18T03:33:20.7870477Z processing existing schema: aten::tanh_backward.grad_input(Tensor grad_output, Tensor output, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7873679Z processing existing schema: aten::slow_conv_transpose2d(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias=None, int[2] stride=[1, 1], int[2] padding=[0, 0], int[2] output_padding=[0, 0], int[2] dilation=[1, 1]) -> (Tensor) 2022-05-18T03:33:20.7877164Z processing existing schema: aten::slow_conv_transpose2d.out(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias=None, int[2] stride=[1, 1], int[2] padding=[0, 0], int[2] output_padding=[0, 0], int[2] dilation=[1, 1], *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.7880552Z processing existing schema: aten::slow_conv_transpose3d(Tensor self, Tensor weight, int[3] kernel_size, Tensor? bias=None, int[3] stride=[1, 1, 1], int[3] padding=[0, 0, 0], int[3] output_padding=[0, 0, 0], int[3] dilation=[1, 1, 1]) -> (Tensor) 2022-05-18T03:33:20.7884151Z processing existing schema: aten::slow_conv_transpose3d.out(Tensor self, Tensor weight, int[3] kernel_size, Tensor? bias=None, int[3] stride=[1, 1, 1], int[3] padding=[0, 0, 0], int[3] output_padding=[0, 0, 0], int[3] dilation=[1, 1, 1], *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7886028Z processing existing schema: aten::_slow_conv2d_forward(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias, int[2] stride, int[2] padding) -> (Tensor) 2022-05-18T03:33:20.7888411Z processing existing schema: aten::_slow_conv2d_forward.output(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias, int[2] stride, int[2] padding, *, Tensor(a!) output) -> (Tensor(a!)) 2022-05-18T03:33:20.7891751Z processing existing schema: aten::_slow_conv2d_backward.grad_input(Tensor grad_output, Tensor self, Tensor weight, int[2] kernel_size, int[2] stride, int[2] padding, *, Tensor(a!) grad_input, Tensor(b!) grad_weight, Tensor(c!) grad_bias) -> (Tensor(a!), Tensor(b!), Tensor(c!)) 2022-05-18T03:33:20.7894059Z processing existing schema: aten::_slow_conv2d_backward.output_mask(Tensor grad_output, Tensor self, Tensor weight, int[2] kernel_size, int[2] stride, int[2] padding, bool[3] output_mask) -> (Tensor grad_input, Tensor grad_weight, Tensor grad_bias) 2022-05-18T03:33:20.7895819Z processing existing schema: aten::slow_conv3d_forward(Tensor self, Tensor weight, int[3] kernel_size, Tensor? bias, int[3] stride, int[3] padding) -> (Tensor) 2022-05-18T03:33:20.7898235Z processing existing schema: aten::slow_conv3d_forward.output(Tensor self, Tensor weight, int[3] kernel_size, Tensor? bias, int[3] stride, int[3] padding, *, Tensor(a!) output) -> (Tensor(a!)) 2022-05-18T03:33:20.7900976Z processing existing schema: aten::slow_conv_dilated2d(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias=None, int[2] stride=[1, 1], int[2] padding=[0, 0], int[2] dilation=[1, 1]) -> (Tensor) 2022-05-18T03:33:20.7903825Z processing existing schema: aten::slow_conv_dilated3d(Tensor self, Tensor weight, int[3] kernel_size, Tensor? bias=None, int[3] stride=[1, 1, 1], int[3] padding=[0, 0, 0], int[3] dilation=[1, 1, 1]) -> (Tensor) 2022-05-18T03:33:20.7905698Z processing existing schema: aten::im2col(Tensor self, int[2] kernel_size, int[2] dilation, int[2] padding, int[2] stride) -> (Tensor) 2022-05-18T03:33:20.7908106Z processing existing schema: aten::im2col.out(Tensor self, int[2] kernel_size, int[2] dilation, int[2] padding, int[2] stride, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7910224Z processing existing schema: aten::im2col_backward(Tensor grad_output, int[2] input_size, int[2] kernel_size, int[2] dilation, int[2] padding, int[2] stride) -> (Tensor) 2022-05-18T03:33:20.7912811Z processing existing schema: aten::im2col_backward.grad_input(Tensor grad_output, int[2] input_size, int[2] kernel_size, int[2] dilation, int[2] padding, int[2] stride, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.7914162Z processing existing schema: aten::isposinf(Tensor self) -> (Tensor) 2022-05-18T03:33:20.7915755Z processing existing schema: aten::isposinf.out(Tensor self, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.7917009Z processing existing schema: aten::isneginf(Tensor self) -> (Tensor) 2022-05-18T03:33:20.7918663Z processing existing schema: aten::isneginf.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7920229Z processing existing schema: aten::special_entr(Tensor self) -> (Tensor) 2022-05-18T03:33:20.7922003Z processing existing schema: aten::special_entr.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7923476Z processing existing schema: aten::special_ndtri(Tensor self) -> (Tensor) 2022-05-18T03:33:20.7925174Z processing existing schema: aten::special_ndtri.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7926493Z processing existing schema: aten::special_log_ndtr(Tensor self) -> (Tensor) 2022-05-18T03:33:20.7928274Z processing existing schema: aten::special_log_ndtr.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7929733Z processing existing schema: aten::special_erfcx(Tensor self) -> (Tensor) 2022-05-18T03:33:20.7931588Z processing existing schema: aten::special_erfcx.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7933582Z processing existing schema: aten::special_xlog1py(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.7934709Z processing existing schema: aten::special_xlog1py.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7936150Z processing existing schema: aten::special_xlog1py.self_scalar(Scalar self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.7937733Z processing existing schema: aten::special_xlog1py.self_scalar_out(Scalar self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7939320Z processing existing schema: aten::special_xlog1py.other_scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.7941069Z processing existing schema: aten::special_xlog1py.other_scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7942341Z processing existing schema: aten::special_zeta(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.7943881Z processing existing schema: aten::special_zeta.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7945375Z processing existing schema: aten::special_zeta.self_scalar(Scalar self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.7947132Z processing existing schema: aten::special_zeta.self_scalar_out(Scalar self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7948740Z processing existing schema: aten::special_zeta.other_scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.7950336Z processing existing schema: aten::special_zeta.other_scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7951497Z processing existing schema: aten::special_i0e(Tensor self) -> (Tensor) 2022-05-18T03:33:20.7953210Z processing existing schema: aten::special_i0e.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7954289Z processing existing schema: aten::special_i1(Tensor self) -> (Tensor) 2022-05-18T03:33:20.7955980Z processing existing schema: aten::special_i1.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7957357Z processing existing schema: aten::special_i1e(Tensor self) -> (Tensor) 2022-05-18T03:33:20.7958991Z processing existing schema: aten::special_i1e.out(Tensor self, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.7960855Z processing existing schema: aten::linalg_cross(Tensor self, Tensor other, *, int dim=-1) -> (Tensor) 2022-05-18T03:33:20.7963172Z processing existing schema: aten::linalg_cross.out(Tensor self, Tensor other, *, int dim=-1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7964425Z processing existing schema: aten::linalg_lu_factor_ex(Tensor A, *, bool pivot=True, bool check_errors=False) -> (Tensor LU, Tensor pivots, Tensor info) 2022-05-18T03:33:20.7967199Z processing existing schema: aten::linalg_lu_factor_ex.out(Tensor A, *, bool pivot=True, bool check_errors=False, Tensor(a!) LU, Tensor(b!) pivots, Tensor(c!) info) -> (Tensor(a!) LU, Tensor(b!) pivots, Tensor(c!) info) 2022-05-18T03:33:20.7968541Z processing existing schema: aten::linalg_lu(Tensor A, *, bool pivot=True) -> (Tensor P, Tensor L, Tensor U) 2022-05-18T03:33:20.7971293Z processing existing schema: aten::linalg_lu.out(Tensor A, *, bool pivot=True, Tensor(a!) P, Tensor(b!) L, Tensor(c!) U) -> (Tensor(a!) P, Tensor(b!) L, Tensor(c!) U) 2022-05-18T03:33:20.7972579Z processing existing schema: aten::_det_lu_based_helper(Tensor self) -> (Tensor det, Tensor lu, Tensor pivs) 2022-05-18T03:33:20.7974041Z processing existing schema: aten::_det_lu_based_helper_backward_helper(Tensor det_grad, Tensor det, Tensor self, Tensor lu, Tensor pivs) -> (Tensor) 2022-05-18T03:33:20.7975691Z processing existing schema: aten::linalg_ldl_factor_ex(Tensor self, *, bool hermitian=False, bool check_errors=False) -> (Tensor LD, Tensor pivots, Tensor info) 2022-05-18T03:33:20.7978511Z processing existing schema: aten::linalg_ldl_factor_ex.out(Tensor self, *, bool hermitian=False, bool check_errors=False, Tensor(a!) LD, Tensor(b!) pivots, Tensor(c!) info) -> (Tensor(a!) LD, Tensor(b!) pivots, Tensor(c!) info) 2022-05-18T03:33:20.7979882Z processing existing schema: aten::linalg_ldl_solve(Tensor LD, Tensor pivots, Tensor B, *, bool hermitian=False) -> (Tensor) 2022-05-18T03:33:20.7981788Z processing existing schema: aten::linalg_ldl_solve.out(Tensor LD, Tensor pivots, Tensor B, *, bool hermitian=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7982890Z processing existing schema: aten::linalg_matrix_exp(Tensor self) -> (Tensor) 2022-05-18T03:33:20.7984338Z processing existing schema: aten::linalg_slogdet(Tensor self) -> (Tensor sign, Tensor logabsdet) 2022-05-18T03:33:20.7986630Z processing existing schema: aten::linalg_slogdet.out(Tensor self, *, Tensor(a!) sign, Tensor(b!) logabsdet) -> (Tensor(a!) sign, Tensor(b!) logabsdet) 2022-05-18T03:33:20.7988601Z processing existing schema: aten::_linalg_inv_out_helper_(Tensor(a!) self, Tensor(b!) infos_lu, Tensor(c!) infos_getri) -> (Tensor(a!)) 2022-05-18T03:33:20.7990610Z processing existing schema: aten::linalg_vector_norm(Tensor self, Scalar ord=2, int[1]? dim=None, bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.7993076Z processing existing schema: aten::linalg_vector_norm.out(Tensor self, Scalar ord=2, int[1]? dim=None, bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.7994730Z processing existing schema: aten::_linalg_svd(Tensor A, bool full_matrices=False, bool compute_uv=True) -> (Tensor U, Tensor S, Tensor Vh) 2022-05-18T03:33:20.7997600Z processing existing schema: aten::_linalg_svd.U(Tensor A, bool full_matrices=False, bool compute_uv=True, *, Tensor(a!) U, Tensor(b!) S, Tensor(c!) Vh) -> (Tensor(a!) U, Tensor(b!) S, Tensor(c!) 
Vh) 2022-05-18T03:33:20.7998862Z processing existing schema: aten::_linalg_qr_helper(Tensor self, str mode) -> (Tensor, Tensor) 2022-05-18T03:33:20.8000665Z processing existing schema: aten::_test_optional_intlist(Tensor values, int[]? addends) -> (Tensor) 2022-05-18T03:33:20.8002057Z processing existing schema: aten::_test_optional_filled_intlist(Tensor values, int[2]? addends) -> (Tensor) 2022-05-18T03:33:20.8003678Z processing existing schema: aten::_test_optional_floatlist(Tensor values, float[]? addends) -> (Tensor) 2022-05-18T03:33:20.8006278Z processing existing schema: aten::segment_reduce(Tensor data, str reduce, *, Tensor? lengths=None, Tensor? indices=None, int axis=0, bool unsafe=False, Scalar? initial=None) -> (Tensor) 2022-05-18T03:33:20.8007901Z processing existing schema: aten::_segment_reduce_backward(Tensor grad, Tensor output, Tensor data, str reduce, *, Tensor? lengths=None, int axis=0) -> (Tensor) 2022-05-18T03:33:20.8011055Z processing existing schema: aten::_transformer_encoder_layer_fwd(Tensor src, int embed_dim, int num_heads, Tensor qkv_weight, Tensor qkv_bias, Tensor proj_weight, Tensor proj_bias, bool use_gelu, bool norm_first, float eps, Tensor norm_weight_1, Tensor norm_bias_1, Tensor norm_weight_2, Tensor norm_bias_2, Tensor ffn_weight_1, Tensor ffn_bias_1, Tensor ffn_weight_2, Tensor ffn_bias_2, Tensor? mask=None) -> (Tensor) 2022-05-18T03:33:20.8013272Z processing existing schema: aten::_native_multi_head_attention(Tensor query, Tensor key, Tensor value, int embed_dim, int num_head, Tensor qkv_weight, Tensor qkv_bias, Tensor proj_weight, Tensor proj_bias, Tensor? mask=None, bool need_weights=True, bool average_attn_weights=True) -> (Tensor, Tensor) 2022-05-18T03:33:20.8014534Z processing existing schema: aten::_neg_view(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.8016292Z processing existing schema: aten::diag_embed(Tensor self, int offset=0, int dim1=-2, int dim2=-1) -> (Tensor) 2022-05-18T03:33:20.8018357Z processing existing schema: aten::extend.t(t[](a!) self, t[] other) -> () 2022-05-18T03:33:20.8020332Z processing existing schema: aten::embedding(Tensor weight, Tensor indices, int padding_idx=-1, bool scale_grad_by_freq=False, bool sparse=False) -> (Tensor) 2022-05-18T03:33:20.8021801Z processing existing schema: aten::count(str self, str substr, int start=0, int end=-1) -> (int) 2022-05-18T03:33:20.8023307Z processing existing schema: aten::count.int(int[] self, int el) -> (int) 2022-05-18T03:33:20.8024915Z processing existing schema: aten::count.float(float[] self, float el) -> (int) 2022-05-18T03:33:20.8026524Z processing existing schema: aten::count.bool(bool[] self, bool el) -> (int) 2022-05-18T03:33:20.8028183Z processing existing schema: aten::count.Tensor(Tensor[] self, Tensor el) -> (int) 2022-05-18T03:33:20.8029820Z processing existing schema: aten::count.str(str[] self, str el) -> (int) 2022-05-18T03:33:20.8031289Z processing existing schema: aten::fill.Scalar(Tensor self, Scalar value) -> (Tensor) 2022-05-18T03:33:20.8032682Z processing existing schema: aten::fill.Tensor(Tensor self, Tensor value) -> (Tensor) 2022-05-18T03:33:20.8035235Z processing existing schema: aten::index_put_(Tensor(a!) self, Tensor?[] indices, Tensor values, bool accumulate=False) -> (Tensor(a!)) 2022-05-18T03:33:20.8037229Z processing existing schema: aten::index_put_.hacked_twin(Tensor(a!) self, Tensor[] indices, Tensor values, bool accumulate=False) -> (Tensor(a!)) 2022-05-18T03:33:20.8039363Z processing existing schema: aten::nan_to_num_(Tensor(a!) 
self, float? nan=None, float? posinf=None, float? neginf=None) -> (Tensor(a!)) 2022-05-18T03:33:20.8040501Z processing existing schema: aten::logdet(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8043095Z processing existing schema: aten::mkldnn_convolution(Tensor self, Tensor weight, Tensor? bias, int[] padding, int[] stride, int[] dilation, int groups) -> (Tensor) 2022-05-18T03:33:20.8044605Z processing existing schema: aten::mvlgamma_(Tensor(a!) self, int p) -> (Tensor(a!)) 2022-05-18T03:33:20.8046656Z processing existing schema: aten::_nnpack_spatial_convolution(Tensor input, Tensor weight, Tensor? bias, int[2] padding, int[2] stride=[1, 1]) -> (Tensor) 2022-05-18T03:33:20.8047635Z processing existing schema: aten::_euclidean_dist(Tensor x1, Tensor x2) -> (Tensor) 2022-05-18T03:33:20.8049281Z processing existing schema: aten::repeat(Tensor self, int[] repeats) -> (Tensor) 2022-05-18T03:33:20.8051279Z processing existing schema: aten::slice_scatter(Tensor self, Tensor src, int dim=0, int? start=None, int? end=None, int step=1) -> (Tensor) 2022-05-18T03:33:20.8053099Z processing existing schema: aten::select_scatter(Tensor self, Tensor src, int dim, int index) -> (Tensor) 2022-05-18T03:33:20.8054802Z processing existing schema: aten::diagonal_scatter(Tensor self, Tensor src, int offset=0, int dim1=0, int dim2=1) -> (Tensor) 2022-05-18T03:33:20.8055398Z processing existing schema: aten::isdigit(str self) -> (bool) 2022-05-18T03:33:20.8057066Z processing existing schema: aten::slogdet(Tensor self) -> (Tensor sign, Tensor logabsdet) 2022-05-18T03:33:20.8059102Z processing existing schema: aten::rot90(Tensor self, int k=1, int[] dims=[0, 1]) -> (Tensor) 2022-05-18T03:33:20.8062597Z processing existing schema: aten::_trilinear(Tensor i1, Tensor i2, Tensor i3, int[] expand1, int[] expand2, int[] expand3, int[] sumdim, int unroll_dim=1) -> (Tensor) 2022-05-18T03:33:20.8064052Z processing existing schema: aten::_sparse_sum.dim(Tensor self, int[1] dim) -> (Tensor) 2022-05-18T03:33:20.8065476Z processing existing schema: aten::_sparse_sum(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8067083Z processing existing schema: aten::_sparse_sum.dtype(Tensor self, *, int dtype) -> (Tensor) 2022-05-18T03:33:20.8068711Z processing existing schema: aten::_sparse_sum.dim_dtype(Tensor self, int[1] dim, *, int dtype) -> (Tensor) 2022-05-18T03:33:20.8069035Z schema: aten::_sparse_addmm(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.8070891Z processing existing schema: aten::_pack_padded_sequence(Tensor input, Tensor lengths, bool batch_first) -> (Tensor, Tensor) 2022-05-18T03:33:20.8072295Z processing existing schema: aten::masked_scatter(Tensor self, Tensor mask, Tensor source) -> (Tensor) 2022-05-18T03:33:20.8073775Z processing existing schema: aten::_linalg_check_errors(Tensor info, str api_name, *, bool is_matrix) -> () 2022-05-18T03:33:20.8075289Z processing existing schema: aten::soft_margin_loss_backward(Tensor grad_output, Tensor self, Tensor target, int reduction) -> (Tensor) 2022-05-18T03:33:20.8077311Z processing existing schema: aten::soft_margin_loss_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, int reduction, *, Tensor(a!) 
grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.8079345Z processing existing schema: aten::rrelu_with_noise_backward(Tensor grad_output, Tensor self, Tensor noise, Scalar lower, Scalar upper, bool training, bool self_is_result) -> (Tensor) 2022-05-18T03:33:20.8081115Z processing existing schema: aten::linalg_pinv.atol_rtol_tensor(Tensor self, *, Tensor? atol=None, Tensor? rtol=None, bool hermitian=False) -> (Tensor) 2022-05-18T03:33:20.8083341Z processing existing schema: aten::linalg_pinv.atol_rtol_tensor_out(Tensor self, *, Tensor? atol=None, Tensor? rtol=None, bool hermitian=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8085203Z processing existing schema: aten::linalg_pinv.atol_rtol_float(Tensor self, *, float? atol=None, float? rtol=None, bool hermitian=False) -> (Tensor) 2022-05-18T03:33:20.8087428Z processing existing schema: aten::linalg_pinv.atol_rtol_float_out(Tensor self, *, float? atol=None, float? rtol=None, bool hermitian=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8088883Z processing existing schema: aten::linalg_pinv(Tensor self, float rcond, bool hermitian=False) -> (Tensor) 2022-05-18T03:33:20.8090879Z processing existing schema: aten::linalg_pinv.out(Tensor self, float rcond, bool hermitian=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8092429Z processing existing schema: aten::linalg_pinv.rcond_tensor(Tensor self, Tensor rcond, bool hermitian=False) -> (Tensor) 2022-05-18T03:33:20.8094476Z processing existing schema: aten::linalg_pinv.out_rcond_tensor(Tensor self, Tensor rcond, bool hermitian=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8095818Z processing existing schema: aten::_test_warn_in_autograd(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8097368Z processing existing schema: aten::_fw_primal_copy(Tensor self, int level) -> (Tensor) 2022-05-18T03:33:20.8098845Z processing existing schema: aten::_make_dual_copy(Tensor primal, Tensor tangent, int level) -> (Tensor) 2022-05-18T03:33:20.8099958Z processing existing schema: aten::view_as_real_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8101332Z processing existing schema: aten::view_as_complex_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8102363Z processing existing schema: aten::_conj_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8103663Z processing existing schema: aten::_neg_view_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8105551Z processing existing schema: aten::_sparse_broadcast_to_copy(Tensor self, int[] size) -> (Tensor) 2022-05-18T03:33:20.8107113Z processing existing schema: aten::diagonal_copy(Tensor self, int offset=0, int dim1=0, int dim2=1) -> (Tensor) 2022-05-18T03:33:20.8108401Z processing existing schema: prim::data(Tensor(a) a) -> (Tensor(a)) 2022-05-18T03:33:20.8110238Z processing existing schema: aten::expand_copy(Tensor self, int[] size, *, bool implicit=False) -> (Tensor) 2022-05-18T03:33:20.8112159Z processing existing schema: aten::expand_copy.SymInt(Tensor self, SymInt[] size, *, bool implicit=False) -> (Tensor) 2022-05-18T03:33:20.8113397Z processing existing schema: prim::is_quantized(Tensor a) -> (bool) 2022-05-18T03:33:20.8115236Z processing existing schema: aten::permute_copy(Tensor self, int[] dims) -> (Tensor) 2022-05-18T03:33:20.8117397Z processing existing schema: aten::_reshape_alias_copy(Tensor self, int[] size, int[] stride) -> (Tensor) 2022-05-18T03:33:20.8118782Z processing existing schema: aten::select_copy.int(Tensor self, int dim, int index) -> (Tensor) 2022-05-18T03:33:20.8120158Z processing 
existing schema: aten::detach_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8121489Z processing existing schema: aten::storage_offset(Tensor self) -> (int) 2022-05-18T03:33:20.8123495Z processing existing schema: aten::slice_copy.Tensor(Tensor self, int dim=0, int? start=None, int? end=None, int step=1) -> (Tensor) 2022-05-18T03:33:20.8125542Z processing existing schema: aten::split_copy.Tensor(Tensor self, int split_size, int dim=0) -> (Tensor[]) 2022-05-18T03:33:20.8127638Z processing existing schema: aten::split_with_sizes_copy(Tensor self, int[] split_sizes, int dim=0) -> (Tensor[]) 2022-05-18T03:33:20.8128560Z processing existing schema: aten::squeeze_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8130523Z processing existing schema: aten::squeeze_copy.dim(Tensor self, int dim) -> (Tensor) 2022-05-18T03:33:20.8131429Z processing existing schema: aten::t_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8133310Z processing existing schema: aten::transpose_copy.int(Tensor self, int dim0, int dim1) -> (Tensor) 2022-05-18T03:33:20.8134296Z processing existing schema: aten::unsqueeze_copy(Tensor self, int dim) -> (Tensor) 2022-05-18T03:33:20.8135847Z processing existing schema: aten::_indices_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8137092Z processing existing schema: aten::_values_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8138308Z processing existing schema: aten::indices_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8139768Z processing existing schema: aten::values_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8141016Z processing existing schema: aten::crow_indices_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8142310Z processing existing schema: aten::row_indices_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8144053Z processing existing schema: aten::unbind_copy.int(Tensor self, int dim=0) -> (Tensor[]) 2022-05-18T03:33:20.8145914Z processing existing schema: aten::view_copy(Tensor self, int[] size) -> (Tensor) 2022-05-18T03:33:20.8147156Z processing existing schema: aten::view_copy.dtype(Tensor self, int dtype) -> (Tensor) 2022-05-18T03:33:20.8148590Z processing existing schema: aten::unfold_copy(Tensor self, int dimension, int size, int step) -> (Tensor) 2022-05-18T03:33:20.8150060Z processing existing schema: aten::_cast_Byte(Tensor self, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:20.8151363Z processing existing schema: aten::_cast_Char(Tensor self, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:20.8152930Z processing existing schema: aten::_cast_Double(Tensor self, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:20.8154271Z processing existing schema: aten::_cast_Float(Tensor self, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:20.8155647Z processing existing schema: aten::_cast_Int(Tensor self, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:20.8156962Z processing existing schema: aten::_cast_Long(Tensor self, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:20.8158434Z processing existing schema: aten::_cast_Short(Tensor self, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:20.8159915Z processing existing schema: aten::_cast_Half(Tensor self, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:20.8161155Z processing existing schema: aten::retains_grad(Tensor self) -> (bool) 2022-05-18T03:33:20.8162861Z processing existing schema: aten::_unpack_dual(Tensor(a) dual, int level) -> (Tensor(a) primal, Tensor tangent) 2022-05-18T03:33:20.8163913Z processing existing schema: 
aten::_use_cudnn_rnn_flatten_weight() -> (bool) 2022-05-18T03:33:20.8165226Z processing existing schema: aten::_debug_has_internal_overlap(Tensor self) -> (int) 2022-05-18T03:33:20.8167220Z processing existing schema: aten::_sobol_engine_draw(Tensor quasi, int n, Tensor sobolstate, int dimension, int num_generated, int? dtype) -> (Tensor, Tensor) 2022-05-18T03:33:20.8169145Z processing existing schema: aten::_sobol_engine_ff_(Tensor(a!) self, int n, Tensor sobolstate, int dimension, int num_generated) -> (Tensor(a!)) 2022-05-18T03:33:20.8170832Z processing existing schema: aten::_sobol_engine_scramble_(Tensor(a!) self, Tensor ltm, int dimension) -> (Tensor(a!)) 2022-05-18T03:33:20.8172229Z processing existing schema: aten::_sobol_engine_initialize_state_(Tensor(a!) self, int dimension) -> (Tensor(a!)) 2022-05-18T03:33:20.8173544Z processing existing schema: aten::_reshape_from_tensor(Tensor self, Tensor shape) -> (Tensor) 2022-05-18T03:33:20.8174895Z processing existing schema: aten::_shape_as_tensor(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8176277Z processing existing schema: aten::feature_dropout(Tensor input, float p, bool train) -> (Tensor) 2022-05-18T03:33:20.8178289Z processing existing schema: aten::feature_dropout_(Tensor(a!) self, float p, bool train) -> (Tensor(a!)) 2022-05-18T03:33:20.8179714Z processing existing schema: aten::feature_alpha_dropout(Tensor input, float p, bool train) -> (Tensor) 2022-05-18T03:33:20.8181316Z processing existing schema: aten::feature_alpha_dropout_(Tensor(a!) self, float p, bool train) -> (Tensor(a!)) 2022-05-18T03:33:20.8182638Z processing existing schema: aten::arccos_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.8184227Z processing existing schema: aten::adaptive_avg_pool1d(Tensor self, int[1] output_size) -> (Tensor) 2022-05-18T03:33:20.8186190Z processing existing schema: aten::adaptive_max_pool1d(Tensor self, int[1] output_size) -> (Tensor, Tensor) 2022-05-18T03:33:20.8187537Z processing existing schema: aten::_dim_arange(Tensor like, int dim) -> (Tensor) 2022-05-18T03:33:20.8190338Z processing existing schema: aten::_batch_norm_impl_index(Tensor input, Tensor? weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool training, float momentum, float eps, bool cudnn_enabled) -> (Tensor, Tensor, Tensor, Tensor, int) 2022-05-18T03:33:20.8193100Z processing existing schema: aten::_batch_norm_impl_index_backward(int impl_index, Tensor input, Tensor grad_output, Tensor? weight, Tensor? running_mean, Tensor? running_var, Tensor? save_mean, Tensor? save_var_transform, bool train, float eps, bool[3] output_mask, Tensor reservedSpace) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.8193945Z processing existing schema: aten::cudnn_is_acceptable(Tensor self) -> (bool) 2022-05-18T03:33:20.8195404Z processing existing schema: profiler::_record_function_exit(Tensor _0) -> () 2022-05-18T03:33:20.8196924Z processing existing schema: profiler::_record_function_exit._RecordFunction(__torch__.torch.classes.profiler._RecordFunction _0) -> () 2022-05-18T03:33:20.8199504Z processing existing schema: aten::_convolution_mode(Tensor input, Tensor weight, Tensor? bias, int[] stride, str padding, int[] dilation, int groups) -> (Tensor) 2022-05-18T03:33:20.8203589Z processing existing schema: aten::_convolution_double_backward(Tensor? ggI, Tensor? ggW, Tensor? 
ggb, Tensor gO, Tensor weight, Tensor self, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool[3] output_mask) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.8204755Z processing existing schema: aten::cummaxmin_backward(Tensor grad, Tensor input, Tensor indices, int dim) -> (Tensor) 2022-05-18T03:33:20.8205066Z schema: static_runtime::reshape_copy(Tensor self, int[] shape) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.8206677Z processing existing schema: aten::cumprod_backward(Tensor grad, Tensor input, int dim, Tensor output) -> (Tensor) 2022-05-18T03:33:20.8207268Z schema: static_runtime::to_copy.prim_dtype(Tensor self, int? dtype=None, bool non_blocking=False, bool copy=False) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.8207633Z schema: static_runtime::to_copy.dtype(Tensor self, int dtype, bool non_blocking=False, bool copy=False, int? memory_format=None) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.8208037Z schema: static_runtime::to_copy.other(Tensor self, Tensor other, bool non_blocking=False, bool copy=False, int? memory_format=None) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.8208924Z processing existing schema: aten::cumulative_trapezoid.x(Tensor y, Tensor x, *, int dim=-1) -> (Tensor) 2022-05-18T03:33:20.8211165Z processing existing schema: aten::cumulative_trapezoid.dx(Tensor y, *, Scalar dx=1, int dim=-1) -> (Tensor) 2022-05-18T03:33:20.8211628Z schema: static_runtime::dequantize_copy.self(Tensor self) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.8213655Z processing existing schema: aten::linalg_diagonal(Tensor(a) A, *, int offset=0, int dim1=-2, int dim2=-1) -> (Tensor(a)) 2022-05-18T03:33:20.8215873Z processing existing schema: aten::fill_diagonal_(Tensor(a!) self, Scalar fill_value, bool wrap=False) -> (Tensor(a!)) 2022-05-18T03:33:20.8218094Z processing existing schema: aten::diff(Tensor self, int n=1, int dim=-1, Tensor? prepend=None, Tensor? append=None) -> (Tensor) 2022-05-18T03:33:20.8220885Z processing existing schema: aten::diff.out(Tensor self, int n=1, int dim=-1, Tensor? prepend=None, Tensor? append=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8222184Z processing existing schema: aten::isspace(str self) -> (bool) 2022-05-18T03:33:20.8225048Z processing existing schema: aten::gradient.scalarint(Tensor self, *, Scalar? spacing=None, int? dim=None, int edge_order=1) -> (Tensor[]) 2022-05-18T03:33:20.8227669Z processing existing schema: aten::gradient.scalararray(Tensor self, *, Scalar spacing, int[] dim, int edge_order=1) -> (Tensor[]) 2022-05-18T03:33:20.8230087Z processing existing schema: aten::gradient.array(Tensor self, *, int[] dim, int edge_order=1) -> (Tensor[]) 2022-05-18T03:33:20.8232890Z processing existing schema: aten::gradient.scalarrayint(Tensor self, *, Scalar[] spacing, int? dim=None, int edge_order=1) -> (Tensor[]) 2022-05-18T03:33:20.8235626Z processing existing schema: aten::gradient.scalarrayarray(Tensor self, *, Scalar[] spacing, int[] dim, int edge_order=1) -> (Tensor[]) 2022-05-18T03:33:20.8238229Z processing existing schema: aten::gradient.tensorarrayint(Tensor self, *, Tensor[] spacing, int? 
dim=None, int edge_order=1) -> (Tensor[]) 2022-05-18T03:33:20.8241202Z processing existing schema: aten::gradient.tensorarray(Tensor self, *, Tensor[] spacing, int[] dim, int edge_order=1) -> (Tensor[]) 2022-05-18T03:33:20.8242786Z processing existing schema: aten::divide.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.8245025Z processing existing schema: aten::divide.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8246547Z processing existing schema: aten::divide.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.8248642Z processing existing schema: aten::divide.Tensor_mode(Tensor self, Tensor other, *, str? rounding_mode) -> (Tensor) 2022-05-18T03:33:20.8250941Z processing existing schema: aten::divide.out_mode(Tensor self, Tensor other, *, str? rounding_mode, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8252825Z processing existing schema: aten::divide.Scalar_mode(Tensor self, Scalar other, *, str? rounding_mode) -> (Tensor) 2022-05-18T03:33:20.8254256Z processing existing schema: aten::swapcase(str self) -> (str) 2022-05-18T03:33:20.8256332Z processing existing schema: aten::divide_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.8258216Z processing existing schema: aten::divide_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.8260272Z processing existing schema: aten::divide_.Tensor_mode(Tensor(a!) self, Tensor other, *, str? rounding_mode) -> (Tensor(a!)) 2022-05-18T03:33:20.8262282Z processing existing schema: aten::divide_.Scalar_mode(Tensor(a!) self, Scalar other, *, str? rounding_mode) -> (Tensor(a!)) 2022-05-18T03:33:20.8264328Z processing existing schema: aten::get.str(Dict(str, t) self, str key) -> (t(*)?) 2022-05-18T03:33:20.8266666Z processing existing schema: aten::get.default_str(Dict(str, t) self, str key, t default_value) -> (t(*)) 2022-05-18T03:33:20.8268674Z processing existing schema: aten::get.int(Dict(int, t) self, int key) -> (t(*)?) 2022-05-18T03:33:20.8270852Z processing existing schema: aten::get.default_int(Dict(int, t) self, int key, t default_value) -> (t(*)) 2022-05-18T03:33:20.8272898Z processing existing schema: aten::get.bool(Dict(bool, t) self, bool key) -> (t(*)?) 2022-05-18T03:33:20.8275128Z processing existing schema: aten::get.default_bool(Dict(bool, t) self, bool key, t default_value) -> (t(*)) 2022-05-18T03:33:20.8277194Z processing existing schema: aten::get.float(Dict(float, t) self, float key) -> (t(*)?) 2022-05-18T03:33:20.8279716Z processing existing schema: aten::get.default_float(Dict(float, t) self, float key, t default_value) -> (t(*)) 2022-05-18T03:33:20.8281841Z processing existing schema: aten::get.complex(Dict(complex, t) self, complex key) -> (t(*)?) 2022-05-18T03:33:20.8284143Z processing existing schema: aten::get.default_complex(Dict(complex, t) self, complex key, t default_value) -> (t(*)) 2022-05-18T03:33:20.8286212Z processing existing schema: aten::get.Tensor(Dict(Tensor, t) self, Tensor key) -> (t(*)?) 
2022-05-18T03:33:20.8288596Z processing existing schema: aten::get.default_Tensor(Dict(Tensor, t) self, Tensor key, t default_value) -> (t(*)) 2022-05-18T03:33:20.8290633Z processing existing schema: aten::embedding_backward(Tensor grad, Tensor indices, int num_weights, int padding_idx, bool scale_grad_by_freq, bool sparse) -> (Tensor) 2022-05-18T03:33:20.8292385Z processing existing schema: aten::endswith(str self, str substr, int start=0, int end=-1) -> (bool) 2022-05-18T03:33:20.8294297Z processing existing schema: aten::embedding_sparse_backward(Tensor grad, Tensor indices, int num_weights, int padding_idx, bool scale_grad_by_freq) -> (Tensor) 2022-05-18T03:33:20.8296083Z processing existing schema: aten::rindex(str self, str substr, int start=0, int end=-1) -> (int) 2022-05-18T03:33:20.8297813Z processing existing schema: aten::_rowwise_prune(Tensor weight, Tensor mask, int compressed_indices_dtype) -> (Tensor, Tensor) 2022-05-18T03:33:20.8299777Z processing existing schema: aten::row_stack(Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:20.8302163Z processing existing schema: aten::row_stack.out(Tensor[] tensors, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8305154Z processing existing schema: aten::embedding_bag(Tensor weight, Tensor indices, Tensor offsets, bool scale_grad_by_freq=False, int mode=0, bool sparse=False, Tensor? per_sample_weights=None, bool include_last_offset=False) -> (Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:20.8308033Z processing existing schema: aten::embedding_bag.padding_idx(Tensor weight, Tensor indices, Tensor offsets, bool scale_grad_by_freq, int mode, bool sparse, Tensor? per_sample_weights, bool include_last_offset, int? padding_idx) -> (Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:20.8309631Z processing existing schema: aten::startswith(str self, str substr, int start=0, int end=-1) -> (bool) 2022-05-18T03:33:20.8313197Z processing existing schema: aten::_embedding_bag_backward(Tensor grad, Tensor indices, Tensor offsets, Tensor offset2bag, Tensor bag_size, Tensor maximum_indices, int num_weights, bool scale_grad_by_freq, int mode, bool sparse, Tensor? per_sample_weights, int padding_idx=-1) -> (Tensor) 2022-05-18T03:33:20.8315176Z processing existing schema: aten::_embedding_bag_sparse_backward(Tensor grad, Tensor indices, Tensor offsets, Tensor offset2bag, Tensor bag_size, int num_weights, bool scale_grad_by_freq, int mode, Tensor? per_sample_weights, int padding_idx=-1) -> (Tensor) 2022-05-18T03:33:20.8317775Z processing existing schema: aten::new_full(Tensor self, int[] size, Scalar fill_value, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.8320445Z processing existing schema: aten::new_ones(Tensor self, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? 
pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.8322748Z processing existing schema: aten::_grid_sampler_2d_cpu_fallback_backward(Tensor grad_output, Tensor input, Tensor grid, int interpolation_mode, int padding_mode, bool align_corners) -> (Tensor, Tensor) 2022-05-18T03:33:20.8324155Z processing existing schema: aten::_cufft_get_plan_cache_size(int device_index) -> (int) 2022-05-18T03:33:20.8325674Z processing existing schema: aten::_cufft_get_plan_cache_max_size(int device_index) -> (int) 2022-05-18T03:33:20.8327353Z processing existing schema: aten::_cufft_set_plan_cache_max_size(int device_index, int max_size) -> () 2022-05-18T03:33:20.8328913Z processing existing schema: aten::_cufft_clear_plan_cache(int device_index) -> () 2022-05-18T03:33:20.8331867Z processing existing schema: aten::isclose(Tensor self, Tensor other, float rtol=1.0000000000000001e-05, float atol=1e-08, bool equal_nan=False) -> (Tensor) 2022-05-18T03:33:20.8333165Z processing existing schema: aten::is_distributed(Tensor self) -> (bool) 2022-05-18T03:33:20.8334786Z processing existing schema: aten::is_conj(Tensor self) -> (bool) 2022-05-18T03:33:20.8336155Z processing existing schema: aten::_is_zerotensor(Tensor self) -> (bool) 2022-05-18T03:33:20.8338266Z processing existing schema: aten::is_neg(Tensor self) -> (bool) 2022-05-18T03:33:20.8339315Z processing existing schema: aten::isreal(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8341290Z processing existing schema: aten::kron(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.8343448Z processing existing schema: aten::kron.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8346159Z processing existing schema: aten::fbgemm_linear_int8_weight_fp32_activation(Tensor input, Tensor weight, Tensor packed, Tensor col_offsets, Scalar weight_scale, Scalar weight_zero_point, Tensor bias) -> (Tensor) 2022-05-18T03:33:20.8348238Z processing existing schema: aten::fbgemm_linear_int8_weight(Tensor input, Tensor weight, Tensor packed, Tensor col_offsets, Scalar weight_scale, Scalar weight_zero_point, Tensor bias) -> (Tensor) 2022-05-18T03:33:20.8349586Z processing existing schema: aten::fbgemm_linear_quantize_weight(Tensor input) -> (Tensor, Tensor, float, int) 2022-05-18T03:33:20.8351009Z processing existing schema: aten::fbgemm_pack_gemm_matrix_fp16(Tensor input) -> (Tensor) 2022-05-18T03:33:20.8353335Z processing existing schema: aten::fbgemm_linear_fp16_weight_fp32_activation(Tensor input, Tensor packed_weight, Tensor bias) -> (Tensor) 2022-05-18T03:33:20.8354407Z processing existing schema: aten::fbgemm_linear_fp16_weight(Tensor input, Tensor packed_weight, Tensor bias) -> (Tensor) 2022-05-18T03:33:20.8356390Z processing existing schema: aten::fbgemm_pack_quantized_matrix(Tensor input) -> (Tensor) 2022-05-18T03:33:20.8357390Z processing existing schema: aten::fbgemm_pack_quantized_matrix.KN(Tensor input, int K, int N) -> (Tensor) 2022-05-18T03:33:20.8359582Z processing existing schema: aten::ldexp.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.8361472Z processing existing schema: aten::ldexp.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8363195Z processing existing schema: aten::ldexp(float x, int i) -> (float) 2022-05-18T03:33:20.8365292Z processing existing schema: aten::ldexp_(Tensor(a!) 
self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.8366075Z processing existing schema: aten::matrix_power(Tensor self, int n) -> (Tensor) 2022-05-18T03:33:20.8368725Z processing existing schema: aten::matrix_power.out(Tensor self, int n, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8369526Z processing existing schema: aten::matrix_exp(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8371591Z processing existing schema: aten::matrix_exp_backward(Tensor self, Tensor grad) -> (Tensor) 2022-05-18T03:33:20.8373881Z processing existing schema: aten::value_selecting_reduction_backward(Tensor grad, int dim, Tensor indices, int[] sizes, bool keepdim) -> (Tensor) 2022-05-18T03:33:20.8376247Z processing existing schema: aten::nanmean(Tensor self, int[1] dim=[], bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.8378550Z processing existing schema: aten::nanmean.out(Tensor self, int[1] dim=[], bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8379887Z processing existing schema: aten::_sparse_mm(Tensor sparse, Tensor dense) -> (Tensor) 2022-05-18T03:33:20.8381811Z processing existing schema: aten::multiply.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.8383914Z processing existing schema: aten::multiply.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8385374Z processing existing schema: aten::multiply.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.8387519Z processing existing schema: aten::multiply_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.8389448Z processing existing schema: aten::multiply_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.8390830Z processing existing schema: aten::is_vulkan_available() -> (bool) 2022-05-18T03:33:20.8392966Z processing existing schema: aten::_nnpack_available() -> (bool) 2022-05-18T03:33:20.8395982Z processing existing schema: aten::pairwise_distance(Tensor x1, Tensor x2, float p=2., float eps=9.9999999999999995e-07, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.8398924Z processing existing schema: aten::moveaxis.intlist(Tensor(a) self, int[] source, int[] destination) -> (Tensor(a)) 2022-05-18T03:33:20.8401004Z processing existing schema: aten::moveaxis.int(Tensor(a) self, int source, int destination) -> (Tensor(a)) 2022-05-18T03:33:20.8402907Z processing existing schema: aten::pixel_shuffle(Tensor self, int upscale_factor) -> (Tensor) 2022-05-18T03:33:20.8404826Z processing existing schema: aten::pixel_unshuffle(Tensor self, int downscale_factor) -> (Tensor) 2022-05-18T03:33:20.8407229Z processing existing schema: aten::pin_memory(Tensor(a) self, Device? device=None) -> (Tensor(a)) 2022-05-18T03:33:20.8408978Z processing existing schema: aten::ravel(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.8410804Z processing existing schema: aten::negative(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8413268Z processing existing schema: aten::negative.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8415110Z processing existing schema: aten::negative_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.8418639Z processing existing schema: aten::rrelu(Tensor self, Scalar lower=0.125, Scalar upper=0.33333333333333331, bool training=False, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:20.8422117Z processing existing schema: aten::rrelu_(Tensor(a!) 
self, Scalar lower=0.125, Scalar upper=0.33333333333333331, bool training=False, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:20.8423112Z processing existing schema: aten::relu6(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8425581Z processing existing schema: aten::relu6_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.8427200Z processing existing schema: aten::infinitely_differentiable_gelu_backward(Tensor grad, Tensor self) -> (Tensor) 2022-05-18T03:33:20.8429441Z processing existing schema: aten::selu_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.8430863Z processing existing schema: aten::smm(Tensor self, Tensor mat2) -> (Tensor) 2022-05-18T03:33:20.8433265Z processing existing schema: aten::hstack(Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:20.8435839Z processing existing schema: aten::hstack.out(Tensor[] tensors, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8437478Z processing existing schema: aten::vstack(Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:20.8440466Z processing existing schema: aten::vstack.out(Tensor[] tensors, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8442017Z processing existing schema: aten::dstack(Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:20.8444938Z processing existing schema: aten::dstack.out(Tensor[] tensors, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8446544Z processing existing schema: aten::splitlines(str self, bool keepends=False) -> (str[]) 2022-05-18T03:33:20.8449466Z processing existing schema: aten::istft(Tensor self, int n_fft, int? hop_length=None, int? win_length=None, Tensor? window=None, bool center=True, bool normalized=False, bool? onesided=None, int? length=None, bool return_complex=False) -> (Tensor) 2022-05-18T03:33:20.8450685Z processing existing schema: aten::sum_to_size(Tensor self, int[] size) -> (Tensor) 2022-05-18T03:33:20.8452253Z processing existing schema: aten::tile(Tensor self, int[] dims) -> (Tensor) 2022-05-18T03:33:20.8454035Z processing existing schema: aten::one_hot(Tensor self, int num_classes=-1) -> (Tensor) 2022-05-18T03:33:20.8455559Z processing existing schema: aten::fliplr(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8456438Z processing existing schema: aten::flipud(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8458110Z processing existing schema: aten::trapezoid.x(Tensor y, Tensor x, *, int dim=-1) -> (Tensor) 2022-05-18T03:33:20.8459649Z processing existing schema: aten::trapezoid.dx(Tensor y, *, Scalar dx=1, int dim=-1) -> (Tensor) 2022-05-18T03:33:20.8461333Z processing existing schema: aten::trapz.x(Tensor y, Tensor x, *, int dim=-1) -> (Tensor) 2022-05-18T03:33:20.8462757Z processing existing schema: aten::trapz.dx(Tensor y, *, float dx=1., int dim=-1) -> (Tensor) 2022-05-18T03:33:20.8463592Z processing existing schema: aten::fix(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8465536Z processing existing schema: aten::fix.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8467016Z processing existing schema: aten::fix_(Tensor(a!) 
self) -> (Tensor(a!)) 2022-05-18T03:33:20.8468323Z processing existing schema: aten::type_as(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.8469683Z processing existing schema: aten::_has_compatible_shallow_copy_type(Tensor self, Tensor from) -> (bool) 2022-05-18T03:33:20.8471124Z processing existing schema: aten::norm_except_dim(Tensor v, int pow=2, int dim=0) -> (Tensor) 2022-05-18T03:33:20.8472779Z processing existing schema: aten::_weight_norm(Tensor v, Tensor g, int dim=0) -> (Tensor) 2022-05-18T03:33:20.8474507Z processing existing schema: aten::_weight_norm_differentiable_backward(Tensor grad_w, Tensor saved_v, Tensor saved_g, Tensor saved_norms, int dim) -> (Tensor, Tensor) 2022-05-18T03:33:20.8475831Z processing existing schema: aten::positive(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.8477364Z processing existing schema: aten::subtract.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.8479385Z processing existing schema: aten::subtract.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8480838Z processing existing schema: aten::subtract.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.8482547Z processing existing schema: aten::subtract_.Tensor(Tensor(a!) self, Tensor other, *, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:20.8484269Z processing existing schema: aten::subtract_.Scalar(Tensor(a!) self, Scalar other, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:20.8485971Z processing existing schema: aten::_validate_sparse_coo_tensor_args(Tensor indices, Tensor values, int[] size) -> () 2022-05-18T03:33:20.8487870Z processing existing schema: aten::_validate_sparse_compressed_tensor_args(Tensor compressed_indices, Tensor plain_indices, Tensor values, int[] size, int layout) -> () 2022-05-18T03:33:20.8489663Z processing existing schema: aten::_validate_sparse_csr_tensor_args(Tensor crow_indices, Tensor col_indices, Tensor values, int[] size) -> () 2022-05-18T03:33:20.8491345Z processing existing schema: aten::_validate_sparse_csc_tensor_args(Tensor ccol_indices, Tensor row_indices, Tensor values, int[] size) -> () 2022-05-18T03:33:20.8493089Z processing existing schema: aten::_validate_sparse_bsr_tensor_args(Tensor crow_indices, Tensor col_indices, Tensor values, int[] size) -> () 2022-05-18T03:33:20.8494831Z processing existing schema: aten::_validate_sparse_bsc_tensor_args(Tensor ccol_indices, Tensor row_indices, Tensor values, int[] size) -> () 2022-05-18T03:33:20.8496439Z processing existing schema: aten::_to_cpu(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.8498202Z processing existing schema: aten::to_dense(Tensor self, int? 
dtype=None) -> (Tensor) 2022-05-18T03:33:20.8499367Z processing existing schema: aten::to_dense_backward(Tensor grad, Tensor input) -> (Tensor) 2022-05-18T03:33:20.8500699Z processing existing schema: aten::to_mkldnn_backward(Tensor grad, Tensor input) -> (Tensor) 2022-05-18T03:33:20.8502144Z processing existing schema: aten::fake_quantize_per_tensor_affine_cachemask_backward(Tensor grad, Tensor mask) -> (Tensor) 2022-05-18T03:33:20.8503412Z processing existing schema: aten::radians.int(int a) -> (float) 2022-05-18T03:33:20.8504823Z processing existing schema: aten::radians.float(float a) -> (float) 2022-05-18T03:33:20.8506222Z processing existing schema: aten::radians.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.8508408Z processing existing schema: aten::_fake_quantize_learnable_per_tensor_affine_backward(Tensor grad, Tensor self, Tensor scale, Tensor zero_point, int quant_min, int quant_max, float grad_factor=1.) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.8509948Z processing existing schema: aten::fake_quantize_per_channel_affine(Tensor self, Tensor scale, Tensor zero_point, int axis, int quant_min, int quant_max) -> (Tensor) 2022-05-18T03:33:20.8511319Z processing existing schema: aten::cuda(Tensor(a) self) -> (Tensor(a|b)) 2022-05-18T03:33:20.8512770Z processing existing schema: aten::fake_quantize_per_channel_affine_cachemask_backward(Tensor grad, Tensor mask) -> (Tensor) 2022-05-18T03:33:20.8513987Z processing existing schema: aten::modf(float a) -> (float, float) 2022-05-18T03:33:20.8516519Z processing existing schema: aten::_fake_quantize_learnable_per_channel_affine_backward(Tensor grad, Tensor self, Tensor scale, Tensor zero_point, int axis, int quant_min, int quant_max, float grad_factor=1.) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.8519924Z processing existing schema: aten::fused_moving_avg_obs_fake_quant(Tensor self, Tensor observer_on, Tensor fake_quant_on, Tensor(a!) running_min, Tensor(b!) running_max, Tensor(c!) scale, Tensor(d!) zero_point, float averaging_const, int quant_min, int quant_max, int ch_axis, bool per_row_fake_quant=False, bool symmetric_quant=False) -> (Tensor) 2022-05-18T03:33:20.8520509Z processing existing schema: aten::_choose_qparams_per_tensor(Tensor self, bool reduce_range=False) -> (float, int) 2022-05-18T03:33:20.8522150Z processing existing schema: aten::_saturate_weight_to_fp16(Tensor weight) -> (Tensor) 2022-05-18T03:33:20.8523722Z processing existing schema: aten::_autocast_to_reduced_precision(Tensor(a) self, bool cuda_enabled, bool cpu_enabled, int cuda_dtype, int cpu_dtype) -> (Tensor(a)) 2022-05-18T03:33:20.8525741Z processing existing schema: aten::_autocast_to_full_precision(Tensor(a) self, bool cuda_enabled, bool cpu_enabled) -> (Tensor(a)) 2022-05-18T03:33:20.8527275Z processing existing schema: aten::meshgrid(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.8529193Z processing existing schema: aten::meshgrid.indexing(Tensor[] tensors, *, str indexing) -> (Tensor[]) 2022-05-18T03:33:20.8530495Z processing existing schema: aten::promote_types(int type1, int type2) -> (int) 2022-05-18T03:33:20.8554776Z processing existing schema: aten::_thnn_fused_lstm_cell_backward(Tensor? grad_hy, Tensor? grad_cy, Tensor cx, Tensor cy, Tensor workspace, bool has_bias) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:20.8555568Z processing existing schema: aten::_thnn_differentiable_lstm_cell_backward(Tensor? grad_hy, Tensor? grad_cy, Tensor input_gates, Tensor hidden_gates, Tensor? input_bias, Tensor? 
hidden_bias, Tensor cx, Tensor cy) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:20.8556075Z processing existing schema: aten::_thnn_differentiable_gru_cell_backward(Tensor grad_hy, Tensor input_gates, Tensor hidden_gates, Tensor hx, Tensor? input_bias, Tensor? hidden_bias) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:20.8556545Z processing existing schema: aten::lstm.input(Tensor input, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.8557071Z processing existing schema: aten::lstm.data(Tensor data, Tensor batch_sizes, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.8557632Z processing existing schema: aten::gru.input(Tensor input, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor) 2022-05-18T03:33:20.8558025Z processing existing schema: aten::gru.data(Tensor data, Tensor batch_sizes, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional) -> (Tensor, Tensor) 2022-05-18T03:33:20.8558476Z processing existing schema: aten::rnn_tanh.input(Tensor input, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor) 2022-05-18T03:33:20.8558941Z processing existing schema: aten::rnn_tanh.data(Tensor data, Tensor batch_sizes, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional) -> (Tensor, Tensor) 2022-05-18T03:33:20.8559583Z processing existing schema: aten::rnn_relu.input(Tensor input, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor) 2022-05-18T03:33:20.8560073Z processing existing schema: aten::rnn_relu.data(Tensor data, Tensor batch_sizes, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional) -> (Tensor, Tensor) 2022-05-18T03:33:20.8560635Z processing existing schema: aten::quantized_lstm_cell(Tensor input, Tensor[] hx, Tensor w_ih, Tensor w_hh, Tensor b_ih, Tensor b_hh, Tensor packed_ih, Tensor packed_hh, Tensor col_offsets_ih, Tensor col_offsets_hh, Scalar scale_ih, Scalar scale_hh, Scalar zero_point_ih, Scalar zero_point_hh) -> (Tensor, Tensor) 2022-05-18T03:33:20.8563073Z processing existing schema: aten::quantized_gru_cell(Tensor input, Tensor hx, Tensor w_ih, Tensor w_hh, Tensor b_ih, Tensor b_hh, Tensor packed_ih, Tensor packed_hh, Tensor col_offsets_ih, Tensor col_offsets_hh, Scalar scale_ih, Scalar scale_hh, Scalar zero_point_ih, Scalar zero_point_hh) -> (Tensor) 2022-05-18T03:33:20.8565688Z processing existing schema: aten::quantized_rnn_relu_cell(Tensor input, Tensor hx, Tensor w_ih, Tensor w_hh, Tensor b_ih, Tensor b_hh, Tensor packed_ih, Tensor packed_hh, Tensor col_offsets_ih, Tensor col_offsets_hh, Scalar scale_ih, Scalar scale_hh, Scalar zero_point_ih, Scalar zero_point_hh) -> (Tensor) 2022-05-18T03:33:20.8568354Z processing existing schema: aten::quantized_rnn_tanh_cell(Tensor input, Tensor hx, Tensor w_ih, Tensor w_hh, Tensor b_ih, Tensor b_hh, Tensor packed_ih, Tensor packed_hh, Tensor col_offsets_ih, Tensor col_offsets_hh, Scalar scale_ih, Scalar scale_hh, Scalar zero_point_ih, 
Scalar zero_point_hh) -> (Tensor) 2022-05-18T03:33:20.8570316Z processing existing schema: aten::_pack_padded_sequence_backward(Tensor grad, int[] input_size, Tensor batch_sizes, bool batch_first) -> (Tensor) 2022-05-18T03:33:20.8572391Z processing existing schema: aten::_pad_packed_sequence(Tensor data, Tensor batch_sizes, bool batch_first, Scalar padding_value, int total_length) -> (Tensor, Tensor) 2022-05-18T03:33:20.8574200Z processing existing schema: aten::put(Tensor self, Tensor index, Tensor source, bool accumulate=False) -> (Tensor) 2022-05-18T03:33:20.8575819Z processing existing schema: aten::__and__.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.8577549Z processing existing schema: aten::__and__.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.8579191Z processing existing schema: aten::__and__.bool(bool a, bool b) -> (bool) 2022-05-18T03:33:20.8580941Z processing existing schema: aten::__and__.int(int a, int b) -> (int) 2022-05-18T03:33:20.8583060Z processing existing schema: aten::__iand__.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.8585400Z processing existing schema: aten::__iand__.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.8586841Z processing existing schema: aten::__or__.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.8588483Z processing existing schema: aten::__or__.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.8590456Z processing existing schema: aten::__or__.bool(bool a, bool b) -> (bool) 2022-05-18T03:33:20.8591929Z processing existing schema: aten::__or__.int(int a, int b) -> (int) 2022-05-18T03:33:20.8594590Z processing existing schema: aten::__ior__.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.8596681Z processing existing schema: aten::__ior__.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.8598245Z processing existing schema: aten::__xor__.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.8600507Z processing existing schema: aten::__xor__.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.8601948Z processing existing schema: aten::__xor__.bool(bool a, bool b) -> (bool) 2022-05-18T03:33:20.8604035Z processing existing schema: aten::__xor__.int(int a, int b) -> (int) 2022-05-18T03:33:20.8606174Z processing existing schema: aten::__ixor__.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.8608290Z processing existing schema: aten::__ixor__.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.8610647Z processing existing schema: aten::diag_backward(Tensor grad, int[] input_sizes, int diagonal) -> (Tensor) 2022-05-18T03:33:20.8612671Z processing existing schema: aten::reverse.t(t[](a!) self) -> () 2022-05-18T03:33:20.8614374Z processing existing schema: aten::not_equal.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.8616917Z processing existing schema: aten::not_equal.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8618377Z processing existing schema: aten::not_equal.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.8621137Z processing existing schema: aten::not_equal.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8623024Z processing existing schema: aten::not_equal_.Scalar(Tensor(a!) 
self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.8625512Z processing existing schema: aten::not_equal_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.8626960Z processing existing schema: aten::greater_equal.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.8629741Z processing existing schema: aten::greater_equal.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8631156Z processing existing schema: aten::greater_equal.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.8633907Z processing existing schema: aten::greater_equal.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8635535Z processing existing schema: aten::greater_equal_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.8637122Z processing existing schema: aten::greater_equal_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.8638570Z processing existing schema: aten::less_equal.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.8640575Z processing existing schema: aten::less_equal.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8642228Z processing existing schema: aten::less_equal.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.8643825Z processing existing schema: aten::less_equal.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8645602Z processing existing schema: aten::less_equal_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.8647156Z processing existing schema: aten::less_equal_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.8648444Z processing existing schema: aten::greater.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.8650311Z processing existing schema: aten::greater.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8651938Z processing existing schema: aten::greater.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.8653692Z processing existing schema: aten::greater.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8655201Z processing existing schema: aten::greater_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.8656920Z processing existing schema: aten::greater_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.8658265Z processing existing schema: aten::less.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.8659952Z processing existing schema: aten::less.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8661442Z processing existing schema: aten::less.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.8663285Z processing existing schema: aten::less.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8665109Z processing existing schema: aten::less_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:20.8666778Z processing existing schema: aten::less_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.8668363Z processing existing schema: aten::take_along_dim(Tensor self, Tensor indices, int? dim=None) -> (Tensor) 2022-05-18T03:33:20.8670418Z processing existing schema: aten::take_along_dim.out(Tensor self, Tensor indices, int? 
dim=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8672265Z processing existing schema: aten::index_select_backward(Tensor grad, int[] self_sizes, int dim, Tensor index) -> (Tensor) 2022-05-18T03:33:20.8673816Z processing existing schema: aten::masked_select_backward(Tensor grad, Tensor input, Tensor mask) -> (Tensor) 2022-05-18T03:33:20.8675356Z processing existing schema: aten::nonzero_numpy(Tensor self) -> (Tensor[]) 2022-05-18T03:33:20.8677146Z processing existing schema: aten::gather_backward(Tensor grad, Tensor self, int dim, Tensor index, bool sparse_grad) -> (Tensor) 2022-05-18T03:33:20.8678729Z processing existing schema: aten::_gather_sparse_backward(Tensor self, int dim, Tensor index, Tensor grad) -> (Tensor) 2022-05-18T03:33:20.8680440Z processing existing schema: aten::linalg_vander(Tensor x, *, int? N=None) -> (Tensor) 2022-05-18T03:33:20.8682242Z processing existing schema: aten::swapaxes_(Tensor(a!) self, int axis0, int axis1) -> (Tensor(a!)) 2022-05-18T03:33:20.8683899Z processing existing schema: aten::swapdims_(Tensor(a!) self, int dim0, int dim1) -> (Tensor(a!)) 2022-05-18T03:33:20.8686947Z processing existing schema: aten::histogramdd(Tensor self, int[] bins, float[]? range=None, Tensor? weight=None, bool density=False) -> (Tensor hist, Tensor[] bin_edges) 2022-05-18T03:33:20.8689609Z processing existing schema: aten::histogramdd.int_bins(Tensor self, int bins, float[]? range=None, Tensor? weight=None, bool density=False) -> (Tensor hist, Tensor[] bin_edges) 2022-05-18T03:33:20.8692624Z processing existing schema: aten::histogramdd.TensorList_bins(Tensor self, Tensor[] bins, float[]? range=None, Tensor? weight=None, bool density=False) -> (Tensor hist, Tensor[] bin_edges) 2022-05-18T03:33:20.8693834Z processing existing schema: aten::msort(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8695547Z processing existing schema: aten::msort.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8697242Z processing existing schema: aten::argsort(Tensor self, int dim=-1, bool descending=False) -> (Tensor) 2022-05-18T03:33:20.8698932Z processing existing schema: aten::argsort.dimname(Tensor self, str dim, bool descending=False) -> (Tensor) 2022-05-18T03:33:20.8700440Z processing existing schema: aten::float_power.Tensor_Tensor(Tensor self, Tensor exponent) -> (Tensor) 2022-05-18T03:33:20.8702292Z processing existing schema: aten::float_power.Tensor_Tensor_out(Tensor self, Tensor exponent, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8703775Z processing existing schema: aten::float_power.Scalar(Scalar self, Tensor exponent) -> (Tensor) 2022-05-18T03:33:20.8705682Z processing existing schema: aten::float_power.Scalar_out(Scalar self, Tensor exponent, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8707220Z processing existing schema: aten::float_power.Tensor_Scalar(Tensor self, Scalar exponent) -> (Tensor) 2022-05-18T03:33:20.8709144Z processing existing schema: aten::float_power.Tensor_Scalar_out(Tensor self, Scalar exponent, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8710818Z processing existing schema: aten::float_power_.Tensor(Tensor(a!) self, Tensor exponent) -> (Tensor(a!)) 2022-05-18T03:33:20.8712534Z processing existing schema: aten::float_power_.Scalar(Tensor(a!) self, Scalar exponent) -> (Tensor(a!)) 2022-05-18T03:33:20.8714535Z processing existing schema: aten::nll_loss_nd(Tensor self, Tensor target, Tensor? 
weight=None, int reduction=1, int ignore_index=-100) -> (Tensor) 2022-05-18T03:33:20.8715886Z processing existing schema: aten::log_sigmoid(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8717871Z processing existing schema: aten::log_sigmoid.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8719828Z processing existing schema: aten::_pad_circular(Tensor self, int[] pad) -> (Tensor) 2022-05-18T03:33:20.8721864Z processing existing schema: aten::_pad_enum(Tensor self, int[] pad, int mode, float? value=None) -> (Tensor) 2022-05-18T03:33:20.8724149Z processing existing schema: aten::pad(Tensor self, int[] pad, str mode="constant", float? value=None) -> (Tensor) 2022-05-18T03:33:20.8726803Z processing existing schema: aten::thnn_conv2d(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias=None, int[2] stride=[1, 1], int[2] padding=[0, 0]) -> (Tensor) 2022-05-18T03:33:20.8729869Z processing existing schema: aten::thnn_conv2d.out(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias=None, int[2] stride=[1, 1], int[2] padding=[0, 0], *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8732480Z processing existing schema: aten::slow_conv3d(Tensor self, Tensor weight, int[3] kernel_size, Tensor? bias=None, int[3] stride=[1, 1, 1], int[3] padding=[0, 0, 0]) -> (Tensor) 2022-05-18T03:33:20.8735703Z processing existing schema: aten::slow_conv3d.out(Tensor self, Tensor weight, int[3] kernel_size, Tensor? bias=None, int[3] stride=[1, 1, 1], int[3] padding=[0, 0, 0], *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8737046Z processing existing schema: aten::special_expm1(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8738905Z processing existing schema: aten::special_expm1.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8740296Z processing existing schema: aten::special_exp2(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8742059Z processing existing schema: aten::special_exp2.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8743440Z processing existing schema: aten::special_psi(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8745455Z processing existing schema: aten::special_psi.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8747376Z processing existing schema: aten::special_digamma(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8749068Z processing existing schema: aten::special_digamma.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8750313Z processing existing schema: aten::special_gammaln(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8752123Z processing existing schema: aten::special_gammaln.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8753501Z processing existing schema: aten::special_erf(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8755291Z processing existing schema: aten::special_erf.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8756637Z processing existing schema: aten::special_erfc(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8758621Z processing existing schema: aten::special_erfc.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8760249Z processing existing schema: aten::special_erfinv(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8761584Z processing existing schema: aten::special_erfinv.out(Tensor self, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.8762743Z processing existing schema: aten::special_ndtr(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8764374Z processing existing schema: aten::special_ndtr.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8765780Z processing existing schema: aten::special_xlogy(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.8767394Z processing existing schema: aten::special_xlogy.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8768850Z processing existing schema: aten::special_xlogy.self_scalar(Scalar self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.8770666Z processing existing schema: aten::special_xlogy.self_scalar_out(Scalar self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8772103Z processing existing schema: aten::special_xlogy.other_scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:20.8774017Z processing existing schema: aten::special_xlogy.other_scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8775366Z processing existing schema: aten::special_i0(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8777314Z processing existing schema: aten::special_i0.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8778828Z processing existing schema: aten::special_logit(Tensor self, float? eps=None) -> (Tensor) 2022-05-18T03:33:20.8780729Z processing existing schema: aten::special_logit.out(Tensor self, float? eps=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8782174Z processing existing schema: aten::special_polygamma(int n, Tensor self) -> (Tensor) 2022-05-18T03:33:20.8783999Z processing existing schema: aten::special_polygamma.out(int n, Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8785783Z processing existing schema: aten::special_logsumexp(Tensor self, int[1] dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:20.8787886Z processing existing schema: aten::special_logsumexp.out(Tensor self, int[1] dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8789274Z processing existing schema: aten::special_expit(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8790957Z processing existing schema: aten::special_expit.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8792321Z processing existing schema: aten::special_sinc(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8794103Z processing existing schema: aten::special_sinc.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8795632Z processing existing schema: aten::special_round(Tensor self, *, int decimals=0) -> (Tensor) 2022-05-18T03:33:20.8797655Z processing existing schema: aten::special_round.out(Tensor self, *, int decimals=0, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8798990Z processing existing schema: aten::special_log1p(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8801057Z processing existing schema: aten::special_log1p.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8802760Z processing existing schema: aten::special_log_softmax(Tensor self, int dim, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.8804302Z processing existing schema: aten::special_gammainc(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.8806163Z processing existing schema: aten::special_gammainc.out(Tensor self, Tensor other, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.8807653Z processing existing schema: aten::special_gammaincc(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.8809604Z processing existing schema: aten::special_gammaincc.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8811559Z processing existing schema: aten::special_multigammaln(Tensor self, int p) -> (Tensor) 2022-05-18T03:33:20.8813345Z processing existing schema: aten::special_multigammaln.out(Tensor self, int p, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8815023Z processing existing schema: aten::special_softmax(Tensor self, int dim, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.8817217Z processing existing schema: aten::fft_hfft2(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None) -> (Tensor) 2022-05-18T03:33:20.8819825Z processing existing schema: aten::fft_hfft2.out(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8821714Z processing existing schema: aten::fft_ihfft2(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None) -> (Tensor) 2022-05-18T03:33:20.8824157Z processing existing schema: aten::fft_ihfft2.out(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8826239Z processing existing schema: aten::fft_hfftn(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None) -> (Tensor) 2022-05-18T03:33:20.8828694Z processing existing schema: aten::fft_hfftn.out(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8830651Z processing existing schema: aten::fft_ihfftn(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None) -> (Tensor) 2022-05-18T03:33:20.8833073Z processing existing schema: aten::fft_ihfftn.out(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8834987Z processing existing schema: aten::fft_fftshift(Tensor self, int[1]? dim=None) -> (Tensor) 2022-05-18T03:33:20.8836398Z processing existing schema: aten::fft_ifftshift(Tensor self, int[1]? dim=None) -> (Tensor) 2022-05-18T03:33:20.8837957Z processing existing schema: aten::linalg_lu_factor(Tensor A, *, bool pivot=True) -> (Tensor LU, Tensor pivots) 2022-05-18T03:33:20.8840496Z processing existing schema: aten::linalg_lu_factor.out(Tensor A, *, bool pivot=True, Tensor(a!) LU, Tensor(b!) pivots) -> (Tensor(a!) LU, Tensor(b!) pivots) 2022-05-18T03:33:20.8841430Z processing existing schema: aten::linalg_det(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8843689Z processing existing schema: aten::linalg_det.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8844650Z processing existing schema: aten::det(Tensor self) -> (Tensor) 2022-05-18T03:33:20.8846265Z processing existing schema: aten::element_size(Tensor self) -> (int) 2022-05-18T03:33:20.8847953Z processing existing schema: aten::linalg_ldl_factor(Tensor self, *, bool hermitian=False) -> (Tensor LD, Tensor pivots) 2022-05-18T03:33:20.8850378Z processing existing schema: aten::linalg_ldl_factor.out(Tensor self, *, bool hermitian=False, Tensor(a!) LD, Tensor(b!) pivots) -> (Tensor(a!) LD, Tensor(b!) pivots) 2022-05-18T03:33:20.8851924Z processing existing schema: aten::linalg_matmul(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.8854016Z processing existing schema: aten::linalg_matmul.out(Tensor self, Tensor other, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.8855357Z processing existing schema: aten::inner(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.8857201Z processing existing schema: aten::inner.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8858826Z processing existing schema: aten::outer(Tensor self, Tensor vec2) -> (Tensor) 2022-05-18T03:33:20.8860606Z processing existing schema: aten::outer.out(Tensor self, Tensor vec2, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8862130Z processing existing schema: aten::ger(Tensor self, Tensor vec2) -> (Tensor) 2022-05-18T03:33:20.8864224Z processing existing schema: aten::ger.out(Tensor self, Tensor vec2, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8866404Z processing existing schema: aten::linalg_norm(Tensor self, Scalar? ord=None, int[1]? dim=None, bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.8869161Z processing existing schema: aten::linalg_norm.out(Tensor self, Scalar? ord=None, int[1]? dim=None, bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8871264Z processing existing schema: aten::linalg_norm.ord_str(Tensor self, str ord, int[1]? dim=None, bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.8874148Z processing existing schema: aten::linalg_norm.ord_str_out(Tensor self, str ord, int[1]? dim=None, bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8875480Z processing existing schema: aten::linalg_matrix_power(Tensor self, int n) -> (Tensor) 2022-05-18T03:33:20.8877441Z processing existing schema: aten::linalg_matrix_power.out(Tensor self, int n, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8879436Z processing existing schema: aten::_test_serialization_subcmul(Tensor self, Tensor other, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.8881310Z processing existing schema: aten::_test_string_default(Tensor dummy, str a="\"\'\\", str b="\"\'\\") -> (Tensor) 2022-05-18T03:33:20.8883136Z processing existing schema: aten::_test_ambiguous_defaults.a(Tensor dummy, int a=1, int b=1) -> (Tensor) 2022-05-18T03:33:20.8884937Z processing existing schema: aten::_test_ambiguous_defaults.b(Tensor dummy, int a=2, str b="2") -> (Tensor) 2022-05-18T03:33:20.8887134Z processing existing schema: aten::pad_sequence(Tensor[] sequences, bool batch_first=False, float padding_value=0.) -> (Tensor) 2022-05-18T03:33:20.8888886Z processing existing schema: aten::flatten_dense_tensors(Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:20.8891022Z processing existing schema: aten::unflatten_dense_tensors(Tensor flat, Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:20.8893502Z processing existing schema: aten::nested_tensor(Tensor[] list, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:20.8895592Z processing existing schema: aten::_sparse_broadcast_to(Tensor(a) self, int[] size) -> (Tensor(a)) 2022-05-18T03:33:20.8897809Z processing existing schema: aten::_resize_output_(Tensor(a!) self, int[] size, Device device) -> (Tensor(a!)) 2022-05-18T03:33:20.8899735Z processing existing schema: aten::_mkldnn_transpose_(Tensor(a!) self, int dim0, int dim1) -> (Tensor(a!)) 2022-05-18T03:33:20.8902134Z processing existing schema: aten::sparse_resize_(Tensor(a!) 
self, int[] size, int sparse_dim, int dense_dim) -> (Tensor(a!)) 2022-05-18T03:33:20.8903728Z processing existing schema: aten::values(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.8906093Z processing existing schema: aten::values.str(Dict(str, t) self) -> (t[](*)) 2022-05-18T03:33:20.8908354Z processing existing schema: aten::values.int(Dict(int, t) self) -> (t[](*)) 2022-05-18T03:33:20.8910592Z processing existing schema: aten::values.bool(Dict(bool, t) self) -> (t[](*)) 2022-05-18T03:33:20.8912880Z processing existing schema: aten::values.float(Dict(float, t) self) -> (t[](*)) 2022-05-18T03:33:20.8915263Z processing existing schema: aten::values.complex(Dict(complex, t) self) -> (t[](*)) 2022-05-18T03:33:20.8917535Z processing existing schema: aten::values.Tensor(Dict(Tensor, t) self) -> (t[](*)) 2022-05-18T03:33:20.8919450Z processing existing schema: aten::row_indices(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.8921957Z processing existing schema: aten::_amp_update_scale_(Tensor(a!) self, Tensor(b!) growth_tracker, Tensor found_inf, float scale_growth_factor, float scale_backoff_factor, int growth_interval) -> (Tensor(a!)) 2022-05-18T03:33:20.8924636Z processing existing schema: aten::_conv_depthwise2d.out(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias, int[2] stride, int[2] padding, int[2] dilation, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8926986Z processing existing schema: aten::_conv_depthwise2d(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias, int[2] stride, int[2] padding, int[2] dilation) -> (Tensor) 2022-05-18T03:33:20.8928753Z processing existing schema: aten::resize_as_sparse_(Tensor(a!) self, Tensor the_template) -> (Tensor(a!)) 2022-05-18T03:33:20.8931101Z processing existing schema: aten::sparse_resize_and_clear_(Tensor(a!) self, int[] size, int sparse_dim, int dense_dim) -> (Tensor(a!)) 2022-05-18T03:33:20.8932711Z processing existing schema: aten::_indices(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.8934480Z processing existing schema: aten::indices(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.8936408Z processing existing schema: aten::hspmm.out(Tensor mat1, Tensor mat2, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8937978Z processing existing schema: aten::hspmm(Tensor mat1, Tensor mat2) -> (Tensor) 2022-05-18T03:33:20.8940455Z processing existing schema: aten::sparse_sampled_addmm.out(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.8942421Z processing existing schema: aten::sparse_sampled_addmm(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:20.8944003Z processing existing schema: aten::_values(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:20.8946044Z processing existing schema: aten::_coalesced_(Tensor(a!) self, bool coalesced) -> (Tensor(a!)) 2022-05-18T03:33:20.8948194Z processing existing schema: aten::_amp_foreach_non_finite_check_and_unscale_(Tensor[] self, Tensor(b!) found_inf, Tensor inv_scale) -> () 2022-05-18T03:33:20.8949947Z processing existing schema: aten::mkldnn_linear(Tensor self, Tensor weight, Tensor? 
bias=None) -> (Tensor) 2022-05-18T03:33:20.8951961Z processing existing schema: aten::mkldnn_linear_backward_input(int[] input_size, Tensor grad_output, Tensor weight) -> (Tensor) 2022-05-18T03:33:20.8953831Z processing existing schema: aten::mkldnn_linear_backward_weights(Tensor grad_output, Tensor input, Tensor weight, bool bias_defined) -> (Tensor, Tensor) 2022-05-18T03:33:20.8955762Z processing existing schema: aten::mkldnn_linear_backward(Tensor self, Tensor grad_output, Tensor weight, bool[3] output_mask) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.8958565Z processing existing schema: aten::mkldnn_max_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], int[2] dilation=[1, 1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:20.8961659Z processing existing schema: aten::mkldnn_max_pool2d_backward(Tensor grad_output, Tensor output, Tensor input, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], int[2] dilation=[1, 1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:20.8964386Z processing existing schema: aten::mkldnn_max_pool3d(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=[0, 0, 0], int[3] dilation=[1, 1, 1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:20.8967427Z processing existing schema: aten::mkldnn_max_pool3d_backward(Tensor grad_output, Tensor output, Tensor input, int[3] kernel_size, int[3] stride=[], int[3] padding=[0, 0, 0], int[3] dilation=[1, 1, 1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:20.8969132Z processing existing schema: aten::_mkldnn_reshape(Tensor self, int[] shape) -> (Tensor) 2022-05-18T03:33:20.8970743Z processing existing schema: aten::_mkldnn_transpose(Tensor self, int dim0, int dim1) -> (Tensor) 2022-05-18T03:33:20.8972406Z processing existing schema: aten::_to_dense(Tensor self, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.8975121Z processing existing schema: aten::mkldnn_reorder_conv2d_weight(Tensor self, int[2] padding=[0, 0], int[2] stride=[1, 1], int[2] dilation=[1, 1], int groups=1) -> (Tensor) 2022-05-18T03:33:20.8977949Z processing existing schema: aten::mkldnn_reorder_conv3d_weight(Tensor self, int[3] padding=[0, 0, 0], int[3] stride=[1, 1, 1], int[3] dilation=[1, 1, 1], int groups=1) -> (Tensor) 2022-05-18T03:33:20.8979534Z processing existing schema: aten::mkldnn_adaptive_avg_pool2d(Tensor self, int[2] output_size) -> (Tensor) 2022-05-18T03:33:20.8981153Z processing existing schema: aten::mkldnn_adaptive_avg_pool2d_backward(Tensor grad_output, Tensor self) -> (Tensor) 2022-05-18T03:33:20.8982759Z processing existing schema: aten::_nested_from_padded_and_nested_example(Tensor padded, Tensor nt_example) -> (Tensor) 2022-05-18T03:33:20.8984899Z processing existing schema: aten::to_padded_tensor(Tensor self, float padding, int[]? output_size=None) -> (Tensor) 2022-05-18T03:33:20.8986439Z schema: aten::_nested_tensor_layer_norm(Tensor self, Tensor? weight, Tensor? bias, float eps) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.8988620Z processing existing schema: aten::quantized_batch_norm(Tensor input, Tensor? weight, Tensor? 
bias, Tensor mean, Tensor var, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:20.8991203Z processing existing schema: aten::quantized_max_pool1d(Tensor self, int[1] kernel_size, int[1] stride=[], int[1] padding=[0], int[1] dilation=[1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:20.8993993Z processing existing schema: aten::quantized_max_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], int[2] dilation=[1, 1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:20.8995414Z processing existing schema: aten::q_scale(Tensor self) -> (float) 2022-05-18T03:33:20.8996855Z processing existing schema: aten::q_zero_point(Tensor self) -> (int) 2022-05-18T03:33:20.8998429Z processing existing schema: aten::q_per_channel_scales(Tensor self) -> (Tensor) 2022-05-18T03:33:20.9000066Z processing existing schema: aten::q_per_channel_zero_points(Tensor self) -> (Tensor) 2022-05-18T03:33:20.9001462Z processing existing schema: aten::q_per_channel_axis(Tensor self) -> (int) 2022-05-18T03:33:20.9002985Z processing existing schema: aten::int_repr(Tensor self) -> (Tensor) 2022-05-18T03:33:20.9004482Z processing existing schema: aten::qscheme(Tensor self) -> (QScheme) 2022-05-18T03:33:20.9007046Z processing existing schema: aten::_use_cudnn_ctc_loss(Tensor log_probs, Tensor targets, int[] input_lengths, int[] target_lengths, int blank) -> (bool) 2022-05-18T03:33:20.9009759Z processing existing schema: aten::_cudnn_ctc_loss(Tensor log_probs, Tensor targets, int[] input_lengths, int[] target_lengths, int blank, bool deterministic, bool zero_infinity) -> (Tensor, Tensor) 2022-05-18T03:33:20.9012419Z processing existing schema: aten::_cudnn_rnn_flatten_weight(Tensor[] weight_arr, int weight_stride0, int input_size, int mode, int hidden_size, int proj_size, int num_layers, bool batch_first, bool bidirectional) -> (Tensor) 2022-05-18T03:33:20.9016108Z processing existing schema: aten::_cudnn_rnn(Tensor input, Tensor[] weight, int weight_stride0, Tensor? weight_buf, Tensor hx, Tensor? cx, int mode, int hidden_size, int proj_size, int num_layers, bool batch_first, float dropout, bool train, bool bidirectional, int[] batch_sizes, Tensor? dropout_state) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:20.9020969Z processing existing schema: aten::_cudnn_rnn_backward(Tensor input, Tensor[] weight, int weight_stride0, Tensor weight_buf, Tensor hx, Tensor? cx, Tensor output, Tensor? grad_output, Tensor? grad_hy, Tensor? grad_cy, int mode, int hidden_size, int proj_size, int num_layers, bool batch_first, float dropout, bool train, bool bidirectional, int[] batch_sizes, Tensor? 
dropout_state, Tensor reserve, bool[4] output_mask) -> (Tensor, Tensor, Tensor, Tensor[]) 2022-05-18T03:33:20.9021814Z processing existing schema: aten::_masked_scale(Tensor self, Tensor mask, float scale) -> (Tensor) 2022-05-18T03:33:20.9024037Z processing existing schema: aten::_copy_from(Tensor self, Tensor dst, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:20.9025400Z processing existing schema: aten::_copy_from_and_resize(Tensor self, Tensor dst) -> (Tensor) 2022-05-18T03:33:20.9028523Z processing existing schema: aten::_mps_convolution_transpose(Tensor self, Tensor weight, int[] padding, int[] output_padding, int[] stride, int[] dilation, int groups) -> (Tensor) 2022-05-18T03:33:20.9031836Z processing existing schema: aten::mps_convolution_transpose_backward(Tensor self, Tensor grad_output, Tensor weight, int[] padding, int[] output_padding, int[] stride, int[] dilation, int groups, bool[2] output_mask) -> (Tensor, Tensor) 2022-05-18T03:33:20.9032897Z processing existing schema: aten::cudnn_grid_sampler(Tensor self, Tensor grid) -> (Tensor output) 2022-05-18T03:33:20.9033290Z schema: profiler::_record_function_enter(str name, str? args=None) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:20.9035060Z processing existing schema: aten::cudnn_grid_sampler_backward(Tensor self, Tensor grid, Tensor grad_output) -> (Tensor grad_self, Tensor grad_grid) 2022-05-18T03:33:20.9035456Z schema: profiler::_record_function_enter_new(str name, str? args=None) -> (__torch__.torch.classes.profiler._RecordFunction) found on allowlist, skipping 2022-05-18T03:33:20.9036912Z processing existing schema: aten::_mps_linear(Tensor self, Tensor weight, Tensor? bias=None) -> (Tensor) 2022-05-18T03:33:20.9038858Z processing existing schema: aten::_mps_linear_backward_input(int[] input_size, Tensor grad_output, Tensor weight) -> (Tensor) 2022-05-18T03:33:20.9040472Z processing existing schema: aten::_mps_linear_backward_weights(Tensor grad_output, Tensor input, Tensor weight, bool bias_defined) -> (Tensor, Tensor) 2022-05-18T03:33:20.9042195Z processing existing schema: aten::mps_linear_backward(Tensor self, Tensor grad_output, Tensor weight, bool[3] output_mask) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.9044751Z processing existing schema: aten::_mps_max_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], int[2] dilation=[1, 1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:20.9047466Z processing existing schema: aten::mps_max_pool2d_backward(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], int[2] dilation=[1, 1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:20.9050045Z processing existing schema: aten::_mps_convolution(Tensor self, Tensor weight, Tensor? bias, int[] padding, int[] stride, int[] dilation, int groups) -> (Tensor) 2022-05-18T03:33:20.9053041Z processing existing schema: aten::mps_convolution_backward(Tensor self, Tensor grad_output, Tensor weight, int[] padding, int[] stride, int[] dilation, int groups, bool[3] output_mask) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.9055163Z processing existing schema: aten::miopen_batch_norm(Tensor input, Tensor weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool training, float exponential_average_factor, float epsilon) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.9057285Z processing existing schema: aten::miopen_batch_norm_backward(Tensor input, Tensor grad_output, Tensor weight, Tensor? running_mean, Tensor? 
running_var, Tensor? save_mean, Tensor? save_var, float epsilon) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.9060088Z processing existing schema: aten::miopen_convolution(Tensor self, Tensor weight, Tensor? bias, int[] padding, int[] stride, int[] dilation, int groups, bool benchmark, bool deterministic) -> (Tensor) 2022-05-18T03:33:20.9063377Z processing existing schema: aten::miopen_convolution_transpose(Tensor self, Tensor weight, Tensor? bias, int[] padding, int[] output_padding, int[] stride, int[] dilation, int groups, bool benchmark, bool deterministic) -> (Tensor) 2022-05-18T03:33:20.9066694Z processing existing schema: aten::miopen_depthwise_convolution(Tensor self, Tensor weight, Tensor? bias, int[] padding, int[] stride, int[] dilation, int groups, bool benchmark, bool deterministic) -> (Tensor) 2022-05-18T03:33:20.9069923Z processing existing schema: aten::miopen_rnn(Tensor input, Tensor[] weight, int weight_stride0, Tensor hx, Tensor? cx, int mode, int hidden_size, int num_layers, bool batch_first, float dropout, bool train, bool bidirectional, int[] batch_sizes, Tensor? dropout_state) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:20.9074467Z processing existing schema: aten::miopen_rnn_backward(Tensor input, Tensor[] weight, int weight_stride0, Tensor weight_buf, Tensor hx, Tensor? cx, Tensor output, Tensor? grad_output, Tensor? grad_hy, Tensor? grad_cy, int mode, int hidden_size, int num_layers, bool batch_first, float dropout, bool train, bool bidirectional, int[] batch_sizes, Tensor? dropout_state, Tensor reserve, bool[4] output_mask) -> (Tensor, Tensor, Tensor, Tensor[]) 2022-05-18T03:33:20.9075390Z processing existing schema: aten::_sparse_sparse_matmul(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.9076905Z processing existing schema: aten::_sparse_mask_helper(Tensor t, Tensor mask_indices) -> (Tensor) 2022-05-18T03:33:20.9078427Z processing existing schema: aten::native_norm(Tensor self, Scalar p=2) -> (Tensor) 2022-05-18T03:33:20.9080639Z processing existing schema: aten::native_norm.ScalarOpt_dim_dtype(Tensor self, Scalar? p, int[1] dim, bool keepdim, int? dtype) -> (Tensor) 2022-05-18T03:33:20.9082763Z processing existing schema: aten::_sparse_sum_backward(Tensor grad, Tensor self, int[] dim) -> (Tensor) 2022-05-18T03:33:20.9084643Z processing existing schema: aten::_sparse_csr_sum.dim_dtype(Tensor self, int[1] dim, bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:20.9086665Z processing existing schema: aten::_sparse_csr_prod.dim_dtype(Tensor self, int[1] dim, bool keepdim=False, *, int? 
dtype=None) -> (Tensor) 2022-05-18T03:33:20.9088370Z processing existing schema: aten::_sparse_softmax_backward_data(Tensor grad_output, Tensor output, int dim, Tensor self) -> (Tensor) 2022-05-18T03:33:20.9090038Z processing existing schema: aten::_sparse_log_softmax_backward_data(Tensor grad_output, Tensor output, int dim, Tensor self) -> (Tensor) 2022-05-18T03:33:20.9091440Z processing existing schema: aten::sparse_mask(Tensor self, Tensor mask) -> (Tensor) 2022-05-18T03:33:20.9092848Z processing existing schema: aten::sparse_dim(Tensor self) -> (int) 2022-05-18T03:33:20.9094029Z processing existing schema: aten::_dimI(Tensor self) -> (int) 2022-05-18T03:33:20.9095633Z processing existing schema: aten::dense_dim(Tensor self) -> (int) 2022-05-18T03:33:20.9097214Z processing existing schema: aten::cpu(Tensor(a) self) -> (Tensor(a|b)) 2022-05-18T03:33:20.9098186Z processing existing schema: aten::_dimV(Tensor self) -> (int) 2022-05-18T03:33:20.9099449Z processing existing schema: aten::_nnz(Tensor self) -> (int) 2022-05-18T03:33:20.9100706Z processing existing schema: aten::_coalesce(Tensor self) -> (Tensor) 2022-05-18T03:33:20.9103643Z processing existing schema: aten::_lstm_mps(Tensor input, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:20.9107514Z processing existing schema: aten::lstm_mps_backward(Tensor grad_y, Tensor? grad_hy, Tensor? grad_cy, Tensor z_state, Tensor cell_state_fwd, Tensor input, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor[], Tensor[]) 2022-05-18T03:33:20.9108953Z processing existing schema: aten::_thnn_fused_lstm_cell_backward_impl(Tensor? grad_hy, Tensor? grad_cy, Tensor cx, Tensor cy, Tensor workspace, bool has_bias) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.9110427Z processing existing schema: aten::_thnn_fused_gru_cell_backward(Tensor grad_hy, Tensor workspace, bool has_bias) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:20.9112025Z processing existing schema: aten::_torch_cuda_cu_linker_symbol_op(Tensor self) -> (Tensor) 2022-05-18T03:33:20.9113279Z processing existing schema: aten::record_stream(Tensor(a!) self, Stream s) -> () 2022-05-18T03:33:20.9114419Z processing existing schema: aten::str(t elem) -> (str) 2022-05-18T03:33:20.9115804Z processing existing schema: aten::list(str t) -> (str[]) 2022-05-18T03:33:20.9117674Z processing existing schema: aten::list.t(t[] l) -> (t[]) 2022-05-18T03:33:20.9119008Z processing existing schema: prim::layout(Tensor a) -> (int) 2022-05-18T03:33:20.9120764Z processing existing schema: aten::__range_length(int lo, int hi, int step) -> (int) 2022-05-18T03:33:20.9122179Z processing existing schema: aten::__derive_index(int index, int start, int step) -> (int) 2022-05-18T03:33:20.9123480Z processing existing schema: prim::TupleUnpack(Any tup) -> (...) 
2022-05-18T03:33:20.9124625Z processing existing schema: prim::unchecked_cast(t x) -> (t) 2022-05-18T03:33:20.9125915Z processing existing schema: aten::IntImplicit(Tensor a) -> (int) 2022-05-18T03:33:20.9126915Z processing existing schema: aten::ComplexImplicit(Tensor a) -> (complex) 2022-05-18T03:33:20.9128386Z processing existing schema: aten::FloatImplicit(Tensor a) -> (float) 2022-05-18T03:33:20.9129619Z processing existing schema: aten::ScalarImplicit(Tensor a) -> (Scalar) 2022-05-18T03:33:20.9130815Z processing existing schema: aten::Bool.Tensor(Tensor a) -> (bool) 2022-05-18T03:33:20.9132115Z processing existing schema: aten::Bool.int(int a) -> (bool) 2022-05-18T03:33:20.9133300Z processing existing schema: aten::Bool.float(float a) -> (bool) 2022-05-18T03:33:20.9134555Z processing existing schema: aten::Int.Tensor(Tensor a) -> (int) 2022-05-18T03:33:20.9135688Z processing existing schema: aten::Int.bool(bool a) -> (int) 2022-05-18T03:33:20.9137327Z processing existing schema: aten::Int.float(float a) -> (int) 2022-05-18T03:33:20.9138618Z processing existing schema: aten::Int.Scalar(Scalar a) -> (int) 2022-05-18T03:33:20.9139936Z processing existing schema: aten::Int.str(str a) -> (int) 2022-05-18T03:33:20.9141104Z processing existing schema: aten::Float.Tensor(Tensor a) -> (float) 2022-05-18T03:33:20.9142322Z processing existing schema: aten::Float.Scalar(Scalar a) -> (float) 2022-05-18T03:33:20.9143713Z processing existing schema: aten::Float.int(int a) -> (float) 2022-05-18T03:33:20.9145135Z processing existing schema: aten::Float.bool(bool a) -> (float) 2022-05-18T03:33:20.9146359Z processing existing schema: aten::Float.str(str a) -> (float) 2022-05-18T03:33:20.9147764Z processing existing schema: aten::Complex.Scalar(Scalar a) -> (complex) 2022-05-18T03:33:20.9149204Z processing existing schema: aten::Complex.Tensor_Tensor(Tensor a, Tensor b) -> (complex) 2022-05-18T03:33:20.9150782Z processing existing schema: aten::Complex.int_bool(int x, bool y) -> (complex) 2022-05-18T03:33:20.9152116Z processing existing schema: aten::Complex.bool_int(bool x, int y) -> (complex) 2022-05-18T03:33:20.9153557Z processing existing schema: aten::Complex.float_bool(float x, bool y) -> (complex) 2022-05-18T03:33:20.9155216Z processing existing schema: aten::Complex.bool_float(bool x, float y) -> (complex) 2022-05-18T03:33:20.9156664Z processing existing schema: aten::Complex.float_int(float x, int y) -> (complex) 2022-05-18T03:33:20.9158285Z processing existing schema: aten::Complex.int_float(int x, float y) -> (complex) 2022-05-18T03:33:20.9160080Z processing existing schema: aten::Complex.int_int(int x, int y) -> (complex) 2022-05-18T03:33:20.9160839Z processing existing schema: aten::Complex.bool_bool(bool x, bool y) -> (complex) 2022-05-18T03:33:20.9162488Z processing existing schema: aten::Complex.float_float(float x, float y) -> (complex) 2022-05-18T03:33:20.9163778Z processing existing schema: aten::Complex.Tensor_float(Tensor x, float y) -> (complex) 2022-05-18T03:33:20.9165249Z processing existing schema: aten::Complex.float_Tensor(float x, Tensor y) -> (complex) 2022-05-18T03:33:20.9166664Z processing existing schema: aten::Complex.Tensor_int(Tensor x, int y) -> (complex) 2022-05-18T03:33:20.9167961Z processing existing schema: aten::Complex.int_Tensor(int x, Tensor y) -> (complex) 2022-05-18T03:33:20.9169403Z processing existing schema: aten::Complex.Tensor_bool(Tensor x, bool y) -> (complex) 2022-05-18T03:33:20.9170708Z processing existing schema: aten::Complex.bool_Tensor(bool x, 
Tensor y) -> (complex) 2022-05-18T03:33:20.9172122Z processing existing schema: aten::format(str self, ...) -> (str) 2022-05-18T03:33:20.9174113Z processing existing schema: prim::NumToTensor.Scalar(Scalar a) -> (Tensor) 2022-05-18T03:33:20.9174659Z processing existing schema: prim::NumToTensor.bool(bool a) -> (Tensor) 2022-05-18T03:33:20.9176426Z processing existing schema: prim::RaiseException(str msg, str? cls=None) -> () 2022-05-18T03:33:20.9177883Z processing existing schema: prim::EnumName(AnyEnumType enum) -> (str) 2022-05-18T03:33:20.9179187Z processing existing schema: prim::EnumValue.int(AnyEnumType enum) -> (int) 2022-05-18T03:33:20.9180702Z processing existing schema: prim::EnumValue.float(AnyEnumType enum) -> (float) 2022-05-18T03:33:20.9181884Z processing existing schema: prim::EnumValue.str(AnyEnumType enum) -> (str) 2022-05-18T03:33:20.9183188Z processing existing schema: prim::TupleIndex(Any tup, int i) -> (Any) 2022-05-18T03:33:20.9185145Z processing existing schema: prim::unchecked_unwrap_optional(t(a)? optional) -> (t(a)) 2022-05-18T03:33:20.9186085Z processing existing schema: prim::device(Tensor a) -> (Device) 2022-05-18T03:33:20.9188239Z processing existing schema: prim::dtype(Tensor a) -> (int) 2022-05-18T03:33:20.9188576Z processing existing schema: aten::__not__(bool self) -> (bool) 2022-05-18T03:33:20.9190267Z processing existing schema: aten::__is__(t1 self, t2 obj) -> (bool) 2022-05-18T03:33:20.9192119Z processing existing schema: aten::__isnot__(t1 self, t2 obj) -> (bool) 2022-05-18T03:33:20.9193552Z processing existing schema: aten::dim(Tensor self) -> (int) 2022-05-18T03:33:20.9196286Z processing existing schema: aten::__getitem__.t(t[](a) list, int idx) -> (t(*)) 2022-05-18T03:33:20.9197725Z processing existing schema: aten::__getitem__.str(str s, int index) -> (str) 2022-05-18T03:33:20.9200693Z processing existing schema: aten::__getitem__.Dict_str(Dict(str, t) self, str key) -> (t(*)) 2022-05-18T03:33:20.9202871Z processing existing schema: aten::__getitem__.Dict_int(Dict(int, t) self, int key) -> (t(*)) 2022-05-18T03:33:20.9205249Z processing existing schema: aten::__getitem__.Dict_bool(Dict(bool, t) self, bool key) -> (t(*)) 2022-05-18T03:33:20.9207578Z processing existing schema: aten::__getitem__.Dict_float(Dict(float, t) self, float key) -> (t(*)) 2022-05-18T03:33:20.9210062Z processing existing schema: aten::__getitem__.Dict_complex(Dict(complex, t) self, complex key) -> (t(*)) 2022-05-18T03:33:20.9212388Z processing existing schema: aten::__getitem__.Dict_Tensor(Dict(Tensor, t) self, Tensor key) -> (t(*)) 2022-05-18T03:33:20.9215245Z processing existing schema: aten::append.t(t[](a!) self, t(c -> *) el) -> (t[](a!)) 2022-05-18T03:33:20.9218114Z processing existing schema: aten::_set_item.t(t[](a!) l, int idx, t(b -> *) el) -> (t[](a!)) 2022-05-18T03:33:20.9220934Z processing existing schema: aten::_set_item.str(Dict(str, t)(a!) l, str(b -> *) idx, t(c -> *) v) -> () 2022-05-18T03:33:20.9223561Z processing existing schema: aten::_set_item.int(Dict(int, t)(a!) l, int(b -> *) idx, t(c -> *) v) -> () 2022-05-18T03:33:20.9226433Z processing existing schema: aten::_set_item.bool(Dict(bool, t)(a!) l, bool(b -> *) idx, t(c -> *) v) -> () 2022-05-18T03:33:20.9229195Z processing existing schema: aten::_set_item.float(Dict(float, t)(a!) l, float(b -> *) idx, t(c -> *) v) -> () 2022-05-18T03:33:20.9232116Z processing existing schema: aten::_set_item.complex(Dict(complex, t)(a!) 
l, complex(b -> *) idx, t(c -> *) v) -> () 2022-05-18T03:33:20.9234874Z processing existing schema: aten::_set_item.Tensor(Dict(Tensor, t)(a!) l, Tensor(b -> *) idx, t(c -> *) v) -> () 2022-05-18T03:33:20.9236903Z processing existing schema: aten::clear.t(t[](a!) self) -> () 2022-05-18T03:33:20.9239381Z processing existing schema: aten::clear.str(Dict(str, t)(a!) self) -> () 2022-05-18T03:33:20.9241539Z processing existing schema: aten::clear.int(Dict(int, t)(a!) self) -> () 2022-05-18T03:33:20.9243734Z processing existing schema: aten::clear.bool(Dict(bool, t)(a!) self) -> () 2022-05-18T03:33:20.9246027Z processing existing schema: aten::clear.float(Dict(float, t)(a!) self) -> () 2022-05-18T03:33:20.9248382Z processing existing schema: aten::clear.complex(Dict(complex, t)(a!) self) -> () 2022-05-18T03:33:20.9250729Z processing existing schema: aten::clear.Tensor(Dict(Tensor, t)(a!) self) -> () 2022-05-18T03:33:20.9252894Z processing existing schema: aten::Delete.t(t[](a!) self, int idx) -> () 2022-05-18T03:33:20.9255301Z processing existing schema: aten::Delete.Dict_str(Dict(str, t)(a!) self, str key) -> () 2022-05-18T03:33:20.9257592Z processing existing schema: aten::Delete.Dict_int(Dict(int, t)(a!) self, int key) -> () 2022-05-18T03:33:20.9260063Z processing existing schema: aten::Delete.Dict_bool(Dict(bool, t)(a!) self, bool key) -> () 2022-05-18T03:33:20.9262261Z processing existing schema: aten::Delete.Dict_float(Dict(float, t)(a!) self, float key) -> () 2022-05-18T03:33:20.9264937Z processing existing schema: aten::Delete.Dict_complex(Dict(complex, t)(a!) self, complex key) -> () 2022-05-18T03:33:20.9267225Z processing existing schema: aten::Delete.Dict_Tensor(Dict(Tensor, t)(a!) self, Tensor key) -> () 2022-05-18T03:33:20.9269803Z processing existing schema: aten::insert.t(t[](a!) self, int idx, t(b -> *) el) -> () 2022-05-18T03:33:20.9272233Z processing existing schema: aten::pop.t(t[](a!) self, int idx=-1) -> (t(*)) 2022-05-18T03:33:20.9274810Z processing existing schema: aten::pop.Dict_str(Dict(str, t)(a!) self, str key) -> (t(*)) 2022-05-18T03:33:20.9277443Z processing existing schema: aten::pop.Dict_default_str(Dict(str, t)(a!) self, str key, t default_value) -> (t(*)) 2022-05-18T03:33:20.9280068Z processing existing schema: aten::pop.Dict_int(Dict(int, t)(a!) self, int key) -> (t(*)) 2022-05-18T03:33:20.9282764Z processing existing schema: aten::pop.Dict_default_int(Dict(int, t)(a!) self, int key, t default_value) -> (t(*)) 2022-05-18T03:33:20.9285130Z processing existing schema: aten::pop.Dict_bool(Dict(bool, t)(a!) self, bool key) -> (t(*)) 2022-05-18T03:33:20.9287790Z processing existing schema: aten::pop.Dict_default_bool(Dict(bool, t)(a!) self, bool key, t default_value) -> (t(*)) 2022-05-18T03:33:20.9290338Z processing existing schema: aten::pop.Dict_float(Dict(float, t)(a!) self, float key) -> (t(*)) 2022-05-18T03:33:20.9292986Z processing existing schema: aten::pop.Dict_default_float(Dict(float, t)(a!) self, float key, t default_value) -> (t(*)) 2022-05-18T03:33:20.9295594Z processing existing schema: aten::pop.Dict_complex(Dict(complex, t)(a!) self, complex key) -> (t(*)) 2022-05-18T03:33:20.9298329Z processing existing schema: aten::pop.Dict_default_complex(Dict(complex, t)(a!) self, complex key, t default_value) -> (t(*)) 2022-05-18T03:33:20.9300849Z processing existing schema: aten::pop.Dict_Tensor(Dict(Tensor, t)(a!) self, Tensor key) -> (t(*)) 2022-05-18T03:33:20.9303563Z processing existing schema: aten::pop.Dict_default_Tensor(Dict(Tensor, t)(a!) 
self, Tensor key, t default_value) -> (t(*)) 2022-05-18T03:33:20.9305594Z processing existing schema: aten::len.t(t[] a) -> (int) 2022-05-18T03:33:20.9307012Z processing existing schema: aten::len.Tensor(Tensor t) -> (int) 2022-05-18T03:33:20.9309242Z processing existing schema: aten::len.str(str s) -> (int) 2022-05-18T03:33:20.9311492Z processing existing schema: aten::len.Dict_str(Dict(str, t) self) -> (int) 2022-05-18T03:33:20.9313616Z processing existing schema: aten::len.Dict_int(Dict(int, t) self) -> (int) 2022-05-18T03:33:20.9315887Z processing existing schema: aten::len.Dict_bool(Dict(bool, t) self) -> (int) 2022-05-18T03:33:20.9318071Z processing existing schema: aten::len.Dict_float(Dict(float, t) self) -> (int) 2022-05-18T03:33:20.9320652Z processing existing schema: aten::len.Dict_complex(Dict(complex, t) self) -> (int) 2022-05-18T03:33:20.9322747Z processing existing schema: aten::len.Dict_Tensor(Dict(Tensor, t) self) -> (int) 2022-05-18T03:33:20.9324969Z processing existing schema: aten::len.any(Any[] a) -> (int) 2022-05-18T03:33:20.9326392Z processing existing schema: prim::Uninitialized() -> (Any) 2022-05-18T03:33:20.9328139Z processing existing schema: prim::Print(...) -> () 2022-05-18T03:33:20.9329930Z processing existing schema: prim::VarConcat(...) -> (Tensor) 2022-05-18T03:33:20.9331414Z processing existing schema: prim::VarStack(...) -> (Tensor) 2022-05-18T03:33:20.9333983Z processing existing schema: prim::IfThenElse(bool cond, Any(a) x, Any(b) y) -> (Any(a|b)) 2022-05-18T03:33:20.9335334Z processing existing schema: aten::floordiv.int(int a, int b) -> (int) 2022-05-18T03:33:20.9337605Z processing existing schema: aten::floordiv.float(float a, float b) -> (float) 2022-05-18T03:33:20.9339074Z processing existing schema: aten::floordiv.int_float(int a, float b) -> (float) 2022-05-18T03:33:20.9341187Z processing existing schema: aten::floordiv.float_int(float a, int b) -> (float) 2022-05-18T03:33:20.9342829Z processing existing schema: aten::floordiv(Scalar a, Scalar b) -> (Scalar) 2022-05-18T03:33:20.9344713Z processing existing schema: prim::min.int(int a, int b) -> (int) 2022-05-18T03:33:20.9346548Z processing existing schema: prim::min.float(float a, float b) -> (float) 2022-05-18T03:33:20.9348242Z processing existing schema: prim::min.int_float(int a, float b) -> (float) 2022-05-18T03:33:20.9350522Z processing existing schema: prim::min.float_int(float a, int b) -> (float) 2022-05-18T03:33:20.9351574Z processing existing schema: prim::min(Scalar a, Scalar b) -> (Scalar) 2022-05-18T03:33:20.9353924Z processing existing schema: prim::min.int_list(int[] l, int[] r) -> (int[]) 2022-05-18T03:33:20.9355466Z processing existing schema: prim::min.self_int(int[] self) -> (int) 2022-05-18T03:33:20.9357757Z processing existing schema: prim::min.float_list(float[] l, float[] r) -> (float[]) 2022-05-18T03:33:20.9359529Z processing existing schema: prim::min.self_float(float[] self) -> (float) 2022-05-18T03:33:20.9361782Z processing existing schema: prim::min.bool_list(bool[] l, bool[] r) -> (bool[]) 2022-05-18T03:33:20.9363768Z processing existing schema: prim::min.self_bool(bool[] self) -> (bool) 2022-05-18T03:33:20.9365112Z processing existing schema: prim::max.int(int a, int b) -> (int) 2022-05-18T03:33:20.9366584Z processing existing schema: prim::max.float(float a, float b) -> (float) 2022-05-18T03:33:20.9368079Z processing existing schema: prim::max.int_float(int a, float b) -> (float) 2022-05-18T03:33:20.9369650Z processing existing schema: prim::max.float_int(float a, 
int b) -> (float) 2022-05-18T03:33:20.9371058Z processing existing schema: prim::max(Scalar a, Scalar b) -> (Scalar) 2022-05-18T03:33:20.9373399Z processing existing schema: prim::max.int_list(int[] l, int[] r) -> (int[]) 2022-05-18T03:33:20.9375037Z processing existing schema: prim::max.self_int(int[] self) -> (int) 2022-05-18T03:33:20.9377417Z processing existing schema: prim::max.float_list(float[] l, float[] r) -> (float[]) 2022-05-18T03:33:20.9379131Z processing existing schema: prim::max.self_float(float[] self) -> (float) 2022-05-18T03:33:20.9381592Z processing existing schema: prim::max.bool_list(bool[] l, bool[] r) -> (bool[]) 2022-05-18T03:33:20.9383400Z processing existing schema: prim::max.self_bool(bool[] self) -> (bool) 2022-05-18T03:33:20.9384934Z processing existing schema: aten::ord(str string) -> (int) 2022-05-18T03:33:20.9386494Z processing existing schema: aten::__contains__.int_list(int[] l, int item) -> (bool) 2022-05-18T03:33:20.9388079Z processing existing schema: aten::__contains__.str_list(str[] l, str item) -> (bool) 2022-05-18T03:33:20.9389811Z processing existing schema: aten::__contains__.str(Dict(str, t) dict, str key) -> (bool) 2022-05-18T03:33:20.9391491Z processing existing schema: aten::__contains__.int(Dict(int, t) dict, int key) -> (bool) 2022-05-18T03:33:20.9393189Z processing existing schema: aten::__contains__.bool(Dict(bool, t) dict, bool key) -> (bool) 2022-05-18T03:33:20.9394850Z processing existing schema: aten::__contains__.float(Dict(float, t) dict, float key) -> (bool) 2022-05-18T03:33:20.9396708Z processing existing schema: aten::__contains__.complex(Dict(complex, t) dict, complex key) -> (bool) 2022-05-18T03:33:20.9398409Z processing existing schema: aten::__contains__.Tensor(Dict(Tensor, t) dict, Tensor key) -> (bool) 2022-05-18T03:33:20.9400203Z processing existing schema: aten::__contains__.float_list(float[] l, float item) -> (bool) 2022-05-18T03:33:20.9401763Z processing existing schema: aten::dict() -> (Dict(str, Tensor)) 2022-05-18T03:33:20.9404001Z processing existing schema: aten::dict.str((str, tVal)[] inputs) -> (Dict(str, tVal)) 2022-05-18T03:33:20.9406111Z processing existing schema: aten::dict.Dict_str(Dict(str, t)(a) self) -> (Dict(str, t)) 2022-05-18T03:33:20.9408453Z processing existing schema: aten::dict.int((int, tVal)[] inputs) -> (Dict(int, tVal)) 2022-05-18T03:33:20.9410423Z processing existing schema: aten::dict.Dict_int(Dict(int, t)(a) self) -> (Dict(int, t)) 2022-05-18T03:33:20.9412738Z processing existing schema: aten::dict.bool((bool, tVal)[] inputs) -> (Dict(bool, tVal)) 2022-05-18T03:33:20.9414821Z processing existing schema: aten::dict.Dict_bool(Dict(bool, t)(a) self) -> (Dict(bool, t)) 2022-05-18T03:33:20.9417149Z processing existing schema: aten::dict.float((float, tVal)[] inputs) -> (Dict(float, tVal)) 2022-05-18T03:33:20.9419249Z processing existing schema: aten::dict.Dict_float(Dict(float, t)(a) self) -> (Dict(float, t)) 2022-05-18T03:33:20.9421762Z processing existing schema: aten::dict.complex((complex, tVal)[] inputs) -> (Dict(complex, tVal)) 2022-05-18T03:33:20.9424041Z processing existing schema: aten::dict.Dict_complex(Dict(complex, t)(a) self) -> (Dict(complex, t)) 2022-05-18T03:33:20.9426588Z processing existing schema: aten::dict.Tensor((Tensor, tVal)[] inputs) -> (Dict(Tensor, tVal)) 2022-05-18T03:33:20.9428739Z processing existing schema: aten::dict.Dict_Tensor(Dict(Tensor, t)(a) self) -> (Dict(Tensor, t)) 2022-05-18T03:33:20.9430662Z processing existing schema: aten::backward(Tensor self, 
Tensor? gradient=None, bool? retain_graph=None, bool create_graph=False) -> () 2022-05-18T03:33:20.9433345Z processing existing schema: aten::backward.TensorList(Tensor[] tensors, Tensor?[]? grad_tensors=None, bool? retain_graph=None, bool create_graph=False) -> () 2022-05-18T03:33:20.9434557Z processing existing schema: prim::is_cuda(Tensor a) -> (bool) 2022-05-18T03:33:20.9435657Z processing existing schema: prim::tolist(...) -> (...) 2022-05-18T03:33:20.9437885Z processing existing schema: aten::keys.str(Dict(str, t) self) -> (str[](*)) 2022-05-18T03:33:20.9439830Z processing existing schema: aten::keys.int(Dict(int, t) self) -> (int[](*)) 2022-05-18T03:33:20.9441801Z processing existing schema: aten::keys.bool(Dict(bool, t) self) -> (bool[](*)) 2022-05-18T03:33:20.9443597Z processing existing schema: aten::keys.float(Dict(float, t) self) -> (float[](*)) 2022-05-18T03:33:20.9445529Z processing existing schema: aten::keys.complex(Dict(complex, t) self) -> (complex[](*)) 2022-05-18T03:33:20.9447570Z processing existing schema: aten::keys.Tensor(Dict(Tensor, t) self) -> (Tensor[](*)) 2022-05-18T03:33:20.9450021Z processing existing schema: aten::setdefault.str(Dict(str, t)(a!) self, str(b -> *) key, t(c -> *) default_value) -> (t(*)) 2022-05-18T03:33:20.9452278Z processing existing schema: aten::setdefault.int(Dict(int, t)(a!) self, int(b -> *) key, t(c -> *) default_value) -> (t(*)) 2022-05-18T03:33:20.9454577Z processing existing schema: aten::setdefault.bool(Dict(bool, t)(a!) self, bool(b -> *) key, t(c -> *) default_value) -> (t(*)) 2022-05-18T03:33:20.9456878Z processing existing schema: aten::setdefault.float(Dict(float, t)(a!) self, float(b -> *) key, t(c -> *) default_value) -> (t(*)) 2022-05-18T03:33:20.9459312Z processing existing schema: aten::setdefault.complex(Dict(complex, t)(a!) self, complex(b -> *) key, t(c -> *) default_value) -> (t(*)) 2022-05-18T03:33:20.9461566Z processing existing schema: aten::setdefault.Tensor(Dict(Tensor, t)(a!) self, Tensor(b -> *) key, t(c -> *) default_value) -> (t(*)) 2022-05-18T03:33:20.9463227Z processing existing schema: aten::find(str self, str substr, int start=0, int end=-1) -> (int) 2022-05-18T03:33:20.9464671Z processing existing schema: prim::rangelist(int n) -> (int[]) 2022-05-18T03:33:20.9466203Z processing existing schema: aten::device(str a) -> (Device) 2022-05-18T03:33:20.9467785Z processing existing schema: aten::percentFormat(str self, ...) -> (str) 2022-05-18T03:33:20.9469570Z processing existing schema: prim::requires_grad(Tensor a) -> (bool) 2022-05-18T03:33:20.9470542Z processing existing schema: prim::grad(Tensor a) -> (Tensor(*)) 2022-05-18T03:33:20.9471985Z processing existing schema: prim::is_nested(Tensor a) -> (bool) 2022-05-18T03:33:20.9473518Z processing existing schema: aten::manual_seed(int seed) -> () 2022-05-18T03:33:20.9474619Z processing existing schema: prim::AutogradZero() -> (Tensor) 2022-05-18T03:33:20.9477353Z processing existing schema: prim::ReductionSizes(int[] size, int[] red_axes, bool keepdim=False) -> (int[]) 2022-05-18T03:33:20.9478875Z processing existing schema: prim::BroadcastSizes(...) -> (int[]) 2022-05-18T03:33:20.9480712Z processing existing schema: aten::warn(str message, int stacklevel=2) -> () 2022-05-18T03:33:20.9482152Z processing existing schema: onnx::Reshape(Tensor input, Tensor shape) -> (Tensor) 2022-05-18T03:33:20.9483506Z processing existing schema: onnx::Shape(Tensor t) -> (Tensor) 2022-05-18T03:33:20.9484963Z processing existing schema: prim::AutogradAnyNonZero(...) 
-> (bool) 2022-05-18T03:33:20.9486301Z processing existing schema: prim::AutogradAllZero(...) -> (bool) 2022-05-18T03:33:20.9487674Z processing existing schema: prim::AutogradAllNonZero(...) -> (bool) 2022-05-18T03:33:20.9489236Z processing existing schema: prim::AutogradAdd(Any a, Any b) -> (Any) 2022-05-18T03:33:20.9491698Z processing existing schema: aten::_size_if_not_equal(int[] self_size, int[] other_size) -> (int[]?) 2022-05-18T03:33:20.9493327Z processing existing schema: aten::_unwrap_optional(t(a)? optional) -> (t(a)) 2022-05-18T03:33:20.9495504Z processing existing schema: aten::sorted.int(int[](a) input) -> (int[]) 2022-05-18T03:33:20.9497686Z processing existing schema: aten::sorted.float(float[](a) input) -> (float[]) 2022-05-18T03:33:20.9499944Z processing existing schema: aten::sorted.Tensor(Tensor[](a) input) -> (Tensor[]) 2022-05-18T03:33:20.9502116Z processing existing schema: aten::sorted.bool(bool[](a) input) -> (bool[]) 2022-05-18T03:33:20.9504312Z processing existing schema: aten::sorted.str(str[](a) input) -> (str[]) 2022-05-18T03:33:20.9506571Z processing existing schema: aten::sorted.any(t[](a) self) -> (t[]) 2022-05-18T03:33:20.9508343Z processing existing schema: aten::hex(int i) -> (str) 2022-05-18T03:33:20.9509443Z processing existing schema: aten::oct(int i) -> (str) 2022-05-18T03:33:20.9510990Z processing existing schema: aten::bin(int i) -> (str) 2022-05-18T03:33:20.9512521Z processing existing schema: prim::StringIndex(str string, int index) -> (str) 2022-05-18T03:33:20.9513814Z processing existing schema: aten::chr(int i) -> (str) 2022-05-18T03:33:20.9515493Z processing existing schema: aten::__round_to_zero_floordiv.int(int a, int b) -> (int) 2022-05-18T03:33:20.9517994Z processing existing schema: __getstate__(__torch__.torch.classes.quantized.LinearPackedParamsBase _0) -> ((Tensor, Tensor?) _0) 2022-05-18T03:33:20.9520554Z processing existing schema: __setstate__(__torch__.torch.classes.quantized.LinearPackedParamsBase _0, (Tensor, Tensor?) _1) -> (NoneType _0) 2022-05-18T03:33:20.9521971Z processing existing schema: bias(__torch__.torch.classes.quantized.LinearPackedParamsBase _0) -> (Tensor? _0) 2022-05-18T03:33:20.9524293Z processing existing schema: unpack(__torch__.torch.classes.quantized.LinearPackedParamsBase _0) -> ((Tensor, Tensor?) 
_0) 2022-05-18T03:33:20.9527930Z processing existing schema: __getstate__(__torch__.torch.classes.rnn.CellParamsBase _0) -> ((str, Tensor[], float[], int[], __torch__.torch.classes.quantized.LinearPackedParamsBase[]) _0) 2022-05-18T03:33:20.9531450Z processing existing schema: __setstate__(__torch__.torch.classes.rnn.CellParamsBase _0, (str, Tensor[], float[], int[], __torch__.torch.classes.quantized.LinearPackedParamsBase[]) _1) -> (NoneType _0) 2022-05-18T03:33:20.9533769Z processing existing schema: __getstate__(__torch__.torch.classes.sparse.LinearPackedParamsBase _0) -> ((Tensor, Tensor?, int[]) _0) 2022-05-18T03:33:20.9536485Z processing existing schema: __setstate__(__torch__.torch.classes.sparse.LinearPackedParamsBase _0, (Tensor, Tensor?, int[]) _1) -> (NoneType _0) 2022-05-18T03:33:20.9539412Z processing existing schema: __getstate__(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> ((str, Tensor[], Tensor?[]) _0) 2022-05-18T03:33:20.9541109Z processing existing schema: __setstate__(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0, Any _1) -> (NoneType _0) 2022-05-18T03:33:20.9542521Z processing existing schema: weight(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> (Tensor _0) 2022-05-18T03:33:20.9544099Z processing existing schema: bias(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> (Tensor? _0) 2022-05-18T03:33:20.9546535Z processing existing schema: unpack(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> ((Tensor, Tensor?) _0) 2022-05-18T03:33:20.9548328Z processing existing schema: stride(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> (int[] _0) 2022-05-18T03:33:20.9550256Z processing existing schema: padding(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> (int[] _0) 2022-05-18T03:33:20.9552111Z processing existing schema: output_padding(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> (int[] _0) 2022-05-18T03:33:20.9553922Z processing existing schema: dilation(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> (int[] _0) 2022-05-18T03:33:20.9555450Z processing existing schema: groups(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> (int _0) 2022-05-18T03:33:20.9557025Z processing existing schema: transpose(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> (bool _0) 2022-05-18T03:33:20.9559900Z processing existing schema: __getstate__(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> ((str, Tensor[], Tensor?[]) _0) 2022-05-18T03:33:20.9561612Z processing existing schema: __setstate__(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0, Any _1) -> (NoneType _0) 2022-05-18T03:33:20.9563071Z processing existing schema: weight(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> (Tensor _0) 2022-05-18T03:33:20.9564545Z processing existing schema: bias(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> (Tensor? _0) 2022-05-18T03:33:20.9567073Z processing existing schema: unpack(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> ((Tensor, Tensor?) 
_0) 2022-05-18T03:33:20.9568154Z processing existing schema: stride(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> (int[] _0) 2022-05-18T03:33:20.9569825Z processing existing schema: padding(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> (int[] _0) 2022-05-18T03:33:20.9571506Z processing existing schema: output_padding(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> (int[] _0) 2022-05-18T03:33:20.9573144Z processing existing schema: dilation(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> (int[] _0) 2022-05-18T03:33:20.9574316Z processing existing schema: groups(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> (int _0) 2022-05-18T03:33:20.9575792Z processing existing schema: transpose(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> (bool _0) 2022-05-18T03:33:20.9579172Z processing existing schema: __getstate__(__torch__.torch.classes.quantized.EmbeddingPackedParamsBase _0) -> ((int, Tensor[], float[], int[]) _0) 2022-05-18T03:33:20.9582056Z processing existing schema: __setstate__(__torch__.torch.classes.quantized.EmbeddingPackedParamsBase _0, (int, Tensor[], float[], int[]) _1) -> (NoneType _0) 2022-05-18T03:33:20.9583635Z processing existing schema: bit_rate(__torch__.torch.classes.quantized.EmbeddingPackedParamsBase _0) -> (int _0) 2022-05-18T03:33:20.9585073Z processing existing schema: version(__torch__.torch.classes.quantized.EmbeddingPackedParamsBase _0) -> (int _0) 2022-05-18T03:33:20.9588219Z processing existing schema: __getstate__(__torch__.torch.classes.xnnpack.LinearOpContext _0) -> ((Tensor, Tensor?, Scalar?, Scalar?) _0) 2022-05-18T03:33:20.9591079Z processing existing schema: __setstate__(__torch__.torch.classes.xnnpack.LinearOpContext _0, (Tensor, Tensor?, Scalar?, Scalar?) _1) -> (NoneType _0) 2022-05-18T03:33:20.9595114Z processing existing schema: __getstate__(__torch__.torch.classes.xnnpack.Conv2dOpContext _0) -> ((Tensor, Tensor?, int[], int[], int[], int, Scalar?, Scalar?) _0) 2022-05-18T03:33:20.9599255Z processing existing schema: __setstate__(__torch__.torch.classes.xnnpack.Conv2dOpContext _0, (Tensor, Tensor?, int[], int[], int[], int, Scalar?, Scalar?) _1) -> (NoneType _0) 2022-05-18T03:33:20.9603550Z processing existing schema: __getstate__(__torch__.torch.classes.xnnpack.TransposeConv2dOpContext _0) -> ((Tensor, Tensor?, int[], int[], int[], int[], int, Scalar?, Scalar?) _0) 2022-05-18T03:33:20.9607877Z processing existing schema: __setstate__(__torch__.torch.classes.xnnpack.TransposeConv2dOpContext _0, (Tensor, Tensor?, int[], int[], int[], int[], int, Scalar?, Scalar?) 
_1) -> (NoneType _0) 2022-05-18T03:33:20.9608988Z processing existing schema: __init__(__torch__.torch.classes._nnapi.Compilation _0) -> (NoneType _0) 2022-05-18T03:33:20.9611166Z processing existing schema: init(__torch__.torch.classes._nnapi.Compilation _0, Tensor _1, Tensor[] _2) -> (NoneType _0) 2022-05-18T03:33:20.9613348Z processing existing schema: run(__torch__.torch.classes._nnapi.Compilation _0, Tensor[] _1, Tensor[] _2) -> (NoneType _0) 2022-05-18T03:33:20.9614941Z processing existing schema: __init__(__torch__.torch.classes.backendutils.BackendDebugInfo _0) -> (NoneType _0) 2022-05-18T03:33:20.9616456Z processing existing schema: __init__(__torch__.torch.classes.__backends__.nnc _0) -> (NoneType _0) 2022-05-18T03:33:20.9617882Z processing existing schema: is_available(Any self) -> (bool available) 2022-05-18T03:33:20.9620251Z processing existing schema: compile(Any self, Any processed, Dict(str, Any) method_compile_spec) -> (Dict(str, Any) handles) 2022-05-18T03:33:20.9622283Z processing existing schema: execute(Any self, Any handle, Any[] input) -> (Any[] output) 2022-05-18T03:33:20.9623872Z processing existing schema: starting_lineno(__torch__.torch.classes.profiling.SourceRef _0) -> (int _0) 2022-05-18T03:33:20.9625399Z processing existing schema: text(__torch__.torch.classes.profiling.SourceRef _0) -> (str _0) 2022-05-18T03:33:20.9626976Z processing existing schema: count(__torch__.torch.classes.profiling.InstructionStats _0) -> (int _0) 2022-05-18T03:33:20.9628519Z processing existing schema: duration_ns(__torch__.torch.classes.profiling.InstructionStats _0) -> (int _0) 2022-05-18T03:33:20.9630205Z processing existing schema: source(__torch__.torch.classes.profiling.SourceStats _0) -> (__torch__.torch.classes.profiling.SourceRef _0) 2022-05-18T03:33:20.9632412Z processing existing schema: line_map(__torch__.torch.classes.profiling.SourceStats _0) -> (Dict(int, __torch__.torch.classes.profiling.InstructionStats) _0) 2022-05-18T03:33:20.9633576Z processing existing schema: __init__(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0) 2022-05-18T03:33:20.9635200Z processing existing schema: enable(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0) 2022-05-18T03:33:20.9636735Z processing existing schema: disable(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0) 2022-05-18T03:33:20.9638931Z processing existing schema: _dump_stats(__torch__.torch.classes.profiling._ScriptProfile _0) -> (__torch__.torch.classes.profiling.SourceStats[] _0) 2022-05-18T03:33:20.9640699Z processing existing schema: __init__(__torch__.torch.classes.dist_rpc.WorkerInfo _0, str _1, int _2) -> (NoneType _0) 2022-05-18T03:33:20.9640938Z Found forward compatible schemas for all existing schemas 2022-05-18T03:33:20.9743977Z processing existing schema: prim::rpc_async(...) -> (...) 2022-05-18T03:33:20.9744269Z processing existing schema: prim::rpc_remote(...) -> (...) 2022-05-18T03:33:20.9745297Z processing existing schema: prim::rpc_sync(...) -> (...) 
2022-05-18T03:33:20.9747319Z processing existing schema: aten::dist_backward(int context_id, Tensor[] roots, bool retain_graph=False) -> () 2022-05-18T03:33:20.9748891Z processing existing schema: aten::confirmed_by_owner(RRef(t) self) -> (bool) 2022-05-18T03:33:20.9750676Z processing existing schema: aten::owner_name(RRef(t) self) -> (str) 2022-05-18T03:33:20.9752161Z processing existing schema: aten::owner(RRef(t) self) -> (__torch__.torch.classes.dist_rpc.WorkerInfo) 2022-05-18T03:33:20.9753561Z processing existing schema: aten::is_owner(RRef(t) self) -> (bool) 2022-05-18T03:33:20.9755387Z processing existing schema: aten::local_value(RRef(t) self) -> (t(*)) 2022-05-18T03:33:20.9757219Z processing existing schema: aten::to_here(RRef(t) self, float timeout=60.) -> (t(*)) 2022-05-18T03:33:20.9758425Z processing existing schema: prim::PythonOp(...) -> (...) 2022-05-18T03:33:20.9759961Z processing existing schema: quantization::_FloatToBfloat16Quantized(Tensor input) -> (Tensor) 2022-05-18T03:33:20.9761422Z processing existing schema: quantization::_Bfloat16QuantizedToFloat(Tensor input) -> (Tensor) 2022-05-18T03:33:20.9762690Z processing existing schema: aten::set_grad_enabled(bool val) -> () 2022-05-18T03:33:20.9763923Z processing existing schema: aten::is_grad_enabled() -> (bool) 2022-05-18T03:33:20.9765367Z processing existing schema: aten::_no_grad_zero_(Tensor(a!) tensor) -> (Tensor(a!)) 2022-05-18T03:33:20.9766994Z processing existing schema: aten::_no_grad_fill_(Tensor(a!) tensor, float val) -> (Tensor(a!)) 2022-05-18T03:33:20.9768770Z processing existing schema: aten::_no_grad_normal_(Tensor(a!) tensor, float mean, float std) -> (Tensor(a!)) 2022-05-18T03:33:20.9770372Z processing existing schema: aten::_no_grad_uniform_(Tensor(a!) tensor, float a, float b) -> (Tensor(a!)) 2022-05-18T03:33:20.9771439Z processing existing schema: aten::has_torch_function(...) -> (bool) 2022-05-18T03:33:20.9772640Z processing existing schema: aten::is_scripting() -> (bool) 2022-05-18T03:33:20.9773889Z processing existing schema: aten::_get_tracing_state() -> (bool) 2022-05-18T03:33:20.9775972Z processing existing schema: aten::_pack_sequence(Tensor output, Tensor batch_sizes, Tensor? sorted_indices, Tensor? unsorted_indices) -> (Tensor, Tensor, Tensor?, Tensor?) 2022-05-18T03:33:20.9777412Z processing existing schema: aten::_no_grad_embedding_renorm_(Tensor weight, Tensor input, float max_norm, float norm_type) -> (Tensor) 2022-05-18T03:33:20.9779406Z processing existing schema: aten::_infer_size(int[] a, int[] b) -> (int[]) 2022-05-18T03:33:20.9781118Z processing existing schema: aten::as_tensor.float(float t, *, int? dtype=None, Device? device=None) -> (Tensor) 2022-05-18T03:33:20.9782721Z processing existing schema: aten::as_tensor.int(int t, *, int? dtype=None, Device? device=None) -> (Tensor) 2022-05-18T03:33:20.9784310Z processing existing schema: aten::as_tensor.bool(bool t, *, int? dtype=None, Device? device=None) -> (Tensor) 2022-05-18T03:33:20.9786012Z processing existing schema: aten::as_tensor.complex(complex t, *, int? dtype=None, Device? device=None) -> (Tensor) 2022-05-18T03:33:20.9787901Z processing existing schema: aten::as_tensor(Tensor(a) data, *, int? dtype=None, Device? device=None) -> (Tensor(a|b)) 2022-05-18T03:33:20.9789784Z processing existing schema: aten::as_tensor.list(t[] data, *, int? dtype=None, Device? device=None) -> (Tensor) 2022-05-18T03:33:20.9791613Z processing existing schema: aten::tensor.float(float t, *, int? dtype=None, Device? 
device=None, bool requires_grad=False) -> (Tensor) 2022-05-18T03:33:20.9793340Z processing existing schema: aten::tensor.int(int t, *, int? dtype=None, Device? device=None, bool requires_grad=False) -> (Tensor) 2022-05-18T03:33:20.9795117Z processing existing schema: aten::tensor.bool(bool t, *, int? dtype=None, Device? device=None, bool requires_grad=False) -> (Tensor) 2022-05-18T03:33:20.9796883Z processing existing schema: aten::tensor.complex(complex t, *, int? dtype=None, Device? device=None, bool requires_grad=False) -> (Tensor) 2022-05-18T03:33:20.9798890Z processing existing schema: aten::tensor(t[] data, *, int? dtype=None, Device? device=None, bool requires_grad=False) -> (Tensor) 2022-05-18T03:33:20.9800791Z processing existing schema: _test::get_first(str[][] _0) -> (str _0) 2022-05-18T03:33:20.9802453Z processing existing schema: _test::cat(Tensor[] inputs) -> (Tensor) 2022-05-18T03:33:20.9804378Z processing existing schema: _test::leaky_relu(Tensor self, float v=0.01) -> (Tensor) 2022-05-18T03:33:20.9805936Z processing existing schema: aten::__upsample_bilinear(Tensor input, int? size=None, int? scale_factor=None) -> (Tensor) 2022-05-18T03:33:20.9807876Z processing existing schema: aten::__upsample_bilinear.size_list(Tensor input, int[]? size=None, int? scale_factor=None) -> (Tensor) 2022-05-18T03:33:20.9809821Z processing existing schema: aten::__upsample_bilinear.scale_list(Tensor input, int? size=None, int[]? scale_factor=None) -> (Tensor) 2022-05-18T03:33:20.9812090Z processing existing schema: aten::__upsample_bilinear.size_list_scale_list(Tensor input, int[]? size=None, int[]? scale_factor=None) -> (Tensor) 2022-05-18T03:33:20.9814203Z processing existing schema: aten::__upsample(Tensor input, int? size=None, int? scale_factor=None, str mode="nearest", bool? align_corners=None) -> (Tensor) 2022-05-18T03:33:20.9816646Z processing existing schema: aten::__upsample.size_list(Tensor input, int[]? size=None, int? scale_factor=None, str mode="nearest", bool? align_corners=None) -> (Tensor) 2022-05-18T03:33:20.9818207Z processing existing schema: aten::__upsample_nearest(Tensor input, int? size=None, int? scale_factor=None) -> (Tensor) 2022-05-18T03:33:20.9820151Z processing existing schema: aten::__upsample_nearest.size_list(Tensor input, int[]? size=None, int? scale_factor=None) -> (Tensor) 2022-05-18T03:33:20.9823129Z processing existing schema: aten::__interpolate.scale_list(Tensor input, int? size=None, float[]? scale_factor=None, str mode="nearest", bool? align_corners=None, bool? recompute_scale_factor=None, bool antialias=False) -> (Tensor) 2022-05-18T03:33:20.9826565Z processing existing schema: aten::__interpolate.size_list_scale_list(Tensor input, int[]? size=None, float[]? scale_factor=None, str mode="nearest", bool? align_corners=None, bool? recompute_scale_factor=None, bool antialias=False) -> (Tensor) 2022-05-18T03:33:20.9828900Z processing existing schema: aten::__interpolate(Tensor input, int? size=None, float? scale_factor=None, str mode="nearest", bool? align_corners=None, bool? recompute_scale_factor=None, bool antialias=False) -> (Tensor) 2022-05-18T03:33:20.9831696Z processing existing schema: aten::__interpolate.size_list(Tensor input, int[]? size=None, float? scale_factor=None, str mode="nearest", bool? align_corners=None, bool? 
recompute_scale_factor=None, bool antialias=False) -> (Tensor) 2022-05-18T03:33:20.9832623Z processing existing schema: prim::TimePoint() -> (int) 2022-05-18T03:33:20.9834203Z processing existing schema: prim::AddStatValue(str key, int val) -> () 2022-05-18T03:33:20.9835982Z processing existing schema: aten::wait(Future(t) self) -> (t) 2022-05-18T03:33:20.9836938Z processing existing schema: prim::IgnoredPythonOp(...) -> (NoneType) 2022-05-18T03:33:20.9838836Z processing existing schema: aten::save(t item, str filename) -> () 2022-05-18T03:33:20.9842578Z processing existing schema: aten::grad(Tensor[] outputs, Tensor[] inputs, Tensor?[]? grad_outputs=None, bool? retain_graph=None, bool create_graph=False, bool allow_unused=False) -> (Tensor?[]) 2022-05-18T03:33:20.9843472Z processing existing schema: prim::BailoutTemplate() -> (int) 2022-05-18T03:33:20.9844957Z processing existing schema: prim::BailOut(...) -> (Tensor(a)) 2022-05-18T03:33:20.9846652Z processing existing schema: prim::Guard(Tensor(a) t) -> (Tensor(a)) 2022-05-18T03:33:20.9847861Z processing existing schema: prim::FallbackGraph(...) -> (...) 2022-05-18T03:33:20.9848635Z processing existing schema: prim::TypeCheck(...) -> (...) 2022-05-18T03:33:20.9850823Z processing existing schema: aten::_grad_sum_to_size(Tensor(a) self, int[]? size) -> (Tensor(a)) 2022-05-18T03:33:20.9852882Z processing existing schema: prim::ChunkSizes(...) -> (...) 2022-05-18T03:33:20.9853145Z processing existing schema: prim::ConstantChunk(...) -> (...) 2022-05-18T03:33:20.9854582Z processing existing schema: prim::RequiresGradCheck(...) -> (...) 2022-05-18T03:33:20.9855343Z processing existing schema: prim::FusionGroup(...) -> (...) 2022-05-18T03:33:20.9856696Z processing existing schema: prim::profile_ivalue(...) -> (...) 2022-05-18T03:33:20.9858124Z processing existing schema: prim::profile(...) -> (...) 2022-05-18T03:33:20.9859350Z processing existing schema: aten::hash.generic(t value) -> (int) 2022-05-18T03:33:20.9860785Z processing existing schema: prim::ModuleContainerIndex.list(Any self, int ind) -> (Any) 2022-05-18T03:33:20.9862308Z processing existing schema: prim::ModuleContainerIndex.dict(Any self, str ind) -> (Any) 2022-05-18T03:33:20.9863493Z processing existing schema: prim::id(AnyClassType? 
x) -> (int) 2022-05-18T03:33:20.9865085Z processing existing schema: aten::divmod.int(int x, int y) -> (int, int) 2022-05-18T03:33:20.9866678Z processing existing schema: aten::divmod.float(float x, float y) -> (float, float) 2022-05-18T03:33:20.9868147Z processing existing schema: aten::divmod.int_float(int x, float y) -> (float, float) 2022-05-18T03:33:20.9869664Z processing existing schema: aten::divmod.float_int(float x, int y) -> (float, float) 2022-05-18T03:33:20.9871491Z processing existing schema: aten::_list_to_tensor(int[] self) -> (Tensor) 2022-05-18T03:33:20.9873080Z processing existing schema: aten::_tensor_to_list(Tensor self) -> (int[]) 2022-05-18T03:33:20.9874010Z processing existing schema: prim::abs.int(int a) -> (int) 2022-05-18T03:33:20.9875496Z processing existing schema: prim::abs.float(float a) -> (float) 2022-05-18T03:33:20.9876954Z processing existing schema: prim::abs.complex(complex a) -> (float) 2022-05-18T03:33:20.9878109Z processing existing schema: prim::abs.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.9879529Z processing existing schema: prim::abs(Tensor x) -> (Tensor) 2022-05-18T03:33:20.9881105Z processing existing schema: aten::fabs.int(int a) -> (float) 2022-05-18T03:33:20.9883177Z processing existing schema: aten::fabs.float(float a) -> (float) 2022-05-18T03:33:20.9883659Z processing existing schema: aten::fabs.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.9885065Z processing existing schema: aten::gamma.int(int a) -> (float) 2022-05-18T03:33:20.9886399Z processing existing schema: aten::gamma.float(float a) -> (float) 2022-05-18T03:33:20.9887783Z processing existing schema: aten::gamma.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:20.9889153Z processing existing schema: aten::factorial.int(int a) -> (int) 2022-05-18T03:33:20.9890602Z processing existing schema: aten::_softmax(Tensor self, int dim, bool half_to_float) -> (Tensor) 2022-05-18T03:33:20.9892348Z processing existing schema: aten::_softmax.out(Tensor self, int dim, bool half_to_float, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.9893761Z processing existing schema: aten::sinc_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.9895516Z processing existing schema: aten::logit_(Tensor(a!) self, float? eps=None) -> (Tensor(a!)) 2022-05-18T03:33:20.9896982Z processing existing schema: aten::mish_backward(Tensor grad_output, Tensor self) -> (Tensor) 2022-05-18T03:33:20.9898438Z processing existing schema: aten::mish_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.9899722Z processing existing schema: aten::mish(Tensor self) -> (Tensor) 2022-05-18T03:33:20.9901349Z processing existing schema: aten::mish.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.9902859Z processing existing schema: aten::silu_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:20.9904512Z processing existing schema: aten::hardshrink_backward(Tensor grad_out, Tensor self, Scalar lambd) -> (Tensor) 2022-05-18T03:33:20.9906606Z processing existing schema: aten::hardshrink_backward.grad_input(Tensor grad_out, Tensor self, Scalar lambd, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.9908147Z processing existing schema: aten::hardshrink(Tensor self, Scalar lambd=0.5) -> (Tensor) 2022-05-18T03:33:20.9910546Z processing existing schema: aten::hardshrink.out(Tensor self, Scalar lambd=0.5, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.9912254Z processing existing schema: aten::gelu_backward(Tensor grad_output, Tensor self, *, str approximate="none") -> (Tensor) 2022-05-18T03:33:20.9914446Z processing existing schema: aten::gelu_backward.grad_input(Tensor grad_output, Tensor self, *, str approximate="none", Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:20.9916302Z processing existing schema: aten::gelu_(Tensor(a!) self, *, str approximate="none") -> (Tensor(a!)) 2022-05-18T03:33:20.9917995Z processing existing schema: aten::gelu(Tensor self, *, str approximate="none") -> (Tensor) 2022-05-18T03:33:20.9920372Z processing existing schema: aten::gelu.out(Tensor self, *, str approximate="none", Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.9921924Z processing existing schema: aten::prelu_backward(Tensor grad_output, Tensor self, Tensor weight) -> (Tensor, Tensor) 2022-05-18T03:33:20.9923431Z processing existing schema: aten::native_channel_shuffle(Tensor self, int groups) -> (Tensor) 2022-05-18T03:33:20.9925254Z processing existing schema: aten::batch_norm_update_stats(Tensor input, Tensor? running_mean, Tensor? running_var, float momentum) -> (Tensor, Tensor) 2022-05-18T03:33:20.9926799Z processing existing schema: quantized::conv_transpose3d_dynamic(Tensor qx, __torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weight, bool reduce_range=False) -> (Tensor) 2022-05-18T03:33:20.9929381Z processing existing schema: aten::native_batch_norm_backward(Tensor grad_out, Tensor input, Tensor? weight, Tensor? running_mean, Tensor? running_var, Tensor? save_mean, Tensor? save_invstd, bool train, float eps, bool[3] output_mask) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.9930510Z processing existing schema: aten::narrow_copy(Tensor self, int dim, int start, int length) -> (Tensor) 2022-05-18T03:33:20.9932410Z processing existing schema: aten::narrow_copy.out(Tensor self, int dim, int start, int length, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.9934015Z processing existing schema: aten::narrow_copy.SymInt(Tensor self, int dim, int start, SymInt length) -> (Tensor) 2022-05-18T03:33:20.9935772Z processing existing schema: aten::mvlgamma.out(Tensor self, int p, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.9937245Z processing existing schema: aten::mvlgamma(Tensor self, int p) -> (Tensor) 2022-05-18T03:33:20.9939975Z processing existing schema: aten::nan_to_num.out(Tensor self, float? nan=None, float? posinf=None, float? neginf=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.9941300Z processing existing schema: aten::nan_to_num(Tensor self, float? nan=None, float? posinf=None, float? neginf=None) -> (Tensor) 2022-05-18T03:33:20.9943961Z processing existing schema: aten::native_layer_norm_backward(Tensor grad_out, Tensor input, int[] normalized_shape, Tensor mean, Tensor rstd, Tensor? weight, Tensor? bias, bool[3] output_mask) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.9945691Z processing existing schema: aten::kl_div_backward(Tensor grad_output, Tensor self, Tensor target, int reduction=1, *, bool log_target=False) -> (Tensor) 2022-05-18T03:33:20.9947404Z processing existing schema: aten::isin.Tensor_Tensor(Tensor elements, Tensor test_elements, *, bool assume_unique=False, bool invert=False) -> (Tensor) 2022-05-18T03:33:20.9949441Z processing existing schema: aten::isin.Tensor_Tensor_out(Tensor elements, Tensor test_elements, *, bool assume_unique=False, bool invert=False, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:20.9951192Z processing existing schema: aten::isin.Tensor_Scalar(Tensor elements, Scalar test_element, *, bool assume_unique=False, bool invert=False) -> (Tensor) 2022-05-18T03:33:20.9953257Z processing existing schema: aten::isin.Tensor_Scalar_out(Tensor elements, Scalar test_element, *, bool assume_unique=False, bool invert=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.9954987Z processing existing schema: aten::isin.Scalar_Tensor(Scalar element, Tensor test_elements, *, bool assume_unique=False, bool invert=False) -> (Tensor) 2022-05-18T03:33:20.9957220Z processing existing schema: aten::isin.Scalar_Tensor_out(Scalar element, Tensor test_elements, *, bool assume_unique=False, bool invert=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.9959707Z processing existing schema: aten::_index_put_impl_(Tensor(a!) self, Tensor?[] indices, Tensor values, bool accumulate=False, bool unsafe=False) -> (Tensor(a!)) 2022-05-18T03:33:20.9961945Z processing existing schema: aten::_index_put_impl_.hacked_twin(Tensor(a!) self, Tensor[] indices, Tensor values, bool accumulate=False, bool unsafe=False) -> (Tensor(a!)) 2022-05-18T03:33:20.9963608Z processing existing schema: aten::index_copy_(Tensor(a!) self, int dim, Tensor index, Tensor source) -> (Tensor(a!)) 2022-05-18T03:33:20.9965190Z processing existing schema: aten::index_copy_.dimname(Tensor(a!) self, str dim, Tensor index, Tensor source) -> (Tensor(a!)) 2022-05-18T03:33:20.9967035Z processing existing schema: aten::index.Tensor(Tensor self, Tensor?[] indices) -> (Tensor) 2022-05-18T03:33:20.9968552Z processing existing schema: aten::index.Tensor_hacked_twin(Tensor self, Tensor[] indices) -> (Tensor) 2022-05-18T03:33:20.9970188Z processing existing schema: aten::index.str(str self, str substr, int start=0, int end=-1) -> (int) 2022-05-18T03:33:20.9971628Z processing existing schema: aten::index.list_int(int[] self, int el) -> (int) 2022-05-18T03:33:20.9973318Z processing existing schema: aten::index.list_float(float[] self, float el) -> (int) 2022-05-18T03:33:20.9974788Z processing existing schema: aten::index.list_bool(bool[] self, bool el) -> (int) 2022-05-18T03:33:20.9976485Z processing existing schema: aten::index.list_Tensor(Tensor[] self, Tensor el) -> (int) 2022-05-18T03:33:20.9978091Z processing existing schema: aten::index.list_str(str[] self, str el) -> (int) 2022-05-18T03:33:20.9979952Z processing existing schema: aten::_fft_r2c(Tensor self, int[] dim, int normalization, bool onesided) -> (Tensor) 2022-05-18T03:33:20.9982172Z processing existing schema: aten::_fft_r2c.out(Tensor self, int[] dim, int normalization, bool onesided, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.9984190Z processing existing schema: aten::native_group_norm(Tensor input, Tensor? weight, Tensor? bias, int N, int C, int HxW, int group, float eps) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:20.9985981Z schema: aten::grid_sampler_3d_backward(Tensor grad_output, Tensor input, Tensor grid, int interpolation_mode, int padding_mode, bool align_corners, bool[2] output_mask) -> (Tensor, Tensor) found on allowlist, skipping 2022-05-18T03:33:20.9987842Z processing existing schema: aten::grid_sampler_2d_backward(Tensor grad_output, Tensor input, Tensor grid, int interpolation_mode, int padding_mode, bool align_corners, bool[2] output_mask) -> (Tensor, Tensor) 2022-05-18T03:33:20.9989070Z processing existing schema: aten::lcm_(Tensor(a!) 
self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.9990447Z processing existing schema: aten::lcm(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.9992233Z processing existing schema: aten::lcm.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.9993747Z processing existing schema: aten::gcd_(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:20.9995317Z processing existing schema: aten::gcd(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:20.9997145Z processing existing schema: aten::gcd.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:20.9997840Z processing existing schema: aten::gcd.int(int a, int b) -> (int) 2022-05-18T03:33:21.0000967Z processing existing schema: aten::_embedding_bag_per_sample_weights_backward(Tensor grad, Tensor weight, Tensor indices, Tensor offsets, Tensor offset2bag, int mode, int padding_idx=-1) -> (Tensor) 2022-05-18T03:33:21.0002380Z schema: aten::_embedding_bag_dense_backward(Tensor grad, Tensor indices, Tensor offset2bag, Tensor bag_size, Tensor maximum_indices, int num_weights, bool scale_grad_by_freq, int mode, Tensor? per_sample_weights, int padding_idx=-1) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.0003492Z processing existing schema: aten::logit(Tensor self, float? eps=None) -> (Tensor) 2022-05-18T03:33:21.0006430Z processing existing schema: aten::logit.out(Tensor self, float? eps=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0008565Z processing existing schema: aten::_cummin_helper(Tensor self, Tensor(a!) values, Tensor(b!) indices, int dim) -> () 2022-05-18T03:33:21.0010655Z processing existing schema: aten::bincount(Tensor self, Tensor? weights=None, int minlength=0) -> (Tensor) 2022-05-18T03:33:21.0014462Z processing existing schema: _quantized::conv2d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv2dPackedParamsBase) 2022-05-18T03:33:21.0016417Z processing existing schema: aten::binary_cross_entropy_backward(Tensor grad_output, Tensor self, Tensor target, Tensor? weight=None, int reduction=1) -> (Tensor) 2022-05-18T03:33:21.0019648Z processing existing schema: aten::binary_cross_entropy_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, Tensor? weight=None, int reduction=1, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.0023508Z processing existing schema: quantized::conv_transpose1d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] output_padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv2dPackedParamsBase) 2022-05-18T03:33:21.0024864Z processing existing schema: aten::argmin(Tensor self, int? dim=None, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.0027950Z processing existing schema: aten::argmin.out(Tensor self, int? dim=None, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0029948Z processing existing schema: quantized::add_scalar_relu_out(Tensor qa, Scalar b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:21.0032511Z processing existing schema: quantized::add_scalar_relu_out.Tensor(Tensor qa, Tensor b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:21.0034307Z processing existing schema: aten::argmax(Tensor self, int? dim=None, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.0037163Z processing existing schema: aten::argmax.out(Tensor self, int? dim=None, bool keepdim=False, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.0039339Z processing existing schema: quantized::add_scalar_out(Tensor qa, Scalar b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:21.0041808Z processing existing schema: quantized::add_scalar_out.Tensor(Tensor qa, Tensor b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:21.0043425Z processing existing schema: aten::native_dropout_backward(Tensor grad_output, Tensor mask, float scale) -> (Tensor) 2022-05-18T03:33:21.0045115Z processing existing schema: aten::_assert_async(Tensor self) -> () 2022-05-18T03:33:21.0048771Z processing existing schema: aten::_sparse_coo_tensor_unsafe(Tensor indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0051637Z processing existing schema: aten::sparse_coo_tensor.size(int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:21.0054390Z processing existing schema: aten::sparse_coo_tensor.indices(Tensor indices, Tensor values, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0057652Z processing existing schema: aten::sparse_coo_tensor.indices_size(Tensor indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0060932Z processing existing schema: aten::_sparse_bsc_tensor_unsafe(Tensor ccol_indices, Tensor row_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0064211Z processing existing schema: aten::_sparse_bsr_tensor_unsafe(Tensor crow_indices, Tensor col_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0067666Z processing existing schema: aten::_sparse_compressed_tensor_unsafe(Tensor compressed_indices, Tensor plain_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0070866Z processing existing schema: aten::sparse_bsc_tensor.ccol_row_value_size(Tensor ccol_indices, Tensor row_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:21.0073749Z processing existing schema: aten::sparse_bsc_tensor.ccol_row_value(Tensor ccol_indices, Tensor row_indices, Tensor values, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:21.0076523Z processing existing schema: aten::_efficientzerotensor(int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0079768Z processing existing schema: aten::range.step(Scalar start, Scalar end, Scalar step=1, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0082273Z processing existing schema: aten::range(Scalar start, Scalar end, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0084522Z processing existing schema: aten::range.out(Scalar start, Scalar end, Scalar step=1, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0087284Z processing existing schema: aten::scalar_tensor(Scalar s, *, int? dtype=None, int? 
layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0090596Z processing existing schema: aten::ones.names(int[] size, *, str[]? names, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0093351Z processing existing schema: aten::ones(int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0095688Z processing existing schema: aten::ones.out(int[] size, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0098890Z processing existing schema: aten::logspace(Scalar start, Scalar end, int steps, float base=10., *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0101419Z processing existing schema: aten::logspace.out(Scalar start, Scalar end, int steps, float base=10., *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0104292Z processing existing schema: aten::linspace(Scalar start, Scalar end, int steps, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0106543Z processing existing schema: aten::linspace.out(Scalar start, Scalar end, int steps, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0109312Z processing existing schema: aten::kaiser_window(int window_length, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0112053Z processing existing schema: aten::kaiser_window.periodic(int window_length, bool periodic, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0114949Z processing existing schema: aten::kaiser_window.beta(int window_length, bool periodic, float beta, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0117629Z processing existing schema: aten::hamming_window(int window_length, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0120915Z processing existing schema: aten::hamming_window.periodic(int window_length, bool periodic, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0123686Z processing existing schema: aten::hamming_window.periodic_alpha(int window_length, bool periodic, float alpha, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0127052Z processing existing schema: aten::hamming_window.periodic_alpha_beta(int window_length, bool periodic, float alpha, float beta, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0129071Z processing existing schema: aten::hann_window(int window_length, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0132043Z processing existing schema: aten::hann_window.periodic(int window_length, bool periodic, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0134984Z processing existing schema: aten::from_file(str filename, bool? shared=None, int? size=0, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0138478Z schema: aten::full.names(int[] size, Scalar fill_value, *, str[]? names, int? dtype=None, int? 
layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) has valid upgrader, skipping 2022-05-18T03:33:21.0141457Z schema: aten::full(int[] size, Scalar fill_value, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) has valid upgrader, skipping 2022-05-18T03:33:21.0143829Z schema: aten::full.out(int[] size, Scalar fill_value, *, Tensor(a!) out) -> (Tensor(a!)) has valid upgrader, skipping 2022-05-18T03:33:21.0146565Z processing existing schema: aten::bartlett_window(int window_length, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0149397Z processing existing schema: aten::bartlett_window.periodic(int window_length, bool periodic, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0151337Z processing existing schema: _quantized::conv2d_relu(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.0154172Z processing existing schema: aten::arange(Scalar end, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0156851Z processing existing schema: aten::arange.start(Scalar start, Scalar end, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0159827Z processing existing schema: aten::arange.start_step(Scalar start, Scalar end, Scalar step, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0162073Z processing existing schema: aten::arange.start_out(Scalar start, Scalar end, Scalar step=1, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0164124Z processing existing schema: aten::arange.out(Scalar end, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0167278Z processing existing schema: quantized::quantized_rnn_relu_cell_dynamic(Tensor input, Tensor hx, __torch__.torch.classes.quantized.LinearPackedParamsBase w_ih, __torch__.torch.classes.quantized.LinearPackedParamsBase w_hh, Tensor b_ih, Tensor b_hh) -> (Tensor) 2022-05-18T03:33:21.0169796Z processing existing schema: aten::_cudnn_init_dropout_state(float dropout, bool train, int dropout_seed, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:21.0171617Z processing existing schema: prepacked::conv2d_transpose_clamp_run(Tensor X, __torch__.torch.classes.xnnpack.TransposeConv2dOpContext W_prepack) -> (Tensor Y) 2022-05-18T03:33:21.0175371Z processing existing schema: aten::cudnn_convolution_relu(Tensor self, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, int groups) -> (Tensor) 2022-05-18T03:33:21.0176744Z processing existing schema: prepacked::conv2d_clamp_run(Tensor X, __torch__.torch.classes.xnnpack.Conv2dOpContext W_prepack) -> (Tensor Y) 2022-05-18T03:33:21.0180889Z processing existing schema: aten::cudnn_convolution_add_relu(Tensor self, Tensor weight, Tensor z, Scalar? alpha, Tensor? bias, int[] stride, int[] padding, int[] dilation, int groups) -> (Tensor) 2022-05-18T03:33:21.0184562Z processing existing schema: prepacked::conv2d_transpose_clamp_prepack(Tensor W, Tensor? B, int[2] stride, int[2] padding, int[2] output_padding, int[2] dilation, int groups, Scalar? output_min=None, Scalar? 
output_max=None) -> (__torch__.torch.classes.xnnpack.TransposeConv2dOpContext) 2022-05-18T03:33:21.0187871Z processing existing schema: aten::cudnn_convolution(Tensor self, Tensor weight, int[] padding, int[] stride, int[] dilation, int groups, bool benchmark, bool deterministic, bool allow_tf32) -> (Tensor) 2022-05-18T03:33:21.0190922Z processing existing schema: prepacked::conv2d_clamp_prepack(Tensor W, Tensor? B, int[2] stride, int[2] padding, int[2] dilation, int groups, Scalar? output_min=None, Scalar? output_max=None) -> (__torch__.torch.classes.xnnpack.Conv2dOpContext) 2022-05-18T03:33:21.0193920Z processing existing schema: aten::cudnn_batch_norm_backward(Tensor input, Tensor grad_output, Tensor weight, Tensor? running_mean, Tensor? running_var, Tensor? save_mean, Tensor? save_var, float epsilon, Tensor reserveSpace) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.0195019Z processing existing schema: prepacked::linear_clamp_run(Tensor X, __torch__.torch.classes.xnnpack.LinearOpContext W_prepack) -> (Tensor Y) 2022-05-18T03:33:21.0198525Z processing existing schema: aten::cudnn_batch_norm(Tensor input, Tensor weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool training, float exponential_average_factor, float epsilon) -> (Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:21.0200841Z processing existing schema: prepacked::linear_clamp_prepack(Tensor W, Tensor? B=None, Scalar? output_min=None, Scalar? output_max=None) -> (__torch__.torch.classes.xnnpack.LinearOpContext) 2022-05-18T03:33:21.0202897Z processing existing schema: aten::cudnn_affine_grid_generator_backward(Tensor grad, int N, int C, int H, int W) -> (Tensor grad_theta) 2022-05-18T03:33:21.0203963Z schema: prepacked::unpack_prepacked_sizes_linear(Any W_prepack) -> (Any) found on allowlist, skipping 2022-05-18T03:33:21.0206798Z processing existing schema: aten::cudnn_affine_grid_generator(Tensor theta, int N, int C, int H, int W) -> (Tensor grid) 2022-05-18T03:33:21.0207273Z schema: prepacked::unpack_prepacked_sizes_conv2d(Any W_prepack) -> (Any) found on allowlist, skipping 2022-05-18T03:33:21.0211162Z processing existing schema: aten::ctc_loss.IntList(Tensor log_probs, Tensor targets, int[] input_lengths, int[] target_lengths, int blank=0, int reduction=1, bool zero_infinity=False) -> (Tensor) 2022-05-18T03:33:21.0213887Z processing existing schema: aten::ctc_loss.Tensor(Tensor log_probs, Tensor targets, Tensor input_lengths, Tensor target_lengths, int blank=0, int reduction=1, bool zero_infinity=False) -> (Tensor) 2022-05-18T03:33:21.0215228Z processing existing schema: _quantized::linear_prepack_legacy(Tensor W, Tensor? B=None) -> (Tensor W_prepack) 2022-05-18T03:33:21.0217697Z processing existing schema: aten::crow_indices(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.0219892Z processing existing schema: _quantized::conv3d_relu(Tensor qx, __torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.0222797Z processing existing schema: aten::cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=1, int ignore_index=-100, float label_smoothing=0.) -> (Tensor) 2022-05-18T03:33:21.0224541Z processing existing schema: quantized::linear_unpack_fp16(__torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack) -> (Tensor W_origin, Tensor? B_origin) 2022-05-18T03:33:21.0226884Z processing existing schema: quantized::linear_unpack_fp16.legacy(Tensor W_prepack) -> (Tensor W_origin, Tensor? 
B_origin) 2022-05-18T03:33:21.0229198Z processing existing schema: aten::cov(Tensor self, *, int correction=1, Tensor? fweights=None, Tensor? aweights=None) -> (Tensor) 2022-05-18T03:33:21.0231065Z processing existing schema: quantized::linear_unpack(__torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack) -> (Tensor W_origin, Tensor? B_origin) 2022-05-18T03:33:21.0233176Z processing existing schema: quantized::linear_unpack.legacy(Tensor W_prepack) -> (Tensor W_origin, Tensor? B_origin) 2022-05-18T03:33:21.0235579Z processing existing schema: aten::count_nonzero.dim_IntList(Tensor self, int[] dim) -> (Tensor) 2022-05-18T03:33:21.0237174Z processing existing schema: aten::count_nonzero(Tensor self, int? dim=None) -> (Tensor) 2022-05-18T03:33:21.0239722Z processing existing schema: quantized::conv_transpose3d_transpose(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int) 2022-05-18T03:33:21.0242383Z processing existing schema: aten::cosine_similarity(Tensor x1, Tensor x2, int dim=1, float eps=1e-08) -> (Tensor) 2022-05-18T03:33:21.0244794Z processing existing schema: aten::eye(int n, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0247447Z processing existing schema: aten::eye.m(int n, int m, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0249192Z processing existing schema: aten::eye.out(int n, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0251658Z processing existing schema: aten::eye.m_out(int n, int m, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0253154Z processing existing schema: prim::index(Device self) -> (int?) 2022-05-18T03:33:21.0255243Z processing existing schema: quantized::conv_transpose3d_groups(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int) 2022-05-18T03:33:21.0257751Z processing existing schema: aten::cosine_embedding_loss(Tensor input1, Tensor input2, Tensor target, float margin=0., int reduction=1) -> (Tensor) 2022-05-18T03:33:21.0259908Z processing existing schema: quantized::conv_transpose3d_output_padding(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:21.0261031Z processing existing schema: aten::cosh(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0263680Z processing existing schema: aten::cosh.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0264862Z processing existing schema: aten::cosh.int(int a) -> (float) 2022-05-18T03:33:21.0266897Z processing existing schema: aten::cosh.float(float a) -> (float) 2022-05-18T03:33:21.0268459Z processing existing schema: aten::cosh.complex(complex a) -> (complex) 2022-05-18T03:33:21.0270296Z processing existing schema: aten::cosh.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.0272823Z processing existing schema: quantized::conv_transpose3d_unpack(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (Tensor unpacked_weights, Tensor? B_origin) 2022-05-18T03:33:21.0273674Z processing existing schema: aten::corrcoef(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0276081Z processing existing schema: aten::exp2_(Tensor(a!) 
self) -> (Tensor(a!)) 2022-05-18T03:33:21.0277238Z processing existing schema: prim::is_mkldnn(Tensor a) -> (bool) 2022-05-18T03:33:21.0280435Z processing existing schema: quantized::conv_transpose2d_output_padding(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:21.0281055Z processing existing schema: aten::exp2(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0283502Z processing existing schema: aten::exp2.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0284756Z processing existing schema: prim::is_sparse_csr(Tensor a) -> (bool) 2022-05-18T03:33:21.0287672Z processing existing schema: quantized::conv_transpose2d_padding(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:21.0289380Z processing existing schema: aten::copy_(Tensor(a!) self, Tensor src, bool non_blocking=False) -> (Tensor(a!)) 2022-05-18T03:33:21.0291794Z processing existing schema: aten::copy_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.0293627Z processing existing schema: aten::copy_.int(Tensor(a!) self, int other) -> (Tensor(a!)) 2022-05-18T03:33:21.0295938Z processing existing schema: aten::copy_.float(Tensor(a!) self, float other) -> (Tensor(a!)) 2022-05-18T03:33:21.0298162Z processing existing schema: quantized::conv3d_stride(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:21.0300429Z processing existing schema: aten::conv_tbc_backward(Tensor self, Tensor input, Tensor weight, Tensor bias, int pad) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.0303667Z processing existing schema: aten::empty_quantized(int[] size, Tensor qtensor, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:21.0304605Z processing existing schema: aten::isprintable(str self) -> (bool) 2022-05-18T03:33:21.0307717Z processing existing schema: quantized::conv2d_dilation(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:21.0311303Z processing existing schema: aten::conv3d(Tensor input, Tensor weight, Tensor? bias=None, int[3] stride=[1, 1, 1], int[3] padding=[0, 0, 0], int[3] dilation=[1, 1, 1], int groups=1) -> (Tensor) 2022-05-18T03:33:21.0314807Z processing existing schema: aten::conv3d.padding(Tensor input, Tensor weight, Tensor? bias=None, int[3] stride=[1, 1, 1], str padding="valid", int[3] dilation=[1, 1, 1], int groups=1) -> (Tensor) 2022-05-18T03:33:21.0316831Z processing existing schema: quantized::conv2d_stride(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:21.0319331Z processing existing schema: aten::contiguous(Tensor(a) self, *, int memory_format=0) -> (Tensor(a)) 2022-05-18T03:33:21.0321472Z processing existing schema: aten::embedding_renorm_(Tensor(a!) self, Tensor indices, float max_norm, float norm_type) -> (Tensor(a!)) 2022-05-18T03:33:21.0323485Z processing existing schema: aten::rfind(str self, str substr, int start=0, int end=-1) -> (int) 2022-05-18T03:33:21.0325719Z processing existing schema: quantized::conv3d_unpack(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (Tensor unpacked_weights, Tensor? 
B_origin) 2022-05-18T03:33:21.0328399Z processing existing schema: aten::constant_pad_nd(Tensor self, int[] pad, Scalar value=0) -> (Tensor) 2022-05-18T03:33:21.0329333Z processing existing schema: quantized::conv2d_unpack_sizes(Any packed_weights) -> (Any) 2022-05-18T03:33:21.0332002Z processing existing schema: aten::conj_physical_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0333583Z processing existing schema: aten::embedding_dense_backward(Tensor grad_output, Tensor indices, int num_weights, int padding_idx, bool scale_grad_by_freq) -> (Tensor) 2022-05-18T03:33:21.0334530Z processing existing schema: aten::expandtabs(str self, int tabsize=8) -> (str) 2022-05-18T03:33:21.0336863Z processing existing schema: quantized::conv2d_unpack(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (Tensor unpacked_weights, Tensor? B_origin) 2022-05-18T03:33:21.0337486Z processing existing schema: aten::conj_physical(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0338929Z processing existing schema: aten::conj_physical.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0340631Z processing existing schema: quantized::conv1d_unpack(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (Tensor unpacked_weights, Tensor? B_origin) 2022-05-18T03:33:21.0342127Z processing existing schema: aten::conj(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.0344203Z processing existing schema: quantized::conv_unpack(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (Tensor unpacked_weights, Tensor? B_origin) 2022-05-18T03:33:21.0345512Z processing existing schema: aten::concat(Tensor[] tensors, int dim=0) -> (Tensor) 2022-05-18T03:33:21.0347985Z processing existing schema: aten::concat.out(Tensor[] tensors, int dim=0, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0349466Z processing existing schema: aten::concat.names(Tensor[] tensors, str dim) -> (Tensor) 2022-05-18T03:33:21.0352288Z processing existing schema: aten::concat.names_out(Tensor[] tensors, str dim, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0353260Z processing existing schema: quantized::threshold(Tensor qx, Scalar threshold, Scalar value) -> (Tensor qy) 2022-05-18T03:33:21.0355223Z processing existing schema: aten::complex.out(Tensor real, Tensor imag, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0356570Z processing existing schema: aten::complex(Tensor real, Tensor imag) -> (Tensor) 2022-05-18T03:33:21.0358274Z processing existing schema: quantized::softmax(Tensor qx, int dim, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.0359829Z processing existing schema: aten::combinations(Tensor self, int r=2, bool with_replacement=False) -> (Tensor) 2022-05-18T03:33:21.0361224Z processing existing schema: quantized::sigmoid(Tensor qx, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.0362643Z processing existing schema: aten::column_stack(Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:21.0364859Z processing existing schema: aten::column_stack.out(Tensor[] tensors, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.0367925Z processing existing schema: quantized::max_pool1d(Tensor qx, int[] kernel_size, int[] stride, int[] padding, int[] dilation, bool ceil_mode) -> (Tensor) 2022-05-18T03:33:21.0369701Z processing existing schema: aten::col2im(Tensor self, int[2] output_size, int[2] kernel_size, int[2] dilation, int[2] padding, int[2] stride) -> (Tensor) 2022-05-18T03:33:21.0372257Z processing existing schema: aten::col2im.out(Tensor self, int[2] output_size, int[2] kernel_size, int[2] dilation, int[2] padding, int[2] stride, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0373854Z processing existing schema: quantized::mul_scalar_out(Tensor qa, Scalar b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:21.0375755Z processing existing schema: quantized::mul_scalar_out.Tensor(Tensor qa, Tensor b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:21.0377293Z processing existing schema: aten::clamp_min_(Tensor(a!) self, Scalar min) -> (Tensor(a!)) 2022-05-18T03:33:21.0378966Z processing existing schema: aten::clamp_min_.Tensor(Tensor(a!) self, Tensor min) -> (Tensor(a!)) 2022-05-18T03:33:21.0380240Z processing existing schema: quantized::mul_scalar_relu(Tensor qa, Scalar b) -> (Tensor qc) 2022-05-18T03:33:21.0381646Z processing existing schema: quantized::mul_scalar_relu.Tensor(Tensor qa, Tensor b) -> (Tensor qc) 2022-05-18T03:33:21.0382982Z processing existing schema: aten::clamp_min(Tensor self, Scalar min) -> (Tensor) 2022-05-18T03:33:21.0384362Z processing existing schema: aten::clamp_min.Tensor(Tensor self, Tensor min) -> (Tensor) 2022-05-18T03:33:21.0386286Z processing existing schema: aten::clamp_min.out(Tensor self, Scalar min, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0387957Z processing existing schema: aten::clamp_min.Tensor_out(Tensor self, Tensor min, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0389623Z processing existing schema: aten::xlogy_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.0391139Z processing existing schema: aten::xlogy_.Scalar_Other(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.0392759Z processing existing schema: quantized::matmul(Tensor qa, Tensor qb, float scale, int zero_point) -> (Tensor qc) 2022-05-18T03:33:21.0394352Z processing existing schema: aten::choose_qparams_optimized(Tensor input, int numel, int n_bins, float ratio, int bit_width) -> (Tensor, Tensor) 2022-05-18T03:33:21.0395631Z processing existing schema: aten::xlogy.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.0397870Z processing existing schema: aten::xlogy.OutTensor(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0399284Z processing existing schema: aten::xlogy.Scalar_Self(Scalar self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.0401059Z processing existing schema: aten::xlogy.OutScalar_Self(Scalar self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0402585Z processing existing schema: aten::xlogy.Scalar_Other(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.0404172Z processing existing schema: aten::xlogy.OutScalar_Other(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0406215Z processing existing schema: _quantized::linear_prepack_fp16_legacy(Tensor W, Tensor? 
B=None) -> (Tensor W_prepack) 2022-05-18T03:33:21.0407301Z processing existing schema: aten::cholesky_solve(Tensor self, Tensor input2, bool upper=False) -> (Tensor) 2022-05-18T03:33:21.0409307Z processing existing schema: aten::cholesky_solve.out(Tensor self, Tensor input2, bool upper=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0411186Z processing existing schema: _quantized::linear_prepack_fp16(Tensor W, Tensor? B=None) -> (__torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack) 2022-05-18T03:33:21.0412157Z processing existing schema: aten::cholesky_inverse(Tensor self, bool upper=False) -> (Tensor) 2022-05-18T03:33:21.0414333Z processing existing schema: aten::cholesky_inverse.out(Tensor self, bool upper=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0416039Z processing existing schema: quantized::linear_prepack_legacy(Tensor W, Tensor? B=None) -> (Tensor W_prepack) 2022-05-18T03:33:21.0418012Z processing existing schema: aten::chain_matmul(Tensor[] matrices) -> (Tensor) 2022-05-18T03:33:21.0419805Z processing existing schema: aten::chain_matmul.out(Tensor[] matrices, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0420868Z processing existing schema: quantized::embedding_bag_2bit_unpack(Tensor weight) -> (Tensor) 2022-05-18T03:33:21.0422270Z processing existing schema: aten::can_cast(int from, int to) -> (bool) 2022-05-18T03:33:21.0424229Z processing existing schema: aten::cumsum_(Tensor(a!) self, int dim, *, int? dtype=None) -> (Tensor(a!)) 2022-05-18T03:33:21.0426543Z processing existing schema: aten::cumsum_.dimname(Tensor(a!) self, str dim, *, int? dtype=None) -> (Tensor(a!)) 2022-05-18T03:33:21.0427551Z schema: static_runtime::signed_log1p(Tensor input) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.0428313Z processing existing schema: quantized::embedding_bag_4bit_unpack(Tensor weight) -> (Tensor) 2022-05-18T03:33:21.0429591Z processing existing schema: aten::bucketize.Tensor(Tensor self, Tensor boundaries, *, bool out_int32=False, bool right=False) -> (Tensor) 2022-05-18T03:33:21.0431802Z processing existing schema: aten::bucketize.Tensor_out(Tensor self, Tensor boundaries, *, bool out_int32=False, bool right=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0433558Z processing existing schema: aten::bucketize.Scalar(Scalar self, Tensor boundaries, *, bool out_int32=False, bool right=False) -> (Tensor) 2022-05-18T03:33:21.0435247Z processing existing schema: quantized::embedding_bag_prepack(Tensor weight) -> (__torch__.torch.classes.quantized.EmbeddingPackedParamsBase W_prepack) 2022-05-18T03:33:21.0437095Z processing existing schema: aten::broadcast_tensors(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.0440608Z processing existing schema: quantized::embedding_bag_2bit_rowwise_offsets(Tensor weight, Tensor indices, Tensor? offsets=None, bool scale_grad_by_freq=False, int mode=0, bool pruned_weights=False, Tensor? per_sample_weights=None, Tensor? compressed_indices_mapping=None, bool include_last_offset=False) -> (Tensor) 2022-05-18T03:33:21.0441677Z processing existing schema: aten::bitwise_xor_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.0443616Z processing existing schema: aten::bitwise_xor_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.0446855Z processing existing schema: quantized::embedding_bag_byte_rowwise_offsets(Tensor weight, Tensor indices, Tensor? 
offsets=None, bool scale_grad_by_freq=False, int mode=0, bool pruned_weights=False, Tensor? per_sample_weights=None, Tensor? compressed_indices_mapping=None, bool include_last_offset=False) -> (Tensor) 2022-05-18T03:33:21.0448107Z processing existing schema: aten::bitwise_right_shift_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.0450042Z processing existing schema: aten::bitwise_right_shift_.Tensor_Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.0452060Z processing existing schema: quantized::embedding_byte(__torch__.torch.classes.quantized.EmbeddingPackedParamsBase weight, Tensor indices, bool pruned_weights=False) -> (Tensor) 2022-05-18T03:33:21.0453454Z processing existing schema: aten::bitwise_or_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.0455461Z processing existing schema: aten::bitwise_or_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.0458758Z processing existing schema: quantized::embedding_bag_byte(__torch__.torch.classes.quantized.EmbeddingPackedParamsBase weight, Tensor indices, Tensor? offsets=None, bool scale_grad_by_freq=False, int mode=0, bool pruned_weights=False, Tensor? per_sample_weights=None, Tensor? compressed_indices_mapping=None, bool include_last_offset=False) -> (Tensor) 2022-05-18T03:33:21.0459884Z processing existing schema: aten::bitwise_not_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0461422Z processing existing schema: quantized::celu(Tensor self, float output_scale, int output_zero_point, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.0462546Z processing existing schema: aten::bitwise_not(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0464490Z processing existing schema: aten::bitwise_not.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0468093Z processing existing schema: quantized::conv_transpose3d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] output_padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv3dPackedParamsBase) 2022-05-18T03:33:21.0469956Z processing existing schema: aten::binary_cross_entropy_with_logits_backward(Tensor grad_output, Tensor self, Tensor target, Tensor? weight=None, Tensor? pos_weight=None, int reduction=1) -> (Tensor) 2022-05-18T03:33:21.0473228Z processing existing schema: quantized::conv_transpose2d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] output_padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv2dPackedParamsBase) 2022-05-18T03:33:21.0474970Z processing existing schema: aten::binary_cross_entropy_with_logits(Tensor self, Tensor target, Tensor? weight=None, Tensor? pos_weight=None, int reduction=1) -> (Tensor) 2022-05-18T03:33:21.0477884Z processing existing schema: quantized::conv2d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv2dPackedParamsBase) 2022-05-18T03:33:21.0479406Z processing existing schema: aten::bilinear(Tensor input1, Tensor input2, Tensor weight, Tensor? bias=None) -> (Tensor) 2022-05-18T03:33:21.0482416Z processing existing schema: quantized::conv1d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv2dPackedParamsBase) 2022-05-18T03:33:21.0483901Z processing existing schema: aten::bernoulli_.Tensor(Tensor(a!) self, Tensor p, *, Generator? 
generator=None) -> (Tensor(a!)) 2022-05-18T03:33:21.0486558Z processing existing schema: aten::bernoulli_.float(Tensor(a!) self, float p=0.5, *, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:21.0488291Z processing existing schema: quantized::conv1d_dynamic(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, bool reduce_range=False) -> (Tensor) 2022-05-18T03:33:21.0490518Z processing existing schema: aten::batch_norm_backward_reduce(Tensor grad_out, Tensor input, Tensor mean, Tensor invstd, Tensor? weight, bool input_g, bool weight_g, bool bias_g) -> (Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:21.0492158Z processing existing schema: quantized::conv_transpose2d(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.0494439Z processing existing schema: aten::avg_pool3d_backward(Tensor grad_output, Tensor self, int[3] kernel_size, int[3] stride, int[3] padding, bool ceil_mode, bool count_include_pad, int? divisor_override) -> (Tensor) 2022-05-18T03:33:21.0497176Z processing existing schema: aten::avg_pool3d_backward.grad_input(Tensor grad_output, Tensor self, int[3] kernel_size, int[3] stride, int[3] padding, bool ceil_mode, bool count_include_pad, int? divisor_override, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.0498655Z processing existing schema: quantized::conv_transpose1d(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.0501359Z processing existing schema: aten::avg_pool3d(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=[0, 0, 0], bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None) -> (Tensor) 2022-05-18T03:33:21.0504613Z processing existing schema: aten::avg_pool3d.out(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=[0, 0, 0], bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0506705Z processing existing schema: aten::tril_indices(int row, int col, int offset=0, *, int? dtype=4, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.0508519Z processing existing schema: quantized::conv3d.new(Tensor qx, __torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.0511541Z processing existing schema: quantized::conv3d(Tensor qx, __torch__.torch.classes.quantized.Conv3dPackedParamsBase weight, int[] stride, int[] padding, int[] dilation, int groups, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.0513990Z processing existing schema: aten::avg_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None) -> (Tensor) 2022-05-18T03:33:21.0517061Z processing existing schema: aten::avg_pool2d.out(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0518780Z processing existing schema: quantized::cat_relu_out(Tensor[] qx, int dim, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0520359Z processing existing schema: aten::atanh_(Tensor(a!) 
self) -> (Tensor(a!)) 2022-05-18T03:33:21.0522750Z processing existing schema: quantized::batch_norm2d_relu(Tensor qx, Tensor? weight, Tensor? bias, Tensor mean, Tensor var, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.0523543Z processing existing schema: aten::asinh(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0525512Z processing existing schema: aten::asinh.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0526658Z processing existing schema: aten::asinh.int(int a) -> (float) 2022-05-18T03:33:21.0528040Z processing existing schema: aten::asinh.float(float a) -> (float) 2022-05-18T03:33:21.0529324Z processing existing schema: aten::asinh.complex(complex a) -> (complex) 2022-05-18T03:33:21.0530653Z processing existing schema: aten::asinh.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.0533147Z processing existing schema: quantized::batch_norm_relu(Tensor qx, Tensor? weight, Tensor? bias, Tensor mean, Tensor var, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.0535526Z processing existing schema: aten::as_strided_(Tensor(a!) self, int[] size, int[] stride, int? storage_offset=None) -> (Tensor(a!)) 2022-05-18T03:33:21.0537685Z processing existing schema: quantized::batch_norm(Tensor qx, Tensor? weight, Tensor? bias, Tensor mean, Tensor var, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.0540027Z processing existing schema: aten::as_strided(Tensor(a) self, int[] size, int[] stride, int? storage_offset=None) -> (Tensor(a)) 2022-05-18T03:33:21.0541476Z processing existing schema: _quantized::add(Tensor qa, Tensor qb, float scale, int zero_point) -> (Tensor qc) 2022-05-18T03:33:21.0542619Z processing existing schema: aten::argwhere(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0544200Z processing existing schema: quantized::add_scalar(Tensor qa, Scalar b) -> (Tensor qc) 2022-05-18T03:33:21.0545722Z processing existing schema: quantized::add_scalar.Tensor(Tensor qa, Tensor b) -> (Tensor qc) 2022-05-18T03:33:21.0546962Z processing existing schema: aten::arctanh(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0548928Z processing existing schema: aten::arctanh.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0550482Z processing existing schema: quantized::add(Tensor qa, Tensor qb, float scale, int zero_point) -> (Tensor qc) 2022-05-18T03:33:21.0552402Z processing existing schema: quantized::add.out(Tensor qa, Tensor qb, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:21.0553687Z processing existing schema: quantized::add.Scalar(Tensor qa, Scalar b) -> (Tensor qc) 2022-05-18T03:33:21.0555163Z processing existing schema: quantized::add.Scalar2(Scalar b, Tensor qa) -> (Tensor qc) 2022-05-18T03:33:21.0557247Z processing existing schema: quantized::add.Scalar_out(Tensor qa, Scalar b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:21.0558604Z processing existing schema: aten::arctan(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0561106Z processing existing schema: aten::arctan.out(Tensor self, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.0563006Z processing existing schema: quantized::quantized_rnn_tanh_cell_dynamic(Tensor input, Tensor hx, __torch__.torch.classes.quantized.LinearPackedParamsBase w_ih, __torch__.torch.classes.quantized.LinearPackedParamsBase w_hh, Tensor b_ih, Tensor b_hh) -> (Tensor) 2022-05-18T03:33:21.0563943Z processing existing schema: aten::arccos(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0565314Z processing existing schema: aten::arccos.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0567855Z processing existing schema: aten::native_batch_norm(Tensor input, Tensor? weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool training, float momentum, float eps) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.0570923Z processing existing schema: aten::native_batch_norm.out(Tensor input, Tensor? weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool training, float momentum, float eps, *, Tensor(a!) out, Tensor(b!) save_mean, Tensor(c!) save_invstd) -> (Tensor(a!), Tensor(b!), Tensor(c!)) 2022-05-18T03:33:21.0571848Z processing existing schema: aten::_fw_primal(Tensor(a) self, int level) -> (Tensor(a)) 2022-05-18T03:33:21.0573287Z processing existing schema: aten::retain_grad(Tensor(a!) self) -> () 2022-05-18T03:33:21.0574513Z processing existing schema: aten::is_leaf(Tensor self) -> (bool) 2022-05-18T03:33:21.0575987Z processing existing schema: quantized::embedding_bag_unpack(__torch__.torch.classes.quantized.EmbeddingPackedParamsBase W_prepack) -> (Tensor W_origin) 2022-05-18T03:33:21.0578621Z processing existing schema: aten::cartesian_prod(Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:21.0579110Z processing existing schema: aten::data(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0579546Z schema: static_runtime::select_tensor(Tensor(a) a, Tensor(b) b, bool use_b) -> (Tensor(a|b)) found on allowlist, skipping 2022-05-18T03:33:21.0581029Z processing existing schema: _quantized::linear(Tensor X, __torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack, float Y_scale_i, int Y_zero_point_i) -> (Tensor Y) 2022-05-18T03:33:21.0582111Z processing existing schema: aten::ccol_indices(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.0583871Z processing existing schema: aten::var_mean(Tensor self, bool unbiased=True) -> (Tensor, Tensor) 2022-05-18T03:33:21.0585857Z processing existing schema: aten::var_mean.dim(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor, Tensor) 2022-05-18T03:33:21.0587632Z processing existing schema: aten::var_mean.names_dim(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor, Tensor) 2022-05-18T03:33:21.0589448Z processing existing schema: aten::var_mean.correction(Tensor self, int[1]? dim, *, int? correction, bool keepdim=False) -> (Tensor, Tensor) 2022-05-18T03:33:21.0591320Z processing existing schema: aten::var_mean.correction_names(Tensor self, str[1] dim, *, int? correction, bool keepdim=False) -> (Tensor, Tensor) 2022-05-18T03:33:21.0593571Z processing existing schema: quantized::linear_relu(Tensor X, __torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack, float Y_scale_i, int Y_zero_point_i) -> (Tensor Y) 2022-05-18T03:33:21.0595947Z processing existing schema: aten::cauchy_(Tensor(a!) self, float median=0., float sigma=1., *, Generator? 
generator=None) -> (Tensor(a!)) 2022-05-18T03:33:21.0597647Z processing existing schema: aten::var(Tensor self, bool unbiased=True) -> (Tensor) 2022-05-18T03:33:21.0599898Z processing existing schema: aten::var.dim(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.0602189Z processing existing schema: aten::var.names_dim(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.0604774Z processing existing schema: aten::var.names_out(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0607148Z processing existing schema: aten::var.out(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0609257Z processing existing schema: aten::var.correction(Tensor self, int[1]? dim, *, int? correction, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.0611792Z processing existing schema: aten::var.correction_out(Tensor self, int[1]? dim, *, int? correction, bool keepdim=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0613894Z processing existing schema: aten::var.correction_names(Tensor self, str[1] dim, *, int? correction, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.0616393Z processing existing schema: aten::var.correction_names_out(Tensor self, str[1] dim, *, int? correction, bool keepdim=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0619947Z processing existing schema: _quantized::conv_transpose1d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] output_padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv2dPackedParamsBase) 2022-05-18T03:33:21.0621292Z processing existing schema: aten::bitwise_and.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.0623579Z processing existing schema: aten::bitwise_and.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0625318Z processing existing schema: aten::bitwise_and.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.0627602Z processing existing schema: aten::bitwise_and.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0630108Z processing existing schema: aten::unsafe_split_with_sizes(Tensor self, int[] split_sizes, int dim=0) -> (Tensor[]) 2022-05-18T03:33:21.0633356Z processing existing schema: _quantized::conv3d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv3dPackedParamsBase) 2022-05-18T03:33:21.0635124Z processing existing schema: aten::binomial(Tensor count, Tensor prob, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:21.0637268Z processing existing schema: aten::unsafe_split.Tensor(Tensor self, int split_size, int dim=0) -> (Tensor[]) 2022-05-18T03:33:21.0639483Z processing existing schema: aten::copysign_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.0641440Z processing existing schema: aten::copysign_.Scalar(Tensor(a!) 
self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.0643260Z processing existing schema: quantized::conv_transpose2d_transpose(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int) 2022-05-18T03:33:21.0645244Z processing existing schema: _quantized::conv_transpose2d(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.0647412Z processing existing schema: aten::batch_norm_backward_elemt(Tensor grad_out, Tensor input, Tensor mean, Tensor invstd, Tensor? weight, Tensor mean_dy, Tensor mean_dy_xmu, Tensor count) -> (Tensor) 2022-05-18T03:33:21.0648874Z processing existing schema: aten::trunc_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0650548Z processing existing schema: aten::true_divide_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.0652194Z processing existing schema: aten::true_divide_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.0653974Z processing existing schema: _quantized::conv2d(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.0655910Z processing existing schema: aten::baddbmm_(Tensor(a!) self, Tensor batch1, Tensor batch2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:21.0657406Z processing existing schema: aten::true_divide.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.0658623Z processing existing schema: aten::true_divide.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.0660557Z processing existing schema: aten::true_divide.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0662148Z processing existing schema: quantized::add_relu(Tensor qa, Tensor qb, float scale, int zero_point) -> (Tensor qc) 2022-05-18T03:33:21.0663818Z processing existing schema: quantized::add_relu.out(Tensor qa, Tensor qb, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:21.0665389Z processing existing schema: quantized::add_relu.Scalar(Tensor qa, Scalar b) -> (Tensor qc) 2022-05-18T03:33:21.0666845Z processing existing schema: quantized::add_relu.Scalar2(Scalar b, Tensor qa) -> (Tensor qc) 2022-05-18T03:33:21.0668618Z processing existing schema: quantized::add_relu.Scalar_out(Tensor qa, Scalar b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:21.0669726Z processing existing schema: aten::arctan2(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.0672184Z processing existing schema: aten::arctan2.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0673019Z processing existing schema: aten::threshold(Tensor self, Scalar threshold, Scalar value) -> (Tensor) 2022-05-18T03:33:21.0674860Z processing existing schema: aten::threshold.out(Tensor self, Scalar threshold, Scalar value, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0676263Z processing existing schema: aten::square_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0678398Z processing existing schema: quantized::elu(Tensor self, float output_scale, int output_zero_point, Scalar alpha=1, Scalar scale=1, Scalar input_scale=1) -> (Tensor) 2022-05-18T03:33:21.0680293Z processing existing schema: aten::addcdiv_(Tensor(a!) 
self, Tensor tensor1, Tensor tensor2, *, Scalar value=1) -> (Tensor(a!)) 2022-05-18T03:33:21.0681238Z processing existing schema: aten::square(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0683187Z processing existing schema: aten::square.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0684681Z processing existing schema: aten::sqrt_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0686190Z processing existing schema: aten::sinh_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0687721Z processing existing schema: aten::signbit(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0689325Z processing existing schema: aten::signbit.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0690737Z processing existing schema: aten::sign_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0691922Z processing existing schema: aten::_version(Tensor self) -> (int) 2022-05-18T03:33:21.0693539Z processing existing schema: aten::round_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0695196Z processing existing schema: aten::round_.decimals(Tensor(a!) self, *, int decimals) -> (Tensor(a!)) 2022-05-18T03:33:21.0697261Z processing existing schema: aten::resize_as_(Tensor(a!) self, Tensor the_template, *, int? memory_format=None) -> (Tensor(a!)) 2022-05-18T03:33:21.0699073Z processing existing schema: aten::rename_(Tensor(a!) self, str[]? names) -> (Tensor(a!)) 2022-05-18T03:33:21.0700560Z processing existing schema: aten::relu_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0702548Z processing existing schema: aten::refine_names(Tensor(a) self, str[] names) -> (Tensor(a)) 2022-05-18T03:33:21.0704059Z processing existing schema: aten::reciprocal_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0707164Z processing existing schema: aten::_sparse_coo_tensor_with_dims_and_tensors(int sparse_dim, int dense_dim, int[] size, Tensor indices, Tensor values, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:21.0708312Z processing existing schema: aten::rad2deg_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0711096Z processing existing schema: aten::_sparse_coo_tensor_with_dims(int sparse_dim, int dense_dim, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:21.0712384Z processing existing schema: aten::rad2deg(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0713889Z processing existing schema: aten::rad2deg.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0715485Z processing existing schema: aten::pow_.Scalar(Tensor(a!) self, Scalar exponent) -> (Tensor(a!)) 2022-05-18T03:33:21.0717049Z processing existing schema: aten::pow_.Tensor(Tensor(a!) self, Tensor exponent) -> (Tensor(a!)) 2022-05-18T03:33:21.0718996Z processing existing schema: aten::output_nr(Tensor self) -> (int) 2022-05-18T03:33:21.0721102Z processing existing schema: aten::ones_like(Tensor self, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:21.0721842Z processing existing schema: aten::_logcumsumexp(Tensor self, int dim) -> (Tensor) 2022-05-18T03:33:21.0723522Z processing existing schema: aten::_logcumsumexp.out(Tensor self, int dim, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0725233Z processing existing schema: aten::nextafter_(Tensor(a!) 
self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.0726748Z processing existing schema: aten::_log_softmax_backward_data(Tensor grad_output, Tensor output, int dim, int input_dtype) -> (Tensor) 2022-05-18T03:33:21.0728688Z processing existing schema: aten::_log_softmax_backward_data.out(Tensor grad_output, Tensor output, int dim, int input_dtype, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0729875Z processing existing schema: aten::nextafter(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.0731800Z processing existing schema: aten::nextafter.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0733513Z processing existing schema: aten::neg_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0735317Z processing existing schema: aten::mul_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.0737097Z processing existing schema: aten::mul_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.0739352Z processing existing schema: aten::mul_.t(t[](a!) l, int n) -> (t[](a!)) 2022-05-18T03:33:21.0741277Z processing existing schema: aten::mode(Tensor self, int dim=-1, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.0743058Z processing existing schema: aten::mode.dimname(Tensor self, str dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.0745761Z processing existing schema: aten::mode.dimname_out(Tensor self, str dim, bool keepdim=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.0748333Z processing existing schema: aten::mode.values(Tensor self, int dim=-1, bool keepdim=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.0749421Z processing existing schema: aten::min(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0751308Z processing existing schema: aten::min.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.0753835Z processing existing schema: aten::min.dim_min(Tensor self, int dim, bool keepdim=False, *, Tensor(a!) min, Tensor(b!) min_indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.0755536Z processing existing schema: aten::min.names_dim(Tensor self, str dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.0758124Z processing existing schema: aten::min.names_dim_min(Tensor self, str dim, bool keepdim=False, *, Tensor(a!) min, Tensor(b!) min_indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.0759652Z processing existing schema: aten::min.other(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.0761525Z processing existing schema: aten::min.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0762865Z processing existing schema: aten::nanmedian(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0764660Z processing existing schema: aten::nanmedian.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.0767212Z processing existing schema: aten::nanmedian.dim_values(Tensor self, int dim, bool keepdim=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) 
indices) 2022-05-18T03:33:21.0768909Z processing existing schema: aten::nanmedian.names_dim(Tensor self, str dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.0771497Z processing existing schema: aten::nanmedian.names_dim_values(Tensor self, str dim, bool keepdim=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.0772595Z processing existing schema: aten::median(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0774517Z processing existing schema: aten::median.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.0777029Z processing existing schema: aten::median.dim_values(Tensor self, int dim, bool keepdim=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.0778739Z processing existing schema: aten::median.names_dim(Tensor self, str dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.0781414Z processing existing schema: aten::median.names_dim_values(Tensor self, str dim, bool keepdim=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.0782700Z processing existing schema: aten::mean(Tensor self, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.0784816Z processing existing schema: aten::mean.dim(Tensor self, int[1] dim, bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.0786787Z processing existing schema: aten::mean.names_dim(Tensor self, str[1] dim, bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.0789169Z processing existing schema: aten::mean.names_out(Tensor self, str[1] dim, bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0791504Z processing existing schema: aten::mean.out(Tensor self, int[1] dim, bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0794416Z processing existing schema: aten::max_pool3d_with_indices(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=[0, 0, 0], int[3] dilation=[1, 1, 1], bool ceil_mode=False) -> (Tensor, Tensor) 2022-05-18T03:33:21.0798096Z processing existing schema: aten::max_pool3d_with_indices.out(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=[0, 0, 0], int[3] dilation=[1, 1, 1], bool ceil_mode=False, *, Tensor(a!) out, Tensor(b!) indices) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:21.0800769Z processing existing schema: aten::max_pool2d_with_indices(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], int[2] dilation=[1, 1], bool ceil_mode=False) -> (Tensor, Tensor) 2022-05-18T03:33:21.0804266Z processing existing schema: aten::max_pool2d_with_indices.out(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], int[2] dilation=[1, 1], bool ceil_mode=False, *, Tensor(a!) out, Tensor(b!) 
indices) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:21.0806721Z processing existing schema: aten::max_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], int[2] dilation=[1, 1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:21.0809336Z processing existing schema: aten::max_pool1d_with_indices(Tensor self, int[1] kernel_size, int[1] stride=[], int[1] padding=[0], int[1] dilation=[1], bool ceil_mode=False) -> (Tensor, Tensor) 2022-05-18T03:33:21.0811852Z processing existing schema: aten::max_pool1d(Tensor self, int[1] kernel_size, int[1] stride=[], int[1] padding=[0], int[1] dilation=[1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:21.0813091Z processing existing schema: aten::max(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0814843Z processing existing schema: aten::max.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.0817639Z processing existing schema: aten::max.dim_max(Tensor self, int dim, bool keepdim=False, *, Tensor(a!) max, Tensor(b!) max_values) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.0819283Z processing existing schema: aten::max.names_dim(Tensor self, str dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.0821679Z processing existing schema: aten::max.names_dim_max(Tensor self, str dim, bool keepdim=False, *, Tensor(a!) max, Tensor(b!) max_values) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.0822956Z processing existing schema: aten::max.other(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.0825806Z processing existing schema: aten::max.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0826382Z processing existing schema: aten::masked_select(Tensor self, Tensor mask) -> (Tensor) 2022-05-18T03:33:21.0829073Z processing existing schema: aten::masked_select.out(Tensor self, Tensor mask, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0830296Z processing existing schema: aten::masked_fill_.Scalar(Tensor(a!) self, Tensor mask, Scalar value) -> (Tensor(a!)) 2022-05-18T03:33:21.0832355Z processing existing schema: aten::masked_fill_.Tensor(Tensor(a!) self, Tensor mask, Tensor value) -> (Tensor(a!)) 2022-05-18T03:33:21.0833706Z processing existing schema: aten::masked_fill.Scalar(Tensor self, Tensor mask, Scalar value) -> (Tensor) 2022-05-18T03:33:21.0835439Z processing existing schema: aten::masked_fill.Tensor(Tensor self, Tensor mask, Tensor value) -> (Tensor) 2022-05-18T03:33:21.0836678Z processing existing schema: aten::logsumexp(Tensor self, int[1] dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.0838481Z processing existing schema: aten::logsumexp.names(Tensor self, str[1] dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.0840823Z processing existing schema: aten::logsumexp.names_out(Tensor self, str[1] dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0842526Z processing existing schema: aten::logsumexp.out(Tensor self, int[1] dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0844203Z processing existing schema: aten::_cummax_helper(Tensor self, Tensor(a!) values, Tensor(b!) indices, int dim) -> () 2022-05-18T03:33:21.0845955Z processing existing schema: aten::logical_xor_(Tensor(a!) 
self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.0847038Z processing existing schema: aten::logical_xor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.0849123Z processing existing schema: aten::logical_xor.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0850941Z processing existing schema: aten::logical_or_(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.0851857Z processing existing schema: aten::logical_or(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.0853721Z processing existing schema: aten::logical_or.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0855335Z processing existing schema: aten::logical_not_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0856828Z processing existing schema: aten::logical_not(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0858761Z processing existing schema: aten::logical_not.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0860448Z processing existing schema: aten::logical_and_(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.0862070Z processing existing schema: aten::logical_and(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.0863959Z processing existing schema: aten::logical_and.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0866971Z processing existing schema: aten::_ctc_loss_backward(Tensor grad, Tensor log_probs, Tensor targets, int[] input_lengths, int[] target_lengths, Tensor neg_log_likelihood, Tensor log_alpha, int blank, bool zero_infinity=False) -> (Tensor) 2022-05-18T03:33:21.0868048Z processing existing schema: aten::logaddexp2(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.0870015Z processing existing schema: aten::logaddexp2.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0872791Z processing existing schema: aten::_ctc_loss(Tensor log_probs, Tensor targets, int[] input_lengths, int[] target_lengths, int blank=0, bool zero_infinity=False) -> (Tensor, Tensor) 2022-05-18T03:33:21.0874175Z processing existing schema: aten::logaddexp(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.0876166Z processing existing schema: aten::logaddexp.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0877805Z processing existing schema: aten::log_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0879783Z processing existing schema: aten::log2_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0881557Z processing existing schema: aten::log1p_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0883197Z processing existing schema: aten::_compute_linear_combination(Tensor input, Tensor coefficients) -> (Tensor) 2022-05-18T03:33:21.0885022Z processing existing schema: aten::_compute_linear_combination.out(Tensor input, Tensor coefficients, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0886720Z processing existing schema: aten::log10_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0888277Z processing existing schema: aten::lgamma_(Tensor(a!) 
self) -> (Tensor(a!)) 2022-05-18T03:33:21.0890183Z processing existing schema: aten::kthvalue(Tensor self, int k, int dim=-1, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.0892090Z processing existing schema: aten::kthvalue.dimname(Tensor self, int k, str dim, bool keepdim=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.0894700Z processing existing schema: aten::kthvalue.dimname_out(Tensor self, int k, str dim, bool keepdim=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.0897333Z processing existing schema: aten::kthvalue.values(Tensor self, int k, int dim=-1, bool keepdim=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.0898457Z processing existing schema: aten::item(Tensor self) -> (Scalar) 2022-05-18T03:33:21.0899936Z processing existing schema: aten::isnan(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0901381Z processing existing schema: aten::isnan.float(float a) -> (bool) 2022-05-18T03:33:21.0902771Z processing existing schema: aten::isnan.complex(complex a) -> (bool) 2022-05-18T03:33:21.0904086Z processing existing schema: aten::isinf(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0905630Z processing existing schema: aten::isinf.float(float a) -> (bool) 2022-05-18T03:33:21.0907070Z processing existing schema: aten::isinf.complex(complex a) -> (bool) 2022-05-18T03:33:21.0908463Z processing existing schema: aten::isfinite(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0909844Z processing existing schema: aten::isfinite.float(float a) -> (bool) 2022-05-18T03:33:21.0911226Z processing existing schema: aten::isfinite.complex(complex a) -> (bool) 2022-05-18T03:33:21.0912573Z processing existing schema: aten::is_signed(Tensor self) -> (bool) 2022-05-18T03:33:21.0914153Z processing existing schema: aten::is_pinned(Tensor self, Device? device=None) -> (bool) 2022-05-18T03:33:21.0915495Z processing existing schema: aten::is_nonzero(Tensor self) -> (bool) 2022-05-18T03:33:21.0916936Z processing existing schema: aten::is_inference(Tensor self) -> (bool) 2022-05-18T03:33:21.0918307Z processing existing schema: aten::is_coalesced(Tensor self) -> (bool) 2022-05-18T03:33:21.0921023Z processing existing schema: aten::index_fill_.Dimname_Scalar(Tensor(a!) self, str dim, Tensor index, Scalar value) -> (Tensor(a!)) 2022-05-18T03:33:21.0922334Z processing existing schema: aten::index_fill_.Dimname_Tensor(Tensor(a!) self, str dim, Tensor index, Tensor value) -> (Tensor(a!)) 2022-05-18T03:33:21.0924268Z processing existing schema: aten::index_fill_.int_Scalar(Tensor(a!) self, int dim, Tensor index, Scalar value) -> (Tensor(a!)) 2022-05-18T03:33:21.0926189Z processing existing schema: aten::index_fill_.int_Tensor(Tensor(a!) 
self, int dim, Tensor index, Tensor value) -> (Tensor(a!)) 2022-05-18T03:33:21.0927888Z processing existing schema: aten::index_fill.Dimname_Scalar(Tensor self, str dim, Tensor index, Scalar value) -> (Tensor) 2022-05-18T03:33:21.0929574Z processing existing schema: aten::index_fill.Dimname_Tensor(Tensor self, str dim, Tensor index, Tensor value) -> (Tensor) 2022-05-18T03:33:21.0931236Z processing existing schema: aten::index_fill.int_Scalar(Tensor self, int dim, Tensor index, Scalar value) -> (Tensor) 2022-05-18T03:33:21.0932942Z processing existing schema: aten::index_fill.int_Tensor(Tensor self, int dim, Tensor index, Tensor value) -> (Tensor) 2022-05-18T03:33:21.0934397Z processing existing schema: aten::igammac(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.0936299Z processing existing schema: aten::igammac.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0938045Z processing existing schema: aten::igamma_(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.0939611Z processing existing schema: aten::igamma(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.0941413Z processing existing schema: aten::igamma.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0942999Z processing existing schema: aten::i0_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0944536Z processing existing schema: aten::i0(Tensor self) -> (Tensor) 2022-05-18T03:33:21.0946297Z processing existing schema: aten::i0.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0947946Z processing existing schema: aten::hypot_(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.0949579Z processing existing schema: aten::hypot(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.0951341Z processing existing schema: aten::hypot.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0952894Z processing existing schema: aten::frac_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0954754Z processing existing schema: aten::floor_divide_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.0956554Z processing existing schema: aten::floor_divide_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.0957998Z processing existing schema: aten::floor_divide(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.0959666Z processing existing schema: aten::floor_divide.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.0961499Z processing existing schema: aten::floor_divide.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.0963004Z processing existing schema: aten::floor_(Tensor(a!) 
self) -> (Tensor(a!)) 2022-05-18T03:33:21.0965192Z processing existing schema: aten::flatten.DimnameList(Tensor(a) self, str[] dims, str out_dim) -> (Tensor(a)) 2022-05-18T03:33:21.0967075Z processing existing schema: aten::flatten.named_out_dim(Tensor(a) self, int start_dim, int end_dim, str out_dim) -> (Tensor(a)) 2022-05-18T03:33:21.0969031Z processing existing schema: aten::flatten.using_ints(Tensor(a) self, int start_dim=0, int end_dim=-1) -> (Tensor(a)) 2022-05-18T03:33:21.0971052Z processing existing schema: aten::flatten.using_names(Tensor(a) self, str start_dim, str end_dim, str out_dim) -> (Tensor(a)) 2022-05-18T03:33:21.0972846Z processing existing schema: quantized::conv_transpose3d_padding(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:21.0974292Z processing existing schema: aten::cos_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0975927Z processing existing schema: aten::expm1_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0977376Z processing existing schema: prim::is_ort(Tensor a) -> (bool) 2022-05-18T03:33:21.0979434Z processing existing schema: quantized::conv_transpose2d_dilation(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:21.0981227Z processing existing schema: aten::copy_sparse_to_sparse_(Tensor(a!) self, Tensor src, bool non_blocking=False) -> (Tensor(a!)) 2022-05-18T03:33:21.0982673Z processing existing schema: aten::exp_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0984138Z processing existing schema: prim::is_mps(Tensor a) -> (bool) 2022-05-18T03:33:21.0986891Z processing existing schema: quantized::conv_transpose2d_unpack(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (Tensor unpacked_weights, Tensor? B_origin) 2022-05-18T03:33:21.0989545Z processing existing schema: aten::convolution_overrideable(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups) -> (Tensor) 2022-05-18T03:33:21.0991148Z processing existing schema: aten::erfinv_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.0992841Z processing existing schema: aten::join(str self, str[] values) -> (str) 2022-05-18T03:33:21.0994235Z processing existing schema: quantized::conv3d_transpose(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int) 2022-05-18T03:33:21.0998241Z processing existing schema: aten::convolution_backward(Tensor grad_output, Tensor input, Tensor weight, int[]? bias_sizes, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool[3] output_mask) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.0999524Z processing existing schema: aten::erfc_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.1000909Z processing existing schema: aten::rpartition(str self, str separator) -> (str, str, str) 2022-05-18T03:33:21.1002411Z processing existing schema: quantized::conv3d_groups(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int) 2022-05-18T03:33:21.1005979Z processing existing schema: aten::convolution(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups) -> (Tensor) 2022-05-18T03:33:21.1006820Z processing existing schema: aten::erfc(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1008564Z processing existing schema: aten::erfc.out(Tensor self, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.1009744Z processing existing schema: aten::erfc.int(int a) -> (float) 2022-05-18T03:33:21.1011154Z processing existing schema: aten::erfc.float(float a) -> (float) 2022-05-18T03:33:21.1012596Z processing existing schema: aten::erfc.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.1014154Z processing existing schema: aten::partition(str self, str separator) -> (str, str, str) 2022-05-18T03:33:21.1015801Z processing existing schema: quantized::conv3d_dilation(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:21.1019073Z processing existing schema: aten::conv_transpose3d.input(Tensor input, Tensor weight, Tensor? bias=None, int[3] stride=[1, 1, 1], int[3] padding=[0, 0, 0], int[3] output_padding=[0, 0, 0], int groups=1, int[3] dilation=[1, 1, 1]) -> (Tensor) 2022-05-18T03:33:21.1020261Z processing existing schema: aten::erf_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.1021945Z processing existing schema: aten::replace(str self, str old, str new, int max=-1) -> (str) 2022-05-18T03:33:21.1023611Z processing existing schema: quantized::conv3d_output_padding(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:21.1026729Z processing existing schema: aten::conv_transpose2d.input(Tensor input, Tensor weight, Tensor? bias=None, int[2] stride=[1, 1], int[2] padding=[0, 0], int[2] output_padding=[0, 0], int groups=1, int[2] dilation=[1, 1]) -> (Tensor) 2022-05-18T03:33:21.1027619Z processing existing schema: aten::erf(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1029329Z processing existing schema: aten::erf.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1030730Z processing existing schema: aten::erf.int(int a) -> (float) 2022-05-18T03:33:21.1032139Z processing existing schema: aten::erf.float(float a) -> (float) 2022-05-18T03:33:21.1033489Z processing existing schema: aten::erf.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.1035229Z processing existing schema: aten::rstrip(str self, str chars=" \n\t\f\v") -> (str) 2022-05-18T03:33:21.1037151Z processing existing schema: quantized::conv3d_padding(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:21.1040400Z processing existing schema: aten::conv_transpose1d(Tensor input, Tensor weight, Tensor? bias=None, int[1] stride=[1], int[1] padding=[0], int[1] output_padding=[0], int groups=1, int[1] dilation=[1]) -> (Tensor) 2022-05-18T03:33:21.1041561Z processing existing schema: aten::equal(Tensor self, Tensor other) -> (bool) 2022-05-18T03:33:21.1043281Z processing existing schema: aten::lstrip(str self, str chars=" \n\t\f\v") -> (str) 2022-05-18T03:33:21.1045455Z processing existing schema: quantized::group_norm(Tensor input, int num_groups, Tensor? weight, Tensor? bias, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.1046992Z processing existing schema: aten::clone(Tensor self, *, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:21.1048827Z processing existing schema: aten::dropout_(Tensor(a!) 
self, float p, bool train) -> (Tensor(a!)) 2022-05-18T03:33:21.1051165Z processing existing schema: aten::items.str(Dict(str, t) self) -> ((str, t)[]) 2022-05-18T03:33:21.1053548Z processing existing schema: aten::items.int(Dict(int, t) self) -> ((int, t)[]) 2022-05-18T03:33:21.1055929Z processing existing schema: aten::items.bool(Dict(bool, t) self) -> ((bool, t)[]) 2022-05-18T03:33:21.1058361Z processing existing schema: aten::items.float(Dict(float, t) self) -> ((float, t)[]) 2022-05-18T03:33:21.1061008Z processing existing schema: aten::items.complex(Dict(complex, t) self) -> ((complex, t)[]) 2022-05-18T03:33:21.1063292Z processing existing schema: aten::items.Tensor(Dict(Tensor, t) self) -> ((Tensor, t)[]) 2022-05-18T03:33:21.1065898Z processing existing schema: quantized::layer_norm(Tensor input, int[] normalized_shape, Tensor? weight, Tensor? bias, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.1067742Z processing existing schema: aten::clip_(Tensor(a!) self, Scalar? min=None, Scalar? max=None) -> (Tensor(a!)) 2022-05-18T03:33:21.1069885Z processing existing schema: aten::clip_.Tensor(Tensor(a!) self, Tensor? min=None, Tensor? max=None) -> (Tensor(a!)) 2022-05-18T03:33:21.1071344Z processing existing schema: aten::dropout(Tensor input, float p, bool train) -> (Tensor) 2022-05-18T03:33:21.1074176Z processing existing schema: aten::update.str(Dict(str, t)(a!) self, Dict(str, t)(a!) to_add) -> () 2022-05-18T03:33:21.1076421Z processing existing schema: aten::update.int(Dict(int, t)(a!) self, Dict(int, t)(a!) to_add) -> () 2022-05-18T03:33:21.1078782Z processing existing schema: aten::update.bool(Dict(bool, t)(a!) self, Dict(bool, t)(a!) to_add) -> () 2022-05-18T03:33:21.1081286Z processing existing schema: aten::update.float(Dict(float, t)(a!) self, Dict(float, t)(a!) to_add) -> () 2022-05-18T03:33:21.1083739Z processing existing schema: aten::update.complex(Dict(complex, t)(a!) self, Dict(complex, t)(a!) to_add) -> () 2022-05-18T03:33:21.1086061Z processing existing schema: aten::update.Tensor(Dict(Tensor, t)(a!) self, Dict(Tensor, t)(a!) to_add) -> () 2022-05-18T03:33:21.1087521Z processing existing schema: quantized::mul_scalar(Tensor qa, Scalar b) -> (Tensor qc) 2022-05-18T03:33:21.1089119Z processing existing schema: quantized::mul_scalar.Tensor(Tensor qa, Tensor b) -> (Tensor qc) 2022-05-18T03:33:21.1090405Z processing existing schema: aten::clamp_max_(Tensor(a!) self, Scalar max) -> (Tensor(a!)) 2022-05-18T03:33:21.1092053Z processing existing schema: aten::clamp_max_.Tensor(Tensor(a!) self, Tensor max) -> (Tensor(a!)) 2022-05-18T03:33:21.1093869Z schema: aten::div_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) has valid upgrader, skipping 2022-05-18T03:33:21.1095526Z schema: aten::div_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) has valid upgrader, skipping 2022-05-18T03:33:21.1097441Z schema: aten::div_.Tensor_mode(Tensor(a!) self, Tensor other, *, str? rounding_mode) -> (Tensor(a!)) has valid upgrader, skipping 2022-05-18T03:33:21.1099318Z schema: aten::div_.Scalar_mode(Tensor(a!) self, Scalar other, *, str? 
rounding_mode) -> (Tensor(a!)) has valid upgrader, skipping 2022-05-18T03:33:21.1100711Z processing existing schema: aten::upper(str self) -> (str) 2022-05-18T03:33:21.1102231Z processing existing schema: quantized::hardswish(Tensor input, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.1103826Z processing existing schema: aten::cat(Tensor[] tensors, int dim=0) -> (Tensor) 2022-05-18T03:33:21.1105730Z processing existing schema: aten::cat.names(Tensor[] tensors, str dim) -> (Tensor) 2022-05-18T03:33:21.1107980Z processing existing schema: aten::cat.names_out(Tensor[] tensors, str dim, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1110423Z processing existing schema: aten::cat.out(Tensor[] tensors, int dim=0, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1111320Z processing existing schema: aten::deg2rad(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1113537Z processing existing schema: aten::deg2rad.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1114764Z schema: static_runtime::embedding_bag(Tensor weight, Tensor indices, Tensor offsets, bool scale_grad_by_freq=False, int mode=0, bool sparse=False, Tensor? per_sample_weights=None, bool include_last_offset=False) -> (Tensor, Tensor, Tensor) found on allowlist, skipping 2022-05-18T03:33:21.1115878Z schema: static_runtime::embedding_bag.padding_idx(Tensor weight, Tensor indices, Tensor offsets, bool scale_grad_by_freq, int mode, bool sparse, Tensor? per_sample_weights, bool include_last_offset, int? padding_idx) -> (Tensor, Tensor, Tensor) found on allowlist, skipping 2022-05-18T03:33:21.1116842Z processing existing schema: quantized::conv_transpose2d_dynamic(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, bool reduce_range=False) -> (Tensor) 2022-05-18T03:33:21.1117522Z processing existing schema: aten::batch_norm_stats(Tensor input, float eps) -> (Tensor, Tensor) 2022-05-18T03:33:21.1118219Z processing existing schema: aten::cosh_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.1120406Z processing existing schema: quantized::conv_transpose3d_dilation(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:21.1121844Z processing existing schema: quantized::conv3d_dynamic(Tensor qx, __torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weight, bool reduce_range=False) -> (Tensor) 2022-05-18T03:33:21.1123981Z processing existing schema: aten::batch_norm_gather_stats(Tensor input, Tensor mean, Tensor invstd, Tensor? running_mean, Tensor? running_var, float momentum, float eps, int count) -> (Tensor, Tensor) 2022-05-18T03:33:21.1125345Z processing existing schema: sparse::qlinear_relu_dynamic(Tensor X, __torch__.torch.classes.sparse.LinearPackedParamsBase W_prepack) -> (Tensor Y) 2022-05-18T03:33:21.1126966Z processing existing schema: aten::arcsin_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.1128768Z processing existing schema: aten::_upsample_nearest_exact1d(Tensor self, int[1] output_size, float? scales=None) -> (Tensor) 2022-05-18T03:33:21.1131297Z processing existing schema: aten::_upsample_nearest_exact1d.vec(Tensor input, int[]? output_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.1133135Z processing existing schema: aten::_upsample_nearest_exact1d.out(Tensor self, int[1] output_size, float? scales=None, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.1134880Z processing existing schema: aten::multinomial(Tensor self, int num_samples, bool replacement=False, *, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:21.1137088Z processing existing schema: aten::multinomial.out(Tensor self, int num_samples, bool replacement=False, *, Generator? generator=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1138448Z processing existing schema: aten::is_floating_point(Tensor self) -> (bool) 2022-05-18T03:33:21.1140665Z processing existing schema: aten::full_like(Tensor self, Scalar fill_value, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:21.1141620Z processing existing schema: aten::reciprocal(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1143263Z processing existing schema: aten::reciprocal.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1145382Z processing existing schema: aten::resize_(Tensor(a!) self, int[] size, *, int? memory_format=None) -> (Tensor(a!)) 2022-05-18T03:33:21.1146853Z processing existing schema: quantized::mul(Tensor qa, Tensor qb, float scale, int zero_point) -> (Tensor qc) 2022-05-18T03:33:21.1148509Z processing existing schema: quantized::mul.out(Tensor qa, Tensor qb, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:21.1150004Z processing existing schema: quantized::mul.Scalar(Tensor qa, Scalar b) -> (Tensor qc) 2022-05-18T03:33:21.1151064Z processing existing schema: quantized::mul.Scalar2(Scalar b, Tensor qa) -> (Tensor qc) 2022-05-18T03:33:21.1152799Z processing existing schema: quantized::mul.Scalar_out(Tensor qa, Scalar b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:21.1154683Z processing existing schema: aten::chunk(Tensor(a -> *) self, int chunks, int dim=0) -> (Tensor[]) 2022-05-18T03:33:21.1156075Z processing existing schema: aten::digamma(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1157636Z processing existing schema: aten::digamma.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1158711Z processing existing schema: aten::isalnum(str self) -> (bool) 2022-05-18T03:33:21.1160952Z processing existing schema: aten::multilabel_margin_loss_forward(Tensor self, Tensor target, int reduction) -> (Tensor output, Tensor is_target) 2022-05-18T03:33:21.1162774Z processing existing schema: aten::multilabel_margin_loss_forward.output(Tensor self, Tensor target, int reduction, *, Tensor(a!) output, Tensor(b!) is_target) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:21.1163916Z processing existing schema: aten::hsplit.int(Tensor(a -> *) self, int sections) -> (Tensor[]) 2022-05-18T03:33:21.1166261Z processing existing schema: aten::hsplit.array(Tensor(a -> *) self, int[] indices) -> (Tensor[]) 2022-05-18T03:33:21.1167436Z processing existing schema: aten::absolute(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1169398Z processing existing schema: aten::absolute.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1172607Z processing existing schema: _quantized::conv_transpose3d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] output_padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv3dPackedParamsBase) 2022-05-18T03:33:21.1173799Z processing existing schema: aten::bitwise_left_shift.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.1175737Z processing existing schema: aten::bitwise_left_shift.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.1177139Z processing existing schema: aten::bitwise_left_shift.Tensor_Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.1179100Z processing existing schema: aten::bitwise_left_shift.Tensor_Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1181145Z processing existing schema: aten::bitwise_left_shift.Scalar_Tensor(Scalar self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.1182315Z processing existing schema: aten::unsqueeze_(Tensor(a!) self, int dim) -> (Tensor(a!)) 2022-05-18T03:33:21.1184550Z processing existing schema: quantized::linear(Tensor X, __torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack, float Y_scale_i, int Y_zero_point_i) -> (Tensor Y) 2022-05-18T03:33:21.1185537Z processing existing schema: aten::deg2rad_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.1186057Z schema: static_runtime::clamp_nan_to_num(Tensor input, Scalar? min, Scalar? max, float? nan, float? posinf, float? posinf) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.1187268Z processing existing schema: aten::vander(Tensor x, int? N=None, bool increasing=False) -> (Tensor) 2022-05-18T03:33:21.1188594Z processing existing schema: aten::sgn(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1190410Z processing existing schema: aten::sgn.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1192574Z processing existing schema: aten::addmm_(Tensor(a!) self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:21.1194113Z processing existing schema: prim::MKLDNNHardSwish_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.1195614Z processing existing schema: aten::set_data(Tensor(a!) self, Tensor new_data) -> () 2022-05-18T03:33:21.1197553Z processing existing schema: aten::addmm(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.1199968Z processing existing schema: aten::addmm.out(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1201951Z processing existing schema: aten::_ncf_view(Tensor(a) self, int[] input_shape, int normalized_ndim) -> (Tensor(a)) 2022-05-18T03:33:21.1203902Z processing existing schema: aten::index_put(Tensor self, Tensor?[] indices, Tensor values, bool accumulate=False) -> (Tensor) 2022-05-18T03:33:21.1205864Z processing existing schema: aten::index_put.hacked_twin(Tensor self, Tensor[] indices, Tensor values, bool accumulate=False) -> (Tensor) 2022-05-18T03:33:21.1207423Z processing existing schema: aten::repeat_interleave.Tensor(Tensor repeats, *, int? output_size=None) -> (Tensor) 2022-05-18T03:33:21.1209188Z processing existing schema: aten::repeat_interleave.self_Tensor(Tensor self, Tensor repeats, int? dim=None, *, int? output_size=None) -> (Tensor) 2022-05-18T03:33:21.1211297Z processing existing schema: aten::repeat_interleave.self_int(Tensor self, int repeats, int? dim=None, *, int? output_size=None) -> (Tensor) 2022-05-18T03:33:21.1212276Z processing existing schema: aten::log(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1213861Z processing existing schema: aten::log.out(Tensor self, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.1215277Z processing existing schema: aten::log.int(int a) -> (float) 2022-05-18T03:33:21.1216362Z processing existing schema: aten::log.float(float a) -> (float) 2022-05-18T03:33:21.1217733Z processing existing schema: aten::log.complex(complex a) -> (complex) 2022-05-18T03:33:21.1219237Z processing existing schema: aten::log.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.1220666Z processing existing schema: aten::log.int_int(int a, int b) -> (float) 2022-05-18T03:33:21.1222103Z processing existing schema: aten::log.float_float(float a, float b) -> (float) 2022-05-18T03:33:21.1223719Z processing existing schema: aten::log.complex_complex(complex a, complex b) -> (complex) 2022-05-18T03:33:21.1224753Z processing existing schema: aten::log.int_float(int a, float b) -> (float) 2022-05-18T03:33:21.1226143Z processing existing schema: aten::log.float_int(float a, int b) -> (float) 2022-05-18T03:33:21.1227653Z processing existing schema: aten::log.int_complex(int a, complex b) -> (complex) 2022-05-18T03:33:21.1229633Z processing existing schema: aten::log.complex_int(complex a, int b) -> (complex) 2022-05-18T03:33:21.1230421Z processing existing schema: aten::log.float_complex(float a, complex b) -> (complex) 2022-05-18T03:33:21.1231704Z processing existing schema: aten::log.complex_float(complex a, float b) -> (complex) 2022-05-18T03:33:21.1233222Z processing existing schema: aten::log.Scalar_Scalar(Scalar a, Scalar b) -> (float) 2022-05-18T03:33:21.1235133Z processing existing schema: quantized::instance_norm(Tensor input, Tensor? weight, Tensor? bias, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.1236420Z processing existing schema: aten::coalesce(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.1238554Z processing existing schema: aten::dsplit.int(Tensor(a -> *) self, int sections) -> (Tensor[]) 2022-05-18T03:33:21.1241065Z processing existing schema: aten::dsplit.array(Tensor(a -> *) self, int[] indices) -> (Tensor[]) 2022-05-18T03:33:21.1242606Z processing existing schema: aten::strip(str self, str chars=" \n\t\f\v") -> (str) 2022-05-18T03:33:21.1244032Z processing existing schema: aten::sigmoid_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.1245924Z processing existing schema: aten::addr(Tensor self, Tensor vec1, Tensor vec2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.1248084Z processing existing schema: aten::addr.out(Tensor self, Tensor vec1, Tensor vec2, *, Scalar beta=1, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1249306Z processing existing schema: prim::MKLDNNClamp_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.1250893Z processing existing schema: aten::ne.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.1252034Z processing existing schema: aten::ne.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.1253942Z processing existing schema: aten::ne.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1255639Z processing existing schema: aten::ne.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.1257476Z processing existing schema: aten::ne.int_list(int[] a, int[] b) -> (bool) 2022-05-18T03:33:21.1258608Z processing existing schema: aten::ne.device(Device a, Device b) -> (bool) 2022-05-18T03:33:21.1260205Z processing existing schema: aten::ne.bool(bool a, bool b) -> (bool) 2022-05-18T03:33:21.1261638Z processing existing schema: aten::ne.enum(AnyEnumType a, AnyEnumType b) -> (bool) 2022-05-18T03:33:21.1262792Z processing existing schema: aten::ne.int(int a, int b) -> (bool) 2022-05-18T03:33:21.1264154Z processing existing schema: aten::ne.complex(complex a, complex b) -> (bool) 2022-05-18T03:33:21.1266102Z processing existing schema: aten::ne.float(float a, float b) -> (bool) 2022-05-18T03:33:21.1267161Z processing existing schema: aten::ne.int_float(int a, float b) -> (bool) 2022-05-18T03:33:21.1268509Z processing existing schema: aten::ne.float_int(float a, int b) -> (bool) 2022-05-18T03:33:21.1270107Z processing existing schema: aten::ne.float_complex(float a, complex b) -> (bool) 2022-05-18T03:33:21.1271250Z processing existing schema: aten::ne.complex_float(complex a, float b) -> (bool) 2022-05-18T03:33:21.1272480Z processing existing schema: aten::ne(Scalar a, Scalar b) -> (bool) 2022-05-18T03:33:21.1274013Z processing existing schema: aten::ne.str(str a, str b) -> (bool) 2022-05-18T03:33:21.1276054Z processing existing schema: aten::ne.float_list(float[] a, float[] b) -> (bool) 2022-05-18T03:33:21.1278103Z processing existing schema: aten::ne.Tensor_list(Tensor[] a, Tensor[] b) -> (bool) 2022-05-18T03:33:21.1279877Z processing existing schema: aten::ne.bool_list(bool[] a, bool[] b) -> (bool) 2022-05-18T03:33:21.1281953Z processing existing schema: aten::ne.str_list(str[] a, str[] b) -> (bool) 2022-05-18T03:33:21.1284082Z processing existing schema: quantized::conv2d_output_padding(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:21.1286640Z processing existing schema: aten::conv2d(Tensor input, Tensor weight, Tensor? bias=None, int[2] stride=[1, 1], int[2] padding=[0, 0], int[2] dilation=[1, 1], int groups=1) -> (Tensor) 2022-05-18T03:33:21.1289294Z processing existing schema: aten::conv2d.padding(Tensor input, Tensor weight, Tensor? bias=None, int[2] stride=[1, 1], str padding="valid", int[2] dilation=[1, 1], int groups=1) -> (Tensor) 2022-05-18T03:33:21.1291346Z processing existing schema: aten::empty_like(Tensor self, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:21.1292169Z processing existing schema: aten::istitle(str self) -> (bool) 2022-05-18T03:33:21.1294583Z processing existing schema: quantized::batch_norm2d(Tensor qx, Tensor? weight, Tensor? bias, Tensor mean, Tensor var, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.1295580Z processing existing schema: aten::asin_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.1298676Z processing existing schema: aten::_embedding_bag_forward_only(Tensor weight, Tensor indices, Tensor offsets, bool scale_grad_by_freq=False, int mode=0, bool sparse=False, Tensor? 
per_sample_weights=None, bool include_last_offset=False, int padding_idx=-1) -> (Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:21.1299572Z processing existing schema: aten::lu_solve(Tensor self, Tensor LU_data, Tensor LU_pivots) -> (Tensor) 2022-05-18T03:33:21.1301598Z processing existing schema: aten::lu_solve.out(Tensor self, Tensor LU_data, Tensor LU_pivots, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1302510Z processing existing schema: aten::ge.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.1304018Z processing existing schema: aten::ge.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.1305728Z processing existing schema: aten::ge.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1308376Z processing existing schema: aten::ge.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1309142Z processing existing schema: aten::ge.int(int a, int b) -> (bool) 2022-05-18T03:33:21.1311141Z processing existing schema: aten::ge.float(float a, float b) -> (bool) 2022-05-18T03:33:21.1312353Z processing existing schema: aten::ge.int_float(int a, float b) -> (bool) 2022-05-18T03:33:21.1314030Z processing existing schema: aten::ge.float_int(float a, int b) -> (bool) 2022-05-18T03:33:21.1315359Z processing existing schema: aten::ge(Scalar a, Scalar b) -> (bool) 2022-05-18T03:33:21.1316759Z processing existing schema: aten::ge.str(str a, str b) -> (bool) 2022-05-18T03:33:21.1318102Z processing existing schema: aten::reflection_pad2d(Tensor self, int[4] padding) -> (Tensor) 2022-05-18T03:33:21.1320122Z processing existing schema: aten::reflection_pad2d.out(Tensor self, int[4] padding, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1322065Z processing existing schema: sparse::qlinear(Tensor X, __torch__.torch.classes.sparse.LinearPackedParamsBase W_prepack, float Y_scale_i, int Y_zero_point_i) -> (Tensor Y) 2022-05-18T03:33:21.1323258Z processing existing schema: aten::arccosh(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1325006Z processing existing schema: aten::arccosh.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1326696Z processing existing schema: aten::tan_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.1328275Z processing existing schema: aten::clamp(Tensor self, Scalar? min=None, Scalar? max=None) -> (Tensor) 2022-05-18T03:33:21.1329933Z processing existing schema: aten::clamp.Tensor(Tensor self, Tensor? min=None, Tensor? max=None) -> (Tensor) 2022-05-18T03:33:21.1331998Z processing existing schema: aten::clamp.out(Tensor self, Scalar? min=None, Scalar? max=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1334088Z processing existing schema: aten::clamp.Tensor_out(Tensor self, Tensor? min=None, Tensor? max=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1335633Z processing existing schema: quantized::mul_relu(Tensor qa, Tensor qb, float scale, int zero_point) -> (Tensor qc) 2022-05-18T03:33:21.1337474Z processing existing schema: quantized::mul_relu.out(Tensor qa, Tensor qb, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:21.1339538Z processing existing schema: quantized::mul_relu.Scalar(Tensor qa, Scalar b) -> (Tensor qc) 2022-05-18T03:33:21.1340296Z processing existing schema: quantized::mul_relu.Scalar2(Scalar b, Tensor qa) -> (Tensor qc) 2022-05-18T03:33:21.1342229Z processing existing schema: quantized::mul_relu.Scalar_out(Tensor qa, Scalar b, Tensor(a!) out) -> (Tensor(a!) 
out) 2022-05-18T03:33:21.1344206Z processing existing schema: quantized::batch_norm3d(Tensor qx, Tensor? weight, Tensor? bias, Tensor mean, Tensor var, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.1345597Z processing existing schema: aten::asinh_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.1347075Z processing existing schema: aten::mH(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.1348556Z processing existing schema: aten::mH.a(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.1350459Z processing existing schema: _quantized::linear_prepack(Tensor W, Tensor? B=None) -> (__torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack) 2022-05-18T03:33:21.1351972Z processing existing schema: aten::cholesky(Tensor self, bool upper=False) -> (Tensor) 2022-05-18T03:33:21.1354478Z processing existing schema: aten::cholesky.out(Tensor self, bool upper=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1354992Z schema: aten::diagonal_backward(Tensor grad_output, int[] input_sizes, int offset, int dim1, int dim2) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.1356560Z processing existing schema: prim::is_xpu(Tensor a) -> (bool) 2022-05-18T03:33:21.1358541Z processing existing schema: aten::multi_margin_loss(Tensor self, Tensor target, Scalar p=1, Scalar margin=1, Tensor? weight=None, int reduction=1) -> (Tensor) 2022-05-18T03:33:21.1361119Z processing existing schema: aten::multi_margin_loss.out(Tensor self, Tensor target, Scalar p=1, Scalar margin=1, Tensor? weight=None, int reduction=1, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1362814Z processing existing schema: aten::aminmax(Tensor self, *, int? dim=None, bool keepdim=False) -> (Tensor min, Tensor max) 2022-05-18T03:33:21.1365218Z processing existing schema: aten::aminmax.out(Tensor self, *, int? dim=None, bool keepdim=False, Tensor(a!) min, Tensor(b!) max) -> (Tensor(a!) min, Tensor(b!) max) 2022-05-18T03:33:21.1367541Z processing existing schema: quantized::make_quantized_cell_params(Tensor w_ih, Tensor w_hh, Tensor b_ih, Tensor b_hh) -> (__torch__.torch.classes.rnn.CellParamsBase) 2022-05-18T03:33:21.1367884Z schema: aten::slice_backward(Tensor grad_output, int[] input_sizes, int dim, int start, int end, int step) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.1369491Z processing existing schema: aten::addcmul(Tensor self, Tensor tensor1, Tensor tensor2, *, Scalar value=1) -> (Tensor) 2022-05-18T03:33:21.1371880Z processing existing schema: aten::addcmul.out(Tensor self, Tensor tensor1, Tensor tensor2, *, Scalar value=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1372840Z processing existing schema: aten::mm(Tensor self, Tensor mat2) -> (Tensor) 2022-05-18T03:33:21.1374768Z processing existing schema: aten::mm.out(Tensor self, Tensor mat2, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1375978Z processing existing schema: aten::linalg_tensorinv(Tensor self, int ind=2) -> (Tensor) 2022-05-18T03:33:21.1377829Z processing existing schema: aten::linalg_tensorinv.out(Tensor self, int ind=2, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1379538Z processing existing schema: quantized::mul_scalar_relu_out(Tensor qa, Scalar b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:21.1381281Z processing existing schema: quantized::mul_scalar_relu_out.Tensor(Tensor qa, Tensor b, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:21.1382862Z processing existing schema: aten::clip(Tensor self, Scalar? min=None, Scalar? 
max=None) -> (Tensor) 2022-05-18T03:33:21.1384778Z processing existing schema: aten::clip.out(Tensor self, Scalar? min=None, Scalar? max=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1386299Z processing existing schema: aten::clip.Tensor(Tensor self, Tensor? min=None, Tensor? max=None) -> (Tensor) 2022-05-18T03:33:21.1388356Z processing existing schema: aten::clip.Tensor_out(Tensor self, Tensor? min=None, Tensor? max=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1389476Z processing existing schema: aten::dot(Tensor self, Tensor tensor) -> (Tensor) 2022-05-18T03:33:21.1391200Z processing existing schema: aten::dot.out(Tensor self, Tensor tensor, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1393130Z processing existing schema: aten::popitem.str(Dict(str, t)(a!) self) -> ((str, t)) 2022-05-18T03:33:21.1395183Z processing existing schema: aten::popitem.int(Dict(int, t)(a!) self) -> ((int, t)) 2022-05-18T03:33:21.1397201Z processing existing schema: aten::popitem.bool(Dict(bool, t)(a!) self) -> ((bool, t)) 2022-05-18T03:33:21.1399496Z processing existing schema: aten::popitem.float(Dict(float, t)(a!) self) -> ((float, t)) 2022-05-18T03:33:21.1401543Z processing existing schema: aten::popitem.complex(Dict(complex, t)(a!) self) -> ((complex, t)) 2022-05-18T03:33:21.1403563Z processing existing schema: aten::popitem.Tensor(Dict(Tensor, t)(a!) self) -> ((Tensor, t)) 2022-05-18T03:33:21.1405078Z processing existing schema: aten::resolve_neg(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.1407043Z processing existing schema: aten::_upsample_nearest_exact3d(Tensor self, int[3] output_size, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:21.1409217Z processing existing schema: aten::_upsample_nearest_exact3d.vec(Tensor input, int[]? output_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.1411497Z processing existing schema: aten::_upsample_nearest_exact3d.out(Tensor self, int[3] output_size, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1412880Z processing existing schema: aten::resolve_conj(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.1414435Z processing existing schema: aten::linear(Tensor input, Tensor weight, Tensor? bias=None) -> (Tensor) 2022-05-18T03:33:21.1416527Z processing existing schema: aten::linear.out(Tensor input, Tensor weight, Tensor? bias=None, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.1419473Z processing existing schema: aten::quantized_gru.input(Tensor input, Tensor hx, __torch__.torch.classes.rnn.CellParamsBase[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor) 2022-05-18T03:33:21.1422066Z processing existing schema: aten::quantized_gru.data(Tensor data, Tensor batch_sizes, Tensor hx, __torch__.torch.classes.rnn.CellParamsBase[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional) -> (Tensor, Tensor) 2022-05-18T03:33:21.1424488Z processing existing schema: aten::quantized_gru.input_legacy(Tensor input, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor) 2022-05-18T03:33:21.1427002Z processing existing schema: aten::quantized_gru.data_legacy(Tensor data, Tensor batch_sizes, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional) -> (Tensor, Tensor) 2022-05-18T03:33:21.1428638Z processing existing schema: aten::alpha_dropout_(Tensor(a!) self, float p, bool train) -> (Tensor(a!)) 2022-05-18T03:33:21.1430368Z processing existing schema: aten::swapdims(Tensor(a) self, int dim0, int dim1) -> (Tensor(a)) 2022-05-18T03:33:21.1432741Z processing existing schema: quantized::quantized_gru_cell_dynamic(Tensor input, Tensor hx, __torch__.torch.classes.quantized.LinearPackedParamsBase w_ih, __torch__.torch.classes.quantized.LinearPackedParamsBase w_hh, Tensor b_ih, Tensor b_hh) -> (Tensor) 2022-05-18T03:33:21.1433786Z processing existing schema: aten::any(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1435600Z processing existing schema: aten::any.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.1437500Z processing existing schema: aten::any.out(Tensor self, int dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1439280Z processing existing schema: aten::any.all_out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1440935Z processing existing schema: aten::any.dimname(Tensor self, str dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.1442919Z processing existing schema: aten::any.dimname_out(Tensor self, str dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1444417Z processing existing schema: aten::any.str(str[] self) -> (bool) 2022-05-18T03:33:21.1446008Z processing existing schema: aten::any.int(int[] self) -> (bool) 2022-05-18T03:33:21.1447627Z processing existing schema: aten::any.float(float[] self) -> (bool) 2022-05-18T03:33:21.1449289Z processing existing schema: aten::any.bool(bool[] self) -> (bool) 2022-05-18T03:33:21.1451472Z processing existing schema: quantized::batch_norm1d(Tensor qx, Tensor? weight, Tensor? bias, Tensor mean, Tensor var, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.1453668Z processing existing schema: aten::as_strided_copy(Tensor self, int[] size, int[] stride, int? storage_offset=None) -> (Tensor) 2022-05-18T03:33:21.1454941Z processing existing schema: aten::lt.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.1456276Z processing existing schema: aten::lt.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.1458172Z processing existing schema: aten::lt.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.1460054Z processing existing schema: aten::lt.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1461479Z processing existing schema: aten::lt.int(int a, int b) -> (bool) 2022-05-18T03:33:21.1462634Z processing existing schema: aten::lt.float(float a, float b) -> (bool) 2022-05-18T03:33:21.1464131Z processing existing schema: aten::lt.int_float(int a, float b) -> (bool) 2022-05-18T03:33:21.1466003Z processing existing schema: aten::lt.float_int(float a, int b) -> (bool) 2022-05-18T03:33:21.1467302Z processing existing schema: aten::lt(Scalar a, Scalar b) -> (bool) 2022-05-18T03:33:21.1468618Z processing existing schema: aten::lt.str(str a, str b) -> (bool) 2022-05-18T03:33:21.1470561Z processing existing schema: quantized::linear_prepack(Tensor W, Tensor? B=None) -> (__torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack) 2022-05-18T03:33:21.1472281Z processing existing schema: aten::celu_(Tensor(a!) self, Scalar alpha=1.) -> (Tensor(a!)) 2022-05-18T03:33:21.1473647Z processing existing schema: aten::view_as_real(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.1475043Z processing existing schema: aten::real(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.1476614Z processing existing schema: aten::imag(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.1478007Z processing existing schema: aten::result_type.Tensor(Tensor tensor, Tensor other) -> (int) 2022-05-18T03:33:21.1479576Z processing existing schema: aten::result_type.Scalar(Tensor tensor, Scalar other) -> (int) 2022-05-18T03:33:21.1480665Z processing existing schema: aten::result_type.Scalar_Tensor(Scalar scalar, Tensor tensor) -> (int) 2022-05-18T03:33:21.1482099Z processing existing schema: aten::result_type.Scalar_Scalar(Scalar scalar1, Scalar scalar2) -> (int) 2022-05-18T03:33:21.1483072Z processing existing schema: prim::FusedConcat(...) -> (...) 2022-05-18T03:33:21.1485191Z processing existing schema: aten::linalg_svd(Tensor A, bool full_matrices=True) -> (Tensor U, Tensor S, Tensor Vh) 2022-05-18T03:33:21.1487718Z processing existing schema: aten::linalg_svd.U(Tensor A, bool full_matrices=True, *, Tensor(a!) U, Tensor(b!) S, Tensor(c!) Vh) -> (Tensor(a!) U, Tensor(b!) S, Tensor(c!) Vh) 2022-05-18T03:33:21.1488977Z processing existing schema: aten::sqrt(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1490762Z processing existing schema: aten::sqrt.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1492124Z processing existing schema: aten::sqrt.int(int a) -> (float) 2022-05-18T03:33:21.1493550Z processing existing schema: aten::sqrt.float(float a) -> (float) 2022-05-18T03:33:21.1495030Z processing existing schema: aten::sqrt.complex(complex a) -> (complex) 2022-05-18T03:33:21.1496467Z processing existing schema: aten::sqrt.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.1498055Z processing existing schema: aten::pow.Tensor_Tensor(Tensor self, Tensor exponent) -> (Tensor) 2022-05-18T03:33:21.1499596Z processing existing schema: aten::pow.Tensor_Scalar(Tensor self, Scalar exponent) -> (Tensor) 2022-05-18T03:33:21.1501121Z processing existing schema: aten::pow.Scalar(Scalar self, Tensor exponent) -> (Tensor) 2022-05-18T03:33:21.1503093Z processing existing schema: aten::pow.Scalar_out(Scalar self, Tensor exponent, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1505134Z processing existing schema: aten::pow.Tensor_Scalar_out(Tensor self, Scalar exponent, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.1507136Z processing existing schema: aten::pow.Tensor_Tensor_out(Tensor self, Tensor exponent, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1508478Z processing existing schema: aten::pow.int(int a, int b) -> (float) 2022-05-18T03:33:21.1510037Z processing existing schema: aten::pow.complex(complex a, complex b) -> (complex) 2022-05-18T03:33:21.1511542Z processing existing schema: aten::pow.float(float a, float b) -> (float) 2022-05-18T03:33:21.1513062Z processing existing schema: aten::pow.int_float(int a, float b) -> (float) 2022-05-18T03:33:21.1514606Z processing existing schema: aten::pow.float_int(float a, int b) -> (float) 2022-05-18T03:33:21.1516205Z processing existing schema: aten::pow.float_complex(float a, complex b) -> (complex) 2022-05-18T03:33:21.1517759Z processing existing schema: aten::pow.complex_float(complex a, float b) -> (complex) 2022-05-18T03:33:21.1519519Z processing existing schema: aten::pow.Scalar_Scalar(Scalar a, Scalar b) -> (float) 2022-05-18T03:33:21.1520935Z processing existing schema: aten::pow.int_to_int(int a, int b) -> (int) 2022-05-18T03:33:21.1522399Z processing existing schema: aten::silu(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1524303Z processing existing schema: aten::silu.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1525890Z processing existing schema: aten::alias(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.1527730Z processing existing schema: prim::MKLDNNScalarMul_(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.1529084Z processing existing schema: aten::_conj_physical(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1530462Z processing existing schema: aten::log2(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1532311Z processing existing schema: aten::log2.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1534198Z processing existing schema: aten::stack(Tensor[] tensors, int dim=0) -> (Tensor) 2022-05-18T03:33:21.1536515Z processing existing schema: aten::stack.out(Tensor[] tensors, int dim=0, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1538161Z processing existing schema: prim::MKLDNNHardTanh_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.1540451Z processing existing schema: aten::addmv_(Tensor(a!) self, Tensor mat, Tensor vec, *, Scalar beta=1, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:21.1541862Z processing existing schema: aten::bmm(Tensor self, Tensor mat2) -> (Tensor) 2022-05-18T03:33:21.1543819Z processing existing schema: aten::bmm.out(Tensor self, Tensor mat2, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1546400Z processing existing schema: quantized::embedding_bag_2bit_prepack(Tensor weight, bool optimized_qparams=False, int nbins=200, float ratio=0.16) -> (Tensor) 2022-05-18T03:33:21.1548048Z processing existing schema: prim::MKLDNNHardSigmoid_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.1550144Z processing existing schema: aten::addmv(Tensor self, Tensor mat, Tensor vec, *, Scalar beta=1, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.1552423Z processing existing schema: aten::addmv.out(Tensor self, Tensor mat, Tensor vec, *, Scalar beta=1, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1554485Z processing existing schema: quantized::add_out(Tensor qa, Tensor qb, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:21.1556095Z processing existing schema: aten::arctan2_(Tensor(a!) 
self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.1557921Z processing existing schema: aten::threshold_(Tensor(a!) self, Scalar threshold, Scalar value) -> (Tensor(a!)) 2022-05-18T03:33:21.1559662Z processing existing schema: aten::copysign.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.1561578Z processing existing schema: aten::copysign.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1562993Z processing existing schema: aten::copysign.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.1565090Z processing existing schema: aten::copysign.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1566639Z processing existing schema: aten::copysign.int(int a, int b) -> (float) 2022-05-18T03:33:21.1568358Z processing existing schema: aten::copysign.float(float a, float b) -> (float) 2022-05-18T03:33:21.1569725Z processing existing schema: aten::copysign.int_float(int a, float b) -> (float) 2022-05-18T03:33:21.1571389Z processing existing schema: aten::copysign.float_int(float a, int b) -> (float) 2022-05-18T03:33:21.1573454Z processing existing schema: aten::copysign(Scalar a, Scalar b) -> (float) 2022-05-18T03:33:21.1574630Z processing existing schema: quantized::conv_transpose2d_groups(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int) 2022-05-18T03:33:21.1576791Z processing existing schema: _quantized::conv_transpose1d(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.1578539Z processing existing schema: aten::batch_norm(Tensor input, Tensor? weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool training, float momentum, float eps, bool cudnn_enabled) -> (Tensor) 2022-05-18T03:33:21.1579094Z processing existing schema: aten::trunc(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1580896Z processing existing schema: aten::trunc.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1582927Z processing existing schema: quantized::conv_transpose1d_dynamic(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, bool reduce_range=False) -> (Tensor) 2022-05-18T03:33:21.1584968Z processing existing schema: aten::batch_norm_gather_stats_with_counts(Tensor input, Tensor mean, Tensor invstd, Tensor? running_mean, Tensor? running_var, float momentum, float eps, Tensor counts) -> (Tensor, Tensor) 2022-05-18T03:33:21.1587210Z processing existing schema: aten::unflatten.int(Tensor(a) self, int dim, int[] sizes, str[]? names=None) -> (Tensor(a)) 2022-05-18T03:33:21.1589611Z processing existing schema: aten::unflatten.Dimname(Tensor(a) self, str dim, int[] sizes, str[] names) -> (Tensor(a)) 2022-05-18T03:33:21.1591476Z processing existing schema: aten::geometric_(Tensor(a!) self, float p, *, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:21.1593156Z processing existing schema: quantized::conv2d_groups(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int) 2022-05-18T03:33:21.1595421Z processing existing schema: aten::conv_depthwise3d(Tensor self, Tensor weight, int[3] kernel_size, Tensor? bias, int[3] stride, int[3] padding, int[3] dilation) -> (Tensor) 2022-05-18T03:33:21.1597920Z processing existing schema: aten::empty_strided(int[] size, int[] stride, *, int? dtype=None, int? layout=None, Device? device=None, bool? 
pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.1599501Z processing existing schema: aten::ljust(str self, int width, str fillchar=" ") -> (str) 2022-05-18T03:33:21.1600556Z processing existing schema: aten::neg(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1602459Z processing existing schema: aten::neg.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1603902Z processing existing schema: aten::neg.int(int a) -> (int) 2022-05-18T03:33:21.1605510Z processing existing schema: aten::neg.float(float a) -> (float) 2022-05-18T03:33:21.1606671Z processing existing schema: aten::neg.complex(complex a) -> (complex) 2022-05-18T03:33:21.1608139Z processing existing schema: aten::neg.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.1611286Z processing existing schema: aten::sparse_compressed_tensor.comp_plain_value_size(Tensor compressed_indices, Tensor plain_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:21.1613611Z processing existing schema: aten::sparse_compressed_tensor.comp_plain_value(Tensor compressed_indices, Tensor plain_indices, Tensor values, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:21.1614453Z processing existing schema: aten::sinh(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1616143Z processing existing schema: aten::sinh.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1617605Z processing existing schema: aten::sinh.int(int a) -> (float) 2022-05-18T03:33:21.1619153Z processing existing schema: aten::sinh.float(float a) -> (float) 2022-05-18T03:33:21.1620549Z processing existing schema: aten::sinh.complex(complex a) -> (complex) 2022-05-18T03:33:21.1621843Z processing existing schema: aten::sinh.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.1623237Z processing existing schema: aten::angle(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1625077Z processing existing schema: aten::angle.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1626408Z processing existing schema: aten::angle.int(int a) -> (float) 2022-05-18T03:33:21.1627762Z processing existing schema: aten::angle.float(float a) -> (float) 2022-05-18T03:33:21.1629124Z processing existing schema: aten::angle.complex(complex a) -> (float) 2022-05-18T03:33:21.1630483Z processing existing schema: aten::angle.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.1633200Z processing existing schema: quantized::quantized_lstm_cell_dynamic(Tensor input, Tensor[] hx, __torch__.torch.classes.quantized.LinearPackedParamsBase w_ih, __torch__.torch.classes.quantized.LinearPackedParamsBase w_hh, Tensor bias_ih, Tensor bias_hh) -> (Tensor, Tensor) 2022-05-18T03:33:21.1634579Z processing existing schema: quantized::linear_relu_dynamic_fp16(Tensor X, __torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack) -> (Tensor Y) 2022-05-18T03:33:21.1635940Z processing existing schema: aten::ceil_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.1637497Z processing existing schema: aten::view_as_complex(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.1639239Z processing existing schema: quantized::linear_prepack_fp16_legacy(Tensor W, Tensor? 
B=None) -> (Tensor W_prepack) 2022-05-18T03:33:21.1640554Z processing existing schema: aten::channel_shuffle(Tensor self, int groups) -> (Tensor) 2022-05-18T03:33:21.1642416Z processing existing schema: aten::vsplit.int(Tensor(a -> *) self, int sections) -> (Tensor[]) 2022-05-18T03:33:21.1644624Z processing existing schema: aten::vsplit.array(Tensor(a -> *) self, int[] indices) -> (Tensor[]) 2022-05-18T03:33:21.1646630Z processing existing schema: aten::diagonal(Tensor(a) self, int offset=0, int dim1=0, int dim2=1) -> (Tensor(a)) 2022-05-18T03:33:21.1648678Z processing existing schema: aten::diagonal.Dimname(Tensor(a) self, *, str outdim, str dim1, str dim2, int offset=0) -> (Tensor(a)) 2022-05-18T03:33:21.1649899Z processing existing schema: aten::lower(str self) -> (str) 2022-05-18T03:33:21.1651250Z processing existing schema: aten::sign(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1652953Z processing existing schema: aten::sign.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1654406Z processing existing schema: aten::silu_backward(Tensor grad_output, Tensor self) -> (Tensor) 2022-05-18T03:33:21.1656225Z processing existing schema: aten::silu_backward.grad_input(Tensor grad_output, Tensor self, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.1657539Z processing existing schema: aten::align_as(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.1659501Z processing existing schema: prim::CudaFusionSizeEq(...) -> (bool) 2022-05-18T03:33:21.1660691Z processing existing schema: quantized::linear_dynamic(Tensor X, __torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack, bool reduce_range=False) -> (Tensor Y) 2022-05-18T03:33:21.1661719Z processing existing schema: aten::ccol_indices_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1663150Z processing existing schema: aten::vdot(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.1665072Z processing existing schema: aten::vdot.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1666433Z processing existing schema: aten::logcumsumexp(Tensor self, int dim) -> (Tensor) 2022-05-18T03:33:21.1667796Z processing existing schema: aten::logcumsumexp.dimname(Tensor self, str dim) -> (Tensor) 2022-05-18T03:33:21.1669570Z processing existing schema: aten::logcumsumexp.dimname_out(Tensor self, str dim, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1671328Z processing existing schema: aten::logcumsumexp.out(Tensor self, int dim, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1673312Z processing existing schema: quantized::make_quantized_cell_params_fp16(__torch__.torch.classes.quantized.LinearPackedParamsBase w_ih, __torch__.torch.classes.quantized.LinearPackedParamsBase w_hh) -> (__torch__.torch.classes.rnn.CellParamsBase) 2022-05-18T03:33:21.1674697Z processing existing schema: aten::amin(Tensor self, int[1] dim=[], bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.1676798Z processing existing schema: aten::amin.out(Tensor self, int[1] dim=[], bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1678597Z processing existing schema: aten::symeig(Tensor self, bool eigenvectors=False, bool upper=True) -> (Tensor eigenvalues, Tensor eigenvectors) 2022-05-18T03:33:21.1681158Z processing existing schema: aten::symeig.e(Tensor self, bool eigenvectors=False, bool upper=True, *, Tensor(a!) e, Tensor(b!) V) -> (Tensor(a!) eigenvalues, Tensor(b!) 
eigenvectors) 2022-05-18T03:33:21.1682322Z processing existing schema: aten::polygamma(int n, Tensor self) -> (Tensor) 2022-05-18T03:33:21.1683963Z processing existing schema: aten::polygamma.out(int n, Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1685443Z processing existing schema: aten::_remove_batch_dim(Tensor self, int level, int batch_size, int out_dim) -> (Tensor) 2022-05-18T03:33:21.1686715Z processing existing schema: prim::TensorExprDynamicGroup(...) -> (...) 2022-05-18T03:33:21.1688216Z processing existing schema: aten::t(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.1689516Z processing existing schema: aten::tan(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1691072Z processing existing schema: aten::tan.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1693003Z processing existing schema: aten::tan.int(int a) -> (float) 2022-05-18T03:33:21.1693771Z processing existing schema: aten::tan.float(float a) -> (float) 2022-05-18T03:33:21.1695183Z processing existing schema: aten::tan.complex(complex a) -> (complex) 2022-05-18T03:33:21.1696230Z processing existing schema: aten::tan.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.1697967Z processing existing schema: aten::swapaxes(Tensor(a) self, int axis0, int axis1) -> (Tensor(a)) 2022-05-18T03:33:21.1699750Z schema: prim::infer_squeeze_size.dim(int[] a, int dim) -> (int[]) found on allowlist, skipping 2022-05-18T03:33:21.1701595Z schema: prim::infer_squeeze_size(int[] a) -> (int[]) found on allowlist, skipping 2022-05-18T03:33:21.1704006Z processing existing schema: aten::allclose(Tensor self, Tensor other, float rtol=1.0000000000000001e-05, float atol=1e-08, bool equal_nan=False) -> (bool) 2022-05-18T03:33:21.1706460Z processing existing schema: aten::fft_rfftfreq(int n, float d=1., *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.1708190Z processing existing schema: aten::fft_rfftfreq.out(int n, float d=1., *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1709921Z processing existing schema: prim::reshape_copy(Tensor self, int[] shape) -> (Tensor) 2022-05-18T03:33:21.1711382Z processing existing schema: aten::prelu(Tensor self, Tensor weight) -> (Tensor) 2022-05-18T03:33:21.1713221Z processing existing schema: quantized::conv1d(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.1714226Z processing existing schema: aten::atleast_1d(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1716139Z processing existing schema: aten::atleast_1d.Sequence(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.1717707Z processing existing schema: aten::igammac_(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.1719480Z processing existing schema: aten::_ncf_unsqueeze(Tensor(a) self, int ndim) -> (Tensor(a)) 2022-05-18T03:33:21.1720932Z processing existing schema: aten::bernoulli(Tensor self, *, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:21.1722754Z processing existing schema: aten::bernoulli.out(Tensor self, *, Generator? generator=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1724387Z processing existing schema: aten::bernoulli.p(Tensor self, float p, *, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:21.1727314Z processing existing schema: quantized::conv_prepack(Tensor weight, Tensor? 
bias, int[] stride, int[] padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv2dPackedParamsBase) 2022-05-18T03:33:21.1728037Z processing existing schema: aten::linalg_inv(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1729929Z processing existing schema: aten::linalg_inv.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1731585Z processing existing schema: aten::linalg_cholesky_ex(Tensor self, *, bool upper=False, bool check_errors=False) -> (Tensor L, Tensor info) 2022-05-18T03:33:21.1734038Z processing existing schema: aten::linalg_cholesky_ex.L(Tensor self, *, bool upper=False, bool check_errors=False, Tensor(a!) L, Tensor(b!) info) -> (Tensor(a!) L, Tensor(b!) info) 2022-05-18T03:33:21.1735541Z processing existing schema: aten::_new_zeros_with_same_feature_meta(Tensor self, Tensor other, *, int self_num_batch_dims=0) -> (Tensor) 2022-05-18T03:33:21.1736872Z processing existing schema: aten::orgqr(Tensor self, Tensor input2) -> (Tensor) 2022-05-18T03:33:21.1738607Z processing existing schema: aten::orgqr.out(Tensor self, Tensor input2, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1739860Z processing existing schema: aten::lift(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1741663Z processing existing schema: quantized::conv_transpose2d_stride(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:21.1743076Z processing existing schema: aten::copy(Tensor self, Tensor src, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:21.1744961Z processing existing schema: aten::copy.t(t[](a) self) -> (t[]) 2022-05-18T03:33:21.1747183Z processing existing schema: aten::copy.Dict_str(Dict(str, t)(a) self) -> (Dict(str, t)) 2022-05-18T03:33:21.1749238Z processing existing schema: aten::copy.Dict_int(Dict(int, t)(a) self) -> (Dict(int, t)) 2022-05-18T03:33:21.1751371Z processing existing schema: aten::copy.Dict_bool(Dict(bool, t)(a) self) -> (Dict(bool, t)) 2022-05-18T03:33:21.1753690Z processing existing schema: aten::copy.Dict_float(Dict(float, t)(a) self) -> (Dict(float, t)) 2022-05-18T03:33:21.1755713Z processing existing schema: aten::copy.Dict_complex(Dict(complex, t)(a) self) -> (Dict(complex, t)) 2022-05-18T03:33:21.1757765Z processing existing schema: aten::copy.Dict_Tensor(Dict(Tensor, t)(a) self) -> (Dict(Tensor, t)) 2022-05-18T03:33:21.1759021Z processing existing schema: aten::exp(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1760731Z processing existing schema: aten::exp.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1761998Z processing existing schema: aten::exp.int(int a) -> (float) 2022-05-18T03:33:21.1764218Z processing existing schema: aten::exp.float(float a) -> (float) 2022-05-18T03:33:21.1764446Z processing existing schema: aten::exp.complex(complex a) -> (complex) 2022-05-18T03:33:21.1765750Z processing existing schema: aten::exp.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.1767405Z processing existing schema: prim::is_sparse(Tensor a) -> (bool) 2022-05-18T03:33:21.1768709Z processing existing schema: aten::cdist(Tensor x1, Tensor x2, float p=2., int? compute_mode=None) -> (Tensor) 2022-05-18T03:33:21.1770116Z processing existing schema: quantized::linear_relu_dynamic(Tensor X, __torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack, bool reduce_range=False) -> (Tensor Y) 2022-05-18T03:33:21.1770849Z processing existing schema: prim::CudaFusionIvalGuard(...) 
-> (bool) 2022-05-18T03:33:21.1772946Z processing existing schema: aten::align_to(Tensor(a) self, str[] names) -> (Tensor(a)) 2022-05-18T03:33:21.1775129Z processing existing schema: aten::align_to.ellipsis_idx(Tensor(a) self, str[] order, int ellipsis_idx) -> (Tensor(a)) 2022-05-18T03:33:21.1776476Z processing existing schema: prim::CudaFusionGuard(...) -> (bool) 2022-05-18T03:33:21.1778035Z processing existing schema: aten::_pin_memory(Tensor self, Device? device=None) -> (Tensor) 2022-05-18T03:33:21.1779517Z processing existing schema: aten::polar(Tensor abs, Tensor angle) -> (Tensor) 2022-05-18T03:33:21.1781382Z processing existing schema: aten::polar.out(Tensor abs, Tensor angle, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1782872Z processing existing schema: aten::polar.int(int a, int b) -> (complex) 2022-05-18T03:33:21.1784350Z processing existing schema: aten::polar.float(float a, float b) -> (complex) 2022-05-18T03:33:21.1785978Z processing existing schema: aten::polar.int_float(int a, float b) -> (complex) 2022-05-18T03:33:21.1787464Z processing existing schema: aten::polar.float_int(float a, int b) -> (complex) 2022-05-18T03:33:21.1789021Z processing existing schema: aten::polar.Scalar_Scalar(Scalar a, Scalar b) -> (Scalar) 2022-05-18T03:33:21.1791282Z processing existing schema: aten::fft_ifft2(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None) -> (Tensor) 2022-05-18T03:33:21.1794073Z processing existing schema: aten::fft_ifft2.out(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1795346Z processing existing schema: prim::CudaFusionGroup(...) -> (...) 2022-05-18T03:33:21.1796974Z processing existing schema: aten::_pdist_forward(Tensor self, float p=2.) -> (Tensor) 2022-05-18T03:33:21.1798816Z processing existing schema: aten::poisson_nll_loss(Tensor input, Tensor target, bool log_input, bool full, float eps, int reduction) -> (Tensor) 2022-05-18T03:33:21.1800564Z processing existing schema: aten::log_softmax.int(Tensor self, int dim, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.1802208Z processing existing schema: aten::log_softmax.Dimname(Tensor self, str dim, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.1804126Z processing existing schema: aten::log_softmax.int_out(Tensor self, int dim, int? dtype=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1805623Z processing existing schema: aten::t_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.1806876Z processing existing schema: prim::TensorExprGroup(...) -> (...) 2022-05-18T03:33:21.1808632Z processing existing schema: prim::view_copy(Tensor self, int[] size) -> (Tensor) 2022-05-18T03:33:21.1810873Z processing existing schema: aten::fft_rfft2(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None) -> (Tensor) 2022-05-18T03:33:21.1813543Z processing existing schema: aten::fft_rfft2.out(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1815048Z processing existing schema: quantized::add_scalar_relu(Tensor qa, Scalar b) -> (Tensor qc) 2022-05-18T03:33:21.1816650Z processing existing schema: quantized::add_scalar_relu.Tensor(Tensor qa, Tensor b) -> (Tensor qc) 2022-05-18T03:33:21.1818139Z processing existing schema: aten::arctanh_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.1820823Z processing existing schema: aten::to.device(Tensor(a) self, Device device, int dtype, bool non_blocking=False, bool copy=False, int? 
memory_format=None) -> (Tensor(a)) 2022-05-18T03:33:21.1822954Z processing existing schema: aten::to.dtype(Tensor(a) self, int dtype, bool non_blocking=False, bool copy=False, int? memory_format=None) -> (Tensor(a)) 2022-05-18T03:33:21.1825243Z processing existing schema: aten::to.other(Tensor(a) self, Tensor other, bool non_blocking=False, bool copy=False, int? memory_format=None) -> (Tensor(a)) 2022-05-18T03:33:21.1828232Z processing existing schema: aten::to.dtype_layout(Tensor(a) self, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, bool non_blocking=False, bool copy=False, int? memory_format=None) -> (Tensor(a)) 2022-05-18T03:33:21.1830283Z processing existing schema: aten::to.prim_Device(Tensor(a) self, Device? device, int? dtype=None, bool non_blocking=False, bool copy=False) -> (Tensor(a|b)) 2022-05-18T03:33:21.1832405Z processing existing schema: aten::to.prim_dtype(Tensor(a) self, int? dtype=None, bool non_blocking=False, bool copy=False) -> (Tensor(a|b)) 2022-05-18T03:33:21.1834375Z processing existing schema: aten::to.prim_other(Tensor(a) self, bool non_blocking=False, bool copy=False) -> (Tensor(a|b)) 2022-05-18T03:33:21.1836304Z processing existing schema: aten::_make_dual(Tensor(a) primal, Tensor tangent, int level) -> (Tensor(a)) 2022-05-18T03:33:21.1837554Z processing existing schema: aten::_log_softmax(Tensor self, int dim, bool half_to_float) -> (Tensor) 2022-05-18T03:33:21.1839569Z processing existing schema: aten::_log_softmax.out(Tensor self, int dim, bool half_to_float, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1842238Z processing existing schema: aten::new_zeros(Tensor self, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.1844152Z processing existing schema: aten::split.Tensor(Tensor(a -> *) self, int split_size, int dim=0) -> (Tensor[]) 2022-05-18T03:33:21.1846560Z processing existing schema: aten::split.sizes(Tensor(a -> *) self, int[] split_size, int dim=0) -> (Tensor[]) 2022-05-18T03:33:21.1848353Z processing existing schema: aten::split.str(str self, str? separator=None, int max=-1) -> (str[]) 2022-05-18T03:33:21.1850532Z processing existing schema: aten::split(Tensor(a -> *) self, int[] split_sizes, int dim=0) -> (Tensor[]) 2022-05-18T03:33:21.1851403Z schema: aten::linalg_qr(Tensor A, str mode="reduced") -> (Tensor Q, Tensor R) found on allowlist, skipping 2022-05-18T03:33:21.1852930Z schema: aten::linalg_qr.out(Tensor A, str mode="reduced", *, Tensor(a!) Q, Tensor(b!) R) -> (Tensor(a!) Q, Tensor(b!) R) found on allowlist, skipping 2022-05-18T03:33:21.1855041Z processing existing schema: aten::exponential_(Tensor(a!) self, float lambd=1., *, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:21.1856098Z processing existing schema: prim::name(Tensor a) -> (str?) 2022-05-18T03:33:21.1858256Z processing existing schema: aten::baddbmm(Tensor self, Tensor batch1, Tensor batch2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.1860517Z processing existing schema: aten::baddbmm.out(Tensor self, Tensor batch1, Tensor batch2, *, Scalar beta=1, Scalar alpha=1, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.1862269Z processing existing schema: quantized::conv_transpose3d(Tensor qx, __torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.1863365Z processing existing schema: aten::linalg_cholesky(Tensor self, *, bool upper=False) -> (Tensor) 2022-05-18T03:33:21.1865392Z processing existing schema: aten::linalg_cholesky.out(Tensor self, *, bool upper=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1867264Z processing existing schema: aten::fft_ifft(Tensor self, int? n=None, int dim=-1, str? norm=None) -> (Tensor) 2022-05-18T03:33:21.1869468Z processing existing schema: aten::fft_ifft.out(Tensor self, int? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1870729Z processing existing schema: prim::StaticRuntimeCopyOuts(...) -> (...) 2022-05-18T03:33:21.1872631Z processing existing schema: aten::fft_fftn(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None) -> (Tensor) 2022-05-18T03:33:21.1875195Z processing existing schema: aten::fft_fftn.out(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1876832Z processing existing schema: aten::reshape(Tensor(a) self, int[] shape) -> (Tensor(a)) 2022-05-18T03:33:21.1878753Z processing existing schema: aten::gru_cell(Tensor input, Tensor hx, Tensor w_ih, Tensor w_hh, Tensor? b_ih=None, Tensor? b_hh=None) -> (Tensor) 2022-05-18T03:33:21.1880658Z processing existing schema: sparse::qlinear_relu(Tensor X, __torch__.torch.classes.sparse.LinearPackedParamsBase W_prepack, float Y_scale_i, int Y_zero_point_i) -> (Tensor Y) 2022-05-18T03:33:21.1881991Z processing existing schema: aten::arccosh_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.1883919Z processing existing schema: aten::clamp_(Tensor(a!) self, Scalar? min=None, Scalar? max=None) -> (Tensor(a!)) 2022-05-18T03:33:21.1885834Z processing existing schema: aten::clamp_.Tensor(Tensor(a!) self, Tensor? min=None, Tensor? max=None) -> (Tensor(a!)) 2022-05-18T03:33:21.1887641Z processing existing schema: quantized::mul_out(Tensor qa, Tensor qb, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:21.1888832Z processing existing schema: aten::tanh(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1890563Z processing existing schema: aten::tanh.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1891902Z processing existing schema: aten::tanh.int(int a) -> (float) 2022-05-18T03:33:21.1893225Z processing existing schema: aten::tanh.float(float a) -> (float) 2022-05-18T03:33:21.1894471Z processing existing schema: aten::tanh.complex(complex a) -> (complex) 2022-05-18T03:33:21.1895694Z processing existing schema: aten::tanh.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.1897845Z processing existing schema: aten::blackman_window(int window_length, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.1900020Z processing existing schema: aten::blackman_window.periodic(int window_length, bool periodic, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.1901025Z processing existing schema: quantized::embedding_bag_byte_prepack(Tensor weight) -> (Tensor) 2022-05-18T03:33:21.1902584Z processing existing schema: aten::squeeze_(Tensor(a!) 
self) -> (Tensor(a!)) 2022-05-18T03:33:21.1904046Z processing existing schema: aten::squeeze_.dim(Tensor(a!) self, int dim) -> (Tensor(a!)) 2022-05-18T03:33:21.1905590Z processing existing schema: aten::squeeze_.dimname(Tensor(a!) self, str dim) -> (Tensor(a!)) 2022-05-18T03:33:21.1906776Z processing existing schema: prim::TensorExprDynamicGuard(...) -> (bool) 2022-05-18T03:33:21.1908224Z processing existing schema: aten::alias_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1910131Z processing existing schema: quantized::batch_norm1d_relu(Tensor qx, Tensor? weight, Tensor? bias, Tensor mean, Tensor var, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.1911237Z processing existing schema: aten::asin(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1912948Z processing existing schema: aten::asin.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1914441Z processing existing schema: aten::asin.int(int a) -> (float) 2022-05-18T03:33:21.1915704Z processing existing schema: aten::asin.float(float a) -> (float) 2022-05-18T03:33:21.1916867Z processing existing schema: aten::asin.complex(complex a) -> (complex) 2022-05-18T03:33:21.1918161Z processing existing schema: aten::asin.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.1919696Z processing existing schema: prim::MKLDNNHardTanh(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1921681Z processing existing schema: aten::permute(Tensor(a) self, int[] dims) -> (Tensor(a)) 2022-05-18T03:33:21.1923017Z processing existing schema: prim::ConstantMKLDNNTensor(...) -> (...) 2022-05-18T03:33:21.1925120Z processing existing schema: aten::fft_fft2(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None) -> (Tensor) 2022-05-18T03:33:21.1927573Z processing existing schema: aten::fft_fft2.out(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1929082Z processing existing schema: aten::_conj(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.1930366Z processing existing schema: aten::log1p(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1932086Z processing existing schema: aten::log1p.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1933977Z processing existing schema: aten::log1p.int(int a) -> (float) 2022-05-18T03:33:21.1934648Z processing existing schema: aten::log1p.float(float a) -> (float) 2022-05-18T03:33:21.1935925Z processing existing schema: aten::log1p.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.1937566Z processing existing schema: prim::add_optional(Tensor(a) input, Tensor? bias) -> (Tensor(a)) 2022-05-18T03:33:21.1939499Z processing existing schema: aten::fft_rfft(Tensor self, int? n=None, int dim=-1, str? norm=None) -> (Tensor) 2022-05-18T03:33:21.1941845Z processing existing schema: aten::fft_rfft.out(Tensor self, int? n=None, int dim=-1, str? norm=None, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.1943350Z processing existing schema: aten::expand_as(Tensor(a) self, Tensor other) -> (Tensor(a)) 2022-05-18T03:33:21.1944691Z processing existing schema: prim::is_ipu(Tensor a) -> (bool) 2022-05-18T03:33:21.1946976Z processing existing schema: aten::expand(Tensor(a) self, int[] size, *, bool implicit=False) -> (Tensor(a)) 2022-05-18T03:33:21.1949028Z processing existing schema: aten::expand.SymInt(Tensor(a) self, SymInt[] size, *, bool implicit=False) -> (Tensor(a)) 2022-05-18T03:33:21.1950183Z processing existing schema: prim::is_vulkan(Tensor a) -> (bool) 2022-05-18T03:33:21.1952518Z processing existing schema: aten::fft_fftfreq(int n, float d=1., *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.1954202Z processing existing schema: aten::fft_fftfreq.out(int n, float d=1., *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1955323Z processing existing schema: prim::BroadcastMKLDNNTensors(...) -> (...) 2022-05-18T03:33:21.1958068Z processing existing schema: aten::sparse_bsr_tensor.crow_col_value_size(Tensor crow_indices, Tensor col_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:21.1960357Z processing existing schema: aten::sparse_bsr_tensor.crow_col_value(Tensor crow_indices, Tensor col_indices, Tensor values, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:21.1961659Z processing existing schema: aten::l1_loss(Tensor self, Tensor target, int reduction=1) -> (Tensor) 2022-05-18T03:33:21.1963601Z processing existing schema: aten::l1_loss.out(Tensor self, Tensor target, int reduction=1, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1964830Z processing existing schema: prim::MKLDNNClamp(Tensor self) -> (Tensor) 2022-05-18T03:33:21.1966401Z processing existing schema: aten::is_same_size(Tensor self, Tensor other) -> (bool) 2022-05-18T03:33:21.1967995Z processing existing schema: aten::amax(Tensor self, int[1] dim=[], bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.1969981Z processing existing schema: aten::amax.out(Tensor self, int[1] dim=[], bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1972521Z processing existing schema: quantized::make_quantized_cell_params_dynamic(__torch__.torch.classes.quantized.LinearPackedParamsBase w_ih, __torch__.torch.classes.quantized.LinearPackedParamsBase w_hh, Tensor bias_ih, Tensor bias_hh, bool reduce_range=False) -> (__torch__.torch.classes.rnn.CellParamsBase) 2022-05-18T03:33:21.1973248Z processing existing schema: aten::size.int(Tensor self, int dim) -> (int) 2022-05-18T03:33:21.1974860Z processing existing schema: aten::size.Dimname(Tensor self, str dim) -> (int) 2022-05-18T03:33:21.1976430Z processing existing schema: aten::size(Tensor self) -> (int[]) 2022-05-18T03:33:21.1977891Z processing existing schema: aten::reshape_as(Tensor(a) self, Tensor other) -> (Tensor(a)) 2022-05-18T03:33:21.1979260Z processing existing schema: aten::gt.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.1980653Z processing existing schema: aten::gt.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.1982277Z processing existing schema: aten::gt.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.1983927Z processing existing schema: aten::gt.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.1985563Z processing existing schema: aten::gt.int(int a, int b) -> (bool) 2022-05-18T03:33:21.1987097Z processing existing schema: aten::gt.float(float a, float b) -> (bool) 2022-05-18T03:33:21.1988563Z processing existing schema: aten::gt.int_float(int a, float b) -> (bool) 2022-05-18T03:33:21.1989999Z processing existing schema: aten::gt.float_int(float a, int b) -> (bool) 2022-05-18T03:33:21.1991338Z processing existing schema: aten::gt(Scalar a, Scalar b) -> (bool) 2022-05-18T03:33:21.1992906Z processing existing schema: aten::gt.str(str a, str b) -> (bool) 2022-05-18T03:33:21.1994799Z processing existing schema: aten::_upsample_nearest_exact2d(Tensor self, int[2] output_size, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:21.1997285Z processing existing schema: aten::_upsample_nearest_exact2d.vec(Tensor input, int[]? output_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.1999481Z processing existing schema: aten::_upsample_nearest_exact2d.out(Tensor self, int[2] output_size, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2001966Z processing existing schema: aten::new_empty(Tensor self, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.2002722Z processing existing schema: prim::MKLDNNHardSwish(Tensor a) -> (Tensor) 2022-05-18T03:33:21.2004398Z processing existing schema: aten::linalg_inv_ex(Tensor self, *, bool check_errors=False) -> (Tensor inverse, Tensor info) 2022-05-18T03:33:21.2006902Z processing existing schema: aten::linalg_inv_ex.inverse(Tensor self, *, bool check_errors=False, Tensor(a!) inverse, Tensor(b!) info) -> (Tensor(a!) inverse, Tensor(b!) info) 2022-05-18T03:33:21.2008336Z processing existing schema: aten::_add_batch_dim(Tensor self, int batch_dim, int level) -> (Tensor) 2022-05-18T03:33:21.2010050Z processing existing schema: aten::linalg_eigh(Tensor self, str UPLO="L") -> (Tensor eigenvalues, Tensor eigenvectors) 2022-05-18T03:33:21.2012520Z processing existing schema: aten::linalg_eigh.eigvals(Tensor self, str UPLO="L", *, Tensor(a!) eigvals, Tensor(b!) eigvecs) -> (Tensor(a!) eigenvalues, Tensor(b!) eigenvectors) 2022-05-18T03:33:21.2013681Z processing existing schema: aten::atan(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2015267Z processing existing schema: aten::atan.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2016469Z processing existing schema: aten::atan.int(int a) -> (float) 2022-05-18T03:33:21.2017769Z processing existing schema: aten::atan.float(float a) -> (float) 2022-05-18T03:33:21.2018906Z processing existing schema: aten::atan.complex(complex a) -> (complex) 2022-05-18T03:33:21.2020033Z processing existing schema: aten::atan.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.2022240Z processing existing schema: quantized::batch_norm3d_relu(Tensor qx, Tensor? weight, Tensor? bias, Tensor mean, Tensor var, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.2023489Z processing existing schema: aten::le.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.2024902Z processing existing schema: aten::le.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.2026835Z processing existing schema: aten::le.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.2028340Z processing existing schema: aten::le.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2029682Z processing existing schema: aten::le.int(int a, int b) -> (bool) 2022-05-18T03:33:21.2030994Z processing existing schema: aten::le.float(float a, float b) -> (bool) 2022-05-18T03:33:21.2032306Z processing existing schema: aten::le.int_float(int a, float b) -> (bool) 2022-05-18T03:33:21.2033620Z processing existing schema: aten::le.float_int(float a, int b) -> (bool) 2022-05-18T03:33:21.2034853Z processing existing schema: aten::le(Scalar a, Scalar b) -> (bool) 2022-05-18T03:33:21.2036287Z processing existing schema: aten::le.str(str a, str b) -> (bool) 2022-05-18T03:33:21.2038384Z processing existing schema: aten::fft_irfft(Tensor self, int? n=None, int dim=-1, str? norm=None) -> (Tensor) 2022-05-18T03:33:21.2040782Z processing existing schema: aten::fft_irfft.out(Tensor self, int? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2041807Z processing existing schema: prim::oneDNNFusionGroup(...) -> (...) 2022-05-18T03:33:21.2043850Z processing existing schema: aten::fft_ifftn(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None) -> (Tensor) 2022-05-18T03:33:21.2046285Z processing existing schema: aten::fft_ifftn.out(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2047958Z processing existing schema: quantized::conv2d_padding(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:21.2050554Z processing existing schema: aten::conv1d(Tensor input, Tensor weight, Tensor? bias=None, int[1] stride=[1], int[1] padding=[0], int[1] dilation=[1], int groups=1) -> (Tensor) 2022-05-18T03:33:21.2053225Z processing existing schema: aten::conv1d.padding(Tensor input, Tensor weight, Tensor? bias=None, int[1] stride=[1], str padding="valid", int[1] dilation=[1], int groups=1) -> (Tensor) 2022-05-18T03:33:21.2055625Z processing existing schema: aten::empty.memory_format(int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:21.2057728Z processing existing schema: aten::empty.out(int[] size, *, int? memory_format=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2060689Z processing existing schema: aten::empty.names(int[] size, *, str[]? names, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:21.2062032Z processing existing schema: aten::isidentifier(str self) -> (bool) 2022-05-18T03:33:21.2063604Z processing existing schema: quantized::relu6(Tensor qx, bool inplace=False) -> (Tensor) 2022-05-18T03:33:21.2065090Z processing existing schema: aten::col_indices(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.2066727Z processing existing schema: aten::einsum(str equation, Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:21.2068151Z processing existing schema: aten::einsum.sublist(Tensor a, ...) -> (Tensor) 2022-05-18T03:33:21.2069514Z processing existing schema: aten::islower(str self) -> (bool) 2022-05-18T03:33:21.2071757Z processing existing schema: aten::triu_indices(int row, int col, int offset=0, *, int? dtype=4, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.2073785Z processing existing schema: aten::sub_.Scalar(Tensor(a!) 
self, Scalar other, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:21.2075517Z processing existing schema: aten::sub_.Tensor(Tensor(a!) self, Tensor other, *, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:21.2076849Z processing existing schema: aten::lgamma(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2078735Z processing existing schema: aten::lgamma.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2080623Z processing existing schema: aten::lgamma.int(int a) -> (float) 2022-05-18T03:33:21.2081525Z processing existing schema: aten::lgamma.float(float a) -> (float) 2022-05-18T03:33:21.2082908Z processing existing schema: aten::lgamma.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.2085059Z processing existing schema: aten::fft_irfft2(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None) -> (Tensor) 2022-05-18T03:33:21.2087564Z processing existing schema: aten::fft_irfft2.out(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2088877Z processing existing schema: prim::oneDNNFusionGuard(...) -> (...) 2022-05-18T03:33:21.2090231Z processing existing schema: aten::mv(Tensor self, Tensor vec) -> (Tensor) 2022-05-18T03:33:21.2092123Z processing existing schema: aten::mv.out(Tensor self, Tensor vec, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2106428Z processing existing schema: aten::detach(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.2106916Z processing existing schema: aten::numel(Tensor self) -> (int) 2022-05-18T03:33:21.2107332Z processing existing schema: aten::view(Tensor(a) self, int[] size) -> (Tensor(a)) 2022-05-18T03:33:21.2107748Z processing existing schema: aten::view.dtype(Tensor(a) self, int dtype) -> (Tensor(a)) 2022-05-18T03:33:21.2107949Z processing existing schema: prim::StaticSubgraph(...) -> (...) 2022-05-18T03:33:21.2108166Z processing existing schema: prim::squeeze_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2108390Z processing existing schema: prim::squeeze_copy.dim(Tensor self, int dim) -> (Tensor) 2022-05-18T03:33:21.2108697Z processing existing schema: aten::fft_rfftn(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None) -> (Tensor) 2022-05-18T03:33:21.2109134Z processing existing schema: aten::fft_rfftn.out(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2109782Z processing existing schema: aten::slice.Tensor(Tensor(a) self, int dim=0, int? start=None, int? end=None, int step=1) -> (Tensor(a)) 2022-05-18T03:33:21.2111765Z processing existing schema: aten::slice.t(t[] l, int? start=None, int? end=None, int step=1) -> (t[]) 2022-05-18T03:33:21.2113612Z processing existing schema: aten::slice.str(str string, int? start=None, int? end=None, int step=1) -> (str) 2022-05-18T03:33:21.2115390Z processing existing schema: aten::requires_grad_(Tensor(a!) self, bool requires_grad=True) -> (Tensor(a!)) 2022-05-18T03:33:21.2116833Z processing existing schema: aten::_unsafe_view(Tensor self, int[] size) -> (Tensor) 2022-05-18T03:33:21.2118590Z processing existing schema: aten::grid_sampler_2d(Tensor input, Tensor grid, int interpolation_mode, int padding_mode, bool align_corners) -> (Tensor) 2022-05-18T03:33:21.2120033Z processing existing schema: aten::replication_pad2d(Tensor self, int[4] padding) -> (Tensor) 2022-05-18T03:33:21.2121849Z processing existing schema: aten::replication_pad2d.out(Tensor self, int[4] padding, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.2123144Z processing existing schema: aten::log10(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2124835Z processing existing schema: aten::log10.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2126182Z processing existing schema: aten::log10.int(int a) -> (float) 2022-05-18T03:33:21.2127498Z processing existing schema: aten::log10.float(float a) -> (float) 2022-05-18T03:33:21.2128909Z processing existing schema: aten::log10.complex(complex a) -> (complex) 2022-05-18T03:33:21.2130189Z processing existing schema: aten::log10.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.2133026Z processing existing schema: aten::new_empty_strided(Tensor self, int[] size, int[] stride, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.2134856Z processing existing schema: quantized::conv_transpose3d_stride(__torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weights) -> (int[]) 2022-05-18T03:33:21.2136119Z processing existing schema: aten::cos(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2137608Z processing existing schema: aten::cos.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2138978Z processing existing schema: aten::cos.int(int a) -> (float) 2022-05-18T03:33:21.2140327Z processing existing schema: aten::cos.float(float a) -> (float) 2022-05-18T03:33:21.2141686Z processing existing schema: aten::cos.complex(complex a) -> (complex) 2022-05-18T03:33:21.2142960Z processing existing schema: aten::cos.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.2144388Z processing existing schema: aten::expm1(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2146169Z processing existing schema: aten::expm1.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2147538Z processing existing schema: aten::expm1.int(int a) -> (float) 2022-05-18T03:33:21.2148884Z processing existing schema: aten::expm1.float(float a) -> (float) 2022-05-18T03:33:21.2150237Z processing existing schema: aten::expm1.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.2151511Z processing existing schema: prim::is_meta(Tensor a) -> (bool) 2022-05-18T03:33:21.2153292Z processing existing schema: aten::fft_hfft(Tensor self, int? n=None, int dim=-1, str? norm=None) -> (Tensor) 2022-05-18T03:33:21.2155645Z processing existing schema: aten::fft_hfft.out(Tensor self, int? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2157005Z processing existing schema: prim::MKLDNNHardSigmoid(Tensor a) -> (Tensor) 2022-05-18T03:33:21.2158435Z processing existing schema: aten::pdist(Tensor self, float p=2.) -> (Tensor) 2022-05-18T03:33:21.2160438Z processing existing schema: aten::fft_fft(Tensor self, int? n=None, int dim=-1, str? norm=None) -> (Tensor) 2022-05-18T03:33:21.2162633Z processing existing schema: aten::fft_fft.out(Tensor self, int? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2165468Z processing existing schema: aten::sparse_csc_tensor.ccol_row_value_size(Tensor ccol_indices, Tensor row_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:21.2167663Z processing existing schema: aten::sparse_csc_tensor.ccol_row_value(Tensor ccol_indices, Tensor row_indices, Tensor values, *, int? dtype=None, int? layout=None, Device? device=None, bool? 
pin_memory=False) -> (Tensor) 2022-05-18T03:33:21.2169050Z processing existing schema: aten::std(Tensor self, bool unbiased=True) -> (Tensor) 2022-05-18T03:33:21.2171418Z processing existing schema: aten::std.dim(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.2172949Z processing existing schema: aten::std.names_dim(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.2175100Z processing existing schema: aten::std.names_out(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2177014Z processing existing schema: aten::std.out(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2178802Z processing existing schema: aten::std.correction(Tensor self, int[1]? dim, *, int? correction, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.2180863Z processing existing schema: aten::std.correction_out(Tensor self, int[1]? dim, *, int? correction, bool keepdim=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2182560Z processing existing schema: aten::std.correction_names(Tensor self, str[1] dim, *, int? correction, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.2184899Z processing existing schema: aten::std.correction_names_out(Tensor self, str[1] dim, *, int? correction, bool keepdim=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2186797Z processing existing schema: prim::infer_unsqueeze_size(int[] a, int dim) -> (int[]) 2022-05-18T03:33:21.2188558Z processing existing schema: aten::all(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2189741Z processing existing schema: aten::all.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.2191749Z processing existing schema: aten::all.out(Tensor self, int dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2193335Z processing existing schema: aten::all.all_out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2194787Z processing existing schema: aten::all.dimname(Tensor self, str dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.2196872Z processing existing schema: aten::all.dimname_out(Tensor self, str dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2198305Z processing existing schema: aten::all.int(int[] self) -> (bool) 2022-05-18T03:33:21.2200037Z processing existing schema: aten::all.float(float[] self) -> (bool) 2022-05-18T03:33:21.2201630Z processing existing schema: aten::all.bool(bool[] self) -> (bool) 2022-05-18T03:33:21.2203465Z processing existing schema: aten::svd(Tensor self, bool some=True, bool compute_uv=True) -> (Tensor U, Tensor S, Tensor V) 2022-05-18T03:33:21.2206435Z processing existing schema: aten::svd.U(Tensor self, bool some=True, bool compute_uv=True, *, Tensor(a!) U, Tensor(b!) S, Tensor(c!) V) -> (Tensor(a!) U, Tensor(b!) S, Tensor(c!) V) 2022-05-18T03:33:21.2207606Z processing existing schema: aten::ceil(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2209155Z processing existing schema: aten::ceil.out(Tensor self, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.2210433Z processing existing schema: aten::ceil.int(int a) -> (int) 2022-05-18T03:33:21.2211595Z processing existing schema: aten::ceil.float(float a) -> (int) 2022-05-18T03:33:21.2213149Z processing existing schema: aten::ceil.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.2214661Z processing existing schema: quantized::linear_dynamic_fp16(Tensor X, __torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack) -> (Tensor Y) 2022-05-18T03:33:21.2216194Z processing existing schema: aten::numpy_T(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.2217424Z processing existing schema: aten::numpy_T.a(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.2218640Z processing existing schema: aten::relu(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2220051Z processing existing schema: aten::stride.int(Tensor self, int dim) -> (int) 2022-05-18T03:33:21.2221619Z processing existing schema: aten::stride.Dimname(Tensor self, str dim) -> (int) 2022-05-18T03:33:21.2224605Z processing existing schema: prim::mkldnn_convolution(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, int groups) -> (Tensor) 2022-05-18T03:33:21.2226394Z processing existing schema: aten::affine_grid_generator(Tensor theta, int[] size, bool align_corners) -> (Tensor) 2022-05-18T03:33:21.2227657Z processing existing schema: prim::CudaFusionViewGuard(...) -> (bool) 2022-05-18T03:33:21.2229531Z processing existing schema: aten::align_tensors(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.2231762Z processing existing schema: aten::sum.dim_IntList(Tensor self, int[1] dim, bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.2232639Z processing existing schema: aten::sum(Tensor self, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.2234529Z processing existing schema: aten::sum.dim_DimnameList(Tensor self, str[1] dim, bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.2236812Z processing existing schema: aten::sum.DimnameList_out(Tensor self, str[1] dim, bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2238929Z processing existing schema: aten::sum.IntList_out(Tensor self, int[1] dim, bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2240457Z processing existing schema: aten::sum.int(int[] self) -> (int) 2022-05-18T03:33:21.2242155Z processing existing schema: aten::sum.float(float[] self) -> (float) 2022-05-18T03:33:21.2243812Z processing existing schema: aten::sum.complex(complex[] self) -> (complex) 2022-05-18T03:33:21.2245423Z processing existing schema: aten::sum.bool(bool[] self) -> (int) 2022-05-18T03:33:21.2249283Z processing existing schema: aten::_convolution.deprecated(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool benchmark, bool deterministic, bool cudnn_enabled) -> (Tensor) 2022-05-18T03:33:21.2252753Z processing existing schema: aten::_convolution(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool benchmark, bool deterministic, bool cudnn_enabled, bool allow_tf32) -> (Tensor) 2022-05-18T03:33:21.2254724Z processing existing schema: aten::log_normal_(Tensor(a!) self, float mean=1., float std=2., *, Generator? 
generator=None) -> (Tensor(a!)) 2022-05-18T03:33:21.2256170Z processing existing schema: aten::mul.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.2257631Z processing existing schema: aten::mul.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.2259435Z processing existing schema: aten::mul.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2261485Z processing existing schema: aten::mul.left_t(t[] l, int n) -> (t[]) 2022-05-18T03:33:21.2263506Z processing existing schema: aten::mul.right_(int n, t[] l) -> (t[]) 2022-05-18T03:33:21.2265077Z processing existing schema: aten::mul.int(int a, int b) -> (int) 2022-05-18T03:33:21.2266634Z processing existing schema: aten::mul.complex(complex a, complex b) -> (complex) 2022-05-18T03:33:21.2268102Z processing existing schema: aten::mul.float(float a, float b) -> (float) 2022-05-18T03:33:21.2269573Z processing existing schema: aten::mul.int_complex(int a, complex b) -> (complex) 2022-05-18T03:33:21.2271063Z processing existing schema: aten::mul.complex_int(complex a, int b) -> (complex) 2022-05-18T03:33:21.2272550Z processing existing schema: aten::mul.float_complex(float a, complex b) -> (complex) 2022-05-18T03:33:21.2274066Z processing existing schema: aten::mul.complex_float(complex a, float b) -> (complex) 2022-05-18T03:33:21.2275478Z processing existing schema: aten::mul.int_float(int a, float b) -> (float) 2022-05-18T03:33:21.2276970Z processing existing schema: aten::mul.float_int(float a, int b) -> (float) 2022-05-18T03:33:21.2278445Z processing existing schema: aten::mul(Scalar a, Scalar b) -> (Scalar) 2022-05-18T03:33:21.2280586Z processing existing schema: aten::detach_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.2281816Z processing existing schema: aten::get_device(Tensor self) -> (int) 2022-05-18T03:33:21.2283380Z processing existing schema: aten::view_as(Tensor(a) self, Tensor other) -> (Tensor(a)) 2022-05-18T03:33:21.2285416Z processing existing schema: quantized::linear_prepack_fp16(Tensor W, Tensor? B=None) -> (__torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack) 2022-05-18T03:33:21.2286735Z processing existing schema: aten::chalf(Tensor self, *, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:21.2288186Z processing existing schema: aten::diagflat(Tensor self, int offset=0) -> (Tensor) 2022-05-18T03:33:21.2289680Z processing existing schema: prim::type(Device self) -> (str) 2022-05-18T03:33:21.2292209Z processing existing schema: aten::native_group_norm_backward(Tensor grad_out, Tensor input, Tensor mean, Tensor rstd, Tensor? weight, int N, int C, int HxW, int group, bool[3] output_mask) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.2293597Z processing existing schema: aten::_has_same_storage_numel(Tensor self, Tensor other) -> (bool) 2022-05-18T03:33:21.2295329Z processing existing schema: quantized::add_relu_out(Tensor qa, Tensor qb, Tensor(a!) out) -> (Tensor(a!) out) 2022-05-18T03:33:21.2296990Z processing existing schema: aten::arctan_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.2298543Z processing existing schema: aten::threshold_backward(Tensor grad_output, Tensor self, Scalar threshold) -> (Tensor) 2022-05-18T03:33:21.2300603Z processing existing schema: aten::threshold_backward.grad_input(Tensor grad_output, Tensor self, Scalar threshold, *, Tensor(a!) 
grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.2302225Z processing existing schema: aten::sub.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.2303812Z processing existing schema: aten::sub.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.2305941Z processing existing schema: aten::sub.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2307425Z processing existing schema: aten::sub.int(int a, int b) -> (int) 2022-05-18T03:33:21.2309143Z processing existing schema: aten::sub.complex(complex a, complex b) -> (complex) 2022-05-18T03:33:21.2310664Z processing existing schema: aten::sub.float(float a, float b) -> (float) 2022-05-18T03:33:21.2312300Z processing existing schema: aten::sub.int_complex(int a, complex b) -> (complex) 2022-05-18T03:33:21.2314182Z processing existing schema: aten::sub.complex_int(complex a, int b) -> (complex) 2022-05-18T03:33:21.2315198Z processing existing schema: aten::sub.float_complex(float a, complex b) -> (complex) 2022-05-18T03:33:21.2316517Z processing existing schema: aten::sub.complex_float(complex a, float b) -> (complex) 2022-05-18T03:33:21.2317764Z processing existing schema: aten::sub.int_float(int a, float b) -> (float) 2022-05-18T03:33:21.2319481Z processing existing schema: aten::sub.float_int(float a, int b) -> (float) 2022-05-18T03:33:21.2320630Z processing existing schema: aten::sub(Scalar a, Scalar b) -> (Scalar) 2022-05-18T03:33:21.2322053Z processing existing schema: prim::MKLDNNScalarMul(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.2323829Z processing existing schema: aten::affine_grid_generator_backward(Tensor grad, int[] size, bool align_corners) -> (Tensor) 2022-05-18T03:33:21.2325540Z processing existing schema: aten::sigmoid_backward(Tensor grad_output, Tensor output) -> (Tensor) 2022-05-18T03:33:21.2327051Z processing existing schema: aten::sigmoid_backward.grad_input(Tensor grad_output, Tensor output, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.2329093Z processing existing schema: aten::instance_norm(Tensor input, Tensor? weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool use_input_stats, float momentum, float eps, bool cudnn_enabled) -> (Tensor) 2022-05-18T03:33:21.2330284Z processing existing schema: aten::is_complex(Tensor self) -> (bool) 2022-05-18T03:33:21.2331579Z processing existing schema: aten::sinc(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2333089Z processing existing schema: aten::sinc.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2334960Z processing existing schema: aten::linalg_solve_triangular(Tensor self, Tensor B, *, bool upper, bool left=True, bool unitriangular=False) -> (Tensor) 2022-05-18T03:33:21.2337174Z processing existing schema: aten::linalg_solve_triangular.out(Tensor self, Tensor B, *, bool upper, bool left=True, bool unitriangular=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2338654Z processing existing schema: aten::_cdist_forward(Tensor x1, Tensor x2, float p, int? compute_mode) -> (Tensor) 2022-05-18T03:33:21.2340063Z processing existing schema: prim::DifferentiableGraph(...) -> (...) 2022-05-18T03:33:21.2341580Z processing existing schema: aten::fill_.Scalar(Tensor(a!) self, Scalar value) -> (Tensor(a!)) 2022-05-18T03:33:21.2343218Z processing existing schema: aten::fill_.Tensor(Tensor(a!) 
self, Tensor value) -> (Tensor(a!)) 2022-05-18T03:33:21.2344875Z processing existing schema: quantized::conv2d_transpose(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (int) 2022-05-18T03:33:21.2346456Z processing existing schema: aten::conv_tbc(Tensor self, Tensor weight, Tensor bias, int pad=0) -> (Tensor) 2022-05-18T03:33:21.2347783Z processing existing schema: aten::eq.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.2349134Z processing existing schema: aten::eq.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.2350974Z processing existing schema: aten::eq.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2352753Z processing existing schema: aten::eq.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2354719Z processing existing schema: aten::eq.int_list(int[] a, int[] b) -> (bool) 2022-05-18T03:33:21.2356096Z processing existing schema: aten::eq.device(Device a, Device b) -> (bool) 2022-05-18T03:33:21.2357495Z processing existing schema: aten::eq.bool(bool a, bool b) -> (bool) 2022-05-18T03:33:21.2358910Z processing existing schema: aten::eq.enum(AnyEnumType a, AnyEnumType b) -> (bool) 2022-05-18T03:33:21.2360518Z processing existing schema: aten::eq.int(int a, int b) -> (bool) 2022-05-18T03:33:21.2361848Z processing existing schema: aten::eq.complex(complex a, complex b) -> (bool) 2022-05-18T03:33:21.2363222Z processing existing schema: aten::eq.float(float a, float b) -> (bool) 2022-05-18T03:33:21.2364926Z processing existing schema: aten::eq.int_float(int a, float b) -> (bool) 2022-05-18T03:33:21.2366229Z processing existing schema: aten::eq.float_int(float a, int b) -> (bool) 2022-05-18T03:33:21.2367616Z processing existing schema: aten::eq.float_complex(float a, complex b) -> (bool) 2022-05-18T03:33:21.2369029Z processing existing schema: aten::eq.complex_float(complex a, float b) -> (bool) 2022-05-18T03:33:21.2370384Z processing existing schema: aten::eq(Scalar a, Scalar b) -> (bool) 2022-05-18T03:33:21.2372022Z processing existing schema: aten::eq.str(str a, str b) -> (bool) 2022-05-18T03:33:21.2373934Z processing existing schema: aten::eq.float_list(float[] a, float[] b) -> (bool) 2022-05-18T03:33:21.2375917Z processing existing schema: aten::eq.Tensor_list(Tensor[] a, Tensor[] b) -> (bool) 2022-05-18T03:33:21.2377964Z processing existing schema: aten::eq.bool_list(bool[] a, bool[] b) -> (bool) 2022-05-18T03:33:21.2379732Z processing existing schema: aten::eq.str_list(str[] a, str[] b) -> (bool) 2022-05-18T03:33:21.2381382Z processing existing schema: aten::rjust(str self, int width, str fillchar=" ") -> (str) 2022-05-18T03:33:21.2382809Z processing existing schema: aten::digamma_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.2385268Z processing existing schema: aten::isalpha(str self) -> (bool) 2022-05-18T03:33:21.2385753Z processing existing schema: aten::zero_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.2387932Z processing existing schema: sparse::qlinear_prepack(Tensor W, Tensor? B, int out_features_block_size, int in_features_block_size) -> (__torch__.torch.classes.sparse.LinearPackedParamsBase W_prepack) 2022-05-18T03:33:21.2388380Z processing existing schema: aten::arcsinh(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2390223Z processing existing schema: aten::arcsinh.out(Tensor self, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.2392040Z processing existing schema: aten::tensor_split.sections(Tensor(a -> *) self, int sections, int dim=0) -> (Tensor[]) 2022-05-18T03:33:21.2394341Z processing existing schema: aten::tensor_split.indices(Tensor(a -> *) self, int[] indices, int dim=0) -> (Tensor[]) 2022-05-18T03:33:21.2396341Z processing existing schema: aten::tensor_split.tensor_indices_or_sections(Tensor(a -> *) self, Tensor tensor_indices_or_sections, int dim=0) -> (Tensor[]) 2022-05-18T03:33:21.2398556Z processing existing schema: aten::movedim.intlist(Tensor(a) self, int[] source, int[] destination) -> (Tensor(a)) 2022-05-18T03:33:21.2400338Z processing existing schema: aten::movedim.int(Tensor(a) self, int source, int destination) -> (Tensor(a)) 2022-05-18T03:33:21.2401526Z processing existing schema: aten::frac(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2403494Z processing existing schema: aten::frac.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2405953Z processing existing schema: aten::randint(int high, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.2408392Z processing existing schema: aten::randint.generator(int high, int[] size, *, Generator? generator, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.2410751Z processing existing schema: aten::randint.low(int low, int high, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.2413356Z processing existing schema: aten::randint.low_generator(int low, int high, int[] size, *, Generator? generator, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.2415265Z processing existing schema: aten::randint.out(int high, int[] size, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2417489Z processing existing schema: aten::randint.generator_out(int high, int[] size, *, Generator? generator, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2419624Z processing existing schema: aten::randint.low_out(int low, int high, int[] size, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2421945Z processing existing schema: aten::randint.low_generator_out(int low, int high, int[] size, *, Generator? generator, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2423197Z processing existing schema: prim::MMTreeReduce(...) -> (Tensor) 2022-05-18T03:33:21.2425569Z schema: aten::stft(Tensor self, int n_fft, int? hop_length=None, int? win_length=None, Tensor? window=None, bool normalized=False, bool? onesided=None, bool? return_complex=None) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.2428359Z schema: aten::stft.center(Tensor self, int n_fft, int? hop_length=None, int? win_length=None, Tensor? window=None, bool center=True, str pad_mode="reflect", bool normalized=False, bool? onesided=None, bool? return_complex=None) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.2431187Z processing existing schema: prim::MKLDNNLayerNorm_(Tensor(a!) input, int[] normalized_shape, Tensor? weight=None, Tensor? 
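[editor's aside] The entries reading "found on allowlist, skipping" (the aten::stft overloads just above, for example) are operators whose upcoming breaking change was pre-approved, so the comparison is skipped for them. The sketch below shows only the general shape of such a filter; ALLOW_LIST and allow_listed are placeholder names chosen for illustration and are not guaranteed to match the identifiers used by the actual check script.

```python
# Hypothetical sketch of an allowlist filter, not the CI script itself:
# a schema whose operator name matches one of the patterns is skipped.
import re

ALLOW_LIST = ["aten::stft", "aten::randperm", "static_runtime::.*"]  # placeholder patterns

def allow_listed(schema_str: str) -> bool:
    # The operator name is everything before the first "(" in the schema string.
    name = schema_str.split("(", 1)[0]
    return any(re.match(pattern, name) for pattern in ALLOW_LIST)

print(allow_listed("aten::stft(Tensor self, int n_fft) -> Tensor"))  # True
print(allow_listed("aten::mv(Tensor self, Tensor vec) -> Tensor"))   # False
```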
bias=None, float eps=1.0000000000000001e-05, bool cudnn_enable=True) -> (Tensor(a!)) 2022-05-18T03:33:21.2432596Z processing existing schema: aten::adjoint(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.2436436Z processing existing schema: aten::quantized_lstm.input(Tensor input, Tensor[] hx, __torch__.torch.classes.rnn.CellParamsBase[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first, *, int? dtype=None, bool use_dynamic=False) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.2439535Z processing existing schema: aten::quantized_lstm.data(Tensor data, Tensor batch_sizes, Tensor[] hx, __torch__.torch.classes.rnn.CellParamsBase[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, *, int? dtype=None, bool use_dynamic=False) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.2442445Z processing existing schema: aten::quantized_lstm.input_legacy(Tensor input, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first, *, int? dtype=None, bool use_dynamic=False) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.2445519Z processing existing schema: aten::quantized_lstm.data_legacy(Tensor data, Tensor batch_sizes, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, *, int? dtype=None, bool use_dynamic=False) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.2446267Z processing existing schema: aten::alpha_dropout(Tensor input, float p, bool train) -> (Tensor) 2022-05-18T03:33:21.2447736Z processing existing schema: aten::celu(Tensor self, Scalar alpha=1.) -> (Tensor) 2022-05-18T03:33:21.2449667Z processing existing schema: _quantized::linear_dynamic(Tensor X, __torch__.torch.classes.quantized.LinearPackedParamsBase W_prepack, bool reduce_range=False) -> (Tensor Y) 2022-05-18T03:33:21.2452376Z processing existing schema: aten::_empty_affine_quantized(int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, float scale=1., int zero_point=0, int? memory_format=0) -> (Tensor) 2022-05-18T03:33:21.2453272Z processing existing schema: aten::mT(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.2454947Z processing existing schema: aten::mT.a(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.2456054Z processing existing schema: aten::trace(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2457587Z processing existing schema: aten::std_mean(Tensor self, bool unbiased=True) -> (Tensor, Tensor) 2022-05-18T03:33:21.2459376Z processing existing schema: aten::std_mean.dim(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor, Tensor) 2022-05-18T03:33:21.2461099Z processing existing schema: aten::std_mean.names_dim(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor, Tensor) 2022-05-18T03:33:21.2463098Z processing existing schema: aten::std_mean.correction(Tensor self, int[1]? dim, *, int? correction, bool keepdim=False) -> (Tensor, Tensor) 2022-05-18T03:33:21.2464702Z processing existing schema: aten::std_mean.correction_names(Tensor self, str[1] dim, *, int? correction, bool keepdim=False) -> (Tensor, Tensor) 2022-05-18T03:33:21.2467583Z processing existing schema: prim::MKLDNNLayerNorm(Tensor input, int[] normalized_shape, Tensor? weight=None, Tensor? bias=None, float eps=1.0000000000000001e-05, bool cudnn_enable=True) -> (Tensor) 2022-05-18T03:33:21.2469261Z processing existing schema: aten::addr_(Tensor(a!) 
self, Tensor vec1, Tensor vec2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:21.2471027Z processing existing schema: aten::_fft_c2r(Tensor self, int[] dim, int normalization, int last_dim_size) -> (Tensor) 2022-05-18T03:33:21.2473516Z processing existing schema: aten::_fft_c2r.out(Tensor self, int[] dim, int normalization, int last_dim_size, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2475030Z processing existing schema: aten::matrix_H(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.2476654Z processing existing schema: aten::matrix_H.a(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.2478797Z processing existing schema: quantized::conv2d_relu.new(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.2481934Z processing existing schema: quantized::conv2d_relu(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase weight, int[] stride, int[] padding, int[] dilation, int groups, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.2483891Z processing existing schema: aten::avg_pool1d(Tensor self, int[1] kernel_size, int[1] stride=[], int[1] padding=[0], bool ceil_mode=False, bool count_include_pad=True) -> (Tensor) 2022-05-18T03:33:21.2486390Z processing existing schema: aten::max_pool3d(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=[0, 0, 0], int[3] dilation=[1, 1, 1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:21.2488039Z processing existing schema: aten::normal.Tensor_float(Tensor mean, float std=1., *, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:21.2490144Z processing existing schema: aten::normal.Tensor_float_out(Tensor mean, float std=1., *, Generator? generator=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2491977Z processing existing schema: aten::normal.float_Tensor_out(float mean, Tensor std, *, Generator? generator=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2493440Z processing existing schema: aten::normal.float_Tensor(float mean, Tensor std, *, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:21.2495088Z processing existing schema: aten::normal.Tensor_Tensor(Tensor mean, Tensor std, *, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:21.2496974Z processing existing schema: aten::normal.Tensor_Tensor_out(Tensor mean, Tensor std, *, Generator? generator=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2499724Z processing existing schema: aten::normal.float_float(float mean, float std, int[] size, *, Generator? generator=None, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.2501949Z processing existing schema: aten::normal.float_float_out(float mean, float std, int[] size, *, Generator? generator=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2503189Z processing existing schema: prim::unsqueeze_copy(Tensor self, int dim) -> (Tensor) 2022-05-18T03:33:21.2504805Z processing existing schema: aten::prod(Tensor self, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.2506615Z processing existing schema: aten::prod.dim_int(Tensor self, int dim, bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.2508364Z processing existing schema: aten::prod.dim_Dimname(Tensor self, str dim, bool keepdim=False, *, int? 
dtype=None) -> (Tensor) 2022-05-18T03:33:21.2510623Z processing existing schema: aten::prod.Dimname_out(Tensor self, str dim, bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2513194Z processing existing schema: aten::prod.int_out(Tensor self, int dim, bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2514716Z processing existing schema: aten::fft_irfftn(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None) -> (Tensor) 2022-05-18T03:33:21.2517143Z processing existing schema: aten::fft_irfftn.out(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2518826Z processing existing schema: aten::polygamma_(Tensor(a!) self, int n) -> (Tensor(a!)) 2022-05-18T03:33:21.2521100Z processing existing schema: aten::_reshape_alias(Tensor(a) self, int[] size, int[] stride) -> (Tensor(a)) 2022-05-18T03:33:21.2522859Z processing existing schema: quantized::cat_relu(Tensor[] qx, int dim, float? scale, int? zero_point) -> (Tensor) 2022-05-18T03:33:21.2524390Z processing existing schema: aten::atan_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.2526054Z processing existing schema: aten::transpose_(Tensor(a!) self, int dim0, int dim1) -> (Tensor(a!)) 2022-05-18T03:33:21.2527936Z processing existing schema: quantized::conv2d.new(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.2530811Z processing existing schema: quantized::conv2d(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase weight, int[] stride, int[] padding, int[] dilation, int groups, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.2532000Z processing existing schema: aten::atleast_3d(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2533704Z processing existing schema: aten::atleast_3d.Sequence(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.2535437Z processing existing schema: aten::_fft_c2c(Tensor self, int[] dim, int normalization, bool forward) -> (Tensor) 2022-05-18T03:33:21.2537738Z processing existing schema: aten::_fft_c2c.out(Tensor self, int[] dim, int normalization, bool forward, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2538872Z processing existing schema: aten::matmul(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.2540742Z processing existing schema: aten::matmul.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2542006Z processing existing schema: aten::abs(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2543622Z processing existing schema: aten::abs.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2545330Z processing existing schema: aten::add.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.2546904Z processing existing schema: aten::add.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.2548816Z processing existing schema: aten::add.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.2550778Z processing existing schema: aten::add.t(t[] a, t[] b) -> (t[]) 2022-05-18T03:33:21.2552182Z processing existing schema: aten::add.str(str a, str b) -> (str) 2022-05-18T03:33:21.2553587Z processing existing schema: aten::add.int(int a, int b) -> (int) 2022-05-18T03:33:21.2555033Z processing existing schema: aten::add.complex(complex a, complex b) -> (complex) 2022-05-18T03:33:21.2556512Z processing existing schema: aten::add.float(float a, float b) -> (float) 2022-05-18T03:33:21.2558070Z processing existing schema: aten::add.int_complex(int a, complex b) -> (complex) 2022-05-18T03:33:21.2559824Z processing existing schema: aten::add.complex_int(complex a, int b) -> (complex) 2022-05-18T03:33:21.2561436Z processing existing schema: aten::add.float_complex(float a, complex b) -> (complex) 2022-05-18T03:33:21.2562871Z processing existing schema: aten::add.complex_float(complex a, float b) -> (complex) 2022-05-18T03:33:21.2564392Z processing existing schema: aten::add.int_float(int a, float b) -> (float) 2022-05-18T03:33:21.2565917Z processing existing schema: aten::add.float_int(float a, int b) -> (float) 2022-05-18T03:33:21.2567385Z processing existing schema: aten::add(Scalar a, Scalar b) -> (Scalar) 2022-05-18T03:33:21.2567618Z schema: static_runtime::VarTupleUnpack(...) -> (...) found on allowlist, skipping 2022-05-18T03:33:21.2569497Z processing existing schema: aten::select.int(Tensor(a) self, int dim, int index) -> (Tensor(a)) 2022-05-18T03:33:21.2571313Z processing existing schema: aten::select.Dimname(Tensor(a) self, str dim, int index) -> (Tensor(a)) 2022-05-18T03:33:21.2573268Z processing existing schema: aten::select.t(t[](a) list, int idx) -> (t(*)) 2022-05-18T03:33:21.2575746Z processing existing schema: aten::split_with_sizes(Tensor(a -> *) self, int[] split_sizes, int dim=0) -> (Tensor[]) 2022-05-18T03:33:21.2577237Z processing existing schema: aten::linalg_solve(Tensor input, Tensor other) -> (Tensor) 2022-05-18T03:33:21.2578942Z processing existing schema: aten::linalg_solve.out(Tensor input, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2580673Z processing existing schema: quantized::conv2d_dynamic(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, bool reduce_range=False) -> (Tensor) 2022-05-18T03:33:21.2582756Z processing existing schema: aten::batch_norm_elemt.out(Tensor input, Tensor? weight, Tensor? bias, Tensor mean, Tensor invstd, float eps, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2584579Z processing existing schema: aten::batch_norm_elemt(Tensor input, Tensor? weight, Tensor? bias, Tensor mean, Tensor invstd, float eps) -> (Tensor) 2022-05-18T03:33:21.2586468Z processing existing schema: aten::unbind.int(Tensor(a -> *) self, int dim=0) -> (Tensor[]) 2022-05-18T03:33:21.2588317Z processing existing schema: aten::unbind.Dimname(Tensor(a -> *) self, str dim) -> (Tensor[]) 2022-05-18T03:33:21.2589863Z processing existing schema: aten::round(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2591430Z processing existing schema: aten::round.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2593249Z processing existing schema: aten::round.decimals(Tensor self, *, int decimals) -> (Tensor) 2022-05-18T03:33:21.2594864Z processing existing schema: aten::round.decimals_out(Tensor self, *, int decimals, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.2596691Z processing existing schema: aten::round.int(int a) -> (float) 2022-05-18T03:33:21.2597528Z processing existing schema: aten::round.float(float a) -> (float) 2022-05-18T03:33:21.2599216Z processing existing schema: aten::round.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.2600958Z processing existing schema: aten::histc(Tensor self, int bins=100, Scalar min=0, Scalar max=0) -> (Tensor) 2022-05-18T03:33:21.2602856Z processing existing schema: aten::histc.out(Tensor self, int bins=100, Scalar min=0, Scalar max=0, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2604233Z processing existing schema: aten::adaptive_avg_pool3d(Tensor self, int[3] output_size) -> (Tensor) 2022-05-18T03:33:21.2606302Z processing existing schema: aten::adaptive_avg_pool3d.out(Tensor self, int[3] output_size, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2609642Z processing existing schema: _quantized::conv_transpose2d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] output_padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv2dPackedParamsBase) 2022-05-18T03:33:21.2610945Z processing existing schema: aten::bitwise_and_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.2612681Z processing existing schema: aten::bitwise_and_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.2614624Z processing existing schema: aten::unsqueeze(Tensor(a) self, int dim) -> (Tensor(a)) 2022-05-18T03:33:21.2614907Z schema: profiler::_call_end_callbacks_on_jit_fut(Tensor x, Future(t) y) -> (Future(t)) found on allowlist, skipping 2022-05-18T03:33:21.2615288Z schema: profiler::_call_end_callbacks_on_jit_fut._RecordFunction(__torch__.torch.classes.profiler._RecordFunction x, Future(t) y) -> (Future(t)) found on allowlist, skipping 2022-05-18T03:33:21.2617532Z processing existing schema: aten::addcmul_(Tensor(a!) self, Tensor tensor1, Tensor tensor2, *, Scalar value=1) -> (Tensor(a!)) 2022-05-18T03:33:21.2619052Z processing existing schema: aten::squeeze(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.2620726Z processing existing schema: aten::squeeze.dim(Tensor(a) self, int dim) -> (Tensor(a)) 2022-05-18T03:33:21.2622466Z processing existing schema: aten::squeeze.dimname(Tensor(a) self, str dim) -> (Tensor(a)) 2022-05-18T03:33:21.2624069Z processing existing schema: sparse::qlinear_dynamic(Tensor X, __torch__.torch.classes.sparse.LinearPackedParamsBase W_prepack) -> (Tensor Y) 2022-05-18T03:33:21.2625293Z processing existing schema: aten::arcsin(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2627073Z processing existing schema: aten::arcsin.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2628662Z processing existing schema: aten::tanh_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.2630038Z processing existing schema: aten::clamp_max(Tensor self, Scalar max) -> (Tensor) 2022-05-18T03:33:21.2631488Z processing existing schema: aten::clamp_max.Tensor(Tensor self, Tensor max) -> (Tensor) 2022-05-18T03:33:21.2633406Z processing existing schema: aten::clamp_max.out(Tensor self, Scalar max, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2635215Z processing existing schema: aten::clamp_max.Tensor_out(Tensor self, Tensor max, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2636978Z processing existing schema: quantized::mul_relu_out(Tensor qa, Tensor qb, Tensor(a!) out) -> (Tensor(a!) 
out) 2022-05-18T03:33:21.2638166Z processing existing schema: aten::acos(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2640119Z processing existing schema: aten::acos.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2641329Z processing existing schema: aten::acos.int(int a) -> (float) 2022-05-18T03:33:21.2642660Z processing existing schema: aten::acos.float(float a) -> (float) 2022-05-18T03:33:21.2643994Z processing existing schema: aten::acos.complex(complex a) -> (complex) 2022-05-18T03:33:21.2645321Z processing existing schema: aten::acos.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.2646630Z processing existing schema: aten::floor(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2648395Z processing existing schema: aten::floor.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2649667Z processing existing schema: aten::floor.int(int a) -> (int) 2022-05-18T03:33:21.2651062Z processing existing schema: aten::floor.float(float a) -> (int) 2022-05-18T03:33:21.2652385Z processing existing schema: aten::floor.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.2654697Z processing existing schema: aten::normal_(Tensor(a!) self, float mean=0., float std=1., *, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:21.2656212Z processing existing schema: aten::_pdist_backward(Tensor grad, Tensor self, float p, Tensor pdist) -> (Tensor) 2022-05-18T03:33:21.2657689Z processing existing schema: aten::poisson(Tensor self, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:21.2659814Z processing existing schema: aten::random_.from(Tensor(a!) self, int from, int? to, *, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:21.2661722Z processing existing schema: aten::random_.to(Tensor(a!) self, int to, *, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:21.2663608Z processing existing schema: aten::random_(Tensor(a!) self, *, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:21.2665931Z processing existing schema: aten::rand_like(Tensor self, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:21.2668123Z processing existing schema: aten::randn_like(Tensor self, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:21.2670873Z processing existing schema: aten::_sparse_csr_tensor_unsafe(Tensor crow_indices, Tensor col_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.2673009Z processing existing schema: aten::randint_like(Tensor self, int high, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:21.2675444Z processing existing schema: aten::randint_like.low_dtype(Tensor self, int low, int high, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:21.2678093Z processing existing schema: aten::_sparse_csc_tensor_unsafe(Tensor ccol_indices, Tensor row_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.2680206Z processing existing schema: aten::rand(int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? 
pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.2682764Z processing existing schema: aten::rand.generator(int[] size, *, Generator? generator, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.2685480Z processing existing schema: aten::rand.names(int[] size, *, str[]? names, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.2688493Z processing existing schema: aten::rand.generator_with_names(int[] size, *, Generator? generator, str[]? names, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.2690279Z processing existing schema: aten::rand.out(int[] size, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2692468Z processing existing schema: aten::rand.generator_out(int[] size, *, Generator? generator, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2693918Z processing existing schema: aten::fmod.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.2695412Z processing existing schema: aten::fmod.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.2697350Z processing existing schema: aten::fmod.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2699277Z processing existing schema: aten::fmod.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2700749Z processing existing schema: aten::fmod.int(int a, int b) -> (float) 2022-05-18T03:33:21.2702425Z processing existing schema: aten::fmod.float(float a, float b) -> (float) 2022-05-18T03:33:21.2703938Z processing existing schema: aten::fmod.int_float(int a, float b) -> (float) 2022-05-18T03:33:21.2705503Z processing existing schema: aten::fmod.float_int(float a, int b) -> (float) 2022-05-18T03:33:21.2706937Z processing existing schema: aten::fmod(Scalar a, Scalar b) -> (float) 2022-05-18T03:33:21.2709006Z processing existing schema: aten::fractional_max_pool2d(Tensor self, int[2] kernel_size, int[2] output_size, Tensor random_samples) -> (Tensor, Tensor) 2022-05-18T03:33:21.2711976Z processing existing schema: aten::fractional_max_pool2d.output(Tensor self, int[2] kernel_size, int[2] output_size, Tensor random_samples, *, Tensor(a!) output, Tensor(b!) indices) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:21.2713503Z processing existing schema: aten::_sparse_softmax.Dimname(Tensor self, str dim, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.2715059Z processing existing schema: aten::_sparse_softmax.int(Tensor self, int dim, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.2716767Z processing existing schema: aten::_sparse_softmax(Tensor self, int dim, bool half_to_float) -> (Tensor) 2022-05-18T03:33:21.2717874Z schema: aten::randperm(int n, *, int? dtype=4, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.2719549Z schema: aten::randperm.generator(int n, *, Generator? generator, int? dtype=4, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.2719888Z schema: aten::randperm.out(int n, *, Tensor(a!) out) -> (Tensor(a!)) found on allowlist, skipping 2022-05-18T03:33:21.2720345Z schema: aten::randperm.generator_out(int n, *, Generator? generator, Tensor(a!) 
out) -> (Tensor(a!)) found on allowlist, skipping 2022-05-18T03:33:21.2721802Z schema: aten::div.Tensor(Tensor self, Tensor other) -> (Tensor) has valid upgrader, skipping 2022-05-18T03:33:21.2723339Z schema: aten::div.Scalar(Tensor self, Scalar other) -> (Tensor) has valid upgrader, skipping 2022-05-18T03:33:21.2725084Z schema: aten::div.Tensor_mode(Tensor self, Tensor other, *, str? rounding_mode) -> (Tensor) has valid upgrader, skipping 2022-05-18T03:33:21.2726737Z schema: aten::div.Scalar_mode(Tensor self, Scalar other, *, str? rounding_mode) -> (Tensor) has valid upgrader, skipping 2022-05-18T03:33:21.2728628Z schema: aten::div.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) has valid upgrader, skipping 2022-05-18T03:33:21.2730667Z schema: aten::div.out_mode(Tensor self, Tensor other, *, str? rounding_mode, Tensor(a!) out) -> (Tensor(a!)) has valid upgrader, skipping 2022-05-18T03:33:21.2731923Z processing existing schema: aten::div.int(int a, int b) -> (float) 2022-05-18T03:33:21.2733600Z processing existing schema: aten::div.complex(complex a, complex b) -> (complex) 2022-05-18T03:33:21.2735352Z processing existing schema: aten::div.float(float a, float b) -> (float) 2022-05-18T03:33:21.2736767Z processing existing schema: aten::div(Scalar a, Scalar b) -> (float) 2022-05-18T03:33:21.2738077Z processing existing schema: aten::isnumeric(str self) -> (bool) 2022-05-18T03:33:21.2740391Z processing existing schema: aten::zeros_like(Tensor self, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:21.2742106Z processing existing schema: aten::narrow(Tensor(a) self, int dim, int start, int length) -> (Tensor(a)) 2022-05-18T03:33:21.2744229Z processing existing schema: aten::narrow.Tensor(Tensor(a) self, int dim, Tensor start, int length) -> (Tensor(a)) 2022-05-18T03:33:21.2745936Z processing existing schema: aten::_fused_dropout(Tensor self, float p, Generator? generator=None) -> (Tensor, Tensor) 2022-05-18T03:33:21.2747680Z processing existing schema: quantized::clamp(Tensor qx, Scalar? min=None, Scalar? max=None) -> (Tensor qy) 2022-05-18T03:33:21.2749407Z processing existing schema: aten::atan2(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.2751108Z processing existing schema: aten::atan2.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2752557Z processing existing schema: aten::atan2.int(int a, int b) -> (float) 2022-05-18T03:33:21.2754325Z processing existing schema: aten::atan2.float(float a, float b) -> (float) 2022-05-18T03:33:21.2755270Z processing existing schema: aten::atan2.int_float(int a, float b) -> (float) 2022-05-18T03:33:21.2756753Z processing existing schema: aten::atan2.float_int(float a, int b) -> (float) 2022-05-18T03:33:21.2758402Z processing existing schema: aten::atan2.Scalar_Scalar(Scalar a, Scalar b) -> (float) 2022-05-18T03:33:21.2759924Z processing existing schema: aten::trace_backward(Tensor grad, int[] sizes) -> (Tensor) 2022-05-18T03:33:21.2762756Z processing existing schema: aten::_empty_per_channel_affine_quantized(int[] size, *, Tensor scales, Tensor zero_points, int axis, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? 
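[editor's aside] The aten::div entries above are skipped because a version upgrader already covers their signature change; for every other schema the question is simply whether the new build's schema is backward compatible with the one recorded from the old build. A minimal sketch of that comparison is below, using torch._C.parse_schema and FunctionSchema.is_backward_compatible_with; the operator name example::gelu_like is made up purely for illustration, since parse_schema only parses the string and does not require the operator to exist.

```python
# Minimal sketch of the compatibility question applied to one schema pair.
# "example::gelu_like" is a fabricated operator name used only for illustration.
import torch

old = torch._C.parse_schema("example::gelu_like(Tensor self) -> Tensor")
new = torch._C.parse_schema(
    "example::gelu_like(Tensor self, *, bool approximate=False) -> Tensor")

# Adding a keyword-only argument with a default keeps old call sites working,
# so this comparison is expected to report backward compatibility; removing an
# argument or changing its type would generally fail the check instead.
print(new.is_backward_compatible_with(old))
```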
memory_format=0) -> (Tensor) 2022-05-18T03:33:21.2764248Z processing existing schema: aten::margin_ranking_loss(Tensor input1, Tensor input2, Tensor target, float margin=0., int reduction=1) -> (Tensor) 2022-05-18T03:33:21.2765924Z processing existing schema: quantized::cat(Tensor[] qx, int dim, float? scale, int? zero_point) -> (Tensor) 2022-05-18T03:33:21.2767422Z processing existing schema: aten::atan2_(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.2769071Z processing existing schema: aten::transpose.int(Tensor(a) self, int dim0, int dim1) -> (Tensor(a)) 2022-05-18T03:33:21.2770619Z processing existing schema: aten::transpose.Dimname(Tensor(a) self, str dim0, str dim1) -> (Tensor(a)) 2022-05-18T03:33:21.2772519Z processing existing schema: quantized::cat_out(Tensor[] qx, int dim, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2773280Z processing existing schema: aten::atanh(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2775019Z processing existing schema: aten::atanh.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2775880Z processing existing schema: aten::atanh.int(int a) -> (float) 2022-05-18T03:33:21.2776822Z processing existing schema: aten::atanh.float(float a) -> (float) 2022-05-18T03:33:21.2778145Z processing existing schema: aten::atanh.complex(complex a) -> (complex) 2022-05-18T03:33:21.2779466Z processing existing schema: aten::atanh.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.2781881Z processing existing schema: quantized::dropout(Tensor self, float output_scale, int output_zero_point, Scalar p=0.5, bool training=False) -> (Tensor) 2022-05-18T03:33:21.2783599Z processing existing schema: aten::bitwise_left_shift_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.2785468Z processing existing schema: aten::bitwise_left_shift_.Tensor_Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.2787109Z processing existing schema: aten::sgn_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.2789059Z processing existing schema: aten::addbmm(Tensor self, Tensor batch1, Tensor batch2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.2791486Z processing existing schema: aten::addbmm.out(Tensor self, Tensor batch1, Tensor batch2, *, Scalar beta=1, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2791878Z schema: static_runtime::create_owned_ref(...) -> (...) found on allowlist, skipping 2022-05-18T03:33:21.2793011Z processing existing schema: aten::linalg_multi_dot(Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:21.2795231Z processing existing schema: aten::linalg_multi_dot.out(Tensor[] tensors, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2797187Z processing existing schema: aten::rename(Tensor(a) self, str[]? names) -> (Tensor(a)) 2022-05-18T03:33:21.2799744Z processing existing schema: aten::_thnn_fused_lstm_cell(Tensor input_gates, Tensor hidden_gates, Tensor cx, Tensor? input_bias=None, Tensor? hidden_bias=None) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.2801540Z processing existing schema: aten::_thnn_fused_gru_cell(Tensor input_gates, Tensor hidden_gates, Tensor hx, Tensor? input_bias=None, Tensor? hidden_bias=None) -> (Tensor, Tensor) 2022-05-18T03:33:21.2803688Z processing existing schema: aten::lstm_cell(Tensor input, Tensor[] hx, Tensor w_ih, Tensor w_hh, Tensor? b_ih=None, Tensor? 
b_hh=None) -> (Tensor, Tensor) 2022-05-18T03:33:21.2805499Z processing existing schema: aten::rnn_tanh_cell(Tensor input, Tensor hx, Tensor w_ih, Tensor w_hh, Tensor? b_ih=None, Tensor? b_hh=None) -> (Tensor) 2022-05-18T03:33:21.2807557Z processing existing schema: aten::rnn_relu_cell(Tensor input, Tensor hx, Tensor w_ih, Tensor w_hh, Tensor? b_ih=None, Tensor? b_hh=None) -> (Tensor) 2022-05-18T03:33:21.2809411Z processing existing schema: quantized::conv_transpose1d_unpack(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (Tensor unpacked_weights, Tensor? B_origin) 2022-05-18T03:33:21.2813171Z processing existing schema: aten::convolution_backward_overrideable(Tensor grad_output, Tensor input, Tensor weight, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool[3] output_mask) -> (Tensor grad_input, Tensor grad_weight, Tensor grad_bias) 2022-05-18T03:33:21.2813804Z processing existing schema: aten::erfinv(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2815739Z processing existing schema: aten::erfinv.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2817788Z processing existing schema: aten::rsplit(str self, str separator=" ", int max=-1) -> (str[]) 2022-05-18T03:33:21.2819471Z processing existing schema: aten::softplus(Tensor self, Scalar beta=1, Scalar threshold=20) -> (Tensor) 2022-05-18T03:33:21.2821568Z processing existing schema: aten::softplus.out(Tensor self, Scalar beta=1, Scalar threshold=20, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2824682Z processing existing schema: aten::layer_norm(Tensor input, int[] normalized_shape, Tensor? weight=None, Tensor? bias=None, float eps=1.0000000000000001e-05, bool cudnn_enable=True) -> (Tensor) 2022-05-18T03:33:21.2826725Z processing existing schema: aten::native_layer_norm(Tensor input, int[] normalized_shape, Tensor? weight, Tensor? bias, float eps) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.2829261Z processing existing schema: aten::group_norm(Tensor input, int num_groups, Tensor? weight=None, Tensor? bias=None, float eps=1.0000000000000001e-05, bool cudnn_enabled=True) -> (Tensor) 2022-05-18T03:33:21.2830163Z processing existing schema: aten::frobenius_norm(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2832065Z processing existing schema: aten::frobenius_norm.dim(Tensor self, int[1] dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.2834189Z processing existing schema: aten::frobenius_norm.out(Tensor self, int[1] dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2835539Z processing existing schema: aten::nuclear_norm(Tensor self, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.2837273Z processing existing schema: aten::nuclear_norm.dim(Tensor self, int[2] dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.2839294Z processing existing schema: aten::nuclear_norm.out(Tensor self, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2841493Z processing existing schema: aten::nuclear_norm.dim_out(Tensor self, int[2] dim, bool keepdim=False, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.2843093Z processing existing schema: aten::unfold(Tensor(a) self, int dimension, int size, int step) -> (Tensor(a)) 2022-05-18T03:33:21.2845081Z processing existing schema: aten::max_unpool3d(Tensor self, Tensor indices, int[3] output_size, int[3] stride, int[3] padding) -> (Tensor) 2022-05-18T03:33:21.2847662Z processing existing schema: aten::max_unpool3d.out(Tensor self, Tensor indices, int[3] output_size, int[3] stride, int[3] padding, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2849542Z processing existing schema: aten::nll_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=1, int ignore_index=-100) -> (Tensor) 2022-05-18T03:33:21.2851855Z processing existing schema: aten::nll_loss.out(Tensor self, Tensor target, Tensor? weight=None, int reduction=1, int ignore_index=-100, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2853561Z processing existing schema: aten::_lu_with_info(Tensor self, bool pivot=True, bool check_errors=True) -> (Tensor LU, Tensor pivots, Tensor info) 2022-05-18T03:33:21.2855474Z processing existing schema: aten::nll_loss2d(Tensor self, Tensor target, Tensor? weight=None, int reduction=1, int ignore_index=-100) -> (Tensor) 2022-05-18T03:33:21.2857893Z processing existing schema: aten::nll_loss2d.out(Tensor self, Tensor target, Tensor? weight=None, int reduction=1, int ignore_index=-100, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2858941Z processing existing schema: prim::MMBatchSide(...) -> (...) 2022-05-18T03:33:21.2860900Z processing existing schema: aten::hinge_embedding_loss(Tensor self, Tensor target, float margin=1., int reduction=1) -> (Tensor) 2022-05-18T03:33:21.2862989Z processing existing schema: aten::kl_div(Tensor self, Tensor target, int reduction=1, *, bool log_target=False) -> (Tensor) 2022-05-18T03:33:21.2863997Z processing existing schema: aten::soft_margin_loss(Tensor self, Tensor target, int reduction=1) -> (Tensor) 2022-05-18T03:33:21.2866225Z processing existing schema: aten::soft_margin_loss.out(Tensor self, Tensor target, int reduction=1, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2867303Z processing existing schema: aten::smooth_l1_loss(Tensor self, Tensor target, int reduction=1, float beta=1.) -> (Tensor) 2022-05-18T03:33:21.2869510Z processing existing schema: aten::smooth_l1_loss.out(Tensor self, Tensor target, int reduction=1, float beta=1., *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2871101Z processing existing schema: aten::huber_loss(Tensor self, Tensor target, int reduction=1, float delta=1.) -> (Tensor) 2022-05-18T03:33:21.2873171Z processing existing schema: aten::huber_loss.out(Tensor self, Tensor target, int reduction=1, float delta=1., *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2875279Z processing existing schema: aten::rsqrt(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2876551Z processing existing schema: aten::rsqrt.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2878181Z processing existing schema: aten::acos_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.2880419Z processing existing schema: aten::mse_loss(Tensor self, Tensor target, int reduction=1) -> (Tensor) 2022-05-18T03:33:21.2882546Z processing existing schema: aten::mse_loss.out(Tensor self, Tensor target, int reduction=1, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.2884106Z processing existing schema: aten::diag(Tensor self, int diagonal=0) -> (Tensor) 2022-05-18T03:33:21.2886447Z processing existing schema: aten::diag.out(Tensor self, int diagonal=0, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2887821Z processing existing schema: aten::is_contiguous(Tensor self) -> (bool) 2022-05-18T03:33:21.2889978Z processing existing schema: aten::multilabel_margin_loss(Tensor self, Tensor target, int reduction=1) -> (Tensor) 2022-05-18T03:33:21.2892308Z processing existing schema: aten::multilabel_margin_loss.out(Tensor self, Tensor target, int reduction=1, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2894455Z processing existing schema: quantized::conv3d_relu.new(Tensor qx, __torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.2897740Z processing existing schema: quantized::conv3d_relu(Tensor qx, __torch__.torch.classes.quantized.Conv3dPackedParamsBase weight, int[] stride, int[] padding, int[] dilation, int groups, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.2900208Z processing existing schema: aten::avg_pool2d_backward(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] stride, int[2] padding, bool ceil_mode, bool count_include_pad, int? divisor_override) -> (Tensor) 2022-05-18T03:33:21.2903160Z processing existing schema: aten::avg_pool2d_backward.grad_input(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] stride, int[2] padding, bool ceil_mode, bool count_include_pad, int? divisor_override, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.2906278Z processing existing schema: aten::triplet_margin_loss(Tensor anchor, Tensor positive, Tensor negative, float margin=1., float p=2., float eps=9.9999999999999995e-07, bool swap=False, int reduction=1) -> (Tensor) 2022-05-18T03:33:21.2907482Z processing existing schema: aten::_aminmax(Tensor self) -> (Tensor, Tensor) 2022-05-18T03:33:21.2910045Z processing existing schema: aten::_aminmax.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor, Tensor) 2022-05-18T03:33:21.2912362Z processing existing schema: aten::linalg_lstsq(Tensor self, Tensor b, float? rcond=None, *, str? driver=None) -> (Tensor solution, Tensor residuals, Tensor rank, Tensor singular_values) 2022-05-18T03:33:21.2916246Z processing existing schema: aten::linalg_lstsq.out(Tensor self, Tensor b, float? rcond=None, *, str? driver=None, Tensor(a!) solution, Tensor(b!) residuals, Tensor(c!) rank, Tensor(d!) singular_values) -> (Tensor(a!) solution, Tensor(b!) residuals, Tensor(c!) rank, Tensor(d!) singular_values) 2022-05-18T03:33:21.2919268Z processing existing schema: aten::zeros.names(int[] size, *, str[]? names, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.2921746Z processing existing schema: aten::zeros(int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.2923964Z processing existing schema: aten::zeros.out(int[] size, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.2925878Z processing existing schema: aten::dist(Tensor self, Tensor other, Scalar p=2) -> (Tensor) 2022-05-18T03:33:21.2927224Z processing existing schema: aten::isdecimal(str self) -> (bool) 2022-05-18T03:33:21.2929184Z processing existing schema: aten::renorm(Tensor self, Scalar p, int dim, Scalar maxnorm) -> (Tensor) 2022-05-18T03:33:21.2931507Z processing existing schema: aten::renorm.out(Tensor self, Scalar p, int dim, Scalar maxnorm, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2933143Z processing existing schema: aten::softmax.int(Tensor self, int dim, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.2935007Z processing existing schema: aten::softmax.Dimname(Tensor self, str dim, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.2936985Z processing existing schema: aten::softmax.int_out(Tensor self, int dim, int? dtype=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2939452Z processing existing schema: quantized::embedding_bag_4bit_prepack(Tensor weight, bool optimized_qparams=False, int nbins=200, float ratio=0.16) -> (Tensor) 2022-05-18T03:33:21.2941096Z processing existing schema: aten::block_diag(Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:21.2942795Z processing existing schema: aten::cumprod(Tensor self, int dim, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.2945031Z processing existing schema: aten::cumprod.dimname(Tensor self, str dim, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.2947147Z processing existing schema: aten::cumprod.dimname_out(Tensor self, str dim, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2949130Z processing existing schema: aten::cumprod.out(Tensor self, int dim, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2949634Z schema: static_runtime::expand_dims_copy(Tensor input, int[] dims) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.2951170Z processing existing schema: quantized::embedding_4bit(__torch__.torch.classes.quantized.EmbeddingPackedParamsBase weight, Tensor indices, bool pruned_weights=False) -> (Tensor) 2022-05-18T03:33:21.2952196Z processing existing schema: aten::bitwise_right_shift.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.2954021Z processing existing schema: aten::bitwise_right_shift.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2955343Z processing existing schema: aten::bitwise_right_shift.Tensor_Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.2957002Z processing existing schema: aten::bitwise_right_shift.Tensor_Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2958250Z processing existing schema: aten::bitwise_right_shift.Scalar_Tensor(Scalar self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.2960337Z processing existing schema: aten::upsample_linear1d(Tensor self, int[1] output_size, bool align_corners, float? scales=None) -> (Tensor) 2022-05-18T03:33:21.2962467Z processing existing schema: aten::upsample_linear1d.vec(Tensor input, int[]? output_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.2964601Z processing existing schema: aten::upsample_linear1d.out(Tensor self, int[1] output_size, bool align_corners, float? scales=None, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.2965955Z processing existing schema: aten::norm.Scalar(Tensor self, Scalar p=2) -> (Tensor) 2022-05-18T03:33:21.2967615Z processing existing schema: aten::norm.ScalarOpt_dim(Tensor self, Scalar? p, int[1] dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.2969325Z processing existing schema: aten::norm.names_ScalarOpt_dim(Tensor self, Scalar? p, str[1] dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.2971149Z processing existing schema: aten::norm.ScalarOpt_dim_dtype(Tensor self, Scalar? p, int[1] dim, bool keepdim, *, int dtype) -> (Tensor) 2022-05-18T03:33:21.2973176Z processing existing schema: aten::norm.dtype_out(Tensor self, Scalar? p, int[1] dim, bool keepdim, *, int dtype, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2975152Z processing existing schema: aten::norm.out(Tensor self, Scalar? p, int[1] dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2976758Z processing existing schema: aten::norm.ScalarOpt_dtype(Tensor self, Scalar? p, *, int dtype) -> (Tensor) 2022-05-18T03:33:21.2978455Z processing existing schema: aten::norm.names_ScalarOpt_dim_dtype(Tensor self, Scalar? p, str[1] dim, bool keepdim, *, int dtype) -> (Tensor) 2022-05-18T03:33:21.2980648Z processing existing schema: aten::norm.names_dtype_out(Tensor self, Scalar? p, str[1] dim, bool keepdim, *, int dtype, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2982729Z processing existing schema: aten::norm.names_out(Tensor self, Scalar? p, str[1] dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2983900Z processing existing schema: aten::selu(Tensor self) -> (Tensor) 2022-05-18T03:33:21.2985595Z processing existing schema: aten::addcdiv(Tensor self, Tensor tensor1, Tensor tensor2, *, Scalar value=1) -> (Tensor) 2022-05-18T03:33:21.2987508Z processing existing schema: aten::addcdiv.out(Tensor self, Tensor tensor1, Tensor tensor2, *, Scalar value=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2989085Z processing existing schema: aten::index_copy(Tensor self, int dim, Tensor index, Tensor source) -> (Tensor) 2022-05-18T03:33:21.2990515Z processing existing schema: aten::index_copy.dimname(Tensor self, str dim, Tensor index, Tensor source) -> (Tensor) 2022-05-18T03:33:21.2992515Z processing existing schema: aten::index_copy.out(Tensor self, int dim, Tensor index, Tensor source, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.2995386Z processing existing schema: quantized::conv3d_prepack(Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, int groups) -> (__torch__.torch.classes.quantized.Conv3dPackedParamsBase) 2022-05-18T03:33:21.2997069Z processing existing schema: aten::binary_cross_entropy(Tensor self, Tensor target, Tensor? weight=None, int reduction=1) -> (Tensor) 2022-05-18T03:33:21.2999593Z processing existing schema: aten::binary_cross_entropy.out(Tensor self, Tensor target, Tensor? weight=None, int reduction=1, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3001424Z processing existing schema: aten::uniform_(Tensor(a!) self, float from=0., float to=1., *, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:21.3002835Z processing existing schema: aten::cross(Tensor self, Tensor other, int? dim=None) -> (Tensor) 2022-05-18T03:33:21.3004775Z processing existing schema: aten::cross.out(Tensor self, Tensor other, int? dim=None, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.3006649Z processing existing schema: _quantized::conv3d(Tensor qx, __torch__.torch.classes.quantized.Conv3dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.3008162Z processing existing schema: aten::grid_sampler(Tensor input, Tensor grid, int interpolation_mode, int padding_mode, bool align_corners) -> (Tensor) 2022-05-18T03:33:21.3010116Z processing existing schema: sparse::qlinear_unpack(__torch__.torch.classes.sparse.LinearPackedParamsBase W_prepack) -> (Tensor W_origin, Tensor? B_origin, int[] block_pattern) 2022-05-18T03:33:21.3011511Z processing existing schema: aten::arcsinh_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.3013608Z processing existing schema: aten::tensordot(Tensor self, Tensor other, int[] dims_self, int[] dims_other) -> (Tensor) 2022-05-18T03:33:21.3016197Z processing existing schema: aten::tensordot.out(Tensor self, Tensor other, int[] dims_self, int[] dims_other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3017682Z processing existing schema: aten::scatter_add(Tensor self, int dim, Tensor index, Tensor src) -> (Tensor) 2022-05-18T03:33:21.3019640Z processing existing schema: aten::scatter_add.out(Tensor self, int dim, Tensor index, Tensor src, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3021173Z processing existing schema: aten::scatter_add.dimname(Tensor self, str dim, Tensor index, Tensor src) -> (Tensor) 2022-05-18T03:33:21.3024151Z processing existing schema: quantized::embedding_bag_4bit_rowwise_offsets(Tensor weight, Tensor indices, Tensor? offsets=None, bool scale_grad_by_freq=False, int mode=0, bool pruned_weights=False, Tensor? per_sample_weights=None, Tensor? compressed_indices_mapping=None, bool include_last_offset=False) -> (Tensor) 2022-05-18T03:33:21.3025128Z processing existing schema: aten::bitwise_xor.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.3027263Z processing existing schema: aten::bitwise_xor.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3029227Z processing existing schema: aten::bitwise_xor.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3030284Z processing existing schema: aten::bitwise_xor.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.3031836Z processing existing schema: aten::cummax(Tensor self, int dim) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.3033371Z processing existing schema: aten::cummax.dimname(Tensor self, str dim) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.3035827Z processing existing schema: aten::cummax.dimname_out(Tensor self, str dim, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.3038188Z processing existing schema: aten::cummax.out(Tensor self, int dim, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.3038568Z schema: static_runtime::permute_copy(Tensor self, int[] dims) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.3040168Z processing existing schema: aten::upsample_nearest1d(Tensor self, int[1] output_size, float? scales=None) -> (Tensor) 2022-05-18T03:33:21.3042048Z processing existing schema: aten::upsample_nearest1d.vec(Tensor input, int[]? output_size, float[]? 
scale_factors) -> (Tensor) 2022-05-18T03:33:21.3044314Z processing existing schema: aten::upsample_nearest1d.out(Tensor self, int[1] output_size, float? scales=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3045689Z processing existing schema: aten::cummin(Tensor self, int dim) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.3047017Z processing existing schema: aten::cummin.dimname(Tensor self, str dim) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.3049210Z processing existing schema: aten::cummin.dimname_out(Tensor self, str dim, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.3051362Z processing existing schema: aten::cummin.out(Tensor self, int dim, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.3051669Z schema: static_runtime::flatten_copy.using_ints(Tensor self, int start_dim=0, int end_dim=-1) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.3053287Z processing existing schema: aten::upsample_nearest2d(Tensor self, int[2] output_size, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:21.3055341Z processing existing schema: aten::upsample_nearest2d.vec(Tensor input, int[]? output_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.3057725Z processing existing schema: aten::upsample_nearest2d.out(Tensor self, int[2] output_size, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3059444Z processing existing schema: aten::cumprod_(Tensor(a!) self, int dim, *, int? dtype=None) -> (Tensor(a!)) 2022-05-18T03:33:21.3062070Z processing existing schema: aten::cumprod_.dimname(Tensor(a!) self, str dim, *, int? dtype=None) -> (Tensor(a!)) 2022-05-18T03:33:21.3062641Z schema: static_runtime::to_maybe_copy_out.prim_dtype(Tensor self, int? dtype=None, bool non_blocking=False, bool copy=False) -> (Tensor, bool) found on allowlist, skipping 2022-05-18T03:33:21.3063087Z schema: static_runtime::to_maybe_copy_out.dtype(Tensor self, int dtype, bool non_blocking=False, bool copy=False, int? memory_format=None) -> (Tensor, bool) found on allowlist, skipping 2022-05-18T03:33:21.3063469Z schema: static_runtime::to_maybe_copy_out.other(Tensor self, Tensor other, bool non_blocking=False, bool copy=False, int? memory_format=None) -> (Tensor, bool) found on allowlist, skipping 2022-05-18T03:33:21.3064012Z processing existing schema: aten::upsample_nearest3d(Tensor self, int[3] output_size, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:21.3066410Z processing existing schema: aten::upsample_nearest3d.vec(Tensor input, int[]? output_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.3069024Z processing existing schema: aten::upsample_nearest3d.out(Tensor self, int[3] output_size, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3072078Z processing existing schema: quantized::embedding_bag_4bit(__torch__.torch.classes.quantized.EmbeddingPackedParamsBase weight, Tensor indices, Tensor? offsets=None, bool scale_grad_by_freq=False, int mode=0, bool pruned_weights=False, Tensor? per_sample_weights=None, Tensor? 
compressed_indices_mapping=None, bool include_last_offset=False) -> (Tensor) 2022-05-18T03:33:21.3073570Z processing existing schema: aten::bitwise_or.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.3074913Z processing existing schema: aten::bitwise_or.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3076538Z processing existing schema: aten::bitwise_or.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3077927Z processing existing schema: aten::bitwise_or.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.3081614Z processing existing schema: aten::cudnn_convolution_transpose(Tensor self, Tensor weight, int[] padding, int[] output_padding, int[] stride, int[] dilation, int groups, bool benchmark, bool deterministic, bool allow_tf32) -> (Tensor) 2022-05-18T03:33:21.3083220Z processing existing schema: aten::get_gradients(int context_id) -> (Dict(Tensor, Tensor)) 2022-05-18T03:33:21.3085475Z processing existing schema: aten::upsample_bilinear2d(Tensor self, int[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:21.3088634Z processing existing schema: aten::upsample_bilinear2d.vec(Tensor input, int[]? output_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.3091046Z processing existing schema: aten::upsample_bilinear2d.out(Tensor self, int[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3091661Z processing existing schema: quantized::embedding_bag_byte_unpack(Tensor weight) -> (Tensor) 2022-05-18T03:33:21.3094051Z processing existing schema: aten::broadcast_to(Tensor(a) self, int[] size) -> (Tensor(a)) 2022-05-18T03:33:21.3095617Z processing existing schema: aten::cumsum(Tensor self, int dim, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.3097342Z processing existing schema: aten::cumsum.dimname(Tensor self, str dim, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.3099408Z processing existing schema: aten::cumsum.dimname_out(Tensor self, str dim, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3101484Z processing existing schema: aten::cumsum.out(Tensor self, int dim, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3101939Z schema: static_runtime::layer_norm(Tensor input, int[] normalized_shape, Tensor? weight=None, Tensor? bias=None, float eps=1.0000000000000001e-05, bool cudnn_enable=True) -> (Tensor, Tensor, Tensor) found on allowlist, skipping 2022-05-18T03:33:21.3103754Z processing existing schema: aten::upsample_trilinear3d(Tensor self, int[3] output_size, bool align_corners, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:21.3106058Z processing existing schema: aten::upsample_trilinear3d.vec(Tensor input, int[]? output_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.3108682Z processing existing schema: aten::upsample_trilinear3d.out(Tensor self, int[3] output_size, bool align_corners, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3110337Z schema: aten::quantile(Tensor self, Tensor q, int? dim=None, bool keepdim=False, *, str interpolation="linear") -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.3112010Z schema: aten::quantile.scalar(Tensor self, float q, int? 
dim=None, bool keepdim=False, *, str interpolation="linear") -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.3113874Z schema: aten::quantile.out(Tensor self, Tensor q, int? dim=None, bool keepdim=False, *, str interpolation="linear", Tensor(a!) out) -> (Tensor(a!)) found on allowlist, skipping 2022-05-18T03:33:21.3115848Z schema: aten::quantile.scalar_out(Tensor self, float q, int? dim=None, bool keepdim=False, *, str interpolation="linear", Tensor(a!) out) -> (Tensor(a!)) found on allowlist, skipping 2022-05-18T03:33:21.3117431Z schema: aten::nanquantile(Tensor self, Tensor q, int? dim=None, bool keepdim=False, *, str interpolation="linear") -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.3119383Z schema: aten::nanquantile.scalar(Tensor self, float q, int? dim=None, bool keepdim=False, *, str interpolation="linear") -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.3121324Z schema: aten::nanquantile.out(Tensor self, Tensor q, int? dim=None, bool keepdim=False, *, str interpolation="linear", Tensor(a!) out) -> (Tensor(a!)) found on allowlist, skipping 2022-05-18T03:33:21.3123267Z schema: aten::nanquantile.scalar_out(Tensor self, float q, int? dim=None, bool keepdim=False, *, str interpolation="linear", Tensor(a!) out) -> (Tensor(a!)) found on allowlist, skipping 2022-05-18T03:33:21.3125007Z processing existing schema: aten::grid_sampler_3d(Tensor input, Tensor grid, int interpolation_mode, int padding_mode, bool align_corners) -> (Tensor) 2022-05-18T03:33:21.3126561Z processing existing schema: aten::replication_pad3d(Tensor self, int[6] padding) -> (Tensor) 2022-05-18T03:33:21.3128621Z processing existing schema: aten::replication_pad3d.out(Tensor self, int[6] padding, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3129891Z processing existing schema: aten::inverse(Tensor self) -> (Tensor) 2022-05-18T03:33:21.3131358Z processing existing schema: aten::inverse.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3132650Z processing existing schema: aten::sin(Tensor self) -> (Tensor) 2022-05-18T03:33:21.3134431Z processing existing schema: aten::sin.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3135670Z processing existing schema: aten::sin.int(int a) -> (float) 2022-05-18T03:33:21.3137060Z processing existing schema: aten::sin.float(float a) -> (float) 2022-05-18T03:33:21.3138464Z processing existing schema: aten::sin.complex(complex a) -> (complex) 2022-05-18T03:33:21.3139771Z processing existing schema: aten::sin.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.3141292Z processing existing schema: aten::matrix_rank(Tensor self, bool symmetric=False) -> (Tensor) 2022-05-18T03:33:21.3142949Z processing existing schema: aten::matrix_rank.tol(Tensor self, float tol, bool symmetric=False) -> (Tensor) 2022-05-18T03:33:21.3144883Z processing existing schema: aten::ormqr(Tensor self, Tensor input2, Tensor input3, bool left=True, bool transpose=False) -> (Tensor) 2022-05-18T03:33:21.3147139Z processing existing schema: aten::ormqr.out(Tensor self, Tensor input2, Tensor input3, bool left=True, bool transpose=False, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.3148964Z processing existing schema: aten::pinverse(Tensor self, float rcond=1.0000000000000001e-15) -> (Tensor) 2022-05-18T03:33:21.3150606Z processing existing schema: aten::max_unpool2d(Tensor self, Tensor indices, int[2] output_size) -> (Tensor) 2022-05-18T03:33:21.3152632Z processing existing schema: aten::max_unpool2d.out(Tensor self, Tensor indices, int[2] output_size, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3154046Z processing existing schema: aten::reflection_pad1d(Tensor self, int[2] padding) -> (Tensor) 2022-05-18T03:33:21.3156057Z processing existing schema: aten::reflection_pad1d.out(Tensor self, int[2] padding, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3157444Z processing existing schema: aten::replication_pad1d(Tensor self, int[2] padding) -> (Tensor) 2022-05-18T03:33:21.3159609Z processing existing schema: aten::replication_pad1d.out(Tensor self, int[2] padding, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3161307Z processing existing schema: quantized::leaky_relu(Tensor qx, Scalar negative_slope, bool inplace, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.3162472Z processing existing schema: aten::col_indices_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:21.3164368Z processing existing schema: aten::elu(Tensor self, Scalar alpha=1, Scalar scale=1, Scalar input_scale=1) -> (Tensor) 2022-05-18T03:33:21.3166576Z processing existing schema: aten::elu.out(Tensor self, Scalar alpha=1, Scalar scale=1, Scalar input_scale=1, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3167762Z processing existing schema: aten::capitalize(str self) -> (str) 2022-05-18T03:33:21.3169821Z processing existing schema: aten::unsafe_chunk(Tensor self, int chunks, int dim=0) -> (Tensor[]) 2022-05-18T03:33:21.3171728Z processing existing schema: aten::fft_ihfft(Tensor self, int? n=None, int dim=-1, str? norm=None) -> (Tensor) 2022-05-18T03:33:21.3174046Z processing existing schema: aten::fft_ihfft.out(Tensor self, int? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3176530Z processing existing schema: aten::linalg_matrix_norm(Tensor self, Scalar ord, int[] dim=[-2, -1], bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.3179145Z processing existing schema: aten::linalg_matrix_norm.str_ord(Tensor self, str ord="fro", int[] dim=[-2, -1], bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.3182014Z processing existing schema: aten::linalg_matrix_norm.out(Tensor self, Scalar ord, int[] dim=[-2, -1], bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3185295Z processing existing schema: aten::linalg_matrix_norm.str_ord_out(Tensor self, str ord="fro", int[] dim=[-2, -1], bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3187311Z processing existing schema: aten::linalg_cond(Tensor self, Scalar? p=None) -> (Tensor) 2022-05-18T03:33:21.3188335Z processing existing schema: aten::linalg_cond.p_str(Tensor self, str p) -> (Tensor) 2022-05-18T03:33:21.3190245Z processing existing schema: aten::linalg_cond.out(Tensor self, Scalar? p=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3191976Z processing existing schema: aten::linalg_cond.p_str_out(Tensor self, str p, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3194046Z processing existing schema: aten::_backward(Tensor self, Tensor[] inputs, Tensor? gradient=None, bool? 
retain_graph=None, bool create_graph=False) -> () 2022-05-18T03:33:21.3195550Z processing existing schema: aten::linalg_matrix_rank(Tensor self, float tol, bool hermitian=False) -> (Tensor) 2022-05-18T03:33:21.3197062Z processing existing schema: aten::linalg_matrix_rank.tol_tensor(Tensor input, Tensor tol, bool hermitian=False) -> (Tensor) 2022-05-18T03:33:21.3198749Z processing existing schema: aten::linalg_matrix_rank.atol_rtol_tensor(Tensor input, *, Tensor? atol=None, Tensor? rtol=None, bool hermitian=False) -> (Tensor) 2022-05-18T03:33:21.3201287Z processing existing schema: aten::linalg_matrix_rank.atol_rtol_float(Tensor self, *, float? atol=None, float? rtol=None, bool hermitian=False) -> (Tensor) 2022-05-18T03:33:21.3203275Z processing existing schema: aten::linalg_matrix_rank.atol_rtol_tensor_out(Tensor input, *, Tensor? atol=None, Tensor? rtol=None, bool hermitian=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3205396Z processing existing schema: aten::linalg_matrix_rank.atol_rtol_float_out(Tensor self, *, float? atol=None, float? rtol=None, bool hermitian=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3207083Z processing existing schema: aten::linalg_matrix_rank.out(Tensor self, float tol, bool hermitian=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3209038Z processing existing schema: aten::linalg_matrix_rank.out_tol_tensor(Tensor input, Tensor tol, bool hermitian=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3209952Z processing existing schema: aten::linalg_svdvals(Tensor A) -> (Tensor) 2022-05-18T03:33:21.3211683Z processing existing schema: aten::linalg_svdvals.out(Tensor A, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3212768Z processing existing schema: aten::linalg_eigvals(Tensor self) -> (Tensor) 2022-05-18T03:33:21.3214484Z processing existing schema: aten::linalg_eigvals.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3215833Z processing existing schema: aten::_add_relu.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.3217680Z processing existing schema: aten::_add_relu.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3219054Z processing existing schema: aten::_add_relu.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.3220423Z processing existing schema: aten::linalg_eigvalsh(Tensor self, str UPLO="L") -> (Tensor) 2022-05-18T03:33:21.3222234Z processing existing schema: aten::linalg_eigvalsh.out(Tensor self, str UPLO="L", *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3224014Z processing existing schema: aten::_add_relu_.Tensor(Tensor(a!) self, Tensor other, *, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:21.3225895Z processing existing schema: aten::_add_relu_.Scalar(Tensor(a!) self, Scalar other, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:21.3227143Z processing existing schema: aten::linalg_householder_product(Tensor input, Tensor tau) -> (Tensor) 2022-05-18T03:33:21.3229006Z processing existing schema: aten::linalg_householder_product.out(Tensor input, Tensor tau, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3230526Z processing existing schema: aten::_cdist_backward(Tensor grad, Tensor x1, Tensor x2, float p, Tensor cdist) -> (Tensor) 2022-05-18T03:33:21.3232278Z processing existing schema: aten::linalg_tensorsolve(Tensor self, Tensor other, int[]? 
dims=None) -> (Tensor) 2022-05-18T03:33:21.3234461Z processing existing schema: aten::linalg_tensorsolve.out(Tensor self, Tensor other, int[]? dims=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3236280Z processing existing schema: aten::fake_quantize_per_tensor_affine(Tensor self, float scale, int zero_point, int quant_min, int quant_max) -> (Tensor) 2022-05-18T03:33:21.3238179Z processing existing schema: aten::fake_quantize_per_tensor_affine.tensor_qparams(Tensor self, Tensor scale, Tensor zero_point, int quant_min, int quant_max) -> (Tensor) 2022-05-18T03:33:21.3239223Z processing existing schema: aten::mathremainder.int(int a, int b) -> (float) 2022-05-18T03:33:21.3240646Z processing existing schema: aten::mathremainder.float(float a, float b) -> (float) 2022-05-18T03:33:21.3242235Z processing existing schema: aten::mathremainder.int_float(int a, float b) -> (float) 2022-05-18T03:33:21.3243732Z processing existing schema: aten::mathremainder.float_int(float a, int b) -> (float) 2022-05-18T03:33:21.3245487Z processing existing schema: aten::mathremainder(Scalar a, Scalar b) -> (float) 2022-05-18T03:33:21.3246602Z processing existing schema: aten::glu(Tensor self, int dim=-1) -> (Tensor) 2022-05-18T03:33:21.3248385Z processing existing schema: aten::glu.out(Tensor self, int dim=-1, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3251288Z processing existing schema: quantized::max_pool2d(Tensor qx, int[] kernel_size, int[] stride, int[] padding, int[] dilation, bool ceil_mode) -> (Tensor) 2022-05-18T03:33:21.3253105Z processing existing schema: aten::col2im_backward(Tensor grad_output, int[2] kernel_size, int[2] dilation, int[2] padding, int[2] stride) -> (Tensor) 2022-05-18T03:33:21.3255405Z processing existing schema: aten::col2im_backward.grad_input(Tensor grad_output, int[2] kernel_size, int[2] dilation, int[2] padding, int[2] stride, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.3256797Z processing existing schema: aten::eig(Tensor self, bool eigenvectors=False) -> (Tensor eigenvalues, Tensor eigenvectors) 2022-05-18T03:33:21.3259487Z processing existing schema: aten::eig.e(Tensor self, bool eigenvectors=False, *, Tensor(a!) e, Tensor(b!) v) -> (Tensor(a!) eigenvalues, Tensor(b!) eigenvectors) 2022-05-18T03:33:21.3260125Z processing existing schema: aten::isupper(str self) -> (bool) 2022-05-18T03:33:21.3261774Z processing existing schema: aten::geqrf(Tensor self) -> (Tensor a, Tensor tau) 2022-05-18T03:33:21.3263811Z processing existing schema: aten::geqrf.a(Tensor self, *, Tensor(a!) a, Tensor(b!) tau) -> (Tensor(a!) a, Tensor(b!) tau) 2022-05-18T03:33:21.3266736Z processing existing schema: aten::_embedding_bag(Tensor weight, Tensor indices, Tensor offsets, bool scale_grad_by_freq=False, int mode=0, bool sparse=False, Tensor? per_sample_weights=None, bool include_last_offset=False, int padding_idx=-1) -> (Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:21.3267658Z processing existing schema: aten::lstsq(Tensor self, Tensor A) -> (Tensor solution, Tensor QR) 2022-05-18T03:33:21.3270040Z processing existing schema: aten::lstsq.X(Tensor self, Tensor A, *, Tensor(a!) X, Tensor(b!) qr) -> (Tensor(a!) solution, Tensor(b!) QR) 2022-05-18T03:33:21.3271391Z processing existing schema: aten::qr(Tensor self, bool some=True) -> (Tensor Q, Tensor R) 2022-05-18T03:33:21.3273586Z processing existing schema: aten::qr.Q(Tensor self, bool some=True, *, Tensor(a!) Q, Tensor(b!) R) -> (Tensor(a!) Q, Tensor(b!) 
R) 2022-05-18T03:33:21.3275492Z processing existing schema: quantized::conv1d_relu(Tensor qx, __torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weight, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.3276228Z processing existing schema: aten::atleast_2d(Tensor self) -> (Tensor) 2022-05-18T03:33:21.3278186Z processing existing schema: aten::atleast_2d.Sequence(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3280650Z processing existing schema: aten::triangular_solve(Tensor self, Tensor A, bool upper=True, bool transpose=False, bool unitriangular=False) -> (Tensor solution, Tensor cloned_coefficient) 2022-05-18T03:33:21.3283227Z processing existing schema: aten::triangular_solve.X(Tensor self, Tensor A, bool upper=True, bool transpose=False, bool unitriangular=False, *, Tensor(a!) X, Tensor(b!) M) -> (Tensor(a!) solution, Tensor(b!) cloned_coefficient) 2022-05-18T03:33:21.3284845Z processing existing schema: aten::fractional_max_pool3d(Tensor self, int[3] kernel_size, int[3] output_size, Tensor random_samples) -> (Tensor, Tensor) 2022-05-18T03:33:21.3287511Z processing existing schema: aten::fractional_max_pool3d.output(Tensor self, int[3] kernel_size, int[3] output_size, Tensor random_samples, *, Tensor(a!) output, Tensor(b!) indices) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:21.3288951Z processing existing schema: aten::adaptive_max_pool3d(Tensor self, int[3] output_size) -> (Tensor, Tensor) 2022-05-18T03:33:21.3291225Z processing existing schema: aten::adaptive_max_pool3d.out(Tensor self, int[3] output_size, *, Tensor(a!) out, Tensor(b!) indices) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:21.3292650Z processing existing schema: aten::linalg_eig(Tensor self) -> (Tensor eigenvalues, Tensor eigenvectors) 2022-05-18T03:33:21.3294697Z processing existing schema: aten::linalg_eig.out(Tensor self, *, Tensor(a!) eigenvalues, Tensor(b!) eigenvectors) -> (Tensor(a!) eigenvalues, Tensor(b!) eigenvectors) 2022-05-18T03:33:21.3296255Z processing existing schema: aten::_grid_sampler_2d_cpu_fallback(Tensor input, Tensor grid, int interpolation_mode, int padding_mode, bool align_corners) -> (Tensor) 2022-05-18T03:33:21.3297717Z processing existing schema: aten::native_dropout(Tensor input, float p, bool? train) -> (Tensor, Tensor) 2022-05-18T03:33:21.3299104Z processing existing schema: aten::_local_scalar_dense(Tensor self) -> (Scalar) 2022-05-18T03:33:21.3301457Z processing existing schema: aten::randn(int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.3303881Z processing existing schema: aten::randn.generator(int[] size, *, Generator? generator, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.3306420Z processing existing schema: aten::randn.names(int[] size, *, str[]? names, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.3309335Z processing existing schema: aten::randn.generator_with_names(int[] size, *, Generator? generator, str[]? names, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.3311135Z processing existing schema: aten::randn.out(int[] size, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3313352Z processing existing schema: aten::randn.generator_out(int[] size, *, Generator? generator, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.3314907Z processing existing schema: aten::_sparse_log_softmax.Dimname(Tensor self, str dim, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.3316425Z processing existing schema: aten::_sparse_log_softmax.int(Tensor self, int dim, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.3317998Z processing existing schema: aten::_sparse_log_softmax(Tensor self, int dim, bool half_to_float) -> (Tensor) 2022-05-18T03:33:21.3320847Z processing existing schema: aten::_to_copy(Tensor self, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, bool non_blocking=False, int? memory_format=None) -> (Tensor) 2022-05-18T03:33:21.3321826Z processing existing schema: aten::abs_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.3323466Z processing existing schema: aten::absolute_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.3324854Z processing existing schema: aten::rsqrt_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.3326376Z processing existing schema: aten::acosh(Tensor self) -> (Tensor) 2022-05-18T03:33:21.3327897Z processing existing schema: aten::acosh.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3329237Z processing existing schema: aten::acosh.int(int a) -> (float) 2022-05-18T03:33:21.3330682Z processing existing schema: aten::acosh.float(float a) -> (float) 2022-05-18T03:33:21.3332122Z processing existing schema: aten::acosh.complex(complex a) -> (complex) 2022-05-18T03:33:21.3333428Z processing existing schema: aten::acosh.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.3335315Z processing existing schema: aten::rsub.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.3336881Z processing existing schema: aten::rsub.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.3338601Z processing existing schema: aten::acosh_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.3339074Z schema: aten::select_backward(Tensor grad_output, int[] input_sizes, int dim, int index) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.3341317Z processing existing schema: aten::add_.Scalar(Tensor(a!) self, Scalar other, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:21.3343016Z processing existing schema: aten::add_.Tensor(Tensor(a!) self, Tensor other, *, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:21.3345634Z processing existing schema: aten::add_.t(t[](a!) self, t[] b) -> (t[]) 2022-05-18T03:33:21.3345932Z schema: static_runtime::fused_equally_split(Tensor input, int num_split, int dim) -> (...) found on allowlist, skipping 2022-05-18T03:33:21.3348763Z processing existing schema: aten::set_.source_Storage_storage_offset(Tensor(a!) self, Storage source, int storage_offset, int[] size, int[] stride=[]) -> (Tensor(a!)) 2022-05-18T03:33:21.3351162Z processing existing schema: aten::set_.source_Tensor(Tensor(a!) self, Tensor source) -> (Tensor(a!)) 2022-05-18T03:33:21.3351746Z processing existing schema: aten::set_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.3353649Z processing existing schema: aten::set_.source_Storage(Tensor(a!) self, Storage source) -> (Tensor(a!)) 2022-05-18T03:33:21.3356301Z processing existing schema: aten::set_.source_Tensor_storage_offset(Tensor(a!) 
self, Tensor source, int storage_offset, int[] size, int[] stride=[]) -> (Tensor(a!)) 2022-05-18T03:33:21.3357543Z processing existing schema: aten::sigmoid(Tensor self) -> (Tensor) 2022-05-18T03:33:21.3359346Z processing existing schema: aten::sigmoid.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3360847Z processing existing schema: aten::sin_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.3363499Z processing existing schema: aten::sparse_csr_tensor.crow_col_value_size(Tensor crow_indices, Tensor col_indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:21.3365708Z processing existing schema: aten::sparse_csr_tensor.crow_col_value(Tensor crow_indices, Tensor col_indices, Tensor values, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=False) -> (Tensor) 2022-05-18T03:33:21.3367081Z processing existing schema: aten::_softmax_backward_data(Tensor grad_output, Tensor output, int dim, int input_dtype) -> (Tensor) 2022-05-18T03:33:21.3369289Z processing existing schema: aten::_softmax_backward_data.out(Tensor grad_output, Tensor output, int dim, int input_dtype, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.3371622Z processing existing schema: aten::sspaddmm.out(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3373325Z processing existing schema: aten::sspaddmm(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.3375040Z processing existing schema: aten::_stack(Tensor[] tensors, int dim=0) -> (Tensor) 2022-05-18T03:33:21.3377354Z processing existing schema: aten::_stack.out(Tensor[] tensors, int dim=0, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3379377Z processing existing schema: aten::nansum(Tensor self, int[1] dim=[], bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.3381793Z processing existing schema: aten::nansum.out(Tensor self, int[1] dim=[], bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3383474Z processing existing schema: aten::flip(Tensor self, int[] dims) -> (Tensor) 2022-05-18T03:33:21.3385418Z processing existing schema: aten::roll(Tensor self, int[1] shifts, int[1] dims=[]) -> (Tensor) 2022-05-18T03:33:21.3386970Z schema: aten::_transform_bias_rescale_qkv(Tensor qkv, Tensor qkv_bias, int num_heads) -> (Tensor, Tensor, Tensor) found on allowlist, skipping 2022-05-18T03:33:21.3387967Z schema: aten::_nested_tensor_from_mask(Tensor t, Tensor mask) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.3389735Z processing existing schema: aten::_nested_from_padded(Tensor padded, Tensor cpu_nested_shape_example, bool fuse_transform_0213=False) -> (Tensor) 2022-05-18T03:33:21.3391455Z processing existing schema: aten::_unique(Tensor self, bool sorted=True, bool return_inverse=False) -> (Tensor, Tensor) 2022-05-18T03:33:21.3393453Z processing existing schema: aten::unique_dim(Tensor self, int dim, bool sorted=True, bool return_inverse=False, bool return_counts=False) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.3395308Z processing existing schema: aten::unique_consecutive(Tensor self, bool return_inverse=False, bool return_counts=False, int? 
dim=None) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.3397051Z processing existing schema: aten::unique_dim_consecutive(Tensor self, int dim, bool return_inverse=False, bool return_counts=False) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.3398873Z processing existing schema: aten::_unique2(Tensor self, bool sorted=True, bool return_inverse=False, bool return_counts=False) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.3400430Z processing existing schema: aten::where.self(Tensor condition, Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.3402328Z processing existing schema: aten::where.self_out(Tensor condition, Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3403781Z processing existing schema: aten::where.ScalarSelf(Tensor condition, Scalar self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.3405310Z processing existing schema: aten::where.ScalarOther(Tensor condition, Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.3406804Z processing existing schema: aten::where.Scalar(Tensor condition, Scalar self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.3408364Z processing existing schema: aten::where(Tensor condition) -> (Tensor[]) 2022-05-18T03:33:21.3410045Z processing existing schema: aten::_weight_norm_interface(Tensor v, Tensor g, int dim=0) -> (Tensor, Tensor) 2022-05-18T03:33:21.3411823Z processing existing schema: aten::_weight_norm_interface_backward(Tensor grad_w, Tensor saved_v, Tensor saved_g, Tensor saved_norms, int dim) -> (Tensor, Tensor) 2022-05-18T03:33:21.3413093Z processing existing schema: aten::_standard_gamma_grad(Tensor self, Tensor output) -> (Tensor) 2022-05-18T03:33:21.3414609Z processing existing schema: aten::_standard_gamma(Tensor self, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:21.3416139Z processing existing schema: aten::_dirichlet_grad(Tensor x, Tensor alpha, Tensor total) -> (Tensor) 2022-05-18T03:33:21.3417576Z processing existing schema: aten::_sample_dirichlet(Tensor self, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:21.3419854Z processing existing schema: aten::frexp.Tensor_out(Tensor self, *, Tensor(a!) mantissa, Tensor(b!) exponent) -> (Tensor(a!) mantissa, Tensor(b!) exponent) 2022-05-18T03:33:21.3421172Z processing existing schema: aten::frexp.Tensor(Tensor self) -> (Tensor mantissa, Tensor exponent) 2022-05-18T03:33:21.3422448Z processing existing schema: aten::frexp(float a) -> (float, int) 2022-05-18T03:33:21.3423855Z processing existing schema: aten::heaviside(Tensor self, Tensor values) -> (Tensor) 2022-05-18T03:33:21.3425963Z processing existing schema: aten::heaviside.out(Tensor self, Tensor values, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3427514Z processing existing schema: aten::heaviside_(Tensor(a!) self, Tensor values) -> (Tensor(a!)) 2022-05-18T03:33:21.3429582Z processing existing schema: aten::_addmm_activation(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1, bool use_gelu=False) -> (Tensor) 2022-05-18T03:33:21.3431965Z processing existing schema: aten::_addmm_activation.out(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1, bool use_gelu=False, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.3433262Z processing existing schema: aten::to_sparse.sparse_dim(Tensor self, int sparse_dim) -> (Tensor) 2022-05-18T03:33:21.3434496Z processing existing schema: aten::to_sparse(Tensor self) -> (Tensor) 2022-05-18T03:33:21.3435817Z processing existing schema: aten::to_sparse_csr(Tensor self) -> (Tensor) 2022-05-18T03:33:21.3437139Z processing existing schema: aten::to_sparse_csc(Tensor self) -> (Tensor) 2022-05-18T03:33:21.3438659Z processing existing schema: aten::to_sparse_bsr(Tensor self, int[2] blocksize) -> (Tensor) 2022-05-18T03:33:21.3440355Z processing existing schema: aten::to_sparse_bsc(Tensor self, int[2] blocksize) -> (Tensor) 2022-05-18T03:33:21.3441769Z processing existing schema: aten::to_mkldnn(Tensor self, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.3443292Z processing existing schema: aten::quantize_per_tensor_dynamic(Tensor self, int dtype, bool reduce_range) -> (Tensor) 2022-05-18T03:33:21.3444844Z processing existing schema: aten::quantize_per_tensor(Tensor self, float scale, int zero_point, int dtype) -> (Tensor) 2022-05-18T03:33:21.3446457Z processing existing schema: aten::quantize_per_tensor.tensor_qparams(Tensor self, Tensor scale, Tensor zero_point, int dtype) -> (Tensor) 2022-05-18T03:33:21.3448670Z processing existing schema: aten::quantize_per_tensor.tensors(Tensor[] tensors, Tensor scales, Tensor zero_points, int dtype) -> (Tensor[]) 2022-05-18T03:33:21.3450277Z processing existing schema: aten::quantize_per_channel(Tensor self, Tensor scales, Tensor zero_points, int axis, int dtype) -> (Tensor) 2022-05-18T03:33:21.3451497Z processing existing schema: aten::dequantize.self(Tensor self) -> (Tensor) 2022-05-18T03:33:21.3453435Z processing existing schema: aten::dequantize.tensors(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3454730Z processing existing schema: aten::dequantize.tensor(Tensor qtensor) -> (Tensor) 2022-05-18T03:33:21.3456638Z processing existing schema: aten::dequantize.list(Tensor[] qtensors) -> (Tensor[]) 2022-05-18T03:33:21.3457945Z processing existing schema: aten::dequantize.any(Any tensors) -> (Any) 2022-05-18T03:33:21.3459785Z processing existing schema: aten::Size(int[] sizes) -> (int[]) 2022-05-18T03:33:21.3461356Z processing existing schema: aten::_make_per_tensor_quantized_tensor(Tensor self, float scale, int zero_point) -> (Tensor) 2022-05-18T03:33:21.3462972Z processing existing schema: aten::_make_per_channel_quantized_tensor(Tensor self, Tensor scale, Tensor zero_point, int axis) -> (Tensor) 2022-05-18T03:33:21.3464937Z processing existing schema: aten::fake_quantize_per_tensor_affine_cachemask(Tensor self, float scale, int zero_point, int quant_min, int quant_max) -> (Tensor output, Tensor mask) 2022-05-18T03:33:21.3465933Z processing existing schema: aten::degrees.int(int a) -> (float) 2022-05-18T03:33:21.3467382Z processing existing schema: aten::degrees.float(float a) -> (float) 2022-05-18T03:33:21.3468698Z processing existing schema: aten::degrees.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.3470938Z processing existing schema: aten::_fake_quantize_per_tensor_affine_cachemask_tensor_qparams(Tensor self, Tensor scale, Tensor zero_point, Tensor fake_quant_enabled, int quant_min, int quant_max) -> (Tensor output, Tensor mask) 2022-05-18T03:33:21.3472625Z processing existing schema: aten::_fake_quantize_learnable_per_tensor_affine(Tensor self, Tensor scale, Tensor zero_point, int quant_min, int quant_max, float grad_factor=1.) 
-> (Tensor) 2022-05-18T03:33:21.3474489Z processing existing schema: aten::fake_quantize_per_channel_affine_cachemask(Tensor self, Tensor scale, Tensor zero_point, int axis, int quant_min, int quant_max) -> (Tensor output, Tensor mask) 2022-05-18T03:33:21.3476126Z processing existing schema: aten::remove.int(int[](a!) self, int el) -> () 2022-05-18T03:33:21.3477972Z processing existing schema: aten::remove.float(float[](a!) self, float el) -> () 2022-05-18T03:33:21.3479965Z processing existing schema: aten::remove.bool(bool[](a!) self, bool el) -> () 2022-05-18T03:33:21.3481716Z processing existing schema: aten::remove.Tensor(Tensor[](a!) self, Tensor el) -> () 2022-05-18T03:33:21.3483523Z processing existing schema: aten::remove.str(str[](a!) self, str el) -> () 2022-05-18T03:33:21.3485656Z processing existing schema: aten::_fake_quantize_learnable_per_channel_affine(Tensor self, Tensor scale, Tensor zero_point, int axis, int quant_min, int quant_max, float grad_factor=1.) -> (Tensor) 2022-05-18T03:33:21.3489145Z processing existing schema: aten::_fused_moving_avg_obs_fq_helper(Tensor self, Tensor observer_on, Tensor fake_quant_on, Tensor(a!) running_min, Tensor(b!) running_max, Tensor(c!) scale, Tensor(d!) zero_point, float averaging_const, int quant_min, int quant_max, int ch_axis, bool per_row_fake_quant=False, bool symmetric_quant=False) -> (Tensor output, Tensor mask) 2022-05-18T03:33:21.3489847Z processing existing schema: aten::is_set_to(Tensor self, Tensor tensor) -> (bool) 2022-05-18T03:33:21.3491761Z processing existing schema: aten::masked_scatter_(Tensor(a!) self, Tensor mask, Tensor source) -> (Tensor(a!)) 2022-05-18T03:33:21.3493324Z processing existing schema: aten::_masked_softmax(Tensor self, Tensor mask, int? dim=None) -> (Tensor) 2022-05-18T03:33:21.3495030Z processing existing schema: aten::_masked_softmax_backward(Tensor grad_output, Tensor output, Tensor mask, int? dim=None) -> (Tensor) 2022-05-18T03:33:21.3496934Z processing existing schema: aten::put_(Tensor(a!) self, Tensor index, Tensor source, bool accumulate=False) -> (Tensor(a!)) 2022-05-18T03:33:21.3498659Z processing existing schema: aten::index_add(Tensor self, int dim, Tensor index, Tensor source, *, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.3500831Z processing existing schema: aten::index_add.out(Tensor self, int dim, Tensor index, Tensor source, *, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3502607Z processing existing schema: aten::index_add.dimname(Tensor self, str dim, Tensor index, Tensor source, *, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.3504647Z processing existing schema: aten::index_add_(Tensor(a!) self, int dim, Tensor index, Tensor source, *, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:21.3506593Z processing existing schema: aten::index_reduce(Tensor self, int dim, Tensor index, Tensor source, str reduce, *, bool include_self=True) -> (Tensor) 2022-05-18T03:33:21.3508848Z processing existing schema: aten::index_reduce.out(Tensor self, int dim, Tensor index, Tensor source, str reduce, *, bool include_self=True, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3511034Z processing existing schema: aten::index_reduce_(Tensor(a!) 
self, int dim, Tensor index, Tensor source, str reduce, *, bool include_self=True) -> (Tensor(a!)) 2022-05-18T03:33:21.3512457Z processing existing schema: aten::scatter.src(Tensor self, int dim, Tensor index, Tensor src) -> (Tensor) 2022-05-18T03:33:21.3514443Z processing existing schema: aten::scatter.src_out(Tensor self, int dim, Tensor index, Tensor src, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3515992Z processing existing schema: aten::scatter.value(Tensor self, int dim, Tensor index, Scalar value) -> (Tensor) 2022-05-18T03:33:21.3518011Z processing existing schema: aten::scatter.value_out(Tensor self, int dim, Tensor index, Scalar value, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3519870Z processing existing schema: aten::scatter.reduce(Tensor self, int dim, Tensor index, Tensor src, *, str reduce) -> (Tensor) 2022-05-18T03:33:21.3521948Z processing existing schema: aten::scatter.reduce_out(Tensor self, int dim, Tensor index, Tensor src, *, str reduce, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3523677Z processing existing schema: aten::scatter.value_reduce(Tensor self, int dim, Tensor index, Scalar value, *, str reduce) -> (Tensor) 2022-05-18T03:33:21.3525863Z processing existing schema: aten::scatter.value_reduce_out(Tensor self, int dim, Tensor index, Scalar value, *, str reduce, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3527403Z processing existing schema: aten::scatter.dimname_src(Tensor self, str dim, Tensor index, Tensor src) -> (Tensor) 2022-05-18T03:33:21.3529035Z processing existing schema: aten::scatter.dimname_value(Tensor self, str dim, Tensor index, Scalar value) -> (Tensor) 2022-05-18T03:33:21.3530896Z processing existing schema: aten::scatter_.src(Tensor(a!) self, int dim, Tensor index, Tensor src) -> (Tensor(a!)) 2022-05-18T03:33:21.3532733Z processing existing schema: aten::scatter_.value(Tensor(a!) self, int dim, Tensor index, Scalar value) -> (Tensor(a!)) 2022-05-18T03:33:21.3534746Z processing existing schema: aten::scatter_.reduce(Tensor(a!) self, int dim, Tensor index, Tensor src, *, str reduce) -> (Tensor(a!)) 2022-05-18T03:33:21.3536763Z processing existing schema: aten::scatter_.value_reduce(Tensor(a!) self, int dim, Tensor index, Scalar value, *, str reduce) -> (Tensor(a!)) 2022-05-18T03:33:21.3538542Z processing existing schema: aten::scatter_add_(Tensor(a!) self, int dim, Tensor index, Tensor src) -> (Tensor(a!)) 2022-05-18T03:33:21.3540461Z processing existing schema: aten::scatter_reduce.two(Tensor self, int dim, Tensor index, Tensor src, str reduce, *, bool include_self=True) -> (Tensor) 2022-05-18T03:33:21.3542705Z processing existing schema: aten::scatter_reduce.two_out(Tensor self, int dim, Tensor index, Tensor src, str reduce, *, bool include_self=True, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3544964Z processing existing schema: aten::scatter_reduce_.two(Tensor(a!) self, int dim, Tensor index, Tensor src, str reduce, *, bool include_self=True) -> (Tensor(a!)) 2022-05-18T03:33:21.3546604Z processing existing schema: aten::eq_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.3548263Z processing existing schema: aten::eq_.Tensor(Tensor(a!) 
self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.3549615Z processing existing schema: aten::zfill(str self, int width) -> (str) 2022-05-18T03:33:21.3551091Z processing existing schema: aten::__lshift__.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.3552501Z processing existing schema: aten::__lshift__.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.3553829Z processing existing schema: aten::__lshift__.int(int a, int b) -> (int) 2022-05-18T03:33:21.3555594Z processing existing schema: aten::__ilshift__.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.3557296Z processing existing schema: aten::__ilshift__.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.3559042Z processing existing schema: aten::__rshift__.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.3561006Z processing existing schema: aten::__rshift__.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.3562002Z processing existing schema: aten::__rshift__.int(int a, int b) -> (int) 2022-05-18T03:33:21.3564093Z processing existing schema: aten::__irshift__.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.3566036Z processing existing schema: aten::__irshift__.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.3567707Z processing existing schema: aten::tril(Tensor self, int diagonal=0) -> (Tensor) 2022-05-18T03:33:21.3569996Z processing existing schema: aten::tril.out(Tensor self, int diagonal=0, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3572027Z processing existing schema: aten::tril_(Tensor(a!) self, int diagonal=0) -> (Tensor(a!)) 2022-05-18T03:33:21.3573737Z processing existing schema: aten::triu(Tensor self, int diagonal=0) -> (Tensor) 2022-05-18T03:33:21.3576067Z processing existing schema: aten::triu.out(Tensor self, int diagonal=0, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3578073Z processing existing schema: aten::triu_(Tensor(a!) self, int diagonal=0) -> (Tensor(a!)) 2022-05-18T03:33:21.3579913Z processing existing schema: aten::lerp.Scalar(Tensor self, Tensor end, Scalar weight) -> (Tensor) 2022-05-18T03:33:21.3582203Z processing existing schema: aten::lerp.Scalar_out(Tensor self, Tensor end, Scalar weight, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3583906Z processing existing schema: aten::lerp.Tensor(Tensor self, Tensor end, Tensor weight) -> (Tensor) 2022-05-18T03:33:21.3586347Z processing existing schema: aten::lerp.Tensor_out(Tensor self, Tensor end, Tensor weight, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3588417Z processing existing schema: aten::lerp_.Scalar(Tensor(a!) self, Tensor end, Scalar weight) -> (Tensor(a!)) 2022-05-18T03:33:21.3590530Z processing existing schema: aten::lerp_.Tensor(Tensor(a!) self, Tensor end, Tensor weight) -> (Tensor(a!)) 2022-05-18T03:33:21.3593031Z processing existing schema: aten::addbmm_(Tensor(a!) self, Tensor batch1, Tensor batch2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:21.3594968Z processing existing schema: aten::ne_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.3596910Z processing existing schema: aten::ne_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.3598925Z processing existing schema: aten::ge_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.3601112Z processing existing schema: aten::ge_.Tensor(Tensor(a!) 
self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.3603076Z processing existing schema: aten::le_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.3605077Z processing existing schema: aten::le_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.3607090Z processing existing schema: aten::gt_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.3609180Z processing existing schema: aten::gt_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.3611143Z processing existing schema: aten::lt_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.3613183Z processing existing schema: aten::lt_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.3614838Z processing existing schema: aten::take(Tensor self, Tensor index) -> (Tensor) 2022-05-18T03:33:21.3617233Z processing existing schema: aten::take.out(Tensor self, Tensor index, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3618758Z processing existing schema: aten::index_select(Tensor self, int dim, Tensor index) -> (Tensor) 2022-05-18T03:33:21.3621064Z processing existing schema: aten::index_select.out(Tensor self, int dim, Tensor index, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3622722Z processing existing schema: aten::index_select.dimname(Tensor self, str dim, Tensor index) -> (Tensor) 2022-05-18T03:33:21.3625184Z processing existing schema: aten::index_select.dimname_out(Tensor self, str dim, Tensor index, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3626796Z processing existing schema: aten::nonzero(Tensor self) -> (Tensor) 2022-05-18T03:33:21.3628679Z processing existing schema: aten::nonzero.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3630477Z processing existing schema: aten::gather(Tensor self, int dim, Tensor index, *, bool sparse_grad=False) -> (Tensor) 2022-05-18T03:33:21.3632831Z processing existing schema: aten::gather.out(Tensor self, int dim, Tensor index, *, bool sparse_grad=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3634717Z processing existing schema: aten::gather.dimname(Tensor self, str dim, Tensor index, *, bool sparse_grad=False) -> (Tensor) 2022-05-18T03:33:21.3637104Z processing existing schema: aten::gather.dimname_out(Tensor self, str dim, Tensor index, *, bool sparse_grad=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3638791Z processing existing schema: aten::_symeig_helper(Tensor self, bool eigenvectors, bool upper) -> (Tensor, Tensor) 2022-05-18T03:33:21.3640616Z processing existing schema: aten::_cholesky_solve_helper(Tensor self, Tensor A, bool upper) -> (Tensor) 2022-05-18T03:33:21.3642897Z processing existing schema: aten::lu_unpack(Tensor LU_data, Tensor LU_pivots, bool unpack_data=True, bool unpack_pivots=True) -> (Tensor P, Tensor L, Tensor U) 2022-05-18T03:33:21.3646206Z processing existing schema: aten::lu_unpack.out(Tensor LU_data, Tensor LU_pivots, bool unpack_data=True, bool unpack_pivots=True, *, Tensor(a!) P, Tensor(b!) L, Tensor(c!) U) -> (Tensor(a!) P, Tensor(b!) L, Tensor(c!) U) 2022-05-18T03:33:21.3648263Z processing existing schema: aten::histogram.bins_tensor(Tensor self, Tensor bins, *, Tensor? weight=None, bool density=False) -> (Tensor hist, Tensor bin_edges) 2022-05-18T03:33:21.3651239Z processing existing schema: aten::histogram.bins_tensor_out(Tensor self, Tensor bins, *, Tensor? weight=None, bool density=False, Tensor(a!) hist, Tensor(b!) bin_edges) -> (Tensor(a!) 
hist, Tensor(b!) bin_edges) 2022-05-18T03:33:21.3653976Z processing existing schema: aten::histogram.bin_ct(Tensor self, int bins=100, *, float[]? range=None, Tensor? weight=None, bool density=False) -> (Tensor hist, Tensor bin_edges) 2022-05-18T03:33:21.3657400Z processing existing schema: aten::histogram.bin_ct_out(Tensor self, int bins=100, *, float[]? range=None, Tensor? weight=None, bool density=False, Tensor(a!) hist, Tensor(b!) bin_edges) -> (Tensor(a!) hist, Tensor(b!) bin_edges) 2022-05-18T03:33:21.3660415Z processing existing schema: aten::_histogramdd_bin_edges(Tensor self, int[] bins, *, float[]? range=None, Tensor? weight=None, bool density=False) -> (Tensor[]) 2022-05-18T03:33:21.3663166Z processing existing schema: aten::_histogramdd_from_bin_cts(Tensor self, int[] bins, *, float[]? range=None, Tensor? weight=None, bool density=False) -> (Tensor) 2022-05-18T03:33:21.3665542Z processing existing schema: aten::_histogramdd_from_bin_tensors(Tensor self, Tensor[] bins, *, Tensor? weight=None, bool density=False) -> (Tensor) 2022-05-18T03:33:21.3667422Z processing existing schema: aten::fmod_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.3669363Z processing existing schema: aten::fmod_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.3671154Z processing existing schema: aten::remainder.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.3673314Z processing existing schema: aten::remainder.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3674540Z processing existing schema: aten::remainder.Scalar_Tensor(Scalar self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.3676522Z processing existing schema: aten::remainder.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.3678967Z processing existing schema: aten::remainder.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3680414Z processing existing schema: aten::remainder.int(int a, int b) -> (int) 2022-05-18T03:33:21.3682153Z processing existing schema: aten::remainder.float(float a, float b) -> (float) 2022-05-18T03:33:21.3683948Z processing existing schema: aten::remainder.int_float(int a, float b) -> (float) 2022-05-18T03:33:21.3685183Z processing existing schema: aten::remainder.float_int(float a, int b) -> (float) 2022-05-18T03:33:21.3686928Z processing existing schema: aten::remainder(Scalar a, Scalar b) -> (Scalar) 2022-05-18T03:33:21.3688996Z processing existing schema: aten::remainder_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.3690916Z processing existing schema: aten::remainder_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.3692356Z processing existing schema: aten::fmin(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.3694664Z processing existing schema: aten::fmin.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3696029Z processing existing schema: aten::fmax(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.3698360Z processing existing schema: aten::fmax.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3699713Z processing existing schema: aten::maximum(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.3702017Z processing existing schema: aten::maximum.out(Tensor self, Tensor other, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.3703498Z processing existing schema: aten::minimum(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.3705866Z processing existing schema: aten::minimum.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3708109Z processing existing schema: aten::sort.stable(Tensor self, *, bool? stable, int dim=-1, bool descending=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.3711074Z processing existing schema: aten::sort.values_stable(Tensor self, *, bool? stable, int dim=-1, bool descending=False, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.3712972Z processing existing schema: aten::sort(Tensor self, int dim=-1, bool descending=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.3715846Z processing existing schema: aten::sort.values(Tensor self, int dim=-1, bool descending=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.3717753Z processing existing schema: aten::sort.dimname(Tensor self, str dim, bool descending=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.3720666Z processing existing schema: aten::sort.dimname_values(Tensor self, str dim, bool descending=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.3722702Z processing existing schema: aten::sort.dimname_stable(Tensor self, *, bool? stable, str dim, bool descending=False) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.3725659Z processing existing schema: aten::sort.dimname_values_stable(Tensor self, *, bool? stable, str dim, bool descending=False, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.3727641Z processing existing schema: aten::sort.int(int[](a!) self, bool reverse=False) -> () 2022-05-18T03:33:21.3729819Z processing existing schema: aten::sort.float(float[](a!) self, bool reverse=False) -> () 2022-05-18T03:33:21.3732030Z processing existing schema: aten::sort.Tensor(Tensor[](a!) self, bool reverse=False) -> () 2022-05-18T03:33:21.3734220Z processing existing schema: aten::sort.bool(bool[](a!) self, bool reverse=False) -> () 2022-05-18T03:33:21.3736424Z processing existing schema: aten::sort.str(str[](a!) self, bool reverse=False) -> () 2022-05-18T03:33:21.3738610Z processing existing schema: aten::sort.any(t[](a!) self, bool reverse=False) -> () 2022-05-18T03:33:21.3741051Z processing existing schema: aten::topk(Tensor self, int k, int dim=-1, bool largest=True, bool sorted=True) -> (Tensor values, Tensor indices) 2022-05-18T03:33:21.3744025Z processing existing schema: aten::topk.values(Tensor self, int k, int dim=-1, bool largest=True, bool sorted=True, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices) 2022-05-18T03:33:21.3746096Z processing existing schema: aten::renorm_(Tensor(a!) 
self, Scalar p, int dim, Scalar maxnorm) -> (Tensor(a!)) 2022-05-18T03:33:21.3748308Z processing existing schema: aten::unfold_backward(Tensor grad_in, int[] input_sizes, int dim, int size, int step) -> (Tensor) 2022-05-18T03:33:21.3750578Z processing existing schema: aten::_foreach_add.Scalar(Tensor[] tensors, Scalar scalar) -> (Tensor[]) 2022-05-18T03:33:21.3753346Z processing existing schema: aten::_foreach_add.List(Tensor[] tensors1, Tensor[] tensors2, *, Scalar alpha=1) -> (Tensor[]) 2022-05-18T03:33:21.3755906Z processing existing schema: aten::_foreach_add.ScalarList(Tensor[] tensors, Scalar[] scalars) -> (Tensor[]) 2022-05-18T03:33:21.3757866Z processing existing schema: aten::_foreach_add_.Scalar(Tensor[] self, Scalar scalar) -> () 2022-05-18T03:33:21.3760444Z processing existing schema: aten::_foreach_add_.List(Tensor[] self, Tensor[] other, *, Scalar alpha=1) -> () 2022-05-18T03:33:21.3762732Z processing existing schema: aten::_foreach_add_.ScalarList(Tensor[] self, Scalar[] scalars) -> () 2022-05-18T03:33:21.3765001Z processing existing schema: aten::_foreach_sub.Scalar(Tensor[] tensors, Scalar scalar) -> (Tensor[]) 2022-05-18T03:33:21.3767770Z processing existing schema: aten::_foreach_sub.List(Tensor[] tensors1, Tensor[] tensors2, *, Scalar alpha=1) -> (Tensor[]) 2022-05-18T03:33:21.3770312Z processing existing schema: aten::_foreach_sub.ScalarList(Tensor[] tensors, Scalar[] scalars) -> (Tensor[]) 2022-05-18T03:33:21.3772220Z processing existing schema: aten::_foreach_sub_.Scalar(Tensor[] self, Scalar scalar) -> () 2022-05-18T03:33:21.3774688Z processing existing schema: aten::_foreach_sub_.List(Tensor[] self, Tensor[] other, *, Scalar alpha=1) -> () 2022-05-18T03:33:21.3776950Z processing existing schema: aten::_foreach_sub_.ScalarList(Tensor[] self, Scalar[] scalars) -> () 2022-05-18T03:33:21.3779226Z processing existing schema: aten::_foreach_mul.Scalar(Tensor[] tensors, Scalar scalar) -> (Tensor[]) 2022-05-18T03:33:21.3781792Z processing existing schema: aten::_foreach_mul.List(Tensor[] tensors1, Tensor[] tensors2) -> (Tensor[]) 2022-05-18T03:33:21.3784366Z processing existing schema: aten::_foreach_mul.ScalarList(Tensor[] tensors, Scalar[] scalars) -> (Tensor[]) 2022-05-18T03:33:21.3786420Z processing existing schema: aten::_foreach_mul_.Scalar(Tensor[] self, Scalar scalar) -> () 2022-05-18T03:33:21.3788663Z processing existing schema: aten::_foreach_mul_.List(Tensor[] self, Tensor[] other) -> () 2022-05-18T03:33:21.3790931Z processing existing schema: aten::_foreach_mul_.ScalarList(Tensor[] self, Scalar[] scalars) -> () 2022-05-18T03:33:21.3793227Z processing existing schema: aten::_foreach_div.Scalar(Tensor[] tensors, Scalar scalar) -> (Tensor[]) 2022-05-18T03:33:21.3795767Z processing existing schema: aten::_foreach_div.List(Tensor[] tensors1, Tensor[] tensors2) -> (Tensor[]) 2022-05-18T03:33:21.3798379Z processing existing schema: aten::_foreach_div.ScalarList(Tensor[] tensors, Scalar[] scalars) -> (Tensor[]) 2022-05-18T03:33:21.3800551Z processing existing schema: aten::_foreach_div_.Scalar(Tensor[] self, Scalar scalar) -> () 2022-05-18T03:33:21.3802754Z processing existing schema: aten::_foreach_div_.List(Tensor[] self, Tensor[] other) -> () 2022-05-18T03:33:21.3805084Z processing existing schema: aten::_foreach_div_.ScalarList(Tensor[] self, Scalar[] scalars) -> () 2022-05-18T03:33:21.3807355Z processing existing schema: aten::_foreach_exp(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3808787Z processing existing schema: aten::_foreach_zero_(Tensor[] self) 
-> () 2022-05-18T03:33:21.3810886Z processing existing schema: aten::_foreach_exp_(Tensor[] self) -> () 2022-05-18T03:33:21.3813227Z processing existing schema: aten::_foreach_sqrt(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3815101Z processing existing schema: aten::_foreach_sqrt_(Tensor[] self) -> () 2022-05-18T03:33:21.3817447Z processing existing schema: aten::_foreach_abs(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3819179Z processing existing schema: aten::_foreach_abs_(Tensor[] self) -> () 2022-05-18T03:33:21.3821459Z processing existing schema: aten::_foreach_acos(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3823269Z processing existing schema: aten::_foreach_acos_(Tensor[] self) -> () 2022-05-18T03:33:21.3825534Z processing existing schema: aten::_foreach_asin(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3827335Z processing existing schema: aten::_foreach_asin_(Tensor[] self) -> () 2022-05-18T03:33:21.3829563Z processing existing schema: aten::_foreach_atan(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3831482Z processing existing schema: aten::_foreach_atan_(Tensor[] self) -> () 2022-05-18T03:33:21.3833599Z processing existing schema: aten::_foreach_ceil(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3835352Z processing existing schema: aten::_foreach_ceil_(Tensor[] self) -> () 2022-05-18T03:33:21.3837563Z processing existing schema: aten::_foreach_cos(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3839566Z processing existing schema: aten::_foreach_cos_(Tensor[] self) -> () 2022-05-18T03:33:21.3841705Z processing existing schema: aten::_foreach_cosh(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3843485Z processing existing schema: aten::_foreach_cosh_(Tensor[] self) -> () 2022-05-18T03:33:21.3845745Z processing existing schema: aten::_foreach_erf(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3847619Z processing existing schema: aten::_foreach_erf_(Tensor[] self) -> () 2022-05-18T03:33:21.3849827Z processing existing schema: aten::_foreach_erfc(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3851617Z processing existing schema: aten::_foreach_erfc_(Tensor[] self) -> () 2022-05-18T03:33:21.3853911Z processing existing schema: aten::_foreach_expm1(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3855652Z processing existing schema: aten::_foreach_expm1_(Tensor[] self) -> () 2022-05-18T03:33:21.3857835Z processing existing schema: aten::_foreach_floor(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3859707Z processing existing schema: aten::_foreach_floor_(Tensor[] self) -> () 2022-05-18T03:33:21.3861941Z processing existing schema: aten::_foreach_log(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3863739Z processing existing schema: aten::_foreach_log_(Tensor[] self) -> () 2022-05-18T03:33:21.3866722Z processing existing schema: aten::_foreach_log10(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3868120Z processing existing schema: aten::_foreach_log10_(Tensor[] self) -> () 2022-05-18T03:33:21.3870858Z processing existing schema: aten::_foreach_log1p(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3872532Z processing existing schema: aten::_foreach_log1p_(Tensor[] self) -> () 2022-05-18T03:33:21.3875052Z processing existing schema: aten::_foreach_log2(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3877042Z processing existing schema: aten::_foreach_log2_(Tensor[] self) -> () 2022-05-18T03:33:21.3879630Z processing existing schema: aten::_foreach_neg(Tensor[] tensors) -> (Tensor[]) 
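A minimal Python sketch, assuming the standard torch._foreach_* eager bindings that expose the aten::_foreach_* schemas listed above (tensor names and shapes here are illustrative only):

    import torch

    # One fused call applies the op across a whole list of tensors, matching
    # aten::_foreach_sqrt(Tensor[] tensors) -> Tensor[] and
    # aten::_foreach_add_.Scalar(Tensor[] self, Scalar scalar) -> ().
    params = [torch.rand(3), torch.rand(5), torch.rand(2, 2)]
    roots = torch._foreach_sqrt(params)   # out-of-place variant returns a new list
    torch._foreach_add_(params, 1.0)      # in-place Scalar overload
    torch._foreach_add_(params, roots)    # in-place List overload (alpha defaults to 1)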
2022-05-18T03:33:21.3881193Z processing existing schema: aten::_foreach_neg_(Tensor[] self) -> () 2022-05-18T03:33:21.3883965Z processing existing schema: aten::_foreach_tan(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3885648Z processing existing schema: aten::_foreach_tan_(Tensor[] self) -> () 2022-05-18T03:33:21.3888316Z processing existing schema: aten::_foreach_tanh(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3889938Z processing existing schema: aten::_foreach_tanh_(Tensor[] self) -> () 2022-05-18T03:33:21.3892569Z processing existing schema: aten::_foreach_sin(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3894336Z processing existing schema: aten::_foreach_sin_(Tensor[] self) -> () 2022-05-18T03:33:21.3896917Z processing existing schema: aten::_foreach_sinh(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3898610Z processing existing schema: aten::_foreach_sinh_(Tensor[] self) -> () 2022-05-18T03:33:21.3901213Z processing existing schema: aten::_foreach_round(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3902936Z processing existing schema: aten::_foreach_round_(Tensor[] self) -> () 2022-05-18T03:33:21.3905645Z processing existing schema: aten::_foreach_lgamma(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3907347Z processing existing schema: aten::_foreach_lgamma_(Tensor[] self) -> () 2022-05-18T03:33:21.3909968Z processing existing schema: aten::_foreach_frac(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3911662Z processing existing schema: aten::_foreach_frac_(Tensor[] self) -> () 2022-05-18T03:33:21.3914330Z processing existing schema: aten::_foreach_reciprocal(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3916043Z processing existing schema: aten::_foreach_reciprocal_(Tensor[] self) -> () 2022-05-18T03:33:21.3918668Z processing existing schema: aten::_foreach_sigmoid(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3920538Z processing existing schema: aten::_foreach_sigmoid_(Tensor[] self) -> () 2022-05-18T03:33:21.3923100Z processing existing schema: aten::_foreach_trunc(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.3924786Z processing existing schema: aten::_foreach_trunc_(Tensor[] self) -> () 2022-05-18T03:33:21.3928189Z processing existing schema: aten::_foreach_addcdiv_.Scalar(Tensor[] self, Tensor[] tensor1, Tensor[] tensor2, Scalar value=1) -> () 2022-05-18T03:33:21.3931417Z processing existing schema: aten::_foreach_addcdiv_.ScalarList(Tensor[] self, Tensor[] tensor1, Tensor[] tensor2, Scalar[] scalars) -> () 2022-05-18T03:33:21.3934385Z processing existing schema: aten::_foreach_addcmul_.Scalar(Tensor[] self, Tensor[] tensor1, Tensor[] tensor2, Scalar value=1) -> () 2022-05-18T03:33:21.3937605Z processing existing schema: aten::_foreach_addcmul_.ScalarList(Tensor[] self, Tensor[] tensor1, Tensor[] tensor2, Scalar[] scalars) -> () 2022-05-18T03:33:21.3940887Z processing existing schema: aten::_foreach_addcdiv.Scalar(Tensor[] input, Tensor[] tensor1, Tensor[] tensor2, Scalar value=1) -> (Tensor[]) 2022-05-18T03:33:21.3944588Z processing existing schema: aten::_foreach_addcdiv.ScalarList(Tensor[] input, Tensor[] tensor1, Tensor[] tensor2, Scalar[] scalars) -> (Tensor[]) 2022-05-18T03:33:21.3947932Z processing existing schema: aten::_foreach_addcmul.Scalar(Tensor[] input, Tensor[] tensor1, Tensor[] tensor2, Scalar value=1) -> (Tensor[]) 2022-05-18T03:33:21.3951503Z processing existing schema: aten::_foreach_addcmul.ScalarList(Tensor[] input, Tensor[] tensor1, Tensor[] tensor2, Scalar[] scalars) -> (Tensor[]) 
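Likewise, a hedged sketch of the fused aten::_foreach_addcmul_.Scalar schema above, in the optimizer-style usage it is designed for (the state lists and beta2 value here are illustrative, not taken from this log):

    import torch

    beta2 = 0.999
    grads = [torch.rand(4), torch.rand(7)]
    exp_avg_sqs = [torch.zeros(4), torch.zeros(7)]
    # exp_avg_sqs[i] += (1 - beta2) * grads[i] * grads[i] in a single fused call;
    # the trailing positional argument is the Scalar value from
    # aten::_foreach_addcmul_.Scalar(Tensor[] self, Tensor[] tensor1, Tensor[] tensor2, Scalar value=1).
    torch._foreach_addcmul_(exp_avg_sqs, grads, grads, 1 - beta2)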
2022-05-18T03:33:21.3954266Z processing existing schema: aten::_foreach_maximum.List(Tensor[] tensors1, Tensor[] tensors2) -> (Tensor[]) 2022-05-18T03:33:21.3957079Z processing existing schema: aten::_foreach_minimum.List(Tensor[] tensors1, Tensor[] tensors2) -> (Tensor[]) 2022-05-18T03:33:21.3959852Z processing existing schema: aten::_foreach_norm.Scalar(Tensor[] tensors, Scalar ord=2) -> (Tensor[]) 2022-05-18T03:33:21.3962680Z processing existing schema: aten::searchsorted.Tensor(Tensor sorted_sequence, Tensor self, *, bool out_int32=False, bool right=False, str? side=None, Tensor? sorter=None) -> (Tensor) 2022-05-18T03:33:21.3965813Z processing existing schema: aten::searchsorted.Tensor_out(Tensor sorted_sequence, Tensor self, *, bool out_int32=False, bool right=False, str? side=None, Tensor? sorter=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3968411Z processing existing schema: aten::searchsorted.Scalar(Tensor sorted_sequence, Scalar self, *, bool out_int32=False, bool right=False, str? side=None, Tensor? sorter=None) -> (Tensor) 2022-05-18T03:33:21.3970003Z processing existing schema: aten::_convert_indices_from_coo_to_csr(Tensor self, int size, *, bool out_int32=False) -> (Tensor) 2022-05-18T03:33:21.3972858Z processing existing schema: aten::_convert_indices_from_coo_to_csr.out(Tensor self, int size, *, bool out_int32=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3974958Z processing existing schema: aten::_convert_indices_from_csr_to_coo(Tensor crow_indices, Tensor col_indices, *, bool out_int32=False, bool transpose=False) -> (Tensor) 2022-05-18T03:33:21.3977872Z processing existing schema: aten::_convert_indices_from_csr_to_coo.out(Tensor crow_indices, Tensor col_indices, *, bool out_int32=False, bool transpose=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.3979249Z processing existing schema: aten::mse_loss_backward(Tensor grad_output, Tensor self, Tensor target, int reduction) -> (Tensor) 2022-05-18T03:33:21.3982324Z processing existing schema: aten::mse_loss_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, int reduction, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.3984868Z processing existing schema: aten::l1_loss_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, int reduction, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.3986559Z processing existing schema: aten::l1_loss_backward(Tensor grad_output, Tensor self, Tensor target, int reduction) -> (Tensor) 2022-05-18T03:33:21.3989657Z processing existing schema: aten::multi_margin_loss_backward(Tensor grad_output, Tensor self, Tensor target, Scalar p, Scalar margin, Tensor? weight=None, int reduction=1) -> (Tensor) 2022-05-18T03:33:21.3992766Z processing existing schema: aten::multi_margin_loss_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, Scalar p, Scalar margin, Tensor? weight=None, int reduction=1, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.3994444Z processing existing schema: aten::multilabel_margin_loss_backward(Tensor grad_output, Tensor self, Tensor target, int reduction, Tensor is_target) -> (Tensor) 2022-05-18T03:33:21.3997610Z processing existing schema: aten::multilabel_margin_loss_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, int reduction, Tensor is_target, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.3999755Z processing existing schema: aten::nll_loss_forward(Tensor self, Tensor target, Tensor? 
weight, int reduction, int ignore_index) -> (Tensor output, Tensor total_weight) 2022-05-18T03:33:21.4003107Z processing existing schema: aten::nll_loss_forward.output(Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index, *, Tensor(a!) output, Tensor(b!) total_weight) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:21.4005304Z processing existing schema: aten::nll_loss_backward(Tensor grad_output, Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index, Tensor total_weight) -> (Tensor) 2022-05-18T03:33:21.4008394Z processing existing schema: aten::nll_loss_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index, Tensor total_weight, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4010406Z processing existing schema: aten::nll_loss2d_forward(Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index) -> (Tensor output, Tensor total_weight) 2022-05-18T03:33:21.4013828Z processing existing schema: aten::nll_loss2d_forward.output(Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index, *, Tensor(a!) output, Tensor(b!) total_weight) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:21.4015966Z processing existing schema: aten::nll_loss2d_backward(Tensor grad_output, Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index, Tensor total_weight) -> (Tensor) 2022-05-18T03:33:21.4019261Z processing existing schema: aten::nll_loss2d_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index, Tensor total_weight, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4021840Z processing existing schema: aten::smooth_l1_loss_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, int reduction, float beta, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4023585Z processing existing schema: aten::smooth_l1_loss_backward(Tensor grad_output, Tensor self, Tensor target, int reduction, float beta) -> (Tensor) 2022-05-18T03:33:21.4026775Z processing existing schema: aten::huber_loss_backward.out(Tensor grad_output, Tensor self, Tensor target, int reduction, float delta, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4028417Z processing existing schema: aten::huber_loss_backward(Tensor grad_output, Tensor self, Tensor target, int reduction, float delta) -> (Tensor) 2022-05-18T03:33:21.4031265Z processing existing schema: aten::elu_(Tensor(a!) self, Scalar alpha=1, Scalar scale=1, Scalar input_scale=1) -> (Tensor(a!)) 2022-05-18T03:33:21.4032593Z processing existing schema: aten::title(str self) -> (str) 2022-05-18T03:33:21.4035422Z processing existing schema: aten::elu_backward(Tensor grad_output, Scalar alpha, Scalar scale, Scalar input_scale, bool is_result, Tensor self_or_result) -> (Tensor) 2022-05-18T03:33:21.4038188Z processing existing schema: aten::elu_backward.grad_input(Tensor grad_output, Scalar alpha, Scalar scale, Scalar input_scale, bool is_result, Tensor self_or_result, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4039481Z processing existing schema: aten::center(str self, int width, str fillchar=" ") -> (str) 2022-05-18T03:33:21.4041912Z processing existing schema: aten::glu_backward(Tensor grad_output, Tensor self, int dim) -> (Tensor) 2022-05-18T03:33:21.4044321Z processing existing schema: aten::glu_backward.grad_input(Tensor grad_output, Tensor self, int dim, *, Tensor(a!) 
grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4045888Z processing existing schema: aten::glu_jvp(Tensor glu, Tensor x, Tensor dx, int dim) -> (Tensor) 2022-05-18T03:33:21.4047698Z processing existing schema: aten::hardsigmoid(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4050112Z processing existing schema: aten::hardsigmoid.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4051768Z processing existing schema: aten::hardsigmoid_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.4053910Z processing existing schema: aten::hardsigmoid_backward(Tensor grad_output, Tensor self) -> (Tensor) 2022-05-18T03:33:21.4056321Z processing existing schema: aten::hardsigmoid_backward.grad_input(Tensor grad_output, Tensor self, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4058158Z processing existing schema: aten::hardtanh(Tensor self, Scalar min_val=-1, Scalar max_val=1) -> (Tensor) 2022-05-18T03:33:21.4060888Z processing existing schema: aten::hardtanh.out(Tensor self, Scalar min_val=-1, Scalar max_val=1, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4063103Z processing existing schema: aten::hardtanh_(Tensor(a!) self, Scalar min_val=-1, Scalar max_val=1) -> (Tensor(a!)) 2022-05-18T03:33:21.4065520Z processing existing schema: aten::hardtanh_backward(Tensor grad_output, Tensor self, Scalar min_val, Scalar max_val) -> (Tensor) 2022-05-18T03:33:21.4067567Z processing existing schema: aten::hardtanh_backward.grad_input(Tensor grad_output, Tensor self, Scalar min_val, Scalar max_val, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4068254Z processing existing schema: aten::hardswish(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4070226Z processing existing schema: aten::hardswish.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4071674Z processing existing schema: aten::hardswish_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.4072983Z processing existing schema: aten::hardswish_backward(Tensor grad_output, Tensor self) -> (Tensor) 2022-05-18T03:33:21.4074807Z processing existing schema: aten::leaky_relu(Tensor self, Scalar negative_slope=0.01) -> (Tensor) 2022-05-18T03:33:21.4077286Z processing existing schema: aten::leaky_relu.out(Tensor self, Scalar negative_slope=0.01, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4079310Z processing existing schema: aten::leaky_relu_(Tensor(a!) self, Scalar negative_slope=0.01) -> (Tensor(a!)) 2022-05-18T03:33:21.4080934Z processing existing schema: aten::leaky_relu_backward(Tensor grad_output, Tensor self, Scalar negative_slope, bool self_is_result) -> (Tensor) 2022-05-18T03:33:21.4083210Z processing existing schema: aten::leaky_relu_backward.grad_input(Tensor grad_output, Tensor self, Scalar negative_slope, bool self_is_result, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4084885Z processing existing schema: aten::log_sigmoid_forward(Tensor self) -> (Tensor output, Tensor buffer) 2022-05-18T03:33:21.4086841Z processing existing schema: aten::log_sigmoid_forward.output(Tensor self, *, Tensor(a!) output, Tensor(b!) buffer) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:21.4088244Z processing existing schema: aten::log_sigmoid_backward(Tensor grad_output, Tensor self, Tensor buffer) -> (Tensor) 2022-05-18T03:33:21.4090420Z processing existing schema: aten::log_sigmoid_backward.grad_input(Tensor grad_output, Tensor self, Tensor buffer, *, Tensor(a!) 
grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4093127Z processing existing schema: aten::rrelu_with_noise(Tensor self, Tensor noise, Scalar lower=0.125, Scalar upper=0.33333333333333331, bool training=False, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:21.4096100Z processing existing schema: aten::rrelu_with_noise.out(Tensor self, Tensor noise, Scalar lower=0.125, Scalar upper=0.33333333333333331, bool training=False, Generator? generator=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4098845Z processing existing schema: aten::rrelu_with_noise_(Tensor(a!) self, Tensor noise, Scalar lower=0.125, Scalar upper=0.33333333333333331, bool training=False, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:21.4100407Z processing existing schema: aten::softplus_backward(Tensor grad_output, Tensor self, Scalar beta, Scalar threshold) -> (Tensor) 2022-05-18T03:33:21.4102224Z processing existing schema: aten::softplus_backward.grad_input(Tensor grad_output, Tensor self, Scalar beta, Scalar threshold, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4103871Z processing existing schema: aten::softshrink(Tensor self, Scalar lambd=0.5) -> (Tensor) 2022-05-18T03:33:21.4106177Z processing existing schema: aten::softshrink.out(Tensor self, Scalar lambd=0.5, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4107718Z processing existing schema: aten::softshrink_backward(Tensor grad_output, Tensor self, Scalar lambd) -> (Tensor) 2022-05-18T03:33:21.4109634Z processing existing schema: aten::softshrink_backward.grad_input(Tensor grad_output, Tensor self, Scalar lambd, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4111404Z processing existing schema: aten::adaptive_avg_pool2d.out(Tensor self, int[2] output_size, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4112860Z processing existing schema: aten::adaptive_avg_pool2d(Tensor self, int[2] output_size) -> (Tensor) 2022-05-18T03:33:21.4114269Z processing existing schema: aten::_adaptive_avg_pool2d(Tensor self, int[2] output_size) -> (Tensor) 2022-05-18T03:33:21.4115614Z processing existing schema: aten::_adaptive_avg_pool2d_backward(Tensor grad_output, Tensor self) -> (Tensor) 2022-05-18T03:33:21.4117928Z processing existing schema: aten::_adaptive_avg_pool3d(Tensor self, int[3] output_size) -> (Tensor) 2022-05-18T03:33:21.4118524Z schema: aten::adaptive_avg_pool3d_backward.grad_input(Tensor grad_output, Tensor self, *, Tensor(a!) grad_input) -> (Tensor(a!)) found on allowlist, skipping 2022-05-18T03:33:21.4119513Z processing existing schema: aten::_adaptive_avg_pool3d_backward(Tensor grad_output, Tensor self) -> (Tensor) 2022-05-18T03:33:21.4121245Z processing existing schema: aten::adaptive_max_pool2d(Tensor self, int[2] output_size) -> (Tensor, Tensor) 2022-05-18T03:33:21.4123523Z processing existing schema: aten::adaptive_max_pool2d.out(Tensor self, int[2] output_size, *, Tensor(a!) out, Tensor(b!) indices) -> (Tensor(a!), Tensor(b!)) 2022-05-18T03:33:21.4124610Z processing existing schema: aten::adaptive_max_pool2d_backward(Tensor grad_output, Tensor self, Tensor indices) -> (Tensor) 2022-05-18T03:33:21.4126735Z processing existing schema: aten::adaptive_max_pool2d_backward.grad_input(Tensor grad_output, Tensor self, Tensor indices, *, Tensor(a!) 
grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4127921Z processing existing schema: aten::adaptive_max_pool3d_backward(Tensor grad_output, Tensor self, Tensor indices) -> (Tensor) 2022-05-18T03:33:21.4130124Z processing existing schema: aten::adaptive_max_pool3d_backward.grad_input(Tensor grad_output, Tensor self, Tensor indices, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4130481Z schema: static_runtime::dict_unpack(...) -> (...) found on allowlist, skipping 2022-05-18T03:33:21.4132116Z processing existing schema: aten::fractional_max_pool2d_backward(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] output_size, Tensor indices) -> (Tensor) 2022-05-18T03:33:21.4134464Z processing existing schema: aten::fractional_max_pool2d_backward.grad_input(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] output_size, Tensor indices, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4136160Z processing existing schema: aten::fractional_max_pool3d_backward(Tensor grad_output, Tensor self, int[3] kernel_size, int[3] output_size, Tensor indices) -> (Tensor) 2022-05-18T03:33:21.4138470Z processing existing schema: aten::fractional_max_pool3d_backward.grad_input(Tensor grad_output, Tensor self, int[3] kernel_size, int[3] output_size, Tensor indices, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4140689Z processing existing schema: aten::max_pool2d_with_indices_backward(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] stride, int[2] padding, int[2] dilation, bool ceil_mode, Tensor indices) -> (Tensor) 2022-05-18T03:33:21.4143437Z processing existing schema: aten::max_pool2d_with_indices_backward.grad_input(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] stride, int[2] padding, int[2] dilation, bool ceil_mode, Tensor indices, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4145684Z processing existing schema: aten::max_pool3d_with_indices_backward(Tensor grad_output, Tensor self, int[3] kernel_size, int[3] stride, int[3] padding, int[3] dilation, bool ceil_mode, Tensor indices) -> (Tensor) 2022-05-18T03:33:21.4148415Z processing existing schema: aten::max_pool3d_with_indices_backward.grad_input(Tensor grad_output, Tensor self, int[3] kernel_size, int[3] stride, int[3] padding, int[3] dilation, bool ceil_mode, Tensor indices, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4149821Z processing existing schema: aten::reflection_pad1d_backward(Tensor grad_output, Tensor self, int[2] padding) -> (Tensor) 2022-05-18T03:33:21.4151988Z processing existing schema: aten::reflection_pad1d_backward.grad_input(Tensor grad_output, Tensor self, int[2] padding, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4153405Z processing existing schema: aten::reflection_pad2d_backward(Tensor grad_output, Tensor self, int[4] padding) -> (Tensor) 2022-05-18T03:33:21.4155457Z processing existing schema: aten::reflection_pad2d_backward.grad_input(Tensor grad_output, Tensor self, int[4] padding, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4156844Z processing existing schema: aten::reflection_pad3d(Tensor self, int[6] padding) -> (Tensor) 2022-05-18T03:33:21.4158793Z processing existing schema: aten::reflection_pad3d.out(Tensor self, int[6] padding, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.4160467Z processing existing schema: aten::reflection_pad3d_backward(Tensor grad_output, Tensor self, int[6] padding) -> (Tensor) 2022-05-18T03:33:21.4162473Z processing existing schema: aten::reflection_pad3d_backward.grad_input(Tensor grad_output, Tensor self, int[6] padding, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4164571Z processing existing schema: aten::replication_pad1d_backward(Tensor grad_output, Tensor self, int[2] padding) -> (Tensor) 2022-05-18T03:33:21.4166329Z processing existing schema: aten::replication_pad1d_backward.grad_input(Tensor grad_output, Tensor self, int[2] padding, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4167747Z processing existing schema: aten::replication_pad2d_backward(Tensor grad_output, Tensor self, int[4] padding) -> (Tensor) 2022-05-18T03:33:21.4169680Z processing existing schema: aten::replication_pad2d_backward.grad_input(Tensor grad_output, Tensor self, int[4] padding, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4171082Z processing existing schema: aten::replication_pad3d_backward(Tensor grad_output, Tensor self, int[6] padding) -> (Tensor) 2022-05-18T03:33:21.4173132Z processing existing schema: aten::replication_pad3d_backward.grad_input(Tensor grad_output, Tensor self, int[6] padding, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4175558Z processing existing schema: aten::upsample_nearest3d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.4177920Z processing existing schema: aten::upsample_nearest3d_backward(Tensor grad_output, int[3] output_size, int[5] input_size, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:21.4180652Z processing existing schema: aten::upsample_nearest3d_backward.grad_input(Tensor grad_output, int[3] output_size, int[5] input_size, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4183157Z processing existing schema: aten::_upsample_nearest_exact3d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.4185728Z processing existing schema: aten::_upsample_nearest_exact3d_backward(Tensor grad_output, int[3] output_size, int[5] input_size, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:21.4188401Z processing existing schema: aten::_upsample_nearest_exact3d_backward.grad_input(Tensor grad_output, int[3] output_size, int[5] input_size, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4190393Z processing existing schema: aten::upsample_linear1d_backward(Tensor grad_output, int[1] output_size, int[3] input_size, bool align_corners, float? scales=None) -> (Tensor) 2022-05-18T03:33:21.4192892Z processing existing schema: aten::upsample_linear1d_backward.grad_input(Tensor grad_output, int[1] output_size, int[3] input_size, bool align_corners, float? scales=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4195579Z processing existing schema: aten::upsample_linear1d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, bool align_corners, float[]? 
scale_factors) -> (Tensor) 2022-05-18T03:33:21.4197793Z processing existing schema: aten::upsample_bilinear2d_backward(Tensor grad_output, int[2] output_size, int[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:21.4200653Z processing existing schema: aten::upsample_bilinear2d_backward.grad_input(Tensor grad_output, int[2] output_size, int[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4203152Z processing existing schema: aten::upsample_bilinear2d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.4205119Z processing existing schema: aten::_upsample_bilinear2d_aa(Tensor self, int[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:21.4207477Z processing existing schema: aten::_upsample_bilinear2d_aa.out(Tensor self, int[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4209549Z processing existing schema: aten::_upsample_bilinear2d_aa.vec(Tensor input, int[]? output_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.4211685Z processing existing schema: aten::_upsample_bilinear2d_aa_backward(Tensor grad_output, int[2] output_size, int[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:21.4214205Z processing existing schema: aten::_upsample_bilinear2d_aa_backward.grad_input(Tensor grad_output, int[2] output_size, int[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4216685Z processing existing schema: aten::_upsample_bilinear2d_aa_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.4218599Z processing existing schema: aten::upsample_bicubic2d(Tensor self, int[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:21.4220862Z processing existing schema: aten::upsample_bicubic2d.out(Tensor self, int[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4223007Z processing existing schema: aten::upsample_bicubic2d.vec(Tensor input, int[]? output_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.4225302Z processing existing schema: aten::upsample_bicubic2d_backward(Tensor grad_output, int[2] output_size, int[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:21.4227869Z processing existing schema: aten::upsample_bicubic2d_backward.grad_input(Tensor grad_output, int[2] output_size, int[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4230334Z processing existing schema: aten::upsample_bicubic2d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.4232201Z processing existing schema: aten::_upsample_bicubic2d_aa(Tensor self, int[2] output_size, bool align_corners, float? scales_h=None, float? 
scales_w=None) -> (Tensor) 2022-05-18T03:33:21.4234499Z processing existing schema: aten::_upsample_bicubic2d_aa.out(Tensor self, int[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4236619Z processing existing schema: aten::_upsample_bicubic2d_aa.vec(Tensor input, int[]? output_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.4238757Z processing existing schema: aten::_upsample_bicubic2d_aa_backward(Tensor grad_output, int[2] output_size, int[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:21.4241482Z processing existing schema: aten::_upsample_bicubic2d_aa_backward.grad_input(Tensor grad_output, int[2] output_size, int[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4244028Z processing existing schema: aten::_upsample_bicubic2d_aa_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.4246213Z processing existing schema: aten::upsample_trilinear3d_backward(Tensor grad_output, int[3] output_size, int[5] input_size, bool align_corners, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:21.4248954Z processing existing schema: aten::upsample_trilinear3d_backward.grad_input(Tensor grad_output, int[3] output_size, int[5] input_size, bool align_corners, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4251364Z processing existing schema: aten::upsample_trilinear3d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, bool align_corners, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.4253195Z processing existing schema: aten::upsample_nearest1d_backward(Tensor grad_output, int[1] output_size, int[3] input_size, float? scales=None) -> (Tensor) 2022-05-18T03:33:21.4255450Z processing existing schema: aten::upsample_nearest1d_backward.grad_input(Tensor grad_output, int[1] output_size, int[3] input_size, float? scales=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4257881Z processing existing schema: aten::upsample_nearest1d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.4259729Z processing existing schema: aten::_upsample_nearest_exact1d_backward(Tensor grad_output, int[1] output_size, int[3] input_size, float? scales=None) -> (Tensor) 2022-05-18T03:33:21.4262000Z processing existing schema: aten::_upsample_nearest_exact1d_backward.grad_input(Tensor grad_output, int[1] output_size, int[3] input_size, float? scales=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4264528Z processing existing schema: aten::_upsample_nearest_exact1d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.4266577Z processing existing schema: aten::upsample_nearest2d_backward(Tensor grad_output, int[2] output_size, int[4] input_size, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:21.4269086Z processing existing schema: aten::upsample_nearest2d_backward.grad_input(Tensor grad_output, int[2] output_size, int[4] input_size, float? scales_h=None, float? scales_w=None, *, Tensor(a!) 
grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4271481Z processing existing schema: aten::upsample_nearest2d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.4273512Z processing existing schema: aten::_upsample_nearest_exact2d_backward(Tensor grad_output, int[2] output_size, int[4] input_size, float? scales_h=None, float? scales_w=None) -> (Tensor) 2022-05-18T03:33:21.4275972Z processing existing schema: aten::_upsample_nearest_exact2d_backward.grad_input(Tensor grad_output, int[2] output_size, int[4] input_size, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4278348Z processing existing schema: aten::_upsample_nearest_exact2d_backward.vec(Tensor grad_output, int[]? output_size, int[] input_size, float[]? scale_factors) -> (Tensor) 2022-05-18T03:33:21.4280232Z processing existing schema: aten::logit_backward(Tensor grad_output, Tensor self, float? eps=None) -> (Tensor) 2022-05-18T03:33:21.4282127Z processing existing schema: aten::logit_backward.grad_input(Tensor grad_output, Tensor self, float? eps=None, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4283277Z processing existing schema: aten::tanh_backward(Tensor grad_output, Tensor output) -> (Tensor) 2022-05-18T03:33:21.4285033Z processing existing schema: aten::tanh_backward.grad_input(Tensor grad_output, Tensor output, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4288205Z processing existing schema: aten::slow_conv_transpose2d(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias=None, int[2] stride=[1, 1], int[2] padding=[0, 0], int[2] output_padding=[0, 0], int[2] dilation=[1, 1]) -> (Tensor) 2022-05-18T03:33:21.4291677Z processing existing schema: aten::slow_conv_transpose2d.out(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias=None, int[2] stride=[1, 1], int[2] padding=[0, 0], int[2] output_padding=[0, 0], int[2] dilation=[1, 1], *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4294931Z processing existing schema: aten::slow_conv_transpose3d(Tensor self, Tensor weight, int[3] kernel_size, Tensor? bias=None, int[3] stride=[1, 1, 1], int[3] padding=[0, 0, 0], int[3] output_padding=[0, 0, 0], int[3] dilation=[1, 1, 1]) -> (Tensor) 2022-05-18T03:33:21.4298653Z processing existing schema: aten::slow_conv_transpose3d.out(Tensor self, Tensor weight, int[3] kernel_size, Tensor? bias=None, int[3] stride=[1, 1, 1], int[3] padding=[0, 0, 0], int[3] output_padding=[0, 0, 0], int[3] dilation=[1, 1, 1], *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4300489Z processing existing schema: aten::_slow_conv2d_forward(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias, int[2] stride, int[2] padding) -> (Tensor) 2022-05-18T03:33:21.4302948Z processing existing schema: aten::_slow_conv2d_forward.output(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias, int[2] stride, int[2] padding, *, Tensor(a!) output) -> (Tensor(a!)) 2022-05-18T03:33:21.4306779Z processing existing schema: aten::_slow_conv2d_backward.grad_input(Tensor grad_output, Tensor self, Tensor weight, int[2] kernel_size, int[2] stride, int[2] padding, *, Tensor(a!) grad_input, Tensor(b!) grad_weight, Tensor(c!) 
grad_bias) -> (Tensor(a!), Tensor(b!), Tensor(c!)) 2022-05-18T03:33:21.4309183Z processing existing schema: aten::_slow_conv2d_backward.output_mask(Tensor grad_output, Tensor self, Tensor weight, int[2] kernel_size, int[2] stride, int[2] padding, bool[3] output_mask) -> (Tensor grad_input, Tensor grad_weight, Tensor grad_bias) 2022-05-18T03:33:21.4311001Z processing existing schema: aten::slow_conv3d_forward(Tensor self, Tensor weight, int[3] kernel_size, Tensor? bias, int[3] stride, int[3] padding) -> (Tensor) 2022-05-18T03:33:21.4313578Z processing existing schema: aten::slow_conv3d_forward.output(Tensor self, Tensor weight, int[3] kernel_size, Tensor? bias, int[3] stride, int[3] padding, *, Tensor(a!) output) -> (Tensor(a!)) 2022-05-18T03:33:21.4316310Z processing existing schema: aten::slow_conv_dilated2d(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias=None, int[2] stride=[1, 1], int[2] padding=[0, 0], int[2] dilation=[1, 1]) -> (Tensor) 2022-05-18T03:33:21.4319333Z processing existing schema: aten::slow_conv_dilated3d(Tensor self, Tensor weight, int[3] kernel_size, Tensor? bias=None, int[3] stride=[1, 1, 1], int[3] padding=[0, 0, 0], int[3] dilation=[1, 1, 1]) -> (Tensor) 2022-05-18T03:33:21.4321209Z processing existing schema: aten::im2col(Tensor self, int[2] kernel_size, int[2] dilation, int[2] padding, int[2] stride) -> (Tensor) 2022-05-18T03:33:21.4323992Z processing existing schema: aten::im2col.out(Tensor self, int[2] kernel_size, int[2] dilation, int[2] padding, int[2] stride, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4325924Z processing existing schema: aten::im2col_backward(Tensor grad_output, int[2] input_size, int[2] kernel_size, int[2] dilation, int[2] padding, int[2] stride) -> (Tensor) 2022-05-18T03:33:21.4328698Z processing existing schema: aten::im2col_backward.grad_input(Tensor grad_output, int[2] input_size, int[2] kernel_size, int[2] dilation, int[2] padding, int[2] stride, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4329628Z processing existing schema: aten::isposinf(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4331565Z processing existing schema: aten::isposinf.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4333049Z processing existing schema: aten::isneginf(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4334887Z processing existing schema: aten::isneginf.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4336406Z processing existing schema: aten::special_entr(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4338180Z processing existing schema: aten::special_entr.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4339656Z processing existing schema: aten::special_ndtri(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4341489Z processing existing schema: aten::special_ndtri.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4343034Z processing existing schema: aten::special_log_ndtr(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4344967Z processing existing schema: aten::special_log_ndtr.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4346377Z processing existing schema: aten::special_erfcx(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4348085Z processing existing schema: aten::special_erfcx.out(Tensor self, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.4349561Z processing existing schema: aten::special_xlog1py(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.4351410Z processing existing schema: aten::special_xlog1py.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4353296Z processing existing schema: aten::special_xlog1py.self_scalar(Scalar self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.4355005Z processing existing schema: aten::special_xlog1py.self_scalar_out(Scalar self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4356606Z processing existing schema: aten::special_xlog1py.other_scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.4358189Z processing existing schema: aten::special_xlog1py.other_scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4359723Z processing existing schema: aten::special_zeta(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.4361522Z processing existing schema: aten::special_zeta.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4362944Z processing existing schema: aten::special_zeta.self_scalar(Scalar self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.4364854Z processing existing schema: aten::special_zeta.self_scalar_out(Scalar self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4366265Z processing existing schema: aten::special_zeta.other_scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.4368243Z processing existing schema: aten::special_zeta.other_scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4369517Z processing existing schema: aten::special_i0e(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4371294Z processing existing schema: aten::special_i0e.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4372734Z processing existing schema: aten::special_i1(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4374416Z processing existing schema: aten::special_i1.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4376049Z processing existing schema: aten::special_i1e(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4377599Z processing existing schema: aten::special_i1e.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4379151Z processing existing schema: aten::linalg_cross(Tensor self, Tensor other, *, int dim=-1) -> (Tensor) 2022-05-18T03:33:21.4381155Z processing existing schema: aten::linalg_cross.out(Tensor self, Tensor other, *, int dim=-1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4383068Z processing existing schema: aten::linalg_lu_factor_ex(Tensor A, *, bool pivot=True, bool check_errors=False) -> (Tensor LU, Tensor pivots, Tensor info) 2022-05-18T03:33:21.4386159Z processing existing schema: aten::linalg_lu_factor_ex.out(Tensor A, *, bool pivot=True, bool check_errors=False, Tensor(a!) LU, Tensor(b!) pivots, Tensor(c!) info) -> (Tensor(a!) LU, Tensor(b!) pivots, Tensor(c!) info) 2022-05-18T03:33:21.4387811Z processing existing schema: aten::linalg_lu(Tensor A, *, bool pivot=True) -> (Tensor P, Tensor L, Tensor U) 2022-05-18T03:33:21.4390387Z processing existing schema: aten::linalg_lu.out(Tensor A, *, bool pivot=True, Tensor(a!) P, Tensor(b!) L, Tensor(c!) U) -> (Tensor(a!) P, Tensor(b!) L, Tensor(c!) 
U) 2022-05-18T03:33:21.4391976Z processing existing schema: aten::_det_lu_based_helper(Tensor self) -> (Tensor det, Tensor lu, Tensor pivs) 2022-05-18T03:33:21.4393643Z processing existing schema: aten::_det_lu_based_helper_backward_helper(Tensor det_grad, Tensor det, Tensor self, Tensor lu, Tensor pivs) -> (Tensor) 2022-05-18T03:33:21.4395452Z processing existing schema: aten::linalg_ldl_factor_ex(Tensor self, *, bool hermitian=False, bool check_errors=False) -> (Tensor LD, Tensor pivots, Tensor info) 2022-05-18T03:33:21.4398402Z processing existing schema: aten::linalg_ldl_factor_ex.out(Tensor self, *, bool hermitian=False, bool check_errors=False, Tensor(a!) LD, Tensor(b!) pivots, Tensor(c!) info) -> (Tensor(a!) LD, Tensor(b!) pivots, Tensor(c!) info) 2022-05-18T03:33:21.4400074Z processing existing schema: aten::linalg_ldl_solve(Tensor LD, Tensor pivots, Tensor B, *, bool hermitian=False) -> (Tensor) 2022-05-18T03:33:21.4402024Z processing existing schema: aten::linalg_ldl_solve.out(Tensor LD, Tensor pivots, Tensor B, *, bool hermitian=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4403351Z processing existing schema: aten::linalg_matrix_exp(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4404765Z processing existing schema: aten::linalg_slogdet(Tensor self) -> (Tensor sign, Tensor logabsdet) 2022-05-18T03:33:21.4407084Z processing existing schema: aten::linalg_slogdet.out(Tensor self, *, Tensor(a!) sign, Tensor(b!) logabsdet) -> (Tensor(a!) sign, Tensor(b!) logabsdet) 2022-05-18T03:33:21.4409078Z processing existing schema: aten::_linalg_inv_out_helper_(Tensor(a!) self, Tensor(b!) infos_lu, Tensor(c!) infos_getri) -> (Tensor(a!)) 2022-05-18T03:33:21.4411249Z processing existing schema: aten::linalg_vector_norm(Tensor self, Scalar ord=2, int[1]? dim=None, bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.4413795Z processing existing schema: aten::linalg_vector_norm.out(Tensor self, Scalar ord=2, int[1]? dim=None, bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4415553Z processing existing schema: aten::_linalg_svd(Tensor A, bool full_matrices=False, bool compute_uv=True) -> (Tensor U, Tensor S, Tensor Vh) 2022-05-18T03:33:21.4418522Z processing existing schema: aten::_linalg_svd.U(Tensor A, bool full_matrices=False, bool compute_uv=True, *, Tensor(a!) U, Tensor(b!) S, Tensor(c!) Vh) -> (Tensor(a!) U, Tensor(b!) S, Tensor(c!) Vh) 2022-05-18T03:33:21.4419905Z processing existing schema: aten::_linalg_qr_helper(Tensor self, str mode) -> (Tensor, Tensor) 2022-05-18T03:33:21.4421692Z processing existing schema: aten::_test_optional_intlist(Tensor values, int[]? addends) -> (Tensor) 2022-05-18T03:33:21.4423284Z processing existing schema: aten::_test_optional_filled_intlist(Tensor values, int[2]? addends) -> (Tensor) 2022-05-18T03:33:21.4425163Z processing existing schema: aten::_test_optional_floatlist(Tensor values, float[]? addends) -> (Tensor) 2022-05-18T03:33:21.4427620Z processing existing schema: aten::segment_reduce(Tensor data, str reduce, *, Tensor? lengths=None, Tensor? indices=None, int axis=0, bool unsafe=False, Scalar? initial=None) -> (Tensor) 2022-05-18T03:33:21.4429611Z processing existing schema: aten::_segment_reduce_backward(Tensor grad, Tensor output, Tensor data, str reduce, *, Tensor? 
lengths=None, int axis=0) -> (Tensor) 2022-05-18T03:33:21.4433014Z processing existing schema: aten::_transformer_encoder_layer_fwd(Tensor src, int embed_dim, int num_heads, Tensor qkv_weight, Tensor qkv_bias, Tensor proj_weight, Tensor proj_bias, bool use_gelu, bool norm_first, float eps, Tensor norm_weight_1, Tensor norm_bias_1, Tensor norm_weight_2, Tensor norm_bias_2, Tensor ffn_weight_1, Tensor ffn_bias_1, Tensor ffn_weight_2, Tensor ffn_bias_2, Tensor? mask=None) -> (Tensor) 2022-05-18T03:33:21.4435519Z processing existing schema: aten::_native_multi_head_attention(Tensor query, Tensor key, Tensor value, int embed_dim, int num_head, Tensor qkv_weight, Tensor qkv_bias, Tensor proj_weight, Tensor proj_bias, Tensor? mask=None, bool need_weights=True, bool average_attn_weights=True) -> (Tensor, Tensor) 2022-05-18T03:33:21.4436670Z processing existing schema: aten::_neg_view(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.4438582Z processing existing schema: aten::diag_embed(Tensor self, int offset=0, int dim1=-2, int dim2=-1) -> (Tensor) 2022-05-18T03:33:21.4440713Z processing existing schema: aten::extend.t(t[](a!) self, t[] other) -> () 2022-05-18T03:33:21.4442822Z processing existing schema: aten::embedding(Tensor weight, Tensor indices, int padding_idx=-1, bool scale_grad_by_freq=False, bool sparse=False) -> (Tensor) 2022-05-18T03:33:21.4444392Z processing existing schema: aten::count(str self, str substr, int start=0, int end=-1) -> (int) 2022-05-18T03:33:21.4446106Z processing existing schema: aten::count.int(int[] self, int el) -> (int) 2022-05-18T03:33:21.4447875Z processing existing schema: aten::count.float(float[] self, float el) -> (int) 2022-05-18T03:33:21.4449652Z processing existing schema: aten::count.bool(bool[] self, bool el) -> (int) 2022-05-18T03:33:21.4451509Z processing existing schema: aten::count.Tensor(Tensor[] self, Tensor el) -> (int) 2022-05-18T03:33:21.4453222Z processing existing schema: aten::count.str(str[] self, str el) -> (int) 2022-05-18T03:33:21.4454747Z processing existing schema: aten::fill.Scalar(Tensor self, Scalar value) -> (Tensor) 2022-05-18T03:33:21.4456253Z processing existing schema: aten::fill.Tensor(Tensor self, Tensor value) -> (Tensor) 2022-05-18T03:33:21.4458797Z processing existing schema: aten::index_put_(Tensor(a!) self, Tensor?[] indices, Tensor values, bool accumulate=False) -> (Tensor(a!)) 2022-05-18T03:33:21.4461058Z processing existing schema: aten::index_put_.hacked_twin(Tensor(a!) self, Tensor[] indices, Tensor values, bool accumulate=False) -> (Tensor(a!)) 2022-05-18T03:33:21.4463194Z processing existing schema: aten::nan_to_num_(Tensor(a!) self, float? nan=None, float? posinf=None, float? neginf=None) -> (Tensor(a!)) 2022-05-18T03:33:21.4464823Z processing existing schema: aten::logdet(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4467520Z processing existing schema: aten::mkldnn_convolution(Tensor self, Tensor weight, Tensor? bias, int[] padding, int[] stride, int[] dilation, int groups) -> (Tensor) 2022-05-18T03:33:21.4469149Z processing existing schema: aten::mvlgamma_(Tensor(a!) self, int p) -> (Tensor(a!)) 2022-05-18T03:33:21.4471454Z processing existing schema: aten::_nnpack_spatial_convolution(Tensor input, Tensor weight, Tensor? 
bias, int[2] padding, int[2] stride=[1, 1]) -> (Tensor) 2022-05-18T03:33:21.4472893Z processing existing schema: aten::_euclidean_dist(Tensor x1, Tensor x2) -> (Tensor) 2022-05-18T03:33:21.4474617Z processing existing schema: aten::repeat(Tensor self, int[] repeats) -> (Tensor) 2022-05-18T03:33:21.4476651Z processing existing schema: aten::slice_scatter(Tensor self, Tensor src, int dim=0, int? start=None, int? end=None, int step=1) -> (Tensor) 2022-05-18T03:33:21.4478284Z processing existing schema: aten::select_scatter(Tensor self, Tensor src, int dim, int index) -> (Tensor) 2022-05-18T03:33:21.4480255Z processing existing schema: aten::diagonal_scatter(Tensor self, Tensor src, int offset=0, int dim1=0, int dim2=1) -> (Tensor) 2022-05-18T03:33:21.4481645Z processing existing schema: aten::isdigit(str self) -> (bool) 2022-05-18T03:33:21.4483147Z processing existing schema: aten::slogdet(Tensor self) -> (Tensor sign, Tensor logabsdet) 2022-05-18T03:33:21.4485193Z processing existing schema: aten::rot90(Tensor self, int k=1, int[] dims=[0, 1]) -> (Tensor) 2022-05-18T03:33:21.4488407Z processing existing schema: aten::_trilinear(Tensor i1, Tensor i2, Tensor i3, int[] expand1, int[] expand2, int[] expand3, int[] sumdim, int unroll_dim=1) -> (Tensor) 2022-05-18T03:33:21.4489910Z processing existing schema: aten::_sparse_sum.dim(Tensor self, int[1] dim) -> (Tensor) 2022-05-18T03:33:21.4491291Z processing existing schema: aten::_sparse_sum(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4492823Z processing existing schema: aten::_sparse_sum.dtype(Tensor self, *, int dtype) -> (Tensor) 2022-05-18T03:33:21.4494366Z processing existing schema: aten::_sparse_sum.dim_dtype(Tensor self, int[1] dim, *, int dtype) -> (Tensor) 2022-05-18T03:33:21.4495221Z schema: aten::_sparse_addmm(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.4496534Z processing existing schema: aten::_pack_padded_sequence(Tensor input, Tensor lengths, bool batch_first) -> (Tensor, Tensor) 2022-05-18T03:33:21.4498058Z processing existing schema: aten::masked_scatter(Tensor self, Tensor mask, Tensor source) -> (Tensor) 2022-05-18T03:33:21.4499664Z processing existing schema: aten::_linalg_check_errors(Tensor info, str api_name, *, bool is_matrix) -> () 2022-05-18T03:33:21.4501246Z processing existing schema: aten::soft_margin_loss_backward(Tensor grad_output, Tensor self, Tensor target, int reduction) -> (Tensor) 2022-05-18T03:33:21.4503325Z processing existing schema: aten::soft_margin_loss_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, int reduction, *, Tensor(a!) grad_input) -> (Tensor(a!)) 2022-05-18T03:33:21.4505300Z processing existing schema: aten::rrelu_with_noise_backward(Tensor grad_output, Tensor self, Tensor noise, Scalar lower, Scalar upper, bool training, bool self_is_result) -> (Tensor) 2022-05-18T03:33:21.4507155Z processing existing schema: aten::linalg_pinv.atol_rtol_tensor(Tensor self, *, Tensor? atol=None, Tensor? rtol=None, bool hermitian=False) -> (Tensor) 2022-05-18T03:33:21.4509494Z processing existing schema: aten::linalg_pinv.atol_rtol_tensor_out(Tensor self, *, Tensor? atol=None, Tensor? rtol=None, bool hermitian=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4511372Z processing existing schema: aten::linalg_pinv.atol_rtol_float(Tensor self, *, float? atol=None, float? 
rtol=None, bool hermitian=False) -> (Tensor) 2022-05-18T03:33:21.4513732Z processing existing schema: aten::linalg_pinv.atol_rtol_float_out(Tensor self, *, float? atol=None, float? rtol=None, bool hermitian=False, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4515318Z processing existing schema: aten::linalg_pinv(Tensor self, float rcond, bool hermitian=False) -> (Tensor) 2022-05-18T03:33:21.4517419Z processing existing schema: aten::linalg_pinv.out(Tensor self, float rcond, bool hermitian=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4519040Z processing existing schema: aten::linalg_pinv.rcond_tensor(Tensor self, Tensor rcond, bool hermitian=False) -> (Tensor) 2022-05-18T03:33:21.4521155Z processing existing schema: aten::linalg_pinv.out_rcond_tensor(Tensor self, Tensor rcond, bool hermitian=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4522527Z processing existing schema: aten::_test_warn_in_autograd(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4523912Z processing existing schema: aten::_fw_primal_copy(Tensor self, int level) -> (Tensor) 2022-05-18T03:33:21.4525439Z processing existing schema: aten::_make_dual_copy(Tensor primal, Tensor tangent, int level) -> (Tensor) 2022-05-18T03:33:21.4526915Z processing existing schema: aten::view_as_real_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4528293Z processing existing schema: aten::view_as_complex_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4529593Z processing existing schema: aten::_conj_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4530912Z processing existing schema: aten::_neg_view_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4532778Z processing existing schema: aten::_sparse_broadcast_to_copy(Tensor self, int[] size) -> (Tensor) 2022-05-18T03:33:21.4534543Z processing existing schema: aten::diagonal_copy(Tensor self, int offset=0, int dim1=0, int dim2=1) -> (Tensor) 2022-05-18T03:33:21.4536167Z processing existing schema: prim::data(Tensor(a) a) -> (Tensor(a)) 2022-05-18T03:33:21.4538174Z processing existing schema: aten::expand_copy(Tensor self, int[] size, *, bool implicit=False) -> (Tensor) 2022-05-18T03:33:21.4540263Z processing existing schema: aten::expand_copy.SymInt(Tensor self, SymInt[] size, *, bool implicit=False) -> (Tensor) 2022-05-18T03:33:21.4541490Z processing existing schema: prim::is_quantized(Tensor a) -> (bool) 2022-05-18T03:33:21.4543316Z processing existing schema: aten::permute_copy(Tensor self, int[] dims) -> (Tensor) 2022-05-18T03:33:21.4545496Z processing existing schema: aten::_reshape_alias_copy(Tensor self, int[] size, int[] stride) -> (Tensor) 2022-05-18T03:33:21.4547018Z processing existing schema: aten::select_copy.int(Tensor self, int dim, int index) -> (Tensor) 2022-05-18T03:33:21.4548362Z processing existing schema: aten::detach_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4549763Z processing existing schema: aten::storage_offset(Tensor self) -> (int) 2022-05-18T03:33:21.4551950Z processing existing schema: aten::slice_copy.Tensor(Tensor self, int dim=0, int? start=None, int? 
end=None, int step=1) -> (Tensor) 2022-05-18T03:33:21.4554020Z processing existing schema: aten::split_copy.Tensor(Tensor self, int split_size, int dim=0) -> (Tensor[]) 2022-05-18T03:33:21.4556605Z processing existing schema: aten::split_with_sizes_copy(Tensor self, int[] split_sizes, int dim=0) -> (Tensor[]) 2022-05-18T03:33:21.4557765Z processing existing schema: aten::squeeze_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4559329Z processing existing schema: aten::squeeze_copy.dim(Tensor self, int dim) -> (Tensor) 2022-05-18T03:33:21.4560651Z processing existing schema: aten::t_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4562367Z processing existing schema: aten::transpose_copy.int(Tensor self, int dim0, int dim1) -> (Tensor) 2022-05-18T03:33:21.4563879Z processing existing schema: aten::unsqueeze_copy(Tensor self, int dim) -> (Tensor) 2022-05-18T03:33:21.4565168Z processing existing schema: aten::_indices_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4566748Z processing existing schema: aten::_values_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4568031Z processing existing schema: aten::indices_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4569304Z processing existing schema: aten::values_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4571312Z processing existing schema: aten::crow_indices_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4571928Z processing existing schema: aten::row_indices_copy(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4573700Z processing existing schema: aten::unbind_copy.int(Tensor self, int dim=0) -> (Tensor[]) 2022-05-18T03:33:21.4575618Z processing existing schema: aten::view_copy(Tensor self, int[] size) -> (Tensor) 2022-05-18T03:33:21.4576459Z processing existing schema: aten::view_copy.dtype(Tensor self, int dtype) -> (Tensor) 2022-05-18T03:33:21.4578311Z processing existing schema: aten::unfold_copy(Tensor self, int dimension, int size, int step) -> (Tensor) 2022-05-18T03:33:21.4579801Z processing existing schema: aten::_cast_Byte(Tensor self, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:21.4581097Z processing existing schema: aten::_cast_Char(Tensor self, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:21.4583119Z processing existing schema: aten::_cast_Double(Tensor self, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:21.4584041Z processing existing schema: aten::_cast_Float(Tensor self, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:21.4585591Z processing existing schema: aten::_cast_Int(Tensor self, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:21.4587090Z processing existing schema: aten::_cast_Long(Tensor self, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:21.4588669Z processing existing schema: aten::_cast_Short(Tensor self, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:21.4590004Z processing existing schema: aten::_cast_Half(Tensor self, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:21.4590981Z processing existing schema: aten::retains_grad(Tensor self) -> (bool) 2022-05-18T03:33:21.4593035Z processing existing schema: aten::_unpack_dual(Tensor(a) dual, int level) -> (Tensor(a) primal, Tensor tangent) 2022-05-18T03:33:21.4593917Z processing existing schema: aten::_use_cudnn_rnn_flatten_weight() -> (bool) 2022-05-18T03:33:21.4595608Z processing existing schema: aten::_debug_has_internal_overlap(Tensor self) -> (int) 2022-05-18T03:33:21.4597787Z processing existing schema: aten::_sobol_engine_draw(Tensor quasi, int n, Tensor sobolstate, int dimension, int 
num_generated, int? dtype) -> (Tensor, Tensor) 2022-05-18T03:33:21.4600007Z processing existing schema: aten::_sobol_engine_ff_(Tensor(a!) self, int n, Tensor sobolstate, int dimension, int num_generated) -> (Tensor(a!)) 2022-05-18T03:33:21.4601743Z processing existing schema: aten::_sobol_engine_scramble_(Tensor(a!) self, Tensor ltm, int dimension) -> (Tensor(a!)) 2022-05-18T03:33:21.4603350Z processing existing schema: aten::_sobol_engine_initialize_state_(Tensor(a!) self, int dimension) -> (Tensor(a!)) 2022-05-18T03:33:21.4604734Z processing existing schema: aten::_reshape_from_tensor(Tensor self, Tensor shape) -> (Tensor) 2022-05-18T03:33:21.4605906Z processing existing schema: aten::_shape_as_tensor(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4607356Z processing existing schema: aten::feature_dropout(Tensor input, float p, bool train) -> (Tensor) 2022-05-18T03:33:21.4609248Z processing existing schema: aten::feature_dropout_(Tensor(a!) self, float p, bool train) -> (Tensor(a!)) 2022-05-18T03:33:21.4610610Z processing existing schema: aten::feature_alpha_dropout(Tensor input, float p, bool train) -> (Tensor) 2022-05-18T03:33:21.4612180Z processing existing schema: aten::feature_alpha_dropout_(Tensor(a!) self, float p, bool train) -> (Tensor(a!)) 2022-05-18T03:33:21.4613631Z processing existing schema: aten::arccos_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.4615246Z processing existing schema: aten::adaptive_avg_pool1d(Tensor self, int[1] output_size) -> (Tensor) 2022-05-18T03:33:21.4616645Z processing existing schema: aten::adaptive_max_pool1d(Tensor self, int[1] output_size) -> (Tensor, Tensor) 2022-05-18T03:33:21.4617850Z processing existing schema: aten::_dim_arange(Tensor like, int dim) -> (Tensor) 2022-05-18T03:33:21.4620833Z processing existing schema: aten::_batch_norm_impl_index(Tensor input, Tensor? weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool training, float momentum, float eps, bool cudnn_enabled) -> (Tensor, Tensor, Tensor, Tensor, int) 2022-05-18T03:33:21.4623403Z processing existing schema: aten::_batch_norm_impl_index_backward(int impl_index, Tensor input, Tensor grad_output, Tensor? weight, Tensor? running_mean, Tensor? running_var, Tensor? save_mean, Tensor? save_var_transform, bool train, float eps, bool[3] output_mask, Tensor reservedSpace) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.4623991Z processing existing schema: aten::cudnn_is_acceptable(Tensor self) -> (bool) 2022-05-18T03:33:21.4625279Z processing existing schema: profiler::_record_function_exit(Tensor _0) -> () 2022-05-18T03:33:21.4626639Z processing existing schema: profiler::_record_function_exit._RecordFunction(__torch__.torch.classes.profiler._RecordFunction _0) -> () 2022-05-18T03:33:21.4629227Z processing existing schema: aten::_convolution_mode(Tensor input, Tensor weight, Tensor? bias, int[] stride, str padding, int[] dilation, int groups) -> (Tensor) 2022-05-18T03:33:21.4632820Z processing existing schema: aten::_convolution_double_backward(Tensor? ggI, Tensor? ggW, Tensor? 
ggb, Tensor gO, Tensor weight, Tensor self, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool[3] output_mask) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.4634154Z processing existing schema: aten::cummaxmin_backward(Tensor grad, Tensor input, Tensor indices, int dim) -> (Tensor) 2022-05-18T03:33:21.4634568Z schema: static_runtime::reshape_copy(Tensor self, int[] shape) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.4635963Z processing existing schema: aten::cumprod_backward(Tensor grad, Tensor input, int dim, Tensor output) -> (Tensor) 2022-05-18T03:33:21.4636308Z schema: static_runtime::to_copy.prim_dtype(Tensor self, int? dtype=None, bool non_blocking=False, bool copy=False) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.4636807Z schema: static_runtime::to_copy.dtype(Tensor self, int dtype, bool non_blocking=False, bool copy=False, int? memory_format=None) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.4637262Z schema: static_runtime::to_copy.other(Tensor self, Tensor other, bool non_blocking=False, bool copy=False, int? memory_format=None) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.4637800Z processing existing schema: aten::cumulative_trapezoid.x(Tensor y, Tensor x, *, int dim=-1) -> (Tensor) 2022-05-18T03:33:21.4639552Z processing existing schema: aten::cumulative_trapezoid.dx(Tensor y, *, Scalar dx=1, int dim=-1) -> (Tensor) 2022-05-18T03:33:21.4639807Z schema: static_runtime::dequantize_copy.self(Tensor self) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.4641560Z processing existing schema: aten::linalg_diagonal(Tensor(a) A, *, int offset=0, int dim1=-2, int dim2=-1) -> (Tensor(a)) 2022-05-18T03:33:21.4643387Z processing existing schema: aten::fill_diagonal_(Tensor(a!) self, Scalar fill_value, bool wrap=False) -> (Tensor(a!)) 2022-05-18T03:33:21.4645170Z processing existing schema: aten::diff(Tensor self, int n=1, int dim=-1, Tensor? prepend=None, Tensor? append=None) -> (Tensor) 2022-05-18T03:33:21.4647840Z processing existing schema: aten::diff.out(Tensor self, int n=1, int dim=-1, Tensor? prepend=None, Tensor? append=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4648959Z processing existing schema: aten::isspace(str self) -> (bool) 2022-05-18T03:33:21.4651521Z processing existing schema: aten::gradient.scalarint(Tensor self, *, Scalar? spacing=None, int? dim=None, int edge_order=1) -> (Tensor[]) 2022-05-18T03:33:21.4653720Z processing existing schema: aten::gradient.scalararray(Tensor self, *, Scalar spacing, int[] dim, int edge_order=1) -> (Tensor[]) 2022-05-18T03:33:21.4655680Z processing existing schema: aten::gradient.array(Tensor self, *, int[] dim, int edge_order=1) -> (Tensor[]) 2022-05-18T03:33:21.4657740Z processing existing schema: aten::gradient.scalarrayint(Tensor self, *, Scalar[] spacing, int? dim=None, int edge_order=1) -> (Tensor[]) 2022-05-18T03:33:21.4660090Z processing existing schema: aten::gradient.scalarrayarray(Tensor self, *, Scalar[] spacing, int[] dim, int edge_order=1) -> (Tensor[]) 2022-05-18T03:33:21.4662238Z processing existing schema: aten::gradient.tensorarrayint(Tensor self, *, Tensor[] spacing, int? 
dim=None, int edge_order=1) -> (Tensor[]) 2022-05-18T03:33:21.4664677Z processing existing schema: aten::gradient.tensorarray(Tensor self, *, Tensor[] spacing, int[] dim, int edge_order=1) -> (Tensor[]) 2022-05-18T03:33:21.4666036Z processing existing schema: aten::divide.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.4667772Z processing existing schema: aten::divide.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4669165Z processing existing schema: aten::divide.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.4670610Z processing existing schema: aten::divide.Tensor_mode(Tensor self, Tensor other, *, str? rounding_mode) -> (Tensor) 2022-05-18T03:33:21.4672426Z processing existing schema: aten::divide.out_mode(Tensor self, Tensor other, *, str? rounding_mode, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4673791Z processing existing schema: aten::divide.Scalar_mode(Tensor self, Scalar other, *, str? rounding_mode) -> (Tensor) 2022-05-18T03:33:21.4675167Z processing existing schema: aten::swapcase(str self) -> (str) 2022-05-18T03:33:21.4676891Z processing existing schema: aten::divide_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.4678530Z processing existing schema: aten::divide_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.4680570Z processing existing schema: aten::divide_.Tensor_mode(Tensor(a!) self, Tensor other, *, str? rounding_mode) -> (Tensor(a!)) 2022-05-18T03:33:21.4682343Z processing existing schema: aten::divide_.Scalar_mode(Tensor(a!) self, Scalar other, *, str? rounding_mode) -> (Tensor(a!)) 2022-05-18T03:33:21.4684232Z processing existing schema: aten::get.str(Dict(str, t) self, str key) -> (t(*)?) 2022-05-18T03:33:21.4686041Z processing existing schema: aten::get.default_str(Dict(str, t) self, str key, t default_value) -> (t(*)) 2022-05-18T03:33:21.4687813Z processing existing schema: aten::get.int(Dict(int, t) self, int key) -> (t(*)?) 2022-05-18T03:33:21.4689672Z processing existing schema: aten::get.default_int(Dict(int, t) self, int key, t default_value) -> (t(*)) 2022-05-18T03:33:21.4691778Z processing existing schema: aten::get.bool(Dict(bool, t) self, bool key) -> (t(*)?) 2022-05-18T03:33:21.4693455Z processing existing schema: aten::get.default_bool(Dict(bool, t) self, bool key, t default_value) -> (t(*)) 2022-05-18T03:33:21.4695249Z processing existing schema: aten::get.float(Dict(float, t) self, float key) -> (t(*)?) 2022-05-18T03:33:21.4697312Z processing existing schema: aten::get.default_float(Dict(float, t) self, float key, t default_value) -> (t(*)) 2022-05-18T03:33:21.4699304Z processing existing schema: aten::get.complex(Dict(complex, t) self, complex key) -> (t(*)?) 2022-05-18T03:33:21.4701551Z processing existing schema: aten::get.default_complex(Dict(complex, t) self, complex key, t default_value) -> (t(*)) 2022-05-18T03:33:21.4703339Z processing existing schema: aten::get.Tensor(Dict(Tensor, t) self, Tensor key) -> (t(*)?) 
2022-05-18T03:33:21.4705490Z processing existing schema: aten::get.default_Tensor(Dict(Tensor, t) self, Tensor key, t default_value) -> (t(*)) 2022-05-18T03:33:21.4707457Z processing existing schema: aten::embedding_backward(Tensor grad, Tensor indices, int num_weights, int padding_idx, bool scale_grad_by_freq, bool sparse) -> (Tensor) 2022-05-18T03:33:21.4709068Z processing existing schema: aten::endswith(str self, str substr, int start=0, int end=-1) -> (bool) 2022-05-18T03:33:21.4710868Z processing existing schema: aten::embedding_sparse_backward(Tensor grad, Tensor indices, int num_weights, int padding_idx, bool scale_grad_by_freq) -> (Tensor) 2022-05-18T03:33:21.4712431Z processing existing schema: aten::rindex(str self, str substr, int start=0, int end=-1) -> (int) 2022-05-18T03:33:21.4714078Z processing existing schema: aten::_rowwise_prune(Tensor weight, Tensor mask, int compressed_indices_dtype) -> (Tensor, Tensor) 2022-05-18T03:33:21.4715722Z processing existing schema: aten::row_stack(Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:21.4717662Z processing existing schema: aten::row_stack.out(Tensor[] tensors, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4720595Z processing existing schema: aten::embedding_bag(Tensor weight, Tensor indices, Tensor offsets, bool scale_grad_by_freq=False, int mode=0, bool sparse=False, Tensor? per_sample_weights=None, bool include_last_offset=False) -> (Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:21.4722778Z processing existing schema: aten::embedding_bag.padding_idx(Tensor weight, Tensor indices, Tensor offsets, bool scale_grad_by_freq, int mode, bool sparse, Tensor? per_sample_weights, bool include_last_offset, int? padding_idx) -> (Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:21.4724359Z processing existing schema: aten::startswith(str self, str substr, int start=0, int end=-1) -> (bool) 2022-05-18T03:33:21.4726937Z processing existing schema: aten::_embedding_bag_backward(Tensor grad, Tensor indices, Tensor offsets, Tensor offset2bag, Tensor bag_size, Tensor maximum_indices, int num_weights, bool scale_grad_by_freq, int mode, bool sparse, Tensor? per_sample_weights, int padding_idx=-1) -> (Tensor) 2022-05-18T03:33:21.4729103Z processing existing schema: aten::_embedding_bag_sparse_backward(Tensor grad, Tensor indices, Tensor offsets, Tensor offset2bag, Tensor bag_size, int num_weights, bool scale_grad_by_freq, int mode, Tensor? per_sample_weights, int padding_idx=-1) -> (Tensor) 2022-05-18T03:33:21.4731483Z processing existing schema: aten::new_full(Tensor self, int[] size, Scalar fill_value, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.4733913Z processing existing schema: aten::new_ones(Tensor self, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? 
pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.4735838Z processing existing schema: aten::_grid_sampler_2d_cpu_fallback_backward(Tensor grad_output, Tensor input, Tensor grid, int interpolation_mode, int padding_mode, bool align_corners) -> (Tensor, Tensor) 2022-05-18T03:33:21.4736827Z processing existing schema: aten::_cufft_get_plan_cache_size(int device_index) -> (int) 2022-05-18T03:33:21.4738246Z processing existing schema: aten::_cufft_get_plan_cache_max_size(int device_index) -> (int) 2022-05-18T03:33:21.4739648Z processing existing schema: aten::_cufft_set_plan_cache_max_size(int device_index, int max_size) -> () 2022-05-18T03:33:21.4740893Z processing existing schema: aten::_cufft_clear_plan_cache(int device_index) -> () 2022-05-18T03:33:21.4743537Z processing existing schema: aten::isclose(Tensor self, Tensor other, float rtol=1.0000000000000001e-05, float atol=1e-08, bool equal_nan=False) -> (Tensor) 2022-05-18T03:33:21.4744921Z processing existing schema: aten::is_distributed(Tensor self) -> (bool) 2022-05-18T03:33:21.4746256Z processing existing schema: aten::is_conj(Tensor self) -> (bool) 2022-05-18T03:33:21.4748421Z processing existing schema: aten::_is_zerotensor(Tensor self) -> (bool) 2022-05-18T03:33:21.4748932Z processing existing schema: aten::is_neg(Tensor self) -> (bool) 2022-05-18T03:33:21.4749911Z processing existing schema: aten::isreal(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4751516Z processing existing schema: aten::kron(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.4753297Z processing existing schema: aten::kron.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4755538Z processing existing schema: aten::fbgemm_linear_int8_weight_fp32_activation(Tensor input, Tensor weight, Tensor packed, Tensor col_offsets, Scalar weight_scale, Scalar weight_zero_point, Tensor bias) -> (Tensor) 2022-05-18T03:33:21.4757355Z processing existing schema: aten::fbgemm_linear_int8_weight(Tensor input, Tensor weight, Tensor packed, Tensor col_offsets, Scalar weight_scale, Scalar weight_zero_point, Tensor bias) -> (Tensor) 2022-05-18T03:33:21.4758443Z processing existing schema: aten::fbgemm_linear_quantize_weight(Tensor input) -> (Tensor, Tensor, float, int) 2022-05-18T03:33:21.4760017Z processing existing schema: aten::fbgemm_pack_gemm_matrix_fp16(Tensor input) -> (Tensor) 2022-05-18T03:33:21.4761586Z processing existing schema: aten::fbgemm_linear_fp16_weight_fp32_activation(Tensor input, Tensor packed_weight, Tensor bias) -> (Tensor) 2022-05-18T03:33:21.4763026Z processing existing schema: aten::fbgemm_linear_fp16_weight(Tensor input, Tensor packed_weight, Tensor bias) -> (Tensor) 2022-05-18T03:33:21.4764058Z processing existing schema: aten::fbgemm_pack_quantized_matrix(Tensor input) -> (Tensor) 2022-05-18T03:33:21.4765507Z processing existing schema: aten::fbgemm_pack_quantized_matrix.KN(Tensor input, int K, int N) -> (Tensor) 2022-05-18T03:33:21.4766952Z processing existing schema: aten::ldexp.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.4768640Z processing existing schema: aten::ldexp.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4770041Z processing existing schema: aten::ldexp(float x, int i) -> (float) 2022-05-18T03:33:21.4771611Z processing existing schema: aten::ldexp_(Tensor(a!) 
self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.4772947Z processing existing schema: aten::matrix_power(Tensor self, int n) -> (Tensor) 2022-05-18T03:33:21.4774795Z processing existing schema: aten::matrix_power.out(Tensor self, int n, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4775993Z processing existing schema: aten::matrix_exp(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4777401Z processing existing schema: aten::matrix_exp_backward(Tensor self, Tensor grad) -> (Tensor) 2022-05-18T03:33:21.4779483Z processing existing schema: aten::value_selecting_reduction_backward(Tensor grad, int dim, Tensor indices, int[] sizes, bool keepdim) -> (Tensor) 2022-05-18T03:33:21.4781367Z processing existing schema: aten::nanmean(Tensor self, int[1] dim=[], bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.4783652Z processing existing schema: aten::nanmean.out(Tensor self, int[1] dim=[], bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4785039Z processing existing schema: aten::_sparse_mm(Tensor sparse, Tensor dense) -> (Tensor) 2022-05-18T03:33:21.4786447Z processing existing schema: aten::multiply.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.4789041Z processing existing schema: aten::multiply.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4789807Z processing existing schema: aten::multiply.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.4791673Z processing existing schema: aten::multiply_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.4793197Z processing existing schema: aten::multiply_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.4794614Z processing existing schema: aten::is_vulkan_available() -> (bool) 2022-05-18T03:33:21.4795581Z processing existing schema: aten::_nnpack_available() -> (bool) 2022-05-18T03:33:21.4798012Z processing existing schema: aten::pairwise_distance(Tensor x1, Tensor x2, float p=2., float eps=9.9999999999999995e-07, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.4800487Z processing existing schema: aten::moveaxis.intlist(Tensor(a) self, int[] source, int[] destination) -> (Tensor(a)) 2022-05-18T03:33:21.4802145Z processing existing schema: aten::moveaxis.int(Tensor(a) self, int source, int destination) -> (Tensor(a)) 2022-05-18T03:33:21.4803414Z processing existing schema: aten::pixel_shuffle(Tensor self, int upscale_factor) -> (Tensor) 2022-05-18T03:33:21.4805688Z processing existing schema: aten::pixel_unshuffle(Tensor self, int downscale_factor) -> (Tensor) 2022-05-18T03:33:21.4807377Z processing existing schema: aten::pin_memory(Tensor(a) self, Device? device=None) -> (Tensor(a)) 2022-05-18T03:33:21.4809676Z processing existing schema: aten::ravel(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.4810561Z processing existing schema: aten::negative(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4813448Z processing existing schema: aten::negative.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4815050Z processing existing schema: aten::negative_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.4818648Z processing existing schema: aten::rrelu(Tensor self, Scalar lower=0.125, Scalar upper=0.33333333333333331, bool training=False, Generator? generator=None) -> (Tensor) 2022-05-18T03:33:21.4822100Z processing existing schema: aten::rrelu_(Tensor(a!) 
self, Scalar lower=0.125, Scalar upper=0.33333333333333331, bool training=False, Generator? generator=None) -> (Tensor(a!)) 2022-05-18T03:33:21.4822878Z processing existing schema: aten::relu6(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4824702Z processing existing schema: aten::relu6_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.4826357Z processing existing schema: aten::infinitely_differentiable_gelu_backward(Tensor grad, Tensor self) -> (Tensor) 2022-05-18T03:33:21.4827776Z processing existing schema: aten::selu_(Tensor(a!) self) -> (Tensor(a!)) 2022-05-18T03:33:21.4829850Z processing existing schema: aten::smm(Tensor self, Tensor mat2) -> (Tensor) 2022-05-18T03:33:21.4830863Z processing existing schema: aten::hstack(Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:21.4833198Z processing existing schema: aten::hstack.out(Tensor[] tensors, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4834666Z processing existing schema: aten::vstack(Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:21.4836774Z processing existing schema: aten::vstack.out(Tensor[] tensors, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4838531Z processing existing schema: aten::dstack(Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:21.4840610Z processing existing schema: aten::dstack.out(Tensor[] tensors, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4842312Z processing existing schema: aten::splitlines(str self, bool keepends=False) -> (str[]) 2022-05-18T03:33:21.4845244Z processing existing schema: aten::istft(Tensor self, int n_fft, int? hop_length=None, int? win_length=None, Tensor? window=None, bool center=True, bool normalized=False, bool? onesided=None, int? length=None, bool return_complex=False) -> (Tensor) 2022-05-18T03:33:21.4846762Z processing existing schema: aten::sum_to_size(Tensor self, int[] size) -> (Tensor) 2022-05-18T03:33:21.4848519Z processing existing schema: aten::tile(Tensor self, int[] dims) -> (Tensor) 2022-05-18T03:33:21.4849975Z processing existing schema: aten::one_hot(Tensor self, int num_classes=-1) -> (Tensor) 2022-05-18T03:33:21.4851352Z processing existing schema: aten::fliplr(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4852499Z processing existing schema: aten::flipud(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4854294Z processing existing schema: aten::trapezoid.x(Tensor y, Tensor x, *, int dim=-1) -> (Tensor) 2022-05-18T03:33:21.4855896Z processing existing schema: aten::trapezoid.dx(Tensor y, *, Scalar dx=1, int dim=-1) -> (Tensor) 2022-05-18T03:33:21.4857448Z processing existing schema: aten::trapz.x(Tensor y, Tensor x, *, int dim=-1) -> (Tensor) 2022-05-18T03:33:21.4859174Z processing existing schema: aten::trapz.dx(Tensor y, *, float dx=1., int dim=-1) -> (Tensor) 2022-05-18T03:33:21.4860419Z processing existing schema: aten::fix(Tensor self) -> (Tensor) 2022-05-18T03:33:21.4862059Z processing existing schema: aten::fix.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4863590Z processing existing schema: aten::fix_(Tensor(a!) 
self) -> (Tensor(a!)) 2022-05-18T03:33:21.4865119Z processing existing schema: aten::type_as(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.4866539Z processing existing schema: aten::_has_compatible_shallow_copy_type(Tensor self, Tensor from) -> (bool) 2022-05-18T03:33:21.4868018Z processing existing schema: aten::norm_except_dim(Tensor v, int pow=2, int dim=0) -> (Tensor) 2022-05-18T03:33:21.4869458Z processing existing schema: aten::_weight_norm(Tensor v, Tensor g, int dim=0) -> (Tensor) 2022-05-18T03:33:21.4871314Z processing existing schema: aten::_weight_norm_differentiable_backward(Tensor grad_w, Tensor saved_v, Tensor saved_g, Tensor saved_norms, int dim) -> (Tensor, Tensor) 2022-05-18T03:33:21.4872633Z processing existing schema: aten::positive(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.4874152Z processing existing schema: aten::subtract.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.4876135Z processing existing schema: aten::subtract.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.4877643Z processing existing schema: aten::subtract.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.4879590Z processing existing schema: aten::subtract_.Tensor(Tensor(a!) self, Tensor other, *, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:21.4881394Z processing existing schema: aten::subtract_.Scalar(Tensor(a!) self, Scalar other, Scalar alpha=1) -> (Tensor(a!)) 2022-05-18T03:33:21.4883077Z processing existing schema: aten::_validate_sparse_coo_tensor_args(Tensor indices, Tensor values, int[] size) -> () 2022-05-18T03:33:21.4885126Z processing existing schema: aten::_validate_sparse_compressed_tensor_args(Tensor compressed_indices, Tensor plain_indices, Tensor values, int[] size, int layout) -> () 2022-05-18T03:33:21.4886888Z processing existing schema: aten::_validate_sparse_csr_tensor_args(Tensor crow_indices, Tensor col_indices, Tensor values, int[] size) -> () 2022-05-18T03:33:21.4888788Z processing existing schema: aten::_validate_sparse_csc_tensor_args(Tensor ccol_indices, Tensor row_indices, Tensor values, int[] size) -> () 2022-05-18T03:33:21.4890523Z processing existing schema: aten::_validate_sparse_bsr_tensor_args(Tensor crow_indices, Tensor col_indices, Tensor values, int[] size) -> () 2022-05-18T03:33:21.4892391Z processing existing schema: aten::_validate_sparse_bsc_tensor_args(Tensor ccol_indices, Tensor row_indices, Tensor values, int[] size) -> () 2022-05-18T03:33:21.4894059Z processing existing schema: aten::_to_cpu(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.4895568Z processing existing schema: aten::to_dense(Tensor self, int? 
dtype=None) -> (Tensor) 2022-05-18T03:33:21.4897535Z processing existing schema: aten::to_dense_backward(Tensor grad, Tensor input) -> (Tensor) 2022-05-18T03:33:21.4898464Z processing existing schema: aten::to_mkldnn_backward(Tensor grad, Tensor input) -> (Tensor) 2022-05-18T03:33:21.4900161Z processing existing schema: aten::fake_quantize_per_tensor_affine_cachemask_backward(Tensor grad, Tensor mask) -> (Tensor) 2022-05-18T03:33:21.4901065Z processing existing schema: aten::radians.int(int a) -> (float) 2022-05-18T03:33:21.4902679Z processing existing schema: aten::radians.float(float a) -> (float) 2022-05-18T03:33:21.4903887Z processing existing schema: aten::radians.Scalar(Scalar a) -> (Scalar) 2022-05-18T03:33:21.4906236Z processing existing schema: aten::_fake_quantize_learnable_per_tensor_affine_backward(Tensor grad, Tensor self, Tensor scale, Tensor zero_point, int quant_min, int quant_max, float grad_factor=1.) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.4907732Z processing existing schema: aten::fake_quantize_per_channel_affine(Tensor self, Tensor scale, Tensor zero_point, int axis, int quant_min, int quant_max) -> (Tensor) 2022-05-18T03:33:21.4909077Z processing existing schema: aten::cuda(Tensor(a) self) -> (Tensor(a|b)) 2022-05-18T03:33:21.4910724Z processing existing schema: aten::fake_quantize_per_channel_affine_cachemask_backward(Tensor grad, Tensor mask) -> (Tensor) 2022-05-18T03:33:21.4912005Z processing existing schema: aten::modf(float a) -> (float, float) 2022-05-18T03:33:21.4914238Z processing existing schema: aten::_fake_quantize_learnable_per_channel_affine_backward(Tensor grad, Tensor self, Tensor scale, Tensor zero_point, int axis, int quant_min, int quant_max, float grad_factor=1.) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.4917747Z processing existing schema: aten::fused_moving_avg_obs_fake_quant(Tensor self, Tensor observer_on, Tensor fake_quant_on, Tensor(a!) running_min, Tensor(b!) running_max, Tensor(c!) scale, Tensor(d!) zero_point, float averaging_const, int quant_min, int quant_max, int ch_axis, bool per_row_fake_quant=False, bool symmetric_quant=False) -> (Tensor) 2022-05-18T03:33:21.4918478Z processing existing schema: aten::_choose_qparams_per_tensor(Tensor self, bool reduce_range=False) -> (float, int) 2022-05-18T03:33:21.4920248Z processing existing schema: aten::_saturate_weight_to_fp16(Tensor weight) -> (Tensor) 2022-05-18T03:33:21.4921730Z processing existing schema: aten::_autocast_to_reduced_precision(Tensor(a) self, bool cuda_enabled, bool cpu_enabled, int cuda_dtype, int cpu_dtype) -> (Tensor(a)) 2022-05-18T03:33:21.4923269Z processing existing schema: aten::_autocast_to_full_precision(Tensor(a) self, bool cuda_enabled, bool cpu_enabled) -> (Tensor(a)) 2022-05-18T03:33:21.4924945Z processing existing schema: aten::meshgrid(Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.4926971Z processing existing schema: aten::meshgrid.indexing(Tensor[] tensors, *, str indexing) -> (Tensor[]) 2022-05-18T03:33:21.4928300Z processing existing schema: aten::promote_types(int type1, int type2) -> (int) 2022-05-18T03:33:21.4930359Z processing existing schema: aten::_thnn_fused_lstm_cell_backward(Tensor? grad_hy, Tensor? grad_cy, Tensor cx, Tensor cy, Tensor workspace, bool has_bias) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:21.4932890Z processing existing schema: aten::_thnn_differentiable_lstm_cell_backward(Tensor? grad_hy, Tensor? grad_cy, Tensor input_gates, Tensor hidden_gates, Tensor? input_bias, Tensor? 
hidden_bias, Tensor cx, Tensor cy) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:21.4934709Z processing existing schema: aten::_thnn_differentiable_gru_cell_backward(Tensor grad_hy, Tensor input_gates, Tensor hidden_gates, Tensor hx, Tensor? input_bias, Tensor? hidden_bias) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:21.4937338Z processing existing schema: aten::lstm.input(Tensor input, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.4939865Z processing existing schema: aten::lstm.data(Tensor data, Tensor batch_sizes, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.4942142Z processing existing schema: aten::gru.input(Tensor input, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor) 2022-05-18T03:33:21.4944550Z processing existing schema: aten::gru.data(Tensor data, Tensor batch_sizes, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional) -> (Tensor, Tensor) 2022-05-18T03:33:21.4946890Z processing existing schema: aten::rnn_tanh.input(Tensor input, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor) 2022-05-18T03:33:21.4949208Z processing existing schema: aten::rnn_tanh.data(Tensor data, Tensor batch_sizes, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional) -> (Tensor, Tensor) 2022-05-18T03:33:21.4951472Z processing existing schema: aten::rnn_relu.input(Tensor input, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor) 2022-05-18T03:33:21.4953802Z processing existing schema: aten::rnn_relu.data(Tensor data, Tensor batch_sizes, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional) -> (Tensor, Tensor) 2022-05-18T03:33:21.4956727Z processing existing schema: aten::quantized_lstm_cell(Tensor input, Tensor[] hx, Tensor w_ih, Tensor w_hh, Tensor b_ih, Tensor b_hh, Tensor packed_ih, Tensor packed_hh, Tensor col_offsets_ih, Tensor col_offsets_hh, Scalar scale_ih, Scalar scale_hh, Scalar zero_point_ih, Scalar zero_point_hh) -> (Tensor, Tensor) 2022-05-18T03:33:21.4959224Z processing existing schema: aten::quantized_gru_cell(Tensor input, Tensor hx, Tensor w_ih, Tensor w_hh, Tensor b_ih, Tensor b_hh, Tensor packed_ih, Tensor packed_hh, Tensor col_offsets_ih, Tensor col_offsets_hh, Scalar scale_ih, Scalar scale_hh, Scalar zero_point_ih, Scalar zero_point_hh) -> (Tensor) 2022-05-18T03:33:21.4961401Z processing existing schema: aten::quantized_rnn_relu_cell(Tensor input, Tensor hx, Tensor w_ih, Tensor w_hh, Tensor b_ih, Tensor b_hh, Tensor packed_ih, Tensor packed_hh, Tensor col_offsets_ih, Tensor col_offsets_hh, Scalar scale_ih, Scalar scale_hh, Scalar zero_point_ih, Scalar zero_point_hh) -> (Tensor) 2022-05-18T03:33:21.4963744Z processing existing schema: aten::quantized_rnn_tanh_cell(Tensor input, Tensor hx, Tensor w_ih, Tensor w_hh, Tensor b_ih, Tensor b_hh, Tensor packed_ih, Tensor packed_hh, Tensor col_offsets_ih, Tensor col_offsets_hh, Scalar scale_ih, Scalar scale_hh, Scalar zero_point_ih, 
Scalar zero_point_hh) -> (Tensor) 2022-05-18T03:33:21.4965389Z processing existing schema: aten::_pack_padded_sequence_backward(Tensor grad, int[] input_size, Tensor batch_sizes, bool batch_first) -> (Tensor) 2022-05-18T03:33:21.4967085Z processing existing schema: aten::_pad_packed_sequence(Tensor data, Tensor batch_sizes, bool batch_first, Scalar padding_value, int total_length) -> (Tensor, Tensor) 2022-05-18T03:33:21.4968475Z processing existing schema: aten::put(Tensor self, Tensor index, Tensor source, bool accumulate=False) -> (Tensor) 2022-05-18T03:33:21.4969732Z processing existing schema: aten::__and__.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.4971189Z processing existing schema: aten::__and__.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.4972604Z processing existing schema: aten::__and__.bool(bool a, bool b) -> (bool) 2022-05-18T03:33:21.4973883Z processing existing schema: aten::__and__.int(int a, int b) -> (int) 2022-05-18T03:33:21.4975511Z processing existing schema: aten::__iand__.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.4976992Z processing existing schema: aten::__iand__.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.4978446Z processing existing schema: aten::__or__.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.4979903Z processing existing schema: aten::__or__.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.4981191Z processing existing schema: aten::__or__.bool(bool a, bool b) -> (bool) 2022-05-18T03:33:21.4982497Z processing existing schema: aten::__or__.int(int a, int b) -> (int) 2022-05-18T03:33:21.4984139Z processing existing schema: aten::__ior__.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.4985785Z processing existing schema: aten::__ior__.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.4987415Z processing existing schema: aten::__xor__.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.4989097Z processing existing schema: aten::__xor__.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.4990234Z processing existing schema: aten::__xor__.bool(bool a, bool b) -> (bool) 2022-05-18T03:33:21.4991579Z processing existing schema: aten::__xor__.int(int a, int b) -> (int) 2022-05-18T03:33:21.4993246Z processing existing schema: aten::__ixor__.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.4994979Z processing existing schema: aten::__ixor__.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.4996835Z processing existing schema: aten::diag_backward(Tensor grad, int[] input_sizes, int diagonal) -> (Tensor) 2022-05-18T03:33:21.4998357Z processing existing schema: aten::reverse.t(t[](a!) self) -> () 2022-05-18T03:33:21.5000230Z processing existing schema: aten::not_equal.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.5001769Z processing existing schema: aten::not_equal.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5003149Z processing existing schema: aten::not_equal.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.5004913Z processing existing schema: aten::not_equal.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5006520Z processing existing schema: aten::not_equal_.Scalar(Tensor(a!) 
self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.5008112Z processing existing schema: aten::not_equal_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.5009520Z processing existing schema: aten::greater_equal.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.5011354Z processing existing schema: aten::greater_equal.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5012759Z processing existing schema: aten::greater_equal.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.5014417Z processing existing schema: aten::greater_equal.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5015934Z processing existing schema: aten::greater_equal_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.5017544Z processing existing schema: aten::greater_equal_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.5019132Z processing existing schema: aten::less_equal.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.5020882Z processing existing schema: aten::less_equal.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5022304Z processing existing schema: aten::less_equal.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.5023883Z processing existing schema: aten::less_equal.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5025669Z processing existing schema: aten::less_equal_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.5027466Z processing existing schema: aten::less_equal_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.5028931Z processing existing schema: aten::greater.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.5030837Z processing existing schema: aten::greater.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5032315Z processing existing schema: aten::greater.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.5034175Z processing existing schema: aten::greater.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5035999Z processing existing schema: aten::greater_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.5037691Z processing existing schema: aten::greater_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.5039353Z processing existing schema: aten::less.Scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.5041243Z processing existing schema: aten::less.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5042739Z processing existing schema: aten::less.Tensor(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.5044658Z processing existing schema: aten::less.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5046424Z processing existing schema: aten::less_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!)) 2022-05-18T03:33:21.5048179Z processing existing schema: aten::less_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!)) 2022-05-18T03:33:21.5049901Z processing existing schema: aten::take_along_dim(Tensor self, Tensor indices, int? dim=None) -> (Tensor) 2022-05-18T03:33:21.5052403Z processing existing schema: aten::take_along_dim.out(Tensor self, Tensor indices, int? 
dim=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5054145Z processing existing schema: aten::index_select_backward(Tensor grad, int[] self_sizes, int dim, Tensor index) -> (Tensor) 2022-05-18T03:33:21.5055669Z processing existing schema: aten::masked_select_backward(Tensor grad, Tensor input, Tensor mask) -> (Tensor) 2022-05-18T03:33:21.5057214Z processing existing schema: aten::nonzero_numpy(Tensor self) -> (Tensor[]) 2022-05-18T03:33:21.5059078Z processing existing schema: aten::gather_backward(Tensor grad, Tensor self, int dim, Tensor index, bool sparse_grad) -> (Tensor) 2022-05-18T03:33:21.5060605Z processing existing schema: aten::_gather_sparse_backward(Tensor self, int dim, Tensor index, Tensor grad) -> (Tensor) 2022-05-18T03:33:21.5062092Z processing existing schema: aten::linalg_vander(Tensor x, *, int? N=None) -> (Tensor) 2022-05-18T03:33:21.5064137Z processing existing schema: aten::swapaxes_(Tensor(a!) self, int axis0, int axis1) -> (Tensor(a!)) 2022-05-18T03:33:21.5066052Z processing existing schema: aten::swapdims_(Tensor(a!) self, int dim0, int dim1) -> (Tensor(a!)) 2022-05-18T03:33:21.5069192Z processing existing schema: aten::histogramdd(Tensor self, int[] bins, float[]? range=None, Tensor? weight=None, bool density=False) -> (Tensor hist, Tensor[] bin_edges) 2022-05-18T03:33:21.5071882Z processing existing schema: aten::histogramdd.int_bins(Tensor self, int bins, float[]? range=None, Tensor? weight=None, bool density=False) -> (Tensor hist, Tensor[] bin_edges) 2022-05-18T03:33:21.5075015Z processing existing schema: aten::histogramdd.TensorList_bins(Tensor self, Tensor[] bins, float[]? range=None, Tensor? weight=None, bool density=False) -> (Tensor hist, Tensor[] bin_edges) 2022-05-18T03:33:21.5075885Z processing existing schema: aten::msort(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5077845Z processing existing schema: aten::msort.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5079541Z processing existing schema: aten::argsort(Tensor self, int dim=-1, bool descending=False) -> (Tensor) 2022-05-18T03:33:21.5081065Z processing existing schema: aten::argsort.dimname(Tensor self, str dim, bool descending=False) -> (Tensor) 2022-05-18T03:33:21.5082413Z processing existing schema: aten::float_power.Tensor_Tensor(Tensor self, Tensor exponent) -> (Tensor) 2022-05-18T03:33:21.5084290Z processing existing schema: aten::float_power.Tensor_Tensor_out(Tensor self, Tensor exponent, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5085755Z processing existing schema: aten::float_power.Scalar(Scalar self, Tensor exponent) -> (Tensor) 2022-05-18T03:33:21.5087412Z processing existing schema: aten::float_power.Scalar_out(Scalar self, Tensor exponent, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5088805Z processing existing schema: aten::float_power.Tensor_Scalar(Tensor self, Scalar exponent) -> (Tensor) 2022-05-18T03:33:21.5090598Z processing existing schema: aten::float_power.Tensor_Scalar_out(Tensor self, Scalar exponent, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5092217Z processing existing schema: aten::float_power_.Tensor(Tensor(a!) self, Tensor exponent) -> (Tensor(a!)) 2022-05-18T03:33:21.5093728Z processing existing schema: aten::float_power_.Scalar(Tensor(a!) self, Scalar exponent) -> (Tensor(a!)) 2022-05-18T03:33:21.5095542Z processing existing schema: aten::nll_loss_nd(Tensor self, Tensor target, Tensor? 
weight=None, int reduction=1, int ignore_index=-100) -> (Tensor) 2022-05-18T03:33:21.5096721Z processing existing schema: aten::log_sigmoid(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5098457Z processing existing schema: aten::log_sigmoid.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5100092Z processing existing schema: aten::_pad_circular(Tensor self, int[] pad) -> (Tensor) 2022-05-18T03:33:21.5102022Z processing existing schema: aten::_pad_enum(Tensor self, int[] pad, int mode, float? value=None) -> (Tensor) 2022-05-18T03:33:21.5104111Z processing existing schema: aten::pad(Tensor self, int[] pad, str mode="constant", float? value=None) -> (Tensor) 2022-05-18T03:33:21.5106798Z processing existing schema: aten::thnn_conv2d(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias=None, int[2] stride=[1, 1], int[2] padding=[0, 0]) -> (Tensor) 2022-05-18T03:33:21.5109664Z processing existing schema: aten::thnn_conv2d.out(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias=None, int[2] stride=[1, 1], int[2] padding=[0, 0], *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5112133Z processing existing schema: aten::slow_conv3d(Tensor self, Tensor weight, int[3] kernel_size, Tensor? bias=None, int[3] stride=[1, 1, 1], int[3] padding=[0, 0, 0]) -> (Tensor) 2022-05-18T03:33:21.5115057Z processing existing schema: aten::slow_conv3d.out(Tensor self, Tensor weight, int[3] kernel_size, Tensor? bias=None, int[3] stride=[1, 1, 1], int[3] padding=[0, 0, 0], *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5116368Z processing existing schema: aten::special_expm1(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5118605Z processing existing schema: aten::special_expm1.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5119528Z processing existing schema: aten::special_exp2(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5121099Z processing existing schema: aten::special_exp2.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5122323Z processing existing schema: aten::special_psi(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5123974Z processing existing schema: aten::special_psi.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5125346Z processing existing schema: aten::special_digamma(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5127044Z processing existing schema: aten::special_digamma.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5128374Z processing existing schema: aten::special_gammaln(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5130210Z processing existing schema: aten::special_gammaln.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5131490Z processing existing schema: aten::special_erf(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5133275Z processing existing schema: aten::special_erf.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5134550Z processing existing schema: aten::special_erfc(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5136366Z processing existing schema: aten::special_erfc.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5138057Z processing existing schema: aten::special_erfinv(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5139831Z processing existing schema: aten::special_erfinv.out(Tensor self, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.5141041Z processing existing schema: aten::special_ndtr(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5142553Z processing existing schema: aten::special_ndtr.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5144512Z processing existing schema: aten::special_xlogy(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.5145901Z processing existing schema: aten::special_xlogy.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5147290Z processing existing schema: aten::special_xlogy.self_scalar(Scalar self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.5149526Z processing existing schema: aten::special_xlogy.self_scalar_out(Scalar self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5150571Z processing existing schema: aten::special_xlogy.other_scalar(Tensor self, Scalar other) -> (Tensor) 2022-05-18T03:33:21.5152403Z processing existing schema: aten::special_xlogy.other_scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5153546Z processing existing schema: aten::special_i0(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5155096Z processing existing schema: aten::special_i0.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5156510Z processing existing schema: aten::special_logit(Tensor self, float? eps=None) -> (Tensor) 2022-05-18T03:33:21.5158435Z processing existing schema: aten::special_logit.out(Tensor self, float? eps=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5160037Z processing existing schema: aten::special_polygamma(int n, Tensor self) -> (Tensor) 2022-05-18T03:33:21.5161823Z processing existing schema: aten::special_polygamma.out(int n, Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5163475Z processing existing schema: aten::special_logsumexp(Tensor self, int[1] dim, bool keepdim=False) -> (Tensor) 2022-05-18T03:33:21.5165630Z processing existing schema: aten::special_logsumexp.out(Tensor self, int[1] dim, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5166803Z processing existing schema: aten::special_expit(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5168492Z processing existing schema: aten::special_expit.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5169761Z processing existing schema: aten::special_sinc(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5171610Z processing existing schema: aten::special_sinc.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5173157Z processing existing schema: aten::special_round(Tensor self, *, int decimals=0) -> (Tensor) 2022-05-18T03:33:21.5175072Z processing existing schema: aten::special_round.out(Tensor self, *, int decimals=0, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5176381Z processing existing schema: aten::special_log1p(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5178261Z processing existing schema: aten::special_log1p.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5179872Z processing existing schema: aten::special_log_softmax(Tensor self, int dim, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.5181274Z processing existing schema: aten::special_gammainc(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.5183161Z processing existing schema: aten::special_gammainc.out(Tensor self, Tensor other, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.5184665Z processing existing schema: aten::special_gammaincc(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.5186574Z processing existing schema: aten::special_gammaincc.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5187952Z processing existing schema: aten::special_multigammaln(Tensor self, int p) -> (Tensor) 2022-05-18T03:33:21.5189826Z processing existing schema: aten::special_multigammaln.out(Tensor self, int p, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5191432Z processing existing schema: aten::special_softmax(Tensor self, int dim, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.5193601Z processing existing schema: aten::fft_hfft2(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None) -> (Tensor) 2022-05-18T03:33:21.5196654Z processing existing schema: aten::fft_hfft2.out(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5198470Z processing existing schema: aten::fft_ihfft2(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None) -> (Tensor) 2022-05-18T03:33:21.5200934Z processing existing schema: aten::fft_ihfft2.out(Tensor self, int[1]? s=None, int[1] dim=[-2, -1], str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5202663Z processing existing schema: aten::fft_hfftn(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None) -> (Tensor) 2022-05-18T03:33:21.5204983Z processing existing schema: aten::fft_hfftn.out(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5206751Z processing existing schema: aten::fft_ihfftn(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None) -> (Tensor) 2022-05-18T03:33:21.5209030Z processing existing schema: aten::fft_ihfftn.out(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5210506Z processing existing schema: aten::fft_fftshift(Tensor self, int[1]? dim=None) -> (Tensor) 2022-05-18T03:33:21.5211872Z processing existing schema: aten::fft_ifftshift(Tensor self, int[1]? dim=None) -> (Tensor) 2022-05-18T03:33:21.5213563Z processing existing schema: aten::linalg_lu_factor(Tensor A, *, bool pivot=True) -> (Tensor LU, Tensor pivots) 2022-05-18T03:33:21.5215673Z processing existing schema: aten::linalg_lu_factor.out(Tensor A, *, bool pivot=True, Tensor(a!) LU, Tensor(b!) pivots) -> (Tensor(a!) LU, Tensor(b!) pivots) 2022-05-18T03:33:21.5216424Z processing existing schema: aten::linalg_det(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5218308Z processing existing schema: aten::linalg_det.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5219516Z processing existing schema: aten::det(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5220681Z processing existing schema: aten::element_size(Tensor self) -> (int) 2022-05-18T03:33:21.5222277Z processing existing schema: aten::linalg_ldl_factor(Tensor self, *, bool hermitian=False) -> (Tensor LD, Tensor pivots) 2022-05-18T03:33:21.5224761Z processing existing schema: aten::linalg_ldl_factor.out(Tensor self, *, bool hermitian=False, Tensor(a!) LD, Tensor(b!) pivots) -> (Tensor(a!) LD, Tensor(b!) pivots) 2022-05-18T03:33:21.5225671Z processing existing schema: aten::linalg_matmul(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.5227707Z processing existing schema: aten::linalg_matmul.out(Tensor self, Tensor other, *, Tensor(a!) 
out) -> (Tensor(a!)) 2022-05-18T03:33:21.5229000Z processing existing schema: aten::inner(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.5230879Z processing existing schema: aten::inner.out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5232234Z processing existing schema: aten::outer(Tensor self, Tensor vec2) -> (Tensor) 2022-05-18T03:33:21.5234121Z processing existing schema: aten::outer.out(Tensor self, Tensor vec2, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5235690Z processing existing schema: aten::ger(Tensor self, Tensor vec2) -> (Tensor) 2022-05-18T03:33:21.5237643Z processing existing schema: aten::ger.out(Tensor self, Tensor vec2, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5239978Z processing existing schema: aten::linalg_norm(Tensor self, Scalar? ord=None, int[1]? dim=None, bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.5242480Z processing existing schema: aten::linalg_norm.out(Tensor self, Scalar? ord=None, int[1]? dim=None, bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5244386Z processing existing schema: aten::linalg_norm.ord_str(Tensor self, str ord, int[1]? dim=None, bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.5246868Z processing existing schema: aten::linalg_norm.ord_str_out(Tensor self, str ord, int[1]? dim=None, bool keepdim=False, *, int? dtype=None, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5248237Z processing existing schema: aten::linalg_matrix_power(Tensor self, int n) -> (Tensor) 2022-05-18T03:33:21.5250011Z processing existing schema: aten::linalg_matrix_power.out(Tensor self, int n, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5251484Z processing existing schema: aten::_test_serialization_subcmul(Tensor self, Tensor other, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.5253206Z processing existing schema: aten::_test_string_default(Tensor dummy, str a="\"\'\\", str b="\"\'\\") -> (Tensor) 2022-05-18T03:33:21.5254828Z processing existing schema: aten::_test_ambiguous_defaults.a(Tensor dummy, int a=1, int b=1) -> (Tensor) 2022-05-18T03:33:21.5256482Z processing existing schema: aten::_test_ambiguous_defaults.b(Tensor dummy, int a=2, str b="2") -> (Tensor) 2022-05-18T03:33:21.5258455Z processing existing schema: aten::pad_sequence(Tensor[] sequences, bool batch_first=False, float padding_value=0.) -> (Tensor) 2022-05-18T03:33:21.5260097Z processing existing schema: aten::flatten_dense_tensors(Tensor[] tensors) -> (Tensor) 2022-05-18T03:33:21.5262064Z processing existing schema: aten::unflatten_dense_tensors(Tensor flat, Tensor[] tensors) -> (Tensor[]) 2022-05-18T03:33:21.5264379Z processing existing schema: aten::nested_tensor(Tensor[] list, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 2022-05-18T03:33:21.5266351Z processing existing schema: aten::_sparse_broadcast_to(Tensor(a) self, int[] size) -> (Tensor(a)) 2022-05-18T03:33:21.5268382Z processing existing schema: aten::_resize_output_(Tensor(a!) self, int[] size, Device device) -> (Tensor(a!)) 2022-05-18T03:33:21.5270111Z processing existing schema: aten::_mkldnn_transpose_(Tensor(a!) self, int dim0, int dim1) -> (Tensor(a!)) 2022-05-18T03:33:21.5272267Z processing existing schema: aten::sparse_resize_(Tensor(a!) 
self, int[] size, int sparse_dim, int dense_dim) -> (Tensor(a!)) 2022-05-18T03:33:21.5273714Z processing existing schema: aten::values(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.5275827Z processing existing schema: aten::values.str(Dict(str, t) self) -> (t[](*)) 2022-05-18T03:33:21.5277822Z processing existing schema: aten::values.int(Dict(int, t) self) -> (t[](*)) 2022-05-18T03:33:21.5280141Z processing existing schema: aten::values.bool(Dict(bool, t) self) -> (t[](*)) 2022-05-18T03:33:21.5282073Z processing existing schema: aten::values.float(Dict(float, t) self) -> (t[](*)) 2022-05-18T03:33:21.5284175Z processing existing schema: aten::values.complex(Dict(complex, t) self) -> (t[](*)) 2022-05-18T03:33:21.5286090Z processing existing schema: aten::values.Tensor(Dict(Tensor, t) self) -> (t[](*)) 2022-05-18T03:33:21.5287612Z processing existing schema: aten::row_indices(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.5290107Z processing existing schema: aten::_amp_update_scale_(Tensor(a!) self, Tensor(b!) growth_tracker, Tensor found_inf, float scale_growth_factor, float scale_backoff_factor, int growth_interval) -> (Tensor(a!)) 2022-05-18T03:33:21.5292587Z processing existing schema: aten::_conv_depthwise2d.out(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias, int[2] stride, int[2] padding, int[2] dilation, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5294667Z processing existing schema: aten::_conv_depthwise2d(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias, int[2] stride, int[2] padding, int[2] dilation) -> (Tensor) 2022-05-18T03:33:21.5296161Z processing existing schema: aten::resize_as_sparse_(Tensor(a!) self, Tensor the_template) -> (Tensor(a!)) 2022-05-18T03:33:21.5298662Z processing existing schema: aten::sparse_resize_and_clear_(Tensor(a!) self, int[] size, int sparse_dim, int dense_dim) -> (Tensor(a!)) 2022-05-18T03:33:21.5300183Z processing existing schema: aten::_indices(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.5301604Z processing existing schema: aten::indices(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.5303842Z processing existing schema: aten::hspmm.out(Tensor mat1, Tensor mat2, *, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5305818Z processing existing schema: aten::hspmm(Tensor mat1, Tensor mat2) -> (Tensor) 2022-05-18T03:33:21.5307866Z processing existing schema: aten::sparse_sampled_addmm.out(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!)) 2022-05-18T03:33:21.5309528Z processing existing schema: aten::sparse_sampled_addmm(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor) 2022-05-18T03:33:21.5310983Z processing existing schema: aten::_values(Tensor(a) self) -> (Tensor(a)) 2022-05-18T03:33:21.5312619Z processing existing schema: aten::_coalesced_(Tensor(a!) self, bool coalesced) -> (Tensor(a!)) 2022-05-18T03:33:21.5314523Z processing existing schema: aten::_amp_foreach_non_finite_check_and_unscale_(Tensor[] self, Tensor(b!) found_inf, Tensor inv_scale) -> () 2022-05-18T03:33:21.5316161Z processing existing schema: aten::mkldnn_linear(Tensor self, Tensor weight, Tensor? 
bias=None) -> (Tensor) 2022-05-18T03:33:21.5317972Z processing existing schema: aten::mkldnn_linear_backward_input(int[] input_size, Tensor grad_output, Tensor weight) -> (Tensor) 2022-05-18T03:33:21.5319856Z processing existing schema: aten::mkldnn_linear_backward_weights(Tensor grad_output, Tensor input, Tensor weight, bool bias_defined) -> (Tensor, Tensor) 2022-05-18T03:33:21.5321832Z processing existing schema: aten::mkldnn_linear_backward(Tensor self, Tensor grad_output, Tensor weight, bool[3] output_mask) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.5324216Z processing existing schema: aten::mkldnn_max_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], int[2] dilation=[1, 1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:21.5327110Z processing existing schema: aten::mkldnn_max_pool2d_backward(Tensor grad_output, Tensor output, Tensor input, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], int[2] dilation=[1, 1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:21.5329678Z processing existing schema: aten::mkldnn_max_pool3d(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=[0, 0, 0], int[3] dilation=[1, 1, 1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:21.5332707Z processing existing schema: aten::mkldnn_max_pool3d_backward(Tensor grad_output, Tensor output, Tensor input, int[3] kernel_size, int[3] stride=[], int[3] padding=[0, 0, 0], int[3] dilation=[1, 1, 1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:21.5334309Z processing existing schema: aten::_mkldnn_reshape(Tensor self, int[] shape) -> (Tensor) 2022-05-18T03:33:21.5335610Z processing existing schema: aten::_mkldnn_transpose(Tensor self, int dim0, int dim1) -> (Tensor) 2022-05-18T03:33:21.5337153Z processing existing schema: aten::_to_dense(Tensor self, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.5339730Z processing existing schema: aten::mkldnn_reorder_conv2d_weight(Tensor self, int[2] padding=[0, 0], int[2] stride=[1, 1], int[2] dilation=[1, 1], int groups=1) -> (Tensor) 2022-05-18T03:33:21.5342453Z processing existing schema: aten::mkldnn_reorder_conv3d_weight(Tensor self, int[3] padding=[0, 0, 0], int[3] stride=[1, 1, 1], int[3] dilation=[1, 1, 1], int groups=1) -> (Tensor) 2022-05-18T03:33:21.5343887Z processing existing schema: aten::mkldnn_adaptive_avg_pool2d(Tensor self, int[2] output_size) -> (Tensor) 2022-05-18T03:33:21.5345503Z processing existing schema: aten::mkldnn_adaptive_avg_pool2d_backward(Tensor grad_output, Tensor self) -> (Tensor) 2022-05-18T03:33:21.5346983Z processing existing schema: aten::_nested_from_padded_and_nested_example(Tensor padded, Tensor nt_example) -> (Tensor) 2022-05-18T03:33:21.5348857Z processing existing schema: aten::to_padded_tensor(Tensor self, float padding, int[]? output_size=None) -> (Tensor) 2022-05-18T03:33:21.5350277Z schema: aten::_nested_tensor_layer_norm(Tensor self, Tensor? weight, Tensor? bias, float eps) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.5352080Z processing existing schema: aten::quantized_batch_norm(Tensor input, Tensor? weight, Tensor? 
bias, Tensor mean, Tensor var, float eps, float output_scale, int output_zero_point) -> (Tensor) 2022-05-18T03:33:21.5354630Z processing existing schema: aten::quantized_max_pool1d(Tensor self, int[1] kernel_size, int[1] stride=[], int[1] padding=[0], int[1] dilation=[1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:21.5356928Z processing existing schema: aten::quantized_max_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], int[2] dilation=[1, 1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:21.5357961Z processing existing schema: aten::q_scale(Tensor self) -> (float) 2022-05-18T03:33:21.5359478Z processing existing schema: aten::q_zero_point(Tensor self) -> (int) 2022-05-18T03:33:21.5360755Z processing existing schema: aten::q_per_channel_scales(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5362022Z processing existing schema: aten::q_per_channel_zero_points(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5362997Z processing existing schema: aten::q_per_channel_axis(Tensor self) -> (int) 2022-05-18T03:33:21.5364340Z processing existing schema: aten::int_repr(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5365571Z processing existing schema: aten::qscheme(Tensor self) -> (QScheme) 2022-05-18T03:33:21.5367867Z processing existing schema: aten::_use_cudnn_ctc_loss(Tensor log_probs, Tensor targets, int[] input_lengths, int[] target_lengths, int blank) -> (bool) 2022-05-18T03:33:21.5370418Z processing existing schema: aten::_cudnn_ctc_loss(Tensor log_probs, Tensor targets, int[] input_lengths, int[] target_lengths, int blank, bool deterministic, bool zero_infinity) -> (Tensor, Tensor) 2022-05-18T03:33:21.5372688Z processing existing schema: aten::_cudnn_rnn_flatten_weight(Tensor[] weight_arr, int weight_stride0, int input_size, int mode, int hidden_size, int proj_size, int num_layers, bool batch_first, bool bidirectional) -> (Tensor) 2022-05-18T03:33:21.5376382Z processing existing schema: aten::_cudnn_rnn(Tensor input, Tensor[] weight, int weight_stride0, Tensor? weight_buf, Tensor hx, Tensor? cx, int mode, int hidden_size, int proj_size, int num_layers, bool batch_first, float dropout, bool train, bool bidirectional, int[] batch_sizes, Tensor? dropout_state) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:21.5380986Z processing existing schema: aten::_cudnn_rnn_backward(Tensor input, Tensor[] weight, int weight_stride0, Tensor weight_buf, Tensor hx, Tensor? cx, Tensor output, Tensor? grad_output, Tensor? grad_hy, Tensor? grad_cy, int mode, int hidden_size, int proj_size, int num_layers, bool batch_first, float dropout, bool train, bool bidirectional, int[] batch_sizes, Tensor? 
dropout_state, Tensor reserve, bool[4] output_mask) -> (Tensor, Tensor, Tensor, Tensor[]) 2022-05-18T03:33:21.5381597Z processing existing schema: aten::_masked_scale(Tensor self, Tensor mask, float scale) -> (Tensor) 2022-05-18T03:33:21.5383197Z processing existing schema: aten::_copy_from(Tensor self, Tensor dst, bool non_blocking=False) -> (Tensor) 2022-05-18T03:33:21.5384636Z processing existing schema: aten::_copy_from_and_resize(Tensor self, Tensor dst) -> (Tensor) 2022-05-18T03:33:21.5387668Z processing existing schema: aten::_mps_convolution_transpose(Tensor self, Tensor weight, int[] padding, int[] output_padding, int[] stride, int[] dilation, int groups) -> (Tensor) 2022-05-18T03:33:21.5391038Z processing existing schema: aten::mps_convolution_transpose_backward(Tensor self, Tensor grad_output, Tensor weight, int[] padding, int[] output_padding, int[] stride, int[] dilation, int groups, bool[2] output_mask) -> (Tensor, Tensor) 2022-05-18T03:33:21.5392470Z processing existing schema: aten::cudnn_grid_sampler(Tensor self, Tensor grid) -> (Tensor output) 2022-05-18T03:33:21.5392736Z schema: profiler::_record_function_enter(str name, str? args=None) -> (Tensor) found on allowlist, skipping 2022-05-18T03:33:21.5394797Z processing existing schema: aten::cudnn_grid_sampler_backward(Tensor self, Tensor grid, Tensor grad_output) -> (Tensor grad_self, Tensor grad_grid) 2022-05-18T03:33:21.5395157Z schema: profiler::_record_function_enter_new(str name, str? args=None) -> (__torch__.torch.classes.profiler._RecordFunction) found on allowlist, skipping 2022-05-18T03:33:21.5396594Z processing existing schema: aten::_mps_linear(Tensor self, Tensor weight, Tensor? bias=None) -> (Tensor) 2022-05-18T03:33:21.5398518Z processing existing schema: aten::_mps_linear_backward_input(int[] input_size, Tensor grad_output, Tensor weight) -> (Tensor) 2022-05-18T03:33:21.5400527Z processing existing schema: aten::_mps_linear_backward_weights(Tensor grad_output, Tensor input, Tensor weight, bool bias_defined) -> (Tensor, Tensor) 2022-05-18T03:33:21.5402220Z processing existing schema: aten::mps_linear_backward(Tensor self, Tensor grad_output, Tensor weight, bool[3] output_mask) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.5404941Z processing existing schema: aten::_mps_max_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], int[2] dilation=[1, 1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:21.5407841Z processing existing schema: aten::mps_max_pool2d_backward(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], int[2] dilation=[1, 1], bool ceil_mode=False) -> (Tensor) 2022-05-18T03:33:21.5410449Z processing existing schema: aten::_mps_convolution(Tensor self, Tensor weight, Tensor? bias, int[] padding, int[] stride, int[] dilation, int groups) -> (Tensor) 2022-05-18T03:33:21.5413626Z processing existing schema: aten::mps_convolution_backward(Tensor self, Tensor grad_output, Tensor weight, int[] padding, int[] stride, int[] dilation, int groups, bool[3] output_mask) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.5415709Z processing existing schema: aten::miopen_batch_norm(Tensor input, Tensor weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool training, float exponential_average_factor, float epsilon) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.5418025Z processing existing schema: aten::miopen_batch_norm_backward(Tensor input, Tensor grad_output, Tensor weight, Tensor? running_mean, Tensor? 
running_var, Tensor? save_mean, Tensor? save_var, float epsilon) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.5420950Z processing existing schema: aten::miopen_convolution(Tensor self, Tensor weight, Tensor? bias, int[] padding, int[] stride, int[] dilation, int groups, bool benchmark, bool deterministic) -> (Tensor) 2022-05-18T03:33:21.5424324Z processing existing schema: aten::miopen_convolution_transpose(Tensor self, Tensor weight, Tensor? bias, int[] padding, int[] output_padding, int[] stride, int[] dilation, int groups, bool benchmark, bool deterministic) -> (Tensor) 2022-05-18T03:33:21.5427418Z processing existing schema: aten::miopen_depthwise_convolution(Tensor self, Tensor weight, Tensor? bias, int[] padding, int[] stride, int[] dilation, int groups, bool benchmark, bool deterministic) -> (Tensor) 2022-05-18T03:33:21.5430884Z processing existing schema: aten::miopen_rnn(Tensor input, Tensor[] weight, int weight_stride0, Tensor hx, Tensor? cx, int mode, int hidden_size, int num_layers, bool batch_first, float dropout, bool train, bool bidirectional, int[] batch_sizes, Tensor? dropout_state) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:21.5435154Z processing existing schema: aten::miopen_rnn_backward(Tensor input, Tensor[] weight, int weight_stride0, Tensor weight_buf, Tensor hx, Tensor? cx, Tensor output, Tensor? grad_output, Tensor? grad_hy, Tensor? grad_cy, int mode, int hidden_size, int num_layers, bool batch_first, float dropout, bool train, bool bidirectional, int[] batch_sizes, Tensor? dropout_state, Tensor reserve, bool[4] output_mask) -> (Tensor, Tensor, Tensor, Tensor[]) 2022-05-18T03:33:21.5436069Z processing existing schema: aten::_sparse_sparse_matmul(Tensor self, Tensor other) -> (Tensor) 2022-05-18T03:33:21.5437103Z processing existing schema: aten::_sparse_mask_helper(Tensor t, Tensor mask_indices) -> (Tensor) 2022-05-18T03:33:21.5438312Z processing existing schema: aten::native_norm(Tensor self, Scalar p=2) -> (Tensor) 2022-05-18T03:33:21.5440695Z processing existing schema: aten::native_norm.ScalarOpt_dim_dtype(Tensor self, Scalar? p, int[1] dim, bool keepdim, int? dtype) -> (Tensor) 2022-05-18T03:33:21.5442219Z processing existing schema: aten::_sparse_sum_backward(Tensor grad, Tensor self, int[] dim) -> (Tensor) 2022-05-18T03:33:21.5443884Z processing existing schema: aten::_sparse_csr_sum.dim_dtype(Tensor self, int[1] dim, bool keepdim=False, *, int? dtype=None) -> (Tensor) 2022-05-18T03:33:21.5445646Z processing existing schema: aten::_sparse_csr_prod.dim_dtype(Tensor self, int[1] dim, bool keepdim=False, *, int? 
dtype=None) -> (Tensor) 2022-05-18T03:33:21.5447290Z processing existing schema: aten::_sparse_softmax_backward_data(Tensor grad_output, Tensor output, int dim, Tensor self) -> (Tensor) 2022-05-18T03:33:21.5448520Z processing existing schema: aten::_sparse_log_softmax_backward_data(Tensor grad_output, Tensor output, int dim, Tensor self) -> (Tensor) 2022-05-18T03:33:21.5449651Z processing existing schema: aten::sparse_mask(Tensor self, Tensor mask) -> (Tensor) 2022-05-18T03:33:21.5451162Z processing existing schema: aten::sparse_dim(Tensor self) -> (int) 2022-05-18T03:33:21.5452171Z processing existing schema: aten::_dimI(Tensor self) -> (int) 2022-05-18T03:33:21.5453529Z processing existing schema: aten::dense_dim(Tensor self) -> (int) 2022-05-18T03:33:21.5455017Z processing existing schema: aten::cpu(Tensor(a) self) -> (Tensor(a|b)) 2022-05-18T03:33:21.5456256Z processing existing schema: aten::_dimV(Tensor self) -> (int) 2022-05-18T03:33:21.5457458Z processing existing schema: aten::_nnz(Tensor self) -> (int) 2022-05-18T03:33:21.5458823Z processing existing schema: aten::_coalesce(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5461828Z processing existing schema: aten::_lstm_mps(Tensor input, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:21.5465635Z processing existing schema: aten::lstm_mps_backward(Tensor grad_y, Tensor? grad_hy, Tensor? grad_cy, Tensor z_state, Tensor cell_state_fwd, Tensor input, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor[], Tensor[]) 2022-05-18T03:33:21.5467518Z processing existing schema: aten::_thnn_fused_lstm_cell_backward_impl(Tensor? grad_hy, Tensor? grad_cy, Tensor cx, Tensor cy, Tensor workspace, bool has_bias) -> (Tensor, Tensor, Tensor) 2022-05-18T03:33:21.5469085Z processing existing schema: aten::_thnn_fused_gru_cell_backward(Tensor grad_hy, Tensor workspace, bool has_bias) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 2022-05-18T03:33:21.5470235Z processing existing schema: aten::_torch_cuda_cu_linker_symbol_op(Tensor self) -> (Tensor) 2022-05-18T03:33:21.5471722Z processing existing schema: aten::record_stream(Tensor(a!) self, Stream s) -> () 2022-05-18T03:33:21.5473065Z processing existing schema: aten::str(t elem) -> (str) 2022-05-18T03:33:21.5474620Z processing existing schema: aten::list(str t) -> (str[]) 2022-05-18T03:33:21.5476458Z processing existing schema: aten::list.t(t[] l) -> (t[]) 2022-05-18T03:33:21.5477781Z processing existing schema: prim::layout(Tensor a) -> (int) 2022-05-18T03:33:21.5479452Z processing existing schema: aten::__range_length(int lo, int hi, int step) -> (int) 2022-05-18T03:33:21.5480954Z processing existing schema: aten::__derive_index(int index, int start, int step) -> (int) 2022-05-18T03:33:21.5482955Z processing existing schema: prim::TupleUnpack(Any tup) -> (...) 
2022-05-18T03:33:21.5483587Z processing existing schema: prim::unchecked_cast(t x) -> (t) 2022-05-18T03:33:21.5484806Z processing existing schema: aten::IntImplicit(Tensor a) -> (int) 2022-05-18T03:33:21.5486023Z processing existing schema: aten::ComplexImplicit(Tensor a) -> (complex) 2022-05-18T03:33:21.5487301Z processing existing schema: aten::FloatImplicit(Tensor a) -> (float) 2022-05-18T03:33:21.5488484Z processing existing schema: aten::ScalarImplicit(Tensor a) -> (Scalar) 2022-05-18T03:33:21.5489779Z processing existing schema: aten::Bool.Tensor(Tensor a) -> (bool) 2022-05-18T03:33:21.5491056Z processing existing schema: aten::Bool.int(int a) -> (bool) 2022-05-18T03:33:21.5492318Z processing existing schema: aten::Bool.float(float a) -> (bool) 2022-05-18T03:33:21.5493609Z processing existing schema: aten::Int.Tensor(Tensor a) -> (int) 2022-05-18T03:33:21.5494965Z processing existing schema: aten::Int.bool(bool a) -> (int) 2022-05-18T03:33:21.5496186Z processing existing schema: aten::Int.float(float a) -> (int) 2022-05-18T03:33:21.5497579Z processing existing schema: aten::Int.Scalar(Scalar a) -> (int) 2022-05-18T03:33:21.5498736Z processing existing schema: aten::Int.str(str a) -> (int) 2022-05-18T03:33:21.5499942Z processing existing schema: aten::Float.Tensor(Tensor a) -> (float) 2022-05-18T03:33:21.5501906Z processing existing schema: aten::Float.Scalar(Scalar a) -> (float) 2022-05-18T03:33:21.5502518Z processing existing schema: aten::Float.int(int a) -> (float) 2022-05-18T03:33:21.5503727Z processing existing schema: aten::Float.bool(bool a) -> (float) 2022-05-18T03:33:21.5505268Z processing existing schema: aten::Float.str(str a) -> (float) 2022-05-18T03:33:21.5506591Z processing existing schema: aten::Complex.Scalar(Scalar a) -> (complex) 2022-05-18T03:33:21.5507948Z processing existing schema: aten::Complex.Tensor_Tensor(Tensor a, Tensor b) -> (complex) 2022-05-18T03:33:21.5509218Z processing existing schema: aten::Complex.int_bool(int x, bool y) -> (complex) 2022-05-18T03:33:21.5510486Z processing existing schema: aten::Complex.bool_int(bool x, int y) -> (complex) 2022-05-18T03:33:21.5511877Z processing existing schema: aten::Complex.float_bool(float x, bool y) -> (complex) 2022-05-18T03:33:21.5513178Z processing existing schema: aten::Complex.bool_float(bool x, float y) -> (complex) 2022-05-18T03:33:21.5514479Z processing existing schema: aten::Complex.float_int(float x, int y) -> (complex) 2022-05-18T03:33:21.5515789Z processing existing schema: aten::Complex.int_float(int x, float y) -> (complex) 2022-05-18T03:33:21.5517131Z processing existing schema: aten::Complex.int_int(int x, int y) -> (complex) 2022-05-18T03:33:21.5518475Z processing existing schema: aten::Complex.bool_bool(bool x, bool y) -> (complex) 2022-05-18T03:33:21.5520097Z processing existing schema: aten::Complex.float_float(float x, float y) -> (complex) 2022-05-18T03:33:21.5521514Z processing existing schema: aten::Complex.Tensor_float(Tensor x, float y) -> (complex) 2022-05-18T03:33:21.5522907Z processing existing schema: aten::Complex.float_Tensor(float x, Tensor y) -> (complex) 2022-05-18T03:33:21.5524464Z processing existing schema: aten::Complex.Tensor_int(Tensor x, int y) -> (complex) 2022-05-18T03:33:21.5526067Z processing existing schema: aten::Complex.int_Tensor(int x, Tensor y) -> (complex) 2022-05-18T03:33:21.5527404Z processing existing schema: aten::Complex.Tensor_bool(Tensor x, bool y) -> (complex) 2022-05-18T03:33:21.5528809Z processing existing schema: aten::Complex.bool_Tensor(bool x, 
Tensor y) -> (complex) 2022-05-18T03:33:21.5530060Z processing existing schema: aten::format(str self, ...) -> (str) 2022-05-18T03:33:21.5531406Z processing existing schema: prim::NumToTensor.Scalar(Scalar a) -> (Tensor) 2022-05-18T03:33:21.5532954Z processing existing schema: prim::NumToTensor.bool(bool a) -> (Tensor) 2022-05-18T03:33:21.5534393Z processing existing schema: prim::RaiseException(str msg, str? cls=None) -> () 2022-05-18T03:33:21.5535702Z processing existing schema: prim::EnumName(AnyEnumType enum) -> (str) 2022-05-18T03:33:21.5537101Z processing existing schema: prim::EnumValue.int(AnyEnumType enum) -> (int) 2022-05-18T03:33:21.5538486Z processing existing schema: prim::EnumValue.float(AnyEnumType enum) -> (float) 2022-05-18T03:33:21.5539873Z processing existing schema: prim::EnumValue.str(AnyEnumType enum) -> (str) 2022-05-18T03:33:21.5541250Z processing existing schema: prim::TupleIndex(Any tup, int i) -> (Any) 2022-05-18T03:33:21.5543062Z processing existing schema: prim::unchecked_unwrap_optional(t(a)? optional) -> (t(a)) 2022-05-18T03:33:21.5544267Z processing existing schema: prim::device(Tensor a) -> (Device) 2022-05-18T03:33:21.5545707Z processing existing schema: prim::dtype(Tensor a) -> (int) 2022-05-18T03:33:21.5547308Z processing existing schema: aten::__not__(bool self) -> (bool) 2022-05-18T03:33:21.5548662Z processing existing schema: aten::__is__(t1 self, t2 obj) -> (bool) 2022-05-18T03:33:21.5550209Z processing existing schema: aten::__isnot__(t1 self, t2 obj) -> (bool) 2022-05-18T03:33:21.5551437Z processing existing schema: aten::dim(Tensor self) -> (int) 2022-05-18T03:33:21.5553645Z processing existing schema: aten::__getitem__.t(t[](a) list, int idx) -> (t(*)) 2022-05-18T03:33:21.5555057Z processing existing schema: aten::__getitem__.str(str s, int index) -> (str) 2022-05-18T03:33:21.5557173Z processing existing schema: aten::__getitem__.Dict_str(Dict(str, t) self, str key) -> (t(*)) 2022-05-18T03:33:21.5559320Z processing existing schema: aten::__getitem__.Dict_int(Dict(int, t) self, int key) -> (t(*)) 2022-05-18T03:33:21.5561497Z processing existing schema: aten::__getitem__.Dict_bool(Dict(bool, t) self, bool key) -> (t(*)) 2022-05-18T03:33:21.5563439Z processing existing schema: aten::__getitem__.Dict_float(Dict(float, t) self, float key) -> (t(*)) 2022-05-18T03:33:21.5565539Z processing existing schema: aten::__getitem__.Dict_complex(Dict(complex, t) self, complex key) -> (t(*)) 2022-05-18T03:33:21.5567333Z processing existing schema: aten::__getitem__.Dict_Tensor(Dict(Tensor, t) self, Tensor key) -> (t(*)) 2022-05-18T03:33:21.5569839Z processing existing schema: aten::append.t(t[](a!) self, t(c -> *) el) -> (t[](a!)) 2022-05-18T03:33:21.5572222Z processing existing schema: aten::_set_item.t(t[](a!) l, int idx, t(b -> *) el) -> (t[](a!)) 2022-05-18T03:33:21.5574541Z processing existing schema: aten::_set_item.str(Dict(str, t)(a!) l, str(b -> *) idx, t(c -> *) v) -> () 2022-05-18T03:33:21.5576816Z processing existing schema: aten::_set_item.int(Dict(int, t)(a!) l, int(b -> *) idx, t(c -> *) v) -> () 2022-05-18T03:33:21.5579049Z processing existing schema: aten::_set_item.bool(Dict(bool, t)(a!) l, bool(b -> *) idx, t(c -> *) v) -> () 2022-05-18T03:33:21.5581254Z processing existing schema: aten::_set_item.float(Dict(float, t)(a!) l, float(b -> *) idx, t(c -> *) v) -> () 2022-05-18T03:33:21.5583783Z processing existing schema: aten::_set_item.complex(Dict(complex, t)(a!) 
l, complex(b -> *) idx, t(c -> *) v) -> () 2022-05-18T03:33:21.5586059Z processing existing schema: aten::_set_item.Tensor(Dict(Tensor, t)(a!) l, Tensor(b -> *) idx, t(c -> *) v) -> () 2022-05-18T03:33:21.5587739Z processing existing schema: aten::clear.t(t[](a!) self) -> () 2022-05-18T03:33:21.5589478Z processing existing schema: aten::clear.str(Dict(str, t)(a!) self) -> () 2022-05-18T03:33:21.5591174Z processing existing schema: aten::clear.int(Dict(int, t)(a!) self) -> () 2022-05-18T03:33:21.5593034Z processing existing schema: aten::clear.bool(Dict(bool, t)(a!) self) -> () 2022-05-18T03:33:21.5595006Z processing existing schema: aten::clear.float(Dict(float, t)(a!) self) -> () 2022-05-18T03:33:21.5597012Z processing existing schema: aten::clear.complex(Dict(complex, t)(a!) self) -> () 2022-05-18T03:33:21.5598614Z processing existing schema: aten::clear.Tensor(Dict(Tensor, t)(a!) self) -> () 2022-05-18T03:33:21.5600589Z processing existing schema: aten::Delete.t(t[](a!) self, int idx) -> () 2022-05-18T03:33:21.5602501Z processing existing schema: aten::Delete.Dict_str(Dict(str, t)(a!) self, str key) -> () 2022-05-18T03:33:21.5604244Z processing existing schema: aten::Delete.Dict_int(Dict(int, t)(a!) self, int key) -> () 2022-05-18T03:33:21.5606136Z processing existing schema: aten::Delete.Dict_bool(Dict(bool, t)(a!) self, bool key) -> () 2022-05-18T03:33:21.5607981Z processing existing schema: aten::Delete.Dict_float(Dict(float, t)(a!) self, float key) -> () 2022-05-18T03:33:21.5609913Z processing existing schema: aten::Delete.Dict_complex(Dict(complex, t)(a!) self, complex key) -> () 2022-05-18T03:33:21.5611786Z processing existing schema: aten::Delete.Dict_Tensor(Dict(Tensor, t)(a!) self, Tensor key) -> () 2022-05-18T03:33:21.5613879Z processing existing schema: aten::insert.t(t[](a!) self, int idx, t(b -> *) el) -> () 2022-05-18T03:33:21.5615803Z processing existing schema: aten::pop.t(t[](a!) self, int idx=-1) -> (t(*)) 2022-05-18T03:33:21.5617853Z processing existing schema: aten::pop.Dict_str(Dict(str, t)(a!) self, str key) -> (t(*)) 2022-05-18T03:33:21.5619999Z processing existing schema: aten::pop.Dict_default_str(Dict(str, t)(a!) self, str key, t default_value) -> (t(*)) 2022-05-18T03:33:21.5621919Z processing existing schema: aten::pop.Dict_int(Dict(int, t)(a!) self, int key) -> (t(*)) 2022-05-18T03:33:21.5624009Z processing existing schema: aten::pop.Dict_default_int(Dict(int, t)(a!) self, int key, t default_value) -> (t(*)) 2022-05-18T03:33:21.5626054Z processing existing schema: aten::pop.Dict_bool(Dict(bool, t)(a!) self, bool key) -> (t(*)) 2022-05-18T03:33:21.5628242Z processing existing schema: aten::pop.Dict_default_bool(Dict(bool, t)(a!) self, bool key, t default_value) -> (t(*)) 2022-05-18T03:33:21.5630165Z processing existing schema: aten::pop.Dict_float(Dict(float, t)(a!) self, float key) -> (t(*)) 2022-05-18T03:33:21.5632298Z processing existing schema: aten::pop.Dict_default_float(Dict(float, t)(a!) self, float key, t default_value) -> (t(*)) 2022-05-18T03:33:21.5634359Z processing existing schema: aten::pop.Dict_complex(Dict(complex, t)(a!) self, complex key) -> (t(*)) 2022-05-18T03:33:21.5636558Z processing existing schema: aten::pop.Dict_default_complex(Dict(complex, t)(a!) self, complex key, t default_value) -> (t(*)) 2022-05-18T03:33:21.5638543Z processing existing schema: aten::pop.Dict_Tensor(Dict(Tensor, t)(a!) self, Tensor key) -> (t(*)) 2022-05-18T03:33:21.5640856Z processing existing schema: aten::pop.Dict_default_Tensor(Dict(Tensor, t)(a!) 
self, Tensor key, t default_value) -> (t(*)) 2022-05-18T03:33:21.5642406Z processing existing schema: aten::len.t(t[] a) -> (int) 2022-05-18T03:33:21.5643850Z processing existing schema: aten::len.Tensor(Tensor t) -> (int) 2022-05-18T03:33:21.5644856Z processing existing schema: aten::len.str(str s) -> (int) 2022-05-18T03:33:21.5646472Z processing existing schema: aten::len.Dict_str(Dict(str, t) self) -> (int) 2022-05-18T03:33:21.5648198Z processing existing schema: aten::len.Dict_int(Dict(int, t) self) -> (int) 2022-05-18T03:33:21.5649713Z processing existing schema: aten::len.Dict_bool(Dict(bool, t) self) -> (int) 2022-05-18T03:33:21.5651277Z processing existing schema: aten::len.Dict_float(Dict(float, t) self) -> (int) 2022-05-18T03:33:21.5653005Z processing existing schema: aten::len.Dict_complex(Dict(complex, t) self) -> (int) 2022-05-18T03:33:21.5654597Z processing existing schema: aten::len.Dict_Tensor(Dict(Tensor, t) self) -> (int) 2022-05-18T03:33:21.5656089Z processing existing schema: aten::len.any(Any[] a) -> (int) 2022-05-18T03:33:21.5657151Z processing existing schema: prim::Uninitialized() -> (Any) 2022-05-18T03:33:21.5658230Z processing existing schema: prim::Print(...) -> () 2022-05-18T03:33:21.5659617Z processing existing schema: prim::VarConcat(...) -> (Tensor) 2022-05-18T03:33:21.5660709Z processing existing schema: prim::VarStack(...) -> (Tensor) 2022-05-18T03:33:21.5662770Z processing existing schema: prim::IfThenElse(bool cond, Any(a) x, Any(b) y) -> (Any(a|b)) 2022-05-18T03:33:21.5664210Z processing existing schema: aten::floordiv.int(int a, int b) -> (int) 2022-05-18T03:33:21.5665418Z processing existing schema: aten::floordiv.float(float a, float b) -> (float) 2022-05-18T03:33:21.5666996Z processing existing schema: aten::floordiv.int_float(int a, float b) -> (float) 2022-05-18T03:33:21.5668059Z processing existing schema: aten::floordiv.float_int(float a, int b) -> (float) 2022-05-18T03:33:21.5669480Z processing existing schema: aten::floordiv(Scalar a, Scalar b) -> (Scalar) 2022-05-18T03:33:21.5670580Z processing existing schema: prim::min.int(int a, int b) -> (int) 2022-05-18T03:33:21.5672015Z processing existing schema: prim::min.float(float a, float b) -> (float) 2022-05-18T03:33:21.5673134Z processing existing schema: prim::min.int_float(int a, float b) -> (float) 2022-05-18T03:33:21.5674589Z processing existing schema: prim::min.float_int(float a, int b) -> (float) 2022-05-18T03:33:21.5675709Z processing existing schema: prim::min(Scalar a, Scalar b) -> (Scalar) 2022-05-18T03:33:21.5678113Z processing existing schema: prim::min.int_list(int[] l, int[] r) -> (int[]) 2022-05-18T03:33:21.5679682Z processing existing schema: prim::min.self_int(int[] self) -> (int) 2022-05-18T03:33:21.5681819Z processing existing schema: prim::min.float_list(float[] l, float[] r) -> (float[]) 2022-05-18T03:33:21.5683414Z processing existing schema: prim::min.self_float(float[] self) -> (float) 2022-05-18T03:33:21.5685506Z processing existing schema: prim::min.bool_list(bool[] l, bool[] r) -> (bool[]) 2022-05-18T03:33:21.5687047Z processing existing schema: prim::min.self_bool(bool[] self) -> (bool) 2022-05-18T03:33:21.5688404Z processing existing schema: prim::max.int(int a, int b) -> (int) 2022-05-18T03:33:21.5689681Z processing existing schema: prim::max.float(float a, float b) -> (float) 2022-05-18T03:33:21.5690993Z processing existing schema: prim::max.int_float(int a, float b) -> (float) 2022-05-18T03:33:21.5692199Z processing existing schema: prim::max.float_int(float a, 
int b) -> (float) 2022-05-18T03:33:21.5693761Z processing existing schema: prim::max(Scalar a, Scalar b) -> (Scalar) 2022-05-18T03:33:21.5696056Z processing existing schema: prim::max.int_list(int[] l, int[] r) -> (int[]) 2022-05-18T03:33:21.5697600Z processing existing schema: prim::max.self_int(int[] self) -> (int) 2022-05-18T03:33:21.5700326Z processing existing schema: prim::max.float_list(float[] l, float[] r) -> (float[]) 2022-05-18T03:33:21.5701457Z processing existing schema: prim::max.self_float(float[] self) -> (float) 2022-05-18T03:33:21.5703654Z processing existing schema: prim::max.bool_list(bool[] l, bool[] r) -> (bool[]) 2022-05-18T03:33:21.5705142Z processing existing schema: prim::max.self_bool(bool[] self) -> (bool) 2022-05-18T03:33:21.5706372Z processing existing schema: aten::ord(str string) -> (int) 2022-05-18T03:33:21.5708304Z processing existing schema: aten::__contains__.int_list(int[] l, int item) -> (bool) 2022-05-18T03:33:21.5709639Z processing existing schema: aten::__contains__.str_list(str[] l, str item) -> (bool) 2022-05-18T03:33:21.5711453Z processing existing schema: aten::__contains__.str(Dict(str, t) dict, str key) -> (bool) 2022-05-18T03:33:21.5713059Z processing existing schema: aten::__contains__.int(Dict(int, t) dict, int key) -> (bool) 2022-05-18T03:33:21.5714722Z processing existing schema: aten::__contains__.bool(Dict(bool, t) dict, bool key) -> (bool) 2022-05-18T03:33:21.5716278Z processing existing schema: aten::__contains__.float(Dict(float, t) dict, float key) -> (bool) 2022-05-18T03:33:21.5718125Z processing existing schema: aten::__contains__.complex(Dict(complex, t) dict, complex key) -> (bool) 2022-05-18T03:33:21.5719851Z processing existing schema: aten::__contains__.Tensor(Dict(Tensor, t) dict, Tensor key) -> (bool) 2022-05-18T03:33:21.5721262Z processing existing schema: aten::__contains__.float_list(float[] l, float item) -> (bool) 2022-05-18T03:33:21.5722867Z processing existing schema: aten::dict() -> (Dict(str, Tensor)) 2022-05-18T03:33:21.5725110Z processing existing schema: aten::dict.str((str, tVal)[] inputs) -> (Dict(str, tVal)) 2022-05-18T03:33:21.5727132Z processing existing schema: aten::dict.Dict_str(Dict(str, t)(a) self) -> (Dict(str, t)) 2022-05-18T03:33:21.5729331Z processing existing schema: aten::dict.int((int, tVal)[] inputs) -> (Dict(int, tVal)) 2022-05-18T03:33:21.5731341Z processing existing schema: aten::dict.Dict_int(Dict(int, t)(a) self) -> (Dict(int, t)) 2022-05-18T03:33:21.5733597Z processing existing schema: aten::dict.bool((bool, tVal)[] inputs) -> (Dict(bool, tVal)) 2022-05-18T03:33:21.5735611Z processing existing schema: aten::dict.Dict_bool(Dict(bool, t)(a) self) -> (Dict(bool, t)) 2022-05-18T03:33:21.5737878Z processing existing schema: aten::dict.float((float, tVal)[] inputs) -> (Dict(float, tVal)) 2022-05-18T03:33:21.5739910Z processing existing schema: aten::dict.Dict_float(Dict(float, t)(a) self) -> (Dict(float, t)) 2022-05-18T03:33:21.5742333Z processing existing schema: aten::dict.complex((complex, tVal)[] inputs) -> (Dict(complex, tVal)) 2022-05-18T03:33:21.5744624Z processing existing schema: aten::dict.Dict_complex(Dict(complex, t)(a) self) -> (Dict(complex, t)) 2022-05-18T03:33:21.5746907Z processing existing schema: aten::dict.Tensor((Tensor, tVal)[] inputs) -> (Dict(Tensor, tVal)) 2022-05-18T03:33:21.5749319Z processing existing schema: aten::dict.Dict_Tensor(Dict(Tensor, t)(a) self) -> (Dict(Tensor, t)) 2022-05-18T03:33:21.5750919Z processing existing schema: aten::backward(Tensor self, 
Tensor? gradient=None, bool? retain_graph=None, bool create_graph=False) -> () 2022-05-18T03:33:21.5753786Z processing existing schema: aten::backward.TensorList(Tensor[] tensors, Tensor?[]? grad_tensors=None, bool? retain_graph=None, bool create_graph=False) -> () 2022-05-18T03:33:21.5754398Z processing existing schema: prim::is_cuda(Tensor a) -> (bool) 2022-05-18T03:33:21.5755514Z processing existing schema: prim::tolist(...) -> (...) 2022-05-18T03:33:21.5757701Z processing existing schema: aten::keys.str(Dict(str, t) self) -> (str[](*)) 2022-05-18T03:33:21.5760675Z processing existing schema: aten::keys.int(Dict(int, t) self) -> (int[](*)) 2022-05-18T03:33:21.5763178Z processing existing schema: aten::keys.bool(Dict(bool, t) self) -> (bool[](*)) 2022-05-18T03:33:21.5766292Z processing existing schema: aten::keys.float(Dict(float, t) self) -> (float[](*)) 2022-05-18T03:33:21.5769308Z processing existing schema: aten::keys.complex(Dict(complex, t) self) -> (complex[](*)) 2022-05-18T03:33:21.5772182Z processing existing schema: aten::keys.Tensor(Dict(Tensor, t) self) -> (Tensor[](*)) 2022-05-18T03:33:21.5775604Z processing existing schema: aten::setdefault.str(Dict(str, t)(a!) self, str(b -> *) key, t(c -> *) default_value) -> (t(*)) 2022-05-18T03:33:21.5778727Z processing existing schema: aten::setdefault.int(Dict(int, t)(a!) self, int(b -> *) key, t(c -> *) default_value) -> (t(*)) 2022-05-18T03:33:21.5782054Z processing existing schema: aten::setdefault.bool(Dict(bool, t)(a!) self, bool(b -> *) key, t(c -> *) default_value) -> (t(*)) 2022-05-18T03:33:21.5785208Z processing existing schema: aten::setdefault.float(Dict(float, t)(a!) self, float(b -> *) key, t(c -> *) default_value) -> (t(*)) 2022-05-18T03:33:21.5788474Z processing existing schema: aten::setdefault.complex(Dict(complex, t)(a!) self, complex(b -> *) key, t(c -> *) default_value) -> (t(*)) 2022-05-18T03:33:21.5791731Z processing existing schema: aten::setdefault.Tensor(Dict(Tensor, t)(a!) self, Tensor(b -> *) key, t(c -> *) default_value) -> (t(*)) 2022-05-18T03:33:21.5793949Z processing existing schema: aten::find(str self, str substr, int start=0, int end=-1) -> (int) 2022-05-18T03:33:21.5795997Z processing existing schema: prim::rangelist(int n) -> (int[]) 2022-05-18T03:33:21.5797744Z processing existing schema: aten::device(str a) -> (Device) 2022-05-18T03:33:21.5799867Z processing existing schema: aten::percentFormat(str self, ...) -> (str) 2022-05-18T03:33:21.5801486Z processing existing schema: prim::requires_grad(Tensor a) -> (bool) 2022-05-18T03:33:21.5803716Z processing existing schema: prim::grad(Tensor a) -> (Tensor(*)) 2022-05-18T03:33:21.5804846Z processing existing schema: prim::is_nested(Tensor a) -> (bool) 2022-05-18T03:33:21.5806988Z processing existing schema: aten::manual_seed(int seed) -> () 2022-05-18T03:33:21.5808299Z processing existing schema: prim::AutogradZero() -> (Tensor) 2022-05-18T03:33:21.5811762Z processing existing schema: prim::ReductionSizes(int[] size, int[] red_axes, bool keepdim=False) -> (int[]) 2022-05-18T03:33:21.5813230Z processing existing schema: prim::BroadcastSizes(...) -> (int[]) 2022-05-18T03:33:21.5815658Z processing existing schema: aten::warn(str message, int stacklevel=2) -> () 2022-05-18T03:33:21.5817038Z processing existing schema: onnx::Reshape(Tensor input, Tensor shape) -> (Tensor) 2022-05-18T03:33:21.5819365Z processing existing schema: onnx::Shape(Tensor t) -> (Tensor) 2022-05-18T03:33:21.5820739Z processing existing schema: prim::AutogradAnyNonZero(...) 
-> (bool) 2022-05-18T03:33:21.5821239Z processing existing schema: prim::AutogradAllZero(...) -> (bool) 2022-05-18T03:33:21.5822675Z processing existing schema: prim::AutogradAllNonZero(...) -> (bool) 2022-05-18T03:33:21.5823985Z processing existing schema: prim::AutogradAdd(Any a, Any b) -> (Any) 2022-05-18T03:33:21.5826404Z processing existing schema: aten::_size_if_not_equal(int[] self_size, int[] other_size) -> (int[]?) 2022-05-18T03:33:21.5827916Z processing existing schema: aten::_unwrap_optional(t(a)? optional) -> (t(a)) 2022-05-18T03:33:21.5830007Z processing existing schema: aten::sorted.int(int[](a) input) -> (int[]) 2022-05-18T03:33:21.5832177Z processing existing schema: aten::sorted.float(float[](a) input) -> (float[]) 2022-05-18T03:33:21.5834199Z processing existing schema: aten::sorted.Tensor(Tensor[](a) input) -> (Tensor[]) 2022-05-18T03:33:21.5836290Z processing existing schema: aten::sorted.bool(bool[](a) input) -> (bool[]) 2022-05-18T03:33:21.5838403Z processing existing schema: aten::sorted.str(str[](a) input) -> (str[]) 2022-05-18T03:33:21.5840608Z processing existing schema: aten::sorted.any(t[](a) self) -> (t[]) 2022-05-18T03:33:21.5841681Z processing existing schema: aten::hex(int i) -> (str) 2022-05-18T03:33:21.5842892Z processing existing schema: aten::oct(int i) -> (str) 2022-05-18T03:33:21.5844388Z processing existing schema: aten::bin(int i) -> (str) 2022-05-18T03:33:21.5845833Z processing existing schema: prim::StringIndex(str string, int index) -> (str) 2022-05-18T03:33:21.5846921Z processing existing schema: aten::chr(int i) -> (str) 2022-05-18T03:33:21.5848481Z processing existing schema: aten::__round_to_zero_floordiv.int(int a, int b) -> (int) 2022-05-18T03:33:21.5850719Z processing existing schema: __getstate__(__torch__.torch.classes.quantized.LinearPackedParamsBase _0) -> ((Tensor, Tensor?) _0) 2022-05-18T03:33:21.5853063Z processing existing schema: __setstate__(__torch__.torch.classes.quantized.LinearPackedParamsBase _0, (Tensor, Tensor?) _1) -> (NoneType _0) 2022-05-18T03:33:21.5854250Z processing existing schema: bias(__torch__.torch.classes.quantized.LinearPackedParamsBase _0) -> (Tensor? _0) 2022-05-18T03:33:21.5856536Z processing existing schema: unpack(__torch__.torch.classes.quantized.LinearPackedParamsBase _0) -> ((Tensor, Tensor?) 
_0) 2022-05-18T03:33:21.5859988Z processing existing schema: __getstate__(__torch__.torch.classes.rnn.CellParamsBase _0) -> ((str, Tensor[], float[], int[], __torch__.torch.classes.quantized.LinearPackedParamsBase[]) _0) 2022-05-18T03:33:21.5863334Z processing existing schema: __setstate__(__torch__.torch.classes.rnn.CellParamsBase _0, (str, Tensor[], float[], int[], __torch__.torch.classes.quantized.LinearPackedParamsBase[]) _1) -> (NoneType _0) 2022-05-18T03:33:21.5865827Z processing existing schema: __getstate__(__torch__.torch.classes.sparse.LinearPackedParamsBase _0) -> ((Tensor, Tensor?, int[]) _0) 2022-05-18T03:33:21.5868755Z processing existing schema: __setstate__(__torch__.torch.classes.sparse.LinearPackedParamsBase _0, (Tensor, Tensor?, int[]) _1) -> (NoneType _0) 2022-05-18T03:33:21.5871219Z processing existing schema: __getstate__(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> ((str, Tensor[], Tensor?[]) _0) 2022-05-18T03:33:21.5872574Z processing existing schema: __setstate__(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0, Any _1) -> (NoneType _0) 2022-05-18T03:33:21.5874166Z processing existing schema: weight(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> (Tensor _0) 2022-05-18T03:33:21.5875416Z processing existing schema: bias(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> (Tensor? _0) 2022-05-18T03:33:21.5877691Z processing existing schema: unpack(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> ((Tensor, Tensor?) _0) 2022-05-18T03:33:21.5879649Z processing existing schema: stride(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> (int[] _0) 2022-05-18T03:33:21.5881472Z processing existing schema: padding(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> (int[] _0) 2022-05-18T03:33:21.5883105Z processing existing schema: output_padding(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> (int[] _0) 2022-05-18T03:33:21.5884917Z processing existing schema: dilation(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> (int[] _0) 2022-05-18T03:33:21.5886118Z processing existing schema: groups(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> (int _0) 2022-05-18T03:33:21.5887735Z processing existing schema: transpose(__torch__.torch.classes.quantized.Conv2dPackedParamsBase _0) -> (bool _0) 2022-05-18T03:33:21.5891162Z processing existing schema: __getstate__(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> ((str, Tensor[], Tensor?[]) _0) 2022-05-18T03:33:21.5892330Z processing existing schema: __setstate__(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0, Any _1) -> (NoneType _0) 2022-05-18T03:33:21.5893411Z processing existing schema: weight(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> (Tensor _0) 2022-05-18T03:33:21.5894696Z processing existing schema: bias(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> (Tensor? _0) 2022-05-18T03:33:21.5896831Z processing existing schema: unpack(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> ((Tensor, Tensor?) 
_0) 2022-05-18T03:33:21.5898147Z processing existing schema: stride(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> (int[] _0) 2022-05-18T03:33:21.5899528Z processing existing schema: padding(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> (int[] _0) 2022-05-18T03:33:21.5901266Z processing existing schema: output_padding(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> (int[] _0) 2022-05-18T03:33:21.5911937Z processing existing schema: dilation(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> (int[] _0) 2022-05-18T03:33:21.5912814Z processing existing schema: groups(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> (int _0) 2022-05-18T03:33:21.5913437Z processing existing schema: transpose(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> (bool _0) 2022-05-18T03:33:21.5914018Z processing existing schema: __getstate__(__torch__.torch.classes.quantized.EmbeddingPackedParamsBase _0) -> ((int, Tensor[], float[], int[]) _0) 2022-05-18T03:33:21.5914694Z processing existing schema: __setstate__(__torch__.torch.classes.quantized.EmbeddingPackedParamsBase _0, (int, Tensor[], float[], int[]) _1) -> (NoneType _0) 2022-05-18T03:33:21.5915421Z processing existing schema: bit_rate(__torch__.torch.classes.quantized.EmbeddingPackedParamsBase _0) -> (int _0) 2022-05-18T03:33:21.5916039Z processing existing schema: version(__torch__.torch.classes.quantized.EmbeddingPackedParamsBase _0) -> (int _0) 2022-05-18T03:33:21.5916595Z processing existing schema: __getstate__(__torch__.torch.classes.xnnpack.LinearOpContext _0) -> ((Tensor, Tensor?, Scalar?, Scalar?) _0) 2022-05-18T03:33:21.5918866Z processing existing schema: __setstate__(__torch__.torch.classes.xnnpack.LinearOpContext _0, (Tensor, Tensor?, Scalar?, Scalar?) _1) -> (NoneType _0) 2022-05-18T03:33:21.5922669Z processing existing schema: __getstate__(__torch__.torch.classes.xnnpack.Conv2dOpContext _0) -> ((Tensor, Tensor?, int[], int[], int[], int, Scalar?, Scalar?) _0) 2022-05-18T03:33:21.5926330Z processing existing schema: __setstate__(__torch__.torch.classes.xnnpack.Conv2dOpContext _0, (Tensor, Tensor?, int[], int[], int[], int, Scalar?, Scalar?) _1) -> (NoneType _0) 2022-05-18T03:33:21.5930358Z processing existing schema: __getstate__(__torch__.torch.classes.xnnpack.TransposeConv2dOpContext _0) -> ((Tensor, Tensor?, int[], int[], int[], int[], int, Scalar?, Scalar?) _0) 2022-05-18T03:33:21.5934453Z processing existing schema: __setstate__(__torch__.torch.classes.xnnpack.TransposeConv2dOpContext _0, (Tensor, Tensor?, int[], int[], int[], int[], int, Scalar?, Scalar?) 
_1) -> (NoneType _0) 2022-05-18T03:33:21.5935405Z processing existing schema: __init__(__torch__.torch.classes._nnapi.Compilation _0) -> (NoneType _0) 2022-05-18T03:33:21.5937622Z processing existing schema: init(__torch__.torch.classes._nnapi.Compilation _0, Tensor _1, Tensor[] _2) -> (NoneType _0) 2022-05-18T03:33:21.5939889Z processing existing schema: run(__torch__.torch.classes._nnapi.Compilation _0, Tensor[] _1, Tensor[] _2) -> (NoneType _0) 2022-05-18T03:33:21.5941045Z processing existing schema: __init__(__torch__.torch.classes.backendutils.BackendDebugInfo _0) -> (NoneType _0) 2022-05-18T03:33:21.5942287Z processing existing schema: __init__(__torch__.torch.classes.__backends__.nnc _0) -> (NoneType _0) 2022-05-18T03:33:21.5943702Z processing existing schema: is_available(Any self) -> (bool available) 2022-05-18T03:33:21.5945988Z processing existing schema: compile(Any self, Any processed, Dict(str, Any) method_compile_spec) -> (Dict(str, Any) handles) 2022-05-18T03:33:21.5948009Z processing existing schema: execute(Any self, Any handle, Any[] input) -> (Any[] output) 2022-05-18T03:33:21.5949548Z processing existing schema: starting_lineno(__torch__.torch.classes.profiling.SourceRef _0) -> (int _0) 2022-05-18T03:33:21.5950949Z processing existing schema: text(__torch__.torch.classes.profiling.SourceRef _0) -> (str _0) 2022-05-18T03:33:21.5952468Z processing existing schema: count(__torch__.torch.classes.profiling.InstructionStats _0) -> (int _0) 2022-05-18T03:33:21.5953824Z processing existing schema: duration_ns(__torch__.torch.classes.profiling.InstructionStats _0) -> (int _0) 2022-05-18T03:33:21.5955486Z processing existing schema: source(__torch__.torch.classes.profiling.SourceStats _0) -> (__torch__.torch.classes.profiling.SourceRef _0) 2022-05-18T03:33:21.5957498Z processing existing schema: line_map(__torch__.torch.classes.profiling.SourceStats _0) -> (Dict(int, __torch__.torch.classes.profiling.InstructionStats) _0) 2022-05-18T03:33:21.5958937Z processing existing schema: __init__(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0) 2022-05-18T03:33:21.5960336Z processing existing schema: enable(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0) 2022-05-18T03:33:21.5961781Z processing existing schema: disable(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0) 2022-05-18T03:33:21.5963899Z processing existing schema: _dump_stats(__torch__.torch.classes.profiling._ScriptProfile _0) -> (__torch__.torch.classes.profiling.SourceStats[] _0) 2022-05-18T03:33:21.5965734Z processing existing schema: __init__(__torch__.torch.classes.dist_rpc.WorkerInfo _0, str _1, int _2) -> (NoneType _0) 2022-05-18T03:33:21.5966300Z Found backward compatible schemas for all existing schemas 2022-05-18T03:33:21.7222580Z + python ../load_torchscript_model.py /tmp/model_old.pt 2022-05-18T03:33:22.2868673Z RecursiveScriptModule( 2022-05-18T03:33:22.2869307Z original_name=NeuralNetwork 2022-05-18T03:33:22.2869790Z (flatten): RecursiveScriptModule(original_name=Flatten) 2022-05-18T03:33:22.2870118Z (linear_relu_stack): RecursiveScriptModule( 2022-05-18T03:33:22.2870346Z original_name=Sequential 2022-05-18T03:33:22.2870586Z (0): RecursiveScriptModule(original_name=Linear) 2022-05-18T03:33:22.2870915Z (1): RecursiveScriptModule(original_name=ReLU) 2022-05-18T03:33:22.2871163Z (2): RecursiveScriptModule(original_name=Linear) 2022-05-18T03:33:22.2871462Z (3): RecursiveScriptModule(original_name=ReLU) 2022-05-18T03:33:22.2871748Z (4): 
RecursiveScriptModule(original_name=Linear) 2022-05-18T03:33:22.2871934Z ) 2022-05-18T03:33:22.2872081Z ) 2022-05-18T03:33:22.3954548Z + popd 2022-05-18T03:33:22.3954764Z ~/workspace 2022-05-18T03:33:22.3954958Z + set +x 2022-05-18T03:33:22.9738321Z EXITED_USER_LAND 2022-05-18T03:33:22.9801448Z ##[group]Run pytorch/pytorch/.github/actions/get-workflow-job-id@master 2022-05-18T03:33:22.9801708Z with: 2022-05-18T03:33:22.9802115Z github-token: *** 2022-05-18T03:33:22.9802289Z env: 2022-05-18T03:33:22.9802431Z IN_CI: 1 2022-05-18T03:33:22.9802597Z IS_GHA: 1 2022-05-18T03:33:22.9802779Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:33:22.9802954Z ##[endgroup] 2022-05-18T03:33:22.9826644Z ##[group]Run nick-fields/retry@71062288b76e2b6214ebde0e673ce0de1755740a 2022-05-18T03:33:22.9826869Z with: 2022-05-18T03:33:22.9827029Z shell: bash 2022-05-18T03:33:22.9827202Z timeout_minutes: 10 2022-05-18T03:33:22.9827375Z max_attempts: 5 2022-05-18T03:33:22.9827555Z retry_wait_seconds: 30 2022-05-18T03:33:22.9827933Z command: set -x python3 -m pip install requests==2.26.0 GHA_WORKFLOW_JOB_ID=$(python3 .github/scripts/get_workflow_job_id.py "${GITHUB_RUN_ID}" "${RUNNER_NAME}") echo "::set-output name=job-id::${GHA_WORKFLOW_JOB_ID}" 2022-05-18T03:33:22.9828351Z polling_interval_seconds: 1 2022-05-18T03:33:22.9828537Z warning_on_retry: true 2022-05-18T03:33:22.9828726Z continue_on_error: false 2022-05-18T03:33:22.9828898Z env: 2022-05-18T03:33:22.9829035Z IN_CI: 1 2022-05-18T03:33:22.9829193Z IS_GHA: 1 2022-05-18T03:33:22.9829373Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:33:22.9829656Z GITHUB_TOKEN: *** 2022-05-18T03:33:22.9829830Z ##[endgroup] 2022-05-18T03:33:23.0147563Z 2022-05-18T03:33:23.0199979Z + python3 -m pip install requests==2.26.0 2022-05-18T03:33:23.2347763Z Defaulting to user installation because normal site-packages is not writeable 2022-05-18T03:33:23.2526767Z Requirement already satisfied: requests==2.26.0 in /home/ec2-user/.local/lib/python3.7/site-packages (2.26.0) 2022-05-18T03:33:23.2658935Z Requirement already satisfied: charset-normalizer~=2.0.0; python_version >= "3" in /home/ec2-user/.local/lib/python3.7/site-packages (from requests==2.26.0) (2.0.12) 2022-05-18T03:33:23.2680353Z Requirement already satisfied: idna<4,>=2.5; python_version >= "3" in /home/ec2-user/.local/lib/python3.7/site-packages (from requests==2.26.0) (3.3) 2022-05-18T03:33:23.2691580Z Requirement already satisfied: certifi>=2017.4.17 in /home/ec2-user/.local/lib/python3.7/site-packages (from requests==2.26.0) (2021.10.8) 2022-05-18T03:33:23.2699029Z Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/ec2-user/.local/lib/python3.7/site-packages (from requests==2.26.0) (1.26.9) 2022-05-18T03:33:23.3885445Z ++ python3 .github/scripts/get_workflow_job_id.py 2342799944 i-0dae033c09f631bd6 2022-05-18T03:33:25.2904510Z + GHA_WORKFLOW_JOB_ID=6482431953 2022-05-18T03:33:25.2905093Z + echo '::set-output name=job-id::6482431953' 2022-05-18T03:33:26.0227262Z Command completed after 1 attempt(s). 
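The `+ python ../load_torchscript_model.py /tmp/model_old.pt` step above reloads the previously saved TorchScript model after the schema compatibility check passes, and the RecursiveScriptModule tree in the output is simply the printed representation of that loaded module. The script's contents are not part of this log; a minimal, assumed sketch of a loader that would produce output like the above is:

# load_torchscript_model.py -- hypothetical sketch; the actual script is not shown in this log.
# Loads a serialized TorchScript module and prints it, which yields the
# RecursiveScriptModule(original_name=NeuralNetwork) tree seen in the output above.
import sys
import torch

def main() -> None:
    model_path = sys.argv[1]              # e.g. /tmp/model_old.pt, as passed on the command line above
    module = torch.jit.load(model_path)   # deserialize the ScriptModule saved earlier in the job
    print(module)                         # printing a ScriptModule shows its RecursiveScriptModule structure

if __name__ == "__main__":
    main()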
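The get-workflow-job-id step above installs requests==2.26.0, runs .github/scripts/get_workflow_job_id.py with the run id (2342799944) and runner name (i-0dae033c09f631bd6), and exports the resulting job id (6482431953) via ::set-output. The script itself is not reproduced here; one plausible shape for such a lookup, assuming it matches the runner name against the jobs listed by the GitHub Actions REST API, is:

# Hypothetical sketch of a workflow-job-id lookup; the real get_workflow_job_id.py may differ.
import os
import sys
import requests

def find_job_id(run_id: str, runner_name: str, token: str) -> int:
    # GET /repos/{owner}/{repo}/actions/runs/{run_id}/jobs lists the jobs of a run;
    # each entry carries an "id" and the "runner_name" that executed it.
    # Pagination is omitted for brevity in this sketch.
    url = f"https://api.github.com/repos/pytorch/pytorch/actions/runs/{run_id}/jobs"
    resp = requests.get(url, headers={"Authorization": f"token {token}"}, params={"per_page": 100})
    resp.raise_for_status()
    for job in resp.json()["jobs"]:
        if job.get("runner_name") == runner_name:
            return job["id"]
    raise RuntimeError(f"no job found for runner {runner_name}")

if __name__ == "__main__":
    run_id, runner_name = sys.argv[1], sys.argv[2]
    print(find_job_id(run_id, runner_name, os.environ["GITHUB_TOKEN"]))  # token comes from the step env above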
2022-05-18T03:33:26.0227569Z 2022-05-18T03:33:26.0348842Z Prepare all required actions 2022-05-18T03:33:26.0349157Z Getting action download info 2022-05-18T03:33:26.1831876Z Download action repository 'actions/upload-artifact@v2' (SHA:82c141cc518b40d92cc801eee768e7aafc9c2fa2) 2022-05-18T03:33:26.3060255Z ##[group]Run ./.github/actions/upload-test-artifacts 2022-05-18T03:33:26.3060467Z with: 2022-05-18T03:33:26.3060703Z file-suffix: test-backwards_compat-1-1-linux.2xlarge_6482431953 2022-05-18T03:33:26.3060931Z env: 2022-05-18T03:33:26.3061069Z IN_CI: 1 2022-05-18T03:33:26.3061227Z IS_GHA: 1 2022-05-18T03:33:26.3061402Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:33:26.3061573Z ##[endgroup] 2022-05-18T03:33:26.3083227Z ##[group]Run # Remove any previous test jsons if they exist 2022-05-18T03:33:26.3083520Z # Remove any previous test jsons if they exist 2022-05-18T03:33:26.3083753Z rm -f test-jsons-*.zip 2022-05-18T03:33:26.3083995Z zip -r "test-jsons-${FILE_SUFFIX}.zip" test -i '*.json' 2022-05-18T03:33:26.3095287Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2022-05-18T03:33:26.3095507Z env: 2022-05-18T03:33:26.3095652Z IN_CI: 1 2022-05-18T03:33:26.3095813Z IS_GHA: 1 2022-05-18T03:33:26.3095993Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:33:26.3096237Z FILE_SUFFIX: test-backwards_compat-1-1-linux.2xlarge_6482431953 2022-05-18T03:33:26.3096471Z ##[endgroup] 2022-05-18T03:33:26.3180745Z adding: test/allowlist_for_publicAPI.json (deflated 82%) 2022-05-18T03:33:26.3207718Z adding: test/benchmark_utils/callgrind_artifacts.json (deflated 92%) 2022-05-18T03:33:26.3226773Z ##[group]Run # Remove any previous test reports if they exist 2022-05-18T03:33:26.3227068Z # Remove any previous test reports if they exist 2022-05-18T03:33:26.3227307Z rm -f test-reports-*.zip 2022-05-18T03:33:26.3227548Z zip -r "test-reports-${FILE_SUFFIX}.zip" test -i '*.xml' 2022-05-18T03:33:26.3238606Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2022-05-18T03:33:26.3238840Z env: 2022-05-18T03:33:26.3238989Z IN_CI: 1 2022-05-18T03:33:26.3239369Z IS_GHA: 1 2022-05-18T03:33:26.3239554Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:33:26.3239799Z FILE_SUFFIX: test-backwards_compat-1-1-linux.2xlarge_6482431953 2022-05-18T03:33:26.3240038Z ##[endgroup] 2022-05-18T03:33:26.3307290Z zip warning: zip file empty 2022-05-18T03:33:26.3339299Z ##[group]Run seemethere/upload-artifact-s3@v4 2022-05-18T03:33:26.3339515Z with: 2022-05-18T03:33:26.3339685Z retention-days: 14 2022-05-18T03:33:26.3339866Z if-no-files-found: warn 2022-05-18T03:33:26.3340064Z path: test-jsons-*.zip 2022-05-18T03:33:26.3340249Z name: artifact 2022-05-18T03:33:26.3340419Z s3-bucket: gha-artifacts 2022-05-18T03:33:26.3340609Z region: us-east-1 2022-05-18T03:33:26.3340774Z env: 2022-05-18T03:33:26.3340912Z IN_CI: 1 2022-05-18T03:33:26.3341070Z IS_GHA: 1 2022-05-18T03:33:26.3341315Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:33:26.3341487Z ##[endgroup] 2022-05-18T03:33:26.6702892Z With the provided path, there will be 1 file uploaded 2022-05-18T03:33:26.6703449Z Uploading to s3 prefix: pytorch/pytorch/2342799944/1/artifact 2022-05-18T03:33:26.6710485Z Starting upload of test-jsons-test-backwards_compat-1-1-linux.2xlarge_6482431953.zip 2022-05-18T03:33:26.7801172Z Finished upload of test-jsons-test-backwards_compat-1-1-linux.2xlarge_6482431953.zip 2022-05-18T03:33:26.7900786Z ##[group]Run seemethere/upload-artifact-s3@v4 2022-05-18T03:33:26.7901025Z with: 2022-05-18T03:33:26.7901187Z retention-days: 14 2022-05-18T03:33:26.7901389Z if-no-files-found: 
error 2022-05-18T03:33:26.7901595Z path: test-reports-*.zip 2022-05-18T03:33:26.7901770Z name: artifact 2022-05-18T03:33:26.7901952Z s3-bucket: gha-artifacts 2022-05-18T03:33:26.7902146Z region: us-east-1 2022-05-18T03:33:26.7902302Z env: 2022-05-18T03:33:26.7902456Z IN_CI: 1 2022-05-18T03:33:26.7902611Z IS_GHA: 1 2022-05-18T03:33:26.7902783Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:33:26.7902969Z ##[endgroup] 2022-05-18T03:33:27.1265806Z With the provided path, there will be 1 file uploaded 2022-05-18T03:33:27.1266126Z Uploading to s3 prefix: pytorch/pytorch/2342799944/1/artifact 2022-05-18T03:33:27.1272236Z Starting upload of test-reports-test-backwards_compat-1-1-linux.2xlarge_6482431953.zip 2022-05-18T03:33:27.2230517Z Finished upload of test-reports-test-backwards_compat-1-1-linux.2xlarge_6482431953.zip 2022-05-18T03:33:27.2335148Z ##[group]Run set -x 2022-05-18T03:33:27.2335349Z set -x 2022-05-18T03:33:27.2335567Z python3 -m pip install -r requirements.txt 2022-05-18T03:33:27.2335818Z python3 -m pip install boto3==1.19.12 2022-05-18T03:33:27.2336097Z python3 -m tools.stats.print_test_stats --upload-to-s3 --compare-with-s3 test 2022-05-18T03:33:27.2347714Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2022-05-18T03:33:27.2347934Z env: 2022-05-18T03:33:27.2348081Z IN_CI: 1 2022-05-18T03:33:27.2348249Z IS_GHA: 1 2022-05-18T03:33:27.2348432Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:33:27.2348624Z AWS_DEFAULT_REGION: us-east-1 2022-05-18T03:33:27.2348811Z BRANCH: master 2022-05-18T03:33:27.2349032Z JOB_BASE_NAME: linux-xenial-py3.7-gcc5.4-test 2022-05-18T03:33:27.2349253Z TEST_CONFIG: backwards_compat 2022-05-18T03:33:27.2349441Z SHARD_NUMBER: 1 2022-05-18T03:33:27.2349661Z BUILD_ENVIRONMENT: linux-xenial-py3.7-gcc5.4 2022-05-18T03:33:27.2349895Z PR_NUMBER: 2022-05-18T03:33:27.2350101Z SHA1: 3b2375291aab7b48442f2e6fb1ef66cebc761e24 2022-05-18T03:33:27.2350297Z TAG: 2022-05-18T03:33:27.2350449Z WORKFLOW_ID: 2342799944 2022-05-18T03:33:27.2350758Z GITHUB_TOKEN: *** 2022-05-18T03:33:27.2350948Z GHA_WORKFLOW_JOB_ID: 6482431953 2022-05-18T03:33:27.2351136Z ##[endgroup] 2022-05-18T03:33:27.2376192Z + python3 -m pip install -r requirements.txt 2022-05-18T03:33:27.4548837Z Defaulting to user installation because normal site-packages is not writeable 2022-05-18T03:33:27.4800572Z Ignoring dataclasses: markers 'python_version < "3.7"' don't match your environment 2022-05-18T03:33:27.4802887Z Requirement already satisfied: astunparse in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 2)) (1.6.3) 2022-05-18T03:33:27.4829848Z Requirement already satisfied: expecttest in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 3)) (0.1.3) 2022-05-18T03:33:27.4837969Z Requirement already satisfied: future in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (0.18.2) 2022-05-18T03:33:27.4846094Z Requirement already satisfied: numpy in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 5)) (1.21.6) 2022-05-18T03:33:27.4854056Z Requirement already satisfied: psutil in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 6)) (5.9.0) 2022-05-18T03:33:27.4953581Z Requirement already satisfied: pyyaml in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 7)) (6.0) 2022-05-18T03:33:27.4961850Z Requirement already satisfied: requests in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 
8)) (2.26.0) 2022-05-18T03:33:27.5076951Z Requirement already satisfied: setuptools in /usr/lib/python3.7/site-packages (from -r requirements.txt (line 9)) (49.1.3) 2022-05-18T03:33:27.5247822Z Requirement already satisfied: six in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 10)) (1.16.0) 2022-05-18T03:33:27.5256749Z Requirement already satisfied: types-dataclasses in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 11)) (0.6.5) 2022-05-18T03:33:27.5262056Z Requirement already satisfied: typing_extensions in /home/ec2-user/.local/lib/python3.7/site-packages (from -r requirements.txt (line 12)) (4.2.0) 2022-05-18T03:33:27.5272127Z Requirement already satisfied: wheel<1.0,>=0.23.0 in /home/ec2-user/.local/lib/python3.7/site-packages (from astunparse->-r requirements.txt (line 2)) (0.37.1) 2022-05-18T03:33:27.5296489Z Requirement already satisfied: certifi>=2017.4.17 in /home/ec2-user/.local/lib/python3.7/site-packages (from requests->-r requirements.txt (line 8)) (2021.10.8) 2022-05-18T03:33:27.5306177Z Requirement already satisfied: charset-normalizer~=2.0.0; python_version >= "3" in /home/ec2-user/.local/lib/python3.7/site-packages (from requests->-r requirements.txt (line 8)) (2.0.12) 2022-05-18T03:33:27.5325873Z Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/ec2-user/.local/lib/python3.7/site-packages (from requests->-r requirements.txt (line 8)) (1.26.9) 2022-05-18T03:33:27.5607366Z Requirement already satisfied: idna<4,>=2.5; python_version >= "3" in /home/ec2-user/.local/lib/python3.7/site-packages (from requests->-r requirements.txt (line 8)) (3.3) 2022-05-18T03:33:27.6191818Z + python3 -m pip install boto3==1.19.12 2022-05-18T03:33:27.8346493Z Defaulting to user installation because normal site-packages is not writeable 2022-05-18T03:33:27.8525951Z Requirement already satisfied: boto3==1.19.12 in /home/ec2-user/.local/lib/python3.7/site-packages (1.19.12) 2022-05-18T03:33:27.8590732Z Requirement already satisfied: s3transfer<0.6.0,>=0.5.0 in /home/ec2-user/.local/lib/python3.7/site-packages (from boto3==1.19.12) (0.5.2) 2022-05-18T03:33:27.8602887Z Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/.local/lib/python3.7/site-packages (from boto3==1.19.12) (0.10.0) 2022-05-18T03:33:27.8614823Z Requirement already satisfied: botocore<1.23.0,>=1.22.12 in /home/ec2-user/.local/lib/python3.7/site-packages (from boto3==1.19.12) (1.22.12) 2022-05-18T03:33:27.8656139Z Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/.local/lib/python3.7/site-packages (from botocore<1.23.0,>=1.22.12->boto3==1.19.12) (1.26.9) 2022-05-18T03:33:27.8813638Z Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/.local/lib/python3.7/site-packages (from botocore<1.23.0,>=1.22.12->boto3==1.19.12) (2.8.2) 2022-05-18T03:33:27.8834515Z Requirement already satisfied: six>=1.5 in /home/ec2-user/.local/lib/python3.7/site-packages (from python-dateutil<3.0.0,>=2.1->botocore<1.23.0,>=1.22.12->boto3==1.19.12) (1.16.0) 2022-05-18T03:33:27.9865590Z + python3 -m tools.stats.print_test_stats --upload-to-s3 --compare-with-s3 test 2022-05-18T03:33:28.2048106Z No tests in reports found in test 2022-05-18T03:33:28.2544643Z Prepare all required actions 2022-05-18T03:33:28.2587520Z ##[group]Run ./.github/actions/teardown-linux 2022-05-18T03:33:28.2587737Z with: 2022-05-18T03:33:28.2587875Z env: 2022-05-18T03:33:28.2588029Z IN_CI: 1 2022-05-18T03:33:28.2588188Z IS_GHA: 1 
2022-05-18T03:33:28.2588353Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:33:28.2588539Z ##[endgroup] 2022-05-18T03:33:28.2601605Z ##[group]Run .github/scripts/wait_for_ssh_to_drain.sh 2022-05-18T03:33:28.2601870Z .github/scripts/wait_for_ssh_to_drain.sh 2022-05-18T03:33:28.2612963Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2022-05-18T03:33:28.2613254Z env: 2022-05-18T03:33:28.2613412Z IN_CI: 1 2022-05-18T03:33:28.2613573Z IS_GHA: 1 2022-05-18T03:33:28.2613744Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:33:28.2613932Z ##[endgroup] 2022-05-18T03:33:28.2651333Z Holding runner for 2 hours until all ssh sessions have logged out 2022-05-18T03:33:28.2690023Z ##[group]Run # ignore expansion of "docker ps -q" since it could be empty 2022-05-18T03:33:28.2690336Z # ignore expansion of "docker ps -q" since it could be empty 2022-05-18T03:33:28.2690588Z # shellcheck disable=SC2046 2022-05-18T03:33:28.2690818Z docker stop $(docker ps -q) || true 2022-05-18T03:33:28.2691035Z # Prune all of the docker images 2022-05-18T03:33:28.2691251Z docker system prune -af 2022-05-18T03:33:28.2702113Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2022-05-18T03:33:28.2702321Z env: 2022-05-18T03:33:28.2702481Z IN_CI: 1 2022-05-18T03:33:28.2702644Z IS_GHA: 1 2022-05-18T03:33:28.2702819Z GIT_DEFAULT_BRANCH: master 2022-05-18T03:33:28.2703007Z ##[endgroup] 2022-05-18T03:33:31.3041342Z 55bd3be61eda 2022-05-18T03:33:31.6633190Z Deleted Containers: 2022-05-18T03:33:31.6633590Z 55bd3be61eda7b9683d31d97f73e33a89445917b6839b0a67cacf21da99c10c3 2022-05-18T03:33:31.6633774Z 2022-05-18T03:33:34.0608126Z Deleted Images: 2022-05-18T03:33:34.0609012Z untagged: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3.7-gcc5.4:6deab82db6a72ca54cd3e3322ee4f13864536734 2022-05-18T03:33:34.0609734Z untagged: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3.7-gcc5.4@sha256:9c228d64aeaa1a84153f684d8bf8d2b818b53df05ec50809bfb8bb625f2aea5c 2022-05-18T03:33:34.0610211Z deleted: sha256:59de092f48b8a69bedc5a97cdf7eb5f359b81a9ab8db3a062ddf64b0eeb3218c 2022-05-18T03:33:34.0610720Z deleted: sha256:a5c218bd38a05a6a5d34cd5b8705d6ef377789e4ad88a0452aff96d9d15ba536 2022-05-18T03:33:34.0611286Z deleted: sha256:98b641aa9ac53c109f7dbe617af11f8ddcda039b3671f8faed15eaa4bd8bcfb8 2022-05-18T03:33:34.0611781Z deleted: sha256:18725d5d610c69999c371599c7dfdbfca81db91bdce7335aae3dc49348367ab8 2022-05-18T03:33:34.0612278Z deleted: sha256:85f4743616e5158b5567fc794918955beb32b5236b2e10f819267c2ff313ee69 2022-05-18T03:33:34.0612802Z deleted: sha256:ab66af73e31229cc15f953e8c2279f4d766fa3c2051238adbfa58a3665de7f65 2022-05-18T03:33:34.0613359Z deleted: sha256:1caf1fe31f6355fc053d0a885626cb13472cc19222869ad8e6977bc1151830b4 2022-05-18T03:33:34.0613696Z deleted: sha256:c330d81b0faf6dede539990f626ac105eec327f65ab36c1a7a374f7242467013 2022-05-18T03:33:34.0614018Z deleted: sha256:718e0b324b58098941fadb420dd7e57dbbdd3b591289a143fbfe5ec42979f7f6 2022-05-18T03:33:34.0614365Z deleted: sha256:195247420be4cd7903121d178b00f6a257dd02af36c0129e340ac8ad968a008c 2022-05-18T03:33:34.0614918Z deleted: sha256:656c5db2781399301eb88e975b028e903a76c3fa4bdcb5ee2f601596d3770fe0 2022-05-18T03:33:34.0615480Z deleted: sha256:8aa8cf05fa63d857f6a09fc51e63e28efa3eeb017fc313091bbd2c41188ee73a 2022-05-18T03:33:34.0615812Z deleted: sha256:f3103bd0a274988a3c363a8c3e0c66cf27245bd0fbbf36f23c3a776bebce0636 2022-05-18T03:33:34.0616126Z deleted: sha256:14d2527d258aeb357a7c02ab642543354c6926e735b319b2840837e5f0bc6338 
2022-05-18T03:33:34.0616446Z deleted: sha256:c1f967cb927feb91ad01be2df341c90526a0eeb449a5ba45d77955063390f5bd 2022-05-18T03:33:34.0616919Z deleted: sha256:facd48d4abdd74267bc84f37717a90597ee71abf90b442c2f080b8bff506eb28 2022-05-18T03:33:34.0617265Z deleted: sha256:36ca359a3de05a1ade3b9d47f386d0c7fdb7a2dd92ff430ac0b20252fdc971e6 2022-05-18T03:33:34.0617599Z deleted: sha256:f239740304fbce1682f9451b901322be1fd04bdba61ce3ec18219e2aa06abf92 2022-05-18T03:33:34.0617913Z deleted: sha256:95725bafda94d79e52e791051ca845246bac6420b367799457f13c9c4823e42f 2022-05-18T03:33:34.0618235Z deleted: sha256:edeffeace97c6ee0cb3c330b29296533b666ab913d0e3ce41e7b7ec02aa86a00 2022-05-18T03:33:34.0618573Z deleted: sha256:f9dc7cff308d456d8441528cfa5b049927aa044ca92450651297e1ea392b5176 2022-05-18T03:33:34.0618889Z deleted: sha256:829e5bacf960e9a03137a045e2962146d3675320f540b9f78bb656f6a5ebbd87 2022-05-18T03:33:34.0619263Z deleted: sha256:45c8bf957922e20b8cc1b780b1fa1d9cc6134f6898ac929075ebcb67f8e3d8ce 2022-05-18T03:33:34.0619594Z deleted: sha256:8e711fe9a02652cabb3d87bd91027d6f196acca0cac325f5218833bd2124edd0 2022-05-18T03:33:34.0619919Z deleted: sha256:41110f2b015c22f06f254e16ceec24e6d5f0508a3c03832e9e1bafdc9dcd6de9 2022-05-18T03:33:34.0620236Z deleted: sha256:492624ed296210ba14ce215ca0c827f0338281b43fa797db8beb8dc9eb73a075 2022-05-18T03:33:34.0620583Z deleted: sha256:1fa662cad30854c503cb8a6dca8775feb55ee9c810ab8a1964deb4df50423c59 2022-05-18T03:33:34.0620911Z deleted: sha256:0620304215b9c7b0c8a939c442cd21e16f69b3acf6e41b63db620388bcffe636 2022-05-18T03:33:34.0621247Z deleted: sha256:aad214394eed942b6cffbd34edd8d075fd9ec3cdae03021b68bb6a665df027f7 2022-05-18T03:33:34.0621569Z deleted: sha256:ef98d4551894826fa3e82876524657321978570336809b9d5f65f37ea57ba737 2022-05-18T03:33:34.0621854Z deleted: sha256:d4a7f9505f526ff752975a5b42ec0649e121dce31a1c20b64580bdb272398106 2022-05-18T03:33:34.0622170Z deleted: sha256:7c5163657fc0146a1c01bb4248aa43ef7e9f7bcdaa6476d60682c942743e7a76 2022-05-18T03:33:34.0622493Z deleted: sha256:914eca704a7c999d8554ac377319fba5d20d7bd71d037bcaea5c1789a0cd4588 2022-05-18T03:33:34.0622816Z deleted: sha256:41cc6d134c10d2b07ee7b79af7d9e2d9adaeeb66740a7f3f2b6110ff5c9b3750 2022-05-18T03:33:34.0623165Z deleted: sha256:3c52cedd16ec9a3d3ba05b9d59f95fe9e9c17b9ef45a8535780e33a4c796399f 2022-05-18T03:33:34.0623494Z deleted: sha256:b4ff4898d9d5652835190faa9f1656f944eef4b15d71d28980910a097520b29e 2022-05-18T03:33:34.0623823Z deleted: sha256:ba1a0ef9d6dacf9bd6a466603f1c0a439c5e4b16a1f08123366d98cbd451e552 2022-05-18T03:33:34.0624132Z deleted: sha256:9e86378183e56a8b8e03ac1662012715ba76ff596ac9d82493a26e2c05469e0c 2022-05-18T03:33:34.0624522Z deleted: sha256:b8f9bd44c8e3ef9185a9356770c6809d3b1e7eabe334a235bb3809c940736ee8 2022-05-18T03:33:34.0624837Z deleted: sha256:2194650d50c88f79f0c9316ff973d3fc8f05b34482c634ae5d1872d0488d6063 2022-05-18T03:33:34.0625139Z deleted: sha256:b6b59e40dec31e9e447977692ead70dc928bc273b29366a09f960ffe36615ca5 2022-05-18T03:33:34.0625454Z deleted: sha256:7538e51bd192b2c72757560bf89efb2b558e7feae722a54dce9ac5011b11334f 2022-05-18T03:33:34.0625778Z deleted: sha256:157a5446033e8978b7d89bbcd2289cd0973cee0da068e5983d6aec23abc16606 2022-05-18T03:33:34.0626108Z deleted: sha256:7886d23d8ebae5acb8cf55f4023a5f24d001deb6175bc294c49cc8426dbcd0fe 2022-05-18T03:33:34.0626419Z deleted: sha256:055c9429b696b8a0b5ae3182361238d4ad7b7ebe936dbb5329784a6e3e466eaf 2022-05-18T03:33:34.0626738Z deleted: sha256:0828f67acda3c667c57d5ee7d8c702dade92c908d6c673b041b693dd31ae1d25 2022-05-18T03:33:34.0627069Z 
deleted: sha256:25f905fb8bb2f1c914abedbc5059b4e47897cd0492f971d68b6977a921a35219 2022-05-18T03:33:34.0627388Z deleted: sha256:c8192f9ee988fa7475dda1364fe7643d799337ba7cbab7ef34fda310c2902122 2022-05-18T03:33:34.0627716Z deleted: sha256:d2c75ac26d00f774923ecde3f66d668f57f552b7e648bdc696922ad82dd5ae23 2022-05-18T03:33:34.0628046Z deleted: sha256:13ba83328c52b569d2601deca46a81b79edf7abd41d4b0c0b51bac7a098630df 2022-05-18T03:33:34.0628363Z deleted: sha256:05525537eae2e6755a75df8627211d016b48e97f1be4e17e41d58d7710493358 2022-05-18T03:33:34.0628651Z deleted: sha256:0214f4b057d78b44fd12702828152f67c0ce115f9346acc63acdf997cab7e7c8 2022-05-18T03:33:34.0628956Z deleted: sha256:1b9d0485372c5562fa614d5b35766f6c442539bcee9825a6e90d1158c3299a61 2022-05-18T03:33:34.0629320Z deleted: sha256:3c0f34be6eb98057c607b9080237cce0be0b86f52d51ba620dc018a3d421baea 2022-05-18T03:33:34.0629631Z deleted: sha256:be96a3f634de79f523f07c7e4e0216c28af45eb5776e7a6238a2392f71e01069 2022-05-18T03:33:34.0629809Z 2022-05-18T03:33:34.0645639Z Total reclaimed space: 6.039GB 2022-05-18T03:33:34.0693084Z Post job cleanup. 2022-05-18T03:33:34.0720430Z Post job cleanup. 2022-05-18T03:33:34.1716517Z [command]/usr/bin/git version 2022-05-18T03:33:34.1756022Z git version 2.32.0 2022-05-18T03:33:34.1798251Z Temporarily overriding HOME='/home/ec2-user/actions-runner/_work/_temp/0c23c61b-04ef-4f83-9ab1-8ea4d00bae53' before making global git config changes 2022-05-18T03:33:34.1799469Z Adding repository directory to the temporary git global config as a safe directory 2022-05-18T03:33:34.1805943Z [command]/usr/bin/git config --global --add safe.directory /home/ec2-user/actions-runner/_work/pytorch/pytorch 2022-05-18T03:33:34.1846779Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand 2022-05-18T03:33:34.1879604Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || : 2022-05-18T03:33:34.2149486Z Entering 'android/libs/fbjni' 2022-05-18T03:33:34.2183813Z Entering 'third_party/FP16' 2022-05-18T03:33:34.2222412Z Entering 'third_party/FXdiv' 2022-05-18T03:33:34.2256077Z Entering 'third_party/NNPACK' 2022-05-18T03:33:34.2291085Z Entering 'third_party/QNNPACK' 2022-05-18T03:33:34.2327727Z Entering 'third_party/XNNPACK' 2022-05-18T03:33:34.2372930Z Entering 'third_party/benchmark' 2022-05-18T03:33:34.2408376Z Entering 'third_party/cpuinfo' 2022-05-18T03:33:34.2444734Z Entering 'third_party/cub' 2022-05-18T03:33:34.2479509Z Entering 'third_party/cudnn_frontend' 2022-05-18T03:33:34.2519927Z Entering 'third_party/eigen' 2022-05-18T03:33:34.2557901Z Entering 'third_party/fbgemm' 2022-05-18T03:33:34.2593176Z Entering 'third_party/fbgemm/third_party/asmjit' 2022-05-18T03:33:34.2626619Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2022-05-18T03:33:34.2661613Z Entering 'third_party/fbgemm/third_party/googletest' 2022-05-18T03:33:34.2697941Z Entering 'third_party/flatbuffers' 2022-05-18T03:33:34.2736743Z Entering 'third_party/fmt' 2022-05-18T03:33:34.2771148Z Entering 'third_party/foxi' 2022-05-18T03:33:34.2806027Z Entering 'third_party/gemmlowp/gemmlowp' 2022-05-18T03:33:34.2840876Z Entering 'third_party/gloo' 2022-05-18T03:33:34.2874806Z Entering 'third_party/googletest' 2022-05-18T03:33:34.2909707Z Entering 'third_party/ideep' 2022-05-18T03:33:34.2943844Z Entering 'third_party/ideep/mkl-dnn' 2022-05-18T03:33:34.2981445Z Entering 'third_party/ideep/mkl-dnn/third_party/oneDNN' 2022-05-18T03:33:34.3022305Z 
Entering 'third_party/ios-cmake' 2022-05-18T03:33:34.3058516Z Entering 'third_party/kineto' 2022-05-18T03:33:34.3093664Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2022-05-18T03:33:34.3129801Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2022-05-18T03:33:34.3168306Z Entering 'third_party/nccl/nccl' 2022-05-18T03:33:34.3204192Z Entering 'third_party/neon2sse' 2022-05-18T03:33:34.3238420Z Entering 'third_party/onnx' 2022-05-18T03:33:34.3285710Z Entering 'third_party/onnx/third_party/benchmark' 2022-05-18T03:33:34.3320938Z Entering 'third_party/onnx/third_party/pybind11' 2022-05-18T03:33:34.3356974Z Entering 'third_party/onnx-tensorrt' 2022-05-18T03:33:34.3393046Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2022-05-18T03:33:34.3433420Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2022-05-18T03:33:34.3467658Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2022-05-18T03:33:34.3502193Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2022-05-18T03:33:34.3542499Z Entering 'third_party/pocketfft' 2022-05-18T03:33:34.3577300Z Entering 'third_party/protobuf' 2022-05-18T03:33:34.3616706Z Entering 'third_party/protobuf/third_party/benchmark' 2022-05-18T03:33:34.3650779Z Entering 'third_party/protobuf/third_party/googletest' 2022-05-18T03:33:34.3687404Z Entering 'third_party/psimd' 2022-05-18T03:33:34.3722384Z Entering 'third_party/pthreadpool' 2022-05-18T03:33:34.3757040Z Entering 'third_party/pybind11' 2022-05-18T03:33:34.3791793Z Entering 'third_party/python-enum' 2022-05-18T03:33:34.3825635Z Entering 'third_party/python-peachpy' 2022-05-18T03:33:34.3859746Z Entering 'third_party/python-six' 2022-05-18T03:33:34.3895814Z Entering 'third_party/sleef' 2022-05-18T03:33:34.3930155Z Entering 'third_party/tbb' 2022-05-18T03:33:34.3967217Z Entering 'third_party/tensorpipe' 2022-05-18T03:33:34.4002854Z Entering 'third_party/tensorpipe/third_party/googletest' 2022-05-18T03:33:34.4036432Z Entering 'third_party/tensorpipe/third_party/libnop' 2022-05-18T03:33:34.4069361Z Entering 'third_party/tensorpipe/third_party/libuv' 2022-05-18T03:33:34.4103381Z Entering 'third_party/tensorpipe/third_party/pybind11' 2022-05-18T03:33:34.4136905Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2022-05-18T03:33:34.4174047Z Entering 'third_party/zstd' 2022-05-18T03:33:34.4225197Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader 2022-05-18T03:33:34.4251808Z http.https://github.com/.extraheader 2022-05-18T03:33:34.4260234Z [command]/usr/bin/git config --local --unset-all http.https://github.com/.extraheader 2022-05-18T03:33:34.4294766Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || : 2022-05-18T03:33:34.4562177Z Entering 'android/libs/fbjni' 2022-05-18T03:33:34.4582222Z http.https://github.com/.extraheader 2022-05-18T03:33:34.4609516Z Entering 'third_party/FP16' 2022-05-18T03:33:34.4630211Z http.https://github.com/.extraheader 2022-05-18T03:33:34.4657812Z Entering 'third_party/FXdiv' 2022-05-18T03:33:34.4678736Z http.https://github.com/.extraheader 2022-05-18T03:33:34.4705440Z Entering 'third_party/NNPACK' 2022-05-18T03:33:34.4726321Z http.https://github.com/.extraheader 2022-05-18T03:33:34.4753344Z Entering 'third_party/QNNPACK' 
2022-05-18T03:33:34.4775127Z http.https://github.com/.extraheader 2022-05-18T03:33:34.4801771Z Entering 'third_party/XNNPACK' 2022-05-18T03:33:34.4822629Z http.https://github.com/.extraheader 2022-05-18T03:33:34.4860786Z Entering 'third_party/benchmark' 2022-05-18T03:33:34.4882034Z http.https://github.com/.extraheader 2022-05-18T03:33:34.4909511Z Entering 'third_party/cpuinfo' 2022-05-18T03:33:34.4930419Z http.https://github.com/.extraheader 2022-05-18T03:33:34.4958526Z Entering 'third_party/cub' 2022-05-18T03:33:34.4979712Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5007311Z Entering 'third_party/cudnn_frontend' 2022-05-18T03:33:34.5028751Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5062921Z Entering 'third_party/eigen' 2022-05-18T03:33:34.5084637Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5114839Z Entering 'third_party/fbgemm' 2022-05-18T03:33:34.5136427Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5167526Z Entering 'third_party/fbgemm/third_party/asmjit' 2022-05-18T03:33:34.5188099Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5215780Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2022-05-18T03:33:34.5236289Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5264836Z Entering 'third_party/fbgemm/third_party/googletest' 2022-05-18T03:33:34.5285386Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5314613Z Entering 'third_party/flatbuffers' 2022-05-18T03:33:34.5335413Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5365842Z Entering 'third_party/fmt' 2022-05-18T03:33:34.5387068Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5414231Z Entering 'third_party/foxi' 2022-05-18T03:33:34.5434967Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5462657Z Entering 'third_party/gemmlowp/gemmlowp' 2022-05-18T03:33:34.5483295Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5510938Z Entering 'third_party/gloo' 2022-05-18T03:33:34.5531087Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5557972Z Entering 'third_party/googletest' 2022-05-18T03:33:34.5578554Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5609918Z Entering 'third_party/ideep' 2022-05-18T03:33:34.5629582Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5656187Z Entering 'third_party/ideep/mkl-dnn' 2022-05-18T03:33:34.5676852Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5705790Z Entering 'third_party/ideep/mkl-dnn/third_party/oneDNN' 2022-05-18T03:33:34.5726892Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5760324Z Entering 'third_party/ios-cmake' 2022-05-18T03:33:34.5781254Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5807726Z Entering 'third_party/kineto' 2022-05-18T03:33:34.5828928Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5856906Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2022-05-18T03:33:34.5876654Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5903923Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2022-05-18T03:33:34.5923859Z http.https://github.com/.extraheader 2022-05-18T03:33:34.5952529Z Entering 'third_party/nccl/nccl' 2022-05-18T03:33:34.5974470Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6003271Z Entering 'third_party/neon2sse' 2022-05-18T03:33:34.6023923Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6051236Z Entering 'third_party/onnx' 2022-05-18T03:33:34.6072577Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6110558Z Entering 
'third_party/onnx/third_party/benchmark' 2022-05-18T03:33:34.6132163Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6158615Z Entering 'third_party/onnx/third_party/pybind11' 2022-05-18T03:33:34.6180159Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6208903Z Entering 'third_party/onnx-tensorrt' 2022-05-18T03:33:34.6230521Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6257490Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2022-05-18T03:33:34.6276940Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6309512Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2022-05-18T03:33:34.6330186Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6357189Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2022-05-18T03:33:34.6377984Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6405250Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2022-05-18T03:33:34.6425621Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6457818Z Entering 'third_party/pocketfft' 2022-05-18T03:33:34.6479745Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6506835Z Entering 'third_party/protobuf' 2022-05-18T03:33:34.6527551Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6558230Z Entering 'third_party/protobuf/third_party/benchmark' 2022-05-18T03:33:34.6579663Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6606518Z Entering 'third_party/protobuf/third_party/googletest' 2022-05-18T03:33:34.6627013Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6656515Z Entering 'third_party/psimd' 2022-05-18T03:33:34.6677762Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6705175Z Entering 'third_party/pthreadpool' 2022-05-18T03:33:34.6727021Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6754051Z Entering 'third_party/pybind11' 2022-05-18T03:33:34.6774081Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6801386Z Entering 'third_party/python-enum' 2022-05-18T03:33:34.6821633Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6849388Z Entering 'third_party/python-peachpy' 2022-05-18T03:33:34.6870099Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6896734Z Entering 'third_party/python-six' 2022-05-18T03:33:34.6917343Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6943978Z Entering 'third_party/sleef' 2022-05-18T03:33:34.6966000Z http.https://github.com/.extraheader 2022-05-18T03:33:34.6992964Z Entering 'third_party/tbb' 2022-05-18T03:33:34.7012982Z http.https://github.com/.extraheader 2022-05-18T03:33:34.7042059Z Entering 'third_party/tensorpipe' 2022-05-18T03:33:34.7063206Z http.https://github.com/.extraheader 2022-05-18T03:33:34.7089122Z Entering 'third_party/tensorpipe/third_party/googletest' 2022-05-18T03:33:34.7109769Z http.https://github.com/.extraheader 2022-05-18T03:33:34.7137384Z Entering 'third_party/tensorpipe/third_party/libnop' 2022-05-18T03:33:34.7157084Z http.https://github.com/.extraheader 2022-05-18T03:33:34.7184045Z Entering 'third_party/tensorpipe/third_party/libuv' 2022-05-18T03:33:34.7204483Z http.https://github.com/.extraheader 2022-05-18T03:33:34.7231809Z Entering 'third_party/tensorpipe/third_party/pybind11' 2022-05-18T03:33:34.7253764Z http.https://github.com/.extraheader 2022-05-18T03:33:34.7280123Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2022-05-18T03:33:34.7301142Z http.https://github.com/.extraheader 2022-05-18T03:33:34.7331032Z Entering 
'third_party/zstd' 2022-05-18T03:33:34.7352016Z http.https://github.com/.extraheader 2022-05-18T03:33:34.7691017Z Cleaning up orphan processes
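The post-job cleanup above scrubs the temporary authentication header (http.https://github.com/.extraheader) that the checkout action wrote into the main repository and, via `git submodule foreach --recursive`, into every submodule, so the token does not persist on the runner. Purely as an illustrative restatement of those same commands (the runner performs this natively; this is not part of the workflow):

# Illustrative only: mirrors the `git config --unset-all` / `git submodule foreach`
# commands shown in the cleanup output above.
import subprocess

HEADER_KEY = "http.https://github.com/.extraheader"

def scrub_auth_header(repo_dir: str) -> None:
    # Drop the injected Authorization header from the top-level repository config...
    subprocess.run(["git", "-C", repo_dir, "config", "--local", "--unset-all", HEADER_KEY], check=False)
    # ...and from every submodule, recursively, matching the "Entering ..." output above.
    subprocess.run(["git", "-C", repo_dir, "submodule", "foreach", "--recursive",
                    f"git config --local --unset-all '{HEADER_KEY}' || :"], check=False)

if __name__ == "__main__":
    scrub_auth_header("/home/ec2-user/actions-runner/_work/pytorch/pytorch")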